query_id (string, 32 chars) | query (string, 0–35.7k chars) | positive_passages (list, 1–7 items) | negative_passages (list, 22–29 items) | subset (string, 2 classes)
---|---|---|---|---
07e4b7aa9c45c55ad067b1c298f3bacb | Business Process Modeling: Current Issues and Future Challenges | [
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "2620ce1c5ef543fded3a02dfb9e5c3f8",
"text": "Artificial bee colony (ABC) is the one of the newest nature inspired heuristics for optimization problem. Like the chaos in real bee colony behavior, this paper proposes new ABC algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent the ABC to get stuck on local solutions. This has been done by using of chaotic number generators each time a random number is needed by the classical ABC algorithm. Seven new chaotic ABC algorithms have been proposed and different chaotic maps have been analyzed in the benchmark functions. It has been detected that coupling emergent results in different areas, like those of ABC and complex dynamics, can improve the quality of results in some optimization problems. It has been also shown that, the proposed methods have somewhat increased the solution quality, that is in some cases they improved the global searching capability by escaping the local solutions. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "465a5d9a2fe72abaf1fb8c2c041a9b64",
"text": "A huge number of academic papers are coming out from a lot of conferences and journals these days. In these circumstances, most researchers rely on key-based search or browsing through proceedings of top conferences and journals to find their related work. To ease this difficulty, we propose a Personalized Academic Research Paper Recommendation System, which recommends related articles, for each researcher, that may be interesting to her/him. In this paper, we first introduce our web crawler to retrieve research papers from the web. Then, we define similarity between two research papers based on the text similarity between them. Finally, we propose our recommender system developed using collaborative filtering methods. Our evaluation results demonstrate that our system recommends good quality research papers.",
"title": ""
},
{
"docid": "a8b5f7a5ab729a7f1664c5a22f3b9d9b",
"text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.",
"title": ""
},
{
"docid": "71a65ff432ae4b53085ca5c923c29a95",
"text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenancespecific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.",
"title": ""
},
{
"docid": "31e25f598fb15964358c482b6c37271f",
"text": "Any bacterial population harbors a small number of phenotypic variants that survive exposure to high concentrations of antibiotic. Importantly, these so-called 'persister cells' compromise successful antibiotic therapy of bacterial infections and are thought to contribute to the development of antibiotic resistance. Intriguingly, drug-tolerant persisters have also been identified as a factor underlying failure of chemotherapy in tumor cell populations. Recent studies have begun to unravel the complex molecular mechanisms underlying persister formation and revolve around stress responses and toxin-antitoxin modules. Additionally, in vitro evolution experiments are revealing insights into the evolutionary and adaptive aspects of this phenotype. Furthermore, ever-improving experimental techniques are stimulating efforts to investigate persisters in their natural, infection-associated, in vivo environment. This review summarizes recent insights into the molecular mechanisms of persister formation, explains how persisters complicate antibiotic treatment of infections, and outlines emerging strategies to combat these tolerant cells.",
"title": ""
},
{
"docid": "85c133eecada3c4bb25f96ad8127eec3",
"text": "With the advent of brain computer interfaces based on real-time fMRI (rtfMRI-BCI), the possibility of performing neurofeedback based on brain hemodynamics has become a reality. In the early stage of the development of this field, studies have focused on the volitional control of activity in circumscribed brain regions. However, based on the understanding that the brain functions by coordinated activity of spatially distributed regions, there have recently been further developments to incorporate real-time feedback of functional connectivity and spatio-temporal patterns of brain activity. The present article reviews the principles of rtfMRI neurofeedback, its applications, benefits and limitations. A special emphasis is given to the discussion of novel developments that have enabled the use of this methodology to achieve self-regulation of the functional connectivity between different brain areas and of distributed brain networks, anticipating new and exciting applications for cognitive neuroscience and for the potential alleviation of neuropsychiatric disorders.",
"title": ""
},
{
"docid": "7ec6147d4549f07c2d8c6c24566faa2f",
"text": "Physical unclonable functions (PUFs) are security features that are based on process variations that occur during silicon chip fabrication. As PUFs are dependent on process variations, they need to be robust against reversible and irreversible temporal variabilities. In this paper, we present experimental results showing temporal variability in 4, 5, and 7-stage ring oscillator PUFs (ROPUFs). The reversible temporal variabilities are studied based on voltage and temperature variations, and the irreversible temporal variabilities are studied based on accelerated aging. Our results show that ROPUFs are sensitive to temperature and voltage variations regardless of the number of RO stages used. It is also observed that the aging, temperature, and voltage variation effects are observed to be uniformly distributed throughout the chip. This is evidenced by noting uniform changes in the RO frequency. Our results also show that most of the bit flips occur when the frequency difference in the RO pairs is low. This leads us to the conclusion that RO comparison pairs that pass high frequency threshold should be filtered to reduce temporal variabilities effect on the ROPUF. The experimental results also show that the 3-stage ROPUF has the lowest percentage of bit flip occurrences and the highest number of RO comparison pairs that pass high frequency threshold.",
"title": ""
},
{
"docid": "439c0a4e0c2171c066a2a8286842f0c2",
"text": "Capacitive rotary encoders are widely used in motor velocity and angular position control, where high-speed and high-precision angle calculation is required. This paper illustrates implementation of arctangent operation, based on the CORDIC (an acronym for COordinate Rotational DIgital Computer) algorithm, in the capacitive rotary encoder signal demodulation in an FPGA to obtain the motor velocity and position. By skipping some unnecessary rotation in CORDIC algorithm, we improve the algorithm's computing accuracy. Experiments show that the residue angle error is almost reduced by half after the CORDIC algorithm is optimized, and is completely meet the precision requirements of the system.",
"title": ""
},
{
"docid": "d88059813c4064ec28c58a8ab23d3030",
"text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "675007890407b7e8a7d15c1255e77ec6",
"text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.",
"title": ""
},
{
"docid": "122bc83bcd27b95092c64cf1ad8ee6a8",
"text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "d60f812bb8036a2220dab8740f6a74c4",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "872d1f216a463b354221be8b68d35d96",
"text": "Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.",
"title": ""
},
{
"docid": "73d58bbe0550fb58efc49ae5f84a1c7b",
"text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.",
"title": ""
},
{
"docid": "14dc7c8065adad3fc3c67f5a8e35298b",
"text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions",
"title": ""
},
{
"docid": "9c262b845fff31abd1cbc2932957030d",
"text": "Dixon's method for computing multivariate resultants by simultaneously eliminating many variables is reviewed. The method is found to be quite restrictive because often the Dixon matrix is singular, and the Dixon resultant vanished identically yielding no information about solutions for many algebraic and geometry problems. We extend Dixon's method for the case when the Dixon matrix is singular, but satisfies a condition. An efficient algorithm is developed based on the proposed extension for extracting conditions for the existence of affine solutions of a finite set of polynomials. Using this algorithm, numerous geometric and algebraic identities are derived for examples which appear intractable with other techniques of triangulation such as the successive resultant method, the Gro¨bner basis method, Macaulay resultants and Characteristic set method. Experimental results suggest that the resultant of a set of polynomials which are symmetric in the variables is relatively easier to compute using the extended Dixon's method.",
"title": ""
},
{
"docid": "73f605a48d0494d0007f242cee5c67ff",
"text": "BACKGROUND\nLarge comparative studies that have evaluated long-term functional outcome of operatively treated ankle fractures are lacking. This study was performed to analyse the influence of several combinations of malleolar fractures on long-term functional outcome and development of osteoarthritis.\n\n\nMETHODS\nRetrospective cohort-study on operated (1995-2007) malleolar fractures. Results were assessed with use of the AAOS- and AOFAS-questionnaires, VAS-pain score, dorsiflexion restriction (range of motion) and osteoarthritis. Categorisation was determined using the number of malleoli involved.\n\n\nRESULTS\n243 participants with a mean follow-up of 9.6 years were included. Significant differences for all outcomes were found between unimalleolar (isolated fibular) and bimalleolar (a combination of fibular and medial) fractures (AOFAS 97 vs 91, p = 0.035; AAOS 97 vs 90, p = 0.026; dorsiflexion restriction 2.8° vs 6.7°, p = 0.003). Outcomes after fibular fractures with an additional posterior fragment were similar to isolated fibular fractures. However, significant differences were found between unimalleolar and trimalleolar (a combination of lateral, medial and posterior) fractures (AOFAS 97 vs 88, p < 0.001; AAOS 97 vs 90, p = 0.003; VAS-pain 1.1 vs 2.3 p < 0.001; dorsiflexion restriction 2.9° vs 6.9°, p < 0.001). There was no significant difference in isolated fibular fractures with or without additional deltoid ligament injury. In addition, no functional differences were found between bimalleolar and trimalleolar fractures. Surprisingly, poor outcomes were found for isolated medial malleolar fractures. Development of osteoarthritis occurred mainly in trimalleolar fractures with a posterior fragment larger than 5 %.\n\n\nCONCLUSIONS\nThe results of our study show that long-term functional outcome is strongly associated to medial malleolar fractures, isolated or as part of bi- or trimalleolar fractures. More cases of osteoarthritis are found in trimalleolar fractures.",
"title": ""
},
{
"docid": "0f24b6c36586505c1f4cc001e3ddff13",
"text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.",
"title": ""
}
] | scidocsrr |
63e603175c9009da16d78693caab1772 | Spectral and Energy-Efficient Wireless Powered IoT Networks: NOMA or TDMA? | [
{
"docid": "1a615a022c441f413fcbdb3dbff9e66d",
"text": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.",
"title": ""
},
{
"docid": "29360e31131f37830e0d6271bab63a6e",
"text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.",
"title": ""
}
] | [
{
"docid": "93b87e8dde0de0c1b198f6a073858d80",
"text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.",
"title": ""
},
{
"docid": "45494f14c2d9f284dd3ad3a5be49ca78",
"text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.",
"title": ""
},
{
"docid": "fd9461aeac51be30c9d0fbbba298a79b",
"text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.",
"title": ""
},
{
"docid": "cb266f07461a58493d35f75949c4605e",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
{
"docid": "9a7c915803c84bc2270896bd82b4162d",
"text": "In this paper we present a voice command and mouth gesture based robot command interface which is capable of controlling three degrees of freedom. The gesture set was designed in order to avoid head rotation and translation, and thus relying solely in mouth movements. Mouth segmentation is performed by using the normalized a* component, as in [1]. The gesture detection process is carried out by a Gaussian Mixture Model (GMM) based classifier. After that, a state machine stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci Assisted Surgery command console.",
"title": ""
},
{
"docid": "4e6709bf897352c4e8b24a5b77e4e2c5",
"text": "Large-scale classification is an increasingly critical Big Data problem. So far, however, very little has been published on how this is done in practice. In this paper we describe Chimera, our solution to classify tens of millions of products into 5000+ product types at WalmartLabs. We show that at this scale, many conventional assumptions regarding learning and crowdsourcing break down, and that existing solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in-house analysts), and crowdsourcing to achieve accurate, continuously improving, and cost-effective classification. We discuss a set of lessons learned for other similar Big Data systems. In particular, we argue that at large scales crowdsourcing is critical, but must be used in combination with learning, rules, and in-house analysts. We also argue that using rules (in conjunction with learning) is a must, and that more research attention should be paid to helping analysts create and manage (tens of thousands of) rules more effectively.",
"title": ""
},
{
"docid": "344e5742cc3c1557589cea05b429d743",
"text": "Herein we present a novel big-data framework for healthcare applications. Healthcare data is well suited for bigdata processing and analytics because of the variety, veracity and volume of these types of data. In recent times, many areas within healthcare have been identified that can directly benefit from such treatment. However, setting up these types of architecture is not trivial. We present a novel approach of building a big-data framework that can be adapted to various healthcare applications with relative use, making this a one-stop “Big-Data-Healthcare-in-a-Box”.",
"title": ""
},
{
"docid": "ddc18f2d129d95737b8f0591560d202d",
"text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.",
"title": ""
},
{
"docid": "48aa68862748ab502f3942300b4d8e1e",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "b4a2c3679fe2490a29617c6a158b9dbc",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "1c915d0ffe515aa2a7c52300d86e90ba",
"text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.",
"title": ""
},
{
"docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc",
"text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting",
"title": ""
},
{
"docid": "82a0169afe20e2965f7fdd1a8597b7d3",
"text": "Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.",
"title": ""
},
{
"docid": "387634b226820f3aa87fede466acd6c2",
"text": "Objectives To evaluate the ability of a short-form FCE to predict future timely and sustained return-to-work. Methods A prospective cohort study was conducted using data collected during a cluster RCT. Subject performance on the items in the short-form FCE was compared to administrative recovery outcomes from a workers’ compensation database. Outcomes included days to claim closure, days to time loss benefit suspension and future recurrence (defined as re-opening a closed claim, restarting benefits, or filing a new claim for injury to the same body region). Analysis included multivariable Cox and logistic regression using a risk factor modeling strategy. Potential confounders included age, sex, injury duration, and job attachment status, among others. Results The sample included 147 compensation claimants with a variety of musculoskeletal injuries. Subjects who demonstrated job demand levels on all FCE items were more likely to have their claims closed (adjusted Hazard Ratio 5.52 (95% Confidence Interval 3.42–8.89), and benefits suspended (adjusted Hazard Ratio 5.45 (95% Confidence Interval 2.73–10.85) over the follow-up year. The proportion of variance explained by the FCE ranged from 18 to 27%. FCE performance was not significantly associated with future recurrence. Conclusion A short-form FCE appears to provide useful information for predicting time to recovery as measured through administrative outcomes, but not injury recurrence. The short-form FCE may be an efficient option for clinicians using FCE in the management of injured workers.",
"title": ""
},
{
"docid": "9fd247bb0f45d09e11c05fca48372ee8",
"text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle a reference circuit used in high voltage chip is designed. The simulation results show that a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, the output voltage is insensitive to the power supply, when the supply voltage rages from 3.5∼40V, the output voltage is equal to 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as stability reference voltage in power management IC.",
"title": ""
},
{
"docid": "0d0fae25e045c730b68d63e2df1dfc7f",
"text": "It is very difficult to over-emphasize the benefits of accurate data. Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.",
"title": ""
},
{
"docid": "04373474e0d9902fdee169492ece6dd0",
"text": "The development of ivermectin as a complementary vector control tool will require good quality evidence. This paper reviews the different eco-epidemiological contexts in which mass drug administration with ivermectin could be useful. Potential scenarios and pharmacological strategies are compared in order to help guide trial design. The rationale for a particular timing of an ivermectin-based tool and some potentially useful outcome measures are suggested.",
"title": ""
},
{
"docid": "5c1d6a2616a54cd8d8316b8d37f0147d",
"text": "Cadmium (Cd) is a toxic, nonessential transition metal and contributes a health risk to humans, including various cancers and cardiovascular diseases; however, underlying molecular mechanisms remain largely unknown. Cells transmit information to the next generation via two distinct ways: genetic and epigenetic. Chemical modifications to DNA or histone that alters the structure of chromatin without change of DNA nucleotide sequence are known as epigenetics. These heritable epigenetic changes include DNA methylation, post-translational modifications of histone tails (acetylation, methylation, phosphorylation, etc), and higher order packaging of DNA around nucleosomes. Apart from DNA methyltransferases, histone modification enzymes such as histone acetyltransferase, histone deacetylase, and methyltransferase, and microRNAs (miRNAs) all involve in these epigenetic changes. Recent studies indicate that Cd is able to induce various epigenetic changes in plant and mammalian cells in vitro and in vivo. Since aberrant epigenetics plays a critical role in the development of various cancers and chronic diseases, Cd may cause the above-mentioned pathogenic risks via epigenetic mechanisms. Here we review the in vitro and in vivo evidence of epigenetic effects of Cd. The available findings indicate that epigenetics occurred in association with Cd induction of malignant transformation of cells and pathological proliferation of tissues, suggesting that epigenetic effects may play a role in Cd toxic, particularly carcinogenic effects. The future of environmental epigenomic research on Cd should include the role of epigenetics in determining long-term and late-onset health effects following Cd exposure.",
"title": ""
}
] | scidocsrr |
5dd0919efe858a49f7b21cd278e925ec | A hybrid approach to fake news detection on social media | [
{
"docid": "35b025c508a568e0d73d9113c7cdb5e2",
"text": "Satire is an attractive subject in deception detection research: it is a type of deception that intentionally incorporates cues revealing its own deceptiveness. Whereas other types of fabrications aim to instill a false sense of truth in the reader, a successful satirical hoax must eventually be exposed as a jest. This paper provides a conceptual overview of satire and humor, elaborating and illustrating the unique features of satirical news, which mimics the format and style of journalistic reporting. Satirical news stories were carefully matched and examined in contrast with their legitimate news counterparts in 12 contemporary news topics in 4 domains (civics, science, business, and “soft” news). Building on previous work in satire detection, we proposed an SVMbased algorithm, enriched with 5 predictive features (Absurdity, Humor, Grammar, Negative Affect, and Punctuation) and tested their combinations on 360 news articles. Our best predicting feature combination (Absurdity, Grammar and Punctuation) detects satirical news with a 90% precision and 84% recall (F-score=87%). Our work in algorithmically identifying satirical news pieces can aid in minimizing the potential deceptive impact of satire.",
"title": ""
}
] | [
{
"docid": "f9eed4f99d70c51dc626a61724540d3c",
"text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.",
"title": ""
},
{
"docid": "b93888e6c47d6f2dab8e1b2d0fec8b14",
"text": "We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The multimodal affect detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate between naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and neutral. Training and validation data for the affect detector were collected in a study where 28 learners completed a 32- min. tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment type interaction, where the face was the most diagnostic channel for spontaneous affect judgments (i.e., at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than the best single-channel model for the fixed but not spontaneous affect expressions. However, multichannel models reduced the discrepancy (i.e., variance in the precision of the different emotions) of the discriminant models for both judgment types. The results also indicated that the combination of channels yielded superadditive effects for some affective states, but additive, redundant, and inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of some of our major findings.",
"title": ""
},
{
"docid": "ef4ea0d489cd9cb7ea4734d96b379721",
"text": "A huge repository of terabytes of data is generated each day from modern information systems and digital technologies such as Internet of Things and cloud computing. Analysis of these massive data requires a lot of efforts at multiple levels to extract knowledge for decision making. Therefore, big data analysis is a current area of research and development. The basic objective of this paper is to explore the potential impact of big data challenges, open research issues, and various tools associated with it. As a result, this article provides a platform to explore big data at numerous stages. Additionally, it opens a new horizon for researchers to develop the solution, based on the challenges and open research issues. Keywords—Big data analytics; Hadoop; Massive data; Structured data; Unstructured Data",
"title": ""
},
{
"docid": "9be0134d63fbe6978d786280fb133793",
"text": "Yann Le Cun AT&T Bell Labs Holmdel NJ 07733 We introduce a new approach for on-line recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution \"annotated images\" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.",
"title": ""
},
{
"docid": "de333f099bad8a29046453e099f91b84",
"text": "Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the high-frequency trading, forecasting for trading purposes is even a more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. Our experiments in a large-scale limit order book data set show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.",
"title": ""
},
{
"docid": "f1b1dc51cf7a6d8cb3b644931724cad6",
"text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.",
"title": ""
},
{
"docid": "bf3b8f8a609aa210b87c735743af3f31",
"text": "It has been pointed out that about 30% of the traffic congestion is caused by vehicles cruising around their destination and looking for a place to park. Therefore, addressing the problems associated with parking in crowded urban areas is of great significance. One effective solution is providing guidance for the vehicles to be parked according to the occupancy status of each parking lot. However, the existing parking guidance schemes mainly rely on deploying sensors or RSUs in the parking lot, which would incur substantial capital overhead. To reduce the aforementioned cost, we propose IPARK, which taps into the unused resources (e.g., wireless device, rechargeable battery, and storage capability) offered by parked vehicles to perform parking guidance. In IPARK, the cluster formed by parked vehicles generates the parking lot map automatically, monitors the occupancy status of each parking space in real time, and provides assistance for vehicles searching for parking spaces. We propose an efficient architecture for IPARK and investigate the challenging issues in realizing parking guidance over this architecture. Finally, we investigate IPARK through realistic experiments and simulation. The numerical results obtained verify that our scheme achieves effective parking guidance in VANETs.",
"title": ""
},
{
"docid": "e7bdf6d9a718127b5b9a94fed8afc0a5",
"text": "BACKGROUND\nUse of the Internet for health information continues to grow rapidly, but its impact on health care is unclear. Concerns include whether patients' access to large volumes of information will improve their health; whether the variable quality of the information will have a deleterious effect; the effect on health disparities; and whether the physician-patient relationship will be improved as patients become more equal partners, or be damaged if physicians have difficulty adjusting to a new role.\n\n\nMETHODS\nTelephone survey of nationally representative sample of the American public, with oversample of people in poor health.\n\n\nRESULTS\nOf the 3209 respondents, 31% had looked for health information on the Internet in the past 12 months, 16% had found health information relevant to themselves and 8% had taken information from the Internet to their physician. Looking for information on the Internet showed a strong digital divide; however, once information had been looked for, socioeconomic factors did not predict other outcomes. Most (71%) people who took information to the physician wanted the physician's opinion, rather than a specific intervention. The effect of taking information to the physician on the physician-patient relationship was likely to be positive as long as the physician had adequate communication skills, and did not appear challenged by the patient bringing in information.\n\n\nCONCLUSIONS\nFor health information on the Internet to achieve its potential as a force for equity and patient well-being, actions are required to overcome the digital divide; assist the public in developing searching and appraisal skills; and ensure physicians have adequate communication skills.",
"title": ""
},
{
"docid": "f471ac7a453dee8820f0cf6be7b8b657",
"text": "The spatial responses of many of the cells recorded in all layers of rodent medial entorhinal cortex (mEC) show mutually aligned grid patterns. Recent experimental findings have shown that grids can often be better described as elliptical rather than purely circular and that, beyond the mutual alignment of their grid axes, ellipses tend to also orient their long axis along preferred directions. Are grid alignment and ellipse orientation aspects of the same phenomenon? Does the grid alignment result from single-unit mechanisms or does it require network interactions? We address these issues by refining a single-unit adaptation model of grid formation, to describe specifically the spontaneous emergence of conjunctive grid-by-head-direction cells in layers III, V, and VI of mEC. We find that tight alignment can be produced by recurrent collateral interactions, but this requires head-direction (HD) modulation. Through a competitive learning process driven by spatial inputs, grid fields then form already aligned, and with randomly distributed spatial phases. In addition, we find that the self-organization process is influenced by any anisotropy in the behavior of the simulated rat. The common grid alignment often orients along preferred running directions (RDs), as induced in a square environment. When speed anisotropy is present in exploration behavior, the shape of individual grids is distorted toward an ellipsoid arrangement. Speed anisotropy orients the long ellipse axis along the fast direction. Speed anisotropy on its own also tends to align grids, even without collaterals, but the alignment is seen to be loose. Finally, the alignment of spatial grid fields in multiple environments shows that the network expresses the same set of grid fields across environments, modulo a coherent rotation and translation. Thus, an efficient metric encoding of space may emerge through spontaneous pattern formation at the single-unit level, but it is coherent, hence context-invariant, if aided by collateral interactions.",
"title": ""
},
{
"docid": "ab70c8814c0e15695c8142ce8aad69bc",
"text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, by unawareness of its domain limitations or simply to test their capacity. These interactions are considered to be Out-Of-Domain and several strategies can be found in the literature to deal with some specific situations. Since if a certain input appears once, it has a non-zero probability of being entered later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges, thus, naturally. In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.",
"title": ""
},
{
"docid": "1c4e4f0ffeae8b03746ca7de184989ef",
"text": "Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring sourcecode. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown results in low performance overhead and a security analysis discusses any remaining gadgets.",
"title": ""
},
{
"docid": "eb91cfce08dd5e17599c18e3b291db3e",
"text": "The extraordinary electronic properties of graphene provided the main thrusts for the rapid advance of graphene electronics. In photonics, the gate-controllable electronic properties of graphene provide a route to efficiently manipulate the interaction of photons with graphene, which has recently sparked keen interest in graphene plasmonics. However, the electro-optic tuning capability of unpatterned graphene alone is still not strong enough for practical optoelectronic applications owing to its non-resonant Drude-like behaviour. Here, we demonstrate that substantial gate-induced persistent switching and linear modulation of terahertz waves can be achieved in a two-dimensional metamaterial, into which an atomically thin, gated two-dimensional graphene layer is integrated. The gate-controllable light-matter interaction in the graphene layer can be greatly enhanced by the strong resonances of the metamaterial. Although the thickness of the embedded single-layer graphene is more than six orders of magnitude smaller than the wavelength (<λ/1,000,000), the one-atom-thick layer, in conjunction with the metamaterial, can modulate both the amplitude of the transmitted wave by up to 47% and its phase by 32.2° at room temperature. More interestingly, the gate-controlled active graphene metamaterials show hysteretic behaviour in the transmission of terahertz waves, which is indicative of persistent photonic memory effects.",
"title": ""
},
{
"docid": "904efe77c6a31867cf096770b99e856b",
"text": "Deep neural networks have been proven powerful at processing perceptual data, such as images and audio. However for tabular data, tree-based models are more popular. A nice property of tree-based models is their natural interpretability. In this work, we present Deep Neural Decision Trees (DNDT) – tree models realised by neural networks. A DNDT is intrinsically interpretable, as it is a tree. Yet as it is also a neural network (NN), it can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting. We evaluate DNDT on several tabular datasets, verify its efficacy, and investigate similarities and differences between DNDT and vanilla decision trees. Interestingly, DNDT self-prunes at both split and feature-level.",
"title": ""
},
{
"docid": "0a2cba5e6d5b6b467e34e79ee099f509",
"text": "Wearable devices are used in various applications to collect information including step information, sleeping cycles, workout statistics, and health-related information. Due to the nature and richness of the data collected by such devices, it is important to ensure the security of the collected data. This paper presents a new lightweight authentication scheme suitable for wearable device deployment. The scheme allows a user to mutually authenticate his/her wearable device(s) and the mobile terminal (e.g., Android and iOS device) and establish a session key among these devices (worn and carried by the same user) for secure communication between the wearable device and the mobile terminal. The security of the proposed scheme is then demonstrated through the broadly accepted real-or-random model, as well as using the popular formal security verification tool, known as the Automated validation of Internet security protocols and applications. Finally, we present a comparative summary of the proposed scheme in terms of the overheads such as computation and communication costs, security and functionality features of the proposed scheme and related schemes, and also the evaluation findings from the NS2 simulation.",
"title": ""
},
{
"docid": "185ae8a2c89584385a810071c6003c15",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "ad79b709607e468156060d9ac543638b",
"text": "I. Introduction UNTIL 35 yr ago, most scientists did not take research on the pineal gland seriously. The decade beginning in 1956, however, provided several discoveries that laid the foundation for what has become a very active area of investigation. These important early observations included the findings that, 1), the physiological activity of the pineal is influenced by the photoperiodic environment (1–5); 2), the gland contains a substance, N-acetyl-5-methoxytryptamine or melatonin, which has obvious endocrine capabilities (6, 7); 3), the function of the reproductive system in photoperiodically dependent rodents is inextricably linked to the physiology of the pineal gland (5, 8, 9); 4), the sympathetic innervation to the pineal is required for the gland to maintain its biosynthetic and endocrine activities (10, 11); and 5), the pineal gland can be rapidly removed from rodents with minimal damage to adjacent neural structures using a specially designed trephine (12).",
"title": ""
},
{
"docid": "849e8f3932c7db5643901f3909306292",
"text": "PHP is one of the most popular languages for server-side application development. The language is highly dynamic, providing programmers with a large amount of flexibility. However, these dynamic features also have a cost, making it difficult to apply traditional static analysis techniques used in standard code analysis and transformation tools. As part of our work on creating analysis tools for PHP, we have conducted a study over a significant corpus of open-source PHP systems, looking at the sizes of actual PHP programs, which features of PHP are actually used, how often dynamic features appear, and how distributed these features are across the files that make up a PHP website. We have also looked at whether uses of these dynamic features are truly dynamic or are, in some cases, statically understandable, allowing us to identify specific patterns of use which can then be taken into account to build more precise tools. We believe this work will be of interest to creators of analysis tools for PHP, and that the methodology we present can be leveraged for other dynamic languages with similar features.",
"title": ""
},
{
"docid": "6712ec987ec783dceae442993be0cd6e",
"text": "In this paper, we have conducted a literature review on the recent developments and publications involving the vehicle routing problem and its variants, namely vehicle routing problem with time windows (VRPTW) and the capacitated vehicle routing problem (CVRP) and also their variants. The VRP is classified as an NP-hard problem. Hence, the use of exact optimization methods may be difficult to solve these problems in acceptable CPU times, when the problem involves real-world data sets that are very large. The vehicle routing problem comes under combinatorial problem. Hence, to get solutions in determining routes which are realistic and very close to the optimal solution, we use heuristics and meta-heuristics. In this paper we discuss the various exact methods and the heuristics and meta-heuristics used to solve the VRP and its variants.",
"title": ""
},
{
"docid": "939593fce7d86ae8ccd6a67532fd78b0",
"text": "In patients with spinal cord injury, the primary or mechanical trauma seldom causes total transection, even though the functional loss may be complete. In addition, biochemical and pathological changes in the cord may worsen after injury. To explain these phenomena, the concept of the secondary injury has evolved for which numerous pathophysiological mechanisms have been postulated. This paper reviews the concept of secondary injury with special emphasis on vascular mechanisms. Evidence is presented to support the theory of secondary injury and the hypothesis that a key mechanism is posttraumatic ischemia with resultant infarction of the spinal cord. Evidence for the role of vascular mechanisms has been obtained from a variety of models of acute spinal cord injury in several species. Many different angiographic methods have been used for assessing microcirculation of the cord and for measuring spinal cord blood flow after trauma. With these techniques, the major systemic and local vascular effects of acute spinal cord injury have been identified and implicated in the etiology of secondary injury. The systemic effects of acute spinal cord injury include hypotension and reduced cardiac output. The local effects include loss of autoregulation in the injured segment of the spinal cord and a marked reduction of the microcirculation in both gray and white matter, especially in hemorrhagic regions and in adjacent zones. The microcirculatory loss extends for a considerable distance proximal and distal to the site of injury. Many studies have shown a dose-dependent reduction of spinal cord blood flow varying with the severity of injury, and a reduction of spinal cord blood flow which worsens with time after injury. The functional deficits due to acute spinal cord injury have been measured electrophysiologically with techniques such as motor and somatosensory evoked potentials and have been found proportional to the degree of posttraumatic ischemia. The histological effects include early hemorrhagic necrosis leading to major infarction at the injury site. These posttraumatic vascular effects can be treated. Systemic normotension can be restored with volume expansion or vasopressors, and spinal cord blood flow can be improved with dopamine, steroids, nimodipine, or volume expansion. The combination of nimodipine and volume expansion improves posttraumatic spinal cord blood flow and spinal cord function measured by evoked potentials. These results provide strong evidence that posttraumatic ischemia is an important secondary mechanism of injury, and that it can be counteracted.",
"title": ""
},
{
"docid": "d61fde7b9c5a2c17c7cafb44aa51fe9b",
"text": "868 NOTICES OF THE AMS VOLUME 47, NUMBER 8 In April 2000 the National Council of Teachers of Mathematics (NCTM) released Principles and Standards for School Mathematics—the culmination of a multifaceted, three-year effort to update NCTM’s earlier standards documents and to set forth goals and recommendations for mathematics education in the prekindergarten-through-grade-twelve years. As the chair of the Writing Group, I had the privilege to interact with all aspects of the development and review of this document and with the committed groups of people, including the members of the Writing Group, who contributed immeasurably to this process. This article provides some background about NCTM and the standards, the process of development, efforts to gather input and feedback, and ways in which feedback from the mathematics community influenced the document. The article concludes with a section that provides some suggestions for mathematicians who are interested in using Principles and Standards.",
"title": ""
}
] | scidocsrr |
c5c495f5eac4239f4d35d20581d38d58 | A multi-source dataset of urban life in the city of Milan and the Province of Trentino | [
{
"docid": "a026cb81bddfa946159d02b5bb2e341d",
"text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.",
"title": ""
},
{
"docid": "f8fe22b2801a250a52e3d19ae23804e9",
"text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.",
"title": ""
}
] | [
{
"docid": "e05fc780d1f3fd4061918e50f5dd26a0",
"text": "The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is being proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, whereas one case is presented in the paper. Issues related to the use of the CDD approach, namely, CDD methodology and tool support are also discussed.",
"title": ""
},
{
"docid": "c74cd5b9753579517462909bd196ad90",
"text": "Interactions around money and financial services are a critical part of our lives on and off-line. New technologies and new ways of interacting with these technologies are of huge interest; they enable new business models and ways of making sense of this most important aspect of our everyday lives. At the same time, money is an essential element in HCI research and design. This workshop is intended to bring together researchers and practitioners involved in the design and use of systems that combine digital and new media with monetary and financial interactions to build on an understanding of these technologies and their impacts on users' behaviors. The workshop will focus on social, technical, and economic aspects around everyday user interactions with money and emerging financial technologies and systems.",
"title": ""
},
{
"docid": "88ca6c25c4be7523eea29d909bd84813",
"text": "A health risk appraisal function has been developed for the prediction of stroke using the Framingham Study cohort. The stroke risk factors included in the profile are age, systolic blood pressure, the use of antihypertensive therapy, diabetes mellitus, cigarette smoking, prior cardiovascular disease (coronary heart disease, cardiac failure, or intermittent claudication), atrial fibrillation, and left ventricular hypertrophy by electrocardiogram. Based on 472 stroke events occurring during 10 years' follow-up from biennial examinations 9 and 14, stroke probabilities were computed using the Cox proportional hazards model for each sex based on a point system. On the basis of the risk factors in the profile, which can be readily determined on routine physical examination in a physician's office, stroke risk can be estimated. An individual's risk can be related to the average risk of stroke for persons of the same age and sex. The information that one's risk of stroke is several times higher than average may provide the impetus for risk factor modification. It may also help to identify persons at substantially increased stroke risk resulting from borderline levels of multiple risk factors such as those with mild or borderline hypertension and facilitate multifactorial risk factor modification.",
"title": ""
},
{
"docid": "9954793c44b1b8fc87c0ae8724e0e4de",
"text": "The Khanya project has been equipping schools and educators with ICT skills and equipment to be used in the curriculum delivery in South Africa. However, research and anecdotal evidence show that there is low adoption rate of ICT among educators in Khanya schools. This interpretive study sets out to analyse the factors which are preventing the educators from using the technology in their work. The perspective of limited access and/or use of ICT as deprivation of capabilities provides a conceptual base for this paper. We employed Sen’s Capability Approach as a conceptual lens to examine the educators’ situation regarding ICT for teaching and learning. Data was collected through in-depth interviews with fourteen educators and two Khanya personnel. The results of the study show that there are a number of factors (personal, social and environmental) which are preventing the educators from realising their potential capabilities from the ICT.",
"title": ""
},
{
"docid": "cb7b53be8ef7cd9330445668f8f0eee6",
"text": "Humans have an innate tendency to anthropomorphize surrounding entities and have always been fascinated by the creation of machines endowed with human-inspired capabilities and traits. In the last few decades, this has become a reality with enormous advances in hardware performance, computer graphics, robotics technology, and artificial intelligence. New interdisciplinary research fields have brought forth cognitive robotics aimed at building a new generation of control systems and providing robots with social, empathetic and affective capabilities. This paper presents the design, implementation, and test of a human-inspired cognitive architecture for social robots. State-of-the-art design approaches and methods are thoroughly analyzed and discussed, cases where the developed system has been successfully used are reported. The tests demonstrated the system’s ability to endow a social humanoid robot with human social behaviors and with in-silico robotic emotions.",
"title": ""
},
{
"docid": "8f089d55c0ce66db7bbf27476267a8e5",
"text": "Planning radar sites is very important for several civilian and military applications. Depending on the security or defence issue different requirements exist regarding the radar coverage and the radar sites. QSiteAnalysis offers several functions to automate, improve and speed up this highly complex task. Wave propagation effects such as diffraction, refraction, multipath and atmospheric attenuation are considered for the radar coverage calculation. Furthermore, an automatic optimisation of the overall coverage is implemented by optimising the radar sites. To display the calculation result, the calculated coverage is visualised in 2D and 3D. Therefore, QSiteAnalysis offers several functions to improve and automate radar site studies.",
"title": ""
},
{
"docid": "aabf75855e39682b353c46332bc218db",
"text": "Semantic Web Mining is the outcome of two new and fast developing domains: Semantic Web and Data Mining. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Data Mining is the nontrivial process of identifying valid, previously unknown, potentially useful patterns in data. Semantic Web Mining refers to the application of data mining techniques to extract knowledge from World Wide Web or the area of data mining that refers to the use of algorithms for extracting patterns from resources distributed over in the web. The aim of Semantic Web Mining is to discover and retrieve useful and interesting patterns from a huge set of web data. This web data consists of different kind of information, including web structure data, web log data and user profiles data. Semantic Web Mining is a relatively new area, broadly interdisciplinary, attracting researchers from: computer science, information retrieval specialists and experts from business studies fields. Web data mining includes web content mining, web structure mining and web usage mining. All of these approaches attempt to extract knowledge from the web, produce some useful results from the knowledge extracted and apply these results to the real world problems. To improve the internet service quality and increase the user click rate on a specific website, it is necessary for a web developer to know what the user really want to do, predict which pages the user is potentially interested in. In this paper, various techniques for Semantic Web mining like web content mining, web usage mining and web structure mining are discussed. Our main focus is on web usage mining and its application in web personalization. Study shows that the accuracy of recommendation system has improved significantly with the use of semantic web mining in web personalization.",
"title": ""
},
{
"docid": "d4ffeb204691f9a9188e8deecaf2d811",
"text": "Salsify is a new architecture for real-time Internet video that tightly integrates a video codec and a network transport protocol, allowing it to respond quickly to changing network conditions and avoid provoking packet drops and queueing delays. To do this, Salsify optimizes the compressed length and transmission time of each frame, based on a current estimate of the network’s capacity; in contrast, existing systems generally control longer-term metrics like frame rate or bit rate. Salsify’s per-frame optimization strategy relies on a purely functional video codec, which Salsify uses to explore alternative encodings of each frame at different quality levels. We developed a testbed for evaluating real-time video systems end-to-end with reproducible video content and network conditions. Salsify achieves lower video delay and, over variable network paths, higher visual quality than five existing systems: FaceTime, Hangouts, Skype, and WebRTC’s reference implementation with and without scalable video coding.",
"title": ""
},
{
"docid": "66878197b06f3fac98f867d5457acafe",
"text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.",
"title": ""
},
{
"docid": "193aee1131ce05d5d4a4316871c193b8",
"text": "In this paper, we discuss wireless sensor and networking technologies for swarms of inexpensive aquatic surface drones in the context of the HANCAD project. The goal is to enable the swarm to perform maritime tasks such as sea-border patrolling and environmental monitoring, while keeping the cost of each drone low. Communication between drones is essential for the success of the project. Preliminary experiments show that XBee modules are promising for energy efficient multi-hop drone-to-drone communication.",
"title": ""
},
{
"docid": "2ad8723c9fce1a6264672f41824963f8",
"text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.",
"title": ""
},
{
"docid": "37572963400c8a78cef3cd4a565b328e",
"text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.",
"title": ""
},
{
"docid": "37642371bbcc3167f96548d02ccd832e",
"text": "The manipulation of light-matter interactions in two-dimensional atomically thin crystals is critical for obtaining new optoelectronic functionalities in these strongly confined materials. Here, by integrating chemically grown monolayers of MoS2 with a silver-bowtie nanoantenna array supporting narrow surface-lattice plasmonic resonances, a unique two-dimensional optical system has been achieved. The enhanced exciton-plasmon coupling enables profound changes in the emission and excitation processes leading to spectrally tunable, large photoluminescence enhancement as well as surface-enhanced Raman scattering at room temperature. Furthermore, due to the decreased damping of MoS2 excitons interacting with the plasmonic resonances of the bowtie array at low temperatures stronger exciton-plasmon coupling is achieved resulting in a Fano line shape in the reflection spectrum. The Fano line shape, which is due to the interference between the pathways involving the excitation of the exciton and plasmon, can be tuned by altering the coupling strengths between the two systems via changing the design of the bowties lattice. The ability to manipulate the optical properties of two-dimensional systems with tunable plasmonic resonators offers a new platform for the design of novel optical devices with precisely tailored responses.",
"title": ""
},
{
"docid": "3cf458392fb61a5e70647c9c951d5db8",
"text": "This paper presents an online feature selection mechanism for evaluating multiple features while tracking and adjusting the set of features used to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, we compute log likelihood ratios of class conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples are presented that demonstrate how this method adapts to changing appearances of both tracked object and scene background. We note susceptibility of the variance ratio feature selection method to distraction by spatially correlated background clutter and develop an additional approach that seeks to minimize the likelihood of distraction.",
"title": ""
},
{
"docid": "f0f7bd0223d69184f3391aaf790a984d",
"text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.",
"title": ""
},
{
"docid": "99efebd647fa083fab4e0f091b0b471b",
"text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "be427b129a89edb6da1b21c4f8df526b",
"text": "Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the system’s overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism and it is argued that commitments (pledges to undertake a specified course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.",
"title": ""
},
{
"docid": "42167e7708bb73b08972e15a44a6df02",
"text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"title": ""
},
{
"docid": "374383490d88240b410a14a185ff082e",
"text": "A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use therefore is important. The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.",
"title": ""
},
{
"docid": "96da5252dac0eb0010a49519592c4104",
"text": "Three-level converters are becoming a realistic alternative to the conventional converters in high-power wind-energy applications. In this paper, a complete analytical strategy to model a back-to-back three-level converter is described. This tool permits us to adapt the control strategy to the specific application. Moreover, the model of different loads can be incorporated to the overall model. Both control strategy and load models are included in the complete system model. The proposed model pays special attention to the unbalance in the capacitors' voltage of three-level converters, including the dynamics of the capacitors' voltage. In order to validate the model and the control strategy proposed in this paper, a 3-MW three-level back-to-back power converter used as a power conditioning system of a variable speed wind turbine has been simulated. Finally, the described strategy has been implemented in a 50-kVA scalable prototype as well, providing a satisfactory performance",
"title": ""
}
] | scidocsrr |
8ab4c7502d246208fbb03518c0a34b02 | Probabilistic sentential decision diagrams: Learning with massive logical constraints | [
{
"docid": "25346cdef3e97173dab5b5499c4d4567",
"text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.",
"title": ""
},
{
"docid": "d104206fd95525192240e9a6d6aedd89",
"text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.",
"title": ""
}
] | [
{
"docid": "bd8788c3d4adc5f3671f741e884c7f34",
"text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.",
"title": ""
},
{
"docid": "94c6f94e805a366c6fa6f995f13a92ba",
"text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.",
"title": ""
},
{
"docid": "473d8cbcd597c961819c5be6ab2e658e",
"text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.",
"title": ""
},
{
"docid": "9d26d8e8b34319defd89a0daca3969e9",
"text": "The paper presents a safe robot navigation system based on omnidirectional vision. The 360 degree camera images are analyzed for obstacle detection and avoidance and of course for navigating safely in the given indoor environment. This module can process images in real time and extracts the direction and distance information of the obstacles from the camera system mounted on the robot. This two data is the output of the module. Because of the distortions of the omnidirectional vision, it is necessary to calibrate the camera and not only for that but also to get the right direction and distances information. Several image processing methods and technics were used which are investigated in the rest of this paper.",
"title": ""
},
{
"docid": "14ab095775e6687bde93fbe7849475f5",
"text": "In this paper, we present an evolutionary trust game to investigate the formation of trust in the so-called sharing economy from a population perspective. To the best of our knowledge, this is the first attempt to model trust in the sharing economy using the evolutionary game theory framework. Our sharing economy trust model consists of four types of players: a trustworthy provider, an untrustworthy provider, a trustworthy consumer, and an untrustworthy consumer. Through systematic simulation experiments, five different scenarios with varying proportions and types of providers and consumers were considered. Our results show that each type of players influences the existence and survival of other types of players, and untrustworthy players do not necessarily dominate the population even when the temptation to defect (i.e., to be untrustworthy) is high. Our findings may have important implications for understanding the emergence of trust in the context of sharing economy transactions.",
"title": ""
},
{
"docid": "7aca9b586e30a735c51ffe38ef858b38",
"text": "In Chapter 1, Reigeluth described design theory as being different from descriptive theory in that it offers means to achieve goals. For an applied field like education, design theory is more useful and more easily applied than its descriptive counterpart, learning theory. But none of the 22 theories described in this book has yet been developed to a state of perfection; at very least they can all benefit from more detailed guidance for applying their methods to diverse situations. And more theories are sorely needed to provide guidance for additional kinds of learning and human development and for different kinds of situations, including the use of new information technologies as tools. This leads us to the important question, \" What research methods are most helpful for creating and improving instructional design theories? \" In this chapter, we offer a detailed description of one research methodology that holds much promise for generating the kind of knowledge that we believe is most useful to educators—a methodology that several theorists in this book have intuitively used to develop their theories. We refer to this methodology as \"formative research\"—a kind of developmental research or action research that is intended to improve design theory for designing instructional practices or processes. Reigeluth (1989) and Romiszowski (1988) have recommended this approach to expand the knowledge base in instructional-design theory. Newman (1990) has suggested something similar for research on the organizational impact of computers in schools. And Greeno, Collins and Resnick (1996) have identified several groups of researchers who are conducting something similar that they call \" design experiments, \" in which \" researchers and practitioners, particularly teachers, collaborate in the design, implementation, and analysis of changes in practice. \" (p. 15) Formative research has also been used for generating knowledge in as broad an area as systemic change in education We intend for this chapter to help guide educational researchers who are developing and refining instructional-design theories. Most researchers have not had the opportunity to learn formal research methodologies for developing design theories. Doctoral programs in universities tend to emphasize quantitative and qualitative research methodologies for creating descriptive knowledge of education. However, design theories are guidelines for practice, which tell us \"how to do\" education, not \"what is.\" We have found that traditional quantitative research methods (e.g., experiments, surveys, correlational analyses) are not particularly useful for improving instructional-design theory— especially in the early stages of development. Instead, …",
"title": ""
},
{
"docid": "56b3a5ff0295d0ffce2a60dc60c0033a",
"text": "This first installment of the new Human Augmentation department looks at various technologies designed to augment the human intellect and amplify human perception and cognition. Linking back to early work in interactive computing, Albrecht Schmidt considers how novel technologies can create a new relationship between digital technologies and humans.",
"title": ""
},
{
"docid": "16d3a7217182ad331d85eb619fa459ee",
"text": "Pupil diameter was monitored during picture viewing to assess effects of hedonic valence and emotional arousal on pupillary responses. Autonomic activity (heart rate and skin conductance) was concurrently measured to determine whether pupillary changes are mediated by parasympathetic or sympathetic activation. Following an initial light reflex, pupillary changes were larger when viewing emotionally arousing pictures, regardless of whether these were pleasant or unpleasant. Pupillary changes during picture viewing covaried with skin conductance change, supporting the interpretation that sympathetic nervous system activity modulates these changes in the context of affective picture viewing. Taken together, the data provide strong support for the hypothesis that the pupil's response during affective picture viewing reflects emotional arousal associated with increased sympathetic activity.",
"title": ""
},
{
"docid": "b12defb3d9d7c5ccda8c3e0b0858f55f",
"text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.",
"title": ""
},
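The denoising autoencoder that the passage above builds on can be illustrated with a minimal NumPy sketch: inputs are corrupted with masking noise and the network is trained to reconstruct the clean input. The sketch omits the paper's asymmetric lateral connections between hidden coding units and the greedy layer-wise stacking; the layer sizes, noise level and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 samples, 64 input dims, 32 hidden coding units.
X = rng.random((200, 64))
W_enc = rng.normal(0, 0.1, (64, 32)); b_enc = np.zeros(32)
W_dec = rng.normal(0, 0.1, (32, 64)); b_dec = np.zeros(64)

lr = 0.1
for epoch in range(50):
    # Corrupt inputs with masking noise, then reconstruct the clean input.
    mask = rng.random(X.shape) > 0.3
    X_noisy = X * mask
    H = sigmoid(X_noisy @ W_enc + b_enc)        # hidden code
    X_hat = sigmoid(H @ W_dec + b_dec)          # reconstruction
    # Squared-error loss and plain backpropagation updates.
    err = X_hat - X
    g_dec = err * X_hat * (1 - X_hat)
    g_enc = (g_dec @ W_dec.T) * H * (1 - H)
    W_dec -= lr * (H.T @ g_dec) / len(X); b_dec -= lr * g_dec.mean(0)
    W_enc -= lr * (X_noisy.T @ g_enc) / len(X); b_enc -= lr * g_enc.mean(0)
```

Stacking several such encoders greedily, as in the passage, would reuse the learned hidden codes of one layer as the inputs to the next.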
{
"docid": "088fdd091c2cc70f2e000622be4f3c62",
"text": "Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.",
"title": ""
},
{
"docid": "77e61d56d297b62e1078542fd74ffe5e",
"text": "This paper introduces a complete design method to construct an adaptive fuzzy logic controller (AFLC) for DC–DC converter. In a conventional fuzzy logic controller (FLC), knowledge on the system supplied by an expert is required for developing membership functions (parameters) and control rules. The proposed AFLC, on the other hand, do not required expert for making parameters and control rules. Instead, parameters and rules are generated using a model data file, which contains summary of input–output pairs. The FLC use Mamdani type fuzzy logic controllers for the defuzzification strategy and inference operators. The proposed controller is designed and verified by digital computer simulation and then implemented for buck, boost and buck–boost converters by using an 8-bit microcontroller. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e1edaf3e8754e8403b9be29f58ba3550",
"text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.",
"title": ""
},
{
"docid": "22c6ae71c708d5e2d1bc7e5e085c4842",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "a4a56e0647849c22b48e7e5dc3f3049b",
"text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process",
"title": ""
},
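The delay-and-sum beamforming (DSBF) step described above can be sketched as a frequency-domain azimuth scan: each microphone spectrum is phase-aligned toward a candidate direction and the aligned channels are summed. The far-field assumption, the 2D array geometry, the sample rate and the scan resolution below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def dsbf_azimuth_scan(frames, mic_xy, fs=16000, c=343.0, n_angles=72):
    """Delay-and-sum beamforming: steer a planar mic array over azimuths and
    return the per-angle output power (far-field, frequency-domain version).
    frames: (n_mics, n_samples) time-aligned microphone signals.
    mic_xy: (n_mics, 2) microphone positions in metres."""
    n_mics, n_samples = frames.shape
    spectra = np.fft.rfft(frames, axis=1)              # (n_mics, n_bins)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)     # (n_bins,)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    power = np.zeros(n_angles)
    for i, theta in enumerate(angles):
        direction = np.array([np.cos(theta), np.sin(theta)])
        delays = mic_xy @ direction / c                # per-mic delay (s)
        # Compensate each channel's phase for the steering direction, then sum.
        steering = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
        beam = (spectra * steering).sum(axis=0)
        power[i] = np.sum(np.abs(beam) ** 2)
    return angles, power   # argmax of `power` gives the estimated direction
```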
{
"docid": "53cf85922865609c4a7591bd06679660",
"text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g. imageahility and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, level with the latter task producing the largest semantic-level effects. Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.",
"title": ""
},
{
"docid": "0a94a995f91afd641013b97dcec7da2a",
"text": "Two competing encoding concepts are known to scale well with growing amounts of XML data: XPath Accelerator encoding implemented by MonetDB for in-memory documents and X-Hive’s Persistent DOM for on-disk storage. We identified two ways to improve XPath Accelerator and present prototypes for the respective techniques: BaseX boosts inmemory performance with optimized data and value index structures while Idefix introduces native block-oriented persistence with logarithmic update behavior for true scalability, overcoming main-memory constraints. An easy-to-use Java-based benchmarking framework was developed and used to consistently compare these competing techniques and perform scalability measurements. The established XMark benchmark was applied to all four systems under test. Additional fulltext-sensitive queries against the well-known DBLP database complement the XMark results. Not only did the latest version of X-Hive finally surprise with good scalability and performance numbers. Also, both BaseX and Idefix hold their promise to push XPath Accelerator to its limits: BaseX efficiently exploits available main memory to speedup XML queries while Idefix surpasses main-memory constraints and rivals the on-disk leadership of X-Hive. The competition between XPath Accelerator and Persistent DOM definitely is relaunched.",
"title": ""
},
{
"docid": "e56af4a3a8fbef80493d77b441ee1970",
"text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.",
"title": ""
},
{
"docid": "7835a3ecdb9a8563e29ee122e5987503",
"text": "Women diagnosed with complete spinal cord injury (SCI) at T10 or higher report sensations generated by vaginal-cervical mechanical self-stimulation (CSS). In this paper we review brain responses to sexual arousal and orgasm in such women, and further hypothesize that the afferent pathway for this unexpected perception is provided by the Vagus nerves, which bypass the spinal cord. Using functional magnetic resonance imaging (fMRI), we ascertained that the region of the medulla oblongata to which the Vagus nerves project (the Nucleus of the Solitary Tract or NTS) is activated by CSS. We also used an objective measure, CSS-induced analgesia response to experimentally induced finger pain, to ascertain the functionality of this pathway. During CSS, several women experienced orgasms. Brain regions activated during orgasm included the hypothalamic paraventricular nucleus, amygdala, accumbens-bed nucleus of the stria terminalis-preoptic area, hippocampus, basal ganglia (especially putamen), cerebellum, and anterior cingulate, insular, parietal and frontal cortices, and lower brainstem (central gray, mesencephalic reticular formation, and NTS). We conclude that the Vagus nerves provide a spinal cord-bypass pathway for vaginal-cervical sensibility and that activation of this pathway can produce analgesia and orgasm.",
"title": ""
},
{
"docid": "7ee4a708d41065c619a5bf9e86f871a3",
"text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.",
"title": ""
}
] | scidocsrr |
75e47e330359d1afc684d4cd17beae29 | Depth camera tracking with contour cues | [
{
"docid": "1782fc75827937c6b31951bfca997f48",
"text": "Registering 2 or more range scans is a fundamental problem, with application to 3D modeling. While this problem is well addressed by existing techniques such as ICP when the views overlap significantly at a good initialization, no satisfactory solution exists for wide baseline registration. We propose here a novel approach which leverages contour coherence and allows us to align two wide baseline range scans with limited overlap from a poor initialization. Inspired by ICP, we maximize the contour coherence by building robust corresponding pairs on apparent contours and minimizing their distances in an iterative fashion. We use the contour coherence under a multi-view rigid registration framework, and this enables the reconstruction of accurate and complete 3D models from as few as 4 frames. We further extend it to handle articulations, and this allows us to model articulated objects such as human body. Experimental results on both synthetic and real data demonstrate the effectiveness and robustness of our contour coherence based registration approach to wide baseline range scans, and to 3D modeling.",
"title": ""
},
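The registration in the passage above is built on top of ICP-style iterations: find correspondences, then solve for the rigid motion that minimizes their distances. The paper's contribution is to build those correspondences on apparent contours; the sketch below shows only the plain point-to-point ICP baseline with closest-point matches and a Kabsch/SVD alignment step.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iters=30):
    """Rigid alignment of src (N,3) onto dst (M,3) by classic point-to-point
    ICP. Returns (R, t) such that src @ R.T + t approximates dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iters):
        _, idx = tree.query(cur)            # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T                 # Kabsch rotation
        if np.linalg.det(R_step) < 0:       # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

In the contour-coherence variant, the closest-point query would be restricted to points on the predicted and observed apparent contours rather than the full clouds.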
{
"docid": "c64d5309c8f1e2254144215377b366b1",
"text": "Since the initial comparison of Seitz et al. [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al. [59], showing the results to compare more than favorably with the current state-of-the-art methods.",
"title": ""
},
{
"docid": "5dac8ef81c7a6c508c603b3fd6a87581",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
}
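The benchmark described above ships its own evaluation tools for drift and global pose error; a simplified version of the drift measure (relative pose error over a fixed frame offset) can be written as below. The function assumes the two trajectories are already associated frame by frame and expressed as 4x4 camera-to-world matrices; the official scripts additionally handle timestamp association and several reporting options not shown here.

```python
import numpy as np

def rpe_translation(est_poses, gt_poses, delta=1):
    """Translational relative pose error over a fixed frame offset `delta`.
    est_poses, gt_poses: sequences of 4x4 homogeneous camera-to-world matrices,
    associated frame by frame. Returns per-pair drift magnitudes (metres)."""
    errs = []
    for i in range(len(est_poses) - delta):
        # Relative motion over the window, for the estimate and ground truth.
        rel_est = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        rel_gt = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        err = np.linalg.inv(rel_gt) @ rel_est
        errs.append(np.linalg.norm(err[:3, 3]))
    return np.asarray(errs)   # e.g. report np.sqrt(np.mean(errs ** 2))
```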
] | [
{
"docid": "6bce7698f908721da38a3c6e6916a30e",
"text": "For learning in big datasets, the classification performance of ELM might be low due to input samples are not extracted features properly. To address this problem, the hierarchical extreme learning machine (H-ELM) framework was proposed based on the hierarchical learning architecture of multilayer perceptron. H-ELM composes of two parts; the first is the unsupervised multilayer encoding part and the second part is the supervised feature classification part. H-ELM can give higher accuracy rate than of the traditional ELM. However, it still has to enhance its classification performance. Therefore, this paper proposes a new method namely as the extending hierarchical extreme learning machine (EH-ELM). For the extended supervisor part of EH-ELM, we have got an idea from the two-layers extreme learning machine. To evaluate the performance of EH-ELM, three different image datasets; Semeion, MNIST, and NORB, were studied. The experimental results show that EH-ELM achieves better performance than of H-ELM and the other multi-layer framework.",
"title": ""
},
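The building block behind H-ELM and the proposed EH-ELM is the basic extreme learning machine: random, fixed input weights and a closed-form least-squares solution for the output weights. The sketch below shows only this single-hidden-layer baseline, not the unsupervised multilayer encoding stage; the regularization constant and the sigmoid activation are illustrative choices.

```python
import numpy as np

class ELM:
    """Basic single-hidden-layer extreme learning machine: random input
    weights stay fixed; only the output weights are solved in closed form."""
    def __init__(self, n_inputs, n_hidden, n_outputs, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 1, (n_inputs, n_hidden))
        self.b = rng.normal(0, 1, n_hidden)
        self.beta = np.zeros((n_hidden, n_outputs))
        self.reg = reg

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid features

    def fit(self, X, Y):
        H = self._hidden(X)
        # Regularized least squares: beta = (H^T H + reg*I)^-1 H^T Y
        A = H.T @ H + self.reg * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

For classification, Y would hold one-hot label vectors and the predicted class would be the argmax of predict(X) along the output dimension.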
{
"docid": "dfc51ea36992f8afccfbf625e3016054",
"text": "Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.",
"title": ""
},
{
"docid": "3f95493016925d4f4a8a0d0a1bc8dc9d",
"text": "A consequent pole, dual rotor, axial flux vernier permanent magnet (VPM) machine is developed to reduce magnet usage and increase torque density. Its end winding length is much shorter than that of regular VPM machines due to its toroidal winding configuration. The configurations and features of the proposed machine are discussed. Benefited from its vernier and consequent pole structure, this new machine exhibits much higher back-EMF and torque density than that of a regular dual rotor axial flux machine, while the magnet usage is halved. The influence of main design parameters, such as slot opening, ratio of inner to outer stator diameter, magnet thickness etc., on torque performance is analyzed based on the quasi-3-dimensional (quasi-3D) finite element analysis (FEA). The analyzing results are validated by real 3D FEA.",
"title": ""
},
{
"docid": "43baeb87f1798d52399ba8c78ffa7fef",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
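The event-study measure described above is commonly computed with a market model: regress the firm's returns on market returns over an estimation window, then treat deviations from the fitted relation in the event window as abnormal returns and cumulate them. The sketch below is a generic illustration only; the window lengths and the synthetic return series are made up for the example.

```python
import numpy as np

def car_market_model(stock_ret, market_ret, est_slice, event_slice):
    """Market-model event study on aligned daily return arrays.
    est_slice: indices of the estimation window; event_slice: event window."""
    r_est, m_est = stock_ret[est_slice], market_ret[est_slice]
    beta, alpha = np.polyfit(m_est, r_est, 1)        # OLS fit: r = alpha + beta*m
    expected = alpha + beta * market_ret[event_slice]
    abnormal = stock_ret[event_slice] - expected     # abnormal returns (AR)
    return abnormal, abnormal.cumsum()               # AR and CAR over the window

# Example with synthetic returns (purely illustrative numbers):
rng = np.random.default_rng(1)
m = rng.normal(0, 0.01, 250)
r = 0.0002 + 1.1 * m + rng.normal(0, 0.01, 250)
ar, car = car_market_model(r, m, slice(0, 200), slice(200, 250))
```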
{
"docid": "e456ab6399ad84b575737d2a91597fdc",
"text": "In the last two decades, number of Higher Education Institutions (HEI) grows rapidly in India. Since most of the institutions are opened in private mode therefore, a cut throat competition rises among these institutions while attracting the student to got admission. This is the reason for institutions to focus on the strength of students not on the quality of education. This paper presents a data mining application to generate predictive models for engineering student’s dropout management. Given new records of incoming students, the predictive model can produce short accurate prediction list identifying students who tend to need the support from the student dropout program most. The results show that the machine learning algorithm is able to establish effective predictive model from the existing student dropout data. Keywords– Data Mining, Machine Learning Algorithms, Dropout Management and Predictive Models",
"title": ""
},
{
"docid": "4ee62d81dcdf6e1dc9b06757668e0fc8",
"text": "The frequent and protracted use of video games with serious personal, family and social consequences is no longer just a pleasant pastime and could lead to mental and physical health problems. Although there is no official recognition of video game addiction on the Internet as a mild mental health disorder, further scientific research is needed.",
"title": ""
},
{
"docid": "bc018ef7cbcf7fc032fe8556016d08b1",
"text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.",
"title": ""
},
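The conventional LBP operator that JLBP extends compares each pixel with its eight neighbours and packs the comparison bits into a code; the texture descriptor is then a histogram of these codes. The sketch below implements only the single-scale 8-neighbour, radius-1 baseline, not the joint-scale fusion or the CLBP-style completion described in the passage.

```python
import numpy as np

def lbp_8_1(img):
    """Classic 8-neighbour local binary pattern (radius 1) for a 2D grayscale
    image; returns the per-pixel LBP code for the interior pixels."""
    c = img[1:-1, 1:-1]
    # Neighbours enumerated clockwise starting from the top-left pixel.
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

# The texture descriptor is then the normalized histogram of the codes:
# hist = np.bincount(codes.ravel(), minlength=256) / codes.size
```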
{
"docid": "089808010a2925a7eaca71736fbabcaf",
"text": "In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of images, the global motion can be described by independent motion models. On the other hand, in a sequence there exist as many as \u000e pairwise relative motion constraints that can be solve for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fitting a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (ie. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.",
"title": ""
},
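The idea of linearly fitting a consistent global motion model to redundant pairwise estimates can be illustrated in its simplest form with translations only: each measured relative translation contributes a linear equation t_j - t_i = measurement, and all equations are solved jointly by least squares with the first frame pinned to fix the gauge. The paper's actual formulation works with full motion models and epipolar constraints; this is only a toy analogue.

```python
import numpy as np

def global_from_pairwise(n_frames, pairwise):
    """Fit consistent global translations t_0..t_{n-1} (with t_0 fixed at 0)
    from redundant pairwise measurements by linear least squares.
    `pairwise` maps frame index pairs (i, j) to measured t_j - t_i (3-vectors)."""
    rows, rhs = [], []
    for (i, j), rel in pairwise.items():
        row = np.zeros((3, 3 * n_frames))
        row[:, 3 * j:3 * j + 3] = np.eye(3)
        row[:, 3 * i:3 * i + 3] = -np.eye(3)
        rows.append(row)
        rhs.append(np.asarray(rel, dtype=float))
    # Gauge fixing: pin the first frame at the origin.
    anchor = np.zeros((3, 3 * n_frames)); anchor[:, :3] = np.eye(3)
    A = np.vstack(rows + [anchor])
    b = np.concatenate([np.concatenate(rhs), np.zeros(3)])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t.reshape(n_frames, 3)
```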
{
"docid": "3327a70849d7331bb1db01d99a3d0000",
"text": "Queueing network models have proved to be cost effectwe tools for analyzing modern computer systems. This tutorial paper presents the basic results using the operational approach, a framework which allows the analyst to test whether each assumption is met in a given system. The early sections describe the nature of queueing network models and their apphcations for calculating and predicting performance quantitms The basic performance quantities--such as utilizations, mean queue lengths, and mean response tunes--are defined, and operatmnal relationships among them are derwed Following this, the concept of job flow balance is introduced and used to study asymptotic throughputs and response tunes. The concepts of state transition balance, one-step behavior, and homogeneity are then used to relate the proportions of time that each system state is occupied to the parameters of job demand and to dewce charactenstms Efficmnt methods for computing basic performance quantities are also described. Finally the concept of decomposition is used to stmphfy analyses by replacing subsystems with equivalent devices. All concepts are illustrated liberally with examples",
"title": ""
},
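The operational relationships mentioned in the passage reduce to a few simple formulas, e.g. the utilization law, Little's law and the interactive response time law; a direct transcription, using standard operational-analysis symbols, is shown below.

```python
def utilization_law(throughput, service_demand):
    """U = X * S: device utilization from throughput and mean service demand."""
    return throughput * service_demand

def littles_law_population(throughput, response_time):
    """N = X * R: mean number in system from throughput and mean response time."""
    return throughput * response_time

def interactive_response_time(n_users, throughput, think_time):
    """R = N / X - Z: response time in an interactive system with think time Z."""
    return n_users / throughput - think_time

# Example: a device serving 20 jobs/s at 0.03 s of service demand per job is
# 60% utilized; 20 jobs/s with a 0.5 s response time implies 10 jobs in system.
print(utilization_law(20, 0.03), littles_law_population(20, 0.5))
```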
{
"docid": "100c62f22feea14ac54c21408432c371",
"text": "Modern approach to the FOREX currency exchange market requires support from the computer algorithms to manage huge volumes of the transactions and to find opportunities in a vast number of currency pairs traded daily. There are many well known techniques used by market participants on both FOREX and stock-exchange markets (i.e. Fundamental and technical analysis) but nowadays AI based techniques seem to play key role in the automated transaction and decision supporting systems. This paper presents the comprehensive analysis over Feed Forward Multilayer Perceptron (ANN) parameters and their impact to accurately forecast FOREX trend of the selected currency pair. The goal of this paper is to provide information on how to construct an ANN with particular respect to its parameters and training method to obtain the best possible forecasting capabilities. The ANN parameters investigated in this paper include: number of hidden layers, number of neurons in hidden layers, use of constant/bias neurons, activation functions, but also reviews the impact of the training methods in the process of the creating reliable and valuable ANN, useful to predict the market trends. The experimental part has been performed on the historical data of the EUR/USD pair.",
"title": ""
},
{
"docid": "49575576bc5a0b949c81b0275cbc5f41",
"text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.",
"title": ""
},
{
"docid": "b21135f6c627d7dfd95ad68c9fc9cc48",
"text": "New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than \\'18just' a mother. We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.",
"title": ""
},
{
"docid": "34c9a0b4f4fdf3d4ef0fbb97e750754b",
"text": "Plants are affected by complex genome×environment×management interactions which determine phenotypic plasticity as a result of the variability of genetic components. Whereas great advances have been made in the cost-efficient and high-throughput analyses of genetic information and non-invasive phenotyping, the large-scale analyses of the underlying physiological mechanisms lag behind. The external phenotype is determined by the sum of the complex interactions of metabolic pathways and intracellular regulatory networks that is reflected in an internal, physiological, and biochemical phenotype. These various scales of dynamic physiological responses need to be considered, and genotyping and external phenotyping should be linked to the physiology at the cellular and tissue level. A high-dimensional physiological phenotyping across scales is needed that integrates the precise characterization of the internal phenotype into high-throughput phenotyping of whole plants and canopies. By this means, complex traits can be broken down into individual components of physiological traits. Since the higher resolution of physiological phenotyping by 'wet chemistry' is inherently limited in throughput, high-throughput non-invasive phenotyping needs to be validated and verified across scales to be used as proxy for the underlying processes. Armed with this interdisciplinary and multidimensional phenomics approach, plant physiology, non-invasive phenotyping, and functional genomics will complement each other, ultimately enabling the in silico assessment of responses under defined environments with advanced crop models. This will allow generation of robust physiological predictors also for complex traits to bridge the knowledge gap between genotype and phenotype for applications in breeding, precision farming, and basic research.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "26ee1e5770a77d030b6230b8eef7e644",
"text": "We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the handengineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.",
"title": ""
},
{
"docid": "75233d6d94fec1f43fa02e8043470d4d",
"text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.",
"title": ""
},
{
"docid": "3d95e2db34f0b1f999833946a173de3d",
"text": "Due to the rapid development of mobile social networks, mobile big data play an important role in providing mobile social users with various mobile services. However, as mobile big data have inherent properties, current MSNs face a challenge to provide mobile social user with a satisfactory quality of experience. Therefore, in this article, we propose a novel framework to deliver mobile big data over content- centric mobile social networks. At first, the characteristics and challenges of mobile big data are studied. Then the content-centric network architecture to deliver mobile big data in MSNs is presented, where each datum consists of interest packets and data packets, respectively. Next, how to select the agent node to forward interest packets and the relay node to transmit data packets are given by defining priorities of interest packets and data packets. Finally, simulation results show the performance of our framework with varied parameters.",
"title": ""
},
{
"docid": "7eed84f959268599e1b724b0752f6aa5",
"text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.",
"title": ""
},
{
"docid": "c699fc9a25183e998aa5cdebac1c0a43",
"text": "DNN-based cross-modal retrieval is a research hotspot to retrieve across different modalities as image and text, but existing methods often face the challenge of insufficient cross-modal training data. In single-modal scenario, similar problem is usually relieved by transferring knowledge from largescale auxiliary datasets (as ImageNet). Knowledge from such single-modal datasets is also very useful for cross-modal retrieval, which can provide rich general semantic information that can be shared across different modalities. However, it is challenging to transfer useful knowledge from single-modal (as image) source domain to cross-modal (as image/text) target domain. Knowledge in source domain cannot be directly transferred to both two different modalities in target domain, and the inherent cross-modal correlation contained in target domain provides key hints for cross-modal retrieval which should be preserved during transfer process. This paper proposes Cross-modal Hybrid Transfer Network (CHTN) with two subnetworks: Modalsharing transfer subnetwork utilizes the modality in both source and target domains as a bridge, for transferring knowledge to both two modalities simultaneously; Layer-sharing correlation subnetwork preserves the inherent cross-modal semantic correlation to further adapt to cross-modal retrieval task. Cross-modal data can be converted to common representation by CHTN for retrieval, and comprehensive experiments on 3 datasets show its effectiveness.",
"title": ""
},
{
"docid": "5ab8a8f4991f7c701c51e32de7f97b36",
"text": "Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data. In particular, a special type of DNNs, called convolutional neural networks (CNNs) have recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generate significant classification improvement over previously published results on a standard dataset called MSTAR.",
"title": ""
}
] | scidocsrr |
b10bd07f3a3c5cb0ff56d279dac00f02 | Modelling IT projects success with Fuzzy Cognitive Maps | [
{
"docid": "447c36d34216b8cb890776248d9cc010",
"text": "Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.",
"title": ""
}
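Causal propagation on an FCM can be sketched as repeated application of the causal weight matrix to the concept activation vector, squashed by a threshold or sigmoid function until the map settles. The weight matrix, the sigmoid steepness and the three example concepts below are purely illustrative; they are not taken from the cited paper.

```python
import numpy as np

def fcm_simulate(weights, initial_state, n_steps=50, tol=1e-5, lam=1.0):
    """Iterate a fuzzy cognitive map: A(t+1) = f(A(t) @ W) with a sigmoid
    squashing function. weights[i, j] is the causal influence of concept i
    on concept j, in [-1, 1]. Stops early once the state converges."""
    state = np.asarray(initial_state, dtype=float)
    for _ in range(n_steps):
        nxt = 1.0 / (1.0 + np.exp(-lam * (state @ weights)))
        if np.max(np.abs(nxt - state)) < tol:
            return nxt
        state = nxt
    return state

# Three illustrative concepts (e.g. budget pressure, team skill, project success):
W = np.array([[0.0, -0.3, -0.6],
              [0.0,  0.0,  0.7],
              [0.0,  0.0,  0.0]])
print(fcm_simulate(W, [0.9, 0.5, 0.5]))
```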
] | [
{
"docid": "347509d68f6efd4da747a7a3e704a9a2",
"text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.",
"title": ""
},
{
"docid": "5b0eaf636d6d8cf0523e3f00290b780f",
"text": "Toward materializing the recently identified potential of cognitive neuroscience for IS research (Dimoka, Pavlou and Davis 2007), this paper demonstrates how functional neuroimaging tools can enhance our understanding of IS theories. Specifically, this study aims to uncover the neural mechanisms that underlie technology adoption by identifying the brain areas activated when users interact with websites that differ on their level of usefulness and ease of use. Besides localizing the neural correlates of the TAM constructs, this study helps understand their nature and dimensionality, as well as uncover hidden processes associated with intentions to use a system. The study also identifies certain technological antecedents of the TAM constructs, and shows that the brain activations associated with perceived usefulness and perceived ease of use predict selfreported intentions to use a system. The paper concludes by discussing the study’s implications for underscoring the potential of functional neuroimaging for IS research and the TAM literature.",
"title": ""
},
{
"docid": "7f070d85f4680a2b88d3b530dff0cfc5",
"text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.",
"title": ""
},
{
"docid": "a33f862d0b7dfde7b9f18aa193db9acf",
"text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor [email protected] Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). 
Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. 
Phytoextraction Phytoextraction is also called phytoabsorption or phytoaccumulation, in this technique heavy metals are removed by up taking through root form the water and soil environment, and accumulated into the shoot part (Rafati et al., 2011). Phytostabilisation Phytostabilisation is also known as phytoimmobilization. In this technique different type of plants are used for stabilization the contaminants from the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced. So, this technique is help to avoiding their movement into food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, Phytostabilisation is the technique by which movement of heavy metals can be stop but its not permanent solution to remove the contamination from the soil. Basically, phytostabilisation is the management approach for inactivating the potential of toxic heavy metals form the soil environment contaminants (Vangronsveld et al., 2009).",
"title": ""
},
{
"docid": "d0253bb3efe714e6a34e8dd5fc7dcf81",
"text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.",
"title": ""
},
{
"docid": "73267467deec2701d6628a0d3572132e",
"text": "Neuromyelitis optica (NMO) is an inflammatory CNS syndrome distinct from multiple sclerosis (MS) that is associated with serum aquaporin-4 immunoglobulin G antibodies (AQP4-IgG). Prior NMO diagnostic criteria required optic nerve and spinal cord involvement but more restricted or more extensive CNS involvement may occur. The International Panel for NMO Diagnosis (IPND) was convened to develop revised diagnostic criteria using systematic literature reviews and electronic surveys to facilitate consensus. The new nomenclature defines the unifying term NMO spectrum disorders (NMOSD), which is stratified further by serologic testing (NMOSD with or without AQP4-IgG). The core clinical characteristics required for patients with NMOSD with AQP4-IgG include clinical syndromes or MRI findings related to optic nerve, spinal cord, area postrema, other brainstem, diencephalic, or cerebral presentations. More stringent clinical criteria, with additional neuroimaging findings, are required for diagnosis of NMOSD without AQP4IgG or when serologic testing is unavailable. The IPND also proposed validation strategies and achieved consensus on pediatric NMOSD diagnosis and the concepts of monophasic NMOSD and opticospinal MS. Neurology® 2015;85:1–13 GLOSSARY ADEM 5 acute disseminated encephalomyelitis; AQP4 5 aquaporin-4; IgG 5 immunoglobulin G; IPND 5 International Panel for NMO Diagnosis; LETM 5 longitudinally extensive transverse myelitis lesions; MOG 5 myelin oligodendrocyte glycoprotein; MS 5 multiple sclerosis; NMO 5 neuromyelitis optica; NMOSD 5 neuromyelitis optica spectrum disorders; SLE 5 systemic lupus erythematosus; SS 5 Sjögren syndrome. Neuromyelitis optica (NMO) is an inflammatory CNS disorder distinct from multiple sclerosis (MS). It became known as Devic disease following a seminal 1894 report. Traditionally, NMO was considered a monophasic disorder consisting of simultaneous bilateral optic neuritis and transverse myelitis but relapsing cases were described in the 20th century. MRI revealed normal brain scans and$3 vertebral segment longitudinally extensive transverse myelitis lesions (LETM) in NMO. The nosology of NMO, especially whether it represented a topographically restricted form of MS, remained controversial. A major advance was the discovery that most patients with NMO have detectable serum antibodies that target the water channel aquaporin-4 (AQP4–immunoglobulin G [IgG]), are highly specific for clinically diagnosed NMO, and have pathogenic potential. In 2006, AQP4-IgG serology was incorporated into revised NMO diagnostic criteria that relaxed clinical From the Departments of Neurology (D.M.W.) 
and Library Services (K.E.W.), Mayo Clinic, Scottsdale, AZ; the Children’s Hospital of Philadelphia (B.B.), PA; the Departments of Neurology and Ophthalmology (J.L.B.), University of Colorado Denver, Aurora; the Service de Neurologie (P.C.), Centre Hospitalier Universitaire de Fort de France, Fort-de-France, Martinique; Department of Neurology (W.C.), Sir Charles Gairdner Hospital, Perth, Australia; the Department of Neurology (T.C.), Massachusetts General Hospital, Boston; the Department of Neurology (J.d.S.), Strasbourg University, France; the Department of Multiple Sclerosis Therapeutics (K.F.), Tohoku University Graduate School of Medicine, Sendai, Japan; the Departments of Neurology and Neurotherapeutics (B.G.), University of Texas Southwestern Medical Center, Dallas; The Walton Centre NHS Trust (A.J.), Liverpool, UK; the Molecular Neuroimmunology Group, Department of Neurology (S.J.), University Hospital Heidelberg, Germany; the Center for Multiple Sclerosis Investigation (M.L.-P.), Federal University of Minas Gerais Medical School, Belo Horizonte, Brazil; the Department of Neurology (M.L.), Johns Hopkins University, Baltimore, MD; Portland VA Medical Center and Oregon Health and Sciences University (J.H.S.), Portland; the Department of Neurology (S.T.), National Pediatric Hospital Dr. Juan P. Garrahan, Buenos Aires, Argentina; the Department of Medicine (A.L.T.), University of British Columbia, Vancouver, Canada; Nuffield Department of Clinical Neurosciences (P.W.), University of Oxford, UK; and the Department of Neurology (B.G.W.), Mayo Clinic, Rochester, MN. Go to Neurology.org for full disclosures. Funding information and disclosures deemed relevant by the authors, if any, are provided at the end of the article. The Article Processing Charge was paid by the Guthy-Jackson Charitable Foundation. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited. The work cannot be changed in any way or used commercially. © 2015 American Academy of Neurology 1 a 2015 American Academy of Neurology. Unauthorized reproduction of this article is prohibited. Published Ahead of Print on June 19, 2015 as 10.1212/WNL.0000000000001729",
"title": ""
},
{
"docid": "6a1073b72ef20fd59e705400dbdcc868",
"text": "In today’s world, there is a continuous global need for more energy which, at the same time, has to be cleaner than the energy produced from the traditional generation technologies. This need has facilitated the increasing penetration of distributed generation (DG) technologies and primarily of renewable energy sources (RES). The extensive use of such energy sources in today’s electricity networks can indisputably minimize the threat of global warming and climate change. However, the power output of these energy sources is not as reliable and as easy to adjust to changing demand cycles as the output from the traditional power sources. This disadvantage can only be effectively overcome by the storing of the excess power produced by DG-RES. Therefore, in order for these new sources to become completely reliable as primary sources of energy, energy storage is a crucial factor. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Most of the technologies are in use today while others are still under intensive research and development. A comparison between the various technologies is presented in terms of the most important technological characteristics of each technology. The comparison shows that each storage technology is different in terms of its ideal network application environment and energy storage scale. This means that in order to achieve optimum results, the unique network environment and the specifications of the storage device have to be studied thoroughly, before a decision for the ideal storage technology to be selected is taken. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e67bb4c784b89b2fee1ab7687b545683",
"text": "Many people have a strong intuition that there is something morallyobjectionable about playing violent video games, particularly withincreases in the number of people who are playing them and the games'alleged contribution to some highly publicized crimes. In this paper,I use the framework of utilitarian, deontological, and virtue ethicaltheories to analyze the possibility that there might be some philosophicalfoundation for these intuitions. I raise the broader question of whetheror not participating in authentic simulations of immoral acts in generalis wrong. I argue that neither the utilitarian, nor the Kantian hassubstantial objections to violent game playing, although they offersome important insights into playing games in general and what it ismorally to be a ``good sport.'' The Aristotelian, however, has a plausibleand intuitive way to protest participation in authentic simulations ofviolent acts in terms of character: engaging in simulated immoral actserodes one's character and makes it more difficult for one to live afulfilled eudaimonic life.",
"title": ""
},
{
"docid": "b36cc742445db810d40c884a90e2cf42",
"text": "Telecommunication sector generates a huge amount of data due to increasing number of subscribers, rapidly renewable technologies; data based applications and other value added service. This data can be usefully mined for churn analysis and prediction. Significant research had been undertaken by researchers worldwide to understand the data mining practices that can be used for predicting customer churn. This paper provides a review of around 100 recent journal articles starting from year 2000 to present the various data mining techniques used in multiple customer based churn models. It then summarizes the existing telecom literature by highlighting the sample size used, churn variables employed and the findings of different DM techniques. Finally, we list the most popular techniques for churn prediction in telecom as decision trees, regression analysis and clustering, thereby providing a roadmap to new researchers to build upon novel churn management models.",
"title": ""
},
{
"docid": "6bf38b6decda962ea03ab429f5fbde4f",
"text": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "e16bf4ab7c56b6827369f19afb2d4744",
"text": "In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice. Recently, a novel architecture, namely feedforward sequential memory networks (FSMN), provides a non-recurrent architecture to model long term dependency in sequential data and has achieved better performance over RNNs on acoustic modeling and language modeling tasks. In this work, we propose a compact feedforward sequential memory networks (cFSMN) by combining FSMN with low-rank matrix factorization. We also make a slight modification to the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard task, the proposed new cFSMN structures can reduce the model size by 60% and speed up the learning by more than 7 times while the models still significantly outperform the popular bidirection LSTMs for both frame-level cross-entropy (CE) criterion based training and MMI based sequence training.",
"title": ""
},
{
"docid": "fbcdb3d565519b47922394dc9d84985f",
"text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.",
"title": ""
},
{
"docid": "b8322d65e61be7fb252b2e418df85d3e",
"text": "the od. cted ly genof 997 Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "b4bd19c2285199e280cb41e733ec5498",
"text": "In the past few years, mobile augmented reality (AR) has attracted a great deal of attention. It presents us a live, direct or indirect view of a real-world environment whose elements are augmented (or supplemented) by computer-generated sensory inputs such as sound, video, graphics or GPS data. Also, deep learning has the potential to improve the performance of current AR systems. In this paper, we propose a distributed mobile logo detection framework. Our system consists of mobile AR devices and a back-end server. Mobile AR devices can capture real-time videos and locally decide which frame should be sent to the back-end server for logo detection. The server schedules all detection jobs to minimise the maximum latency. We implement our system on the Google Nexus 5 and a desktop with a wireless network interface. Evaluation results show that our system can detect the view change activity with an accuracy of 95:7% and successfully process 40 image processing jobs before deadline. ARTICLE HISTORY Received 6 June 2018 Accepted 30 June 2018",
"title": ""
},
{
"docid": "cc2a7d6ac63f12b29a6d30f20b5547be",
"text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk ystem in a desktop setting and are currently using it to build an intelligent home nvironment.",
"title": ""
},
{
"docid": "5b0b8da7faa91343bad6296fd7cb181f",
"text": "Transportation research relies heavily on a variety of data. From sensors to surveys, data supports dayto-day operations as well as long-term planning and decision-making. The challenges that arise due to the volume and variety of data that are found in transportation research can be effectively addressed by ontologies. This opportunity has already been recognized – there are a number of existing transportation ontologies, however the relationship between them is unclear. The goal of this work is to provide an overview of the opportunities for ontologies in transportation research and operation, and to present a survey of existing transportation ontologies to serve two purposes: (1) to provide a resource for the transportation research community to aid in understanding (and potentially selecting between) existing transportation ontologies; and (2) to identify future work for the development of transportation ontologies, by identifying areas that may be lacking.",
"title": ""
},
{
"docid": "96f616c7a821c1f74fc77e5649483343",
"text": "Study of the forecasting models using large scale microblog discussions and the search behavior data can provide a good insight for better understanding the market movements. In this work we collected a dataset of 2 million tweets and search volume index (SVI from Google) for a period of June 2010 to September 2011. We model a set of comprehensive causative relationships over this dataset for various market securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100), commodity markets (oil and gold) and Euro Forex rates. We also investigate the lagged and statistically causative relations of Twitter sentiments developed during active trading days and market inactive days in combination with the search behavior of public before any change in the prices/indices. Our results show extent of lagged significance with high correlation value upto 0.82 between search volumes and gold price in USD. We find weekly accuracy in direction (up and down prediction) uptil 94.3% for DJIA and 90% for NASDAQ-100 with significant reduction in mean average percentage error for all the forecasting models.",
"title": ""
},
{
"docid": "a5168d6ca63300f26b7388f67d10cb3c",
"text": "In recent years, the improvement of wireless protocols, the development of cloud services and the lower cost of hardware have started a new era for smart homes. One such enabling technologies is fog computing, which extends cloud computing to the edge of a network allowing for developing novel Internet of Things (IoT) applications and services. Under the IoT fog computing paradigm, IoT gateways are usually utilized to exchange messages with IoT nodes and a cloud. WiFi and ZigBee stand out as preferred communication technologies for smart homes. WiFi has become very popular, but it has a limited application due to its high energy consumption and the lack of standard mesh networking capabilities for low-power devices. For such reasons, ZigBee was selected by many manufacturers for developing wireless home automation devices. As a consequence, these technologies may coexist in the 2.4 GHz band, which leads to collisions, lower speed rates and increased communications latencies. This article presents ZiWi, a distributed fog computing Home Automation System (HAS) that allows for carrying out seamless communications among ZigBee and WiFi devices. This approach diverges from traditional home automation systems, which often rely on expensive central controllers. In addition, to ease the platform's building process, whenever possible, the system makes use of open-source software (all the code of the nodes is available on GitHub) and Commercial Off-The-Shelf (COTS) hardware. The initial results, which were obtained in a number of representative home scenarios, show that the developed fog services respond several times faster than the evaluated cloud services, and that cross-interference has to be taken seriously to prevent collisions. In addition, the current consumption of ZiWi's nodes was measured, showing the impact of encryption mechanisms.",
"title": ""
},
{
"docid": "52d2ff16f6974af4643a15440ae09fec",
"text": "The adoption of Course Management Systems (CMSs) for web-based instruction continues to increase in today’s higher education. A CMS is a software program or integrated platform that contains a series of web-based tools to support a number of activities and course management procedures (Severson, 2004). Examples of Course Management Systems are Blackboard, WebCT, eCollege, Moodle, Desire2Learn, Angel, etc. An argument for the adoption of elearning environments using CMSs is the flexibility of such environments when reaching out to potential learners in remote areas where brick and mortar institutions are non-existent. It is also believed that e-learning environments can have potential added learning benefits and can improve students’ and educators’ self-regulation skills, in particular their metacognitive skills. In spite of this potential to improve learning by means of using a CMS for the delivery of e-learning, the features and functionalities that have been built into these systems are often underutilized. As a consequence, the created learning environments in CMSs do not adequately scaffold learners to improve their selfregulation skills. In order to support the improvement of both the learners’ subject matter knowledge and learning strategy application, the e-learning environments within CMSs should be designed to address learners’ diversity in terms of learning styles, prior knowledge, culture, and self-regulation skills. Self-regulative learners are learners who can demonstrate ‘personal initiative, perseverance and adaptive skill in pursuing learning’ (Zimmerman, 2002). Self-regulation requires adequate monitoring strategies and metacognitive skills. The created e-learning environments should encourage the application of learners’ metacognitive skills by prompting learners to plan, attend to relevant content, and monitor and evaluate their learning. This position paper sets out to inform policy makers, educators, researchers, and others of the importance of a metacognitive e-learning approach when designing instruction using Course Management Systems. Such a metacognitive approach will improve the utilization of CMSs to support learners on their path to self-regulation. We argue that a powerful CMS incorporates features and functionalities that can provide extensive scaffolding to learners and support them in becoming self-regulated learners. Finally, we believe that extensive training and support is essential if educators are expected to develop and implement CMSs as powerful learning tools.",
"title": ""
}
] | scidocsrr |
3044c0fae6c720c091ba4b7260555350 | OPTIMIZATION OF A WAVE CANCELLATION MULTIHULL SHIP USING CFD TOOLS | [
{
"docid": "7e1c5f17ac930b3582b4dd696785bcf5",
"text": "Four methods of analysis | a nonlinear method based on Euler's equations and three linear potential ow methods | are used to determine the optimal location of the outer hulls for a wave cancellation multihull ship that consists of a main center hull and two outer hulls. The three potential ow methods correspond to a hierarchy of simple approximations based on the Fourier-Kochin representation of ship waves and the slender-ship approximation.",
"title": ""
}
] | [
{
"docid": "316cdb1b9f67f4156931a9d2eb06145c",
"text": "Irony is an important device in human communication, both in everyday spoken conversations as well as in written texts including books, websites, chats, reviews, and Twitter messages among others. Specific cases of irony and sarcasm have been studied in different contexts but, to the best of our knowledge, only recently the first publicly available corpus including annotations about whether a text is ironic or not has been published by Filatova (2012). However, no baseline for classification of ironic or sarcastic reviews has been provided. With this paper, we aim at closing this gap. We formulate the problem as a supervised classification task and evaluate different classifiers, reaching an F1-measure of up to 74 % using logistic regression. We analyze the impact of a number of features which have been proposed in previous research as well as combinations of them.",
"title": ""
},
{
"docid": "f268718ceac79dbf8d0dcda2ea6557ca",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.06.003 ⇑ Corresponding author. E-mail addresses: [email protected] (F. Qi), gmshi@x 1 Principal corresponding author. Depth acquisition becomes inexpensive after the revolutionary invention of Kinect. For computer vision applications, depth maps captured by Kinect require additional processing to fill up missing parts. However, conventional inpainting methods for color images cannot be applied directly to depth maps as there are not enough cues to make accurate inference about scene structures. In this paper, we propose a novel fusion based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with the recently developed non-local filtering scheme. The good balance between depth and color information guarantees an accurate inpainting result. Experimental results show the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "41a0681812527ef288ac4016550e53dd",
"text": "Supervised learning using deep convolutional neural network has shown its promise in large-scale image classification task. As a building block, it is now well positioned to be part of a larger system that tackles real-life multimedia tasks. An unresolved issue is that such model is trained on a static snapshot of data. Instead, this paper positions the training as a continuous learning process as new classes of data arrive. A system with such capability is useful in practical scenarios, as it gradually expands its capacity to predict increasing number of new classes. It is also our attempt to address the more fundamental issue: a good learning system must deal with new knowledge that it is exposed to, much as how human do.\n We developed a training algorithm that grows a network not only incrementally but also hierarchically. Classes are grouped according to similarities, and self-organized into levels. The newly added capacities are divided into component models that predict coarse-grained superclasses and those return final prediction within a superclass. Importantly, all models are cloned from existing ones and can be trained in parallel. These models inherit features from existing ones and thus further speed up the learning. Our experiment points out advantages of this approach, and also yields a few important open questions.",
"title": ""
},
{
"docid": "3c891452e416c5faa3da8b6e32a57b3f",
"text": "Linear support vector machines (svms) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based svms are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel svms. We generalise this model to locally finite dimensional kernel svm.",
"title": ""
},
{
"docid": "e2988860c1e8b4aebd6c288d37d1ca4e",
"text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.",
"title": ""
},
{
"docid": "76def4ca02a25669610811881531e875",
"text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. We report the measured residual phase noise and frequency stability of the syn thesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two sig nals was σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cry oCSO may be estimated, respectively, as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ s 5.2 × 10<sup>-2</sup> × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ <; 10<sup>4</sup>s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cry oCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.",
"title": ""
},
{
"docid": "09dbfbd77307b0cd152772618c40e083",
"text": "Textbook Question Answering (TQA) [1] is a newly proposed task to answer arbitrary questions in middle school curricula, which has particular challenges to understand the long essays in additional to the images. Bilinear models [2], [3] are effective at learning high-level associations between questions and images, but are inefficient to handle the long essays. In this paper, we propose an Essay-anchor Attentive Multi-modal Bilinear pooling (EAMB), a novel method to encode the long essays into the joint space of the questions and images. The essay-anchors, embedded from the keywords, represent the essay information in a latent space. We propose a novel network architecture to pay special attention on the keywords in the questions, consequently encoding the essay information into the question features, and thus the joint space with the images. We then use the bilinear models to extract the multi-modal interactions to obtain the answers. EAMB successfully utilizes the redundancy of the pre-trained word embedding space to represent the essay-anchors. This avoids the extra learning difficulties from exploiting large network structures. Quantitative and qualitative experiments show the outperforming effects of EAMB on the TQA dataset.",
"title": ""
},
{
"docid": "c36986dd83276fe01e73a4125f99f7c0",
"text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization and with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems). KeywordsSwarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.",
"title": ""
},
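Editor's illustrative aside (not part of the cited paper or of this dataset): the abstract above describes ABC/Bees-style search as neighborhood moves mixed with random scouting. A heavily simplified sketch of that idea for function minimization is given below in Python; the function name `abc_minimize`, the parameter values, and the omission of the onlooker-probability phase are assumptions made purely for illustration.

```python
import random

def abc_minimize(f, lo, hi, dim=2, n_sources=20, limit=30, iters=200):
    # ABC-style search: each food source is a candidate solution; neighborhood
    # moves refine it, and stagnant sources are abandoned and replaced by
    # random scouting, mixing local and random search.
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(sources, key=f)
    for _ in range(iters):
        for i, s in enumerate(sources):
            # Neighborhood search: perturb one dimension toward/away from a random peer.
            j = random.randrange(n_sources)
            d = random.randrange(dim)
            cand = list(s)
            cand[d] += random.uniform(-1, 1) * (s[d] - sources[j][d])
            cand[d] = min(max(cand[d], lo), hi)
            if f(cand) < f(s):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            # Scout phase: abandon exhausted sources with a random restart.
            if trials[i] > limit:
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min([best] + sources, key=f)
    return best

sphere = lambda x: sum(v * v for v in x)
print(abc_minimize(sphere, -5.0, 5.0))  # should approach the origin
```

On the toy sphere function this typically converges near the origin; a faithful ABC would additionally weight neighborhood visits by source fitness in an onlooker phase.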
{
"docid": "4004ab452ef58403af00bcf16c34e227",
"text": "Wheel-spinning refers to a phenomenon in which a student has spent a considerable amount of time practicing a skill, yet displays little or no progress towards mastery. Wheel-spinning has been shown to be a common problem affecting a significant number of students in different tutoring systems and is negatively associated with learning. In this study, we construct a model of wheel-spinning, using generic features easily calculated from most tutoring systems. We show that for two different systems' data, the model generalizes to future students very well and can detect wheel-spinning in an early stage with high accuracy. We also refine the scope of the wheel-spinning problem in two systems using the model's predictions.",
"title": ""
},
{
"docid": "e91622bf18d268991b6e15936574bc7e",
"text": "This essay addresses the question of how participatory design (PD) researchers and practitioners can pursue commitments to social justice and democracy while retaining commitments to reflective practice, the voices of the marginal, and design experiments “in the small.” I argue that contemporary feminist utopianism has, on its own terms, confronted similar issues, and I observe that it and PD pursue similar agendas, but with complementary strengths. I thus propose a cooperative engagement between feminist utopianism and PD at the levels of theory, methodology, and on-the-ground practice. I offer an analysis of a case—an urban renewal project in Taipei, Taiwan—as a means of exploring what such a cooperative engagement might entail. I argue that feminist utopianism and PD have complementary strengths that could be united to develop and to propose alternative futures that reflect democratic values and procedures, emerging technologies and infrastructures as design materials, a commitment to marginalized voices (and the bodies that speak them), and an ambitious, even literary, imagination.",
"title": ""
},
{
"docid": "ea5a455bca9ff0dbb1996bd97d89dfe5",
"text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.",
"title": ""
},
{
"docid": "2c0b13b5a1a4c207d52e674a518bf868",
"text": "We have developed a new mutual information-based registration method for matching unlabeled point features. In contrast to earlier mutual information-based registration methods, which estimate the mutual information using image intensity information, our approach uses the point feature location information. A novel aspect of our approach is the emergence of correspondence (between the two sets of features) as a natural by-product of joint density estimation. We have applied this algorithm to the problem of geometric alignment of primate autoradiographs. We also present preliminary results on three-dimensional robust matching of sulci derived from anatomical magnetic resonance images. Finally, we present an experimental comparison between the mutual information approach and other recent approaches which explicitly parameterize feature correspondence.",
"title": ""
},
{
"docid": "d97669811124f3c6f4cef5b2a144a46c",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "5ec58d07ad4b92fa14d541c696ee48fd",
"text": "We present Jabberwocky, a social computing stack that consists of three components: a human and machine resource management system called Dormouse, a parallel programming framework for human and machine computation called ManReduce, and a high-level programming language on top of ManReduce called Dog. Dormouse is designed to enable cross-platform programming languages for social computation, so, for example, programs written for Mechanical Turk can also run on other crowdsourcing platforms. Dormouse also enables a programmer to easily combine crowdsourcing platforms or create new ones. Further, machines and people are both first-class citizens in Dormouse, allowing for natural parallelization and control flows for a broad range of data-intensive applications. And finally and importantly, Dormouse includes notions of real identity, heterogeneity, and social structure. We show that the unique properties of Dormouse enable elegant programming models for complex and useful problems, and we propose two such frameworks. ManReduce is a framework for combining human and machine computation into an intuitive parallel data flow that goes beyond existing frameworks in several important ways, such as enabling functions on arbitrary communication graphs between human and machine clusters. And Dog is a high-level procedural language written on top of ManReduce that focuses on expressivity and reuse. We explore two applications written in Dog: bootstrapping product recommendations without purchase data, and expert labeling of medical images.",
"title": ""
},
{
"docid": "1ff317c5514dfc1179ee7c474187d4e5",
"text": "The emergence and spread of antibiotic resistance among pathogenic bacteria has been a rising problem for public health in recent decades. It is becoming increasingly recognized that not only antibiotic resistance genes (ARGs) encountered in clinical pathogens are of relevance, but rather, all pathogenic, commensal as well as environmental bacteria-and also mobile genetic elements and bacteriophages-form a reservoir of ARGs (the resistome) from which pathogenic bacteria can acquire resistance via horizontal gene transfer (HGT). HGT has caused antibiotic resistance to spread from commensal and environmental species to pathogenic ones, as has been shown for some clinically important ARGs. Of the three canonical mechanisms of HGT, conjugation is thought to have the greatest influence on the dissemination of ARGs. While transformation and transduction are deemed less important, recent discoveries suggest their role may be larger than previously thought. Understanding the extent of the resistome and how its mobilization to pathogenic bacteria takes place is essential for efforts to control the dissemination of these genes. Here, we will discuss the concept of the resistome, provide examples of HGT of clinically relevant ARGs and present an overview of the current knowledge of the contributions the various HGT mechanisms make to the spread of antibiotic resistance.",
"title": ""
},
{
"docid": "8a363d7fa2bbf4b30312ca9efc2b3fa5",
"text": "The objective of the present study was to investigate whether transpedicular bone grafting as a supplement to posterior pedicle screw fixation in thoracolumbar fractures results in a stable reconstruction of the anterior column, that allows healing of the fracture without loss of correction. Posterior instrumentation using an internal fixator is a standard procedure for stabilizing the injured thoracolumbar spine. Transpedicular bone grafting was first described by Daniaux in 1986 to achieve intrabody fusion. Pedicle screw fixation with additional transpedicular fusion has remained controversial because of inconsistent reports. A retrospective single surgeon cohort study was performed. Between October 2001 and May 2007, 30 consecutive patients with 31 acute traumatic burst fractures of the thoracolumbar spine (D12-L5) were treated operatively. The mean age of the patients was 45.7 years (range: 19-78). There were 23 men and 7 women. Nineteen thoracolumbar fractures were sustained in falls from a height; the other fractures were the result of motor vehicle accidents. The vertebrae most often involved were L1 in 13 patients and L2 in 8 patients. According to the Magerl classification, 25 patients sustained Type A1, 4 Type A2 and 2 Type A3 fractures. The mean time from injury to surgery was 6 days (range 2-14 days). Two postoperative complications were observed: one superficial and one deep infection. Mean Cobb's angle improved from +7.16 degrees (SD 12.44) preoperatively to -5.48 degrees (SD 11.44) immediately after operation, with a mean loss of correction of 1.00 degrees (SD 3.04) at two years. Reconstruction of the anterior column is important to prevent loss of correction. In our experience, the use of transpedicular bone grafting has efficiently restored the anterior column and has preserved the post-operative correction of kyphosis until healing of the fracture.",
"title": ""
},
{
"docid": "780ffc42f3a9a49dc0c6dcba26be33a5",
"text": "Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term stateregularization, makes RNNs transition between a finite set of learnable states. We evaluate stateregularized RNNs on (1) regular languages for the purpose of automata extraction; (2) nonregular languages such as balanced parentheses, palindromes, and the copy task where external memory is required; and (3) real-word sequence learning tasks for sentiment analysis, visual object recognition, and language modeling. We show that state-regularization (a) simplifies the extraction of finite state automata modeling an RNN’s state transition dynamics; (b) forces RNNs to operate more like automata with external memory and less like finite state machines; (c) makes RNNs have better interpretability and explainability.",
"title": ""
},
{
"docid": "1ebb4d7a99734f19fc202971b46bf568",
"text": "Carnegie Mellon University has proposed an educationaland entertainment-based robotic lunar mission which will last two years and cover 1000km on the moon and revisit several historic sites. With the transmission of live panoramic video, participants will be provided the opportunity for interactively exploring the moon through teleoperation and telepresence. The requirement of panoramic video and telepresence demands high data rates on the order of 7.5 Mbps. This is challenging since the power available for communication is approximately 100W and occupied bandwidth is limited to less than 10 MHz. The tough environment on the moon introduces additional challenges of survivability and reliability. A communication system based on a phased array antenna, Nyquist QPSK modulation and a rate 2/3 Turbo code is presented which can satisfy requirements of continuous high data rate communication at low power and bandwidth reliably over a two year mission duration. Three ground stations with 22m parabolic antennas are required around the world to maintain continuous communication. The transmission will then be relayed via satellite to the current control station location. This paper presents an overview of the mission, and communication requirements and design.",
"title": ""
},
{
"docid": "2f7b0229fc9e126e09abe769d2b927dc",
"text": "Complex event processing has become increasingly important in modern applications, ranging from supply chain management for RFID tracking to real-time intrusion detection. The goal is to extract patterns from such event streams in order to make informed decisions in real-time. However, networking latencies and even machine failure may cause events to arrive out-of-order at the event stream processing engine. In this work, we address the problem of processing event pattern queries specified over event streams that may contain out-of-order data. First, we analyze the problems state-of-the-art event stream processing technology would experience when faced with out-of-order data arrival. We then propose a new solution of physical implementation strategies for the core stream algebra operators such as sequence scan and pattern construction, including stack- based data structures and associated purge algorithms. Optimizations for sequence scan and construction as well as state purging to minimize CPU cost and memory consumption are also introduced. Lastly, we conduct an experimental study demonstrating the effectiveness of our approach.",
"title": ""
}
] | scidocsrr |
ef23a68c8a9a134db33900965e814f8d | Analysis of Permission-based Security in Android through Policy Expert, Developer, and End User Perspectives | [
{
"docid": "948d3835e90c530c4290e18f541d5ef2",
"text": "Each time a user installs an application on their Android phone they are presented with a full screen of information describing what access they will be granting that application. This information is intended to help them make two choices: whether or not they trust that the application will not damage the security of their device and whether or not they are willing to share their information with the application, developer, and partners in question. We performed a series of semi-structured interviews in two cities to determine whether people read and understand these permissions screens, and to better understand how people perceive the implications of these decisions. We find that the permissions displays are generally viewed and read, but not understood by Android users. Alarmingly, we find that people are unaware of the security risks associated with mobile apps and believe that app marketplaces test and reject applications. In sum, users are not currently well prepared to make informed privacy and security decisions around installing applications.",
"title": ""
}
] | [
{
"docid": "8e10d20723be23d699c0c581c529ee19",
"text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.",
"title": ""
},
{
"docid": "12b855b39278c49d448fbda9aa56cacf",
"text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.",
"title": ""
},
{
"docid": "6a9738cbe28b53b3a9ef179091f05a4a",
"text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.",
"title": ""
},
{
"docid": "2c9e17d4c5bfb803ea1ff20ea85fbd10",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "755f7d663e813d7450089fc0d7058037",
"text": "This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graphs vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graphs topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.",
"title": ""
},
{
"docid": "de39f498f28cf8cfc01f851ca3582d32",
"text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains. However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.",
"title": ""
},
{
"docid": "96fd2fdc0ea4fbde407ac1f56452ca24",
"text": "EEG signals, measuring transient brain activities, can be used as a source of biometric information with potential application in high-security person recognition scenarios. However, due to the inherent nature of these signals and the process used for their acquisition, their effective preprocessing is critical for their successful utilisation. In this paper we compare the effectiveness of different wavelet-based noise removal methods and propose an EEG-based biometric identification system which combines two such de-noising methods to enhance the signal preprocessing stage. In tests using 50 subjects from a public database, the proposed new approach is shown to provide improved identification performance over alternative techniques. Another important preprocessing consideration is the segmentation of the EEG record prior to de-noising. Different segmentation approaches were investigated and the trade-off between performance and computation time is explored. Finally the paper reports on the impact of the choice of wavelet function used for feature extraction on system performance.",
"title": ""
},
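Editor's illustrative aside (not taken from the study above or from this dataset): the EEG entry compares wavelet-based noise removal methods without spelling one out. A minimal wavelet soft-thresholding denoiser of the kind such pipelines typically evaluate could be sketched with PyWavelets as below; the db4 wavelet, the decomposition level, and the universal-threshold rule are assumptions, not the paper's reported choices.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    # Generic wavelet shrinkage: decompose, soft-threshold the detail
    # coefficients with the universal threshold, then reconstruct.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

# Toy usage with a synthetic noisy trace standing in for one EEG channel.
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
print(np.mean((wavelet_denoise(noisy)[: t.size] - clean) ** 2))
```

Different wavelet families and thresholding rules trade off how much transient EEG structure survives; comparing such variants is exactly the kind of study the abstract describes.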
{
"docid": "4e86e02be77fe4e10c199efa1e9456c4",
"text": "This paper presents EsdRank, a new technique for improving ranking using external semi-structured data such as controlled vocabularies and knowledge bases. EsdRank treats vocabularies, terms and entities from external data, as objects connecting query and documents. Evidence used to link query to objects, and to rank documents are incorporated as features between query-object and object-document correspondingly. A latent listwise learning to rank algorithm, Latent-ListMLE, models the objects as latent space between query and documents, and learns how to handle all evidence in a unified procedure from document relevance judgments. EsdRank is tested in two scenarios: Using a knowledge base for web search, and using a controlled vocabulary for medical search. Experiments on TREC Web Track and OHSUMED data show significant improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "5d05addd1cac2ea4ca5008950a21bd06",
"text": "We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.",
"title": ""
},
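Editor's illustrative aside (not code from the paper above): that entry summarizes Stein variational gradient descent (SVGD) as iteratively transporting particles along a kernelized functional gradient of the KL divergence. A minimal NumPy sketch of the standard SVGD update with an RBF kernel on a 1-D Gaussian toy target is given below; the bandwidth, step size, and iteration count are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(x, h):
    # Pairwise differences and RBF kernel matrix for 1-D particles.
    diff = x[:, None] - x[None, :]
    k = np.exp(-diff**2 / (2 * h**2))
    # Gradient of k(x_j, x_i) with respect to x_j.
    grad_k = -diff / h**2 * k
    return k, grad_k

def svgd_step(x, grad_logp, h=0.5, step=0.1):
    # One SVGD update: phi_i = (1/n) sum_j [k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i)].
    n = len(x)
    k, grad_k = rbf_kernel(x, h)
    phi = (k @ grad_logp(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# Toy target: standard normal, so grad log p(x) = -x.
grad_logp = lambda x: -x
particles = np.random.uniform(-5, 5, size=50)
for _ in range(500):
    particles = svgd_step(particles, grad_logp)
print(particles.mean(), particles.std())  # should approach 0 and 1
```

The attraction term pulls particles toward high-density regions while the kernel-gradient term repels them from each other, which is what keeps the particle set spread out instead of collapsing to the mode.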
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "c01dd2ae90781291cb5915957bd42ae1",
"text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.",
"title": ""
},
{
"docid": "4a70c88a031195a5593aaa403b9681cd",
"text": "In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks (GANs). Particularly, how these techniques help to improve each other. To this end, we analyze the limitation of adversarial training as the defense method, starting from questioning how well the robustness of a model can generalize. Then, we successfully improve the generalizability via data augmentation by the “fake” images sampled from generative adversarial network. After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free. We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work. Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker in a single network. After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively. In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry etla., 2017), while our generator achieves competitive performance compared with SN-GAN (Miyato and Koyama, 2018). Source code is publicly available online at https://github.com/anonymous.",
"title": ""
},
{
"docid": "813106ce10d23483ef8afa56857277a2",
"text": "Reinforced concrete walls are commonly used as the primary lateral force-resisting system for tall buildings. As the tools for conducting nonlinear response history analysis have improved and with the advent of performancebased seismic design, reinforced concrete walls and core walls are often employed as the only lateral forceresisting system. Proper modelling of the load versus deformation behaviour of reinforced concrete walls and link beams is essential to accurately predict important response quantities. Given this critical need, an overview of modelling approaches appropriate to capture the lateral load responses of both slender and stout reinforced concrete walls, as well as link beams, is presented. Modelling of both fl exural and shear responses is addressed, as well as the potential impact of coupled fl exure–shear behaviour. Model results are compared with experimental results to assess the ability of common modelling approaches to accurately predict both global and local experimental responses. Based on the fi ndings, specifi c recommendations are made for general modelling issues, limiting material strains for combined bending and axial load, and shear backbone relations. Copyright © 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "2dee247b24afc7ddba44b312c0832bc1",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "49517920ddecf10a384dc3e98e39459b",
"text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.",
"title": ""
},
{
"docid": "cc2579bb621338908cacc7808cb1f851",
"text": "This paper presents a comprehensive analysis and comparison of air-cored axial-flux permanent-magnet machines with different types of coil configurations. Although coil factor is particularly more sensitive to coil-band width and coil pitch in air-cored machines than conventional slotted machines, remarkably no comprehensive analytical equations exist. Here, new formulas are derived to compare the coil factor of two common concentrated-coil stator winding types. Then, respective coil factors for the winding types are used to determine the torque characteristics and, from that, the optimized coil configurations. Three-dimensional finite-element analysis (FEA) models are built to verify the analytical models. Furthermore, overlapping and wave windings are investigated and compared with the concentrated-coil types. Finally, a prototype machine is designed and built for experimental validations. The results show that the concentrated-coil type with constant coil pitch is superior to all other coil types under study.",
"title": ""
},
{
"docid": "3902afc560de6f0b028315977bc55976",
"text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. This provides huge advantages to the implementation of this project, especially the scope of the project will be focus in urban areas where the level of congestion is bad.",
"title": ""
},
{
"docid": "6831c633bf7359b8d22296b52a9a60b8",
"text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.",
"title": ""
},
{
"docid": "64c06bffe4aeff54fbae9d87370e552c",
"text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.",
"title": ""
},
{
"docid": "a64ae2e6e72b9e38c700ddd62b4f6bf3",
"text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases-the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral though because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.",
"title": ""
}
] | scidocsrr |
a7cce9cae6e35e04b891a7e3a340ab2b | Sex & the City . How Emotional Factors Affect Financial Choices | [
{
"docid": "4fa7ee44cdc4b0cd439723e9600131bd",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "9a1201d68018fce5ce413511dc64e8b7",
"text": "In the health sciences it is quite common to carry out studies designed to determine the influence of one or more variables upon a given response variable. When this response variable is numerical, simple or multiple regression techniques are used, depending on the case. If the response variable is a qualitative variable (dichotomic or polychotomic), as for example the presence or absence of a disease, linear regression methodology is not applicable, and simple or multinomial logistic regression is used, as applicable.",
"title": ""
}
] | [
{
"docid": "c367d19e00816538753e6226785d05fd",
"text": "BACKGROUND AND OBJECTIVE\nMildronate, an inhibitor of carnitine-dependent metabolism, is considered to be an anti-ischemic drug. This study is designed to evaluate the efficacy and safety of mildronate injection in treating acute ischemic stroke.\n\n\nMETHODS\nWe performed a randomized, double-blind, multicenter clinical study of mildronate injection for treating acute cerebral infarction. 113 patients in the experimental group received mildronate injection, and 114 patients in the active-control group received cinepazide injection. In addition, both groups were given aspirin as a basic treatment. Modified Rankin Scale (mRS) score was performed at 2 weeks and 3 months after treatment. National Institutes of Health Stroke Scale (NIHSS) score and Barthel Index (BI) score were performed at 2 weeks after treatment, and then vital signs and adverse events were evaluated.\n\n\nRESULTS\nA total of 227 patients were randomized to treatment (n = 113, mildronate; n = 114, active-control). After 3 months, there was no significant difference for the primary endpoint between groups categorized in terms of mRS scores of 0-1 and 0-2 (p = 0.52 and p = 0.07, respectively). There were also no significant differences for the secondary endpoint between groups categorized in terms of NIHSS scores of >5 and >8 (p = 0.98 and p = 0.97, respectively) or BI scores of >75 and >95 (p = 0.49 and p = 0.47, respectively) at 15 days. The incidence of serious adverse events was similar between the two groups.\n\n\nCONCLUSION\nMildronate injection is as effective and safe as cinepazide injection in treating acute cerebral infarction.",
"title": ""
},
{
"docid": "b2332b118b846c9f417558a02975e20a",
"text": "This is the third in a series of four tutorial papers on biomedical signal processing and concerns the estimation of the power spectrum (PS) and coherence function (CF) od biomedical data. The PS is introduced and its estimation by means of the discrete Fourier transform is considered in terms of the problem of resolution in the frequency domain. The periodogram is introduced and its variance, bias and the effects of windowing and smoothing are considered. The use of the autocovariance function as a stage in power spectral estimation is described and the effects of windows in the autocorrelation domain are compared with the related effects of windows in the original time domain. The concept of coherence is introduced and the many ways in which coherence functions might be estimated are considered.",
"title": ""
},
{
"docid": "99ea14010fe3acd37952fb355a25b71c",
"text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. Experiments and comparisons are conducted through intrusion dataset: the KDD Cup 1999 dataset.",
"title": ""
},
{
"docid": "001784246312172835ceca4461ec28c5",
"text": "Abstract-The string-searching problem is to find all occurrences of pattern(s) in a text string. The Aho-Corasick string searching algorithm simultaneously finds all occurrences of multiple patterns in one pass through the text. On the other hand, the Boyer-Moore algorithm is understood to be the fastest algorithm for a single pattern. By combining the ideas of these two algorithms, we present an efficient string searching algorithm for multiple patterns. The algorithm runs in sublinear time, on the average, as the BM algorithm achieves, and its preprocessing time is linear proportional to the sum of the lengths of the patterns like the AC algorithm.",
"title": ""
},
{
"docid": "8e1b10ebb48b86ce151ab44dc0473829",
"text": "─ Cuckoo Search (CS) is a new met heuristic algorithm. It is being used for solving optimization problem. It was developed in 2009 by XinShe Yang and Susah Deb. Uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the effect of the experimental results are discussed and proposed for future research. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.",
"title": ""
},
{
"docid": "2a5f555c00d98a87fe8dd6d10e27dc38",
"text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.",
"title": ""
},
{
"docid": "350aeae5c69db969c35c673c0be2a98a",
"text": "Driver yawning detection is one of the key technologies used in driver fatigue monitoring systems. Real-time driver yawning detection is a very challenging problem due to the dynamics in driver's movements and lighting conditions. In this paper, we present a yawning detection system that consists of a face detector, a nose detector, a nose tracker and a yawning detector. Deep learning algorithms are developed for detecting driver face area and nose location. A nose tracking algorithm that combines Kalman filter with a dedicated open-source TLD (Track-Learning-Detection) tracker is developed to generate robust tracking results under dynamic driving conditions. Finally a neural network is developed for yawning detection based on the features including nose tracking confidence value, gradient features around corners of mouth and face motion features. Experiments are conducted on real-world driving data, and results show that the deep convolutional networks can generate a satisfactory classification result for detecting driver's face and nose when compared with other pattern classification methods, and the proposed yawning detection system is effective in real-time detection of driver's yawning states.",
"title": ""
},
{
"docid": "ab2c4d5317d2e10450513283c21ca6d3",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "1dbb34265c9b01f69262b3270fa24e97",
"text": "Binary content-addressable memory (BiCAM) is a popular high speed search engine in hardware, which provides output typically in one clock cycle. But speed of CAM comes at the cost of various disadvantages, such as high latency, low storage density, and low architectural scalability. In addition, field-programmable gate arrays (FPGAs), which are used in many applications because of its advantages, do not have hard IPs for CAM. Since FPGAs have embedded IPs for random-access memories (RAMs), several RAM-based CAM architectures on FPGAs are available in the literature. However, these architectures are especially targeted for ternary CAMs, not for BiCAMs; thus, the available RAM-based CAMs may not be fully beneficial for BiCAMs in terms of architectural design. Since modern FPGAs are enriched with logical resources, why not to configure them to design BiCAM on FPGA? This letter presents a logic-based high performance BiCAM architecture (LH-CAM) using Xilinx FPGA. The proposed CAM is composed of CAM words and associated comparators. A sample of LH-CAM of size ${64\\times 36}$ is implemented on Xilinx Virtex-6 FPGA. Compared with the latest prior work, the proposed CAM is much simpler in architecture, storage efficient, reduces power consumption by 40.92%, and improves speed by 27.34%.",
"title": ""
},
{
"docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0",
"text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.",
"title": ""
},
{
"docid": "15b05bdc1310d038110b545686082c98",
"text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. The science and technology research of such networks are reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.",
"title": ""
},
{
"docid": "b8b2d68955d6ed917900d30e4e15f71e",
"text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "d19a77b3835b7b43acf57da377b11cb4",
"text": "Given the importance of relation or event extraction from biomedical research publications to support knowledge capture and synthesis, and the strong dependency of approaches to this information extraction task on syntactic information, it is valuable to understand which approaches to syntactic processing of biomedical text have the highest performance. We perform an empirical study comparing state-of-the-art traditional feature-based and neural network-based models for two core natural language processing tasks of part-of-speech (POS) tagging and dependency parsing on two benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge, there is no recent work making such comparisons in the biomedical context; specifically no detailed analysis of neural models on this data is available. Experimental results show that in general, the neural models outperform the feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We also perform a task-oriented evaluation to investigate the influences of these models in a downstream application on biomedical event extraction, and show that better intrinsic parsing performance does not always imply better extrinsic event extraction performance. We have presented a detailed empirical study comparing traditional feature-based and neural network-based models for POS tagging and dependency parsing in the biomedical context, and also investigated the influence of parser selection for a biomedical event extraction downstream task. We make the retrained models available at https://github.com/datquocnguyen/BioPosDep.",
"title": ""
},
{
"docid": "9bf99d48bc201147a9a9ad5af547a002",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "4bfc1e2fbb2b1dea29360c410e5258b4",
"text": "Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed loop into open-loop to decouple the drive from faulty sensor readings. During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.",
"title": ""
},
{
"docid": "35dda21bd1f2c06a446773b0bfff2dd7",
"text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.",
"title": ""
},
{
"docid": "df78e51c3ed3a6924bf92db6000062e1",
"text": "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for two criteria: arrival time and number of transfers. Existing algorithms consider this as a graph problem, and solve it using variants of Dijkstra’s algorithm. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstrabased, looks at each route (such as a bus line) in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Moreover, it can be easily extended to handle flexible departure times or arbitrary additional criteria, such as fare zones. When run on London’s complex public transportation network, RAPTOR computes all Paretooptimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.",
"title": ""
},
{
"docid": "6d5480bf1ee5d401e39f5e65d0aaba25",
"text": "Engagement is a key reason for introducing gamification to learning and thus serves as an important measurement of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task - and user - related factors that may potentially impact the effect of gamification on learner engagement. To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. Our framework provides an in-depth understanding of the mechanism of gamification for learning, and can serve as a theoretical foundation for future research and design.",
"title": ""
}
] | scidocsrr |
0362bc28f18510c434c802902068035d | An ontology approach to object-based image retrieval | [
{
"docid": "5e0cff7f2b8e5aa8d112eacf2f149d60",
"text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.",
"title": ""
},
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
}
] | [
{
"docid": "e751fdbc980c36b95c81f0f865bb5033",
"text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.",
"title": ""
},
{
"docid": "aea0aeea95d251b5a7102825ad5c66ce",
"text": "The life time extension in the wireless sensor network (WSN) is the major concern in real time application, if the battery attached with the sensor node life is not optimized properly then the network life fall short. A protocol using a new evolutionary technique, cat swarm optimization (CSO), is designed and implemented in real time to minimize the intra-cluster distances between the cluster members and their cluster heads and optimize the energy distribution for the WSNs. We analyzed the performance of WSN protocol with the help of sensor nodes deployed in a field and grouped in to clusters. The novelty in our proposed scheme is considering the received signal strength, residual battery voltage and intra cluster distance of sensor nodes in cluster head selection with the help of CSO. The result is compared with the well-known protocol Low-energy adaptive clustering hierarchy-centralized (LEACH-C) and the swarm based optimization technique Particle swarm optimization (PSO). It was found that the battery energy level has been increased considerably of the traditional LEACH and PSO algorithm.",
"title": ""
},
{
"docid": "21e47bd70185299e94f8553ca7e60a6e",
"text": "Processes causing greenhouse gas (GHG) emissions benefit humans by providing consumer goods and services. This benefit, and hence the responsibility for emissions, varies by purpose or consumption category and is unevenly distributed across and within countries. We quantify greenhouse gas emissions associated with the final consumption of goods and services for 73 nations and 14 aggregate world regions. We analyze the contribution of 8 categories: construction, shelter, food, clothing, mobility, manufactured products, services, and trade. National average per capita footprints vary from 1 tCO2e/y in African countries to approximately 30/y in Luxembourg and the United States. The expenditure elasticity is 0.57. The cross-national expenditure elasticity for just CO2, 0.81, corresponds remarkably well to the cross-sectional elasticities found within nations, suggesting a global relationship between expenditure and emissions that holds across several orders of magnitude difference. On the global level, 72% of greenhouse gas emissions are related to household consumption, 10% to government consumption, and 18% to investments. Food accounts for 20% of GHG emissions, operation and maintenance of residences is 19%, and mobility is 17%. Food and services are more important in developing countries, while mobility and manufactured goods rise fast with income and dominate in rich countries. The importance of public services and manufactured goods has not yet been sufficiently appreciated in policy. Policy priorities hence depend on development status and country-level characteristics.",
"title": ""
},
{
"docid": "6fb1f05713db4e771d9c610fa9c9925d",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "917154ffa5d9108fd07782d1c9a183ba",
"text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.",
"title": ""
},
{
"docid": "70fd930a2a6504404bec67779cba71b2",
"text": "This article discusses the logical implementation of the media access control and the physical layer of 100 Gb/s Ethernet. The target are a MAC/PCS LSI, supporting MAC and physical coding sublayer, and a gearbox LSI, providing 10:4 parallel lane-width exchange inside an optical module. The two LSIs are connected by a 100 gigabit attachment unit interface, which consists of ten 10 Gb/s lines. We realized a MAC/PCS logical circuit with a low-frequency clock on a FPGA, whose size is 250 kilo LUTs with a 5.7 Mbit RAM, and the power consumption of the gearbox LSI estimated to become 2.3 W.",
"title": ""
},
{
"docid": "6ea91574db57616682cf2a9608b0ac0b",
"text": "METHODOLOGY AND PRINCIPAL FINDINGS\nOleuropein promoted cultured human follicle dermal papilla cell proliferation and induced LEF1 and Cyc-D1 mRNA expression and β-catenin protein expression in dermal papilla cells. Nuclear accumulation of β-catenin in dermal papilla cells was observed after oleuropein treatment. Topical application of oleuropein (0.4 mg/mouse/day) to C57BL/6N mice accelerated the hair-growth induction and increased the size of hair follicles in telogenic mouse skin. The oleuropein-treated mouse skin showed substantial upregulation of Wnt10b, FZDR1, LRP5, LEF1, Cyc-D1, IGF-1, KGF, HGF, and VEGF mRNA expression and β-catenin protein expression.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThese results demonstrate that topical oleuroepin administration induced anagenic hair growth in telogenic C57BL/6N mouse skin. The hair-growth promoting effect of oleuropein in mice appeared to be associated with the stimulation of the Wnt10b/β-catenin signaling pathway and the upregulation of IGF-1, KGF, HGF, and VEGF gene expression in mouse skin tissue.",
"title": ""
},
{
"docid": "94cf1976c10d632cfce12ce3f32be4cc",
"text": "In today’s economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, investigative support, provider lock-in and disaster recovery. We focus on risk and control analysis in relation to a sample of Swiss companies with regard to their prospective adoption of public cloud services. We observe a sufficient degree of risk awareness with a focus on those risks that are relevant to the IT function to be migrated to the cloud. Moreover, the recommendations as to the adoption of cloud services depend on the company’s size with larger and more technologically advanced companies being better prepared for the cloud. As an exploratory first step, the results of this study would allow us to design and implement broader research into cloud computing risk management in Switzerland.",
"title": ""
},
{
"docid": "db1abd38db0295fc573bdfca2c2b19a3",
"text": "BACKGROUND\nBacterial vaginosis (BV) has been most consistently linked to sexual behaviour, and the epidemiological profile of BV mirrors that of established sexually transmitted infections (STIs). It remains a matter of debate however whether BV pathogenesis does actually involve sexual transmission of pathogenic micro-organisms from men to women. We therefore made a critical appraisal of the literature on BV in relation to sexual behaviour.\n\n\nDISCUSSION\nG. vaginalis carriage and BV occurs rarely with children, but has been observed among adolescent, even sexually non-experienced girls, contradicting that sexual transmission is a necessary prerequisite to disease acquisition. G. vaginalis carriage is enhanced by penetrative sexual contact but also by non-penetrative digito-genital contact and oral sex, again indicating that sex per se, but not necessarily coital transmission is involved. Several observations also point at female-to-male rather than at male-to-female transmission of G. vaginalis, presumably explaining the high concordance rates of G. vaginalis carriage among couples. Male antibiotic treatment has not been found to protect against BV, condom use is slightly protective, whereas male circumcision might protect against BV. BV is also common among women-who-have-sex-with-women and this relates at least in part to non-coital sexual behaviours. Though male-to-female transmission cannot be ruled out, overall there is little evidence that BV acts as an STD. Rather, we suggest BV may be considered a sexually enhanced disease (SED), with frequency of intercourse being a critical factor. This may relate to two distinct pathogenetic mechanisms: (1) in case of unprotected intercourse alkalinisation of the vaginal niche enhances a shift from lactobacilli-dominated microflora to a BV-like type of microflora and (2) in case of unprotected and protected intercourse mechanical transfer of perineal enteric bacteria is enhanced by coitus. A similar mechanism of mechanical transfer may explain the consistent link between non-coital sexual acts and BV. Similar observations supporting the SED pathogenetic model have been made for vaginal candidiasis and for urinary tract infection.\n\n\nSUMMARY\nThough male-to-female transmission cannot be ruled out, overall there is incomplete evidence that BV acts as an STI. We believe however that BV may be considered a sexually enhanced disease, with frequency of intercourse being a critical factor.",
"title": ""
},
{
"docid": "5528f1ee010e7fba440f1f7ff84a3e8e",
"text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …",
"title": ""
},
{
"docid": "d10afc83c234c1c0531e23b29b5d8895",
"text": "BACKGROUND\nThe efficacy of new antihypertensive drugs has been questioned. We compared the effects of conventional and newer antihypertensive drugs on cardiovascular mortality and morbidity in elderly patients.\n\n\nMETHODS\nWe did a prospective, randomised trial in 6614 patients aged 70-84 years with hypertension (blood pressure > or = 180 mm Hg systolic, > or = 105 mm Hg diastolic, or both). Patients were randomly assigned conventional antihypertensive drugs (atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or hydrochlorothiazide 25 mg plus amiloride 2.5 mg daily) or newer drugs (enalapril 10 mg or lisinopril 10 mg, or felodipine 2.5 mg or isradipine 2-5 mg daily). We assessed fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease. Analysis was by intention to treat.\n\n\nFINDINGS\nBlood pressure was decreased similarly in all treatment groups. The primary combined endpoint of fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease occurred in 221 of 2213 patients in the conventional drugs group (19.8 events per 1000 patient-years) and in 438 of 4401 in the newer drugs group (19.8 per 1000; relative risk 0.99 [95% CI 0.84-1.16], p=0.89). The combined endpoint of fatal and non-fatal stroke, fatal and non-fatal myocardial infarction, and other cardiovascular mortality occurred in 460 patients taking conventional drugs and in 887 taking newer drugs (0.96 [0.86-1.08], p=0.49).\n\n\nINTERPRETATION\nOld and new antihypertensive drugs were similar in prevention of cardiovascular mortality or major events. Decrease in blood pressure was of major importance for the prevention of cardiovascular events.",
"title": ""
},
{
"docid": "ee1e2400ed5c944826747a8e616b18c1",
"text": "Metastasis remains the greatest challenge in the clinical management of cancer. Cell motility is a fundamental and ancient cellular behaviour that contributes to metastasis and is conserved in simple organisms. In this Review, we evaluate insights relevant to human cancer that are derived from the study of cell motility in non-mammalian model organisms. Dictyostelium discoideum, Caenorhabditis elegans, Drosophila melanogaster and Danio rerio permit direct observation of cells moving in complex native environments and lend themselves to large-scale genetic and pharmacological screening. We highlight insights derived from each of these organisms, including the detailed signalling network that governs chemotaxis towards chemokines; a novel mechanism of basement membrane invasion; the positive role of E-cadherin in collective direction-sensing; the identification and optimization of kinase inhibitors for metastatic thyroid cancer on the basis of work in flies; and the value of zebrafish for live imaging, especially of vascular remodelling and interactions between tumour cells and host tissues. While the motility of tumour cells and certain host cells promotes metastatic spread, the motility of tumour-reactive T cells likely increases their antitumour effects. Therefore, it is important to elucidate the mechanisms underlying all types of cell motility, with the ultimate goal of identifying combination therapies that will increase the motility of beneficial cells and block the spread of harmful cells.",
"title": ""
},
{
"docid": "ec673efa5f837ba4c997ee7ccd845ce1",
"text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate(LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversar- ial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR −10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.",
"title": ""
},
{
"docid": "a27a05cb00d350f9021b5c4f609d772c",
"text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.",
"title": ""
},
{
"docid": "64b3bcb65b9e890810561ab5700dec32",
"text": "In this paper, we address the fusion problem of two estimates, where the cross-correlation between the estimates is unknown. To solve the problem within the Bayesian framework, we assume that the covariance matrix has a prior distribution. We also assume that we know the covariance of each estimate, i.e., the diagonal block of the entire co-variance matrix (of the random vector consisting of the two estimates). We then derive the conditional distribution of the off-diagonal blocks, which is the cross-correlation of our interest. The conditional distribution happens to be the inverted matrix variate t-distribution. We can readily sample from this distribution and use a Monte Carlo method to compute the minimum mean square error estimate for the fusion problem. Simulations show that the proposed method works better than the popular covariance intersection method.",
"title": ""
},
{
"docid": "b47980c393116ac598a7f5c38fb402b9",
"text": "Skin cancer, the most common human malignancy, is primarily diagnosed visually by physicians . Classification with an automated method like CNN [2, 3] shows potential for diagnosing the skin cancer according to the medical photographs. By now, the deep convolutional neural networks can achieve the level of human dermatologist . This work is dedicated on developing a Deep Learning method for ISIC [5] 2017 Skin Lesion Detection Competition to classify the dermatology pictures, which is aiming at improving the diagnostic accuracy rate. As an result, it will improve the general level of the human health. The challenge falls into three sub-challenges, including Lesion Segmentation, Lesion Dermoscopic Feature Extraction and Lesion Classification. We focus on the Lesion Classification task. The proposed algorithm is comprised of three steps: (1) original images preprocessing, (2) modelling the processed images using CNN [2, 3] in Caffe [4] framework, (3) predicting the test images and calculating the scores that represent the likelihood of corresponding classification. The models are built on the source images are using the Caffe [4] framework. The scores in prediction step are obtained by two different models from the source images.",
"title": ""
},
{
"docid": "d7a2708fc70f6480d9026aeefce46610",
"text": "In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.",
"title": ""
},
{
"docid": "ed9d72566cdf3e353bf4b1e589bf85eb",
"text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.",
"title": ""
}
] | scidocsrr |
5a812bb56de310cac6c24d113eaa568c | Secure control against replay attacks | [
{
"docid": "42d5712d781140edbc6a35703d786e15",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
{
"docid": "223a7496c24dcf121408ac3bba3ad4e5",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "f2db57e59a2e7a91a0dff36487be3aa4",
"text": "In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.",
"title": ""
}
] | [
{
"docid": "ffea50948eab00d47f603d24bcfc1bfd",
"text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.",
"title": ""
},
{
"docid": "d038c7b29701654f8ee908aad395fe8c",
"text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.",
"title": ""
},
{
"docid": "caab24c7af0c58965833d56382132d66",
"text": "Mesh slicing is one of the most common operations in additive manufacturing (AM). However, the computing burden for such an application is usually very heavy, especially when dealing with large models. Nowadays the graphics processing units (GPU) have abundant resources and it is reasonable to utilize the computing power of GPU for mesh slicing. In the paper, we propose a parallel implementation of the slicing algorithm using GPU. We test the GPU-accelerated slicer on several models and obtain a speedup factor of about 30 when dealing with large models, compared with the CPU implementation. Results show the power of GPU on the mesh slicing problem. In the future, we will extend our work and standardize the slicing process.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "94076bd2a4587df2bee9d09e81af2109",
"text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.",
"title": ""
},
{
"docid": "3e2c79715d8ae80e952d1aabf03db540",
"text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].",
"title": ""
},
{
"docid": "b53f2f922661bfb14bf2181236fad566",
"text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. A naively Appeared in the proceedings of the ICML 2013, Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. Such issues of a mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training. Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). 
Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). These works essentially construct a classifier using the labeled source data, and impose structural constraints on the classifier using unlabeled target data. A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections, for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model (DLID): Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representation of the source and target inputs. However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model out-performs the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 2. An Overview of DLID At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation. 
Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006) have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pre-training. The key difference is that in the DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassman manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. Let the set of data samples for the source domain S be denoted by DS, and that of the target domain T be denoted by DT. Starting with all the source data samples DS, we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from DT, and decrease the proportion of samples drawn from DS. In particular, let p ∈ [1, . . . , P] be an index over the P datasets we generate. Then we have Dp = DS for p = 1, Dp = DT for p = P. For p ∈ [2, . . . , P − 1], datasets Dp and Dp+1 are created in a way so that the proportion of samples from DT in Dp is less than in Dp+1. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between S and T.",
"title": ""
},
{
"docid": "c36986dd83276fe01e73a4125f99f7c0",
"text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization and with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems). KeywordsSwarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.",
"title": ""
},
{
"docid": "b9dfc489ff1bf6907929a450ea614d0b",
"text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.",
"title": ""
},
{
"docid": "16816ba52854f6242701b27fdd0263fe",
"text": "The economic viability of colonizing Mars is examined. It is shown, that of all bodies in the solar system other than Earth, Mars is unique in that it has the resources required to support a population of sufficient size to create locally a new branch of human civilization. It is also shown that while Mars may lack any cash material directly exportable to Earth, Mars’ orbital elements and other physical parameters gives a unique positional advantage that will allow it to act as a keystone supporting extractive activities in the asteroid belt and elsewhere in the solar system. The potential of relatively near-term types of interplanetary transportation systems is examined, and it is shown that with very modest advances on a historical scale, systems can be put in place that will allow individuals and families to emigrate to Mars at their own discretion. Their motives for doing so will parallel in many ways the historical motives for Europeans and others to come to America, including higher pay rates in a labor-short economy, escape from tradition and oppression, as well as freedom to exercise their drive to create in an untamed and undefined world. Under conditions of such large scale immigration, sale of real-estate will add a significant source of income to the planet’s economy. Potential increases in real-estate values after terraforming will provide a sufficient financial incentive to do so. In analogy to frontier America, social conditions on Mars will make it a pressure cooker for invention. These inventions, licensed on Earth, will raise both Terrestrial and Martian living standards and contribute large amounts of income to support the development of the colony.",
"title": ""
},
{
"docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb",
"text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.",
"title": ""
},
{
"docid": "6f3a902ed5871a95f6b5adf197684748",
"text": "BACKGROUND\nThe choice of antimicrobials for initial treatment of peritoneal dialysis (PD)-related peritonitis is crucial for a favorable outcome. There is no consensus about the best therapy; few prospective controlled studies have been published, and the only published systematic reviews did not report superiority of any class of antimicrobials. The objective of this review was to analyze the results of PD peritonitis treatment in adult patients by employing a new methodology, the proportional meta-analysis.\n\n\nMETHODS\nA review of the literature was conducted. There was no language restriction. Studies were obtained from MEDLINE, EMBASE, and LILACS. The inclusion criteria were: (a) case series and RCTs with the number of reported patients in each study greater than five, (b) use of any antibiotic therapy for initial treatment (e.g., cefazolin plus gentamicin or vancomycin plus gentamicin), for Gram-positive (e.g., vancomycin or a first generation cephalosporin), or for Gram-negative rods (e.g., gentamicin, ceftazidime, and fluoroquinolone), (c) patients with PD-related peritonitis, and (d) studies specifying the rates of resolution. A proportional meta-analysis was performed on outcomes using a random-effects model, and the pooled resolution rates were calculated.\n\n\nRESULTS\nA total of 64 studies (32 for initial treatment and negative culture, 28 reporting treatment for Gram-positive rods and 24 reporting treatment for Gram-negative rods) and 21 RCTs met all inclusion criteria (14 for initial treatment and negative culture, 8 reporting treatment for Gram-positive rods and 8 reporting treatment for Gram-negative rods). The pooled resolution rate of ceftazidime plus glycopeptide as initial treatment (pooled proportion = 86% [95% CI 0.82-0.89]) was significantly higher than first generation cephalosporin plus aminoglycosides (pooled proportion = 66% [95% CI 0.57-0.75]) and significantly higher than glycopeptides plus aminoglycosides (pooled proportion = 75% [95% CI 0.69-0.80]. Other comparisons of regimens used for either initial treatment, treatment for Gram-positive rods or Gram-negative rods did not show statistically significant differences.\n\n\nCONCLUSION\nWe showed that the association of a glycopeptide plus ceftazidime is superior to other regimens for initial treatment of PD peritonitis. This result should be carefully analyzed and does not exclude the necessity of monitoring the local microbiologic profile in each dialysis center to choice the initial therapeutic protocol.",
"title": ""
},
{
"docid": "26a6ba8cba43ddfd3cac0c90750bf4ad",
"text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.",
"title": ""
},
{
"docid": "b7f15089db3f5d04c1ce1d5f09b0b1f0",
"text": "Despite the flourishing research on the relationships between affect and language, the characteristics of pain-related words, a specific type of negative words, have never been systematically investigated from a psycholinguistic and emotional perspective, despite their psychological relevance. This study offers psycholinguistic, affective, and pain-related norms for words expressing physical and social pain. This may provide a useful tool for the selection of stimulus materials in future studies on negative emotions and/or pain. We explored the relationships between psycholinguistic, affective, and pain-related properties of 512 Italian words (nouns, adjectives, and verbs) conveying physical and social pain by asking 1020 Italian participants to provide ratings of Familiarity, Age of Acquisition, Imageability, Concreteness, Context Availability, Valence, Arousal, Pain-Relatedness, Intensity, and Unpleasantness. We also collected data concerning Length, Written Frequency (Subtlex-IT), N-Size, Orthographic Levenshtein Distance 20, Neighbor Mean Frequency, and Neighbor Maximum Frequency of each word. Interestingly, the words expressing social pain were rated as more negative, arousing, pain-related, and conveying more intense and unpleasant experiences than the words conveying physical pain.",
"title": ""
},
{
"docid": "0f87fefbe2cfc9893b6fc490dd3d40b7",
"text": "With the tremendous amount of textual data available in the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the lingusitic components of the summarizer, both analysis and generation.",
"title": ""
},
{
"docid": "031e3f1ae2537b603b4b2119f3dad572",
"text": "Efficient storage and querying of large repositories of RDF content is important due to the widespread growth of Semantic Web and Linked Open Data initiatives. Many novel database systems that store RDF in its native form or within traditional relational storage have demonstrated their ability to scale to large volumes of RDF content. However, it is increasingly becoming obvious that the simple dyadic relationship captured through traditional triples alone is not sufficient for modelling multi-entity relationships, provenance of facts, etc. Such richer models are supported in RDF through two techniques - first, called reification which retains the triple nature of RDF and the second, a non-standard extension called N-Quads. In this paper, we explore the challenges of supporting such richer semantic data by extending the state-of-the-art RDF-3X system. We describe our implementation of RQ-RDF-3X, a reification and quad enhanced RDF-3X, which involved a significant re-engineering ranging from the set of indexes and their compression schemes to the query processing pipeline for queries over reified content. Using large RDF repositories such as YAGO2S and DBpedia, and a set of SPARQL queries that utilize reification model, we demonstrate that RQ-RDF-3X is significantly faster than RDF-3X.",
"title": ""
},
{
"docid": "cea9c1bab28363fc6f225b7843b8df99",
"text": "Published in Agron. J. 104:1336–1347 (2012) Posted online 29 June 2012 doi:10.2134/agronj2012.0065 Copyright © 2012 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved. No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. T leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by nearinfrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002), its ABStrAct",
"title": ""
},
{
"docid": "2805c831c044d0b4da29bf96e9e7d3ad",
"text": "Combined chromatin immunoprecipitation and next-generation sequencing (ChIP-seq) has enabled genome-wide epigenetic profiling of numerous cell lines and tissue types. A major limitation of ChIP-seq, however, is the large number of cells required to generate high-quality data sets, precluding the study of rare cell populations. Here, we present an ultra-low-input micrococcal nuclease-based native ChIP (ULI-NChIP) and sequencing method to generate genome-wide histone mark profiles with high resolution from as few as 10(3) cells. We demonstrate that ULI-NChIP-seq generates high-quality maps of covalent histone marks from 10(3) to 10(6) embryonic stem cells. Subsequently, we show that ULI-NChIP-seq H3K27me3 profiles generated from E13.5 primordial germ cells isolated from single male and female embryos show high similarity to recent data sets generated using 50-180 × more material. Finally, we identify sexually dimorphic H3K27me3 enrichment at specific genic promoters, thereby illustrating the utility of this method for generating high-quality and -complexity libraries from rare cell populations.",
"title": ""
},
{
"docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] | scidocsrr |
84e93effd1fc051cbfd1fdda0017d7f0 | A review on deep learning for recommender systems: challenges and remedies | [
{
"docid": "659b3c56790b92b5b02dcdbab76bef0c",
"text": "Recommender systems based on deep learning technology pay huge attention recently. In this paper, we propose a collaborative filtering based recommendation algorithm that utilizes the difference of similarities among users derived from different layers in stacked denoising autoencoders. Since different layers in a stacked autoencoder represent the relationships among items with rating at different levels of abstraction, we can expect to make recommendations more novel, various and serendipitous, compared with a normal collaborative filtering using single similarity. The results of experiments using MovieLens dataset show that the proposed recommendation algorithm can improve the diversity of recommendation lists without great loss of accuracy.",
"title": ""
},
{
"docid": "89fd46da8542a8ed285afb0cde9cc236",
"text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.",
"title": ""
}
] | [
{
"docid": "e2666b0eed30a4eed2ad0cde07324d73",
"text": "It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population.",
"title": ""
},
{
"docid": "83ad15e2ffeebb21705b617646dc4ed7",
"text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.",
"title": ""
},
{
"docid": "28ff3b1e9f29d7ae4b57f6565330cde5",
"text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.",
"title": ""
},
{
"docid": "ce5efa83002cee32a5ef8b8b73b81a60",
"text": "Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual clues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experimentation demonstrates that the proposed approach is very promising in solving the facial model registration problem under occlusion.",
"title": ""
},
{
"docid": "255a155986548bb873ee0bc88a00222b",
"text": "Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.",
"title": ""
},
{
"docid": "91bbea10b8df8a708b65947c8a8832dc",
"text": "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.",
"title": ""
},
{
"docid": "9083d1159628f0b9a363aca5dea47591",
"text": "Cocitation and co-word methods have long been used to detect and track emerging topics in scientific literature, but both have weaknesses. Recently, while many researchers have adopted generative probabilistic models for topic detection and tracking, few have compared generative probabilistic models with traditional cocitation and co-word methods in terms of their overall performance. In this article, we compare the performance of hierarchical Dirichlet process (HDP), a promising generative probabilistic model, with that of the 2 traditional topic detecting and tracking methods— cocitation analysis and co-word analysis. We visualize and explore the relationships between topics identified by the 3 methods in hierarchical edge bundling graphs and time flow graphs. Our result shows that HDP is more sensitive and reliable than the other 2 methods in both detecting and tracking emerging topics. Furthermore, we demonstrate the important topics and topic evolution trends in the literature of terrorism research with the HDP method.",
"title": ""
},
{
"docid": "4c82ba56d6532ddc57c2a2978de7fe5a",
"text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.",
"title": ""
},
{
"docid": "179298e4aa5fbd8a9ea08bde263ceaf5",
"text": "Healthcare applications are considered as promising fields for wireless sensor networks, where patients can be monitored using wireless medical sensor networks (WMSNs). Current WMSN healthcare research trends focus on patient reliable communication, patient mobility, and energy-efficient routing, as a few examples. However, deploying new technologies in healthcare applications without considering security makes patient privacy vulnerable. Moreover, the physiological data of an individual are highly sensitive. Therefore, security is a paramount requirement of healthcare applications, especially in the case of patient privacy, if the patient has an embarrassing disease. This paper discusses the security and privacy issues in healthcare application using WMSNs. We highlight some popular healthcare projects using wireless medical sensor networks, and discuss their security. Our aim is to instigate discussion on these critical issues since the success of healthcare application depends directly on patient security and privacy, for ethic as well as legal reasons. In addition, we discuss the issues with existing security mechanisms, and sketch out the important security requirements for such applications. In addition, the paper reviews existing schemes that have been recently proposed to provide security solutions in wireless healthcare scenarios. Finally, the paper ends up with a summary of open security research issues that need to be explored for future healthcare applications using WMSNs.",
"title": ""
},
{
"docid": "3e98e933aff32193fe4925f39fd04198",
"text": "Estimating surface normals is an important task in computer vision, e.g. in surface reconstruction, registration and object detection. In stereo vision, the error of depth reconstruction increases quadratically with distance. This makes estimation of surface normals an especially demanding task. In this paper, we analyze how error propagates from noisy disparity data to the orientation of the estimated surface normal. Firstly, we derive a transformation for normals between disparity space and world coordinates. Afterwards, the propagation of disparity noise is analyzed by means of a Monte Carlo method. Normal reconstruction at a pixel position requires to consider a certain neighborhood of the pixel. The extent of this neighborhood affects the reconstruction error. Our method allows to determine the optimal neighborhood size required to achieve a pre specified deviation of the angular reconstruction error, defined by a confidence interval. We show that the reconstruction error only depends on the distance of the surface point to the camera, the pixel distance to the principal point in the image plane and the angle at which the viewing ray intersects the surface.",
"title": ""
},
{
"docid": "7db9cf29dd676fa3df5a2e0e95842b6e",
"text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.",
"title": ""
},
{
"docid": "6b064b9f4c90a60fab788f9d5aee8b58",
"text": "Extracorporeal photopheresis (ECP) is a technique that was developed > 20 years ago to treat erythrodermic cutaneous T-cell lymphoma (CTCL). The technique involves removal of peripheral blood, separation of the buffy coat, and photoactivation with a photosensitizer and ultraviolet A irradiation before re-infusion of cells. More than 1000 patients with CTCL have been treated with ECP, with response rates of 31-100%. ECP has been used in a number of other conditions, most widely in the treatment of chronic graft-versus-host disease (cGvHD) with response rates of 29-100%. ECP has also been used in several other autoimmune diseases including acute GVHD, solid organ transplant rejection and Crohn's disease, with some success. ECP is a relatively safe procedure, and side-effects are typically mild and transient. Severe reactions including vasovagal syncope or infections are uncommon. This is very valuable in conditions for which alternative treatments are highly toxic. The mechanism of action of ECP remains elusive. ECP produces a number of immunological changes and in some patients produces immune homeostasis with resultant clinical improvement. ECP is available in seven centres in the UK. Experts from all these centres formed an Expert Photopheresis Group and published the UK consensus statement for ECP in 2008. All centres consider patients with erythrodermic CTCL and steroid-refractory cGvHD for treatment. The National Institute for Health and Clinical Excellence endorsed the use of ECP for CTCL and suggested a need for expansion while recommending its use in specialist centres. ECP is safe, effective, and improves quality of life in erythrodermic CTCL and cGvHD, and should be more widely available for these patients.",
"title": ""
},
{
"docid": "598dd39ec35921242b94f17e24b30389",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
{
"docid": "5bde29ce109714f623ae9d69184a8708",
"text": "Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near–far mismatch, source spreading, and local scattering. The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms.",
"title": ""
},
{
"docid": "e8a0e37fcb90f43785da710792b02c3c",
"text": "As the use of Twitter has become more commonplace throughout many nations, its role in political discussion has also increased. This has been evident in contexts ranging from general political discussion through local, state, and national elections (such as in the 2010 Australian elections) to protests and other activist mobilisation (for example in the current uprisings in Tunisia, Egypt, and Yemen, as well as in the controversy around Wikileaks). Research into the use of Twitter in such political contexts has also developed rapidly, aided by substantial advancements in quantitative and qualitative methodologies for capturing, processing, analysing, and visualising Twitter updates by large groups of users. Recent work has especially highlighted the role of the Twitter hashtag – a short keyword, prefixed with the hash symbol ‘#’ – as a means of coordinating a distributed discussion between more or less large groups of users, who do not need to be connected through existing ‘follower’ networks. Twitter hashtags – such as ‘#ausvotes’ for the 2010 Australian elections, ‘#londonriots’ for the coordination of information and political debates around the recent unrest in London, or ‘#wikileaks’ for the controversy around Wikileaks thus aid the formation of ad hoc publics around specific themes and topics. They emerge from within the Twitter community – sometimes as a result of pre-planning or quickly reached consensus, sometimes through protracted debate about what the appropriate hashtag for an event or topic should be (which may also lead to the formation of competing publics using different hashtags). Drawing on innovative methodologies for the study of Twitter content, this paper examines the use of hashtags in political debate in the context of a number of major case studies.",
"title": ""
},
{
"docid": "f117503bf48ea9ddf575dedf196d3bcd",
"text": "In recent years, prison officials have increasingly turned to solitary confinement as a way to manage difficult or dangerous prisoners. Many of the prisoners subjected to isolation, which can extend for years, have serious mental illness, and the conditions of solitary confinement can exacerbate their symptoms or provoke recurrence. Prison rules for isolated prisoners, however, greatly restrict the nature and quantity of mental health services that they can receive. In this article, we describe the use of isolation (called segregation by prison officials) to confine prisoners with serious mental illness, the psychological consequences of such confinement, and the response of U.S. courts and human rights experts. We then address the challenges and human rights responsibilities of physicians confronting this prison practice. We conclude by urging professional organizations to adopt formal positions against the prolonged isolation of prisoners with serious mental illness.",
"title": ""
},
{
"docid": "1dd4a95adcd4f9e7518518148c3605ac",
"text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.",
"title": ""
},
{
"docid": "e7d8f97d7d76ae089842e602b91df21c",
"text": "In this paper, we propose a novel text representation paradigm and a set of follow-up text representation models based on cognitive psychology theories. The intuition of our study is that the knowledge implied in a large collection of documents may improve the understanding of single documents. Based on cognitive psychology theories, we propose a general text enrichment framework, study the key factors to enable activation of implicit information, and develop new text representation methods to enrich text with the implicit information. Our study aims to mimic some aspects of human cognitive procedure in which given stimulant words serve to activate understanding implicit concepts. By incorporating human cognition into text representation, the proposed models advance existing studies by mining implicit information from given text and coordinating with most existing text representation approaches at the same time, which essentially bridges the gap between explicit and implicit information. Experiments on multiple tasks show that the implicit information activated by our proposed models matches human intuition and significantly improves the performance of the text mining tasks as well.",
"title": ""
},
{
"docid": "2a98cfbd1036ef0cdccbc491b33a9af4",
"text": "Optimal facility layout is of the factors that can affect the efficiency of any organization and annually millions of dollars costs are created for it or profits are saved. Studies done in the field of facility layout can be classified into two general categories: Facility layout issues in static position and facility layout issues in dynamic position. Because that facilities dynamic layout problem is realistic this paper investigates it and tries to consider all necessary aspects of this issue to become it more practical. In this regard, this research has developed three objectives model and tries to simultaneously minimize total operating costs and also production time. Since the calculation of production time using analytical relations is impossible, this research using simulation and regression analysis of a statistical correlation tries to measure production time. So the developed model is a combination of analytical and statistical relationships. The proposed model is of NP-HARD issues, so that even finding an optimal solution for its small scale is very difficult and time consuming. Multi-objective meta-heuristic NSGAII and NRGA algorithms are used to solve the problem. Since the outputs of meta-heuristic algorithms are highly dependent on the algorithms input parameters, Taguchi experimental design method is also used to set parameters. Alsoin order to assess the efficiency of provided procedures, the proposed method has been analyzed on generated pilot issues with various aspects. The results of the comparing algorithms on several criteria, consistently show the superiority of the NSGA-II than NRGA in problem solving.",
"title": ""
}
] | scidocsrr |
0d503425a6cfad63bab8ef9673adc50f | Differential privacy and robust statistics | [
{
"docid": "7c449b9714d937dc6a3367a851130c4a",
"text": "We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacv, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.",
"title": ""
}
] | [
{
"docid": "52b55dab6e8a364bf0bcf05787e5b1ef",
"text": "In adaptive learning systems for distance learning attention is focused on adjusting the learning material to the needs of the individual. Adaptive tests adjust to the current level of knowledge of the examinee and is specific for their needs, thus it is much better at evaluating the knowledge of each individual. The basic goal of adaptive computer tests is to ensure the examinee questions that are challenging enough for them but not too difficult, which would lead to frustration and confusion. The aim of this paper is to present a computer adaptive test (CAT) realized in MATLAB.",
"title": ""
},
{
"docid": "933bc7cc6e1d56969f9d3fc0157f7ac9",
"text": "This paper presents algorithms and techniques for single-sensor tracking and multi-sensor fusion of infrared and radar data. The results show that fusing radar data with infrared data considerably increases detection range, reliability and accuracy of the object tracking. This is mandatory for further development of driver assistance systems. Using multiple model filtering for sensor fusion applications helps to capture the dynamics of maneuvering objects while still achieving smooth object tracking for not maneuvering objects. This is important when safety and comfort systems have to make use of the same sensor information. Comfort systems generally require smoothly filtered data whereas for safety systems it is crucial to capture maneuvers of other road users as fast as possible. Multiple model filtering and probabilistic data association techniques are presented and all presented algorithms are tested in real-time on standard PC systems.",
"title": ""
},
{
"docid": "c2da932aec6f3d8c6fddc9aaa994c9cd",
"text": "As more companies embrace the concepts of sustainable development, there is a need to bring the ideas inherent in eco-efficiency and the \" triple-bottom line \" thinking down to a practical implementation level. Putting this concept into operation requires an understanding of the key indicators of sustainability and how they can be measured to determine if, in fact, progress is being made. Sustainability metrics are intended as simple yardsticks that are applicable across industry. The primary objective of this approach is to improve internal management decision-making with respect to the sustainability of processes, products and services. This approach can be used to make better decisions at any stage of the stage-gate process: from identification of an innovation to design to manufacturing and ultimately to exiting a business. More specifically, sustainability metrics can assist decision makers in setting goals, benchmarking, and comparing alternatives such as different suppliers, raw materials, and improvement options from the sustainability perspective. This paper provides a review on the early efforts and recent progress in the development of sustainability metrics. The experience of BRIDGES to Sustainability™, a not-for-profit organization, in testing, adapting, and refining the sustainability metrics are summarized. Basic and complementary metrics under six impact categories: material, energy, water, solid wastes, toxic release, and pollutant effects, are discussed. The development of BRIDGESworks™ Metrics, a metrics management software tool, is also presented. The software was designed to be both easy to use and flexible. It incorporates a base set of metrics and their heuristics for calculation, as well as a robust set of impact assessment data for use in identifying pollutant effects. While providing a metrics management starting point, the user has the option of creating other metrics defined by the user. The sustainability metrics work at BRIDGES to Sustainability™ was funded partially by the U.S. Department of Energy through a subcontract with the American Institute of Chemical Engineers and through corporate pilots.",
"title": ""
},
{
"docid": "c638fe67f5d4b6e04a37e216edb849fa",
"text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.",
"title": ""
},
{
"docid": "8185da1a497e25f0c50e789847b6bd52",
"text": "We address numerical versus experimental design and testing of miniature implantable antennas for biomedical telemetry in the medical implant communications service band (402-405 MHz). A model of a novel miniature antenna is initially proposed for skin implantation, which includes varying parameters to deal with fabrication-specific details. An iterative design-and-testing methodology is further suggested to determine the parameter values that minimize deviations between numerical and experimental results. To assist in vitro testing, a low-cost technique is proposed for reliably measuring the electric properties of liquids without requiring commercial equipment. Validation is performed within a specific prototype fabrication/testing approach for miniature antennas. To speed up design while providing an antenna for generic skin implantation, investigations are performed inside a canonical skin-tissue model. Resonance, radiation, and safety performance of the proposed antenna is finally evaluated inside an anatomical head model. This study provides valuable insight into the design of implantable antennas, assessing the significance of fabrication-specific details in numerical simulations and uncertainties in experimental testing for miniature structures. The proposed methodology can be applied to optimize antennas for several fabrication/testing approaches and biotelemetry applications.",
"title": ""
},
{
"docid": "f738f79a9d516389e1ad0c7343d525c4",
"text": "The I-V curves for Schottky diodes with two different contact areas and geometries fabricated through 1.2 μm CMOS process are presented. These curves are described applying the analysis and practical layout design. It takes into account the resistance, capacitance and reverse breakdown voltage in the semiconductor structure and the dependence of these parameters to improve its operation. The described diodes are used for a charge pump circuit implementation.",
"title": ""
},
{
"docid": "ae9fb1b7ff6821dd29945f768426d7fc",
"text": "Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients.",
"title": ""
},
{
"docid": "0ce7465e40b3b13e5c316fb420a766d9",
"text": "We have been developing ldquoSmart Suitrdquo as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.",
"title": ""
},
{
"docid": "4264c3ed6ea24a896377a7efa2b425b0",
"text": "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation procedure to compare the performance of different methods. We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration. Lei Tang, Xufei Wang, Huan Liu Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA E-mail: {L.Tang, Xufei.Wang, [email protected]} Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE 2010 2. REPORT TYPE 3. DATES COVERED 00-00-2010 to 00-00-2010 4. TITLE AND SUBTITLE Community Detection in Multi-Dimensional Networks 5a. CONTRACT NUMBER",
"title": ""
},
{
"docid": "cac7822c1a40b406c998449e2664815f",
"text": "This paper demonstrates the possibility and feasibility of an ultralow-cost antenna-in-package (AiP) solution for the upcoming generation of wireless local area networks (WLANs) denoted as IEEE802.11ad. The iterative design procedure focuses on maximally alleviating the inherent disadvantages of high-volume FR4 process at 60 GHz such as its relatively high material loss and fabrication restrictions. Within the planar antenna package, the antenna element, vertical transition, antenna feedline, and low- and high-speed interfaces are allocated in a vertical schematic. A circular stacked patch antenna renders the antenna package to exhibit 10-dB return loss bandwidth from 57-66 GHz. An embedded coplanar waveguide (CPW) topology is adopted for the antenna feedline and features less than 0.24 dB/mm in unit loss, which is extracted from measured parametric studies. The fabricated single antenna package is 9 mm × 6 mm × 0.404 mm in dimension. A multiple-element antenna package is fabricated, and its feasibility for future phase array applications is studied. Far-field radiation measurement using an inhouse radio-frequency (RF) probe station validates the single-antenna package to exhibit more than 4.1-dBi gain and 76% radiation efficiency.",
"title": ""
},
{
"docid": "adf1cfe981d965f95e783e1f4ed5fc5d",
"text": "PAST research has shown that real-time Twitter data can be used to predict market movement of securities and other financial instruments [1]. The goal of this paper is to prove whether Twitter data relating to cryptocurrencies can be utilized to develop advantageous crypto coin trading strategies. By way of supervised machine learning techniques, our team will outline several machine learning pipelines with the objective of identifying cryptocurrency market movement. The prominent alternative currency examined in this paper is Bitcoin (BTC). Our approach to cleaning data and applying supervised learning algorithms such as logistic regression, Naive Bayes, and support vector machines leads to a final hour-to-hour and day-to-day prediction accuracy exceeding 90%. In order to achieve this result, rigorous error analysis is employed in order to ensure that accurate inputs are utilized at each step of the model. This analysis yields a 25% accuracy increase on average.",
"title": ""
},
{
"docid": "b0741999659724f8fa5dc1117ec86f0d",
"text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "50d22974ef09d0f02ee05d345e434055",
"text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.",
"title": ""
},
{
"docid": "6f709e89edaa619f41335b1a06eb713a",
"text": "Graphene patch microstrip antenna has been investigated for 600 GHz applications. The graphene material introduces a reconfigurable surface conductivity in terahertz frequency band. The input impedance is calculated using the finite integral technique. A five-lumped elements equivalent circuit for graphene patch microstrip antenna has been investigated. The values of the lumped elements equivalent circuit are optimized using the particle swarm optimization techniques. The optimization is performed to minimize the mean square error between the input impedance of the finite integral technique and that calculated by the equivalent circuit model. The effect of varying the graphene material chemical potential and relaxation time on the radiation characteristics of the graphene patch microstrip antenna has been investigated. An improved new equivalent circuit model has been introduced to best fitting the input impedance using a rational function and PSO. The Cauer's realization method is used to synthesize a new lumped-elements equivalent circuits.",
"title": ""
},
{
"docid": "d73d16ff470669b4935e85e2de815cb8",
"text": "As organizations aggressively deploy radio frequency identification systems, activists are increasingly concerned about RFID's potential to invade user privacy. This overview highlights potential threats and how they might be addressed using both technology and public policy.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "ffdfd49ad9216806ab6a6bf156bb5a87",
"text": "PDDL+ is an extension of PDDL that enables modelling planning domains with mixed discrete-continuous dynamics. In this paper we present a new approach to PDDL+ planning based on Constraint Answer Set Programming (CASP), i.e. ASP rules plus numerical constraints. To the best of our knowledge, ours is the first attempt to link PDDL+ planning and logic programming. We provide an encoding of PDDL+ models into CASP problems. The encoding can handle non-linear hybrid domains, and represents a solid basis for applying logic programming to PDDL+ planning. As a case study, we consider the EZCSP CASP solver and obtain promising results on a set of PDDL+ benchmark problems.",
"title": ""
},
{
"docid": "d5a816dd44d4d95b0d281880f1917831",
"text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.",
"title": ""
},
{
"docid": "26a9fb64389a5dbbbd8afdc6af0b6f07",
"text": "specifications of the essential structure of a system. Models in the analysis or preliminary design stages focus on the key concepts and mechanisms of the eventual system. They correspond in certain ways with the final system. But details are missing from the model, which must be added explicitly during the design process. The purpose of the abstract models is to get the high-level pervasive issues correct before tackling the more localized details. These models are intended to be evolved into the final models by a careful process that guarantees that the final system correctly implements the intent of the earlier models. There must be traceability from these essential models to the full models; otherwise, there is no assurance that the final system correctly incorporates the key properties that the essential model sought to show. Essential models focus on semantic intent. They do not need the full range of implementation options. Indeed, low-level performance distinctions often obscure the logical semantics. The path from an essential model to a complete implementation model must be clear and straightforward, however, whether it is generated automatically by a code generator or evolved manually by a designer. Full specifications of a final system. An implementation model includes enough information to build the system. It must include not only the logical semantics of the system and the algorithms, data structures, and mechanisms that ensure proper performance, but also organizational decisions about the system artifacts that are necessary for cooperative work by humans and processing by tools. This kind of model must include constructs for packaging the model for human understanding and for computer convenience. These are not properties of the target application itself. Rather, they are properties of the construction process. Exemplars of typical or possible systems. Well-chosen examples can give insight to humans and can validate system specifications and implementations. Even a large Chapter 2 • The Nature and Purpose of Models 17 collection of examples, however, necessarily falls short of a definitive description. Ultimately, we need models that specify the general case; that is what a program is, after all. Examples of typical data structures, interaction sequences, or object histories can help a human trying to understand a complicated situation, however. Examples must be used with some care. It is logically impossible to induce the general case from a set of examples, but well-chosen prototypes are the way most people think. An example model includes instances rather than general descriptors. It therefore tends to have a different feel than a generic descriptive model. Example models usually use only a subset of the UML constructs, those that deal with instances. Both descriptive models and exemplar models are useful in modeling a system. Complete or partial descriptions of systems. A model can be a complete description of a single system with no outside references. More often, it is organized as a set of distinct, discrete units, each of which may be stored and manipulated separately as a part of the entire description. Such models have “loose ends” that must be bound to other models in a complete system. Because the pieces have coherence and meaning, they can be combined with other pieces in various ways to produce many different systems. Achieving reuse is an important goal of good modeling. Models evolve over time. 
Models with greater degrees of detail are derived from more abstract models, and more concrete models are derived from more logical models. For example, a model might start as a high-level view of the entire system, with a few key services in brief detail and no embellishments. Over time, much more detail is added and variations are introduced. Also over time, the focus shifts from a front-end, user-centered logical view to a back-end, implementationcentered physical view. As the developers work with a system and understand it better, the model must be iterated at all levels to capture that understanding; it is impossible to understand a large system in a single, linear pass. There is no one “right” form for a model.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
}
] | scidocsrr |
4b78e80f2a680dcde17697d86ec3ba2e | Robust discrete optimization and network flows | [
{
"docid": "d53726710ce73fbcf903a1537f149419",
"text": "We treat in this paper Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever is the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereas an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem via solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
}
] | [
{
"docid": "a33ed384b8f4a86e8cc82970c7074bad",
"text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.",
"title": ""
},
{
"docid": "43e39433013ca845703af053e5ef9e11",
"text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.",
"title": ""
},
{
"docid": "c26ff98ac6cc027b07fec213a192a446",
"text": "Basic to all motile life is a differential approach/avoid response to perceived features of environment. The stages of response are initial reflexive noticing and orienting to the stimulus, preparation, and execution of response. Preparation involves a coordination of many aspects of the organism: muscle tone, posture, breathing, autonomic functions, motivational/emotional state, attentional orientation, and expectations. The organism organizes itself in relation to the challenge. We propose to call this the \"preparatory set\" (PS). We suggest that the concept of the PS can offer a more nuanced and flexible perspective on the stress response than do current theories. We also hypothesize that the mechanisms of body-mind therapeutic and educational systems (BTES) can be understood through the PS framework. We suggest that the BTES, including meditative movement, meditation, somatic education, and the body-oriented psychotherapies, are approaches that use interventions on the PS to remedy stress and trauma. We discuss how the PS can be adaptive or maladaptive, how BTES interventions may restore adaptive PS, and how these concepts offer a broader and more flexible view of the phenomena of stress and trauma. We offer supportive evidence for our hypotheses, and suggest directions for future research. We believe that the PS framework will point to ways of improving the management of stress and trauma, and that it will suggest directions of research into the mechanisms of action of BTES.",
"title": ""
},
{
"docid": "1d348cc6b7a98dc5b61c66af9e94153c",
"text": "With the rapid development of web technology, an increasing number of enterprises having been seeking for a method to facilitate business decision making process, power the bottom line, and achieve a fully coordinated organization, called business intelligence (BI). Unfortunately, traditional BI tends to be unmanageable, risky and prohibitively expensive, especially for Small and Medium Enterprises (SMEs). The emergence of cloud computing and Software as a Service (SaaS) provides a cost effective solution. Recently, business intelligence applications delivered via SaaS, termed as Business Intelligence as a Service (SaaS BI), has proved to be the next generation in BI market. However, since SaaS BI just comes in its infant stage, a general framework maybe poorly considered. Therefore, in this paper we proposed a general conceptual framework for SaaS BI, and presented several possible future directions of SaaS BI.",
"title": ""
},
{
"docid": "89b92204d76120bef660eb55303752d2",
"text": "In this paper, we present a design of RTE template structure for AUTOSAR-based vehicle applications. Due to an increase in software complexity in recent years, much greater efforts are necessary to manage and develop software modules in automotive industries. To deal with this issue, an automotive Open system architecture (AUTOSAR) partnership was launched. This embedded platform standardizes software architectures and provides a methodology supporting distributed process. The RTE that is located at the heart of AUTOSAR implements the virtual function bus functionality for a particle electronic control unit (ECU). It enables to communicate between application software components. The purpose of this paper is to design a RTE structure with the AUTOSAR standard concept. As a future work, this research will be further extended, and we will propose the development of a RTE generator that is drawn on an AUTOSAR RTE template structure.",
"title": ""
},
{
"docid": "51f2ba8b460be1c9902fb265b2632232",
"text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.",
"title": ""
},
{
"docid": "0f56b99bc1d2c9452786c05242c89150",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "3eb419ef59ad59e60bf357cfb2e69fba",
"text": "Heterogeneous information network (HIN) has been widely adopted in recommender systems due to its excellence in modeling complex context information. Although existing HIN based recommendation methods have achieved performance improvement to some extent, they have two major shortcomings. First, these models seldom learn an explicit representation for path or meta-path in the recommendation task. Second, they do not consider the mutual effect between the meta-path and the involved user-item pair in an interaction. To address these issues, we develop a novel deep neural network with the co-attention mechanism for leveraging rich meta-path based context for top-N recommendation. We elaborately design a three-way neural interaction model by explicitly incorporating meta-path based context. To construct the meta-path based context, we propose to use a priority based sampling technique to select high-quality path instances. Our model is able to learn effective representations for users, items and meta-path based context for implementing a powerful interaction function. The co-attention mechanism improves the representations for meta-path based con- text, users and items in a mutual enhancement way. Extensive experiments on three real-world datasets have demonstrated the effectiveness of the proposed model. In particular, the proposed model performs well in the cold-start scenario and has potentially good interpretability for the recommendation results.",
"title": ""
},
{
"docid": "a219afda822413bbed34a21145807b47",
"text": "In this work, the author implemented a NOVEL technique of multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) based on space frequency block coding (SF-BC). Where, the implemented code is designed based on the QOC using the techniques of the reconfigurable antennas. The proposed system is implemented using MATLAB program, and the results showing best performance of a wireless communications system of higher coding gain and diversity.",
"title": ""
},
{
"docid": "c67fbc6e0a2a66e0855dcfc7a70cfb86",
"text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.",
"title": ""
},
{
"docid": "5eeb17964742e1bf1e517afcb1963b02",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "3980da6e0c81bf029bbada09d7ea59e3",
"text": "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10%, compared to that obtained using the perfect F-CSI.",
"title": ""
},
{
"docid": "122bc83bcd27b95092c64cf1ad8ee6a8",
"text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
{
"docid": "6f8a2292749aaae6add667d153e3abbd",
"text": "Despite a flurry of activities aimed at serving customers better, few companies have systematically revamped their operations with customer loyalty in mind. Instead, most have adopted improvement programs ad hoc, and paybacks haven't materialized. Building a highly loyal customer base must be integral to a company's basic business strategy. Loyalty leaders like MBNA credit cards are successful because they have designed their entire business systems around customer loyalty--a self-reinforcing system in which the company delivers superior value consistently and reinvents cash flows to find and keep high-quality customers and employees. The economic benefits of high customer loyalty are measurable. When a company consistently delivers superior value and wins customer loyalty, market share and revenues go up, and the cost of acquiring new customers goes down. The better economics mean the company can pay workers better, which sets off a whole chain of events. Increased pay boosts employee moral and commitment; as employees stay longer, their productivity goes up and training costs fall; employees' overall job satisfaction, combined with their experience, helps them serve customers better; and customers are then more inclined to stay loyal to the company. Finally, as the best customers and employees become part of the loyalty-based system, competitors are left to survive with less desirable customers and less talented employees. To compete on loyalty, a company must understand the relationships between customer retention and the other parts of the business--and be able to quantify the linkages between loyalty and profits. It involves rethinking and aligning four important aspects of the business: customers, product/service offering, employees, and measurement systems.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: [email protected] Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
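As a brief arithmetic aside on the averaging argument in the pH passage above: the sketch below is not part of the passage, and the sample pH readings are invented for illustration. It contrasts direct averaging of pH values with averaging the back-transformed [H+] values, showing how the latter is pulled toward the most acidic reading, which is the distortion by extreme values that the authors describe.

```python
import math

# Hypothetical pond pH readings; one extreme (high-pH) value is included on purpose.
ph_values = [6.5, 7.0, 7.2, 7.4, 9.5]

# Approach 1: average the pH values directly.
mean_ph_direct = sum(ph_values) / len(ph_values)

# Approach 2: convert to hydrogen ion concentrations, average, convert back.
h_concentrations = [10 ** (-ph) for ph in ph_values]   # [H+] in mol/L
mean_h = sum(h_concentrations) / len(h_concentrations)
mean_ph_from_h = -math.log10(mean_h)

print(f"Direct mean pH:        {mean_ph_direct:.2f}")   # ~7.52
print(f"Mean pH via mean [H+]: {mean_ph_from_h:.2f}")   # ~6.98, dominated by the most acidic reading
```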
{
"docid": "70fafdedd05a40db5af1eabdf07d431c",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
},
{
"docid": "f141bd66dc2a842c21f905e3e01fa93c",
"text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature",
"title": ""
},
{
"docid": "0774820345f37dd1ae474fc4da1a3a86",
"text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.",
"title": ""
},
{
"docid": "1c9c30e3e007c2d11c6f5ebd0092050b",
"text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.",
"title": ""
}
] | scidocsrr |
a521c595f1f154661d9c236149a82c09 | Evaluation of the ARTutor augmented reality educational platform in tertiary education | [
{
"docid": "1cf3ee00f638ca44a3b9772a2df60585",
"text": "Navigation has been a popular area of research in both academia and industry. Combined with maps, and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user friendly. This paper surveys existing researches carried out in this area, describes existing techniques for building augmented reality navigation systems, and the problems faced.",
"title": ""
}
] | [
{
"docid": "4a9174770b8ca962e8135fbda0e41425",
"text": "is published by Princeton University Press and copyrighted, © 2006, by Princeton University Press. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World Wide Web. Users are not permitted to mount this file on any network servers.",
"title": ""
},
{
"docid": "42db53797dc57cfdb7f963c55bb7f039",
"text": "Vast amounts of artistic data is scattered on-line from both museums and art applications. Collecting, processing and studying it with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We continue to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we propose a challenge like nature to the new aggregated data set with almost half a million samples and structuredmeta-data to encourage further research and societal engagement. ACM Reference format: Gjorgji Strezoski and Marcel Worring. 2017. OmniArt: Multi-task Deep Learning for Artistic Data Analysis.",
"title": ""
},
{
"docid": "a5e8a45d3de85acfd4192a0d096be7c4",
"text": "Fog computing, as a promising technique, is with huge advantages in dealing with large amounts of data and information with low latency and high security. We introduce a promising multiple access technique entitled nonorthogonal multiple access to provide communication service between the fog layer and the Internet of Things (IoT) device layer in fog computing, and propose a dynamic cooperative framework containing two stages. At the first stage, dynamic IoT device clustering is solved to reduce the system complexity and the delay for the IoT devices with better channel conditions. At the second stage, power allocation based energy management is solved using Nash bargaining solution in each cluster to ensure fairness among IoT devices. Simulation results reveal that our proposed scheme can simultaneously achieve higher spectrum efficiency and ensure fairness among IoT devices compared to other schemes.",
"title": ""
},
{
"docid": "1fcff5c3a30886780734bf9765b629af",
"text": "Automatic video captioning is challenging due to the complex interactions in dynamic real scenes. A comprehensive system would ultimately localize and track the objects, actions and interactions present in a video and generate a description that relies on temporal localization in order to ground the visual concepts. However, most existing automatic video captioning systems map from raw video data to high level textual description, bypassing localization and recognition, thus discarding potentially valuable information for content localization and generalization. In this work we present an automatic video captioning model that combines spatio-temporal attention and image classification by means of deep neural network structures based on long short-term memory. The resulting system is demonstrated to produce state-of-the-art results in the standard YouTube captioning benchmark while also offering the advantage of localizing the visual concepts (subjects, verbs, objects), with no grounding supervision, over space and time.",
"title": ""
},
{
"docid": "c42d1ee7a6b947e94eeb6c772e2b638f",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "12b3918c438f3bed74d85cf643f613c9",
"text": "Knowledge extraction from unstructured data is a challenging research problem in research domain of Natural Language Processing (NLP). It requires complex NLP tasks like entity extraction and Information Extraction (IE), but one of the most challenging tasks is to extract all the required entities of data in the form of structured format so that data analysis can be applied. Our focus is to explain how the data is extracted in the form of datasets or conventional database so that further text and data analysis can be carried out. This paper presents a framework for Hadith data extraction from the Hadith authentic sources. Hadith is the collection of sayings of Holy Prophet Muhammad, who is the last holy prophet according to Islamic teachings. This paper discusses the preparation of the dataset repository and highlights issues in the relevant research domain. The research problem and their solutions of data extraction, preprocessing and data analysis are elaborated. The results have been evaluated using the standard performance evaluation measures. The dataset is available in multiple languages, multiple formats and is available free of cost for research purposes. Keywords—Data extraction; preprocessing; regex; Hadith; text analysis; parsing",
"title": ""
},
{
"docid": "7a72f69ad4926798e12f6fa8e598d206",
"text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"title": ""
},
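As an illustrative aside on the atrous (dilated) convolution idea mentioned in the abstract above: the following minimal 1-D sketch is plain Python, not the DeepLabv3 implementation; the function name, kernel, and rates are invented for illustration. It shows how spacing the kernel taps by a dilation rate enlarges the field of view without adding weights, which is the property the abstract relies on for multi-scale context.

```python
def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution: taps are spaced `rate` samples apart,
    enlarging the field of view without increasing the number of weights."""
    span = (len(kernel) - 1) * rate          # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * signal[i + k * rate]  # skip (rate - 1) samples between taps
        out.append(acc)
    return out

signal = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
kernel = [1.0, 0.0, -1.0]                    # simple difference-like filter

print(atrous_conv1d(signal, kernel, rate=1))  # field of view = 3 samples
print(atrous_conv1d(signal, kernel, rate=2))  # field of view = 5 samples, same 3 weights
```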
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "ae9bdb80a60dd6820c1c9d9557a73ffc",
"text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.",
"title": ""
},
{
"docid": "bd0b0cef8ef780a44ad92258ac705395",
"text": "This chapter introduces some of the theoretical foundations of swarm intelligence. We focus on the design and implementation of the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms for various types of function optimization problems, real world applications and data mining. Results are analyzed, discussed and their potentials are illustrated.",
"title": ""
},
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "ca1c193e5e5af821772a5d123e84b72a",
"text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.",
"title": ""
},
{
"docid": "c13aff70c3b080cfd5d374639e5ec0e9",
"text": "Contemporary vehicles are getting equipped with an increasing number of Electronic Control Units (ECUs) and wireless connectivities. Although these have enhanced vehicle safety and efficiency, they are accompanied with new vulnerabilities. In this paper, we unveil a new important vulnerability applicable to several in-vehicle networks including Control Area Network (CAN), the de facto standard in-vehicle network protocol. Specifically, we propose a new type of Denial-of-Service (DoS), called the bus-off attack, which exploits the error-handling scheme of in-vehicle networks to disconnect or shut down good/uncompromised ECUs. This is an important attack that must be thwarted, since the attack, once an ECU is compromised, is easy to be mounted on safety-critical ECUs while its prevention is very difficult. In addition to the discovery of this new vulnerability, we analyze its feasibility using actual in-vehicle network traffic, and demonstrate the attack on a CAN bus prototype as well as on two real vehicles. Based on our analysis and experimental results, we also propose and evaluate a mechanism to detect and prevent the bus-off attack.",
"title": ""
},
{
"docid": "f70949a29dbac974a7559684d1da3ebb",
"text": "Cybercrime is all about the crimes in which communication channel and communication device has been used directly or indirectly as a medium whether it is a Laptop, Desktop, PDA, Mobile phones, Watches, Vehicles. The report titled “Global Risks for 2012”, predicts cyber-attacks as one of the top five risks in the World for Government and business sector. Cyber crime is a crime which is harder to detect and hardest to stop once occurred causing a long term negative impact on victims. With the increasing popularity of online banking, online shopping which requires sensitive personal and financial data, it is a term that we hear in the news with some frequency. Now, in order to protect ourselves from this crime we need to know what it is and how it does works against us. This paper presents a brief overview of all about cyber criminals and crime with its evolution, types, case study, preventive majors and the department working to combat this crime.",
"title": ""
},
{
"docid": "807cd6adc45a2adb7943c5a0fb5baa94",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
},
{
"docid": "edf8d1bb84c0845dddad417a939e343b",
"text": "Suicides committed by intraorally placed firecrackers are rare events. Given to the use of more powerful components such as flash powder recently, some firecrackers may cause massive life-threatening injuries in case of such misuse. Innocuous black powder firecrackers are subject to national explosives legislation and only have the potential to cause harmless injuries restricted to the soft tissue. We here report two cases of suicide committed by an intraoral placement of firecrackers, resulting in similar patterns of skull injury. As it was first unknown whether black powder firecrackers can potentially cause serious skull injury, we compared the potential of destruction using black powder and flash powder firecrackers in a standardized skull simulant model (Synbone, Malans, Switzerland). This was the first experiment to date simulating the impacts resulting from an intraoral burst in a skull simulant model. The intraoral burst of a “D-Böller” (an example of one of the most powerful black powder firecrackers in Germany) did not lead to any injuries of the osseous skull. In contrast, the “La Bomba” (an example of the weakest known flash powder firecrackers) caused complex fractures of both the viscero- and neurocranium. The results obtained from this experimental study indicate that black powder firecrackers are less likely to cause severe injuries as a consequence of intraoral explosions, whereas flash powder-based crackers may lead to massive life-threatening craniofacial destructions and potentially death.",
"title": ""
},
{
"docid": "9709064022fd1ab5ef145ce58d2841c1",
"text": "Enforcing security in Internet of Things environments has been identified as one of the top barriers for realizing the vision of smart, energy-efficient homes and buildings. In this context, understanding the risks related to the use and potential misuse of information about homes, partners, and end-users, as well as, forming methods for integrating security-enhancing measures in the design is not straightforward and thus requires substantial investigation. A risk analysis applied on a smart home automation system developed in a research project involving leading industrial actors has been conducted. Out of 32 examined risks, 9 were classified as low and 4 as high, i.e., most of the identified risks were deemed as moderate. The risks classified as high were either related to the human factor or to the software components of the system. The results indicate that with the implementation of standard security features, new, as well as, current risks can be minimized to acceptable levels albeit that the most serious risks, i.e., those derived from the human factor, need more careful consideration, as they are inherently complex to handle. A discussion of the implications of the risk analysis results points to the need for a more general model of security and privacy included in the design phase of smart homes. With such a model of security and privacy in design in place, it will contribute to enforcing system security and enhancing user privacy in smart homes, and thus helping to further realize the potential in such IoT environments.",
"title": ""
},
{
"docid": "a76d5685b383e45778417d5eccdd8b6c",
"text": "The advent of both Cloud computing and Internet of Things (IoT) is changing the way of conceiving information and communication systems. Generally, we talk about IoT Cloud to indicate a new type of distributed system consisting of a set of smart devices interconnected with a remote Cloud infrastructure, platform, or software through the Internet and able to provide IoT as a Service (IoTaaS). In this paper, we discuss the near future evolution of IoT Clouds towards federated ecosystems, where IoT providers cooperate to offer more flexible services. Moreover, we present a general three-layer IoT Cloud Federation architecture, highlighting new business opportunities and challenges.",
"title": ""
},
{
"docid": "67509b64aaf1ead0bcba557d8cfe84bc",
"text": "Base on innovation resistance theory, this research builds the model of factors affecting consumers' resistance in using online travel in Thailand. Through the questionnaires and the SEM methods, empirical analysis results show that functional barriers are even greater sources of resistance to online travel website than psychological barriers. Online experience and independent travel experience have significantly influenced on consumer innovation resistance. Social influence plays an important role in this research.",
"title": ""
}
] | scidocsrr |
5ae31b1f3def0f1716b2d78df9aa3b18 | Malacology: A Programmable Storage System | [
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
}
] | [
{
"docid": "8583ec1f457469dac3e2517a90a58423",
"text": "Sentiment analysis is the automatic classification of the overall opinion conveyed by a text towards its subject matter. This paper discusses an experiment in the sentiment analysis of of a collection of movie reviews that have been automatically translated to Indonesian. Following [1], we employ three well known classification techniques: naive bayes, maximum entropy, and support vector machines, employing unigram presence and frequency values as the features. The translation is achieved through machine translation and simple word substitutions based on a bilingual dictionary constructed from various online resources. Analysis of the Indonesian translations yielded an accuracy of up to 78.82%, still short of the accuracy for the English documents (80.09%), but satisfactorily high given the simple translation approach.",
"title": ""
},
{
"docid": "f6f00f03f195b1f03f15fffba2ccd199",
"text": "BACKGROUND\nRecent estimates concerning the prevalence of autistic spectrum disorder are much higher than those reported 30 years ago, with at least 1 in 400 children affected. This group of children and families have important service needs. The involvement of parents in implementing intervention strategies designed to help their autistic children has long been accepted as helpful. The potential benefits are increased skills and reduced stress for parents as well as children.\n\n\nOBJECTIVES\nThe objective of this review was to determine the extent to which parent-mediated early intervention has been shown to be effective in the treatment of children aged 1 year to 6 years 11 months with autistic spectrum disorder. In particular, it aimed to assess the effectiveness of such interventions in terms of the benefits for both children and their parents.\n\n\nSEARCH STRATEGY\nA range of psychological, educational and biomedical databases were searched. Bibliographies and reference lists of key articles were searched, field experts were contacted and key journals were hand searched.\n\n\nSELECTION CRITERIA\nOnly randomised or quasi-randomised studies were included. Study interventions had a significant focus on parent-implemented early intervention, compared to a group of children who received no treatment, a waiting list group or a different form of intervention. There was at least one objective, child related outcome measure.\n\n\nDATA COLLECTION AND ANALYSIS\nAppraisal of the methodological quality of included studies was carried out independently by two reviewers. Differences between the included studies in terms of the type of intervention, the comparison groups used and the outcome measures were too great to allow for direct comparison.\n\n\nMAIN RESULTS\nThe results of this review are based on data from two studies. Two significant results were found to favour parent training in one study: child language and maternal knowledge of autism. In the other, intensive intervention (involving parents, but primarily delivered by professionals) was associated with better child outcomes on direct measurement than were found for parent-mediated early intervention, but no differences were found in relation to measures of parent and teacher perceptions of skills and behaviours.\n\n\nREVIEWER'S CONCLUSIONS\nThis review has little to offer in the way of implications for practice: there were only two studies, the numbers of participants included were small, and the two studies could not be compared directly to one another. In terms of research, randomised controlled trials involving large samples need to be carried out, involving both short and long-term outcome information and full economic evaluations. Research in this area is hampered by barriers to randomisation, such as availability of equivalent services.",
"title": ""
},
{
"docid": "f285815e47ea0613fb1ceb9b69aee7df",
"text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.",
"title": ""
},
{
"docid": "a09d03e2de70774f443d2da88a32b555",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs) [1]. Brain-computer interfaces are devices that process a user’s brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted non-disabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming.",
"title": ""
},
{
"docid": "b2b4e5162b3d7d99a482f9b82820d59e",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "f13000c4870a85e491f74feb20f9b2d4",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "9696e2f6ff6e16f378ae377798ee3332",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.06.054 * Corresponding author. Address: School of Compu ogy, Beijing Jiaotong University, Beijing 100044, Chin E-mail address: [email protected] (J. Chen). As an important preprocessing technology in text classification, feature selection can improve the scalability, efficiency and accuracy of a text classifier. In general, a good feature selection method should consider domain and algorithm characteristics. As the Naïve Bayesian classifier is very simple and efficient and highly sensitive to feature selection, so the research of feature selection specially for it is significant. This paper presents two feature evaluation metrics for the Naïve Bayesian classifier applied on multiclass text datasets: Multi-class Odds Ratio (MOR), and Class Discriminating Measure (CDM). Experiments of text classification with Naïve Bayesian classifiers were carried out on two multi-class texts collections. As the results indicate, CDM and MOR gain obviously better selecting effect than other feature selection approaches. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd3a01ec1db7672e07163534c0dfb32c",
"text": "In order to dock an embedded intelligent wheelchair into a U-shape bed automatically through visual servo, this paper proposes a real-time U-shape bed localization method on an embedded vision system based on FPGA and DSP. This method locates the U-shape bed through finding its line contours. The task can be done in a parallel way with FPGA does line extraction and DSP does line contour finding. Experiments show that, the speed and precision of the U-shape bed localization method proposed in this paper which based on an embedded vision system can satisfy the needs of the system.",
"title": ""
},
{
"docid": "1d44b1a7a9581981257a786ea38d54cb",
"text": "The decoder of the sphinx-4 speech recognition system incorporates several new design strategies which have not been used earlier in conventional decoders of HMM-based large vocabulary speech recognition systems. Some new design aspects include graph construction for multilevel parallel decoding with independent simultaneous feature streams without the use of compound HMMs, the incorporation of a generalized search algorithm that subsumes Viterbi and full-forward decoding as special cases, design of generalized language HMM graphs from grammars and language models of multiple standard formats, that toggles trivially from flat search structure to tree search structure etc. This paper describes some salient design aspects of the Sphinx-4 decoder and includes preliminary performance measures relating to speed and accu-",
"title": ""
},
{
"docid": "16880162165f4c95d6b01dc4cfc40543",
"text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.",
"title": ""
},
{
"docid": "3ce1ed2dd4bacd96395b84c2b715d361",
"text": "In this paper, we present the design and fabrication of a centimeter-scale propulsion system for a robotic fish. The key to the design is selection of an appropriate actuator and a body frame that is simple and compact. SMA spring actuators are customized to provide the necessary work output for the microrobotic fish. The flexure joints, electrical wiring and attachment pads for SMA actuators are all embedded in a single layer of copper laminated polymer film, sandwiched between two layers of glass fiber. Instead of using individual actuators to rotate each joint, each actuator rotates all the joints to a certain mode shape and undulatory motion is created by a timed sequence of these mode shapes. Subcarangiform swimming mode of minnows has been emulated using five links and four actuators. The size of the four-joint propulsion system is 6 mm wide, 40 mm long with the body frame thickness of 0.25 mm.",
"title": ""
},
{
"docid": "e6662ebd9842e43bd31926ac171807ca",
"text": "INTRODUCTION\nDisruptions in sleep and circadian rhythms are observed in individuals with bipolar disorders (BD), both during acute mood episodes and remission. Such abnormalities may relate to dysfunction of the molecular circadian clock and could offer a target for new drugs.\n\n\nAREAS COVERED\nThis review focuses on clinical, actigraphic, biochemical and genetic biomarkers of BDs, as well as animal and cellular models, and highlights that sleep and circadian rhythm disturbances are closely linked to the susceptibility to BDs and vulnerability to mood relapses. As lithium is likely to act as a synchronizer and stabilizer of circadian rhythms, we will review pharmacogenetic studies testing circadian gene polymorphisms and prophylactic response to lithium. Interventions such as sleep deprivation, light therapy and psychological therapies may also target sleep and circadian disruptions in BDs efficiently for treatment and prevention of bipolar depression.\n\n\nEXPERT OPINION\nWe suggest that future research should clarify the associations between sleep and circadian rhythm disturbances and alterations of the molecular clock in order to identify critical targets within the circadian pathway. The investigation of such targets using human cellular models or animal models combined with 'omics' approaches are crucial steps for new drug development.",
"title": ""
},
{
"docid": "9e037018da3ebcd7967b9fbf07c83909",
"text": "Studying temporal dynamics of topics in social media is very useful to understand online user behaviors. Most of the existing work on this subject usually monitors the global trends, ignoring variation among communities. Since users from different communities tend to have varying tastes and interests, capturing communitylevel temporal change can improve the understanding and management of social content. Additionally, it can further facilitate the applications such as community discovery, temporal prediction and online marketing. However, this kind of extraction becomes challenging due to the intricate interactions between community and topic, and intractable computational complexity. In this paper, we take a unified solution towards the communitylevel topic dynamic extraction. A probabilistic model, CosTot (Community Specific Topics-over-Time) is proposed to uncover the hidden topics and communities, as well as capture community-specific temporal dynamics. Specifically, CosTot considers text, time, and network information simultaneously, and well discovers the interactions between community and topic over time. We then discuss the approximate inference implementation to enable scalable computation of model parameters, especially for large social data. Based on this, the application layer support for multi-scale temporal analysis and community exploration is also investigated. We conduct extensive experimental studies on a large real microblog dataset, and demonstrate the superiority of proposed model on tasks of time stamp prediction, link prediction and topic perplexity.",
"title": ""
},
{
"docid": "d1b20385d90fe1e98a07f9cf55af6adb",
"text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.awx317media15678692096001.",
"title": ""
},
{
"docid": "be43ca444001f766e14dd042c411a34f",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "be4b6d68005337457e66fcdf21a04733",
"text": "A real-time algorithm to detect eye blinks in a video sequence from a standard camera is proposed. Recent landmark detectors, trained on in-the-wild datasets exhibit excellent robustness against face resolution, varying illumination and facial expressions. We show that the landmarks are detected precisely enough to reliably estimate the level of the eye openness. The proposed algorithm therefore estimates the facial landmark positions, extracts a single scalar quantity – eye aspect ratio (EAR) – characterizing the eye openness in each frame. Finally, blinks are detected either by an SVM classifier detecting eye blinks as a pattern of EAR values in a short temporal window or by hidden Markov model that estimates the eye states followed by a simple state machine recognizing the blinks according to the eye closure lengths. The proposed algorithm has comparable results with the state-of-the-art methods on three standard datasets.",
"title": ""
},
{
"docid": "a7a51eb9cb434a581eac782da559094b",
"text": "An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only “entry point” to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.",
"title": ""
},
{
"docid": "b18af8eaea5dd26cdac5d13c7aa6ce4c",
"text": "Botnet is one of the most serious threats to cyber security as it provides a distributed platform for several illegal activities. Regardless of the availability of numerous methods proposed to detect botnets, still it is a challenging issue as botmasters are continuously improving bots to make them stealthier and evade detection. Most of the existing detection techniques cannot detect modern botnets in an early stage, or they are specific to command and control protocol and structures. In this paper, we propose a novel approach to detect botnets irrespective of their structures, based on network traffic flow behavior analysis and machine learning techniques. The experimental evaluation of the proposed method with real-world benchmark datasets shows the efficiency of the method. Also, the system is able to identify the new botnets with high detection accuracy and low false positive rate. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | scidocsrr |
42307349e98184277b54859bda8bc971 | Towards an Ontology-Driven Blockchain Design for Supply Chain Provenance | [
{
"docid": "a7111544f5240a8f42c7b564b4f8e292",
"text": "increasingly agile and integrated across their functions. Enterprise models play a critical role in this integration, enabling better designs for enterprises, analysis of their performance, and management of their operations. This article motivates the need for enterprise models and introduces the concepts of generic and deductive enterprise models. It reviews research to date on enterprise modeling and considers in detail the Toronto virtual enterprise effort at the University of Toronto.",
"title": ""
}
] | [
{
"docid": "2f3734b49e9d2e6ea7898622dac8a296",
"text": "Dropout prediction in MOOCs is a well-researched problem where we classify which students are likely to persist or drop out of a course. Most research into creating models which can predict outcomes is based on student engagement data. Why these students might be dropping out has only been studied through retroactive exit surveys. This helps identify an important extension area to dropout prediction— how can we interpret dropout predictions at the student and model level? We demonstrate how existing MOOC dropout prediction pipelines can be made interpretable, all while having predictive performance close to existing techniques. We explore each stage of the pipeline as design components in the context of interpretability. Our end result is a layer which longitudinally interprets both predictions and entire classification models of MOOC dropout to provide researchers with in-depth insights of why a student is likely to dropout.",
"title": ""
},
{
"docid": "bd0691351920e8fa74c8197b9a4e91e0",
"text": "Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties until nowadays, which constitute a wide progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation has been in turn subdivided in metric map-based navigation and topological map-based navigation. Our outline to mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, features tracking, plane ground detection/tracking, etc... The recent concept of visual sonar has also been revised.",
"title": ""
},
{
"docid": "0d9c60a9cdd5809e02da5ba660ba3c65",
"text": "In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and performing color consistency between stereo images are a chicken-and-egg problem since it is not a trivial task to simultaneously achieve both goals. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to log-chromaticity color space, from which a linear relationship can be established during constructing a joint pdf of transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), SIFT descriptor, and segment-based plane-fitting to robustly find correspondence for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boost the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.",
"title": ""
},
{
"docid": "7b99f2b0c903797c5ed33496f69481fc",
"text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.",
"title": ""
},
{
"docid": "fb03575e346b527560f42db53b5b736e",
"text": "We have demonstrated a RLC matched GaN HEMT power amplifier with 12 dB gain, 0.05-2.0 GHz bandwidth, 8 W CW output power and 36.7-65.4% drain efficiency over the band. The amplifier is packaged in a ceramic S08 package and contains a GaN on SiC device operating at 28 V drain voltage, alongside GaAs integrated passive matching circuitry. A second circuit designed for 48 V operation and 15 W CW power over the same band, obtains over 20 W under pulsed condition with 10% duty cycle and 100 mus pulse width. CW measurements are pending after assembly in an alternate high power package. These amplifiers are suitable for use in wideband digital cellular infrastructure, handheld radios, and jamming applications.",
"title": ""
},
{
"docid": "5bdaaf3735bcd39dd66a8bea79105a95",
"text": "Retailing, and particularly fashion retailing, is changing into a much more technology driven business model using omni-channel retailing approaches. Also analytical and data-driven marketing is on the rise. However, there has not been paid a lot of attention to the underlying and underpinning datastructures, the characteristics for fashion retailing, the relationship between static and dynamic data, and the governance of this. This paper is analysing and discussing the data dimension of fashion retailing with focus on data-model development, master data management and the impact of this on business development in the form of increased operational effectiveness, better adaptation the omni-channel environment and improved alignment between the business strategy and the supporting data. The paper presents a case study of a major European fashion retail and wholesale company that is in the process of reorganising its master data model and master data governance to remove silos of data, connect and utilise data across business processes, and design a global product master data database that integrates data for all existing and expected sales channels. As a major finding of this paper is fashion retailing needs more strict master data governance than general retailing as products are plenty, designed products are not necessarily marketed, and product life-cycles generally are short.",
"title": ""
},
{
"docid": "9a7016a02eda7fcae628197b0625832b",
"text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.",
"title": ""
},
{
"docid": "0f78628c309cc863680d60dd641cb7f0",
"text": "A systematic review was conducted to evaluate whether chocolate or its constituents were capable of influencing cognitive function and/or mood. Studies investigating potentially psychoactive fractions of chocolate were also included. Eight studies (in six articles) met the inclusion criteria for assessment of chocolate or its components on mood, of which five showed either an improvement in mood state or an attenuation of negative mood. Regarding cognitive function, eight studies (in six articles) met the criteria for inclusion, of which three revealed clear evidence of cognitive enhancement (following cocoa flavanols and methylxanthine). Two studies failed to demonstrate behavioral benefits but did identify significant alterations in brain activation patterns. It is unclear whether the effects of chocolate on mood are due to the orosensory characteristics of chocolate or to the pharmacological actions of chocolate constituents. Two studies have reported acute cognitive effects of supplementation with cocoa polyphenols. Further exploration of the effect of chocolate on cognitive facilitation is recommended, along with substantiation of functional brain changes associated with the components of cocoa.",
"title": ""
},
{
"docid": "4de49becf20e255d56e7faaa158ddf07",
"text": "Given a fixed budget and an arbitrary cost for selecting each node, the budgeted influence maximization (BIM) problem concerns selecting a set of seed nodes to disseminate some information that maximizes the total number of nodes influenced (termed as influence spread) in social networks at a total cost no more than the budget. Our proposed seed selection algorithm for the BIM problem guarantees an approximation ratio of (1-1/√e). The seed selection algorithm needs to calculate the influence spread of candidate seed sets, which is known to be #P-complex. Identifying the linkage between the computation of marginal probabilities in Bayesian networks and the influence spread, we devise efficient heuristic algorithms for the latter problem. Experiments using both large-scale social networks and synthetically generated networks demonstrate superior performance of the proposed algorithm with moderate computation costs. Moreover, synthetic datasets allow us to vary the network parameters and gain important insights on the impact of graph structures on the performance of different algorithms.",
"title": ""
},
{
"docid": "16ad31562105693691e7d4b016c35522",
"text": "Steganalysis is the art of detecting the message's existence and blockading the covert communication. Various steganography techniques have been proposed in literature. The Least Significant Bit (LSB) steganography is one such technique in which least significant bit of the image is replaced with data bit. As this method is vulnerable to steganalysis so as to make it more secure we encrypt the raw data before embedding it in the image. Though the encryption process increases the time complexity, but at the same time provides higher security also. This paper uses two popular techniques Rivest, Shamir, Adleman (RSA) algorithm and Diffie Hellman algorithm to encrypt the data. The result shows that the use of encryption in Steganalysis does not affect the time complexity if Diffie Hellman algorithm is used in stead of RSA algorithm.",
"title": ""
},
{
"docid": "cd6e70b655db88b3a1eb3d810db511e8",
"text": "We develop a profit-maximizing neoclassical model of optimal firm size and growth across different industries based on differences in industry fundamentals and firm productivity. In the model, a conglomerate discount is consistent with profit maximization. The model predicts how conglomerate firms will allocate resources across divisions over the business cycle and how their responses to industry shocks will differ from those of single-segment firms. Using plant level data, we find that growth and investment of conglomerate and single-segment firms is related to fundamental industry factors and individual segment level productivity. The majority of conglomerate firms exhibit growth across industry segments that is consistent with optimal behavior. SEVERAL RECENT ACADEMIC PAPERS and the business press claim that conglomerate firms destroy value and do a poor job of investing across business segments.1 Explanations for this underperformance share the idea that there is an imperfection either in firm governance ~agency theory! or in financial markets ~incorrect valuation of firm industry segments!. These studies implicitly assume that the conglomerates and single-industry firms possess similar ability to compete, and that they differ mainly in that conglomerates * Robert H. Smith School of Business, The University of Maryland. A previous draft was circulated and presented with the title “Optimal Firm Size and the Growth of Conglomerate and Single-industry Firms.” This research was supported by NSF Grant #SBR-9709427. We wish to thank the editor, René Stulz, two anonymous referees, John Graham, Robert Hauswald, Sheridan Titman, Anjan Thakor, Haluk Unal, and seminar participants at Carnegie Mellon, Colorado, Harvard, Illinois, Indiana, Maryland, Michigan, Ohio State, the 1999 AFA meetings, the National Bureau of Economic Research and the 1998 Finance and Accounting conference at NYU for their comments. We also wish to thank researchers at the Center for Economic Studies for comments and their help with the data used in this study. The research was conducted at the Center for Economic Studies, U.S. Bureau of the Census, Department of Commerce. The authors alone are responsible for the work and any errors or omissions. 1 Lang and Stulz ~1994!, Berger and Ofek ~1995!, and Comment and Jarrell ~1995! document a conglomerate discount in the stock market and low returns to conglomerate firms. Rajan, Servaes, and Zingales ~2000! and Scharfstein ~1997! examine conglomerate investment across different business segments. Lamont ~1997! and Shin and Stulz ~1997! examine the relation of investment to cash f low for conglomerate industry segments. For an example from the business press see Deutsch ~1998!. THE JOURNAL OF FINANCE • VOL. LVII, NO. 2 • APRIL 2002",
"title": ""
},
{
"docid": "db907780a2022761d2595a8ad5d03401",
"text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.",
"title": ""
},
{
"docid": "2a0b49c9844d4b048688cbabdf6daa18",
"text": "This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",
"title": ""
},
{
"docid": "324c0fe0d57734b54dd03e468b7b4603",
"text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.",
"title": ""
},
{
"docid": "4e3628286312adf393d1f2ca0e820795",
"text": "Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. Our approach uses an encoder-decoder trained over an initial parallel corpus to build multilingual sentence representations, which are then incorporated into a new margin-based method to score, mine and filter parallel sentences. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC shared task on parallel corpus mining by more than 10 F1 points. We also improve the precision from 48.9 to 83.3 on the reconstruction of 11.3M English-French sentence pairs of the UN corpus. Finally, filtering the English-German ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version.",
"title": ""
},
{
"docid": "0b1172f46a667e2344958a1d61a11e12",
"text": "The improvement of many applications such as web search, latency reduction, and personalization/ recommendation systems depends on surfing prediction. Predicting user surfing paths involves tradeoffs between model complexity and predictive accuracy. In this paper, we combine two classification techniques, namely, the Markov model and Support Vector Machines (SVM), to resolve prediction using Dempster’s rule. Such fusion overcomes the inability of the Markov model in predicting the unseen data as well as overcoming the problem of multiclassification in the case of SVM, especially when dealing with large number of classes. We apply feature extraction to increase the power of discrimination of SVM. In addition, during prediction we employ domain knowledge to reduce the number of classifiers for the improvement of accuracy and the reduction of prediction time. We demonstrate the effectiveness of our hybrid approach by comparing our results with widely used techniques, namely, SVM, the Markov model, and association rule mining.",
"title": ""
},
{
"docid": "33915af49384d028a591d93336feffd6",
"text": "This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local/semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.",
"title": ""
},
{
"docid": "96a0e29eb5a55f71bce6d51ce0fedc7d",
"text": "This article describes a new method for assessing the effect of a given film on viewers’ brain activity. Brain activity was measured using functional magnetic resonance imaging (fMRI) during free viewing of films, and inter-subject correlation analysis (ISC) was used to assess similarities in the spatiotemporal responses across viewers’ brains during movie watching. Our results demonstrate that some films can exert considerable control over brain activity and eye movements. However, this was not the case for all types of motion picture sequences, and the level of control over viewers’ brain activity differed as a function of movie content, editing, and directing style. We propose that ISC may be useful to film studies by providing a quantitative neuroscientific assessment of the impact of different styles of filmmaking on viewers’ brains, and a valuable method for the film industry to better assess its products. Finally, we suggest that this method brings together two separate and largely unrelated disciplines, cognitive neuroscience and film studies, and may open the way for a new interdisciplinary field of “neurocinematic” studies.",
"title": ""
},
{
"docid": "41c99f4746fc299ae886b6274f899c4b",
"text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.",
"title": ""
},
{
"docid": "b08438b921f957f809cf760526f1e2bd",
"text": "Data access control is an effective way to ensure the data security in the cloud. Due to data outsourcing and untrusted cloud servers, the data access control becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is regarded as one of the most suitable technologies for data access control in cloud storage, because it gives data owners more direct control on access policies. However, it is difficult to directly apply existing CP-ABE schemes to data access control for cloud storage systems because of the attribute revocation problem. In this paper, we design an expressive, efficient and revocable data access control scheme for multi-authority cloud storage systems, where there are multiple authorities co-exist and each authority is able to issue attributes independently. Specifically, we propose a revocable multi-authority CP-ABE scheme, and apply it as the underlying techniques to design the data access control scheme. Our attribute revocation method can efficiently achieve both forward security and backward security. The analysis and simulation results show that our proposed data access control scheme is secure in the random oracle model and is more efficient than previous works.",
"title": ""
}
] | scidocsrr |
ee3980d847bd4096df0981ec2a67d508 | Recent Developments in Indian Sign Language Recognition : An Analysis | [
{
"docid": "f267b329f52628d3c52a8f618485ae95",
"text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.",
"title": ""
}
] | [
{
"docid": "ec0bc85d241f71f5511b54f107987e5a",
"text": "We present a new deep learning approach to pose-guided resynthesis of human photographs. At the heart of the new approach is the estimation of the complete body surface texture based on a single photograph. Since the input photograph always observes only a part of the surface, we suggest a new inpainting method that completes the texture of the human body. Rather than working directly with colors of texture elements, the inpainting network estimates an appropriate source location in the input image for each element of the body surface. This correspondence field between the input image and the texture is then further warped into the target image coordinate frame based on the desired pose, effectively establishing the correspondence between the source and the target view even when the pose change is drastic. The final convolutional network then uses the established correspondence and all other available information to synthesize the output image using a fully-convolutional architecture with deformable convolutions. We show stateof-the-art result for pose-guided image synthesis. Additionally, we demonstrate the performance of our system for garment transfer and pose-guided face resynthesis.",
"title": ""
},
{
"docid": "9364e07801fc01e50d0598b61ab642aa",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "92ec1f93124ddfa1faa1d7a3ab371935",
"text": "We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. For enabling this, we establish new formulations of EA variation operators, crossover and mutation, that we adapt to work on semantic networks. The algorithm employs commonsense reasoning to ensure all operations preserve the meaningfulness of the networks, using ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "e45ba62d473dd5926e6efa2778e567ca",
"text": "This contribution introduces a novel approach to cross-calibrate automotive vision and ranging sensors. The resulting sensor alignment allows the incorporation of multiple sensor data into a detection and tracking framework. Exemplarily, we show how a realtime vehicle detection system, intended for emergency breaking or ACC applications, benefits from the low level fusion of multibeam lidar and vision sensor measurements in discrimination performance and computational complexity",
"title": ""
},
{
"docid": "7012a0cb7b0270b43d733cf96ab33bac",
"text": "The evolution of Cloud computing makes the major changes in computing world as with the assistance of basic cloud computing service models like SaaS, PaaS, and IaaS an organization achieves their business goal with minimum effort as compared to traditional computing environment. On the other hand security of the data in the cloud database server is the key area of concern in the acceptance of cloud. It requires a very high degree of privacy and authentication. To protect the data in cloud database server cryptography is one of the important methods. Cryptography provides various symmetric and asymmetric algorithms to secure the data. This paper presents the symmetric cryptographic algorithm named as AES (Advanced Encryption Standard). It is based on several substitutions, permutation and transformation.",
"title": ""
},
{
"docid": "c47f53874e92c3d036538509f57d0020",
"text": "Cyber-attacks against Smart Grids have been found in the real world. Malware such as Havex and BlackEnergy have been found targeting industrial control systems (ICS) and researchers have shown that cyber-attacks can exploit vulnerabilities in widely used Smart Grid communication standards. This paper addresses a deep investigation of attacks against the manufacturing message specification of IEC 61850, which is expected to become one of the most widely used communication services in Smart Grids. We investigate how an attacker can build a custom tool to execute man-in-the-middle attacks, manipulate data, and affect the physical system. Attack capabilities are demonstrated based on NESCOR scenarios to make it possible to thoroughly test these scenarios in a real system. The goal is to help understand the potential for such attacks, and to aid the development and testing of cyber security solutions. An attack use-case is presented that focuses on the standard for power utility automation, IEC 61850 in the context of inverter-based distributed energy resource devices; especially photovoltaics (PV) generators.",
"title": ""
},
{
"docid": "61f2ffb9f8f54c71e30d5d19580ce54c",
"text": "Galleries, Libraries, Archives and Museums (short: GLAMs) around the globe are beginning to explore the potential of crowdsourcing, i. e. outsourcing specific activities to a community though an open call. In this paper, we propose a typology of these activities, based on an empirical study of a substantial amount of projects initiated by relevant cultural heritage institutions. We use the Digital Content Life Cycle model to study the relation between the different types of crowdsourcing and the core activities of heritage organizations. Finally, we focus on two critical challenges that will define the success of these collaborations between amateurs and professionals: (1) finding sufficient knowledgeable, and loyal users; (2) maintaining a reasonable level of quality. We thus show the path towards a more open, connected and smart cultural heritage: open (the data is open, shared and accessible), connected (the use of linked data allows for interoperable infrastructures, with users and providers getting more and more connected), and smart (the use of knowledge and web technologies allows us to provide interesting data to the right users, in the right context, anytime, anywhere -- both with involved users/consumers and providers). It leads to a future cultural heritage that is open, has intelligent infrastructures and has involved users, consumers and providers.",
"title": ""
},
{
"docid": "c9577cd328841c1574e18677c00731d0",
"text": "Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the“over-clustering” of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.",
"title": ""
},
{
"docid": "8ef51eeb7705a1369103a36f60268414",
"text": "Cloud computing is a new way of delivering computing resources, not a new technology. Computing services ranging from data storage and processing to software, such as email handling, are now available instantly, commitment-free and on-demand. Since we are in a time of belt-tightening, this new economic model for computing has found fertile ground and is seeing massive global investment. According to IDC’s analysis, the worldwide forecast for cloud services in 2009 will be in the order of $17.4bn. The estimation for 2013 amounts to $44.2bn, with the European market ranging from €971m in 2008 to €6,005m in 2013 .",
"title": ""
},
{
"docid": "87c3f3ab2c5c1e9a556ed6f467f613a9",
"text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors’ sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors’ sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.",
"title": ""
},
{
"docid": "ebfd9accf86d5908c12d2c3b92758620",
"text": "A circular microstrip array with beam focused for RFID applications was presented. An analogy with the optical lens and optical diffraction was made to describe the behaviour of this system. The circular configuration of the array requires less phase shift and exhibite smaller side lobe level compare to a square array. The measurement result shows a good agreement with simulation and theory. This system is a good way to increase the efficiency of RFID communication without useless power. This solution could also be used to develop RFID devices for the localization problematic. The next step of this work is to design a system with an adjustable focus length.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "3f053da8cc3faa5d0dd9e0f501d751d2",
"text": "An automatic classification system of the music genres is proposed. Based on the timbre features such as mel-frequency cepstral coefficients, the spectro-temporal features are obtained to capture the temporal evolution and variation of the spectral characteristics of the music signal. Mean, variance, minimum, and maximum values of the timbre features are calculated. Modulation spectral flatness, crest, contrast, and valley are estimated for both original spectra and timbre-feature vectors. A support vector machine (SVM) is used as a classifier where an elaborated kernel function is defined. To reduce the computational complexity, an SVM ranker is applied for feature selection. Compared with the best algorithms submitted to the music information retrieval evaluation exchange (MIREX) contests, the proposed method provides higher accuracy at a lower feature dimension for the GTZAN and ISMIR2004 databases.",
"title": ""
},
{
"docid": "784d75662234e45f78426c690356d872",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "3e6dbaf4ef18449c82e29e878fa9a8c5",
"text": "The description of a software architecture style must include the structural model of the components and their interactions, the laws governing the dynamic changes in the architecture, and the communication pattern. In our work we represent a system as a graph where hyperedges are components and nodes are ports of communication. The construction and dynamic evolut,ion of the style will be represented as context-free productions and graph rewriting. To model the evolution of the system we propose to use techniques of constraint solving. From this approach we obtain an intuitive way to model systems with nice characteristics for the description of dynamic architectures and reconfiguration and, a unique language to describe the style, model the evolution of the system and prove properties.",
"title": ""
},
{
"docid": "71706d4e580e0e5ba3f1e025e6a3b33e",
"text": "We report a novel paper-based biobattery which generates power from microorganism-containing liquid derived from renewable and sustainable wastewater which is readily accessible in the local environment. The device fuses the art of origami and the technology of microbial fuel cells (MFCs) and has the potential to shift the paradigm for flexible and stackable paper-based batteries by enabling exceptional electrical characteristics and functionalities. 3D, modular, and retractable battery stack is created from (i) 2D paper sheets through high degrees of folding and (ii) multifunctional layers sandwiched for MFC device configuration. The stack is based on ninja star-shaped origami design formed by eight MFC modular blades, which is retractable from sharp shuriken (closed) to round frisbee (opened). The microorganism-containing wastewater is added into an inlet of the closed battery stack and it is transported into each MFC module through patterned fluidic pathways in the paper layers. During operation, the battery stack is transformed into the round frisbee to connect eight MFC modules in series for improving the power output and simultaneously expose all air-cathodes to the air for their cathodic reactions. The device generates desired values of electrical current and potential for powering an LED for more than 20min.",
"title": ""
},
{
"docid": "4770ca4d8c12f949b8010e286d147802",
"text": "The clinical, radiologic, and pathologic findings in radiation injury of the brain are reviewed. Late radiation injury is the major, dose-limiting complication of brain irradiation and occurs in two forms, focal and diffuse, which differ significantly in clinical and radiologic features. Focal and diffuse injuries both include a wide spectrum of abnormalities, from subclinical changes detectable only by MR imaging to overt brain necrosis. Asymptomatic focal edema is commonly seen on CT and MR following focal or large-volume irradiation. Focal necrosis has the CT and MR characteristics of a mass lesion, with clinical evidence of focal neurologic abnormality and raised intracranial pressure. Microscopically, the lesion shows characteristic vascular changes and white matter pathology ranging from demyelination to coagulative necrosis. Diffuse radiation injury is characterized by periventricular decrease in attenuation of CT and increased signal on proton-density and T2-weighted MR images. Most patients are asymptomatic. When clinical manifestations occur, impairment of mental function is the most prominent feature. Pathologic findings in focal and diffuse radiation necrosis are similar. Necrotizing leukoencephalopathy is the form of diffuse white matter injury that follows chemotherapy, with or without irradiation. Vascular disease is less prominent and the latent period is shorter than in diffuse radiation injury; radiologic findings and clinical manifestations are similar. Late radiation injury of large arteries is an occasional cause of postradiation cerebral injury, and cerebral atrophy and mineralizing microangiopathy are common radiologic findings of uncertain clinical significance. Functional imaging by positron emission tomography can differentiate recurrent tumor from focal radiation necrosis with positive and negative predictive values for tumor of 80-90%. Positron emission tomography of the blood-brain barrier, glucose metabolism, and blood flow, together with MR imaging, have demonstrated some of the pathophsiology of late radiation necrosis. Focal glucose hypometabolism on positron emissin tomography in irradiated patients may have prognostic significance for subsequent development of clinically evident radiation necrosis.",
"title": ""
},
{
"docid": "0c5ebaaf0fd85312428b5d6b7479bfb6",
"text": "BACKGROUND\nPovidone-iodine solution is an antiseptic that is used worldwide as surgical paint and is considered to have a low irritant potential. Post-surgical severe irritant dermatitis has been described after the misuse of this antiseptic in the surgical setting.\n\n\nMETHODS\nBetween January 2011 and June 2013, 27 consecutive patients with post-surgical contact dermatitis localized outside of the surgical incision area were evaluated. Thirteen patients were also available for patch testing.\n\n\nRESULTS\nAll patients developed dermatitis the day after the surgical procedure. Povidone-iodine solution was the only liquid in contact with the skin of our patients. Most typical lesions were distributed in a double lumbar parallel pattern, but they were also found in a random pattern or in areas where a protective pad or an occlusive medical device was glued to the skin. The patch test results with povidone-iodine were negative.\n\n\nCONCLUSIONS\nPovidone-iodine-induced post-surgical dermatitis may be a severe complication after prolonged surgical procedures. As stated in the literature and based on the observation that povidone-iodine-induced contact irritant dermatitis occurred in areas of pooling or occlusion, we speculate that povidone-iodine together with occlusion were the causes of the dermatitis epidemic that occurred in our surgical setting. Povidone-iodine dermatitis is a problem that is easily preventable through the implementation of minimal routine changes to adequately dry the solution in contact with the skin.",
"title": ""
}
] | scidocsrr |
5e1f70eb51a33d4220fd2a8a766ad212 | Haptics: Perception, Devices, Control, and Applications | [
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
}
] | [
{
"docid": "8dce819cc31cf4899cf4bad2dd117dc1",
"text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.",
"title": ""
},
{
"docid": "53569b0225db62d4c627e41469cb91b8",
"text": "A first proof-of-concept mm-sized implant based on ultrasonic power transfer and RF uplink data transmission is presented. The prototype consists of a 1 mm × 1 mm piezoelectric receiver, a 1 mm × 2 mm chip designed in 65 nm CMOS and a 2.5 mm × 2.5 mm off-chip antenna, and operates through 3 cm of chicken meat which emulates human tissue. The implant supports a DC load power of 100 μW allowing for high-power applications. It also transmits consecutive UWB pulse sequences activated by the ultrasonic downlink data path, demonstrating sufficient power for an Mary PPM transmitter in uplink.",
"title": ""
},
{
"docid": "04549adc3e956df0f12240c4d9c02bd7",
"text": "Gamification, applying game mechanics to nongame contexts, has recently become a hot topic across a wide range of industries, and has been presented as a potential disruptive force in education. It is based on the premise that it can promote motivation and engagement and thus contribute to the learning process. However, research examining this assumption is scarce. In a set of studies we examined the effects of points, a basic element of gamification, on performance in a computerized assessment of mastery and fluency of basic mathematics concepts. The first study, with adult participants, found no effect of the point manipulation on accuracy of responses, although the speed of responses increased. In a second study, with 6e8 grade middle school participants, we found the same results for the two aspects of performance. In addition, middle school participants' reactions to the test revealed higher likeability ratings for the test under the points condition, but only in the first of the two sessions, and perceived effort during the test was higher in the points condition, but only for eighth grade students. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5e962b323daa883da2ae8416d1fc10fa",
"text": "Network security analysis and ensemble data visualization are two active research areas. Although they are treated as separate domains, they share many common challenges and characteristics. Both focus on scalability, time-dependent data analytics, and exploration of patterns and unusual behaviors in large datasets. These overlaps provide an opportunity to apply ensemble visualization research to improve network security analysis. To study this goal, we propose methods to interpret network security alerts and flow traffic as ensemble members. We can then apply ensemble visualization techniques in a network analysis environment to produce a network ensemble visualization system. Including ensemble representations provide new, in-depth insights into relationships between alerts and flow traffic. Analysts can cluster traffic with similar behavior and identify traffic with unusual patterns, something that is difficult to achieve with high-level overviews of large network datasets. Furthermore, our ensemble approach facilitates analysis of relationships between alerts and flow traffic, improves scalability, maintains accessibility and configurability, and is designed to fit our analysts' working environment, mental models, and problem solving strategies.",
"title": ""
},
{
"docid": "c8379f1382a191985cf55773d0cd02c9",
"text": "Utilizing Big Data scenarios that are generated from increasing digitization and data availability is a core topic in IS research. There are prospective advantages in generating business value from those scenarios through improved decision support and new business models. In order to harvest those potential advantages Big Data capabilities are required, including not only technological aspects of data management and analysis but also strategic and organisational aspects. To assess these capabilities, one can use capability assessment models. Employing a qualitative meta-analysis on existing capability assessment models, it can be revealed that the existing approaches greatly differ in their fundamental structure due to heterogeneous model elements. The heterogeneous elements are therefore synthesized and transformed into consistent assessment dimensions to fulfil the requirements of exhaustive and mutually exclusive aspects of a capability assessment model. As part of a broader research project to develop a consistent and harmonized Big Data Capability Assessment Model (BDCAM) a new design for a capability matrix is proposed including not only capability dimensions but also Big Data life cycle tasks in order to measure specific weaknesses along the process of data-driven value creation.",
"title": ""
},
{
"docid": "31049561dee81500d048641023bbb6dd",
"text": "Today’s networks are filled with a massive and ever-growing variety of network functions that coupled with proprietary devices, which leads to network ossification and difficulty in network management and service provision. Network Function Virtualization (NFV) is a promising paradigm to change such situation by decoupling network functions from the underlying dedicated hardware and realizing them in the form of software, which are referred to as Virtual Network Functions (VNFs). Such decoupling introduces many benefits which include reduction of Capital Expenditure (CAPEX) and Operation Expense (OPEX), improved flexibility of service provision, etc. In this paper, we intend to present a comprehensive survey on NFV, which starts from the introduction of NFV motivations. Then, we explain the main concepts of NFV in terms of terminology, standardization and history, and how NFV differs from traditional middlebox based network. After that, the standard NFV architecture is introduced using a bottom up approach, based on which the corresponding use cases and solutions are also illustrated. In addition, due to the decoupling of network functionalities and hardware, people’s attention is gradually shifted to the VNFs. Next, we provide an extensive and in-depth discussion on state-of-the-art VNF algorithms including VNF placement, scheduling, migration, chaining and multicast. Finally, to accelerate the NFV deployment and avoid pitfalls as far as possible, we survey the challenges faced by NFV and the trend for future directions. In particular, the challenges are discussed from bottom up, which include hardware design, VNF deployment, VNF life cycle control, service chaining, performance evaluation, policy enforcement, energy efficiency, reliability and security, and the future directions are discussed around the current trend towards network softwarization. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "404c1f0cc09b5c429555332d90be67d3",
"text": "Activists have used social media during modern civil uprisings, and researchers have found that the generated content is predictive of off-line protest activity. However, questions remain regarding the drivers of this predictive power. In this paper, we begin by deriving predictor variables for individuals’ protest decisions from the literature on protest participation theory. We then test these variables on the case of Twitter and the 2011 Egyptian revolution. We find significant positive correlations between the volume of future-protest descriptions on Twitter and protest onsets. We do not find significant correlations between such onsets and the preceding volume of political expressions, individuals’ access to news, and connections with political activists. These results locate the predictive power of social media in its function as a protest advertisement and organization mechanism. We then build predictive models using future-protest descriptions and compare these models with baselines informed by daily event counts from the Global Database of Events, Location, and Tone (GDELT). Inspection of the significant variables in our GDELT models reveals that an increased military presence may be predictive of protest onsets in major cities. In sum, this paper highlights the ways in which online activism shapes off-line behavior during civil uprisings.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "b4d4a42e261a5272a7865065b74ff91b",
"text": "The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow. Inspired by recent advances in deep learning, we propose a framework for reconstructing MR images from undersampled data using a deep cascade of convolutional neural networks to accelerate the data acquisition process. We show that for Cartesian undersampling of 2D cardiac MR images, the proposed method outperforms the state-of-the-art compressed sensing approaches, such as dictionary learning-based MRI (DLMRI) reconstruction, in terms of reconstruction error, perceptual quality and reconstruction speed for both 3-fold and 6-fold undersampling. Compared to DLMRI, the error produced by the method proposed is approximately twice as small, allowing to preserve anatomical structures more faithfully. Using our method, each image can be reconstructed in 23ms, which is fast enough to enable real-time applications.",
"title": ""
},
{
"docid": "39b33447ab67263414e662f07f6a3c26",
"text": "Most of current concepts for a visual prosthesis are based on neuronal electrical stimulation at different locations along the visual pathways within the central nervous system. The different designs of visual prostheses are named according to their locations (i.e., cortical, optic nerve, subretinal, and epiretinal). Visual loss caused by outer retinal degeneration in diseases such as retinitis pigmentosa or age-related macular degeneration can be reversed by electrical stimulation of the retina or the optic nerve (retinal or optic nerve prostheses, respectively). On the other hand, visual loss caused by inner or whole thickness retinal diseases, eye loss, optic nerve diseases (tumors, ischemia, inflammatory processes etc.), or diseases of the central nervous system (not including diseases of the primary and secondary visual cortices) can be reversed by a cortical visual prosthesis. The intent of this article is to provide an overview of current and future concepts of retinal and optic nerve prostheses. This article will begin with general considerations that are related to all or most of visual prostheses and then concentrate on the retinal and optic nerve designs. The authors believe that the field has grown beyond the scope of a single article so cortical prostheses will be described only because of their direct effect on the concept and technical development of the other prostheses, and this will be done in a more general and historic perspective.",
"title": ""
},
{
"docid": "edfb50c784e6e7a89ce12d524f667398",
"text": "Unconventional machining processes (communally named advanced or modern machining processes) are widely used by manufacturing industries. These advanced machining processes allow producing complex profiles and high quality-products. However, several process parameters should be optimized to achieve this end. In this paper, the optimization of process parameters of two conventional and four advanced machining processes is investigated: drilling process, grinding process, abrasive jet machining (AJM), abrasive water jet machining (AWJM), ultrasonic machining (USM), and water jet machining (WJM), respectively. This research employed two bio-inspired algorithms called the cuckoo optimization algorithm (COA) and the hoopoe heuristic (HH) to optimize the machining control parameters of these processes. The obtained results are compared with other optimization algorithms described and applied in the literature.",
"title": ""
},
{
"docid": "047949b0dba35fb11f9f3b716893701d",
"text": "Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and the MSRC datasets, we were unable to find evidence of a significant performance increase attributed to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This more simple approach makes use of the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "90b3e6aee6351b196445843ca8367a3b",
"text": "Modeling how visual saliency guides the deployment of atten tion over visual scenes has attracted much interest recently — among both computer v ision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in compute r vision have mostly been focused on modeling bottom-up saliency. Strong influences o n attention and eye movements, however, come from instantaneous task demands. Here , w propose models of top-down visual guidance considering task influences. The n ew models estimate the state of a human subject performing a task (here, playing video gam es), and map that state to an eye position. Factors influencing state come from scene gi st, physical actions, events, and bottom-up saliency. Proposed models fall into two categ ori s. In the first category, we use classical discriminative classifiers, including Reg ression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperfor m 15 competing bottom-up and top-down attention models in predicting future eye fixat ions on 18,000 and 75,00 video frames and eye movement samples from a driving and a flig ht combat video game, respectively. We further test and validate our approaches o n 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction s c re that reference models.",
"title": ""
},
{
"docid": "9c2ce030230ccd91fdbfbd9544596604",
"text": "The kind of causal inference seen in natural human thought can be \"algorithmitized\" to help produce human-level machine intelligence.",
"title": ""
},
{
"docid": "b7e5028bc6936e75692fbcebaacebb2c",
"text": "dent processes and domain-specific knowledge. Until recently, information extraction has leaned heavily on domain knowledge, which requires either manual engineering or manual tagging of examples (Miller et al. 1998; Soderland 1999; Culotta, McCallum, and Betz 2006). Semisupervised approaches (Riloff and Jones 1999, Agichtein and Gravano 2000, Rosenfeld and Feldman 2007) require only a small amount of hand-annotated training, but require this for every relation of interest. This still presents a knowledge engineering bottleneck, when one considers the unbounded number of relations in a diverse corpus such as the web. Shinyama and Sekine (2006) explored unsupervised relation discovery using a clustering algorithm with good precision, but limited scalability. The KnowItAll research group is a pioneer of a new paradigm, Open IE (Banko et al. 2007, Banko and Etzioni 2008), that operates in a totally domain-independent manner and at web scale. An Open IE system makes a single pass over its corpus and extracts a diverse set of relational tuples without requiring any relation-specific human input. Open IE is ideally suited to corpora such as the web, where the target relations are not known in advance and their number is massive. Articles",
"title": ""
},
{
"docid": "8c00b522ae9429f6f9cd7fb7174578ec",
"text": "We extend photometric stereo to make it work with internet images, which are typically associated with different viewpoints and significant noise. For popular tourism sites, thousands of images can be obtained from internet search engines. With these images, our method computes the global illumination for each image and the surface orientation at some scene points. The illumination information can then be used to estimate the weather conditions (such as sunny or cloudy) for each image, since there is a strong correlation between weather and scene illumination. We demonstrate our method on several challenging examples.",
"title": ""
},
{
"docid": "63096ce03a1aa0effffdf59e742b1653",
"text": "URB-754 (6-methyl-2-[(4-methylphenyl)amino]-1-benzoxazin-4-one) was identified as a new type of designer drug in illegal products. Though many of the synthetic cannabinoids detected in illegal products are known to have affinities for cannabinoid CB1/CB2 receptors, URB-754 was reported to inhibit an endocannabinoid deactivating enzyme. Furthermore, an unknown compound (N,5-dimethyl-N-(1-oxo-1-(p-tolyl)butan-2-yl)-2-(N'-(p-tolyl)ureido)benzamide), which is deduced to be the product of a reaction between URB-754 and a cathinone derivative 4-methylbuphedrone (4-Me-MABP), was identified along with URB-754 and 4-Me-MABP in the same product. It is of interest that the product of a reaction between two different types of designer drugs, namely, a cannabinoid-related designer drug and a cathinone-type designer drug, was found in one illegal product. In addition, 12 cannabimimetic compounds, 5-fluoropentyl-3-pyridinoylindole, JWH-307, JWH-030, UR-144, 5FUR-144 (synonym: XLR11), (4-methylnaphtyl)-JWH-022 [synonym: N-(5-fluoropentyl)-JWH-122], AM-2232, (4-methylnaphtyl)-AM-2201 (MAM-2201), N-(4-pentenyl)-JWH-122, JWH-213, (4-ethylnaphtyl)-AM-2201 (EAM-2201) and AB-001, were also detected herein as newly distributed designer drugs in Japan. Furthermore, a tryptamine derivative, 4-hydroxy-diethyltryptamine (4-OH-DET), was detected together with a synthetic cannabinoid, APINACA, in the same product.",
"title": ""
},
{
"docid": "4741e9e296bfec372df7779ffa4bb3e3",
"text": "Dry electrodes for impedimetric sensing of physiological parameters (such as ECG, EEG, and GSR) promise the ability for long duration monitoring. This paper describes the feasibility of a novel dry electrode interfacing using Patterned Vertical Carbon Nanotube (pvCNT) for physiological parameter sensing. The electrodes were fabricated on circular discs (φ = 10 mm) stainless steel substrate. Multiwalled electrically conductive carbon nanotubes were grown in pattered pillar formation of 100 μm squared with 50, 100, 200 and 500μm spacing. The heights of the pillars were between 1 to 1.5 mm. A comparative test with commercial ECG electrodes shows that pvCNT has lower electrical impedance, stable impedance over very long time, and comparable signal capture in vitro. Long duration study shows minimal degradation of impedance over 2 days period. The results demonstrate the feasibility of using pvCNT dry electrodes for physiological parameter sensing.",
"title": ""
},
{
"docid": "e648aa29c191885832b4deee5af9b5b5",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] | scidocsrr |
2a1200ca1f84e5678ea71c7497f575a3 | Validating the independent components of neuroimaging time series via clustering and visualization | [
{
"docid": "4daa16553442aa424a1578f02f044c6e",
"text": "Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately of the function of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted. The methods are demonstrated by computing a SOM of yeast genes. The relationships of known functional classes of genes are investigated by analyzing their distribution on the SOM, the cluster structure is visualized by the U-matrix method, and the clusters are characterized in terms of the properties of the expression profiles of the genes. Finally, it is shown that the SOM visualizes the similarity of genes in a more trustworthy way than two alternative methods, multidimensional scaling and hierarchical clustering.",
"title": ""
}
] | [
{
"docid": "e4405c71336ea13ccbd43aa84651dc60",
"text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.",
"title": ""
},
{
"docid": "ba391ddf37a4757bc9b8d9f4465a66dc",
"text": "Adverse childhood experiences (ACEs) have been linked with risky health behaviors and the development of chronic diseases in adulthood. This study examined associations between ACEs, chronic diseases, and risky behaviors in adults living in Riyadh, Saudi Arabia in 2012 using the ACE International Questionnaire (ACE-IQ). A cross-sectional design was used, and adults who were at least 18 years of age were eligible to participate. ACEs event scores were measured for neglect, household dysfunction, abuse (physical, sexual, and emotional), and peer and community violence. The ACE-IQ was supplemented with questions on risky health behaviors, chronic diseases, and mood. A total of 931 subjects completed the questionnaire (a completion rate of 88%); 57% of the sample was female, 90% was younger than 45 years, 86% had at least a college education, 80% were Saudi nationals, and 58% were married. One-third of the participants (32%) had been exposed to 4 or more ACEs, and 10%, 17%, and 23% had been exposed to 3, 2, or 1 ACEs respectively. Only 18% did not have an ACE. The prevalence of risky health behaviors ranged between 4% and 22%. The prevalence of self-reported chronic diseases ranged between 6% and 17%. Being exposed to 4 or more ACEs increased the risk of having chronic diseases by 2-11 fold, and increased risky health behaviors by 8-21 fold. The findings of this study will contribute to the planning and development of programs to prevent child maltreatment and to alleviate the burden of chronic diseases in adults.",
"title": ""
},
{
"docid": "4f3d2b869322125a8fad8a39726c99f8",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "878292ad8dfbe9118c64a14081da561a",
"text": "Public-key cryptography is indispensable for cyber security. However, as a result of Peter Shor shows, the public-key schemes that are being used today will become insecure once quantum computers reach maturity. This paper gives an overview of the alternative public-key schemes that have the capability to resist quantum computer attacks and compares them.",
"title": ""
},
{
"docid": "c8a9f16259e437cda8914a92148901ab",
"text": "A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans. This dialogue paradigm is built on a utilitarian definition of language understanding. Language is one of multiple tools which an agent may use to accomplish goals in its environment. We say an agent “understands” language only when it is able to use language productively to accomplish these goals. Under this definition, an agent’s communication success reduces to its success on tasks within its environment. This setup contrasts with many conventional natural language tasks, which maximize linguistic objectives derived from static datasets. Such applications often make the mistake of reifying language as an end in itself. The tasks prioritize an isolated measure of linguistic intelligence (often one of linguistic competence, in the sense of Chomsky (1965)), rather than measuring a model’s effectiveness in real-world scenarios. Our utilitarian definition is motivated by recent successes in reinforcement learning methods. In a reinforcement learning setting, agents maximize success metrics on real-world tasks, without requiring direct supervision of linguistic behavior.",
"title": ""
},
{
"docid": "49b71624485d8b133b8ebb0cc72d2540",
"text": "Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions.",
"title": ""
},
{
"docid": "3bd2941e72695b3214247c8c7071410b",
"text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of shortvs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.",
"title": ""
},
{
"docid": "acf6361be5bb883153bebd4c3ec032c2",
"text": "The first objective of the paper is to identifiy a number of issues related to crowdfunding that are worth studying from an industrial organization (IO) perspective. To this end, we first propose a definition of crowdfunding; next, on the basis on a previous empirical study, we isolate what we believe are the main features of crowdfunding; finally, we point to a number of strands of the literature that could be used to study the various features of crowdfunding. The second objective of the paper is to propose some preliminary efforts towards the modelization of crowdfunding. In a first model, we associate crowdfunding with pre-ordering and price discrimination, and we study the conditions under which crowdfunding is preferred to traditional forms of external funding. In a second model, we see crowdfunding as a way to make a product better known by the consumers and we give some theoretical underpinning for the empirical finding that non-profit organizations tend to be more successful in using crowdfunding. JEL classification codes: G32, L11, L13, L15, L21, L31",
"title": ""
},
{
"docid": "7ae8e78059bb710c46b66c120e1ff0ef",
"text": "Biometrics technology is keep growing substantially in the last decades with great advances in biometric applications. An accurate personal authentication or identification has become a critical step in a wide range of applications such as national ID, electronic commerce, and automated and remote banking. The recent developments in the biometrics area have led to smaller, faster, and cheaper systems such as mobile device systems. As a kind of human biometrics for personal identification, fingerprint is the dominant trait due to its simplicity to be captured, processed, and extracted without violating user privacy. In a wide range of applications of fingerprint recognition, including civilian and forensics implementations, a large amount of fingerprints are collected and stored everyday for different purposes. In Automatic Fingerprint Identification System (AFIS) with a large database, the input image is matched with all fields inside the database to identify the most potential identity. Although satisfactory performances have been reported for fingerprint authentication (1:1 matching), both time efficiency and matching accuracy deteriorate seriously by simple extension of a 1:1 authentication procedure to a 1:N identification system (Manhua, 2010). The system response time is the key issue of any AFIS, and it is often improved by controlling the accuracy of the identification to satisfy the system requirement. In addition to developing new technologies, it is necessary to make clear the trade-off between the response time and the accuracy in fingerprint identification systems. Moreover, from the versatility and developing cost points of view, the trade-off should be realized in terms of system design, implementation, and usability. Fingerprint classification is one of the standard approaches to speed up the matching process between the input sample and the collected database (K. Jain et al., 2007). Fingerprint classification is considered as indispensable step toward reducing the search time through large fingerprint databases. It refers to the problem of assigning fingerprint to one of several pre-specified classes, and it presents an interesting problem in pattern recognition, especially in the real and time sensitive applications that require small response time. Fingerprint classification process works on narrowing down the search domain into smaller database subsets, and hence speeds up the total response time of any AFIS. Even for",
"title": ""
},
{
"docid": "20acbae6f76e3591c8b696481baffc90",
"text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.",
"title": ""
},
{
"docid": "67e35bc7add5d6482fff4cd4f2060e6b",
"text": "There is a clear trend in the automotive industry to use more electrical systems in order to satisfy the ever-growing vehicular load demands. Thus, it is imperative that automotive electrical power systems will obviously undergo a drastic change in the next 10-20 years. Currently, the situation in the automotive industry is such that the demands for higher fuel economy and more electric power are driving advanced vehicular power system voltages to higher levels. For example, the projected increase in total power demand is estimated to be about three to four times that of the current value. This means that the total future power demand of a typical advanced vehicle could roughly reach a value as high as 10 kW. In order to satisfy this huge vehicular load, the approach is to integrate power electronics intensive solutions within advanced vehicular power systems. In view of this fact, this paper aims at reviewing the present situation as well as projected future research and development work of advanced vehicular electrical power systems including those of electric, hybrid electric, and fuel cell vehicles (EVs, HEVs, and FCVs). The paper will first introduce the proposed power system architectures for HEVs and FCVs and will then go on to exhaustively discuss the specific applications of dc/dc and dc/ac power electronic converters in advanced automotive power systems",
"title": ""
},
{
"docid": "1315247aa0384097f5f9e486bce09bd4",
"text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.",
"title": ""
},
{
"docid": "5c6c8834304264b59db5f28021ccbdb8",
"text": "We develop a dynamic optimal control model of a fashion designers challenge of maintaining brand image in the face of short-term pro
t opportunities through expanded sales that risk brand dilution in the longer-run. The key state variable is the brands reputation, and the key decision is sales volume. Depending on the brands capacity to command higher prices, one of two regimes is observed. If the price mark-ups relative to production costs are modest, then the optimal solution may simply be to exploit whatever value can be derived from the brand in the short-run and retire the brand when that capacity is fully diluted. However, if the price markups are more substantial, then an existing brand should be preserved.",
"title": ""
},
{
"docid": "ce8729f088aaf9f656c9206fc67ff4bd",
"text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.",
"title": ""
},
{
"docid": "ac7b607cc261654939868a62822a58eb",
"text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.",
"title": ""
},
{
"docid": "353d6ed75f2a4bca5befb5fdbcea2bcc",
"text": "BACKGROUND\nThe number of mental health apps (MHapps) developed and now available to smartphone users has increased in recent years. MHapps and other technology-based solutions have the potential to play an important part in the future of mental health care; however, there is no single guide for the development of evidence-based MHapps. Many currently available MHapps lack features that would greatly improve their functionality, or include features that are not optimized. Furthermore, MHapp developers rarely conduct or publish trial-based experimental validation of their apps. Indeed, a previous systematic review revealed a complete lack of trial-based evidence for many of the hundreds of MHapps available.\n\n\nOBJECTIVE\nTo guide future MHapp development, a set of clear, practical, evidence-based recommendations is presented for MHapp developers to create better, more rigorous apps.\n\n\nMETHODS\nA literature review was conducted, scrutinizing research across diverse fields, including mental health interventions, preventative health, mobile health, and mobile app design.\n\n\nRESULTS\nSixteen recommendations were formulated. Evidence for each recommendation is discussed, and guidance on how these recommendations might be integrated into the overall design of an MHapp is offered. Each recommendation is rated on the basis of the strength of associated evidence. It is important to design an MHapp using a behavioral plan and interactive framework that encourages the user to engage with the app; thus, it may not be possible to incorporate all 16 recommendations into a single MHapp.\n\n\nCONCLUSIONS\nRandomized controlled trials are required to validate future MHapps and the principles upon which they are designed, and to further investigate the recommendations presented in this review. Effective MHapps are required to help prevent mental health problems and to ease the burden on health systems.",
"title": ""
},
{
"docid": "604b29b9e748acf2776155074fee59d8",
"text": "The growing importance of the service sector in almost every economy in the world has created a significant amount of interest in service operations. In practice, many service sectors have sought and made use of various enhancement programs to improve their operations and performance in an attempt to hold competitive success. As most researchers recognize, service operations link with customers. The customers as participants act in the service operations system driven by the goal of sufficing his/her added values. This is one of the distinctive features of service production and consumption. In the paper, first, we propose the idea of service operations improvement by mapping objectively the service experience of customers from the view of customer journey. Second, a portraying scheme of service experience of customers based on the IDEF3 technique is proposed, and last, some implications on service operations improvement are",
"title": ""
},
{
"docid": "759b5a86bc70147842a106cf20b3a0cd",
"text": "This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.",
"title": ""
},
{
"docid": "107c839a73c12606d4106af7dc04cd96",
"text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.",
"title": ""
},
{
"docid": "51bcd4099caaebc0e789d4c546e0782b",
"text": "When speaking and reasoning about time, people around the world tend to do so with vocabulary and concepts borrowed from the domain of space. This raises the question of whether the cross-linguistic variability found for spatial representations, and the principles on which these are based, may also carry over to the domain of time. Real progress in addressing this question presupposes a taxonomy for the possible conceptualizations in one domain and its consistent and comprehensive mapping onto the other-a challenge that has been taken up only recently and is far from reaching consensus. This article aims at systematizing the theoretical and empirical advances in this field, with a focus on accounts that deal with frames of reference (FoRs). It reviews eight such accounts by identifying their conceptual ingredients and principles for space-time mapping, and it explores the potential for their integration. To evaluate their feasibility, data from some thirty empirical studies, conducted with speakers of sixteen different languages, are then scrutinized. This includes a critical assessment of the methods employed, a summary of the findings for each language group, and a (re-)analysis of the data in view of the theoretical questions. The discussion relates these findings to research on the mental time line, and explores the psychological reality of temporal FoRs, the degree of cross-domain consistency in FoR adoption, the role of deixis, and the sources and extent of space-time mapping more generally.",
"title": ""
}
] | scidocsrr |
9521d868f4608cb693bb6aa146debb8e | Exploring Gameplay Experiences on the Oculus Rift | [
{
"docid": "119dd2c7eb5533ece82cff7987f21dba",
"text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.",
"title": ""
}
] | [
{
"docid": "1d50c8598a41ed7953e569116f59ae41",
"text": "Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of run time environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanism, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer.",
"title": ""
},
{
"docid": "3efef315b9b7fcf914dd763080804baf",
"text": "Performing correct anti-predator behaviour is crucial for prey to survive. But are such abilities lost in species or populations living in predator-free environments? How individuals respond to the loss of predators has been shown to depend on factors such as the degree to which anti-predator behaviour relies on experience, the type of cues evoking the behaviour, the cost of expressing the behaviour and the number of generations under which the relaxed selection has taken place. Here we investigated whether captive-born populations of meerkats (Suricata suricatta) used the same repertoire of alarm calls previously documented in wild populations and whether captive animals, as wild ones, could recognize potential predators through olfactory cues. We found that all alarm calls that have been documented in the wild also occurred in captivity and were given in broadly similar contexts. Furthermore, without prior experience of odours from predators, captive meerkats seemed to dist inguish between faeces of potential predators (carnivores) and non-predators (herbivores). Despite slight structural differences, the alarm calls given in response to the faeces largely resembled those recorded in similar contexts in the wild. These results from captive populations suggest that direct, physical interaction with predators is not necessary for meerkats to perform correct anti-predator behaviour in terms of alarm-call usage and olfactory predator recognition. Such behaviour may have been retained in captivity because relatively little experience seems necessary for correct performance in the wild and/or because of the recency of relaxed selection on these populations. DOI: https://doi.org/10.1111/j.1439-0310.2007.01409.x Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-282 Accepted Version Originally published at: Hollén, L I; Manser, M B (2007). Persistence of Alarm-Call Behaviour in the Absence of Predators: A Comparison Between Wild and Captive-Born Meerkats (Suricata suricatta). Ethology, 113(11):10381047. DOI: https://doi.org/10.1111/j.1439-0310.2007.01409.x",
"title": ""
},
{
"docid": "03cd4dcbda645c26fda3b8bce1bdb80a",
"text": "Nowadays, many organizations face the issue of information and communication technology (ICT) management. The organization undertakes various activities to assess the state of their ICT or the entire information system (IS), such as IS audits, reviews, IS due diligence, or they already have implemented systems to gather information regarding their IS. For the organizations’ boards and managers there is an issue how to evaluate current IS maturity level and based on assessments, define the IS strategy how to reach the maturity target level. The problem is, what kind of approach to use for rapid and effective IS maturity assessment. This paper summarizes the research regarding IS maturity within different organizations where the authors have delivered either complete IS due diligence or made partial analysis by IS Mirror method. The main objective of this research is to present and confirm the approach which could be used for effective IS maturity assessment and could be provided quickly and even remotely. The paper presented research question, related hypothesis, and approach for rapid IS maturity assessment, and results from several case studies made on-site or remotely.",
"title": ""
},
{
"docid": "ffb7b58d947aa15cd64efbadb0f9543d",
"text": "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "bd121443b5a1dfb16687001c72b22199",
"text": "We review the nosological criteria and functional neuroanatomical basis for brain death, coma, vegetative state, minimally conscious state, and the locked-in state. Functional neuroimaging is providing new insights into cerebral activity in patients with severe brain damage. Measurements of cerebral metabolism and brain activations in response to sensory stimuli with PET, fMRI, and electrophysiological methods can provide information on the presence, degree, and location of any residual brain function. However, use of these techniques in people with severe brain damage is methodologically complex and needs careful quantitative analysis and interpretation. In addition, ethical frameworks to guide research in these patients must be further developed. At present, clinical examinations identify nosological distinctions needed for accurate diagnosis and prognosis. Neuroimaging techniques remain important tools for clinical research that will extend our understanding of the underlying mechanisms of these disorders.",
"title": ""
},
{
"docid": "b45bb513f7bd9de4941785490945d53e",
"text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.",
"title": ""
},
{
"docid": "38d3dc6b5eb1dbf85b1a371b645a17da",
"text": "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state.\n We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency.\n We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.",
"title": ""
},
{
"docid": "984b9737cd2566ff7d18e6e2f9e5bed2",
"text": "Advances in anatomic understanding are frequently the basis upon which surgical techniques are advanced and refined. Recent anatomic studies of the superficial tissues of the face have led to an increased understanding of the compartmentalized nature of the subcutaneous fat. This report provides a review of the locations and characteristics of the facial fat compartments and provides examples of how this knowledge can be used clinically, specifically with regard to soft tissue fillers.",
"title": ""
},
{
"docid": "cb3ea08822b3e1648d497a3f472060a7",
"text": "In this paper, a linear time-varying model-based predictive controller (LTV-MPC) for lateral vehicle guidance by front steering is proposed. Due to the fact that this controller is designed in the scope of a Collision Avoidance System, it has to fulfill the requirement of an appropiate control performance at the limits of vehicle dynamics. To achieve this objective, the introduced approach employs estimations of the applied steering angle as well as the state variable trajectory to be used for successive linearization of the nonlinear prediction model over the prediction horizon. To evaluate the control performance, the proposed controller is compared to a LTV-MPC controller that uses linearizations of the nonlinear prediction model that remain unchanged over the prediction horizon. Simulation results show that an improved control performance can be achieved by the estimation based approach.",
"title": ""
},
{
"docid": "665f7b5de2ce617dd3009da3d208026f",
"text": "Throughout the history of the social evolution, man and animals come into frequent contact that forms an interdependent relationship between man and animals. The images of animals root in the everyday life of all nations, forming unique animal culture of each nation. Therefore, Chinese and English, as the two languages which spoken by the most people in the world, naturally contain a lot of words relating to animals, and because of different history and culture, the connotations of animal words in one language do not coincide with those in another. The clever use of animal words is by no means scarce in everyday communication or literary works, which helps make English and Chinese vivid and lively in image, plain and expressive in character, and rich and strong in flavor. In this study, many animal words are collected for the analysis of the similarities and the differences between the cultural connotations carried by animal words in Chinese and English, find out the causes of differences, and then discuss some methods and techniques for translating these animal words.",
"title": ""
},
{
"docid": "1a063741d53147eb6060a123bff96c27",
"text": "OBJECTIVE\nThe assessment of cognitive functions of adults with attention deficit hyperactivity disorder (ADHD) comprises self-ratings of cognitive functioning (subjective assessment) as well as psychometric testing (objective neuropsychological assessment). The aim of the present study was to explore the utility of these assessment strategies in predicting neuropsychological impairments of adults with ADHD as determined by both approaches.\n\n\nMETHOD\nFifty-five adults with ADHD and 66 healthy participants were assessed with regard to cognitive functioning in several domains by employing subjective and objective measurement tools. Significance and effect sizes for differences between groups as well as the proportion of patients with impairments were analyzed. Furthermore, logistic regression analyses were carried out in order to explore the validity of subjective and objective cognitive measures in predicting cognitive impairments.\n\n\nRESULTS\nBoth subjective and objective assessment tools revealed significant cognitive dysfunctions in adults with ADHD. The majority of patients displayed considerable impairments in all cognitive domains assessed. A comparison of effect sizes, however, showed larger dysfunctions in the subjective assessment than in the objective assessment. Furthermore, logistic regression models indicated that subjective cognitive complaints could not be predicted by objective measures of cognition and vice versa.\n\n\nCONCLUSIONS\nSubjective and objective assessment tools were found to be sensitive in revealing cognitive dysfunctions of adults with ADHD. Because of the weak association between subjective and objective measurements, it was concluded that subjective and objective measurements are both important for clinical practice but may provide distinct types of information and capture different aspects of functioning.",
"title": ""
},
{
"docid": "ce31bc3245c7f45311c65b4e8923cad0",
"text": "In this paper, we propose a probabilistic active tactile transfer learning (ATTL) method to enable robotic systems to exploit their prior tactile knowledge while discriminating among objects via their physical properties (surface texture, stiffness, and thermal conductivity). Using the proposed method, the robot autonomously selects and exploits its most relevant prior tactile knowledge to efficiently learn about new unknown objects with a few training samples or even one. The experimental results show that using our proposed method, the robot successfully discriminated among new objects with 72% discrimination accuracy using only one training sample (on-shot-tactile-learning). Furthermore, the results demonstrate that our method is robust against transferring irrelevant prior tactile knowledge (negative tactile knowledge transfer).",
"title": ""
},
{
"docid": "dbf3307ed73ec7c004b90e36f3f297c0",
"text": "Web 2.0 has had a tremendous impact on education. It facilitates access and availability of learning content in variety of new formats, content creation, learning tailored to students’ individual preferences, and collaboration. The range of Web 2.0 tools and features is constantly evolving, with focus on users and ways that enable users to socialize, share and work together on (user-generated) content. In this chapter we present ALEF – Adaptive Learning Framework that responds to the challenges posed on educational systems in Web 2.0 era. Besides its base functionality – to deliver educational content – ALEF particularly focuses on making the learning process more efficient by delivering tailored learning experience via personalized recommendation, and enabling learners to collaborate and actively participate in learning via interactive educational components. Our existing and successfully utilized solution serves as the medium for presenting key concepts that enable realizing Web 2.0 principles in education, namely lightweight models, and three components of framework infrastructure important for constant evolution and inclusion of students directly into the educational process – annotation framework, feedback infrastructure and widgets. These make possible to devise and implement various mechanisms for recommendation and collaboration – we also present selected methods for personalized recommendation and collaboration together with their evaluation in ALEF.",
"title": ""
},
{
"docid": "88be12fdd7ec90a7af7337f3d29b2130",
"text": "Classification of time series has been attracting great interest over the past decade. While dozens of techniques have been introduced, recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems, especially for large-scale datasets. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a high time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data and to make the classification result more explainable, which global characteristics of the nearest neighbor cannot provide. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. We can use the distance to the shapelet, rather than the distance to the nearest neighbor to classify objects. As we shall show with extensive empirical evaluations in diverse domains, classification algorithms based on the time series shapelet primitives can be interpretable, more accurate, and significantly faster than state-of-the-art classifiers.",
"title": ""
},
{
"docid": "b640ed2bd02ba74ee0eb925ef6504372",
"text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.",
"title": ""
},
{
"docid": "ae5202304d6a485cbcc7cf3a0983fe0c",
"text": "Most of the algorithms for inverse reinforcement learning (IRL) assume that the reward function is a linear function of the pre-defined state and action features. However, it is often difficult to manually specify the set of features that can make the true reward function representable as a linear function. We propose a Bayesian nonparametric approach to identifying useful composite features for learning the reward function. The composite features are assumed to be the logical conjunctions of the predefined atomic features so that we can represent the reward function as a linear function of the composite features. We empirically show that our approach is able to learn composite features that capture important aspects of the reward function on synthetic domains, and predict taxi drivers’ behaviour with high accuracy on a real GPS trace dataset.",
"title": ""
},
{
"docid": "c7c74089c55965eb7734ab61ac0795a7",
"text": "Many researchers see the potential of wireless mobile learning devices to achieve large-scale impact on learning because of portability, low cost, and communications features. This enthusiasm is shared but the lessons drawn from three well-documented uses of connected handheld devices in education lead towards challenges ahead. First, ‘wireless, mobile learning’ is an imprecise description of what it takes to connect learners and their devices together in a productive manner. Research needs to arrive at a more precise understanding of the attributes of wireless networking that meet acclaimed pedagogical requirements and desires. Second, ‘pedagogical applications’ are often led down the wrong road by complex views of technology and simplistic views of social practices. Further research is needed that tells the story of rich pedagogical practice arising out of simple wireless and mobile technologies. Third, ‘large scale’ impact depends on the extent to which a common platform, that meets the requirements of pedagogically rich applications, becomes available. At the moment ‘wireless mobile technologies for education’ are incredibly diverse and incompatible; to achieve scale, a strong vision will be needed to lead to standardisation, overcoming the tendency to marketplace fragmentation.",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "f4d85ad52e37bd81058bfff830a52f0a",
"text": "A number of antioxidants and trace minerals have important roles in immune function and may affect health in transition dairy cows. Vitamin E and beta-carotene are important cellular antioxidants. Selenium (Se) is involved in the antioxidant system via its role in the enzyme glutathione peroxidase. Inadequate dietary vitamin E or Se decreases neutrophil function during the perpariturient period. Supplementation of vitamin E and/or Se has reduced the incidence of mastitis and retained placenta, and reduced duration of clinical symptoms of mastitis in some experiments. Research has indicated that beta-carotene supplementation may enhance immunity and reduce the incidence of retained placenta and metritis in dairy cows. Marginal copper deficiency resulted in reduced neutrophil killing and decreased interferon production by mononuclear cells. Copper supplementation of a diet marginal in copper reduced the peak clinical response during experimental Escherichia coli mastitis. Limited research indicated that chromium supplementation during the transition period may increase immunity and reduce the incidence of retained placenta.",
"title": ""
}
] | scidocsrr |
172b8b8c70f1958383bb4c593afe2d5f | A Multimodal Deep Learning Network for Group Activity Recognition | [
{
"docid": "06c9f35d9b46266e07bcfc1218c65992",
"text": "To effectively collaborate with people, robots are expected to detect and profile the users they are interacting with, but also to modify and adapt their behavior according to the learned models. The goal of this survey is to focus on the perspective of user profiling and behavioral adaptation. On the one hand, human-robot interaction requires a human-oriented perception to model and recognize the human actions and capabilities, the intentions and goals behind such actions, and the parameters characterizing the social interaction. On the other hand, the robot behavior should be adapted in its physical movement within the space, in the actions to be selected to achieve collaboration, and by modulating the parameters characterizing the interaction. In this direction, this survey of the current literature introduces a general classification scheme for both the profiling and the behavioral adaptation research topics in terms of physical, cognitive, and social interaction viewpoints. c © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b2c3e30803ff00aaf6318395a0aec45",
"text": "Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.",
"title": ""
},
{
"docid": "85e5405bdd852741f1af3f89d880805c",
"text": "The automatic assessment of the level of independence of a person, based on the recognition of a set of Activities of Daily Living, is among the most challenging research fields in Ambient Intelligence. The article proposes a framework for the recognition of motion primitives, relying on Gaussian Mixture Modeling and Gaussian Mixture Regression for the creation of activity models. A recognition procedure based on Dynamic Time Warping and Mahalanobis distance is found to: (i) ensure good classification results; (ii) exploit the properties of GMM and GMR modeling to allow for an easy run-time recognition; (iii) enhance the consistency of the recognition via the use of a classifier allowing unknown as an answer.",
"title": ""
}
] | [
{
"docid": "ce8fea47b1f323d39185403705500687",
"text": "Today, there is a big veriety of different approaches and algorithms of data filtering and recommendations giving. In this paper we describe traditional approaches and explane what kind of modern approaches have been developed lately. All the paper long we will try to explane approaches and their problems based on a movies recommendations. In the end we will show the main challanges recommender systems come across.",
"title": ""
},
{
"docid": "e4d86871669074b385f8ea36968106c0",
"text": "Verbal redundancy arises from the concurrent presentation of text and verbatim speech. To inform theories of multimedia learning that guide the design of educational materials, a meta-analysis was conducted to investigate the effects of spoken-only, written-only, and spoken–written presentations on learning retention and transfer. After an extensive search for experimental studies meeting specified inclusion criteria, data from 57 independent studies were extracted. Most of the research participants were postsecondary students. Overall, this meta-analysis revealed that outcomes comparing spoken–written and written-only presentations did not differ, but students who learned from spoken–written presentations outperformed those who learned from spoken-only presentations. This effect was dependent on learners’ prior knowledge, pacing of presentation, and inclusion of animation or diagrams. Specifically, the advantages of spoken–written presentations over spoken-only presentations were found for low prior knowledge learners, system-paced learning materials, and picture-free materials. In comparison with verbatim, spoken–written presentations, presentations displaying key terms extracted from spoken narrations were associated with better learning outcomes and accounted for much of the advantage of spoken–written over spoken-only presentations. These findings have significant implications for the design of multimedia materials.",
"title": ""
},
{
"docid": "160aef1e152fcf761a813dcabc33f1d4",
"text": "Three different samples (total N = 485) participated in the development and refinement ofthe Leadership Scale for Sports (LSS). A five-factor solution with 40 items describing the most salient dimensions of coaching behavior was selected as the most meaningful. These factors were named Training and Instruction, Democratic Behavior, Autocratic Behavior, Social Support, and Positive Feedback. Internal consistency estimates ranged from .45 to .93 and the test-retest reliability coefficients ranged from .71 to .82. The relative stability of the factor structure across the different samples confirmed the factorial validity ofthe scale. The interpretation ofthe factors established the content validity of the scale. Finally, possible uses of the LSS were pointed out.",
"title": ""
},
{
"docid": "84be70157c6a6707d8c5621c9b7aed82",
"text": "Depression is associated with significant disability, mortality and healthcare costs. It is the third leading cause of disability in high-income countries, 1 and affects approximately 840 million people worldwide. 2 Although biological, psychological and environmental theories have been advanced, 3 the underlying pathophysiology of depression remains unknown and it is probable that several different mechanisms are involved. Vitamin D is a unique neurosteroid hormone that may have an important role in the development of depression. Receptors for vitamin D are present on neurons and glia in many areas of the brain including the cingulate cortex and hippocampus, which have been implicated in the pathophysiology of depression. 4 Vitamin D is involved in numerous brain processes including neuroimmuno-modulation, regulation of neurotrophic factors, neuroprotection, neuroplasticity and brain development, 5 making it biologically plausible that this vitamin might be associated with depression and that its supplementation might play an important part in the treatment of depression. Over two-thirds of the populations of the USA and Canada have suboptimal levels of vitamin D. 6,7 Some studies have demonstrated a strong relationship between vitamin D and depression, 8,9 whereas others have shown no relationship. 10,11 To date there have been eight narrative reviews on this topic, 12–19 with the majority of reviews reporting that there is insufficient evidence for an association between vitamin D and depression. None of these reviews used a comprehensive search strategy, provided inclusion or exclusion criteria, assessed risk of bias or combined study findings. In addition, several recent studies were not included in these reviews. 9,10,20,21 Therefore, we undertook a systematic review and meta-analysis to investigate whether vitamin D deficiency is associated with depression in adults in case–control and cross-sectional studies; whether vitamin D deficiency increases the risk of developing depression in cohort studies in adults; and whether vitamin D supplementation improves depressive symptoms in adults with depression compared with placebo, or prevents depression compared with placebo, in healthy adults in randomised controlled trials (RCTs). We searched the databases MEDLINE, EMBASE, PsycINFO, CINAHL, AMED and Cochrane CENTRAL (up to 2 February 2011) using separate comprehensive strategies developed in consultation with an experienced research librarian (see online supplement DS1). A separate search of PubMed identified articles published electronically prior to print publication within 6 months of our search and therefore not available through MEDLINE. The clinical trials registries clinicaltrials.gov and Current Controlled Trials (controlled-trials.com) were searched for unpublished data. The reference lists …",
"title": ""
},
{
"docid": "2e93a604d1ec6786610780cf2e5e4eea",
"text": "The new era of the Internet of Things is driving the evolution of conventional Vehicle Ad-hoc Networks into the Internet of Vehicles (IoV). With the rapid development of computation and communication technologies, IoV promises huge commercial interest and research value, thereby attracting a large number of companies and researchers. This paper proposes an abstract network model of the IoV, discusses the technologies required to create the IoV, presents different applications based on certain currently existing technologies, provides several open research challenges and describes essential future research in the area of IoV.",
"title": ""
},
{
"docid": "6b19d08c9aa6ecfec27452a298353e1f",
"text": "This paper presents the recent development in automatic vision based technology. Use of this technology is increasing in agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defected tomato detection discussed here. The main aim of this system is to replace the manual inspection system. This helps in speed up the process improve accuracy and efficiency and reduce time. This system collect image from camera which is placed on conveyor belt. Then image processing is done to get required features of fruits such as texture, color and size. Defected fruit is detected based on blob detection, color detection is done based on thresholding and size detection is based on binary image of tomato. Sorting is done based on color and grading is done based on size.",
"title": ""
},
{
"docid": "3bd2941e72695b3214247c8c7071410b",
"text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of shortvs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.",
"title": ""
},
{
"docid": "36ad6c19ba2c6525e75c8d42835c3b9f",
"text": "Characteristics of physical activity are indicative of one's mobility level, latent chronic diseases and aging process. Accelerometers have been widely accepted as useful and practical sensors for wearable devices to measure and assess physical activity. This paper reviews the development of wearable accelerometry-based motion detectors. The principle of accelerometry measurement, sensor properties and sensor placements are first introduced. Various research using accelerometry-based wearable motion detectors for physical activity monitoring and assessment, including posture and movement classification, estimation of energy expenditure, fall detection and balance control evaluation, are also reviewed. Finally this paper reviews and compares existing commercial products to provide a comprehensive outlook of current development status and possible emerging technologies.",
"title": ""
},
{
"docid": "36357f48cbc3ed4679c679dcb77bdd81",
"text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.",
"title": ""
},
{
"docid": "8646250c89ed7d4542c395a1d0daa384",
"text": "What makes a good recommendation or good list of recommendations?\n Research into recommender systems has traditionally focused on accuracy, in particular how closely the recommender’s predicted ratings are to the users’ true ratings. However, it has been recognized that other recommendation qualities—such as whether the list of recommendations is diverse and whether it contains novel items—may have a significant impact on the overall quality of a recommender system. Consequently, in recent years, the focus of recommender systems research has shifted to include a wider range of “beyond accuracy” objectives.\n In this article, we present a survey of the most discussed beyond-accuracy objectives in recommender systems research: diversity, serendipity, novelty, and coverage. We review the definitions of these objectives and corresponding metrics found in the literature. We also review works that propose optimization strategies for these beyond-accuracy objectives. Since the majority of works focus on one specific objective, we find that it is not clear how the different objectives relate to each other.\n Hence, we conduct a set of offline experiments aimed at comparing the performance of different optimization approaches with a view to seeing how they affect objectives other than the ones they are optimizing. We use a set of state-of-the-art recommendation algorithms optimized for recall along with a number of reranking strategies for optimizing the diversity, novelty, and serendipity of the generated recommendations. For each reranking strategy, we measure the effects on the other beyond-accuracy objectives and demonstrate important insights into the correlations between the discussed objectives. For instance, we find that rating-based diversity is positively correlated with novelty, and we demonstrate the positive influence of novelty on recommendation coverage.",
"title": ""
},
{
"docid": "8520c513d34ba33dfa4aa3fc621bccb6",
"text": "Current avionics architectures implemented on large aircraft use complex processors, which are shared by many avionics applications according Integrated Modular Avionics (IMA) concepts. Using less complex processors on smaller aircraft such as helicopters leads to a distributed IMA architecture. Allocation of the avionics applications on a distributed architecture has to deal with two main challenges. A first problem is about the feasibility of a static allocation of partitions on each processing element. The second problem is the worst-case end-to-end communication delay analysis: due to the scheduling of partitions on processing elements which are not synchronized, some allocation schemes are not valid. This paper first presents a mapping algorithm using an integrated approach taking into account these two issues. In a second step, we evaluate, on a realistic helicopter case study, the feasibility of mapping a given application on a variable number of processing elements. Finally, we present a scalability analysis of the proposed mapping algorithm.",
"title": ""
},
{
"docid": "4a2da43301b0a81801bf3acd4bd5c32d",
"text": "Video description is the automatic generation of natural language sentences that describe the contents of a given video. It has applications in human-robot interaction, helping the visually impaired and video subtitling. The past few years have seen a surge of research in this area due to the unprecedented success of deep learning in computer vision and natural language processing. Numerous methods, datasets and evaluation metrics have been proposed in the literature, calling the need for a comprehensive survey to focus research efforts in this flourishing new direction. This paper fills the gap by surveying the state of the art approaches with a focus on deep learning models; comparing benchmark datasets in terms of their domains, number of classes, and repository size; and identifying the pros and cons of various evaluation metrics like SPICE, CIDEr, ROUGE, BLEU, METEOR, and WMD. Classical video description approaches combined subject, object and verb detection with template based language models to generate sentences. However, the release of large datasets revealed that these methods can not cope with the diversity in unconstrained open domain videos. Classical approaches were followed by a very short era of statistical methods which were soon replaced with deep learning, the current state of the art in video description. Our survey shows that despite the fast-paced developments, video description research is still in its infancy due to the following reasons. Analysis of video description models is challenging because it is difficult to ascertain the contributions, towards accuracy or errors, of the visual features and the adopted language model in the final description. Existing datasets neither contain adequate visual diversity nor complexity of linguistic structures. Finally, current evaluation metrics fall short of measuring the agreement between machine generated descriptions with that of humans. We conclude our survey by listing promising future research directions.",
"title": ""
},
{
"docid": "d4ad294e898c1686a0265f6683fbc32b",
"text": "Fractional differential equations have recently been applied in various area of engineering, science, finance, applied mathematics, bio-engineering and others. However, many researchers remain unaware of this field. In this paper, an efficient numerical method for solving the fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The method is based upon Legendre approximations. The properties of Legendre polynomials are utilized to reduce FDDEs to linear or nonlinear system of algebraic equations. Numerical simulation with the exact solutions of FDDEs is presented. AMS Subject Classification: 34A08, 34K37",
"title": ""
},
{
"docid": "640493ad47632e0ef74dd5c528b29eef",
"text": "Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices.",
"title": ""
},
{
"docid": "5bebef3a6ca0d595b6b3232e18f8789f",
"text": "The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.",
"title": ""
},
{
"docid": "2c48e9908078ca192ff191121ce90e21",
"text": "In the hierarchy of data, information and knowledge, computational methods play a major role in the initial processing of data to extract information, but they alone become less effective to compile knowledge from information. The Kyoto Encyclopedia of Genes and Genomes (KEGG) resource (http://www.kegg.jp/ or http://www.genome.jp/kegg/) has been developed as a reference knowledge base to assist this latter process. In particular, the KEGG pathway maps are widely used for biological interpretation of genome sequences and other high-throughput data. The link from genomes to pathways is made through the KEGG Orthology system, a collection of manually defined ortholog groups identified by K numbers. To better automate this interpretation process the KEGG modules defined by Boolean expressions of K numbers have been expanded and improved. Once genes in a genome are annotated with K numbers, the KEGG modules can be computationally evaluated revealing metabolic capacities and other phenotypic features. The reaction modules, which represent chemical units of reactions, have been used to analyze design principles of metabolic networks and also to improve the definition of K numbers and associated annotations. For translational bioinformatics, the KEGG MEDICUS resource has been developed by integrating drug labels (package inserts) used in society.",
"title": ""
},
{
"docid": "6712ec987ec783dceae442993be0cd6e",
"text": "In this paper, we have conducted a literature review on the recent developments and publications involving the vehicle routing problem and its variants, namely vehicle routing problem with time windows (VRPTW) and the capacitated vehicle routing problem (CVRP) and also their variants. The VRP is classified as an NP-hard problem. Hence, the use of exact optimization methods may be difficult to solve these problems in acceptable CPU times, when the problem involves real-world data sets that are very large. The vehicle routing problem comes under combinatorial problem. Hence, to get solutions in determining routes which are realistic and very close to the optimal solution, we use heuristics and meta-heuristics. In this paper we discuss the various exact methods and the heuristics and meta-heuristics used to solve the VRP and its variants.",
"title": ""
},
{
"docid": "e864d77d8ec75b632518ad51a6862453",
"text": "This paper presents the development and experimental performance of a 10 kW high power density three-phase ac-dc-ac converter. The converter consists of a Vienna-type rectifier front-end and a two-level voltage source inverter (VSI). In order to reduce the switching loss and achieve a high operating junction temperature, a SiC JFET and SiC Schottky diode are utilized. Design considerations for the phase-leg units, gate drivers, integrated input filter — combining EMI and boost inductor stages — and the system protection are described in full detail. Experiments are carried out under different operating conditions, and the results obtained verify the performance and feasibility of the proposed converter system.",
"title": ""
},
{
"docid": "03fa5f5f6b6f307fc968a2b543e331a1",
"text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed and analyze and compare the above mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.",
"title": ""
},
{
"docid": "c15fdbcd454a2293a6745421ad397e04",
"text": "The amount of research related to Internet marketing has grown rapidly since the dawn of the Internet Age. A review of the literature base will help identify the topics that have been explored as well as identify topics for further research. This research project collects, synthesizes, and analyses both the research strategies (i.e., methodologies) and content (e.g., topics, focus, categories) of the current literature, and then discusses an agenda for future research efforts. We analyzed 411 articles published over the past eighteen years (1994-present) in thirty top Information Systems (IS) journals and 22 articles in the top 5 Marketing journals. The results indicate an increasing level of activity during the 18-year period, a biased distribution of Internet marketing articles focused on exploratory methodologies, and several research strategies that were either underrepresented or absent from the pool of Internet marketing research. We also identified several subject areas that need further exploration. The compilation of the methodologies used and Internet marketing topics being studied can serve to motivate researchers to strengthen current research and explore new areas of this research.",
"title": ""
}
] | scidocsrr |
e461a00ceb5f8937f05bf68665b57ec8 | Rumor Identification and Belief Investigation on Twitter | [
{
"docid": "0c886080015642aa5b7c103adcd2a81d",
"text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.",
"title": ""
},
{
"docid": "860894abbbafdcb71178cb9ddd173970",
"text": "Twitter is useful in a situation of disaster for communication, announcement, request for rescue and so on. On the other hand, it causes a negative by-product, spreading rumors. This paper describe how rumors have spread after a disaster of earthquake, and discuss how can we deal with them. We first investigated actual instances of rumor after the disaster. And then we attempted to disclose characteristics of those rumors. Based on the investigation we developed a system which detects candidates of rumor from twitter and then evaluated it. The result of experiment shows the proposed algorithm can find rumors with acceptable accuracy.",
"title": ""
}
] | [
{
"docid": "45390290974f347d559cd7e28c33c993",
"text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.",
"title": ""
},
{
"docid": "0c67628fb24c8cbd4a8e49fb30ba625e",
"text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is its modeling the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncovers some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.",
"title": ""
},
{
"docid": "fc62e84fc995deb1932b12821dfc0ada",
"text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.",
"title": ""
},
{
"docid": "e4405c71336ea13ccbd43aa84651dc60",
"text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.",
"title": ""
},
{
"docid": "bc30cd034185df96d20174b9719f3177",
"text": "Toxicity in online environments is a complex and a systemic issue. Esports communities seem to be particularly suffering from toxic behaviors. Especially in competitive esports games, negative behavior, such as harassment, can create barriers to players achieving high performance and can reduce players' enjoyment which may cause them to leave the game. The aim of this study is to review design approaches in six major esports games to deal with toxic behaviors and to investigate how players perceive and deal with toxicity in those games. Our preliminary findings from an interview study with 17 participants (3 female) from a university esports club show that players define toxicity as behaviors disrupt their morale and team dynamics, and participants are inclined to normalize negative behaviors and rationalize it as part of the competitive game culture. If they choose to take an action against toxic players, they are likely to ostracize toxic players.",
"title": ""
},
{
"docid": "cbe1dc1b56716f57fca0977383e35482",
"text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.",
"title": ""
},
{
"docid": "6a8ac2a2786371dcb043d92fa522b726",
"text": "We propose a modular reinforcement learning algorithm which decomposes a Markov decision process into independent modules. Each module is trained using Sarsa(λ). We introduce three algorithms for forming global policy from modules policies, and demonstrate our results using a 2D grid world.",
"title": ""
},
{
"docid": "f264d5b90dfb774e9ec2ad055c4ebe62",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "57d162c64d93b28f6be1e086b5a1c134",
"text": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.",
"title": ""
},
{
"docid": "2d7892534b0e279a426e3fdbc3849454",
"text": "What do we see when we glance at a natural scene and how does it change as the glance becomes longer? We asked naive subjects to report in a free-form format what they saw when looking at briefly presented real-life photographs. Our subjects received no specific information as to the content of each stimulus. Thus, our paradigm differs from previous studies where subjects were cued before a picture was presented and/or were probed with multiple-choice questions. In the first stage, 90 novel grayscale photographs were foveally shown to a group of 22 native-English-speaking subjects. The presentation time was chosen at random from a set of seven possible times (from 27 to 500 ms). A perceptual mask followed each photograph immediately. After each presentation, subjects reported what they had just seen as completely and truthfully as possible. In the second stage, another group of naive individuals was instructed to score each of the descriptions produced by the subjects in the first stage. Individual scores were assigned to more than a hundred different attributes. We show that within a single glance, much object- and scene-level information is perceived by human subjects. The richness of our perception, though, seems asymmetrical. Subjects tend to have a propensity toward perceiving natural scenes as being outdoor rather than indoor. The reporting of sensory- or feature-level information of a scene (such as shading and shape) consistently precedes the reporting of the semantic-level information. But once subjects recognize more semantic-level components of a scene, there is little evidence suggesting any bias toward either scene-level or object-level recognition.",
"title": ""
},
{
"docid": "5cc26542d0f4602b2b257e19443839b3",
"text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.",
"title": ""
},
{
"docid": "4704f3ed7a5d5d9b244689019025730f",
"text": "To address the need for fundamental universally valid definitions of exact bandwidth and quality factor (Q) of tuned antennas, as well as the need for efficient accurate approximate formulas for computing this bandwidth and Q, exact and approximate expressions are found for the bandwidth and Q of a general single-feed (one-port) lossy or lossless linear antenna tuned to resonance or antiresonance. The approximate expression derived for the exact bandwidth of a tuned antenna differs from previous approximate expressions in that it is inversely proportional to the magnitude |Z'/sub 0/(/spl omega//sub 0/)| of the frequency derivative of the input impedance and, for not too large a bandwidth, it is nearly equal to the exact bandwidth of the tuned antenna at every frequency /spl omega//sub 0/, that is, throughout antiresonant as well as resonant frequency bands. It is also shown that an appropriately defined exact Q of a tuned lossy or lossless antenna is approximately proportional to |Z'/sub 0/(/spl omega//sub 0/)| and thus this Q is approximately inversely proportional to the bandwidth (for not too large a bandwidth) of a simply tuned antenna at all frequencies. The exact Q of a tuned antenna is defined in terms of average internal energies that emerge naturally from Maxwell's equations applied to the tuned antenna. These internal energies, which are similar but not identical to previously defined quality-factor energies, and the associated Q are proven to increase without bound as the size of an antenna is decreased. Numerical solutions to thin straight-wire and wire-loop lossy and lossless antennas, as well as to a Yagi antenna and a straight-wire antenna embedded in a lossy dispersive dielectric, confirm the accuracy of the approximate expressions and the inverse relationship between the defined bandwidth and the defined Q over frequency ranges that cover several resonant and antiresonant frequency bands.",
"title": ""
},
{
"docid": "82917c4e6fb56587cc395078c14f3bb7",
"text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.",
"title": ""
},
{
"docid": "70d874f2f919c6749c4105f35776532b",
"text": "Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Hadoop is one such open-source framework that is enjoying widespread adoption. In this paper, we detail an approach to indexing and performing key analytics on spatial data that is persisted in HDFS. Our technique differs from other approaches in that it combines spatial indexing, data load balancing, and data clustering in order to optimize performance across the cluster. In addition, our index supports efficient, random-access queries without requiring a MapReduce job; neither a full table scan, nor any MapReduce overhead is incurred when searching. This facilitates large numbers of concurrent query executions. We will also demonstrate how indexing and clustering positively impacts the performance of range and k-NN queries on large real-world datasets. The performance analysis will enable a number of interesting observations to be made on the behavior of spatial indexes and spatial queries in this distributed processing environment.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "865d7b8fae1cab739570229889177d58",
"text": "This paper presents design and implementation of scalar control of induction motor. This method leads to be able to adjust the speed of the motor by control the frequency and amplitude of the stator voltage of induction motor, the ratio of stator voltage to frequency should be kept constant, which is called as V/F or scalar control of induction motor drive. This paper presents a comparative study of open loop and close loop V/F control induction motor. The V/F",
"title": ""
},
{
"docid": "1f7fb5da093f0f0b69b1cc368cea0701",
"text": "This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on \"what\" and \"where\" channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.",
"title": ""
},
{
"docid": "86f25f09b801d28ce32f1257a39ddd44",
"text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.",
"title": ""
},
{
"docid": "8fc87a5f89792b3ea69833dcae90cd6e",
"text": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. An overview of participating systems is provided and their results are summarized.",
"title": ""
},
{
"docid": "1c2cc1120129eca44443a637c0f06729",
"text": "Direct volume rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current transfer function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce partial range histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF",
"title": ""
}
] | scidocsrr |
36f0805e18c045c98071bd2b1469cab3 | Fake News Detection Enhancement with Data Imputation | [
{
"docid": "3f9c720773146d83c61cfbbada3938c4",
"text": "How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
}
] | [
{
"docid": "2e731a5ec3abb71ad0bc3253e2688fbe",
"text": "Post-task ratings of difficulty in a usability test have the potential to provide diagnostic information and be an additional measure of user satisfaction. But the ratings need to be reliable as well as easy to use for both respondents and researchers. Three one-question rating types were compared in a study with 26 participants who attempted the same five tasks with two software applications. The types were a Likert scale, a Usability Magnitude Estimation (UME) judgment, and a Subjective Mental Effort Question (SMEQ). All three types could distinguish between the applications with 26 participants, but the Likert and SMEQ types were more sensitive with small sample sizes. Both the Likert and SMEQ types were easy to learn and quick to execute. The online version of the SMEQ question was highly correlated with other measures and had equal sensitivity to the Likert question type.",
"title": ""
},
{
"docid": "d5b4018422fdee8d3f4f33343f3de290",
"text": "Given a pair of handwritten documents written by different individuals, compute a document similarity score irrespective of (i) handwritten styles, (ii) word forms, word ordering and word overflow. • IIIT-HWS: Introducing a large scale synthetic corpus of handwritten word images for enabling deep architectures. • HWNet: A deep CNN architecture for state of the art handwritten word spotting in multi-writer scenarios. • MODS: Measure of document similarity score irrespective of word forms, ordering and paraphrasing of the content. • Applications in Educational Scenario: Comparing handwritten assignments, searching through instructional videos. 2. Contributions 3. Challenges",
"title": ""
},
{
"docid": "c4256017c214eabda8e5b47c604e0e49",
"text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.",
"title": ""
},
{
"docid": "016f1963dc657a6148afc7f067ad7c82",
"text": "Privacy protection is a crucial problem in many biomedical signal processing applications. For this reason, particular attention has been given to the use of secure multiparty computation techniques for processing biomedical signals, whereby nontrusted parties are able to manipulate the signals although they are encrypted. This paper focuses on the development of a privacy preserving automatic diagnosis system whereby a remote server classifies a biomedical signal provided by the client without getting any information about the signal itself and the final result of the classification. Specifically, we present and compare two methods for the secure classification of electrocardiogram (ECG) signals: the former based on linear branching programs (a particular kind of decision tree) and the latter relying on neural networks. The paper deals with all the requirements and difficulties related to working with data that must stay encrypted during all the computation steps, including the necessity of working with fixed point arithmetic with no truncation while guaranteeing the same performance of a floating point implementation in the plain domain. A highly efficient version of the underlying cryptographic primitives is used, ensuring a good efficiency of the two proposed methods, from both a communication and computational complexity perspectives. The proposed systems prove that carrying out complex tasks like ECG classification in the encrypted domain efficiently is indeed possible in the semihonest model, paving the way to interesting future applications wherein privacy of signal owners is protected by applying high security standards.",
"title": ""
},
{
"docid": "412c61657893bb1ed2f579936d47dc02",
"text": "In this paper, we propose a simple yet effective method for multiple music source separation using convolutional neural networks. Stacked hourglass network, which was originally designed for human pose estimation in natural images, is applied to a music source separation task. The network learns features from a spectrogram image across multiple scales and generates masks for each music source. The estimated mask is refined as it passes over stacked hourglass modules. The proposed framework is able to separate multiple music sources using a single network. Experimental results on MIR-1K and DSD100 datasets validate that the proposed method achieves competitive results comparable to the state-of-the-art methods in multiple music source separation and singing voice separation tasks.",
"title": ""
},
{
"docid": "042fcc75e4541d27b97e8c2fe02a2ddf",
"text": "Folk medicine suggests that pomegranate (peels, seeds and leaves) has anti-inflammatory properties; however, the precise mechanisms by which this plant affects the inflammatory process remain unclear. Herein, we analyzed the anti-inflammatory properties of a hydroalcoholic extract prepared from pomegranate leaves using a rat model of lipopolysaccharide-induced acute peritonitis. Male Wistar rats were treated with either the hydroalcoholic extract, sodium diclofenac, or saline, and 1 h later received an intraperitoneal injection of lipopolysaccharides. Saline-injected animals (i. p.) were used as controls. Animals were culled 4 h after peritonitis induction, and peritoneal lavage and peripheral blood samples were collected. Serum and peritoneal lavage levels of TNF-α as well as TNF-α mRNA expression in peritoneal lavage leukocytes were quantified. Total and differential leukocyte populations were analyzed in peritoneal lavage samples. Lipopolysaccharide-induced increases of both TNF-α mRNA and protein levels were diminished by treatment with either pomegranate leaf hydroalcoholic extract (57 % and 48 % mean reduction, respectively) or sodium diclofenac (41 % and 33 % reduction, respectively). Additionally, the numbers of peritoneal leukocytes, especially neutrophils, were markedly reduced in hydroalcoholic extract-treated rats with acute peritonitis. These results demonstrate that pomegranate leaf extract may be used as an anti-inflammatory drug which suppresses the levels of TNF-α in acute inflammation.",
"title": ""
},
{
"docid": "ee1bc0f1681240eaf630421bb2e6fbb2",
"text": "The Caltech Multi-Vehicle Wireless Testbed (MVWT) is a platform designed to explore theoretical advances in multi-vehicle coordination and control, networked control systems and high confidence distributed computation. The contribution of this report is to present simulation and experimental results on the generation and implementation of optimal trajectories for the MVWT vehicles. The vehicles are nonlinear, spatially constrained and their input controls are bounded. The trajectories are generated using the NTG software package developed at Caltech. Minimum time trajectories and the application of Model Predictive Control (MPC) are investigated.",
"title": ""
},
{
"docid": "8fd049da24568dea2227483415532f9b",
"text": "The notion of “semiotic scaffolding”, introduced into the semiotic discussions by Jesper Hoffmeyer in December of 2000, is proving to be one of the single most important concepts for the development of semiotics as we seek to understand the full extent of semiosis and the dependence of evolution, particularly in the living world, thereon. I say “particularly in the living world”, because there has been from the first a stubborn resistance among semioticians to seeing how a semiosis prior to and/or independent of living beings is possible. Yet the universe began in a state not only lifeless but incapable of supporting life, and somehow “moved” from there in the direction of being able to sustain life and finally of actually doing so. Wherever dyadic interactions result indirectly in a new condition that either moves the universe closer to being able to sustain life, or moves life itself in the direction not merely of sustaining itself but opening the way to new forms of life, we encounter a “thirdness” in nature of exactly the sort that semiosic triadicity alone can explain. This is the process, both within and without the living world, that requires scaffolding. This essay argues that a fuller understanding of this concept shows why “semiosis” says clearly what “evolution” says obscurely.",
"title": ""
},
{
"docid": "2c3877232f71fe80a21fff336256c627",
"text": "OBJECTIVE\nTo examine the association between absence of the nasal bone at the 11-14-week ultrasound scan and chromosomal defects.\n\n\nMETHODS\nUltrasound examination was carried out in 3829 fetuses at 11-14 weeks' gestation immediately before fetal karyotyping. At the scan the fetal crown-rump length (CRL) and nuchal translucency (NT) thickness were measured and the fetal profile was examined for the presence or absence of the nasal bone. Maternal characteristics including ethnic origin were also recorded.\n\n\nRESULTS\nThe fetal profile was successfully examined in 3788 (98.9%) cases. In 3358/3788 cases the fetal karyotype was normal and in 430 it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related firstly to the ethnic origin of the mother (2.8% for Caucasians, 10.4% for Afro-Caribbeans and 6.8% for Asians), secondly to fetal CRL (4.6% for CRL of 45-54 mm, 3.9% for CRL of 55-64 mm, 1.5% for CRL of 65-74 mm and 1.0% for CRL of 75-84 mm) and thirdly, to NT thickness, (1.8% for NT < 2.5 mm, 3.4% for NT 2.5-3.4 mm, 5.0% for NT 3.5-4.4 mm and 11.8% for NT > or = 4.5 mm. In the chromosomally abnormal group the nasal bone was absent in 161/242 (66.9%) with trisomy 21, in 48/84 (57.1%) with trisomy 18, in 7/22 (31.8%) with trisomy 13, in 3/34 (8.8%) with Turner syndrome and in 4/48 (8.3%) with other defects.\n\n\nCONCLUSION\nAt the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT thickness and ethnic origin.",
"title": ""
},
{
"docid": "6064bdefac3e861bcd46fa303b0756be",
"text": "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.",
"title": ""
},
{
"docid": "86f1142b9ee47dfbe110907d72a4803a",
"text": "Existing studies on semantic parsing mainly focus on the in-domain setting. We formulate cross-domain semantic parsing as a domain adaptation problem: train a semantic parser on some source domains and then adapt it to the target domain. Due to the diversity of logical forms in different domains, this problem presents unique and intriguing challenges. By converting logical forms into canonical utterances in natural language, we reduce semantic parsing to paraphrasing, and develop an attentive sequence-to-sequence paraphrase model that is general and flexible to adapt to different domains. We discover two problems, small micro variance and large macro variance, of pretrained word embeddings that hinder their direct use in neural networks, and propose standardization techniques as a remedy. On the popular OVERNIGHT dataset, which contains eight domains, we show that both cross-domain training and standardized pre-trained word embedding can bring significant improvement.",
"title": ""
},
{
"docid": "5f5c74e55d9be8cbad43c543d6898f3d",
"text": "Multi robot systems are envisioned to play an important role in many robotic applications. A main prerequisite for a team deployed in a wide unknown area is the capability of autonomously navigate, exploiting the information acquired through the on-line estimation of both robot poses and surrounding environment model, according to Simultaneous Localization And Mapping (SLAM) framework. As team coordination is improved, distributed techniques for filtering are required in order to enhance autonomous exploration and large scale SLAM increasing both efficiency and robustness of operation. Although Rao-Blackwellized Particle Filters (RBPF) have been demonstrated to be an effective solution to the problem of single robot SLAM, few extensions to teams of robots exist, and these approaches are characterized by strict assumptions on both communication bandwidth and prior knowledge on relative poses of the teammates. In the present paper we address the problem of multi robot SLAM in the case of limited communication and unknown relative initial poses. Starting from the well established single robot RBPF-SLAM, we propose a simple technique which jointly estimates SLAM posterior of the robots by fusing the prioceptive and the eteroceptive information acquired by each teammate. The approach intrinsically reduces the amount of data to be exchanged among the robots, while taking into account the uncertainty in relative pose measurements. Moreover it can be naturally extended to different communication technologies (bluetooth, RFId, wifi, etc.) regardless their sensing range. The proposed approach is validated through experimental test.",
"title": ""
},
{
"docid": "5f8ddaa9130446373a9a5d44c17ca604",
"text": "Object detection is a crucial task for autonomous driving. In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires realtime inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment.,,,,,, In this work, we propose SqueezeDet, a fully convolutional neural network for object detection that aims to simultaneously satisfy all of the above constraints. In our network we use convolutional layers not only to extract feature maps, but also as the output layer to compute bounding boxes and class probabilities. The detection pipeline of our model only contains a single forward pass of a neural network, thus it is extremely fast. Our model is fullyconvolutional, which leads to small model size and better energy efficiency. Finally, our experiments show that our model is very accurate, achieving state-of-the-art accuracy on the KITTI [10] benchmark. The source code of SqueezeDet is open-source released.",
"title": ""
},
{
"docid": "48cdea9a78353111d236f6d0f822dc3a",
"text": "Support vector machines (SVMs) with the gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyper parameters: the penalty parameter C and the kernel width . This letter analyzes the behavior of the SVM classifier when these hyper parameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the gaussian kernel has been conducted, there is no need to consider linear SVM.",
"title": ""
},
{
"docid": "e22495967c8ed452f552ab79fa7333be",
"text": "Recently developed object detectors employ a convolutional neural network (CNN) by gradually increasing the number of feature layers with a pyramidal shape instead of using a featurized image pyramid. However, the different abstraction levels of CNN feature layers often limit the detection performance, especially on small objects. To overcome this limitation, we propose a CNN-based object detection architecture, referred to as a parallel feature pyramid (FP) network (PFPNet), where the FP is constructed by widening the network width instead of increasing the network depth. First, we adopt spatial pyramid pooling and some additional feature transformations to generate a pool of feature maps with different sizes. In PFPNet, the additional feature transformation is performed in parallel, which yields the feature maps with similar levels of semantic abstraction across the scales. We then resize the elements of the feature pool to a uniform size and aggregate their contextual information to generate each level of the final FP. The experimental results confirmed that PFPNet increases the performance of the latest version of the single-shot multi-box detector (SSD) by mAP of 6.4% AP and especially, 7.8% APsmall on the MS-COCO dataset.",
"title": ""
},
{
"docid": "18011cbde7d1a16da234c1e886371a6c",
"text": "The increased prevalence of cardiovascular disease among the aging population has prompted greater interest in the field of smart home monitoring and unobtrusive cardiac measurements. This paper introduces the design of a capacitive electrocardiogram (ECG) sensor that measures heart rate with no conscious effort from the user. The sensor consists of two active electrodes and an analog processing circuit that is low cost and customizable to the surfaces of common household objects. Prototype testing was performed in a home laboratory by embedding the sensor into a couch, walker, office and dining chairs. The sensor produced highly accurate heart rate measurements (<; 2.3% error) via either direct skin contact or through one and two layers of clothing. The sensor requires no gel dielectric and no grounding electrode, making it particularly suited to the “zero-effort” nature of an autonomous smart home environment. Motion artifacts caused by deviations in body contact with the electrodes were identified as the largest source of unreliability in continuous ECG measurements and will be a primary focus in the next phase of this project.",
"title": ""
},
{
"docid": "4fe8d9aef1b635d0dc8e6d1dfb3b17f3",
"text": "A fully differential architecture from the antenna to the integrated circuit is proposed for radio transceivers in this paper. The physical implementation of the architecture into truly single-chip radio transceivers is described for the first time. Two key building blocks, the differential antenna and the differential transmit-receive (T-R) switch, were designed, fabricated, and tested. The differential antenna implemented in a package in low-temperature cofired-ceramic technology achieved impedance bandwidth of 2%, radiation efficiency of 84%, and gain of 3.2 dBi at 5.425 GHz in a size of 15 x 15 x 1.6 mm3. The differential T-R switch in a standard complementary metal-oxide-semiconductor technology achieved 1.8-dB insertion loss, 15-dB isolation, and 15-dBm 1-dB power compression point (P 1dB) without using additional techniques to enhance the linearity at 5.425 GHz in a die area of 60 x 40 mum2.",
"title": ""
},
{
"docid": "3473e863d335725776281fe2082b756f",
"text": "Visual tracking using multiple features has been proved as a robust approach because features could complement each other. Since different types of variations such as illumination, occlusion, and pose may occur in a video sequence, especially long sequence videos, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which is able to perform feature fusion on various kernel spaces. As a result, robust tracking performance is obtained. Both the qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion based-trackers.",
"title": ""
},
{
"docid": "5c0b73ea742ce615d072ae5eedc7f56f",
"text": "Drama, at least according to the Aristotelian view, is effective inasmuch as it successfully mirrors real aspects of human behavior. This leads to the hypothesis that successful dramas will portray fictional social networks that have the same properties as those typical of human beings across ages and cultures. We outline a methodology for investigating this hypothesis and use it to examine ten of Shakespeare's plays. The cliques and groups portrayed in the plays correspond closely to those which have been observed in spontaneous human interaction, including in hunter-gatherer societies, and the networks of the plays exhibit \"small world\" properties of the type which have been observed in many human-made and natural systems.",
"title": ""
},
{
"docid": "4328277c12a0f2478446841f8082601f",
"text": "Short circuit protection remains one of the major technical barriers in DC microgrids. This paper reviews state of the art of DC solid state circuit breakers (SSCBs). A new concept of a self-powered SSCB using normally-on wideband gap (WBG) semiconductor devices as the main static switch is described in this paper. The new SSCB detects short circuit faults by sensing its terminal voltage rise, and draws power from the fault condition itself to turn and hold off the static switch. The new two-terminal SSCB can be directly placed in a circuit branch without requiring any external power supply or extra wiring. Challenges and future trends in protecting low voltage distribution microgrids against short circuit and other faults are discussed.",
"title": ""
}
] | scidocsrr |
75f6be81731dbdcf1264cddd7c47d904 | 28-Gbaud PAM4 and 56-Gb/s NRZ Performance Comparison Using 1310-nm Al-BH DFB Laser | [
{
"docid": "30b1b4df0901ab61ab7e4cfb094589d1",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
}
] | [
{
"docid": "8af28adbd019c07a9d27e6189abecb7a",
"text": "We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0% F-Score accuracy compared to an F-Score of 66.7% produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing).1",
"title": ""
},
{
"docid": "68bcc07055db2c714001ccb5f8dc96a7",
"text": "The concept of an antipodal bipolar fuzzy graph of a given bipolar fuzzy graph is introduced. Characterizations of antipodal bipolar fuzzy graphs are presented when the bipolar fuzzy graph is complete or strong. Some isomorphic properties of antipodal bipolar fuzzy graph are discussed. The notion of self median bipolar fuzzy graphs of a given bipolar fuzzy graph is also introduced.",
"title": ""
},
{
"docid": "ab2f3971a6b4f2f22e911b2367d1ef9e",
"text": "Many multimedia systems stream real-time visual data continuously for a wide variety of applications. These systems can produce vast amounts of data, but few studies take advantage of the versatile and real-time data. This paper presents a novel model based on the Convolutional Neural Networks (CNNs) to handle such imbalanced and heterogeneous data and successfully identifies the semantic concepts in these multimedia systems. The proposed model can discover the semantic concepts from the data with a skewed distribution using a dynamic sampling technique. The paper also presents a system that can retrieve real-time visual data from heterogeneous cameras, and the run-time environment allows the analysis programs to process the data from thousands of cameras simultaneously. The evaluation results in comparison with several state-of-the-art methods demonstrate the ability and effectiveness of the proposed model on visual data captured by public network cameras.",
"title": ""
},
{
"docid": "4b3425ce40e46b7a595d389d61daca06",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "9b7b5d50a6578342f230b291bfeea443",
"text": "Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that complexity often takes the form of modularity in structure and functionality. Therefore, a hierarchical perspective can be essential to understanding complex ecological systems. But, how can such hierarchical approach help us with modeling spatially heterogeneous, nonlinear dynamic systems like landscapes, be they natural or human-dominated? In this paper, we present a spatially explicit hierarchical modeling approach to studying the patterns and processes of heterogeneous landscapes. We first discuss the theoretical basis for the modeling approach—the hierarchical patch dynamics (HPD) paradigm and the scaling ladder strategy, and then describe the general structure of a hierarchical urban landscape model (HPDM-PHX) which is developed using this modeling approach. In addition, we introduce a hierarchical patch dynamics modeling platform (HPD-MP), a software package that is designed to facilitate the development of spatial hierarchical models. We then illustrate the utility of HPD-MP through two examples: a hierarchical cellular automata model of land use change and a spatial multi-species population dynamics model. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "64a7885b10d3439feee616c359851ec4",
"text": "While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on 'action video games' produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning.",
"title": ""
},
{
"docid": "5987154caa7a77b20707a45f2f3b3545",
"text": "In community question answering (cQA), the quality of answers are determined by the matching degree between question-answer pairs and the correlation among the answers. In this paper, we show that the dependency between the answer quality labels also plays a pivotal role. To validate the effectiveness of label dependency, we propose two neural network-based models, with different combination modes of Convolutional Neural Networks, Long Short Term Memory and Conditional Random Fields. Extensive experiments are taken on the dataset released by the SemEval-2015 cQA shared task. The first model is a stacked ensemble of the networks. It achieves 58.96% on macro averaged F1, which improves the state-of-the-art neural networkbased method by 2.82% and outperforms the Top-1 system in the shared task by 1.77%. The second is a simple attention-based model whose input is the connection of the question and its corresponding answers. It produces promising results with 58.29% on overall F1 and gains the best performance on the Good and Bad categories.",
"title": ""
},
{
"docid": "51d534721e7003cf191189be37342394",
"text": "This paper addresses the problem of automatic player identification in broadcast sports videos filmed with a single side-view medium distance camera. Player identification in this setting is a challenging task because visual cues such as faces and jersey numbers are not clearly visible. Thus, this task requires sophisticated approaches to capture distinctive features from players to distinguish them. To this end, we use Convolutional Neural Networks (CNN) features extracted at multiple scales and encode them with an advanced pooling, called Fisher vector. We leverage it for exploring representations that have sufficient discriminatory power and ability to magnify subtle differences. We also analyze the distinguishing parts of the players and present a part based pooling approach to use these distinctive feature points. The resulting player representation is able to identify players even in difficult scenes. It achieves state-of-the-art results up to 96% on NBA basketball clips.",
"title": ""
},
{
"docid": "40a88d168ad559c1f68051e710c49d6b",
"text": "Modern robotic systems tend to get more complex sensors at their disposal, resulting in complex algorithms to process their data. For example, camera images are being used map their environment and plan their route. On the other hand, the robotic systems are becoming mobile more often and need to be as energy-efficient as possible; quadcopters are an example of this. These two trends interfere with each other: Data-intensive, complex algorithms require a lot of processing power, which is in general not energy-friendly nor mobile-friendly. In this paper, we describe how to move the complex algorithms to a computing platform that is not part of the mobile part of the setup, i.e. to offload the processing part to a base station. We use the ROS framework for this, as ROS provides a lot of existing computation solutions. On the mobile part of the system, our hard real-time execution framework, called LUNA, is used, to make it possible to run the loop controllers on it. The design of a `bridge node' is explained, which is used to connect the LUNA framework to ROS. The main issue to tackle is to subscribe to an arbitrary ROS topic at run-time, instead of defining the ROS topics at compile-time. Furthermore, it is shown that this principle is working and the requirements of network bandwidth are discussed.",
"title": ""
},
{
"docid": "fd48614d255b7c7bc7054b4d5de69a15",
"text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009",
"title": ""
},
{
"docid": "d4ab2085eec138f99d4d490b0cbf9e3a",
"text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.",
"title": ""
},
{
"docid": "1489207c35a613d38a4f9c06816604f0",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPMW with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "e2b8dd31dad42e82509a8df6cf21df11",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "42d2cdb17f23e22da74a405ccb71f09b",
"text": "Nostalgia is a psychological phenomenon we all can relate to but have a hard time to define. What characterizes the mental state of feeling nostalgia? What psychological function does it serve? Different published materials in a wide range of fields, from consumption research and sport science to clinical psychology, psychoanalysis and sociology, all have slightly different definition of this mental experience. Some claim it is a psychiatric disease giving melancholic emotions to a memory you would consider a happy one, while others state it enforces positivity in our mood. First in this paper a thorough review of the history of nostalgia is presented, then a look at the body of contemporary nostalgia research to see what it could be constituted of. Finally, we want to dig even deeper to see what is suggested by the literature in terms of triggers and functions. Some say that digitally recorded material like music and videos has a potential nostalgic component, which could trigger a reflection of the past in ways that was difficult before such inventions. Hinting towards that nostalgia as a cultural phenomenon is on a rising scene. Some authors say that odors have the strongest impact on nostalgic reverie due to activating it without too much cognitive appraisal. Cognitive neuropsychology has shed new light on a lot of human psychological phenomena‘s and even though empirical testing have been scarce in this field, it should get a fair scrutiny within this perspective as well and hopefully helping to clarify the definition of the word to ease future investigations, both scientifically speaking and in laymen‘s retro hysteria.",
"title": ""
},
{
"docid": "25eedd2defb9e0a0b22e44195a4b767b",
"text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"title": ""
},
{
"docid": "6f8e565aff657cbc1b65217d72ead3ab",
"text": "This paper explores patterns of adoption and use of information and communications technology (ICT) by small and medium sized enterprises (SMEs) in the southwest London and Thames Valley region of England. The paper presents preliminary results of a survey of around 400 SMEs drawn from four economically significant sectors in the region: food processing, transport and logistics, media and Internet services. The main objectives of the study were to explore ICT adoption and use patterns by SMEs, to identify factors enabling or inhibiting the successful adoption and use of ICT, and to explore the effectiveness of government policy mechanisms at national and regional levels. While our main result indicates a generally favourable attitude to ICT amongst the SMEs surveyed, it also suggests a failure to recognise ICT’s strategic potential. A surprising result was the overwhelming ignorance of regional, national and European Union wide policy initiatives to support SMEs. This strikes at the very heart of regional, national and European policy that have identified SMEs as requiring specific support mechanisms. Our findings from one of the UK’s most productive regions therefore have important implications for policy aimed at ICT adoption and use by SMEs.",
"title": ""
},
{
"docid": "8d197bf27af825b9972a490d3cc9934c",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "7b8dffab502fae2abbea65464e2727aa",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "e01d5be587c73aaa133acb3d8aaed996",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "b66dc21916352dc19a25073b32331edb",
"text": "We present a framework and algorithm to analyze first person RGBD videos captured from the robot while physically interacting with humans. Specifically, we explore reactions and interactions of persons facing a mobile robot from a robot centric view. This new perspective offers social awareness to the robots, enabling interesting applications. As far as we know, there is no public 3D dataset for this problem. Therefore, we record two multi-modal first-person RGBD datasets that reflect the setting we are analyzing. We use a humanoid and a non-humanoid robot equipped with a Kinect. Notably, the videos contain a high percentage of ego-motion due to the robot self-exploration as well as its reactions to the persons' interactions. We show that separating the descriptors extracted from ego-motion and independent motion areas, and using them both, allows us to achieve superior recognition results. Experiments show that our algorithm recognizes the activities effectively and outperforms other state-of-the-art methods on related tasks.",
"title": ""
}
] | scidocsrr |
aa0535d08ed619450e3cdee0e847b806 | Approximate Note Transcription for the Improved Identification of Difficult Chords | [
{
"docid": "e8933b0afcd695e492d5ddd9f87aeb81",
"text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.",
"title": ""
}
] | [
{
"docid": "1f1c4c69a4c366614f0cc9ecc24365ba",
"text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.",
"title": ""
},
{
"docid": "b2cd02622ec0fc29b54e567c7f10a935",
"text": "Performance and high availability have become increasingly important drivers, amongst other drivers, for user retention in the context of web services such as social networks, and web search. Exogenic and/or endogenic factors often give rise to anomalies, making it very challenging to maintain high availability, while also delivering high performance. Given that service-oriented architectures (SOA) typically have a large number of services, with each service having a large set of metrics, automatic detection of anomalies is nontrivial. Although there exists a large body of prior research in anomaly detection, existing techniques are not applicable in the context of social network data, owing to the inherent seasonal and trend components in the time series data. To this end, we developed two novel statistical techniques for automatically detecting anomalies in cloud infrastructure data. Specifically, the techniques employ statistical learning to detect anomalies in both application, and system metrics. Seasonal decomposition is employed to filter the trend and seasonal components of the time series, followed by the use of robust statistical metrics – median and median absolute deviation (MAD) – to accurately detect anomalies, even in the presence of seasonal spikes. We demonstrate the efficacy of the proposed techniques from three different perspectives, viz., capacity planning, user behavior, and supervised learning. In particular, we used production data for evaluation, and we report Precision, Recall, and F-measure in each case.",
"title": ""
},
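The passage above combines seasonal decomposition with robust median/MAD statistics to flag anomalies despite seasonal spikes. A rough sketch of that idea follows, assuming a crude per-phase-median seasonal estimate; the period, threshold, and synthetic series are illustrative assumptions rather than the authors' production pipeline.

```python
import numpy as np

def seasonal_mad_anomalies(x, period, threshold=3.5):
    """Flag anomalies after removing a crude seasonal component.

    The seasonal component is estimated as the median of each phase of the
    cycle; residuals are then scored with the robust median/MAD rule.
    """
    x = np.asarray(x, dtype=float)
    phases = np.arange(len(x)) % period
    seasonal = np.array([np.median(x[phases == p]) for p in range(period)])
    resid = x - seasonal[phases]
    med = np.median(resid)
    mad = np.median(np.abs(resid - med)) or 1e-9   # guard against zero MAD
    score = 0.6745 * (resid - med) / mad           # approximate z-score under normality
    return np.where(np.abs(score) > threshold)[0]

# Hypothetical hourly series with daily seasonality and two injected spikes
rng = np.random.default_rng(1)
t = np.arange(14 * 24)
series = 10 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)
series[[40, 200]] += 12
print(seasonal_mad_anomalies(series, period=24))   # expected: indices 40 and 200
```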
{
"docid": "69d7ec6fe0f847cebe3d1d0ae721c950",
"text": "Circularly polarized (CP) dielectric resonator antenna (DRA) subarrays have been numerically studied and experimentally verified. Elliptical CP DRA is used as the antenna element, which is excited by either a narrow slot or a probe. The elements are arranged in a 2 by 2 subarray configuration and are excited sequentially. In order to optimize the CP bandwidth, wideband feeding networks have been designed. Three different types of feeding network are studied; they are parallel feeding network, series feeding network and hybrid ring feeding network. For the CP DRA subarray with hybrid ring feeding network, the impedance matching bandwidth (S11<-10 dB) and 3-dB AR bandwidth achieved are 44% and 26% respectively",
"title": ""
},
{
"docid": "7d603d154025f7160c0711bba92e1049",
"text": "Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.",
"title": ""
},
{
"docid": "d131f4f22826a2083d35dfa96bf2206b",
"text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.",
"title": ""
},
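The passage above argues that much simpler counting-style algorithms can match the Ω(n) sample-complexity bound. The sketch below shows one such simple estimator (ordering items by net wins over noisy pairwise comparisons); it is only in the spirit of the passage, and the noise rate and item count are made-up values.

```python
import numpy as np

def rank_by_wins(n, comparisons):
    """Rank n items from (winner, loser) pairs by net win counts.

    A simple counting estimator: items are ordered by wins minus losses,
    a crude but serviceable proxy when comparisons are noisy but unbiased.
    """
    score = np.zeros(n)
    for winner, loser in comparisons:
        score[winner] += 1
        score[loser] -= 1
    return np.argsort(-score)          # best item first

# Hypothetical example: true order 0 > 1 > 2 > 3, 10% of comparisons flipped
rng = np.random.default_rng(2)
true_rank = {0: 0, 1: 1, 2: 2, 3: 3}
pairs = []
for _ in range(300):
    i, j = rng.choice(4, size=2, replace=False)
    better, worse = (i, j) if true_rank[i] < true_rank[j] else (j, i)
    if rng.random() < 0.1:
        better, worse = worse, better
    pairs.append((better, worse))
print(rank_by_wins(4, pairs))          # likely [0 1 2 3]
```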
{
"docid": "d59e21319b9915c2f6d7a8931af5503c",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "bc3924d12ee9d07a752fce80a67bb438",
"text": "Unsupervised semantic segmentation in the time series domain is a much-studied problem due to its potential to detect unexpected regularities and regimes in poorly understood data. However, the current techniques have several shortcomings, which have limited the adoption of time series semantic segmentation beyond academic settings for three primary reasons. First, most methods require setting/learning many parameters and thus may have problems generalizing to novel situations. Second, most methods implicitly assume that all the data is segmentable, and have difficulty when that assumption is unwarranted. Finally, most research efforts have been confined to the batch case, but online segmentation is clearly more useful and actionable. To address these issues, we present an algorithm which is domain agnostic, has only one easily determined parameter, and can handle data streaming at a high rate. In this context, we test our algorithm on the largest and most diverse collection of time series datasets ever considered, and demonstrate our algorithm's superiority over current solutions. Furthermore, we are the first to show that semantic segmentation may be possible at superhuman performance levels.",
"title": ""
},
{
"docid": "5b89c42eb7681aff070448bc22e501ea",
"text": "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.",
"title": ""
},
{
"docid": "f0dbe9ad4934ff4d5857cfc5a876bcb6",
"text": "Although pricing fraud is an important issue for improving service quality of online shopping malls, research on automatic fraud detection has been limited. In this paper, we propose an unsupervised learning method based on a finite mixture model to identify pricing frauds. We consider two states, normal and fraud, for each item according to whether an item description is relevant to its price by utilizing the known number of item clusters. Two states of an observed item are modeled as hidden variables, and the proposed models estimate the state by using an expectation maximization (EM) algorithm. Subsequently, we suggest a special case of the proposed model, which is applicable when the number of item clusters is unknown. The experiment results show that the proposed models are more effective in identifying pricing frauds than the existing outlier detection methods. Furthermore, it is presented that utilizing the number of clusters is helpful in facilitating the improvement of pricing fraud detection",
"title": ""
},
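The passage above models normal/fraud states as hidden variables estimated with an EM algorithm over a finite mixture. The sketch below shows the generic E-step/M-step machinery on a one-dimensional two-component Gaussian mixture; the price-deviation feature and the reading of the higher-mean component as the fraud state are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def em_two_component(x, iters=100):
    """Fit a 1-D two-component Gaussian mixture with EM.

    Illustrates the E-step (posterior state probabilities) and M-step
    (parameter updates); once fitted, one component can be interpreted
    as the anomalous/fraud state.
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma, resp

# Hypothetical price deviations: most near 0, a few far off (mispriced items)
rng = np.random.default_rng(3)
devs = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])
pi, mu, sigma, resp = em_two_component(devs)
print(mu.round(2), (resp.argmax(axis=1) == mu.argmax()).sum(), "suspect items")
```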
{
"docid": "49e91d22adb0cdeb014b8330e31f226d",
"text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.",
"title": ""
},
{
"docid": "9f5ab2f666eb801d4839fcf8f0293ceb",
"text": "In recent years, Wireless Sensor Networks (WSNs) have emerged as a new powerful technology used in many applications such as military operations, surveillance system, Intelligent Transport Systems (ITS) etc. These networks consist of many Sensor Nodes (SNs), which are not only used for monitoring but also capturing the required data from the environment. Most of the research proposals on WSNs have been developed keeping in view of minimization of energy during the process of extracting the essential data from the environment where SNs are deployed. The primary reason for this is the fact that the SNs are operated on battery which discharges quickly after each operation. It has been found in literature that clustering is the most common technique used for energy aware routing in WSNs. The most popular protocol for clustering in WSNs is Low Energy Adaptive Clustering Hierarchy (LEACH) which is based on adaptive clustering technique. This paper provides the taxonomy of various clustering and routing techniques in WSNs based upon metrics such as power management, energy management, network lifetime, optimal cluster head selection, multihop data transmission etc. A comprehensive discussion is provided in the text highlighting the relative advantages and disadvantages of many of the prominent proposals in this category which helps the designers to select a particular proposal based upon its merits over the others. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a7e6a2145b9ae7ca2801a3df01f42f5e",
"text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.",
"title": ""
},
{
"docid": "c3539090bef61fcfe4a194058a61d381",
"text": "Real-time environment monitoring and analysis is an important research area of Internet of Things (IoT). Understanding the behavior of the complex ecosystem requires analysis of detailed observations of an environment over a range of different conditions. One such example in urban areas includes the study of tree canopy cover over the microclimate environment using heterogeneous sensor data. There are several challenges that need to be addressed, such as obtaining reliable and detailed observations over monitoring area, detecting unusual events from data, and visualizing events in real-time in a way that is easily understandable by the end users (e.g., city councils). In this regard, we propose an integrated geovisualization framework, built for real-time wireless sensor network data on the synergy of computational intelligence and visual methods, to analyze complex patterns of urban microclimate. A Bayesian maximum entropy-based method and a hyperellipsoidal model-based algorithm have been build in our integrated framework to address above challenges. The proposed integrated framework was verified using the dataset from an indoor and two outdoor network of IoT devices deployed at two strategically selected locations in Melbourne, Australia. The data from these deployments are used for evaluation and demonstration of these components’ functionality along with the designed interactive visualization components.",
"title": ""
},
{
"docid": "e2134985f8067efe41935adff8ef2150",
"text": "In this paper, a high efficiency and high power factor single-stage balanced forward-flyback converter merging a foward and flyback converter topologies is proposed. The conventional AC/DC flyback converter can achieve a good power factor but it has a high offset current through the transformer magnetizing inductor, which results in a large core loss and low power conversion efficiency. And, the conventional forward converter can achieve the good power conversion efficiency with the aid of the low core loss but the input current dead zone near zero cross AC input voltage deteriorates the power factor. On the other hand, since the proposed converter can operate as the forward and flyback converters during switch on and off periods, respectively, it cannot only perform the power transfer during an entire switching period but also achieve the high power factor due to the flyback operation. Moreover, since the current balanced capacitor can minimize the offset current through the transformer magnetizing inductor regardless of the AC input voltage, the core loss and volume of the transformer can be minimized. Therefore, the proposed converter features a high efficiency and high power factor. To confirm the validity of the proposed converter, theoretical analysis and experimental results from a prototype of 24W LED driver are presented.",
"title": ""
},
{
"docid": "956be237e0b6e7bafbf774d56a8841d2",
"text": "Wireless sensor networks (WSNs) will play an active role in the 21th Century Healthcare IT to reduce the healthcare cost and improve the quality of care. The protection of data confidentiality and patient privacy are the most critical requirements for the ubiquitous use of WSNs in healthcare environments. This requires a secure and lightweight user authentication and access control. Symmetric key based access control is not suitable for WSNs in healthcare due to dynamic network topology, mobility, and stringent resource constraints. In this paper, we propose a secure, lightweight public key based security scheme, Mutual Authentication and Access Control based on Elliptic curve cryptography (MAACE). MAACE is a mutual authentication protocol where a healthcare professional can authenticate to an accessed node (a PDA or medical sensor) and vice versa. This is to ensure that medical data is not exposed to an unauthorized person. On the other hand, it ensures that medical data sent to healthcare professionals did not originate from a malicious node. MAACE is more scalable and requires less memory compared to symmetric key-based schemes. Furthermore, it is much more lightweight than other public key-based schemes. Security analysis and performance evaluation results are presented and compared to existing schemes to show advantages of the proposed scheme.",
"title": ""
},
{
"docid": "8b641a8f504b550e1eed0dca54bfbe04",
"text": "Overlay architectures are programmable logic systems that are compiled on top of a traditional FPGA. These architectures give designers flexibility, and have a number of benefits, such as being designed or optimized for specific application domains, making it easier or more efficient to implement solutions, being independent of platform, allowing the ability to do partial reconfiguration regardless of the underlying architecture, and allowing compilation without using vendor tools, in some cases with fully open source tool chains. This thesis describes the implementation of two FPGA overlay architectures, ZUMA and CARBON. These overlay implementations include optimizations to reduce area and increase speed which may be applicable to many other FPGAs and also ASIC systems. ZUMA is a fine-grain overlay which resembles a modern commercial FPGA, and is compatible with the VTR open source compilation tools. The implementation includes a number of novel features tailored to efficient FPGA implementation, including the utilization of reprogrammable LUTRAMs, a novel two-stage local routing crossbar, and an area efficient configuration controller. CARBON",
"title": ""
},
{
"docid": "2efb10a430e001acd201a0b16ab74836",
"text": "As the cost of human full genome sequencing continues to fall, we will soon witness a prodigious amount of human genomic data in the public cloud. To protect the confidentiality of the genetic information of individuals, the data has to be encrypted at rest. On the other hand, encryption severely hinders the use of this valuable information, such as Genome-wide Range Query (GRQ), in medical/genomic research. While the problem of secure range query on outsourced encrypted data has been extensively studied, the current schemes are far from practical deployment in terms of efficiency and scalability due to the data volume in human genome sequencing. In this paper, we investigate the problem of secure GRQ over human raw aligned genomic data in a third-party outsourcing model. Our solution contains a novel secure range query scheme based on multi-keyword symmetric searchable encryption (MSSE). The proposed scheme incurs minimal ciphertext expansion and computation overhead. We also present a hierarchical GRQ-oriented secure index structure tailored for efficient and large-scale genomic data lookup in the cloud while preserving the query privacy. Our experiment on real human genomic data shows that a secure GRQ request with range size 100,000 over more than 300 million encrypted short reads takes less than 3 minutes, which is orders of magnitude faster than existing solutions.",
"title": ""
},
{
"docid": "b7617b5dd2a6f392f282f6a34f5b6751",
"text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.",
"title": ""
},
{
"docid": "c16ff028e77459867eed4c2b9c1f44c6",
"text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection are especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. 
Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algoProceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"title": ""
}
] | scidocsrr |
92c620efee1f4f18338169cd66c2a2a6 | Unsupervised Learning of Visual Representations using Videos | [
{
"docid": "112fc675cce705b3bab9cb66ca1c08da",
"text": "Our Approach, 0.66 GIST 29.7 Spa>al Pyramid HOG 29.8 Spa>al Pyramid SIFT 34.4 ROI-‐GIST 26.5 Scene DPM 30.4 MM-‐Scene 28.0 Object Bank 37.6 Ours 38.1 Ours+GIST 44.0 Ours+SP 46.4 Ours+GIST+SP 47.5 Ours+DPM 42.4 Ours+GIST+DPM 46.9 Ours+SP+DPM 46.4 GIST+SP+DPM 43.1 Ours+GIST+SP+DPM 49.4 Two key requirements • representa,ve: Need to occur frequently enough • discrimina,ve: Need to be different enough from the rest of the “visual world” Goal: a mid-‐level visual representa>on Experimental Analysis Bonus: works even be`er if weakly supervised!",
"title": ""
}
] | [
{
"docid": "795e22957969913f2bfbc16c59a9a95d",
"text": "We present an incremental polynomial-time algorithm for enumerating all circuits of a matroid or, more generally, all minimal spanning sets for a flat. We also show the NP-hardness of several related enumeration problems. †RUTCOR, Rutgers University, 640 Bartholomew Road, Piscataway NJ 08854-8003; ({boros,elbassio,gurvich}@rutcor.rutgers.edu). ‡Department of Computer Science, Rutgers University, 110 Frelinghuysen Road, Piscataway NJ 08854-8003; ([email protected]). ∗This research was supported in part by the National Science Foundation Grant IIS0118635. The research of the first and third authors was also supported in part by the Office of Naval Research Grant N00014-92-J-1375. The second and third authors are also grateful for the partial support by DIMACS, the National Science Foundation’s Center for Discrete Mathematics and Theoretical Computer Science.",
"title": ""
},
{
"docid": "476f2a1970349b00ee296cf48aaf4983",
"text": "Web personalization systems are used to enhance the user experience by providing tailor-made services based on the user’s interests and preferences which are typically stored in user profiles. For such systems to remain effective, the profiles need to be able to adapt and reflect the users’ changing behaviour. In this paper, we introduce a set of methods designed to capture and track user interests and maintain dynamic user profiles within a personalization system. User interests are represented as ontological concepts which are constructed by mapping web pages visited by a user to a reference ontology and are subsequently used to learn short-term and long-term interests. A multi-agent system facilitates and coordinates the capture, storage, management and adaptation of user interests. We propose a search system that utilizes our dynamic user profile to provide a personalized search experience. We present a series of experiments that show how our system can effectively model a dynamic user profile and is capable of learning and adapting to different user browsing behaviours.",
"title": ""
},
{
"docid": "b4b06fc0372537459de882b48152c4c9",
"text": "As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) concluding with an approach towards the maintenance of dignity in human-robot relationships.",
"title": ""
},
{
"docid": "106f80b025d0f48cb80718bc82573961",
"text": "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Codes will be made publicly available1.",
"title": ""
},
{
"docid": "50d0b1e141bcea869352c9b96b0b2ad5",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
},
{
"docid": "dc0bd7e2a394c3ce16b86fad31c4e5fc",
"text": "In this paper, we introduce linear and nonlinear consensus protocols for networks of dynamic agents that allow the agents to agree in a distributed and cooperative fashion. We consider the cases of networks with communication time-delays and channels that have filtering effects. We find a tight upper bound on the maximum fixed time-delay that can be tolerated in the network. It turns out that the connectivity of the network is the key in reaching a consensus. The case of agreement with bounded inputs is considered by analyzing the convergence of a class of nonlinear protocols. A Lyapunov function is introduced that quantifies the total disagreement among the nodes of a network. Simulation results are provided for agreement in networks with communication time-delays and constrained inputs.",
"title": ""
},
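The passage above studies consensus protocols for networks of dynamic agents. A minimal discrete-time version of the standard linear consensus update, x_i ← x_i + ε Σ_j a_ij (x_j − x_i), is sketched below; the ring topology, step size, and initial values are arbitrary choices for illustration.

```python
import numpy as np

def simulate_consensus(adjacency, x0, epsilon, steps):
    """Discrete-time linear consensus: x_i += eps * sum_j a_ij * (x_j - x_i).

    For an undirected connected graph and eps < 1/max_degree, all states
    converge to the average of the initial values.
    """
    A = np.asarray(adjacency, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = x + epsilon * (A @ x - A.sum(axis=1) * x)
    return x

# Hypothetical 4-node ring network
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
x0 = [1.0, 5.0, 3.0, 7.0]
print(simulate_consensus(A, x0, epsilon=0.25, steps=200))   # approaches [4, 4, 4, 4]
```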
{
"docid": "b510e02eccd0155d5d411be805f81009",
"text": "Dynamically typed languages trade flexibility and ease of use for safety, while statically typed languages prioritize the early detection of bugs, and provide a better framework for structure large programs. The idea of optional typing is to combine the two approaches in the same language: the programmer can begin development with dynamic types, and migrate to static types as the program matures. The challenge is designing a type system that feels natural to the programmer that is used to programming in a dynamic language.\n This paper presents the initial design of Typed Lua, an optionally-typed extension of the Lua scripting language. Lua is an imperative scripting language with first class functions and lightweight metaprogramming mechanisms. The design of Typed Lua's type system has a novel combination of features that preserves some of the idioms that Lua programmers are used to, while bringing static type safety to them. We show how the major features of the type system type these idioms with some examples, and discuss some of the design issues we faced.",
"title": ""
},
{
"docid": "82e282703eeed354d2e5dc39992b779c",
"text": "Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine-tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster's study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross-validation and prediction of catastrophic risk based on historic data. Our results suggest that BN is a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.",
"title": ""
},
{
"docid": "081b09442d347a4a29d8cc3978079f79",
"text": "The major challenge in designing wireless sensor networks (WSNs) is the support of the functional, such as data latency, and the non-functional, such as data integrity, requirements while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.",
"title": ""
},
{
"docid": "978c1712bf6b469059218697ea552524",
"text": "Project-based cross-sector partnerships to address social issues (CSSPs) occur in four “arenas”: business-nonprofit, business-government, government-nonprofit, and trisector. Research on CSSPs is multidisciplinary, and different conceptual “platforms” are used: resource dependence, social issues, and societal sector platforms. This article consolidates recent literature on CSSPs to improve the potential for cross-disciplinary fertilization and especially to highlight developments in various disciplines for organizational researchers. A number of possible directions for future research on the theory, process, practice, method, and critique of CSSPs are highlighted. The societal sector platform is identified as a particularly promising framework for future research.",
"title": ""
},
{
"docid": "48eacd86c14439454525e5a570db083d",
"text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.",
"title": ""
},
{
"docid": "48370cc694460cc6900213a69f13b5a5",
"text": "This paper describes a strategy to feature point correspondence and motion recovery in vehicle navigation. A transformation of the image plane is proposed that keeps the motion of the vehicle on a plane parallel to the transformed image plane. This permits to de\"ne linear tracking \"lters to estimate the real-world positions of the features, and allows us to select the matches that accomplish the rigidity of the scene by a Hough transform. Candidate correspondences are selected by similarity, taking into account the smoothness of motion. Further processing brings out the \"nal matching. The methods have been tested in a real application. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a6c3a4dfd33eb902f5338f7b8c7f78e5",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
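The passage above uses a grey wolf optimizer to design modular granular neural network architectures. A compact, generic GWO for continuous minimization is sketched below; the sphere objective stands in for the much more expensive network-design objective, and the population size, iteration count, and bounds are arbitrary.

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, iters=200, seed=0):
    """Minimal grey wolf optimizer for a continuous objective (minimization).

    Wolves move toward the three best solutions found so far (alpha, beta,
    delta); the coefficient 'a' decays from 2 to 0 to shift the search from
    exploration to exploitation.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.apply_along_axis(objective, 1, X)
    for it in range(iters):
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * it / iters
        new_X = np.empty_like(X)
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                cand.append(leader - A * D)
            new_X[i] = np.clip(np.mean(cand, axis=0), lo, hi)
        X = new_X
        fitness = np.apply_along_axis(objective, 1, X)
    best = np.argmin(fitness)
    return X[best], fitness[best]

# Hypothetical use: the sphere function stands in for a validation-error objective
sol, val = gwo(lambda v: np.sum(v ** 2), dim=5, bounds=(-10, 10))
print(sol.round(3), val)
```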
{
"docid": "14440d5bac428cf5202aa7b7163cb6bc",
"text": "Global energy consumption is projected to increase, even in the face of substantial declines in energy intensity, at least 2-fold by midcentury relative to the present because of population and economic growth. This demand could be met, in principle, from fossil energy resources, particularly coal. However, the cumulative nature of CO(2) emissions in the atmosphere demands that holding atmospheric CO(2) levels to even twice their preanthropogenic values by midcentury will require invention, development, and deployment of schemes for carbon-neutral energy production on a scale commensurate with, or larger than, the entire present-day energy supply from all sources combined. Among renewable energy resources, solar energy is by far the largest exploitable resource, providing more energy in 1 hour to the earth than all of the energy consumed by humans in an entire year. In view of the intermittency of insolation, if solar energy is to be a major primary energy source, it must be stored and dispatched on demand to the end user. An especially attractive approach is to store solar-converted energy in the form of chemical bonds, i.e., in a photosynthetic process at a year-round average efficiency significantly higher than current plants or algae, to reduce land-area requirements. Scientific challenges involved with this process include schemes to capture and convert solar energy and then store the energy in the form of chemical bonds, producing oxygen from water and a reduced fuel such as hydrogen, methane, methanol, or other hydrocarbon species.",
"title": ""
},
{
"docid": "7107c5a8e81e3cedf750bcd9937581cd",
"text": "AVR XMEGA is the recent general-purpose 8-bit microcontroller from Atmel featuring symmetric crypto engines. We analyze the resistance of XMEGA crypto engines to side channel attacks. We reveal the relatively strong side channel leakage of the AES engine that enables full 128-bit AES secret key recovery in a matter of several minutes with a measurement setup cost about 1000 USD. 3000 power consumption traces are sufficient for the successful attack. Our analysis was performed without knowing the details of the crypto engine internals; quite the contrary, it reveals some details about the implementation. We sketch other feasible side channel attacks on XMEGA and suggest the counter-measures that can raise the complexity of the attacks but not fully prevent them.",
"title": ""
},
{
"docid": "ce8df10940df980b5b9ff635843a76f9",
"text": "This letter presents a 60-GHz 2 × 2 low temperature co-fired ceramic (LTCC) aperture-coupled patch antenna array with an integrated Sievenpiper electromagnetic band-gap (EBG) structure used to suppress TM-mode surface waves. The merit of this EBG structure is to yield a predicted 4-dB enhancement in broadside directivity and gain, and an 8-dB improvement in sidelobe level. The novelty of this antenna lies in the combination of a relatively new LTCC material system (DuPont Greentape 9K7) along with laser ablation processing for fine line and fine slot definition (50-μm gaps with +/ - 6 μm tolerance) allowing the first successful integration of a Sievenpiper EBG structure with a millimeter-wave LTCC patch array. A measured broadside gain/directivity of 11.5/14 dBi at 60 GHz is achieved with an aperture footprint of only 350 × 410 mil2 (1.78λ × 2.08λ) including the EBG structure. This thin (27 mil) LTCC array is well suited for chip-scale package applications.",
"title": ""
},
{
"docid": "917c065df63312d222053246fc5d14c2",
"text": "Current pervasive games are mostly location-aware applications, played on handheld computing devices. Considering pervasive games for children, it is argued that the interaction paradigm existing games support limits essential aspects of outdoor play like spontaneous social interaction, physical movement, and rich face-to-face communication. We present a new genre of pervasive games conceived to address this problem, that we call “Head Up Games” (HUGs) to underline that they liberate players from facing down to attend to screen-based interactions. The article discusses characteristics of HUG and relates them to existing genres of pervasive games. We present lessons learned during the design and evaluation of three HUG and chart future challenges.",
"title": ""
},
{
"docid": "de304ae0f87412c6b0bfca6a3e6835bb",
"text": "This paper presents a novel method for sensorless brushless dc (BLDC) motor drives. Based on the characteristics of the back electromotive force (EMF), the rotor position signals would be constructed. It is intended to construct these signals, which make the phase difference between the constructed signals and the back EMFs controllable. Then, the rotor-position error can be compensated by controlling the phase-difference angle in real time. In this paper, the rotor-position-detection error is analyzed. Using the TMS320F2812 chip as a control core, some experiments have been carried out on the prototype, which is the surface-mounted permanent magnet BLDC motor, and the experimental results verify the analysis and demonstrate advantages of the proposed sensorless-control method.",
"title": ""
},
{
"docid": "641f8ac3567d543dd5df40a21629fbd7",
"text": "Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.",
"title": ""
},
{
"docid": "062839e72c6bdc6c6bf2ba1d1041d07b",
"text": "Students’ increasing use of text messaging language has prompted concern that textisms (e.g., 2 for to, dont for don’t, ☺) will intrude into their formal written work. Eighty-six Australian and 150 Canadian undergraduates were asked to rate the appropriateness of textism use in various situations. Students distinguished between the appropriateness of using textisms in different writing modalities and to different recipients, rating textism use as inappropriate in formal exams and assignments, but appropriate in text messages, online chat and emails with friends and siblings. In a second study, we checked the examination papers of a separate sample of 153 Australian undergraduates for the presence of textisms. Only a negligible number were found. We conclude that, overall, university students recognise the different requirements of different recipients and modalities when considering textism use and that students are able to avoid textism use in exams despite media reports to the contrary.",
"title": ""
}
] | scidocsrr |
02041c4237e3afe2f0040c7beb453c57 | Training a text classifier with a single word using Twitter Lists and domain adaptation | [
{
"docid": "8d0c5de2054b7c6b4ef97a211febf1d0",
"text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-costsensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1 Making decisions based on a cost matrix Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectatio n is computed using the conditional probability of each class given the example. Mathematically, let the (i; j) entry in a cost matrixC be the cost of predicting class i when the true class isj. If i = j then the prediction is correct, while if i 6= j the prediction is incorrect. The optimal prediction for an examplex is the classi that minimizes L(x; i) =Xj P (jjx)C(i; j): (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For eachi,L(x; i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P (jjx) of each classj being the true class ofx. For an examplex, making the predictioni means acting as if is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to ct as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is mos t likely legitimate. 1.1 Cost matrix properties A cost matrixC always has the following structure when there are only two classes: actual negative actual positive predict negative C(0; 0) = 00 C(0; 1) = 01 predict positive C(1; 0) = 10 C(1; 1) = 11 Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, whi le columns correspond to actual classes, i.e. row/column = i/j predicted/actual. In our notation, the cost of a false positive is 10 while the cost of a false negative is 01. Conceptually, the cost of labeling an example incorrectly should always be greater th an the cost of labeling it correctly. Mathematically, it shoul d always be the case that 10 > 00 and 01 > 11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated , so 00 10 but still 01 > 11. In this case the optimal policy is to label all examples positive. Similarly, if 10 > 00 but 11 01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. 
Margineantu[2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates rown in a cost matrixC if for all j,C(m; j) C(n; j). In this case the cost of predictingis no greater than the cost of predictingm, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is alwaysn if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positiv e constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optima l are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline aw ay from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonab leness conditions can be transformed into a simpler matrix tha t always leads to the same decisions:",
"title": ""
},
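The decision rule in Equation (1) of the passage above (predict the class that minimizes expected cost) can be written in a few lines. The cost values and probability estimates below are hypothetical; the two-class threshold noted in the comment follows directly from that rule.

```python
import numpy as np

def min_cost_prediction(probs, cost):
    """Pick the class minimizing expected cost L(x, i) = sum_j P(j|x) * C(i, j).

    probs: (n_examples, n_classes) estimated P(j|x)
    cost : (n_classes, n_classes) C(i, j), rows = predicted class, cols = actual class
    """
    expected = probs @ cost.T            # entry (x, i) = sum_j P(j|x) * C(i, j)
    return expected.argmin(axis=1)

# Two-class illustration with hypothetical costs: a false negative (c01) is
# 5x as expensive as a false positive (c10); correct decisions cost nothing.
cost = np.array([[0.0, 5.0],             # predict 0: C(0,0)=c00, C(0,1)=c01
                 [1.0, 0.0]])            # predict 1: C(1,0)=c10, C(1,1)=c11
p_pos = np.array([0.10, 0.20, 0.60])
probs = np.column_stack([1 - p_pos, p_pos])
# Implied threshold p* = (c10 - c00) / ((c10 - c00) + (c01 - c11)) = 1/6
print(min_cost_prediction(probs, cost))   # -> [0 1 1]
```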
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
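The passage above keeps only a window of currently trusted examples so that the learner tracks the current context and old concepts fade out. The sketch below applies that windowing idea to a deliberately simple nearest-centroid learner; the window size, drift point, and data stream are invented for illustration, and the base learner is not the one described in the passage.

```python
from collections import deque
import numpy as np

class WindowedCentroidClassifier:
    """Keeps only the most recent `window` labelled examples, so the decision
    boundary tracks the current context after concept drift."""
    def __init__(self, window=200):
        self.buffer = deque(maxlen=window)

    def update(self, x, y):
        self.buffer.append((np.asarray(x, dtype=float), y))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        centroids = {}
        for xi, yi in self.buffer:
            centroids.setdefault(yi, []).append(xi)
        if not centroids:
            return None
        return min(centroids,
                   key=lambda y: np.linalg.norm(x - np.mean(centroids[y], axis=0)))

# Hypothetical drifting stream: the positive-class mean jumps halfway through
rng = np.random.default_rng(4)
clf = WindowedCentroidClassifier(window=100)
for t in range(1000):
    mean = 2.0 if t < 500 else -2.0            # concept drift at t = 500
    y = int(rng.random() < 0.5)
    x = rng.normal(mean if y else 0.0, 0.5, size=2)
    clf.update(x, y)
print(clf.predict([-2.0, -2.0]), clf.predict([0.0, 0.0]))   # likely 1 and 0
```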
{
"docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4",
"text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.",
"title": ""
}
] | [
{
"docid": "3531efcf8308541b0187b2ea4ab91721",
"text": "This paper proposed a novel controlling technique of pulse width modulation (PWM) mode and pulse frequency modulation (PFM) mode to keep the high efficiency within width range of loading. The novel control method is using PWM and PFM detector to achieve two modes switching appropriately. The controlling technique can make the efficiency of current mode DC-DC buck converter up to 88% at light loading and this paper is implemented by TSMC 0.35 mum CMOS process.",
"title": ""
},
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
{
"docid": "70dd5d85ec3864b8c6cbd6b300f4640f",
"text": "An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified.",
"title": ""
},
{
"docid": "4deea3312fe396f81919b07462551682",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent",
"title": ""
},
{
"docid": "f6ae71fee81a8560f37cb0dccfd1e3cd",
"text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.",
"title": ""
},
{
"docid": "db3bb02dde6c818b173cf12c9c7440b7",
"text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.",
"title": ""
},
{
"docid": "1e3e1e272f1b8da8f28cc9d3a338bfc6",
"text": "In an attempt to preserve the structural information in malware binaries during feature extraction, function call graph-based features have been used in various research works in malware classification. However, the approach usually employed when performing classification on these graphs, is based on computing graph similarity using computationally intensive techniques. Due to this, much of the previous work in this area incurred large performance overhead and does not scale well. In this paper, we propose a linear time function call graph (FCG) vector representation based on function clustering that has significant performance gains in addition to improved classification accuracy. We also show how this representation can enable using graph features together with other non-graph features.",
"title": ""
},
{
"docid": "744162ac558f212f73327ba435c1d578",
"text": "Massive classification, a classification task defined over a vast number of classes (hundreds of thousands or even millions), has become an essential part of many real-world systems, such as face recognition. Existing methods, including the deep networks that achieved remarkable success in recent years, were mostly devised for problems with a moderate number of classes. They would meet with substantial difficulties, e.g. excessive memory demand and computational cost, when applied to massive problems. We present a new method to tackle this problem. This method can efficiently and accurately identify a small number of “active classes” for each mini-batch, based on a set of dynamic class hierarchies constructed on the fly. We also develop an adaptive allocation scheme thereon, which leads to a better tradeoff between performance and cost. On several large-scale benchmarks, our method significantly reduces the training cost and memory demand, while maintaining competitive performance.",
"title": ""
},
{
"docid": "80058ed2de002f05c9f4c1451c53e69c",
"text": "Purchasing decisions in many product categories are heavily influenced by the shopper's aesthetic preferences. It's insufficient to simply match a shopper with popular items from the category in question; a successful shopping experience also identifies products that match those aesthetics. The challenge of capturing shoppers' styles becomes more difficult as the size and diversity of the marketplace increases. At Etsy, an online marketplace for handmade and vintage goods with over 30 million diverse listings, the problem of capturing taste is particularly important -- users come to the site specifically to find items that match their eclectic styles.\n In this paper, we describe our methods and experiments for deploying two new style-based recommender systems on the Etsy site. We use Latent Dirichlet Allocation (LDA) to discover trending categories and styles on Etsy, which are then used to describe a user's \"interest\" profile. We also explore hashing methods to perform fast nearest neighbor search on a map-reduce framework, in order to efficiently obtain recommendations. These techniques have been implemented successfully at very large scale, substantially improving many key business metrics.",
"title": ""
},
{
"docid": "b36058bcfcb5f5f4084fe131c42b13d9",
"text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.",
"title": ""
},
{
"docid": "f690ffa886d3ed44a6b2171226cd151f",
"text": "Gold nanoparticles are widely used in biomedical imaging and diagnostic tests. Based on their established use in the laboratory and the chemical stability of Au(0), gold nanoparticles were expected to be safe. The recent literature, however, contains conflicting data regarding the cytotoxicity of gold nanoparticles. Against this background a systematic study of water-soluble gold nanoparticles stabilized by triphenylphosphine derivatives ranging in size from 0.8 to 15 nm is made. The cytotoxicity of these particles in four cell lines representing major functional cell types with barrier and phagocyte function are tested. Connective tissue fibroblasts, epithelial cells, macrophages, and melanoma cells prove most sensitive to gold particles 1.4 nm in size, which results in IC(50) values ranging from 30 to 56 microM depending on the particular 1.4-nm Au compound-cell line combination. In contrast, gold particles 15 nm in size and Tauredon (gold thiomalate) are nontoxic at up to 60-fold and 100-fold higher concentrations, respectively. The cellular response is size dependent, in that 1.4-nm particles cause predominantly rapid cell death by necrosis within 12 h while closely related particles 1.2 nm in diameter effect predominantly programmed cell death by apoptosis.",
"title": ""
},
{
"docid": "53d4995e474fb897fb6f10cef54c894f",
"text": "The measurement of different target parameters using radar systems has been an active research area for the last decades. Particularly target angle measurement is a very demanding topic, because obtaining good measurement results often goes hand in hand with extensive hardware effort. Especially for sensors used in the mass market, e.g. in automotive applications like adaptive cruise control this may be prohibitive. Therefore we address target localization using a compact frequency-modulated continuous-wave (FMCW) radar sensor. The angular measurement results are improved compared to standard beamforming methods using an adaptive beamforming approach. This approach will be applied to the FMCW principle in a way that allows the use of well known methods for the determination of other target parameters like range or velocity. The applicability of the developed theory will be shown on different measurement scenarios using a 24-GHz prototype radar system.",
"title": ""
},
{
"docid": "12dd3908cd443785d8e28c1f6d3d720e",
"text": "Deep neural networks have been widely used in numerous computer vision applications, particularly in face recognition. However, deploying deep neural network face recognition on mobile devices is still limited since most highaccuracy deep models are both time and GPU consumption in the inference stage. Therefore, developing a lightweight deep neural network is one of the most promising solutions to deploy face recognition on mobile devices. Such the lightweight deep neural network requires efficient memory with small number of weights representation and low cost operators. In this paper a novel deep neural network named MobiFace, which is simple but effective, is proposed for productively deploying face recognition on mobile devices. The experimental results have shown that our lightweight MobiFace is able to achieve high performance with 99.7% on LFW database and 91.3% on large-scale challenging Megaface database. It is also eventually competitive against large-scale deep-networks face recognition while significant reducing computational time and memory consumption.",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "ebd40aaf7fa87beec30ceba483cc5047",
"text": "Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task. Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "bf62cf6deb1b11816fa271bfecde1077",
"text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidencebased data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost– benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improve patient care. The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evi-",
"title": ""
},
{
"docid": "a8c3c2aad1853e5322d62bf0f688d358",
"text": "Individual decision-making forms the basis for nearly all of microeconomic analysis. These notes outline the standard economic model of rational choice in decisionmaking. In the standard view, rational choice is defined to mean the process of determining what options are available and then choosing the most preferred one according to some consistent criterion. In a certain sense, this rational choice model is already an optimization-based approach. We will find that by adding one empirically unrestrictive assumption, the problem of rational choice can be represented as one of maximizing a real-valued utility function. The utility maximization approach grew out of a remarkable intellectual convergence that began during the 19th century. On one hand, utilitarian philosophers were seeking an objective criterion for a science of government. If policies were to be decided based on attaining the “greatest good for the greatest number,” they would need to find a utility index that could measure of how beneficial different policies were to different people. On the other hand, thinkers following Adam Smith were trying to refine his ideas about how an economic system based on individual self-interest would work. Perhaps that project, too, could be advanced by developing an index of self-interest, assessing how beneficial various outcomes ∗These notes are an evolving, collaborative product. The first version was by Antonio Rangel in Fall 2000. Those original notes were edited and expanded by Jon Levin in Fall 2001 and 2004, and by Paul Milgrom in Fall 2002 and 2003.",
"title": ""
},
{
"docid": "ede8b89c37c10313a84ce0d0d21af8fc",
"text": "The adaptive fuzzy and fuzzy neural models are being widely used for identification of dynamic systems. This paper describes different fuzzy logic and neural fuzzy models. The robustness of models has further been checked by Simulink implementation of the models with application to the problem of system identification. The approach is to identify the system by minimizing the cost function using parameters update.",
"title": ""
},
{
"docid": "adfe1398a35e63b0bfbf2fd55e7a9d81",
"text": "Neutrosophic numbers easily allow modeling uncertainties of prices universe, thus justifying the growing interest for theoretical and practical aspects of arithmetic generated by some special numbers in our work. At the beginning of this paper, we reconsider the importance in applied research of instrumental discernment, viewed as the main support of the final measurement validity. Theoretically, the need for discernment is revealed by decision logic, and more recently by the new neutrosophic logic and by constructing neutrosophic-type index numbers, exemplified in the context and applied to the world of prices, and, from a practical standpoint, by the possibility to use index numbers in characterization of some cyclical phenomena and economic processes, e.g. inflation rate. The neutrosophic index numbers or neutrosophic indexes are the key topic of this article. The next step is an interrogative and applicative one, drawing the coordinates of an optimized discernment centered on neutrosophic-type index numbers. The inevitable conclusions are optimistic in relation to the common future of the index method and neutrosophic logic, with statistical and economic meaning and utility.",
"title": ""
},
{
"docid": "17806963c91f6d6981f1dcebf3880927",
"text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent the feedback abuse are generally reliable but centralized under the control of few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.",
"title": ""
}
] | scidocsrr |
97b97b86086b35fd1b19558349c1a489 | Character-Aware Neural Networks for Arabic Named Entity Recognition for Social Media | [
{
"docid": "cb929b640f8ee7b550512dd4d0dc8e17",
"text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.",
"title": ""
},
{
"docid": "fe1bc993047a95102f4331f57b1f9197",
"text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.",
"title": ""
},
{
"docid": "51ef96b352d36f5ab933c10184bb385b",
"text": "We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in highdimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.",
"title": ""
},
{
"docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2",
"text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"title": ""
}
] | [
{
"docid": "2f2cab35a8cf44c4564c0e26e0490f29",
"text": "In this paper, we propose a synthetic generationmethod for time-series data based on generative adversarial networks (GANs) and apply it to data augmentation for biosinal classification. GANs are a recently proposed framework for learning a generative model, where two neural networks, one generating synthetic data and the other discriminating synthetic and real data, are trained while competing with each other. In the proposed method, each neural network in GANs is developed based on a recurrent neural network using long short-term memories, thereby allowing the adaptation of the GANs framework to time-series data generation. In the experiments, we confirmed the capability of the proposed method for generating synthetic biosignals using the electrocardiogram and electroencephalogram datasets. We also showed the effectiveness of the proposed method for data augmentation in the biosignal classification problem.",
"title": ""
},
{
"docid": "f779fbb68782deb5e386bf266fbeae5a",
"text": "The paper attempts to provide forecast methodological framework and concrete models to estimate long run probability of default term structure for Hungarian corporate debt instruments, in line with IFRS 9 requirements. Long run probability of default and expected loss can be estimated by various methods and has fifty-five years of history in literature. After studying literature and empirical models, the Markov chain approach was selected to accomplish lifetime probability of default modeling for Hungarian corporate debt instruments. Empirical results reveal that both discrete and continuous homogeneous Markov chain models systematically overestimate the long term corporate probability of default. However, the continuous nonhomogeneous Markov chain gives both intuitively and empirically appropriate probability of default trajectories. The estimated term structure mathematically and professionally properly expresses the probability of default element of expected loss that can realistically occur in the long-run in Hungarian corporate lending. The elaborated models can be easily implemented at Hungarian corporate financial institutions.",
"title": ""
},
{
"docid": "796f46abd496ebb8784122a9c9f65e1d",
"text": "Authorship verification can be checked using stylometric techniques through the analysis of linguistic styles and writing characteristics of the authors. Stylometry is a behavioral feature that a person exhibits during writing and can be extracted and used potentially to check the identity of the author of online documents. Although stylometric techniques can achieve high accuracy rates for long documents, it is still challenging to identify an author for short documents, in particular when dealing with large authors populations. These hurdles must be addressed for stylometry to be usable in checking authorship of online messages such as emails, text messages, or twitter feeds. In this paper, we pose some steps toward achieving that goal by proposing a supervised learning technique combined with n-gram analysis for authorship verification in short texts. Experimental evaluation based on the Enron email dataset involving 87 authors yields very promising results consisting of an Equal Error Rate (EER) of 14.35% for message blocks of 500 characters.",
"title": ""
},
{
"docid": "a34efaa2a8739cce020cb5fe1da6883d",
"text": "Graphical models, as applied to multi-target prediction problems, commonly utilize interaction terms to impose structure among the output variables. Often, such structure is based on the assumption that related outputs need to be similar and interaction terms that force them to be closer are adopted. Here we relax that assumption and propose a feature that is based on distance and can adapt to ensure that variables have smaller or larger difference in values. We utilized a Gaussian Conditional Random Field model, where we have extended its originally proposed interaction potential to include a distance term. The extended model is compared to the baseline in various structured regression setups. An increase in predictive accuracy was observed on both synthetic examples and real-world applications, including challenging tasks from climate and healthcare domains.",
"title": ""
},
{
"docid": "33dcba37947e3bdb5956f7355393eea5",
"text": "Big Data and Cloud computing are the most important technologies that give the opportunity for government agencies to gain a competitive advantage and improve their organizations. On one hand, Big Data implementation requires investing a significant amount of money in hardware, software, and workforce. On the other hand, Cloud Computing offers an unlimited, scalable and on-demand pool of resources which provide the ability to adopt Big Data technology without wasting on the financial resources of the organization and make the implementation of Big Data faster and easier. The aim of this study is to conduct a systematic literature review in order to collect data to identify the benefits and challenges of Big Data on Cloud for government agencies and to make a clear understanding of how combining Big Data and Cloud Computing help to overcome some of these challenges. The last objective of this study is to identify the solutions for related challenges of Big Data. Four research questions were designed to determine the information that is related to the objectives of this study. Data is collected using literature review method and the results are deduced from there.",
"title": ""
},
{
"docid": "3d81f003b29ad4cea90a533a002f3082",
"text": "Technology roadmapping is becoming an increasingly important and widespread approach for aligning technology with organizational goals. The popularity of roadmapping is due mainly to the communication and networking benefits that arise from the development and dissemination of roadmaps, particularly in terms of building common understanding across internal and external organizational boundaries. From its origins in Motorola and Corning more than 25 years ago, where it was used to link product and technology plans, the approach has been adapted for many different purposes in a wide variety of sectors and at all levels, from small enterprises to national foresight programs. Building on previous papers presented at PICMET, concerning the rapid initiation of the technique, and how to customize the approach, this paper highlights the evolution and continuing growth of the method and its application to general strategic planning. The issues associated with extending the roadmapping method to form a central element of an integrated strategic planning process are considered.",
"title": ""
},
{
"docid": "f5d2052dd5f5bb359cfbc80856cb7793",
"text": "We describe our experience with collecting roughly 250, 000 image annotations on Amazon Mechanical Turk (AMT). The annotations we collected range from location of keypoints and figure ground masks of various object categories, 3D pose estimates of head and torsos of people in images and attributes like gender, race, type of hair, etc. We describe the setup and strategies we adopted to automatically approve and reject the annotations, which becomes important for large scale annotations. These annotations were used to train algorithms for detection, segmentation, pose estimation, action recognition and attribute recognition of people in images.",
"title": ""
},
{
"docid": "4207c7f69d65c5b46abce85a369dada1",
"text": "We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search.",
"title": ""
},
{
"docid": "f1f72a6d5d2ab8862b514983ac63480b",
"text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Frechet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi20141 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi2 route module.",
"title": ""
},
{
"docid": "591438f31d3f7b8093f8d10874a17d5b",
"text": "Previously, techniques such as class hierarchy analysis and profile-guided receiver class prediction have been demonstrated to greatly improve the performance of applications written in pure object-oriented languages, but the degree to which these results are transferable to applications written in hybrid languages has been unclear. In part to answer this question, we have developed the Vortex compiler infrastructure, a language-independent optimizing compiler for object-oriented languages, with front-ends for Cecil, C++, Java, and Modula-3. In this paper, we describe the Vortex compiler's intermediate language, internal structure, and optimization suite, and then we report the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four languages. We characterize the benchmark programs in terms of a collection of static and dynamic metrics, intended to quantify aspects of the \"object-orientedness\" of a program.",
"title": ""
},
{
"docid": "cdd43b3baa9849441817b5f31d7cb0e0",
"text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.",
"title": ""
},
{
"docid": "c1f095252c6c64af9ceeb33e78318b82",
"text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through headmounted displays. We first introduce a method for calibrating monocular optical seethrough displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-offreedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.",
"title": ""
},
{
"docid": "e18a8e3622ae85763c729bd2844ce14c",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.028 ⇑ Corresponding author. E-mail address: [email protected] (D. Gil). 1 These authors equally contributed to this work. Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology as decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in the prediction of the seminal quality from the data of the environmental factors and lifestyle. To do that we collect data by a normalized questionnaire from young healthy volunteers and then, we use the results of a semen analysis to asses the accuracy in the prediction of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines show the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast decision trees provide a visual and illustrative approach that can compensate the slightly lower accuracy obtained. In conclusion artificial intelligence methods are a useful tool in order to predict the seminal profile of an individual from the environmental factors and life habits. From the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f5f1300baf7ed92626c912b98b6308c9",
"text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.",
"title": ""
},
{
"docid": "867c8c0286c0fed4779f550f7483770d",
"text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We nd that the volatility timing strategies outperform the unconditionally e cient static portfolios that have the same target expected return and volatility. This nding is robust to estimation risk and transaction costs.",
"title": ""
},
{
"docid": "71cd341da48223745e0abc5aa9aded7b",
"text": "MIMO is a technology that utilizes multiple antennas at transmitter/receiver to improve the throughput, capacity and coverage of wireless system. Massive MIMO where Base Station is equipped with orders of magnitude more antennas have shown over 10 times spectral efficiency increase over MIMO with simpler signal processing algorithms. Massive MIMO has benefits of enhanced capacity, spectral and energy efficiency and it can be built by using low cost and low power components. Despite its potential benefits, this paper also summarizes some challenges faced by massive MIMO such as antenna spatial correlation and mutual coupling as well as non-linear hardware impairments. These challenges encountered in massive MIMO uncover new problems that need further investigation.",
"title": ""
},
{
"docid": "13e8fd8e8462e4bbb267f909403f9872",
"text": "Ergative case, the special case of transitive subjects, rai ses questions not only for the theory of case but also for theories of subjectho od and transitivity. This paper analyzes the case system of Nez Perce, a ”three-way erg tiv ” language, with an eye towards a formalization of the category of transitive subject . I show that it is object agreement that is determinative of transitivity, an d hence of ergative case, in Nez Perce. I further show that the transitivity condition on ergative case must be coupled with a criterion of subjecthood that makes reference to participation in subject agreement, not just to origin in a high argument-structural position. These two results suggest a formalization of the transitive subject as that ar gument uniquely accessing both high and low agreement information, the former through its (agreement-derived) connection with T and the latter through its origin in the spe cifi r of a head associated with object agreement (v). In view of these findings, I ar gue that ergative case morphology should be analyzed not as the expression of a synt ctic primitive but as the morphological spell-out of subject agreement and objec t agreement on a nominal.",
"title": ""
},
{
"docid": "e5380801d69c3acf7bfe36e868b1dadb",
"text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.",
"title": ""
},
{
"docid": "e3d8ce945e727e8b31a764ffd226353b",
"text": "Epilepsy is a neurological disorder with prevalence of about 1-2% of the world’s population (Mormann, Andrzejak, Elger & Lehnertz, 2007). It is characterized by sudden recurrent and transient disturbances of perception or behaviour resulting from excessive synchronization of cortical neuronal networks; it is a neurological condition in which an individual experiences chronic abnormal bursts of electrical discharges in the brain. The hallmark of epilepsy is recurrent seizures termed \"epileptic seizures\". Epileptic seizures are divided by their clinical manifestation into partial or focal, generalized, unilateral and unclassified seizures (James, 1997; Tzallas, Tsipouras & Fotiadis, 2007a, 2009). Focal epileptic seizures involve only part of cerebral hemisphere and produce symptoms in corresponding parts of the body or in some related mental functions. Generalized epileptic seizures involve the entire brain and produce bilateral motor symptoms usually with loss of consciousness. Both types of epileptic seizures can occur at all ages. Generalized epileptic seizures can be subdivided into absence (petit mal) and tonic-clonic (grand mal) seizures (James, 1997).",
"title": ""
},
{
"docid": "68388b2f67030d85030d5813df2e147d",
"text": "Radio signal propagation modeling plays an important role in designing wireless communication systems. The propagation models are used to calculate the number and position of base stations and predict the radio coverage. Different models have been developed to predict radio propagation behavior for wireless communication systems in different operating environments. In this paper we shall limit our discussion to the latest achievements in radio propagation modeling related to tunnels. The main modeling approaches used for propagation in tunnels are reviewed, namely, numerical methods for solving Maxwell equations, waveguide or modal approach, ray tracing based methods and two-slope path loss modeling. They are discussed in terms of modeling complexity and required information on the environment including tunnel geometry and electric as well as magnetic properties of walls.",
"title": ""
}
] | scidocsrr |
5b6c98a74df8ed0b2654ea84e478157a | Accelerating 5G QoE via public-private spectrum sharing | [
{
"docid": "7f4843fe3cab91688786d7ca1bfb19d9",
"text": "In this paper the possibility of designing an OFDM system for simultaneous radar and communications operations is discussed. A novel approach to OFDM radar processing is introduced that overcomes the typical drawbacks of correlation based processing. A suitable OFDM system parameterization for operation at 24 GHz is derived that fulfills the requirements for both applications. The operability of the proposed system concept is verified with MatLab simulations. Keywords-OFDM; Radar, Communications",
"title": ""
},
{
"docid": "e0583afbdc609792ad947223006c851f",
"text": "Orthogonal frequency-division multiplexing (OFDM) signal coding and system architecture were implemented to achieve radar and data communication functionalities. The resultant system is a software-defined unit, which can be used for range measurements, radar imaging, and data communications. Range reconstructions were performed for ranges up to 4 m using trihedral corner reflectors with approximately 203 m of radar cross section at the carrier frequency; range resolution of approximately 0.3 m was demonstrated. Synthetic aperture radar (SAR) image of a single corner reflector was obtained; SAR signal processing specific to OFDM signals is presented. Data communication tests were performed in radar setup, where the signal was reflected by the same target and decoded as communication data; bit error rate of was achieved at 57 Mb/s. The system shows good promise as a multifunctional software-defined sensor which can be used in radar sensor networks.",
"title": ""
}
] | [
{
"docid": "46ba4ec0c448e01102d5adae0540ca0d",
"text": "An extensive literature shows that social relationships influence psychological well-being, but the underlying mechanisms remain unclear. We test predictions about online interactions and well-being made by theories of belongingness, relationship maintenance, relational investment, social support, and social comparison. An opt-in panel study of 1,910 Facebook users linked self-reported measures of well-being to counts of respondents’ Facebook activities from server logs. Specific uses of the site were associated with improvements in well-being: Receiving targeted, composed communication from strong ties was associated with improvements in wellbeing while viewing friends’ wide-audience broadcasts and receiving one-click feedback were not. These results suggest that people derive benefits from online communication, as long it comes from people they care about and has been tailored for them.",
"title": ""
},
{
"docid": "66e43ce62fd7e9cf78c4ff90b82afb8d",
"text": "BACKGROUND\nConcern over the frequency of unintended harm to patients has focused attention on the importance of teamwork and communication in avoiding errors. This has led to experiments with teamwork training programmes for clinical staff, mostly based on aviation models. These are widely assumed to be effective in improving patient safety, but the extent to which this assumption is justified by evidence remains unclear.\n\n\nMETHODS\nA systematic literature review on the effects of teamwork training for clinical staff was performed. Information was sought on outcomes including staff attitudes, teamwork skills, technical performance, efficiency and clinical outcomes.\n\n\nRESULTS\nOf 1036 relevant abstracts identified, 14 articles were analysed in detail: four randomized trials and ten non-randomized studies. Overall study quality was poor, with particular problems over blinding, subjective measures and Hawthorne effects. Few studies reported on every outcome category. Most reported improved staff attitudes, and six of eight reported significantly better teamwork after training. Five of eight studies reported improved technical performance, improved efficiency or reduced errors. Three studies reported evidence of clinical benefit, but this was modest or of borderline significance in each case. Studies with a stronger intervention were more likely to report benefits than those providing less training. None of the randomized trials found evidence of technical or clinical benefit.\n\n\nCONCLUSION\nThe evidence for technical or clinical benefit from teamwork training in medicine is weak. There is some evidence of benefit from studies with more intensive training programmes, but better quality research and cost-benefit analysis are needed.",
"title": ""
},
{
"docid": "efdbbbea608b711e00bc9e6b6468e821",
"text": "We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks. We show that through this interpretation it is both efficient and straightforward to improve the approximation by employing normalizing flows (Rezende & Mohamed, 2015) while still allowing for local reparametrizations (Kingma et al., 2015) and a tractable lower bound (Ranganath et al., 2015; Maaløe et al., 2016). In experiments we show that with this new approximation we can significantly improve upon classical mean field for Bayesian neural networks on both predictive accuracy as well as predictive uncertainty.",
"title": ""
},
{
"docid": "b9adc9e23eadd1b99ad13330a9cf8c9f",
"text": "BACKGROUND\nPancreatic stellate cells (PSCs), a major component of the tumor microenvironment in pancreatic cancer, play roles in cancer progression as well as drug resistance. Culturing various cells in microfluidic (microchannel) devices has proven to be a useful in studying cellular interactions and drug sensitivity. Here we present a microchannel plate-based co-culture model that integrates tumor spheroids with PSCs in a three-dimensional (3D) collagen matrix to mimic the tumor microenvironment in vivo by recapitulating epithelial-mesenchymal transition and chemoresistance.\n\n\nMETHODS\nA 7-channel microchannel plate was prepared using poly-dimethylsiloxane (PDMS) via soft lithography. PANC-1, a human pancreatic cancer cell line, and PSCs, each within a designated channel of the microchannel plate, were cultured embedded in type I collagen. Expression of EMT-related markers and factors was analyzed using immunofluorescent staining or Proteome analysis. Changes in viability following exposure to gemcitabine and paclitaxel were measured using Live/Dead assay.\n\n\nRESULTS\nPANC-1 cells formed 3D tumor spheroids within 5 days and the number of spheroids increased when co-cultured with PSCs. Culture conditions were optimized for PANC-1 cells and PSCs, and their appropriate interaction was confirmed by reciprocal activation shown as increased cell motility. PSCs under co-culture showed an increased expression of α-SMA. Expression of EMT-related markers, such as vimentin and TGF-β, was higher in co-cultured PANC-1 spheroids compared to that in mono-cultured spheroids; as was the expression of many other EMT-related factors including TIMP1 and IL-8. Following gemcitabine exposure, no significant changes in survival were observed. When paclitaxel was combined with gemcitabine, a growth inhibitory advantage was prominent in tumor spheroids, which was accompanied by significant cytotoxicity in PSCs.\n\n\nCONCLUSIONS\nWe demonstrated that cancer cells grown as tumor spheroids in a 3D collagen matrix and PSCs co-cultured in sub-millimeter proximity participate in mutual interactions that induce EMT and drug resistance in a microchannel plate. Microfluidic co-culture of pancreatic tumor spheroids with PSCs may serve as a useful model for studying EMT and drug resistance in a clinically relevant manner.",
"title": ""
},
{
"docid": "9f2d6c872761d8922cac8a3f30b4b7ba",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.",
"title": ""
},
{
"docid": "e624a94c8440c1a8f318f5a56c353632",
"text": "“Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the implied communication channel consists of both the experimental and biological system. Thus, the terms “neural code” are used inappropriately when “neuroexperimental code” would be more accurate, although less insightful. Second, the brain cannot be presumed to decode neural messages into objective properties of the world, since it never gets to observe those properties. To avoid dualism, codes must relate not to external properties but to internal sensorimotor models. Because this requires structured representations, neural assemblies cannot be the basis of such codes. Third, a message is informative to the extent that the reader understands its language. But the neural code is private to the encoder since only the message is communicated: each neuron speaks its own language. It follows that in the neural coding metaphor, the brain is a Tower of Babel. Finally, the relation between input signals and actions is circular; that inputs do not preexist to outputs makes the coding paradigm problematic. I conclude that the view that spikes are messages is generally not tenable. An alternative proposition is that action potentials are actions on other neurons and the environment, and neurons interact with each other rather than exchange messages. . CC-BY 4.0 International license not peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was . http://dx.doi.org/10.1101/168237 doi: bioRxiv preprint first posted online Jul. 27, 2017;",
"title": ""
},
{
"docid": "1d15eb4e1a4296829ea9af62923a3fae",
"text": "We present a novel adaptation technique for search engines to better support information-seeking activities that include both lookup and exploratory tasks. Building on previous findings, we describe (1) a classifier that recognizes task type (lookup vs. exploratory) as a user is searching and (2) a reinforcement learning based search engine that adapts accordingly the balance of exploration/exploitation in ranking the documents. This allows supporting both task types surreptitiously without changing the familiar list-based interface. Search results include more diverse results when users are exploring and more precise results for lookup tasks. Users found more useful results in exploratory tasks when compared to a base-line system, which is specifically tuned for lookup tasks.",
"title": ""
},
{
"docid": "33e5a3619a6f7d831146c399ff55f5ff",
"text": "With the continuous development of online learning platforms, educational data analytics and prediction have become a promising research field, which are helpful for the development of personalized learning system. However, the indicator's selection process does not combine with the whole learning process, which may affect the accuracy of prediction results. In this paper, we induce 19 behavior indicators in the online learning platform, proposing a student performance prediction model which combines with the whole learning process. The model consists of four parts: data collection and pre-processing, learning behavior analytics, algorithm model building and prediction. Moreover, we apply an optimized Logistic Regression algorithm, taking a case to analyze students' behavior and to predict their performance. Experimental results demonstrate that these eigenvalues can effectively predict whether a student was probably to have an excellent grade.",
"title": ""
},
{
"docid": "a5776d4da32a93c69b18c696c717e634",
"text": "Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.",
"title": ""
},
{
"docid": "f5deca0eaba55d34af78df82eda27bc2",
"text": "Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience to a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR’s adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representation, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with non-language goal representation failed to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate it is crucial to use hindsight advice to solve challenging tasks, but we also found that little amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method.",
"title": ""
},
{
"docid": "06b43b63aafbb70de2601b59d7813576",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "b47127a755d7bef1c5baf89253af46e7",
"text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.",
"title": ""
},
{
"docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
{
"docid": "625741e64d35ccef11fb2679fe36384c",
"text": "Wearable technology comprises miniaturized sensors (eg, accelerometers) worn on the body and/or paired with mobile devices (eg, smart phones) allowing continuous patient monitoring in unsupervised, habitual environments (termed free-living). Wearable technologies are revolutionizing approaches to health care as a result of their utility, accessibility, and affordability. They are positioned to transform Parkinson's disease (PD) management through the provision of individualized, comprehensive, and representative data. This is particularly relevant in PD where symptoms are often triggered by task and free-living environmental challenges that cannot be replicated with sufficient veracity elsewhere. This review concerns use of wearable technology in free-living environments for people with PD. It outlines the potential advantages of wearable technologies and evidence for these to accurately detect and measure clinically relevant features including motor symptoms, falls risk, freezing of gait, gait, functional mobility, and physical activity. Technological limitations and challenges are highlighted, and advances concerning broader aspects are discussed. Recommendations to overcome key challenges are made. To date there is no fully validated system to monitor clinical features or activities in free-living environments. Robust accuracy and validity metrics for some features have been reported, and wearable technology may be used in these cases with a degree of confidence. Utility and acceptability appears reasonable, although testing has largely been informal. Key recommendations include adopting a multidisciplinary approach for standardizing definitions, protocols, and outcomes. Robust validation of developed algorithms and sensor-based metrics is required along with testing of utility. These advances are required before widespread clinical adoption of wearable technology can be realized. © 2016 International Parkinson and Movement Disorder Society.",
"title": ""
},
{
"docid": "f3b1e1c9effb7828a62187e9eec5fba7",
"text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.",
"title": ""
},
{
"docid": "0d292d5c1875845408c2582c182a6eb9",
"text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation",
"title": ""
},
{
"docid": "0af5fae1c9ee71cb7058b025883488a0",
"text": "Ciphertext Policy Attribute-Based Encryption (CP-ABE) enforces expressive data access policies and each policy consists of a number of attributes. Most existing CP-ABE schemes incur a very large ciphertext size, which increases linearly with respect to the number of attributes in the access policy. Recently, Herranz proposed a construction of CP-ABE with constant ciphertext. However, Herranz do not consider the recipients' anonymity and the access policies are exposed to potential malicious attackers. On the other hand, existing privacy preserving schemes protect the anonymity but require bulky, linearly increasing ciphertext size. In this paper, we proposed a new construction of CP-ABE, named Privacy Preserving Constant CP-ABE (denoted as PP-CP-ABE) that significantly reduces the ciphertext to a constant size with any given number of attributes. Furthermore, PP-CP-ABE leverages a hidden policy construction such that the recipients' privacy is preserved efficiently. As far as we know, PP-CP-ABE is the first construction with such properties. Furthermore, we developed a Privacy Preserving Attribute-Based Broadcast Encryption (PP-AB-BE) scheme. Compared to existing Broadcast Encryption (BE) schemes, PP-AB-BE is more flexible because a broadcasted message can be encrypted by an expressive hidden access policy, either with or without explicit specifying the receivers. Moreover, PP-AB-BE significantly reduces the storage and communication overhead to the order of O(log N), where N is the system size. Also, we proved, using information theoretical approaches, PP-AB-BE attains minimal bound on storage overhead for each user to cover all possible subgroups in the communication system.",
"title": ""
},
{
"docid": "bcb615f8bfe9b2b13a4bfe72b698e4c7",
"text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …",
"title": ""
},
{
"docid": "4cd612ca54df72f3174ebaab26d12d9a",
"text": "Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios. As evolutionary algorithms possess several characteristics that are desirable for this type of problem, this class of search strategies has been used for multiobjective optimization for more than a decade. Meanwhile evolutionary multiobjective optimization has become established as a separate subdiscipline combining the fields of evolutionary computation and classical multiple criteria decision making. This paper gives an overview of evolutionary multiobjective optimization with the focus on methods and theory. On the one hand, basic principles of multiobjective optimization and evolutionary algorithms are presented, and various algorithmic concepts such as fitness assignment, diversity preservation, and elitism are discussed. On the other hand, the tutorial includes some recent theoretical results on the performance of multiobjective evolutionary algorithms and addresses the question of how to simplify the exchange of methods and applications by means of a standardized interface.",
"title": ""
}
] | scidocsrr |
ddd05d61d1f55eaa2126b71c913e8159 | Trajectory similarity measures | [
{
"docid": "c1a8e30586aad77395e429556545675c",
"text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.",
"title": ""
}
] | [
{
"docid": "8f33b9a78fd252ef82008a4e6148f592",
"text": "Some natural language processing tasks can be learned from example corpora, but having enough examples for the task at hands can be a bottleneck. In this work we address how Wikipedia and DBpedia, two freely available language resources, can be used to support Named Entity Recognition, a fundamental task in Information Extraction and a necessary step of other tasks such as Co-reference Resolution and Relation Extraction.",
"title": ""
},
{
"docid": "e187403127990eb4b6c256ceb61d6f37",
"text": "Modern data analysis stands at the interface of statistics, computer science, and discrete mathematics. This volume describes new methods in this area, with special emphasis on classification and cluster analysis. Those methods are applied to problems in information retrieval, phylogeny, medical dia... This is the first book primarily dedicated to clustering using multiobjective genetic algorithms with extensive real-life applications in data mining and bioinformatics. The authors first offer detailed introductions to the relevant techniques-genetic algorithms, multiobjective optimization, soft ...",
"title": ""
},
{
"docid": "875917534e19961b06e09b6c1a872914",
"text": "It is found that the current manual water quality monitoring entails tedious process and is time consuming. To alleviate the problems caused by the manual monitoring and the lack of effective system for prawn farming, a remote water quality monitoring for prawn farming pond is proposed. The proposed system is leveraging on wireless sensors in detecting the water quality and Short Message Service (SMS) technology in delivering alert to the farmers upon detection of degradation of the water quality. Three water quality parameters that are critical to the prawn health are monitored, which are pH, temperature and dissolved oxygen. In this paper, the details of system design and implementation are presented. The results obtained in the preliminary survey study served as the basis for the development of the system prototype. Meanwhile, the results acquired through the usability testing imply that the system is able to meet the users’ needs. Key-Words: Remote monitoring, Water Quality, Wireless sensors",
"title": ""
},
{
"docid": "ea9b57d98a697f6b1f878b01434ccd6d",
"text": "Thin film bulk acoustic wave resonators (FBAR) using piezoelectric AlN thin films have attracted extensive research activities in the past few years. Highly c-axis oriented AlN thin films are particularly investigated for resonators operating at the fundamental thickness longitudinal mode. Depending on the processing conditions, tilted polarization (c-axis off the normal direction to the substrate surface) is often found for the as-deposited AlN thin films, which may leads to the coexistence of thickness longitudinal mode and shear mode for the thin film resonators. Knowing that the material properties are strongly crystalline orientation dependent for AlN thin films, a theoretical study is conducted to reveal the effect of tilted polarization on the frequency characteristics of thin film resonators. The input electric impedance of a thin film resonator is derived that includes both thickness longitudinal and thickness shear modes in a uniform equation. Based on the theoretical model, the effective material properties corresponding to the longitudinal and shear modes are calculated through the properties transformation between the original and new coordinate systems. The electric impedance spectra of dual mode AlN thin film resonators are calculated using appropriate materials properties and compared with experimental results. The results indicate that the frequency characteristics of thin film resonators vary with the tilted polarization angles. The coexistence of thickness longitudinal and shear modes in the thin film resonators may provide some flexibility in the design and fabrication of the FBAR devices.",
"title": ""
},
{
"docid": "d4345ee2baaa016fc38ba160e741b8ee",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "074407b66a0a81f27625edc98dead4fc",
"text": "A new method, modified Bagging (mBagging) of Maximal Information Coefficient (mBoMIC), was developed for genome-wide identification. Traditional Bagging is inadequate to meet some requirements of genome-wide identification, in terms of statistical performance and time cost. To improve statistical performance and reduce time cost, an mBagging was developed to introduce Maximal Information Coefficient (MIC) into genomewide identification. The mBoMIC overcame the weakness of original MIC, i.e., the statistical power is inadequate and MIC values are volatile. The three incompatible measures of Bagging, i.e. time cost, statistical power and false positive rate, were significantly improved simultaneously. Compared with traditional Bagging, mBagging reduced time cost by 80%, improved statistical power by 15%, and decreased false positive rate by 31%. The mBoMIC has sensitivity and university in genome-wide identification. The SNPs identified only by mBoMIC have been reported as SNPs associated with cardiac disease.",
"title": ""
},
{
"docid": "3e13e2781fa5866533f86b5b773f9664",
"text": "Reconstruction of massive abdominal wall defects has long been a vexing clinical problem. A landmark development for the autogenous tissue reconstruction of these difficult wounds was the introduction of \"components of anatomic separation\" technique by Ramirez et al. This method uses bilateral, innervated, bipedicle, rectus abdominis-transversus abdominis-internal oblique muscle flap complexes transposed medially to reconstruct the central abdominal wall. Enamored with this concept, this institution sought to define the limitations and complications and to quantify functional outcome with the use of this technique. During a 4-year period (July of 1991 to 1995), 22 patients underwent reconstruction of massive midline abdominal wounds. The defects varied in size from 6 to 14 cm in width and from 10 to 24 cm in height. Causes included removal of infected synthetic mesh material (n = 7), recurrent hernia (n = 4), removal of split-thickness skin graft and dense abdominal wall cicatrix (n = 4), parastomal hernia (n = 2), primary incisional hernia (n = 2), trauma/enteric sepsis (n = 2), and tumor resection (abdominal wall desmoid tumor involving the right rectus abdominis muscle) (n = 1). Twenty patients were treated with mobilization of both rectus abdominis muscles, and in two patients one muscle complex was used. The plane of \"separation\" was the interface between the external and internal oblique muscles. A quantitative dynamic assessment of the abdominal wall was performed in two patients by using a Cybex TEF machine, with analysis of truncal flexion strength being undertaken preoperatively and at 6 months after surgery. Patients achieved wound healing in all cases with one operation. Minor complications included superficial infection in two patients and a wound seroma in one. One patient developed a recurrent incisional hernia 8 months postoperatively. There was one postoperative death caused by multisystem organ failure. One patient required the addition of synthetic mesh to achieve abdominal closure. This case involved a thin patient whose defect exceeded 16 cm in width. There has been no clinically apparent muscle weakness in the abdomen over that present preoperatively. Analysis of preoperative and postoperative truncal force generation revealed a 40 percent increase in strength in the two patients tested on a Cybex machine. Reoperation was possible through the reconstructed abdominal wall in two patients without untoward sequela. This operation is an effective method for autogenous reconstruction of massive midline abdominal wall defects. It can be used either as a primary mode of defect closure or to treat the complications of trauma, surgery, or various diseases.",
"title": ""
},
{
"docid": "12b94323c586de18e8de02e5a065903d",
"text": "Species of lactic acid bacteria (LAB) represent as potential microorganisms and have been widely applied in food fermentation worldwide. Milk fermentation process has been relied on the activity of LAB, where transformation of milk to good quality of fermented milk products made possible. The presence of LAB in milk fermentation can be either as spontaneous or inoculated starter cultures. Both of them are promising cultures to be explored in fermented milk manufacture. LAB have a role in milk fermentation to produce acid which is important as preservative agents and generating flavour of the products. They also produce exopolysaccharides which are essential as texture formation. Considering the existing reports on several health-promoting properties as well as their generally recognized as safe (GRAS) status of LAB, they can be widely used in the developing of new fermented milk products.",
"title": ""
},
{
"docid": "540f9a976fa3b6e8a928b7d0d50f64ac",
"text": "Due to the low-dimensional property of clean hyperspectral images (HSIs), many low-rank-based methods have been proposed to denoise HSIs. However, in an HSI, the noise intensity in different bands is often different, and most of the existing methods do not take this fact into consideration. In this paper, a noise-adjusted iterative low-rank matrix approximation (NAILRMA) method is proposed for HSI denoising. Based on the low-rank property of HSIs, the patchwise low-rank matrix approximation (LRMA) is established. To further separate the noise from the signal subspaces, an iterative regularization framework is proposed. Considering that the noise intensity in different bands is different, an adaptive iteration factor selection based on the noise variance of each HSI band is adopted. This noise-adjusted iteration strategy can effectively preserve the high-SNR bands and denoise the low-SNR bands. The randomized singular value decomposition (RSVD) method is then utilized to solve the NAILRMA optimization problem. A number of experiments were conducted in both simulated and real data conditions to illustrate the performance of the proposed NAILRMA method for HSI denoising.",
"title": ""
},
{
"docid": "413314afd18f6a73c626c4edd95e203b",
"text": "Data mining is the process of identifying the hidden patterns from large and complex data. It may provide crucial role in decision making for complex agricultural problems. Data visualisation is also equally important to understand the general trends of the effect of various factors influencing the crop yield. The present study examines the application of data visualisation techniques to find correlations between the climatic factors and rice crop yield. The study also applies data mining techniques to extract the knowledge from the historical agriculture data set to predict rice crop yield for Kharif season of Tropical Wet and Dry climatic zone of India. The data set has been visualised in Microsoft Office Excel using scatter plots. The classification algorithms have been executed in the free and open source data mining tool WEKA. The experimental results provided include sensitivity, specificity, accuracy, F1 score, Mathews correlation coefficient, mean absolute error, root mean squared error, relative absolute error and root relative squared error. General trends in the data visualisation show that decrease in precipitation in the selected climatic zone increases the rice crop yield and increase in minimum, average or maximum temperature for the season increases the rice crop yield. For the current data set experimental results show that J48 and LADTree achieved the highest accuracy, sensitivity and specificity. Classification performed by LWL classifier displayed the lowest accuracy, sensitivity and specificity results.",
"title": ""
},
{
"docid": "b8625942315177d2fa1c534e8be5eb9f",
"text": "The pantograph-overhead contact wire system is investigated by using an infrared camera. As the pantograph has a vertical motion because of the non-uniform elasticity of the catenary, in order to detect the temperature along the strip from a sequence of infrared images, a segment-tracking algorithm, based on the Hough transformation, has been employed. An analysis of the stored images could help maintenance operations revealing, for example, overheating of the pantograph strip, bursts of arcing, or an irregular positioning of the contact line. Obtained results are relevant for monitoring the status of the quality transmission of the current and for a predictive maintenance of the pantograph and of the catenary system. Examples of analysis from experimental data are reported in the paper.",
"title": ""
},
{
"docid": "8ef4e3c1b3b0008d263e0a55e92750cc",
"text": "We review recent breakthroughs in the silicon photonic technology and components, and describe progress in silicon photonic integrated circuits. Heterogeneous silicon photonics has recently demonstrated performance that significantly outperforms native III/V components. The impact active silicon photonic integrated circuits could have on interconnects, telecommunications, sensors, and silicon electronics is reviewed.",
"title": ""
},
{
"docid": "917ce5ca904c8866ddc84c113dc93b91",
"text": "Traditional media outlets are known to report political news in a biased way, potentially affecting the political beliefs of the audience and even altering their voting behaviors. Therefore, tracking bias in everyday news and building a platform where people can receive balanced news information is important. We propose a model that maps the news media sources along a dimensional dichotomous political spectrum using the co-subscriptions relationships inferred by Twitter links. By analyzing 7 million follow links, we show that the political dichotomy naturally arises on Twitter when we only consider direct media subscription. Furthermore, we demonstrate a real-time Twitter-based application that visualizes an ideological map of various media sources.",
"title": ""
},
{
"docid": "9bb86141611c54978033e2ea40f05b15",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
},
{
"docid": "bea3ba14723c729c2c07b21f8569dce6",
"text": "In this paper, an action planning algorithm is presented for a reconfigurable hybrid leg–wheel mobile robot. Hybrid leg–wheel robots have recently receiving growing interest from the space community to explore planets, as they offer a solution to improve speed and mobility on uneven terrain. One critical issue connected with them is the study of an appropriate strategy to define when to use one over the other locomotion mode, depending on the soil properties and topology. Although this step is crucial to reach the full hybrid mechanism’s potential, little attention has been devoted to this topic. Given an elevation map of the environment, we developed an action planner that selects the appropriate locomotion mode along an optimal path toward a point of scientific interest. This tool is helpful for the space mission team to decide the next move of the robot during the exploration. First, a candidate path is generated based on topology and specifications’ criteria functions. Then, switching actions are defined along this path based on the robot’s performance in each motion mode. Finally, the path is rated based on the energy profile evaluated using a dynamic simulator. The proposed approach is applied to a concept prototype of a reconfigurable hybrid wheel–leg robot for planetary exploration through extensive simulations and real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2010",
"title": ""
},
{
"docid": "42ae9ed79bfa818870e67934c87d83c9",
"text": "It has been estimated that in urban scenarios up to 30% of the traffic is due to vehicles looking for a free parking space. Thanks to recent technological evolutions, it is now possible to have at least a partial coverage of real-time data of parking space availability, and some preliminary mobile services are able to guide drivers towards free parking spaces. Nevertheless, the integration of this data within car navigators is challenging, mainly because (I) current In-Vehicle Telematic systems are not connected, and (II) they have strong limitations in terms of storage capabilities. To overcome these issues, in this paper we present a back-end based approach to learn historical models of parking availability per street. These compact models can then be easily stored on the map in the vehicle. In particular, we investigate the trade-off between the granularity level of the detailed spatial and temporal representation of parking space availability vs. The achievable prediction accuracy, using different spatio-temporal clustering strategies. The proposed solution is evaluated using five months of parking availability data, publicly available from the project Spark, based in San Francisco. Results show that clustering can reduce the needed storage up to 99%, still having an accuracy of around 70% in the predictions.",
"title": ""
},
{
"docid": "ea71723e88685f5c8c29296d3d8e3197",
"text": "In this paper, we present a complete system for automatic face replacement in images. Our system uses a large library of face images created automatically by downloading images from the internet, extracting faces using face detection software, and aligning each extracted face to a common coordinate system. This library is constructed off-line, once, and can be efficiently accessed during face replacement. Our replacement algorithm has three main stages. First, given an input image, we detect all faces that are present, align them to the coordinate system used by our face library, and select candidate face images from our face library that are similar to the input face in appearance and pose. Second, we adjust the pose, lighting, and color of the candidate face images to match the appearance of those in the input image, and seamlessly blend in the results. Third, we rank the blended candidate replacements by computing a match distance over the overlap region. Our approach requires no 3D model, is fully automatic, and generates highly plausible results across a wide range of skin tones, lighting conditions, and viewpoints. We show how our approach can be used for a variety of applications including face de-identification and the creation of appealing group photographs from a set of images. We conclude with a user study that validates the high quality of our replacement results, and a discussion on the current limitations of our system.",
"title": ""
},
{
"docid": "398169d654c89191090c04fa930e5e62",
"text": "Psychedelic drug flashbacks have been a puzzling clinical phenomenon observed by clinicians. Flashbacks are defined as transient, spontaneous recurrences of the psychedelic drug effect appearing after a period of normalcy following an intoxication of psychedelics. The paper traces the evolution of the concept of flashback and gives examples of the varieties encountered. Although many drugs have been advocated for the treatment of flashback, flashbacks generally decrease in intensity and frequency with abstinence from psychedelic drugs.",
"title": ""
},
{
"docid": "0814d93829261505cb88d33a73adc4e7",
"text": "Partial shading of a photovoltaic array is the condition under which different modules in the array experience different irradiance levels due to shading. This difference causes mismatch between the modules, leading to undesirable effects such as reduction in generated power and hot spots. The severity of these effects can be considerably reduced by photovoltaic array reconfiguration. This paper proposes a novel mathematical formulation for the optimal reconfiguration of photovoltaic arrays to minimize partial shading losses. The paper formulates the reconfiguration problem as a mixed integer quadratic programming problem and finds the optimal solution using a branch and bound algorithm. The proposed formulation can be used for an equal or nonequal number of modules per row. Moreover, it can be used for fully reconfigurable or partially reconfigurable arrays. The improvement resulting from the reconfiguration with respect to the existing photovoltaic interconnections is demonstrated by extensive simulation results.",
"title": ""
}
] | scidocsrr |
e70cedd385532e99fbddfbf98fdb5494 | Variations in cognitive maps: understanding individual differences in navigation. | [
{
"docid": "f13ffbb31eedcf46df1aaecfbdf61be9",
"text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.",
"title": ""
}
] | [
{
"docid": "afd6d41c0985372a88ff3bb6f91ce5b5",
"text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless ggplot2 elegant graphics for data analysis sources. Yeah, sources about the books from countries in the world are provided.",
"title": ""
},
{
"docid": "7ca2bde925c4bb322fe266b4a9de5004",
"text": "Nuclear transfer of an oocyte into the cytoplasm of another enucleated oocyte has shown that embryogenesis and implantation are influenced by cytoplasmic factors. We report a case of a 30-year-old nulligravida woman who had two failed IVF cycles characterized by all her embryos arresting at the two-cell stage and ultimately had pronuclear transfer using donor oocytes. After her third IVF cycle, eight out of 12 patient oocytes and 12 out of 15 donor oocytes were fertilized. The patient's pronuclei were transferred subzonally into an enucleated donor cytoplasm resulting in seven reconstructed zygotes. Five viable reconstructed embryos were transferred into the patient's uterus resulting in a triplet pregnancy with fetal heartbeats, normal karyotypes and nuclear genetic fingerprinting matching the mother's genetic fingerprinting. Fetal mitochondrial DNA profiles were identical to those from donor cytoplasm with no detection of patient's mitochondrial DNA. This report suggests that a potentially viable pregnancy with normal karyotype can be achieved through pronuclear transfer. Ongoing work to establish the efficacy and safety of pronuclear transfer will result in its use as an aid for human reproduction.",
"title": ""
},
{
"docid": "c9b7832cd306fc022e4a376f10ee8fc8",
"text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.",
"title": ""
},
{
"docid": "d055bc9f5c7feb9712ec72f8050f5fd8",
"text": "An intelligent observer looks at the world and sees not only what is, but what is moving and what can be moved. In other words, the observer sees how the present state of the world can transform in the future. We propose a model that predicts future images by learning to represent the present state and its transformation given only a sequence of images. To do so, we introduce an architecture with a latent state composed of two components designed to capture (i) the present image state and (ii) the transformation between present and future states, respectively. We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by transforming past states into future states by applying the accumulated state transformation with a learned operator. We describe how this model can be integrated into an encoder-decoder convolutional neural network (CNN) architecture that uses weighted residual connections to integrate representations of the past with representations of the future. Qualitatively, our approach generates image sequences that are stable and capture realistic motion over multiple predicted frames, without requiring adversarial training. Quantitatively, our method achieves prediction results comparable to state-of-the-art results on standard image prediction benchmarks (Moving MNIST, KTH, and UCF101).",
"title": ""
},
{
"docid": "0ff1837d40bbd6bbfe4f5ec69f83de90",
"text": "Nowadays, Telemarketing is an interactive technique of direct marketing that many banks apply to present a long term deposit to bank customers via the phone. Although the offering like this manner is powerful, it may make the customers annoyed. The data prediction is a popular task in data mining because it can be applied to solve this problem. However, the predictive performance may be decreased in case of the input data have many features like the bank customer information. In this paper, we focus on how to reduce the feature of input data and balance the training set for the predictive model to help the bank to increase the prediction rate. In the system performance evaluation, all accuracy rates of each predictive model based on the proposed approach compared with the original predictive model based on the truth positive and receiver operating characteristic measurement show the high performance in which the smaller number of features.",
"title": ""
},
{
"docid": "dc8af68ed9bbfd8e24c438771ca1d376",
"text": "Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.",
"title": ""
},
{
"docid": "6bc94f9b5eb90ba679964cf2a7df4de4",
"text": "New high-frequency data collection technologies and machine learning analysis techniques could offer new insights into learning, especially in tasks in which students have ample space to generate unique, personalized artifacts, such as a computer program, a robot, or a solution to an engineering challenge. To date most of the work on learning analytics and educational data mining has focused on online courses or cognitive tutors, in which the tasks are more structured and the entirety of interaction happens in front of a computer. In this paper, I argue that multimodal learning analytics could offer new insights into students' learning trajectories, and present several examples of this work and its educational application.",
"title": ""
},
{
"docid": "04a7039b3069816b157e5ca0f7541b94",
"text": "Visible light communication is innovative and active technique in modern digital wireless communication. In this paper, we describe a new innovative vlc system which having a better performance and efficiency to other previous system. Visible light communication (VLC) is an efficient technology in order to the improve the speed and the robustness of the communication link in indoor optical wireless communication system. In order to achieve high data rate in communication for VLC system, multiple input multiple output (MIMO) with OFDM is a feasible option. However, the contemporary MIMO with OFDM VLC system are lacks from the diversity and the experiences performance variation through different optical channels. This is mostly because of characteristics of optical elements used for making the receiver. In this paper, we analyze the imaging diversity in MIMO with OFDM VLC system. Simulation results are shown diversity achieved in the different cases.",
"title": ""
},
{
"docid": "53d04c06efb468e14e2ee0b485caf66f",
"text": "The analysis of time-oriented data is an important task in many application scenarios. In recent years, a variety of techniques for visualizing such data have been published. This variety makes it difficult for prospective users to select methods or tools that are useful for their particular task at hand. In this article, we develop and discuss a systematic view on the diversity of methods for visualizing time-oriented data. With the proposed categorization we try to untangle the visualization of time-oriented data, which is such an important concern in Visual Analytics. The categorization is not only helpful for users, but also for researchers to identify future tasks in Visual Analytics. r 2007 Elsevier Ltd. All rights reserved. MSC: primary 68U05; 68U35",
"title": ""
},
{
"docid": "c49d4b1f2ac185bcb070cb105798417a",
"text": "The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face ∗Equal contribution. †Work was done during an internship at Megvii Research. detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.",
"title": ""
},
{
"docid": "11d1978a3405f63829e02ccb73dcd75f",
"text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.",
"title": ""
},
{
"docid": "8cd666c0796c0fe764bc8de0d7a20fa3",
"text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.",
"title": ""
},
{
"docid": "8f183ac262aac98c563bf9dcc69b1bf5",
"text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.",
"title": ""
},
{
"docid": "2eb344b6701139be184624307a617c1b",
"text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "cfc5b5676552c2e90b70ed3cfa5ac022",
"text": "UNLABELLED\nWe present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.\n\n\nPAPERCLIP\nVIDEO ABSTRACT.",
"title": ""
},
{
"docid": "526e36dd9e3db50149687ea6358b4451",
"text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "96d8174d188fa89c17ffa0b255584628",
"text": "In the past years we have witnessed the emergence of the new discipline of computational social science, which promotes a new data-driven and computation-based approach to social sciences. In this article we discuss how the availability of new technologies such as online social media and mobile smartphones has allowed researchers to passively collect human behavioral data at a scale and a level of granularity that were just unthinkable some years ago. We also discuss how these digital traces can then be used to prove (or disprove) existing theories and develop new models of human behavior.",
"title": ""
},
{
"docid": "1d9e03eb11328f96eaee1f70dcf2a539",
"text": "Crossbar architecture has been widely adopted in neural network accelerators due to the efficient implementations on vector-matrix multiplication operations. However, in the case of convolutional neural networks (CNNs), the efficiency is compromised dramatically because of the large amounts of data reuse. Although some mapping methods have been designed to achieve a balance between the execution throughput and resource overhead, the resource consumption cost is still huge while maintaining the throughput. Network pruning is a promising and widely studied method to shrink the model size, whereas prior work for CNNs compression rarely considered the crossbar architecture and the corresponding mapping method and cannot be directly utilized by crossbar-based neural network accelerators. This paper proposes a crossbar-aware pruning framework based on a formulated $L_{0}$ -norm constrained optimization problem. Specifically, we design an $L_{0}$ -norm constrained gradient descent with relaxant probabilistic projection to solve this problem. Two types of sparsity are successfully achieved: 1) intuitive crossbar-grain sparsity and 2) column-grain sparsity with output recombination, based on which we further propose an input feature maps reorder method to improve the model accuracy. We evaluate our crossbar-aware pruning framework on the median-scale CIFAR10 data set and the large-scale ImageNet data set with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%–72% with insignificant accuracy degradation. This paper significantly reduce the resource overhead and the related energy cost and provides a new co-design solution for mapping CNNs onto various crossbar devices with much better efficiency.",
"title": ""
}
] | scidocsrr |
8abdf33714c5ebfe22d1912ed9b9ccc7 | From Spin to Swindle: Identifying Falsification in Financial Text | [
{
"docid": "83dec7aa3435effc3040dfb08cb5754a",
"text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.",
"title": ""
},
{
"docid": "3aaa2d625cddd46f1a7daddbb3e2b23d",
"text": "Text summarization is the task of shortening text documents but retaining their overall meaning and information. A good summary should highlight the main concepts of any text document. Many statistical-based, location-based and linguistic-based techniques are available for text summarization. This paper has described a novel hybrid technique for automatic summarization of Punjabi text. Punjabi is an official language of Punjab State in India. There are very few linguistic resources available for Punjabi. The proposed summarization system is hybrid of conceptual-, statistical-, location- and linguistic-based features for Punjabi text. In this system, four new location-based features and two new statistical features (entropy measure and Z score) are used and results are very much encouraging. Support vector machine-based classifier is also used to classify Punjabi sentences into summary and non-summary sentences and to handle imbalanced data. Synthetic minority over-sampling technique is applied for over-sampling minority class data. Results of proposed system are compared with different baseline systems, and it is found that F score, Precision, Recall and ROUGE-2 score of our system are reasonably well as compared to other baseline systems. Moreover, summary quality of proposed system is comparable to the gold summary.",
"title": ""
}
] | [
{
"docid": "d99b2bab853f867024d1becb0835548d",
"text": "In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we shed insight on security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.",
"title": ""
},
{
"docid": "6e94a736aa913fa4585025ec52675f48",
"text": "A benefit of model-driven engineering relies on the automatic generation of artefacts from high-level models through intermediary levels using model transformations. In such a process, the input must be well designed, and the model transformations should be trustworthy. Because of the specificities of models and transformations, classical software test techniques have to be adapted. Among these techniques, mutation analysis has been ported, and a set of mutation operators has been defined. However, it currently requires considerable manual work and suffers from the test data set improvement activity. This activity is a difficult and time-consuming job and reduces the benefits of the mutation analysis. This paper addresses the test data set improvement activity. Model transformation traceability in conjunction with a model of mutation operators and a dedicated algorithm allow to automatically or semi-automatically produce improved test models. The approach is validated and illustrated in two case studies written in Kermeta. Copyright © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "19350a76398e0054be44c73618cdfb33",
"text": "An emerging class of data-intensive applications involve the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.",
"title": ""
},
{
"docid": "6033682cf01008f027877e3fda4511f8",
"text": "The HER-2/neu oncogene is a member of the erbB-like oncogene family, and is related to, but distinct from, the epidermal growth factor receptor. This gene has been shown to be amplified in human breast cancer cell lines. In the current study, alterations of the gene in 189 primary human breast cancers were investigated. HER-2/neu was found to be amplified from 2- to greater than 20-fold in 30% of the tumors. Correlation of gene amplification with several disease parameters was evaluated. Amplification of the HER-2/neu gene was a significant predictor of both overall survival and time to relapse in patients with breast cancer. It retained its significance even when adjustments were made for other known prognostic factors. Moreover, HER-2/neu amplification had greater prognostic value than most currently used prognostic factors, including hormonal-receptor status, in lymph node-positive disease. These data indicate that this gene may play a role in the biologic behavior and/or pathogenesis of human breast cancer.",
"title": ""
},
{
"docid": "e6953902f5fc0bb9f98d9c632b2ac26e",
"text": "In high voltage (HV) flyback charging circuits, the importance of transformer parasitics holds a significant part in the overall system parasitics. The HV transformers have a larger number of turns on the secondary side that leads to higher self-capacitance which is inevitable. The conventional wire-wound transformer (CWT) has limitation over the design with larger self-capacitance including increased size and volume. For capacitive load in flyback charging circuit these self-capacitances on the secondary side gets added with device capacitances and dominates the load. For such applications the requirement is to have a transformer with minimum self-capacitances and low profile. In order to achieve the above requirements Planar Transformer (PT) design can be implemented with windings as tracks in Printed Circuit Boards (PCB) each layer is insulated by the FR4 material which aids better insulation. Finite Element Model (FEM) has been developed to obtain the self-capacitance in between the layers for larger turns on the secondary side. The modelled hardware prototype of the Planar Transformer has been characterised for open circuit and short circuit test using Frequency Response Analyser (FRA). The results obtained from FEM and FRA are compared and presented.",
"title": ""
},
{
"docid": "3db8dc56e573488c5085bf5a61ea0d7f",
"text": "This paper proposes new approximate coloring and other related techniques which markedly improve the run time of the branchand-bound algorithm MCR (J. Global Optim., 37, 95–111, 2007), previously shown to be the fastest maximum-clique-finding algorithm for a large number of graphs. The algorithm obtained by introducing these new techniques in MCR is named MCS. It is shown that MCS is successful in reducing the search space quite efficiently with low overhead. Consequently, it is shown by extensive computational experiments that MCS is remarkably faster than MCR and other existing algorithms. It is faster than the other algorithms by an order of magnitude for several graphs. In particular, it is faster than MCR for difficult graphs of very high density and for very large and sparse graphs, even though MCS is not designed for any particular type of graphs. MCS can be faster than MCR by a factor of more than 100,000 for some extremely dense random graphs.",
"title": ""
},
{
"docid": "2b202063a897e03f797918a8731fd88a",
"text": "Children quickly acquire basic grammatical facts about their native language. Does this early syntactic knowledge involve knowledge of words or rules? According to lexical accounts of acquisition, abstract syntactic and semantic categories are not primitive to the language-acquisition system; thus, early language comprehension and production are based on verb-specific knowledge. The present experiments challenge this account: We probed the abstractness of young children's knowledge of syntax by testing whether 25- and 21-month-olds extend their knowledge of English word order to new verbs. In four experiments, children used word order appropriately to interpret transitive sentences containing novel verbs. These findings demonstrate that although toddlers have much to learn about their native languages, they represent language experience in terms of an abstract mental vocabulary. These abstract representations allow children to rapidly detect general patterns in their native language, and thus to learn rules as well as words from the start.",
"title": ""
},
{
"docid": "c4c5401a3aac140a5c0ef254b08943b4",
"text": "Tasks recognizing named entities such as products, people names, or locations from documents have recently received significant attention in the literature. Many solutions to these tasks assume the existence of reference entity tables. An important challenge that needs to be addressed in the entity extraction task is that of ascertaining whether or not a candidate string approximately matches with a named entity in a given reference table.\n Prior approaches have relied on string-based similarity which only compare a candidate string and an entity it matches with. In this paper, we exploit web search engines in order to define new similarity functions. We then develop efficient techniques to facilitate approximate matching in the context of our proposed similarity functions. In an extensive experimental evaluation, we demonstrate the accuracy and efficiency of our techniques.",
"title": ""
},
{
"docid": "4bf485a218fca405a4d8655bc2a2be86",
"text": "In today’s competitive business environment, companies are facing challenges in dealing with big data issues for rapid decision making for improved productivity. Many manufacturing systems are not ready to manage big data due to the lack of smart analytics tools. Germany is leading a transformation toward 4th Generation Industrial Revolution (Industry 4.0) based on Cyber-Physical System based manufacturing and service innovation. As more software and embedded intelligence are integrated in industrial products and systems, predictive technologies can further intertwine intelligent algorithms with electronics and tether-free intelligence to predict product performance degradation and autonomously manage and optimize product service needs. This article addresses the trends of industrial transformation in big data environment as well as the readiness of smart predictive informatics tools to manage big data to achieve transparency and productivity. Keywords—Industry 4.0; Cyber Physical Systems; Prognostics and Health Management; Big Data;",
"title": ""
},
{
"docid": "63bc96e4a3f44ae1e7fc3e543363dcdd",
"text": "The involvement of smart grid in home and building automation systems has led to the development of diverse standards for interoperable products to control appliances, lighting, energy management and security. Smart grid enables a user to control the energy usage according to the price and demand. These standards have been developed in parallel by different organizations, which are either open or proprietary. It is necessary to arrange these standards in such a way that it is easier for potential readers to easily understand and select a suitable standard according to their functionalities without going into the depth of each standard. In this paper, we review the main smart grid standards proposed by different organizations for home and building automation in terms of different function fields. In addition, we evaluate the scope of interoperability, benefits and drawbacks of the standard.",
"title": ""
},
{
"docid": "98162dc86a2c70dd55e7e3e996dc492c",
"text": "PURPOSE\nTo evaluate gastroesophageal reflux disease (GERD) symptoms, patient satisfaction, and antisecretory drug use in a large group of GERD patients treated with the Stretta procedure (endoluminal temperature-controlled radiofrequency energy for the treatment of GERD) at multiple centers since February 1999.\n\n\nMETHODS\nAll subjects provided informed consent. A health care provider from each institution administered a standardized GERD survey to patients who had undergone Stretta. Subjects provided (at baseline and follow-up) (1) GERD severity (none, mild, moderate, severe), (2) percentage of GERD symptom control, (3) satisfaction, and (4) antisecretory medication use. Outcomes were compared with the McNemar test, paired t test, and Wilcoxon signed rank test.\n\n\nRESULTS\nSurveys of 558 patients were evaluated (33 institutions, mean follow-up of 8 months). Most patients (76%) were dissatisfied with baseline antisecretory therapy for GERD. After treatment, onset of GERD relief was less than 2 months (68.7%) or 2 to 6 months (14.6%). The median drug requirement improved from proton pump inhibitors twice daily to antacids as needed (P < .0001). The percentage of patients with satisfactory GERD control (absent or mild) improved from 26.3% at baseline (on drugs) to 77.0% after Stretta (P < .0001). Median baseline symptom control on drugs was 50%, compared with 90% at follow-up (P < .0001). Baseline patient satisfaction on drugs was 23.2%, compared with 86.5% at follow-up (P < .0001). Subgroup analysis (<1 year vs. >1 year of follow-up) showed a superior effect on symptom control and drug use in those patients beyond 1 year of follow-up, supporting procedure durability.\n\n\nCONCLUSIONS\nThe Stretta procedure results in significant GERD symptom control and patient satisfaction, superior to that derived from drug therapy in this study group. The treatment effect is durable beyond 1 year, and most patients were off all antisecretory drugs at follow-up. These results support the use of the Stretta procedure for patients with GERD, particularly those with inadequate control of symptoms on medical therapy.",
"title": ""
},
{
"docid": "bade30443df95aafc6958547afd5628a",
"text": "This paper details a series of pre-professional development interventions to assist teachers in utilizing computational thinking and programming as an instructional tool within other subject areas (i.e. music, language arts, mathematics, and science). It describes the lessons utilized in the interventions along with the instruments used to evaluate them, and offers some preliminary findings.",
"title": ""
},
{
"docid": "e306933b27867c99585d7fc82cc380ff",
"text": "We introduce a new OS abstraction—light-weight contexts (lwCs)—that provides independent units of protection, privilege, and execution state within a process. A process may include several lwCs, each with possibly different views of memory, file descriptors, and access capabilities. lwCs can be used to efficiently implement roll-back (process can return to a prior recorded state), isolated address spaces (lwCs within the process may have different views of memory, e.g., isolating sensitive data from network-facing components or isolating different user sessions), and privilege separation (in-process reference monitors can arbitrate and control access). lwCs can be implemented efficiently: the overhead of a lwC is proportional to the amount of memory exclusive to the lwC; switching lwCs is quicker than switching kernel threads within the same process. We describe the lwC abstraction and API, and an implementation of lwCs within the FreeBSD 11.0 kernel. Finally, we present an evaluation of common usage patterns, including fast rollback, session isolation, sensitive data isolation, and inprocess reference monitoring, using Apache, nginx, PHP, and OpenSSL.",
"title": ""
},
{
"docid": "a6889a6dd3dbdc4488ba01653acbc386",
"text": "OBJECTIVE\nTo succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.\n\n\nRESULTS\nMotivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social- cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a 'growth' mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a 'fixed' mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.\n\n\nCONCLUSIONS\nTo avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.",
"title": ""
},
{
"docid": "32ce76009016ba30ce38524b7e9071c9",
"text": "Error in medicine is a subject of continuing interest among physicians, patients, policymakers, and the general public. This article examines the issue of disclosure of medical errors in the context of emergency medicine. It reviews the concept of medical error; proposes the professional duty of truthfulness as a justification for error disclosure; examines barriers to error disclosure posed by health care systems, patients, physicians, and the law; suggests system changes to address the issue of medical error; offers practical guidelines to promote the practice of error disclosure; and discusses the issue of disclosure of errors made by another physician.",
"title": ""
},
{
"docid": "e158971a53492c6b9e21116da891de1a",
"text": "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for the ability of these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, and neural network theory, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new algorithm, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep neural network training, and provide preliminary numerical evidence for its superior performance.",
"title": ""
},
{
"docid": "c2055f8366e983b45d8607c877126797",
"text": "This paper proposes and investigates an offline finite-element-method (FEM)-assisted position and speed observer for brushless dc permanent-magnet (PM) (BLDC-PM) motor drive sensorless control based on the line-to-line PM flux linkage estimation. The zero crossing of the line-to-line PM flux linkage occurs right in the middle of two commutation points (CPs) and is used as a basis for the position and speed observer. The position between CPs is obtained by comparing the estimated line-to-line PM flux with the FEM-calculated line-to-line PM flux. Even if the proposed observer relies on the fundamental model of the machine, a safe starting strategy under heavy load torque, called I-f control, is used, with seamless transition to the proposed sensorless control. The I-f starting method allows low-speed sensorless control, without knowing the initial position and without machine parameter identification. Digital simulations and experimental results are shown, demonstrating the reliability of the FEM-assisted position and speed observer for BLDC-PM motor sensorless control operation.",
"title": ""
},
{
"docid": "0ede49c216f911cd01b3bfcf0c539d6e",
"text": "Distribution patterns along a slope and vertical root distribution were compared among seven major woody species in a secondary forest of the warm-temperate zone in central Japan in relation to differences in soil moisture profiles through a growing season among different positions along the slope. Pinus densiflora, Juniperus rigida, Ilex pedunculosa and Lyonia ovalifolia, growing mostly on the upper part of the slope with shallow soil depth had shallower roots. Quercus serrata and Quercus glauca, occurring mostly on the lower slope with deep soil showed deeper rooting. Styrax japonica, mainly restricted to the foot slope, had shallower roots in spite of growing on the deepest soil. These relations can be explained by the soil moisture profile under drought at each position on the slope. On the upper part of the slope and the foot slope, deep rooting brings little advantage in water uptake from the soil due to the total drying of the soil and no period of drying even in the shallow soil, respectively. However, deep rooting is useful on the lower slope where only the deep soil layer keeps moist. This was supported by better diameter growth of a deep-rooting species on deeper soil sites than on shallower soil sites, although a shallow-rooting species showed little difference between them.",
"title": ""
},
{
"docid": "43c49bb7d9cebb8f476079ac9dd0af27",
"text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.",
"title": ""
}
] | scidocsrr |
93a44a6ab7dd1262366bd2f3081f0595 | RIOT : One OS to Rule Them All in the IoT | [
{
"docid": "5fd6462e402e3a3ab1e390243d80f737",
"text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.",
"title": ""
},
{
"docid": "cf54c485a54d9b22d06710684061eac2",
"text": "Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOS-Threads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOS-Threads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.\n We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.",
"title": ""
}
] | [
{
"docid": "235e6e4537e9f336bf80e6d648fdc8fb",
"text": "Communication between the deaf and non-deaf has always been a very cumbersome task. This paper aims to cover the various prevailing methods of deaf-mute communication interpreter system. The two broad classification of the communication methodologies used by the deaf –mute people are Wearable Communication Device and Online Learning System. Under Wearable communication method, there are Glove based system, Keypad method and Handicom Touchscreen. All the above mentioned three sub-divided methods make use of various sensors, accelerometer, a suitable microcontroller, a text to speech conversion module, a keypad and a touch-screen. The need for an external device to interpret the message between a deaf –mute and non-deaf-mute people can be overcome by the second method i.e online learning system. The Online Learning System has different methods under it, five of which are explained in this paper. The five sub-divided methods areSLIM module, TESSA, Wi-See Technology, SWI_PELE System and Web-Sign Technology. The working of the individual components used and the operation of the whole system for the communication purpose has been explained in detail in this paper.",
"title": ""
},
{
"docid": "6daa93f2a7cfaaa047ecdc04fb802479",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "4c67d3686008e377220314323a35eecb",
"text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"title": ""
},
{
"docid": "8272f6d511cc8aa104ba10c23deb17a5",
"text": "The challenge of developing facial recognition systems has been the focus of many research efforts in recent years and has numerous applications in areas such as security, entertainment, and biometrics. Recently, most progress in this field has come from training very deep neural networks on massive datasets which is computationally intensive and time consuming. Here, we propose a deep transfer learning (DTL) approach that integrates transfer learning techniques and convolutional neural networks and apply it to the problem of facial recognition to fine-tune facial recognition models. Transfer learning can allow for the training of robust, high-performance machine learning models that require much less time and resources to produce than similarly performing models that have been trained from scratch. Using a pre-trained face recognition model, we were able to perform transfer learning to produce a network that is capable of making accurate predictions on much smaller datasets. We also compare our results with results produced by a selection of classical algorithms on the same datasets to demonstrate the effectiveness of the proposed DTL approach.",
"title": ""
},
{
"docid": "ad584a07befbfff1dff36c18ea830a4e",
"text": "In this paper, we review some of the novel emerging memory technologies and how they can enable energy-efficient implementation of large neuromorphic computing systems. We will highlight some of the key aspects of biological computation that are being mimicked in these novel nanoscale devices, and discuss various strategies employed to implement them efficiently. Though large scale learning systems have not been implemented using these devices yet, we will discuss the ideal specifications and metrics to be satisfied by these devices based on theoretical estimations and simulations. We also outline the emerging trends and challenges in the path towards successful implementations of large learning systems that could be ubiquitously deployed for a wide variety of cognitive computing tasks.",
"title": ""
},
{
"docid": "116ab901f60a7282f8a2ea245c59b679",
"text": "Image classification is a vital technology many people in all arenas of human life utilize. It is pervasive in every facet of the social, economic, and corporate spheres of influence, worldwide. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep learning algorithms. This paper uses Convolutional Neural Networks (CNN) to classify handwritten digits in the MNIST database, and scenes in the CIFAR-10 database. Our proposed method preprocesses the data in the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By separating the image into different subbands, important feature learning occurs over varying low to high frequencies. The fusion of the learned low and high frequency features, and processing the combined feature mapping results in an increase in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings reveal a substantial increase in accuracy.",
"title": ""
},
{
"docid": "a52a90bb69f303c4a31e4f24daf609e6",
"text": "The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possess free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa.",
"title": ""
},
{
"docid": "dacf2f44c3f8fc0931dceda7e4cb9bef",
"text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.",
"title": ""
},
{
"docid": "a9709367bc84ececd98f65ed7359f6b0",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "9f68df51d0d47b539a6c42207536d012",
"text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.",
"title": ""
},
{
"docid": "5591247b2e28f436da302757d3f82122",
"text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.",
"title": ""
},
{
"docid": "d51ef75ccf464cc03656210ec500db44",
"text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.",
"title": ""
},
{
"docid": "67a3f92ab8c5a6379a30158bb9905276",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "c24bfd3b7bbc8222f253b004b522f7d5",
"text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).",
"title": ""
},
{
"docid": "6d3dc0ea95cd8626aff6938748b58d1a",
"text": "Mango cultivation methods being adopted currently are ineffective and low productive despite consuming huge man power. Advancements in robust unmanned aerial vehicles (UAV's), high speed image processing algorithms and machine vision techniques, reinforce the possibility of transforming agricultural scenario to modernity within prevailing time and energy constraints. Present paper introduces Agricultural Aid for Mango cutting (AAM), an Agribot that could be employed for precision mango farming. It is a quadcopter empowered with vision and cutter systems complemented with necessary ancillaries. It could hover around the trees, detect the ripe mangoes, cut and collect them. Paper also sheds light on the available Agribots that have mostly been limited to the research labs. AAM robot is the first of its kind that once implemented could pave way to the next generation Agribots capable of increasing the agricultural productivity and justify the existence of intelligent machines.",
"title": ""
},
{
"docid": "ec31c653457389bd587b26f4427c90d7",
"text": "Segmentation and classification of urban range data into different object classes have several challenges due to certain properties of the data, such as density variation, inconsistencies due to missing data and the large data size that require heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from LiDAR sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes transforming them into super-voxels. These are joined together by using a link-chain method rather than the usual region growing algorithm to create objects. These objects are then classified using geometrical models and local descriptors. In order to evaluate the results, a new metric that combines both segmentation and classification results simultaneously is presented. The effects of voxel size and incorporation of RGB color and laser reflectance intensity on the classification results are also discussed. The method is evaluated on standard data sets using different metrics to demonstrate its efficacy.",
"title": ""
},
{
"docid": "f65c027ab5baa981667955cc300d2f34",
"text": "In-band full-duplex (FD) wireless communication, i.e. simultaneous transmission and reception at the same frequency, in the same channel, promises up to 2x spectral efficiency, along with advantages in higher network layers [1]. the main challenge is dealing with strong in-band leakage from the transmitter to the receiver (i.e. self-interference (SI)), as TX powers are typically >100dB stronger than the weakest signal to be received, necessitating TX-RX isolation and SI cancellation. Performing this SI-cancellation solely in the digital domain, if at all possible, would require extremely clean (low-EVM) transmission and a huge dynamic range in the RX and ADC, which is currently not feasible [2]. Cancelling SI entirely in analog is not feasible either, since the SI contains delayed TX components reflected by the environment. Cancelling these requires impractically large amounts of tunable analog delay. Hence, FD-solutions proposed thus far combine SI-rejection at RF, analog BB, digital BB and cross-domain.",
"title": ""
},
{
"docid": "5ca36b7877ebd3d05e48d3230f2dceb0",
"text": "BACKGROUND\nThe frontal branch has a defined course along the Pitanguy line from tragus to lateral brow, although its depth along this line is controversial. The high-superficial musculoaponeurotic system (SMAS) face-lift technique divides the SMAS above the arch, which conflicts with previous descriptions of the frontal nerve depth. This anatomical study defines the depth and fascial boundaries of the frontal branch of the facial nerve over the zygomatic arch.\n\n\nMETHODS\nEight fresh cadaver heads were included in the study, with bilateral facial nerves studied (n = 16). The proximal frontal branches were isolated and then sectioned in full-thickness tissue blocks over a 5-cm distance over the zygomatic arch. The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.",
"title": ""
},
{
"docid": "acc26655abb2a181034db8571409d0a5",
"text": "In this paper, a new optimization approach is designed for convolutional neural network (CNN) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters’ weights in convolutional layers are separately trained by their own residual errors, and the relations of these filters are not explored for learning. Different from the traditional learning mechanism, the proposed correlative filters (CFs) are initiated and trained jointly in accordance with predefined correlations, which are efficient to work cooperatively and finally make a more generalized optical system. The improvement in CNN performance with the proposed CF is verified on five benchmark image classification datasets, including CIFAR-10, CIFAR-100, MNIST, STL-10, and street view house number. The comparative experimental results demonstrate that the proposed approach outperforms a number of state-of-the-art CNN approaches.",
"title": ""
},
{
"docid": "941d7a7a59261fe2463f42cad9cff004",
"text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of world. It has got several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material, varnish and also has got applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done to know about its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source wise chemical constituents and therapeutic uses. As well as, a little attempt has been done to review the techniques used for its quality control and safety.",
"title": ""
}
] | scidocsrr |
5bb3b78d1b976ec8a8b48f408a80c8a2 | CamBP: a camera-based, non-contact blood pressure monitor | [
{
"docid": "2531d8d05d262c544a25dbffb7b43d67",
"text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.",
"title": ""
}
] | [
{
"docid": "28a6a89717b3894b181e746e684cfad5",
"text": "When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives. In order to explore these stories, one needs a map to navigate unfamiliar territory. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents maximizing coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. We first formalize characteristics of good maps and formulate their construction as an optimization problem. Then we provide efficient methods with theoretical guarantees for generating maps. Finally, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with a real-world dataset demonstrate that the method is able to produce maps which help users acquire knowledge efficiently.",
"title": ""
},
{
"docid": "7b3dd8bdc75bf99f358ef58b2d56e570",
"text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.",
"title": ""
},
{
"docid": "c31dbdee3c36690794f3537c61cfc1e3",
"text": "Shape memory alloy (SMA) actuators, which have ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Although the number of applications is increasing, there has been limited success in precise motion control since the systems are disturbed by unknown factors beside their inherent nonlinear hysteresis or the surrounding environment of the systems is changed. This paper presents a new development of SMA position control system by using self-tuning fuzzy PID controller. The use of this control algorithm is to tune the parameters of the PID controller by integrating fuzzy inference and producing a fuzzy adaptive PID controller that can be used to improve the control performance of nonlinear systems. The experimental results of position control of SMA actuators using conventional and self tuning fuzzy PID controller are both included in this paper",
"title": ""
},
{
"docid": "9497731525a996844714d5bdbca6ae03",
"text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.",
"title": ""
},
{
"docid": "ea8450e8e1a217f1af596bb70051f5e7",
"text": "Supplier selection is nowadays one of the critical topics in supply chain management. This paper presents a new decision making approach for group multi-criteria supplier selection problem, which clubs supplier selection process with order allocation for dynamic supply chains to cope market variations. More specifically, the developed approach imitates the knowledge acquisition and manipulation in a manner similar to the decision makers who have gathered considerable knowledge and expertise in procurement domain. Nevertheless, under many conditions, exact data are inadequate to model real-life situation and fuzzy logic can be incorporated to handle the vagueness of the decision makers. As per this concept, fuzzy-AHP method is used first for supplier selection through four classes (CLASS I: Performance strategy, CLASS II: Quality of service, CLASS III: Innovation and CLASS IV: Risk), which are qualitatively meaningful. Thereafter, using simulation based fuzzy TOPSIS technique, the criteria application is quantitatively evaluated for order allocation among the selected suppliers. As a result, the approach generates decision-making knowledge, and thereafter, the developed combination of rules order allocation can easily be interpreted, adopted and at the same time if necessary, modified by decision makers. To demonstrate the applicability of the proposed approach, an illustrative example is presented and the results analyzed. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2ff17287164ea85e0a41974e5da0ecb6",
"text": "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.",
"title": ""
},
{
"docid": "c673f0f39f874f8dfde363d6a030e8dd",
"text": "Big data refers to data volumes in the range of exabytes (1018) and beyond. Such volumes exceed the capacity of current on-line storage systems and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the exabyte/year range. But, its creation and aggregation are accelerating and will approach the zettabyte/year range within a few years. Volume is only one aspect of big data; other attributes are variety, velocity, value, and complexity. Storage and data transport are technology issues, which seem to be solvable in the near-term, but represent longterm challenges that require research and new paradigms. We analyze the issues and challenges as we begin a collaborative research program into methodologies for big data analysis and design.",
"title": ""
},
{
"docid": "593aae604e5ecd7b6d096ed033a303f8",
"text": "We describe the first mobile app for identifying plant species using automatic visual recognition. The system – called Leafsnap – identifies tree species from photographs of their leaves. Key to this system are computer vision components for discarding non-leaf images, segmenting the leaf from an untextured background, extracting features representing the curvature of the leaf’s contour over multiple scales, and identifying the species from a dataset of the 184 trees in the Northeastern United States. Our system obtains state-of-the-art performance on the real-world images from the new Leafsnap Dataset – the largest of its kind. Throughout the paper, we document many of the practical steps needed to produce a computer vision system such as ours, which currently has nearly a million users.",
"title": ""
},
{
"docid": "63405a3fc4815e869fc872bb96bb8a33",
"text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.",
"title": ""
},
{
"docid": "3d10793b2e4e63e7d639ff1e4cdf04b6",
"text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.",
"title": ""
},
{
"docid": "e1b6de27518c1c17965a891a8d14a1e1",
"text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.",
"title": ""
},
{
"docid": "1865cf66083c30d74b555eab827d0f5f",
"text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.",
"title": ""
},
{
"docid": "5a3f542176503ddc6fcbd0fe29f08869",
"text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.",
"title": ""
},
{
"docid": "fd5c5ff7c97b9d6b6bfabca14631b423",
"text": "The composition and activity of the gut microbiota codevelop with the host from birth and is subject to a complex interplay that depends on the host genome, nutrition, and life-style. The gut microbiota is involved in the regulation of multiple host metabolic pathways, giving rise to interactive host-microbiota metabolic, signaling, and immune-inflammatory axes that physiologically connect the gut, liver, muscle, and brain. A deeper understanding of these axes is a prerequisite for optimizing therapeutic strategies to manipulate the gut microbiota to combat disease and improve health.",
"title": ""
},
{
"docid": "30d0453033d3951f5b5faf3213eacb89",
"text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.",
"title": ""
},
{
"docid": "56e1778df9d5b6fa36cbf4caae710e67",
"text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.",
"title": ""
},
{
"docid": "2431ee8fb0dcfd84c61e60ee41a95edb",
"text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.",
"title": ""
},
{
"docid": "6c1c3bc94314ce1efae62ac3ec605d4a",
"text": "Solar energy is an abundant renewable energy source (RES) which is available without any price from the Sun to the earth. It can be a good alternative of energy source in place of non-renewable sources (NRES) of energy like as fossil fuels and petroleum articles. Sun light can be utilized through solar cells which fulfills the need of energy of the utilizer instead of energy generation by NRES. The development of solar cells has crossed by a number of modifications from one age to another. The cost and efficiency of solar cells are the obstacles in the advancement. In order to select suitable solar photovoltaic (PV) cells for a particular area, operators are needed to sense the basic mechanisms and topologies of diverse solar PV with maximum power point tracking (MPPT) methodologies that are checked to a great degree. In this article, authors reviewed and analyzed a successive growth in the solar PV cell research from one decade to other, and explained about their coming fashions and behaviors. This article also attempts to emphasize on many experiments and technologies to contribute the perks of solar energy.",
"title": ""
},
{
"docid": "0dc1119bf47ffa6d032c464a54d5d173",
"text": "The use of an analogy from a semantically distant domain to guide the problemsolving process was investigated. The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker’s “radiation problem”), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.",
"title": ""
},
{
"docid": "419499ced8902a00909c32db352ea7f5",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
}
] | scidocsrr |
07b5970a870965f31a550df1161ec716 | An effective and fast iris recognition system based on a combined multiscale feature extraction technique | [
{
"docid": "b125649628d46871b2212c61e355ec43",
"text": "AbstructA method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from ExclusiveOR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about lo”’.",
"title": ""
},
{
"docid": "d82e41bcf0d25a728ddbad1dd875bd16",
"text": "With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.",
"title": ""
}
] | [
{
"docid": "f01970e430e1dce15efdd5a054bd9c2a",
"text": "Hierarchical text categorization (HTC) refers to assigning a text document to one or more most suitable categories from a hierarchical category space. In this paper we present two HTC techniques based on kNN and SVM machine learning techniques for categorization process and byte n-gram based document representation. They are fully language independent and do not require any text preprocessing steps, or any prior information about document content or language. The effectiveness of the presented techniques and their language independence are demonstrated in experiments performed on five tree-structured benchmark category hierarchies that differ in many aspects: Reuters-Hier1, Reuters-Hier2, 15NGHier and 20NGHier in English and TanCorpHier in Chinese. The results obtained are compared with the corresponding flat categorization techniques applied to leaf level categories of the considered hierarchies. While kNN-based flat text categorization produced slightly better results than kNN-based HTC on the largest TanCorpHier and 20NGHier datasets, SVM-based HTC results do not considerably differ from the corresponding flat techniques, due to shallow hierarchies; still, they outperform both kNN-based flat and hierarchical categorization on all corpora except the smallest Reuters-Hier1 and Reuters-Hier2 datasets. Formal evaluation confirmed that the proposed techniques obtained state-of-the-art results.",
"title": ""
},
{
"docid": "5178bd7051b15f9178f9d19e916fdc85",
"text": "With more and more functions in modern battery-powered mobile devices, enabling light-harvesting in the power management system can extend battery usage time [1]. For both indoor and outdoor operations of mobile devices, the output power range of the solar panel with the size of a touchscreen can vary from 100s of µW to a Watt due to the irradiance-level variation. An energy harvester is thus essential to achieve high maximum power-point tracking efficiency (ηT) over this wide power range. However, state-of-the-art energy harvesters only use one maximum power-point tracking (MPPT) method under different irradiance levels as shown in Fig. 22.5.1 [2–5]. Those energy harvesters with power-computation-based MPPT schemes for portable [2,3] and standalone [4] systems suffer from low ηT under low input power due to the limited input dynamic range of the MPPT circuitry. Other low-power energy harvesters with the fractional open-cell voltage (FOCV) MPPT scheme are confined by the fractional-constant accuracy to only offer high ηT across a narrow power range [5]. Additionally, the conventional FOCV MPPT scheme requires long transient time of 250ms to identify MPP [5], thereby significantly reducing energy capture from the solar panel. To address the above issues, this paper presents an energy harvester with an irradiance-aware hybrid algorithm (IAHA) to automatically switch between an auto-zeroed pulse-integration based MPPT (AZ PI-MPPT) and a slew-rate-enhanced FOCV (SRE-FOCV) MPPT scheme for maximizing ηT under different irradiance levels. The SRE-FOCV MPPT scheme also enables the energy harvester to shorten the MPPT transient time to 2.9ms in low irradiance levels.",
"title": ""
},
{
"docid": "fe400cd7cd0e63dfe4510e1357172d72",
"text": "Accurate segmentation of anatomical structures in chest radiographs is essential for many computer-aided diagnosis tasks. In this paper we investigate the latest fully-convolutional architectures for the task of multi-class segmentation of the lungs field, heart and clavicles in a chest radiograph. In addition, we explore the influence of using different loss functions in the training process of a neural network for semantic segmentation. We evaluate all models on a common benchmark of 247 X-ray images from the JSRT database and ground-truth segmentation masks from the SCR dataset. Our best performing architecture, is a modified U-Net that benefits from pre-trained encoder weights. This model outperformed the current state-of-the-art methods tested on the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6% for heart and 85.5% for clavicles.",
"title": ""
},
{
"docid": "98ee8e95912382d6c2aaaa40c4117bbb",
"text": "There are various undergoing efforts by system operators to set up an electricity market at the distribution level to enable a rapid and widespread deployment of distributed energy resources (DERs) and microgrids. This paper follows the previous work of the authors in implementing the distribution market operator (DMO) concept, and focuses on investigating the clearing and settlement processes performed by the DMO. The DMO clears the market to assign the awarded power from the wholesale market to customers within its service territory based on their associated demand bids. The DMO accordingly settles the market to identify the distribution locational marginal prices (DLMPs) and calculate payments from each customer and the total payment to the system operator. Numerical simulations exhibit the merits and effectiveness of the proposed DMO clearing and settlement processes.",
"title": ""
},
{
"docid": "cebdedb344f2ba7efb95c2933470e738",
"text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks",
"title": ""
},
{
"docid": "94d14d837104590fcf9055351fa59482",
"text": "The amygdala receives cortical inputs from the medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC) that are believed to affect emotional control and cue-outcome contingencies, respectively. Although mPFC impact on the amygdala has been studied, how the OFC modulates mPFC-amygdala information flow, specifically the infralimbic (IL) division of mPFC, is largely unknown. In this study, combined in vivo extracellular single-unit recordings and pharmacological manipulations were used in anesthetized rats to examine how OFC modulates amygdala neurons responsive to mPFC activation. Compared with basal condition, pharmacological (N-Methyl-D-aspartate) or electrical activation of the OFC exerted an inhibitory modulation of the mPFC-amygdala pathway, which was reversed with intra-amygdala blockade of GABAergic receptors with combined GABAA and GABAB antagonists (bicuculline and saclofen). Moreover, potentiation of the OFC-related pathways resulted in a loss of OFC control over the mPFC-amygdala pathway. These results show that the OFC potently inhibits mPFC drive of the amygdala in a GABA-dependent manner; but with extended OFC pathway activation this modulation is lost. Our results provide a circuit-level basis for this interaction at the level of the amygdala, which would be critical in understanding the normal and pathophysiological control of emotion and contingency associations regulating behavior.",
"title": ""
},
{
"docid": "be5d3179bc0198505728a5b27dcf1a89",
"text": "Emotion has been investigated from various perspectives and across several domains within human computer interaction (HCI) including intelligent tutoring systems, interactive web applications, social media and human-robot interaction. One of the most promising and, nevertheless, challenging applications of affective computing (AC) research is within computer games. This chapter focuses on the study of emotion in the computer games domain, reviews seminal work at the crossroads of game technology, game design and affective computing and details the key phases for efficient affectbased interaction in games.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "9b451aa93627d7b44acc7150a1b7c2d0",
"text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.",
"title": ""
},
{
"docid": "864d1c5a2861acc317f9f2a37c6d3660",
"text": "We report a case of an 8-month-old child with a primitive myxoid mesenchymal tumor of infancy arising in the thenar eminence. The lesion recurred after conservative excision and was ultimately nonresponsive to chemotherapy, necessitating partial amputation. The patient remains free of disease 5 years after this radical surgery. This is the 1st report of such a tumor since it was initially described by Alaggio and colleagues in 2006. The pathologic differential diagnosis is discussed.",
"title": ""
},
{
"docid": "0109c8c7663df5e8ac2abd805924d9f6",
"text": "To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation, generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and are still best-guess methods which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulation of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System",
"title": ""
},
{
"docid": "4b3592efd8a4f6f6c9361a6f66a30a5f",
"text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E",
"title": ""
},
{
"docid": "6bbfe0418e446d70eb0a17a14415c03c",
"text": "Most mobile apps today require access to remote services, and many of them also require users to be authenticated in order to use their services. To ensure the security between the client app and the remote service, app developers often use cryptographic mechanisms such as encryption (e.g., HTTPS), hashing (e.g., MD5, SHA1), and signing (e.g., HMAC) to ensure the confidentiality and integrity of the network messages. However, these cryptographic mechanisms can only protect the communication security, and server-side checks are still needed because malicious clients owned by attackers can generate any messages they wish. As a result, incorrect or missing server side checks can lead to severe security vulnerabilities including password brute-forcing, leaked password probing, and security access token hijacking. To demonstrate such a threat, we present AUTOFORGE, a tool that can automatically forge valid request messages from the client side to test whether the server side of an app has ensured the security of user accounts with sufficient checks. To enable these security tests, a fundamental challenge lies in how to forge a valid cryptographically consistent message such that it can be consumed by the server. We have addressed this challenge with a set of systematic techniques, and applied them to test the server side implementation of 76 popular mobile apps (each of which has over 1,000,000 installs). Our experimental results show that among these apps, 65 (86%) of their servers are vulnerable to password brute-forcing attacks, all (100%) are vulnerable to leaked password probing attacks, and 9 (12%) are vulnerable to Facebook access token hijacking attacks.",
"title": ""
},
{
"docid": "4d2bfda62140962af079817fc7dbd43e",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "ebc5ef2e1f557b723358043248c051a8",
"text": "Place similarity has a central role in geographic information retrieval and geographic information systems, where spatial proximity is frequently just a poor substitute for semantic relatedness. For applications such as toponym disambiguation, alternative measures are thus required to answer the non-trivial question of place similarity in a given context. In this paper, we discuss a novel approach to the construction of a network of locations from unstructured text data. By deriving similarity scores based on the textual distance of toponyms, we obtain a kind of relatedness that encodes the importance of the co-occurrences of place mentions. Based on the text of the English Wikipedia, we construct and provide such a network of place similarities, including entity linking to Wikidata as an augmentation of the contained information. In an analysis of centrality, we explore the networks capability of capturing the similarity between places. An evaluation of the network for the task of toponym disambiguation on the AIDA CoNLL-YAGO dataset reveals a performance that is in line with state-of-the-art methods.",
"title": ""
},
{
"docid": "63934b1fdc9b7c007302cdc41a42744e",
"text": "Recently, various energy harvesting techniques from ambient environments were proposed as alternative methods for powering sensor nodes, which convert the ambient energy from environments into electricity to power sensor nodes. However, those techniques are not applicable to the wireless sensor networks (WSNs) in the environment with no ambient energy source. To overcome this problem, an RF energy transfer method was proposed to power wireless sensor nodes. However, the RF energy transfer method also has a problem of unfairness among sensor nodes due to the significant difference between their energy harvesting rates according to their positions. In this paper, we propose a medium access control (MAC) protocol for WSNs based on RF energy transfer. The proposed MAC protocol adaptively manages the duty cycle of sensor nodes according to their the amount of harvested energy as well as the contention time of sensor nodes considering fairness among them. Through simulations, we show that our protocol can achieve a high degree of fairness, while maintaining duty cycle of sensor nodes appropriately according to the amount of their harvested energy.",
"title": ""
},
{
"docid": "90d9360a3e769311a8d7611d8c8845d9",
"text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"title": ""
},
{
"docid": "ba6f0206decf2b9bde415ffdfcd32eb9",
"text": "Broad host-range mini-Tn7 vectors facilitate integration of single-copy genes into bacterial chromosomes at a neutral, naturally evolved site. Here we present a protocol for employing the mini-Tn7 system in bacteria with single attTn7 sites, using the example Pseudomonas aeruginosa. The procedure involves, first, cloning of the genes of interest into an appropriate mini-Tn7 vector; second, co-transfer of the recombinant mini-Tn7 vector and a helper plasmid encoding the Tn7 site-specific transposition pathway into P. aeruginosa by either transformation or conjugation, followed by selection of insertion-containing strains; third, PCR verification of mini-Tn7 insertions; and last, optional Flp-mediated excision of the antibiotic-resistance selection marker present on the chromosomally integrated mini-Tn7 element. From start to verification of the insertion events, the procedure takes as little as 4 d and is very efficient, yielding several thousand transformants per microgram of input DNA or conjugation mixture. In contrast to existing chromosome integration systems, which are mostly based on species-specific phage or more-or-less randomly integrating transposons, the mini-Tn7 system is characterized by its ready adaptability to various bacterial hosts, its site specificity and its efficiency. Vectors have been developed for gene complementation, construction of gene fusions, regulated gene expression and reporter gene tagging.",
"title": ""
},
{
"docid": "ef24d571dc9a7a3fae2904b8674799e9",
"text": "With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there have been many researches on intelligent malware detection by applying data mining and machine learning techniques. Though great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architecture, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from the portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked up with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs as a greedy layer-wise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning. Different from the existing works which only made use of the files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from bottom to up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve the overall performance in malware detection compared with traditional shallow learning methods, deep learning methods with homogeneous framework, and other existing anti-malware scanners. The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.",
"title": ""
}
] | scidocsrr |
a26d3fce3c96b00be0c8478bdff69ccb | Herbert Simon ’ s Decision-Making Approach : Investigation of Cognitive Processes in Experts | [
{
"docid": "4aec1d1c4f4ca3990836a5d15fba81c7",
"text": "P eople with higher cognitive ability (or “IQ”) differ from those with lower cognitive ability in a variety of important and unimportant ways. On average, they live longer, earn more, have larger working memories, faster reaction times and are more susceptible to visual illusions (Jensen, 1998). Despite the diversity of phenomena related to IQ, few have attempted to understand—or even describe—its influences on judgment and decision making. Studies on time preference, risk preference, probability weighting, ambiguity aversion, endowment effects, anchoring and other widely researched topics rarely make any reference to the possible effects of cognitive abilities (or cognitive traits). Decision researchers may neglect cognitive ability because they are more interested in the average effect of some experimental manipulation. On this view, individual differences (in intelligence or anything else) are regarded as a nuisance—as just another source of “unexplained” variance. Second, most studies are conducted on college undergraduates, who are widely perceived as fairly homogenous. Third, characterizing performance differences on cognitive tasks requires terms (“IQ” and “aptitudes” and such) that many object to because of their association with discriminatory policies. In short, researchers may be reluctant to study something they do not find interesting, that is not perceived to vary much within the subject pool conveniently obtained, and that will just get them into trouble anyway. But as Lubinski and Humphreys (1997) note, a neglected aspect does not cease to operate because it is neglected, and there is no good reason for ignoring the possibility that general intelligence or various more specific cognitive abilities are important causal determinants of decision making. To provoke interest in this",
"title": ""
}
] | [
{
"docid": "b743159683f5cb99e7b5252dbc9ae74f",
"text": "When human agents come together to make decisions it is often the case that one human agent has more information than the other and this phenomenon is called information asymmetry and this distorts the market. Often if one human agent intends to manipulate a decision in its favor the human agent can signal wrong or right information. Alternatively, one human agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have been made easier. This chapter studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificial intelligent agents reduce the degree of information asymmetry and thus the market where these agents are deployed become more efficient. It is also postulated that the more artificial intelligent agents there are deployed in the market the less is the volume of trades in the market. This is because for trade to happen the asymmetry of information on goods and services to be traded should exist.",
"title": ""
},
{
"docid": "ede29bc41058b246ceb451d5605cce2c",
"text": "Knowledge graphs have challenged the existing embedding-based approaches for representing their multifacetedness. To address some of the issues, we have investigated some novel approaches that (i) capture the multilingual transitions on different language-specific versions of knowledge, and (ii) encode the commonly existing monolingual knowledge with important relational properties and hierarchies. In addition, we propose the use of our approaches in a wide spectrum of NLP tasks that have not been well explored by related works.",
"title": ""
},
{
"docid": "87e8b5b75b5e83ebc52579e8bbae04f0",
"text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.",
"title": ""
},
{
"docid": "64f1e3a95a8f25ec260f7cdf1867effc",
"text": "In this paper we present a Petri net simulator that will be used for modeling of queues. The concept of queue is ubiquitous in the everyday life. Queues are present at the airports, banks, shops etc. Many factors are important in the studying of queues – some of them are: the average waiting time in the queue, the usage of the server, expected length of the queue etc. The paper is organized as follows: first we give a brief overview of the queuing theory. After that – we describe the Petri nets, as a tool for modeling of discrete event systems. We have implemented Petri net simulator that is capable for queue modeling. Finally we are presenting the simulation results for M/M/1 queue and its different properties. We’ve also made comparison between theoretically obtained results and our simulation results.",
"title": ""
},
{
"docid": "7b609a8f58f69a402d8db0435c0ed475",
"text": "The preservation of privacy when publishing spatiotemporal traces of mobile humans is a field that is receiving growing attention. However, while more and more services offer personalized privacy options to their users, few trajectory anonymization algorithms are able to handle personalization effectively, without incurring unnecessary information distortion. In this paper, we study the problem of Personalized (K,∆)anonymity, which builds upon the model of (k,δ)-anonymity, while allowing users to have their own individual privacy and service quality requirements. First, we propose efficient modifications to state-of-the-art (k,δ)-anonymization algorithms by introducing a novel technique built upon users’ personalized privacy settings. This way, we avoid over-anonymization and we decrease information distortion. In addition, we utilize datasetaware trajectory segmentation in order to further reduce information distortion. We also study the novel problem of Bounded Personalized (Κ,∆)-anonymity, where the algorithm gets as input an upper bound the information distortion being accepted, and introduce a solution to this problem by editing the (k,δ) requirements of the highest demanding trajectories. Our extensive experimental study over real life trajectories shows the effectiveness of the proposed techniques.",
"title": ""
},
{
"docid": "1610802593a60609bc1213762a9e0584",
"text": "We examined emotional stability, ambition (an aspect of extraversion), and openness as predictors of adaptive performance at work, based on the evolutionary relevance of these traits to human adaptation to novel environments. A meta-analysis on 71 independent samples (N = 7,535) demonstrated that emotional stability and ambition are both related to overall adaptive performance. Openness, however, does not contribute to the prediction of adaptive performance. Analysis of predictor importance suggests that ambition is the most important predictor for proactive forms of adaptive performance, whereas emotional stability is the most important predictor for reactive forms of adaptive performance. Job level (managers vs. employees) moderates the effects of personality traits: Ambition and emotional stability exert stronger effects on adaptive performance for managers as compared to employees.",
"title": ""
},
{
"docid": "eb350f3e61333f7bcd6f9bade1151f4b",
"text": "This Internet research project examined the relationship between consumption of muscle and fitness magazines and/or various indices of pornography and body satisfaction in gay and heterosexual men. Participants (N = 101) were asked to complete body satisfaction questionnaires that addressed maladaptive eating attitudes, the drive for muscularity, and social physique anxiety. Participants also completed scales measuring self-esteem, depression, and socially desirable responding. Finally, respondents were asked about their consumption of muscle and fitness magazines and pornography. Results indicated that viewing and purchasing of muscle and fitness magazines correlated positively with levels of body dissatisfaction for both gay and heterosexual men. Pornography exposure was positively correlated with social physique anxiety for gay men. The limitations of this study and directions for future research are outlined.",
"title": ""
},
{
"docid": "59a98a769d8aa5565f522369e65f02fc",
"text": "Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps training in many contexts, yielding state-of-the-art or competitive results on different datasets and task, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.",
"title": ""
},
{
"docid": "47785d2cbbc5456c0a2c32c329498425",
"text": "Are there important cyclical fluctuations in bond market premiums and, if so, with what macroeconomic aggregates do these premiums vary? We use the methodology of dynamic factor analysis for large datasets to investigate possible empirical linkages between forecastable variation in excess bond returns and macroeconomic fundamentals. We find that “real” and “inflation” factors have important forecasting power for future excess returns on U.S. government bonds, above and beyond the predictive power contained in forward rates and yield spreads. This behavior is ruled out by commonly employed affine term structure models where the forecastability of bond returns and bond yields is completely summarized by the cross-section of yields or forward rates. An important implication of these findings is that the cyclical behavior of estimated risk premia in both returns and long-term yields depends importantly on whether the information in macroeconomic factors is included in forecasts of excess bond returns. Without the macro factors, risk premia appear virtually acyclical, whereas with the estimated factors risk premia have a marked countercyclical component, consistent with theories that imply investors must be compensated for risks associated with macroeconomic activity. ( JEL E0, E4, G10, G12)",
"title": ""
},
{
"docid": "68cd651ccfa4bf19ebc842efee7dcdb9",
"text": "The impetus for our study was the contention of both Lynn [Lynn, R. (1991) Race differences in intelligence: A global perspective. Mankind Quarterly, 31, 255–296] and Rushton (Rushton [Rushton, J. P. (1995). Race, evolution and behavior: A life history perspective. New Brunswick, NJ: Transaction; Rushton, J. P. (1997). Race, intelligence, and the brain: The errors and omissions of the revised edition of S.J. Gould’s the mismeasurement of man. Personality and Individual Differences, 23, 169–180; Rushton, J. P. (2000). Race, evolution, and behavior. A life history perspective (3rd edition). Port Huron: Charles Darwin Research Institute] that persons in colder climates tend to have higher IQs than persons in warmer climates. We correlated mean IQ of 129 countries with per capita income, skin color, and winter and summer temperatures, conceptualizing skin color as a multigenerational reflection of climate. The highest correlations were 0.92 (rho= 0.91) for skin color, 0.76 (rho= 0.76) for mean high winter temperature, 0.66 (rho= 0.68) for mean low winter temperature, and 0.63 (rho=0.74) for real gross domestic product per capita. The correlations with population of country controlled for are almost identical. Our findings provide strong support for the observation of Lynn and of Rushton that persons in colder climates tend to have higher IQs. These findings could also be viewed as congruent with, although not providing unequivocal evidence for, the contention that higher intelligence evolves in colder climates. The finding of higher IQ in Eurasians than Africans could also be viewed as congruent with the position of Diamond (1997) that knowledge and resources are transmitted more readily on the Eurasian west–east axis. D 2005 Elsevier Inc. All rights reserved. Both Rushton (1995, 1997, 2000) and Lynn (1991) have pointed out that ethnic groups in colder climates score higher on intelligence tests than ethnic groups in warmer climates. They contend that greater intelligence is needed to adapt to a colder climate so that, over many generations, the more intelligent members of a population are more likely to survive and reproduce. Their temperature and IQ analyses have been descriptive rather than quantitative, however. In the present quantitative study, we predicted a negative correlation between IQ 0160-2896/$ see front matter D 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.intell.2005.04.002 * Corresponding author. E-mail address: [email protected] (D.I. Templer). and temperature. We hypothesized that correlations would be higher for mean winter temperatures (January in the Northern Hemisphere and July in the Southern Hemisphere) than for mean summer temperatures. Skin color was conceptualized as a variable closely related to temperature. It is viewed by the present authors as a multigenerational reflection of the climates one’s ancestors have lived in for thousands of years. Another reason to predict correlations of IQ with temperature and skin color is the product–moment correlation reported by Beals, Smith, and Dodd (1984) of 0.62 between cranial capacity and distance from the equator. Beals et al. based their finding on 20,000 individual crania 06) 121–139 D.I. Templer, H. Arikawa / Intelligence 34 (2006) 121–139 122 from every continent and representing 122 ethnically distinguishable populations. Jensen (1998) reasoned that natural selection would favor a smaller head with a less spherical shape because of better heat dissipation in hot climates. 
Natural selection in colder climates would favor a more spherical head to accommodate a larger brain and to have better heat conservation. We used an index of per capita income-real gross domestic product (GDP) per capita to compare the correlations of income with IQ to those of temperature and skin color with IQ. There is a strong rationale for predicting a positive relationship between IQ and real GDP per capita. Common sense dictates that more intelligent populations can achieve greater scientific, technological, and organizational advancement. Furthermore, it is well established that conditions associated with poverty, such as malnutrition and inadequate prenatal/perinatal and other health care, can prevent the attainment of genetic potential. Lynn and Vanhanen (2002) did indeed find positive correlations between adjusted IQ and real GDP per capita of nations throughout the world. Their scatter plots vividly show that countries south of the Sahara Desert have both the lowest real GDPs per capita in the world and the lowest mean IQs in the world (in the 60s and 70s). The real GDP per capita in high-IQ countries is much more variable. For example, China and Korea have very high mean IQs but rather low real GDPs per capita. In this study, we considered only countries (N =129) with primarily indigenous people—those with populations that have persisted since before the voyages of Christopher Columbus. It is acknowledged that there have been many migrations both before and after Columbus. However, the year 1492 has previously been used to define indigenous populations (Cavalli-Sforza, Menzoni, & Piazza, 1994).",
"title": ""
},
{
"docid": "0743a084a2fbacab046b3e0420d74443",
"text": "Quick Response (QR) codes are two dimensional barcodes that can be used to efficiently store small amount of data. They are increasingly used in all life fields, especially with the wide spread of smart phones which are used as QR code scanners. While QR codes have many advantages that make them very popular, there are several security issues and risks that are associated with them. Running malicious code, stealing users' sensitive information and violating their privacy and identity theft are some typical security risks that a user might be subject to in the background while he/she is just reading the QR code in the foreground. In this paper, a security system for QR codes that guarantees both users and generators security concerns is implemented. The system is backward compatible with current standard used for encoding QR codes. The system is implemented and tested using an Android-based smartphone application. It was found that the system introduces a little overhead in terms of the delay required for integrity verification and content validation.",
"title": ""
},
{
"docid": "cb27f97e93df2b7570ca354d078272ad",
"text": "To perform power augmentation tasks of a robotic exoskeleton, this paper utilizes fuzzy approximation and designed disturbance observers to compensate for the disturbance torques caused by unknown input saturation, fuzzy approximation errors, viscous friction, gravity, and payloads. The proposed adaptive fuzzy control with updated parameters' mechanism and additional torque inputs by using the disturbance observers are exerted into the robotic exoskeleton via feedforward loops to counteract to the disturbances. Through such an approach, the system does not need any requirement of built-in torque sensing units. In order to validate the proposed framework, extensive experiments are conducted on the upper limb exoskeleton using the state feedback and output feedback control to illustrate the performance of the proposed approaches.",
"title": ""
},
{
"docid": "1305feaad59e75d81d5c20c6aaca5254",
"text": "Live fish recognition is one of the most crucial elements of fisheries survey applications where the vast amount of data is rapidly acquired. Different from general scenarios, challenges to underwater image recognition are posted by poor image quality, uncontrolled objects and environment, and difficulty in acquiring representative samples. In addition, most existing feature extraction techniques are hindered from automation due to involving human supervision. Toward this end, we propose an underwater fish recognition framework that consists of a fully unsupervised feature learning technique and an error-resilient classifier. Object parts are initialized based on saliency and relaxation labeling to match object parts correctly. A non-rigid part model is then learned based on fitness, separation, and discrimination criteria. For the classifier, an unsupervised clustering approach generates a binary class hierarchy, where each node is a classifier. To exploit information from ambiguous images, the notion of partial classification is introduced to assign coarse labels by optimizing the benefit of indecision made by the classifier. Experiments show that the proposed framework achieves high accuracy on both public and self-collected underwater fish images with high uncertainty and class imbalance.",
"title": ""
},
{
"docid": "9772d6f0173d88ce000974f912e458f5",
"text": "To be tractable and robust to data noise, existing metric learning algorithms commonly rely on PCA as a pre-processing step. How can we know, however, that PCA, or any other specific dimensionality reduction technique, is the method of choice for the problem at hand? The answer is simple: We cannot! To address this issue, in this paper, we develop a Riemannian framework to jointly learn a mapping performing dimensionality reduction and a metric in the induced space. Our experiments evidence that, while we directly work on high-dimensional features, our approach yields competitive runtimes with and higher accuracy than state-of-the-art metric learning algorithms.",
"title": ""
},
{
"docid": "1c8ae6e8d46e95897a9bd76e09fd28aa",
"text": "Skin diseases are very common in our daily life. Due to the similar appearance of skin diseases, automatic classification through lesion images is quite a challenging task. In this paper, a novel multi-classification method based on convolutional neural network (CNN) is proposed for dermoscopy images. A CNN network with nested residual structure is designed first, which can learn more information than the original residual structure. Then, the designed network are trained through transfer learning. With the trained network, 6 kinds of lesion diseases are classified, including nevus, seborrheic keratosis, psoriasis, seborrheic dermatitis, eczema and basal cell carcinoma. The experiments are conducted on six-classification and two-classification tasks, and with the accuracies of 65.8% and 90% respectively, our method greatly outperforms other 4 state-of-the-art networks and the average of 149 professional dermatologists.",
"title": ""
},
{
"docid": "b1cabb319ce759343ad3f043c7d86b14",
"text": "We consider the problem of scheduling tasks requiring certain processing times on one machine so that the busy time of the machine is maximized. The problem is to find a probabilistic online algorithm with reasonable worst case performance ratio. We answer an open problem of Lipton and Tompkins concerning the best possible ratio that can be achieved. Furthermore, we extend their results to anm-machine analogue. Finally, a variant of the problem is analyzed, in which the machine is provided with a buffer to store one job. Wir betrachten das Problem der Zuteilung von Aufgaben bestimmter Rechenzeit auf einem Rechner, um so seine Auslastung zu maximieren. Die Aufgabe besteht darin, einen probabilistischen Online-Algorithmus mit vernünftigem worst-case Performance-Verhältnis zu finden. Wir geben die Antwort auf ein offenes Problem von Lipton und Tompkins, das das bestmögliche Verhältnis betrifft. Weiter verallgemeinern wir ihre Ergebnisse auf einm-Maschinen-Analogon. Schließlich wird eine Variante des Problems analysiert, in dem der Rechner mit einem Zwischenspeicher für einen Job versehen ist.",
"title": ""
},
{
"docid": "18d4a0b3b6eceb110b6eb13fde6981c7",
"text": "We simulate the growth of a benign avascular tumour embedded in normal tissue, including cell sorting that occurs between tumour and normal cells, due to the variation of adhesion between diierent cell types. The simulation uses the Potts Model, an energy minimisation method. Trial random movements of cell walls are checked to see if they reduce the adhesion energy of the tissue. These trials are then accepted with Boltzmann weighted probability. The simulated tumour initially grows exponentially, then forms three concentric shells as the nutrient level supplied to the core by diiusion decreases: the outer shell consists of live proliferating cells, the middle of quiescent cells and the centre is a necrotic core, where the nutrient concentration is below the critical level that sustains life. The growth rate of the tumour decreases at the onset of shell formation in agreement with experimental observation. The tumour eventually approaches a steady state, where the increase in volume due to the growth of the proliferating cells equals the loss of volume due to the disintegration of cells in the necrotic core. The nal thickness of the shells also agrees with experiment.",
"title": ""
},
{
"docid": "bc6a13cc44a77d29360d04a2bc96bd61",
"text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.",
"title": ""
},
{
"docid": "2a3b9c70dc8f80419ba4557752c4e603",
"text": "The proliferation of sensors and mobile devices and their connectedness to the network have given rise to numerous types of situation monitoring applications. Data Stream Management Systems (DSMSs) have been proposed to address the data processing needs of such applications that require collection of high-speed data, computing results on-the-fly, and taking actions in real-time. Although a lot of work appears in the area of DSMS, not much has been done in multilevel secure (MLS) DSMS making the technology unsuitable for highly sensitive applications, such as battlefield monitoring. An MLS–DSMS should ensure the absence of illegal information flow in a DSMS and more importantly provide the performance needed to handle continuous queries. We illustrate why the traditional DSMSs cannot be used for processing multilevel secure continuous queries and discuss various DSMS architectures for processing such queries. We implement one such architecture and demonstrate how it processes continuous queries. In order to provide better quality of service and memory usage in a DSMS, we show how continuous queries submitted by various users can be shared. We provide experimental evaluations to demonstrate the performance benefits achieved through query sharing.",
"title": ""
}
] | scidocsrr |
63867bb9be12855f80ba17d341d9aa3c | Tree Edit Distance Learning via Adaptive Symbol Embeddings | [
{
"docid": "b16dfd7a36069ed7df12f088c44922c5",
"text": "This paper considers the problem of computing the editing distance between unordered, labeled trees. We give efficient polynomial-time algorithms for the case when one tree is a string or has a bounded number of leaves. By contrast, we show that the problem is NP -complete even for binary trees having a label alphabet of size two. keywords: Computational Complexity, Unordered trees, NP -completeness.",
"title": ""
},
{
"docid": "c75095680818ccc7094e4d53815ef475",
"text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function . The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.",
"title": ""
}
] | [
{
"docid": "7d3f0c22674ac3febe309c2440ad3d90",
"text": "MAC address randomization is a common privacy protection measure deployed in major operating systems today. It is used to prevent user-tracking with probe requests that are transmitted during IEEE 802.11 network scans. We present an attack to defeat MAC address randomization through observation of the timings of the network scans with an off-the-shelf Wi-Fi interface. This attack relies on a signature based on inter-frame arrival times of probe requests, which is used to group together frames coming from the same device although they use distinct MAC addresses. We propose several distance metrics based on timing and use them together with an incremental learning algorithm in order to group frames. We show that these signatures are consistent over time and can be used as a pseudo-identifier to track devices. Our framework is able to correctly group frames using different MAC addresses but belonging to the same device in up to 75% of the cases. These results show that the timing of 802.11 probe frames can be abused to track individual devices and that address randomization alone is not always enough to protect users against tracking.",
"title": ""
},
{
"docid": "4e95abd0786147e5e9f4195f2c7a8ff7",
"text": "The contour tree is an abstraction of a scalar field that encodes the nesting relationships of isosurfaces. We show how to use the contour tree to represent individual contours of a scalar field, how to simplify both the contour tree and the topology of the scalar field, how to compute and store geometric properties for all possible contours in the contour tree, and how to use the simplified contour tree as an interface for exploratory visualization.",
"title": ""
},
{
"docid": "e8792ced13f1be61d031e2b150cc5cf6",
"text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.",
"title": ""
},
{
"docid": "32dd24b2c3bcc15dd285b2ffacc1ba43",
"text": "In this paper, we present for the first time the realization of a 77 GHz chip-to-rectangular waveguide transition realized in an embedded Wafer Level Ball Grid Array (eWLB) package. The chip is contacted with a coplanar waveguide (CPW). For the transformation of the transverse electromagnetic (TEM) mode of the CPW line to the transverse electric (TE) mode of the rectangular waveguide an insert is used in the eWLB package. This insert is based on radio-frequency (RF) printed circuit board (PCB) technology. Micro vias formed in the insert are used to realize the sidewalls of the rectangular waveguide structure in the fan-out area of the eWLB package. The redistribution layers (RDLs) on the top and bottom surface of the package form the top and bottom wall, respectively. We present two possible variants of transforming the TEM mode to the TE mode. The first variant uses a via realized in the rectangular waveguide structure. The second variant uses only the RDLs of the eWLB package for mode conversion. We present simulation and measurement results of both variants. We obtain an insertion loss of 1.5 dB and return loss better than 10 dB. The presented results show that this approach is an attractive candidate for future low loss and highly integrated RF systems.",
"title": ""
},
{
"docid": "3b61571cef1afad7a1aad2f9f8e586c4",
"text": "A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but differing only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. We also show that these universal parametrization are very effective for transfer learning, where they outperform traditional fine-tuning techniques.",
"title": ""
},
{
"docid": "a916ebc65d96a9f0bee8edbe6c360d38",
"text": "Technology must work for human race and improve the way help reaches a person in distress in the shortest possible time. In a developing nation like India, with the advancement in the transportation technology and rise in the total number of vehicles, road accidents are increasing at an alarming rate. If an accident occurs, the victim's survival rate increases when you give immediate medical assistance. You can give medical assistance to an accident victim only when you know the exact location of the accident. This paper presents an inexpensive but intelligent framework that can identify and report an accident for two-wheelers. This paper targets two-wheelers because the mortality ratio is highest in two-wheeler accidents in India. This framework includes a microcontroller-based low-cost Accident Detection Unit (ADU) that contains a GPS positioning system and a GSM modem to sense and generate accidental events to a centralized server. The ADU calculates acceleration along with ground clearance of the vehicle to identify the accidental situation. On detecting an accident, ADU sends accident detection parameters, GPS coordinates, and the current time to the Accident Detection Server (ADS). ADS maintain information on the movement of the vehicle according to the historical data, current data, and the rules that you configure in the system. If an accident occurs, ADS notifies the emergency services and the preconfigured mobile numbers for the vehicle that contains this unit.",
"title": ""
},
{
"docid": "966d8f17b750117effa81a64a0dba524",
"text": "As the number of indoor location based services increase in the mobile market, service providers demanding more accurate indoor positioning information. To improve the location estimation accuracy, this paper presents an enhanced client centered location prediction scheme and filtering algorithm based on IEEE 802.11 ac standard for indoor positioning system. We proposed error minimization filtering algorithm for accurate location prediction and also introduce IEEE 802.11 ac mobile client centered location correction scheme without modification of preinstalled infrastructure Access Point. Performance evaluation based on MATLAB simulation highlight the enhanced location prediction performance. We observe a significant error reduction of angle estimation.",
"title": ""
},
{
"docid": "2b010823e217e64e8e56b835cef40a1a",
"text": "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models.\n To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs).\n Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7% in precision, 11.5% in recall, and 14.2% in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9% in F1.",
"title": ""
},
{
"docid": "eaf3d25c7babb067e987b2586129e0e4",
"text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.",
"title": ""
},
{
"docid": "3d9c05889446c52af89e799a8fd48098",
"text": "OBJECTIVE\nCompared to other eating disorders, anorexia nervosa (AN) has the highest rates of completed suicide whereas suicide attempt rates are similar or lower than in bulimia nervosa (BN). Attempted suicide is a key predictor of suicide, thus this mismatch is intriguing. We sought to explore whether the clinical characteristics of suicidal acts differ between suicide attempters with AN, BN or without an eating disorders (ED).\n\n\nMETHOD\nCase-control study in a cohort of suicide attempters (n = 1563). Forty-four patients with AN and 71 with BN were compared with 235 non-ED attempters matched for sex, age and education, using interview measures of suicidal intent and severity.\n\n\nRESULTS\nAN patients were more likely to have made a serious attempt (OR = 3.4, 95% CI 1.4-7.9), with a higher expectation of dying (OR = 3.7,95% CI 1.1-13.5), and an increased risk of severity (OR = 3.4,95% CI 1.2-9.6). BN patients did not differ from the control group. Clinical markers of the severity of ED were associated with the seriousness of the attempt.\n\n\nCONCLUSION\nThere are distinct features of suicide attempts in AN. This may explain the higher suicide rates in AN. Higher completed suicide rates in AN may be partially explained by AN patients' higher desire to die and their more severe and lethal attempts.",
"title": ""
},
{
"docid": "e1d6ae23f76674750573d2b9ac34f5ec",
"text": "In this work, we provide a new formulation for Graph Convolutional Neural Networks (GCNNs) for link prediction on graph data that addresses common challenges for biomedical knowledge graphs (KGs). We introduce a regularized attention mechanism to GCNNs that not only improves performance on clean datasets, but also favorably accommodates noise in KGs, a pervasive issue in realworld applications. Further, we explore new visualization methods for interpretable modelling and to illustrate how the learned representation can be exploited to automate dataset denoising. The results are demonstrated on a synthetic dataset, the common benchmark dataset FB15k-237, and a large biomedical knowledge graph derived from a combination of noisy and clean data sources. Using these improvements, we visualize a learned model’s representation of the disease cystic fibrosis and demonstrate how to interrogate a neural network to show the potential of PPARG as a candidate therapeutic target for rheumatoid arthritis.",
"title": ""
},
{
"docid": "796ce0465e28fa07ebec1e4845073539",
"text": "In multiple attribute decision analysis (MADA), one often needs to deal with both numerical data and qualitative information with uncertainty. It is essential to properly represent and use uncertain information to conduct rational decision analysis. Based on a multilevel evaluation framework, an evidential reasoning (ER) approach has been developed for supporting such decision analysis, the kernel of which is an ER algorithm developed on the basis of the framework and the evidence combination rule of the Dempster–Shafer (D–S) theory. The approach has been applied to engineering design selection, organizational self-assessment, safety and risk assessment, and supplier assessment. In this paper, the fundamental features of the ER approach are investigated. New schemes for weight normalization and basic probability assignments are proposed. The original ER approach is further developed to enhance the process of aggregating attributes with uncertainty. Utility intervals are proposed to describe the impact of ignorance on decision analysis. Several properties of the new ER approach are explored, which lay the theoretical foundation of the ER approach. A numerical example of a motorcycle evaluation problem is examined using the ER approach. Computation steps and analysis results are provided in order to demonstrate its implementation process.",
"title": ""
},
{
"docid": "d04042c81f2c2f7f762025e6b2bd9ab8",
"text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.",
"title": ""
},
{
"docid": "aaa2c8a7367086cd762f52b6a6c30df6",
"text": "Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption for these approaches is that the documents in the collection are all about one topic. However, in reality users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models to represent multiple topics in a collection of documents, and this has been widely utilized in the fields of machine learning and information retrieval, etc. But its effectiveness in information filtering has not been so well explored. Patterns are always thought to be more discriminative than single terms for describing documents. However, the enormous amount of discovered patterns hinder them from being effectively and efficiently used in real applications, therefore, selection of the most discriminative and representative patterns from the huge amount of discovered patterns becomes crucial. To deal with the above mentioned limitations and problems, in this paper, a novel information filtering model, Maximum matched Pattern-based Topic Model (MPBTM), is proposed. The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model by using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.",
"title": ""
},
{
"docid": "5459dc71fd40a576365f0afced64b6b7",
"text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.",
"title": ""
},
{
"docid": "4e8f7fdba06ae7973e3d25cf35399aaf",
"text": "Endometriosis is a benign and common disorder that is characterized by ectopic endometrium outside the uterus. Extrapelvic endometriosis, like of the vulva, is rarely seen. We report a case of a 47-year-old woman referred to our clinic due to complaints of a vulvar mass and periodic swelling of the mass at the time of menstruation. During surgery, the cyst ruptured and a chocolate-colored liquid escaped onto the surgical field. The cyst was extirpated totally. Hipstopathological examination showed findings compatible with endometriosis. She was asked to follow-up after three weeks. The patient had no complaints and the incision field was clear at the follow-up.",
"title": ""
},
{
"docid": "b5dc5268c2eb3b216aa499a639ddfbf9",
"text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter",
"title": ""
},
{
"docid": "23493c14053a4608203f8e77bd899445",
"text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.",
"title": ""
},
{
"docid": "3712b06059d60211843a92a507075f86",
"text": "Automatically filtering relevant information about a real-world incident from Social Web streams and making the information accessible and findable in the given context of the incident are non-trivial scientific challenges. In this paper, we engineer and evaluate solutions that analyze the semantics of Social Web data streams to solve these challenges. We introduce Twitcident, a framework and Web-based system for filtering, searching and analyzing information about real-world incidents or crises. Given an incident, our framework automatically starts tracking and filtering information that is relevant for the incident from Social Web streams and Twitter particularly. It enriches the semantics of streamed messages to profile incidents and to continuously improve and adapt the information filtering to the current temporal context. Faceted search and analytical tools allow people and emergency services to retrieve particular information fragments and overview and analyze the current situation as reported on the Social Web.\n We put our Twitcident system into practice by connecting it to emergency broadcasting services in the Netherlands to allow for the retrieval of relevant information from Twitter streams for any incident that is reported by those services. We conduct large-scale experiments in which we evaluate (i) strategies for filtering relevant information for a given incident and (ii) search strategies for finding particular information pieces. Our results prove that the semantic enrichment offered by our framework leads to major and significant improvements of both the filtering and the search performance. A demonstration is available via: http://wis.ewi.tudelft.nl/twitcident/",
"title": ""
},
{
"docid": "9cb28706a45251e3d2fb5af64dd9351f",
"text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.",
"title": ""
}
] | scidocsrr |
c1d39b7a58c4b82c78924481e351db2b | RFID technology: Beyond cash-based methods in vending machine | [
{
"docid": "31fc886990140919aabce17aa7774910",
"text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.",
"title": ""
},
{
"docid": "8eeeb23e7e1bf5ad3300c9a46f080829",
"text": "The purpose of this paper is to develop a wireless system to detect and maintain the attendance of a student and locate a student. For, this the students ID (identification) card is tagged with an Radio-frequency identification (RFID) passive tag which is matched against the database and only finalized once his fingerprint is verified using the biometric fingerprint scanner. The guardian is intimated by a sms (short message service) sent using the GSM (Global System for Mobile Communications) Modem of the same that is the student has reached the university or not on a daily basis, in the present day every guardian is worried whether his child has reached safely or not. In every classroom, laboratory, libraries, staffrooms etc. a RFID transponder is installed through which we will be detecting the location of the student and staff. There will be a website through which the student, teacher and the guardians can view the status of attendance and location of a student at present in the campus. A person needs to be located can be done by two means that is via the website or by sending the roll number of the student as an sms to the GSM modem which will reply by taking the last location stored of the student in the database.",
"title": ""
},
{
"docid": "5f5d5f4dc35f06eaedb19d1841d7ab3d",
"text": "In this paper we propose an implementation technique for sequential circuit using single electron tunneling technology (SET-s) with the example of designing of a “coffee vending machine” with the goal of getting low power and faster operation. We implement the proposed design based on single electron encoded logic (SEEL).The circuit is tested and compared with the existing CMOS technology.",
"title": ""
}
] | [
{
"docid": "d763198d3bfb1d30b153e13245c90c08",
"text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.",
"title": ""
},
{
"docid": "1d73817f8b1b54a82308106ee526a62b",
"text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.",
"title": ""
},
{
"docid": "2cde7564c83fe2b75135550cb4847af0",
"text": "The twenty-first century global population will be increasingly urban-focusing the sustainability challenge on cities and raising new challenges to address urban resilience capacity. Landscape ecologists are poised to contribute to this challenge in a transdisciplinary mode in which science and research are integrated with planning policies and design applications. Five strategies to build resilience capacity and transdisciplinary collaboration are proposed: biodiversity; urban ecological networks and connectivity; multifunctionality; redundancy and modularization, adaptive design. Key research questions for landscape ecologists, planners and designers are posed to advance the development of knowledge in an adaptive mode.",
"title": ""
},
{
"docid": "238ae411572961116e47b7f6ebce974c",
"text": "Coercing new programmers to adopt disciplined development practices such as thorough unit testing is a challenging endeavor. Test-driven development (TDD) has been proposed as a solution to improve both software design and testing. Test-driven learning (TDL) has been proposed as a pedagogical approach for teaching TDD without imposing significant additional instruction time.\n This research evaluates the effects of students using a test-first (TDD) versus test-last approach in early programming courses, and considers the use of TDL on a limited basis in CS1 and CS2. Software testing, programmer productivity, programmer performance, and programmer opinions are compared between test-first and test-last programming groups. Results from this research indicate that a test-first approach can increase student testing and programmer performance, but that early programmers are very reluctant to adopt a test-first approach, even after having positive experiences using TDD. Further, this research demonstrates that TDL can be applied in CS1/2, but suggests that a more pervasive implementation of TDL may be necessary to motivate and establish disciplined testing practice among early programmers.",
"title": ""
},
{
"docid": "b8b1c342a2978f74acd38bed493a77a5",
"text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.",
"title": ""
},
{
"docid": "e86247471d4911cb84aa79911547045b",
"text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.",
"title": ""
},
{
"docid": "06b0708250515510b8a3fc302045fe4b",
"text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.",
"title": ""
},
{
"docid": "86a49d9f751cab880b7a4f8f516e9183",
"text": "1. THE GALLERY This paper describes the design and implementation of a virtual art gallery on the Web. The main objective of the virtual art gallery is to provide a navigational environment for the Networked Digital Library of Theses and Dissertations (NDLTD see http://www.ndltd.org and Fox et al., 1996), where users can browse and search electronic theses and dissertations (ETDs) indexed by the pictures--images and figures--that the ETDs contain. The galleries in the system and the ETD collection are simultaneously organized using a college/department taxonomic hierarchy. The system architecture has the general characteristics of the client-server model. Users access the system by using a Web browser to follow a link to one of the named art galleries. This link has the form of a CGI query which is passed by the Web server to a CGI program that handles it as a data request by accessing the database server. The query results are used to generate a view of the art gallery (as a VRML or HTML file) and are sent back to the user’s browser. We employ several technologies for improving the system’s efficiency and flexibility. We define a Simple Template Language (STL) to specify the format of VRML or HTML templates. Documents are generated by merging pre-designed VRML or HTML files and the data coming from the Database. Therefore, the generated virtual galleries always contain the most recent database information. We also define a Data Service Protocol (DSP), which normalizes the request from the user to communicate it with the Data Server and ultimately the image database.",
"title": ""
},
{
"docid": "06b0ad9465d3e83723db53cdca19167a",
"text": "SUMMARY\nSOAP2 is a significantly improved version of the short oligonucleotide alignment program that both reduces computer memory usage and increases alignment speed at an unprecedented rate. We used a Burrows Wheeler Transformation (BWT) compression index to substitute the seed strategy for indexing the reference sequence in the main memory. We tested it on the whole human genome and found that this new algorithm reduced memory usage from 14.7 to 5.4 GB and improved alignment speed by 20-30 times. SOAP2 is compatible with both single- and paired-end reads. Additionally, this tool now supports multiple text and compressed file formats. A consensus builder has also been developed for consensus assembly and SNP detection from alignment of short reads on a reference genome.\n\n\nAVAILABILITY\nhttp://soap.genomics.org.cn.",
"title": ""
},
{
"docid": "aa58cb2b2621da6260aeb203af1bd6f1",
"text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.",
"title": ""
},
{
"docid": "c002eab17c87343b5d138b34e3be73f3",
"text": "Finding semantically rich and computer-understandable representations for textual dialogues, utterances and words is crucial for dialogue systems (or conversational agents), as their performance mostly depends on understanding the context of conversations. In recent research approaches, responses have been generated utilizing a decoder architecture, given the distributed vector representation (embedding) of the current conversation. In this paper, the utilization of embeddings for answer retrieval is explored by using Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor (ANN) model, to find similar conversations in a corpus and rank possible candidates. Experimental results on the well-known Ubuntu Corpus (in English) and a customer service chat dataset (in Dutch) show that, in combination with a candidate selection method, retrieval-based approaches outperform generative ones and reveal promising future research directions towards the usability of such a system.",
"title": ""
},
{
"docid": "e640e1831db44482e04005657f1a0f43",
"text": "Increasing emphasis has been placed on the use of effect size reporting in the analysis of social science data. Nonetheless, the use of effect size reporting remains inconsistent, and interpretation of effect size estimates continues to be confused. Researchers are presented with numerous effect sizes estimate options, not all of which are appropriate for every research question. Clinicians also may have little guidance in the interpretation of effect sizes relevant for clinical practice. The current article provides a primer of effect size estimates for the social sciences. Common effect sizes estimates, their use, and interpretations are presented as a guide for researchers.",
"title": ""
},
{
"docid": "2547be3cb052064fd1f995c45191e84d",
"text": "With the new generation of full low floor passenger trains, the constraints of weight and size on the traction transformer are becoming stronger. The ultimate target weight for the transformer is 1 kg/kVA. The reliability and the efficiency are also becoming more important. To address these issues, a multilevel topology using medium frequency transformers has been developed. It permits to reduce the weight and the size of the system and improves the global life cycle cost of the vehicle. The proposed multilevel converter consists of sixteen bidirectional direct current converters (cycloconverters) connected in series to the catenary 15 kV, 16.7 Hz through a choke inductor. The cycloconverters are connected to sixteen medium frequency transformers (400 Hz) that are fed by sixteen four-quadrant converters connected in parallel to a 1.8 kV DC link with a 2f filter. The control, the command and the hardware of the prototype are described in detail.",
"title": ""
},
{
"docid": "0bba0afb68f80afad03d0ba3d1ce9c89",
"text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.",
"title": ""
},
{
"docid": "d3afe3be6debe665f442367b17fa4e28",
"text": "It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called REDRAW. Our evaluation illustrates that REDRAW achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "aae5e70b47a7ec720333984e80725034",
"text": "The authors realize a 50% length reduction of short-slot couplers in a post-wall dielectric substrate by two techniques. One is to introduce hollow rectangular holes near the side walls of the coupled region. The difference of phase constant between the TE10 and TE20 propagating modes increases and the required length to realize a desired dividing ratio is reduced. Another is to remove two reflection-suppressing posts in the coupled region. The length of the coupled region is determined to cancel the reflections at both ends of the coupled region. The total length of a 4-way Butler matrix can be reduced to 48% in comparison with the conventional one and the couplers still maintain good dividing characteristics; the dividing ratio of the hybrid is less than 0.1 dB and the isolations of the couplers are more than 20 dB. key words: short-slot coupler, length reduction, Butler matrix, post-wall waveguide, dielectric substrate, rectangular hole",
"title": ""
},
{
"docid": "4ede3f2caa829e60e4f87a9b516e28bd",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "17ed907c630ec22cbbb5c19b5971238d",
"text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.",
"title": ""
},
{
"docid": "eb2663865d0d7312641e0748978b238c",
"text": "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance.",
"title": ""
}
] | scidocsrr |
91b725d0bb28af9d1e260ebf50e01592 | Deep Learning for Lip Reading using Audio-Visual Information for Urdu Language | [
{
"docid": "aa70b48d3831cf56b1432ebba6960797",
"text": "Lipreading is the task of decoding text from the movement of a speaker’s mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). All existing works, however, perform only word classification, not sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, an LSTM recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first lipreading model to operate at sentence-level, using a single end-to-end speaker-independent deep model to simultaneously learn spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous 79.6% state-of-the-art accuracy.",
"title": ""
}
] | [
{
"docid": "bf559d41d2f0fb72ccb2331418573acc",
"text": "Massive Open Online Courses or MOOCs, both in their major approaches xMOOC and cMOOC, are attracting a very interesting debate about their influence in the higher education future. MOOC have both great defenders and detractors. As an emerging trend, MOOCs have a hype repercussion in technological and pedagogical areas, but they should demonstrate their real value in specific implementation and within institutional strategies. Independently, MOOCs have different issues such as high dropout rates and low number of cooperative activities among participants. This paper presents an adaptive proposal to be applied in MOOC definition and development, with a special attention to cMOOC, that may be useful to tackle the mentioned MOOC problems.",
"title": ""
},
{
"docid": "5853f6f7405a73fe6587f2fd0b4d9419",
"text": "Substantial evidence suggests that mind-wandering typically occurs at a significant cost to performance. Mind-wandering–related deficits in performance have been observed in many contexts, most notably reading, tests of sustained attention, and tests of aptitude. Mind-wandering has been shown to negatively impact reading comprehension and model building, impair the ability to withhold automatized responses, and disrupt performance on tests of working memory and intelligence. These empirically identified costs of mind-wandering have led to the suggestion that mind-wandering may represent a pure failure of cognitive control and thus pose little benefit. However, emerging evidence suggests that the role of mind-wandering is not entirely pernicious. Recent studies have shown that mind-wandering may play a crucial role in both autobiographical planning and creative problem solving, thus providing at least two possible adaptive functions of the phenomenon. This article reviews these observed costs and possible functions of mind-wandering and identifies important avenues of future inquiry.",
"title": ""
},
{
"docid": "04500f0dbf48d3c1d8eb02ed43d46e00",
"text": "The coverage of a test suite is often used as a proxy for its ability to detect faults. However, previous studies that investigated the correlation between code coverage and test suite effectiveness have failed to reach a consensus about the nature and strength of the relationship between these test suite characteristics. Moreover, many of the studies were done with small or synthetic programs, making it unclear whether their results generalize to larger programs, and some of the studies did not account for the confounding influence of test suite size. In addition, most of the studies were done with adequate suites, which are are rare in practice, so the results may not generalize to typical test suites. \n We have extended these studies by evaluating the relationship between test suite size, coverage, and effectiveness for large Java programs. Our study is the largest to date in the literature: we generated 31,000 test suites for five systems consisting of up to 724,000 lines of source code. We measured the statement coverage, decision coverage, and modified condition coverage of these suites and used mutation testing to evaluate their fault detection effectiveness. \n We found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for. In addition, we found that stronger forms of coverage do not provide greater insight into the effectiveness of the suite. Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.",
"title": ""
},
{
"docid": "ba87d116445aa7a2ced1b9d8333c0b7a",
"text": "Study Design Systematic review with meta-analysis. Background The addition of hip strengthening to knee strengthening for persons with patellofemoral pain has the potential to optimize treatment effects. There is a need to systematically review and pool the current evidence in this area. Objective To examine the efficacy of hip strengthening, associated or not with knee strengthening, to increase strength, reduce pain, and improve activity in individuals with patellofemoral pain. Methods A systematic review of randomized and/or controlled trials was performed. Participants in the reviewed studies were individuals with patellofemoral pain, and the experimental intervention was hip and knee strengthening. Outcome data related to muscle strength, pain, and activity were extracted from the eligible trials and combined in a meta-analysis. Results The review included 14 trials involving 673 participants. Random-effects meta-analyses revealed that hip and knee strengthening decreased pain (mean difference, -3.3; 95% confidence interval [CI]: -5.6, -1.1) and improved activity (standardized mean difference, 1.4; 95% CI: 0.03, 2.8) compared to no training/placebo. In addition, hip and knee strengthening was superior to knee strengthening alone for decreasing pain (mean difference, -1.5; 95% CI: -2.3, -0.8) and improving activity (standardized mean difference, 0.7; 95% CI: 0.2, 1.3). Results were maintained beyond the intervention period. Meta-analyses showed no significant changes in strength for any of the interventions. Conclusion Hip and knee strengthening is effective and superior to knee strengthening alone for decreasing pain and improving activity in persons with patellofemoral pain; however, these outcomes were achieved without a concurrent change in strength. Level of Evidence Therapy, level 1a-. J Orthop Sports Phys Ther 2018;48(1):19-31. Epub 15 Oct 2017. doi:10.2519/jospt.2018.7365.",
"title": ""
},
{
"docid": "242977c8b2a5768b18fc276309407d60",
"text": "We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Björkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features.",
"title": ""
},
{
"docid": "899243c1b24010cc5423e2ac9a7f6df2",
"text": "The introduction of wearable video cameras (e.g., GoPro) in the consumer market has promoted video life-logging, motivating users to generate large amounts of video data. This increasing flow of first-person video has led to a growing need for automatic video summarization adapted to the characteristics and applications of egocentric video. With this paper, we provide the first comprehensive survey of the techniques used specifically to summarize egocentric videos. We present a framework for first-person view summarization and compare the segmentation methods and selection algorithms used by the related work in the literature. Next, we describe the existing egocentric video datasets suitable for summarization and, then, the various evaluation methods. Finally, we analyze the challenges and opportunities in the field and propose new lines of research.",
"title": ""
},
{
"docid": "95403ce714b2102ca5f50cfc4d838e07",
"text": "Recently, vision-based Advanced Driver Assist Systems have gained broad interest. In this work, we investigate free-space detection, for which we propose to employ a Fully Convolutional Network (FCN). We show that this FCN can be trained in a self-supervised manner and achieve similar results compared to training on manually annotated data, thereby reducing the need for large manually annotated training sets. To this end, our self-supervised training relies on a stereo-vision disparity system, to automatically generate (weak) training labels for the color-based FCN. Additionally, our self-supervised training facilitates online training of the FCN instead of offline. Consequently, given that the applied FCN is relatively small, the free-space analysis becomes highly adaptive to any traffic scene that the vehicle encounters. We have validated our algorithm using publicly available data and on a new challenging benchmark dataset that is released with this paper. Experiments show that the online training boosts performance with 5% when compared to offline training, both for Fmax and AP .",
"title": ""
},
{
"docid": "d06393c467e19b0827eea5f86bbf4e98",
"text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.",
"title": ""
},
{
"docid": "be8c7050c87ad0344f8ddd10c7832bb4",
"text": "A novel method of map matching using the Global Positioning System (GPS) has been developed for civilian use, which uses digital mapping data to infer the <100 metres systematic position errors which result largely from “selective availability” (S/A) imposed by the U.S. military. The system tracks a vehicle on all possible roads (road centre-lines) in a computed error region, then uses a method of rapidly detecting inappropriate road centre-lines from the set of all those possible. This is called the Road Reduction Filter (RRF) algorithm. Point positioning is computed using C/A code pseudorange measurements direct from a GPS receiver. The least squares estimation is performed in the software developed for the experiment described in this paper. Virtual differential GPS (VDGPS) corrections are computed and used from a vehicle’s previous positions, thus providing an autonomous alternative to DGPS for in-car navigation and fleet management. Height aiding is used to augment the solution and reduce the number of satellites required for a position solution. Ordnance Survey (OS) digital map data was used for the experiment, i.e. OSCAR 1m resolution road centre-line geometry and Land Form PANORAMA 1:50,000, 50m-grid digital terrain model (DTM). Testing of the algorithm is reported and results are analysed. Vehicle positions provided by RRF are compared with the “true” position determined using high precision (cm) GPS carrier phase techniques. It is shown that height aiding using a DTM and the RRF significantly improve the accuracy of position provided by inexpensive single frequency GPS receivers. INTRODUCTION The accurate location of a vehicle on a highway network model is fundamental to any in-car-navigation system, personal navigation assistant, fleet management system, National Mayday System (Carstensen, 1998) and many other applications that provide a current vehicle location, a digital map and perhaps directions or route guidance. A great",
"title": ""
},
{
"docid": "c66c1523322809d1b2d1279b5b2b8384",
"text": "The design of the Smart Grid requires solving a complex problem of combined sensing, communications and control and, thus, the problem of choosing a networking technology cannot be addressed without also taking into consideration requirements related to sensor networking and distributed control. These requirements are today still somewhat undefined so that it is not possible yet to give quantitative guidelines on how to choose one communication technology over the other. In this paper, we make a first qualitative attempt to better understand the role that Power Line Communications (PLCs) can have in the Smart Grid. Furthermore, we here report recent results on the electrical and topological properties of the power distribution network. The topological characterization of the power grid is not only important because it allows us to model the grid as an information source, but also because the grid becomes the actual physical information delivery infrastructure when PLCs are used.",
"title": ""
},
{
"docid": "1cbd70bddd09be198f6695209786438d",
"text": "In this research work a neural network based technique to be applied on condition monitoring and diagnosis of rotating machines equipped with hydrostatic self levitating bearing system is presented. Based on fluid measured data, such pressures and temperature, vibration analysis based diagnosis is being carried out by determining the vibration characteristics of the rotating machines on the basis of signal processing tasks. Required signals are achieved by conversion of measured data (fluid temperature and pressures) into virtual data (vibration magnitudes) by means of neural network functional approximation techniques.",
"title": ""
},
{
"docid": "02842ef0acfed3d59612ce944b948adf",
"text": "One of the common modalities for observing mental activity is electroencephalogram (EEG) signals. However, EEG recording is highly susceptible to various sources of noise and to inter subject differences. In order to solve these problems we present a deep recurrent neural network (RNN) architecture to learn robust features and predict the levels of cognitive load from EEG recordings. Using a deep learning approach, we first transform the EEG time series into a sequence of multispectral images which carries spatial information. Next, we train our recurrent hybrid network to learn robust representations from the sequence of frames. The proposed approach preserves spectral, spatial and temporal structures and extracts features which are less sensitive to variations along each dimension. Our results demonstrate cognitive memory load prediction across four different levels with an overall accuracy of 92.5% during the memory task execution and reduce classification error to 7.61% in comparison to other state-of-art techniques.",
"title": ""
},
{
"docid": "93fad9723826fcf99ae229b4e7298a31",
"text": "In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.",
"title": ""
},
{
"docid": "af7379b6c3ecf9abaf3da04d06f06d00",
"text": "AIM\nThe purpose of the current study was to examine the effect of Astaxanthin (Asx) supplementation on muscle enzymes as indirect markers of muscle damage, oxidative stress markers and antioxidant response in elite young soccer players.\n\n\nMETHODS\nThirty-two male elite soccer players were randomly assigned in a double-blind fashion to Asx and placebo (P) group. After the 90 days of supplementation, the athletes performed a 2 hour acute exercise bout. Blood samples were obtained before and after 90 days of supplementation and after the exercise at the end of observational period for analysis of thiobarbituric acid-reacting substances (TBARS), advanced oxidation protein products (AOPP), superoxide anion (O2•¯), total antioxidative status (TAS), sulphydril groups (SH), superoxide-dismutase (SOD), serum creatine kinase (CK) and aspartate aminotransferase (AST).\n\n\nRESULTS\nTBARS and AOPP levels did not change throughout the study. Regular training significantly increased O2•¯ levels (main training effect, P<0.01). O2•¯ concentrations increased after the soccer exercise (main exercise effect, P<0.01), but these changes reached statistical significance only in the P group (exercise x supplementation effect, P<0.05). TAS levels decreased significantly post- exercise only in P group (P<0.01). Both Asx and P groups experienced increase in total SH groups content (by 21% and 9%, respectively) and supplementation effect was marginally significant (P=0.08). Basal SOD activity significantly decreased both in P and in Asx group by the end of the study (main training effect, P<0.01). All participants showed a significant decrease in basal CK and AST activities after 90 days (main training effect, P<0.01 and P<0.001, respectively). CK and AST activities in serum significantly increased as result of soccer exercise (main exercise effect, P<0.001 and P<0.01, respectively). Postexercise CK and AST levels were significantly lower in Asx group compared to P group (P<0.05)\n\n\nCONCLUSION\nThe results of the present study suggest that soccer training and soccer exercise are associated with excessive production of free radicals and oxidative stress, which might diminish antioxidant system efficiency. Supplementation with Asx could prevent exercise induced free radical production and depletion of non-enzymatic antioxidant defense in young soccer players.",
"title": ""
},
{
"docid": "681219aa3cb0fb579e16c734d2e357f7",
"text": "Brain computer interface (BCI) is an assistive technology, which decodes neurophysiological signals generated by the human brain and translates them into control signals to control external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited control dimensions from decoding movements of, mainly, large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded in electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also have sufficient information to decode the same type of movements. Phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG in the present study, when EEG power spectra were decomposed by a principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in previous ECoG study, as well as the results from ECoG data in the present study. The average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand using movement-related spectral changes as features to be decoded using a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with the similarly obtained features and same classifier. Both decoding accuracies of EEG and ECoG are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests the similar movement-related spectral changes in EEG as in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising to facilitate the development of BCIs with rich control signals using noninvasive technologies.",
"title": ""
},
{
"docid": "934ee0b55bf90eed86fabfff8f1238d1",
"text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.",
"title": ""
},
{
"docid": "a73bc91aa52aa15dbe30fbabac5d0ecc",
"text": "In this paper we address the problem of multivariate outlier detection using the (unsupervised) self-organizing map (SOM) algorithm introduced by Kohonen. We examine a number of techniques, based on summary statistics and graphics derived from the trained SOM, and conclude that they work well in cooperation with each other. Useful tools include the median interneuron distance matrix and the projection of the trained map (via Sammon’s mapping). SOM quantization errors provide an important complementary source of information for certain type of outlying behavior. Empirical results are reported on both artificial and real data.",
"title": ""
},
{
"docid": "127b8dfb562792d02a4c09091e09da90",
"text": "Current approaches to conservation and natural-resource management often focus on single objectives, resulting in many unintended consequences. These outcomes often affect society through unaccounted-for ecosystem services. A major challenge in moving to a more ecosystem-based approach to management that would avoid such societal damages is the creation of practical tools that bring a scientifically sound, production function-based approach to natural-resource decision making. A new set of computer-based models is presented, the Integrated Valuation of Ecosystem Services and Tradeoffs tool (InVEST) that has been designed to inform such decisions. Several of the key features of these models are discussed, including the ability to visualize relationships among multiple ecosystem services and biodiversity, the ability to focus on ecosystem services rather than biophysical processes, the ability to project service levels and values in space, sensitivity to manager-designed scenarios, and flexibility to deal with data and knowledge limitations. Sample outputs of InVEST are shown for two case applications; the Willamette Basin in Oregon and the Amazon Basin. Future challenges relating to the incorporation of social data, the projection of social distributional effects, and the design of effective policy mechanisms are discussed.",
"title": ""
},
{
"docid": "bc3e2c94cd53f472f36229e2d9c5d69e",
"text": "The problem of planning safe tra-jectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem proposed. The key features of the solution are: 1. the identification of trajectory primitives and a hierarchy of abstraction spaces that permit simple manip-ulator models, 2. the characterization of empty space by approximating it with easily describable entities called charts-the approximation is dynamic and can be selective, 3. a scheme for planning motions close to obstacles that is computationally viable, and that suggests how proximity sensors might be used to do the planning, and 4. the use of hierarchical decomposition to reduce the complexity of the planning problem. 1. INTRODUCTION the 2D and 3D solution noted, it is easy to visualize the solution for the 3D manipulator. Section 2 of this paper presents an example, and Section 3 a statement and analysis of the problem. Sections 4 and 5 present the solution. Section 6 summarizes the key ideas in the solution and indicates areas for future work. 2. AN EXAMPLE This section describes an example (Figure 2.1) of the collision detection and avoidance problem for a two-dimensional manipulator. The example highlights features of the problem and its solution. 2.1 The Problem The manipulator has two links and three degrees of freedom. The larger link, called the boom, slides back and forth and can rotate about the origin. The smaller link, called the forearm, has a rotational degree of freedom about the tip of the boom. The tip of the forearm is called the hand. S and G are the initial and final configurations of the manipulator. Any real manipulator's links will have physical dimensions. The line segment representation of the link is an abstraction; the physical dimensions can be accounted for and how this is done is described later. The problem of planning safe trajectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem is presented. The trajectory planning system is initialized with a description of the part of the environment that the manipulator is to maneuver in. When given the goal position and orientation of the hand, the system plans a complete trajectory that will safely maneuver the manipulator into the goal configuration. The executive system in charge of operating the hardware uses this trajectory to physically move the manipulator. …",
"title": ""
}
] | scidocsrr |
56e878ee737128ab55269f3daed7b964 | Sentiment Strength Prediction Using Auxiliary Features | [
{
"docid": "57666e9d9b7e69c38d7530633d556589",
"text": "In this paper, we investigate the utility of linguistic features for detecting the sentiment of Twitter messages. We evaluate the usefulness of existing lexical resources as well as features that capture information about the informal and creative language used in microblogging. We take a supervised approach to the problem, but leverage existing hashtags in the Twitter data for building training data.",
"title": ""
},
{
"docid": "f0af945042c44b20d6bd9f81a0b21b6b",
"text": "We investigate a technique to adapt unsupervised word embeddings to specific applications, when only small and noisy labeled datasets are available. Current methods use pre-trained embeddings to initialize model parameters, and then use the labeled data to tailor them for the intended task. However, this approach is prone to overfitting when the training is performed with scarce and noisy data. To overcome this issue, we use the supervised data to find an embedding subspace that fits the task complexity. All the word representations are adapted through a projection into this task-specific subspace, even if they do not occur on the labeled dataset. This approach was recently used in the SemEval 2015 Twitter sentiment analysis challenge, attaining state-of-the-art results. Here we show results improving those of the challenge, as well as additional experiments in a Twitter Part-Of-Speech tagging task.",
"title": ""
},
{
"docid": "a3866467e9a5a1ee2e35b9f2e477a3e3",
"text": "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.",
"title": ""
}
] | [
{
"docid": "efc3b3592c46c784ebf5a6e2d9b911df",
"text": "A discrete-time analysis of the orthogonal frequency division multiplex/offset QAM (OFDM/OQAM) multicarrier modulation technique, leading to a modulated transmultiplexer, is presented. The conditions of discrete orthogonality are established with respect to the polyphase components of the OFDM/OQAM prototype filter, which is assumed to be symmetrical and with arbitrary length. Fast implementation schemes of the OFDM/OQAM modulator and demodulator are provided, which are based on the inverse fast Fourier transform. Non-orthogonal prototypes create intersymbol and interchannel interferences (ISI and ICI) that, in the case of a distortion-free transmission, are expressed by a closed-form expression. A large set of design examples is presented for OFDM/OQAM systems with a number of subcarriers going from four up to 2048, which also allows a comparison between different approaches to get well-localized prototypes.",
"title": ""
},
{
"docid": "ad5943b20597be07646cca1af9d23660",
"text": "Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) [3] can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely welldefined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired [1]. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.",
"title": ""
},
{
"docid": "e912abc2da4eb1158c6a6c84245d13f8",
"text": "Social media hype has created a lot of speculation among educators on how these media can be used to support learning, but there have been rather few studies so far. Our explorative interview study contributes by critically exploring how campus students perceive using social media to support their studies and the perceived benefits and limitations compared with other means. Although the vast majority of the respondents use social media frequently, a “digital dissonance” can be noted, because few of them feel that they use such media to support their studies. The interviewees mainly put forth e-mail and instant messaging, which are used among students to ask questions, coordinate group work and share files. Some of them mention using Wikipedia and YouTube for retrieving content and Facebook to initiate contact with course peers. Students regard social media as one of three key means of the educational experience, alongside face-to-face meetings and using the learning management systems, and are mainly used for brief questions and answers, and to coordinate group work. In conclusion, we argue that teaching strategy plays a key role in supporting students in moving from using social media to support coordination and information retrieval to also using such media for collaborative learning, when appropriate.",
"title": ""
},
{
"docid": "85f67ab0e1adad72bbe6417d67fd4c81",
"text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.",
"title": ""
},
{
"docid": "39b4978950b5718ceaf4458cf5b38424",
"text": "Multipath TCP (MPTCP) suffers from the degradation of goodput in the presence of diverse network conditions on the available subflows. The goodput can even be worse than that of one regular TCP, undermining the advantage gained by using multipath transfer. In this work, we propose a new multipath TCP protocol, namely NC-MPTCP, which introduces network coding (NC) to some but not all subflows traveling from source to destination. At the core of our scheme is the mixed use of regular and NC subflows. Thus, the regular subflows deliver original data while the NC subflows deliver linear combinations of the original data. The idea is to take advantage of the redundant NC data to compensate for the lost or delayed data in order to avoid receive buffer becoming full. We design a packet scheduling algorithm and a redundancy estimation algorithm to allocate data among different subflows in order to optimize the overall goodput. We also give a guideline on how to choose the NC subflows among the available subflows. We evaluate the performance of NC-MPTCP through a NS-3 network simulator. The experiments show that NC-MPTCP achieves higher goodput compared to MPTCP in the presence of different subflow qualities. And in the worst case, the performance of NC-MPTCP is close to that of one regular TCP.",
"title": ""
},
{
"docid": "adc436481c2e7214cdaf8b9a4121d4c1",
"text": "Distributed word embeddings have shown superior performances in numerous Natural Language Processing (NLP) tasks. However, their performances vary significantly across different tasks, implying that the word embeddings learnt by those methods capture complementary aspects of lexical semantics. Therefore, we believe that it is important to combine the existing word embeddings to produce more accurate and complete meta-embeddings of words. We model the meta-embedding learning problem as an autoencoding problem, where we would like to learn a meta-embedding space that can accurately reconstruct all source embeddings simultaneously. Thereby, the meta-embedding space is enforced to capture complementary information in different source embeddings via a coherent common embedding space. We propose three flavours of autoencoded meta-embeddings motivated by different requirements that must be satisfied by a meta-embedding. Our experimental results on a series of benchmark evaluations show that the proposed autoencoded meta-embeddings outperform the existing state-of-the-art metaembeddings in multiple tasks.",
"title": ""
},
{
"docid": "6a7640e686db220a3981e20718914e10",
"text": "Two issues are increasingly of interests in the scientific literature regarding virtual reality induced unwanted side effects: (a) the latent structure of the Simulator Sickness Questionnaire (SSQ), and (b) the overlap between anxiety symptoms and unwanted side effects. Study 1 was conducted with a sample of 517 participants. A confirmatory factor analysis clearly supported a two-factor model composed of nausea and oculomotor symptoms. To tease-out symptoms of anxiety from negative side effects of immersion, Study 2 was conducted with 47 participants who were stressed without being immersed in virtual reality. Five of the 16 side effects correlated significantly with anxiety. In a third study conducted with 72 participants, the post-immersion scores of the SSQ and the State Anxiety scale were subjected to a factor analysis to detect items side effects that would load on the anxiety factor. The results converged with those of Study 2 and revealed that general discomfort and difficulty concentrating loaded significantly on the anxiety factor. The overall results support the notion that side effects consists mostly of a nausea and an oculomotor latent structure and that (only) a few items may reflect anxiety more than unwanted negative side effects.",
"title": ""
},
{
"docid": "219ab7f299b70e0c6c286b89dfd145b4",
"text": "Malicious payload injection attacks have been a serious threat to software for decades. Unfortunately, protection against these attacks remains challenging due to the ever increasing diversity and sophistication of payload injection and triggering mechanisms used by adversaries. In this paper, we develop A2C, a system that provides general protection against payload injection attacks. A2C is based on the observation that payloads are highly fragile and thus any mutation would likely break their functionalities. Therefore, A2C mutates inputs from untrusted sources. Malicious payloads that reside in these inputs are hence mutated and broken. To assure that the program continues to function correctly when benign inputs are provided, A2C divides the state space into exploitable and post-exploitable sub-spaces, where the latter is much larger than the former, and decodes the mutated values only when they are transmitted from the former to the latter. A2C does not rely on any knowledge of malicious payloads or their injection and triggering mechanisms. Hence, its protection is general. We evaluate A2C with 30 realworld applications, including apache on a real-world work-load, and our results show that A2C effectively prevents a variety of payload injection attacks on these programs with reasonably low overhead (6.94%).",
"title": ""
},
{
"docid": "4003b1a03be323c78e98650895967a07",
"text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.",
"title": ""
},
{
"docid": "bc25cdd67a196b0d4947069d6d3864b3",
"text": "Data integration is a key issue for any integrated set of software tools where each tool has its own data structures (at least on the conceptual level), but where we have many interdependencies between these private data structures. A typical CASE environment, for instance, offers tools for the manipulation of requirements and software design documents and provides more or less sophisticated assistance for keeping these documents in a consistent state. Up to now almost all of these data consistency observing or preserving integration tools are hand-crafted due to the lack of generic implementation frameworks and the absence of adequate specification formalisms. Triple graph grammars, a proper superset of pair grammars, are intended to fill this gap and to support the specification of interdependencies between graph-like data structures on a very high level. Furthermore, they form a solid fundament of a new machinery for the production of batch-oriented as well as incrementally working data integration tools.",
"title": ""
},
{
"docid": "902ca8c9a7cd8384143654ee302eca82",
"text": "The Paper presents the outlines of the Field Programmable Gate Array (FPGA) implementation of Real Time speech enhancement by Spectral Subtraction of acoustic noise using Dynamic Moving Average Method. It describes an stand alone algorithm for Speech Enhancement and presents a architecture for the implementation. The traditional Spectral Subtraction method can only suppress stationary acoustic noise from speech by subtracting the spectral noise bias calculated during non-speech activity, while adding the unique option of dynamic moving averaging to it, it can now periodically upgrade the estimation and cope up with changes in noise level. Signal to Noise Ratio (SNR) has been tested at different noisy environment and the improvement in SNR certifies the effectiveness of the algorithm. The FPGA implementation presented in this paper, works on streaming speech signals and can be used in factories, bus terminals, Cellular Phones, or in outdoor conferences where a large number of people have gathered. The Table in the Experimental Result section consolidates our claim of optimum resouce usage.",
"title": ""
},
{
"docid": "fabbfba897bd88d132eba35460093ef7",
"text": "Although mind wandering occupies a large proportion of our waking life, its neural basis and relation to ongoing behavior remain controversial. We report an fMRI study that used experience sampling to provide an online measure of mind wandering during a concurrent task. Analyses focused on the interval of time immediately preceding experience sampling probes demonstrate activation of default network regions during mind wandering, a finding consistent with theoretical accounts of default network functions. Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions--two brain systems that so far have been assumed to work in opposition--suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation. The ability of this study to reveal a number of crucial aspects of the neural recruitment associated with mind wandering underscores the value of combining subjective self-reports with online measures of brain function for advancing our understanding of the neurophenomenology of subjective experience.",
"title": ""
},
{
"docid": "a28a29ec67cf50193b9beb579a83bdf4",
"text": "DNA microarray is an efficient new technology that allows to analyze, at the same time, the expression level of millions of genes. The gene expression level indicates the synthesis of different messenger ribonucleic acid (mRNA) molecule in a cell. Using this gene expression level, it is possible to diagnose diseases, identify tumors, select the best treatment to resist illness, detect mutations among other processes. In order to achieve that purpose, several computational techniques such as pattern classification approaches can be applied. The classification problem consists in identifying different classes or groups associated with a particular disease (e.g., various types of cancer, in terms of the gene expression level). However, the enormous quantity of genes and the few samples available, make difficult the processes of learning and recognition of any classification technique. Artificial neural networks (ANN) are computational models in artificial intelligence used for classifying, predicting and approximating functions. Among the most popular ones, we could mention the multilayer perceptron (MLP), the radial basis function neural netrtificial Bee Colony algorithm work (RBF) and support vector machine (SVM). The aim of this research is to propose a methodology for classifying DNA microarray. The proposed method performs a feature selection process based on a swarm intelligence algorithm to find a subset of genes that best describe a disease. After that, different ANN are trained using the subset of genes. Finally, four different datasets were used to validate the accuracy of the proposal and test the relevance of genes to correctly classify the samples of the disease. © 2015 Published by Elsevier B.V. 37 38 39 40 41 42 43 44 45 46 47 . Introduction DNA microarray is an essential technique in molecular biolgy that allows, at the same time, to know the expression level f millions of genes. The DNA microarray consists in immobilizing known deoxyribonucleic acid (DNA) molecule layout in a glass ontainer and then this information with other genetic informaion are hybridized. This process is the base to identify, classify or redict diseases such as different kind of cancer [1–4]. The process to obtain a DNA microarray is based on the comination of a healthy DNA reference with a testing DNA. Using Please cite this article in press as: B.A. Garro, et al., Classification of DNA Appl. Soft Comput. J. (2015), http://dx.doi.org/10.1016/j.asoc.2015.10. uorophores and a laser it is possible to generate a color spot matrix nd obtain quantitative values that represent the expression level f each gene [5]. This expression level is like a signature useful to ∗ Corresponding author. Tel.: +52 5556223899. E-mail addresses: [email protected] (B.A. Garro), [email protected] (K. Rodríguez), [email protected] R.A. Vázquez). ttp://dx.doi.org/10.1016/j.asoc.2015.10.002 568-4946/© 2015 Published by Elsevier B.V. 48 49 50 51 52 diagnose different diseases. Furthermore, it can be used to identify genes that modify their genetic expression when a medical treatment is applied, identify tumors and genes that make regulation genetic networks, detect mutations among other applications [6]. Computational techniques combined with DNA microarrays can generate efficient results. The classification of DNA microarrays can be divided into three stages: gene finding, class discovery, and class prediction [7,8]. 
The DNA microarray samples have millions of genes and selecting the best genes set in such a way that get a trustworthy classification is a difficult task. Nonetheless, the evolutionary and bio-inspired algorithms, such as genetic algorithm (GA) [9], particle swarm optimization (PSO) [10], bacterial foraging algorithm (BFA) [11] and fish school search (FSS) [12], are excellent options to solve this problem. However, the performance of these algorithms depends of the fitness function, the parameters of the algorithm, the search space complexity, convergence, etc. In microarrays using artificial neural networks and ABC algorithm, 002 general, the performance of these algorithms is very similar among them, but depends of adjusting carefully their parameters. Based on that, the criterion that we used to select the algorithm for finding the set of most relevant genes was in term of the number of 53 54 55 56",
"title": ""
},
{
"docid": "a1dba8928f1a3b919b44dbd2ca8c3fb8",
"text": "With the increasing adoption of cloud computing, a growing number of users outsource their datasets to cloud. To preserve privacy, the datasets are usually encrypted before outsourcing. However, the common practice of encryption makes the effective utilization of the data difficult. For example, it is difficult to search the given keywords in encrypted datasets. Many schemes are proposed to make encrypted data searchable based on keywords. However, keyword-based search schemes ignore the semantic representation information of users’ retrieval, and cannot completely meet with users search intention. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, we propose ECSED, a novel semantic search scheme based on the concept hierarchy and the semantic relationship between concepts in the encrypted datasets. ECSED uses two cloud servers. One is used to store the outsourced datasets and return the ranked results to data users. The other one is used to compute the similarity scores between the documents and the query and send the scores to the first server. To further improve the search efficiency, we utilize a tree-based index structure to organize all the document index vectors. We employ the multi-keyword ranked search over encrypted cloud data as our basic frame to propose two secure schemes. The experiment results based on the real world datasets show that the scheme is more efficient than previous schemes. We also prove that our schemes are secure under the known ciphertext model and the known background model.",
"title": ""
},
{
"docid": "861f76c061b9eb52ed5033bdeb9a3ce5",
"text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas",
"title": ""
},
{
"docid": "0bd4e5d64f4f9f67dee5ee98557a851f",
"text": "In this paper, we present the vision for an open, urban-scale wireless networking testbed, called CitySense, with the goal of supporting the development and evaluation of novel wireless systems that span an entire city. CitySense is currently under development and will consist of about 100 Linux-based embedded PCs outfitted with dual 802.11a/b/g radios and various sensors, mounted on buildings and streetlights across the city of Cambridge. CitySense takes its cue from citywide urban mesh networking projects, but will differ substantially in that nodes will be directly programmable by end users. The goal of CitySense is explicitly not to provide public Internet access, but rather to serve as a new kind of experimental apparatus for urban-scale distributed systems and networking research efforts. In this paper we motivate the need for CitySense and its potential to support a host of new research and application developments. We also outline the various engineering challenges of deploying such a testbed as well as the research challenges that we face when building and supporting such a system.",
"title": ""
},
{
"docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc",
"text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.",
"title": ""
},
{
"docid": "55b1eb2df97e5d8e871e341c80514ab1",
"text": "Modern digital still cameras sample the color spectrum using a color filter array coated to the CCD array such that each pixel samples only one color channel. The result is a mosaic of color samples which is used to reconstruct the full color image by taking the information of the pixels’ neighborhood. This process is called demosaicking. While standard literature evaluates the performance of these reconstruction algorithms by comparison of a ground-truth image with a reconstructed Bayer pattern image in terms of grayscale comparison, this work gives an evaluation concept to asses the geometrical accuracy of the resulting color images. Only if no geometrical distortions are created during the demosaicking process, it is allowed to use such images for metric calculations, e.g. 3D reconstruction or arbitrary metrical photogrammetric processing.",
"title": ""
},
{
"docid": "795f59c0658a56aa68a9271d591c81a6",
"text": "We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.",
"title": ""
},
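
BotHunter's dialog-based correlation ties together evidence of several infection stages observed for the same host. The toy correlator below is only a schematic of that idea: the stage names, the threshold and the flagging rule are invented for illustration and do not reproduce BotHunter's actual infection dialog model or scoring.

```python
from collections import defaultdict

# Evidence types loosely corresponding to infection-dialog stages (illustrative names only).
STAGES = ["inbound_scan", "exploit", "egg_download", "cnc_dialog", "outbound_scan"]

class DialogCorrelator:
    """Toy dialog-based correlator: flag a host once enough infection stages are seen."""

    def __init__(self, min_stages=3):
        self.min_stages = min_stages
        self.evidence = defaultdict(set)      # host -> set of observed stages

    def observe(self, host, stage):
        if stage in STAGES:
            self.evidence[host].add(stage)
        return self.is_infected(host)

    def is_infected(self, host):
        seen = self.evidence[host]
        # require outbound behaviour plus at least `min_stages` distinct stages overall
        return len(seen) >= self.min_stages and ("cnc_dialog" in seen or "outbound_scan" in seen)

correlator = DialogCorrelator()
events = [("10.0.0.5", "inbound_scan"), ("10.0.0.5", "exploit"),
          ("10.0.0.5", "egg_download"), ("10.0.0.5", "cnc_dialog")]
for host, stage in events:
    if correlator.observe(host, stage):
        print("infection dialog detected for", host)
```
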
{
"docid": "1c2cc1120129eca44443a637c0f06729",
"text": "Direct volume rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current transfer function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce partial range histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF",
"title": ""
}
] | scidocsrr |
459bab70548948693bc57bce36cd0c0c | Credit Card Fraud Detection using Big Data Analytics: Use of PSOAANN based One-Class Classification | [
{
"docid": "c2df8cc7775bd4ec2bfdf4498d136c9f",
"text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.",
"title": ""
},
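
Since the abstract above compares inertia weight strategies, a minimal PSO using one of the simplest such strategies (a linearly decreasing inertia weight) may help make the parameter concrete. This is a generic sketch rather than the benchmark code used in the study; the swarm size, acceleration coefficients and the sphere test function are arbitrary choices.

```python
import numpy as np

def pso(f, dim=10, n_particles=30, iters=200, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, bounds=(-5.12, 5.12), seed=0):
    """Minimal PSO with a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # inertia weight schedule
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))
best, best_val = pso(sphere)
print(best_val)
```
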
{
"docid": "fc12de539644b1b2cf2406708e13e1e0",
"text": "Nonlinear principal component analysis is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.",
"title": ""
}
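
The autoassociative bottleneck network described above can be approximated with an off-the-shelf multilayer perceptron trained to reproduce its inputs. This is only an illustrative stand-in for NLPCA: the layer sizes, the tanh activation, the toy data and the use of scikit-learn's MLPRegressor are all assumptions, not the original architecture or training procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy data: a nonlinear 1-D structure embedded in 3-D (a noisy curve).
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, (500, 1))
X = np.hstack([t, t ** 2, np.sin(3 * t)]) + 0.01 * rng.standard_normal((500, 3))

# Autoassociative net: input -> mapping layer -> bottleneck -> demapping layer -> output.
ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)                       # identity mapping: reproduce the inputs at the output

def bottleneck_scores(model, data, bottleneck_layer=2):
    """Forward-propagate manually and return the bottleneck-layer activations."""
    a = data
    for i in range(bottleneck_layer):
        a = np.tanh(a @ model.coefs_[i] + model.intercepts_[i])
    return a

scores = bottleneck_scores(ae, X)              # nonlinear principal component(s)
recon_err = np.mean((ae.predict(X) - X) ** 2)  # reconstruction error of the identity map
print(scores.shape, recon_err)
```
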
] | [
{
"docid": "6844473b57606198066406c540f642a4",
"text": "Physical activity (PA) during physical education is important for health purposes and for developing physical fitness and movement skills. To examine PA levels and how PA was influenced by environmental and instructor-related characteristics, we assessed children’s activity during 368 lessons taught by 105 physical education specialists in 42 randomly selected schools in Hong Kong. Trained observers used SOFIT in randomly selected classes, grades 4–6, during three climatic seasons. Results indicated children’s PA levels met the U.S. Healthy People 2010 objective of 50% engagement time and were higher than comparable U.S. populations. Multiple regression analyses revealed that temperature, teacher behavior, and two lesson characteristics (subject matter and mode of delivery) were significantly associated with the PA levels. Most of these factors are modifiable, and changes could improve the quantity and intensity of children’s PA.",
"title": ""
},
{
"docid": "38c78be386aa3827f39825f9e40aa3cc",
"text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.",
"title": ""
},
{
"docid": "ee6612fa13482f7e3bbc7241b9e22297",
"text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.",
"title": ""
},
{
"docid": "8994470e355b5db188090be731ee4fe9",
"text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.",
"title": ""
},
{
"docid": "fa0f3b4cf096260a1c9a69541117431e",
"text": "Existing techniques for interactive rendering of deformable translucent objects can accurately compute diffuse but not directional subsurface scattering effects. It is currently a common practice to gain efficiency by storing maps of transmitted irradiance. This is, however, not efficient if we need to store elements of irradiance from specific directions. To include changes in subsurface scattering due to changes in the direction of the incident light, we instead sample incident radiance and store scattered radiosity. This enables us to accommodate not only the common distance-based analytical models for subsurface scattering but also directional models. In addition, our method enables easy extraction of virtual point lights for transporting emergent light to the rest of the scene. Our method requires neither preprocessing nor texture parameterization of the translucent objects. To build our maps of scattered radiosity, we progressively render the model from different directions using an importance sampling pattern based on the optical properties of the material. We obtain interactive frame rates, our subsurface scattering results are close to ground truth, and our technique is the first to include interactive transport of emergent light from deformable translucent objects.",
"title": ""
},
{
"docid": "5af801ca029fa3a0517ef9d32e7baab0",
"text": "Gender is one of the most common attributes used to describe an individual. It is used in multiple domains such as human computer interaction, marketing, security, and demographic reports. Research has been performed to automate the task of gender recognition in constrained environment using face images, however, limited attention has been given to gender classification in unconstrained scenarios. This work attempts to address the challenging problem of gender classification in multi-spectral low resolution face images. We propose a robust Class Representative Autoencoder model, termed as AutoGen for the same. The proposed model aims to minimize the intra-class variations while maximizing the inter-class variations for the learned feature representations. Results on visible as well as near infrared spectrum data for different resolutions and multiple databases depict the efficacy of the proposed model. Comparative results with existing approaches and two commercial off-the-shelf systems further motivate the use of class representative features for classification.",
"title": ""
},
{
"docid": "5dec9381369e61c30112bd87a044cb2f",
"text": "A limiting factor for the application of IDA methods in many domains is the incompleteness of data repositories. Many records have fields that are not filled in, especially, when data entry is manual. In addition, a significant fraction of the entries can be erroneous and there may be no alternative but to discard these records. But every cell in a database is not an independent datum. Statistical relationships will constrain and, often determine, missing values. Data imputation, the filling in of missing values for partially missing data, can thus be an invaluable first step in many IDA projects. New imputation methods that can handle the large-scale problems and large-scale sparsity of industrial databases are needed. To illustrate the incomplete database problem, we analyze one database with instrumentation maintenance and test records for an industrial process. Despite regulatory requirements for process data collection, this database is less than 50% complete. Next, we discuss possible solutions to the missing data problem. Several approaches to imputation are noted and classified into two categories: data-driven and model-based. We then describe two machine-learning-based approaches that we have worked with. These build upon well-known algorithms: AutoClass and C4.5. Several experiments are designed, all using the maintenance database as a common test-bed but with various data splits and algorithmic variations. Results are generally positive with up to 80% accuracies of imputation. We conclude the paper by outlining some considerations in selecting imputation methods, and by discussing applications of data imputation for intelligent data analysis.",
"title": ""
},
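
As a rough illustration of the model-based (C4.5-style) imputation mentioned above, the sketch below trains a decision tree on the complete records and predicts the missing categorical values. The column layout, the toy data and the assumption that the predictor columns are themselves complete are simplifications made for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_impute(X, target_col, feature_cols):
    """Fill missing values in a categorical column using a tree learned from complete rows."""
    X = X.astype(float).copy()
    missing = np.isnan(X[:, target_col])
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X[~missing][:, feature_cols], X[~missing][:, target_col])
    X[missing, target_col] = tree.predict(X[missing][:, feature_cols])
    return X

# toy maintenance-style records: two complete predictor columns, one partially missing label
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 3, 50),
                     rng.integers(0, 4, 50),
                     rng.integers(0, 2, 50)]).astype(float)
X[rng.random(50) < 0.3, 2] = np.nan          # knock out roughly 30% of the third column
X_filled = tree_impute(X, target_col=2, feature_cols=[0, 1])
print(np.isnan(X_filled).sum())               # 0 after imputation
```
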
{
"docid": "b3bab7639acde03cbe12253ebc6eba31",
"text": "Autism spectrum disorder (ASD) is a wide-ranging collection of developmental diseases with varying symptoms and degrees of disability. Currently, ASD is diagnosed mainly with psychometric tools, often unable to provide an early and reliable diagnosis. Recently, biochemical methods are being explored as a means to meet the latter need. For example, an increased predisposition to ASD has been associated with abnormalities of metabolites in folate-dependent one carbon metabolism (FOCM) and transsulfuration (TS). Multiple metabolites in the FOCM/TS pathways have been measured, and statistical analysis tools employed to identify certain metabolites that are closely related to ASD. The prime difficulty in such biochemical studies comes from (i) inefficient determination of which metabolites are most important and (ii) understanding how these metabolites are collectively related to ASD. This paper presents a new method based on scores produced in Support Vector Machine (SVM) modeling combined with High Dimensional Model Representation (HDMR) sensitivity analysis. The new method effectively and efficiently identifies the key causative metabolites in FOCM/TS pathways, ranks their importance, and discovers their independent and correlative action patterns upon ASD. Such information is valuable not only for providing a foundation for a pathological interpretation but also for potentially providing an early, reliable diagnosis ideally leading to a subsequent comprehensive treatment of ASD. With only tens of SVM model runs, the new method can identify the combinations of the most important metabolites in the FOCM/TS pathways that lead to ASD. Previous efforts to find these metabolites required hundreds of thousands of model runs with the same data.",
"title": ""
},
{
"docid": "31e955e62361b6857b31d09398760830",
"text": "Measuring “how much the human is in the interaction” - the level of engagement - is instrumental in building effective interactive robots. Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one of such indirect measures. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is with the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We expose in this paper the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator), to on-line computation of with-me-ness. We report as well on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.",
"title": ""
},
{
"docid": "43fec39ff1d77fbe246e31d98f33f861",
"text": "OBJECTIVE\nTo investigate a modulation of the N170 face-sensitive component related to the perception of other-race (OR) and same-race (SR) faces, as well as differences in face and non-face object processing, by combining different methods of event-related potential (ERP) signal analysis.\n\n\nMETHODS\nSixty-two channel ERPs were recorded in 12 Caucasian subjects presented with Caucasian and Asian faces along with non-face objects. Surface data were submitted to classical waveforms and ERP map topography analysis. Underlying brain sources were estimated with two inverse solutions (BESA and LORETA).\n\n\nRESULTS\nThe N170 face component was identical for both race faces. This component and its topography revealed a face specific pattern regardless of race. However, in this time period OR faces evoked significantly stronger medial occipital activity than SR faces. Moreover, in terms of maps, at around 170 ms face-specific activity significantly preceded non-face object activity by 25 ms. These ERP maps were followed by similar activation patterns across conditions around 190-300 ms, most likely reflecting the activation of visually derived semantic information.\n\n\nCONCLUSIONS\nThe N170 was not sensitive to the race of the faces. However, a possible pre-attentive process associated to the relatively stronger unfamiliarity for OR faces was found in medial occipital area. Moreover, our data provide further information on the time-course of face and non-face object processing.",
"title": ""
},
{
"docid": "efb9686dbd690109e8e5341043648424",
"text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.",
"title": ""
},
{
"docid": "eda40814ecaecbe5d15ccba49f8a0d43",
"text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem",
"title": ""
},
{
"docid": "374674cc8a087d31ee2c801f7e49aa8d",
"text": "Two biological control agents, Bacillus subtilis AP-01 (Larminar(™)) and Trichoderma harzianum AP-001 (Trisan(™)) alone or/in combination were investigated in controlling three tobacco diseases, including bacterial wilt (Ralstonia solanacearum), damping-off (Pythium aphanidermatum), and frogeye leaf spot (Cercospora nicotiana). Tests were performed in greenhouse by soil sterilization prior to inoculation of the pathogens. Bacterial-wilt and damping off pathogens were drenched first and followed with the biological control agents and for comparison purposes, two chemical fungicides. But for frogeye leaf spot, which is an airborne fungus, a spraying procedure for every treatment including a chemical fungicide was applied instead of drenching. Results showed that neither B. subtilis AP-01 nor T harzianum AP-001 alone could control the bacterial wilt, but when combined, their controlling capabilities were as effective as a chemical treatment. These results were also similar for damping-off disease when used in combination. In addition, the combined B. subtilis AP-01 and T. harzianum AP-001 resulted in a good frogeye leaf spot control, which was not significantly different from the chemical treatment.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "851de4b014dfeb6f470876896b0416b3",
"text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of lowfrequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique ∗Corresponding author. Email address: [email protected] (Andrey Ziyatdinov) Preprint submitted to Sensors and Actuators B: Chemical August 15, 2014 suitable in early detection scenarios. The full data set is made publicly available to the community.",
"title": ""
},
{
"docid": "630baadcd861e58e0f36ba3a4e52ffd2",
"text": "The handshake gesture is an important part of the social etiquette in many cultures. It lies at the core of many human interactions, either in formal or informal settings: exchanging greetings, offering congratulations, and finalizing a deal are all activities that typically either start or finish with a handshake. The automated detection of a handshake can enable wide range of pervasive computing scanarios; in particular, different types of information can be exchanged and processed among the handshaking persons, depending on the physical/logical contexts where they are located and on their mutual acquaintance. This paper proposes a novel handshake detection system based on body sensor networks consisting of a resource-constrained wrist-wearable sensor node and a more capable base station. The system uses an effective collaboration technique among body sensor networks of the handshaking persons which minimizes errors associated with the application of classification algorithms and improves the overall accuracy in terms of the number of false positives and false negatives.",
"title": ""
},
{
"docid": "a166b3ed625a5d1db6c70ac41fbf1871",
"text": "The main challenge of online multi-object tracking is to reliably associate object trajectories with detections in each video frame based on their tracking history. In this work, we propose the Recurrent Autoregressive Network (RAN), a temporal generative modeling framework to characterize the appearance and motion dynamics of multiple objects over time. The RAN couples an external memory and an internal memory. The external memory explicitly stores previous inputs of each trajectory in a time window, while the internal memory learns to summarize long-term tracking history and associate detections by processing the external memory. We conduct experiments on the MOT 2015 and 2016 datasets to demonstrate the robustness of our tracking method in highly crowded and occluded scenes. Our method achieves top-ranked results on the two benchmarks.",
"title": ""
},
{
"docid": "b0989fb1775c486317b5128bc1c31c76",
"text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.",
"title": ""
},
{
"docid": "8b39fe1fdfdc0426cc1c31ef2c825c58",
"text": "Approximate nonnegative matrix factorization is an emerging technique with a wide spectrum of potential applications in data analysis. Currently, the most-used algorithms for this problem are those proposed by Lee and Seung [7]. In this paper we present a variation of one of the Lee-Seung algorithms with a notably improved performance. We also show that algorithms of this type do not necessarily converge to local minima.",
"title": ""
},
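
For reference, the standard Lee-Seung multiplicative updates that the abstract's variation builds on can be written in a few lines. This is the textbook form for the Frobenius-norm objective, not the authors' modified algorithm; the rank, iteration count and random data are placeholders.

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - WH||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, keeping nonnegativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, keeping nonnegativity
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_multiplicative(V, rank=4)
print(np.linalg.norm(V - W @ H))               # residual of the low-rank approximation
```
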
{
"docid": "e2f5feaa4670bc1ae21d7c88f3d738e3",
"text": "Orofacial clefts are common birth defects and can occur as isolated, nonsyndromic events or as part of Mendelian syndromes. There is substantial phenotypic diversity in individuals with these birth defects and their family members: from subclinical phenotypes to associated syndromic features that is mirrored by the many genes that contribute to the etiology of these disorders. Identification of these genes and loci has been the result of decades of research using multiple genetic approaches. Significant progress has been made recently due to advances in sequencing and genotyping technologies, primarily through the use of whole exome sequencing and genome-wide association studies. Future progress will hinge on identifying functional variants, investigation of pathway and other interactions, and inclusion of phenotypic and ethnic diversity in studies.",
"title": ""
}
] | scidocsrr |
c4f6df2578bf51cc16b5ba09ef53a67d | Retrieving Similar Styles to Parse Clothing | [
{
"docid": "9f635d570b827d68e057afcaadca791c",
"text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.",
"title": ""
}
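
The paper above couples clothing segmentation (graph cuts with a learned clothing mask) with identity recognition. As a much cruder stand-in for the clothing appearance model, the sketch below fits a Gaussian mixture to colors sampled below a detected face and scores every pixel against it. The face-box convention, the sampling window, the mixture size and the threshold are invented for the illustration and do not correspond to the paper's graph-cut formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def clothing_mask(image, face_box, n_components=3, threshold=-12.0):
    """Rough clothing segmentation: model colors sampled below the face, score every pixel."""
    h, w, _ = image.shape
    top, left, size = face_box                       # assumed square face box: (row, col, side)
    # sample a patch below the face where clothing is likely (location prior only)
    r0, r1 = min(top + 2 * size, h - 1), min(top + 4 * size, h)
    c0, c1 = max(left - size, 0), min(left + 2 * size, w)
    sample = image[r0:r1, c0:c1].reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(sample)
    scores = gmm.score_samples(image.reshape(-1, 3)).reshape(h, w)
    return scores > threshold                        # True where pixels match the clothing model

img = np.random.default_rng(0).random((120, 80, 3))  # stand-in for an RGB image
mask = clothing_mask(img, face_box=(10, 30, 20))
print(mask.mean())
```
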
] | [
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa",
"text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking",
"title": ""
},
{
"docid": "94aa50aa17651de3b9332d4556cd0ca3",
"text": "Fingerprinting of mobile device apps is currently an attractive and affordable data gathering technique. Even in the presence of encryption, it is possible to fingerprint a user's app by means of packet-level traffic analysis in which side-channel information is used to determine specific patterns in packets. Knowing the specific apps utilized by smartphone users is a serious privacy concern. In this study, we address the issue of defending against statistical traffic analysis of Android apps. First, we present a methodology for the identification of mobile apps using traffic analysis. Further, we propose confusion models in which we obfuscate packet lengths information leaked by mobile traffic, and we shape one class of app traffic to obscure its class features with minimum overhead. We assess the efficiency of our model using different apps and against a recently published approach for mobile apps classification. We focus on making it hard for intruders to differentiate between the altered app traffic and the actual one using statistical analysis. Additionally, we study the tradeoff between shaping cost and traffic privacy protection, specifically the needed overhead and realization feasibility. We were able to attain 91.1% classification accuracy. Using our obfuscation technique, we were able to reduce this accuracy to 15.78%.",
"title": ""
},
{
"docid": "da9ffb00398f6aad726c247e3d1f2450",
"text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.",
"title": ""
},
{
"docid": "687dbb03f675f0bf70e6defa9588ae23",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "fcf1d5d56f52d814f0df3b02643ef71b",
"text": "The research work deals with an approach to perform texture and morphological based retrieval on a corpus of food grain images. The work has been carried out using Image Warping and Image analysis approach. The method has been employed to normalize food grain images and hence eliminating the effects of orientation using image warping technique with proper scaling. The images have been properly enhanced to reduce noise and blurring in image. Finally image has segmented applying proper segmentation methods so that edges may be detected effectively and thus rectification of the image has been done. The approach has been tested on sufficient number of food grain images of rice based on intensity, position and orientation. A digital image analysis algorithm based on color, morphological and textural features was developed to identify the six varieties rice seeds which are widely planted in Chhattisgarh region. Nine color and nine morphological and textural features were used for discriminant analysis. A back propagation neural network-based classifier was developed to identify the unknown grain types. The color and textural features were presented to the neural network for training purposes. The trained network was then used to identify the unknown grain types.",
"title": ""
},
{
"docid": "3bcf0e33007feb67b482247ef6702901",
"text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.",
"title": ""
},
{
"docid": "4d2c5785e60fa80febb176165622fca7",
"text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.",
"title": ""
},
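
The Karcher (Fréchet) mean computation described above alternates registrations with an averaging step; the underlying fixed-point iteration is easiest to see on a toy manifold. The sketch below runs it for points on the unit sphere rather than in the LDDMM diffeomorphism setting of the paper, so everything except the iteration itself (manifold, data, tolerances) is simplified.

```python
import numpy as np

def log_map(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    d = q - np.dot(p, q) * p
    nd = np.linalg.norm(d)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nd < 1e-12 else theta * d / nd

def exp_map(p, v):
    """Exp map on the unit sphere: move from p along tangent vector v."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def karcher_mean(points, iters=50, tol=1e-10):
    """Fixed-point iteration for the Karcher (Fréchet) mean of points on S^2."""
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = np.mean([log_map(mu, q) for q in points], axis=0)   # average in the tangent space
        if np.linalg.norm(v) < tol:
            break
        mu = exp_map(mu, v)
    return mu

rng = np.random.default_rng(0)
pts = rng.standard_normal((10, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts[pts[:, 2] < 0] *= -1            # keep points in one hemisphere so the mean is unique
print(karcher_mean(pts))
```

The same average-in-the-tangent-space, map-back-to-the-manifold loop carries over to richer settings; in the image-atlas case, the log and exp maps are replaced by registrations and deformations.
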
{
"docid": "73d3c622c98fba72ae2156df52c860d3",
"text": "We suggest analyzing neural networks through the prism of space constraints. We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion introduced in the study of spacetime tradeoffs that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using neural networks.",
"title": ""
},
{
"docid": "7e78dd27dd2d4da997ceef7e867b7cd2",
"text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.",
"title": ""
},
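
A Gabor jet is the vector of Gabor filter responses collected at one landmark over several scales and orientations. The sketch below builds real-valued kernels by hand and samples them at a single point; the kernel parameters and the use of only the real part (no phase-sensitive similarity function) are simplifications relative to the method described above.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize, sigma, theta, wavelength, psi=0.0):
    """Real part of a 2-D Gabor kernel."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * xr / wavelength + psi)

def gabor_jet(image, point, wavelengths=(4, 8), orientations=4, ksize=21):
    """Collect filter responses at one facial point over several scales/orientations."""
    r, c = point
    jet = []
    for wavelength in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(ksize, sigma=0.56 * wavelength, theta=theta, wavelength=wavelength)
            resp = convolve2d(image, kern, mode="same", boundary="symm")
            jet.append(resp[r, c])
    return np.asarray(jet)

img = np.random.default_rng(0).random((64, 64))   # stand-in for a normalized face region
print(gabor_jet(img, point=(32, 32)).shape)        # (len(wavelengths) * orientations,)
```
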
{
"docid": "4db8a0d39ef31b49f2b6d542a14b03a2",
"text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.",
"title": ""
},
{
"docid": "59e29fa12539757b5084cab8f1e1b292",
"text": "This article addresses the problem of understanding mathematics described in natural language. Research in this area dates back to early 1960s. Several systems have so far been proposed to involve machines to solve mathematical problems of various domains like algebra, geometry, physics, mechanics, etc. This correspondence provides a state of the art technical review of these systems and approaches proposed by different research groups. A unified architecture that has been used in most of these approaches is identified and differences among the systems are highlighted. Significant achievements of each method are pointed out. Major strengths and weaknesses of the approaches are also discussed. Finally, present efforts and future trends in this research area are presented.",
"title": ""
},
{
"docid": "a73f07080a2f93a09b05b58184acf306",
"text": "This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.",
"title": ""
},
{
"docid": "103b9c101d18867f25d605bd0b314c51",
"text": "The human visual system is the most complex pattern recognition device known. In ways that are yet to be fully understood, the visual cortex arrives at a simple and unambiguous interpretation of data from the retinal image that is useful for the decisions and actions of everyday life. Recent advances in Bayesian models of computer vision and in the measurement and modeling of natural image statistics are providing the tools to test and constrain theories of human object perception. In turn, these theories are having an impact on the interpretation of cortical function.",
"title": ""
},
{
"docid": "eba9ec47b04e08ff2606efa9ffebb6f8",
"text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.",
"title": ""
},
{
"docid": "05760329560f86bf8008c0a952abd893",
"text": "Type 2 diabetes is rapidly increasing in prevalence worldwide, in concert with epidemics of obesity and sedentary behavior that are themselves tracking economic development. Within this broad pattern, susceptibility to diabetes varies substantially in association with ethnicity and nutritional exposures through the life-course. An evolutionary perspective may help understand why humans are so prone to this condition in modern environments, and why this risk is unequally distributed. A simple conceptual model treats diabetes risk as the function of two interacting traits, namely ‘metabolic capacity’ which promotes glucose homeostasis, and ‘metabolic load’ which challenges glucose homoeostasis. This conceptual model helps understand how long-term and more recent trends in body composition can be considered to have shaped variability in diabetes risk. Hominin evolution appears to have continued a broader trend evident in primates, towards lower levels of muscularity. In addition, hominins developed higher levels of body fatness, especially in females in relative terms. These traits most likely evolved as part of a broader reorganization of human life history traits in response to growing levels of ecological instability, enabling both survival during tough periods and reproduction during bountiful periods. Since the emergence of Homo sapiens, populations have diverged in body composition in association with geographical setting and local ecological stresses. These long-term trends in both metabolic capacity and adiposity help explain the overall susceptibility of humans to diabetes in ways that are similar to, and exacerbated by, the effects of nutritional exposures during the life-course.",
"title": ""
},
{
"docid": "c1b8beec6f2cb42b5a784630512525f3",
"text": "Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.",
"title": ""
},
{
"docid": "7ddf437114258023cc7d9c6d51bb8f94",
"text": "We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.",
"title": ""
},
{
"docid": "c760e6db820733dc3f57306eef81e5c9",
"text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.",
"title": ""
},
{
"docid": "49e1dc71e71b45984009f4ee20740763",
"text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.",
"title": ""
}
] | scidocsrr |
515d426102670dffd3391507bf5dff3f | A survey of Context-aware Recommender System and services | [
{
"docid": "6bfdd78045816085cd0fa5d8bb91fd18",
"text": "Contextual factors can greatly influence the users' preferences in listening to music. Although it is hard to capture these factors directly, it is possible to see their effects on the sequence of songs liked by the user in his/her current interaction with the system. In this paper, we present a context-aware music recommender system which infers contextual information based on the most recent sequence of songs liked by the user. Our approach mines the top frequent tags for songs from social tagging Web sites and uses topic modeling to determine a set of latent topics for each song, representing different contexts. Using a database of human-compiled playlists, each playlist is mapped into a sequence of topics and frequent sequential patterns are discovered among these topics. These patterns represent frequent sequences of transitions between the latent topics representing contexts. Given a sequence of songs in a user's current interaction, the discovered patterns are used to predict the next topic in the playlist. The predicted topics are then used to post-filter the initial ranking produced by a traditional recommendation algorithm. Our experimental evaluation suggests that our system can help produce better recommendations in comparison to a conventional recommender system based on collaborative or content-based filtering. Furthermore, the topic modeling approach proposed here is also useful in providing better insight into the underlying reasons for song selection and in applications such as playlist construction and context prediction.",
"title": ""
},
{
"docid": "bf1dd3cf77750fe5e994fd6c192ba1be",
"text": "Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type.",
"title": ""
}
] | [
{
"docid": "ab7e775e95f18c27210941602624340f",
"text": "Introduction Blepharophimosis, ptosis and epicanthus inversus (BPES) is an autosomal dominant condition caused by mutations in the FOXL2 gene, located on chromosome 3q23 (Beysen et al., 2005). It has frequently been reported in association with 3q chromosomal deletions and rearrangements (Franceschini et al., 1983; Al-Awadi et al., 1986; Alvarado et al., 1987; Jewett et al., 1993; Costa et al., 1998). In a review of the phenotype associated with interstitial deletions of the long arm of chromosome 3, Alvarado et al. (1987) commented on brachydactyly and abnormalities of the hands and feet. A syndrome characterized by growth and developmental delay, microcephaly, ptosis, blepharophimosis, large abnormally shaped ears, and a prominent nasal tip was suggested to be associated with deletion of the 3q2 region.",
"title": ""
},
{
"docid": "7e66fa86706c823915cce027dc3c5db8",
"text": "Very recently, some studies on neural dependency parsers have shown advantage over the traditional ones on a wide variety of languages. However, for graphbased neural dependency parsing systems, they either count on the long-term memory and attention mechanism to implicitly capture the high-order features or give up the global exhaustive inference algorithms in order to harness the features over a rich history of parsing decisions. The former might miss out the important features for specific headword predictions without the help of the explicit structural information, and the latter may suffer from the error propagation as false early structural constraints are used to create features when making future predictions. We explore the feasibility of explicitly taking high-order features into account while remaining the main advantage of global inference and learning for graph-based parsing. The proposed parser first forms an initial parse tree by head-modifier predictions based on the first-order factorization. High-order features (such as grandparent, sibling, and uncle) then can be defined over the initial tree, and used to refine the parse tree in an iterative fashion. Experimental results showed that our model (called INDP) archived competitive performance to existing benchmark parsers on both English and Chinese datasets.",
"title": ""
},
{
"docid": "02c00f1fd4efffd5f01c165b64fb3f5e",
"text": "A fundamental question in human memory is how the brain represents sensory-specific information during the process of retrieval. One hypothesis is that regions of sensory cortex are reactivated during retrieval of sensory-specific information (1). Here we report findings from a study in which subjects learned a set of picture and sound items and were then given a recall test during which they vividly remembered the items while imaged by using event-related functional MRI. Regions of visual and auditory cortex were activated differentially during retrieval of pictures and sounds, respectively. Furthermore, the regions activated during the recall test comprised a subset of those activated during a separate perception task in which subjects actually viewed pictures and heard sounds. Regions activated during the recall test were found to be represented more in late than in early visual and auditory cortex. Therefore, results indicate that retrieval of vivid visual and auditory information can be associated with a reactivation of some of the same sensory regions that were activated during perception of those items.",
"title": ""
},
{
"docid": "2c328d1dd45733ad8063ea89a6b6df43",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
{
"docid": "11b953431233ef988f11c5139a4eb484",
"text": "In this paper, we forecast the reading of an air quality monitoring station over the next 48 hours, using a data-driven method that considers current meteorological data, weather forecasts, and air quality data of the station and that of other stations within a few hundred kilometers. Our predictive model is comprised of four major components: 1) a linear regression-based temporal predictor to model the local factors of air quality, 2) a neural network-based spatial predictor to model global factors, 3) a dynamic aggregator combining the predictions of the spatial and temporal predictors according to meteorological data, and 4) an inflection predictor to capture sudden changes in air quality. We evaluate our model with data from 43 cities in China, surpassing the results of multiple baseline methods. We have deployed a system with the Chinese Ministry of Environmental Protection, providing 48-hour fine-grained air quality forecasts for four major Chinese cities every hour. The forecast function is also enabled on Microsoft Bing Map and MS cloud platform Azure. Our technology is general and can be applied globally for other cities.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "7c829563e98a6c75eb9b388bf0627271",
"text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.",
"title": ""
},
{
"docid": "2a4ea3452fe02605144a569797bead9a",
"text": "A novel proximity-coupled probe-fed stacked patch antenna is proposed for Global Navigation Satellite Systems (GNSS) applications. The antenna has been designed to operate for the satellite navigation frequencies in L-band including GPS, GLONASS, Galileo, and Compass (1164-1239 MHz and 1559-1610 MHz). A key feature of our design is the proximity-coupled probe feeds to increase impedance bandwidth and the integrated 90deg broadband balun to improve polarization purity. The final antenna exhibits broad pattern coverage, high gain at the low angles (more than -5 dBi), and VSWR <1.5 for all the operating bands. The design procedures and employed tuning techniques to achieve the desired performance are presented.",
"title": ""
},
{
"docid": "f237b861e72d79008305abea9f53547d",
"text": "Biopharmaceutical companies attempting to increase productivity through novel discovery technologies have fallen short of achieving the desired results. Repositioning existing drugs for new indications could deliver the productivity increases that the industry needs while shifting the locus of production to biotechnology companies. More and more companies are scanning the existing pharmacopoeia for repositioning candidates, and the number of repositioning success stories is increasing.",
"title": ""
},
{
"docid": "e28f0e9aea0b1e8b61b8a425d61eb1ab",
"text": "For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for sliding-window detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4% 8 View mAVP on Pascal 3D+ [21] ) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM.",
"title": ""
},
{
"docid": "93396325751b44a328b262ca34b40ee5",
"text": "Compression enables us to shift resource bottlenecks between IO and CPU. In modern datacenters, where energy efficiency is a growing concern, the benefits of using compression have not been completely exploited. As MapReduce represents a common computation framework for Internet datacenters, we develop a decision algorithm that helps MapReduce users identify when and where to use compression. For some jobs, using compression gives energy savings of up to 60%. We believe our findings will provide signficant impact on improving datacenter energy efficiency.",
"title": ""
},
{
"docid": "e47574d102d581f3bc4a8b4ea304d216",
"text": "The paper proposes a high torque density design for variable flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets in order to achieve high air gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then several modifications are applied to the stator and rotor designs through finite element simulations (FEA) in order to improve the machine efficiency and torque density.",
"title": ""
},
{
"docid": "66b912a52197a98134d911ee236ac869",
"text": "We survey the field of quantum information theory. In particular, we discuss the fundamentals of the field, source coding, quantum error-correcting codes, capacities of quantum channels, measures of entanglement, and quantum cryptography.",
"title": ""
},
{
"docid": "0279c0857b6dfdba5c3fe51137c1f824",
"text": "INTRODUCTION. When deciding about the cause underlying serially presented events, patients with delusions utilise fewer events than controls, showing a \"Jumping-to-Conclusions\" bias. This has been widely hypothesised to be because patients expect to incur higher costs if they sample more information. This hypothesis is, however, unconfirmed. METHODS. The hypothesis was tested by analysing patient and control data using two models. The models provided explicit, quantitative variables characterising decision making. One model was based on calculating the potential costs of making a decision; the other compared a measure of certainty to a fixed threshold. RESULTS. Differences between paranoid participants and controls were found, but not in the way that was previously hypothesised. A greater \"noise\" in decision making (relative to the effective motivation to get the task right), rather than greater perceived costs, best accounted for group differences. Paranoid participants also deviated from ideal Bayesian reasoning more than healthy controls. CONCLUSIONS. The Jumping-to-Conclusions Bias is unlikely to be due to an overestimation of the cost of gathering more information. The analytic approach we used, involving a Bayesian model to estimate the parameters characterising different participant populations, is well suited to testing hypotheses regarding \"hidden\" variables underpinning observed behaviours.",
"title": ""
},
{
"docid": "e8c05d0540f384d0e85b4610c4bad4ef",
"text": "We present DIALDroid, a scalable and accurate tool for analyzing inter-app Inter-Component Communication (ICC) among Android apps, which outperforms current stateof-the-art ICC analysis tools. Using DIALDroid, we performed the first large-scale detection of collusive and vulnerable apps based on inter-app ICC data flows among 110,150 real-world apps and identified key security insights.",
"title": ""
},
{
"docid": "444364c2ab97bef660ab322420fc5158",
"text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "31895a2de8b3d9b2c2a3d29e0d7b7f58",
"text": "BACKGROUND\nReducing delays for patients who are safe to be discharged is important for minimising complications, managing costs and improving quality. Barriers to discharge include placement, multispecialty coordination of care and ineffective communication. There are a few recent studies that describe barriers from the perspective of all members of the multidisciplinary team.\n\n\nSTUDY OBJECTIVE\nTo identify the barriers to discharge for patients from our medicine service who had a discharge delay of over 24 hours.\n\n\nMETHODOLOGY\nWe developed and implemented a biweekly survey that was reviewed with attending physicians on each of the five medicine services to identify patients with an unnecessary delay. Separately, we conducted interviews with staff members involved in the discharge process to identify common barriers they observed on the wards.\n\n\nRESULTS\nOver the study period from 28 October to 22 November 2013, out of 259 total discharges, 87 patients had a delay of over 24 hours (33.6%) and experienced a total of 181 barriers. The top barriers from the survey included patient readiness, prolonged wait times for procedures or results, consult recommendations and facility placement. A total of 20 interviews were conducted, from which the top barriers included communication both between staff members and with the patient, timely notification of discharge and lack of discharge standardisation.\n\n\nCONCLUSIONS\nThere are a number of frequent barriers to discharge encountered in our hospital that may be avoidable with planning, effective communication methods, more timely preparation and tools to standardise the discharge process.",
"title": ""
},
{
"docid": "1095311bbe710412173f33134c7d47d4",
"text": "A large number of papers are appearing in the biomedical engineering literature that describe the use of machine learning techniques to develop classifiers for detection or diagnosis of disease. However, the usefulness of this approach in developing clinically validated diagnostic techniques so far has been limited and the methods are prone to overfitting and other problems which may not be immediately apparent to the investigators. This commentary is intended to help sensitize investigators as well as readers and reviewers of papers to some potential pitfalls in the development of classifiers, and suggests steps that researchers can take to help avoid these problems. Building classifiers should be viewed not simply as an add-on statistical analysis, but as part and parcel of the experimental process. Validation of classifiers for diagnostic applications should be considered as part of a much larger process of establishing the clinical validity of the diagnostic technique.",
"title": ""
},
{
"docid": "dd6711fdcf9e2908f65258c2948a1e26",
"text": "Pevzner and Sze [23] considered a precise version of the motif discovery problem and simultaneously issued an algorithmic challenge: find a motif M of length 15, where each planted instance differs from M in 4 positions. Whereas previous algorithms all failed to solve this (15,4)-motif problem. Pevzner and Sze introduced algorithms that succeeded. However, their algorithms failed to solve the considerably more difficult (14,4)-, (16,5)-, and (18,6)-motif problems.\nWe introduce a novel motif discovery algorithm based on the use of random projections of the input's substrings. Experiments on simulated data demonstrate that this algorithm performs better than existing algorithms and, in particular, typically solves the difficult (14,4)-, (16,5)-, and (18,6)-motif problems quite efficiently. A probabilistic estimate shows that the small values of d for which the algorithm fails to recover the planted (l, d)-motif are in all likelihood inherently impossible to solve. We also present experimental results on realistic biological data by identifying ribosome binding sites in prokaryotes as well as a number of known transcriptional regulatory motifs in eukaryotes.",
"title": ""
}
] | scidocsrr |
014ee38649c8ac97b152962fc03d8a2d | A review of stator fault monitoring techniques of induction motors | [
{
"docid": "eda3987f781263615ccf53dd9a7d1a27",
"text": "The study gives a synopsis over condition monitoring methods both as a diagnostic tool and as a technique for failure identification in high voltage induction motors in industry. New running experience data for 483 motor units with 6135 unit years are registered and processed statistically, to reveal the connection between motor data, protection and condition monitoring methods, maintenance philosophy and different types of failures. The different types of failures are further analyzed to failure-initiators, -contributors and -underlying causes. The results have been compared with those of a previous survey, IEEE Report of Large Motor Reliability Survey of Industrial and Commercial Installations, 1985. In the present survey the motors are in the range of 100 to 1300 kW, 47% of them between 100 and 500 kW.",
"title": ""
}
] | [
{
"docid": "f26f254827efa3fe29301ef31eb8669f",
"text": "Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a \"visual Turing test\": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (\"just-in-time truthing\"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers-the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.",
"title": ""
},
{
"docid": "bfd23678afff2ac4cd4650cf46195590",
"text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.",
"title": ""
},
{
"docid": "0173524e63580e3a4d85cb41719e0bd6",
"text": "This paper describes some of the possibilities of artificial neural networks that open up after solving the problem of catastrophic forgetting. A simple model and reinforcement learning applications of existing methods are also proposed",
"title": ""
},
{
"docid": "d3c811ec795c04005fb04cdf6eec6b0e",
"text": "We present a new replay-based method of continual classification learning that we term \"conditional replay\" which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term \"marginal replay\") that generates samples independently of their class and assigns labels in a separate step. The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and Fashion MNIST data, and compare to the regularization-based EWC method (Kirkpatrick et al., 2016; Shin et al., 2017).",
"title": ""
},
{
"docid": "af3297de35d49f774e2f31f31b09fd61",
"text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.",
"title": ""
},
{
"docid": "f5b8a5e034e8c619cea9cbdaca4a5bef",
"text": "x Acknowledgements xii Chapter",
"title": ""
},
{
"docid": "a413a29f8921bec12d71b8c0156e353b",
"text": "In the present study, we tested the hypothesis that a carbohydrate-protein (CHO-Pro) supplement would be more effective in the replenishment of muscle glycogen after exercise compared with a carbohydrate supplement of equal carbohydrate content (LCHO) or caloric equivalency (HCHO). After 2.5 +/- 0.1 h of intense cycling to deplete the muscle glycogen stores, subjects (n = 7) received, using a rank-ordered design, a CHO-Pro (80 g CHO, 28 g Pro, 6 g fat), LCHO (80 g CHO, 6 g fat), or HCHO (108 g CHO, 6 g fat) supplement immediately after exercise (10 min) and 2 h postexercise. Before exercise and during 4 h of recovery, muscle glycogen of the vastus lateralis was determined periodically by nuclear magnetic resonance spectroscopy. Exercise significantly reduced the muscle glycogen stores (final concentrations: 40.9 +/- 5.9 mmol/l CHO-Pro, 41.9 +/- 5.7 mmol/l HCHO, 40.7 +/- 5.0 mmol/l LCHO). After 240 min of recovery, muscle glycogen was significantly greater for the CHO-Pro treatment (88.8 +/- 4.4 mmol/l) when compared with the LCHO (70.0 +/- 4.0 mmol/l; P = 0.004) and HCHO (75.5 +/- 2.8 mmol/l; P = 0.013) treatments. Glycogen storage did not differ significantly between the LCHO and HCHO treatments. There were no significant differences in the plasma insulin responses among treatments, although plasma glucose was significantly lower during the CHO-Pro treatment. These results suggest that a CHO-Pro supplement is more effective for the rapid replenishment of muscle glycogen after exercise than a CHO supplement of equal CHO or caloric content.",
"title": ""
},
{
"docid": "49ba31f0452a57e29f7bbf15de477f2b",
"text": "In recent years, the IoT(Internet of Things)has been wide spreadly concerned as a new technology architecture integrated by many information technologies. However, IoT is a complex system. If there is no intensive study from the system point of view, it will be very hard to understand the key technologies of IoT. in order to better study IoT and its technologies this paper proposes an extent six-layer architecture of IoT based on network hierarchical structure firstly. then it discusses the key technologies involved in every layer of IoT, such as RFID, WSN, Internet, SOA, cloud computing and Web Service etc. According to this, an automatic recognition system is designed via integrating both RFID and WSN and a strategy is also brought forward to via integrating both RFID and Web Sevice. This result offers the feasibility of technology integration of IoT.",
"title": ""
},
{
"docid": "f6c874435978db83361f62bfe70a6681",
"text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott. [email protected] or journal managing editor Susan Haigney at [email protected].",
"title": ""
},
{
"docid": "8813c7c18f0629680f537bdd0afcb1ba",
"text": "A fault-tolerant (FT) control approach for four-wheel independently-driven (4WID) electric vehicles is presented. An adaptive control based passive fault-tolerant controller is designed to ensure the system stability when an in-wheel motor/motor driver fault happens. As an over-actuated system, it is challenging to isolate the faulty wheel and accurately estimate the control gain of the faulty in-wheel motor for 4WID electric vehicles. An active fault diagnosis approach is thus proposed to isolate and evaluate the fault. Based on the estimated control gain of the faulty in-wheel motor, the control efforts of all the four wheels are redistributed to relieve the torque demand on the faulty wheel. Simulations using a high-fidelity, CarSim, full-vehicle model show the effectiveness of the proposed in-wheel motor/motor driver fault diagnosis and fault-tolerant control approach.",
"title": ""
},
{
"docid": "8acae9ae82e89bd4d76bb9a706276149",
"text": "Metastasis leads to poor prognosis in colorectal cancer patients, and there is a growing need for new therapeutic targets. TMEM16A (ANO1, DOG1 or TAOS2) has recently been identified as a calcium-activated chloride channel (CaCC) and is reported to be overexpressed in several malignancies; however, its expression and function in colorectal cancer (CRC) remains unclear. In this study, we found expression of TMEM16A mRNA and protein in high-metastatic-potential SW620, HCT116 and LS174T cells, but not in primary HCT8 and SW480 cells, using RT-PCR, western blotting and immunofluorescence labeling. Patch-clamp recordings detected CaCC currents regulated by intracellular Ca(2+) and voltage in SW620 cells. Knockdown of TMEM16A by short hairpin RNAs (shRNA) resulted in the suppression of growth, migration and invasion of SW620 cells as detected by MTT, wound-healing and transwell assays. Mechanistically, TMEM16A depletion was accompanied by the dysregulation of phospho-MEK, phospho-ERK1/2 and cyclin D1 expression. Flow cytometry analysis showed that SW620 cells were inhibited from the G1 to S phase of the cell cycle in the TMEM16A shRNA group compared with the control group. In conclusion, our results indicate that TMEM16A CaCC is involved in growth, migration and invasion of metastatic CRC cells and provide evidence for TMEM16A as a potential drug target for treating metastatic colorectal carcinoma.",
"title": ""
},
{
"docid": "85c687f7b01d635fa9f46d0dd61098d3",
"text": "This paper provides a comprehensive survey of the technical achievements in the research area of Image Retrieval , especially Content-Based Image Retrieval, an area so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multi-dimensional indexing, and system design, three of the fundamental bases of Content-Based Image Retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identiied, and future promising research directions are suggested.",
"title": ""
},
{
"docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
{
"docid": "e6457f5257e95d727e06e212bef2f488",
"text": "The emerging ability to comply with caregivers' dictates and to monitor one's own behavior accordingly signifies a major growth of early childhood. However, scant attention has been paid to the developmental course of self-initiated regulation of behavior. This article summarizes the literature devoted to early forms of control and highlights the different philosophical orientations in the literature. Then, focusing on the period from early infancy to the beginning of the preschool years, the author proposes an ontogenetic perspective tracing the kinds of modulation or control the child is capable of along the way. The developmental sequence of monitoring behaviors that is proposed calls attention to contributions made by the growth of cognitive skills. The role of mediators (e.g., caregivers) is also discussed.",
"title": ""
},
{
"docid": "81b6059f24c827c271247b07f38f86d5",
"text": "We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter, respectively. The transmitter architecture takes advantage of the wideband frequency modulation capability of the all-digital phase-locked loop with built-in automatic compensation to ensure modulation accuracy. The receiver employs a discrete-time architecture in which the RF signal is directly sampled and processed using analog and digital signal processing techniques. The complete chip also integrates power management functions and a digital baseband processor. Application of the presented ideas has resulted in significant area and power savings while producing structures that are amenable to migration to more advanced deep-submicron processes, as they become available. The entire IC occupies 10 mm/sup 2/ and consumes 28 mA during transmit and 41 mA during receive at 1.5-V supply.",
"title": ""
},
{
"docid": "4765f21109d36fb2631325fd0442aeac",
"text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.",
"title": ""
},
{
"docid": "2ff3238a25fd7055517a2596e5e0cd7c",
"text": "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.",
"title": ""
},
{
"docid": "abdd0d2c13c884b22075b2c3f54a0dfc",
"text": "Global clock distribution for multi-GHz microprocessors has become increasingly difficult and time-consuming to design. As the frequency of the global clock continues to increase, the timing uncertainty introduced by the clock network − the skew and jitter − must reduce proportional to the clock period. However, the clock skew and jitter for conventional, buffered H-trees are proportional to latency, which has increased for recent generations of microprocessors. A global clock network that uses standing waves and coupled oscillators has the potential to significantly reduce both skew and jitter. Standing waves have the unique property that phase does not depend on position, meaning that there is ideally no skew. They have previously been used for board-level clock distribution, on coaxial cables, and on superconductive wires but have never been implemented on-chip due to the large losses of on-chip interconnects. Networks of coupled oscillators have a phase-averaging effect that reduces both skew and jitter. However, none of the previous implementations of coupled-oscillator clock networks use standing waves and some require considerable circuitry to couple the oscillators. In this thesis, a global clock network that incorporates standing waves and coupled oscillators to distribute a high-frequency clock signal with low skew and low jitter is",
"title": ""
},
{
"docid": "fa9a2112a687063c2fb3733af7b1ea61",
"text": "Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large scale human evaluation, we report that our resulting word embedddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.",
"title": ""
},
{
"docid": "95c4a2cfd063abdac35572927c4dcfc1",
"text": "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this paper, we propose an efficient overlapping community detection algorithm using a seed expansion approach. The key idea of our algorithm is to find good seeds, and then greedily expand these seeds based on a community metric. Within this seed expansion method, we investigate the problem of how to determine good seed nodes in a graph. In particular, we develop new seeding strategies for a personalized PageRank clustering scheme that optimizes the conductance community score. An important step in our method is the neighborhood inflation step where seeds are modified to represent their entire vertex neighborhood. Experimental results show that our seed expansion algorithm outperforms other state-of-the-art overlapping community detection methods in terms of producing cohesive clusters and identifying ground-truth communities. We also show that our new seeding strategies are better than existing strategies, and are thus effective in finding good overlapping communities in real-world networks.",
"title": ""
}
] | scidocsrr |
59708a44c0315cb1c93bedc32e054a5f | Supporting Drivers in Keeping Safe Speed and Safe Distance: The SASPENCE Subproject Within the European Framework Programme 6 Integrating Project PReVENT | [
{
"docid": "c1956e4c6b732fa6a420d4c69cfbe529",
"text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.",
"title": ""
},
{
"docid": "7cef2fac422d9fc3c3ffbc130831b522",
"text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.",
"title": ""
}
] | [
{
"docid": "9f3404d9e2941a1e0987f215e6d82a54",
"text": "Augmented reality technologies allow people to view and interact with virtual objects that appear alongside physical objects in the real world. For augmented reality applications to be effective, users must be able to accurately perceive the intended real world location of virtual objects. However, when creating augmented reality applications, developers are faced with a variety of design decisions that may affect user perceptions regarding the real world depth of virtual objects. In this paper, we conducted two experiments using a perceptual matching task to understand how shading, cast shadows, aerial perspective, texture, dimensionality (i.e., 2D vs. 3D shapes) and billboarding affected participant perceptions of virtual object depth relative to real world targets. The results of these studies quantify trade-offs across virtual object designs to inform the development of applications that take advantage of users' visual abilities to better blend the physical and virtual world.",
"title": ""
},
{
"docid": "9de44948e28892190f461199a1d33935",
"text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. 
Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "4dbbcaf264cc9beda8644fa926932d2e",
"text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.",
"title": ""
},
{
"docid": "1510979ae461bfff3deb46dec2a81798",
"text": "State-of-the-art methods for relation classification are mostly based on statistical machine learning, and the performance heavily depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing NLP systems, which lead to the error propagation of existing tools and hinder the performance of the system. In this paper, we exploit a convolutional Deep Neural Network (DNN) to extract lexical and sentence level features. Our method takes all the word tokens as input without complicated pre-processing. First, all the word tokens are transformed to vectors by looking up word embeddings1. Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two level features are concatenated as the final extracted feature vector. Finally, the features are feed into a softmax classifier to predict the relationship between two marked nouns. Experimental results show that our approach significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "1196ab65ddfcedb8775835f2e176576f",
"text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.",
"title": ""
},
{
"docid": "d9ad51299d4afb8075bd911b6655cf16",
"text": "To assess whether the passive leg raising test can help in predicting fluid responsiveness. Nonsystematic review of the literature. Passive leg raising has been used as an endogenous fluid challenge and tested for predicting the hemodynamic response to fluid in patients with acute circulatory failure. This is now easy to perform at the bedside using methods that allow a real time measurement of systolic blood flow. A passive leg raising induced increase in descending aortic blood flow of at least 10% or in echocardiographic subaortic flow of at least 12% has been shown to predict fluid responsiveness. Importantly, this prediction remains very valuable in patients with cardiac arrhythmias or spontaneous breathing activity. Passive leg raising allows reliable prediction of fluid responsiveness even in patients with spontaneous breathing activity or arrhythmias. This test may come to be used increasingly at the bedside since it is easy to perform and effective, provided that its effects are assessed by a real-time measurement of cardiac output.",
"title": ""
},
{
"docid": "8f9929a21107e6edf9a2aa5f69a3c012",
"text": "Rice is one of the most cultivated cereal in Asian countries and Vietnam in particular. Good seed germination is important for rice seed quality, that impacts the rice production and crop yield. Currently, seed germination evaluation is carried out manually by experienced persons. This is a tedious and time-consuming task. In this paper, we present a system for automatic evaluation of rice seed germination rate based on advanced techniques in computer vision and machine learning. We propose to use U-Net - a convolutional neural network - for segmentation and separation of rice seeds. Further processing such as computing distance transform and thresholding will be applied on the segmented regions for rice seed detection. Finally, ResNet is utilized to classify segmented rice seed regions into two classes: germinated and non- germinated seeds. Our contributions in this paper are three-fold. Firstly, we propose a framework which confirms that convolutional neural networks are better than traditional methods for both segmentation and classification tasks (with F1- scores of 93.38\\% and 95.66\\% respectively). Secondly, we deploy successfully the automatic tool in a real application for estimating rice germination rate. Finally, we introduce a new dataset of 1276 images of rice seeds from 7 to 8 seed varieties germinated during 6 to 10 days. This dataset is publicly available for research purpose.",
"title": ""
},
{
"docid": "4d2f03a786f8addf0825b5bc7701c621",
"text": "Integrated Design of Agile Missile Guidance and Autopilot Systems By P. K. Menon and E. J. Ohlmeyer Abstract Recent threat assessments by the US Navy have indicated the need for improving the accuracy of defensive missiles. This objective can only be achieved by enhancing the performance of the missile subsystems and by finding methods to exploit the synergism existing between subsystems. As a first step towards the development of integrated design methodologies, this paper develops a technique for integrated design of missile guidance and autopilot systems. Traditional approach for the design of guidance and autopilot systems has been to design these subsystems separately and then to integrate them together before verifying their performance. Such an approach does not exploit any beneficial relationships between these and other subsystems. The application of the feedback linearization technique for integrated guidance-autopilot system design is discussed. Numerical results using a six degree-of-freedom missile simulation are given. Integrated guidance-autopilot systems are expected to result in significant improvements in missile performance, leading to lower weight and enhanced lethality. Both of these factors will lead to a more effective, lower-cost weapon system. Integrated system design methods developed under the present research effort also have extensive applications in high performance aircraft autopilot and guidance systems.",
"title": ""
},
{
"docid": "f5886c4e73fed097e44d6a0e052b143f",
"text": "A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "115c06a2e366293850d1ef3d60f2a672",
"text": "Accurate network traffic identification plays important roles in many areas such as traffic engineering, QoS and intrusion detection etc. The emergence of many new encrypted applications which use dynamic port numbers and masquerading techniques causes the most challenging problem in network traffic identification field. One of the challenging issues for existing traffic identification methods is that they can’t classify online encrypted traffic. To overcome the drawback of the previous identification scheme and to meet the requirements of the encrypted network activities, our work mainly focuses on how to build an online Internet traffic identification based on flow information. We propose real-time encrypted traffic identification based on flow statistical characteristics using machine learning in this paper. We evaluate the effectiveness of our proposed method through the experiments on different real traffic traces. By experiment results and analysis, this method can classify online encrypted network traffic with high accuracy and robustness.",
"title": ""
},
{
"docid": "c6a7c67fa77d2a5341b8e01c04677058",
"text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.",
"title": ""
},
{
"docid": "fe4a59449e61d47ccf04f4ef70a6ba72",
"text": "Bipedal robots are currently either slow, energetically inefficient and/or require a lot of control to maintain their stability. This paper introduces the FastRunner, a bipedal robot based on a new leg architecture. Simulation results of a Planar FastRunner demonstrate that legged robots can run fast, be energy efficient and inherently stable. The simulated FastRunner has a cost of transport of 1.4 and requires only a local feedback of the hip position to reach 35.4 kph from stop in simulation.",
"title": ""
},
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
},
{
"docid": "43d88fd321ad7a6c1bbb2a0054a77959",
"text": "We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pair-wise statistical distributions, that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from “bottom-up” visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.",
"title": ""
},
{
"docid": "a6f1d81e6b4a20d892c9292fb86d2c1d",
"text": "Research in biomaterials and biomechanics has fueled a large part of the significant revolution associated with osseointegrated implants. Additional key areas that may become even more important--such as guided tissue regeneration, growth factors, and tissue engineering--could not be included in this review because of space limitations. All of this work will no doubt continue unabated; indeed, it is probably even accelerating as more clinical applications are found for implant technology and related therapies. An excellent overall summary of oral biology and dental implants recently appeared in a dedicated issue of Advances in Dental Research. Many advances have been made in the understanding of events at the interface between bone and implants and in developing methods for controlling these events. However, several important questions still remain. What is the relationship between tissue structure, matrix composition, and biomechanical properties of the interface? Do surface modifications alter the interfacial tissue structure and composition and the rate at which it forms? If surface modifications change the initial interface structure and composition, are these changes retained? Do surface modifications enhance biomechanical properties of the interface? As current understanding of the bone-implant interface progresses, so will development of proactive implants that can help promote desired outcomes. However, in the midst of the excitement born out of this activity, it is necessary to remember that the needs of the patient must remain paramount. It is also worth noting another as-yet unsatisfied need. With all of the new developments, continuing education of clinicians in the expert use of all of these research advances is needed. For example, in the area of biomechanical treatment planning, there are still no well-accepted biomaterials/biomechanics \"building codes\" that can be passed on to clinicians. Also, there are no readily available treatment-planning tools that clinicians can use to explore \"what-if\" scenarios and other design calculations of the sort done in modern engineering. No doubt such approaches could be developed based on materials already in the literature, but unfortunately much of what is done now by clinicians remains empirical. A worthwhile task for the future is to find ways to more effectively deliver products of research into the hands of clinicians.",
"title": ""
},
{
"docid": "dbd92d8f7fc050229379ebfb87e22a5f",
"text": "The presence of third-party tracking on websites has become customary. However, our understanding of the thirdparty ecosystem is still very rudimentary. We examine thirdparty trackers from a geographical perspective, observing the third-party tracking ecosystem from 29 countries across the globe. When examining the data by region (North America, South America, Europe, East Asia, Middle East, and Oceania), we observe significant geographical variation between regions and countries within regions. We find trackers that focus on specific regions and countries, and some that are hosted in countries outside their expected target tracking domain. Given the differences in regulatory regimes between jurisdictions, we believe this analysis sheds light on the geographical properties of this ecosystem and on the problems that these may pose to our ability to track and manage the different data silos that now store personal data about us all.",
"title": ""
},
{
"docid": "745cdbb442c73316f691dc20cc696f31",
"text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.",
"title": ""
},
{
"docid": "313c8ba6d61a160786760543658185df",
"text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).",
"title": ""
},
{
"docid": "c9968b5dbe66ad96605c88df9d92a2fb",
"text": "We present an analysis of the population dynamics and demographics of Amazon Mechanical Turk workers based on the results of the survey that we conducted over a period of 28 months, with more than 85K responses from 40K unique participants. The demographics survey is ongoing (as of November 2017), and the results are available at http://demographics.mturk-tracker.com: we provide an API for researchers to download the survey data. We use techniques from the field of ecology, in particular, the capture-recapture technique, to understand the size and dynamics of the underlying population. We also demonstrate how to model and account for the inherent selection biases in such surveys. Our results indicate that there are more than 100K workers available in Amazon»s crowdsourcing platform, the participation of the workers in the platform follows a heavy-tailed distribution, and at any given time there are more than 2K active workers. We also show that the half-life of a worker on the platform is around 12-18 months and that the rate of arrival of new workers balances the rate of departures, keeping the overall worker population relatively stable. Finally, we demonstrate how we can estimate the biases of different demographics to participate in the survey tasks, and show how to correct such biases. Our methodology is generic and can be applied to any platform where we are interested in understanding the dynamics and demographics of the underlying user population.",
"title": ""
}
] | scidocsrr |
9dca7788703754f3df30cadba621e0c4 | Are blockchains immune to all malicious attacks ? | [
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsuperProceedings of the Seventh International AAAI Conference on Weblogs and Social Media",
"title": ""
}
] | [
{
"docid": "3cd3ac867951fb8153dd979ad64c826e",
"text": "Based on information-theoretic security, physical-layer security in an optical code division multiple access (OCDMA) system is analyzed. For the first time, the security leakage factor is employed to evaluate the physicallayer security level, and the safe receiving distance is used to measure the security transmission range. By establishing the wiretap channel model of a coherent OCDMA system, the influences of the extraction location, the extraction ratio, the number of active users, and the length of the code on the physical-layer security are analyzed quantitatively. The physical-layer security and reliability of the coherent OCDMA system are evaluated under the premise of satisfying the legitimate user’s bit error rate and the security leakage factor. The simulation results show that the number of users must be in a certain interval in order to meet the legitimate user’s reliability and security. Furthermore, the security performance of the coherent OCDMA-based wiretap channel can be improved by increasing the code length.",
"title": ""
},
{
"docid": "4828e830d440cb7a2c0501952033da2f",
"text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.",
"title": ""
},
{
"docid": "fce1fd47f04105be554465c7ae4245db",
"text": "A model for evaluating the lifetime of closed electrical contacts was described. The model is based on the diffusion of oxidizing agent into the contact zone. The results showed that there is good agreement between the experimental data as reported by different authors in the literature and the derived expressions for the limiting value of contact lifetime.",
"title": ""
},
{
"docid": "14c981a63e34157bb163d4586502a059",
"text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.",
"title": ""
},
{
"docid": "406fab96a8fd49f4d898a9735ee1512f",
"text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.",
"title": ""
},
{
"docid": "f2b3f12251d636fe0ac92a2965826525",
"text": "Today, not only Internet companies such as Google, Facebook or Twitter do have Big Data but also Enterprise Information Systems store an ever growing amount of data (called Big Enterprise Data in this paper). In a classical SAP system landscape a central data warehouse (SAP BW) is used to integrate and analyze all enterprise data. In SAP BW most of the business logic required for complex analytical tasks (e.g., a complex currency conversion) is implemented in the application layer on top of a standard relational database. While being independent from the underlying database when using such an architecture, this architecture has two major drawbacks when analyzing Big Enterprise Data: (1) algorithms in ABAP do not scale with the amount of data and (2) data shipping is required. To this end, we present a novel programming language called SQLScript to efficiently support complex and scalable analytical tasks inside SAP’s new main-memory database HANA. SQLScript provides two major extensions to the SQL dialect of SAP HANA: A functional and a procedural extension. While the functional extension allows the definition of scalable analytical tasks on Big Enterprise Data, the procedural extension provides imperative constructs to orchestrate the analytical tasks. The major contributions of this paper are two novel functional extensions: First, an extended version of the MapReduce programming model for supporting parallelizable user-defined functions (UDFs). Second, compared to recursion in the SQL standard, a generalized version of recursion to support graph analytics as well as machine learning tasks.",
"title": ""
},
{
"docid": "0d75a194abf88a0cbf478869dc171794",
"text": "As a promising way for heterogeneous data analytics, consensus clustering has attracted increasing attention in recent decades. Among various excellent solutions, the co-association matrix based methods form a landmark, which redefines consensus clustering as a graph partition problem. Nevertheless, the relatively high time and space complexities preclude it from wide real-life applications. We, therefore, propose Spectral Ensemble Clustering (SEC) to leverage the advantages of co-association matrix in information integration but run more efficiently. We disclose the theoretical equivalence between SEC and weighted K-means clustering, which dramatically reduces the algorithmic complexity. We also derive the latent consensus function of SEC, which to our best knowledge is the first to bridge co-association matrix based methods to the methods with explicit global objective functions. Further, we prove in theory that SEC holds the robustness, generalizability, and convergence properties. We finally extend SEC to meet the challenge arising from incomplete basic partitions, based on which a row-segmentation scheme for big data clustering is proposed. Experiments on various real-world data sets in both ensemble and multi-view clustering scenarios demonstrate the superiority of SEC to some state-of-the-art methods. In particular, SEC seems to be a promising candidate for big data clustering.",
"title": ""
},
{
"docid": "c45344ee6d7feb5a9bb15e32f63217f8",
"text": "In this paper athree-stage pulse generator architecture capable of generating high voltage, high current pulses is reported and system issues are presented. Design choices and system dynamics are explained both qualitatively and quantitatively with discussion sections followed by the presentation of closed-form expressions and numerical analysis that provide insight into the system's operation. Analysis targeted at optimizing performance focuses on diode opening switch pumping, energy efficiency, and compensation of parasitic reactances. A compact system based on these design guidelines has been built to output 8 kV, 5 ns pulses into 50 Ω. Output risetimes below 1 ns have been achieved using two different compression techniques. At only 1.5 kg, this light and compact system shows promise for a variety of pulsed power applications requiring the long lifetime, low jitter performance of a solid state pulse generator that can produce fast, high voltage pulses at high repetition rates.",
"title": ""
},
{
"docid": "6291f21727c70d3455a892a8edd3b18c",
"text": "Given a single column of values, existing approaches typically employ regex-like rules to detect errors by finding anomalous values inconsistent with others. Such techniques make local decisions based only on values in the given input column, without considering a more global notion of compatibility that can be inferred from large corpora of clean tables. We propose \\sj, a statistics-based technique that leverages co-occurrence statistics from large corpora for error detection, which is a significant departure from existing rule-based methods. Our approach can automatically detect incompatible values, by leveraging an ensemble of judiciously selected generalization languages, each of which uses different generalizations and is sensitive to different types of errors. Errors so detected are based on global statistics, which is robust and aligns well with human intuition of errors. We test \\sj on a large set of public Wikipedia tables, as well as proprietary enterprise Excel files. While both of these test sets are supposed to be of high-quality, \\sj makes surprising discoveries of over tens of thousands of errors in both cases, which are manually verified to be of high precision (over 0.98). Our labeled benchmark set on Wikipedia tables is released for future research.",
"title": ""
},
{
"docid": "5d866d630f78bb81b5ce8d3dae2521ee",
"text": "In present-day high-performance electronic components, the generated heat loads result in unacceptably high junction temperatures and reduced component lifetimes. Thermoelectric modules can, in principle, enhance heat removal and reduce the temperatures of such electronic devices. However, state-of-the-art bulk thermoelectric modules have a maximum cooling flux qmax of only about 10 W cm(-2), while state-of-the art commercial thin-film modules have a qmax <100 W cm(-2). Such flux values are insufficient for thermal management of modern high-power devices. Here we show that cooling fluxes of 258 W cm(-2) can be achieved in thin-film Bi2Te3-based superlattice thermoelectric modules. These devices utilize a p-type Sb2Te3/Bi2Te3 superlattice and n-type δ-doped Bi2Te3-xSex, both of which are grown heteroepitaxially using metalorganic chemical vapour deposition. We anticipate that the demonstration of these high-cooling-flux modules will have far-reaching impacts in diverse applications, such as advanced computer processors, radio-frequency power devices, quantum cascade lasers and DNA micro-arrays.",
"title": ""
},
{
"docid": "6fac5129579a54a8320ea84df29a0393",
"text": "Lewis G. Halsey is in the Department of Life Sciences, University of Roehampton, London, UK; Douglas Curran-Everett is in the Division of Biostatistics and Bioinformatics, National Jewish Health, and the Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Denver, Denver, Colorado, USA; Sarah L. Vowler is at Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK; and Gordon B. Drummond is at the University of Edinburgh, Edinburgh, UK. e-mail: [email protected] The fickle P value generates irreproducible results",
"title": ""
},
{
"docid": "c15f36dccebee50056381c41e6ddb2dc",
"text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.",
"title": ""
},
{
"docid": "e5687e8ac3eb1fbac18d203c049d9446",
"text": "Information security policy compliance is one of the key concerns that face organizations today. Although, technical and procedural security measures help improve information security, there is an increased need to accommodate human, social and organizational factors. While employees are considered the weakest link in information security domain, they also are assets that organizations need to leverage effectively. Employees' compliance with Information Security Policies (ISPs) is critical to the success of an information security program. The purpose of this research is to develop a measurement tool that provides better measures for predicting and explaining employees' compliance with ISPs by examining the role of information security awareness in enhancing employees' compliance with ISPs. The study is the first to address compliance intention from a users' perspective. Overall, analysis results indicate strong support for the proposed instrument and represent an early confirmation for the validation of the underlying theoretical model.",
"title": ""
},
{
"docid": "45ce30113e80cc6a28f243b0d1661c58",
"text": "We describe and compare several methods for generating game character controllers that mimic the playing style of a particular human player, or of a population of human players, across video game levels. Similarity in playing style is measured through an evaluation framework, that compares the play trace of one or several human players with the punctuated play trace of an AI player. The methods that are compared are either hand-coded, direct (based on supervised learning) or indirect (based on maximising a similarity measure). We find that a method based on neuroevolution performs best both in terms of the instrumental similarity measure and in phenomenological evaluation by human spectators. A version of the classic platform game “Super Mario Bros” is used as the testbed game in this study but the methods are applicable to other games that are based on character movement in space.",
"title": ""
},
{
"docid": "9c1687323661ccb6bf2151824edc4260",
"text": "In this work we present the design of a digitally controlled ring type oscillator in 0.5 μm CMOS technology for a low-cost and portable radio-frequency diathermy (RFD) device. The oscillator circuit is composed by a low frequency ring oscillator (LFRO), a voltage controlled ring oscillator (VCRO), and a logic control. The digital circuit generates an input signal for the LFO, which generates a voltage ramp that controls the oscillating output signal of the VCRO in the range of 500 KHz to 1 MHz. Simulation results show that the proposed circuit exhibits controllable output characteristics in the range of 500 KHz–1 MHz, with low power consumption and low phase noise, making it suitable for a portable RFD device.",
"title": ""
},
{
"docid": "f267e8cfbe10decbe16fa83c97e76049",
"text": "The growing prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities, backgrounds and styles. There is thus a growing need to accomodate for individual differences in such e-learning systems. This paper presents a new algorithm for personliazing educational content to students that combines collaborative filtering algorithms with social choice theory. The algorithm constructs a “difficulty” ranking over questions for a target student by aggregating the ranking of similar students, as measured by different aspects of their performance on common past questions, such as grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for a target student, rather than ordering them according to predicted performance, which is prone to error. The algorithm was tested on two large real world data sets containing tens of thousands of students and a million records. Its performance was compared to a variety of personalization methods as well as a non-personalized method that relied on a domain expert. It was able to significantly outperform all of these approaches according to standard information retrieval metrics. Our approach can potentially be used to support teachers in tailoring problem sets and exams to individual students and students in informing them about areas they may need to strengthen.",
"title": ""
},
{
"docid": "93e93d2278706638859f5f4b1601bfa6",
"text": "To acquire accurate, real-time hyperspectral images with high spatial resolution, we develop two types of low-cost, lightweight Whisk broom hyperspectral sensors that can be loaded onto lightweight unmanned autonomous vehicle (UAV) platforms. A system is composed of two Mini-Spectrometers, a polygon mirror, references for sensor calibration, a GPS sensor, a data logger and a power supply. The acquisition of images with high spatial resolution is realized by a ground scanning along a direction perpendicular to the flight direction based on the polygon mirror. To cope with the unstable illumination condition caused by the low-altitude observation, skylight radiation and dark current are acquired in real-time by the scanning structure. Another system is composed of 2D optical fiber array connected to eight Mini-Spectrometers and a telephoto lens, a convex lens, a micro mirror, a GPS sensor, a data logger and a power supply. The acquisition of images is realized by a ground scanning based on the rotation of the micro mirror.",
"title": ""
},
{
"docid": "91f89990f9d41d3a92cbff38efc56b57",
"text": "ID3 algorithm was a classic classification of data mining. It always selected the attribute with many values. The attribute with many values wasn't the correct one, and it always created wrong classification. In the application of intrusion detection system, it would created fault alarm and omission alarm. To this fault, an improved decision tree algorithm was proposed. Though improvement of information gain formula, the correct attribute would be got. The decision tree was created after the data collected classified correctly. The tree would be not high and has a few of branches. The rule set would be got based on the decision tree. Experimental results showed the effectiveness of the algorithm, false alarm rate and omission rate decreased, increasing the detection rate and reducing the space consumption.",
"title": ""
},
{
"docid": "f519e878b3aae2f0024978489db77425",
"text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
}
] | scidocsrr |
e67f28a335c0596d3ea636c5722ff0e2 | Comparison between security majors in virtual machine and linux containers | [
{
"docid": "b7222f86da6f1e44bd1dca88eb59dc4b",
"text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.",
"title": ""
}
] | [
{
"docid": "11a4536e40dde47e024d4fe7541b368c",
"text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.",
"title": ""
},
{
"docid": "9dbd988e0e7510ddf4fce9d5a216f9d6",
"text": "Tooth abutments can be prepared to receive fixed dental prostheses with different types of finish lines. The literature reports different complications arising from tooth preparation techniques, including gingival recession. Vertical preparation without a finish line is a technique whereby the abutments are prepared by introducing a diamond rotary instrument into the sulcus to eliminate the cementoenamel junction and to create a new prosthetic cementoenamel junction determined by the prosthetic margin. This article describes 2 patients whose dental abutments were prepared to receive ceramic restorations using vertical preparation without a finish line.",
"title": ""
},
{
"docid": "4b6b9539468db238d92e9762b2650b61",
"text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: [email protected] J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). 
It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke",
"title": ""
},
{
"docid": "e6d359934523ed73b2f9f2ac66fd6096",
"text": "We investigate a novel and important application domain for deep RL: network routing. The question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven approach has received much attention recently. We explore this question in the context of the, arguably, most fundamental networking task: routing. Can ideas and techniques from machine learning be leveraged to automatically generate “good” routing configurations? We observe that the routing domain poses significant challenges for data-driven network protocol design and report on preliminary results regarding the power of data-driven routing. Our results suggest that applying deep reinforcement learning to this context yields high performance and is thus a promising direction for further research. We outline a research agenda for data-driven routing.",
"title": ""
},
{
"docid": "a2314ce56557135146e43f0d4a02782d",
"text": "This paper proposes a carrier-based pulse width modulation (CB-PWM) method with synchronous switching technique for a Vienna rectifier. In this paper, a Vienna rectifier is one of the 3-level converter topologies. It is similar to a 3-level T-type topology the used back-to-back switches. When CB-PWM switching method is used, a Vienna rectifier is operated with six PWM signals. On the other hand, when the back-to-back switches are synchronized, PWM signals can be reduced to three from six. However, the synchronous switching method has a problem that the current distortion around zero-crossing point is worse than one of the conventional CB-PWM switching method. To improve current distortions, this paper proposes a reactive current injection technique. The performance and effectiveness of the proposed synchronous switching method are verified by simulation with a 5-kW Vienna rectifier.",
"title": ""
},
{
"docid": "e50ce59ede6ad5c7a89309aed6aa06aa",
"text": "In this paper, we discuss our ongoing efforts to construct a scientific paper browsing system that helps users to read and understand advanced technical content distributed in PDF. Since PDF is a format specifically designed for printing, layout and logical structures of documents are indistinguishably embedded in the file. It requires much effort to extract natural language text from PDF files, and reversely, display semantic annotations produced by NLP tools on the original page layout. In our browsing system, we tackle these issues caused by the gap between printable document and plain text. Our system provides ways to extract natural language sentences from PDF files together with their logical structures, and also to map arbitrary textual spans to their corresponding regions on page images. We setup a demonstration system using papers published in ACL anthology and demonstrate the enhanced search and refined recommendation functions which we plan to make widely available to NLP researchers.",
"title": ""
},
{
"docid": "53a962f9ab6dacf876e129ce3039fd69",
"text": "9 Background. Office of Academic Affairs (OAA), Office of Student Life (OSL) and Information Technology Helpdesk (ITD) are support functions within a university which receives hundreds of email messages on the daily basis. A large percentage of emails received by these departments are frequent and commonly used queries or request for information. Responding to every query by manually typing is a tedious and time consuming task and an automated approach for email response suggestion can save lot of time. 10",
"title": ""
},
{
"docid": "51f437b8631ba8494d3cf3bad7578794",
"text": "The notion of a <italic>program slice</italic>, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point <italic>p</italic> and a variable <italic>x</italic>; the slice consists of all statements of the program that might affect the value of <italic>x</italic> at point <italic>p</italic>. This paper concerns the problem of interprocedural slicing—generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a <italic>system dependence graph</italic>, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point <italic>p</italic> and an <italic>arbitrary</italic> variable, a slice must be taken with respect to a variable that is <italic>defined</italic> or <italic>used</italic> at <italic>p</italic>.)\nThe chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent <italic>transitive</italic> dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.",
"title": ""
},
{
"docid": "efa21b9d1cd973fc068074b94b60bf08",
"text": "BACKGROUND\nDermoscopy is useful in evaluating skin tumours, but its applicability also extends into the field of inflammatory skin disorders. Discoid lupus erythematosus (DLE) represents the most common subtype of cutaneous lupus erythematosus. While dermoscopy and videodermoscopy have been shown to aid the differentiation of scalp DLE from other causes of scarring alopecia, limited data exist concerning dermoscopic criteria of DLE in other locations, such as the face, trunk and extremities.\n\n\nOBJECTIVE\nTo describe the dermoscopic criteria observed in a series of patients with DLE located on areas other than the scalp, and to correlate them to the underlying histopathological alterations.\n\n\nMETHODS\nDLE lesions located on the face, trunk and extremities were dermoscopically and histopathologically examined. Selection of the dermoscopic variables included in the evaluation process was based on data in the available literature on DLE of the scalp and on our preliminary observations. Analysis of data was done with SPSS analysis software.\n\n\nRESULTS\nFifty-five lesions from 37 patients with DLE were included in the study. Perifollicular whitish halo, follicular keratotic plugs and telangiectasias were the most common dermoscopic criteria. Statistical analysis revealed excellent correlation between dermoscopic and histopathological findings. Notably, a time-related alteration of dermoscopic features was observed.\n\n\nCONCLUSIONS\nThe present study provides new insights into the dermoscopic variability of DLE located on the face, trunk and extremities.",
"title": ""
},
{
"docid": "8e2006ca72dbc6be6592e21418b7f3ba",
"text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.",
"title": ""
},
{
"docid": "6e527b021720cc006ec18a996abf36b5",
"text": "Flow cytometry is a sophisticated instrument measuring multiple physical characteristics of a single cell such as size and granularity simultaneously as the cell flows in suspension through a measuring device. Its working depends on the light scattering features of the cells under investigation, which may be derived from dyes or monoclonal antibodies targeting either extracellular molecules located on the surface or intracellular molecules inside the cell. This approach makes flow cytometry a powerful tool for detailed analysis of complex populations in a short period of time. This review covers the general principles and selected applications of flow cytometry such as immunophenotyping of peripheral blood cells, analysis of apoptosis and detection of cytokines. Additionally, this report provides a basic understanding of flow cytometry technology essential for all users as well as the methods used to analyze and interpret the data. Moreover, recent progresses in flow cytometry have been discussed in order to give an opinion about the future importance of this technology.",
"title": ""
},
{
"docid": "109a1276cd743a522b9e0a36b9b58f32",
"text": "This study examined the effects of a virtual reality distraction intervention on chemotherapy-related symptom distress levels in 16 women aged 50 and older. A cross-over design was used to answer the following research questions: (1) Is virtual reality an effective distraction intervention for reducing chemotherapy-related symptom distress levels in older women with breast cancer? (2) Does virtual reality have a lasting effect? Chemotherapy treatments are intensive and difficult to endure. One way to cope with chemotherapy-related symptom distress is through the use of distraction. For this study, a head-mounted display (Sony PC Glasstron PLM - S700) was used to display encompassing images and block competing stimuli during chemotherapy infusions. The Symptom Distress Scale (SDS), Revised Piper Fatigue Scale (PFS), and the State Anxiety Inventory (SAI) were used to measure symptom distress. For two matched chemotherapy treatments, one pre-test and two post-test measures were employed. Participants were randomly assigned to receive the VR distraction intervention during one chemotherapy treatment and received no distraction intervention (control condition) during an alternate chemotherapy treatment. Analysis using paired t-tests demonstrated a significant decrease in the SAI (p = 0.10) scores immediately following chemotherapy treatments when participants used VR. No significant changes were found in SDS or PFS values. There was a consistent trend toward improved symptoms on all measures 48 h following completion of chemotherapy. Evaluation of the intervention indicated that women thought the head mounted device was easy to use, they experienced no cybersickness, and 100% would use VR again.",
"title": ""
},
{
"docid": "b15ed1584eb030fba1ab3c882983dbf0",
"text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.",
"title": ""
},
{
"docid": "7cb07d4ed42409269337c6ea1d6aa5c6",
"text": "Since a parallel structure is a closed kinematics chain, all legs are connected from the origin of the tool point by a parallel connection. This connection allows a higher precision and a higher velocity. Parallel kinematic manipulators have better performance compared to serial kinematic manipulators in terms of a high degree of accuracy, high speeds or accelerations and high stiffness. Therefore, they seem perfectly suitable for industrial high-speed applications, such as pick-and-place or micro and high-speed machining. They are used in many fields such as flight simulation systems, manufacturing and medical applications. One of the most popular parallel manipulators is the general purpose 6 degree of freedom (DOF) Stewart Platform (SP) proposed by Stewart in 1965 as a flight simulator (Stewart, 1965). It consists of a top plate (moving platform), a base plate (fixed base), and six extensible legs connecting the top plate to the bottom plate. SP employing the same architecture of the Gough mechanism (Merlet, 1999) is the most studied type of parallel manipulators. This is also known as Gough–Stewart platforms in literature. Complex kinematics and dynamics often lead to model simplifications decreasing the accuracy. In order to overcome this problem, accurate kinematic and dynamic identification is needed. The kinematic and dynamic modeling of SP is extremely complicated in comparison with serial robots. Typically, the robot kinematics can be divided into forward kinematics and inverse kinematics. For a parallel manipulator, inverse kinematics is straight forward and there is no complexity deriving the equations. However, forward kinematics of SP is very complicated and difficult to solve since it requires the solution of many non-linear equations. Moreover, the forward kinematic problem generally has more than one solution. As a result, most research papers concentrated on the forward kinematics of the parallel manipulators (Bonev and Ryu, 2000; Merlet, 2004; Harib and Srinivasan, 2003; Wang, 2007). For the design and the control of the SP manipulators, the accurate dynamic model is very essential. The dynamic modeling of parallel manipulators is quite complicated because of their closed-loop structure, coupled relationship between system parameters, high nonlinearity in system dynamics and kinematic constraints. Robot dynamic modeling can be also divided into two topics: inverse and forward dynamic model. The inverse dynamic model is important for system control while the forward model is used for system simulation. To obtain the dynamic model of parallel manipulators, there are many valuable studies published by many researches in the literature. The dynamic analysis of parallel manipulators has been traditionally performed through several different methods such as",
"title": ""
},
{
"docid": "1f9bbe5628ad4bb37d2b6fa3b53bcc12",
"text": "The problem of consistently estimating the sparsity pattern of a vector beta* isin Rp based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in beta*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and linfin-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N (0, Sigma) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < thetasl(Sigma) les thetasu(Sigma) < +infin with the following properties: for any delta > 0, if n > 2 (thetasu + delta) klog (p- k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2 (thetasl - delta)klog (p - k), then the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Sigma = Iptimesp), we show that thetasl = thetas<u = 1, so that the precise threshold n = 2 klog(p- k) is exactly determined.",
"title": ""
},
{
"docid": "42ecca95c15cd1f92d6e5795f99b414a",
"text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.",
"title": ""
},
{
"docid": "25a13221c6cda8e1d4adf451701e421a",
"text": "Block-based local binary patterns a.k.a. enhanced local binary patterns (ELBPs) have proven to be a highly discriminative descriptor for face recognition and image retrieval. Since this descriptor is mainly composed by histograms, little work (if any) has been done for selecting its relevant features (either the bins or the blocks). In this paper, we address feature selection for both the classic ELBP representation and the recently proposed color quaternionic LBP (QLBP). We introduce a filter method for the automatic weighting of attributes or blocks using an improved version of the margin-based iterative search Simba algorithm. This new improved version introduces two main modifications: (i) the hypothesis margin of a given instance is computed by taking into account the K-nearest neighboring examples within the same class as well as the K-nearest neighboring examples with a different label; (ii) the distances between samples and their nearest neighbors are computed using the weighted $$\\chi ^2$$ χ 2 distance instead of the Euclidean one. This algorithm has been compared favorably with several competing feature selection algorithms including the Euclidean-based Simba as well as variance and Fisher score algorithms giving higher performances. The proposed method is useful for other descriptors that are formed by histograms. Experimental results show that the QLBP descriptor allows an improvement of the accuracy in discriminating faces compared with the ELBP. They also show that the obtained selection (attributes or blocks) can either improve recognition performance or maintain it with a significant reduction in the descriptor size.",
"title": ""
},
{
"docid": "1d7b7ea9f0cc284f447c11902bad6685",
"text": "In the last few years the efficiency of secure multi-party computation (MPC) increased in several orders of magnitudes. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damg̊ard et al. Applications such as voting and some auctions are perfect use-case for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ i.e. our online phase has complexity approximately twice that of SPDZ.",
"title": ""
},
{
"docid": "1e44c9ab1f66f7bf7dced95bd3f0a122",
"text": "The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.",
"title": ""
},
{
"docid": "f50d0948319a4487b43b94bac09e5fab",
"text": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"title": ""
}
] | scidocsrr |
44c45c53021be2a5091a012f9299fe3c | First Step Towards End-to-End Parametric TTS Synthesis: Generating Spectral Parameters with Neural Attention | [
{
"docid": "280c39aea4584e6f722607df68ee28dc",
"text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.",
"title": ""
}
] | [
{
"docid": "b4f1559a05ded4584ef06f2d660053ac",
"text": "A circularly polarized microstrip array antenna is proposed for Ka-band satellite applications. The antenna element consists of an L-shaped patch with parasitic circular-ring radiator. A sequentially rotated 2×2 antenna array exhibits a wideband 3-dB axial ratio bandwidth of 20.6% (25.75 GHz - 31.75 GHz) and 2;1-VSWR bandwidth of 24.0% (25.5 GHz - 32.5 GHz). A boresight gain of 10-11.8 dBic is achieved across a frequency range from 26 GHz to 32 GHz. An 8×8 antenna array exhibits a boresight gain of greater than 24 dBic over 27.25 GHz-31.25 GHz.",
"title": ""
},
{
"docid": "c601040737c42abcef996e027fabc8cf",
"text": "This article assumes that brands should be managed as valuable, long-term corporate assets. It is proposed that for a true brand asset mindset to be achieved, the relationship between brand loyalty and brand value needs to be recognised within the management accounting system. It is also suggested that strategic brand management is achieved by having a multi-disciplinary focus, which is facilitated by a common vocabulary. This article seeks to establish the relationships between the constructs and concepts of branding, and to provide a framework and vocabulary that aids effective communication between the functions of accounting and marketing. Performance measures for brand management are also considered, and a model for the management of brand equity is provided. Very simply, brand description (or identity or image) is tailored to the needs and wants of a target market using the marketing mix of product, price, place, and promotion. The success or otherwise of this process determines brand strength or the degree of brand loyalty. A brand's value is determined by the degree of brand loyalty, as this implies a guarantee of future cash flows. Feldwick considered that using the term brand equity creates the illusion that an operational relationship exists between brand description, brand strength and brand value that cannot be demonstrated to operate in practice. This is not surprising, given that brand description and brand strength are, broadly speaking, within the remit of marketers and brand value has been considered largely an accounting issue. However, for brands to be managed strategically as long-term assets, the relationship outlined in Figure 1 needs to be operational within the management accounting system. The efforts of managers of brands could be reviewed and assessed by the measurement of brand strength and brand value, and brand strategy modified accordingly. Whilst not a simple process, the measurement of outcomes is useful as part of a range of diagnostic tools for management. This is further explored in the summary discussion. Whilst there remains a diversity of opinion on the definition and basis of brand equity, most approaches consider brand equity to be a strategic issue, albeit often implicitly. The following discussion explores the range of interpretations of brand equity, showing how they relate to Feldwick's (1996) classification. Ambler and Styles (1996) suggest that managers of brands choose between taking profits today or storing them for the future, with brand equity being the `̀ . . . store of profits to be realised at a later date.'' Their definition follows Srivastava and Shocker (1991) with brand equity suggested as; . . . the aggregation of all accumulated attitudes and behavior patterns in the extended minds of consumers, distribution channels and influence agents, which will enhance future profits and long term cash flow. This definition of brand equity distinguishes the brand asset from its valuation, and falls into Feldwick's (1996) brand strength category of brand equity. This approach is intrinsically strategic in nature, with the emphasis away from short-term profits. Davis (1995) also emphasises the strategic importance of brand equity when he defines brand value (one form of brand equity) as `̀ . . . the potential strategic contributions and benefits that a brand can make to a company.'' In this definition, brand value is the resultant form of brand equity in Figure 1, or the outcome of consumer-based brand equity. 
Keller (1993) also takes the consumer-based brand strength approach to brand equity, suggesting that brand equity represents a condition in which the consumer is familiar with the brand and recalls some favourable, strong and unique brand associations. Hence, there is a differential effect of brand knowledge on consumer response to the marketing of a brand. This approach is aligned to the relationship described in Figure 1, where brand strength is a function of brand description. Winters (1991) relates brand equity to added value by suggesting that brand equity involves the value added to a product by consumers' associations and perceptions of a particular brand name. It is unclear in what way added value is being used, but brand equity fits the categories of brand description and brand strength as outlined above. Leuthesser (1988) offers a broad definition of brand equity as: the set of associations and behaviour on the part of a brand's customers, channel members and parent corporation that permits the brand to earn greater volume or greater margins than it could without the brand name. This definition covers Feldwick's classifications of brand description and brand strength, implying a similar relationship to that outlined in Figure 1. The key difference to Figure 1 is that the outcome of brand strength is not specified as brand value, but implies market share, and profit as outcomes. Marketers tend to describe, rather than ascribe a figure to, the outcomes of brand strength. Pitta and Katsanis (1995) suggest that brand equity increases the probability of brand choice, leads to brand loyalty and “insulates the brand from a measure of competitive threats.” Aaker (1991) suggests that strong brands will usually provide higher profit margins and better access to distribution channels, as well as providing a broad platform for product line extensions. Brand extension[1] is a commonly cited advantage of high brand equity, with Dacin and Smith (1994) and Keller and Aaker (1992) suggesting that successful brand extensions can also build brand equity. Loken and John (1993) and Aaker (1993) advise caution in that poor brand extensions can erode brand equity. [Figure 1: The brand equity chain] Farquhar (1989) suggests a relationship between high brand equity and market power, asserting that: The competitive advantage of firms that have brands with high equity includes the opportunity for successful extensions, resilience against competitors' promotional pressures, and creation of barriers to competitive entry. This relationship is summarised in Figure 2. Figure 2 indicates that there can be more than one outcome determined by brand strength apart from brand value. It should be noted that it is argued by Wood (1999) that brand value measurements could be used as an indicator of market power. Achieving a high degree of brand strength may be considered an important objective for managers of brands. If we accept that the relationships highlighted in Figures 1 and 2 are something that we should be aiming for, then it is logical to focus our attention on optimising brand description. This requires a rich understanding of the brand construct itself. Yet, despite an abundance of literature, the definitive brand construct has yet to be produced. Subsequent discussion explores the brand construct itself, and highlights the specific relationship between brands and added value. 
This relationship is considered to be key to the variety of approaches to brand definition within marketing, and is currently an area of incompatibility between marketing and accounting.",
"title": ""
},
{
"docid": "81c02e708a21532d972aca0b0afd8bb5",
"text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.",
"title": ""
},
{
"docid": "ce2139f51970bfa5bd3738392f55ea48",
"text": "A novel type of dual circular polarizer for simultaneously receiving and transmitting right-hand and left-hand circularly polarized waves is developed and tested. It consists of a H-plane T junction of rectangular waveguide, one circular waveguide as an Eplane arm located on top of the junction, and two metallic pins used for matching. The theoretical analysis and design of the three-physicalport and four-mode polarizer were researched by solving ScatteringMatrix of the network and using a full-wave electromagnetic simulation tool. The optimized polarizer has the advantages of a very compact size with a volume smaller than 0.6λ3, low complexity and manufacturing cost. A couple of the polarizer has been manufactured and tested, and the experimental results are basically consistent with the theories.",
"title": ""
},
{
"docid": "33bc830ab66c9864fd4c45c463c2c9da",
"text": "We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to the direct user input, our proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need of hand-drawn sketching at all. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation. To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high quality synthesis results without additional post-processing.",
"title": ""
},
{
"docid": "d1f02e2f57cffbc17387de37506fddc9",
"text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.",
"title": ""
},
{
"docid": "18bc3abbd6a4f51fdcfbafcc280f0805",
"text": "Complex disease genetics has been revolutionised in recent years by the advent of genome-wide association (GWA) studies. The chronic inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis have seen notable successes culminating in the discovery of 99 published susceptibility loci/genes (71 Crohn's disease; 47 ulcerative colitis) to date. Approximately one-third of loci described confer susceptibility to both Crohn's disease and ulcerative colitis. Amongst these are multiple genes involved in IL23/Th17 signalling (IL23R, IL12B, JAK2, TYK2 and STAT3), IL10, IL1R2, REL, CARD9, NKX2.3, ICOSLG, PRDM1, SMAD3 and ORMDL3. The evolving genetic architecture of IBD has furthered our understanding of disease pathogenesis. For Crohn's disease, defective processing of intracellular bacteria has become a central theme, following gene discoveries in autophagy and innate immunity (associations with NOD2, IRGM, ATG16L1 are specific to Crohn's disease). Genetic evidence has also demonstrated the importance of barrier function to the development of ulcerative colitis (HNF4A, LAMB1, CDH1 and GNA12). However, when the data are analysed in more detail, deeper themes emerge including the shared susceptibility seen with other diseases. Many immune-mediated diseases overlap in this respect, paralleling the reported epidemiological evidence. However, in several cases the reported shared susceptibility appears at odds with the clinical picture. Examples include both type 1 and type 2 diabetes mellitus. In this review we will detail the presently available data on the genetic overlap between IBD and other diseases. The discussion will be informed by the epidemiological data in the published literature and the implications for pathogenesis and therapy will be outlined. This arena will move forwards very quickly in the next few years. Ultimately, we anticipate that these genetic insights will transform the landscape of common complex diseases such as IBD.",
"title": ""
},
{
"docid": "f1a2d243c58592c7e004770dfdd4a494",
"text": "Dynamic voltage scaling (DVS), which adjusts the clockspeed and supply voltage dynamically, is an effective techniquein reducing the energy consumption of embedded real-timesystems. The energy efficiency of a DVS algorithm largelydepends on the performance of the slack estimation methodused in it. In this paper, we propose a novel DVS algorithmfor periodic hard real-time tasks based on an improved slackestimation algorithm. Unlike the existing techniques, the proposedmethod takes full advantage of the periodic characteristicsof the real-time tasks under priority-driven schedulingsuch as EDF. Experimental results show that the proposed algorithmreduces the energy consumption by 20~40% over theexisting DVS algorithm. The experiment results also show thatour algorithm based on the improved slack estimation methodgives comparable energy savings to the DVS algorithm basedon the theoretically optimal (but impractical) slack estimationmethod.",
"title": ""
},
{
"docid": "eebeb59c737839e82ecc20a748b12c6b",
"text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.",
"title": ""
},
{
"docid": "f05dd13991878e5395561ce24564975c",
"text": "Detecting informal settlements might be one of the most challenging tasks within urban remote sensing. This phenomenon occurs mostly in developing countries. In order to carry out the urban planning and development tasks necessary to improve living conditions for the poorest world-wide, an adequate spatial data basis is needed (see Mason, O. S. & Fraser, C. S., 1998). This can only be obtained through the analysis of remote sensing data, which represents an additional challenge from a technical point of view. Formal settlements by definition are mapped sufficiently for most purposes. However, this does not hold for informal settlements. Due to their microstructure and instability of shape, the detection of these settlements is substantially more difficult. Hence, more sophisticated data and methods of image analysis are necessary, which ideally act as a spatial data basis for a further informal settlement management. While these methods are usually quite labour-intensive, one should nonetheless bear in mind cost-effectivity of the applied methods and tools. In the present article, it will be shown how eCognition can be used to detect and discriminate informal settlements from other land-use-forms by describing typical characteristics of colour, texture, shape and context. This software is completely objectoriented and uses a patented, multi-scale image segmentation approach. The generated segments act as image objects whose physical and contextual characteristics can be described by means of fuzzy logic. The article will show methods and strategies using eCognition to detect informal settlements from high resolution space-borne image data such as IKONOS. A final discussion of the results will be given.",
"title": ""
},
{
"docid": "e2e8aac3945fa7206dee21792133b77b",
"text": "We provide an alternative to the maximum likelihood method for making inferences about the parameters of the logistic regression model. The method is based appropriate permutational distributions of sufficient statistics. It is useful for analysing small or unbalanced binary data with covariates. It also applies to small-sample clustered binary data. We illustrate the method by analysing several biomedical data sets.",
"title": ""
},
{
"docid": "14dfa311d7edf2048ebd4425ae38d3e2",
"text": "Forest species recognition has been traditionally addressed as a texture classification problem, and explored using standard texture methods such as Local Binary Patterns (LBP), Local Phase Quantization (LPQ) and Gabor Filters. Deep learning techniques have been a recent focus of research for classification problems, with state-of-the art results for object recognition and other tasks, but are not yet widely used for texture problems. This paper investigates the usage of deep learning techniques, in particular Convolutional Neural Networks (CNN), for texture classification in two forest species datasets - one with macroscopic images and another with microscopic images. Given the higher resolution images of these problems, we present a method that is able to cope with the high-resolution texture images so as to achieve high accuracy and avoid the burden of training and defining an architecture with a large number of free parameters. On the first dataset, the proposed CNN-based method achieves 95.77% of accuracy, compared to state-of-the-art of 97.77%. On the dataset of microscopic images, it achieves 97.32%, beating the best published result of 93.2%.",
"title": ""
},
{
"docid": "9343a2775b5dac7c48c1c6cec3d0a59c",
"text": "The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) &equil; B. The sequence S may make use of the operations: <underline>Change, Insert, Delete</underline> and <underline>Swaps</underline>, each of constant cost W<subscrpt>C</subscrpt>, W<subscrpt>I</subscrpt>, W<subscrpt>D</subscrpt>, and W<subscrpt>S</subscrpt> respectively. Swap permits any pair of adjacent characters to be interchanged.\n The principal results of this paper are:\n (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time Ø(¦A¦* ¦B¦* ¦V¦<supscrpt>s</supscrpt>*s), where s &equil; min(4W<subscrpt>C</subscrpt>, W<subscrpt>I</subscrpt>+W<subscrpt>D</subscrpt>)/W<subscrpt>S</subscrpt> + 1;\n (2) presentation of polynomial time algorithms for the cases (a) W<subscrpt>S</subscrpt> &equil; 0, (b) W<subscrpt>S</subscrpt> > 0, W<subscrpt>C</subscrpt>&equil; W<subscrpt>I</subscrpt>&equil; W<subscrpt>D</subscrpt>&equil; @@@@;\n (3) proof that ESSCP, with W<subscrpt>I</subscrpt> < W<subscrpt>C</subscrpt> &equil; W<subscrpt>D</subscrpt> &equil; @@@@, 0 < W<subscrpt>S</subscrpt> < @@@@, suitably encoded, is NP-complete. (The remaining case, W<subscrpt>S</subscrpt>&equil; @@@@, reduces ESSCP to the string-to-string correction problem of [1], where an Ø( ¦A¦* ¦B¦) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.",
"title": ""
},
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "ca82ceafc6079f416d9f7b94a7a6a665",
"text": "When Big data and cloud computing join forces together, several domains like: healthcare, disaster prediction and decision making become easier and much more beneficial to users in term of information gathering, although cloud computing will reduce time and cost of analyzing information for big data, it may harm the confidentiality and integrity of the sensitive data, for instance, in healthcare, when analyzing disease's spreading area, the name of the infected people must remain secure, hence the obligation to adopt a secure model that protect sensitive data from malicious users. Several case studies on the integration of big data in cloud computing, urge on how easier it would be to analyze and manage big data in this complex envronement. Companies must consider outsourcing their sensitive data to the cloud to take advantage of its beneficial resources such as huge storage, fast calculation, and availability, yet cloud computing might harm the security of data stored and computed in it (confidentiality, integrity). Therefore, strict paradigm must be adopted by organization to obviate their outsourced data from being stolen, damaged or lost. In this paper, we compare between the existing models to secure big data implementation in the cloud computing. Then, we propose our own model to secure Big Data on the cloud computing environement, considering the lifecycle of data from uploading, storage, calculation to its destruction.",
"title": ""
},
{
"docid": "0fdd7f5c5cd1225567e89b456ef25ea0",
"text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.",
"title": ""
},
{
"docid": "922354208e78ed5154b8dfbe4ed14c7e",
"text": "Digital systems, especially those for mobile applications are sensitive to power consumption, chip size and costs. Therefore they are realized using fixed-point architectures, either dedicated HW or programmable DSPs. On the other hand, system design starts from a floating-point description. These requirements have been the motivation for FRIDGE (Fixed-point pRogrammIng DesiGn Environment), a design environment for the specification, evaluation and implementation of fixed-point systems. FRIDGE offers a seamless design flow from a floating- point description to a fixed-point implementation. Within this paper we focus on two core capabilities of FRIDGE: (1) the concept of an interactive, automated transformation of floating-point programs written in ANSI-C into fixed-point specifications, based on an interpolative approach. The design time reductions that can be achieved make FRIDGE a key component for an efficient HW/SW-CoDesign. (2) a fast fixed-point simulation that performs comprehensive compile-time analyses, reducing simulation time by one order of magnitude compared to existing approaches.",
"title": ""
},
{
"docid": "bfee1553c6207909abc9820e741d6e01",
"text": "Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency problem of CP-ABE is still a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in the practical application of ABE, especially for the devices or the processors with limited computational resources and power supply. In this paper, we proposed a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. And we designed a new way of key distribution that it can directly revoke a user or an attribute without updating other users’ keys during the attribute revocation phase. Besides, our scheme use linear secret sharing scheme access structure to enhance the expressiveness of the access policy. The security and performance analysis show that our scheme significantly improved the overall efficiency as well as ensured the security.",
"title": ""
},
{
"docid": "74812252b395dca254783d05e1db0cf5",
"text": "Cyber-Physical Security Testbeds serve as valuable experimental platforms to implement and evaluate realistic, complex cyber attack-defense experiments. Testbeds, unlike traditional simulation platforms, capture communication, control and physical system characteristics and their interdependencies adequately in a unified environment. In this paper, we show how the PowerCyber CPS testbed at Iowa State was used to implement and evaluate cyber attacks on one of the fundamental Wide-Area Control applications, namely, the Automatic Generation Control (AGC). We provide a brief overview of the implementation of the experimental setup on the testbed. We then present a case study using the IEEE 9 bus system to evaluate the impacts of cyber attacks on AGC. Specifically, we analyzed the impacts of measurement based attacks that manipulated the tie-line and frequency measurements, and control based attacks that manipulated the ACE values sent to generators. We found that these attacks could potentially create under frequency conditions and could cause unnecessary load shedding. As part of future work, we plan to extend this work and utilize the experimental setup to implement other sophisticated, stealthy attack vectors and also develop attack-resilient algorithms to detect and mitigate such attacks.",
"title": ""
},
{
"docid": "2ecb4d841ef57a3acdf05cbb727aecbf",
"text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.",
"title": ""
}
] | scidocsrr |
84997cf3e49c09bbc79045a6ce4e3810 | Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering | [
{
"docid": "34a7d306a788ab925db8d0afe4c21c5a",
"text": "The Sandbox is a flexible and expressive thinking environment that supports both ad-hoc and more formal analytical tasks. It is the evidence marshalling and sensemaking component for the analytical software environment called nSpace. This paper presents innovative Sandbox human information interaction capabilities and the rationale underlying them including direct observations of analysis work as well as structured interviews. Key capabilities for the Sandbox include “put-this-there” cognition, automatic process model templates, gestures for the fluid expression of thought, assertions with evidence and scalability mechanisms to support larger analysis tasks. The Sandbox integrates advanced computational linguistic functions using a Web Services interface and protocol. An independent third party evaluation experiment with the Sandbox has been completed. The experiment showed that analyst subjects using the Sandbox did higher quality analysis in less time than with standard tools. Usability test results indicated the analysts became proficient in using the Sandbox with three hours of training.",
"title": ""
},
{
"docid": "cff44da2e1038c8e5707cdde37bc5461",
"text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.",
"title": ""
}
] | [
{
"docid": "85a93038a98a4744c3b574f664a1199c",
"text": "This paper describes our construction of named-entity recognition (NER) systems in two Western Iranian languages, Sorani Kurdish and Tajik, as a part of a pilot study of Linguistic Rapid Response to potential emergency humanitarian relief situations. In the absence of large annotated corpora, parallel corpora, treebanks, bilingual lexica, etc., we found the following to be effective: exploiting distributional regularities in monolingual data, projecting information across closely related languages, and utilizing human linguist judgments. We show promising results on both a four-month exercise in Sorani and a two-day exercise in Tajik, achieved with minimal annotation costs.",
"title": ""
},
{
"docid": "bda0ae59319660987e9d2686d98e4b9a",
"text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.",
"title": ""
},
{
"docid": "ce1b4c5e15fd1d0777c26ca93a9cadbd",
"text": "In early studies on energy metabolism of tumor cells, it was proposed that the enhanced glycolysis was induced by a decreased oxidative phosphorylation. Since then it has been indiscriminately applied to all types of tumor cells that the ATP supply is mainly or only provided by glycolysis, without an appropriate experimental evaluation. In this review, the different genetic and biochemical mechanisms by which tumor cells achieve an enhanced glycolytic flux are analyzed. Furthermore, the proposed mechanisms that arguably lead to a decreased oxidative phosphorylation in tumor cells are discussed. As the O(2) concentration in hypoxic regions of tumors seems not to be limiting for the functioning of oxidative phosphorylation, this pathway is re-evaluated regarding oxidizable substrate utilization and its contribution to ATP supply versus glycolysis. In the tumor cell lines where the oxidative metabolism prevails over the glycolytic metabolism for ATP supply, the flux control distribution of both pathways is described. The effect of glycolytic and mitochondrial drugs on tumor energy metabolism and cellular proliferation is described and discussed. Similarly, the energy metabolic changes associated with inherent and acquired resistance to radiotherapy and chemotherapy of tumor cells, and those determined by positron emission tomography, are revised. It is proposed that energy metabolism may be an alternative therapeutic target for both hypoxic (glycolytic) and oxidative tumors.",
"title": ""
},
{
"docid": "ae87441b3ce5fd388101dc85ad25b558",
"text": "University of Tampere School of Management Author: MIIA HANNOLA Title: Critical factors in Customer Relationship Management system implementation Master’s thesis: 84 pages, 2 appendices Date: November 2016",
"title": ""
},
{
"docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9",
"text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.",
"title": ""
},
{
"docid": "57ab94ce902f4a8b0082cc4f42cd3b3f",
"text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors’ capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.",
"title": ""
},
{
"docid": "9c4a4a3253e7a279c1c2fd5582838942",
"text": "This Preview Edition of Designing Data-Intensive Applications, Chapters 1 and 2, is a work in progress. The final book is currently scheduled for release in July 2015 and will be available at oreilly.com and other retailers once it is published. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://safaribooksonline.com). For more information, contact our corporate/ institutional sales department: 800-998-9938 or [email protected]. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O'Reilly Media, Inc. was aware of a trademark claim, the designations have been printed in caps or initial caps. While every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein. Thinking About Data Systems 4 Reliability 6 Hardware faults 7 Software errors 8 Human errors 9 How important is reliability? 9 Scalability 10 Describing load 10 Describing performance 13 Approaches for coping with load 16 Maintainability 17 Operability: making life easy for operations 18 Simplicity: managing complexity 19 Plasticity: making change easy 20 Summary 21 Relational Model vs. Document Model 26 The birth of NoSQL 27 The object-relational mismatch 27 Many-to-one and many-to-many relationships 31 Are document databases repeating history? 34 Relational vs. document databases today 36 Query Languages for Data 40 Declarative queries on the web 41 MapReduce querying 43 iii",
"title": ""
},
{
"docid": "7d23d8d233a3fc7ff75edf361acbe642",
"text": "The diagnosis and treatment of chronic patellar instability caused by trochlear dysplasia can be challenging. A dysplastic trochlea leads to biomechanical and kinematic changes that often require surgical correction when symptomatic. In the past, trochlear dysplasia was classified using the 4-part Dejour classification system. More recently, new classification systems have been proposed. Future studies are needed to investigate long-term outcomes after trochleoplasty.",
"title": ""
},
{
"docid": "7cf28eb429d724d9213aed8aa1f192ec",
"text": "Idiom token classification is the task of deciding for a set of potentially idiomatic phrases whether each occurrence of a phrase is a literal or idiomatic usage of the phrase. In this work we explore the use of Skip-Thought Vectors to create distributed representations that encode features that are predictive with respect to idiom token classification. We show that classifiers using these representations have competitive performance compared with the state of the art in idiom token classification. Importantly, however, our models use only the sentence containing the target phrase as input and are thus less dependent on a potentially inaccurate or incomplete model of discourse context. We further demonstrate the feasibility of using these representations to train a competitive general idiom token classifier.",
"title": ""
},
{
"docid": "8a380625849ba678e8fed0e837510423",
"text": "Congestive heart failure (CHF) is a common clinical disorder that results in pulmonary vascular congestion and reduced cardiac output. CHF should be considered in the differential diagnosis of any adult patient who presents with dyspnea and/or respiratory failure. The diagnosis of heart failure is often determined by a careful history and physical examination and characteristic chest-radiograph findings. The measurement of serum brain natriuretic peptide and echocardiography have substantially improved the accuracy of diagnosis. Therapy for CHF is directed at restoring normal cardiopulmonary physiology and reducing the hyperadrenergic state. The cornerstone of treatment is a combination of an angiotensin-converting-enzyme inhibitor and slow titration of a beta blocker. Patients with CHF are prone to pulmonary complications, including obstructive sleep apnea, pulmonary edema, and pleural effusions. Continuous positive airway pressure and noninvasive positive-pressure ventilation benefit patients in CHF exacerbations.",
"title": ""
},
{
"docid": "10d53a05fcfb93231ab100be7eeb6482",
"text": "We present a computer audition system that can both annotate novel audio tracks with semantically meaningful words and retrieve relevant tracks from a database of unlabeled audio content given a text-based query. We consider the related tasks of content-based audio annotation and retrieval as one supervised multiclass, multilabel problem in which we model the joint probability of acoustic features and words. We collect a data set of 1700 human-generated annotations that describe 500 Western popular music tracks. For each word in a vocabulary, we use this data to train a Gaussian mixture model (GMM) over an audio feature space. We estimate the parameters of the model using the weighted mixture hierarchies expectation maximization algorithm. This algorithm is more scalable to large data sets and produces better density estimates than standard parameter estimation techniques. The quality of the music annotations produced by our system is comparable with the performance of humans on the same task. Our ldquoquery-by-textrdquo system can retrieve appropriate songs for a large number of musically relevant words. We also show that our audition system is general by learning a model that can annotate and retrieve sound effects.",
"title": ""
},
{
"docid": "46106f37992ec3089dfe6bfd31e699c8",
"text": "For doubly fed induction generator (DFIG)-based wind turbine, the main constraint to ride-through serious grid faults is the limited converter rating. In order to realize controllable low voltage ride through (LVRT) under the typical converter rating, transient control reference usually need to be modified to adapt to the constraint of converter's maximum output voltage. Generally, the generation of such reference relies on observation of stator flux and even sequence separation. This is susceptible to observation errors during the fault transient; moreover, it increases the complexity of control system. For this issue, this paper proposes a scaled current tracking control for rotor-side converter (RSC) to enhance its LVRT capacity without flux observation. In this method, rotor current is controlled to track stator current in a certain scale. Under proper tracking coefficient, both the required rotor current and rotor voltage can be constrained within the permissible ranges of RSC, thus it can maintain DFIG under control to suppress overcurrent and overvoltage. Moreover, during fault transient, electromagnetic torque oscillations can be greatly suppressed. Based on it, certain additional positive-sequence item is injected into rotor current reference to supply dynamic reactive support. Simulation and experimental results demonstrate the feasibility of the proposed method.",
"title": ""
},
{
"docid": "05894f874111fd55bd856d4768c61abe",
"text": "Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to determining accurately the contacts that occur between pairs of objects, one needs also to do so at real-time rates. Applications such as haptic force-feedback can require over 1,000 collision queries per second. In this paper, we develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is to use a “discrete orientation polytope” (“k-dop”), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. We compare a variety of methods for constructing hierarchies (“BV-trees”) of bounding k-dops. Further, we propose algorithms for maintaining an effective BV-tree of k-dops for moving objects, as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment. Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.",
"title": ""
},
{
"docid": "00639757a1a60fe8e56b868bd6e2ff62",
"text": "Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated in <1:20,000 newborns. Despite its rarity, this lesion is important because it may associate with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion.",
"title": ""
},
{
"docid": "5d8f33b7f28e6a8d25d7a02c1f081af1",
"text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weaklystructured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900× 10−12m and the Carbon atom approx. 300 pm . A hepatitis virus is relatively large with 45nm = 45× 10−9m and the X-Chromosome much bigger with 7μm = 7× 10−6m . We produce most of the “Big Data” in the omics world, we estimate many Terabytes ( 1TB = 1× 10 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1× 1018 Byte ). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. 
At the same time, human",
"title": ""
},
{
"docid": "b23230f0386f185b7d5eb191034d58ec",
"text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.",
"title": ""
},
{
"docid": "6e9ed92dc37e2d7e7ed956ed7b880ff2",
"text": "Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).",
"title": ""
},
{
"docid": "824bcc0f9f4e71eb749a04f441891200",
"text": "We characterize the singular values of the linear transformation associated with a convolution applied to a two-dimensional feature map with multiple channels. Our characterization enables efficient computation of the singular values of convolutional layers used in popular deep neural network architectures. It also leads to an algorithm for projecting a convolutional layer onto the set of layers obeying a bound on the operator norm of the layer. We show that this is an effective regularizer; periodically applying these projections during training improves the test error of a residual network on CIFAR-10 from 6.2% to 5.3%.",
"title": ""
},
{
"docid": "e68da0df82ade1ef0ff2e0b26da4cb4e",
"text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?",
"title": ""
},
{
"docid": "f466b3200413fc5b06101a19741ca395",
"text": "This paper present a circular patch microstrip array antenna operate in KU-band (10.9GHz – 17.25GHz). The proposed circular patch array antenna will be in light weight, flexible, slim and compact unit compare with current antenna used in KU-band. The paper also presents the detail steps of designing the circular patch microstrip array antenna. An Advance Design System (ADS) software is used to compute the gain, power, radiation pattern, and S11 of the antenna. The proposed Circular patch microstrip array antenna basically is a phased array consisting of ‘n’ elements (circular patch antennas) arranged in a rectangular grid. The size of each element is determined by the operating frequency. The incident wave from satellite arrives at the plane of the antenna with equal phase across the surface of the array. Each ‘n’ element receives a small amount of power in phase with the others. There are feed network connects each element to the microstrip lines with an equal length, thus the signals reaching the circular patches are all combined in phase and the voltages add up. The significant difference of the circular patch array antenna is not come in the phase across the surface but in the magnitude distribution. Keywords—Circular patch microstrip array antenna, gain, radiation pattern, S-Parameter.",
"title": ""
}
] | scidocsrr |
3a54fb1cbfc3b79c3437a190dd9efc57 | A Semantic Loss Function for Deep Learning Under Weak Supervision ∗ | [
{
"docid": "c78ab55e5d74c6baa3b4b38dea107489",
"text": "Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.",
"title": ""
},
{
"docid": "c8bbc713aecbc6682d21268ee58ca258",
"text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.",
"title": ""
}
] | [
{
"docid": "2d20574f353950f7805e85c55e023d37",
"text": "Stress has effect on speech characteristics and can influence the quality of speech. In this paper, we study the effect of SleepDeprivation (SD) on speech characteristics and classify Normal Speech (NS) and Sleep Deprived Speech (SDS). One of the indicators of sleep deprivation is flattened voice. We examine pitch and harmonic locations to analyse flatness of voice. To investigate, we compute the spectral coefficients that can capture the variations of pitch and harmonic patterns. These are derived using Two-Layer Cascaded-Subband Filter spread according to the pitch and harmonic frequency scale. Hidden Markov Model (HMM) is employed for statistical modeling. We use DCIEM map task corpus to conduct experiments. The analysis results show that SDS has less variation of pitch and harmonic pattern than NS. In addition, we achieve the relatively high accuracy for classification of Normal Speech (NS) and Sleep Deprived Speech (SDS) using proposed spectral coefficients.",
"title": ""
},
{
"docid": "a9f309abd4711ad5c73c8ba8e80a1c76",
"text": "The Open Networking Foundation's Extensibility Working Group is standardizing OpenFlow, the main software-defined networking (SDN) protocol. To address the requirements of a wide range of network devices and to accommodate its all-volunteer membership, the group has made the specification process highly dynamic and similar to that of open source projects.",
"title": ""
},
{
"docid": "65a7b631bdf02ae108e07eb35c5dfe55",
"text": "Wireless access networks scale by replicating base stations geographically and then allowing mobile clients to seamlessly \"hand off\" from one station to the next as they traverse the network. However, providing the illusion of continuous connectivity requires selecting the right moment to handoff and the right base station to transfer to. Unfortunately, 802.11-based networks only attempt a handoff when a client's service degrades to a point where connectivity is threatened. Worse, the overhead of scanning for nearby base stations is routinely over 250 ms - during which incoming packets are dropped - far longer than what can be tolerated by highly interactive applications such as voice telephony. In this paper we describe SyncScan, a low-cost technique for continuously tracking nearby base stations by synchronizing short listening periods at the client with periodic transmissions from each base station. We have implemented this SyncScan algorithm using commodity 802.11 hardware and we demonstrate that it allows better handoff decisions and over an order of magnitude improvement in handoff delay. Finally, our approach only requires trivial implementation changes, is incrementally deployable and is completely backward compatible with existing 802.11 standards.",
"title": ""
},
{
"docid": "02c50512c053fb8df4537e125afea321",
"text": "Online Social Networks (OSNs) have spread at stunning speed over the past decade. They are now a part of the lives of dozens of millions of people. The onset of OSNs has stretched the traditional notion of community to include groups of people who have never met in person but communicate with each other through OSNs to share knowledge, opinions, interests and activities. Here we explore in depth language independent gender classification. Our approach predicts gender using five colorbased features extracted from Twitter profiles such as the background color in a user’s profile page. This is in contrast with most existing methods for gender prediction that are language dependent. Those methods use high-dimensional spaces consisting of unique words extracted from such text fields as postings, user names, and profile descriptions. Our approach is independent of the user’s language, efficient, scalable, and computationally tractable, while attaining a good level of accuracy.",
"title": ""
},
{
"docid": "6f7332494ffc384eaae308b2116cab6a",
"text": "Investigations of the relationship between pain conditions and psychopathology have largely focused on depression and have been limited by the use of non-representative samples (e.g. clinical samples). The present study utilized data from the Midlife Development in the United States Survey (MIDUS) to investigate associations between three pain conditions and three common psychiatric disorders in a large sample (N = 3,032) representative of adults aged 25-74 in the United States population. MIDUS participants provided reports regarding medical conditions experienced over the past year including arthritis, migraine, and back pain. Participants also completed several diagnostic-specific measures from the Composite International Diagnostic Interview-Short Form [Int. J. Methods Psychiatr. Res. 7 (1998) 171], which was based on the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders [American Psychiatric Association 1987]. The diagnoses included were depression, panic attacks, and generalized anxiety disorder. Logistic regression analyses revealed significant positive associations between each pain condition and the psychiatric disorders (Odds Ratios ranged from 1.48 to 3.86). The majority of these associations remained statistically significant after adjusting for demographic variables, the other pain conditions, and other medical conditions. Given the emphasis on depression in the pain literature, it was noteworthy that the associations between the pain conditions and the anxiety disorders were generally larger than those between the pain conditions and depression. These findings add to a growing body of evidence indicating that anxiety disorders warrant further attention in relation to pain. The clinical and research implications of these findings are discussed.",
"title": ""
},
{
"docid": "5cd3abebf4d990bb9196b7019b29c568",
"text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.",
"title": ""
},
{
"docid": "feb57c831158e03530d59725ae23af00",
"text": "Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine the success of MTL. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary task configurations, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, because significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable.",
"title": ""
},
{
"docid": "990c8e69811a8ebafd6e8c797b36349d",
"text": "Segmentation of pulmonary X-ray computed tomography (CT) images is a precursor to most pulmonary image analysis applications. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3-D) pulmonary X-ray CT images. The method has three main steps. First, the lung region is extracted from the CT images by gray-level thresholding. Then, the left and right lungs are separated by identifying the anterior and posterior junctions by dynamic programming. Finally, a sequence of morphological operations is used to smooth the irregular boundary along the mediastinum in order to obtain results consistent with these obtained by manual analysis, in which only the most central pulmonary arteries are excluded from the lung region. The method has been tested by processing 3-D CT data sets from eight normal subjects, each imaged three times at biweekly intervals with lungs at 90% vital capacity. The authors present results by comparing their automatic method to manually traced borders from two image analysts. Averaged over all volumes, the root mean square difference between the computer and human analysis is 0.8 pixels (0.54 mm). The mean intrasubject change in tissue content over the three scans was 2.75%/spl plusmn/2.29% (mean/spl plusmn/standard deviation).",
"title": ""
},
{
"docid": "a6e6cf1473adb05f33b55cb57d6ed6d3",
"text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.",
"title": ""
},
{
"docid": "958cde8dec4d8df9c6b6d83a7740e2d0",
"text": "Distributed applications use replication, implemented by protocols like Paxos, to ensure data availability and transparently mask server failures. This paper presents a new approach to achieving replication in the data center without the performance cost of traditional methods. Our work carefully divides replication responsibility between the network and protocol layers. The network orders requests but does not ensure reliable delivery – using a new primitive we call ordered unreliable multicast (OUM). Implementing this primitive can be achieved with near-zero-cost in the data center. Our new replication protocol, NetworkOrdered Paxos (NOPaxos), exploits network ordering to provide strongly consistent replication without coordination. The resulting system not only outperforms both latencyand throughput-optimized protocols on their respective metrics, but also yields throughput within 2% and latency within 16 μs of an unreplicated system – providing replication without the performance cost.",
"title": ""
},
{
"docid": "00100476074a90ecb616308b63a128e8",
"text": "We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to dominance of non-visual semantics in underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class and combination of attribute names fall in line with the visual similarity. Contrary to the traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attributeclass relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible, hence, the training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL in three benchmark datasets.",
"title": ""
},
{
"docid": "8f77e8c01163052e8d4da1934f26d929",
"text": "We present a simplified and novel fully convolutional neural network (CNN) architecture for semantic pixel-wise segmentation named as SCNet. Different from current CNN pipelines, proposed network uses only convolution layers with no pooling layer. The key objective of this model is to offer a more simplified CNN model with equal benchmark performance and results. It is an encoder-decoder based fully convolution network model. Encoder network is based on VGG 16-layer while decoder networks use upsampling and deconvolution units followed by a pixel-wise classification layer. The proposed network is simple and offers reduced search space for segmentation by using low-resolution encoder feature maps. It also offers a great deal of reduction in trainable parameters due to reusing encoder layer's sparse features maps. The proposed model offers outstanding performance and enhanced results in terms of architectural simplicity, number of trainable parameters and computational time.",
"title": ""
},
{
"docid": "06c8d56ecc9e92b106de01ad22c5a125",
"text": "Due to the reasonably acceptable performance of state-of-the-art object detectors, tracking-by-detection is a standard strategy for visual multi-object tracking (MOT). In particular, online MOT is more demanding due to its diverse applications in time-critical situations. A main issue of realizing online MOT is how to associate noisy object detection results on a new frame with previously being tracked objects. In this work, we propose a multi-object tracker method called CRF-boosting which utilizes a hybrid data association method based on online hybrid boosting facilitated by a conditional random field (CRF) for establishing online MOT. For data association, learned CRF is used to generate reliable low-level tracklets and then these are used as the input of the hybrid boosting. To do so, while existing data association methods based on boosting algorithms have the necessity of training data having ground truth information to improve robustness, CRF-boosting ensures sufficient robustness without such information due to the synergetic cascaded learning procedure. Further, a hierarchical feature association framework is adopted to further improve MOT accuracy. From experimental results on public datasets, we could conclude that the benefit of proposed hybrid approach compared to the other competitive MOT systems is noticeable.",
"title": ""
},
{
"docid": "508eb69a9e6b0194fbda681439e404c4",
"text": "Price forecasting is becoming increasingly relevant to producers and consumers in the new competitive electric power markets. Both for spot markets and long-term contracts, price forecasts are necessary to develop bidding strategies or negotiation skills in order to maximize benefit. This paper provides a method to predict next-day electricity prices based on the ARIMA methodology. ARIMA techniques are used to analyze time series and, in the past, have been mainly used for load forecasting due to their accuracy and mathematical soundness. A detailed explanation of the aforementioned ARIMA models and results from mainland Spain and Californian markets are presented.",
"title": ""
},
{
"docid": "c3c36535a6dbe74165c0e8b798ac820f",
"text": "Multiplier, being a very vital part in the design of microprocessor, graphical systems, multimedia systems, DSP system etc. It is very important to have an efficient design in terms of performance, area, speed of the multiplier, and for the same Booth's multiplication algorithm provides a very fundamental platform for all the new advances made for high end multipliers meant for faster multiplication with higher performance. The algorithm provides an efficient encoding of the bits during the first steps of the multiplication process. In pursuit of the same, Radix 4 booths encoding has increased the performance of the multiplier by reducing the number of partial products generated. Radix 4 Booths algorithm produces both positive and negative partial products and implementing the negative partial product nullifies the advances made in different units to some extent if not fully. Most of the research work focuses on the reduction of the number of partial products generated and making efficient implementation of the algorithm. There is very little work done on disposal of the negative partial products generated. The presented work in the paper addresses the issue of disposal of the negative partial products efficiently by computing the 2's complement avoiding the additional adder for adding 1 and generation of long carry chain, hence. The proposed mechanism also continues to support the concept of reducing the partial product and in persuasion of the same it is able to reduce the number of partial product and also improved further from n/2 +1 partial products achieved via modified booths algorithm to n/2. Also, while implementing the proposed mechanism using Verilog HDL, a mode selection capability is provided, enabling the same hardware to act as multiplier and as a simple two's complement calculator using the proposed mechanism. The proposed technique has added advantage in terms of its independentness of the number of bits to be multiplied. It is tested and verified with varied test vectors of different number bit sets. Xilinx synthesis tool is used for synthesis and the multiplier mechanism has a maximum operating frequency of 14.59 MHz and a delay of 7.013 ns.",
"title": ""
},
{
"docid": "ca0cef542eafe283ba4fa224e3c87df6",
"text": "Central to all human interaction is the mutual understanding of emotions, achieved primarily by a set of biologically rooted social signals evolved for this purpose-facial expressions of emotion. Although facial expressions are widely considered to be the universal language of emotion, some negative facial expressions consistently elicit lower recognition levels among Eastern compared to Western groups (see [4] for a meta-analysis and [5, 6] for review). Here, focusing on the decoding of facial expression signals, we merge behavioral and computational analyses with novel spatiotemporal analyses of eye movements, showing that Eastern observers use a culture-specific decoding strategy that is inadequate to reliably distinguish universal facial expressions of \"fear\" and \"disgust.\" Rather than distributing their fixations evenly across the face as Westerners do, Eastern observers persistently fixate the eye region. Using a model information sampler, we demonstrate that by persistently fixating the eyes, Eastern observers sample ambiguous information, thus causing significant confusion. Our results question the universality of human facial expressions of emotion, highlighting their true complexity, with critical consequences for cross-cultural communication and globalization.",
"title": ""
},
{
"docid": "c2d0706570f3d2e803fbb2fdafd44d90",
"text": "Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called \"laboratory errors\", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term \"laboratory error\" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.",
"title": ""
},
{
"docid": "220d7b64db1731667e57ed318d2502ce",
"text": "Neutrophils infiltration/activation following wound induction marks the early inflammatory response in wound repair. However, the role of the infiltrated/activated neutrophils in tissue regeneration/proliferation during wound repair is not well understood. Here, we report that infiltrated/activated neutrophils at wound site release pyruvate kinase M2 (PKM2) by its secretive mechanisms during early stages of wound repair. The released extracellular PKM2 facilitates early wound healing by promoting angiogenesis at wound site. Our studies reveal a new and important molecular linker between the early inflammatory response and proliferation phase in tissue repair process.",
"title": ""
},
{
"docid": "d719fb1fe0faf76c14d24f7587c5345f",
"text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †",
"title": ""
},
{
"docid": "f6e9192e59ad6d6f94bacf78ad24f712",
"text": "Opinion mining is considered as a subfield of natural language processing, information retrieval and text mining. Opinion mining is the process of extracting human thoughts and perceptions from unstructured texts, which with regard to the emergence of online social media and mass volume of users’ comments, has become to a useful, attractive and also challenging issue. There are varieties of researches with different trends and approaches in this area, but the lack of a comprehensive study to investigate them from all aspects is tangible. In this paper we represent a complete, multilateral and systematic review of opinion mining and sentiment analysis to classify available methods and compare their advantages and drawbacks, in order to have better understanding of available challenges and solutions to clarify the future direction. For this purpose, we present a proper framework of opinion mining accompanying with its steps and levels and then we completely monitor, classify, summarize and compare proposed techniques for aspect extraction, opinion classification, summary production and evaluation, based on the major validated scientific works. In order to have a better comparison, we also propose some factors in each category, which help to have a better understanding of advantages and disadvantages of different methods.",
"title": ""
}
] | scidocsrr |
f40417357e19ac3bd71667f15e2611c6 | Improving Optical Character Recognition Process for Low Resolution Images | [
{
"docid": "6e30761b695e22a29f98a051dbccac6f",
"text": "This paper explores the use of clickthrough data for query spelling correction. First, large amounts of query-correction pairs are derived by analyzing users' query reformulation behavior encoded in the clickthrough data. Then, a phrase-based error model that accounts for the transformation probability between multi-term phrases is trained and integrated into a query speller system. Experiments are carried out on a human-labeled data set. Results show that the system using the phrase-based error model outperforms significantly its baseline systems.",
"title": ""
}
] | [
{
"docid": "3bba773dc33ef83b975dd15803fac957",
"text": "In competitive games where players' skill levels are mis-matched, the play experience can be unsatisfying for both stronger and weaker players. Player balancing provides assistance for less-skilled players in order to make games more competitive and engaging. Although player balancing can be seen in many real-world games, there is little work on the design and effectiveness of these techniques outside of shooting games. In this paper we provide new knowledge about player balancing in the popular and competitive rac-ing genre. We studied issues of noticeability and balancing effectiveness in a prototype racing game, and tested the effects of several balancing techniques on performance and play experience. The techniques significantly improved the balance of player performance, were preferred by both experts and novices, increased novices' feelings of competi-tiveness, and did not detract from experts' experience. Our results provide new understanding of the design and use of player balancing for racing games, and provide novel tech-niques that can also be applied to other genres.",
"title": ""
},
{
"docid": "b7390d19beb199e21dac200f2f7021f3",
"text": "In this paper, we propose a workflow and a machine learning model for recognizing handwritten characters on form document. The learning model is based on Convolutional Neural Network (CNN) as a powerful feature extraction and Support Vector Machines (SVM) as a high-end classifier. The proposed method is more efficient than modifying the CNN with complex architecture. We evaluated some SVM and found that the linear SVM using L1 loss function and L2 regularization giving the best performance both of the accuracy rate and the computation time. Based on the experiment results using data from NIST SD 192nd edition both for training and testing, the proposed method which combines CNN and linear SVM using L1 loss function and L2 regularization achieved a recognition rate better than only CNN. The recognition rate achieved by the proposed method are 98.85% on numeral characters, 93.05% on uppercase characters, 86.21% on lowercase characters, and 91.37% on the merger of numeral and uppercase characters. While the original CNN achieves an accuracy rate of 98.30% on numeral characters, 92.33% on uppercase characters, 83.54% on lowercase characters, and 88.32% on the merger of numeral and uppercase characters. The proposed method was also validated by using ten folds cross-validation, and it shows that the proposed method still can improve the accuracy rate. The learning model was used to construct a handwriting recognition system to recognize a more challenging data on form document automatically. The pre-processing, segmentation and character recognition are integrated into one system. The output of the system is converted into an editable text. The system gives an accuracy rate of 83.37% on ten different test form document.",
"title": ""
},
{
"docid": "6f650989dff7b4aaa76f051985c185bf",
"text": "Since Sharir and Pnueli, algorithms for context-sensitivity have been defined in terms of ‘valid’ paths in an interprocedural flow graph. The definition of valid paths requires atomic call and ret statements, and encapsulated procedures. Thus, the resulting algorithms are not directly applicable when behavior similar to call and ret instructions may be realized using non-atomic statements, or when procedures do not have rigid boundaries, such as with programs in low level languages like assembly or RTL. We present a framework for context-sensitive analysis that requires neither atomic call and ret instructions, nor encapsulated procedures. The framework presented decouples the transfer of control semantics and the context manipulation semantics of statements. A new definition of context-sensitivity, called stack contexts, is developed. A stack context, which is defined using trace semantics, is more general than Sharir and Pnueli’s interprocedural path based calling-context. An abstract interpretation based framework is developed to reason about stack-contexts and to derive analogues of calling-context based algorithms using stack-context. The framework presented is suitable for deriving algorithms for analyzing binary programs, such as malware, that employ obfuscations with the deliberate intent of defeating automated analysis. The framework is used to create a context-sensitive version of Venable et al.’s algorithm for detecting obfuscated calls in x86 binaries. Experimental results from comparing context insensitive, Sharir and Pnueli’s callingcontext-sensitive, and stack-context-sensitive versions of the algorithm are presented.",
"title": ""
},
{
"docid": "881de4b66bdba0a45caaa48a13b33388",
"text": "This paper describes a Di e-Hellman based encryption scheme, DHAES. The scheme is as e cient as ElGamal encryption, but has stronger security properties. Furthermore, these security properties are proven to hold under appropriate assumptions on the underlying primitive. We show that DHAES has not only the \\basic\" property of secure encryption (namely privacy under a chosen-plaintext attack) but also achieves privacy under both non-adaptive and adaptive chosenciphertext attacks. (And hence it also achieves non-malleability.) DHAES is built in a generic way from lower-level primitives: a symmetric encryption scheme, a message authentication code, group operations in an arbitrary group, and a cryptographic hash function. In particular, the underlying group may be an elliptic-curve group or the multiplicative group of integers modulo a prime number. The proofs of security are based on appropriate assumptions about the hardness of the Di e-Hellman problem and the assumption that the underlying symmetric primitives are secure. The assumptions are all standard in the sense that no random oracles are involved. We suggest that DHAES provides an attractive starting point for developing public-key encryption standards based on the Di e-Hellman assumption.",
"title": ""
},
{
"docid": "f2ff1b82da6daf58bb303310bc18555b",
"text": "We propose a low-cost and reconfigurable plasma frequency selective surface (PFSS) with fluorescent lamps for Global Navigation Satellite System (GNSS) applications. Although many antenna topologies exist for broadband operation, antennas with plasma frequency selective surfaces are limited and offer certain advantages. Especially for GNSS applications, antenna performance can be demanding given the limited space for the antenna. In almost all GNSS antennas, a perfect electric conductor (PEC) ground is utilized. In this study, we compare PEC and PFSS reflectors over entire GNSS bandwidth. We believe PFSS structure can not only provide the benefits of PEC ground but also lower antenna profile.",
"title": ""
},
{
"docid": "0577e45490a0d640d74683599bc39bdf",
"text": "As a consequence of the numerous cyber attacks over the last decade, both the consideration and use of cyberspace has fundamentally changed, and will continue to evolve. Military forces all over the world have come to value the new role of cyberspace in warfare, building up cyber commands, and establishing new capabilities. Integral to such capabilities is that military forces fundamentally depend on the rapid exchange of information in order for their decision-making processes to gain superiority on the battlefield; this compounds the need to develop network- enabled capabilities to realize network-centric warfare. This triangle of cyber offense, cyber defence, and cyber dependence creates a challenging and complex system of interdependencies. Alongside, while numerous technologies have not improved cyber security significantly, this may change with upcoming new concepts and systems, like decentralized ledger technologies (Blockchains) or quantum-secured communication. Following these thoughts, the paper analyses the development of both cyber threats and defence capabilities during the past 10 years, evaluates the current situation and gives recommendations for improvements. To this end, the paper is structured as follows: first, general conditions for military forces with respect to \"cyber\" are described, including an analysis of the most likely courses of action of the West and their seemingly traditional adversary in the East, Russia. The overview includes a discussion of the usefulness of the measures and an overview of upcoming technologies critical for cyber security. Finally, requirements and recommendations for the further development of cyber defence are briefly covered.",
"title": ""
},
{
"docid": "d3f67f69f9db9e685f47ab1ca011fc6f",
"text": "The maximum clique problem is a well known NP-Hard problem with applications in data mining, network analysis, information retrieval and many other areas related to the World Wide Web. There exist several algorithms for the problem with acceptable runtimes for certain classes of graphs, but many of them are infeasible for massive graphs. We present a new exact algorithm that employs novel pruning techniques and is able to find maximum cliques in very large, sparse graphs quickly. Extensive experiments on different kinds of synthetic and real-world graphs show that our new algorithm can be orders of magnitude faster than existing algorithms. We also present a heuristic that runs orders of magnitude faster than the exact algorithm while providing optimal or near-optimal solutions. We illustrate a simple application of the algorithms in developing methods for detection of overlapping communities in networks.",
"title": ""
},
{
"docid": "4f20763c1f25a2d6074376f8ec4f0c35",
"text": "Lateral flow (immuno)assays are currently used for qualitative, semiquantitative and to some extent quantitative monitoring in resource-poor or non-laboratory environments. Applications include tests on pathogens, drugs, hormones and metabolites in biomedical, phytosanitary, veterinary, feed/food and environmental settings. We describe principles of current formats, applications, limitations and perspectives for quantitative monitoring. We illustrate the potentials and limitations of analysis with lateral flow (immuno)assays using a literature survey and a SWOT analysis (acronym for \"strengths, weaknesses, opportunities, threats\"). Articles referred to in this survey were searched for on MEDLINE, Scopus and in references of reviewed papers. Search terms included \"immunochromatography\", \"sol particle immunoassay\", \"lateral flow immunoassay\" and \"dipstick assay\".",
"title": ""
},
{
"docid": "ebb68e2067fe684756514ce61871a820",
"text": "Ž . Ž PLS-regression PLSR is the PLS approach in its simplest, and in chemistry and technology, most used form two-block . predictive PLS . PLSR is a method for relating two data matrices, X andY, by a linear multivariate model, but goes beyond traditional regression in that it models also the structure of X andY. PLSR derives its usefulness from its ability to analyze data with many, noisy, collinear, and even incomplete variables in both X andY. PLSR has the desirable property that the precision of the model parameters improves with the increasing number of relevant variables and observations. This article reviews PLSR as it has developed to become a standard tool in chemometrics and used in chemistry and engineering. The underlying model and its assumptions are discussed, and commonly used diagnostics are reviewed together with the interpretation of resulting parameters. Ž . Two examples are used as illustrations: First, a Quantitative Structure–Activity Relationship QSAR rQuantitative StrucŽ . ture–Property Relationship QSPR data set of peptides is used to outline how to develop, interpret and refine a PLSR model. Second, a data set from the manufacturing of recycled paper is analyzed to illustrate time series modelling of process data by means of PLSR and time-lagged X-variables. q2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1e8c9d6c746bedcb85dcfcc52b736ac1",
"text": "This letter presents our initial results in deep learning for channel estimation and signal detection in orthogonal frequency-division multiplexing (OFDM) systems. In this letter, we exploit deep learning to handle wireless OFDM channels in an end-to-end manner. Different from existing OFDM receivers that first estimate channel state information (CSI) explicitly and then detect/recover the transmitted symbols using the estimated CSI, the proposed deep learning-based approach estimates CSI implicitly and recovers the transmitted symbols directly. To address channel distortion, a deep learning model is first trained offline using the data generated from simulation based on channel statistics and then used for recovering the online transmitted data directly. From our simulation results, the deep learning based approach can address channel distortion and detect the transmitted symbols with performance comparable to the minimum mean-square error estimator. Furthermore, the deep learning-based approach is more robust than conventional methods when fewer training pilots are used, the cyclic prefix is omitted, and nonlinear clipping noise exists. In summary, deep learning is a promising tool for channel estimation and signal detection in wireless communications with complicated channel distortion and interference.",
"title": ""
},
{
"docid": "c8b1f9e4409af4f385cd2b8bfbb874d6",
"text": "In this paper we present a novel method for desktop-to-mobile adaptation. The solution also supports end-users in customizing multi-device ubiquitous user interfaces. In particular, we describe an algorithm and the corresponding tool support to perform desktop-to-mobile adaptation by exploiting logical user interface descriptions able to capture interaction semantic information indicating the purpose of the interface elements. We also compare our solution with existing tools for similar goals.",
"title": ""
},
{
"docid": "2bb936db4a73e009a86e2bff45f88313",
"text": "Chimeric antigen receptors (CARs) have been used to redirect the specificity of autologous T cells against leukemia and lymphoma with promising clinical results. Extending this approach to allogeneic T cells is problematic as they carry a significant risk of graft-versus-host disease (GVHD). Natural killer (NK) cells are highly cytotoxic effectors, killing their targets in a non-antigen-specific manner without causing GVHD. Cord blood (CB) offers an attractive, allogeneic, off-the-self source of NK cells for immunotherapy. We transduced CB-derived NK cells with a retroviral vector incorporating the genes for CAR-CD19, IL-15 and inducible caspase-9-based suicide gene (iC9), and demonstrated efficient killing of CD19-expressing cell lines and primary leukemia cells in vitro, with marked prolongation of survival in a xenograft Raji lymphoma murine model. Interleukin-15 (IL-15) production by the transduced CB-NK cells critically improved their function. Moreover, iC9/CAR.19/IL-15 CB-NK cells were readily eliminated upon pharmacologic activation of the iC9 suicide gene. In conclusion, we have developed a novel approach to immunotherapy using engineered CB-derived NK cells, which are easy to produce, exhibit striking efficacy and incorporate safety measures to limit toxicity. This approach should greatly improve the logistics of delivering this therapy to large numbers of patients, a major limitation to current CAR-T-cell therapies.",
"title": ""
},
{
"docid": "751231430c54bf33649e4c4e14d45851",
"text": "The current state of A. D. Baddeley and G. J. Hitch's (1974) multicomponent working memory model is reviewed. The phonological and visuospatial subsystems have been extensively investigated, leading both to challenges over interpretation of individual phenomena and to more detailed attempts to model the processes underlying the subsystems. Analysis of the controlling central executive has proved more challenging, leading to a proposed clarification in which the executive is assumed to be a limited capacity attentional system, aided by a newly postulated fourth system, the episodic buffer. Current interest focuses most strongly on the link between working memory and long-term memory and on the processes allowing the integration of information from the component subsystems. The model has proved valuable in accounting for data from a wide range of participant groups under a rich array of task conditions. Working memory does still appear to be working.",
"title": ""
},
{
"docid": "1ca8130cf3f0f1788196bd4bc4ec45a0",
"text": "PURPOSE\nTo examine the feasibility and preliminary benefits of an integrative cognitive behavioral therapy (CBT) with adolescents with inflammatory bowel disease and anxiety.\n\n\nDESIGN AND METHODS\nNine adolescents participated in a CBT program at their gastroenterologist's office. Structured diagnostic interviews, self-report measures of anxiety and pain, and physician-rated disease severity were collected pretreatment and post-treatment.\n\n\nRESULTS\nPostintervention, 88% of adolescents were treatment responders, and 50% no longer met criteria for their principal anxiety disorder. Decreases were demonstrated in anxiety, pain, and disease severity.\n\n\nPRACTICE IMPLICATIONS\nAnxiety screening and a mental health referral to professionals familiar with medical management issues is important.",
"title": ""
},
{
"docid": "48966a0436405a6656feea3ce17e87c3",
"text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.",
"title": ""
},
{
"docid": "bfb5ab3f17045856db6da616f5d82609",
"text": "This study examined cognitive distortions and coping styles as potential mediators for the effects of mindfulness meditation on anxiety, negative affect, positive affect, and hope in college students. Our pre- and postintervention design had four conditions: control, brief meditation focused on attention, brief meditation focused on loving kindness, and longer meditation combining both attentional and loving kindness aspects of mindfulness. Each group met weekly over the course of a semester. Longer combined meditation significantly reduced anxiety and negative affect and increased hope. Changes in cognitive distortions mediated intervention effects for anxiety, negative affect, and hope. Further research is needed to determine differential effects of types of meditation.",
"title": ""
},
{
"docid": "19469e716d6d6ce0855a2b3101f05cf6",
"text": "Integrative approaches to the study of complex systems demand that one knows the manner in which the parts comprising the system are connected. The structure of the complex network defining the interactions provides insight into the function and evolution of the components of the system. Unfortunately, the large size and intricacy of these networks implies that such insight is usually difficult to extract. Here, we propose a method that allows one to systematically extract and display information contained in complex networks. Specifically, we demonstrate that one can (i) find modules in complex networks and (ii) classify nodes into universal roles according to their pattern of within- and between-module connections. The method thus yields a 'cartographic representation' of complex networks.",
"title": ""
},
{
"docid": "2f1641240e0fdfe15c65cda20704cddc",
"text": "Multichannel tetrode array recording in awake behaving animals provides a powerful method to record the activity of large numbers of neurons. The power of this method could be extended if further information concerning the intracellular state of the neurons could be extracted from the extracellularly recorded signals. Toward this end, we have simultaneously recorded intracellular and extracellular signals from hippocampal CA1 pyramidal cells and interneurons in the anesthetized rat. We found that several intracellular parameters can be deduced from extracellular spike waveforms. The width of the intracellular action potential is defined precisely by distinct points on the extracellular spike. Amplitude changes of the intracellular action potential are reflected by changes in the amplitude of the initial negative phase of the extracellular spike, and these amplitude changes are dependent on the state of the network. In addition, intracellular recordings from dendrites with simultaneous extracellular recordings from the soma indicate that, on average, action potentials are initiated in the perisomatic region and propagate to the dendrites at 1.68 m/s. Finally we determined that a tetrode in hippocampal area CA1 theoretically should be able to record electrical signals from approximately 1, 000 neurons. Of these, 60-100 neurons should generate spikes of sufficient amplitude to be detectable from the noise and to allow for their separation using current spatial clustering methods. This theoretical maximum is in contrast to the approximately six units that are usually detected per tetrode. From this, we conclude that a large percentage of hippocampal CA1 pyramidal cells are silent in any given behavioral condition.",
"title": ""
},
{
"docid": "df75c48628144cdbcf974502ea24aa24",
"text": "Standard SVM training has O(m3) time andO(m2) space complexities, where m is the training set size. It is thus computationally infeasible on very larg e data sets. By observing that practical SVM implementations onlyapproximatethe optimal solution by an iterative strategy, we scale up kernel methods by exploiting such “approximateness” in t h s paper. We first show that many kernel methods can be equivalently formulated as minimum en closing ball (MEB) problems in computational geometry. Then, by adopting an efficient appr oximate MEB algorithm, we obtain provably approximately optimal solutions with the idea of c re sets. Our proposed Core Vector Machine (CVM) algorithm can be used with nonlinear kernels a nd has a time complexity that is linear in m and a space complexity that is independent of m. Experiments on large toy and realworld data sets demonstrate that the CVM is as accurate as exi sting SVM implementations, but is much faster and can handle much larger data sets than existin g scale-up methods. For example, CVM with the Gaussian kernel produces superior results on th e KDDCUP-99 intrusion detection data, which has about five million training patterns, in only 1.4 seconds on a 3.2GHz Pentium–4 PC.",
"title": ""
},
{
"docid": "6881d94bf6f586da6e9daec76dc441c5",
"text": "Received: 27 October 2014 Revised: 29 December 2015 Accepted: 12 January 2016 Abstract In information systems (IS) literature, there is ongoing debate as to whether the field has become fragmented and lost its identity in response to the rapid changes of the field. The paper contributes to this discussion by providing quantitative measurement of the fragmentation or cohesiveness level of the field. A co-word analysis approach aiding in visualization of the intellectual map of IS is applied through application of clustering analysis, network maps, strategic diagram techniques, and graph theory for a collection of 47,467 keywords from 9551 articles, published in 10 major IS journals and the proceedings of two leading IS conferences over a span of 20 years, 1993 through 2012. The study identified the popular, core, and bridging topics of IS research for the periods 1993–2002 and 2003–2012. Its results show that research topics and subfields underwent substantial change between those two periods and the field became more concrete and cohesive, increasing in density. Findings from this study suggest that the evolution of the research topics and themes in the IS field should be seen as part of the natural metabolism of the field, rather than a process of fragmentation or disintegration. European Journal of Information Systems advance online publication, 22 March 2016; doi:10.1057/ejis.2016.5",
"title": ""
}
] | scidocsrr |
9c0b045e65668e43100e9d2103d0d65b | Bayesian nonparametric sparse seemingly unrelated regression model (SUR)∗ | [
{
"docid": "660f957b70e53819724e504ed3de0776",
"text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "981d140731d8a3cdbaebacc1fd26484a",
"text": "A new wideband bandpass filter (BPF) with composite short- and open-circuited stubs has been proposed in this letter. With the two kinds of stubs, two pairs of transmission zeros (TZs) can be produced on the two sides of the desired passband. The even-/odd-mode analysis method is used to derive the input admittances of its bisection circuits. After the Richard's transformation, these bisection circuits are in the same format of two LC circuits. By combining these two LC circuits, the equivalent circuit of the proposed filter is obtained. Through the analysis of the equivalent circuit, the open-circuited stubs introduce transmission poles in the complex frequencies and one pair of TZs in the real frequencies, and the short-circuited stubs generate one pair of TZs to block the dc component. A wideband BPF is designed and fabricated to verify the proposed design principle.",
"title": ""
},
{
"docid": "fc6726bddf3d70b7cb3745137f4583c1",
"text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.",
"title": ""
},
{
"docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75",
"text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.",
"title": ""
},
{
"docid": "dded827d0b9c513ad504663547018749",
"text": "In this paper, various key points in the rotor design of a low-cost permanent-magnet-assisted synchronous reluctance motor (PMa-SynRM) are introduced and their effects are studied. Finite-element approach has been utilized to show the effects of these parameters on the developed average electromagnetic torque and total d-q inductances. One of the features considered in the design of this motor is the magnetization of the permanent magnets mounted in the rotor core using the stator windings. This feature will cause a reduction in cost and ease of manufacturing. Effectiveness of the design procedure is validated by presenting simulation and experimental results of a 1.5-kW prototype PMa-SynRM",
"title": ""
},
{
"docid": "3c66b3db12b8695863d79f4edcc74103",
"text": "Several studies have found that women tend to demonstrate stronger preferences for masculine men as short-term partners than as long-term partners, though there is considerable variation among women in the magnitude of this effect. One possible source of this variation is individual differences in the extent to which women perceive masculine men to possess antisocial traits that are less costly in short-term relationships than in long-term relationships. Consistent with this proposal, here we show that the extent to which women report stronger preferences for men with low (i.e., masculine) voice pitch as short-term partners than as long-term partners is associated with the extent to which they attribute physical dominance and low trustworthiness to these masculine voices. Thus, our findings suggest that variation in the extent to which women attribute negative personality characteristics to masculine men predicts individual differences in the magnitude of the effect of relationship context on women's masculinity preferences, highlighting the importance of perceived personality attributions for individual differences in women's judgments of men's vocal attractiveness and, potentially, their mate preferences.",
"title": ""
},
{
"docid": "375d5fcb41b7fb3a2f60822720608396",
"text": "We present a full-stack design to accelerate deep learning inference with FPGAs. Our contribution is two-fold. At the software layer, we leverage and extend TVM, the end-to-end deep learning optimizing compiler, in order to harness FPGA-based acceleration. At the the hardware layer, we present the Versatile Tensor Accelerator (VTA) which presents a generic, modular, and customizable architecture for TPU-like accelerators. Our results take a ResNet-18 description in MxNet and compiles it down to perform 8-bit inference on a 256-PE accelerator implemented on a low-cost Xilinx Zynq FPGA, clocked at 100MHz. Our full hardware acceleration stack will be made available for the community to reproduce, and build upon at http://github.com/uwsaml/vta.",
"title": ""
},
{
"docid": "bf7b3cdb178fd1969257f56c0770b30b",
"text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.",
"title": ""
},
{
"docid": "3ed0e387f8e6a8246b493afbb07a9312",
"text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.",
"title": ""
},
{
"docid": "269e1c0d737beafd10560360049c6ee3",
"text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.",
"title": ""
},
{
"docid": "066fdb2deeca1d13218f16ad35fe5f86",
"text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.",
"title": ""
},
{
"docid": "1010eaf1bb85e14c31b5861367115b32",
"text": "This letter presents a design of a dual-orthogonal, linear polarized antenna for the UWB-IR technology in the frequency range from 3.1 to 10.6 GHz. The antenna is compact with dimensions of 40 times 40 mm of the radiation plane, which is orthogonal to the radiation direction. Both the antenna and the feeding network are realized in planar technology. The radiation principle and the computed design are verified by a prototype. The input impedance matching is better than -6 dB. The measured results show a mean gain in copolarization close to 4 dBi. The cross-polarizations suppression w.r.t. the copolarization is better than 20 dB. Due to its features, the antenna is suited for polarimetric ultrawideband (UWB) radar and UWB multiple-input-multiple-output (MIMO) applications.",
"title": ""
},
{
"docid": "580d83a0e627daedb45fe55e3f9b6883",
"text": "With near exponential growth predicted in the number of Internet of Things (IoT) based devices within networked systems there is need of a means of providing their flexible and secure integration. Software Defined Networking (SDN) is a concept that allows for the centralised control and configuration of network devices, and also provides opportunities for the dynamic control of network traffic. This paper proposes the use of an SDN gateway as a distributed means of monitoring the traffic originating from and directed to IoT based devices. This gateway can then both detect anomalous behaviour and perform an appropriate response (blocking, forwarding, or applying Quality of Service). Initial results demonstrate that, while the addition of the attack detection functionality has an impact on the number of flow installations possible per second, it can successfully detect and block TCP and ICMP flood based attacks.",
"title": ""
},
{
"docid": "31a0b00a79496deccdec4d3629fcbf88",
"text": "The Human Papillomavirus (HPV) E6 protein is one of three oncoproteins encoded by the virus. It has long been recognized as a potent oncogene and is intimately associated with the events that result in the malignant conversion of virally infected cells. In order to understand the mechanisms by which E6 contributes to the development of human malignancy many laboratories have focused their attention on identifying the cellular proteins with which E6 interacts. In this review we discuss these interactions in the light of their respective contributions to the malignant progression of HPV transformed cells.",
"title": ""
},
{
"docid": "0d5628aa4bfeefbb6dedfa59325ed517",
"text": "The effects of caffeine on cognition were reviewed based on the large body of literature available on the topic. Caffeine does not usually affect performance in learning and memory tasks, although caffeine may occasionally have facilitatory or inhibitory effects on memory and learning. Caffeine facilitates learning in tasks in which information is presented passively; in tasks in which material is learned intentionally, caffeine has no effect. Caffeine facilitates performance in tasks involving working memory to a limited extent, but hinders performance in tasks that heavily depend on working memory, and caffeine appears to rather improve memory performance under suboptimal alertness conditions. Most studies, however, found improvements in reaction time. The ingestion of caffeine does not seem to affect long-term memory. At low doses, caffeine improves hedonic tone and reduces anxiety, while at high doses, there is an increase in tense arousal, including anxiety, nervousness, jitteriness. The larger improvement of performance in fatigued subjects confirms that caffeine is a mild stimulant. Caffeine has also been reported to prevent cognitive decline in healthy subjects but the results of the studies are heterogeneous, some finding no age-related effect while others reported effects only in one sex and mainly in the oldest population. In conclusion, it appears that caffeine cannot be considered a ;pure' cognitive enhancer. Its indirect action on arousal, mood and concentration contributes in large part to its cognitive enhancing properties.",
"title": ""
},
{
"docid": "a32d80b0b446f91832f92ca68597821d",
"text": "PURPOSE\nWe define the cause of the occurrence of Peyronie's disease.\n\n\nMATERIALS AND METHODS\nClinical evaluation of a large number of patients with Peyronie's disease, while taking into account the pathological and biochemical findings of the penis in patients who have been treated by surgery, has led to an understanding of the relationship of the anatomical structure of the penis to its rigidity during erection, and how the effect of the stress imposed upon those structures during intercourse is modified by the loss of compliance resulting from aging of the collagen composing those structures. Peyronie's disease occurs most frequently in middle-aged men, less frequently in older men and infrequently in younger men who have more elastic tissues. During erection, when full tumescence has occurred and the elastic tissues of the penis have reached the limit of their compliance, the strands of the septum give vertical rigidity to the penis. Bending the erect penis out of column stresses the attachment of the septal strands to the tunica albuginea.\n\n\nRESULTS\nPlaques of Peyronie's disease are found where the strands of the septum are attached in the dorsal or ventral aspect of the penis. The pathological scar in the tunica albuginea of the corpora cavernosa in Peyronie's disease is characterized by excessive collagen accumulation, fibrin deposition and disordered elastic fibers in the plaque.\n\n\nCONCLUSIONS\nWe suggest that Peyronie's disease results from repetitive microvascular injury, with fibrin deposition and trapping in the tissue space that is not adequately cleared during the normal remodeling and repair of the tear in the tunica. Fibroblast activation and proliferation, enhanced vessel permeability and generation of chemotactic factors for leukocytes are stimulated by fibrin deposited in the normal process of wound healing. However, in Peyronie's disease the lesion fails to resolve either due to an inability to clear the original stimulus or due to further deposition of fibrin subsequent to repeated trauma. Collagen is also trapped and pathological fibrosis ensues.",
"title": ""
},
{
"docid": "ce6296ae51be4e6fe3d36be618cdfe75",
"text": "UNLABELLED\nOBJECTIVES. Understanding the factors that promote quality of life in old age has been a staple of social gerontology since its inception and remains a significant theme in aging research. The purpose of this article was to review the state of the science with regard to subjective well-being (SWB) in later life and to identify promising directions for future research.\n\n\nMETHODS\nThis article is based on a review of literature on SWB in aging, sociological, and psychological journals. Although the materials reviewed date back to the early 1960s, the emphasis is on publications in the past decade.\n\n\nRESULTS\nResearch to date paints an effective portrait of the epidemiology of SWB in late life and the factors associated with it. Although the research base is large, causal inferences about the determinants of SWB remain problematic. Two recent contributions to the research base are highlighted as emerging issues: studies of secular trends in SWB and cross-national studies. Discussion. The review ends with discussion of priority issues for future research.",
"title": ""
},
{
"docid": "12271ff5fdaf797a6f1063088d0cf2a6",
"text": "A resonant current-fed push-pull converter with active clamping is investigated for vehicle inverter application in this paper. A communal clamping capacitor and a voltage doubler topology are employed. Moreover, two kinds of resonances to realize ZVS ON of MOSFETs or to remove reverse-recovery problem of secondary diodes are analyzed and an optimized resonance is utilized for volume restriction. At last, a 150W prototype with 10V to 16V input voltage, 360V output voltage is implemented to verify the design.",
"title": ""
},
{
"docid": "373bffe67da88ece93006e97f3ae6c53",
"text": "This paper proposes a novel mains voltage proportional input current control concept eliminating the multiplication of the output voltage controller output and the mains ac phase voltages for the derivation of mains phase current reference values of a three-phase/level/switch pulsewidth-modulated (VIENNA) rectifier system. Furthermore, the concept features low input current ripple amplitude as, e.g., achieved for space-vector modulation, a low amplitude of the third harmonic of the current flowing into the output voltage center point, and a wide range of modulation. The practical realization of the analog control concept as well as experimental results for application with a 5-kW prototype of the pulsewidth-modulated rectifier are presented. Furthermore, a control scheme which relies only on the absolute values of the input phase currents and a modified control scheme which does not require information about the mains phase voltages are presented.",
"title": ""
},
{
"docid": "a6cf168632efb2a4c4a4d91c4161dc24",
"text": "This paper presents a systematic approach to transform various fault models to a unified model such that all faults of interest can be handled in one ATPG run. The fault models that can be transformed include, but are not limited to, stuck-at faults, various types of bridging faults, and cell-internal faults. The unified model is the aggressor-victim type of bridging fault model. Two transformation methods, namely fault-based and pattern-based transformations, are developed for cell-external and cell-internal faults, respectively. With the proposed approach, one can use an ATPG tool for bridging faults to deal with the test generation problems of multiple fault models simultaneously. Hence the total test generation time can be reduced and highly compact test sets can be obtained. Experimental results show that on average 54.94% (16.45%) and 47.22% (17.51%) test pattern volume reductions are achieved compared to the method that deals with the three fault models separately without (with) fault dropping for ISCAS'89 andIWLS'05 circuits, respectively.",
"title": ""
},
{
"docid": "835e07466611989c2a4979a5e087156f",
"text": "AIM\nInternet gaming disorder (IGD) is a serious disorder leading to and maintaining pertinent personal and social impairment. IGD has to be considered in view of heterogeneous and incomplete concepts. We therefore reviewed the scientific literature on IGD to provide an overview focusing on definitions, symptoms, prevalence, and aetiology.\n\n\nMETHOD\nWe systematically reviewed the databases ERIC, PsyARTICLES, PsycINFO, PSYNDEX, and PubMed for the period January 1991 to August 2016, and additionally identified secondary references.\n\n\nRESULTS\nThe proposed definition in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition provides a good starting point for diagnosing IGD but entails some disadvantages. Developing IGD requires several interacting internal factors such as deficient self, mood and reward regulation, problems of decision-making, and external factors such as deficient family background and social skills. In addition, specific game-related factors may promote IGD. Summarizing aetiological knowledge, we suggest an integrated model of IGD elucidating the interplay of internal and external factors.\n\n\nINTERPRETATION\nSo far, the concept of IGD and the pathways leading to it are not entirely clear. In particular, long-term follow-up studies are missing. IGD should be understood as an endangering disorder with a complex psychosocial background.\n\n\nWHAT THIS PAPER ADDS\nIn representative samples of children and adolescents, on average, 2% are affected by Internet gaming disorder (IGD). The mean prevalences (overall, clinical samples included) reach 5.5%. Definitions are heterogeneous and the relationship with substance-related addictions is inconsistent. Many aetiological factors are related to the development and maintenance of IGD. This review presents an integrated model of IGD, delineating the interplay of these factors.",
"title": ""
}
] | scidocsrr |
0ee681e15fad3ce020562df645751692 | Risk assessment model selection in construction industry | [
{
"docid": "ee0d11cbd2e723aff16af1c2f02bbc2b",
"text": "This study simplifies the complicated metric distance method [L.S. Chen, C.H. Cheng, Selecting IS personnel using ranking fuzzy number by metric distance method, Eur. J. Operational Res. 160 (3) 2005 803–820], and proposes an algorithm to modify Chen’s Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets Syst., 114 (2000) 1–9]. From experimental verification, Chen directly assigned the fuzzy numbers 1̃ and 0̃ as fuzzy positive ideal solution (PIS) and negative ideal solution (NIS). Chen’s method sometimes violates the basic concepts of traditional TOPSIS. This study thus proposes fuzzy hierarchical TOPSIS, which not only is well suited for evaluating fuzziness and uncertainty problems, but also can provide more objective and accurate criterion weights, while simultaneously avoiding the problem of Chen’s Fuzzy TOPSIS. For application and verification, this study presents a numerical example and build a practical supplier selection problem to verify our proposed method and compare it with other methods. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +886 5 5342601x5312; fax: +886 5 531 2077. E-mail addresses: [email protected] (J.-W. Wang), [email protected] (C.-H. Cheng), [email protected] (K.-C. Huang).",
"title": ""
}
] | [
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
{
"docid": "e06005f63efd6f8ca77f8b91d1b3b4a9",
"text": "Natural language generators for taskoriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large.",
"title": ""
},
{
"docid": "0850f46a4bcbe1898a6a2dca9f61ea61",
"text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.",
"title": ""
},
{
"docid": "1742428cb9c72c90f94d0beb6ad3e086",
"text": "A key issue in face recognition is to seek an effective descriptor for representing face appearance. In the context of considering the face image as a set of small facial regions, this paper presents a new face representation approach coined spatial feature interdependence matrix (SFIM). Unlike classical face descriptors which usually use a hierarchically organized or a sequentially concatenated structure to describe the spatial layout features extracted from local regions, SFIM is attributed to the exploitation of the underlying feature interdependences regarding local region pairs inside a class specific face. According to SFIM, the face image is projected onto an undirected connected graph in a manner that explicitly encodes feature interdependence-based relationships between local regions. We calculate the pair-wise interdependence strength as the weighted discrepancy between two feature sets extracted in a hybrid feature space fusing histograms of intensity, local binary pattern and oriented gradients. To achieve the goal of face recognition, our SFIM-based face descriptor is embedded in three different recognition frameworks, namely nearest neighbor search, subspace-based classification, and linear optimization-based classification. Extensive experimental results on four well-known face databases and comprehensive comparisons with the state-of-the-art results are provided to demonstrate the efficacy of the proposed SFIM-based descriptor.",
"title": ""
},
{
"docid": "4d3468bb14b7ad933baac5c50feec496",
"text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitation limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining, at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.",
"title": ""
},
{
"docid": "68720a44720b4d80e661b58079679763",
"text": "The value of involving people as ‘users’ or ‘participants’ in the design process is increasingly becoming a point of debate. In this paper we describe a new framework, called ‘informant design’, which advocates efficiency of input from different people: maximizing the value of contributions tlom various informants and design team members at different stages of the design process. To illustrate how this can be achieved we describe a project that uses children and teachers as informants at difTerent stages to help us design an interactive learning environment for teaching ecology.",
"title": ""
},
{
"docid": "42903610920a47773627a33db25590f3",
"text": "We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.",
"title": ""
},
{
"docid": "338e037f4ec9f6215f48843b9d03f103",
"text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).",
"title": ""
},
{
"docid": "0d9057d8a40eb8faa7e67128a7d24565",
"text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.",
"title": ""
},
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
},
{
"docid": "9876e3f1438d39549a93ce4011bc0df0",
"text": "SIMON (Signal Interpretation and MONitoring) continuously collects, permanently stores, and processes bedside medical device data. Since 1998 SIMON has monitored over 3500 trauma intensive care unit (TICU) patients, representing approximately 250,000 hours of continuous monitoring and two billion data points, and is currently operational on all 14 TICU beds at Vanderbilt University Medical Center. This repository of dense physiologic data (heart rate, arterial, pulmonary, central venous, intracranial, and cerebral perfusion pressures, arterial and venous oxygen saturations, and other parameters sampled second-by-second) supports research to identify “new vital signs” features of patient physiology only observable through dense data capture and analysis more predictive of patient status than current measures. SIMON’s alerting and reporting capabilities, including web-based display, sentinel event notification via alphanumeric pagers, and daily summary reports of vital sign statistics, allow these discoveries to be rapidly tested and implemented in a working clinical environment. This",
"title": ""
},
{
"docid": "5ed409feee70554257e4974ab99674e0",
"text": "Text mining and information retrieval in large collections of scientific literature require automated processing systems that analyse the documents’ content. However, the layout of scientific articles is highly varying across publishers, and common digital document formats are optimised for presentation, but lack structural information. To overcome these challenges, we have developed a processing pipeline that analyses the structure a PDF document using a number of unsupervised machine learning techniques and heuristics. Apart from the meta-data extraction, which we reused from previous work, our system uses only information available from the current document and does not require any pre-trained model. First, contiguous text blocks are extracted from the raw character stream. Next, we determine geometrical relations between these blocks, which, together with geometrical and font information, are then used categorize the blocks into different classes. Based on this resulting logical structure we finally extract the body text and the table of contents of a scientific article. We separately evaluate the individual stages of our pipeline on a number of different datasets and compare it with other document structure analysis approaches. We show that it outperforms a state-of-the-art system in terms of the quality of the extracted body text and table of contents. Our unsupervised approach could provide a basis for advanced digital library scenarios that involve diverse and dynamic corpora.",
"title": ""
},
{
"docid": "012d6b86279e65237a3ad4515e4e439f",
"text": "The main purpose of the present study is to help managers cope with the negative effects of technostress on employee use of ICT. Drawing on transaction theory of stress (Cooper, Dewe, & O’Driscoll, 2001) and information systems (IS) continuance theory (Bhattacherjee, 2001) we investigate the effects of technostress on employee intentions to extend the use of ICT at work. Our results show that factors that create and inhibit technostress affect both employee satisfaction with the use of ICT and employee intentions to extend the use of ICT. Our findings have important implications for the management of technostress with regard to both individual stress levels and organizational performance. A key implication of our research is that managers should implement strategies for coping with technostress through the theoretical concept of technostress inhibitors. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "da64b7855ec158e97d48b31e36f100a5",
"text": "Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA), however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which is deemed as another challenging task. Most state-of-the-art Arabic NER systems count heavily on hand-crafted engineering features and lexicons which is time consuming. In this paper, we introduce a novel neural network architecture which benefits both from characterand word-level representations automatically, by using combination of bidirectional Long Short-Term Memory (LSTM) and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. Experimental results demonstrate that our model achieves state-of-the-art performance on publicly available benchmark for Arabic NER for social media and surpassing the previous system by a large margin.",
"title": ""
},
{
"docid": "8787335d8f5a459dc47b813fd385083b",
"text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.",
"title": ""
},
{
"docid": "e643f7f29c2e96639a476abb1b9a38b1",
"text": "Weather forecasting has been one of the most scientifically and technologically challenging problem around the world. Weather data is one of the meteorological data that is rich with important information, which can be used for weather prediction We extract knowledge from weather historical data collected from Indian Meteorological Department (IMD) Pune. From the collected weather data comprising of 36 attributes, only 7 attributes are most relevant to rainfall prediction. We made data preprocessing and data transformation on raw weather data set, so that it shall be possible to work on Bayesian, the data mining, prediction model used for rainfall prediction. The model is trained using the training data set and has been tested for accuracy on available test data. The meteorological centers uses high performance computing and supercomputing power to run weather prediction model. To address the issue of compute intensive rainfall prediction model, we proposed and implemented data intensive model using data mining technique. Our model works with good accuracy and takes moderate compute resources to predict the rainfall. We have used Bayesian approach to prove our model for rainfall prediction, and found to be working well with good accuracy.",
"title": ""
},
{
"docid": "9b1cf7cb855ba95693b90efacc34ac6d",
"text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.",
"title": ""
},
{
"docid": "37c8fa72d0959a64460dbbe4fdb8c296",
"text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces; Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels; Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further if required, feasibility have been dimensionally analyzed. Further easy evacuation form the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. This paper also presents a classification and categorization of various problems of space planning.",
"title": ""
},
{
"docid": "27fd4240452fe6f08af7fbf86e8acdf5",
"text": "Intrinsically motivated goal exploration processes enable agents to autonomously sample goals to explore efficiently complex environments with high-dimensional continuous actions. They have been applied successfully to real world robots to discover repertoires of policies producing a wide diversity of effects. Often these algorithms relied on engineered goal spaces but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, an efficient exploration requires that the structure of the goal space reflects the one of the environment. In this paper we show that using a disentangled goal space leads to better exploration performances than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment. The code used in the experiments is available at https://github.com/flowersteam/ Curiosity_Driven_Goal_Exploration.",
"title": ""
},
{
"docid": "f383dd5dd7210105406c2da80cf72f89",
"text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".",
"title": ""
}
] | scidocsrr |
36a0aa0c3466f4ff1b9199f09b78244d | Distributed Agile Development: Using Scrum in a Large Project | [
{
"docid": "3effd14a76c134d741e6a1f4fda022dc",
"text": "Agile processes rely on feedback and communication to work and they often work best with co-located teams for this reason. Sometimes agile makes sense because of project requirements and a distributed team makes sense because of resource constraints. A distributed team can be productive and fun to work on if the team takes an active role in overcoming the barriers that distribution causes. This is the story of how one product team used creativity, communications tools, and basic good engineering practices to build a successful product.",
"title": ""
}
] | [
{
"docid": "6b7de13e2e413885e0142e3b6bf61dc9",
"text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.",
"title": ""
},
{
"docid": "4863316487ead3c1b459dc43d9ed17d5",
"text": "The advent of social networks poses severe threats on user privacy as adversaries can de-anonymize users' identities by mapping them to correlated cross-domain networks. Without ground-truth mapping, prior literature proposes various cost functions in hope of measuring the quality of mappings. However, there is generally a lacking of rationale behind the cost functions, whose minimizer also remains algorithmically unknown. We jointly tackle above concerns under a more practical social network model parameterized by overlapping communities, which, neglected by prior art, can serve as side information for de-anonymization. Regarding the unavailability of ground-truth mapping to adversaries, by virtue of the Minimum Mean Square Error (MMSE), our first contribution is a well-justified cost function minimizing the expected number of mismatched users over all possible true mappings. While proving the NP-hardness of minimizing MMSE, we validly transform it into the weighted-edge matching problem (WEMP), which, as disclosed theoretically, resolves the tension between optimality and complexity: (i) WEMP asymptotically returns a negligible mapping error in large network size under mild conditions facilitated by higher overlapping strength; (ii) WEMP can be algorithmically characterized via the convex-concave based de-anonymization algorithm (CBDA), finding the optimum of WEMP. Extensive experiments further confirm the effectiveness of CBDA under overlapping communities, in terms of averagely 90% re-identified users in the rare true cross-domain co-author networks when communities overlap densely, and roughly 70% enhanced reidentification ratio compared to non-overlapping cases.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "f67b0131de800281ceb8522198302cde",
"text": "This brief presents an approach for identifying the parameters of linear time-varying systems that repeat their trajectories. The identification is based on the concept that parameter identification results can be improved by incorporating information learned from previous executions. The learning laws for this iterative learning identification are determined through an optimization framework. The convergence analysis of the algorithm is presented along with the experimental results to demonstrate its effectiveness. The algorithm is demonstrated to be capable of simultaneously estimating rapidly varying parameters and addressing robustness to noise by adopting a time-varying design approach.",
"title": ""
},
{
"docid": "98af4946349aabb95d98dde19344cd4f",
"text": "Automatic speaker recognition is a field of study attributed in identifying a person from a spoken phrase. The technique makes it possible to use the speaker’s voice to verify their identity and control access to the services such as biometric security system, voice dialing, telephone banking, telephone shopping, database access services, information services, voice mail, and security control for confidential information areas and remote access to the computers. This thesis represents a development of a Matlab based text dependent speaker recognition system. Mel Frequency Cepstrum Coefficient (MFCC) Method is used to extract a speaker’s discriminative features from the mathematical representation of the speech signal. After that Vector Quantization with VQ-LBG Algorithm is used to match the feature. Key-Words: Speaker Recognition, Human Speech Signal Processing, Vector Quantization",
"title": ""
},
{
"docid": "b4edd546c786bbc7a72af67439dfcad7",
"text": "We aim to develop a computationally feasible, cognitivelyinspired, formal model of concept invention, drawing on Fauconnier and Turner’s theory of conceptual blending, and grounding it on a sound mathematical theory of concepts. Conceptual blending, although successfully applied to describing combinational creativity in a varied number of fields, has barely been used at all for implementing creative computational systems, mainly due to the lack of sufficiently precise mathematical characterisations thereof. The model we will define will be based on Goguen’s proposal of a Unified Concept Theory, and will draw from interdisciplinary research results from cognitive science, artificial intelligence, formal methods and computational creativity. To validate our model, we will implement a proof of concept of an autonomous computational creative system that will be evaluated in two testbed scenarios: mathematical reasoning and melodic harmonisation. We envisage that the results of this project will be significant for gaining a deeper scientific understanding of creativity, for fostering the synergy between understanding and enhancing human creativity, and for developing new technologies for autonomous creative systems.",
"title": ""
},
{
"docid": "126007e4cb2a200725868551a11267c8",
"text": "One of the most important challenges of this decade is the Internet of Things (IoT), which aims to enable things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service. IoT systems are usually composed of heterogeneous and interconnected lightweight devices that support applications that are subject to change in their external environment and in the functioning of these devices. The management of the variability of these changes, autonomously, is a challenge in the development of these systems. Agents are a good option for developing self-managed IoT systems due to their distributed nature, context-awareness and self-adaptation. Our goal is to enhance the development of IoT applications using agents and software product lines (SPL). Specifically, we propose to use Self-StarMASMAS, multi-agent system) agents and to define an SPL process using the Common Variability Language. In this contribution, we propose an SPL process for Self-StarMAS, paying particular attention to agents embedded in sensor motes.",
"title": ""
},
{
"docid": "d54e3e53b6aa5fad2949348ebe1523d2",
"text": "Optimizing the operation of cooperative multi-robot systems that can cooperatively act in large and complex environments has become an important focal area of research. This issue is motivated by many applications involving a set of cooperative robots that have to decide in a decentralized way how to execute a large set of tasks in partially observable and uncertain environments. Such decision problems are encountered while developing exploration rovers, team of patrolling robots, rescue-robot colonies, mine-clearance robots, etc. In this chapter, we introduce problematics related to the decentralized control of multi-robot systems. We first describe some applicative domains and review the main characteristics of the decision problems the robots must deal with. Then, we review some existing approaches to solve problems of multiagent decentralized control in stochastic environments. We present the Decentralized Markov Decision Processes and discuss their applicability to real-world multi-robot applications. Then, we introduce OC-DEC-MDPs and 2V-DEC-MDPs which have been developed to increase the applicability of DEC-MDPs.",
"title": ""
},
{
"docid": "0994f1ccf2f0157554c93d7814772200",
"text": "Botanical insecticides presently play only a minor role in insect pest management and crop protection; increasingly stringent regulatory requirements in many jurisdictions have prevented all but a handful of botanical products from reaching the marketplace in North America and Europe in the past 20 years. Nonetheless, the regulatory environment and public health needs are creating opportunities for the use of botanicals in industrialized countries in situations where human and animal health are foremost--for pest control in and around homes and gardens, in commercial kitchens and food storage facilities and on companion animals. Botanicals may also find favour in organic food production, both in the field and in controlled environments. In this review it is argued that the greatest benefits from botanicals might be achieved in developing countries, where human pesticide poisonings are most prevalent. Recent studies in Africa suggest that extracts of locally available plants can be effective as crop protectants, either used alone or in mixtures with conventional insecticides at reduced rates. These studies suggest that indigenous knowledge and traditional practice can make valuable contributions to domestic food production in countries where strict enforcement of pesticide regulations is impractical.",
"title": ""
},
{
"docid": "63cfadd9a71aaa1cbe1ead79f943f83c",
"text": "Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.",
"title": ""
},
{
"docid": "a3e1eb38273f67a283063bce79b20b9d",
"text": "In this article, we examine the impact of digital screen devices, including television, on cognitive development. Although we know that young infants and toddlers are using touch screen devices, we know little about their comprehension of the content that they encounter on them. In contrast, research suggests that children begin to comprehend child-directed television starting at ∼2 years of age. The cognitive impact of these media depends on the age of the child, the kind of programming (educational programming versus programming produced for adults), the social context of viewing, as well the particular kind of interactive media (eg, computer games). For children <2 years old, television viewing has mostly negative associations, especially for language and executive function. For preschool-aged children, television viewing has been found to have both positive and negative outcomes, and a large body of research suggests that educational television has a positive impact on cognitive development. Beyond the preschool years, children mostly consume entertainment programming, and cognitive outcomes are not well explored in research. The use of computer games as well as educational computer programs can lead to gains in academically relevant content and other cognitive skills. This article concludes by identifying topics and goals for future research and provides recommendations based on current research-based knowledge.",
"title": ""
},
{
"docid": "c3c5168b77603b4bb548e1b8f7d9af50",
"text": "The concept of agile domain name system (DNS) refers to dynamic and rapidly changing mappings between domain names and their Internet protocol (IP) addresses. This empirical paper evaluates the bias from this kind of agility for DNS-based graph theoretical data mining applications. By building on two conventional metrics for observing malicious DNS agility, the agility bias is observed by comparing bipartite DNS graphs to different subgraphs from which vertices and edges are removed according to two criteria. According to an empirical experiment with two longitudinal DNS datasets, irrespective of the criterion, the agility bias is observed to be severe particularly regarding the effect of outlying domains hosted and delivered via content delivery networks and cloud computing services. With these observations, the paper contributes to the research domains of cyber security and DNS mining. In a larger context of applied graph mining, the paper further elaborates the practical concerns related to the learning of large and dynamic bipartite graphs.",
"title": ""
},
{
"docid": "228d7fa684e1caf43769fa13818b938f",
"text": "Optimal tuning of proportional-integral-derivative (PID) controller parameters is necessary for the satisfactory operation of automatic voltage regulator (AVR) system. This study presents a tuning fuzzy logic approach to determine the optimal PID controller parameters in AVR system. The developed fuzzy system can give the PID parameters on-line for different operating conditions. The suitability of the proposed approach for PID controller tuning has been demonstrated through computer simulations in an AVR system.",
"title": ""
},
{
"docid": "cf7fc0d2f9d2d78b1777254e37d6ceb7",
"text": "Nanoscale drug delivery systems using liposomes and nanoparticles are emerging technologies for the rational delivery of chemotherapeutic drugs in the treatment of cancer. Their use offers improved pharmacokinetic properties, controlled and sustained release of drugs and, more importantly, lower systemic toxicity. The commercial availability of liposomal Doxil and albumin-nanoparticle-based Abraxane has focused attention on this innovative and exciting field. Recent advances in liposome technology offer better treatment of multidrug-resistant cancers and lower cardiotoxicity. Nanoparticles offer increased precision in chemotherapeutic targeting of prostate cancer and new avenues for the treatment of breast cancer. Here we review current knowledge on the two technologies and their potential applications to cancer treatment.",
"title": ""
},
{
"docid": "577857c80b11e2e2451d00f9b1f37c85",
"text": "Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top–down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged—by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging “real-world neuroscientific” approaches. These approaches differ in their principal aims, assumptions, or even definitions of “real-world neuroscience” research. Here, we showcase the commonalities and distinctive features of the different “real-world neuroscience” approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.",
"title": ""
},
{
"docid": "b1ef75c4a0dc481453fb68e94ec70cdc",
"text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, is a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.",
"title": ""
},
{
"docid": "e44fefc20ed303064dabff3da1004749",
"text": "Printflatables is a design and fabrication system for human-scale, functional and dynamic inflatable objects. We use inextensible thermoplastic fabric as the raw material with the key principle of introducing folds and thermal sealing. Upon inflation, the sealed object takes the expected three dimensional shape. The workflow begins with the user specifying an intended 3D model which is decomposed to two dimensional fabrication geometry. This forms the input for a numerically controlled thermal contact iron that seals layers of thermoplastic fabric. In this paper, we discuss the system design in detail, the pneumatic primitives that this technique enables and merits of being able to make large, functional and dynamic pneumatic artifacts. We demonstrate the design output through multiple objects which could motivate fabrication of inflatable media and pressure-based interfaces.",
"title": ""
},
{
"docid": "78bf0b1d4065fd0e1740589c4e060c70",
"text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infin)and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.",
"title": ""
},
{
"docid": "94d3d6c32b5c0f08722b2ecbe6692969",
"text": "A 64% instantaneous bandwidth scalable turnstile-based orthomode transducer to be used in the so-called extended C-band satellite link is presented. The proposed structure overcomes the current practical bandwidth limitations by adding a single-step widening at the junction of the four output rectangular waveguides. This judicious modification, together with the use of reduced-height waveguides and E-plane bends and power combiners, enables to approach the theoretical structure bandwidth limit with a simple, scalable and compact design. The presented orthomode transducer architecture exhibits a return loss better than 25 dB, an isolation between rectangular ports better than 50 dB and a transmission loss less than 0.04 dB in the 3.6-7 GHz range, which represents state-of-the-art achievement in terms of bandwidth.",
"title": ""
},
{
"docid": "5ddd9074ed765d0f2dc44a4fd9632fe5",
"text": "The complexity in electrical power systems is continuously increasing due to its advancing distribution. This affects the topology of the grid infrastructure as well as the IT-infrastructure, leading to various heterogeneous systems, data models, protocols, and interfaces. This in turn raises the need for appropriate processes and tools that facilitate the management of the systems architecture on different levels and from different stakeholders’ view points. In order to achieve this goal, a common understanding of architecture elements and means of classification shall be gained. The Smart Grid Architecture Model (SGAM) proposed in context of the European standardization mandate M/490 provides a promising basis for domainspecific architecture models. The idea of following a Model-Driven-Architecture (MDA)-approach to create such models, including requirements specification based on Smart Grid use cases, is detailed in this contribution. The SGAM-Toolbox is introduced as tool-support for the approach and its utility is demonstrated by two real-world case studies.",
"title": ""
}
] | scidocsrr |
4ff724ea016ef2a5bbf496abbe3bcd45 | Implications of Placebo and Nocebo Effects for Clinical Practice: Expert Consensus. | [
{
"docid": "d501d2758e600c307e41a329222bf7d6",
"text": "Placebo effects are beneficial effects that are attributable to the brain–mind responses to the context in which a treatment is delivered rather than to the specific actions of the drug. They are mediated by diverse processes — including learning, expectations and social cognition — and can influence various clinical and physiological outcomes related to health. Emerging neuroscience evidence implicates multiple brain systems and neurochemical mediators, including opioids and dopamine. We present an empirical review of the brain systems that are involved in placebo effects, focusing on placebo analgesia, and a conceptual framework linking these findings to the mind–brain processes that mediate them. This framework suggests that the neuropsychological processes that mediate placebo effects may be crucial for a wide array of therapeutic approaches, including many drugs.",
"title": ""
}
] | [
{
"docid": "396dd0517369d892d249bb64fa410128",
"text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148. Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "02a6e024c1d318862ad4c17b9a56ca36",
"text": "Artificial food colors (AFCs) have not been established as the main cause of attention-deficit hyperactivity disorder (ADHD), but accumulated evidence suggests that a subgroup shows significant symptom improvement when consuming an AFC-free diet and reacts with ADHD-type symptoms on challenge with AFCs. Of children with suspected sensitivities, 65% to 89% reacted when challenged with at least 100 mg of AFC. Oligoantigenic diet studies suggested that some children in addition to being sensitive to AFCs are also sensitive to common nonsalicylate foods (milk, chocolate, soy, eggs, wheat, corn, legumes) as well as salicylate-containing grapes, tomatoes, and orange. Some studies found \"cosensitivity\" to be more the rule than the exception. Recently, 2 large studies demonstrated behavioral sensitivity to AFCs and benzoate in children both with and without ADHD. A trial elimination diet is appropriate for children who have not responded satisfactorily to conventional treatment or whose parents wish to pursue a dietary investigation.",
"title": ""
},
{
"docid": "92d858331c4da840c0223a7031a02280",
"text": "The advent use of new online social media such as articles, blogs, message boards, news channels, and in general Web content has dramatically changed the way people look at various things around them. Today, it’s a daily practice for many people to read news online. People's perspective tends to undergo a change as per the news content they read. The majority of the content that we read today is on the negative aspects of various things e.g. corruption, rapes, thefts etc. Reading such news is spreading negativity amongst the people. Positive news seems to have gone into a hiding. The positivity surrounding the good news has been drastically reduced by the number of bad news. This has made a great practical use of Sentiment Analysis and there has been more innovation in this area in recent days. Sentiment analysis refers to a broad range of fields of text mining, natural language processing and computational linguistics. It traditionally emphasizes on classification of text document into positive and negative categories. Sentiment analysis of any text document has emerged as the most useful application in the area of sentiment analysis. The objective of this project is to provide a platform for serving good news and create a positive environment. This is achieved by finding the sentiments of the news articles and filtering out the negative articles. This would enable us to focus only on the good news which will help spread positivity around and would allow people to think positively.",
"title": ""
},
{
"docid": "73b150681d7de50ada8e046a3027085f",
"text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.",
"title": ""
},
{
"docid": "72939e7f99727408acfa3dd3977b38ad",
"text": "We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail.",
"title": ""
},
{
"docid": "61dcc07734c98bf0ad01a98fe0c55bf4",
"text": "The system includes terminal fingerprint acquisitio n module and attendance module. It can realize automatically such functions as information acquisi tion of fingerprint, processing, and wireless trans mission, fingerprint matching and making an attendance repor t. After taking the attendance, this system sends t he attendance of every student to their parent’s mobil e through GSM and also stored the attendance of res pective student to calculate the percentage of attendance a d alerts to class in charge. Attendance system fac ilitates access to the attendance of a particular student in a particular class. This system eliminates the nee d for stationary materials and personnel for the keeping of records and efforts of class in charge.",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "1306d2579c8af0af65805da887d283b0",
"text": "Traceability allows tracking products through all stages of a supply chain, which is crucial for product quality control. To provide accountability and forensic information, traceability information must be secured. This is challenging because traceability systems often must adapt to changes in regulations and to customized traceability inspection processes. OriginChain is a real-world traceability system using a blockchain. Blockchains are an emerging data storage technology that enables new forms of decentralized architectures. Components can agree on their shared states without trusting a central integration point. OriginChain’s architecture provides transparent tamper-proof traceability information, automates regulatory compliance checking, and enables system adaptability.",
"title": ""
},
{
"docid": "62425652b113c72c668cf9c73b7c8480",
"text": "Knowledge graph (KG) completion aims to fill the missing facts in a KG, where a fact is represented as a triple in the form of (subject, relation, object). Current KG completion models compel twothirds of a triple provided (e.g., subject and relation) to predict the remaining one. In this paper, we propose a new model, which uses a KGspecific multi-layer recurrent neutral network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, our model is enabled by the sequential characteristic and thus capable of predicting the whole triples only given one entity. Our experiments demonstrated that our model achieved promising performance on this new triple prediction task.",
"title": ""
},
{
"docid": "c5f3d1465ad3cec4578fe64e57caee66",
"text": "Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.",
"title": ""
},
{
"docid": "717489cca3ec371bfcc66efd1e6c1185",
"text": "This paper presents an algorithm for calibrating strapdown magnetometers in the magnetic field domain. In contrast to the traditional method of compass swinging, which computes a series of heading correction parameters and, thus, is limited to use with two-axis systems, this algorithm estimates magnetometer output errors directly. Therefore, this new algorithm can be used to calibrate a full three-axis magnetometer triad. The calibration algorithm uses an iterated, batch least squares estimator which is initialized using a novel two-step nonlinear estimator. The algorithm is simulated to validate convergence characteristics and further validated on experimental data collected using a magnetometer triad. It is shown that the post calibration residuals are small and result in a system with heading errors on the order of 1 to 2 degrees.",
"title": ""
},
{
"docid": "33da6c0dcc2e62059080a4b1d220ef8b",
"text": "We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs. We show that under certain conditions, any feature generated by such a network is approximately invariant to permutations and stable to graph manipulations. Numerical results demonstrate competitive performance on relevant datasets.",
"title": ""
},
{
"docid": "db225a91f1730d69aaf47a7c1b20dbc4",
"text": "We describe how malicious customers can attack the availability of Content Delivery Networks (CDNs) by creating forwarding loops inside one CDN or across multiple CDNs. Such forwarding loops cause one request to be processed repeatedly or even indefinitely, resulting in undesired resource consumption and potential Denial-of-Service attacks. To evaluate the practicality of such forwarding-loop attacks, we examined 16 popular CDN providers and found all of them are vulnerable to some form of such attacks. While some CDNs appear to be aware of this threat and have adopted specific forwarding-loop detection mechanisms, we discovered that they can all be bypassed with new attack techniques. Although conceptually simple, a comprehensive defense requires collaboration among all CDNs. Given that hurdle, we also discuss other mitigations that individual CDN can implement immediately. At a higher level, our work underscores the hazards that can arise when a networked system provides users with control over forwarding, particularly in a context that lacks a single point of administrative control.",
"title": ""
},
{
"docid": "45372f5c198270179a30e3ea46a9d44d",
"text": "OBJECTIVE\nThe objective of this study was to investigate the relationship between breakfast type, energy intake and body mass index (BMI). We hypothesized not only that breakfast consumption itself is associated with BMI, but that the type of food eaten at breakfast also affects BMI.\n\n\nMETHODS\nData from the Third National Health and Nutrition Examination Survey (NHANES III), a large, population-based study conducted in the United States from 1988 to 1994, were analyzed for breakfast type, total daily energy intake, and BMI. The analyzed breakfast categories were \"Skippers,\" \"Meat/eggs,\" \"Ready-to-eat cereal (RTEC),\" \"Cooked cereal,\" \"Breads,\" \"Quick Breads,\" \"Fruits/vegetables,\" \"Dairy,\" \"Fats/sweets,\" and \"Beverages.\" Analysis of covariance was used to estimate adjusted mean body mass index (BMI) and energy intake (kcal) as dependent variables. Covariates included age, gender, race, smoking, alcohol intake, physical activity and poverty index ratio.\n\n\nRESULTS\nSubjects who ate RTEC, Cooked cereal, or Quick Breads for breakfast had significantly lower BMI compared to Skippers and Meat and Egg eaters (p < or = 0.01). Breakfast skippers and fruit/vegetable eaters had the lowest daily energy intake. The Meat and Eggs eaters had the highest daily energy intake and one of the highest BMIs.\n\n\nCONCLUSIONS\nThis analysis provides evidence that skipping breakfast is not an effective way to manage weight. Eating cereal (ready-to-eat or cooked cereal) or quick breads for breakfast is associated with significantly lower body mass index compared to skipping breakfast or eating meats and/or eggs for breakfast.",
"title": ""
},
{
"docid": "61406f27199acc5f034c2721d66cda89",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
{
"docid": "5bf9aeb37fc1a82420b2ff4136f547d0",
"text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.",
"title": ""
},
{
"docid": "e144d8c0f046ad6cd2e5c71844b2b532",
"text": "Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.",
"title": ""
},
{
"docid": "57e2adea74edb5eaf5b2af00ab3c625e",
"text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.",
"title": ""
},
{
"docid": "b1b56020802d11d1f5b2badb177b06b9",
"text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.",
"title": ""
}
] | scidocsrr |
59791b2be7d88731827f1eb8c9ff2b9a | Learning and example selection for object and pattern detection | [
{
"docid": "783d7251658f9077e05a7b1b9bd60835",
"text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.",
"title": ""
},
{
"docid": "02d7f821a984f96f92ac0547c51fa98d",
"text": "We address the problem of learning an unknown function by pu tting together several pieces of information (hints) that we know about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All the hints are represented to the learning process by examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules; fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique that we may choose to use.",
"title": ""
}
] | [
{
"docid": "e99908d93bcd07da7323016aa4570b0f",
"text": "In this paper we review masterpieces of curved crease folding, the deployed design methods and geometric studies on this special kind of paper folding. Our goal is to make this work and its techniques accessible to enable further development. By exploring masterpieces of the past and present of this still underexplored field, this paper aims to contribute to the development of novel design methods to achieve shell structures and deployable structures that can accommodate structural properties and design intention for a wide range of materials.",
"title": ""
},
{
"docid": "107de9a30eb4a76385f125d3e857cc79",
"text": "We define a simply typed, non-deterministic lambda-calculus where isomorphic types are equated. To this end, an equivalence relation is settled at the term level. We then provide a proof of strong normalisation modulo equivalence. Such a proof is a non-trivial adaptation of the reducibility method.",
"title": ""
},
{
"docid": "40746bfccf801222d99151dc4b4cb7e8",
"text": "Fingerprints are the oldest and most widely used form of biometric identification. Everyone is known to have unique, immutable fingerprints. As most Automatic Fingerprint Recognition Systems are based on local ridge features known as minutiae, marking minutiae accurately and rejecting false ones is very important. However, fingerprint images get degraded and corrupted due to variations in skin and impression conditions. Thus, image enhancement techniques are employed prior to minutiae extraction. A critical step in automatic fingerprint matching is to reliably extract minutiae from the input fingerprint images. This paper presents a review of a large number of techniques present in the literature for extracting fingerprint minutiae. The techniques are broadly classified as those working on binarized images and those that work on gray scale images directly.",
"title": ""
},
{
"docid": "a6c95ca30680ed1218cec87ab7352803",
"text": "In this paper, we have presented a comprehensive multi-view geometry library, Theia, that focuses on large-scale SfM. In addition to state-of-the-art scalable SfM pipelines, the library provides numerous tools that are useful for students, researchers, and industry experts in the field of multi-view geometry. Theia contains clean code that is well documented (with code comments and the website) and easy to extend. The modular design allows for users to easily implement and experiment with new algorithms within our current pipeline without having to implement a full end-to-end SfM pipeline themselves. Theia has already gathered a large number of diverse users from universities, startups, and industry and we hope to continue to gather users and active contributors from the open-source community.",
"title": ""
},
{
"docid": "9deea0426461c72df7ee56353ecf1d88",
"text": "This paper presents a novel approach to visualizing the time structure of musical waveforms. The acoustic similarity between any two instants of an audio recording is displayed in a static 2D representation, which makes structural and rhythmic characteristics visible. Unlike practically all prior work, this method characterizes self-similarity rather than specific audio attributes such as pitch or spectral features. Examples are presented for classical and popular music.",
"title": ""
},
{
"docid": "0444b38c0d20c999df4cb1294b5539c3",
"text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. In the proposed fully redundant adder (VS semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest high performance similar works and outperform all the previous alike adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has lead to a VLSI-friendly recursive partial product reduction tree. Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using TSMC 0.13mm standard CMOS process under various time constrains. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df5c384e9fb6ba57a5bbd7fef44ce5f0",
"text": "CONTEXT\nPressure ulcers are common in a variety of patient settings and are associated with adverse health outcomes and high treatment costs.\n\n\nOBJECTIVE\nTo systematically review the evidence examining interventions to prevent pressure ulcers.\n\n\nDATA SOURCES AND STUDY SELECTION\nMEDLINE, EMBASE, and CINAHL (from inception through June 2006) and Cochrane databases (through issue 1, 2006) were searched to identify relevant randomized controlled trials (RCTs). UMI Proquest Digital Dissertations, ISI Web of Science, and Cambridge Scientific Abstracts were also searched. All searches used the terms pressure ulcer, pressure sore, decubitus, bedsore, prevention, prophylactic, reduction, randomized, and clinical trials. Bibliographies of identified articles were further reviewed.\n\n\nDATA SYNTHESIS\nFifty-nine RCTs were selected. Interventions assessed in these studies were grouped into 3 categories, ie, those addressing impairments in mobility, nutrition, or skin health. Methodological quality for the RCTs was variable and generally suboptimal. Effective strategies that addressed impaired mobility included the use of support surfaces, mattress overlays on operating tables, and specialized foam and specialized sheepskin overlays. While repositioning is a mainstay in most pressure ulcer prevention protocols, there is insufficient evidence to recommend specific turning regimens for patients with impaired mobility. In patients with nutritional impairments, dietary supplements may be beneficial. The incremental benefit of specific topical agents over simple moisturizers for patients with impaired skin health is unclear.\n\n\nCONCLUSIONS\nGiven current evidence, using support surfaces, repositioning the patient, optimizing nutritional status, and moisturizing sacral skin are appropriate strategies to prevent pressure ulcers. Although a number of RCTs have evaluated preventive strategies for pressure ulcers, many of them had important methodological limitations. There is a need for well-designed RCTs that follow standard criteria for reporting nonpharmacological interventions and that provide data on cost-effectiveness for these interventions.",
"title": ""
},
{
"docid": "60465268d2ede9a7d8b374ac05df0d46",
"text": "Nobody likes performance reviews. Subordinates are terrified they'll hear nothing but criticism. Bosses think their direct reports will respond to even the mildest criticism with anger or tears. The result? Everyone keeps quiet. That's unfortunate, because most people need help figuring out how to improve their performance and advance their careers. This fear of feedback doesn't come into play just during annual reviews. At least half the executives with whom the authors have worked never ask for feedback. Many expect the worst: heated arguments, even threats of dismissal. So rather than seek feedback, people try to guess what their bosses are thinking. Fears and assumptions about feedback often manifest themselves in psychologically maladaptive behaviors such as procrastination, denial, brooding, jealousy, and self-sabotage. But there's hope, say the authors. Those who learn adaptive techniques can free themselves from destructive responses. They'll be able to deal with feedback better if they acknowledge negative emotions, reframe fear and criticism constructively, develop realistic goals, create support systems, and reward themselves for achievements along the way. Once you've begun to alter your maladaptive behaviors, you can begin seeking regular feedback from your boss. The authors take you through four steps for doing just that: self-assessment, external assessment, absorbing the feedback, and taking action toward change. Organizations profit when employees ask for feedback and deal well with criticism. Once people begin to know how they are doing relative to management's priorities, their work becomes better aligned with organizational goals. What's more, they begin to transform a feedback-averse environment into a more honest and open one, in turn improving performance throughout the organization.",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "19d935568542af1ab33b532a64982563",
"text": "We consider the problem of learning causal directed acyclic graphs from an observational joint distribution. One can use these graphs to predict the outcome of interventional experiments, from which data are often not available. We show that if the observational distribution follows a structural equation model with an additive noise structure, the directed acyclic graph becomes identifiable from the distribution under mild conditions. This constitutes an interesting alternative to traditional methods that assume faithfulness and identify only the Markov equivalence class of the graph, thus leaving some edges undirected. We provide practical algorithms for finitely many samples, RESIT (regression with subsequent independence test) and two methods based on an independence score. We prove that RESIT is correct in the population setting and provide an empirical evaluation.",
"title": ""
},
{
"docid": "6d4cd80341c429ecaaccc164b1bde5f9",
"text": "One hundred and two olive RAPD profiles were sampled from all around the Mediterranean Basin. Twenty four clusters of RAPD profiles were shown in the dendrogram based on the Ward’s minimum variance algorithm using chi-square distances. Factorial discriminant analyses showed that RAPD profiles were correlated with the use of the fruits and the country or region of origin of the cultivars. This suggests that cultivar selection has occurred in different genetic pools and in different areas. Mitochondrial DNA RFLP analyses were also performed. These mitotypes supported the conclusion also that multilocal olive selection has occurred. This prediction for the use of cultivars will help olive growers to choose new foreign cultivars for testing them before an eventual introduction if they are well adapted to local conditions.",
"title": ""
},
{
"docid": "e77cf8938714824d46cfdbdb1b809f93",
"text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.",
"title": ""
},
{
"docid": "a9931e49d853b5c35735bb7770ceeee1",
"text": "Human activity recognition involves classifying times series data, measured at inertial sensors such as accelerometers or gyroscopes, into one of pre-defined actions. Recently, convolutional neural network (CNN) has established itself as a powerful technique for human activity recognition, where convolution and pooling operations are applied along the temporal dimension of sensor signals. In most of existing work, 1D convolution operation is applied to individual univariate time series, while multi-sensors or multi-modality yield multivariate time series. 2D convolution and pooling operations are applied to multivariate time series, in order to capture local dependency along both temporal and spatial domains for uni-modal data, so that it achieves high performance with less number of parameters compared to 1D operation. However for multi-modal data existing CNNs with 2D operation handle different modalities in the same way, which cause interferences between characteristics from different modalities. In this paper, we present CNNs (CNN-pf and CNN-pff), especially CNN-pff, for multi-modal data. We employ both partial weight sharing and full weight sharing for our CNN models in such a way that modality-specific characteristics as well as common characteristics across modalities are learned from multi-modal (or multi-sensor) data and are eventually aggregated in upper layers. Experiments on benchmark datasets demonstrate the high performance of our CNN models, compared to state of the arts methods.",
"title": ""
},
{
"docid": "fb0b06eb6238c008bef7d3b2e9a80792",
"text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.",
"title": ""
},
{
"docid": "20a0915c04bcb678461de86c7f8f0ba1",
"text": "Context: The security goals of a software system provide a foundation for security requirements engineering. Identifying security goals is a process of iteration and refinement, leveraging the knowledge and expertise of the analyst to secure not only the core functionality but the security mechanisms as well. Moreover, a comprehensive security plan should include goals for not only preventing a breach, but also for detecting and appropriately responding in case a breach does occur. Goal: The objective of this research is to support analysts in security requirements engineering by providing a framework that supports a systematic and comprehensive discovery of security goals for a software system. Method: We develop a framework, Discovering Goals for Security (DIGS), that models the key entities in information security, including assets and security goals. We systematically develop a set of security goal patterns that capture multiple dimensions of security for assets. DIGS explicitly captures the relations and assumptions that underlie security goals to elicit implied goals. We map the goal patterns to NIST controls to help in operationalizing the goals. We evaluate DIGS via a controlled experiment where 28 participants analyzed systems from mobile banking and human resource management domains. Results: Participants considered security goals commensurate to the knowledge available to them. Although the overall recall was low given the empirical constraints, participants using DIGS identified more implied goals and felt more confident in completing the task. Conclusion: Explicitly providing the additional knowledge for the identification of implied security goals significantly increased the chances of discovering such goals, thereby improving coverage of stakeholder security requirements, even if they are unstated.",
"title": ""
},
{
"docid": "a114801b4a00d024d555378ffa7cc583",
"text": "UNLABELLED\nRectal prolapse is the partial or complete protrusion of the rectal wall into the anal canal. The most common etiology consists in the insufficiency of the diaphragm of the lesser pelvis and anal sphincter apparatus. Methods of surgical treatment involve perineal or abdominal approach surgical procedures. The aim of the study was to present the method of surgical rectal prolapse treatment, according to Mikulicz's procedure by means of the perineal approach, based on our own experience and literature review.\n\n\nMATERIAL AND METHODS\nThe study group comprised 16 patients, including 14 women and 2 men, aged between 38 and 82 years admitted to the department, due to rectal prolapse, during the period between 2000 and 2012. Nine female patients, aged between 68 and 82 years (mean age-76.3 years) with fullthickness rectal prolapse underwent surgery by means of Mikulicz's method with levator muscle and external anal sphincter plasty. The most common comorbidities amongst patients operated by means of Mikulicz's method included cardiovascular and metabolic diseases.\n\n\nRESULTS\nMean hospitalization was 14.4 days (ranging between 12 and 17 days). Despite advanced age and poor general condition of the patients, complications during the perioperative period were not observed. Good early and late functional results were achieved. The degree of anal sphincter continence was determined 6-8 weeks after surgery showing significant improvement, as compared to results obtained prior to surgery. One case of recurrence consisting in mucosal prolapse was noted, being treated surgically by means of Whitehead's method. Good treatment results were observed.\n\n\nCONCLUSION\nTransperineal rectosigmoidectomy using Mikulicz's method with levator muscle and external anal sphincter plasty seems to be an effective, minimally invasive and relatively safe procedure that does not require general anesthesia. It is recommended in case of patients with significant comorbidities and high surgical risk.",
"title": ""
},
{
"docid": "02988a99b7cb966ec5c63c51d1aa57d8",
"text": "Data embedding is used in many machine learning applications to create low-dimensional feature representations, which preserves the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and linkage structure provide important cues for creating a unified feature representation of the underlying network. In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function, that reflects both the local and global network structures, and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm as compared to the state-of-the-art embedding methods.",
"title": ""
},
{
"docid": "b9ea38ab2c6c68af37a46d92c8501b68",
"text": "In this paper we introduce a gamification model for encouraging sustainable multi-modal urban travel in modern European cities. Our aim is to provide a mechanism that encourages users to reflect on their current travel behaviours and to engage in more environmentally friendly activities that lead to the formation of sustainable, long-term travel behaviours. To achieve this our users track their own behaviours, set goals, manage their progress towards those goals, and respond to challenges. Our approach uses a point accumulation and level achievement metaphor to abstract from the underlying specifics of individual behaviours and goals to allow an Simon Wells University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: [email protected] Henri Kotkanen University of Helsinki, Department of Computer Science, P.O. 68 (Gustaf Hllstrmin katu 2b), FI00014 UNIVERSITY OF HELSINKI, FINLAND e-mail: [email protected] Michael Schlafli University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: [email protected] Silvia Gabrielli CREATE-NET Via alla Cascata 56/D Povo 38123 Trento Italy e-mail: [email protected] Judith Masthoff University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: [email protected] Antti Jylhä University of Helsinki, Department of Computer Science, P.O. 68 (Gustaf Hllstrmin katu 2b), FI00014 UNIVERSITY OF HELSINKI, FINLAND e-mail: [email protected] Paula Forbes University of Aberdeen, Computing Science, Meston building, Meston Walk, Aberdeen, AB24 3UE e-mail: [email protected]",
"title": ""
},
{
"docid": "4d15474b4ea04b5585f9c52db8325135",
"text": "Among the tests of a leader, few are more challenging-and more painful-than recovering from a career catastrophe. Most fallen leaders, in fact, don't recover. Still, two decades of consulting experience, scholarly research, and their own personal experiences have convinced the authors that leaders can triumph over tragedy--if they do so deliberately. Great business leaders have much in common with the great heroes of universal myth, and they can learn to overcome profound setbacks by thinking in heroic terms. First, they must decide whether or not to fight back. Either way, they must recruit others into their battle. They must then take steps to recover their heroic status, in the process proving, both to others and to themselves, that they have the mettle necessary to recover their heroic mission. Bernie Marcus exemplifies this process. Devastated after Sandy Sigoloff ired him from Handy Dan, Marcus decided to forgo the distraction of litigation and instead make the marketplace his batttleground. Drawing from his network of carefully nurtured relationships with both close and more distant acquaintances, Marcus was able to get funding for a new venture. He proved that he had the mettle, and recovered his heroic status, by building Home Depot, whose entrepreneurial spirit embodied his heroic mission. As Bank One's Jamie Dimon, J.Crew's Mickey Drexler, and even Jimmy Carter, Martha Stewart, and Michael Milken have shown, stunning comebacks are possible in all industries and walks of life. Whatever the cause of your predicament, it makes sense to get your story out. The alternative is likely to be long-lasting unemployment. If the facts of your dismissal cannot be made public because they are damning, then show authentic remorse. The public is often enormously forgiving when it sees genuine contrition and atonement.",
"title": ""
},
{
"docid": "97dd64ead9e4e88c5a9d5c70fef75712",
"text": "This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.",
"title": ""
}
] | scidocsrr |
be7390a0d3790cb17b54f1b8b45dae52 | Terminology Extraction: An Analysis of Linguistic and Statistical Approaches | [
{
"docid": "8a043a1ac74da0ec0cd55d1c8b658666",
"text": "Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In (Brill 1992), a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a small number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that stochastic taggers are currently unable to express. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.",
"title": ""
}
] | [
{
"docid": "b537af893b84a4c41edb829d45190659",
"text": "We seek a complete description for the neurome of the Drosophila, which involves tracing more than 20,000 neurons. The currently available tracings are sensitive to background clutter and poor contrast of the images. In this paper, we present Tree2Tree2, an automatic neuron tracing algorithm to segment neurons from 3D confocal microscopy images. Building on our previous work in segmentation [1], this method uses an adaptive initial segmentation to detect the neuronal portions, as opposed to a global strategy that often results in under segmentation. In order to connect the disjoint portions, we use a technique called Path Search, which is based on a shortest path approach. An intelligent pruning step is also implemented to delete undesired branches. Tested on 3D confocal microscopy images of GFP labeled Drosophila neurons, the visual and quantitative results suggest that Tree2Tree2 is successful in automatically segmenting neurons in images plagued by background clutter and filament discontinuities.",
"title": ""
},
{
"docid": "de71bef095a0ef7fb4fb1b10d4136615",
"text": "Active learning—a class of algorithms that iteratively searches for the most informative samples to include in a training dataset—has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection is still largely unexplored as determining informativeness of an object-location hypothesis is more difficult. In this paper, we address this issue and present two metrics for measuring the informativeness of an object hypothesis, which allow us to leverage active learning to reduce the amount of annotated data needed to achieve a target object detection performance. Our first metric measures “localization tightness” of an object hypothesis, which is based on the overlapping ratio between the region proposal and the final prediction. Our second metric measures “localization stability” of an object hypothesis, which is based on the variation of predicted object locations when input images are corrupted by noise. Our experimental results show that by augmenting a conventional active-learning algorithm designed for classification with the proposed metrics, the amount of labeled training data required can be reduced up to 25%. Moreover, on PASCAL 2007 and 2012 datasets our localization-stability method has an average relative improvement of 96.5% and 81.9% over the base-line method using classification only. Asian Conference on Computer Vision This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2018 201 Broadway, Cambridge, Massachusetts 02139 Localization-Aware Active Learning for Object",
"title": ""
},
{
"docid": "1a8e9b74d4c1a32299ca08e69078c70c",
"text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two segments of text, even though the similar context is expressed using different words. The textual segments are word phrases, sentences, paragraphs or documents. The similarity can be measured using lexical, syntactic and semantic information embedded in the sentences. The STS task in SemEval workshop is viewed as a regression problem, where real-valued output is clipped to the range 0-5 on a sentence pair. In this paper, empirical evaluations are carried using lexical, syntactic and semantic features on STS 2016 dataset. A new syntactic feature, Phrase Entity Alignment (PEA) is proposed. A phrase entity is a conceptual unit in a sentence with a subject or an object and its describing words. PEA aligns phrase entities present in the sentences based on their similarity scores. STS score is measured by combing the similarity scores of all aligned phrase entities. The impact of PEA on semantic textual equivalence is depicted using Pearson correlation between system generated scores and the human annotations. The proposed system attains a mean score of 0.7454 using random forest regression model. The results indicate that the system using the lexical, syntactic and semantic features together with PEA feature perform comparably better than existing systems.",
"title": ""
},
{
"docid": "9c97262605b3505bbc33c64ff64cfcd5",
"text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.",
"title": ""
},
{
"docid": "df8f22d84cc8f6f38de90c2798889051",
"text": "The large amount of videos popping up every day, make it is more and more critical that key information within videos can be extracted and understood in a very short time. Video summarization, the task of finding the smallest subset of frames, which still conveys the whole story of a given video, is thus of great significance to improve efficiency of video understanding. In this paper, we propose a novel Dilated Temporal Relational Generative Adversarial Network (DTR-GAN) to achieve framelevel video summarization. Given a video, it can select a set of key frames, which contains the most meaningful and compact information. Specifically, DTR-GAN learns a dilated temporal relational generator and a discriminator with three-player loss in an adversarial manner. A new dilated temporal relation (DTR) unit is introduced for enhancing temporal representation capturing. The generator aims to select key frames by using DTR units to effectively exploit global multi-scale temporal context and to complement the commonly used Bi-LSTM. To ensure that the summaries capture enough key video representation from a global perspective rather than a trivial randomly shorten sequence, we present a discriminator that learns to enforce both the information completeness and compactness of summaries via a three-player loss. The three-player loss includes the generated summary loss, the random summary loss, and the real summary (ground-truth) loss, which play important roles for better regularizing the learned model to obtain useful summaries. Comprehensive experiments on two public datasets SumMe and TVSum show the superiority of our DTR-GAN over the stateof-the-art approaches.",
"title": ""
},
{
"docid": "c10bd86125db702e0839e2a3776e195b",
"text": "To solve the big topic modeling problem, we need to reduce both time and space complexities of batch latent Dirichlet allocation (LDA) algorithms. Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem. To reduce the communication complexity among processors for a better scalability, we propose a novel communication-efficient parallel topic modeling architecture based on power law, which consumes orders of magnitude less communication time when the number of topics is large. We combine the proposed communication-efficient parallel architecture with the online belief propagation (OBP) algorithm referred to as POBP for big topic modeling tasks. Extensive empirical results confirm that POBP has the following advantages to solve the big topic modeling problem: 1) high accuracy, 2) communication-efficient, 3) fast speed, and 4) constant memory usage when compared with recent state-of-the-art parallel LDA algorithms on the multi-processor architecture. Index Terms —Big topic modeling, latent Dirichlet allocation, communication complexity, multi-processor architecture, online belief propagation, power law.",
"title": ""
},
{
"docid": "d93abfdc3bc20a23e533f3ad2e30b9c9",
"text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.",
"title": ""
},
{
"docid": "93d498adaee9070ffd608c5c1fe8e8c9",
"text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.",
"title": ""
},
{
"docid": "72a6a7fe366def9f97ece6d1ddc46a2e",
"text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.",
"title": ""
},
{
"docid": "74d6c2fff4b67d05871ca0debbc4ec15",
"text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.",
"title": ""
},
{
"docid": "e68c73806392d10c3c3fd262f6105924",
"text": "Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set. In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space, and on a discretization of the action space. This fuzzy Q-iteration algorithmworks for deterministic processes, under the discounted return criterion. We prove that fuzzy Q -iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzyQ -iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated in a synchronous fashion, and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q iteration is illustrated in a two-link manipulator control problem. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd67a23b3ed7ab6d97a198b0e66a5628",
"text": "A growing number of children and adolescents are involved in resistance training in schools, fitness centers, and sports training facilities. In addition to increasing muscular strength and power, regular participation in a pediatric resistance training program may have a favorable influence on body composition, bone health, and reduction of sports-related injuries. Resistance training targeted to improve low fitness levels, poor trunk strength, and deficits in movement mechanics can offer observable health and fitness benefits to young athletes. However, pediatric resistance training programs need to be well-designed and supervised by qualified professionals who understand the physical and psychosocial uniqueness of children and adolescents. The sensible integration of different training methods along with the periodic manipulation of programs design variables over time will keep the training stimulus effective, challenging, and enjoyable for the participants.",
"title": ""
},
{
"docid": "03be8a60e1285d62c34b982ddf1bcf58",
"text": "A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.",
"title": ""
},
{
"docid": "f05f6d9eeff0b492b74e6ab18c4707ba",
"text": "Communication is an interactive, complex, structured process involving agents that are capable of drawing conclusions from the information they have available about some real-life situations. Such situations are generally characterized as being imperfect. In this paper, we aim to address learning from the perspective of the communication between agents. To learn a collection of propositions concerning some situation is to incorporate it within one's knowledge about that situation. That is, the key factor in this activity is for the goal agent, where agents may switch role if appropriate, to integrate the information offered with what it already knows. This may require a process of belief revision, which suggests that the process of incorporation of new information should be modeled nonmonotonically. We shall employ for reasoning a three-valued based nonmonotonic logic that formalizes some aspects of revisable reasoning and it is accessible to implementation. The logic is sound and complete. A theorem-prover of the logic has successfully been implemented.",
"title": ""
},
{
"docid": "9e65315d4e241dc8d4ea777247f7c733",
"text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.",
"title": ""
},
{
"docid": "04f4058d37a33245abf8ed9acd0af35d",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "0c7c179f2f86e7289910cee4ca583e4b",
"text": "This paper presents an algorithm for fingerprint image restoration using Digital Reaction-Diffusion System (DRDS). The DRDS is a model of a discrete-time discrete-space nonlinear reaction-diffusion dynamical system, which is useful for generating biological textures, patterns and structures. This paper focuses on the design of a fingerprint restoration algorithm that combines (i) a ridge orientation estimation technique using an iterative coarse-to-fine processing strategy and (ii) an adaptive DRDS having a capability of enhancing low-quality fingerprint images using the estimated ridge orientation. The phase-only image matching technique is employed for evaluating the similarity between an original fingerprint image and a restored image. The proposed algorithm may be useful for person identification applications using fingerprint images. key words: reaction-diffusion system, pattern formation, digital signal processing, digital filters, fingerprint restoration",
"title": ""
},
{
"docid": "4b3d890a8891cd8c84713b1167383f6f",
"text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.",
"title": ""
},
{
"docid": "c15369f923be7c8030cc8f2b1f858ced",
"text": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.",
"title": ""
}
] | scidocsrr |
ab5b89532a20f06508b850471a9f00a9 | A Blackboard Based Hybrid Multi-Agent System for Improving Classification Accuracy Using Reinforcement Learning Techniques | [
{
"docid": "d229c679dcd4fa3dd84c6040b95fc99c",
"text": "This paper reviews the supervised learning versions of the no-free-lunch theorems in a simpli ed form. It also discusses the signi cance of those theorems, and their relation to other aspects of supervised learning.",
"title": ""
},
{
"docid": "1f3b550487a917df3aede49b779f1e7f",
"text": "Brain tumor segmentation is an important task in medical image processing. Early diagnosis of brain tumors plays an important role in improving treatment possibilities and increases the survival rate of the patients. Manual segmentation of the brain tumors for cancer diagnosis, from large amount of MRI images generated in clinical routine, is a difficult and time consuming task. There is a need for automatic brain tumor image segmentation. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. Recently, automatic segmentation using deep learning methods proved popular since these methods achieve the state-of-the-art results and can address this problem better than other methods. Deep learning methods can also enable efficient processing and objective evaluation of the large amounts of MRI-based image data. There are number of existing review papers, focusing on traditional methods for MRI-based brain tumor image segmentation. Different than others, in this paper, we focus on the recent trend of deep learning methods in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms with a focus on recent trend of deep learning methods are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Organizing Committee of ICAFS 2016.",
"title": ""
}
] | [
{
"docid": "261930bf1b06e5c1e8cc47598e7e8a30",
"text": "Psychological First Aid (PFA) is the recommended immediate psychosocial response during crises. As PFA is now widely implemented in crises worldwide, there are increasing calls to evaluate its effectiveness. World Vision used PFA as a fundamental component of their emergency response following the 2014 conflict in Gaza. Anecdotal reports from Gaza suggest a range of benefits for those who received PFA. Though not intending to undertake rigorous research, World Vision explored learnings about PFA in Gaza through Focus Group Discussions with PFA providers, Gazan women, men and children and a Key Informant Interview with a PFA trainer. The qualitative analyses aimed to determine if PFA helped individuals to feel safe, calm, connected to social supports, hopeful and efficacious - factors suggested by the disaster literature to promote coping and recovery (Hobfoll et al., 2007). Results show positive psychosocial benefits for children, women and men receiving PFA, confirming that PFA contributed to: safety, reduced distress, ability to engage in calming practices and to support each other, and a greater sense of control and hopefulness irrespective of their adverse circumstances. The data shows that PFA formed an important part of a continuum of care to meet psychosocial needs in Gaza and served as a gateway for addressing additional psychosocial support needs. A \"whole-of-family\" approach to PFA showed particularly strong impacts and strengthened relationships. Of note, the findings from World Vision's implementation of PFA in Gaza suggests that future PFA research go beyond a narrow focus on clinical outcomes, to a wider examination of psychosocial, familial and community-based outcomes.",
"title": ""
},
{
"docid": "eaaf2e04adbc5ea81c30722c815c2a78",
"text": "One of the constraints in the design of dry switchable adhesives is the compliance trade-off: compliant structures conform better to surfaces but are limited in strength due to high stored strain energy. In this work we study the effects of bending compliance on the shear adhesion pressures of hybrid electrostatic/gecko-like adhesives of various areas. We reaffirm that normal electrostatic preload increases contact area and show that it is more effective on compliant adhesives. We also show that the gain in contact area can compensate for low shear stiffness and adhesives with high bending compliance outperform stiffer adhesives on substrates with large scale roughness.",
"title": ""
},
{
"docid": "313a902049654e951860b9225dc5f4e8",
"text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.",
"title": ""
},
{
"docid": "071b898fa3944ec0dabca317d3707217",
"text": "Objects often occlude each other in scenes; Inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.",
"title": ""
},
{
"docid": "108e4cc0358076fac20d7f9395c9f1e3",
"text": "This paper presents a novel algorithm aiming at analysis and identification of faces viewed from different poses and illumination conditions. Face analysis from a single image is performed by recovering the shape and textures parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. The shape parameters are computed from a shape error estimated by optical flow and the texture parameters are obtained from a texture error. The algorithm uses linear equations to recover the shape and texture parameters irrespective of pose and lighting conditions of the face image. Identification experiments are reported on more than 5000 images from the publicly available CMU-PIE database which includes faces viewed from 13 different poses and under 22 different illuminations. Extensive identification results are available on our web page for future comparison with novel algorithms.",
"title": ""
},
{
"docid": "b75f793f4feac0b658437026d98a1e8b",
"text": "From a certain (admittedly narrow) perspective, one of the annoying features of natural language is the ubiquitous syntactic ambiguity. For a computational model intended to assign syntactic descriptions to natural language text, this seem like a design defect. In general, when context and lexical content are taken into account, such syntactic ambiguity can be resolved: sentences used in context show, for the most part, little ambiguity. But the grammar provides many alternative analyses, and gives little guidance about resolving the ambiguity. Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example,",
"title": ""
},
{
"docid": "aa55e655c7fa8c86d189d03c01d5db87",
"text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. From an academic point of view we can state, that how these models are actually used as well as the motivations using them is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity of a systematically collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that the deeper understanding of different patterns will support method development for implementation and use.",
"title": ""
},
{
"docid": "c7db2522973605850cb97e55b32100a1",
"text": "Purpose – Based on stimulus-organism-response model, the purpose of this paper is to develop an integrated model to explore the effects of six marketing-mix components (stimuli) on consumer loyalty (response) through consumer value (organism) in social commerce (SC). Design/methodology/approach – In order to target online social buyers, a web-based survey was employed. Structural equation modeling with partial least squares (PLS) is used to analyze valid data from 599 consumers who have repurchase experience via Facebook. Findings – The results from PLS analysis show that all components of SC marketing mix (SCMM) have significant effects on SC consumer value. Moreover, SC customer value positively influences SC customer loyalty (CL). Research limitations/implications – The data for this study are collected from Facebook only and the sample size is limited; thus, replication studies are needed to improve generalizability and data representativeness of the study. Moreover, longitudinal studies are needed to verify the causality among the constructs in the proposed research model. Practical implications – SC sellers should implement more effective SCMM strategies to foster SC CL through better SCMM decisions. Social implications – The SCMM components represent the collective benefits of social interaction, exemplifying the importance of effective communication and interaction among SC customers. Originality/value – This study develops a parsimonious model to explain the over-arching effects of SCMM components on CL in SC mediated by customer value. It confirms that utilitarian, hedonic, and social values can be applied to online SC and that SCMM can be leveraged to achieve these values.",
"title": ""
},
{
"docid": "1646f7b53b999170f293d774cc3258d6",
"text": "Magnetic levitation (Maglev) is becoming a popular transportation topic all around the globe. Maglev systems have been successfully implemented in many applications such as frictionless bearings, high-speed maglev trains, and fast tool servo systems. Due to the features of the instability and nonlinearity of the maglev system, the design of controllers for a maglev system is a difficult task. Literature shows, many authors have proposed various controllers to stabilize the position of the object in the air. The main drawback of these controllers is that symmetric constraints like stability conditions, decay rate conditions, disturbance rejection and eddy-current based force due to the motion of the levitated object are not taken into consideration. In this paper, a linear matrix inequality based fuzzy controller is proposed to reduce vibration in the object. The efficacy of the proposed controller tests on a maglev system considering symmetric constraints and the eddy current based force.",
"title": ""
},
{
"docid": "95fa1dac07ce26c1ccd64a9c86c96a22",
"text": "Eyelid bags are the result of relaxation of lid structures like the skin, the orbicularis muscle, and mainly the septum, with subsequent protrusion or pseudo herniation of intraorbital fat contents. The logical treatment of baggy upper and lower eyelids should therefore include repositioning the herniated fat into the orbit and strengthening the attenuated septum in the form of a septorhaphy as a hernia repair. The preservation of orbital fat results in a more youthful appearance. The operative technique of the orbital septorhaphy is demonstrated for the upper and lower eyelid. A prospective series of 60 patients (50 upper and 90 lower blepharoplasties) with a maximum follow-up of 17 months were analyzed. Pleasing results were achieved in 56 patients. A partial recurrence was noted in 3 patients and widening of the palpebral fissure in 1 patient. Orbital septorhaphy for baggy eyelids is a rational, reliable procedure to correct the herniation of orbital fat in the upper and lower eyelids. Tightening of the orbicularis muscle and skin may be added as usual. The procedure is technically simple and without trauma to the orbital contents. The morbidity is minimal, the rate of complications is low, and the results are pleasing and reliable.",
"title": ""
},
{
"docid": "d96f7c54d669771f9fa881e97bddd5f6",
"text": "Conceptual modeling is one major topic in information systems research and becomes even more important with the arising of new software engineering principles like model driven architecture (MDA) or serviceoriented architectures (SOA). Research on conceptual modeling is characterized by a dilemma: Empirical research confirms that in practice conceptual modeling is often perceived as difficult and not done well. The application of reusable conceptual models is a promising approach to support model designers. At the same time, the IS research community claims for a sounder theoretical base for conceptual modeling. The design science research paradigm delivers a framework to fortify the theoretical foundation of research on conceptual models. We provide insights on how to achieve both, relevance and rigor, in conceptual modeling by identifying requirements for reusable conceptual models on the basis of the design science research paradigm.",
"title": ""
},
{
"docid": "cabda87958ceb0d6288c9d89fa704f74",
"text": "BACKGROUND\nThe evidence base on the prevalence of dementia is expanding rapidly, particularly in countries with low and middle incomes. A reappraisal of global prevalence and numbers is due, given the significant implications for social and public policy and planning.\n\n\nMETHODS\nIn this study we provide a systematic review of the global literature on the prevalence of dementia (1980-2009) and metaanalysis to estimate the prevalence and numbers of those affected, aged ≥60 years in 21 Global Burden of Disease regions.\n\n\nRESULTS\nAge-standardized prevalence for those aged ≥60 years varied in a narrow band, 5%-7% in most world regions, with a higher prevalence in Latin America (8.5%), and a distinctively lower prevalence in the four sub-Saharan African regions (2%-4%). It was estimated that 35.6 million people lived with dementia worldwide in 2010, with numbers expected to almost double every 20 years, to 65.7 million in 2030 and 115.4 million in 2050. In 2010, 58% of all people with dementia lived in countries with low or middle incomes, with this proportion anticipated to rise to 63% in 2030 and 71% in 2050.\n\n\nCONCLUSION\nThe detailed estimates in this study constitute the best current basis for policymaking, planning, and allocation of health and welfare resources in dementia care. The age-specific prevalence of dementia varies little between world regions, and may converge further. Future projections of numbers of people with dementia may be modified substantially by preventive interventions (lowering incidence), improvements in treatment and care (prolonging survival), and disease-modifying interventions (preventing or slowing progression). All countries need to commission nationally representative surveys that are repeated regularly to monitor trends.",
"title": ""
},
{
"docid": "732eb96d39d250e6b1355f7f4d53feed",
"text": "Determine blood type is essential before administering a blood transfusion, including in emergency situation. Currently, these tests are performed manually by technicians, which can lead to human errors. Various systems have been developed to automate these tests, but none is able to perform the analysis in time for emergency situations. This work aims to develop an automatic system to perform these tests in a short period of time, adapting to emergency situations. To do so, it uses the slide test and image processing techniques using the IMAQ Vision from National Instruments. The image captured after the slide test is processed and detects the occurrence of agglutination. Next the classification algorithm determines the blood type in analysis. Finally, all the information is stored in a database. Thus, the system allows determining the blood type in an emergency, eliminating transfusions based on the principle of universal donor and reducing transfusion reactions risks.",
"title": ""
},
{
"docid": "d94d0db91e65bde2b1918ca95cc275bb",
"text": "This study was undertaken to investigate the positive and negative effects of excessive Internet use on undergraduate students. The Internet Effect Scale (IES), especially constructed by the authors to determine these effects, consisted of seven dimensions namely: behavioral problems, interpersonal problems, educational problems, psychological problems, physical problems, Internet abuse, and positive effects. The sample consisted of 200 undergraduate students studying at the GC University Lahore, Pakistan. A set of Pearson Product Moment correlations showed positive associations between time spent on the Internet and various dimensions of the IES indicating that excessive Internet use can lead to a host of problems of educational, physical, psychological and interpersonal nature. However, a greater number of students reported positive than negative effects of Internet use. Without negating the advantages of Internet, the current findings suggest that Internet use should be within reasonable limits focusing more on activities enhancing one's productivity.",
"title": ""
},
{
"docid": "4fb372431e28398691c079936e2f9fe6",
"text": "Entity matching (EM) is a critical part of data integration. We study how to synthesize entity matching rules from positive-negative matching examples. The core of our solution is program synthesis, a powerful tool to automatically generate rules (or programs) that satisfy a given highlevel specification, via a predefined grammar. This grammar describes a General Boolean Formula (GBF) that can include arbitrary attribute matching predicates combined by conjunctions ( Ź",
"title": ""
},
{
"docid": "b633fbaab6e314535312709557ef1139",
"text": "The purification of recombinant proteins by affinity chromatography is one of the most efficient strategies due to the high recovery yields and purity achieved. However, this is dependent on the availability of specific affinity adsorbents for each particular target protein. The diversity of proteins to be purified augments the complexity and number of specific affinity adsorbents needed, and therefore generic platforms for the purification of recombinant proteins are appealing strategies. This justifies why genetically encoded affinity tags became so popular for recombinant protein purification, as these systems only require specific ligands for the capture of the fusion protein through a pre-defined affinity tag tail. There is a wide range of available affinity pairs \"tag-ligand\" combining biological or structural affinity ligands with the respective binding tags. This review gives a general overview of the well-established \"tag-ligand\" systems available for fusion protein purification and also explores current unconventional strategies under development.",
"title": ""
},
{
"docid": "e8edd727e923595acc80df364bfc64af",
"text": "Context: Architecture-centric software evolution (ACSE) enables changes in system’s structure and behaviour while maintaining a global view of the software to address evolution-centric trade-offs. The existing research and practices for ACSE primarily focus on design-time evolution and runtime adaptations to accommodate changing requirements in existing architectures. Objectives: We aim to identify, taxonomically classify and systematically compare the existing research focused on enabling or enhancing change reuse to support ACSE. Method: We conducted a systematic literature review of 32 qualitatively selected studies and taxonomically classified these studies based on solutions that enable (i) empirical acquisition and (ii) systematic application of architecture evolution reuse knowledge (AERK) to guide ACSE. Results: We identified six distinct research themes that support acquisition and application of AERK. We investigated (i) how evolution reuse knowledge is defined, classified and represented in the existing research to support ACSE and (ii) what are the existing methods, techniques and solutions to support empirical acquisition and systematic application of AERK. Conclusions: Change patterns (34% of selected studies) represent a predominant solution, followed by evolution styles (25%) and adaptation strategies and policies (22%) to enable application of reuse knowledge. Empirical methods for acquisition of reuse knowledge represent 19% including pattern discovery, configuration analysis, evolution and maintenance prediction techniques (approximately 6% each). A lack of focus on empirical acquisition of reuse knowledge suggests the need of solutions with architecture change mining as a complementary and integrated phase for architecture change execution. Copyright © 2014 John Wiley & Sons, Ltd. Received 13 May 2013; Revised 23 September 2013; Accepted 27 December 2013",
"title": ""
},
{
"docid": "f4075ef96ed2d20cbd8615a7ffec0f8e",
"text": "The objective of this paper is to control the speed of Permanent Magnet Synchronous Motor (PMSM) over wide range of speed by consuming minimum time and low cost. Therefore, comparative performance analysis of PMSM on basis of speed regulation has been done in this study. Comparison of two control strategies i.e. Field oriented control (FOC) without sensor less Model Reference Adaptive System (MRAS) and FOC with sensor less MRAS has been carried out. Sensor less speed control of PMSM is achieved by using estimated speed deviation as feedback signal for the PI controller. Performance of the both control strategies has been evaluated in in MATLAB Simulink software. Simulation studies show the response of PMSM speed during various conditions of load and speed variations. Obtained results reveal that the proposed MRAS technique can effectively estimate the speed of rotor with high exactness and torque response is significantly quick as compared to the system without MRAS control system.",
"title": ""
},
{
"docid": "838b599024a14e952145af0c12509e31",
"text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.",
"title": ""
},
{
"docid": "99ebd04c11db731653ba4b8f26c46208",
"text": "This letter presents a novel computationally efficient and robust pattern tracking method based on a time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly.",
"title": ""
}
] | scidocsrr |
2248259ddc80c4f4f4eb8b028affc8ef | ASTRO: A Datalog System for Advanced Stream Reasoning | [
{
"docid": "47ac4b546fe75f2556a879d6188d4440",
"text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.",
"title": ""
},
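The passage above centers on evaluating recursive Datalog programs efficiently at scale. As a minimal, single-machine illustration of what such recursion computes, the sketch below runs a naive bottom-up fixpoint for the classic transitive-closure program; the naive (rather than semi-naive) evaluation strategy and the function name are illustrative simplifications, not the BigDatalog implementation.

```python
# Naive bottom-up fixpoint for a recursive Datalog rule, e.g. transitive closure:
#   tc(X, Y) :- edge(X, Y).
#   tc(X, Y) :- tc(X, Z), edge(Z, Y).
# A real engine such as the one described above would use semi-naive evaluation
# and distributed joins instead of this in-memory loop.

def transitive_closure(edges):
    """edges: set of (x, y) facts; returns the least fixpoint of tc."""
    tc = set(edges)
    while True:
        new_facts = {(x, w) for (x, y) in tc for (z, w) in edges if y == z} - tc
        if not new_facts:
            return tc
        tc |= new_facts
```

For example, `transitive_closure({("a", "b"), ("b", "c")})` yields `{("a", "b"), ("b", "c"), ("a", "c")}`; the recursion terminates once no new facts can be derived.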
{
"docid": "4b03aeb6c56cc25ce57282279756d1ff",
"text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.",
"title": ""
}
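The fairness/goodness passage above describes two mutually recursive node scores that converge to a unique fixed point in linear time. As a rough illustration of how such a mutual recursion can be iterated to convergence, here is a minimal sketch in Python; the particular update rules, the assumption that edge weights are scaled to [-1, 1], and the function name `fairness_goodness` are illustrative assumptions rather than the exact formulas of the cited work.

```python
# Minimal fixed-point sketch of mutually recursive fairness/goodness scores
# on a weighted signed network (hypothetical update rules; weights in [-1, 1]).

def fairness_goodness(edges, max_iter=100, tol=1e-6):
    """edges: dict mapping (u, v) -> signed weight in [-1, 1]."""
    nodes = {u for u, _ in edges} | {v for _, v in edges}
    out_edges = {n: [] for n in nodes}
    in_edges = {n: [] for n in nodes}
    for (u, v), w in edges.items():
        out_edges[u].append((v, w))
        in_edges[v].append((u, w))

    fairness = {n: 1.0 for n in nodes}   # how fair a rater the node is, in [0, 1]
    goodness = {n: 0.0 for n in nodes}   # how much the node is trusted, in [-1, 1]

    for _ in range(max_iter):
        # goodness of v: fairness-weighted average of its incoming ratings
        new_g = {v: (sum(fairness[u] * w for u, w in in_edges[v]) / len(in_edges[v]))
                 if in_edges[v] else 0.0 for v in nodes}
        # fairness of u: 1 minus the average deviation of its ratings from goodness
        new_f = {u: (1.0 - sum(abs(w - new_g[v]) for v, w in out_edges[u])
                     / (2.0 * len(out_edges[u]))) if out_edges[u] else 1.0
                 for u in nodes}
        delta = max(max(abs(new_g[n] - goodness[n]) for n in nodes),
                    max(abs(new_f[n] - fairness[n]) for n in nodes))
        fairness, goodness = new_f, new_g
        if delta < tol:
            break
    return fairness, goodness
```

For example, `fairness_goodness({("a", "b"): 0.8, ("b", "c"): -0.5, ("c", "a"): 0.9})` returns one fairness value and one goodness value per node; the edge-weight prediction step described in the passage would then combine such scores as regression features.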
] | [
{
"docid": "0a4a124589dffca733fa9fa87dc94b35",
"text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.",
"title": ""
},
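The fragment beginning "where r_i is the reward..." reads as part of a discounted expected-reward definition whose subscripts were flattened during extraction. A hedged reconstruction in display LaTeX is given below; the value symbol V^π_μ follows the usual agent–environment naming convention and is an assumption, not a quotation from the source.

```latex
V^{\pi}_{\mu} \;=\; \mathbb{E}\!\left[\sum_{i=1}^{\infty} \gamma_i \, r_i \right],
\qquad \gamma_i := \frac{1}{i^{2}},
```

where the expectation is taken over the interaction histories generated by the agent π acting in the environment μ, and the near-harmonic discount γ_i = 1/i² gives the agent increasing farsightedness as it ages.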
{
"docid": "9d62bcab72472183be38ad67635d744d",
"text": "In most challenging data analysis applications, data evolve over time and must be analyzed in near real time. Patterns and relations in such data often evolve over time, thus, models built for analyzing such data quickly become obsolete over time. In machine learning and data mining this phenomenon is referred to as concept drift. The objective is to deploy models that would diagnose themselves and adapt to changing data over time. This chapter provides an application oriented view towards concept drift research, with a focus on supervised learning tasks. First we overview and categorize application tasks for which the problem of concept drift is particularly relevant. Then we construct a reference framework for positioning application tasks within a spectrum of problems related to concept drift. Finally, we discuss some promising research directions from the application perspective, and present recommendations for application driven concept drift research and development.",
"title": ""
},
{
"docid": "44ecfa6fb5c31abf3a035dea9e709d11",
"text": "The issue of the variant vs. invariant in personality often arises in diVerent forms of the “person– situation” debate, which is based on a false dichotomy between the personal and situational determination of behavior. Previously reported data are summarized that demonstrate how behavior can vary as a function of subtle situational changes while individual consistency is maintained. Further discussion considers the personal source of behavioral invariance, the situational source of behavioral variation, the person–situation interaction, the nature of behavior, and the “personality triad” of persons, situations, and behaviors, in which each element is understood and predicted in terms of the other two. An important goal for future research is further development of theories and methods for conceptualizing and measuring the functional aspects of situations and of behaviors. One reason for the persistence of the person situation debate may be that it serves as a proxy for a deeper, implicit debate over values such as equality vs. individuality, determinism vs. free will, and Xexibility vs. consistency. However, these value dichotomies may be as false as the person–situation debate that they implicitly drive. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d940c448ef854fd8c50bdf08a03cd008",
"text": "The Multi-task Cascaded Convolutional Networks (MTCNN) has recently demonstrated impressive results on jointly face detection and alignment. By using the hard sample ming and training a model on FER2013 datasets, we exploit the inherent correlation between face detection and facial express-ion recognition, and report the results of facial expression recognition based on MTCNN.",
"title": ""
},
{
"docid": "55767c008ad459f570fb6b99eea0b26d",
"text": "The Tor network relies on volunteer relay operators for relay bandwidth, which may limit its growth and scaling potential. We propose an incentive scheme for Tor relying on two novel concepts. We introduce TorCoin, an “altcoin” that uses the Bitcoin protocol to reward relays for contributing bandwidth. Relays “mine” TorCoins, then sell them for cash on any existing altcoin exchange. To verify that a given TorCoin represents actual bandwidth transferred, we introduce TorPath, a decentralized protocol for forming Tor circuits such that each circuit is privately-addressable but publicly verifiable. Each circuit’s participants may then collectively mine a limited number of TorCoins, in proportion to the end-to-end transmission goodput they measure on that circuit.",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "b98c34a4be7f86fb9506a6b1620b5d3e",
"text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.",
"title": ""
},
{
"docid": "0caf6ead2548d556d9dd0564fb59d8cc",
"text": "A modified microstrip Franklin array antenna (MFAA) is proposed for short-range radar applications in vehicle blind spot information systems (BLIS). It is shown that the radiating performance [i.e., absolute gain value and half-power beamwidth (HPBW) angle] can be improved by increasing the number of radiators in the MFAA structure and assigning appropriate values to the antenna geometry parameters. The MFAA possesses a high absolute gain value (>10 dB), good directivity (HPBW <20°) in the E-plane, and large-range coverage (HPBW >80°) in the H-plane at an operating frequency of 24 GHz. Moreover, the 10-dB impedance bandwidth of the proposed antenna is around 250 MHz. The MFAA is, thus, an ideal candidate for automotive BLIS applications.",
"title": ""
},
{
"docid": "edc89ba0554d32297a9aab7103c4abb9",
"text": "During the last years, agile methods like eXtreme Programming have become increasingly popular. Parallel to this, more and more organizations rely on process maturity models to assess and improve their own processes or those of suppliers, since it has been getting clear that most project failures can be imputed to inconsistent, undisciplined processes. Many organizations demand CMMI compliance of projects where agile methods are employed. In this situation it is necessary to analyze the interrelations and mutual restrictions between agile methods and approaches for software process analysis and improvement. This paper analyzes to what extent the CMMI process areas can be covered by XP and where adjustments of XP have to be made. Based on this, we describe the limitations of CMMI in an agile environment and show that level 4 or 5 are not feasible under the current specifications of CMMI and XP.",
"title": ""
},
{
"docid": "7a7c358eaa5752d6984a56429f58c556",
"text": "If the training dataset is not very large, image recognition is usually implemented with the transfer learning methods. In these methods the features are extracted using a deep convolutional neural network, which was preliminarily trained with an external very-large dataset. In this paper we consider the nonparametric classification of extracted feature vectors with the probabilistic neural network (PNN). The number of neurons at the pattern layer of the PNN is equal to the database size, which causes the low recognition performance and high memory space complexity of this network. We propose to overcome these drawbacks by replacing the exponential activation function in the Gaussian Parzen kernel to the complex exponential functions in the Fej\\'er kernel. We demonstrate that in this case it is possible to implement the network with the number of neurons in the pattern layer proportional to the cubic root of the database size. Thus, the proposed modification of the PNN makes it possible to significantly decrease runtime and memory complexities without loosing its main advantages, namely, extremely fast training procedure and the convergence to the optimal Bayesian decision. An experimental study in visual object category classification and unconstrained face recognition with contemporary deep neural networks have shown, that our approach obtains very efficient and rather accurate decisions for the small training sample in comparison with the well-known classifiers.",
"title": ""
},
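To make the pattern-layer description above concrete, here is a minimal sketch of the baseline Gaussian-kernel PNN (Parzen-window) decision rule that the passage sets out to improve, with one pattern neuron per stored training vector. The bandwidth `sigma`, the per-class averaging, and the function name are illustrative assumptions; the authors' Fejér-kernel modification is not reproduced here.

```python
import numpy as np

def pnn_predict(x, train_x, train_y, sigma=1.0):
    """Baseline PNN / Parzen-window classifier: one pattern neuron per training
    vector; the class with the largest average Gaussian kernel response wins.
    x: (d,) query feature vector, train_x: (n, d), train_y: (n,) labels."""
    train_x = np.asarray(train_x, dtype=float)
    train_y = np.asarray(train_y)
    sq_dist = np.sum((train_x - np.asarray(x, dtype=float)) ** 2, axis=1)
    k = np.exp(-sq_dist / (2.0 * sigma ** 2))          # Gaussian Parzen kernel
    scores = {c: k[train_y == c].mean() for c in np.unique(train_y)}
    return max(scores, key=scores.get)
```

With this baseline, both runtime and memory grow linearly with the number of stored training vectors, which is exactly the drawback the proposed Fejér-kernel variant is claimed to reduce.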
{
"docid": "343ed18e56e6f562fa509710e4cf8dc6",
"text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.",
"title": ""
},
{
"docid": "5091c24d3ed56e45246fb444a66c290e",
"text": "This paper defines presence in terms of frames and involvement [1]. The value of this analysis of presence is demonstrated by applying it to several issues that have been raised about presence: residual awareness of nonmediation, imaginary presence, presence as categorical or continuum, and presence breaks. The paper goes on to explore the relationship between presence and reality. Goffman introduced frames to try to answer the question, “Under what circumstances do we think things real?” Under frame analysis there are three different conditions under which things are considered unreal, these are explained and related to the experience of presence. Frame analysis is used to show why virtual environments are not usually considered to be part of reality, although the virtual spaces of phone interaction are considered real. The analysis also yields practical suggestions for extending presence within virtual environments. Keywords--presence, frames, virtual environments, mobile phones, Goffman.",
"title": ""
},
{
"docid": "aad7697ce9d9af2b49cd3a46e441ef8e",
"text": "Soft pneumatic actuators (SPAs) are versatile robotic components enabling diverse and complex soft robot hardware design. However, due to inherent material characteristics exhibited by their primary constitutive material, silicone rubber, they often lack robustness and repeatability in performance. In this article, we present a novel SPA-based bending module design with shell reinforcement. The bidirectional soft actuator presented here is enveloped in a Yoshimura patterned origami shell, which acts as an additional protection layer covering the SPA while providing specific bending resilience throughout the actuator’s range of motion. Mechanical tests are performed to characterize several shell folding patterns and their effect on the actuator performance. Details on design decisions and experimental results using the SPA with origami shell modules and performance analysis are presented; the performance of the bending module is significantly enhanced when reinforcement is provided by the shell. With the aid of the shell, the bending module is capable of sustaining higher inflation pressures, delivering larger blocked torques, and generating the targeted motion trajectory.",
"title": ""
},
{
"docid": "49d164ec845f6201f56e18a575ed9436",
"text": "This research explores a Natural Language Processing technique utilized for the automatic reduction of melodies: the Probabilistic Context-Free Grammar (PCFG). Automatic melodic reduction was previously explored by means of a probabilistic grammar [11] [1]. However, each of these methods used unsupervised learning to estimate the probabilities for the grammar rules, and thus a corpusbased evaluation was not performed. A dataset of analyses using the Generative Theory of Tonal Music (GTTM) exists [13], which contains 300 Western tonal melodies and their corresponding melodic reductions in tree format. In this work, supervised learning is used to train a PCFG for the task of melodic reduction, using the tree analyses provided by the GTTM dataset. The resulting model is evaluated on its ability to create accurate reduction trees, based on a node-by-node comparison with ground-truth trees. Multiple data representations are explored, and example output reductions are shown. Motivations for performing melodic reduction include melodic identification and similarity, efficient storage of melodies, automatic composition, variation matching, and automatic harmonic analysis.",
"title": ""
},
{
"docid": "41e9dac7301e00793c6e4891e07b53fa",
"text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-theart for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a selfsupervised representation on PASCAL-VOC 2012 dataset.",
"title": ""
},
{
"docid": "67d317befd382c34c143ebfe806a3b55",
"text": "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.",
"title": ""
},
{
"docid": "08134d0d76acf866a71d660062f2aeb8",
"text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorful images. The possible reason is that it’s difficult to differentiate the influence of different kind of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results shows that our methods can better control the colorized images and generate state-of-art results.",
"title": ""
},
{
"docid": "ed3b8bfdd6048e4a07ee988f1e35fd21",
"text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"title": ""
},
{
"docid": "23832f031f7c700f741843e54ff81b4e",
"text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.",
"title": ""
}
] | scidocsrr |
fce938256c887fe3dbbc4d11deb49869 | Deep packet inspection using Cuckoo filter | [
{
"docid": "3f569eccc71c6186d6163a2cc40be0fc",
"text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.",
"title": ""
},
{
"docid": "debea8166d89cd6e43d1b6537658de96",
"text": "The emergence of SDNs promises to dramatically simplify network management and enable innovation through network programmability. Despite all the hype surrounding SDNs, exploiting its full potential is demanding. Security is still the key concern and is an equally striking challenge that reduces the growth of SDNs. Moreover, the deployment of novel entities and the introduction of several architectural components of SDNs pose new security threats and vulnerabilities. Besides, the landscape of digital threats and cyber-attacks is evolving tremendously, considering SDNs as a potential target to have even more devastating effects than using simple networks. Security is not considered as part of the initial SDN design; therefore, it must be raised on the agenda. This article discusses the state-of-the-art security solutions proposed to secure SDNs. We classify the security solutions in the literature by presenting a thematic taxonomy based on SDN layers/interfaces, security measures, simulation environments, and security objectives. Moreover, the article points out the possible attacks and threat vectors targeting different layers/interfaces of SDNs. The potential requirements and their key enablers for securing SDNs are also identified and presented. Also, the article gives great guidance for secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that may be deemed appropriate to be tackled by researchers and professionals in the future.",
"title": ""
},
{
"docid": "4a5ee2e22999f2353e055550f9b4f0c5",
"text": "As the popularity of software-defined networks (SDN) and OpenFlow increases, policy-driven network management has received more attention. Manual configuration of multiple devices is being replaced by an automated approach where a software-based, network-aware controller handles the configuration of all network devices. Software applications running on top of the network controller provide an abstraction of the topology and facilitate the task of operating the network. We propose OpenSec, an OpenFlow-based security framework that allows a network security operator to create and implement security policies written in human-readable language. Using OpenSec, the user can describe a flow in terms of OpenFlow matching fields, define which security services must be applied to that flow (deep packet inspection, intrusion detection, spam detection, etc.) and specify security levels that define how OpenSec reacts if malicious traffic is detected. In this paper, we first provide a more detailed explanation of how OpenSec converts security policies into a series of OpenFlow messages needed to implement such a policy. Second, we describe how the framework automatically reacts to security alerts as specified by the policies. Third, we perform additional experiments on the GENI testbed to evaluate the scalability of the proposed framework using existing datasets of campus networks. Our results show that up to 95% of attacks in an existing data set can be detected and 99% of malicious source nodes can be blocked automatically. Furthermore, we show that our policy specification language is simpler while offering fast translation times compared to existing solutions.",
"title": ""
}
] | [
{
"docid": "ba4ffbb6c3dc865f803cbe31b52919c5",
"text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.",
"title": ""
},
{
"docid": "8824df4c6f2a95cbb43b45a56a2657d1",
"text": "Semi-supervised classification has become an active topic recently and a number of algorithms, such as Self-training, have been proposed to improve the performance of supervised classification using unlabeled data. In this paper, we propose a semi-supervised learning framework which combines clustering and classification. Our motivation is that clustering analysis is a powerful knowledge-discovery tool and it may clustering is integrated into Self-training classification to help train a better classifier. In particular, the semi-supervised fuzzy c-means algorithm and support vector machines are used for clustering and classification, respectively. Experimental results on artificial and real datasets demonstrate the advantages of the proposed framework. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1c259f13e6ff506cb72fb5d1790c4a42",
"text": "oastal areas, the place where the waters of the seas meet the land are indeed unique places in our global geography. They are endowed with a very wide range of coastal ecosystems like mangroves, coral reefs, lagoons, sea grass, salt marsh, estuary etc. They are unique in a very real economic sense as sites for port and harbour facilities that capture the large monetary benefits associated with waterborne commerce and are highly valued and greatly attractive as sites for resorts and as vacation destinations. The combination of freshwater and salt water in coastal estuaries creates some of the most productive and richest habitats on earth; the resulting bounty in fishes and other marine life can be of great value to coastal nations. In many locations, the coastal topography formed over the millennia provides significant protection from hurricanes, typhoons, and other ocean related disturbances. But these values could diminish or even be lost, if they are not managed. Pollution of coastal waters can greatly reduce the production of fish, as can degradation of coastal nursery grounds and other valuable wetland habitats. The storm protection afforded by fringing reefs and mangrove forests can be lost if the corals die or the mangroves removed. Inappropriate development and accompanying despoilment can reduce the attractiveness of the coastal environment, greatly affecting tourism potential. Even ports and harbours require active management if they are to remain productive and successful over the long term. Coastal ecosystem management is thus immensely important for the sustainable use, development and protection of the coastal and marine areas and resources. To achieve this, an understanding of the coastal processes that influence the coastal environments and the ways in which they interact is necessary. It is advantageous to adopt a holistic or systematic approach for solving the coastal problems, since understanding the processes and products of interaction in coastal environments is very complicated. A careful assessment of changes that occur in the coastal C",
"title": ""
},
{
"docid": "4eb1e1ff406e6520defc293fd8b36cc5",
"text": "Biodegradation of two superabsorbent polymers, a crosslinked, insoluble polyacrylate and an insoluble polyacrylate/ polyacrylamide copolymer, in soil by the white-rot fungus, Phanerochaete chrysosporium was investigated. The polymers were both solubilized and mineralized by the fungus but solubilization and mineralization of the copolymer was much more rapid than of the polyacrylate. Soil microbes poorly solublized the polymers and were unable to mineralize either intact polymer. However, soil microbes cooperated with the fungus during polymer degradation in soil, with the fungus solubilizing the polymers and the soil microbes stimulating mineralization. Further, soil microbes were able to significantly mineralize both polymers after solubilization by P. chrysosporium grown under conditions that produced fungal peroxidases or cellobiose dehydrogenase, or after solubilization by photochemically generated Fenton reagent. The results suggest that biodegradation of these polymers in soil is best under conditions that maximize solubilization.",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "8b8f4ddff20f2321406625849af8766a",
"text": "This paper provides an introduction to specifying multilevel models using PROC MIXED. After a brief introduction to the field of multilevel modeling, users are provided with concrete examples of how PROC MIXED can be used to estimate (a) two-level organizational models, (b) two-level growth models, and (c) three-level organizational models. Both random intercept and random intercept and slope models are illustrated. Examples are shown using different real world data sources, including the publically available Early Childhood Longitudinal Study–Kindergarten cohort data. For each example, different research questions are examined through both narrative explanations and examples of the PROC MIXED code and corresponding output.",
"title": ""
},
{
"docid": "0b106d338b55650fc108c97bcb7aeb13",
"text": "Emerging applications face the need to store and analyze interconnected data that are naturally depicted as graphs. Recent proposals take the idea of data cubes that have been successfully applied to multidimensional data and extend them to work for interconnected datasets. In our work we revisit the graph cube framework and propose novel mechanisms inspired from information theory in order to help the analyst quickly locate interesting relationships within the rich information contained in the graph cube. The proposed entropy-based filtering of data reveals irregularities and non-uniformity which are often what the decision maker is looking for. We experimentally validate our techniques and demonstrate that the proposed entropy-based filtering can help eliminate large portions of the respective graph cubes.",
"title": ""
},
{
"docid": "968ea2dcfd30492a81a71be25f16e350",
"text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.",
"title": ""
},
{
"docid": "f32ff72da2f90ed0e5279815b0fb10e0",
"text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.",
"title": ""
},
{
"docid": "29822df06340218a43fbcf046cbeb264",
"text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.",
"title": ""
},
{
"docid": "db529d673e5d1b337e98e05eeb35fcf4",
"text": "Human group activities detection in multi-camera CCTV surveillance videos is a pressing demand on smart surveillance. Previous works on this topic are mainly based on camera topology inference that is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activities detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of hidden variables. By automatically optimizing the structure, the contexts are effectively explored. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of the motion in an area. This feature is effective for group activity representation and is easy to extract from a dynamic and crowded scene. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets, including a public corpus consisting of 2.5 h of videos and a 468-h video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "b15c689ff3dd7b2e7e2149e73b5451ac",
"text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.",
"title": ""
},
{
"docid": "de298bb631dd0ca515c161b6e6426a85",
"text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. Other techniques require multiple images and are not tolerant to noise.",
"title": ""
},
{
"docid": "ae5df62bc13105298ae28d11a0a92ffa",
"text": "This work presents an approach for estimating the effect of the fractional-N phase locked loop (Frac-N PLL) phase noise profile on frequency modulated continuous wave (FMCW) radar precision. Unlike previous approaches, the proposed modelling method takes the actual shape of the phase noise profile into account leading to insights on the main regions dominating the precision. Estimates from the proposed model are in very good agreement with statistical simulations and measurement results from an FMCW radar test chip fabricated on an IBM7WL BiCMOS 0.18 μm technology. At 5.8 GHz center frequency, a close-in phase noise of −75 dBc/Hz at 1 kHz offset is measured. A root mean squared (RMS) chirp nonlinearity error of 14.6 kHz and a ranging precision of 0.52 cm are achieved which competes with state-of-the-art FMCW secondary radars.",
"title": ""
},
{
"docid": "0ed9c61670394b46c657593d71aa25e4",
"text": "We developed a novel computational framework to predict the perceived trustworthiness of host profile texts in the context of online lodging marketplaces. To achieve this goal, we developed a dataset of 4,180 Airbnb host profiles annotated with perceived trustworthiness. To the best of our knowledge, the dataset along with our models allow for the first computational evaluation of perceived trustworthiness of textual profiles, which are ubiquitous in online peer-to-peer marketplaces. We provide insights into the linguistic factors that contribute to higher and lower perceived trustworthiness for profiles of different lengths.",
"title": ""
},
{
"docid": "c2aa986c09f81c6ab54b0ac117d03afb",
"text": "Many companies have developed strategies that include investing heavily in information technology (IT) in order to enhance their performance. Yet, this investment pays off for some companies but not others. This study proposes that organization learning plays a significant role in determining the outcomes of IT. Drawing from resource theory and IT literature, the authors develop the concept of IT competency. Using structural equations modeling with data collected from managers in 271 manufacturing firms, they show that organizational learning plays a significant role in mediating the effects of IT competency on firm performance. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fec4e9bb14c0071e64a5d6c25281a8d1",
"text": "Currently, we are witnessing a growing trend in the study and application of problems in the framework of Big Data. This is mainly due to the great advantages which come from the knowledge extraction from a high volume of information. For this reason, we observe a migration of the standard Data Mining systems towards a new functional paradigm that allows at working with Big Data. By means of the MapReduce model and its different extensions, scalability can be successfully addressed, while maintaining a good fault tolerance during the execution of the algorithms. Among the different approaches used in Data Mining, those models based on fuzzy systems stand out for many applications. Among their advantages, we must stress the use of a representation close to the natural language. Additionally, they use an inference model that allows a good adaptation to different scenarios, especially those with a given degree of uncertainty. Despite the success of this type of systems, their migration to the Big Data environment in the different learning areas is at a preliminary stage yet. In this paper, we will carry out an overview of the main existing proposals on the topic, analyzing the design of these models. Additionally, we will discuss those problems related to the data distribution and parallelization of the current algorithms, and also its relationship with the fuzzy representation of the information. Finally, we will provide our view on the expectations for the future in this framework according to the design of those methods based on fuzzy sets, as well as the open challenges on the topic.",
"title": ""
},
{
"docid": "bc94f3c157857f7f1581c9d23e65ad87",
"text": "The increasing development of educational games as Learning Objects is an important resource for educators to motivate and engage students in several online courses using Learning Management Systems. Students learn more, and enjoy themselves more when they are actively involved, rather than just passive listeners. This article has the aim to discuss about how games may motivate learners, enhancing the teaching-learning process as well as assist instructor designers to develop educational games in order to engage students, based on theories of motivational design and emotional engagement. Keywords— learning objects; games; Learning Management Systems, motivational design, emotional engagement.",
"title": ""
},
{
"docid": "5d0a77058d6b184cb3c77c05363c02e0",
"text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.",
"title": ""
},
{
"docid": "d1940d5db7b1c8b9c7b4ca6ac9463147",
"text": "In the new interconnected world, we need to secure vehicular cyber-physical systems (VCPS) using sophisticated intrusion detection systems. In this article, we present a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET). By combining static and dynamic detection agents, that can be mounted on central vehicles, and a control center where the alarms about possible attacks on the system are communicated, the proposed DIDS can be used in both urban and highway environments for real time anomaly detection with good accuracy and response time.",
"title": ""
}
] | scidocsrr |
999ee0c9709736f8c347a9c9d201a1b1 | Falcon: Scaling Up Hands-Off Crowdsourced Entity Matching to Build Cloud Services | [
{
"docid": "c15093ead030ba1aa020a99c312109fa",
"text": "Analysts report spending upwards of 80% of their time on problems in data cleaning. The data cleaning process is inherently iterative, with evolving cleaning workflows that start with basic exploratory data analysis on small samples of dirty data, then refine analysis with more sophisticated/expensive cleaning operators (i.e., crowdsourcing), and finally apply the insights to a full dataset. While an analyst often knows at a logical level what operations need to be done, they often have to manage a large search space of physical operators and parameters. We present Wisteria, a system designed to support the iterative development and optimization of data cleaning workflows, especially ones that utilize the crowd. Wisteria separates logical operations from physical implementations, and driven by analyst feedback, suggests optimizations and/or replacements to the analyst’s choice of physical implementation. We highlight research challenges in sampling, in-flight operator replacement, and crowdsourcing. We overview the system architecture and these techniques, then propose a demonstration designed to showcase how Wisteria can improve iterative data analysis and cleaning. The code is available at: http://www.sampleclean.org.",
"title": ""
},
{
"docid": "8100e82d990acf1ec1e8551b479fc71f",
"text": "String similarity search and join are two important operations in data cleaning and integration, which extend traditional exact search and exact join operations in databases by tolerating the errors and inconsistencies in the data. They have many real-world applications, such as spell checking, duplicate detection, entity resolution, and webpage clustering. Although these two problems have been extensively studied in the recent decade, there is no thorough survey. In this paper, we present a comprehensive survey on string similarity search and join. We first give the problem definitions and introduce widely-used similarity functions to quantify the similarity. We then present an extensive set of algorithms for string similarity search and join. We also discuss their variants, including approximate entity extraction, type-ahead search, and approximate substring matching. Finally, we provide some open datasets and summarize some research challenges and open problems.",
"title": ""
}
] | [
{
"docid": "7b3da375a856a53b8a303438d015dffe",
"text": "Social Networking Sites (SNS), such as Facebook and LinkedIn, have become the established place for keeping contact with old friends and meeting new acquaintances. As a result, a user leaves a big trail of personal information about him and his friends on the SNS, sometimes even without being aware of it. This information can lead to privacy drifts such as damaging his reputation and credibility, security risks (for instance identity theft) and profiling risks. In this paper, we first highlight some privacy issues raised by the growing development of SNS and identify clearly three privacy risks. While it may seem a priori that privacy and SNS are two antagonist concepts, we also identified some privacy criteria that SNS could fulfill in order to be more respectful of the privacy of their users. Finally, we introduce the concept of a Privacy-enhanced Social Networking Site (PSNS) and we describe Privacy Watch, our first implementation of a PSNS.",
"title": ""
},
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "eaca6393a08baa24958f7197fb4b8e8a",
"text": "OBJECTIVE\nTo assess adherence to community-based directly observed treatment (DOT) among Tanzanian tuberculosis patients using the Medication Event Monitoring System (MEMS) and to validate alternative adherence measures for resource-limited settings using MEMS as a gold standard.\n\n\nMETHODS\nThis was a longitudinal pilot study of 50 patients recruited consecutively from one rural hospital, one urban hospital and two urban health centres. Treatment adherence was monitored with MEMS and the validity of the following adherence measures was assessed: isoniazid urine test, urine colour test, Morisky scale, Brief Medication Questionnaire, adapted AIDS Clinical Trials Group (ACTG) adherence questionnaire, pill counts and medication refill visits.\n\n\nFINDINGS\nThe mean adherence rate in the study population was 96.3% (standard deviation, SD: 7.7). Adherence was less than 100% in 70% of the patients, less than 95% in 21% of them, and less than 80% in 2%. The ACTG adherence questionnaire and urine colour test had the highest sensitivities but lowest specificities. The Morisky scale and refill visits had the highest specificities but lowest sensitivities. Pill counts and refill visits combined, used in routine practice, yielded moderate sensitivity and specificity, but sensitivity improved when the ACTG adherence questionnaire was added.\n\n\nCONCLUSION\nPatients on community-based DOT showed good adherence in this study. The combination of pill counts, refill visits and the ACTG adherence questionnaire could be used to monitor adherence in settings where MEMS is not affordable. The findings with regard to adherence and to the validity of simple adherence measures should be confirmed in larger populations with wider variability in adherence rates.",
"title": ""
},
{
"docid": "10b8aa3bc47a05d2e0eddc83f6922005",
"text": "Bluetooth Low Energy (BLE), a low-power wireless protocol, is widely used in industrial automation for monitoring field devices. Although the BLE standard defines advanced security mechanisms, there are known security attacks for BLE and BLE-enabled field devices must be tested thoroughly against these attacks. This article identifies the possible attacks for BLE-enabled field devices relevant for industrial automation. It also presents a framework for defining and executing BLE security attacks and evaluates it on three BLE devices. All tested devices are vulnerable and this confirms that there is a need for better security testing tools as well as for additional defense mechanisms for BLE devices.",
"title": ""
},
{
"docid": "1a3ac29d83c04225f6f69eaf3d263139",
"text": "In a web environment, one of the most evolving application is those with recommendation system (RS). It is a subset of information filtering systems wherein, information about certain products or services or a person are categorized and are recommended for the concerned individual. Most of the authors designed collaborative movie recommendation system by using K-NN and K-means but due to a huge increase in movies and users quantity, the neighbour selection is getting more problematic. We propose a hybrid model based on movie recommender system which utilizes type division method and classified the types of the movie according to users which results reduce computation complexity. K-Means provides initial parameters to particle swarm optimization (PSO) so as to improve its performance. PSO provides initial seed and optimizes fuzzy c-means (FCM), for soft clustering of data items (users), instead of strict clustering behaviour in K-Means. For proposed model, we first adopted type division method to reduce the dense multidimensional data space. We looked up for techniques, which could give better results than K-Means and found FCM as the solution. Genetic algorithm (GA) has the limitation of unguided mutation. Hence, we used PSO. In this article experiment performed on Movielens dataset illustrated that the proposed model may deliver high performance related to veracity, and deliver more predictable and personalized recommendations. When compared to already existing methods and having 0.78 mean absolute error (MAE), our result is 3.503 % better with 0.75 as the MAE, showed that our approach gives improved results.",
"title": ""
},
{
"docid": "971398019db2fb255769727964f1e38a",
"text": "Scaling down to deep submicrometer (DSM) technology has made noise a metric of equal importance as compared to power, speed, and area. Smaller feature size, lower supply voltage, and higher frequency are some of the characteristics for DSM circuits that make them more vulnerable to noise. New designs and circuit techniques are required in order to achieve robustness in presence of noise. Novel methodologies for designing energy-efficient noise-tolerant exclusive-OR-exclusive- NOR circuits that can operate at low-supply voltages with good signal integrity and driving capability are proposed. The circuits designed, after applying the proposed methodologies, are characterized and compared with previously published circuits for reliability, speed and energy efficiency. To test the driving capability of the proposed circuits, they are embedded in an existing 5-2 compressor design. The average noise threshold energy (ANTE) is used for quantifying the noise immunity of the proposed circuits. Simulation results show that, compared with the best available circuit in literature, the proposed circuits exhibit better noise-immunity, lower power-delay product (PDP) and good driving capability. All of the proposed circuits prove to be faster and successfully work at all ranges of supply voltage starting from 3.3 V down to 0.6 V. The savings in the PDP range from 94% to 21% for the given supply voltage range respectively and the average improvement in the ANTE is 2.67X.",
"title": ""
},
{
"docid": "fdd0067a8c3ebf285c68cac7172590a7",
"text": "We introduce an effective technique to enhance night-time hazy scenes. Our technique builds on multi-scale fusion approach that use several inputs derived from the original image. Inspired by the dark-channel [1] we estimate night-time haze computing the airlight component on image patch and not on the entire image. We do this since under night-time conditions, the lighting generally arises from multiple artificial sources, and is thus intrinsically non-uniform. Selecting the size of the patches is non-trivial, since small patches are desirable to achieve fine spatial adaptation to the atmospheric light, this might also induce poor light estimates and reduced chance of capturing hazy pixels. For this reason, we deploy multiple patch sizes, each generating one input to a multiscale fusion process. Moreover, to reduce the glowing effect and emphasize the finest details, we derive a third input. For each input, a set of weight maps are derived so as to assign higher weights to regions of high contrast, high saliency and small saturation. Finally the derived inputs and the normalized weight maps are blended in a multi-scale fashion using a Laplacian pyramid decomposition. The experimental results demonstrate the effectiveness of our approach compared with recent techniques both in terms of computational efficiency and quality of the outputs.",
"title": ""
},
{
"docid": "609bbd3b066cf7a56d11ea545c0b0e71",
"text": "Subgingival margins are often required for biologic, mechanical, or esthetic reasons. Several investigations have demonstrated that their use is associated with adverse periodontal reactions, such as inflammation or recession. The purpose of this prospective randomized clinical study was to determine if two different subgingival margin designs influence the periodontal parameters and patient perception. Deep chamfer and feather-edge preparations were compared on 58 patients with 6 months follow-up. Statistically significant differences were present for bleeding on probing, gingival recession, and patient satisfaction. Feather-edge preparation was associated with increased bleeding on probing and deep chamfer with increased recession; improved patient comfort was registered with chamfer margin design. Subgingival margins are technique sensitive, especially when feather-edge design is selected. This margin design may facilitate soft tissue stability but can expose the patient to an increased risk of gingival inflammation.",
"title": ""
},
{
"docid": "6b16d2baafca3b8e479d9b34bd3f1ea7",
"text": "Companies are increasingly allocating more of their marketing spending to social media programs. Yet there is little research about how social media use is associated with consumer–brand relationships. We conducted three studies to explore how individual and national differences influence the relationship between social media use and customer brand relationships. The first study surveyed customers in France, the U.K. and U.S. and compared those who engage with their favorite brands via social media with those who do not. The findings indicated that social media use was positively related with brand relationship quality and the effect was more pronounced with high anthropomorphism perceptions (the extent to which consumers' associate human characteristics with brands). Two subsequent experiments further validated these findings and confirmed that cultural differences, specifically uncertainty avoidance, moderated these results. We obtained robust and convergent results from survey and experimental data using both student and adult consumer samples and testing across three product categories (athletic shoes, notebook computers, and automobiles). The results offer cross-national support for the proposition that engaging customers via social media is associated with higher consumer–brand relationships and word of mouth communications when consumers anthropomorphize the brand and they avoid uncertainty. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "564f9c0a1e1f395d59837e1a4b7f08ef",
"text": "To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a \"smoothing filter\", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID data cleaning. SMURF models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. Through the use of tools such as binomial sampling and π-estimators, SMURF continuously adapts the smoothing window size in a principled manner to provide accurate RFID data to applications.",
"title": ""
},
{
"docid": "a288a610a6cd4ff32b3fff4e2124aee0",
"text": "According to the survey done by IBM business consulting services in 2006, global CEOs stated that business model innovation will have a greater impact on operating margin growth, than product or service innovation. We also noticed that some enterprises in China's real estate industry have improved their business models for sustainable competitive advantage and surplus profit in recently years. Based on the case studies of Shenzhen Vanke, as well as literature review, a framework for business model innovation has been developed. The framework provides an integrated means of making sense of new business model. These include critical dimensions of new customer value propositions, technological innovation, collaboration of the business infrastructure and the economic feasibility of a new business model.",
"title": ""
},
{
"docid": "06a3bf091404fc51bb3ee0a9f1d8a759",
"text": "A compact design of a circularly-polarized microstrip antenna in order to achieve dual-band behavior for Radio Frequency Identification (RFID) applications is presented, defected ground structure (DGS) technique is used to miniaturize and get a dual-band antenna, the entire size is 38×40×1.58 mm3. This antenna was designed to cover both ultra-height frequency (740MHz ~ 1GHz) and slow height frequency (2.35 GHz ~ 2.51GHz), return loss <; -10 dB, the 3-dB axial ratio bandwidths are about 110 MHz at the lower band (900 MHz).",
"title": ""
},
{
"docid": "0dec59ce5b66acd7d6466265f5c03125",
"text": "Variation in self concept and academic achievement particularly among the visually impaired pupils has not been conclusively studied. The purpose of this study was therefore to determine if there were gender differences in self-concept and academic achievement among visually impaired pupils in Kenya .The population of the study was 291 visually impaired pupils. A sample of 262 respondents was drawn from the population by stratified random sampling technique based on their sex (152 males and 110 females). Two instruments were used in this study: Pupils’ self-concept and academic achievement test. Data analysis was done at p≤0.05 level of significance. The t test was used to test the relationship between self-concept and achievement. The data was analyzed using Analysis of Variance (ANOVA) structure. The study established that there were indeed gender differences in self-concept among visually impaired pupils in Kenya. The study therefore recommend that the lower self-concept observed among boys should be enhanced by giving counseling and early intervention to this group of pupils with a view to helping them accept their disability.",
"title": ""
},
{
"docid": "fee191728bc0b1fbf11344961be10215",
"text": "In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. Disciplines Computer Sciences Comments Vanderwende, L., Suzuki, H., Brockett, C., & Nenkova, A., Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion, Information Processing and Management, Special Issue on Summarization Volume 43, Issue 6, 2007, doi: 10.1016/j.ipm.2007.01.023 This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/736",
"title": ""
},
{
"docid": "8d08a464c75a8da6de159c0f0e46d447",
"text": "A License plate recognition (LPR) system can be divided into the following steps: preprocessing, plate region extraction, plate region thresholding, character segmentation, character recognition and post-processing. For step 2, a combination of color and shape information of plate is used and a satisfactory extraction result is achieved. For step 3, first channel is selected, then threshold is computed and finally the region is thresholded. For step 4, the character is segmented along vertical, horizontal direction and some tentative optimizations are applied. For step 5, minimum Euclidean distance based template matching is used. And for those confusing characters such as '8' & 'B' and '0' & 'D', a special processing is necessary. And for the final step, validity is checked by machine and manual. The experiment performed by program based on aforementioned algorithms indicates that our LPR system based on color image processing is quite quick and accurate.",
"title": ""
},
{
"docid": "6ae33cdc9601c90f9f3c1bda5aa8086f",
"text": "A k-uniform hypergraph is hamiltonian if for some cyclic ordering of its vertex set, every k consecutive vertices form an edge. In 1952 Dirac proved that if the minimum degree in an n-vertex graph is at least n/2 then the graph is hamiltonian. We prove an approximate version of an analogous result for uniform hypergraphs: For every k ≥ 3 and γ > 0, and for all n large enough, a sufficient condition for an n-vertex k-uniform hypergraph to be hamiltonian is that each (k − 1)-element set of vertices is contained in at least (1/2 + γ)n edges. Research supported by NSF grant DMS-0300529. Research supported by KBN grant 2 P03A 015 23 and N201036 32/2546. Part of research performed at Emory University, Atlanta. Research supported by NSF grant DMS-0100784",
"title": ""
},
{
"docid": "4041235ab6ad93290ed90cdf5e07d6e5",
"text": "This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.",
"title": ""
},
{
"docid": "1cbf4840e09a950a5adfcbbfbd476d6a",
"text": "We introduce an online neural sequence to sequence model that learns to alternate between encoding and decoding segments of the input as it is read. By independently tracking the encoding and decoding representations our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders that have to read and memorize the entire input sequence in their fixedlength hidden states before producing any output. It is different from previous attentive models in that, instead of treating the attention weights as output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.",
"title": ""
},
{
"docid": "0f6aaec52e4f7f299a711296992b5dba",
"text": "This paper presents a simple and efficient method for online signature verification. The technique is based on a feature set comprising of several histograms that can be computed efficiently given a raw data sequence of an online signature. The features which are represented by a fixed-length vector can not be used to reconstruct the original signature, thereby providing privacy to the user's biometric trait in case the stored template is compromised. To test the verification performance of the proposed technique, several experiments were conducted on the well known MCYT-100 and SUSIG datasets including both skilled forgeries and random forgeries. Experimental results demonstrate that the performance of the proposed technique is comparable to state-of-art algorithms despite its simplicity and efficiency.",
"title": ""
},
{
"docid": "5288500535b3eaf67daf24071bf6300f",
"text": "All aspects of human-computer interaction, from the high-level concerns of organizational context and system requirements to the conceptual, semantic, syntactic, and lexical levels of user interface design, are ultimately funneled through physical input and output actions and devices. The fundamental task in computer input is to move information from the brain of the user to the computer. Progress in this discipline attempts to increase the useful bandwidth across that interface by seeking faster, more natural, and more convenient means for a user to transmit information to a computer. This article mentions some of the technical background for this area, surveys the range of input devices currently in use and emerging, and considers future trends in input.",
"title": ""
}
] | scidocsrr |
ce570c9ec903f054234cc8932dc3cea3 | Semantically-Guided Video Object Segmentation | [
{
"docid": "775f4fd21194e18cdf303248f1cde206",
"text": "Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.",
"title": ""
},
{
"docid": "522345eb9b2e53f05bb9d961c85fea23",
"text": "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
}
] | [
{
"docid": "b811c82ff944715edc2b7dec382cb529",
"text": "The mobile industry has experienced a dramatic growth; it evolves from analog to digital 2G (GSM), then to high date rate cellular wireless communication such as 3G (WCDMA), and further to packet optimized 3.5G (HSPA) and 4G (LTE and LTE advanced) systems. Today, the main design challenges of mobile phone antenna are the requirements of small size, built-in structure, and multisystems in multibands, including all cellular 2G, 3G, 4G, and other noncellular radio-frequency (RF) bands, and moreover the need for a nice appearance and meeting all standards and requirements such as specific absorption rates (SARs), hearing aid compatibility (HAC), and over the air (OTA). This paper gives an overview of some important antenna designs and progress in mobile phones in the last 15 years, and presents the recent development on new antenna technology for LTE and compact multiple-input-multiple-output (MIMO) terminals.",
"title": ""
},
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
},
{
"docid": "d104206fd95525192240e9a6d6aedd89",
"text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.",
"title": ""
},
{
"docid": "13c2dea57aed95f7b937a9d329dd5af8",
"text": "Understanding topic hierarchies in text streams and their evolution patterns over time is very important in many applications. In this paper, we propose an evolutionary multi-branch tree clustering method for streaming text data. We build evolutionary trees in a Bayesian online filtering framework. The tree construction is formulated as an online posterior estimation problem, which considers both the likelihood of the current tree and conditional prior given the previous tree. We also introduce a constraint model to compute the conditional prior of a tree in the multi-branch setting. Experiments on real world news data demonstrate that our algorithm can better incorporate historical tree information and is more efficient and effective than the traditional evolutionary hierarchical clustering algorithm.",
"title": ""
},
{
"docid": "3f48f5be25ac5d040cc9d226588427b3",
"text": "Snake robots, sometimes called hyper-redundant mechanisms, can use their many degrees of freedom to achieve a variety of locomotive capabilities. These capabilities are ideally suited for disaster response because the snake robot can thread through tightly packed volumes, accessing locations that people and conventional machinery otherwise cannot. Snake robots also have the advantage of possessing a variety of locomotion capabilities that conventional robots do not. Just like their biological counterparts, snake robots achieve these locomotion capabilities using cyclic motions called gaits. These cyclic motions directly control the snake robot’s internal degrees of freedom which, in turn, causes a net motion, say forward, lateral and rotational, for the snake robot. The gaits described in this paper fall into two categories: parameterized and scripted. The parameterized gaits, as their name suggests, can be described by a relative simple parameterized function, whereas the scripted cannot. This paper describes the functions we prescribed for gait generation and our experiences in making these robots operate in real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009",
"title": ""
},
{
"docid": "cb7dda8f4059e5a66e4a6e26fcda601e",
"text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "30672a5e329d9ed61a65b07f24731c91",
"text": "Combined star-delta windings in electrical machines result in a higher fundamental winding factor and cause a smaller spatial harmonic content. This leads to lower I2R losses in the stator and the rotor winding and thus to an increased efficiency. However, compared to an equivalent six-phase winding, additional spatial harmonics are generated due to the different magnetomotive force in the star and delta part of the winding. In this paper, a complete theory and analysis method for the analytical calculation of the efficiency of induction motors equipped with combined star-delta windings is developed. The method takes into account the additional harmonic content due to the different magnetomotive force in the star and delta part. To check the analysis' validity, an experimental test is reported both on a cage induction motor equipped with a combined star-delta winding in the stator and on a reference motor with the same core but with a classical three-phase winding.",
"title": ""
},
{
"docid": "bd5a124345544982d485a0e036c49de8",
"text": "In this paper, we explore various aspects of fusing LIDAR and color imagery for pedestrian detection in the context of convolutional neural networks (CNNs), which have recently become state-of-art for many vision problems. We incorporate LIDAR by up-sampling the point cloud to a dense depth map and then extracting three features representing different aspects of the 3D scene. We then use those features as extra image channels. Specifically, we leverage recent work on HHA [9] (horizontal disparity, height above ground, and angle) representations, adapting the code to work on up-sampled LIDAR rather than Microsoft Kinect depth maps. We show, for the first time, that such a representation is applicable to up-sampled LIDAR data, despite its sparsity. Since CNNs learn a deep hierarchy of feature representations, we then explore the question: At what level of representation should we fuse this additional information with the original RGB image channels? We use the KITTI pedestrian detection dataset for our exploration. We first replicate the finding that region-CNNs (R-CNNs) [8] can outperform the original proposal mechanism using only RGB images, but only if fine-tuning is employed. Then, we show that: 1) using HHA features and RGB images performs better than RGB-only, even without any fine-tuning using large RGB web data, 2) fusing RGB and HHA achieves the strongest results if done late, but, under a parameter or computational budget, is best done at the early to middle layers of the hierarchical representation, which tend to represent midlevel features rather than low (e.g. edges) or high (e.g. object class decision) level features, 3) some of the less successful methods have the most parameters, indicating that increased classification accuracy is not simply a function of increased capacity in the neural network.",
"title": ""
},
{
"docid": "4924441de38f1b28e66330a1cb219f4b",
"text": "Online marketing is one of the best practices used to establish a brand and to increase its popularity. Advertisements are used in a better way to showcase the company’s product/service and give rise to a worthy online marketing strategy. Posting an advertisement on utilitarian web pages helps to maximize brand reach and get a better feedback. Now-a-days companies are cautious of their brand image on the Internet due to the growing number of Internet users. Since there are billions of Web sites on the Internet, it becomes difficult for companies to really decide where to advertise on the Internet for brand popularity. What if, the company advertise on a page which is visited by less number of the interested (for a particular type of product) users instead of a web page which is visited by more number of the interested users?—this doubt and uncertainty—is a core issue faced by many companies. This research paper presents a Brand analysis framework and suggests some experimental practices to ensure efficiency of the proposed framework. This framework is divided into three components—(1) Web site network formation framework—a framework that forms a Web site network of a specific search query obtained from resultant web pages of three search engines-Google, Yahoo & Bing and their associated web pages; (2) content scraping framework—it crawls the content of web pages existing in the framework-formed Web site network; (3) rank assignment of networked web pages—text edge processing algorithm has been used to find out terms of interest and their occurrence associated with search query. We have further applied sentiment analysis to validate positive or negative impact of the sentences, having the search term and its associated terms (with reference to the search query) to identify impact of web page. Later, on the basis of both—text edge analysis and sentiment analysis results, we assigned a rank to networked web pages and online social network pages. In this research work, we present experiments for ‘Motorola smart phone,’ ‘LG smart phone’ and ‘Samsung smart phone’ as search query and sampled the Web site network of top 20 search results of all three search engines and examined up to 60 search results for each search engine. This work is useful to target the right online location for specific brand marketing. Once the brand knows the web pages/social media pages containing high brand affinity and ensures that the content of high affinity web page/social media page has a positive impact, we advertise at that respective online location. Thus, targeted brand analysis framework for online marketing not only has benefits for the advertisement agencies but also for the customers.",
"title": ""
},
{
"docid": "01bd2cdb72270a4ad36beeca29cf670b",
"text": "5-Lipoxygenase (5-LO) plays a pivotal role in the progression of atherosclerosis. Therefore, this study investigated the molecular mechanisms involved in 5-LO expression on monocytes induced by LPS. Stimulation of THP-1 monocytes with LPS (0~3 µg/ml) increased 5-LO promoter activity and 5-LO protein expression in a concentration-dependent manner. LPS-induced 5-LO expression was blocked by pharmacological inhibition of the Akt pathway, but not by inhibitors of MAPK pathways including the ERK, JNK, and p38 MAPK pathways. In line with these results, LPS increased the phosphorylation of Akt, suggesting a role for the Akt pathway in LPS-induced 5-LO expression. In a promoter activity assay conducted to identify transcription factors, both Sp1 and NF-κB were found to play central roles in 5-LO expression in LPS-treated monocytes. The LPS-enhanced activities of Sp1 and NF-κB were attenuated by an Akt inhibitor. Moreover, the LPS-enhanced phosphorylation of Akt was significantly attenuated in cells pretreated with an anti-TLR4 antibody. Taken together, 5-LO expression in LPS-stimulated monocytes is regulated at the transcriptional level via TLR4/Akt-mediated activations of Sp1 and NF-κB pathways in monocytes.",
"title": ""
},
{
"docid": "2d3adb98f6b1b4e161d84314958960e5",
"text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.",
"title": ""
},
{
"docid": "30c6829427aaa8d23989afcd666372f7",
"text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in highperformance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of the known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping too many packets to make detection ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making multi-unit system work in realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice. Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and",
"title": ""
},
{
"docid": "01b3c9758bd68ad68a2f1d262feaa4e8",
"text": "A low-voltage-swing MOSFET gate drive technique is proposed in this paper for enhancing the efficiency characteristics of high-frequency-switching dc-dc converters. The parasitic power dissipation of a dc-dc converter is reduced by lowering the voltage swing of the power transistor gate drivers. A comprehensive circuit model of the parasitic impedances of a monolithic buck converter is presented. Closed-form expressions for the total power dissipation of a low-swing buck converter are proposed. The effect of reducing the MOSFET gate voltage swings is explored with the proposed circuit model. A range of design parameters is evaluated, permitting the development of a design space for full integration of active and passive devices of a low-swing buck converter on the same die, for a target CMOS technology. The optimum gate voltage swing of a power MOSFET that maximizes efficiency is lower than a standard full voltage swing. An efficiency of 88% at a switching frequency of 102 MHz is achieved for a voltage conversion from 1.8 to 0.9 V with a low-swing dc-dc converter based on a 0.18-/spl mu/m CMOS technology. The power dissipation of a low-swing dc-dc converter is reduced by 27.9% as compared to a standard full-swing dc-dc converter.",
"title": ""
},
{
"docid": "ce2d4247b1072b3c593e73fe9d67cf63",
"text": "OBJECTIVE\nTo improve walking and other aspects of physical function with a progressive 6-month exercise program in patients with multiple sclerosis (MS).\n\n\nMETHODS\nMS patients with mild to moderate disability (Expanded Disability Status Scale scores 1.0 to 5.5) were randomly assigned to an exercise or control group. The intervention consisted of strength and aerobic training initiated during 3-week inpatient rehabilitation and continued for 23 weeks at home. The groups were evaluated at baseline and at 6 months. The primary outcome was walking speed, measured by 7.62 m and 500 m walk tests. Secondary outcomes included lower extremity strength, upper extremity endurance and dexterity, peak oxygen uptake, and static balance. An intention-to-treat analysis was used.\n\n\nRESULTS\nNinety-one (96%) of the 95 patients entering the study completed it. Change between groups was significant in the 7.62 m (p = 0.04) and 500 m walk tests (p = 0.01). In the 7.62 m walk test, 22% of the exercising patients showed clinically meaningful improvements. The exercise group also showed increased upper extremity endurance as compared to controls. No other noteworthy exercise-induced changes were observed. Exercise adherence varied considerably among the exercisers.\n\n\nCONCLUSIONS\nWalking speed improved in this randomized study. The results confirm that exercise is safe for multiple sclerosis patients and should be recommended for those with mild to moderate disability.",
"title": ""
},
{
"docid": "68cff1020543f97e5e8c2710bc85c823",
"text": "This paper describes modelling and testing of a digital distance relay for transmission line protection using MATLAB/SIMULINK. SIMULINK’s Power System Blockset (PSB) is used for detailed modelling of a power system network and fault simulation. MATLAB is used to implement programs of digital distance relaying algorithms and to serve as main software environment. The technique is an interactive simulation environment for relaying algorithm design and evaluation. The basic principles of a digital distance relay and some related filtering techniques are also described in this paper. A 345 kV, 100 km transmission line and a MHO type distance relay are selected as examples for fault simulation and relay testing. Some simulation results are given.",
"title": ""
},
{
"docid": "21aa2df33199b6fbdc64abd1ea65341b",
"text": "AIM\nBefore an attempt is made to develop any population-specific behavioural change programme, it is important to know what the factors that influence behaviours are. The aim of this study was to identify what are the perceived determinants that attribute to young people's choices to both consume and misuse alcohol.\n\n\nMETHOD\nUsing a descriptive survey design, a web-based questionnaire based on the Theory of Triadic Influence was administered to students aged 18-29 years at one university in Northern Ireland.\n\n\nRESULTS\nOut of the total respondents ( n = 595), knowledge scores on alcohol consumption and the health risks associated with heavy episodic drinking were high (92.4%, n = 550). Over half (54.1%, n = 322) cited the Internet as their main source for alcohol-related information. The three most perceived influential factors of inclination to misuse alcohol were strains/conflict within the family home ( M = 2.98, standard deviation ( SD) = 0.18, 98.7%, n = 587), risk taking/curiosity behaviour ( M = 2.97, SD = 0.27, 97.3%, n = 579) and the desire not to be socially alienated ( M = 2.94, SD = 0.33, 96%, n = 571). Females were statistically significantly more likely to be influenced by desire not to be socially alienated than males ( p = .029). Religion and personal reasons were the most commonly cited reasons for not drinking.\n\n\nCONCLUSION\nFuture initiatives to reduce alcohol misuse and alcohol-related harms need to focus on changing social normative beliefs and attitudes around alcohol consumption and the family and environmental factors that influence the choice of young adult's alcohol drinking behaviour. Investment in multi-component interventions may be a useful approach.",
"title": ""
},
{
"docid": "0b86a006b1f8e3a5e940daef25fe7d58",
"text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.",
"title": ""
},
{
"docid": "1649b2776fcc2b8a736306128f8a2331",
"text": "The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.",
"title": ""
},
{
"docid": "67733befe230741c69665218dd256dc0",
"text": "Model reduction of the Markov process is a basic problem in modeling statetransition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a finite-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregatability and lumpability. We develop a class of spectral state compression methods for three tasks: (1) estimate the transition matrix of a low-rank Markov model, (2) estimate the leading subspace spanned by Markov features, and (3) recover latent structures of the state space like state aggregation and lumpable partition. The proposed methods provide an unsupervised learning framework for identifying Markov features and clustering states. We provide upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips. ∗Anru Zhang is with the Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: [email protected]; Mengdi Wang is with the Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, E-mail: [email protected]. †",
"title": ""
}
] | scidocsrr |
9f902f990ee1aa756178416d708d951d | Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning | [
{
"docid": "4e8c39eaa7444158a79573481b80a77f",
"text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.",
"title": ""
},
{
"docid": "4ab20b8b40e9d9eff4f9a817b984cf69",
"text": "Convolutional neural networks (CNNs) have emerged as the most powerful technique for a range of different tasks in computer vision. Recent work suggested that CNN features are generic and can be used for classification tasks outside the exact domain for which the networks were trained. In this work we use the features from one such network, OverFeat, trained for object detection in natural images, for nodule detection in computed tomography scans. We use 865 scans from the publicly available LIDC data set, read by four thoracic radiologists. Nodule candidates are generated by a state-of-the-art nodule detection system. We extract 2D sagittal, coronal and axial patches for each nodule candidate and extract 4096 features from the penultimate layer of OverFeat and classify these with linear support vector machines. We show for various configurations that the off-the-shelf CNN features perform surprisingly well, but not as good as the dedicated detection system. When both approaches are combined, significantly better results are obtained than either approach alone. We conclude that CNN features have great potential to be used for detection tasks in volumetric medical data.",
"title": ""
}
] | [
{
"docid": "101ccae3ebf37e824ea7056f92a3568a",
"text": "Ranking microblogs, such as tweets, as search results for a query is challenging, among other things because of the sheer amount of microblogs that are being generated in real time, as well as the short length of each individual microblog. In this paper, we describe several new strategies for ranking microblogs in a real-time search engine. Evaluating these ranking strategies is non-trivial due to the lack of a publicly available ground truth validation dataset. We have therefore developed a framework to obtain such validation data, as well as evaluation measures to assess the accuracy of the proposed ranking strategies. Our experiments demonstrate that it is beneficial for microblog search engines to take into account social network properties of the authors of microblogs in addition to properties of the microblog itself.",
"title": ""
},
{
"docid": "e88c9edf58bd19c12699bdd97a8ee400",
"text": "A new CMOS dynamic comparator using dual input single output differential amplifier as latch stage suitable for high speed analog-to-digital converters with High Speed, low power dissipation and immune to noise than the previous reported work is proposed. Backto-back inverter in the latch stage is replaced with dual-input single output differential amplifier. This topology completely removes the noise that is present in the input. The structure shows lower power dissipation and higher speed than the conventional comparators. The circuit is simulated with 1V DC supply voltage and 250 MHz clock frequency. The proposed topology is based on two cross coupled differential pairs positive feedback and switchable current sources, has a lower power dissipation, higher speed, less area, and it is shown to be very robust against transistor mismatch, noise immunity. Previous reported comparators are designed and simulated their DC response and Transient response in Cadence® Virtuoso Analog Design Environment using GPDK 90nm technology. Layouts of the proposed comparator have been done in Cadence® Virtuoso Layout XL Design Environment. DRC and LVS has been checked and compared with the corresponding circuits and RC extracted diagram has been generated. After that post layout simulation with 1V supply voltage has been done and compared the speed, power dissipation, Area, delay with the results before layout and the superior features of the proposed comparator are established.",
"title": ""
},
{
"docid": "46cf21a92d1d14f2f0e993f052cb7acb",
"text": "The conditional intervention principle is a formal principle that relates patterns of interventions and outcomes to causal structure. It is a central assumption of experimental design and the causal Bayes net formalism. Two studies suggest that preschoolers can use the conditional intervention principle to distinguish causal chains, common cause and interactive causal structures even in the absence of differential spatiotemporal cues and specific mechanism knowledge. Children were also able to use knowledge of causal structure to predict the patterns of evidence that would result from interventions. A third study suggests that children's spontaneous play can generate evidence that would support such accurate causal learning.",
"title": ""
},
{
"docid": "788ec5fff9c511fbf0a4206b79e9fc3c",
"text": "We present a new mechanical design for a 3-DOF haptic device with spherical kinematics (pitch, yaw, and prismatic radial). All motors are grounded in the base to decrease inertia and increase compactness near the user's hand. An aluminum-aluminum friction differential allows for actuation of pitch and yaw with mechanical robustness while allowing a cable transmission to route through its center. This novel cabling system provides simple, compact, and high-performance actuation of the radial DOF independent of motions in pitch and yaw. We show that the device's capabilities are suitable for general haptic rendering, as well as specialized applications of spherical kinematics such as laparoscopic surgery simulation.",
"title": ""
},
{
"docid": "8549330c92a246295a14018ed47c2934",
"text": "It is well-known that deep models could extract robust and abstract features. We propose a efficient facial expression recognition model based on transfer features from deep convolutional networks (ConvNets). We train the deep ConvNets through the task of 1580-class face identification on the MSRA-CFW database and transfer high-level features from the trained deep model to recognize expression. To train and test the facial expression recognition model on a large scope, we built a facial expression database of seven basic emotion states and 2062 imbalanced samples depending on four facial expression databases (CK+, JAFFE, KDEF, Pain expressions form PICS). Compared with 50.65% recognition rate based on Gabor features with the seven-class SVM and 78.84% recognition rate based on distance features with the seven-class SVM, we achieve average 80.49% recognition rate with the seven-class SVM classifier on the self-built facial expression database. Considering occluded face in reality, we test our model in the occluded condition and demonstrate the model could keep its ability of classification in the small occlusion case. To increase the ability further, we improve the facial expression recognition model. The modified model merges high-level features transferred from two trained deep ConvNets of the same structure and the different training sets. The modified model obviously improves its ability of classification in the occluded condition and achieves average 81.50% accuracy on the self-built facial expression database.",
"title": ""
},
{
"docid": "f1b8fde2d3faab672169921b97dc8422",
"text": "Pooling plays an important role in generating a discriminative video representation. In this paper, we propose a new semantic pooling approach for challenging event analysis tasks (e.g., event detection, recognition, and recounting) in long untrimmed Internet videos, especially when only a few shots/segments are relevant to the event of interest while many other shots are irrelevant or even misleading. The commonly adopted pooling strategies aggregate the shots indifferently in one way or another, resulting in a great loss of information. Instead, in this work we first define a novel notion of semantic saliency that assesses the relevance of each shot with the event of interest. We then prioritize the shots according to their saliency scores since shots that are semantically more salient are expected to contribute more to the final event analysis. Next, we propose a new isotonic regularizer that is able to exploit the constructed semantic ordering information. The resulting nearly-isotonic support vector machine classifier exhibits higher discriminative power in event analysis tasks. Computationally, we develop an efficient implementation using the proximal gradient algorithm, and we prove new and closed-form proximal steps. We conduct extensive experiments on three real-world video datasets and achieve promising improvements.",
"title": ""
},
{
"docid": "b76be3ca2e2b3bd0d7def56ccee815ea",
"text": "Internet of things (IoT) or Web of Things (WoT) is a wireless network between smart products or smart things connected to the internet. It is a new and fast developing market which not only connects objects and people but also billions of gadgets and smart devices. With the rapid growth of IoT, there is also a steady increase in security vulnerabilities of the linked objects. For example, a car manufacturer may want to link the systems within a car to smart home network networks to increase sales, but if all the various people involved do not embrace security the system will be exposed to security risks. As a result, there are several new published protocols of IoT, which focus on protecting critical data. However, these protocols face challenges and in this paper, numerous solutions are provided to overcome these problems. The widely used protocols such as, 802.15.4, 6LoWPAN, and RPL are the resenting of the IoT layers PHY/MAC, Adoption and Network. While CoAP (Constrained Application Protocol) is the application layer protocol designed as replication of the HTTP to serve the small devices coming under class 1 and 2. Many implementations of CoAP has been accomplished which indicates it's crucial amd upcoming role in the future of IoT applications. This research article explored the security of CoAP over DTLS incurring many issues and proposed solutions as well as open challenges for future research.",
"title": ""
},
{
"docid": "2f270908a1b4897b7d008d9673e3300b",
"text": "The implementation of 3D stereo matching in real time is an important problem for many vision applications and algorithms. The current work, extending previous results by the same authors, presents in detail an architecture which combines the methods of Absolute Differences, Census, and Belief Propagation in an integrated architecture suitable for implementation with Field Programmable Gate Array (FPGA) logic. Emphasis on the present work is placed on the justification of dimensioning the system, as well as detailed design and testing information for a fully placed and routed design to process 87 frames per sec (fps) in 1920 × 1200 resolution, and a fully implemented design for 400 × 320 which runs up to 1570 fps.",
"title": ""
},
{
"docid": "c2b6ae8cb6de825f9a1e8f032cd29cca",
"text": "This paper shows a LTE antenna design with metal-bezel structure for mobile phone application. For this antenna design, it overcomes the limitation of metal industrial design to achieve LTE all band operation by using two of metal-bezel as antenna radiators (radiator1 and 2). The radiator1 uses a tunable element at feed point and connects an inductor at antenna ground terminal to generate three resonate modes at 700∼960 MHz and 1710–2170 MHz. The radiator2 includes a part of metal-bezel and an extended FPC, which depends on a capacitive coupling effect between metal-bezel to provide the high band resonate mode at 2300–2700 MHz. Using this mechanism, the mobile phone can easily realize wideband LTE antenna design and has an attractive industrial design with metal-bezel structure.",
"title": ""
},
{
"docid": "4d73e82edf2a5ab8ee9191e1c1492af7",
"text": "This analysis focuses on several masculinities conceptualized by Holden Caulfield within the sociohistorical context of the 1950s. When it is understood by heterosexual men that to be masculine means to be emotionally detached and viewing women as sexual objects, they help propagate the belief that expressions of femininity and non-hegemonic masculinities are abnormal behavior and something to be demonized and punished. I propose that Holden’s “craziness” is the result of there being no positive images of heteronormative masculinity and no alternative to the rigid and strictly enforced roles offered to boys as they enter manhood. I suggest that a paradigm shift is needed that starts by collectively recognizing our current forms of patriarchy as harmful to everyone, followed by a reevaluation of how gender is prescribed to youth.",
"title": ""
},
{
"docid": "e2f6434cf7acfa6bd722f893c9bd1851",
"text": "Image Synthesis for Self-Supervised Visual Representation Learning",
"title": ""
},
{
"docid": "f393b6e00ef1e97f683a5dace33e40ff",
"text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 967–976).",
"title": ""
},
{
"docid": "dc55d8c0a4e19638bf51a98976ccdf91",
"text": "You don't have to deal with ADCs or DACs for long before running across this often quoted formula for the theoretical signal-to-noise ratio (SNR) of a converter. Rather than blindly accepting it on face value, a fundamental knowledge of its origin is important, because the formula encompasses some subtleties which if not understood can lead to significant misinterpretation of both data sheet specifications and converter performance. Remember that this formula represents the theoretical performance of a perfect N-bit ADC. You can compare the actual ADC SNR with the theoretical SNR and get an idea of how the ADC stacks up.",
"title": ""
},
{
"docid": "7b7776af302df446c7ced33eba386a12",
"text": "BACKGROUND\nEmerging research from psychology and the bio-behavioral sciences recognizes the importance of supporting patients to mobilize their personal strengths to live well with chronic illness. Positive technology and positive computing could be used as underlying design approaches to guide design and development of new technology-based interventions for this user group that support mobilizing their personal strengths.\n\n\nOBJECTIVE\nA codesigning workshop was organized with the aim to explore user requirements and ideas for how technology can be used to help people with chronic illness activate their personal strengths in managing their everyday challenges.\n\n\nMETHODS\nThirty-five participants from diverse backgrounds (patients, health care providers, designers, software developers, and researchers) participated. The workshop combined principles of (1) participatory and service design to enable meaningful participation and collaboration of different stakeholders and (2) an appreciative inquiry methodology to shift participants' attention to positive traits, values, and aspects that are meaningful and life-giving and stimulate participants' creativity, engagement, and collaboration. Utilizing these principles, participants were engaged in group activities to develop ideas for strengths-supportive tools. Each group consisted of 3-8 participants with different backgrounds. All group work was analysed using thematic analyses.\n\n\nRESULTS\nParticipants were highly engaged in all activities and reported a wide variety of requirements and ideas, including more than 150 personal strength examples, more than 100 everyday challenges that could be addressed by using personal strengths, and a wide range of functionality requirements (eg, social support, strength awareness and reflection, and coping strategies). 6 concepts for strength-supportive tools were created. These included the following: a mobile app to support a person to store, reflect on, and mobilize one's strengths (Strengths treasure chest app); \"empathy glasses\" enabling a person to see a situation from another person's perspective (Empathy Simulator); and a mobile app allowing a person to receive supportive messages from close people in a safe user-controlled environment (Cheering squad app). Suggested design elements for making the tools engaging included: metaphors (eg, trees, treasure island), visualization techniques (eg, dashboards, color coding), and multimedia (eg, graphics). Maintaining a positive focus throughout the tool was an important requirement, especially for feedback and framing of content.\n\n\nCONCLUSIONS\nCombining participatory, service design, and appreciative inquiry methods were highly useful to engage participants in creating innovative ideas. Building on peoples' core values and positive experiences empowered the participants to expand their horizons from addressing problems and symptoms, which is a very common approach in health care today, to focusing on their capacities and that which is possible, despite their chronic illness. The ideas and user requirements, combined with insights from relevant theories (eg, positive technology, self-management) and evidence from the related literature, are critical to guide the development of future more personalized and strengths-focused self-management tools.",
"title": ""
},
{
"docid": "fc9910cdff91f69d06d8510ff57fec8e",
"text": "Microplastics, plastics particles <5 mm in length, are a widespread pollutant of the marine environment. Oral ingestion of microplastics has been reported for a wide range of marine biota, but uptake into the body by other routes has received less attention. Here, we test the hypothesis that the shore crab (Carcinus maenas) can take up microplastics through inspiration across the gills as well as ingestion of pre-exposed food (common mussel Mytilus edulis). We used fluorescently labeled polystyrene microspheres (8-10 μm) to show that ingested microspheres were retained within the body tissues of the crabs for up to 14 days following ingestion and up to 21 days following inspiration across the gill, with uptake significantly higher into the posterior versus anterior gills. Multiphoton imaging suggested that most microspheres were retained in the foregut after dietary exposure due to adherence to the hairlike setae and were found on the external surface of gills following aqueous exposure. Results were used to construct a simple conceptual model of particle flow for the gills and the gut. These results identify ventilation as a route of uptake of microplastics into a common marine nonfilter feeding species.",
"title": ""
},
{
"docid": "d0aa53919bbb869a2c033247e413fc72",
"text": "We describe and present a new Question Answering (QA) component that can be easily used by the QA research community. It can be used to answer questions over DBpedia and Wikidata. The language support over DBpedia is restricted to English, while it can be used to answer questions in 4 different languages over Wikidata namely English, French, German and Italian. Moreover it supports both full natural language queries as well as keyword queries. We describe the interfaces to access and reuse it and the services it can be combined with. Moreover we show the evaluation results we achieved on the QALD-7 benchmark.",
"title": ""
},
{
"docid": "782eaf93618c0e6b066519459bcdbdad",
"text": "A model based on strikingly different philosophical as. sumptions from those currently popular is proposed for the design of online subject catalog access. Three design principles are presented and discussed: uncertainty (subject indexing is indeterminate and probabilis-tic beyond a certain point), variety (by Ashby's law of requisite variety, variety of searcher query must equal variety of document indexing), and complexity (the search process, particularly during the entry and orientation phases, is subtler and more complex, on several grounds, than current models assume). Design features presented are an access phase, including entry and orientation , a hunting phase, and a selection phase. An end-user thesaurus and a front-end system mind are presented as examples of online catalog system components to improve searcher success during entry and orientation. The proposed model is \" wrapped around \" existing Library of Congress subject-heading indexing in such a way as to enhance access greatly without requiring reindexing. It is argued that both for cost reasons and in principle this is a superior approach to other design philosophies .",
"title": ""
},
{
"docid": "8bdbf6fc33bc0b2cb5911683c13912a0",
"text": "The breaking of solid objects, like glass or pottery, poses a complex problem for computer animation. We present our methods of using physical simulation to drive the animation of breaking objects. Breakage is obtaned in a three-dimensional flexible model as the limit of elastic behavior. This article describes three principal features of the model: a breakage model, a collision-detection/response scheme, and a geometric modeling method. We use networks of point masses connected by springs to represent physical objects that can bend and break. We present effecient collision-detection algorithms, appropriate for simulating the collisions between the various pieces that interact in breakage. The capability of modeling real objects is provided by a technique of building up composite structures from simple lattice models. We applied these methods to animate the breaking of a teapot and other dishware activities in the animationTipsy Turvy shown at Siggraph '89. Animation techniques that rely on physical simulation to control the motion of objects are discussed, and further topics for research are presented.",
"title": ""
},
{
"docid": "96793943025ba92a7672444597b3a443",
"text": "This document describes Tree Kernel-SVM based methods for identifying sentences that could be improved in scientific text. This has the goal of contributing to the body of knowledge that attempt to build assistive tools to aid scientist improve the quality of their writings. Our methods consist of a combination of the output from multiple support vector machines which use Tree Kernel computations. Therefore, features for individual sentences are trees that reflect their grammatical structure. For the AESW 2016 Shared Task we built systems that provide probabilistic and binary outputs by using these models for trees comparisons.",
"title": ""
},
{
"docid": "80563d90bfdccd97d9da0f7276468a43",
"text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.",
"title": ""
}
] | scidocsrr |
2413e26c33d8fed722ae62b2ee35d170 | Application of stochastic recurrent reinforcement learning to index trading | [
{
"docid": "05ddc7e7819e5f9ac777f80e578f63ef",
"text": "This paper introduces adaptive reinforcement learning (ARL) as the basis for a fully automated trading system application. The system is designed to trade foreign exchange (FX) markets and relies on a layered structure consisting of a machine learning algorithm, a risk management overlay and a dynamic utility optimization layer. An existing machine-learning method called recurrent reinforcement learning (RRL) was chosen as the underlying algorithm for ARL. One of the strengths of our approach is that the dynamic optimization layer makes a fixed choice of model tuning parameters unnecessary. It also allows for a risk-return trade-off to be made by the user within the system. The trading system is able to make consistent gains out-of-sample while avoiding large draw-downs. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b8f59d1b416d4869ae38dbca0eaca41",
"text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.",
"title": ""
}
] | [
{
"docid": "04d66f58cea190d7d7ec8654b6c81d3b",
"text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.",
"title": ""
},
{
"docid": "c504800ce08654fb5bf49356d2f7fce3",
"text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.",
"title": ""
},
{
"docid": "157b5612644d4d7e1818932108d9119b",
"text": "This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.",
"title": ""
},
{
"docid": "fb26c619b72d05815b5b4ddf3100a8e6",
"text": "Knowledge graphs are large graph-structured databases of facts, which typically suffer from incompleteness. Link prediction is the task of inferring missing relations (links) between entities (nodes) in a knowledge graph. We approach this task using a hypernetwork architecture to generate convolutional layer filters specific to each relation and apply those filters to the subject entity embeddings. This architecture enables a trade-off between non-linear expressiveness and the number of parameters to learn. Our model simplifies the entity and relation embedding interactions introduced by the predecessor convolutional model, while outperforming all previous approaches to link prediction across all standard link prediction datasets.",
"title": ""
},
{
"docid": "88602ba9bcb297af04e58ed478664ee5",
"text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.",
"title": ""
},
{
"docid": "aea4eb371579b66c75c4cc4d51201253",
"text": "Fog computing based radio access network is a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. With the help of the new designed fog computing based access points (F-APs), the user-centric objectives can be achieved through the adaptive technique and will relieve the load of fronthaul and alleviate the burden of base band unit pool. In this paper, we derive the coverage probability and ergodic rate for both F-AP users and device-to-device users by taking into account the different nodes locations, cache sizes as well as user access modes. Particularly, the stochastic geometry tool is used to derive expressions for above performance metrics. Simulation results validate the accuracy of our analysis and we obtain interesting tradeoffs that depend on the effect of the cache size, user node density, and the quality of service constrains on the different performance metrics.",
"title": ""
},
{
"docid": "707c5c55c11aac05c783929239f953dd",
"text": "Social networks are of significant analytical interest. This is because their data are generated in great quantity, and intermittently, besides that, the data are from a wide variety, and it is widely available to users. Through such data, it is desired to extract knowledge or information that can be used in decision-making activities. In this context, we have identified the lack of methods that apply data mining techniques to the task of analyzing the professional profile of employees. The aim of such analyses is to detect competencies that are of greater interest by being more required and also, to identify their associative relations. Thus, this work introduces MineraSkill methodology that deals with methods to infer the desired profile of a candidate for a job vacancy. In order to do so, we use keyword detection via natural language processing techniques; which are related to others by inferring their association rules. The results are presented in the form of a case study, which analyzed data from LinkedIn, demonstrating the potential of the methodology in indicating trending competencies that are required together.",
"title": ""
},
{
"docid": "bdbb97522eea6cb9f8e11f07c2e83282",
"text": "Middle ear surgery is strongly influenced by anatomical and functional characteristics of the middle ear. The complex anatomy means a challenge for the otosurgeon who moves between preservation or improvement of highly important functions (hearing, balance, facial motion) and eradication of diseases. Of these, perforations of the tympanic membrane, chronic otitis media, tympanosclerosis and cholesteatoma are encountered most often in clinical practice. Modern techniques for reconstruction of the ossicular chain aim for best possible hearing improvement using delicate alloplastic titanium prostheses, but a number of prosthesis-unrelated factors work against this intent. Surgery is always individualized to the case and there is no one-fits-all strategy. Above all, both middle ear diseases and surgery can be associated with a number of complications; the most important ones being hearing deterioration or deafness, dizziness, facial palsy and life-threatening intracranial complications. To minimize risks, a solid knowledge of and respect for neurootologic structures is essential for an otosurgeon who must train him- or herself intensively on temporal bones before performing surgery on a patient.",
"title": ""
},
{
"docid": "c2f8de8ed9c796c351a96eedb072ddaf",
"text": "We partially replicate and extend Shepard, Hovland, and Jenkins's (1961) classic study of task difficulty for learning six fundamental types of rule-based categorization problems. Our main results mirrored those of Shepard et al., with the ordering of task difficulty being the same as in the original study. A much richer data set was collected, however, which enabled the generation of block-by-block learning curves suitable for quantitative fitting. Four current computational models of classification learning were fitted to the learning data: ALCOVE (Kruschke, 1992), the rational model (Anderson, 1991), the configural-cue model (Gluck & Bower, 1988b), and an extended version of the configural-cue model with dimensionalized, adaptive learning rate mechanisms. Although all of the models captured important qualitative aspects of the learning data, ALCOVE provided the best overall quantitative fit. The results suggest the need to incorporate some form of selective attention to dimensions in category-learning models based on stimulus generalization and cue conditioning.",
"title": ""
},
{
"docid": "6a993cdfbb701b43bb1cf287380e5b2e",
"text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed to replace of traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed for use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can be also thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.",
"title": ""
},
{
"docid": "4a1c76e617fe05b7253cb508ce5119bc",
"text": "Interdental cleaning is an important part of a patient's personal oral care regimen. Water flossers, also known as oral irrigators or dental water jets, can play a vital, effective role in interdental hygiene. Evidence has shown a significant reduction in plaque biofilm from tooth surfaces and the reduction of subgingival pathogenic bacteria from pockets as deep as 6 mm with the use of water flossing. In addition, water flossers have been shown to reduce gingivitis, bleeding, probing pocket depth, host inflammatory mediators, and calculus. Educating patients on the use of a water flosser as part of their oral hygiene routine can be a valuable tool in maintaining oral health.",
"title": ""
},
{
"docid": "38438e6a0bd03ad5f076daa1f248d001",
"text": "In recent years, research on reading-compr question and answering has drawn intense attention in Language Processing. However, it is still a key issue to the high-level semantic vector representation of quest paragraph. Drawing inspiration from DrQA [1], wh question and answering system proposed by Facebook, tl proposes an attention-based question and answering 11 adds the binary representation of the paragraph, the par; attention to the question, and the question's attentioi paragraph. Meanwhile, a self-attention calculation m proposed to enhance the question semantic vector reption. Besides, it uses a multi-layer bidirectional Lon: Term Memory(BiLSTM) networks to calculate the h semantic vector representations of paragraphs and q Finally, bilinear functions are used to calculate the pr of the answer's position in the paragraph. The expe results on the Stanford Question Answering Dataset(SQl development set show that the F1 score is 80.1% and tl 71.4%, which demonstrates that the performance of the is better than that of the model of DrQA, since they inc 2% and 1.3% respectively.",
"title": ""
},
{
"docid": "f72267cde1287bc3d0a235043c4dc5f5",
"text": "End-to-end congestion control mechanisms have been critical to the robustness and stability of the Internet. Most of today’s Internet traffic is TCP, and we expect this to remain so in the future. Thus, having “TCP-friendly” behavior is crucial for new applications. However, the emergence of non-congestion-controlled realtime applications threatens unfairness to competing TCP traffic and possible congestion collapse. We present an end-to-end TCP-friendly Rate Adaptation Protocol (RAP), which employs an additive-increase, multiplicativedecrease (AIMD) algorithm. It is well suited for unicast playback of realtime streams and other semi-reliable rate-based applications. Its primary goal is to be fair and TCP-friendly while separating network congestion control from application-level reliability. We evaluate RAP through extensive simulation, and conclude that bandwidth is usually evenly shared between TCP and RAP traffic. Unfairness to TCP traffic is directly determined by how TCP diverges from the AIMD algorithm. Basic RAP behaves in a TCPfriendly fashion in a wide range of likely conditions, but we also devised a fine-grain rate adaptation mechanism to extend this range further. Finally, we show that deploying RED queue management can result in an ideal fairness between TCP and RAP traffic.",
"title": ""
},
{
"docid": "77ac3a28ffa420a1e4f1366d36b4c188",
"text": " Call-Exner bodies are present in ovarian follicles of a range of species including human and rabbit, and in a range of human ovarian tumors. We have also found structures resembling Call-Exner bodies in bovine preantral and small antral follicles. Hematoxylin and eosin staining of single sections of bovine ovaries has shown that 30% of preantral follicles with more than one layer of granulosa cells and 45% of small (less than 650 μm) antral follicles have at least one Call-Exner body composed of a spherical eosinophilic region surrounded by a rosette of granulosa cells. Alcian blue stains the spherical eosinophilic region of the Call-Exner bodies. Electron microscopy has demonstrated that some Call-Exner bodies contain large aggregates of convoluted basal lamina, whereas others also contain regions of unassembled basal-lamina-like material. Individual chains of the basal lamina components type IV collagen (α1 to α5) and laminin (α1, β2 and δ1) have been immunolocalized to Call-Exner bodies in sections of fresh-frozen ovaries. Bovine Call-Exner bodies are presumably analogous to Call-Exner bodies in other species but are predominantly found in preantral and small antral follicles, rather than large antral follicles. With follicular development, the basal laminae of Call-Exner bodies change in their apparent ratio of type IV collagen to laminin, similar to changes observed in the follicular basal lamina, suggesting that these structures have a common cellular origin.",
"title": ""
},
{
"docid": "83f5af68f54f9db0608d8173432188f9",
"text": "JaTeCS is an open source Java library that supports research on automatic text categorization and other related problems, such as ordinal regression and quantification, which are of special interest in opinion mining applications. It covers all the steps of an experimental activity, from reading the corpus to the evaluation of the experimental results. As JaTeCS is focused on text as the main input data, it provides the user with many text-dedicated tools, e.g.: data readers for many formats, including the most commonly used text corpora and lexical resources, natural language processing tools, multi-language support, methods for feature selection and weighting, the implementation of many machine learning algorithms as well as wrappers for well-known external software (e.g., SVMlight) which enable their full control from code. JaTeCS support its expansion by abstracting through interfaces many of the typical tools and procedures used in text processing tasks. The library also provides a number of “template” implementations of typical experimental setups (e.g., train-test, k-fold validation, grid-search optimization, randomized runs) which enable fast realization of experiments just by connecting the templates with data readers, learning algorithms and evaluation measures.",
"title": ""
},
{
"docid": "2eed40550c0d011af91a2998ed16f501",
"text": "Context: Client-side JavaScript is widely used in web applications to improve user-interactivity and minimize client-server communications. Unfortunately, web applications are prone to JavaScript faults. While prior studies have demonstrated the prevalence of these faults, no attempts have been made to determine their root causes and consequences. Objective: The goal of our study is to understand the root causes and impact of JavaScript faults and how the results can impact JavaScript programmers, testers and tool developers. Method: We perform an empirical study of 317 bug reports from 12 bug repositories. The bug reports are thoroughly examined to classify and extract information about the fault's cause (the error) and consequence (the failure and impact). Result: The majority (65%) of JavaScript faults are DOM-related, meaning they are caused by faulty interactions of the JavaScript code with the Document Object Model (DOM). Further, 80% of the highest impact JavaScript faults are DOM-related. Finally, most JavaScript faults originate from programmer mistakes committed in the JavaScript code itself, as opposed to other web application components such as the server-side or HTML code. Conclusion: Given the prevalence of DOM-related faults, JavaScript programmers need development tools that can help them reason about the DOM. Also, testers should prioritize detection of DOM-related faults as most high impact faults belong to this category. Finally, developers can use the error patterns we found to design more powerful static analysis tools for JavaScript.",
"title": ""
},
{
"docid": "705efc15f0c07c3028c691d5098fe921",
"text": "Antisocial behavior is a socially maladaptive and harmful trait to possess. This can be especially injurious for a child who is raised by a parent with this personality structure. The pathology of antisocial behavior implies traits such as deceitfulness, irresponsibility, unreliability, and an incapability to feel guilt, remorse, or even love. This is damaging to a child’s emotional, cognitive, and social development. Parents with this personality makeup can leave a child traumatized, empty, and incapable of forming meaningful personal relationships. Both genetic and environmental factors influence the development of antisocial behavior. Moreover, the child with a genetic predisposition to antisocial behavior who is raised with a parental style that triggers the genetic liability is at high risk for developing the same personality structure. Antisocial individuals are impulsive, irritable, and often have no concerns over their purported responsibilities. As a parent, this can lead to erratic discipline, neglectful parenting, and can undermine effective care giving. This paper will focus on the implications of parents with antisocial behavior and the impact that this behavior has on attachment as well as on the development of antisocial traits in children.",
"title": ""
},
{
"docid": "c7237823182b47cc03c70937bbbb0be0",
"text": "To discover patterns in historical data, climate scientists have applied various clustering methods with the goal of identifying regions that share some common climatological behavior. However, past approaches are limited by the fact that they either consider only a single time period (snapshot) of multivariate data, or they consider only a single variable by using the time series data as multi-dimensional feature vector. In both cases, potentially useful information may be lost. Moreover, clusters in high-dimensional data space can be difficult to interpret, prompting the need for a more effective data representation. We address both of these issues by employing a complex network (graph) to represent climate data, a more intuitive model that can be used for analysis while also having a direct mapping to the physical world for interpretation. A cross correlation function is used to weight network edges, thus respecting the temporal nature of the data, and a community detection algorithm identifies multivariate clusters. Examining networks for consecutive periods allows us to study structural changes over time. We show that communities have a climatological interpretation and that disturbances in structure can be an indicator of climate events (or lack thereof). Finally, we discuss how this model can be applied for the discovery of more complex concepts such as unknown teleconnections or the development of multivariate climate indices and predictive insights.",
"title": ""
},
{
"docid": "b60850caccf9be627b15c7c83fb3938e",
"text": "Research and development of hip stem implants started centuries ago. However, there is still no yet an optimum design that fulfills all the requirements of the patient. New manufacturing technologies have opened up new possibilities for complicated theoretical designs to become tangible reality. Current trends in the development of hip stems focus on applying porous structures to improve osseointegration and reduce stem stiffness in order to approach the stiffness of the natural human bone. In this field, modern additive manufacturing machines offer unique flexibility in manufacturing parts combining variable density mesh structures with solid and porous metal in a single manufacturing process. Furthermore, additive manufacturing machines became powerful competitors in the economical mass production of hip implants. This is due to their ability to manufacture several parts with different geometries in a single setup and with minimum material consumption. This paper reviews the application of additive manufacturing (AM) techniques in the production of innovative porous femoral hip stem design.",
"title": ""
}
] | scidocsrr |
401343e29ba8e7ed73f7d2aaa811afdd | Resilience : The emergence of a perspective for social – ecological systems analyses | [
{
"docid": "5eb526843c41d2549862b60c17110b5b",
"text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.",
"title": ""
}
] | [
{
"docid": "832305d62b48e316d82efc62fc390359",
"text": "Bluetooth Low Energy (BLE) is ideally suited to exchange information between mobile devices and Internet-of-Things (IoT) sensors. It is supported by most recent consumer mobile devices and can be integrated into sensors enabling them to exchange information in an energy-efficient manner. However, when BLE is used to access or modify sensitive sensor parameters, exchanged messages need to be suitably protected, which may not be possible with the security mechanisms defined in the BLE specification. Consequently we contribute BALSA, a set of cryptographic protocols, a BLE service and a suggested usage architecture aiming to provide a suitable level of security. In this paper we define and analyze these components and describe our proof-of-concept, which demonstrates the feasibility and benefits of BALSA.",
"title": ""
},
{
"docid": "f466a283f1073569ca31e43ebffcda7d",
"text": "Facial alignment involves finding a set of landmark points on an image with a known semantic meaning. However, this semantic meaning of landmark points is often lost in 2D approaches where landmarks are either moved to visible boundaries or ignored as the pose of the face changes. In order to extract consistent alignment points across large poses, the 3D structure of the face must be considered in the alignment step. However, extracting a 3D structure from a single 2D image usually requires alignment in the first place. We present our novel approach to simultaneously extract the 3D shape of the face and the semantically consistent 2D alignment through a 3D Spatial Transformer Network (3DSTN) to model both the camera projection matrix and the warping parameters of a 3D model. By utilizing a generic 3D model and a Thin Plate Spline (TPS) warping function, we are able to generate subject specific 3D shapes without the need for a large 3D shape basis. In addition, our proposed network can be trained in an end-to-end frame-work on entirely synthetic data from the 300W-LP dataset. Unlike other 3D methods, our approach only requires one pass through the network resulting in a faster than real-time alignment. Evaluations of our model on the Annotated Facial Landmarks in the Wild (AFLW) and AFLW2000-3D datasets show our method achieves state-of-the-art performance over other 3D approaches to alignment.",
"title": ""
},
{
"docid": "da8e0706b5ca5b7d391a07d443edc0cf",
"text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sources containing such opinions, e.g., product reviews, forums, discussion groups, and blogs. Techniques are now being developed to exploit these sources to help organizations and individuals to gain such important information easily and quickly. In this paper, we first discuss several aspects of the problem in the AI context, and then present some results of our existing work published in KDD-04 and WWW-05.",
"title": ""
},
{
"docid": "2bdefbc66ae89ce8e48acf0d13041e0a",
"text": "We introduce an ac transconductance dispersion method (ACGD) to profile the oxide traps in an MOSFET without needing a body contact. The method extracts the spatial distribution of oxide traps from the frequency dependence of transconductance, which is attributed to charge trapping as modulated by an ac gate voltage. The results from this method have been verified by the use of the multifrequency charge pumping (MFCP) technique. In fact, this method complements the MFCP technique in terms of the trap depth that each method is capable of probing. We will demonstrate the method with InP passivated InGaAs substrates, along with electrically stressed Si N-MOSFETs.",
"title": ""
},
{
"docid": "e0d3a7e7e000c6704518763bf8dff8c8",
"text": "Integration of optical communication circuits directly into high-performance microprocessor chips can enable extremely powerful computer systems. A germanium photodetector that can be monolithically integrated with silicon transistor technology is viewed as a key element in connecting chip components with infrared optical signals. Such a device should have the capability to detect very-low-power optical signals at very high speed. Although germanium avalanche photodetectors (APD) using charge amplification close to avalanche breakdown can achieve high gain and thus detect low-power optical signals, they are universally considered to suffer from an intolerably high amplification noise characteristic of germanium. High gain with low excess noise has been demonstrated using a germanium layer only for detection of light signals, with amplification taking place in a separate silicon layer. However, the relatively thick semiconductor layers that are required in such structures limit APD speeds to about 10 GHz, and require excessively high bias voltages of around 25 V (ref. 12). Here we show how nanophotonic and nanoelectronic engineering aimed at shaping optical and electrical fields on the nanometre scale within a germanium amplification layer can overcome the otherwise intrinsically poor noise characteristics, achieving a dramatic reduction of amplification noise by over 70 per cent. By generating strongly non-uniform electric fields, the region of impact ionization in germanium is reduced to just 30 nm, allowing the device to benefit from the noise reduction effects that arise at these small distances. Furthermore, the smallness of the APDs means that a bias voltage of only 1.5 V is required to achieve an avalanche gain of over 10 dB with operational speeds exceeding 30 GHz. Monolithic integration of such a device into computer chips might enable applications beyond computer optical interconnects—in telecommunications, secure quantum key distribution, and subthreshold ultralow-power transistors.",
"title": ""
},
{
"docid": "e38f29a603fb23544ea2fcae04eb1b5d",
"text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. First, we seek to evaluate the desired characteristics that provenance systems are expected to have. Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.",
"title": ""
},
{
"docid": "cdf313ff69ebd11b360cd5e3b3942580",
"text": "This paper presents, for the first time, a novel pupil detection method for near-infrared head-mounted cameras, which relies not only on image appearance to pursue the shape and gradient variation of the pupil contour, but also on structure principle to explore the mechanism of pupil projection. There are three main characteristics in the proposed method. First, in order to complement the pupil projection information, an eyeball center calibration method is proposed to build an eye model. Second, by utilizing the deformation model of pupils under head-mounted cameras and the edge gradients of a circular pattern, we find the best fitting ellipse describing the pupil boundary. Third, an eye-model-based pupil fitting algorithm with only three parameters is proposed to fine-tune the final pupil contour. Consequently, the proposed method extracts the geometry-appearance information, effectively boosting the performance of pupil detection. Experimental results show that this method outperforms the state-of-the-art ones. On a widely used public database (LPW), our method achieves 72.62% in terms of detection rate up to an error of five pixels, which is superior to the previous best one.",
"title": ""
},
{
"docid": "fbce6308301306e0ef5877b192281a95",
"text": "AIM\nThe aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process.\n\n\nBACKGROUND\nRecent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated.\n\n\nDISCUSSION\nA modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review.\n\n\nCONCLUSION\nAn updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.",
"title": ""
},
{
"docid": "f6de868d9d3938feb7c33f082dddcdc0",
"text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people accessing key-based security systems. Existing methods of obtaining such secret information relies on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enable attackers to reproduce the trajectories of the user's hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 5000 key entry traces collected from 20 adults for key-based security systems (i.e. ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80% accuracy with only one try and more than 90% accuracy with three tries, which to our knowledge, is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.",
"title": ""
},
{
"docid": "2d5d72944f12446a93e63f53ffce7352",
"text": "Standardization of transanal total mesorectal excision requires the delineation of the principal procedural components before implementation in practice. This technique is a bottom-up approach to a proctectomy with the goal of a complete mesorectal excision for optimal outcomes of oncologic treatment. A detailed stepwise description of the approach with technical pearls is provided to optimize one's understanding of this technique and contribute to reducing the inherent risk of beginning a new procedure. Surgeons should be trained according to standardized pathways including online preparation, observational or hands-on courses as well as the potential for proctorship of early cases experiences. Furthermore, technological pearls with access to the \"video-in-photo\" (VIP) function, allow surgeons to link some of the images in this article to operative demonstrations of certain aspects of this technique.",
"title": ""
},
{
"docid": "7c2d0b382685ac7e85c978ece31251d7",
"text": "Given an edge-weighted graph G with a set $$Q$$ Q of k terminals, a mimicking network is a graph with the same set of terminals that exactly preserves the size of minimum cut between any partition of the terminals. A natural question in the area of graph compression is to provide as small mimicking networks as possible for input graph G being either an arbitrary graph or coming from a specific graph class. We show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with k terminals that require $$2^{k-2}$$ 2k-2 edges in any mimicking network. This nearly matches an upper bound of $$\\mathcal {O}(k 2^{2k})$$ O(k22k) of Krauthgamer and Rika (in: Khanna (ed) Proceedings of the twenty-fourth annual ACM-SIAM symposium on discrete algorithms, SODA 2013, New Orleans, 2013) and is in sharp contrast with the upper bounds of $$\\mathcal {O}(k^2)$$ O(k2) and $$\\mathcal {O}(k^4)$$ O(k4) under the assumption that all terminals lie on a single face (Goranci et al., in: Pruhs and Sohler (eds) 25th Annual European symposium on algorithms (ESA 2017), 2017, arXiv:1702.01136; Krauthgamer and Rika in Refined vertex sparsifiers of planar graphs, 2017, arXiv:1702.05951). As a side result we show a tight example for double-exponential upper bounds given by Hagerup et al. (J Comput Syst Sci 57(3):366–375, 1998), Khan and Raghavendra (Inf Process Lett 114(7):365–371, 2014), and Chambers and Eppstein (J Gr Algorithms Appl 17(3):201–220, 2013).",
"title": ""
},
{
"docid": "ad9f3510ffaf7d0bdcf811a839401b83",
"text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.",
"title": ""
},
{
"docid": "136278bd47962b54b644a77bbdaf77e3",
"text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters. These filters successively exclude pixels that have no chance of matching the template from further processing.",
"title": ""
},
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "c45d911aea9d06208a4ef273c9ab5ff3",
"text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.",
"title": ""
},
{
"docid": "aea5591e815e61ba01914a84eda8b6af",
"text": "We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core 2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. In particular, it is the only cache-timing-attack resistant implementation offering competitive speeds for stream as well as for packet encryption: for 576-byte packets, we improve performance over previous bitsliced implementations by more than a factor of 2. We also report more than 30% improved speeds for lookup-table based Galois/Counter mode authentication, achieving 10.68 cycles/byte for authenticated encryption. Furthermore, we present the first constant-time implementation of AES-GCM that has a reasonable speed of 21.99 cycles/byte, thus offering a full suite of timing-analysis resistant software for authenticated encryption.",
"title": ""
},
{
"docid": "940e7dc630b7dcbe097ade7abb2883a4",
"text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.",
"title": ""
},
{
"docid": "67974bd363f89a9da77b2e09851905d3",
"text": "Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information. However, events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events. This paper proposes a novel framework dubbed as Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms (HBTNGMA) to solve the two problems simultaneously. Firstly, we propose a hierarchical and bias tagging networks to detect multiple events in one sentence collectively. Then, we devise a gated multi-level attention to automatically extract and dynamically fuse the sentence-level and document-level information. The experimental results on the widely used ACE 2005 dataset show that our approach significantly outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "27ddea786e06ffe20b4f526875cdd76b",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
},
{
"docid": "5a9f6b9f6f278f5f3359d5d58b8516a8",
"text": "BACKGROUND\nMusculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nThis paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed. Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process.\n\n\nCONCLUSIONS\nThe method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.",
"title": ""
}
] | scidocsrr |
d2a63e643af6ee3a04fedd62eab491d7 | Effect of prebiotic intake on gut microbiota, intestinal permeability and glycemic control in children with type 1 diabetes: study protocol for a randomized controlled trial | [
{
"docid": "656baf66e6dd638d9f48ea621593bac3",
"text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.",
"title": ""
}
] | [
{
"docid": "e02207c42eda7ec15db5dcd26ee55460",
"text": "This paper focuses on a new task, i.e. transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision. We design an functionally interpretable structure for the generic network. Like building LEGO blocks, we teach the generic network a new category by directly transplanting the module corresponding to the category from a pre-trained network with a few or even without sample annotations. Our method incrementally adds new categories to the generic network but does not affect representations of existing categories. In this way, our method breaks the typical bottleneck of learning a net for massive tasks and categories, i.e. the requirement of collecting samples for all tasks and categories at the same time before the learning begins. Thus, we use a new distillation algorithm, namely back-distillation, to overcome specific challenges of network transplanting. Our method without training samples even outperformed the baseline with 100 training samples.",
"title": ""
},
{
"docid": "c6aed5c5e899898083f33eb5f42d4706",
"text": "Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide highquality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. As an alternative, we also consider reputation mechanisms. We complement the gametheoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.",
"title": ""
},
{
"docid": "a9338a9f699bdc8d0085105e3ad217d1",
"text": "The femoral head receives blood supply mainly from the deep branch of the medial femoral circumflex artery (MFCA). In previous studies we have performed anatomical dissections of 16 specimens and subsequently visualised the arteries supplying the femoral head in 55 healthy individuals. In this further radiological study we compared the arterial supply of the femoral head in 35 patients (34 men and one woman, mean age 37.1 years (16 to 64)) with a fracture/dislocation of the hip with a historical control group of 55 hips. Using CT angiography, we identified the three main arteries supplying the femoral head: the deep branch and the postero-inferior nutrient artery both arising from the MFCA, and the piriformis branch of the inferior gluteal artery. It was possible to visualise changes in blood flow after fracture/dislocation. Our results suggest that blood flow is present after reduction of the dislocated hip. The deep branch of the MFCA was patent and contrast-enhanced in 32 patients, and the diameter of this branch was significantly larger in the fracture/dislocation group than in the control group (p = 0.022). In a subgroup of ten patients with avascular necrosis (AVN) of the femoral head, we found a contrast-enhanced deep branch of the MFCA in eight hips. Two patients with no blood flow in any of the three main arteries supplying the femoral head developed AVN.",
"title": ""
},
{
"docid": "1bb54da28e139390c2176ae244066575",
"text": "A novel non-parametric, multi-variate quickest detection method is proposed for cognitive radios (CRs) using both energy and cyclostationary features. The proposed approach can be used to track state dynamics of communication channels. This capability can be useful for both dynamic spectrum sharing (DSS) and future CRs, as in practice, centralized channel synchronization is unrealistic and the prior information of the statistics of channel usage is, in general, hard to obtain. The proposed multi-variate non-parametric average sample power and cyclostationarity-based quickest detection scheme is shown to achieve better performance compared to traditional energy-based schemes. We also develop a parallel on-line quickest detection/off-line change-point detection algorithm to achieve self-awareness of detection delays and false alarms for future automation. Compared to traditional energy-based quickest detection schemes, the proposed multi-variate non-parametric quickest detection scheme has comparable computational complexity. The simulated performance shows improvements in terms of small detection delays and significantly higher percentage of spectrum utilization.",
"title": ""
},
{
"docid": "309981f955187cef50b5b1f4527d97af",
"text": "BACKGROUND\nDetection of unknown risks with marketed medicines is key to securing the optimal care of individual patients and to reducing the societal burden from adverse drug reactions. Large collections of individual case reports remain the primary source of information and require effective analytics to guide clinical assessors towards likely drug safety signals. Disproportionality analysis is based solely on aggregate numbers of reports and naively disregards report quality and content. However, these latter features are the very fundament of the ensuing clinical assessment.\n\n\nOBJECTIVE\nOur objective was to develop and evaluate a data-driven screening algorithm for emerging drug safety signals that accounts for report quality and content.\n\n\nMETHODS\nvigiRank is a predictive model for emerging safety signals, here implemented with shrinkage logistic regression to identify predictive variables and estimate their respective contributions. The variables considered for inclusion capture different aspects of strength of evidence, including quality and clinical content of individual reports, as well as trends in time and geographic spread. A reference set of 264 positive controls (historical safety signals from 2003 to 2007) and 5,280 negative controls (pairs of drugs and adverse events not listed in the Summary of Product Characteristics of that drug in 2012) was used for model fitting and evaluation; the latter used fivefold cross-validation to protect against over-fitting. All analyses were performed on a reconstructed version of VigiBase(®) as of 31 December 2004, at around which time most safety signals in our reference set were emerging.\n\n\nRESULTS\nThe following aspects of strength of evidence were selected for inclusion into vigiRank: the numbers of informative and recent reports, respectively; disproportional reporting; the number of reports with free-text descriptions of the case; and the geographic spread of reporting. vigiRank offered a statistically significant improvement in area under the receiver operating characteristics curve (AUC) over screening based on the Information Component (IC) and raw numbers of reports, respectively (0.775 vs. 0.736 and 0.707, cross-validated).\n\n\nCONCLUSIONS\nAccounting for multiple aspects of strength of evidence has clear conceptual and empirical advantages over disproportionality analysis. vigiRank is a first-of-its-kind predictive model to factor in report quality and content in first-pass screening to better meet tomorrow's post-marketing drug safety surveillance needs.",
"title": ""
},
{
"docid": "da3650998a4bd6ea31467daa631d0e05",
"text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "1813bda6a39855ce9e5caf24536425c1",
"text": "This article presents VineSens, a hardware and software platform for supporting the decision-making of the vine grower. VineSens is based on a wireless sensor network system composed by autonomous and self-powered nodes that are deployed throughout a vineyard. Such nodes include sensors that allow us to obtain detailed knowledge on different viticulture processes. Thanks to the use of epidemiological models, VineSens is able to propose a custom control plan to prevent diseases like one of the most feared by vine growers: downy mildew. VineSens generates alerts that warn farmers about the measures that have to be taken and stores the historical weather data collected from different spots of the vineyard. Such data can then be accessed through a user-friendly web-based interface that can be accessed through the Internet by using desktop or mobile devices. VineSens was deployed at the beginning in 2016 in a vineyard in the Ribeira Sacra area (Galicia, Spain) and, since then, its hardware and software have been tested to prevent the development of downy mildew, showing during its first season that the system can led to substantial savings, to decrease the amount of phytosanitary products applied, and, as a consequence, to obtain a more ecologically sustainable and healthy wine.",
"title": ""
},
{
"docid": "38fab4cc5cffea363eecbc8b2f2c6088",
"text": "Domain adaptation algorithms are useful when the distributions of the training and the test data are different. In this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. We propose maximum independence domain adaptation (MIDA) and semi-supervised MIDA to address this problem. Domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. Then, MIDA learns a subspace which has maximum independence with the domain features, so as to reduce the interdomain discrepancy in distributions. A feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. They can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change.",
"title": ""
},
{
"docid": "6a1e5ffdcac8d22cfc8f9c2fc1ca0e17",
"text": "Magnetic resonance images (MRI) play an important role in supporting and substituting clinical information in the diagnosis of multiple sclerosis (MS) disease by presenting lesion in brain MR images. In this paper, an algorithm for MS lesion segmentation from Brain MR Images has been presented. We revisit the modification of properties of fuzzy -c means algorithms and the canny edge detection. By changing and reformed fuzzy c-means clustering algorithms, and applying canny contraction principle, a relationship between MS lesions and edge detection is established. For the special case of FCM, we derive a sufficient condition and clustering parameters, allowing identification of them as (local) minima of the objective function.",
"title": ""
},
{
"docid": "2a67a524cb3279967207b1fa8748cd04",
"text": "Recent work in Information Retrieval (IR) using Deep Learning models has yielded state of the art results on a variety of IR tasks. Deep neural networks (DNN) are capable of learning ideal representations of data during the training process, removing the need for independently extracting features. However, the structures of these DNNs are often tailored to perform on specific datasets. In addition, IR tasks deal with text at varying levels of granularity from single factoids to documents containing thousands of words. In this paper, we examine the role of the granularity on the performance of common state of the art DNN structures in IR.",
"title": ""
},
{
"docid": "57224fab5298169be0da314e55ca6b43",
"text": "Although users’ preference is semantically reflected in the free-form review texts, this wealth of information was not fully exploited for learning recommender models. Specifically, almost all existing recommendation algorithms only exploit rating scores in order to find users’ preference, but ignore the review texts accompanied with rating information. In this paper, we propose a novel matrix factorization model (called TopicMF) which simultaneously considers the ratings and accompanied review texts. Experimental results on 22 real-world datasets show the superiority of our model over the state-of-the-art models, demonstrating its effectiveness for recommendation tasks.",
"title": ""
},
{
"docid": "7423711eab3ab9054618c8445099671d",
"text": "A worldview (or “world view”) is a set of assumptions about physical and social reality that may have powerful effects on cognition and behavior. Lacking a comprehensive model or formal theory up to now, the construct has been underused. This article advances theory by addressing these gaps. Worldview is defined. Major approaches to worldview are critically reviewed. Lines of evidence are described regarding worldview as a justifiable construct in psychology. Worldviews are distinguished from schemas. A collated model of a worldview’s component dimensions is described. An integrated theory of worldview function is outlined, relating worldview to personality traits, motivation, affect, cognition, behavior, and culture. A worldview research agenda is outlined for personality and social psychology (including positive and peace psychology).",
"title": ""
},
{
"docid": "0d802fea4e3d9324ba46c35e5a002b6a",
"text": "Hyponatremia is common in both inpatients and outpatients. Medications are often the cause of acute or chronic hyponatremia. Measuring the serum osmolality, urine sodium concentration and urine osmolality will help differentiate among the possible causes. Hyponatremia in the physical states of extracellular fluid (ECF) volume contraction and expansion can be easy to diagnose but often proves difficult to manage. In patients with these states or with normal or near-normal ECF volume, the syndrome of inappropriate secretion of antidiuretic hormone is a diagnosis of exclusion, requiring a thorough search for all other possible causes. Hyponatremia should be corrected at a rate similar to that at which it developed. When symptoms are mild, hyponatremia should be managed conservatively, with therapy aimed at removing the offending cause. When symptoms are severe, therapy should be aimed at more aggressive correction of the serum sodium concentration, typically with intravenous therapy in the inpatient setting.",
"title": ""
},
{
"docid": "3072c5458a075e6643a7679ccceb1417",
"text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.",
"title": ""
},
{
"docid": "3680f9e5dd82e6f3d5d410f2725225cc",
"text": "This work contributes several new elements to the quest for a biologically plausible implementation of backprop in brains. We introduce a very general and abstract framework for machine learning, in which the quantities of interest are defined implicitly through an energy function. In this framework, only one kind of neural computation is involved both for the first phase (when the prediction is made) and the second phase (after the target is revealed), like the contrastive Hebbian learning algorithm in the continuous Hopfield model for example. Contrary to automatic differentiation in computational graphs (i.e. standard backprop), there is no need for special computation in the second phase of our framework. One advantage of our framework over contrastive Hebbian learning is that the second phase corresponds to only nudging the first-phase fixed point towards a configuration that reduces prediction error. In the case of a multi-layer supervised neural network, the output units are slightly nudged towards their target, and the perturbation introduced at the output layer propagates backward in the network. The signal ’back-propagated’ during this second phase actually contains information about the error derivatives, which we use to implement a learning rule proved to perform gradient descent with respect to an objective function.",
"title": ""
},
{
"docid": "31a0b00a79496deccdec4d3629fcbf88",
"text": "The Human Papillomavirus (HPV) E6 protein is one of three oncoproteins encoded by the virus. It has long been recognized as a potent oncogene and is intimately associated with the events that result in the malignant conversion of virally infected cells. In order to understand the mechanisms by which E6 contributes to the development of human malignancy many laboratories have focused their attention on identifying the cellular proteins with which E6 interacts. In this review we discuss these interactions in the light of their respective contributions to the malignant progression of HPV transformed cells.",
"title": ""
},
{
"docid": "5e59888b6e0c562d546618dd95fa00b8",
"text": "The massive acceleration of the nitrogen cycle as a result of the production and industrial use of artificial nitrogen fertilizers worldwide has enabled humankind to greatly increase food production, but it has also led to a host of environmental problems, ranging from eutrophication of terrestrial and aquatic systems to global acidification. The findings of many national and international research programmes investigating the manifold consequences of human alteration of the nitrogen cycle have led to a much improved understanding of the scope of the anthropogenic nitrogen problem and possible strategies for managing it. Considerably less emphasis has been placed on the study of the interactions of nitrogen with the other major biogeochemical cycles, particularly that of carbon, and how these cycles interact with the climate system in the presence of the ever-increasing human intervention in the Earth system. With the release of carbon dioxide (CO2) from the burning of fossil fuels pushing the climate system into uncharted territory, which has major consequences for the functioning of the global carbon cycle, and with nitrogen having a crucial role in controlling key aspects of this cycle, questions about the nature and importance of nitrogen–carbon–climate interactions are becoming increasingly pressing. The central question is how the availability of nitrogen will affect the capacity of Earth’s biosphere to continue absorbing carbon from the atmosphere (see page 289), and hence continue to help in mitigating climate change. Addressing this and other open issues with regard to nitrogen–carbon–climate interactions requires an Earth-system perspective that investigates the dynamics of the nitrogen cycle in the context of a changing carbon cycle, a changing climate and changes in human actions.",
"title": ""
},
{
"docid": "513add6fe4a18ed30e97a2c61ebd59ea",
"text": "Holistic driving scene understanding is a critical step toward intelligent transportation systems. It involves different levels of analysis, interpretation, reasoning and decision making. In this paper, we propose a 3D dynamic scene analysis framework as the first step toward driving scene understanding. Specifically, given a sequence of synchronized 2D and 3D sensory data, the framework systematically integrates different perception modules to obtain 3D position, orientation, velocity and category of traffic participants and the ego car in a reconstructed 3D semantically labeled traffic scene. We implement this framework and demonstrate the effectiveness in challenging urban driving scenarios. The proposed framework builds a foundation for higher level driving scene understanding problems such as intention and motion prediction of surrounding entities, ego motion planning, and decision making.",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
},
{
"docid": "a3cd3ec70b5d794173db36cb9a219403",
"text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. If a full 3-d model (including the occluded portions of a scene) were available, then methods such as formand forceclosure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright c © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d pointcloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. 1 This work builds on Saxena et al. (2006a; 2006b; 2007; 2008) which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as to multi-fingered hands where one has to not only choose the 3d position (and orientation) of the hand, but also address the high dof problem of choosing the positions of all the fingers. 
Our approach begins by computing a number of features of grasp quality, using both 2-d image and the 3-d point cloud features. For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, a game controller, and Forexample, standard stereo vision fails to return depth values for textureless portions of the object, thus its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)",
"title": ""
}
] | scidocsrr |
64d42a604baece201ba258cf06ac275b | DCN+: Mixed Objective and Deep Residual Coattention for Question Answering | [
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
] | [
{
"docid": "236896835b48994d7737b9152c0e435f",
"text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.",
"title": ""
},
{
"docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75",
"text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.",
"title": ""
},
{
"docid": "a5a1dd08d612db28770175cc578dd946",
"text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.",
"title": ""
},
{
"docid": "56287b9aea445b570aa7fe77f1b7751a",
"text": "Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, when monolingual corpora from different domains or different embedding algorithms are used. We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.",
"title": ""
},
{
"docid": "dd4cc15729f65a0102028949b34cc56f",
"text": "Autonomous vehicles platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. The control protocol is designed and validated in both analytically and experimental way, for the appraised malicious attack scenarios and for different communication topology structures. The effectiveness of the proposed strategy is shown by using PLEXE, a state of the art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capabilities in reacting to the malicious attack effects.",
"title": ""
},
{
"docid": "c6bd4cd6f90abf20f2619b1d1af33680",
"text": "General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.",
"title": ""
},
{
"docid": "384dfe9f80cd50ce3a41cd0fdc494e43",
"text": "Optical Character Recognition (OCR) systems often generate errors for images with noise or with low scanning resolution. In this paper, a novel approach that can be used to improve and restore the quality of any clean lower resolution images for easy recognition by OCR process. The method relies on the production of four copies of the original image so that each picture undergoes different restoration processes. These four copies of the images are then passed to a single OCR engine in parallel. In addition to that, the method does not need any traditional alignment between the four resulting texts, which is time consuming and needs complex calculation. It implements a new procedure to choose the best among them and can be applied without prior training on errors. The experimental results show improvement in word error rate for low resolution images by more than 67%.",
"title": ""
},
{
"docid": "29822df06340218a43fbcf046cbeb264",
"text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.",
"title": ""
},
{
"docid": "f095118c63d1531ebdbaec3565b0d91f",
"text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.",
"title": ""
},
{
"docid": "47ae3428ecddd561b678e5715dfd59ab",
"text": "Social media have become an established feature of the dynamic information space that emerges during crisis events. Both emergency responders and the public use these platforms to search for, disseminate, challenge, and make sense of information during crises. In these situations rumors also proliferate, but just how fast such information can spread is an open question. We address this gap, modeling the speed of information transmission to compare retransmission times across content and context features. We specifically contrast rumor-affirming messages with rumor-correcting messages on Twitter during a notable hostage crisis to reveal differences in transmission speed. Our work has important implications for the growing field of crisis informatics.",
"title": ""
},
{
"docid": "e3664eb9901464d6af312e817393e712",
"text": "The security of computer systems fundamentally relies on memory isolation, e.g., kernel address ranges are marked as non-accessible and are protected from user access. In this paper, we present Meltdown. Meltdown exploits side effects of out-of-order execution on modern processors to read arbitrary kernel-memory locations including personal data and passwords. Out-of-order execution is an indispensable performance feature and present in a wide range of modern processors. The attack is independent of the operating system, and it does not rely on any software vulnerabilities. Meltdown breaks all security guarantees provided by address space isolation as well as paravirtualized environments and, thus, every security mechanism building upon this foundation. On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer. We show that the KAISER defense mechanism for KASLR has the important (but inadvertent) side effect of impeding Meltdown. We stress that KAISER must be deployed immediately to prevent largescale exploitation of this severe information leakage.",
"title": ""
},
{
"docid": "ae9469b80390e5e2e8062222423fc2cd",
"text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.",
"title": ""
},
{
"docid": "55a29653163bdf9599bf595154a99a25",
"text": "The effect of the steel slag aggregate aging on mechanical properties of the high performance concrete is analysed in the paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. The flexural strength, Young’s modulus, and impact strength of concrete, increase at the rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.",
"title": ""
},
{
"docid": "424bf67761e234f6cf85eacabf38a502",
"text": "Due to poor efficiencies of Incandescent Lamps (ILs), Fluorescent Lamps (FLs) and Compact Fluorescent Lamps (CFLs) are increasingly used in residential and commercial applications. This proliferation of FLs and CFLs increases the harmonics level in distribution systems that could affect power systems and end users. In order to quantify the harmonics produced by FLs and CFLs precisely, accurate modelling of these loads are required. Matlab Simulink is used to model and simulate the full models of FLs and CFLs to give close results to the experimental measurements. Moreover, a Constant Load Power (CLP) model is also modelled and its results are compared with the full models of FLs and CFLs. This CLP model is much faster to simulate and easier to model than the full model. Such models help engineers and researchers to evaluate the harmonics exist within households and commercial buildings.",
"title": ""
},
{
"docid": "69dc7ae1e3149d475dabb4bbf8f05172",
"text": "Knowledge about entities is essential for natural language understanding. This knowledge includes several facts about entities such as their names, properties, relations and types. This data is usually stored in large scale structures called knowledge bases (KB) and therefore building and maintaining KBs is very important. Examples of such KBs are Wikipedia, Freebase and Google knowledge graph. Incompleteness is unfortunately a reality for every KB, because the world is changing – new entities are emerging, and existing entities are getting new properties. Therefore, we always need to update KBs. To do so, we propose an information extraction method that processes large raw corpora in order to gather knowledge about entities. We focus on extraction of entity types and address the task of fine-grained entity typing: given a KB and a large corpus of text with mentions of entities in the KB, find all fine-grained types of the entities. For example given a large corpus and the entity “Barack Obama” we need to find all his types including PERSON, POLITICIAN, and AUTHOR. Artificial neural networks (NNs) have shown promising results in different machine learning problems. Distributed representation (embedding) is an effective way of representing data for NNs. In this work, we introduce two models for fine-grained entity typing using NNs with distributed representations of language units: (i) A global model that predicts types of an entity based on its global representation learned from the entity’s name and contexts. (ii) A context model that predicts types of an entity based on its context-level predictions. Each of the two proposed models has some specific properties. For the global model, learning high quality entity representations is crucial because it is the only source used for the predictions. Therefore, we introduce representations using name and contexts of entities on three levels of entity, word, and character. We show each has complementary information and a multi-level representation is the best. For the context model, we need to use distant supervision since the contextlevel labels are not available for entities. Distant supervised labels are noisy and this harms the performance of models. Therefore, we introduce and apply new algorithms for noise mitigation using multi-instance learning.",
"title": ""
},
{
"docid": "afefd32f480dbb5880eea1d9e489147e",
"text": "Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs.",
"title": ""
},
{
"docid": "5fbedf5f399ee19d083a73f962cd9f29",
"text": "A 70 mm-open-ended coaxial line probe was developed to perform measurements of the dielectric properties of large concrete samples. The complex permittivity was measured in the frequency range 50 MHz – 1.5 GHz during the hardening process of the concrete. As expected, strong dependence of water content was observed.",
"title": ""
},
{
"docid": "1a77d9ee6da4620b38efec315c6357a1",
"text": "The authors present a new approach to culture and cognition, which focuses on the dynamics through which specific pieces of cultural knowledge (implicit theories) become operative in guiding the construction of meaning from a stimulus. Whether a construct comes to the fore in a perceiver's mind depends on the extent to which the construct is highly accessible (because of recent exposure). In a series of cognitive priming experiments, the authors simulated the experience of bicultural individuals (people who have internalized two cultures) of switching between different cultural frames in response to culturally laden symbols. The authors discuss how this dynamic, constructivist approach illuminates (a) when cultural constructs are potent drivers of behavior and (b) how bicultural individuals may control the cognitive effects of culture.",
"title": ""
},
{
"docid": "cba3209a27e1332f25f29e8b2c323d37",
"text": "One of the technologies that has been showing possibilities of application in educational environments is the Augmented Reality (AR), in addition to its application to other fields such as tourism, advertising, video games, among others. The present article shows the results of an experiment carried out at the National University of Colombia, with the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. The study confirms the potential of this technology to support educational processes represented in the creation of digital resources for mobile devices. The development of learning objects in AR for mobile devices can support teachers in the integration of information and communication technologies (ICT) in the teaching-learning processes.",
"title": ""
}
] | scidocsrr |
4f6fbc474010df5403b47dcc2023ea7c | Security Improvement for Management Frames in IEEE 802.11 Wireless Networks | [
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] | [
{
"docid": "82e490a288a5c8d94a749dc9b4ac3073",
"text": "We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework.",
"title": ""
},
{
"docid": "466e715ad00c023ab27871d6b4e98104",
"text": "Cheap and versatile cameras make it possible to easily and quickly capture a wide variety of documents. However, low resolution cameras present a challenge to OCR because it is virtually impossible to do character segmentation independently from recognition. In this paper we solve these problems simultaneously by applying methods borrowed from cursive handwriting recognition. To achieve maximum robustness, we use a machine learning approach based on a convolutional neural network. When our system is combined with a language model using dynamic programming, the overall performance is in the vicinity of 80-95% word accuracy on pages captured with a 1024/spl times/768 webcam and 10-point text.",
"title": ""
},
{
"docid": "78fc46165449f94e75e70a2654abf518",
"text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.",
"title": ""
},
{
"docid": "03fbec34e163eb3c53b46ae49b4a3d49",
"text": "We consider the use of multi-agent systems to control network routing. Conventional approaches to this task are based on Ideal Shortest Path routing Algorithm (ISPA), under which at each moment each agent in the network sends all of its traffic down the path that will incur the lowest cost to that traffic. We demonstrate in computer experiments that due to the side-effects of one agent’s actions on another agent’s traffic, use of ISPA’s can result in large global cost. In particular, in a simulation of Braess’ paradox we see that adding new capacity to a network with ISPA agents can decrease overall throughput. The theory of COllective INtelligence (COIN) design concerns precisely the issue of avoiding such side-effects. We use that theory to derive an idealized routing algorithm and show that a practical machine-learning-based version of this algorithm, in which costs are only imprecisely estimated substantially outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this practical COIN algorithm avoids Braess’ paradox.",
"title": ""
},
{
"docid": "5f5dd4b8476efa2d1907bf0185b054ab",
"text": "BACKGROUND\nAn accurate risk stratification tool is critical in identifying patients who are at high risk of frequent hospital readmissions. While 30-day hospital readmissions have been widely studied, there is increasing interest in identifying potential high-cost users or frequent hospital admitters. In this study, we aimed to derive and validate a risk stratification tool to predict frequent hospital admitters.\n\n\nMETHODS\nWe conducted a retrospective cohort study using the readily available clinical and administrative data from the electronic health records of a tertiary hospital in Singapore. The primary outcome was chosen as three or more inpatient readmissions within 12 months of index discharge. We used univariable and multivariable logistic regression models to build a frequent hospital admission risk score (FAM-FACE-SG) by incorporating demographics, indicators of socioeconomic status, prior healthcare utilization, markers of acute illness burden and markers of chronic illness burden. We further validated the risk score on a separate dataset and compared its performance with the LACE index using the receiver operating characteristic analysis.\n\n\nRESULTS\nOur study included 25,244 patients, with 70% randomly selected patients for risk score derivation and the remaining 30% for validation. Overall, 4,322 patients (17.1%) met the outcome. The final FAM-FACE-SG score consisted of nine components: Furosemide (Intravenous 40 mg and above during index admission); Admissions in past one year; Medifund (Required financial assistance); Frequent emergency department (ED) use (≥3 ED visits in 6 month before index admission); Anti-depressants in past one year; Charlson comorbidity index; End Stage Renal Failure on Dialysis; Subsidized ward stay; and Geriatric patient or not. In the experiments, the FAM-FACE-SG score had good discriminative ability with an area under the curve (AUC) of 0.839 (95% confidence interval [CI]: 0.825-0.853) for risk prediction of frequent hospital admission. In comparison, the LACE index only achieved an AUC of 0.761 (0.745-0.777).\n\n\nCONCLUSIONS\nThe FAM-FACE-SG score shows strong potential for implementation to provide near real-time prediction of frequent admissions. It may serve as the first step to identify high risk patients to receive resource intensive interventions.",
"title": ""
},
{
"docid": "ce5063166a5c95e6c3bd75471e827eb0",
"text": "Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per iteration or recursion. On each recursion, the algorithm successively partitions an interval of the number line between 0 and I , and retains one of the partitions as the new interval. Thus, the algorithm successively deals with smaller intervals, and the code string, viewed as a magnitude, lies in each of the nested intervals. The data string is recovered by using magnitude comparisons on the code string to recreate how the encoder must have successively partitioned and retained each nested subinterval. Arithmetic coding differs considerably from the more familiar compression coding techniques, such as prefix (Huffman) codes. Also, it should not be confused with error control coding, whose object is to detect and correct errors in computer operations. This paper presents the key notions of arithmetic compression coding by means of simple examples.",
"title": ""
},
{
"docid": "2a187505ea098a45b5a4da4f4a32e049",
"text": "Combining information extraction systems yields significantly higher quality resources than each system in isolation. In this paper, we generalize such a mixing of sources and features in a framework called Ensemble Semantics. We show very large gains in entity extraction by combining state-of-the-art distributional and patternbased systems with a large set of features from a webcrawl, query logs, and Wikipedia. Experimental results on a webscale extraction of actors, athletes and musicians show significantly higher mean average precision scores (29% gain) compared with the current state of the art.",
"title": ""
},
{
"docid": "470265e6acd60a190401936fb7121c75",
"text": "Synesthesia is a conscious experience of systematically induced sensory attributes that are not experienced by most people under comparable conditions. Recent findings from cognitive psychology, functional brain imaging and electrophysiology have shed considerable light on the nature of synesthesia and its neurocognitive underpinnings. These cognitive and physiological findings are discussed with respect to a neuroanatomical framework comprising hierarchically organized cortical sensory pathways. We advance a neurobiological theory of synesthesia that fits within this neuroanatomical framework.",
"title": ""
},
{
"docid": "54ae3c09c95cbcf93a0005c8c06b671a",
"text": "Our main objective in this paper is to measure the value of customers acquired from Google search advertising accounting for two factors that have been overlooked in the conventional method widely adopted in the industry: the spillover effect of search advertising on customer acquisition and sales in offline channels and the lifetime value of acquired customers. By merging web traffic and sales data from a small-sized U.S. firm, we create an individual customer level panel which tracks all repeated purchases, both online and offline, and whether or not these purchases were referred from Google search advertising. To estimate the customer lifetime value, we apply the methodology in the CRM literature by developing an integrated model of customer lifetime, transaction rate, and gross profit margin, allowing for individual heterogeneity and a full correlation of the three processes. Results show that customers acquired through Google search advertising in our data have a higher transaction rate than customers acquired from other channels. After accounting for future purchases and spillover to offline channels, the calculated value of new customers is much higher than that when we use the conventional method. The approach used in our study provides a practical framework for firms to evaluate the long term profit impact of their search advertising investment in a multichannel setting.",
"title": ""
},
{
"docid": "fccd5b6a22d566fa52f3a331398c9dde",
"text": "The rapid pace of technological advances in recentyears has enabled a significant evolution and deployment ofWireless Sensor Networks (WSN). These networks are a keyplayer in the so-called Internet of Things, generating exponentiallyincreasing amounts of data. Nonetheless, there are veryfew documented works that tackle the challenges related with thecollection, manipulation and exploitation of the data generated bythese networks. This paper presents a proposal for integrating BigData tools (in rest and in motion) for the gathering, storage andanalysis of data generated by a WSN that monitors air pollutionlevels in a city. We provide a proof of concept that combinesHadoop and Storm for data processing, storage and analysis,and Arduino-based kits for constructing our sensor prototypes.",
"title": ""
},
{
"docid": "114b9bd9f99a18f6c963c6bbf1fb6e1b",
"text": "A self-rating scale was developed to measure the severity of fatigue. Two-hundred and seventy-four new registrations on a general practice list completed a 14-item fatigue scale. In addition, 100 consecutive attenders to a general practice completed the fatigue scale and the fatigue item of the revised Clinical Interview Schedule (CIS-R). These were compared by the application of Relative Operating Characteristic (ROC) analysis. Tests of internal consistency and principal components analyses were performed on both sets of data. The scale was found to be both reliable and valid. There was a high degree of internal consistency, and the principal components analysis supported the notion of a two-factor solution (physical and mental fatigue). The validation coefficients for the fatigue scale, using an arbitrary cut off score of 3/4 and the item on the CIS-R were: sensitivity 75.5 and specificity 74.5.",
"title": ""
},
{
"docid": "a6e6cf1473adb05f33b55cb57d6ed6d3",
"text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.",
"title": ""
},
{
"docid": "ee2c37fd2ebc3fd783bfe53213e7470e",
"text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.",
"title": ""
},
{
"docid": "57124eb55f82f1927252f27cbafff44d",
"text": "Gamification is a growing phenomenon of interest to both practitioners and researchers. There remains, however, uncertainty about the contours of the field. Defining gamification as “the process of making activities more game-like” focuses on the crucial space between the components that make up games and the holistic experience of gamefulness. It better fits real-world examples and connects gamification with the literature on persuasive design.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "be5e98bb924a81baa561a3b3870c4a76",
"text": "Objective: Mastitis is one of the most costly diseases in dairy cows, which greatly decreases milk production. Use of antibiotics in cattle leads to antibiotic-resistance of mastitis-causing bacteria. The present study aimed to investigate synergistic effect of silver nanoparticles (AgNPs) with neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from the cows with clinical and subclinical mastitis during the august-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. Disk diffusion test and microdilution were performed to determine minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from 46 milk samples and were confirmed by PCR. Based on disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to FIC index, gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain antimicrobial mechanism of the combination.",
"title": ""
},
{
"docid": "f3083088c9096bb1932b139098cbd181",
"text": "OBJECTIVE\nMaiming and death due to dog bites are uncommon but preventable tragedies. We postulated that patients admitted to a level I trauma center with dog bites would have severe injuries and that the gravest injuries would be those caused by pit bulls.\n\n\nDESIGN\nWe reviewed the medical records of patients admitted to our level I trauma center with dog bites during a 15-year period. We determined the demographic characteristics of the patients, their outcomes, and the breed and characteristics of the dogs that caused the injuries.\n\n\nRESULTS\nOur Trauma and Emergency Surgery Services treated 228 patients with dog bite injuries; for 82 of those patients, the breed of dog involved was recorded (29 were injured by pit bulls). Compared with attacks by other breeds of dogs, attacks by pit bulls were associated with a higher median Injury Severity Scale score (4 vs. 1; P = 0.002), a higher risk of an admission Glasgow Coma Scale score of 8 or lower (17.2% vs. 0%; P = 0.006), higher median hospital charges ($10,500 vs. $7200; P = 0.003), and a higher risk of death (10.3% vs. 0%; P = 0.041).\n\n\nCONCLUSIONS\nAttacks by pit bulls are associated with higher morbidity rates, higher hospital charges, and a higher risk of death than are attacks by other breeds of dogs. Strict regulation of pit bulls may substantially reduce the US mortality rates related to dog bites.",
"title": ""
},
{
"docid": "40f9065a8dc25d5d6999a98634dafbea",
"text": "Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and even worse, often infeasible in real world applications where the access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (“generator”) and a task solving model (“solver”). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.",
"title": ""
},
{
"docid": "334cbe19e968cb424719c7efbee9fd20",
"text": "We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call Networked Participatory Scholarship: scholars’ participation in online social networks to share, reflect upon, critique, improve, validate, and otherwise develop their scholarship. We discuss emergent techno-cultural pressures that may influence higher education scholars to reconsider some of the foundational principles upon which scholarship has been established due to the limitations of a pre-digital world, and delineate how scholarship itself is changing with the emergence of certain tools, social behaviors, and cultural expectations associated with participatory technologies. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "270e593aa89fb034d0de977fe6d618b2",
"text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.",
"title": ""
}
] | scidocsrr |
14d14d26379bbfd49b3336bbb81eade6 | Understanding compliance with internet use policy from the perspective of rational choice theory | [
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "16fa16d1d27b5800c119a300f4c4a79c",
"text": "Deep learning recently shows strong competitiveness to improve polar code decoding. However, suffering from prohibitive training and computation complexity, the conventional deep neural network (DNN) is only possible for very short code length. In this paper, the main problems of deep learning in decoding are well solved. We first present the multiple scaled belief propagation (BP) algorithm, aiming at obtaining faster convergence and better performance. Based on this, deep neural network decoder (NND) with low complexity and latency, is proposed for any code length. The training only requires a small set of zero codewords. Besides, its computation complexity is close to the original BP. Experiment results show that the proposed (64,32) NND with 5 iterations achieves even lower bit error rate (BER) than the 30-iteration conventional BP and (512, 256) NND also outperforms conventional BP decoder with same iterations. The hardware architecture of basic computation block is given and folding technique is also considered, saving about 50% hardware cost.",
"title": ""
},
{
"docid": "5077d46e909db94b510da0621fcf3a9e",
"text": "This paper presents a high SNR self-capacitance sensing 3D hover sensor that does not use panel offset cancelation blocks. Not only reducing noise components, but increasing the signal components together, this paper achieved a high SNR performance while consuming very low power and die-area. Thanks to the proposed separated structure between driving and sensing circuits of the self-capacitance sensing scheme (SCSS), the signal components are increased without using high-voltage MOS sensing amplifiers which consume big die-area and power and badly degrade SNR. In addition, since a huge panel offset problem in SCSS is solved exploiting the panel's natural characteristics, other costly resources are not required. Furthermore, display noise and parasitic capacitance mismatch errors are compressed. We demonstrate a 39dB SNR at a 1cm hover point under 240Hz scan rate condition with noise experiments, while consuming 183uW/electrode and 0.73mm2/sensor, which are the power per electrode and the die-area per sensor, respectively.",
"title": ""
},
{
"docid": "0cfa7e43d557fa6d68349b637367341a",
"text": "This paper proposes a model of selective attention for visual search tasks, based on a framework for sequential decision-making. The model is implemented using a fixed pan-tilt-zoom camera in a visually cluttered lab environment, which samples the environment at discrete time steps. The agent has to decide where to fixate next based purely on visual information, in order to reach the region where a target object is most likely to be found. The model consists of two interacting modules. A reinforcement learning module learns a policy on a set of regions in the room for reaching the target object, using as objective function the expected value of the sum of discounted rewards. By selecting an appropriate gaze direction at each step, this module provides top-down control in the selection of the next fixation point. The second module performs “within fixation” processing, based exclusively on visual information. Its purpose is twofold: to provide the agent with a set of locations of interest in the current image, and to perform the detection and identification of the target object. Detailed experimental results show that the number of saccades to a target object significantly decreases with the number of training epochs. The results also show the learned policy to find the target object is invariant to small physical displacements as well as object inversion.",
"title": ""
},
{
"docid": "1c34abb0e212034a5fb96771499f1ee3",
"text": "Facial expression recognition is a useful feature in modern human computer interaction (HCI). In order to build efficient and reliable recognition systems, face detection, feature extraction and classification have to be robustly realised. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical flow features and illustrates their comparison and integration for facial expression recognition. In the authors’ method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which is regarded as a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, what leads to the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.",
"title": ""
},
{
"docid": "c5958b1ef21663b89e3823e9c33dc316",
"text": "The so-called “phishing” attacks are one of the important threats to individuals and corporations in today’s Internet. Combatting phishing is thus a top-priority, and has been the focus of much work, both on the academic and on the industry sides. In this paper, we look at this problem from a new angle. We have monitored a total of 19,066 phishing attacks over a period of ten months and found that over 90% of these attacks were actually replicas or variations of other attacks in the database. This provides several opportunities and insights for the fight against phishing: first, quickly and efficiently detecting replicas is a very effective prevention tool. We detail one such tool in this paper. Second, the widely held belief that phishing attacks are dealt with promptly is but an illusion. We have recorded numerous attacks that stay active throughout our observation period. This shows that the current prevention techniques are ineffective and need to be overhauled. We provide some suggestions in this direction. Third, our observation give a new perspective into the modus operandi of attackers. In particular, some of our observations suggest that a small group of attackers could be behind a large part of the current attacks. Taking down that group could potentially have a large impact on the phishing attacks observed today.",
"title": ""
},
{
"docid": "20e6ffb912ee0291d53e7a2750d1b426",
"text": "This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or «steady state\") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques.",
"title": ""
},
{
"docid": "09985252933e82cf1615dabcf1e6d9a2",
"text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.",
"title": ""
},
{
"docid": "40a7682aabed0e1e5a27118cc70939c9",
"text": "Design and results of a 77 GHz FM/CW radar sensor based on a simple waveguide circuitry and a novel type of printed, low-profile, and low-loss antenna are presented. A Gunn VCO and a finline mixer act as transmitter and receiver, connected by two E-plane couplers. The folded reflector type antenna consists of a printed slot array and another planar substrate which, at the same time, provide twisting of the polarization and focussing of the incident wave. In this way, a folded low-profile, low-loss antenna can be realized. The performance of the radar is described, together with first results on a scanning of the antenna beam.",
"title": ""
},
{
"docid": "b6473fa890eba0fd00ac7e1999ae6fef",
"text": "Memristors have extended their influence beyond memory to logic and in-memory computing. Memristive logic design, the methodology of designing logic circuits using memristors, is an emerging concept whose growth is fueled by the quest for energy efficient computing systems. As a result, many memristive logic families have evolved with different attributes, and a mature comparison among them is needed to judge their merit. This paper presents a framework for comparing logic families by classifying them on the basis of fundamental properties such as statefulness, proximity (from the memory array), and flexibility of computation. We propose metrics to compare memristive logic families using analytic expressions for performance (latency), energy efficiency, and area. Then, we provide guidelines for a holistic comparison of logic families and set the stage for the evolution of new logic families.",
"title": ""
},
{
"docid": "3113012fdaa154e457e81515b755c041",
"text": "Knowledge bases (KB), both automatically and manually constructed, are often incomplete — many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA1, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. Since random walks are impractical in a setting with combinatorially many destinations from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. Empirically, this approach obtains state-of-the-art results on several datasets, significantly outperforming prior methods.",
"title": ""
},
{
"docid": "e48dae70582d949a60a5f6b5b05117a7",
"text": "Background: Multiple-Valued Logic (MVL) is the non-binary-valued system, in which more than two levels of information content are available, i.e., L>2. In modern technologies, the dual level binary logic circuits have normally been used. However, these suffer from several significant issues such as the interconnection considerations including the parasitics, area and power dissipation. The MVL circuits have been proved to be consisting of reduced circuitry and increased efficiency in terms of higher utilization of the circuit resources through multiple levels of voltage. Innumerable algorithms have been developed for designing such MVL circuits. Extended form is one of the algebraic techniques used in designing these MVL circuits. Voltage mode design has also been employed for constructing various types of MVL circuits. Novelty: This paper proposes a novel MVLTRANS inverter, designed using conventional CMOS and pass transistor logic based MVLPTL inverter. Binary to MVL Converter/Encoder and MVL to binary Decoder/Converter are also presented in the paper. In addition to the proposed decoder circuit, a 4-bit novel MVL Binary decoder circuit is also proposed. Tools Used: All these circuits are designed, implemented and verified using Cadence® Virtuoso tools using 180 nm technology library.",
"title": ""
},
{
"docid": "2df35b05a40a646ba6f826503955601a",
"text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.",
"title": ""
},
{
"docid": "ec3eac65e52b62af3ab1599e643c49ac",
"text": "Topic models for text corpora comprise a popular family of methods that have inspired many extensions to encode properties such as sparsity, interactions with covariates, and the gradual evolution of topics. In this paper, we combine certain motivating ideas behind variations on topic models with modern techniques for variational inference to produce a flexible framework for topic modeling that allows for rapid exploration of different models. We first discuss how our framework relates to existing models, and then demonstrate that it achieves strong performance, with the introduction of sparsity controlling the trade off between perplexity and topic coherence. We have released our code and preprocessing scripts to support easy future comparisons and exploration.",
"title": ""
},
{
"docid": "a863f6be5416181ce8ec6a5cecadb091",
"text": "Anthropometry is a simple reliable method for quantifying body size and proportions by measuring body length, width, circumference (C), and skinfold thickness (SF). More than 19 sites for SF, 17 for C, 11 for width, and 9 for length have been included in equations to predict body fat percent with a standard error of estimate (SEE) range of +/- 3% to +/- 11% of the mean of the criterion measurement. Recent studies indicate that not only total body fat, but also regional fat and skeletal muscle, can be predicted from anthropometrics. Our Rosetta database supports the thesis that sex, age, ethnicity, and site influence anthropometric predictions; the prediction reliabilities are consistently higher for Whites than for other ethnic groups, and also by axial than by peripheral sites (biceps and calf). The reliability of anthropometrics depends on standardizing the caliper and site of measurement, and upon the measuring skill of the anthropometrist. A reproducibility of +/- 2% for C and +/- 10% for SF measurements usually is required to certify the anthropometrist.",
"title": ""
},
{
"docid": "c4d5464727db6deafc2ce2307284dd0c",
"text": "— Recently, many researchers have focused on building dual handed static gesture recognition systems. Single handed static gestures, however, pose more recognition complexity due to the high degree of shape ambiguities. This paper presents a gesture recognition setup capable of recognizing and emphasizing the most ambiguous static single handed gestures. Performance of the proposed scheme is tested on the alphabets of American Sign Language (ASL). Segmentation of hand contours from image background is carried out using two different strategies; skin color as detection cue with RGB and YCbCr color spaces, and thresholding of gray level intensities. A novel, rotation and size invariant, contour tracing descriptor is used to describe gesture contours generated by each segmentation technique. Performances of k-Nearest Neighbor (k-NN) and multiclass Support Vector Machine (SVM) classification techniques are evaluated to classify a particular gesture. Gray level segmented contour traces classified by multiclass SVM achieve accuracy up to 80.8% on the most ambiguous gestures of ASL alphabets with overall accuracy of 90.1%.",
"title": ""
},
{
"docid": "a542b157e4be534997ce720329d2fea5",
"text": "Microcavites contribute to enhancing the optical efficiency and color saturation of an organic light emitting diode (OLED) display. A major tradeoff of the strong cavity effect is its apparent color shift, especially for RGB-based OLED displays, due to their mismatched angular intensity distributions. To mitigate the color shift, in this work we first analyze the emission spectrum shifts and angular distributions for the OLEDs with strong and weak cavities, both theoretically and experimentally. Excellent agreement between simulation and measurement is obtained. Next, we propose a systematic approach for RGB-OLED displays based on multi-objective optimization algorithms. Three objectives, namely external quantum efficiency (EQE), color gamut coverage, and angular color shift of primary and mixed colors, can be optimized simultaneously. Our optimization algorithm is proven to be effective for suppressing color shift while keeping a relatively high optical efficiency and wide color gamut. © 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (140.3948) Microcavity devices; (160.4890) Organic materials; (230.3670) Light-emitting diodes;",
"title": ""
},
{
"docid": "81b52831b3b9e9b6384ee4dda84dd741",
"text": "The field of Web development is entering the HTML5 and CSS3 era and JavaScript is becoming increasingly influential. A large number of JavaScript frameworks have been recently promoted. Practitioners applying the latest technologies need to choose a suitable JavaScript framework (JSF) in order to abstract the frustrating and complicated coding steps and to provide a cross-browser compatibility. Apart from benchmark suites and recommendation from experts, there is little research helping practitioners to select the most suitable JSF to a given situation. The few proposals employ software metrics on the JSF, but practitioners are driven by different concerns when choosing a JSF. As an answer to the critical needs, this paper is a call for action. It proposes a research design towards a comparative analysis framework of JSF, which merges researcher needs and practitioner needs.",
"title": ""
},
{
"docid": "1afd50a91b67bd1eab0db1c2a19a6c73",
"text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.",
"title": ""
},
{
"docid": "ead5432cb390756a99e4602a9b6266bf",
"text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"title": ""
}
] | scidocsrr |
981451d8cc78bac714c568f7e27729a1 | Lossy Image Compression with Compressive Autoencoders | [
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
},
{
"docid": "9ece98aee7056ff6c686c12bcdd41d31",
"text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.",
"title": ""
}
] | [
{
"docid": "45c6d576e6c8e1dbd731126c4fb36b62",
"text": "Marine debris is listed among the major perceived threats to biodiversity, and is cause for particular concern due to its abundance, durability and persistence in the marine environment. An extensive literature search reviewed the current state of knowledge on the effects of marine debris on marine organisms. 340 original publications reported encounters between organisms and marine debris and 693 species. Plastic debris accounted for 92% of encounters between debris and individuals. Numerous direct and indirect consequences were recorded, with the potential for sublethal effects of ingestion an area of considerable uncertainty and concern. Comparison to the IUCN Red List highlighted that at least 17% of species affected by entanglement and ingestion were listed as threatened or near threatened. Hence where marine debris combines with other anthropogenic stressors it may affect populations, trophic interactions and assemblages.",
"title": ""
},
{
"docid": "9081cb169f74b90672f84afa526f40b3",
"text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, the approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. The developed approach has been automated and can be used for remote listening of the network, which will allow to decrypt transmitted data in a mode close to real time.",
"title": ""
},
{
"docid": "a4c17b823d325ed5f339f78cd4d1e9ab",
"text": "A 34–40 GHz VCO fabricated in 65 nm digital CMOS technology is demonstrated in this paper. The VCO uses a combination of switched capacitors and varactors for tuning and has a maximum Kvco of 240 MHz/V. It exhibits a phase noise of better than −98 dBc/Hz @ 1-MHz offset across the band while consuming 12 mA from a 1.2-V supply, an FOMT of −182.1 dBc/Hz. A cascode buffer following the VCO consumes 11 mA to deliver 0 dBm LO signal to a 50Ω load.",
"title": ""
},
{
"docid": "371ab49af58c0eb4dc55f3fdf1c741f0",
"text": "Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.",
"title": ""
},
{
"docid": "4915acc826761f950783d9d4206857c0",
"text": "The cognitive modulation of pain is influenced by a number of factors ranging from attention, beliefs, conditioning, expectations, mood, and the regulation of emotional responses to noxious sensory events. Recently, mindfulness meditation has been found attenuate pain through some of these mechanisms including enhanced cognitive and emotional control, as well as altering the contextual evaluation of sensory events. This review discusses the brain mechanisms involved in mindfulness meditation-related pain relief across different meditative techniques, expertise and training levels, experimental procedures, and neuroimaging methodologies. Converging lines of neuroimaging evidence reveal that mindfulness meditation-related pain relief is associated with unique appraisal cognitive processes depending on expertise level and meditation tradition. Moreover, it is postulated that mindfulness meditation-related pain relief may share a common final pathway with other cognitive techniques in the modulation of pain.",
"title": ""
},
{
"docid": "afe1711ee0fbd412f0b425c488f46fbc",
"text": "The Iterated Prisoner’s Dilemma has guided research on social dilemmas for decades. However, it distinguishes between only two atomic actions: cooperate and defect. In real world prisoner’s dilemmas, these choices are temporally extended and different strategies may correspond to sequences of actions, reflecting grades of cooperation. We introduce a Sequential Prisoner’s Dilemma (SPD) game to better capture the aforementioned characteristics. In this work, we propose a deep multiagent reinforcement learning approach that investigates the evolution of mutual cooperation in SPD games. Our approach consists of two phases. The first phase is offline: it synthesizes policies with different cooperation degrees and then trains a cooperation degree detection network. The second phase is online: an agent adaptively selects its policy based on the detected degree of opponent cooperation. The effectiveness of our approach is demonstrated in two representative SPD 2D games: the Apple-Pear game and the Fruit Gathering game. Experimental results show that our strategy can avoid being exploited by exploitative opponents and achieve cooperation with cooperative opponents.",
"title": ""
},
{
"docid": "0206cbec556e66fd19aa42c610cdccfa",
"text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.",
"title": ""
},
{
"docid": "e7a86eeb576d4aca3b5e98dc53fcb52d",
"text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has km shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasaI translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.",
"title": ""
},
{
"docid": "85c360e0354e5eab69dc26b7a2dd715e",
"text": "1,2,3,4 Department of Information Technology, Matoshri Collage of Engineering & Reasearch Centre Eklahare, Nashik, India ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Waste management is one of the primary problem that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. The main theme of the work is to develop a smart intelligent garbage alert system for a proper garbage management .This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of dustbin with proper verification based on level of garbage filling. This process is aided by the ultrasonic sensor which is interfaced with Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once if garbage is filled . After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of RFID Tag. RFID is a computing technology that is used for verification process and in addition, it also enhances the smart garbage alert system by providing automatic identification of garbage filled in the dustbin and sends the status of clean-up to the server affirming that the work is done. The whole process is upheld by an embedded module integrated with RF ID and IOT Facilitation. The real time status of how waste collection is being done could be monitored and followed up by the municipality authority with the aid of this system. In addition to this the necessary remedial / alternate measures could be adapted. An Android application is developed and linked to a web server to intimate the alerts from the microcontroller to the urban office and to perform the remote monitoring of the cleaning process, done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using Wi-Fi module.",
"title": ""
},
{
"docid": "5aab6cd36899f3d5e3c93cf166563a3e",
"text": "Vein images generally appear darker with low contrast, which require contrast enhancement during preprocessing to design satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvinced. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high quality lab-made database is established. Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain quality index of lab-made vein images. Then, unsupervised $K$ -means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one. CE_DLBP could be utilized for discriminative feature extraction for LQ images. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with PolyU database illustrates its generalization ability and robustness.",
"title": ""
},
{
"docid": "83102f60343312aa0cc510550c196ae3",
"text": "A method for the on-line calibration of a circuit board trace resistance at the output of a buck converter is described. The input current is measured with a precision resistor and processed to obtain a dc reference for the output current. The voltage drop across a trace resistance at the output is amplified with a gain that is adaptively adjusted to match the dc reference. This method is applied to obtain an accurate and high-bandwidth measurement of the load current in the modern microprocessor voltage regulator application (VRM), thus enabling an accurate dc load-line regulation as well as a fast transient response. Experimental results show an accuracy well within the tolerance band of this application, and exceeding all other popular methods.",
"title": ""
},
{
"docid": "c934f44f485f41676dfed35afbf2d1f2",
"text": "Many icon taxonomy systems have been developed by researchers that organise icons based on their graphic elements. Most of these taxonomies classify icons according to how abstract or concrete they are. Categories however overlap and different researchers use different terminology, sometimes to describe what in essence is the same thing. This paper describes nine taxonomies and compares the terminologies they use. Aware of the lack of icon taxonomy systems in the field of icon design, the authors provide an overview of icon taxonomy and develop an icon taxonomy system that could bring practical benefits to the performance of computer related tasks.",
"title": ""
},
{
"docid": "de364eb64d2377c278cd71d98c2c0729",
"text": "In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). In particular Bartlett (1963a, b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.",
"title": ""
},
{
"docid": "e625c5dc123f0b1e7394c4bae47f7cd8",
"text": "Interconnected embedded devices are increasingly used in various scenarios, including industrial control, building automation, or emergency communication. As these systems commonly process sensitive information or perform safety critical tasks, they become appealing targets for cyber attacks. A promising technique to remotely verify the safe and secure operation of networked embedded devices is remote attestation. However, existing attestation protocols only protect against software attacks or show very limited scalability. In this paper, we present the first scalable attestation protocol for interconnected embedded devices that is resilient to physical attacks. Based on the assumption that physical attacks require an adversary to capture and disable devices for some time, our protocol identifies devices with compromised hardware and software. Compared to existing solutions, our protocol reduces communication complexity and runtimes by orders of magnitude, precisely identifies compromised devices, supports highly dynamic and partitioned network topologies, and is robust against failures. We show the security of our protocol and evaluate it in static as well as dynamic network topologies. Our results demonstrate that our protocol is highly efficient in well-connected networks and robust to network disruptions.",
"title": ""
},
{
"docid": "36a0bdd558de66f1126bbaea287d882a",
"text": "BACKGROUND\nThe aim of this paper was to summarise the anatomical knowledge on the subject of the maxillary nerve and its branches, and to show the clinical usefulness of such information in producing anaesthesia in the region of the maxilla.\n\n\nMATERIALS AND METHODS\nA literature search was performed in Pubmed, Scopus, Web of Science and Google Scholar databases, including studies published up to June 2014, with no lower data limit.\n\n\nRESULTS\nThe maxillary nerve (V2) is the middle sized branch of the trigeminal nerve - the largest of the cranial nerves. The V2 is a purely sensory nerve supplying the maxillary teeth and gingiva, the adjoining part of the cheek, hard and soft palate mucosa, pharynx, nose, dura mater, skin of temple, face, lower eyelid and conjunctiva, upper lip, labial glands, oral mucosa, mucosa of the maxillary sinus, as well as the mobile part of the nasal septum. The branches of the maxillary nerve can be divided into four groups depending on the place of origin i.e. in the cranium, in the sphenopalatine fossa, in the infraorbital canal, and on the face.\n\n\nCONCLUSIONS\nThis review summarises the data on the anatomy and variations of the maxillary nerve and its branches. A thorough understanding of the anatomy will allow for careful planning and execution of anaesthesiological and surgical procedures involving the maxillary nerve and its branches.",
"title": ""
},
{
"docid": "5515e892363c3683e39c6d5ec4abe22d",
"text": "Government agencies are investing a considerable amount of resources into improving security systems as result of recent terrorist events that dangerously exposed flaws and weaknesses in today’s safety mechanisms. Badge or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative but they suffer of drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consentient people. On the other hand, face recognition represents a good compromise between what’s socially acceptable and what’s reliable, even when operating under controlled conditions. In last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, Face Recognition Vendor Test 2002 shown that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state of the art biometrics. This paper provides an ‘‘ex cursus’’ of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collection of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "fb0fdbdff165a83671dd9373b36caac4",
"text": "In this paper, we propose a system, that automatically transfers human body motion captured from an ordinary video camera to an unknown 3D character mesh. In our system, no manual intervention is required for specifying the internal skeletal structure or defining how the mesh surfaces deform. A sparse graph is generated from the input polygons based on their connectivity and geometric distributions. To estimate articulated body parts in the video, a progressive particle filter is used for identifying correspondences. We anticipate our proposed system can bring animation to a new audience with a more intuitive user interface.",
"title": ""
},
{
"docid": "ff8cc7166b887990daa6ef355695e54f",
"text": "The knowledge-based theory of the firm suggests that knowledge is the organizational asset that enables sustainable competitive advantage in hypercompetitive environments. The emphasis on knowledge in today’s organizations is based on the assumption that barriers to the transfer and replication of knowledge endow it with strategic importance. Many organizations are developing information systems designed specifically to facilitate the sharing and integration of knowledge. Such systems are referred to as Knowledge Management System (KMS). Because KMS are just beginning to appear in organizations, little research and field data exists to guide the development and implementation of such systems or to guide expectations of the potential benefits of such systems. This study provides an analysis of current practices and outcomes of KMS and the nature of KMS as they are evolving in fifty organizations. The findings suggest that interest in KMS across a variety of industries is very high, the technological foundations are varied, and the major",
"title": ""
},
{
"docid": "793cd937ea1fc91e73735b2b8246f1f5",
"text": "Using data from a national probability sample of heterosexual U.S. adults (N02,281), the present study describes the distribution and correlates of men’s and women’s attitudes toward transgender people. Feeling thermometer ratings of transgender people were strongly correlated with attitudes toward gay men, lesbians, and bisexuals, but were significantly less favorable. Attitudes toward transgender people were more negative among heterosexual men than women. Negative attitudes were associated with endorsement of a binary conception of gender; higher levels of psychological authoritarianism, political conservatism, and anti-egalitarianism, and (for women) religiosity; and lack of personal contact with sexual minorities. In regression analysis, sexual prejudice accounted for much of the variance in transgender attitudes, but respondent gender, educational level, authoritarianism, anti-egalitarianism, and (for women) religiosity remained significant predictors with sexual prejudice statistically controlled. Implications and directions for future research on attitudes toward transgender people are discussed.",
"title": ""
}
] | scidocsrr |
c04a4ff1414cfb023adb5bdbd8b4cd1e | RoadGraph - Graph based environmental modelling and function independent situation analysis for driver assistance systems | [
{
"docid": "b08fe123ea0acc6b942c9069b661a9f9",
"text": "The 2007 DARPA Urban Challenge afforded the golden opportunity for the Technische Universität Braunschweig to demonstrate its abilities to develop an autonomously driving vehicle to compete with the world’s best competitors. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams from initially 89 competitors to compete in the final. We had the ability to work together in a large group of experts, each contributing his expertise in his discipline, and significant organisational, financial and technical support by local sponsors who helped us to become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution ”Caroline”, the technology and algorithms along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007. M. Buehler et al. (Eds.): The DARPA Urban Challenge, STAR 56, pp. 441–508. springerlink.com c © Springer-Verlag Berlin Heidelberg 2009 442 F.W. Rauskolb et al. 1 Motivation and Introduction Focused research is often centered around interesting challenges and awards. The airplane industry started off with awards for the first flight over the British Channel as well as the Atlantic Ocean. The Human Genome Project, the RoboCups and the series of DARPA Grand Challenges for autonomous vehicles serve this very same purpose to foster research and development in a particular direction. The 2007 DARPA Urban Challenge is taking place to boost development of unmanned vehicles for urban areas. Although there is an obvious direct benefit for DARPA and the U.S. government, there will also be a large number of spin-offs in technologies, tools and engineering techniques, both for autonomous vehicles, but also for intelligent driver assistance. An intelligent driver assistance function needs to be able to understand the surroundings of the car, evaluate potential risks and help the driver to behave correctly, safely and, in case it is desired, also efficiently. These topics do not only affect ordinary cars, but also buses, trucks, convoys, taxis, special-purpose vehicles in factories, airports and more. It will take a number of years before we will have a mass market for cars that actively and safely protect the passenger and the surroundings, like pedestrians, from accidents in any situation. Intelligent functions in vehicles are obviously complex systems. Large issues in this project where primarily the methods, techniques and tools for the development of such a highly critical, reliable and complex system. Adapting and combining methods from different engineering disciplines were an important prerequisite for our success. For a stringent deadline-oriented development of such a system it is necessary to rely on a clear structure of the project, a dedicated development process and an efficient engineering that fits the project’s needs. Thus, we did not only concentrate on the single software modules of our autonomously driving vehicle named Caroline, but also on the process itself. We furthermore needed an appropriate tool suite that allowed us to run the development and in particular the testing process as efficient as possible. This includes a simulator allowing us to simulate traffic situations and therefore achieve a sufficient coverage of test situations that would have been hardly to conduct in reality. 
Only a good collaboration between the participating disciplines allowed us to develop Caroline in time to achieve such a good result in the 2007 DARPA Urban Challenge. In the long term, our goal was not only to participate in a competition but also to establish a sound basis for further research on how to enhance vehicle safety by implementing new technologies to provide vehicle users with reliable and robust driver assistance systems, e.g. by giving special attention to technology for sensor data fusion and robust and reliable system architectures, including new methods for simulation and testing. Therefore, the 2007 DARPA Urban Challenge provided a golden opportunity to combine expertise from several fields of science and engineering. For this purpose, the interdisciplinary team CarOLO had been founded, which drew its members from five different institutes. In addition, the team received support from a consortium of national and international companies. In this paper, we first introduce the 2007 DARPA Urban Challenge and derive the basic requirements for the car from its rules in section 2. Section 3 describes the overall architecture of the system, which is detailed in section 4, describing sensor fusion, vision, artificial intelligence and vehicle control, along with safety concepts. Section 5 describes the overall development process, discusses quality assurance and the simulator used to achieve sufficient testing coverage in detail. Section 6 finally describes the evaluation of Caroline, namely the performance during the National Qualification Event and the DARPA Urban Challenge Final Event in Victorville, California, the results we found and the conclusions to draw from our performance. 2 2007 DARPA Urban Challenge The 2007 DARPA Urban Challenge is the continuation of the well-known Grand Challenge events of 2004 and 2005, which were entitled “Barstow to Primm” and “Desert Classic”. To continue the tradition of having names reflect the actual task, DARPA named the 2007 event “Urban Challenge”, announcing with it the nature of the mission to be accomplished. The 2004 course, as shown in Fig. 1, led from Barstow, California (A) to Primm, Nevada (B) and had a total length of about 142 miles. Prior to the main event, DARPA held a qualification, inspection and demonstration for each robot. Nevertheless, none of the original fifteen vehicles managed to come even close to the goal of successfully completing the course. With 7.4 miles as the farthest distance travelled, the challenge ended very disappointingly and no one won the $1 million cash prize. Thereafter, the DARPA program managers heightened the barriers for entering the 2005 challenge significantly. They also modified the entire quality inspection process to one involving a step-by-step application process, including a video of the car in action and the holding of so-called Site Visits, which involved the visit of DARPA officials to team-chosen test sites. The rules for these Site Visits were very strict, e.g. determining exactly how the courses had to be equipped and what obstacles had to be available. From initially 195 teams, 118 were selected for site visits and 43 had finally made it into the National Qualification Event at the California Speedway in Ontario, California. 
The NQE consisted of several tasks to be completed and obstacles to overcome autonomously by the participating vehicles, including tank traps, a tunnel, speed bumps, stationary cars to pass and many more. On October 5, 2005, DARPA announced the 23 teams that would participate in the final event. The course started in Primm, Nevada, where the 2004 challenge should have ended. With a total distance of 131.6 miles and several natural obstacles, the course was by no means easier than the one from the year before. At the end, five teams completed it and the rest did significantly better than the teams the year before. The Stanford Racing Team was awarded the $2 million first prize. In 2007, DARPA wanted to increase the difficulty of the requirements, in order to meet the goal set by Congress and the Department of Defense that by 2015 a third of the Army's ground combat vehicles would operate unmanned. Having already proved the feasibility of crossing a desert and overcoming natural obstacles without human intervention, now a tougher task had to be mastered. As the United States Armed Forces are currently facing serious challenges in urban environments, the choice of such an environment seemed logical. DARPA used the experience and knowledge gained from the first and second Grand Challenge events to define the tasks for the autonomous vehicles. The 2007 DARPA Urban Challenge took place in Victorville, CA, as depicted in Fig. 2. The Technische Universität Braunschweig started in June 2006 as a newcomer in the 2007 DARPA Urban Challenge. Significantly supported by industrial partners, five institutes from the faculties of computer science and mechanical and electrical engineering equipped a 2006 Volkswagen Passat station wagon named “Caroline” to participate in the DARPA Urban Challenge as a “Track B” competitor. Track B competitors did not receive any financial support from DARPA, in contrast to “Track A” competitors. Track A teams had to submit technical proposals to get technology development funding awards of up to $1,000,000 in fall 2006. Track B teams had to provide a 5-minute video demonstrating the vehicle's capabilities in April 2007. Using these videos, DARPA selected 53 of the initial 89 teams to advance to the next stage in the qualification process, the “Site Visit” as already conducted in the 2005 Grand Challenge. Team CarOLO got an invitation for a Site Visit that had to take place in the United States. Therefore, team CarOLO gratefully accepted an offer from the Southwest Research Institute in San Antonio, Texas, providing a location for the Site Visit. On June 20, Caroline proved that she was ready for the National Qualification Event in fall 2007. Against great odds, she showed her abilities to the DARPA officials when a huge thunderstorm hit San Antonio during the Site Visit. The tasks to complete included the correct handling of intersection precedence, passing of vehicles, lane keeping and general safe behaviour. Afte",
"title": ""
}
] | [
{
"docid": "8994337878d2ac35464cb4af5e32fccc",
"text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.",
"title": ""
},
{
"docid": "b7e28e79f938b617ba2e2ed7ef1bade3",
"text": "Computing in schools has gained momentum in the last two years resulting in GCSEs in Computing and teachers looking to up skill from Digital Literacy (ICT). For many students the subject of computer science concerns software code but writing code can be challenging, due to specific requirements on syntax and spelling with new ways of thinking required. Not only do many undergraduate students lack these ways of thinking, but there is a general misrepresentation of computing in education. Were computing taught as a more serious subject like science and mathematics, public understanding of the complexities of computer systems would increase, enabling those not directly involved with IT make better informed decisions and avoid incidents such as over budget and underperforming systems. We present our exploration into teaching a variety of computing skills, most significantly \"computational thinking\", to secondary-school age children through three very different engagements. First, we discuss Print craft, in which participants learn about computer-aided design and additive manufacturing by designing and building a miniature world from scratch using the popular open-world game Mine craft and 3D printers. Second, we look at how students can get a new perspective on familiar technology with a workshop using App Inventor, a graphical Android programming environment. Finally, we look at an ongoing after school robotics club where participants face a number of challenges of their own making as they design and create a variety of robots using a number of common tools such as Scratch and Arduino.",
"title": ""
},
{
"docid": "518d8e621e1239a94f50be3d5e2982f9",
"text": "With a number of emerging biometric applications there is a dire need of less expensive authentication technique which can authenticate even if the input image is of low resolution and low quality. Foot biometric has both the physiological and behavioral characteristics still it is an abandoned field. The reason behind this is, it involves removal of shoes and socks while capturing the image and also dirty feet makes the image noisy. Cracked heels is also a reason behind noisy images. Physiological and behavioral biometric characteristics makes it a great alternative to computational intensive algorithms like fingerprint, palm print, retina or iris scan [1] and face. On one hand foot biometric has minutia features which is considered totally unique. The uniqueness of minutiae feature is already tested in fingerprint analysis [2]. On the other hand it has geometric features like hand geometry which also give satisfactory results in recognition. We can easily apply foot biometrics at those places where people inherently remove their shoes, like at holy places such as temples and mosque people remove their shoes before entering from the perspective of faith, and also remove shoes at famous monuments such as The Taj Mahal, India from the perspective of cleanliness and preservation. Usually these are the places with a strong foot fall and high risk security due to chaotic crowd. Most of the robbery, theft, terrorist attacks, are happening at these places. One very fine example is Akshardham attack in September 2002. Hence we can secure these places using low cost security algorithms based on footprint recognition.",
"title": ""
},
{
"docid": "e4d550dbd7d2acb446c52c4906e7378e",
"text": "Molecular analysis of the 16S rDNA of the intestinal microbiota of whiteleg shrimp Litopenaeus vannamei was examined to investigate the effect of a Bacillus mix (Bacillus endophyticus YC3-b, Bacillus endophyticus C2-2, Bacillus tequilensisYC5-2) and the commercial probiotic (Alibio(®)) on intestinal bacterial communities and resistance to Vibrio infection. PCR and single strain conformation polymorphism (SSCP) analyses were then performed on DNA extracted directly from guts. Injection of shrimp with V. parahaemolyticus at 2.5 × 10(5) CFU g(-1) per shrimp followed 168 h after inoculation with Bacillus mix or the Alibio probiotic or the positive control. Diversity analyses showed that the bacterial community resulting from the Bacillus mix had the highest diversity and evenness and the bacterial community of the control had the lowest diversity. The bacterial community treated with probiotics mainly consisted of α- and γ-proteobacteria, fusobacteria, sphingobacteria, and flavobacteria, while the control mainly consisted of α-proteobacteria and flavobacteria. Differences were grouped using principal component analyses of PCR-SSCP of the microbiota, according to the time of inoculation. In Vibrio parahaemolyticus-infected shrimp, the Bacillus mix (~33 %) induced a significant increase in survival compared to Alibio (~21 %) and the control (~9 %). We conclude that administration of the Bacillus mix induced modulation of the intestinal microbiota of L. vannamei and increased its resistance to V. parahaemolyticus.",
"title": ""
},
{
"docid": "64ed3c6997ed68894db5c30bc91e95cd",
"text": "Affine moment invariant (AMI) is a kind of hand-crafted image feature, which is invariant to affine transformations. This property is precisely what the standard convolution neural network (CNN) is difficult to achieve. In this letter, we present a kind of network architecture to introduce AMI into CNN, which is called AMI-Net. We achieved this by calculating AMI on the feature maps of the hidden layers. These AMIs will be concatenated with the standard CNN's FC layer to determine the network's final output. By calculating AMI on the feature maps, we can not only extend the dimension of AMIs, but also introduce affine transformation invariant into CNN. Two network architectures and training strategies of AMI-Net are illuminated, one is two-stage, and the other is end-to-end. To prove the effectiveness of the AMI-Net, several experiments have been conducted on common image datasets, MNIST, MNIST-rot, affNIST, SVHN, and CIFAR-10. By comparing with the corresponding standard CNN, respectively, we verify the validity of AMI-net.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "4e35e75d5fc074b1e02f5dded5964c19",
"text": "This paper presents a new bidirectional wireless power transfer (WPT) topology using current fed half bridge converter. Generally, WPT topology with current fed converter uses parallel LC resonant tank network in the transmitter side to compensate the reactive power. However, in medium power application this topology suffers a major drawback that the voltage stress in the inverter switches are considerably high due to high reactive power consumed by the loosely coupled coil. In the proposed topology this is mitigated by adding a suitably designed capacitor in series with the transmitter coil. Both during grid to vehicle and vehicle to grid operations the power flow is controlled through variable switching frequency to achieve extended ZVS of the inverter switches. Detail analysis and converter design procedure is presented for both grid to vehicle and vehicle to grid operations. A 1.2kW lab-prototype is developed and experimental results are presented to verify the analysis.",
"title": ""
},
{
"docid": "8955a1d5ad4fcc79bdbbd707d55da2c9",
"text": "A three-way power divider with ultra wideband behavior is presented. It has a compact size with an overall dimension of 20 mm times 30 mm. The proposed divider utilizes broadside coupling via multilayer microstrip/slot transitions of elliptical shape. The simulated and measured results show that the proposed device has 4.77 plusmn 1 dB insertion loss, better than 17 dB return loss, and better than 15 dB isolation across the frequency band 3.1 to 10.6 GHz.",
"title": ""
},
{
"docid": "b0575058a6950bc17a976504145dca0e",
"text": "BACKGROUND\nCitation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.",
"title": ""
},
{
"docid": "483c95f5f42388409dceb8cdb3792d19",
"text": "The world of e-commerce is reshaping marketing strategies based on the analysis of e-commerce data. Huge amounts of data are being collecting and can be analyzed for some discoveries that may be used as guidance for people sharing same interests but lacking experience. Indeed, recommendation systems are becoming an essential business strategy tool from just a novelty. Many large e-commerce web sites are already encapsulating recommendation systems to provide a customer friendly environment by helping customers in their decision-making process. A recommendation system learns from a customer behavior patterns and recommend the most valuable from available alternative choices. In this paper, we developed a two-stage algorithm using self-organizing map (SOM) and fuzzy k-means with an improved distance function to classify users into clusters. This will lead to have in the same cluster users who mostly share common interests. Results from the combination of SOM and fuzzy K-means revealed better accuracy in identifying user related classes or clusters. We validated our results using various datasets to check the accuracy of the employed clustering approach. The generated groups of users form the domain for transactional datasets to find most valuable products for customers.",
"title": ""
},
{
"docid": "86f273bc450b9a3b6acee0e8d183b3cd",
"text": "This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most used dataset that includes depth information acquired from a RGB-D device, has been performed. We found that the validation method used by each work differs from the others. So, a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking into account this issue. Therefore, we present different rankings according to the methodology used for the validation in orden to clarify the existing confusion.",
"title": ""
},
{
"docid": "b0766f310c4926b475bb646911a27f34",
"text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.",
"title": ""
},
{
"docid": "a4b123705dda7ae3ac7e9e88a50bd64a",
"text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "99eabf05f0c1b2979aad244c322ecc64",
"text": "Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potential benefits when compared to deterministic MLP networks. (1) They allow to learn one-to-many type of mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been shown to be an excellent regularizer, which makes generalization performance potentially better in general. However, training stochastic networks is considerably more difficult. We study training using M samples of hidden activations per input. We show that the case M = 1 leads to a fundamentally different behavior where the network tries to avoid stochasticity. We propose two new estimators for the training gradient and propose benchmark tests for comparing training algorithms. Our experiments confirm that training stochastic networks is difficult and show that the proposed two estimators perform favorably among all the five known estimators.",
"title": ""
},
{
"docid": "b3cb053d44a90a2a9a9332ac920f0e90",
"text": "This study develops a crowdfunding sponsor typology based on sponsors’ motivations for participating in a project. Using a two by two crowdfunding motivation framework, we analyzed six relevant funding motivations—interest, playfulness, philanthropy, reward, relationship, and recognition—and identified four types of crowdfunding sponsors: angelic backer, reward hunter, avid fan, and tasteful hermit. They are profiled in terms of the antecedents and consequences of funding motivations. Angelic backers are similar in some ways to traditional charitable donors while reward hunters are analogous to market investors; thus they differ in their approach to crowdfunding. Avid fans comprise the most passionate sponsor group, and they are similar to members of a brand community. Tasteful hermits support their projects as actively as avid fans, but they have lower extrinsic and others-oriented motivations. The results show that these sponsor types reflect the nature of crowdfunding as a new form of co-creation in the E-commerce context. 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "543d5bf66aae84dab59ed9f124c23375",
"text": "This paper describes a blueprint for the proper design of a Fair Internet Regulation System (IRS), i.e. a system that will be implemented in national level and will encourage the participation of Internet users in enriching and correcting its “behavior”. Such a system will be easier to be accepted by Western democracies willing to implement a fair Internet regulation policy.",
"title": ""
},
{
"docid": "b4dd2dd381fc00419172d87ef113a422",
"text": "Automatic on-line signature verification is an intriguing intellectual challenge with many practical applications. I review the context of this problem and then describe my own approach to it, which breaks with tradition by relying primarily on the detailed shape of a signature for its automatic verification, rather than relying primarily on the pen dynamics during the production of the signature. I propose a robust, reliable, and elastic localshape-based model for handwritten on-line curves; this model is generated by first parameterizing each on-line curve over its normalized arc-length and then representing along the length of the curve, in a moving coordinate frame, measures of the curve within a sliding window that are analogous to the position of the center of mass, the torque exerted by a force, and the moments of inertia of a mass distribution about its center of mass. Further, I suggest the weighted and biased harmonic mean as a graceful mechanism of combining errors from multiple models of which at least one model is applicable but not necessarily more than one model is applicable, recommending that each signature be represented by multiple models, these models, perhaps, local and global, shape based and dynamics based. Finally, I outline a signature-verification algorithm that I have implemented and tested successfully both on databases and in live experiments.",
"title": ""
},
{
"docid": "11deeb206a297419b5af89d5700fc705",
"text": "While offering unique performance and energy-saving advantages, the use of Field-Programmable Gate Arrays (FPGAs) for database acceleration has demanded major concessions from system designers. Either the programmable chips have been used for very basic application tasks (such as implementing a rigid class of selection predicates) or their circuit definition had to be completely recompiled at runtime—a very CPU-intensive and time-consuming effort.\n This work eliminates the need for such concessions. As part of our XLynx implementation—an FPGA-based XML filter—we present skeleton automata, which is a design principle for data-intensive hardware circuits that offers high expressiveness and quick reconfiguration at the same time. Skeleton automata provide a generic implementation for a class of finite-state automata. They can be parameterized to any particular automaton instance in a matter of microseconds or less (as opposed to minutes or hours for complete recompilation).\n We showcase skeleton automata based on XML projection [Marian and Siméon 2003], a filtering technique that illustrates the feasibility of our strategy for a real-world and challenging task. By performing XML projection in hardware and filtering data in the network, we report on performance improvements of several factors while remaining nonintrusive to the back-end XML processor (we evaluate XLynx using the Saxon engine).",
"title": ""
},
{
"docid": "62d1a1214113f74badcbadc83b642eca",
"text": "Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.",
"title": ""
},
{
"docid": "3412cf0349cee1c21c433477696641b4",
"text": "Three experiments examined the impact of excessive violence in sport video games on aggression-related variables. Participants played either a nonviolent simulation-based sports video game (baseball or football) or a matched excessively violent sports video game. Participants then completed measures assessing aggressive cognitions (Experiment 1), aggressive affect and attitudes towards violence in sports (Experiment 2), or aggressive behavior (Experiment 3). Playing an excessively violent sports video game increased aggressive affect, aggressive cognition, aggressive behavior, and attitudes towards violence in sports. Because all games were competitive, these findings indicate that violent content uniquely leads to increases in several aggression-related variables, as predicted by the General Aggression Model and related social–cognitive models. 2009 Elsevier Inc. All rights reserved. In 2002, ESPN aired an investigative piece examining the impact of excessively violent sports video games on youth’s attitudes towards sports (ESPN, 2002). At the time, Midway Games produced several sports games (e.g., NFL Blitz, MLB Slugfest, and NHL Hitz) containing excessive and unrealistic violence, presumably to appeal to non-sport fan video game players. These games were officially licensed by the National Football League, Major League Baseball, and the National Hockey League, which permitted Midway to implement team logos, players’ names, and players’ likenesses into the games. Within these games, players control real-life athletes and can perform excessively violent behaviors on the electronic field. The ESPN program questioned why the athletic leagues would allow their license to be used in this manner and what effect these violent sports games had on young players. Then in December 2004, the NFL granted exclusive license rights to EA Sports (ESPN.com, 2005). In response, Midway Games began publishing a more violent, grittier football game based on a fictitious league. The new football video game, which is rated appropriate only for people seventeen and older, features fictitious players engaging in excessive violent behaviors on and off the field, drug use, sex, and gambling (IGN.com, 2005). Violence in video games has been a major social issue, not limited to violence in sports video games. Over 85% of the games on ll rights reserved. ychology, Iowa State Univernited States. Fax: +1 515 294 the market contain some violence (Children Now, 2001). Approximately half of video games include serious violent actions toward other game characters (Children Now, 2001; Dietz, 1998; Dill, Gentile, Richter, & Dill, 2005). Indeed, Congressman Joe Baca of California recently introduced Federal legislation to require that violent video games contain a warning label about their link to aggression (Baca, 2009). Since 1999, the amount of daily video game usage by youth has nearly doubled (Roberts, Foehr, & Rideout, 2005). Almost 60% of American youth from ages 8 to 18 report playing video games on ‘‘any given day” and 30% report playing for more than an average of an hour a day (Roberts et al., 2005). Video game usage is high in youth regardless of sex, race, parental education, or household income (Roberts et al., 2005). Competition-only versus violent-content hypotheses Recent meta-analyses (e.g., Anderson et al., 2004, submitted for publication) have shown that violent video game exposure increases physiological arousal, aggressive affect, aggressive cognition, and aggressive behavior. 
Other studies link violent video game play to physiological desensitization to violence (e.g., Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007). Particularly interesting is the recent finding that violent video game play can increase aggression in both short- and long-term contexts. Besides the empirical evidence, there are strong theoretical reasons from the cognitive, social, and personality domains to expect violent video game effects on aggression-related variables. However, currently there are two competing hypotheses as to how violent video games increase aggression: the violent-content hypothesis and the competition-only hypothesis. General Aggression Model and the violent-content hypothesis The General Aggression Model (GAM) is an integration of several prior models of aggression (e.g., social learning theory, cognitive neoassociation) and has been detailed in several publications (Anderson & Bushman, 2002; Anderson & Carnagey, 2004; Anderson, Gentile, & Buckley, 2007; Anderson & Huesmann, 2003). GAM describes a cyclical pattern of interaction between the person and the environment. Input variables, such as provocation and aggressive personality, can affect decision processes and behavior by influencing one's present internal state in at least one of three primary ways: by influencing current cognitions, affective state, and physiological arousal. That is, a specific input variable may directly influence only one, or two, or all three aspects of a person's internal state. For example, uncomfortably hot temperature appears to increase aggression primarily by its direct impact on affective state (Anderson, Anderson, Dorr, DeNeve, & Flanagan, 2000). Of course, because affect, arousal, and cognition tend to influence each other, even input variables that primarily influence one aspect of internal state also tend to indirectly influence the other aspects. Although GAM is a general model and not specifically a model of media violence effects, it can easily be applied to media effects. Theoretically, violent media exposure might affect all three components of present internal state. Research has shown that playing violent video games can temporarily increase aggressive thoughts (e.g., Kirsh, 1998), affect (e.g., Ballard & Weist, 1996), and arousal (e.g., Calvert & Tan, 1994). Of course, nonviolent games also can increase arousal, and for this reason much prior work has focused on testing whether violent content can increase aggressive behavior even when physiological arousal is controlled. This usually is accomplished by selecting nonviolent games that are equally arousing (e.g., Anderson et al., 2004). Despite GAM's primary focus on the current social episode, it is not restricted to short-term effects. With repeated exposure to certain types of stimuli (e.g., media violence, certain parenting practices), particular knowledge structures (e.g., aggressive scripts, attitudes towards violence) become chronically accessible. Over time, the individual employs these knowledge structures and occasionally receives environmental reinforcement for their usage. With time and repeated use, these knowledge structures gain strength and connections to other stimuli and knowledge structures, and therefore are more likely to be used in later situations. 
This accounts for the finding that repeatedly exposing children to media violence increases later aggression, even into adulthood (Anderson, Sakamoto, Gentile, Ihori, & Shibuya, 2008; Huesmann & Miller, 1994; Huesmann, Moise-Titus, Podolski, & Eron, 2003; Möller & Krahé, 2009; Wallenius & Punamaki, 2008). Such longterm effects result from the development, automatization, and reinforcement of aggression-related knowledge structures. In essence, the creation and automatization of these aggression-related knowledge structures and concomitant emotional desensitization changes the individual’s personality. For example, long-term consumers of violent media can become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure, or would have become without such exposure (e.g., Funk, Baldacci, Pasold, & Baumgardner, 2004; Gentile, Lynch, Linder, & Walsh, 2004; Krahé & Möller, 2004; Uhlmann & Swanson, 2004). In sum, GAM predicts that one way violent video games increase aggression is by the violent content increasing at least one of the aggression-related aspects of a person’s current internal state (short-term context), and over time increasing the chronic accessibility of aggression-related knowledge structures. This is the violent-content hypothesis. The competition hypothesis The competition hypothesis maintains that competitive situations stimulate aggressiveness. According to this hypothesis, many previous short-term (experimental) video game studies have found links between violent games and aggression not because of the violent content, but because violent video games typically involve competition, whereas nonviolent video games frequently are noncompetitive. The competitive aspect of video games might increase aggression by increasing arousal or by increasing aggressive thoughts or affect. Previous research has demonstrated that increases in physiological arousal can cause increases in aggression under some circumstances (Berkowitz, 1993). Competitive aspects of violent video games could also increase aggressive cognitions via links between aggressive and competition concepts (Anderson & Morrow, 1995; Deutsch, 1949, 1993). Thus, at a general level such competition effects are entirely consistent with GAM and with the violentcontent hypothesis. However, a strong version of the competition hypothesis states that violent content has no impact beyond its effects on competition and its sequela. This strong version, which we call the competition-only hypothesis, has not been adequately tested. Testing the competition-only hypothesis There has been little research conducted to examine the violent-content hypothesis versus the competition-only hypothesis (see Carnagey & Anderson, 2005 for one such example). To test these hypotheses against each other, one must randomly assign participants to play either violent or nonviolent video games, all of which are competitive. The use of sports video games meets this requirement and has other benefits. E",
"title": ""
}
] | scidocsrr |
a147482deaac13986d5360193423da26 | Sex differences in response to visual sexual stimuli: a review. | [
{
"docid": "9b8d4b855bab5e2fdcadd1fe1632f197",
"text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.",
"title": ""
},
{
"docid": "a6a7770857964e96f98bd4021d38f59f",
"text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.",
"title": ""
}
] | [
{
"docid": "1c832140fce684c68fd91779d62596e3",
"text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.",
"title": ""
},
{
"docid": "e92299720be4d028b4a7d726c99bc216",
"text": "Nowadays terahertz spectroscopy is a well-established technique and recent progresses in technology demonstrated that this new technique is useful for both fundamental research and industrial applications. Varieties of applications such as imaging, non destructive testing, quality control are about to be transferred to industry supported by permanent improvements from basic research. Since chemometrics is today routinely applied to IR spectroscopy, we discuss in this paper the advantages of using chemometrics in the framework of terahertz spectroscopy. Different analytical procedures are illustrates. We conclude that advanced data processing is the key point to validate routine terahertz spectroscopy as a new reliable analytical technique.",
"title": ""
},
{
"docid": "b27ab468a885a3d52ec2081be06db2ef",
"text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.",
"title": ""
},
{
"docid": "1abcede6d3044e5550df404cfb7c87a4",
"text": "There is intense interest in graphene in fields such as physics, chemistry, and materials science, among others. Interest in graphene's exceptional physical properties, chemical tunability, and potential for applications has generated thousands of publications and an accelerating pace of research, making review of such research timely. Here is an overview of the synthesis, properties, and applications of graphene and related materials (primarily, graphite oxide and its colloidal suspensions and materials made from them), from a materials science perspective.",
"title": ""
},
{
"docid": "ee25ce281929eb63ce5027060be799c9",
"text": "Enfin, je ne peux terminer ces quelques lignes sans remercier ma famille qui dans l'ombre m'apporte depuis si longtemps tout le soutien qui m'est nécessaire, et spécialement, mes pensées vont à Magaly. Un peu d'histoire… Dès les années 60, les données informatisées dans les organisations ont pris une importance qui n'a cessé de croître. Les systèmes informatiques gérant ces données sont utilisés essentiellement pour faciliter l'activité quotidienne des organisations et pour soutenir les prises de décision. La démocratisation de la micro-informatique dans les années 80 a permis un important développement de ces systèmes augmentant considérablement les quantités de données 1 2 informatisées disponibles. Face aux évolutions nombreuses et rapides qui s'imposent aux organisations, la prise de décision est devenue dès les années 90 une activité primordiale nécessitant la mise en place de systèmes dédiés efficaces [Inmon, 1994]. A partir de ces années, les éditeurs de logiciels ont proposé des outils facilitant l'analyse des données pour soutenir les prises de décision. Les tableurs sont probablement les premiers outils qui ont été utilisés pour analyser les données à des fins décisionnelles. Ils ont été complétés par des outils facilitant l'accès aux données pour les décideurs au travers d'interfaces graphiques dédiées au « requêtage » ; citons le logiciel Business Objects qui reste aujourd'hui encore l'un des plus connus. Le développement de systèmes dédiés à la prise de décision a vu naître des outils E.T.L. (« Extract-Transform-Load ») destinés à faciliter l'extraction et la transformation de données décisionnelles. Dès la fin des années 90, les acteurs importants tels que Microsoft, Oracle, IBM, SAP sont intervenus sur ce nouveau marché en faisant évoluer leurs outils et en acquérant de nombreux logiciels spécialisés ; par exemple, SAP vient d'acquérir Business Objects pour 4,8 milliards d'euros. Ils disposent aujourd'hui d'offres complètes intégrant l'ensemble de la chaîne décisionnelle : E.T.L., stockage (S.G.B.D.), restitution et analyse. Cette dernière décennie connaît encore une évolution marquante avec le développement d'une offre issue du monde du logiciel libre (« open source ») qui atteint aujourd'hui une certaine maturité (Talend 3 , JPalo 4 , Jasper 5). Dominée par les outils du marché, l'informatique décisionnelle est depuis le milieu des années 90 un domaine investi par le monde de la recherche au travers des concepts d'entrepôt de données (« data warehouse ») [Widom, 1995] [Chaudhury, et al., 1997] et d'OLAP (« On-Line Analytical Processing ») [Codd, et al., 1993]. D'abord diffusées dans …",
"title": ""
},
{
"docid": "bc3aaa01d7e817a97c6709d193a74f9e",
"text": "Monolingual dictionaries are widespread and semantically rich resources. This paper presents a simple model that learns to compute word embeddings by processing dictionary definitions and trying to reconstruct them. It exploits the inherent recursivity of dictionaries by encouraging consistency between the representations it uses as inputs and the representations it produces as outputs. The resulting embeddings are shown to capture semantic similarity better than regular distributional methods and other dictionary-based methods. In addition, the method shows strong performance when trained exclusively on dictionary data and generalizes in one shot.",
"title": ""
},
{
"docid": "e34d244a395a753b0cb97f8535b56add",
"text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.",
"title": ""
},
{
"docid": "9276dc2798f90483025ea01e69c51001",
"text": "Convolutional Neural Network (CNN) was firstly introduced in Computer Vision for image recognition by LeCun et al. in 1989. Since then, it has been widely used in image recognition and classification tasks. The recent impressive success of Krizhevsky et al. in ILSVRC 2012 competition demonstrates the significant advance of modern deep CNN on image classification task. Inspired by his work, many recent research works have been concentrating on understanding CNN and extending its application to more conventional computer vision tasks. Their successes and lessons have promoted the development of both CNN and vision science. This article makes a survey of recent progress in CNN since 2012. We will introduce the general architecture of a modern CNN and make insights into several typical CNN incarnations which have been studied extensively. We will also review the efforts to understand CNNs and review important applications of CNNs in computer vision tasks.",
"title": ""
},
{
"docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3",
"text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.",
"title": ""
},
{
"docid": "361e874cccb263b202155ef92e502af3",
"text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.",
"title": ""
},
{
"docid": "edd39b11eaed2dc89ab74542ce9660bb",
"text": "The volume of data is growing at an increasing rate. This growth is both in size and in connectivity, where connectivity refers to the increasing presence of relationships between data. Social networks such as Facebook and Twitter store and process petabytes of data each day. Graph databases have gained renewed interest in the last years, due to their applications in areas such as the Semantic Web and Social Network Analysis. Graph databases provide an effective and efficient solution to data storage and querying data in these scenarios, where data is rich in relationships. In this paper, it is analyzed the fundamental points of graph databases, showing their main characteristics and advantages. We study Neo4j, the top graph database software in the market and evaluate its performance using the Social Network Benchmark (SNB).",
"title": ""
},
{
"docid": "e74240aef79f42ac0345a2ae49ecde4a",
"text": "Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-range dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind’s WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "d0ebee0648beecbd00faaf67f76f256c",
"text": "Text mining is the use of automated methods for exploiting the enormous amount of knowledge available in the biomedical literature. There are at least as many motivations for doing text mining work as there are types of bioscientists. Model organism database curators have been heavy participants in the development of the field due to their need to process large numbers of publications in order to populate the many data fields for every gene in their species of interest. Bench scientists have built biomedical text mining applications to aid in the development of tools for interpreting the output of high-throughput assays and to improve searches of sequence databases (see [1] for a review). Bioscientists of every stripe have built applications to deal with the dual issues of the doubleexponential growth in the scientific literature over the past few years and of the unique issues in searching PubMed/ MEDLINE for genomics-related publications. A surprising phenomenon can be noted in the recent history of biomedical text mining: although several systems have been built and deployed in the past few years—Chilibot, Textpresso, and PreBIND (see Text S1 for these and most other citations), for example—the ones that are seeing high usage rates and are making productive contributions to the working lives of bioscientists have been built not by text mining specialists, but by bioscientists. We speculate on why this might be so below. Three basic types of approaches to text mining have been prevalent in the biomedical domain. Co-occurrence– based methods do no more than look for concepts that occur in the same unit of text—typically a sentence, but sometimes as large as an abstract—and posit a relationship between them. (See [2] for an early co-occurrence–based system.) For example, if such a system saw that BRCA1 and breast cancer occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Some early biomedical text mining systems were co-occurrence–based, but such systems are highly error prone, and are not commonly built today. In fact, many text mining practitioners would not consider them to be text mining systems at all. Co-occurrence of concepts in a text is sometimes used as a simple baseline when evaluating more sophisticated systems; as such, they are nontrivial, since even a co-occurrence– based system must deal with variability in the ways that concepts are expressed in human-produced texts. For example, BRCA1 could be referred to by any of its alternate symbols—IRIS, PSCP, BRCAI, BRCC1, or RNF53 (or by any of their many spelling variants, which include BRCA1, BRCA-1, and BRCA 1)— or by any of the variants of its full name, viz. breast cancer 1, early onset (its official name per Entrez Gene and the Human Gene Nomenclature Committee), as breast cancer susceptibility gene 1, or as the latter’s variant breast cancer susceptibility gene-1. Similarly, breast cancer could be referred to as breast cancer, carcinoma of the breast, or mammary neoplasm. These variability issues challenge more sophisticated systems, as well; we discuss ways of coping with them in Text S1. Two more common (and more sophisticated) approaches to text mining exist: rule-based or knowledgebased approaches, and statistical or machine-learning-based approaches. The variety of types of rule-based systems is quite wide. In general, rulebased systems make use of some sort of knowledge. 
This might take the form of general knowledge about how language is structured, specific knowledge about how biologically relevant facts are stated in the biomedical literature, knowledge about the sets of things that bioscientists talk about and the kinds of relationships that they can have with one another, and the variant forms by which they might be mentioned in the literature, or any subset or combination of these. (See [3] for an early rule-based system, and [4] for a discussion of rule-based approaches to various biomedical text mining tasks.) At one end of the spectrum, a simple rule-based system might use hardcoded patterns—for example, ,gene. plays a role in ,disease. or ,disease. is associated with ,gene.—to find explicit statements about the classes of things in which the researcher is interested. At the other end of the spectrum, a rulebased system might use sophisticated linguistic and semantic analyses to recognize a wide range of possible ways of making assertions about those classes of things. It is worth noting that useful systems have been built using technologies at both ends of the spectrum, and at many points in between. In contrast, statistical or machine-learning–based systems operate by building classifiers that may operate on any level, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. (See [5] for an early learning-based system, and [4] for a discussion of learning-based approaches to various biomedical text mining tasks.) Rule-based and statistical systems each have their advantages and",
"title": ""
},
{
"docid": "86ba97e91a8c2bcb1015c25df7c782db",
"text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee become harder and knee stiffness will occur, which may causes many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is being used to patient recovery, retrieving moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface, is designed and manufactured. The knee joint can be rotated from the range of -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its own user-friendly interface. This apparatus is controlled via an Android-based application; therefore, the users can use this machine easily via their own smartphones without the necessity to an extra controlling device. Besides, because of its apt size, this machine is a portable device. Smooth movement without any vibration and adjusting capability for different anatomies are other merits of this new CPM machine.",
"title": ""
},
{
"docid": "be6e3666eba5752a59605a86e5bd932f",
"text": "Accurate knowledge on the absolute or true speed of a vehicle, if and when available, can be used to enhance advanced vehicle dynamics control systems such as anti-lock brake systems (ABS) and auto-traction systems (ATS) control schemes. Current conventional method uses wheel speed measurements to estimate the speed of the vehicle. As a result, indication of the vehicle speed becomes erroneous and, thus, unreliable when large slips occur between the wheels and terrain. This paper describes a fuzzy rule-based Kalman filtering technique which employs an additional accelerometer to complement the wheel-based speed sensor, and produce an accurate estimation of the true speed of a vehicle. We use the Kalman filters to deal with the noise and uncertainties in the speed and acceleration models, and fuzzy logic to tune the covariances and reset the initialization of the filter according to slip conditions detected and measurement-estimation condition. Experiments were conducted using an actual vehicle to verify the proposed strategy. Application of the fuzzy logic rule-based Kalman filter shows that accurate estimates of the absolute speed can be achieved euen under sagnapcant brakang skzd and traction slip conditions.",
"title": ""
},
{
"docid": "335e92a896c6cce646f3ae81c5d9a02c",
"text": "Vulnerabilities in web applications allow malicious users to obtain unrestricted access to private and confidential information. SQL injection attacks rank at the top of the list of threats directed at any database-driven application written for the Web. An attacker can take advantages of web application programming security flaws and pass unexpected malicious SQL statements through a web application for execution by the back-end database. This paper proposes a novel specification-based methodology for the detection of exploitations of SQL injection vulnerabilities. The new approach on the one hand utilizes specifications that define the intended syntactic structure of SQL queries that are produced and executed by the web application and on the other hand monitors the application for executing queries that are in violation of the specification.\n The three most important advantages of the new approach against existing analogous mechanisms are that, first, it prevents all forms of SQL injection attacks; second, its effectiveness is independent of any particular target system, application environment, or DBMS; and, third, there is no need to modify the source code of existing web applications to apply the new protection scheme to them.\n We developed a prototype SQL injection detection system (SQL-IDS) that implements the proposed algorithm. The system monitors Java-based applications and detects SQL injection attacks in real time. We report some preliminary experimental results over several SQL injection attacks that show that the proposed query-specific detection allows the system to perform focused analysis at negligible computational overhead without producing false positives or false negatives. Therefore, the new approach is very efficient in practice.",
"title": ""
},
{
"docid": "c1e3872540aa37d6253b3e52bb3551a9",
"text": "Human Activity recognition (HAR) is an important area of research in ubiquitous computing and Human Computer Interaction. To recognize activities using mobile or wearable sensor, data are collected using appropriate sensors, segmented, needed features extracted and activities categories using discriminative models (SVM, HMM, MLP etc.). Feature extraction is an important stage as it helps to reduce computation time and ensure enhanced recognition accuracy. Earlier researches have used statistical features which require domain expert and handcrafted features. However, the advent of deep learning that extracts salient features from raw sensor data and has provided high performance in computer vision, speech and image recognition. Based on the recent advances recorded in deep learning for human activity recognition, we briefly reviewed the different deep learning methods for human activities implemented recently and then propose a conceptual deep learning frameworks that can be used to extract global features that model the temporal dependencies using Gated Recurrent Units. The framework when implemented would comprise of seven convolutional layer, two Gated recurrent unit and Support Vector Machine (SVM) layer for classification of activity details. The proposed technique is still under development and will be evaluated with benchmarked datasets and compared with other baseline deep learning algorithms.",
"title": ""
},
{
"docid": "46e63e9f9dc006ad46e514adc26c12bd",
"text": "In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently, available tools ignore either the label relationship or the view complementarily. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC' 07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.",
"title": ""
}
] | scidocsrr |
61066bf4becb9baa87e749a01fb1e6b4 | Real-Time Color Coded Object Detection Using a Modular Computer Vision Library | [
{
"docid": "d8a743e35109af56eb2e99f002289d72",
"text": "The core problem addressed in this article is the 3D position detection of a spherical object of knownradius in a single image frame, obtained by a dioptric vision system consisting of only one fisheye lens camera that follows equidistant projection model. The central contribution is a bijection principle between a known-radius spherical object’s 3D world position and its 2D projected image curve, that we prove, thus establishing that for every possible 3D world position of the spherical object, there exists a unique curve on the image plane if the object is projected through a fisheye lens that follows equidistant projection model. Additionally, we present a setup for the experimental verification of the principle’s correctness. In previously published works we have applied this principle to detect and subsequently track a known-radius spherical object. 2014 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "17c6859c2ec80d4136cb8e76859e47a6",
"text": "This paper describes a complete and efficient vision system d eveloped for the robotic soccer team of the University of Aveiro, CAMB ADA (Cooperative Autonomous Mobile roBots with Advanced Distributed Ar chitecture). The system consists on a firewire camera mounted vertically on th e top of the robots. A hyperbolic mirror placed above the camera reflects the 360 d egrees of the field around the robot. The omnidirectional system is used to find t he ball, the goals, detect the presence of obstacles and the white lines, used by our localization algorithm. In this paper we present a set of algorithms to extract efficiently the color information of the acquired images and, in a second phase, ex tract the information of all objects of interest. Our vision system architect ure uses a distributed paradigm where the main tasks, namely image acquisition, co lor extraction, object detection and image visualization, are separated in se veral processes that can run at the same time. We developed an efficient color extracti on algorithm based on lookup tables and a radial model for object detection. Our participation in the last national robotic contest, ROBOTICA 2007, where we have obtained the first place in the Medium Size League of robotic soccer, shows the e ffectiveness of our algorithms. Moreover, our experiments show that the sys tem is fast and accurate having a maximum processing time independently of the r obot position and the number of objects found in the field.",
"title": ""
}
] | [
{
"docid": "944efa24cef50c0fd9d940a2ccbcdbcc",
"text": "This conceptual paper in sustainable business research introduces a business sustainability maturity model as an innovative solution to support companies move towards sustainable development. Such model offers the possibility for each firm to individually assess its position regarding five sustainability maturity levels and, as a consequence, build a tailored as well as a common strategy along its network of relationships and influence to progress towards higher levels of sustainable development. The maturity model suggested is based on the belief that business sustainability is a continuous process of evolution in which a company will be continuously seeking to achieve its vision of sustainable development in uninterrupted cycles of improvement, where at each new cycle the firm starts the process at a higher level of business sustainability performance. The referred model is therefore dynamic to incorporate changes along the way and enable its own evolution following the firm’s and its network partners’ progress towards the sustainability vision. The research on which this paper is based combines expertise in science and technology policy, R&D and innovation management, team performance and organisational learning, strategy alignment and integrated business performance, knowledge management and technology foresighting.",
"title": ""
},
{
"docid": "f132d1e91058ebc9484464e006a16da0",
"text": "We propose drl-RPN, a deep reinforcement learning-based visual recognition model consisting of a sequential region proposal network (RPN) and an object detector. In contrast to typical RPNs, where candidate object regions (RoIs) are selected greedily via class-agnostic NMS, drl-RPN optimizes an objective closer to the final detection task. This is achieved by replacing the greedy RoI selection process with a sequential attention mechanism which is trained via deep reinforcement learning (RL). Our model is capable of accumulating class-specific evidence over time, potentially affecting subsequent proposals and classification scores, and we show that such context integration significantly boosts detection accuracy. Moreover, drl-RPN automatically decides when to stop the search process and has the benefit of being able to jointly learn the parameters of the policy and the detector, both represented as deep networks. Our model can further learn to search over a wide range of exploration-accuracy trade-offs making it possible to specify or adapt the exploration extent at test time. The resulting search trajectories are image- and category-dependent, yet rely only on a single policy over all object categories. Results on the MS COCO and PASCAL VOC challenges show that our approach outperforms established, typical state-of-the-art object detection pipelines.",
"title": ""
},
{
"docid": "5e318e222f265b44aeea8bd2ee1b04db",
"text": "In this paper, a comparison between existing state of the art multilevel inverter topologies is performed. The topologies examined are the neutral point clamp multilevel inverter (NPCMLI) or diode-clamped multilevel inverter (DCMLI), the flying capacitor multilevel inverter (FCMLI) and the cascaded cell multilevel inverter (CCMLI). The comparison of these inverters is based on the criteria of output voltage quality (Peak value of the fundamental and dominant harmonic components and THD), power circuitry complexity, and implementation cost. The comparison results are based on theoretical results verified by detailed simulation results.",
"title": ""
},
{
"docid": "e960eda87bd5f7ed4d161f5aec90c166",
"text": "This paper presents a new method of extracting LF model based parameters using a spectral model matching approach. Strategies are described for overcoming some of the known difficulties of this type of approach, in particular high frequency noise. The new method performed well compared to a typical time based method particularly in terms of robustness against distortions introduced by the recording system and in terms of the ability of parameters extracted in this manner to differentiate three discrete voice qualities. Results from this study are very promising for the new method and offer a way of extracting a set of non-redundant spectral parameters that may be very useful in both recognition and synthesis systems.",
"title": ""
},
{
"docid": "02ea5b61b22d5af1b9362ca46ead0dea",
"text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.",
"title": ""
},
{
"docid": "15ef258e08dcc0fe0298c089fbf5ae1c",
"text": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.",
"title": ""
},
{
"docid": "6dbaeff4f3cb814a47e8dc94c4660d33",
"text": "An Intrusion Detection System (IDS) is a software that monitors a single or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDS.",
"title": ""
},
{
"docid": "ae23145d649c6df81a34babdfc142b31",
"text": "Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions. In this work, we introduce a disagreement regularization to explicitly encourage the diversity among multiple attention heads. Specifically, we propose three types of disagreement regularization, which respectively encourage the subspace, the attended positions, and the output representation associated with each attention head to be different from other heads. Experimental results on widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation tasks demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "46e2d530dc4feb51d759261afbb57c70",
"text": "A fall monitor system is necessary to reduce the rate of fall fatalities in elderly people. As an accelerometer has been smaller and inexpensive, it has been becoming widely used in motion detection fields. This paper proposes the falling detection algorithm based on back propagation neural network to detect the fall of elderly people. In the experiment, a tri-axial accelerometer was attached to waists of five healthy and young people. In order to evaluate the performance of the fall detection, five young people were asked to simulate four daily-life activities and four falls; walking, jumping, flopping on bed, rising from bed, front fall, back fall, left fall and right fall. The experimental results show that the proposed algorithm can potentially distinguish the falling activities from the other daily-life activities.",
"title": ""
},
{
"docid": "48f2d5374dd93771f39981bd3ae632e7",
"text": "A standard set of variables is extracted from a set of studies with different perspectives and different findings involving learning aids in the classroom. The variables are then used to analyze the studies in order to draw conclusions about learning aids in general and manipulatives in particular.",
"title": ""
},
{
"docid": "016891dcefdf3668b6359d95617536b3",
"text": "While most steps in the modern object detection methods are learnable, the region feature extraction step remains largely handcrafted, featured by RoI pooling methods. This work proposes a general viewpoint that unifies existing region feature extraction methods and a novel method that is end-to-end learnable. The proposed method removes most heuristic choices and outperforms its RoI pooling counterparts. It moves further towards fully learnable object detection.",
"title": ""
},
{
"docid": "3f268b6048d534720cac533f04c2aa7e",
"text": "This paper seeks a simple, cost effective and compact gate drive circuit for bi-directional switch of matrix converter. Principals of IGBT commutation and bi-directional switch commutation in matrix converters are reviewed. Three simple IGBT gate drive circuits are presented and simulated in PSpice and simulation results are approved by experiments in the end of this paper. Paper concludes with comparative numbers of gate drive costs.",
"title": ""
},
{
"docid": "d82538ac4d517da99a68e09c577f72c3",
"text": "Attempting to locate and quantify material on the Web that is hidden from typical search techniques.",
"title": ""
},
{
"docid": "2bfd884e92a26d017a7854be3dfb02e8",
"text": "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.",
"title": ""
},
{
"docid": "2ea86c4c0ed1b55166a5ee592f24aa95",
"text": "As a prospective candidate material for surface coating and repair applications, nickel-based superalloy Inconel 718 (IN718) was deposited on American Iron and Steel Institute (AISI) 4140 alloy steel substrate by laser engineered net shaping (LENS) to investigate the compatibility between two dissimilar materials with a focus on interface bonding and fracture behavior of the hybrid specimens. The results show that the interface between the two dissimilar materials exhibits good metallurgical bonding. Through the tensile test, all the fractures occurred in the as-deposited IN718 section rather than the interface or the substrate, implying that the as-deposited interlayer bond strength is weaker than the interfacial bond strength. From the fractography using scanning electron microscopy (SEM) and energy disperse X-ray spectrometry (EDS), three major factors affecting the tensile fracture failure of the as-deposited part are (i) metallurgical defects such as incompletely melted powder particles, lack-of-fusion porosity, and micropores; (ii) elemental segregation and Laves phase, and (iii) oxide formation. The fracture failure mechanism is a combination of all these factors which are detrimental to the mechanical properties and structural integrity by causing premature fracture failure of the as-deposited IN718.",
"title": ""
},
{
"docid": "965a86af0ad0c8a4255ac0ea36c1d497",
"text": "In this paper we introduce a real-time system for action detection. The system uses a small set of robust features extracted from 3D skeleton data. Features are effectively described based on the probability distribution of skeleton data. The descriptor computes a pyramid of sample covariance matrices and mean vectors to encode the relationship between the features. For handling the intra-class variations of actions, such as action temporal scale variations, the descriptor is computed using different window scales for each action. Discriminative elements of the descriptor are mined using feature selection. The system achieves accurate detection results on difficult unsegmented sequences. Experiments on MSRC-12 and G3D datasets show that the proposed system outperforms the state-of-the-art in detection accuracy with very low latency. To the best of our knowledge, we are the first to propose using multi-scale description in action detection from 3D skeleton data.",
"title": ""
},
{
"docid": "94c32d8ce4ad31ca630643de0c9feaeb",
"text": "A content-based image retrieval (CBIR) framework for diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. Organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specially, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-mean (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed in a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework",
"title": ""
},
{
"docid": "8ed61e6caf9864805fb6017aa5c35600",
"text": "Self-modifying code (SMC) is widely used in obfuscated program for enhancing the difficulty in reverse engineering. The typical mode of self-modifying code is restore-execute-hide, it drives program to conceal real behaviors at most of the time, and only under actual running will the real code be restored and executed. In order to locate the SMC and further recover the original logic of code for guiding program analysis, dynamic self-modifying code detecting method based on backward analysis is proposed. Our method first extracts execution trace such as instructions and status through dynamic analysis. Then we maintain a memory set to store the memory address of execution instructions, the memory set will update dynamically while backward searching the trace, and simultaneously will we check the memory write address to match with current memory set in order to identify the mode \"modify then execute\". By means of validating self-modifying code which is identified via above procedures, we can easily deobfuscate the program which use self-modifying code and achieve its original logic. A prototype that can be applied in self-modifying code detection is designed and implemented. The evaluation results show our method can trace the execution of program effectively, and can reduce the consumption in time and space.",
"title": ""
},
{
"docid": "bd39bc6b8038332c13601da16363b70b",
"text": "Hadoop MapReduce is the most popular open-source parallel programming model extensively used in Big Data analytics. Although fault tolerance and platform independence make Hadoop MapReduce the most popular choice for many users, it still has huge performance improvement potentials. Recently, RDMA-based design of Hadoop MapReduce has alleviated major performance bottlenecks with the implementation of many novel design features such as in-memory merge, prefetching and caching of map outputs, and overlapping of merge and reduce phases. Although these features reduce the overall execution time for MapReduce jobs compared to the default framework, further improvement is possible if shuffle and merge phases can also be overlapped with the map phase during job execution. In this paper, we propose HOMR (a Hybrid approach to exploit maximum Overlapping in MapReduce), that incorporates not only the features implemented in RDMA-based design, but also exploits maximum possible overlapping among all different phases compared to current best approaches. Our solution introduces two key concepts: Greedy Shuffle Algorithm and On-demand Shuffle Adjustment, both of which are essential to achieve significant performance benefits over the default MapReduce framework. Architecture of HOMR is generalized enough to provide performance efficiency both over different Sockets interface as well as previous RDMA-based designs over InfiniBand. Performance evaluations show that HOMR with RDMA over InfiniBand can achieve performance benefits of 54% and 56% compared to default Hadoop over IPoIB (IP over InfiniBand) and 10GigE, respectively. Compared to the previous best RDMA-based designs, this benefit is 29%. HOMR over Sockets also achieves a maximum of 38-40% benefit compared to default Hadoop over Sockets interface. We also evaluate our design with real-world workloads like SWIM and PUMA, and observe benefits of up to 16% and 18%, respectively, over the previous best-case RDMA-based design. To the best of our knowledge, this is the first approach to achieve maximum possible overlapping for MapReduce framework.",
"title": ""
}
] | scidocsrr |
ce039a9a63bbaf7898379e83b597090f | On brewing fresh espresso: LinkedIn's distributed data serving platform | [
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
}
] | [
{
"docid": "7974d8e70775f1b7ef4d8c9aefae870e",
"text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.",
"title": ""
},
{
"docid": "80c21770ada160225e17cb9673fff3b3",
"text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.",
"title": ""
},
{
"docid": "8b515e03e551d120db9ce670d930adeb",
"text": "In this letter, a broadband planar substrate truncated tapered microstrip line-to-dielectric image line transition on a single substrate is proposed. The design uses substrate truncated microstrip line which helps to minimize the losses due to the surface wave generation on thick microstrip line. Generalized empirical equations are proposed for the transition design and validated for different dielectric constants in millimeter-wave frequency band. Full-wave simulations are carried out using high frequency structural simulator. A back-to-back transition prototype of Ku-band is fabricated and measured. The measured return loss for 80-mm-long structure is better than 10 dB and the insertion loss is better than 2.5 dB in entire Ku-band (40% impedance bandwidth).",
"title": ""
},
{
"docid": "a93361b09b4aaf1385569a9efce7087e",
"text": "Cortical surface mapping has been widely used to compensate for individual variability of cortical shape and topology in anatomical and functional studies. While many surface mapping methods were proposed based on landmarks, curves, spherical or native cortical coordinates, few studies have extensively and quantitatively evaluated surface mapping methods across different methodologies. In this study we compared five cortical surface mapping algorithms, including large deformation diffeomorphic metric mapping (LDDMM) for curves (LDDMM-curve), for surfaces (LDDMM-surface), multi-manifold LDDMM (MM-LDDMM), FreeSurfer, and CARET, using 40 MRI scans and 10 simulated datasets. We computed curve variation errors and surface alignment consistency for assessing the mapping accuracy of local cortical features (e.g., gyral/sulcal curves and sulcal regions) and the curvature correlation for measuring the mapping accuracy in terms of overall cortical shape. In addition, the simulated datasets facilitated the investigation of mapping error distribution over the cortical surface when the MM-LDDMM, FreeSurfer, and CARET mapping algorithms were applied. Our results revealed that the LDDMM-curve, MM-LDDMM, and CARET approaches best aligned the local curve features with their own curves. The MM-LDDMM approach was also found to be the best in aligning the local regions and cortical folding patterns (e.g., curvature) as compared to the other mapping approaches. The simulation experiment showed that the MM-LDDMM mapping yielded less local and global deformation errors than the CARET and FreeSurfer mappings.",
"title": ""
},
{
"docid": "33b281b2f3509a6fdc3fd5f17f219820",
"text": "Personal robots will contribute mobile manipulation capabilities to our future smart homes. In this paper, we propose a low-cost object localization system that uses static devices with Bluetooth capabilities, which are distributed in an environment, to detect and localize active Bluetooth beacons and mobile devices. This system can be used by a robot to coarsely localize objects in retrieval tasks. We attach small Bluetooth low energy tags to objects and require at least four static Bluetooth receivers. While commodity Bluetooth devices could be used, we have built low-cost receivers from Raspberry Pi computers. The location of a tag is estimated by lateration of its received signal strengths. In experiments, we evaluate accuracy and timing of our approach, and report on the successful demonstration at the RoboCup German Open 2014 competition in Magdeburg.",
"title": ""
},
{
"docid": "7ce147a433a376dd1cc0f7f09576e1bd",
"text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).",
"title": ""
},
{
"docid": "eb962e14f34ea53dec660dfe304756b0",
"text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.",
"title": ""
},
{
"docid": "6cd301f1b6ffe64f95b7d63eb0356a87",
"text": "The purpose of this study is to analyze factors affecting on online shopping behavior of consumers that might be one of the most important issues of e-commerce and marketing field. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. One of the objectives of this study is covering the shortcomings of previous studies that didn't examine main factors that influence on online shopping behavior. This goal has been followed by using a model examining the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping behavior and subjective norms, perceived behavioral control, domain specific innovativeness and attitude on online shopping behavior as the hypotheses of study. To investigate these hypotheses 200 questionnaires dispersed among online stores of Iran. Respondents to the questionnaire were consumers of online stores in Iran which randomly selected. Finally regression analysis was used on data in order to test hypothesizes of study. This study can be considered as an applied research from purpose perspective and descriptive-survey with regard to the nature and method (type of correlation). The study identified that financial risks and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain specific innovativeness and subjective norms positively affect online shopping behavior. Furthermore, attitude toward online shopping positively affected online shopping behavior of consumers.",
"title": ""
},
{
"docid": "7a8ded6daecbee4492f19ef85c92b0fd",
"text": "Sleep problems bave become epidemic aod traditional research has discovered many causes of poor sleep. The purpose of this study was to complement existiog research by using a salutogenic or health origins framework to investigate the correlates of good sleep. The aoalysis for this study used the National College Health Assessment data that included 54,111 participaots at 71 institutions. Participaots were raodomly selected or were in raodomly selected classrooms. Results of these aoalyses indicated that males aod females who reported \"good sleep\" were more likely to have engaged regularly in physical activity, felt less exhausted, were more likely to have a healthy Body Mass Index (BMI), aod also performed better academically. In addition, good male sleepers experienced less anxietY aod had less back pain. Good female sleepers also had fewer abusive relationships aod fewer broken bones, were more likely to have been nonsmokers aod were not binge drinkers. Despite the limitations of this exploratory study, these results are compelling, however they suggest the need for future research to clarify the identified relationships.",
"title": ""
},
{
"docid": "a6f534f6d6a27b076cee44a8a188bb72",
"text": "Managing models requires extracting information from them and modifying them, and this is performed through queries. Queries can be executed at the model or at the persistence-level. Both are complementary but while model-level queries are closer to modelling engineers, persistence-level queries are specific to the persistence technology and leverage its capabilities. This paper presents MQT, an approach that translates EOL (model-level queries) to SQL (persistence-level queries) at runtime. Runtime translation provides several benefits: (i) queries are executed only when the information is required; (ii) context and metamodel information is used to get more performant translated queries; and (iii) supports translating query programs using variables and dependant queries. Translation process used by MQT is described through two examples and we also evaluate performance of the approach.",
"title": ""
},
{
"docid": "87b5c0021e513898693e575ca5479757",
"text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.",
"title": ""
},
{
"docid": "058a128a15c7d0e343adb3ada80e18d3",
"text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease whenever examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis. There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies whenever compared with chronic rhinosinusitis. Clinicians who can accurately identify odontogenic sources can increase efficacy of medical and surgical treatments and improve patient outcomes.",
"title": ""
},
{
"docid": "e2e640c34a9c30a24b068afa23f916d4",
"text": "BACKGROUND\nMicrosurgical resection of arteriovenous malformations (AVMs) located in the language and motor cortex is associated with the risk of neurological deterioration, yet electrocortical stimulation mapping has not been widely used.\n\n\nOBJECTIVE\nTo demonstrate the usefulness of intraoperative mapping with language/motor AVMs.\n\n\nMETHODS\nDuring an 11-year period, mapping was used in 12 of 431 patients (2.8%) undergoing AVM resection (5 patients with language and 7 patients with motor AVMs). Language mapping was performed under awake anesthesia and motor mapping under general anesthesia.\n\n\nRESULTS\nIdentification of a functional cortex enabled its preservation in 11 patients (92%), guided dissection through overlying sulci down to the nidus in 3 patients (25%), and influenced the extent of resection in 4 patients (33%). Eight patients (67%) had complete resections. Four patients (33%) had incomplete resections, with circumferentially dissected and subtotally disconnected AVMs left in situ, attached to areas of eloquence and with preserved venous drainage. All were subsequently treated with radiosurgery. At follow-up, 6 patients recovered completely, 3 patients were neurologically improved, and 3 patients had new neurological deficits.\n\n\nCONCLUSION\nIndications for intraoperative mapping include preoperative functional imaging that identifies the language/motor cortex adjacent to the AVM; larger AVMs with higher Spetzler-Martin grades; and patients presenting with unruptured AVMs without deficits. Mapping identified the functional cortex, promoted careful tissue handling, and preserved function. Mapping may guide dissection to AVMs beneath the cortical surface, and it may impact the decision to resect the AVM completely. More conservative, subtotal circumdissections followed by radiosurgery may be an alternative to observation or radiosurgery alone in patients with larger language/motor cortex AVMs.",
"title": ""
},
{
"docid": "aee5eb38d6cbcb67de709a30dd37c29a",
"text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. We propose that uncoating initiates after the first strand transfer of reverse transcription.",
"title": ""
},
{
"docid": "554a0628270978757eda989c67ac3416",
"text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "3b000325d8324942fc192c3df319c21d",
"text": "The proposed automatic bone age estimation system was based on the phalanx geometric characteristics and carpals fuzzy information. The system could do automatic calibration by analyzing the geometric properties of hand images. Physiological and morphological features are extracted from medius image in segmentation stage. Back-propagation, radial basis function, and support vector machine neural networks were applied to classify the phalanx bone age. In addition, the proposed fuzzy bone age (BA) assessment was based on normalized bone area ratio of carpals. The result reveals that the carpal features can effectively reduce classification errors when age is less than 9 years old. Meanwhile, carpal features will become less influential to assess BA when children grow up to 10 years old. On the other hand, phalanx features become the significant parameters to depict the bone maturity from 10 years old to adult stage. Owing to these properties, the proposed novel BA assessment system combined the phalanxes and carpals assessment. Furthermore, the system adopted not only neural network classifiers but fuzzy bone age confinement and got a result nearly to be practical clinically.",
"title": ""
},
{
"docid": "61a9bc06d96eb213ed5142bfa47920b9",
"text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.",
"title": ""
},
{
"docid": "e4179fd890a55f829e398a6f80f1d26a",
"text": "This paper presents a soft-start circuit that adopts a pulse-skipping control to prevent inrush current and output voltage overshoot during the start-up period of dc-dc converters. The purpose of the pulse-skipping control is to significantly restrain the increasing rate of the reference voltage of the error amplifier. Thanks to the pulse-skipping mechanism and the duty cycle minimization, the soft-start-up time can be extended and the restriction of the charging current and the capacitance can be relaxed. The proposed soft-start circuit is fully integrated on chip without external components, leading to a reduction in PCB area and cost. A current-mode buck converter is implemented with TSMC 0.35-μm 2P4M CMOS process. Simulation results show the output voltage of the buck converter increases smoothly and inrush current is less than 300 mA.",
"title": ""
},
{
"docid": "69504625b05c735dd80135ef106a8677",
"text": "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the groundtruth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.",
"title": ""
}
] | scidocsrr |
2dfb5e06121079ab4320b8a496e4d45e | Dialog state tracking, a machine reading approach using Memory Network | [
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
},
{
"docid": "3d22f5be70237ae0ee1a0a1b52330bfa",
"text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.",
"title": ""
}
] | [
{
"docid": "b2418dc7ae9659d643a74ba5c0be2853",
"text": "MITJA D. BACK*, LARS PENKE, STEFAN C. SCHMUKLE, KAROLINE SACHSE, PETER BORKENAU and JENS B. ASENDORPF Department of Psychology, Johannes Gutenberg-University Mainz, Germany Department of Psychology and Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, UK Department of Psychology, Westfälische Wilhelms-University Münster, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Humboldt University Berlin, Germany",
"title": ""
},
{
"docid": "05a77d687230dc28697ca1751586f660",
"text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.",
"title": ""
},
{
"docid": "854b2bfdef719879a437f2d87519d8e8",
"text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.",
"title": ""
},
{
"docid": "2493570aa0a224722a07e81c9aab55cd",
"text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In, Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.",
"title": ""
},
{
"docid": "1dbed92fbed2293b1da3080c306709e2",
"text": "The large number of peer-to-peer file-sharing applications can be subdivided in three basic categories: having a mediated, pure or hybrid architecture. This paper details each of these and indicates their respective strengths and weaknesses. In addition to this theoretical study, a number of practical experiments were conducted, with special attention for three popular applications, representative of each of the three architectures. Although a number of measurement studies have been done in the past ([1], [3], etc.) these all investigate only a fraction of the available applications and architectures, with very little focus on the bigger picture and to the recent evolutions in peer-to-peer architectures.",
"title": ""
},
{
"docid": "e9059b28b268c4dfc091e63c7419f95b",
"text": "Internet advertising is one of the most popular online business models. JavaScript-based advertisements (ads) are often directly embedded in a web publisher's page to display ads relevant to users (e.g., by checking the user's browser environment and page content). However, as third-party code, the ads pose a significant threat to user privacy. Worse, malicious ads can exploit browser vulnerabilities to compromise users' machines and install malware. To protect users from these threats, we propose AdSentry, a comprehensive confinement solution for JavaScript-based advertisements. The crux of our approach is to use a shadow JavaScript engine to sandbox untrusted ads. In addition, AdSentry enables flexible regulation on ad script behaviors by completely mediating its access to the web page (including its DOM) without limiting the JavaScript functionality exposed to the ads. Our solution allows both web publishers and end users to specify access control policies to confine ads' behaviors. We have implemented a proof-of-concept prototype of AdSentry that transparently supports the Mozilla Firefox browser. Our experiments with a number of ads-related attacks successfully demonstrate its practicality and effectiveness. The performance measurement indicates that our system incurs a small performance overhead.",
"title": ""
},
{
"docid": "c4421784554095ffed1365b3ba41bdc0",
"text": "Mood classification of music is an emerging domain of music information retrieval. In the approach presented here features extracted from an audio file are used in combination with the affective value of song lyrics to map a song onto a psychologically based emotion space. The motivation behind this system is the lack of intuitive and contextually aware playlist generation tools available to music listeners. The need for such tools is made obvious by the fact that digital music libraries are constantly expanding, thus making it increasingly difficult to recall a particular song in the library or to create a playlist for a specific event. By combining audio content information with context-aware data, such as song lyrics, this system allows the listener to automatically generate a playlist to suit their current activity or mood. Thesis Supervisor: Barry Vercoe Title: Professor of Media Arts and Sciences, Program in Media Arts and Sciences",
"title": ""
},
{
"docid": "9a1505d126d1120ffa8d9670c71cb076",
"text": "A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.",
"title": ""
},
{
"docid": "4bec71105c8dca3d0b48e99cdd4e809a",
"text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.",
"title": ""
},
{
"docid": "73be9211451c3051d817112dc94df86c",
"text": "Shear wave elasticity imaging (SWEI) is a new approach to imaging and characterizing tissue structures based on the use of shear acoustic waves remotely induced by the radiation force of a focused ultrasonic beam. SWEI provides the physician with a virtual \"finger\" to probe the elasticity of the internal regions of the body. In SWEI, compared to other approaches in elasticity imaging, the induced strain in the tissue can be highly localized, because the remotely induced shear waves are attenuated fully within a very limited area of tissue in the vicinity of the focal point of a focused ultrasound beam. SWEI may add a new quality to conventional ultrasonic imaging or magnetic resonance imaging. Adding shear elasticity data (\"palpation information\") by superimposing color-coded elasticity data over ultrasonic or magnetic resonance images may enable better differentiation of tissues and further enhance diagnosis. This article presents a physical and mathematical basis of SWEI with some experimental results of pilot studies proving feasibility of this new ultrasonic technology. A theoretical model of shear oscillations in soft biological tissue remotely induced by the radiation force of focused ultrasound is described. Experimental studies based on optical and magnetic resonance imaging detection of these shear waves are presented. Recorded spatial and temporal profiles of propagating shear waves fully confirm the results of mathematical modeling. Finally, the safety of the SWEI method is discussed, and it is shown that typical ultrasonic exposure of SWEI is significantly below the threshold of damaging effects of focused ultrasound.",
"title": ""
},
{
"docid": "53ac28d19b9f3c8f68e12016e9cfabbc",
"text": "Despite surveillance systems becoming increasingly ubiquitous in our living environment, automated surveillance, currently based on video sensory modality and machine intelligence, lacks most of the time the robustness and reliability required in several real applications. To tackle this issue, audio sensory devices have been incorporated, both alone or in combination with video, giving birth in the past decade, to a considerable amount of research. In this article, audio-based automated surveillance methods are organized into a comprehensive survey: A general taxonomy, inspired by the more widespread video surveillance field, is proposed to systematically describe the methods covering background subtraction, event classification, object tracking, and situation analysis. For each of these tasks, all the significant works are reviewed, detailing their pros and cons and the context for which they have been proposed. Moreover, a specific section is devoted to audio features, discussing their expressiveness and their employment in the above-described tasks. Differing from other surveys on audio processing and analysis, the present one is specifically targeted to automated surveillance, highlighting the target applications of each described method and providing the reader with a systematic and schematic view useful for retrieving the most suited algorithms for each specific requirement.",
"title": ""
},
{
"docid": "4457c0b480ec9f3d503aa89c6bbf03b9",
"text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-¿m CMOS technology (where VTHN ¿ 0.5 V and VTHP ¿ -0.65 V). Experimental results show that the LDO consumes 19 ¿A only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 ¿s, respectively, whereas they are more than 420 mV and 30 ¿s for the LDO without the proposed detection circuit.",
"title": ""
},
{
"docid": "2f8a6dcaeea91ef5034908b5bab6d8d3",
"text": "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.",
"title": ""
},
{
"docid": "e777794833a060f99e11675952cd3342",
"text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.",
"title": ""
},
{
"docid": "f1c1a0baa9f96d841d23e76b2b00a68d",
"text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08",
"title": ""
},
{
"docid": "823680e9de11eb0fe987d2b6827f4665",
"text": "Narcissism has been a perennial topic for psychoanalytic papers since Freud's 'On narcissism: An introduction' (1914). The understanding of this field has recently been greatly furthered by the analytical writings of Kernberg and Kohut despite, or perhaps because of, their glaring disagreements. Despite such theoretical advances, clinical theory has far outpaced clinical practice. This paper provides a clarification of the characteristics, diagnosis and development of the narcissistic personality disorder and draws out the differing treatment implications, at various levels of psychological intensity, of the two theories discussed.",
"title": ""
},
{
"docid": "abb06d560266ca1695f72e4d908cf6ea",
"text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.",
"title": ""
},
{
"docid": "a3cb3e28db4e44642ecdac8eb4ae9a8a",
"text": "A Ka-band highly linear power amplifier (PA) is implemented in 28-nm bulk CMOS technology. Using a deep class-AB PA topology with appropriate harmonic control circuit, highly linear and efficient PAs are designed at millimeter-wave band. This PA architecture provides a linear PA operation close to the saturated power. Also elaborated harmonic tuning and neutralization techniques are used to further improve the transistor gain and stability. A two-stack PA is designed for higher gain and output power than a common source (CS) PA. Additionally, average power tracking (APT) is applied to further reduce the power consumption at a low power operation and, hence, extend battery life. Both the PAs are tested with two different signals at 28.5 GHz; they are fully loaded long-term evolution (LTE) signal with 16-quadrature amplitude modulation (QAM), a 7.5-dB peakto-average power ratio (PAPR), and a 20-MHz bandwidth (BW), and a wireless LAN (WLAN) signal with 64-QAM, a 10.8-dB PAPR, and an 80-MHz BW. The CS/two-stack PAs achieve power-added efficiency (PAE) of 27%/25%, error vector magnitude (EVM) of 5.17%/3.19%, and adjacent channel leakage ratio (ACLRE-UTRA) of -33/-33 dBc, respectively, with an average output power of 11/14.6 dBm for the LTE signal. For the WLAN signal, the CS/2-stack PAs achieve the PAE of 16.5%/17.3%, and an EVM of 4.27%/4.21%, respectively, at an average output power of 6.8/11 dBm.",
"title": ""
}
] | scidocsrr |
becd7ce11f5b60485a157d56f110813c | A Systematic Classification of Knowledge, Reasoning, and Context within the ARC Dataset | [
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "fa6f272026605bddf1b18c8f8234dba6",
"text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI. Articles",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
}
] | [
{
"docid": "2d4fd6da60cad3b6a427bd406f16d6fa",
"text": "BACKGROUND\nVarious cutaneous side-effects, including, exanthema, pruritus, urticaria and Lyell or Stevens-Johnson syndrome, have been reported with meropenem (carbapenem), a rarely-prescribed antibiotic. Levofloxacin (fluoroquinolone), a more frequently prescribed antibiotic, has similar cutaneous side-effects, as well as photosensitivity. We report a case of cutaneous hyperpigmentation induced by meropenem and levofloxacin.\n\n\nPATIENTS AND METHODS\nA 67-year-old male was treated with meropenem (1g×4 daily), levofloxacin (500mg twice daily) and amikacin (500mg daily) for 2 weeks, followed by meropenem, levofloxacin and rifampicin (600mg twice daily) for 4 weeks for osteitis of the fifth metatarsal. Three weeks after initiation of antibiotic therapy, dark hyperpigmentation appeared on the lower limbs, predominantly on the anterior aspects of the legs. Histology revealed dark, perivascular and interstitial deposits throughout the dermis, which stained with both Fontana-Masson and Perls stains. Infrared microspectroscopy revealed meropenem in the dermis of involved skin. After withdrawal of the antibiotics, the pigmentation subsided slowly.\n\n\nDISCUSSION\nSimilar cases of cutaneous hyperpigmentation have been reported after use of minocycline. In these cases, histological examination also showed iron and/or melanin deposits within the dermis, but the nature of the causative pigment remains unclear. In our case, infrared spectroscopy enabled us to identify meropenem in the dermis. Two cases of cutaneous hyperpigmentation have been reported following use of levofloxacin, and the results of histological examination were similar. This is the first case of cutaneous hyperpigmentation induced by meropenem.",
"title": ""
},
{
"docid": "61c4e955604011a9b9a50ccbd2858070",
"text": "This paper presents a second-order pulsewidth modulation (PWM) feedback loop to improve power supply rejection (PSR) of any open-loop PWM class-D amplifiers (CDAs). PSR of the audio amplifier has always been a key parameter in mobile phone applications. In contrast to class-AB amplifiers, the poor PSR performance has always been the major drawback for CDAs with a half-bridge connected power stage. The proposed PWM feedback loop is fabricated using GLOBALFOUNDRIES' 0.18-μm CMOS process technology. The measured PSR is more than 80 dB and the measured total harmonic distortion is less than 0.04% with a 1-kHz input sinusoidal test tone.",
"title": ""
},
{
"docid": "354a136fc9bc939906ae1c347fd21d9c",
"text": "Due to the growing popularity of indoor location-based services, indoor data management has received significant research attention in the past few years. However, we observe that the existing indexing and query processing techniques for the indoor space do not fully exploit the properties of the indoor space. Consequently, they provide below par performance which makes them unsuitable for large indoor venues with high query workloads. In this paper, we propose two novel indexes called Indoor Partitioning Tree (IPTree) and Vivid IP-Tree (VIP-Tree) that are carefully designed by utilizing the properties of indoor venues. The proposed indexes are lightweight, have small pre-processing cost and provide nearoptimal performance for shortest distance and shortest path queries. We also present efficient algorithms for other spatial queries such as k nearest neighbors queries and range queries. Our extensive experimental study on real and synthetic data sets demonstrates that our proposed indexes outperform the existing algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "f032d36e081d2b5a4b0408b8f9b77954",
"text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.",
"title": ""
},
{
"docid": "69de53fde2c621d3fa9763fab700f37a",
"text": "Every year the number of installed wind power plants in the world increases. The horizontal axis wind turbine is the most common type of turbine but there exist other types. Here, three different wind turbines are considered; the horizontal axis wind turbine and two different concepts of vertical axis wind turbines; the Darrieus turbine and the H-rotor. This paper aims at making a comparative study of these three different wind turbines from the most important aspects including structural dynamics, control systems, maintenance, manufacturing and electrical equipment. A case study is presented where three different turbines are compared to each other. Furthermore, a study of blade areas for different turbines is presented. The vertical axis wind turbine appears to be advantageous to the horizontal axis wind turbine in several aspects. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5ea45a4376e228b3eacebb8dd8e290d2",
"text": "The sharing economy has quickly become a very prominent subject of research in the broader computing literature and the in human--computer interaction (HCI) literature more specifically. When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e.what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e.what is next to do).",
"title": ""
},
{
"docid": "d8b0ef94385d1379baeb499622253a02",
"text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.",
"title": ""
},
{
"docid": "a72ca91ab3d89e5918e8e13f98dc4a7d",
"text": "We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.",
"title": ""
},
{
"docid": "548a3bd89ca480788c258c54e67fb406",
"text": "Document summarization and keyphrase extraction are two related tasks in the IR and NLP fields, and both of them aim at extracting condensed representations from a single text document. Existing methods for single document summarization and keyphrase extraction usually make use of only the information contained in the specified document. This article proposes using a small number of nearest neighbor documents to improve document summarization and keyphrase extraction for the specified document, under the assumption that the neighbor documents could provide additional knowledge and more clues. The specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results on the Document Understanding Conference (DUC) benchmark datasets demonstrate the effectiveness and robustness of our proposed approaches. The cross-document sentence relationships in the expanded document set are validated to be beneficial to single document summarization, and the word cooccurrence relationships in the neighbor documents are validated to be very helpful to single document keyphrase extraction.",
"title": ""
},
{
"docid": "2f7944399a1f588d1b11d3cf7846af1c",
"text": "Corrosion can cause section loss or cracks in the steel members which is one of the most important causes of deterioration of steel bridges. For some critical components of a steel bridge, it is fatal and could even cause the collapse of the whole bridge. Nowadays the most common approach to steel bridge inspection is visual inspection by inspectors with inspection trucks. This paper mainly presents a climbing robot with magnetic wheels which can move on the surface of steel bridge. Experiment results shows that the climbing robot can move on the steel bridge freely without disrupting traffic to reduce the risks to the inspectors.",
"title": ""
},
{
"docid": "18dbbf0338d138f71a57b562883f0677",
"text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "beff14cfa1d0e5437a81584596e666ea",
"text": "Graphene has exceptional optical, mechanical, and electrical properties, making it an emerging material for novel optoelectronics, photonics, and flexible transparent electrode applications. However, the relatively high sheet resistance of graphene is a major constraint for many of these applications. Here we propose a new approach to achieve low sheet resistance in large-scale CVD monolayer graphene using nonvolatile ferroelectric polymer gating. In this hybrid structure, large-scale graphene is heavily doped up to 3 × 10(13) cm(-2) by nonvolatile ferroelectric dipoles, yielding a low sheet resistance of 120 Ω/□ at ambient conditions. The graphene-ferroelectric transparent conductors (GFeTCs) exhibit more than 95% transmittance from the visible to the near-infrared range owing to the highly transparent nature of the ferroelectric polymer. Together with its excellent mechanical flexibility, chemical inertness, and the simple fabrication process of ferroelectric polymers, the proposed GFeTCs represent a new route toward large-scale graphene-based transparent electrodes and optoelectronics.",
"title": ""
},
{
"docid": "6b2c009eca44ea374bb5f1164311e593",
"text": "The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. In this paper, we propose a finger-based ECG biometric system, that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup recurring to Ag/AgCl electrodes without gel as interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.",
"title": ""
},
{
"docid": "b5df59d926ca4778c306b255d60870a1",
"text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.",
"title": ""
},
{
"docid": "20705a14783c89ac38693b2202363c1f",
"text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.",
"title": ""
},
{
"docid": "ab98f6dc31d080abdb06bb9b4dba798e",
"text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. Of course, this part mostly illustrates studies done on readability formulas with the results obtained. The main objective of this part is to help readers to become familiar with the background of the formulas, the theory on which they stand, what they are good for and what they are not with regard to a number of studies cited in this section.",
"title": ""
},
{
"docid": "c0745b124949fdb5c9ffc54d66da1789",
"text": "Anemia resulting from iron deficiency is one of the most prevalent diseases in the world. As iron has important roles in several biological processes such as oxygen transport, DNA synthesis and cell growth, there is a high need for iron therapies that result in high iron bioavailability with minimal toxic effects to treat patients suffering from anemia. This study aims to develop a novel oral iron-complex formulation based on hemin-loaded polymeric micelles composed of the biodegradable and thermosensitive polymer methoxy-poly(ethylene glycol)-b-poly[N-(2-hydroxypropyl)methacrylamide-dilactate], abbreviated as mPEG-b-p(HPMAm-Lac2). Hemin-loaded micelles were prepared by addition of hemin dissolved in DMSO:DMF (1:9, one volume) to an aqueous polymer solution (nine volumes) of mPEG-b-p(HPMAm-Lac2) followed by rapidly heating the mixture at 50°C to form hemin-loaded micelles that remain intact at room and physiological temperature. The highest loading capacity for hemin in mPEG-b-p(HPMAm-Lac2) micelles was 3.9%. The average particle diameter of the hemin-micelles ranged from 75 to 140nm, depending on the concentration of hemin solution that was used to prepare the micelles. The hemin-loaded micelles were stable at pH 2 for at least 3 h which covers the residence time of the formulation in the stomach after oral administration and up to 17 h at pH 7.4 which is sufficient time for uptake of the micelles by the enterocytes. Importantly, incubation of Caco-2 cells with hemin-micelles for 24 h at 37°C resulted in ferritin levels of 2500ng/mg protein which is about 10-fold higher than levels observed in cells incubated with iron sulfate under the same conditions. The hemin formulation also demonstrated superior cell viability compared to iron sulfate with and without ascorbic acid. The study presented here demonstrates the development of a promising novel iron complex for oral delivery.",
"title": ""
},
{
"docid": "1d4a116465d9c50f085b18d526119a90",
"text": "In this paper, we investigate the efficiency of FPGA implementations of AES and AES-like ciphers, specially in the context of authenticated encryption. We consider the encryption/decryption and the authentication/verification structures of OCB-like modes (like OTR or SCT modes). Their main advantage is that they are fully parallelisable. While this feature has already been used to increase the throughput/performance of hardware implementations, it is usually overlooked while comparing different ciphers. We show how to use it with zero area overhead, leading to a very significant efficiency gain. Additionally, we show that using FPGA technology mapping instead of logic optimization, the area of both the linear and non linear parts of the round function of several AES-like primitives can be reduced, without affecting the run-time performance. We provide the implementation results of two multi-stream implementations of both the LED and AES block ciphers. The AES implementation in this paper achieves an efficiency of 38 Mbps/slice, which is the most efficient implementation in literature, to the best of our knowledge. For LED, achieves 2.5 Mbps/slice on Spartan 3 FPGA, which is 2.57x better than the previous implementation. Besides, we use our new techniques to optimize the FPGA implementation of the CAESAR candidate Deoxys-I in both the encryption only and encryption/decryption settings. Finally, we show that the efficiency gains of the proposed techniques extend to other technologies, such as ASIC, as well.",
"title": ""
},
{
"docid": "8390fd7e559832eea895fabeb48c3549",
"text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process threeand higher dimensional images. Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a ( d 1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear",
"title": ""
},
{
"docid": "fc9ec90a7fb9c18a5209f462d21cf0e1",
"text": "The demand for accurate and reliable positioning in industrial applications, especially in robotics and high-precision machines, has led to the increased use of Harmonic Drives. The unique performance features of harmonic drives, such as high reduction ratio and high torque capacity in a compact geometry, justify their widespread application. However, nonlinear torsional compliance and friction are the most fundamental problems in these components and accurate modelling of the dynamic behaviour is expected to improve the performance of the system. This paper offers a model for torsional compliance of harmonic drives. A statistical measure of variation is defined, by which the reliability of the estimated parameters for different operating conditions, as well as the accuracy and integrity of the proposed model, are quantified. The model performance is assessed by simulation to verify the experimental results. Two test setups have been developed and built, which are employed to evaluate experimentally the behaviour of the system. Each setup comprises a different type of harmonic drive, namely the high load torque and the low load torque harmonic drive. The results show an accurate match between the simulation torque obtained from the identified model and the measured torque from the experiment, which indicates the reliability of the proposed model.",
"title": ""
}
] | scidocsrr |
d03b7d2744599ea36557e544448ca5e2 | Analysis of Credit Card Fraud Detection Techniques | [
{
"docid": "782c8958fa9107b8d1087fe0c79de6ee",
"text": "Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutes. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions. The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.",
"title": ""
},
{
"docid": "e404699c5b86d3a3a47a1f3d745eecc1",
"text": "We apply Artificial Immune Systems(AIS) [4] for credit card fraud detection and we compare it to other methods such as Neural Nets(NN) [8] and Bayesian Nets(BN) [2], Naive Bayes(NB) and Decision Trees(DT) [13]. Exhaustive search and Genetic Algorithm(GA) [7] are used to select optimized parameters sets, which minimizes the fraud cost for a credit card database provided by a Brazilian card issuer. The specifics of the fraud database are taken into account, such as skewness of data and different costs associated with false positives and negatives. Tests are done with holdout sample sets, and all executions are run using Weka [18], a publicly available software. Our results are consistent with the early result of Maes in [12] which concludes that BN is better than NN, and this occurred in all our evaluated tests. Although NN is widely used in the market today, the evaluated implementation of NN is among the worse methods for our database. In spite of a poor behavior if used with the default parameters set, AIS has the best performance when parameters optimized by GA are used.",
"title": ""
}
] | [
{
"docid": "b5c7b9f1f57d3d79d3fc8a97eef16331",
"text": "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset [36], and object category detection, where we out-perform Aubry et al. [3] for \"chair\" detection on a subset of the Pascal VOC dataset.",
"title": ""
},
{
"docid": "cd1274c785a410f0e38b8e033555ee9b",
"text": "This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.",
"title": ""
},
{
"docid": "46b5082df5dfd63271ec942ce28285fa",
"text": "The area under the ROC curve (AUC) is a very widely used measure of performance for classification and diagnostic rules. It has the appealing property of being objective, requiring no subjective input from the user. On the other hand, the AUC has disadvantages, some of which are well known. For example, the AUC can give potentially misleading results if ROC curves cross. However, the AUC also has a much more serious deficiency, and one which appears not to have been previously recognised. This is that it is fundamentally incoherent in terms of misclassification costs: the AUC uses different misclassification cost distributions for different classifiers. This means that using the AUC is equivalent to using different metrics to evaluate different classification rules. It is equivalent to saying that, using one classifier, misclassifying a class 1 point is p times as serious as misclassifying a class 0 point, but, using another classifier, misclassifying a class 1 point is P times as serious, where p≠P. This is nonsensical because the relative severities of different kinds of misclassifications of individual points is a property of the problem, not the classifiers which happen to have been chosen. This property is explored in detail, and a simple valid alternative to the AUC is proposed.",
"title": ""
},
{
"docid": "2c798421352e4f128823fca2e229e812",
"text": "The use of renewables materials for industrial applications is becoming impellent due to the increasing demand of alternatives to scarce and unrenewable petroleum supplies. In this regard, nanocrystalline cellulose, NCC, derived from cellulose, the most abundant biopolymer, is one of the most promising materials. NCC has unique features, interesting for the development of new materials: the abundance of the source cellulose, its renewability and environmentally benign nature, its mechanical properties and its nano-scaled dimensions open a wide range of possible properties to be discovered. One of the most promising uses of NCC is in polymer matrix nanocomposites, because it can provide a significant reinforcement. This review provides an overview on this emerging nanomaterial, focusing on extraction procedures, especially from lignocellulosic biomass, and on technological developments and applications of NCC-based materials. Challenges and future opportunities of NCC-based materials will be are discussed as well as obstacles remaining for their large use.",
"title": ""
},
{
"docid": "9b55e6dc69517848ae5e5040cd9d0d55",
"text": "In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.",
"title": ""
},
{
"docid": "69f417e17188c4abcd3d0873ab9a4e90",
"text": "The genetic divergence, population genetic structure, and possible speciation of the Korean firefly, Pyrocoelia rufa, were investigated on the midsouthern Korean mainland, coastal islets, a remote offshore island, Jedu-do, and Tsushima Island in Japan. Analysis of DNA sequences from the mitochondrial COI protein-coding gene revealed 20 mtDNA-sequence-based haplotypes with a maximum divergence of 5.5%. Phylogenetic analyses using PAUP, PHYLIP, and networks subdivided the P. rufa into two clades (termed clade A and B) and the minimum nucleotide divergence between them was 3.7%. Clade A occurred throughout the Korean mainland and the coastal islets and Tsushima Island in Japan, whereas clade B was exclusively found on Jeju-do Island. In the analysis of the population genetic structure, clade B formed an independent phylogeographic group, but clade A was further subdivided into three groups: two covering western and eastern parts of the Korean peninsula, respectively, and the other occupying one eastern coastal islet and Japanese Tsushima Island. Considering both phylogeny and population structure of P. rufa, the Jeju-do Island population is obviously differentiated from other P. rufa populations, but the Tsushima Island population was a subset of the Korean coastal islet, Geoje. We interpreted the isolation of the Jeju-do population and the grouping of Tsushima Island with Korean coastal islets in terms of Late Pleistocene–Holocene events. The eastern–western subdivision on the Korean mainland was interpreted partially by the presence of a large major mountain range, which bisects the midpart of the Korean peninsula into western and eastern parts.",
"title": ""
},
{
"docid": "9042a72bc42bdfd3b2f2a1fc6145b7f1",
"text": "In this paper we introduce a framework for learning from RDF data using graph kernels that count substructures in RDF graphs, which systematically covers most of the existing kernels previously defined and provides a number of new variants. Our definitions include fast kernel variants that are computed directly on the RDF graph. To improve the performance of these kernels we detail two strategies. The first strategy involves ignoring the vertex labels that have a low frequency among the instances. Our second strategy is to remove hubs to simplify the RDF graphs. We test our kernels in a number of classification experiments with real-world RDF datasets. Overall the kernels that count subtrees show the best performance. However, they are closely followed by simple bag of labels baseline kernels. The direct kernels substantially decrease computation time, while keeping performance the same. For the walks counting kernel the decrease in computation time of the approximation is so large that it thereby becomes a computationally viable kernel to use. Ignoring low frequency labels improves the performance for all datasets. The hub removal algorithm increases performance on two out of three of our smaller datasets, but has little impact when used on our larger datasets.",
"title": ""
},
{
"docid": "8414116e8ac1dbbad2403565e897c53e",
"text": "Underground is a challenging environment for wireless communication since the propagation medium is no longer air but soil, rock and water. The well established wireless communication techniques using electromagnetic (EM) waves do not work well in this environment due to three problems: high path loss, dynamic channel condition and large antenna size. New techniques using magnetic induction (MI) can solve two of the three problems (dynamic channel condition and large antenna size), but may still cause even higher path loss. In this paper, a complete characterization of the underground MI communication channel is provided. Based on the channel model, the MI waveguide technique for communication is developed in order to reduce the MI path loss. The performance of the traditional EM wave systems, the current MI systems and our improved MI waveguide system are quantitatively compared. The results reveal that our MI waveguide system has much lower path loss than the other two cases for any channel conditions.",
"title": ""
},
{
"docid": "6dc078974eb732b2cdc9538d726ab853",
"text": "We propose a non-permanent add-on that enables plenoptic imaging with standard cameras. Our design is based on a physical copying mechanism that multiplies a sensor image into a number of identical copies that still carry the plenoptic information of interest. Via different optical filters, we can then recover the desired information. A minor modification of the design also allows for aperture sub-sampling and, hence, light-field imaging. As the filters in our design are exchangeable, a reconfiguration for different imaging purposes is possible. We show in a prototype setup that high dynamic range, multispectral, polarization, and light-field imaging can be achieved with our design.",
"title": ""
},
{
"docid": "72453a8b2b70c781e1a561b5cfb9eecb",
"text": "Pair Programming is an innovative collaborative software development methodology. Anecdotal and empirical evidence suggests that this agile development method produces better quality software in reduced time with higher levels of developer satisfaction. To date, little explanation has been offered as to why these improved performance outcomes occur. In this qualitative study, we focus on how individual differences, and specifically task conflict, impact results of the collaborative software development process and related outcomes. We illustrate that low to moderate levels of task conflict actually enhance performance, while high levels mitigate otherwise anticipated positive results.",
"title": ""
},
{
"docid": "d33048fc28d13d11e7cbd68c62590e03",
"text": "Studies have recommended usability criteria for evaluating Enterprise Resource Planning (ERP) systems. However these criteria do not provide sufficient qualitative information regarding the behaviour of users when interacting with the user interface of these systems. A triangulation technique, including the use of time diaries, can be used in Human Computer Interaction (HCI) research for providing additional qualitative data that cannot be accurately collected by experimental or even observation means alone.\n Limited studies have been performed on the use of time diaries in a triangulation approach as an HCI research method for the evaluation of the usability of ERP systems. This paper reports on a case study where electronic time diaries were used in conjunction with other HCI research methods, namely, surveys and usability questionnaires, in order to evaluate the usability of an ERP system. The results of the study show that a triangulation technique including the use of time diaries is a rich and useful method that allows more flexibility for respondents and can be used to help understand user behaviour when interacting with ERP systems. A thematic analysis of the qualitative data collected from the time diaries validated the quantitative data and highlighted common problem areas encountered during typical tasks performed with the ERP system. An improved understanding of user behaviour enabled the redesign of the tasks performed during the ERP learning process and could provide guidance to ERP designers for improving the usability and ease of learning of ERP systems.",
"title": ""
},
{
"docid": "27bcbde431c340db7544b58faa597fb7",
"text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.",
"title": ""
},
{
"docid": "b0950aaea13e1eaf13a17d64feddf9b0",
"text": "In this paper, we describe the development of CiteSpace as an integrated environment for identifying and tracking thematic trends in scientific literature. The goal is to simplify the process of finding not only highly cited clusters of scientific articles, but also pivotal points and trails that are likely to characterize fundamental transitions of a knowledge domain as a whole. The trails of an advancing research field are captured through a sequence of snapshots of its intellectual structure over time in the form of Pathfinder networks. These networks are subsequently merged with a localized pruning algorithm. Pivotal points in the merged network are algorithmically identified and visualized using the betweenness centrality metric. An example of finding clinical evidence associated with reducing risks of heart diseases is included to illustrate how CiteSpace could be used. The contribution of the work is its integration of various change detection algorithms and interactive visualization capabilities to simply users' tasks.",
"title": ""
},
{
"docid": "547f8fc80017e3c63911e2ea4b2eeadd",
"text": "This work investigates the effectiveness of learning to rank methods for entity search. Entities are represented by multi-field documents constructed from their RDF triples, and field-based text similarity features are extracted for query-entity pairs. State-of-the-art learning to rank methods learn models for ad-hoc entity search. Our experiments on an entity search test collection based on DBpedia confirm that learning to rank methods are as powerful for ranking entities as for ranking documents, and establish a new state-of-the-art for accuracy on this benchmark dataset.",
"title": ""
},
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "0816dce7bc85621fd55933badcd7414e",
"text": "A novel dual-mode resonator with square-patch or corner-cut elements located at four corners of a conventional microstrip loop resonator is proposed. One of these patches or corner cuts is called the perturbation element, while the others are called reference elements. In the proposed design method, the transmission zeros are created or eliminated without sacrificing the passband response by changing the perturbation's size depending on the size of the reference elements. A simple transmission-line model is used to calculate the frequencies of the two transmission zeros. It is shown that the nature of the coupling between the degenerate modes determines the type of filter characteristic, whether it is Chebyshev or elliptic. Finally, two dual-mode microstrip bandpass filters are designed and realized using degenerate modes of the novel dual-mode resonator. The filters are evaluated by experiment and simulation with very good agreement.",
"title": ""
},
{
"docid": "a6910b083a244865805c5a9ba9167dd1",
"text": "Through the Freedom of Information and Protection of Privacy Act, we obtained design documents, called PAR Sheets, for slot machine games that are in use in Ontario, Canada. From our analysis of these PAR Sheets and observations from playing and watching others play these games, we report on the design of the structural characteristics of Ontario slots and their implications for problem gambling. We discuss characteristics such as speed of play, stop buttons, bonus modes, hand-pays, nudges, near misses, how some wins are in fact losses, and how two identical looking slot machines can have very different payback percentages. We then discuss how these characteristics can lead to multi-level reinforcement schedules (different reinforcement schedules for frequent and infrequent gamblers playing the same game) and how they may provide an illusion of control and contribute in other ways to irrational thinking, all of which are known risk factors for problem gambling.",
"title": ""
},
{
"docid": "cee66cf1d7d44e4a21d0aeb2e6d0ff64",
"text": "Generating images of texture mapped geometry requires projecting surfaces onto a two-dimensional screen. If this projection involves perspective, then a division must be performed at each pixel of the projected surface in order to correctly calculate texture map coordinates. We show how a simple extension to perspective-comect texture mapping can be used to create various lighting effects, These include arbitrary projection of two-dimensional images onto geometry, realistic spotlights, and generation of shadows using shadow maps[ 10]. These effects are obtained in real time using hardware that performs correct texture mapping. CR",
"title": ""
},
{
"docid": "98ac7e59d28d63db1e572ab160b6aa64",
"text": "We show that the Learning with Errors (LWE) problem is classically at least as hard as standard worst-case lattice problems. Previously this was only known under quantum reductions.\n Our techniques capture the tradeoff between the dimension and the modulus of LWE instances, leading to a much better understanding of the landscape of the problem. The proof is inspired by techniques from several recent cryptographic constructions, most notably fully homomorphic encryption schemes.",
"title": ""
},
{
"docid": "11a140232485cb8bcc4914b8538ab5ea",
"text": "We explain why we feel that the comparison betwen Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair.",
"title": ""
}
] | scidocsrr |
f0b9c8ab88c8d01722e7b6b391497182 | EX 2 : Exploration with Exemplar Models for Deep Reinforcement Learning | [
{
"docid": "4fc6ac1b376c965d824b9f8eb52c4b50",
"text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.",
"title": ""
},
{
"docid": "3181171d92ce0a8d3a44dba980c0cc5f",
"text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.",
"title": ""
}
] | [
{
"docid": "7b5be6623ad250bea3b84c86c6fb0000",
"text": "HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.",
"title": ""
},
{
"docid": "2997be0d8b1f7a183e006eba78135b13",
"text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.",
"title": ""
},
{
"docid": "7deac3cbb3a30914412db45f69fb27f1",
"text": "This paper presents the design, numerical analysis and measurements of a planar bypass balun that provides 1:4 impedance transformations between the unbalanced microstrip (MS) and balanced coplanar strip line (CPS). This type of balun is suitable for operation with small antennas fed with balanced a (parallel wire) transmission line, i.e. wire, planar dipoles and loop antennas. The balun has been applied to textile CPS-fed loop antennas, designed for operations below 1GHz. The performance of a loop antenna with the balun is described, as well as an idea of incorporating rigid circuits with flexible textile structures.",
"title": ""
},
{
"docid": "600ecbb2ae0e5337a568bb3489cd5e29",
"text": "This paper presents a novel approach for haptic object recognition with an anthropomorphic robot hand. Firstly, passive degrees of freedom are introduced to the tactile sensor system of the robot hand. This allows the planar tactile sensor patches to optimally adjust themselves to the object's surface and to acquire additional sensor information for shape reconstruction. Secondly, this paper presents an approach to classify an object directly from the haptic sensor data acquired by a palpation sequence with the robot hand - without building a 3d-model of the object. Therefore, a finite set of essential finger positions and tactile contact patterns are identified which can be used to describe a single palpation step. A palpation sequence can then be merged into a simple statistical description of the object and finally be classified. The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.",
"title": ""
},
{
"docid": "3daa9fc7d434f8a7da84dd92f0665564",
"text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).",
"title": ""
},
{
"docid": "244b0b0029b4b440e1c5b953bda84aed",
"text": "Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.",
"title": ""
},
{
"docid": "2c3142a8432d8ef14047c0b827d76c29",
"text": "During the early phase of replication, HIV reverse transcribes its RNA and crosses the nuclear envelope while escaping host antiviral defenses. The host factor Cyclophilin A (CypA) is essential for these steps and binds the HIV capsid; however, the mechanism underlying this effect remains elusive. Here, we identify related capsid mutants in HIV-1, HIV-2, and SIVmac that are restricted by CypA. This antiviral restriction of mutated viruses is conserved across species and prevents nuclear import of the viral cDNA. Importantly, the inner nuclear envelope protein SUN2 is required for the antiviral activity of CypA. We show that wild-type HIV exploits SUN2 in primary CD4+ T cells as an essential host factor that is required for the positive effects of CypA on reverse transcription and infection. Altogether, these results establish essential CypA-dependent functions of SUN2 in HIV infection at the nuclear envelope.",
"title": ""
},
{
"docid": "96edea15b87643d4501d10ad9e386c70",
"text": "As the visualization field matures, an increasing number of general toolkits are developed to cover a broad range of applications. However, no general tool can incorporate the latest capabilities for all possible applications, nor can the user interfaces and workflows be easily adjusted to accommodate all user communities. As a result, users will often chose either substandard solutions presented in familiar, customized tools or assemble a patchwork of individual applications glued through ad-hoc scripts and extensive, manual intervention. Instead, we need the ability to easily and rapidly assemble the best-in-task tools into custom interfaces and workflows to optimally serve any given application community. Unfortunately, creating such meta-applications at the API or SDK level is difficult, time consuming, and often infeasible due to the sheer variety of data models, design philosophies, limits in functionality, and the use of closed commercial systems. In this paper, we present the ManyVis framework which enables custom solutions to be built both rapidly and simply by allowing coordination and communication across existing unrelated applications. ManyVis allows users to combine software tools with complementary characteristics into one virtual application driven by a single, custom-designed interface.",
"title": ""
},
{
"docid": "c3566f9addba75542296f41be2bd604e",
"text": "We consider the problem of content-based spam filtering for short text messages that arise in three contexts: mobile (SMS) communication, blog comments, and email summary information such as might be displayed by a low-bandwidth client. Short messages often consist of only a few words, and therefore present a challenge to traditional bag-of-words based spam filters. Using three corpora of short messages and message fields derived from real SMS, blog, and spam messages, we evaluate feature-based and compression-model-based spam filters. We observe that bag-of-words filters can be improved substantially using different features, while compression-model filters perform quite well as-is. We conclude that content filtering for short messages is surprisingly effective.",
"title": ""
},
{
"docid": "b02bcb7e0d7669b69130604157c27c08",
"text": "The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.",
"title": ""
},
{
"docid": "96331faaf58a7e1b651a12047a1c7455",
"text": "The goal of scattered data interpolation techniques is to construct a (typically smooth) function from a set of unorganized samples. These techniques have a wide range of applications in computer graphics. For instance they can be used to model a surface from a set of sparse samples, to reconstruct a BRDF from a set of measurements, to interpolate motion capture data, or to compute the physical properties of a fluid. This course will survey and compare scattered interpolation algorithms and describe their applications in computer graphics. Although the course is focused on applying these techniques, we will introduce some of the underlying mathematical theory and briefly mention numerical considerations.",
"title": ""
},
{
"docid": "e510143a57bc2c0da2c745ec6f53572a",
"text": "Lately, fire outbreaks are common issues and its occurrence could cause severe damage toward nature and human properties. Thus, fire detection has been an important issue to protect human life and property and has increases in recent years. This paper focusing on the algorithm of fire detection using image processing techniques i.e. colour pixel classification. This Fire detection system does not require any special type of sensors and it has the ability to monitor large area and depending on the quality of camera used. The objective of this research is to design a methodology for fire detection using image as input. The propose algorithm is using colour pixel classification. This system used image enhancement technique, RGB and YCbCr colour models with given conditions to separate fire pixel from background and isolates luminance from chrominance contrasted from original image to detect fire. The propose system achieved 90% fire detection rate on average.",
"title": ""
},
{
"docid": "04af9445949dec4a0ba16fe54b7a9e62",
"text": "In the last five years, deep learning methods and particularly Convolutional Neural Networks (CNNs) have exhibited excellent accuracies in many pattern classification problems. Most of the state-of-the-art models apply data-augmentation techniques at the training stage. This paper provides a brief tutorial on data preprocessing and shows its benefits by using the competitive MNIST handwritten digits classification problem. We show and analyze the impact of different preprocessing techniques on the performance of three CNNs, LeNet, Network3 and DropConnect, together with their ensembles. The analyzed transformations are, centering, elastic deformation, translation, rotation and different combinations of them. Our analysis demonstrates that data-preprocessing techniques, such as the combination of elastic deformation and rotation, together with ensembles have a high potential to further improve the state-of-the-art accuracy in MNIST classification.",
"title": ""
},
{
"docid": "b0ce4a13ea4a2401de4978b6859c5ef2",
"text": "We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE). We present a general strategy to automatically generate one or more sentential hypotheses based on an input sentence and pre-existing manual semantic annotations. The resulting suite of datasets enables us to probe a statistical RTE model’s performance on different aspects of semantics. We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model.",
"title": ""
},
{
"docid": "dfb125e8ae2b65540c14482fe5fe26a5",
"text": "Considering the shift of museums towards digital experiences that can satiate the interests of their young audiences, we suggest an integrated schema for socially engaging large visitor groups. As a means to present our position we propose a framework for audience involvement with complex educational material, combining serious games and virtual environments along with a theory of contextual learning in museums. We describe the research methodology for validating our framework, including the description of a testbed application and results from existing studies with children in schools, summer camps, and a museum. Such findings serve both as evidence for the applicability of our position and as a guidepost for the direction we should move to foster richer social engagement of young crowds. Author",
"title": ""
},
{
"docid": "78c3573511176ba63e2cf727e09c7eb4",
"text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.",
"title": ""
},
{
"docid": "a53904f277c06e32bd6ad148399443c6",
"text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.",
"title": ""
},
{
"docid": "5c2f115e0159d15a87904e52879c1abf",
"text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.",
"title": ""
},
{
"docid": "37a91db42be93afebb02a60cd9a7b339",
"text": "We present novel method for image-text multi-modal representation learning. In our knowledge, this work is the first approach of applying adversarial learning concept to multi-modal learning and not exploiting image-text pair information to learn multi-modal feature. We only use category information in contrast with most previous methods using image-text pair information for multi-modal embedding. In this paper, we show that multi-modal feature can be achieved without image-text pair information and our method makes more similar distribution with image and text in multi-modal feature space than other methods which use image-text pair information. And we show our multi-modal feature has universal semantic information, even though it was trained for category prediction. Our model is end-to-end backpropagation, intuitive and easily extended to other multimodal learning work.",
"title": ""
},
{
"docid": "c2722939dca35be6fd8662c6b77cee1d",
"text": "The cost of moving and storing data is still a fundamental concern for computer architects. Inefficient handling of data can be attributed to conventional architectures being oblivious to the nature of the values that these data bits carry. We observe the phenomenon of spatio-value similarity, where data elements that are approximately similar in value exhibit spatial regularity in memory. This is inherent to 1) the data values of real-world applications, and 2) the way we store data structures in memory. We propose the Bunker Cache, a design that maps similar data to the same cache storage location based solely on their memory address, sacrificing some application quality loss for greater efficiency. The Bunker Cache enables performance gains (ranging from 1.08x to 1.19x) via reduced cache misses and energy savings (ranging from 1.18x to 1.39x) via reduced off-chip memory accesses and lower cache storage requirements. The Bunker Cache requires only modest changes to cache indexing hardware, integrating easily into commodity systems.",
"title": ""
}
] | scidocsrr |
3b7f92cc3a43d3f7693a835df61c006d | Deep Convolutional Neural Networks for Spatiotemporal Crime Prediction | [
{
"docid": "c39fe902027ba5cb5f0fa98005596178",
"text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitterdriven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message con∗Email address: [email protected]; Tel.: 1+ 434 924 5397; Fax: 1+ 434 982 2972 Preprint submitted to Decision Support Systems January 14, 2014 tent, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.",
"title": ""
},
{
"docid": "3a58c1a2e4428c0b875e1202055e5b13",
"text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
}
] | [
{
"docid": "eb83f7367ba11bb5582864a08bb746ff",
"text": "Probabilistic inference algorithms for find ing the most probable explanation, the max imum aposteriori hypothesis, and the maxi mum expected utility and for updating belief are reformulated as an elimination-type al gorithm called bucket elimination. This em phasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining condition ing and elimination within this framework. Bounds on complexity are given for all the al gorithms as a function of the problem's struc ture.",
"title": ""
},
{
"docid": "303d489a5f2f1cf021a1854b6c2724e0",
"text": "RGB-D cameras provide both color images and per-pixel depth stimates. The richness of this data and the recent development of low-c ost sensors have combined to present an attractive opportunity for mobile robot ics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging resu lts from recent stateof-the-art algorithms and hardware, our system enables 3D fl ight in cluttered environments using only onboard sensor data. All computation an d se sing required for local position control are performed onboard the vehicle, r educing the dependence on unreliable wireless links. However, even with accurate 3 D sensing and position estimation, some parts of the environment have more percept ual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localiz e itself along that path, it runs the risk of becoming lost or worse. We show how the Belief Roadmap (BRM) algorithm (Prentice and Roy, 2009), a belief space extensio n of the Probabilistic Roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effective ness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its li mitations. Abraham Bachrach and Samuel Prentice contributed equally to this work. Abraham Bachrach, Samuel Prentice, Ruijie He, Albert Huang a nd Nicholas Roy Computer Science and Artificial Intelligence Laboratory, Ma ss chusetts Institute of Technology, Cambridge, MA 02139. e-mail: abachrac, ruijie, albert, prentice, [email protected] u Peter Henry, Michael Krainin and Dieter Fox University of Washington, Department of Computer Science & Engi neering, Seattle, WA. e-mail: peter, mkrainin, [email protected]. Daniel Maturana The Robotics Institute, Carnegie Mellon University, Pittsbur gh, PA. e-mail: [email protected]",
"title": ""
},
{
"docid": "379df071aceaee1be2228070f0245257",
"text": "This paper reports a SiC-based solid-state circuit breaker (SSCB) with an adjustable current-time (I-t) tripping profile for both ultrafast short circuit protection and overload protection. The tripping time ranges from 0.5 microsecond to 10 seconds for a fault current ranging from 0.8X to 10X of the nominal current. The I-t tripping profile, adjustable by choosing different resistance values in the analog control circuit, can help avoid nuisance tripping of the SSCB due to inrush transient current. The maximum thermal capability of the 1200V SiC JFET static switch in the SSCB is investigated to set a practical thermal limit for the I-t tripping profile. Furthermore, a low fault current ‘blind zone’ limitation of the prior SSCB design is discussed and a new circuit solution is proposed to operate the SSCB even under a low fault current condition. Both simulation and experimental results are reported.",
"title": ""
},
{
"docid": "6018c84c0e5666b5b4615766a5bb98a9",
"text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.",
"title": ""
},
{
"docid": "e3557b0f064d848c5a9127a0c3d5f1db",
"text": "Understanding the behaviors of a software system is very important for performing daily system maintenance tasks. In practice, one way to gain knowledge about the runtime behavior of a system is to manually analyze system logs collected during the system executions. With the increasing scale and complexity of software systems, it has become challenging for system operators to manually analyze system logs. To address these challenges, in this paper, we propose a new approach for contextual analysis of system logs for understanding a system's behaviors. In particular, we first use execution patterns to represent execution structures reflected by a sequence of system logs, and propose an algorithm to mine execution patterns from the program logs. The mined execution patterns correspond to different execution paths of the system. Based on these execution patterns, our approach further learns essential contextual factors (e.g., the occurrences of specific program logs with specific parameter values) that cause a specific branch or path to be executed by the system. The mining and learning results can help system operators to understand a software system's runtime execution logic and behaviors during various tasks such as system problem diagnosis. We demonstrate the feasibility of our approach upon two real-world software systems (Hadoop and Ethereal).",
"title": ""
},
{
"docid": "cf3e66247ab575b5a8e5fe1678c209bd",
"text": "Metamorphic testing (MT) is an effective methodology for testing those so-called ``non-testable'' programs (e.g., scientific programs), where it is sometimes very difficult for testers to know whether the outputs are correct. In metamorphic testing, metamorphic relations (MRs) (which specify how particular changes to the input of the program under test would change the output) play an essential role. However, testers may typically have to obtain MRs manually.\n In this paper, we propose a search-based approach to automatic inference of polynomial MRs for a program under test. In particular, we use a set of parameters to represent a particular class of MRs, which we refer to as polynomial MRs, and turn the problem of inferring MRs into a problem of searching for suitable values of the parameters. We then dynamically analyze multiple executions of the program, and use particle swarm optimization to solve the search problem. To improve the quality of inferred MRs, we further use MR filtering to remove some inferred MRs.\n We also conducted three empirical studies to evaluate our approach using four scientific libraries (including 189 scientific functions). From our empirical results, our approach is able to infer many high-quality MRs in acceptable time (i.e., from 9.87 seconds to 1231.16 seconds), which are effective in detecting faults with no false detection.",
"title": ""
},
{
"docid": "cac379c00a4146acd06c446358c3e95a",
"text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.",
"title": ""
},
{
"docid": "58390e457d03dfec19b0ae122a7c0e0b",
"text": "A single-fed CP stacked patch antenna is proposed to cover all the GPS bands, including E5a/E5b for the Galileo system. The small aperture size (lambda/8 at the L5 band) and the single feeding property make this antenna a promising element for small GPS arrays. The design procedures and antenna performances are presented, and issues related to coupling between array elements are discussed.",
"title": ""
},
{
"docid": "efced3407e46faf9fa43ce299add28f4",
"text": "This is a pilot study of the use of “Flash cookies” by popular websites. We find that more than 50% of the sites in our sample are using Flash cookies to store information about the user. Some are using it to “respawn” or re-instantiate HTTP cookies deleted by the user. Flash cookies often share the same values as HTTP cookies, and are even used on government websites to assign unique values to users. Privacy policies rarely disclose the presence of Flash cookies, and user controls for effectuating privacy preferences are",
"title": ""
},
{
"docid": "47b5e127b64cf1842841afcdb67d6d84",
"text": "This work describes the aerodynamic characteristic for aircraft wing model with and without bird feather like winglet. The aerofoil used to construct the whole structure is NACA 653-218 Rectangular wing and this aerofoil has been used to compare the result with previous research using winglet. The model of the rectangular wing with bird feather like winglet has been fabricated using polystyrene before design using CATIA P3 V5R13 software and finally fabricated in wood. The experimental analysis for the aerodynamic characteristic for rectangular wing without winglet, wing with horizontal winglet and wing with 60 degree inclination winglet for Reynolds number 1.66×10, 2.08×10 and 2.50×10 have been carried out in open loop low speed wind tunnel at the Aerodynamics laboratory in Universiti Putra Malaysia. The experimental result shows 25-30 % reduction in drag coefficient and 10-20 % increase in lift coefficient by using bird feather like winglet for angle of attack of 8 degree. Keywords—Aerofoil, Wind tunnel, Winglet, Drag Coefficient.",
"title": ""
},
{
"docid": "c9be394df8b4827c57c5413fc28b47e8",
"text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.",
"title": ""
},
{
"docid": "d6976dd4280c0534049c33ff9efb2058",
"text": "Bitcoin, as well as many of its successors, require the whole transaction record to be reliably acquired by all nodes to prevent double-spending. Recently, many blockchains have been proposed to achieve scale-out throughput by letting nodes only acquire a fraction of the whole transaction set. However, these schemes, e.g., sharding and off-chain techniques, suffer from a degradation in decentralization or the capacity of fault tolerance. In this paper, we show that the complete set of transactions is not a necessity for the prevention of double-spending if the properties of value transfers is fully explored. In other words, we show that a value-transfer ledger like Bitcoin has the potential to scale-out by its nature without sacrificing security or decentralization. Firstly, we give a formal definition for the value-transfer ledger and its distinct features from a generic database. Then, we introduce the blockchain structure with a shared main chain for consensus and an individual chain for each node for recording transactions. A locally executable validation scheme is proposed with uncompromising validity and consistency. A beneficial consequence of our design is that nodes will spontaneously try to reduce their transmission cost by only providing the transactions needed to show that their transactions are not double spend. As a result, the network is sharded as each node only acquires part of the transaction record and a scale-out throughput could be achieved, which we call \"spontaneous sharding\".",
"title": ""
},
{
"docid": "ba56c75498bfd733eb29ea5601c53181",
"text": "The designations employed and the presentation of material in this information product do not imply the expression of any opinion whatsoever on the part of the Food and Agriculture Organization of the United Nations (FAO) concerning the legal or development status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or products of manufacturers, whether or not these have been patented, does not imply that these have been endorsed or recommended by FAO in preference to others of a similar nature that are not mentioned. The views expressed in this information product are those of the author(s) and do not necessarily reflect the views of FAO.",
"title": ""
},
{
"docid": "a29b94fb434ec5899ede49ff18561610",
"text": "Contrary to the classical (time-triggered) principle that calculates the control signal in a periodic fashion, an event-driven control is computed and updated only when a certain condition is satisfied. This notably enables to save computations in the control task while ensuring equivalent performance. In this paper, we develop and implement such strategies to control a nonlinear and unstable system, that is the inverted pendulum. We are first interested on the stabilization of the pendulum near its inverted position and propose an event-based control approach. This notably demonstrates the efficiency of the event-based scheme even in the case where the system has to be actively actuated to remain upright. We then study the swinging of the pendulum up to the desired position and propose a low-cost control law based on an energy function. The switch between both strategies is also analyzed. A real-time experimentation is realized and shows that a reduction of about 98% and 50% of samples less than the classical scheme is achieved for the swing up and stabilization parts respectively.",
"title": ""
},
{
"docid": "32cd2820e7f74e1307645874700641aa",
"text": "This paper presents a framework to model the semantic representation of binary relations produced by open information extraction systems. For each binary relation, we infer a set of preferred types on the two arguments simultaneously, and generate a ranked list of type pairs which we call schemas. All inferred types are drawn from the Freebase type taxonomy, which are human readable. Our system collects 171,168 binary relations from ReVerb, and is able to produce top-ranking relation schemas with a mean reciprocal rank of 0.337.",
"title": ""
},
{
"docid": "b4ecb4c62562517b9b16088ad8ae8c22",
"text": "This articleii presents the results of video-based Human Robot Interaction (HRI) trials which investigated people’s perceptions of different robot appearances and associated attention-seeking features and behaviors displayed by robots with different appearance and behaviors. The HRI trials studied the participants’ preferences for various features of robot appearance and behavior, as well as their personality attributions towards the robots compared to their own personalities. Overall, participants tended to prefer robots with more human-like appearance and attributes. However, systematic individual differences in the dynamic appearance ratings are not consistent with a universal effect. Introverts and participants with lower emotional stability tended to prefer the mechanical looking appearance to a greater degree than other participants. It is also shown that it is possible to rate individual elements of a particular robot’s behavior and then assess the contribution, or otherwise, of that element to the overall perception of the robot by people. Relating participants’ dynamic appearance ratings of individual robots to independent static appearance ratings provided evidence that could be taken to support a portion of the left hand side of Mori’s theoretically proposed ‘uncanny valley’ diagram. Suggestions for future work are outlined. I.INTRODUCTION Robots that are currently commercially available for use in a domestic environment and which have human interaction features are often orientated towards toy or entertainment functions. In the future, a robot companion which is to find a more generally useful place within a human oriented domestic environment, and thus sharing a private home with a person or family, must satisfy two main criteria (Dautenhahn et al. (2005); Syrdal et al. (2006); Woods et al. (2007)): It must be able to perform a range of useful tasks or functions. It must carry out these tasks or functions in a manner that is socially acceptable and comfortable for people it shares the environment with and/or it interacts with. The technical challenges in getting a robot to perform useful tasks are extremely difficult, and many researchers are currently researching into the technical capabilities that will be required to perform useful functions in a human centered environment including navigation, manipulation, vision, speech, sensing, safety, system integration and planning. The second criteria is arguably equally important, because if the robot does not exhibit socially acceptable behavior, then people may reject the robot if it is annoying, irritating, unsettling or frightening to human users. Therefore: How can a robot behave in a socially acceptable manner? Research into social robots is generally contained within the rapidly developing field of Human-Robot Interaction (HRI). For an overview of socially interactive robots (robots designed to interact with humans in a social way) see Fong et al. (2003). Relevant examples of studies and investigations into human reactions to robots include: Goetz et al. (2003) where issues of robot appearance, behavior and task domains were investigated, and Severinson-Eklundh et al. (2003) which documents a longitudinal HRI trial investigating the human perspective of using a robotic assistant over several weeks . Khan (1998), Scopelliti et al. (2004) and Dautenhahn et al. (2005) have surveyed peoples’ views of domestic robots in order to aid the development of an initial design specification for domestic or servant robots. 
Kanda et al. (2004) presents results from a longitudinal HRI trial with a robot as a social partner and peer tutor aiding children learning English.",
"title": ""
},
{
"docid": "5d002ab84e1a6034d2751f0807d914ac",
"text": "We live in a world with a population of more than 7.1 Billion, have we ever imagine how many Leaders do we have? Yes, most of us are followers; we live in a world where we follow what have been commanded. The intension of this paper is to equip everyone with some knowledge to know how we can identify who leaders are, are you one of them, and how can we help our-selves and other develop leadership qualities. The Model highlights various traits which are very necessary for leadership. This paper have been investigate and put together after probing almost 30 other research papers. The Principal result we arrived on was that the major/ essential traits which are identified in a Leader are Honesty, Integrity, Drive (Achievement, Motivation, Ambition, Energy, Tenacity and Initiative), Self Confidence, Vision and Cognitive Ability. The Key finding also says that the people with such qualities are not necessary to be in politics, but they are from various walks of life such as major organization, different culture, background, education and ethnicities. Also we found out that just possessing of such traits alone does not guarantee one leadership success as evidence shows that effective leaders are different in nature from most of the other people in certain key respects. So, let us go through the paper to enhance out our mental abilities to search for the Leaders out there.",
"title": ""
},
{
"docid": "5b9c12c1d65ab52d1a7bb6575c6c0bb1",
"text": "The purpose of image enhancement is to process an acquired image for better contrast and visibility of features of interest for visual examination as well as subsequent computer-aided analysis and diagnosis. Therefore, we have proposed an algorithm for medical images enhancement. In the study, we used top-hat transform, contrast limited histogram equalization and anisotropic diffusion filter methods. The system results are quite satisfactory for many different medical images like lung, breast, brain, knee and etc.",
"title": ""
},
{
"docid": "9bbee4d4c1040b5afd92910ce23d5ba5",
"text": "BACKGROUND\nNovel interventions for treatment-resistant depression (TRD) in adolescents are urgently needed. Ketamine has been studied in adults with TRD, but little information is available for adolescents. This study investigated efficacy and tolerability of intravenous ketamine in adolescents with TRD, and explored clinical response predictors.\n\n\nMETHODS\nAdolescents, 12-18 years of age, with TRD (failure to respond to two previous antidepressant trials) were administered six ketamine (0.5 mg/kg) infusions over 2 weeks. Clinical response was defined as a 50% decrease in Children's Depression Rating Scale-Revised (CDRS-R); remission was CDRS-R score ≤28. Tolerability assessment included monitoring vital signs and dissociative symptoms using the Clinician-Administered Dissociative States Scale (CADSS).\n\n\nRESULTS\nThirteen participants (mean age 16.9 years, range 14.5-18.8 years, eight biologically male) completed the protocol. Average decrease in CDRS-R was 42.5% (p = 0.0004). Five (38%) adolescents met criteria for clinical response. Three responders showed sustained remission at 6-week follow-up; relapse occurred within 2 weeks for the other two responders. Ketamine infusions were generally well tolerated; dissociative symptoms and hemodynamic symptoms were transient. Higher dose was a significant predictor of treatment response.\n\n\nCONCLUSIONS\nThese results demonstrate the potential role for ketamine in treating adolescents with TRD. Limitations include the open-label design and small sample; future research addressing these issues are needed to confirm these results. Additionally, evidence suggested a dose-response relationship; future studies are needed to optimize dose. Finally, questions remain regarding the long-term safety of ketamine as a depression treatment; more information is needed before broader clinical use.",
"title": ""
},
{
"docid": "6e77a99b6b0ddf18560580fed1ca5bbe",
"text": "Theoretical analysis of the connection between taxation and risktaking has mainly been concerned with the effect of taxes on portfolio decisions of consumers, Mossin (1968b) and Stiglitz (1969). However, there are some problems which are not naturally classified under this heading and which, although of considerable practical interest, have been left out of the theoretical discussions. One such problem is tax evasion. This takes many forms, and one can hardly hope to give a completely general analysis of all these. Our objective in this paper is therefore the more limited one of analyzing the individual taxpayer’s decision on whether and to what extent to avoid taxes by deliberate underreporting. On the one hand our approach is related to the studies of economics of criminal activity, as e.g. in the papers by Becker ( 1968) and by Tulkens and Jacquemin (197 1). On the other hand it is related to the analysis of optimal portfolio and insurance policies in the economics of uncertainty, as in the work by Arrow ( 1970), Mossin ( 1968a) and several others. We shall start by considering a simple static model where this decision is the only one with which the individual is concerned, so that we ignore the interrelationships that probably exist with other types of economic choices. After a detailed study of this simple case (sections",
"title": ""
}
] | scidocsrr |
ce005239bc1f2180ad8508470e4a168d | Agent-based decision-making process in airport ground handling management | [
{
"docid": "b20aa2222759644b4b60b5b450424c9e",
"text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "36b609f1c748154f0f6193c6578acec9",
"text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "df7a68ebb9bc03d8a73a54ab3474373f",
"text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.",
"title": ""
},
{
"docid": "720a3d65af4905cbffe74ab21d21dd3f",
"text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.",
"title": ""
},
{
"docid": "6f1e71399e5786eb9c3923a1e967cd8f",
"text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23",
"title": ""
},
{
"docid": "39d15901cd5fbd1629d64a165a94c5f5",
"text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.",
"title": ""
},
{
"docid": "01e064e0f2267de5a26765f945114a6e",
"text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples",
"title": ""
},
{
"docid": "4d445832d38c288b1b59a3df7b38eb1b",
"text": "UNLABELLED\nThe aim of this prospective study was to assess the predictive value of (18)F-FDG PET/CT imaging for pathologic response to neoadjuvant chemotherapy (NACT) and outcome in inflammatory breast cancer (IBC) patients.\n\n\nMETHODS\nTwenty-three consecutive patients (51 y ± 12.7) with newly diagnosed IBC, assessed by PET/CT at baseline (PET1), after the third course of NACT (PET2), and before surgery (PET3), were included. The patients were divided into 2 groups according to pathologic response as assessed by the Sataloff classification: pathologic complete response for complete responders (stage TA and NA or NB) and non-pathologic complete response for noncomplete responders (not stage A for tumor or not stage NA or NB for lymph nodes). In addition to maximum standardized uptake value (SUVmax) measurements, a global breast metabolic tumor volume (MTV) was delineated using a semiautomatic segmentation method. Changes in SUVmax and MTV between PET1 and PET2 (ΔSUV1-2; ΔMTV1-2) and PET1 and PET3 (ΔSUV1-3; ΔMTV1-3) were measured.\n\n\nRESULTS\nMean SUVmax on PET1, PET2, and PET3 did not statistically differ between the 2 pathologic response groups. On receiver-operating-characteristic analysis, a 72% cutoff for ΔSUV1-3 provided the best performance to predict residual disease, with sensitivity, specificity, and accuracy of 61%, 80%, and 65%, respectively. On univariate analysis, the 72% cutoff for ΔSUV1-3 was the best predictor of distant metastasis-free survival (P = 0.05). On multivariate analysis, the 72% cutoff for ΔSUV1-3 was an independent predictor of distant metastasis-free survival (P = 0.01).\n\n\nCONCLUSION\nOur results emphasize the good predictive value of change in SUVmax between baseline and before surgery to assess pathologic response and survival in IBC patients undergoing NACT.",
"title": ""
},
{
"docid": "53a67740e444b5951bc6ab257236996e",
"text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.",
"title": ""
},
{
"docid": "c7160e93c9cce017adc1200dc7d597f2",
"text": "The transcription factor, nuclear factor erythroid 2 p45-related factor 2 (Nrf2), acts as a sensor of oxidative or electrophilic stresses and plays a pivotal role in redox homeostasis. Oxidative or electrophilic agents cause a conformational change in the Nrf2 inhibitory protein Keap1 inducing the nuclear translocation of the transcription factor which, through its binding to the antioxidant/electrophilic response element (ARE/EpRE), regulates the expression of antioxidant and detoxifying genes such as heme oxygenase 1 (HO-1). Nrf2 and HO-1 are frequently upregulated in different types of tumours and correlate with tumour progression, aggressiveness, resistance to therapy, and poor prognosis. This review focuses on the Nrf2/HO-1 stress response mechanism as a promising target for anticancer treatment which is able to overcome resistance to therapies.",
"title": ""
},
{
"docid": "f2c203e9364fee062747468dc7995429",
"text": "Microinverters are module-level power electronic (MLPE) systems that are expected to have a service life more than 25 years. The general practice for providing assurance in long-term reliability under humid climatic conditions is to subject the microinverters to ‘damp heat test’ at 85°C/85%RH for 1000hrs as recommended in lEC 61215 standard. However, there is limited understanding on the correlation between the said ‘damp heat’ test and field conditions for microinverters. In this paper, a physics-of-failure (PoF)-based approach is used to correlate damp heat test to field conditions. Results of the PoF approach indicates that even 3000hrs at 85°C/85%RH may not be sufficient to guarantee 25-years' service life in certain places in the world. Furthermore, we also demonstrate that use of Miami, FL weathering data as benchmark for defining damp heat test durations will not be sufficient to guarantee 25 years' service life. Finally, when tests were conducted at 85°C/85%RH for more than 3000hrs, it was found that the PV connectors are likely to fail before the actual power electronics could fail.",
"title": ""
},
{
"docid": "bd8f4d5181d0b0bcaacfccd6fb0edd8b",
"text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. Among their potential industrial applications are authenticating of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.",
"title": ""
},
{
"docid": "f44b5199f93d4b441c125ac55e4e0497",
"text": "A modified method for better superpixel generation based on simple linear iterative clustering (SLIC) is presented and named BSLIC in this paper. By initializing cluster centers in hexagon distribution and performing k-means clustering in a limited region, the generated superpixels are shaped into regular and compact hexagons. The additional cluster centers are initialized as edge pixels to improve boundary adherence, which is further promoted by incorporating the boundary term into the distance calculation of the k-means clustering. Berkeley Segmentation Dataset BSDS500 is used to qualitatively and quantitatively evaluate the proposed BSLIC method. Experimental results show that BSLIC achieves an excellent compromise between boundary adherence and regularity of size and shape. In comparison with SLIC, the boundary adherence of BSLIC is increased by at most 12.43% for boundary recall and 3.51% for under segmentation error.",
"title": ""
},
{
"docid": "54bee01d53b8bcb6ca067493993b4ff3",
"text": "Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL) to model delayed reward with a log-linear function approximation of residual future score improvement. Our method provides dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.",
"title": ""
},
{
"docid": "6f1fc6a07d0beb235f5279e17a46447f",
"text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But, it has many limitations such as inaccurate extraction to essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of timestamp approach with Naïve Bayesian Classification approach for multidocument text summarization. The timestamp provides the summary an ordered look, which achieves the coherent looking summary. It extracts the more relevant information from the multiple documents. Here, scoring strategy is also used to calculate the score for the words to obtain the word frequency. The higher linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents the comparison between the proposed methods with the existing MEAD algorithm. The timestamp procedure is also applied on the MEAD algorithm and the results are examined with the proposed method. The results show that the proposed method results in lesser time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method results in better precision, recall, and F-score than the existing clustering with lexical chaining approach.",
"title": ""
},
{
"docid": "fad164e21c7ec013450a8b96d75d9457",
"text": "Pinterest is a visual discovery tool for collecting and organizing content on the Web with over 70 million users. Users “pin” images, videos, articles, products, and other objects they find on the Web, and organize them into boards by topic. Other users can repin these and also follow other users or boards. Each user organizes things differently, and this produces a vast amount of human-curated content. For example, someone looking to decorate their home might pin many images of furniture that fits their taste. These curated collections produce a large number of associations between pins, and we investigate how to leverage these associations to surface personalized content to users. Little work has been done on the Pinterest network before due to lack of availability of data. We first performed an analysis on a representative sample of the Pinterest network. After analyzing the network, we created recommendation systems, suggesting pins that users would be likely to repin or like based on their previous interactions on Pinterest. We created recommendation systems using four approaches: a baseline recommendation system using the power law distribution of the images; a content-based filtering algorithm; and two collaborative filtering algorithms, one based on one-mode projection of a bipartite graph, and the second using a label propagation approach.",
"title": ""
},
{
"docid": "05477664471a71eebc26d59aed9b0350",
"text": "This article serves as a quick reference for respiratory alkalosis. Guidelines for analysis and causes, signs, and a stepwise approach are presented.",
"title": ""
},
{
"docid": "9078698db240725e1eb9d1f088fb05f4",
"text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.",
"title": ""
},
{
"docid": "cb641fc639b86abadec4f85efc226c14",
"text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "d775cdc31c84d94d95dc132b88a37fae",
"text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain a sharper filtering results in the edge regions and more smooth results in the smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.",
"title": ""
},
{
"docid": "4630ade03760cb8ec1da11b16703b3f1",
"text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.",
"title": ""
},
{
"docid": "6cb480efca7138e26ce484eb28f0caec",
"text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients. This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm. ∗Economics Department, University of Virginia, Charlottesville, VA and RAND Corporation †MIT Sloan School of Management, MIT, Cambridge, MA and NBER ‡All errors are our own.",
"title": ""
}
] | scidocsrr |
ffbcb3fd0a81574fee47ea757dcc44e4 | Estimation accuracy of a vector-controlled frequency converter used in the determination of the pump system operating state | [
{
"docid": "63755caaaad89e0ef6a687bb5977f5de",
"text": "rotor field orientation stator field orientation stator model rotor model MRAS, observers, Kalman filter parasitic properties field angle estimation Abstract — Controlled induction motor drives without mechanical speed sensors at the motor shaft have the attractions of low cost and high reliability. To replace the sensor, the information on the rotor speed is extracted from measured stator voltages and currents at the motor terminals. Vector controlled drives require estimating the magnitude and spatial orientation of the fundamental magnetic flux waves in the stator or in the rotor. Open loop estimators or closed loop observers are used for this purpose. They differ with respect to accuracy, robustness, and sensitivity against model parameter variations. Dynamic performance and steady-state speed accuracy in the low speed range can be achieved by exploiting parasitic effects of the machine. The overview in this paper uses signal flow graphs of complex space vector quantities to provide an insightful description of the systems used in sensorless control of induction motors.",
"title": ""
},
{
"docid": "b8ce74fc2a02a1a5c2d93e2922529bb0",
"text": "The basic evolution of direct torque control from other drive types is explained. Qualitative comparisons with other drives are included. The basic concepts behind direct torque control are clarified. An explanation of direct self-control and the field orientation concepts implemented in the adaptive motor model block is presented. The reliance of the control method on fast processing techniques is stressed. The theoretical foundations for the control concept are provided in summary format. Information on the ancillary control blocks outside the basic direct torque control is given. The implementation of special functions directly related to the control approach is described. Finally, performance data from an actual system is presented.",
"title": ""
}
] | [
{
"docid": "530ef3f5d2f7cb5cc93243e2feb12b8e",
"text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.",
"title": ""
},
{
"docid": "f6679ca9f6c9efcb4093a33af15176d3",
"text": "This paper reports our recent finding that a laser that is radiated on a thin light-absorbing elastic medium attached on the skin can elicit a tactile sensation of mechanical tap. Laser radiation to the elastic medium creates inner elastic waves on the basis of thermoelastic effects, which subsequently move the medium and stimulate the skin. We characterize the associated stimulus by measuring its physical properties. In addition, the perceptual identity of the stimulus is confirmed by comparing it to mechanical and electrical stimuli by means of perceptual spaces. All evidence claims that indirect laser radiation conveys a sensation of short mechanical tap with little individual difference. To the best of our knowledge, this is the first study that discovers the possibility of using indirect laser radiation for mid-air tactile rendering.",
"title": ""
},
{
"docid": "63efc8aecf9b28b2a2bbe4514ed3a7fe",
"text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the statistics and chemometrics for analytical chemistry book. You can open the device and get the book by on-line.",
"title": ""
},
{
"docid": "53c962bd71abbe13d59e03e01c19d82e",
"text": "Correctness of SQL queries is usually tested by executing the queries on one or more datasets. Erroneous queries are often the results of small changes or mutations of the correct query. A mutation Q $$'$$ ′ of a query Q is killed by a dataset D if Q(D) $$\\ne $$ ≠ Q $$'$$ ′ (D). Earlier work on the XData system showed how to generate datasets that kill all mutations in a class of mutations that included join type and comparison operation mutations. In this paper, we extend the XData data generation techniques to handle a wider variety of SQL queries and a much larger class of mutations. We have also built a system for grading SQL queries using the datasets generated by XData. We present a study of the effectiveness of the datasets generated by the extended XData approach, using a variety of queries including queries submitted by students as part of a database course. We show that the XData datasets outperform predefined datasets as well as manual grading done earlier by teaching assistants, while also avoiding the drudgery of manual correction. Thus, we believe that our techniques will be of great value to database course instructors and TAs, particularly to those of MOOCs. It will also be valuable to database application developers and testers for testing SQL queries.",
"title": ""
},
{
"docid": "55f80d7b459342a41bb36a5c0f6f7e0d",
"text": "A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.",
"title": ""
},
{
"docid": "17ccae5f98711c8698f0fb4a449a591f",
"text": "Blind image deconvolution: theory and applications Images are ubiquitous and indispensable in science and everyday life. Mirroring the abilities of our own human visual system, it is natural to display observations of the world in graphical form. Images are obtained in areas 1 2 Blind Image Deconvolution: problem formulation and existing approaches ranging from everyday photography to astronomy, remote sensing, medical imaging, and microscopy. In each case, there is an underlying object or scene we wish to observe; the original or true image is the ideal representation of the observed scene. Yet the observation process is never perfect: there is uncertainty in the measurements , occurring as blur, noise, and other degradations in the recorded images. Digital image restoration aims to recover an estimate of the original image from the degraded observations. The key to being able to solve this ill-posed inverse problem is proper incorporation of prior knowledge about the original image into the restoration process. Classical image restoration seeks an estimate of the true image assuming the blur is known. In contrast, blind image restoration tackles the much more difficult, but realistic, problem where the degradation is unknown. In general, the degradation is nonlinear (including, for example, saturation and quantization) and spatially varying (non uniform motion, imperfect optics); however, for most of the work, it is assumed that the observed image is the output of a Linear Spatially Invariant (LSI) system to which noise is added. Therefore it becomes a Blind Deconvolution (BD) problem, with the unknown blur represented as a Point Spread Function (PSF). Classical restoration has matured since its inception, in the context of space exploration in the 1960s, and numerous techniques can be found in the literature (for recent reviews see [1, 2]). These differ primarily in the prior information about the image they include to perform the restoration task. The earliest algorithms to tackle the BD problem appeared as long ago as the mid-1970s [3, 4], and attempted to identify known patterns in the blur; a small but dedicated effort followed through the late 1980s (see for instance [5, 6, 7, 8, 9]), and a resurgence was seen in the 1990s (see the earlier reviews in [10, 11]). Since then, the area has been extensively explored by the signal processing , astronomical, and optics communities. Many of the BD algorithms have their roots in estimation theory, linear algebra, and numerical analysis. An important question …",
"title": ""
},
{
"docid": "b386c24fc4412d050c1fb71692540b45",
"text": "In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a (1 + ) approximation of the maximum density with high probability; the algorithm uses O( −2npolylog n) space, processes each stream update in polylog(n) time, and uses poly(n) post-processing time where n is the number of nodes. The space used by our algorithm matches the lower bound of Bahmani et al. (PVLDB 2012) up to a poly-logarithmic factor for constant . The best existing results for this problem were established recently by Bhattacharya et al. (STOC 2015). They presented a (2 + ) approximation algorithm using similar space and another algorithm that both processed each update and maintained a (4 + ) approximation of the current maximum density in polylog(n) time per-update.",
"title": ""
},
{
"docid": "494b375064fbbe012b382d0ad2db2900",
"text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. So, what does the current evidence say?",
"title": ""
},
{
"docid": "f11ee9f354936eefa539d9aa518ac6b1",
"text": "This paper presents a modified priority based probe algorithm for deadlock detection and resolution in distributed database systems. The original priority based probe algorithm was presented by Sinha and Natarajan based on work by Chandy, Misra, and Haas. Various examples are used to show that the original priority based algorithm either fails to detect deadlocks or reports deadlocks which do not exist in many situations. A modified algorithm which eliminates these problems is proposed. This algorithm has been tested through simulation and appears to be error free. Finally, the performance of the modified algorithm is briefly discussed.",
"title": ""
},
{
"docid": "225fa1a3576bc8cea237747cb25fc38d",
"text": "Common video systems for laparoscopy provide the surgeon a two-dimensional image (2D), where information on spatial depth can be derived only from secondary spatial depth cues and experience. Although the advantage of stereoscopy for surgical task efficiency has been clearly shown, several attempts to introduce three-dimensional (3D) video systems into clinical routine have failed. The aim of this study is to evaluate users’ performances in standardised surgical phantom model tasks using 3D HD visualisation compared with 2D HD regarding precision and working speed. This comparative study uses a 3D HD video system consisting of a dual-channel laparoscope, a stereoscopic camera, a camera controller with two separate outputs and a wavelength multiplex stereoscopic monitor. Each of 20 medical students and 10 laparoscopically experienced surgeons (more than 100 laparoscopic cholecystectomies each) pre-selected in a stereo vision test were asked to perform one task to familiarise themselves with the system and subsequently a set of five standardised tasks encountered in typical surgical procedures. The tasks were performed under either 3D or 2D conditions at random choice and subsequently repeated under the other vision condition. Predefined errors were counted, and time needed was measured. In four of the five tasks the study participants made fewer mistakes in 3D than in 2D vision. In four of the tasks they needed significantly more time in the 2D mode. Both the student group and the surgeon group showed similarly improved performance, while the surgeon group additionally saved more time on difficult tasks. This study shows that 3D HD using a state-of-the-art 3D monitor permits superior task efficiency, even as compared with the latest 2D HD video systems.",
"title": ""
},
{
"docid": "13449fab143effbaf5408ce4abcdbeea",
"text": "Extractive summarization typically uses sentences as summarization units. In contrast, joint compression and summarization can use smaller units such as words and phrases, resulting in summaries containing more information. The goal of compressive summarization is to find a subset of words that maximize the total score of concepts and cutting dependency arcs under the grammar constraints and summary length constraint. We propose an efficient decoding algorithm for fast compressive summarization using graph cuts. Our approach first relaxes the length constraint using Lagrangian relaxation. Then we propose to bound the relaxed objective function by the supermodular binary quadratic programming problem, which can be solved efficiently using graph max-flow/min-cut. Since finding the tightest lower bound suffers from local optimality, we use convex relaxation for initialization. Experimental results on TAC2008 dataset demonstrate our method achieves competitive ROUGE score and has good readability, while is much faster than the integer linear programming (ILP) method.",
"title": ""
},
{
"docid": "b72f5bfc24139c309c196d80956a2241",
"text": "The SIMC method for PID controller tuning (Skogestad 2003) has already found widespread industrial usage in Norway. This chapter gives an updated overview of the method, mainly from a user’s point of view. The basis for the SIMC method is a first-order plus time delay model, and we present a new effective method to obtain the model from a simple closed-loop experiment. An important advantage of the SIMC rule is that there is a single tuning parameter (τc) that gives a good balance between the PID parameters (Kc,τI ,τD), and which can be adjusted to get a desired trade-off between performance (“tight” control) and robustness (“smooth” control). Compared to the original paper of Skogestad (2003), the choice of the tuning parameter τc is discussed in more detail, and lower and upper limits are presented for tight and smooth tuning, respectively. Finally, the optimality of the SIMC PI rules is studied by comparing the performance (IAE) versus robustness (Ms) trade-off with the Pareto-optimal curve. The difference is small which leads to the conclusion that the SIMC rules are close to optimal. The only exception is for pure time delay processes, so we introduce the “improved” SIMC rule to improve the performance for this case. Chapter for PID book (planned: Springer, 2011, Editor: R. Vilanova) This version: September 7, 2011 Sigurd Skogestad Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, e-mail: [email protected] Chriss Grimholt Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim",
"title": ""
},
{
"docid": "91eecde9d0e3b67d7af0194782923ead",
"text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.",
"title": ""
},
{
"docid": "b856dcd9db802260ca22e7b426b87afa",
"text": "This research seeks to validate a comprehensive model of quality in the context of e-business systems. It also extends the UTAUT model with e-quality, trust, and satisfaction constructs. The proposed model brings together extant research on systems and data quality, trust, and satisfaction and provides an important cluster of antecedents to eventual technology acceptance via constructs of behavioral intention to use and actual system usage.",
"title": ""
},
{
"docid": "369e5fb60d3afc993821159b64bc3560",
"text": "For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system population, storage capacity and consumption, and degree of file modification. We present a generative model that explains the namespace structure and the distribution of directory sizes. We find significant temporal trends relating to the popularity of certain file types, the origin of file content, the way the namespace is used, and the degree of variation among file systems, as well as more pedestrian changes in size and capacities. We give examples of consequent lessons for designers of file systems and related software.",
"title": ""
},
{
"docid": "b620dd7e1db47db6c37ea3bcd2d83744",
"text": "Software failures due to configuration errors are commonplace as computer systems continue to grow larger and more complex. Troubleshooting these configuration errors is a major administration cost, especially in server clusters where problems often go undetected without user interference. This paper presents CODE–a tool that automatically detects software configuration errors. Our approach is based on identifying invariant configuration access rules that predict what access events follow what contexts. It requires no source code, application-specific semantics, or heavyweight program analysis. Using these rules, CODE can sift through a voluminous number of events and detect deviant program executions. This is in contrast to previous approaches that focus on only diagnosis. In our experiments, CODE successfully detected a real configuration error in one of our deployment machines, in addition to 20 user-reported errors that we reproduced in our test environment. When analyzing month-long event logs from both user desktops and production servers, CODE yielded a low false positive rate. The efficiency ofCODE makes it feasible to be deployed as a practical management tool with low overhead.",
"title": ""
},
{
"docid": "3ea104489fb5ac5b3e671659f8498530",
"text": "In this paper, we present our work of humor recognition on Twitter, which will facilitate affect and sentimental analysis in the social network. The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. We believe our novel findings will inform and inspire the burgeoning field of computational humor research in the social media.",
"title": ""
},
{
"docid": "8d24516bda25e60bf68362a88668f675",
"text": "Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from it, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views",
"title": ""
},
{
"docid": "119215115226e0bd3ee4c2762433aad5",
"text": "Super-coiled polymer (SCP) artificial muscles have many attractive properties, such as high energy density, large contractions, and good dynamic range. To fully utilize them for robotic applications, it is necessary to determine how to scale them up effectively. Bundling of SCP actuators, as though they are individual threads in woven textiles, can demonstrate the versatility of SCP actuators and artificial muscles in general. However, this versatility comes with a need to understand how different bundling techniques can be achieved with these actuators and how they may trade off in performance. This letter presents the first quantitative comparison, analysis, and modeling of bundled SCP actuators. By exploiting weaving and braiding techniques, three new types of bundled SCP actuators are created: woven bundles, two-dimensional, and three-dimensional braided bundles. The bundle performance is adjustable by employing different numbers of individual actuators. Experiments are conducted to characterize and compare the force, strain, and speed of different bundles, and a linear model is proposed to predict their performance. This work lays the foundation for model-based SCP-actuated textiles, and physically scaling robots that employ SCP actuators as the driving mechanism.",
"title": ""
}
] | scidocsrr |
14f8b7ea36774ef990759bc644743083 | Neural Variational Inference for Text Processing | [
{
"docid": "55b9284f9997b18d3b1fad9952cd4caa",
"text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.",
"title": ""
},
{
"docid": "8b6832586f5ec4706e7ace59101ea487",
"text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
}
] | [
{
"docid": "c73af0945ac35847c7a86a7f212b4d90",
"text": "We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms.",
"title": ""
},
{
"docid": "d35d96730b71db044fccf4c8467ff081",
"text": "Image steganalysis is to discriminate innocent images and those suspected images with hidden messages. This task is very challenging for modern adaptive steganography, since modifications due to message hiding are extremely small. Recent studies show that Convolutional Neural Networks (CNN) have demonstrated superior performances than traditional steganalytic methods. Following this idea, we propose a novel CNN model for image steganalysis based on residual learning. The proposed Deep Residual learning based Network (DRN) shows two attractive properties than existing CNN based methods. First, the model usually contains a large number of network layers, which proves to be effective to capture the complex statistics of digital images. Second, the residual learning in DRN preserves the stego signal coming from secret messages, which is extremely beneficial for the discrimination of cover images and stego images. Comprehensive experiments on standard dataset show that the DRN model can detect the state of arts steganographic algorithms at a high accuracy. It also outperforms the classical rich model method and several recently proposed CNN based methods.",
"title": ""
},
{
"docid": "738a69ad1006c94a257a25c1210f6542",
"text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.",
"title": ""
},
{
"docid": "5f1c1baca54a8af4bf9c96818ad5b688",
"text": "The spatial organisation of museums and its influence on the visitor experience has been the subject of numerous studies. Previous research, despite reporting some actual behavioural correlates, rarely had the possibility to investigate the cognitive processes of the art viewers. In the museum context, where spatial layout is one of the most powerful curatorial tools available, attention and memory can be measured as a means of establishing whether or not the gallery fulfils its function as a space for contemplating art. In this exploratory experiment, 32 participants split into two groups explored an experimental, non-public exhibition and completed two unanticipated memory tests afterwards. The results show that some spatial characteristics of an exhibition can inhibit the recall of pictures and shift the focus to perceptual salience of the artworks.",
"title": ""
},
{
"docid": "62da9a85945652f195086be0ef780827",
"text": "Fingerprint biometric is one of the most successful biometrics applied in both forensic law enforcement and security applications. Recent developments in fingerprint acquisition technology have resulted in touchless live scan devices that generate 3D representation of fingerprints, and thus can overcome the deformation and smearing problems caused by conventional contact-based acquisition techniques. However, there are yet no 3D full fingerprint databases with their corresponding 2D prints for fingerprint biometric research. This paper presents a 3D fingerprint database we have established in order to investigate the 3D fingerprint biometric comprehensively. It consists of 3D fingerprints as well as their corresponding 2D fingerprints captured by two commercial fingerprint scanners from 150 subjects in Australia. Besides, we have tested the performance of 2D fingerprint verification, 3D fingerprint verification, and 2D to 3D fingerprint verification. The results show that more work is needed to improve the performance of 2D to 3D fingerprint verification. In addition, the database is expected to be released publicly in late 2014.",
"title": ""
},
{
"docid": "49e616b9db5ba5003ae01abfb6ed3e16",
"text": "BACKGROUND\nAlthough substantial evidence suggests that stressful life events predispose to the onset of episodes of depression and anxiety, the essential features of these events that are depressogenic and anxiogenic remain uncertain.\n\n\nMETHODS\nHigh contextual threat stressful life events, assessed in 98 592 person-months from 7322 male and female adult twins ascertained from a population-based registry, were blindly rated on the dimensions of humiliation, entrapment, loss, and danger and their categories. Onsets of pure major depression (MD), pure generalized anxiety syndrome (GAS) (defined as generalized anxiety disorder with a 2-week minimum duration), and mixed MD-GAS episodes were examined using logistic regression.\n\n\nRESULTS\nOnsets of pure MD and mixed MD-GAS were predicted by higher ratings of loss and humiliation. Onsets of pure GAS were predicted by higher ratings of loss and danger. High ratings of entrapment predicted only onsets of mixed episodes. The loss categories of death and respondent-initiated separation predicted pure MD but not pure GAS episodes. Events with a combination of humiliation (especially other-initiated separation) and loss were more depressogenic than pure loss events, including death. No sex differences were seen in the prediction of episodes of illness by event categories.\n\n\nCONCLUSIONS\nIn addition to loss, humiliating events that directly devalue an individual in a core role were strongly linked to risk for depressive episodes. Event dimensions and categories that predispose to pure MD vs pure GAS episodes can be distinguished with moderate specificity. The event dimensions that preceded mixed MD-GAS episodes were largely the sum of those that preceded pure MD and pure GAS episodes.",
"title": ""
},
{
"docid": "865306ad6f5288cf62a4082769e8068a",
"text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.",
"title": ""
},
{
"docid": "485f7998056ef7a30551861fad33bef4",
"text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.",
"title": ""
},
{
"docid": "e104544e8ac61ea6d77415df1deeaf81",
"text": "This thesis is devoted to marker-less 3D human motion tracking in calibrated and synchronized multicamera systems. Pose estimation is based on a 3D model, which is transformed into the image plane and then rendered. Owing to elaborated techniques the tracking of the full body has been achieved in real-time via dynamic optimization or dynamic Bayesian filtering. The objective function of a particle swarm optimization algorithm and the observation model of a particle filter are based on matching between the rendered 3D models in the required poses and image features representing the extracted person. In such an approach the main part of the computational overload is associated with the rendering of 3D models in hypothetical poses as well as determination of value of objective function. Effective methods for rendering of 3D models in real-time with support of OpenGL as well as parallel methods for determining the objective function on the GPU were developed. The elaborated solutions permit 3D tracking of full body motion in real-time.",
"title": ""
},
{
"docid": "ea8b083238554866d36ac41b9c52d517",
"text": "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .",
"title": ""
},
{
"docid": "d44080fc547355ff8389f9da53d03c45",
"text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.",
"title": ""
},
{
"docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7",
"text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].",
"title": ""
},
{
"docid": "dbfbdd4866d7fd5e34620c82b8124c3a",
"text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. 
According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. “Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. 
b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. (11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference ",
"title": ""
},
{
"docid": "5e60c55f419c7d62f4eeb9165e7f107c",
"text": "Background : Agile software development has become a popular way of developing software. Scrum is the most frequently used agile framework, but it is often reported to be adapted in practice. Objective: Thus, we aim to understand how Scrum is adapted in different contexts and what are the reasons for these changes. Method : Using a structured interview guideline, we interviewed ten German companies about their concrete usage of Scrum and analysed the results qualitatively. Results: All companies vary Scrum in some way. The least variations are in the Sprint length, events, team size and requirements engineering. Many users varied the roles, effort estimations and quality assurance. Conclusions: Many variations constitute a substantial deviation from Scrum as initially proposed. For some of these variations, there are good reasons. Sometimes, however, the variations are a result of a previous non-agile, hierarchical organisation.",
"title": ""
},
{
"docid": "72e9e772ede3d757122997d525d0f79c",
"text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.",
"title": ""
},
{
"docid": "021bf98bc0ff722d9a44c5ef5e73f3c8",
"text": "BACKGROUND\nMalignant bowel obstruction is a highly symptomatic, often recurrent, and sometimes refractory condition in patients with intra-abdominal tumor burden. Gastro-intestinal symptoms and function may improve with anti-inflammatory, anti-secretory, and prokinetic/anti-nausea combination medical therapy.\n\n\nOBJECTIVE\nTo describe the effect of octreotide, metoclopramide, and dexamethasone in combination on symptom burden and bowel function in patients with malignant bowel obstruction and dysfunction.\n\n\nDESIGN\nA retrospective case series of patients with malignant bowel obstruction (MBO) and malignant bowel dysfunction (MBD) treated by a palliative care consultation service with octreotide, metoclopramide, and dexamethasone. Outcomes measures were nausea, pain, and time to resumption of oral intake.\n\n\nRESULTS\n12 cases with MBO, 11 had moderate/severe nausea on presentation. 100% of these had improvement in nausea by treatment day #1. 100% of patients with moderate/severe pain improved to tolerable level by treatment day #1. The median time to resumption of oral intake was 2 days (range 1-6 days) in the 8 cases with evaluable data. Of 7 cases with MBD, 6 had For patients with malignant bowel dysfunction, of those with moderate/severe nausea. 5 of 6 had subjective improvement by day#1. Moderate/severe pain improved to tolerable levels in 5/6 by day #1. Of the 4 cases with evaluable data on resumption of PO intake, time to resume PO ranged from 1-4 days.\n\n\nCONCLUSION\nCombination medical therapy may provide rapid improvement in symptoms associated with malignant bowel obstruction and dysfunction.",
"title": ""
},
{
"docid": "ebea79abc60a5d55d0397d21f54cc85e",
"text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.",
"title": ""
},
{
"docid": "72b4ca7dbbd4cc0cca0779b1e9fdafe4",
"text": "As population structure can result in spurious associations, it has constrained the use of association studies in human and plant genetics. Association mapping, however, holds great promise if true signals of functional association can be separated from the vast number of false signals generated by population structure. We have developed a unified mixed-model approach to account for multiple levels of relatedness simultaneously as detected by random genetic markers. We applied this new approach to two samples: a family-based sample of 14 human families, for quantitative gene expression dissection, and a sample of 277 diverse maize inbred lines with complex familial relationships and population structure, for quantitative trait dissection. Our method demonstrates improved control of both type I and type II error rates over other methods. As this new method crosses the boundary between family-based and structured association samples, it provides a powerful complement to currently available methods for association mapping.",
"title": ""
},
{
"docid": "b466803c9a9be5d38171ece8d207365e",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
{
"docid": "cc7179392883ad12d42469fc7e1b3e01",
"text": "A low-profile broadband dual-polarized patch subarray is designed in this letter for a highly integrated X-band synthetic aperture radar payload on a small satellite. The proposed subarray is lightweight and has a low profile due to its tile structure realized by a multilayer printed circuit board process. The measured results confirm that the subarray yields 14-dB bandwidths from 9.15 to 10.3 GHz for H-pol and from 9.35 to 10.2 GHz for V-pol. The isolation remains better than 40 dB. The average realized gains are approximately 13 dBi for both polarizations. The sidelobe levels are 25 dB for H-pol and 20 dB for V-pol. The relative cross-polarization levels are 30 dB within the half-power beamwidth range.",
"title": ""
}
] | scidocsrr |
93f55dd33860b0763d6a60c00ecb3596 | Socially Aware Networking: A Survey | [
{
"docid": "7fc6e701aacc7d014916b9b47b01be16",
"text": "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.",
"title": ""
}
] | [
{
"docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d",
"text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply TF-IDF metric for calculating content similarity while neglecting their semantic similarity. And most existing search engines which retrieve the microblog list by string matching the search keywords is not competent to provide a reliable list for users when dealing with polysemy and synonym. Besides, treating all the users with same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity with exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking that is the topical relation between search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree(GBDT), then is employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperform a number of existing approaches in discovering higher quality and more related microblogs.",
"title": ""
},
{
"docid": "84a2d26a0987a79baf597508543f39b6",
"text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.",
"title": ""
},
{
"docid": "a4c76e58074a42133a59a31d9022450d",
"text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.",
"title": ""
},
{
"docid": "2bf0219394d87654d2824c805844fcaa",
"text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 [email protected] • [email protected] • [email protected]",
"title": ""
},
{
"docid": "91e8516d2e7e1e9de918251ac694ee08",
"text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.",
"title": ""
},
{
"docid": "a76ba02ef0f87a41cdff1a4046d4bba1",
"text": "This paper proposes two RF self-interference cancellation techniques. Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.",
"title": ""
},
{
"docid": "6f265af3f4f93fcce13563cac14b5774",
"text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.",
"title": ""
},
{
"docid": "e1b536458ddc8603b281bac69e6bd2e8",
"text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.",
"title": ""
},
{
"docid": "288845120cdf96a20850b3806be3d89a",
"text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.",
"title": ""
},
{
"docid": "0b4c076b80d91eb20ef71e63f17e9654",
"text": "Current sports injury reporting systems lack a common conceptual basis. We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.",
"title": ""
},
{
"docid": "915b9627736c6ae916eafcd647cb39af",
"text": "This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities based on simpler activities or movements. Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. In the low-level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high-level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm is introduced to cope with imperfect lower-layers. As a result, the system recognizes human activities including ‘fighting’ and ‘assault’, which are high-level activities that previous systems had difficulties. The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.",
"title": ""
},
{
"docid": "8b863cd49dfe5edc2d27a0e9e9db0429",
"text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.",
"title": ""
},
{
"docid": "d6a6cadd782762e4591447b7dd2c870a",
"text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.",
"title": ""
},
{
"docid": "bed89842ee325f9dc662d63c07f34726",
"text": "Analysis of flows such as human movement can help spatial planners better understand territorial patterns in urban environments. In this paper, we describe FlowSampler, an interactive visual interface designed for spatial planners to gather, extract and analyse human flows in geolocated social media data. Our system adopts a graph-based approach to infer movement pathways from spatial point type data and expresses the resulting information through multiple linked multiple visualisations to support data exploration. We describe two use cases to demonstrate the functionality of our system and characterise how spatial planners utilise it to address analytical task.",
"title": ""
},
{
"docid": "1ddbe5990a1fc4fe22a9788c77307a9f",
"text": "The DENDRAL and Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization ofscientific reasoningand theformalization of scientific knowledge for this purpose. An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems [7]. Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate [18]. From the start, the project has had an applications dimension [9, 10, 27]. It has sought to develop \"expert level\" agents to assist in the solution ofproblems in their discipline that require complex symbolic reasoning. The applications dimension is the focus of this paper. In order to achieve high performance, the DENDRAL programs incorporate large amounts ofknowledge about the area of science to which they are applied, structure elucidation in organic chemistry. A \"smart assistant\" for a chemist needs tobe able toperform many tasks as well as an expert, but need not necessarily understand the domain at the same theoretical level as the expert. The over-all structure elucidation task is described below (Section 2) followed by a description of the role of the DENDRAL programs within that framework (Section 3). The Meta-DENDRAL programs (Section 4) use a weaker body of knowledge about the domain ofmass spectrometry because their task is to formulate rules of mass spectrometry by induction from empirical data. A strong model of the domain would bias therules unnecessarily.",
"title": ""
},
{
"docid": "b41c0a4e2a312d74d9a244e01fc76d66",
"text": "There is a growing interest in studying the adoption of m-payments but literature on the subject is still in its infancy and no empirical research relating to this has been conducted in the context of the UK to date. The aim of this study is to unveil the current situation in m-payment adoption research and provide future research direction through the development of a research model for the examination of factors affecting m-payment adoption in the UK context. Following an extensive search of the literature, this study finds that 186 relationships between independent and dependent variables have been analysed by 32 existing empirical m-payment and m-banking adoption studies. From analysis of these relationships the most significant factors found to influence adoption are uncovered and an extension of UTAUT2 with the addition of perceived risk and trust is proposed to increase the applicability of UTAUT2 to the m-payment context.",
"title": ""
},
{
"docid": "18ef3fbade2856543cae1fcc563c1c43",
"text": "This paper induces the prominence of variegated machine learning techniques adapted so far for the identifying different network attacks and suggests a preferable Intrusion Detection System (IDS) with the available system resources while optimizing the speed and accuracy. With booming number of intruders and hackers in todays vast and sophisticated computerized world, it is unceasingly challenging to identify unknown attacks in promising time with no false positive and no false negative. Principal Component Analysis (PCA) curtails the amount of data to be compared by reducing their dimensions prior to classification that results in reduction of detection time. In this paper, PCA is adopted to reduce higher dimension dataset to lower dimension dataset. It is accomplished by converting network packet header fields into a vector then PCA applied over high dimensional dataset to reduce the dimension. The reduced dimension dataset is tested with Support Vector Machines (SVM), K-Nearest Neighbors (KNN), J48 Tree algorithm, Random Forest Tree classification algorithm, Adaboost algorihm, Nearest Neighbors generalized Exemplars algorithm, Navebayes probabilistic classifier and Voting Features Interval classification algorithm. Obtained results demonstrates detection accuracy, computational efficiency with minimal false alarms, less system resources utilization. Experimental results are compared with respect to detection rate and detection time and found that TREE classification algorithms achieved superior results over other algorithms. The whole experiment is conducted by using KDD99 data set.",
"title": ""
},
{
"docid": "fac3285b06bd12db0cef95bb854d4480",
"text": "The design of a novel and versatile single-port quad-band patch antenna is presented. The antenna is capable of supporting a maximum of four operational sub-bands, with the inherent capability to enhance or suppress any resonance(s) of interest. In addition, circular-polarisation is also achieved at the low frequency band, to demonstrate the polarisation agility. A prototype model of the antenna has been fabricated and its performance experimentally validated. The antenna's single layer and low-profile configuration makes it suitable for mobile user terminals and its cavity-backed feature results in low levels of coupling.",
"title": ""
},
{
"docid": "fa34cdffb421f2c514d5bacbc6776ae9",
"text": "A review on various CMOS voltage level shifters is presented in this paper. A voltage level-shifter shifts the level of input voltage to desired output voltage. Voltage Level Shifter circuits are compared with respect to output voltage level, power consumption and delay. Systems often require voltage level translation devices to allow interfacing between integrated circuit devices built from different voltage technologies. The choice of the proper voltage level translation device depends on many factors and will affect the performance and efficiency of the circuit application.",
"title": ""
},
{
"docid": "14ca9dfee206612e36cd6c3b3e0ca61e",
"text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.",
"title": ""
}
] | scidocsrr |
29fb6d39a7bbf4fceac6f6b6d3a18387 | Designing the digital workplace of the future - what scholars recommend to practitioners | [
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
}
] | [
{
"docid": "1a3cad2f10dd5c6a5aacb3676ca8917a",
"text": "BACKGROUND\nRecent findings suggest that the mental health costs of unemployment are related to both short- and long-term mental health scars. The main policy tools for dealing with young people at risk of labor market exclusion are Active Labor Market Policy programs for youths (youth programs). There has been little research on the potential effects of participation in youth programs on mental health and even less on whether participation in such programs alleviates the long-term mental health scarring caused by unemployment. This study compares exposure to open youth unemployment and exposure to youth program participation between ages 18 and 21 in relation to adult internalized mental health immediately after the end of the exposure period at age 21 and two decades later at age 43.\n\n\nMETHODS\nThe study uses a five wave Swedish 27-year prospective cohort study consisting of all graduates from compulsory school in an industrial town in Sweden initiated in 1981. Of the original 1083 participants 94.3% of those alive were still participating at the 27-year follow up. Exposure to open unemployment and youth programs were measured between ages 18-21. Mental health, indicated through an ordinal level three item composite index of internalized mental health symptoms (IMHS), was measured pre-exposure at age 16 and post exposure at ages 21 and 42. Ordinal regressions of internalized mental health at ages 21 and 43 were performed using the Polytomous Universal Model (PLUM). Models were controlled for pre-exposure internalized mental health as well as other available confounders.\n\n\nRESULTS\nResults show strong and significant relationships between exposure to open youth unemployment and IMHS at age 21 (OR = 2.48, CI = 1.57-3.60) as well as at age 43 (OR = 1.71, CI = 1.20-2.43). No such significant relationship is observed for exposure to youth programs at age 21 (OR = 0.95, CI = 0.72-1.26) or at age 43 (OR = 1.23, CI = 0.93-1.63).\n\n\nCONCLUSIONS\nA considered and consistent active labor market policy directed at youths could potentially reduce the short- and long-term mental health costs of youth unemployment.",
"title": ""
},
{
"docid": "e206ea46c20fb0ceb03ad8b535eadebc",
"text": "Recently, increasing attention has been directed to the study of the speech emotion recognition, in which global acoustic features of an utterance are mostly used to eliminate the content differences. However, the expression of speech emotion is a dynamic process, which is reflected through dynamic durations, energies, and some other prosodic information when one speaks. In this paper, a novel local dynamic pitch probability distribution feature, which is obtained by drawing the histogram, is proposed to improve the accuracy of speech emotion recognition. Compared with most of the previous works using global features, the proposed method takes advantage of the local dynamic information conveyed by the emotional speech. Several experiments on Berlin Database of Emotional Speech are conducted to verify the effectiveness of the proposed method. The experimental results demonstrate that the local dynamic information obtained with the proposed method is more effective for speech emotion recognition than the traditional global features.",
"title": ""
},
{
"docid": "65ed76ddd6f7fd0aea717d2e2643dd16",
"text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "2a3f5f621195c036064e3d8c0b9fc884",
"text": "This paper describes our system for the CoNLL 2016 Shared Task’s supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and performs with overall F-scores of 64.13 for the Dev set, 63.31 for the Test set and 54.69 for the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional Neural Network architectures. After the official submission we enriched our model for Non-Explicit relations by including similarities of explicit connectives with the relation arguments, and part of speech similarities based on modal verbs. This improved our Non-Explicit result by 1.46 points on the Dev set and by 0.36 points on the Blind set.",
"title": ""
},
{
"docid": "80fed8845ca14843855383d714600960",
"text": "In this paper, a methodology is developed to use data acquisition derived from condition monitoring and standard diagnosis for rehabilitation purposes of transformers. The interpretation and understanding of the test data are obtained from international test standards to determine the current condition of transformers. In an attempt to ascertain monitoring priorities, the effective test methods are selected for transformer diagnosis. In particular, the standardization of diagnostic and analytical techniques are being improved that will enable field personnel to more easily use the test results and will reduce the need for interpretation by experts. In addition, the advanced method has the potential to reduce the time greatly and increase the accuracy of diagnostics. The important aim of the standardization is to develop the multiple diagnostic models that combine results from the different tests and give an overall assessment of reliability and maintenance for transformers.",
"title": ""
},
{
"docid": "7355bf66dac6e027c1d6b4c2631d8780",
"text": "Cannabidiol is a component of marijuana that does not activate cannabinoid receptors, but moderately inhibits the degradation of the endocannabinoid anandamide. We previously reported that an elevation of anandamide levels in cerebrospinal fluid inversely correlated to psychotic symptoms. Furthermore, enhanced anandamide signaling let to a lower transition rate from initial prodromal states into frank psychosis as well as postponed transition. In our translational approach, we performed a double-blind, randomized clinical trial of cannabidiol vs amisulpride, a potent antipsychotic, in acute schizophrenia to evaluate the clinical relevance of our initial findings. Either treatment was safe and led to significant clinical improvement, but cannabidiol displayed a markedly superior side-effect profile. Moreover, cannabidiol treatment was accompanied by a significant increase in serum anandamide levels, which was significantly associated with clinical improvement. The results suggest that inhibition of anandamide deactivation may contribute to the antipsychotic effects of cannabidiol potentially representing a completely new mechanism in the treatment of schizophrenia.",
"title": ""
},
{
"docid": "bdfb3a761d7d9dbb96fa4f07bc2c1f89",
"text": "We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just few minutes.",
"title": ""
},
{
"docid": "036ac7fc6886f1f7d1734be18a11951f",
"text": "Often the challenge associated with tasks like fraud and spam detection is the lack of all likely patterns needed to train suitable supervised learning models. This problem accentuates when the fraudulent patterns are not only scarce, they also change over time. Change in fraudulent pattern is because fraudsters continue to innovate novel ways to circumvent measures put in place to prevent fraud. Limited data and continuously changing patterns makes learning signi cantly di cult. We hypothesize that good behavior does not change with time and data points representing good behavior have consistent spatial signature under di erent groupings. Based on this hypothesis we are proposing an approach that detects outliers in large data sets by assigning a consistency score to each data point using an ensemble of clustering methods. Our main contribution is proposing a novel method that can detect outliers in large datasets and is robust to changing patterns. We also argue that area under the ROC curve, although a commonly used metric to evaluate outlier detection methods is not the right metric. Since outlier detection problems have a skewed distribution of classes, precision-recall curves are better suited because precision compares false positives to true positives (outliers) rather than true negatives (inliers) and therefore is not a ected by the problem of class imbalance. We show empirically that area under the precision-recall curve is a better than ROC as an evaluation metric. The proposed approach is tested on the modi ed version of the Landsat satellite dataset, the modi ed version of the ann-thyroid dataset and a large real world credit card fraud detection dataset available through Kaggle where we show signi cant improvement over the baseline methods.",
"title": ""
},
{
"docid": "02c41dae589c89c297977d48b90f7218",
"text": "Recently new data center topologies have been proposed that offer higher aggregate bandwidth and location independence by creating multiple paths in the core of the network. To effectively use this bandwidth requires ensuring different flows take different paths, which poses a challenge.\n Plainly put, there is a mismatch between single-path transport and the multitude of available network paths. We propose a natural evolution of data center transport from TCP to multipath TCP. We show that multipath TCP can effectively and seamlessly use available bandwidth, providing improved throughput and better fairness in these new topologies when compared to single path TCP and randomized flow-level load balancing. We also show that multipath TCP outperforms laggy centralized flow scheduling without needing centralized control or additional infrastructure.",
"title": ""
},
{
"docid": "b783e3a8b9aaec7114603bafffcb5bfd",
"text": "Acknowledgements This paper has benefited from conversations and collaborations with colleagues, including most notably Stefan Dercon, Cheryl Doss, and Chris Udry. None of them has read this manuscript, however, and they are not responsible for the views expressed here. Steve Wiggins provided critical comments on the first draft of the document and persuaded me to rethink a number of points. The aim of the Natural Resources Group is to build partnerships, capacity and wise decision-making for fair and sustainable use of natural resources. Our priority in pursuing this purpose is on local control and management of natural resources and other ecosystems. The Institute of Development Studies (IDS) is a leading global Institution for international development research, teaching and learning, and impact and communications, based at the University of Sussex. Its vision is a world in which poverty does not exist, social justice prevails and sustainable economic growth is focused on improving human wellbeing. The Overseas Development Institute (ODI) is a leading independent think tank on international development and humanitarian issues. Its mission is to inspire and inform policy and practice which lead to the reduction of poverty, the alleviation of suffering and the achievement of sustainable livelihoods. Smallholder agriculture has long served as the dominant economic activity for people in sub-Saharan Africa, and it will remain enormously important for the foreseeable future. But the size of the sector does not necessarily imply that investments in the smallholder sector will yield high social benefits in comparison to other possible uses of development resources. Large changes could potentially affect the viability of smallholder systems, emanating from shifts in technology, markets, climate and the global environment. The priorities for development policy will vary across and within countries due to the highly heterogeneous nature of the smallholder sector.",
"title": ""
},
{
"docid": "ff952443eef41fb430ff2831b5ee33d5",
"text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.",
"title": ""
},
{
"docid": "d141c13cea52e72bb7b84d3546496afb",
"text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. Cloud-based services augment the resource-scare capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered the rise of Mobile Cloud Computing (MCC) paradigm which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on a cloud server (x86) architectures. Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.",
"title": ""
},
{
"docid": "d269ebe2bc6ab4dcaaac3f603037b846",
"text": "The contribution of power production by photovoltaic (PV) systems to the electricity supply is constantly increasing. An efficient use of the fluctuating solar power production will highly benefit from forecast information on the expected power production. This forecast information is necessary for the management of the electricity grids and for solar energy trading. This paper presents an approach to predict regional PV power output based on forecasts up to three days ahead provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus of the paper is the description and evaluation of the approach of irradiance forecasting, which is the basis for PV power prediction. One day-ahead irradiance forecasts for single stations in Germany show a rRMSE of 36%. For regional forecasts, forecast accuracy is increasing in dependency on the size of the region. For the complete area of Germany, the rRMSE amounts to 13%. Besides the forecast accuracy, also the specification of the forecast uncertainty is an important issue for an effective application. We present and evaluate an approach to derive weather specific prediction intervals for irradiance forecasts. The accuracy of PV power prediction is investigated in a case study.",
"title": ""
},
{
"docid": "f3e219c14f495762a2a6ced94708a477",
"text": "We present novel empirical observations regarding how stochastic gradient descent (SGD) navigates the loss landscape of over-parametrized deep neural networks (DNNs). These observations expose the qualitatively different roles of learning rate and batch-size in DNN optimization and generalization. Specifically we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during training. We find that the loss interpolation between parameters before and after each training iteration’s update is roughly convex with a minimum (valley floor) in between for most of the training. Based on this and other metrics, we deduce that for most of the training update steps, SGD moves in valley like regions of the loss surface by jumping from one valley wall to another at a height above the valley floor. This ’bouncing between walls at a height’ mechanism helps SGD traverse larger distance for small batch sizes and large learning rates which we find play qualitatively different roles in the dynamics. While a large learning rate maintains a large height from the valley floor, a small batch size injects noise facilitating exploration. We find this mechanism is crucial for generalization because the valley floor has barriers and this exploration above the valley floor allows SGD to quickly travel far away from the initialization point (without being affected by barriers) and find flatter regions, corresponding to better generalization.",
"title": ""
},
{
"docid": "9f2ade778dce9e007e9f5fa47af861b2",
"text": "The potency of the environment to shape brain function changes dramatically across the lifespan. Neural circuits exhibit profound plasticity during early life and are later stabilized. A focus on the cellular and molecular bases of these developmental trajectories has begun to unravel mechanisms, which control the onset and closure of such critical periods. Two important concepts have emerged from the study of critical periods in the visual cortex: (1) excitatory-inhibitory circuit balance is a trigger; and (2) molecular \"brakes\" limit adult plasticity. The onset of the critical period is determined by the maturation of specific GABA circuits. Targeting these circuits using pharmacological or genetic approaches can trigger premature onset or induce a delay. These manipulations are so powerful that animals of identical chronological age may be at the peak, before, or past their plastic window. Thus, critical period timing per se is plastic. Conversely, one of the outcomes of normal development is to stabilize the neural networks initially sculpted by experience. Rather than being passively lost, the brain's intrinsic potential for plasticity is actively dampened. This is demonstrated by the late expression of brake-like factors, which reversibly limit excessive circuit rewiring beyond a critical period. Interestingly, many of these plasticity regulators are found in the extracellular milieu. Understanding why so many regulators exist, how they interact and, ultimately, how to lift them in noninvasive ways may hold the key to novel therapies and lifelong learning.",
"title": ""
},
{
"docid": "3f4fcbc355d7f221eb6c9bc4a26b0448",
"text": "BACKGROUND\nMost of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined.\n\n\nMETHODS\nTwo independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: \"fMRI AND happy faces,\" \"fMRI AND sad faces,\" \"fMRI AND fearful faces,\" \"fMRI AND angry faces,\" \"fMRI AND disgusted faces\" and \"fMRI AND neutral faces.\" We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses.\n\n\nRESULTS\nOf the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions.\n\n\nLIMITATIONS\nAlthough the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes.\n\n\nCONCLUSION\nOur study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions.",
"title": ""
},
{
"docid": "03aac64e2d209d628874614d061b90f9",
"text": "Patterns of reading development were examined in native English-speaking (L1) children and children who spoke English as a second language (ESL). Participants were 978 (790 L1 speakers and 188 ESL speakers) Grade 2 children involved in a longitudinal study that began in kindergarten. In kindergarten and Grade 2, participants completed standardized and experimental measures including reading, spelling, phonological processing, and memory. All children received phonological awareness instruction in kindergarten and phonics instruction in Grade 1. By the end of Grade 2, the ESL speakers' reading skills were comparable to those of L1 speakers, and ESL speakers even outperformed L1 speakers on several measures. The findings demonstrate that a model of early identification and intervention for children at risk is beneficial for ESL speakers and also suggest that the effects of bilingualism on the acquisition of early reading skills are not negative and may be positive.",
"title": ""
},
{
"docid": "0d95c132ff0dcdb146ed433987c426cf",
"text": "A smart connected car in conjunction with the Internet of Things (IoT) is an emerging topic. The fundamental concept of the smart connected car is connectivity, and such connectivity can be provided by three aspects, such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X). To meet the aspects of V2V and V2I connectivity, we developed modules in accordance with international standards with respect to On-Board Diagnostics II (OBDII) and 4G Long Term Evolution (4G-LTE) to obtain and transmit vehicle information. We also developed software to visually check information provided by our modules. Information related to a user’s driving, which is transmitted to a cloud-based Distributed File System (DFS), was then analyzed for the purpose of big data analysis to provide information on driving habits to users. Yet, since this work is an ongoing research project, we focus on proposing an idea of system architecture and design in terms of big data analysis. Therefore, our contributions through this work are as follows: (1) Develop modules based on Controller Area Network (CAN) bus, OBDII, and 4G-LTE; (2) Develop software to check vehicle information on a PC; (3) Implement a database related to vehicle diagnostic codes; (4) Propose system architecture and design for big data analysis.",
"title": ""
},
{
"docid": "cad8b81a115a2a59c8e3e6d44519b850",
"text": "Inter-cell interference is the main obstacle for increasing the network capacity of Long-Term Evolution Advanced (LTE-A) system. Interference Cancellation (IC) is a promising way to improve spectral efficiency. 3rd Generation Partnership Project (3GPP) raises a new research project — Network-Assisted Interference Cancellation and Suppression (NAICS) in LTE Rel-12. Advanced receivers used in NAICS include maximum likelihood (ML) receiver and symbol-level IC (SLIC) receiver. Those receivers need some interference parameters, such as rank indicator (RI), precoding matrix indicator (PMI) and modulation level (MOD). This paper presents a new IC receiver based on detection. We get the clean interfering signal assisted by detection and use it in SLIC. The clean interfering signal makes the estimation of interfering transmitted signal more accurate. So interference cancellation would be more significant. We also improve the method of interference parameter estimation that avoids estimating power boosting and precoding matrix simultaneously. The simulation results show that the performance of proposed SLIC is better than traditional SLIC and close to ML.",
"title": ""
},
{
"docid": "58e84998bca4d4d9368f5bc5879e64c0",
"text": "This paper summarizes the effect of age, gender and race on Electrocardiographic parameters. The conduction system and heart muscles get degenerative changes with advancing age so these parameters get changed. The ECG parameters also changes under certain diseases. Then it is essential to know the normal limits of these parameters for diagnostic purpose under the influence of age, gender and race. The automated ECG analysis systems require normal limits of these parameters. The age and gender of the population clearly influencing the normal limits of ECG parameters. However the investigation of the effect of Body Mass Index on cross section of the population is further warranted.",
"title": ""
}
] | scidocsrr |
372c783257d114e091be0f39a6337383 | Self-Serving Justifications : Doing Wrong and Feeling Moral | [
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
}
] | [
{
"docid": "5f94ffca0f652a6b3ecd386d5ce73eb7",
"text": "Identifying a patient’s important medical problems requires broad and deep medical expertise, as well as significant time to gather all the relevant facts from the patient’s medical record and assess the clinical importance of the facts in reaching the final conclusion. A patient’s medical problem list is by far the most critical information that a physician uses in treatment and care of a patient. In spite of its critical role, its curation, manual or automated, has been an unmet need in clinical practice. We developed a machine learning technique in IBM Watson to automatically generate a patient’s medical problem list. The machine learning model uses lexical and medical features extracted from a patient’s record using NLP techniques. We show that the automated method achieves 70% recall and 67% precision based on the gold standard that medical experts created on a set of deidentified patient records from a major hospital system in the US. To the best of our knowledge this is the first successful machine learning/NLP method of extracting an open-ended patient’s medical problems from an Electronic Medical Record (EMR). This paper also contributes a methodology for assessing accuracy of a medical problem list generation technique.",
"title": ""
},
{
"docid": "306a833c0130678e1b2ece7e8b824d5e",
"text": "In many natural languages, there are clear syntactic and/or intonational differences between declarative sentences, which are primarily used to provide information, and interrogative sentences, which are primarily used to request information. Most logical frameworks restrict their attention to the former. Those that are concerned with both usually assume a logical language that makes a clear syntactic distinction between declaratives and interrogatives, and usually assign different types of semantic values to these two types of sentences. A different approach has been taken in recent work on inquisitive semantics. This approach does not take the basic syntactic distinction between declaratives and interrogatives as its starting point, but rather a new notion of meaning that captures both informative and inquisitive content in an integrated way. The standard way to treat the logical connectives in this approach is to associate them with the basic algebraic operations on these new types of meanings. For instance, conjunction and disjunction are treated as meet and join operators, just as in classical logic. This gives rise to a hybrid system, where sentences can be both informative and inquisitive at the same time, and there is no clearcut division between declaratives and interrogatives. It may seem that these two general approaches in the existing literature are quite incompatible. The main aim of this paper is to show that this is not the case. We develop an inquisitive semantics for a logical language that has a clearcut division between declaratives and interrogatives. We show that this language coincides in expressive power with the hybrid language that is standardly assumed in inquisitive semantics, we establish a sound and complete axiomatization for the associated logic, and we consider a natural enrichment of the system with presuppositional interrogatives.",
"title": ""
},
{
"docid": "add26519d60ec2a972ad550cd79129d6",
"text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.",
"title": ""
},
{
"docid": "1fe1513d5b94ad9ca5a9ae4691407ae5",
"text": "We constructed a face database PF01(Postech Faces '01). PF01 contains the true-color face images of 103 people, 53 men and 50 women, representing 17 various images (1 normal face, 4 illumination variations, 8 pose variations, 4 expression variations) per person. All of the people in the database are Asians. There are three kinds of systematic variations, such as illumination, pose, and expression variations in the database. The database is expected to be used to evaluate the technology of face recognition for Asian people or for people with systematic variations.",
"title": ""
},
{
"docid": "8f4f687aff724496efcc37ff7f6bbbeb",
"text": "Sentiment Analysis is new way of machine learning to extract opinion orientation (positive, negative, neutral) from a text segment written for any product, organization, person or any other entity. Sentiment Analysis can be used to predict the mood of people that have impact on stock prices, therefore it can help in prediction of actual stock movement. In order to exploit the benefits of sentiment analysis in stock market industry we have performed sentiment analysis on tweets related to Apple products, which are extracted from StockTwits (a social networking site) from 2010 to 2017. Along with tweets, we have also used market index data which is extracted from Yahoo Finance for the same period. The sentiment score of a tweet is calculated by sentiment analysis of tweets through SVM. As a result each tweet is categorized as bullish or bearish. Then sentiment score and market data is used to build a SVM model to predict next day's stock movement. Results show that there is positive relation between people opinion and market data and proposed work has an accuracy of 76.65% in stock prediction.",
"title": ""
},
{
"docid": "a856b4fc2ec126ee3709d21ff4c3c49c",
"text": "In this work, glass fiber reinforced epoxy composites were fabricated. Epoxy resin was used as polymer matrix material and glass fiber was used as reinforcing material. The main focus of this work was to fabricate this composite material by the cheapest and easiest way. For this, hand layup method was used to fabricate glass fiber reinforced epoxy resin composites and TiO2 material was used as filler material. Six types of compositions were made with and without filler material keeping the glass fiber constant and changing the epoxy resin with respect to filler material addition. Mechanical properties such as tensile, impact, hardness, compression and flexural properties were investigated. Additionally, microscopic analysis was done. The experimental investigations show that without filler material the composites exhibit overall lower value in mechanical properties than with addition of filler material in the composites. The results also show that addition of filler material increases the mechanical properties but highest values were obtained for different filler material addition. From the obtained results, it was observed that composites filled by 15wt% of TiO2 particulate exhibited maximum tensile strength, 20wt% of TiO2 particulate exhibited maximum impact strength, 25wt% of TiO2 particulate exhibited maximum hardness value, 25wt% of TiO2 particulate exhibited maximum compressive strength, 20wt% of TiO2 particulate exhibited maximum flexural strength.",
"title": ""
},
{
"docid": "f779162d57bb2b9eeb37d3f604131234",
"text": "A classical issue in many applied fields is to obtain an approximating surface to a given set of data points. This problem arises in Computer-Aided Design and Manufacturing (CAD/CAM), virtual reality, medical imaging, computer graphics, computer animation, and many others. Very often, the preferred approximating surface is polynomial, usually described in parametric form. This leads to the problem of determining suitable parametric values for the data points, the so-called surface parameterization. In real-world settings, data points are generally irregularly sampled and subjected to measurement noise, leading to a very difficult nonlinear continuous optimization problem, unsolvable with standard optimization techniques. This paper solves the parameterization problem for polynomial Bézier surfaces by applying the firefly algorithm, a powerful nature-inspired metaheuristic algorithm introduced recently to address difficult optimization problems. The method has been successfully applied to some illustrative examples of open and closed surfaces, including shapes with singularities. Our results show that the method performs very well, being able to yield the best approximating surface with a high degree of accuracy.",
"title": ""
},
{
"docid": "ba2029c92fc1e9277e38edff0072ac82",
"text": "Estimation, recognition, and near-future prediction of 3D trajectories based on their two dimensional projections available from one camera source is an exceptionally difficult problem due to uncertainty in the trajectories and environment, high dimensionality of the specific trajectory states, lack of enough labeled data and so on. In this article, we propose a solution to solve this problem based on a novel deep learning model dubbed disjunctive factored four-way conditional restricted Boltzmann machine (DFFW-CRBM). Our method improves state-of-the-art deep learning techniques for high dimensional time-series modeling by introducing a novel tensor factorization capable of driving forth order Boltzmann machines to considerably lower energy levels, at no computational costs. DFFW-CRBMs are capable of accurately estimating, recognizing, and performing near-future prediction of three-dimensional trajectories from their 2D projections while requiring limited amount of labeled data. We evaluate our method on both simulated and real-world data, showing its effectiveness in predicting and classifying complex ball trajectories and human activities.",
"title": ""
},
{
"docid": "181356b104a26d1d300d10619fb78f45",
"text": "Recent advances in combining deep neural network architectures with reinforcement learning techniques have shown promising potential results in solving complex control problems with high dimensional state and action spaces. Inspired by these successes, in this paper, we build two kinds of reinforcement learning algorithms: deep policy-gradient and value-function based agents which can predict the best possible traffic signal for a traffic intersection. At each time step, these adaptive traffic light control agents receive a snapshot of the current state of a graphical traffic simulator and produce control signals. The policy-gradient based agent maps its observation directly to the control signal, however the value-function based agent first estimates values for all legal control signals. The agent then selects the optimal control action with the highest value. Our methods show promising results in a traffic network simulated in the SUMO traffic simulator, without suffering from instability issues during the training process.",
"title": ""
},
{
"docid": "2fe2f83fa9a0dca9f01fd9e5e80ca515",
"text": "For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.",
"title": ""
},
{
"docid": "ef8c4dfa058106cc38d9497c2713b5c6",
"text": "Of current interest are the causal attributions offered by depressives for the good and bad events in their lives. One important attributional account of depression is the reformulated learned helplessness model, which proposes that depressive symptoms are associated with an attributional style in which uncontrollable bad events are attributed to internal (versus external), stable (versus unstable), and global (versus specific) causes. We describe the Attributional Style Questionnaire, which measures individual differences in the use of these attributional dimensions. We report means, reliabilities, intercorrelations, and test-retest stabilities for a sample of 130 undergraduates. Evidence for the questionnaire's validity is discussed. The Attributional Style Questionnaire promises to be a reliable and valid instrument.",
"title": ""
},
{
"docid": "bc1ff96ebc41bc3040bb254f1620b190",
"text": "The paper presents a new generation of torque-controlled li ghtweight robots (LWR) developed at the Institute of Robotics and Mechatronics of the German Aerospace Center . I order to act in unstructured environments and interact with humans, the robots have design features an d co trol/software functionalities which distinguish them from classical robots, such as: load-to-weight ratio o f 1:1, torque sensing in the joints, active vibration damping, sensitive collision detection, as well as complia nt control on joint and Cartesian level. Due to the partially unknown properties of the environment, robustne s of planning and control with respect to environmental variations is crucial. After briefly describing the main har dware features, the paper focuses on showing how joint torque sensing (as a main feature of the robot) is conse quently used for achieving the above mentioned performance, safety, and robustness properties.",
"title": ""
},
{
"docid": "f07d714a45f26e6b80c6a315e50cdf92",
"text": "X-ray images are the essential aiding means all along in clinical diagnosis of fracture. So the processing and analysis of X-ray fracture images is particularly important. Extracting the features of X-ray images is a very important process in classifying fracture images according to the principle of AO classification of fractures. A proposed algorithm is used in this paper. First, use marker-controlled watershed transform based on gradient and homotopy modification to segment X-ray fracture images. Then the features consisted of region number, region area, region centroid and protuberant polygon of fracture image are extracted by marker processing and regionprops function. Next we use Hough transform to detect and extract lines in the protuberant polygon of X-ray fracture image. The lines are consisted of fracture line and parallel lines of centerline. Through the parallel lines of centerline, we obtain centerline over centroid and perpendicular line of centerline over centroid. Finally compute the angle between fracture line and perpendicular line of centerline. This angle can be used to classify femur backbone fracture.",
"title": ""
},
{
"docid": "8f0276f7a902fa02b6236dfc76b882d2",
"text": "Support Vector Machines (SVMs) have successfully shown efficiencies in many areas such as text categorization. Although recommendation systems share many similarities with text categorization, the performance of SVMs in recommendation systems is not acceptable due to the sparsity of the user-item matrix. In this paper, we propose a heuristic method to improve the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. The performance comparison to other algorithms has been conducted. The experimental studies show that the accurate rates of our heuristic method are the highest.",
"title": ""
},
{
"docid": "eaf4c209d02bcbdd1ee595e7820c9bfe",
"text": "We develop a consumer-level model of vehicle choice to investigate the reasons behind the erosion of the U.S. automobile manufacturers’ market share during the past decade. Our model accounts for the influence of vehicle attributes, brand loyalty, product line characteristics, and dealerships on choice. We find that nearly all of the loss in market share for U.S. manufacturers can be explained by changes in the basic attributes of a vehicle: price, size, power, operating cost, and body type. During the past decade, U.S. manufacturers have improved their vehicles’ attributes but not as much as Japanese and European manufacturers have improved the attributes of their vehicles.",
"title": ""
},
{
"docid": "c0f68f1b8b6fee87203f62baf133b793",
"text": "Modern PWM inverter output voltage has high dv/dt, which causes problems such as voltage doubling that can lead to insulation failure, ground currents that results in electromagnetic interference concerns. The IGBT switching device used in such inverter are becoming faster, exacerbating these problems. This paper proposes a new procedure for designing the LC clamp filter. The filter increases the rise time of the output voltage of inverter, resulting in smaller dv/dt. In addition suitable selection of resonance frequency gives LCL filter configuration with improved attenuation. By adding this filter at output terminal of inverter which uses long cable, voltage doubling effect is reduced at the motor terminal. The design procedure is carried out in terms of the power converter based per unit scheme. This generalizes the design procedure to a wide range of power level and to study optimum designs. The effectiveness of the design is verified by computer simulation and experimental measurements.",
"title": ""
},
{
"docid": "28d801f032cd494c53c35a691a1aa4ee",
"text": "We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11 % is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28 % using only basic 2D image features.",
"title": ""
},
{
"docid": "449270c00ce54ba3772a7af9955f5231",
"text": "Demand for high-speed DRAM in graphics application pushes a single-ended I/O signaling to operate up to 6Gb/s. To maintain the speed increase, the GDDR5 specification shifts from GDDR3/4 with respect to forwarded clocking, data training for write and read de-skewing, clock training, channel-error detection, bank group and data coding. This work tackles challenges in GDDR5 such as clock jitter and signal integrity.",
"title": ""
},
{
"docid": "5d9c9dd2a1d9e85ead934ec6cdcbb1cb",
"text": "Acute myeloid leukemia (AML) is the most common type of acute leukemia in adults. AML is a heterogeneous malignancy characterized by distinct genetic and epigenetic abnormalities. Recent genome-wide DNA methylation studies have highlighted an important role of dysregulated methylation signature in AML from biological and clinical standpoint. In this review, we will outline the recent advances in the methylome study of AML and overview the impacts of DNA methylation on AML diagnosis, treatment, and prognosis.",
"title": ""
},
{
"docid": "f8ddedb1bdc57d75fb5ea9bf81ec51f5",
"text": "Given a text description, most existing semantic parsers synthesize a program in one shot. However, it is quite challenging to produce a correct program solely based on the description, which in reality is often ambiguous or incomplete. In this paper, we investigate interactive semantic parsing, where the agent can ask the user clarification questions to resolve ambiguities via a multi-turn dialogue, on an important type of programs called “If-Then recipes.” We develop a hierarchical reinforcement learning (HRL) based agent that significantly improves the parsing performance with minimal questions to the user. Results under both simulation and human evaluation show that our agent substantially outperforms non-interactive semantic parsers and rule-based agents.",
"title": ""
}
] | scidocsrr |
53cdbbf8e5d99570f01d5a6de645d932 | Microstrip high-pass filter with attenuation poles using cross-coupling | [
{
"docid": "7e61b5f63d325505209c3284c8a444a1",
"text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied in design N-pole LPFs for N/spl les/5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, teeor cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.",
"title": ""
}
] | [
{
"docid": "9b8b91bbade21813b16dfa40e70c2b91",
"text": "to name a few. Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed Using FACS and viewing video-recorded facial behavior at frame rate and slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications , are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press). Unlike systems that use emotion labels to describe expression , FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion based inferences from FACS codes, a variety of related resources exist. These include the FACS Investigators' Guide These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders",
"title": ""
},
{
"docid": "ab2e5ec6e48c87b3e4814840ad29afe7",
"text": "This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.",
"title": ""
},
{
"docid": "98df90734e276e0cf020acfdcaa9b4b4",
"text": "High parallel framework has been proved to be very suitable for graph processing. There are various work to optimize the implementation in FPGAs, a pipeline parallel device. The key to make use of the parallel performance of FPGAs is to process graph data in pipeline model and take advantage of on-chip memory to realize necessary locality process. This paper proposes a modularize graph processing framework, which focus on the whole executing procedure with the extremely different degree of parallelism. The framework has three contributions. First, the combination of vertex-centric and edge-centric processing framework can been adjusting in the executing procedure to accommodate top-down algorithm and bottom-up algorithm. Second, owing to the pipeline parallel and finite on-chip memory accelerator, the novel edge-block, a block consist of edges vertex, achieve optimizing the way to utilize the on-chip memory to group the edges and stream the edges in a block to realize the stream pattern to pipeline parallel processing. Third, depending to the analysis of the block structure of nature graph and the executing characteristics during graph processing, we design a novel conversion dispatcher to change processing module, to match the corresponding exchange point. Our evaluation with four graph applications on five diverse scale graph shows that .",
"title": ""
},
{
"docid": "afd32dd6a9b076ed976ecd612c1cc14f",
"text": "Many digital images contain blurred regions which are caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for different multimedia analyzing tasks. This paper presents a simple and effective automatic image blurred region detection and classification technique. In the proposed technique, blurred image regions are first detected by examining singular value information for each image pixels. The blur types (i.e. motion blur or defocus blur) are then determined based on certain alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur that are extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blurs accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.",
"title": ""
},
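The abstract above describes detecting blur from per-pixel singular value information. The sketch below shows one plausible way to turn that idea into code: a per-patch SVD whose top-k energy ratio serves as a blur score. The patch size, k, and the rank-1 surrogate for a blurred patch are assumptions of this sketch; the paper's exact metric and its alpha-channel-based blur-type classification are not reproduced.

```python
import numpy as np

def singular_value_blur_map(gray, patch=16, k=3):
    """Illustrative blur map: for each patch, measure how much of the 'energy'
    is concentrated in the first k singular values.  Smooth (blurred) patches
    are closer to low-rank, so the ratio is higher."""
    h, w = gray.shape
    blur_map = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = gray[i:i + patch, j:j + patch].astype(np.float64)
            s = np.linalg.svd(block, compute_uv=False)   # singular values, descending
            blur_map[i // patch, j // patch] = s[:k].sum() / (s.sum() + 1e-12)
    return blur_map

# Toy usage: a random (sharp-like) patch versus a heavily smoothed, near-rank-1 one.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = np.tile(sharp.mean(axis=1, keepdims=True), (1, 64))  # rank-1 surrogate for blur
print(singular_value_blur_map(sharp).mean(), singular_value_blur_map(blurred).mean())
```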
{
"docid": "d1fa477646e636a3062312d6f6444081",
"text": "This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine prediction. Specifically, the skeleton convolutional neural network framework takes in multiple different scales inputs, by which means the CNN can get representations in different scales. The proposed attention model will handle the features from different scale streams respectively and integrate them. Then location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add an recalibrating branch, parallel to where location attention comes out, to recalibrate the score map per class. We achieve quite competitive results on PASCAL VOC 2012 and ADE20K datasets, which surpass baseline and related works.",
"title": ""
},
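To make the "softly weight the multi-scale features at each pixel location" step above concrete, here is a minimal NumPy sketch of per-pixel softmax fusion across scales. The tensor shapes, the toy inputs, and the use of random logits in place of a learned attention branch are assumptions; the recalibrating branch is omitted.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_multiscale(score_maps, attention_logits):
    """score_maps:       (S, C, H, W) per-scale class score maps
       attention_logits: (S, H, W)    location-attention logits for each scale
       Returns a (C, H, W) fused map where each pixel softly weights the scales."""
    weights = softmax(attention_logits, axis=0)          # (S, H, W), sums to 1 over scales
    return (score_maps * weights[:, None, :, :]).sum(0)  # weighted sum over scales

# Toy example: 3 scales, 5 classes, 8x8 resolution.
rng = np.random.default_rng(1)
fused = fuse_multiscale(rng.normal(size=(3, 5, 8, 8)), rng.normal(size=(3, 8, 8)))
print(fused.shape)  # (5, 8, 8)
```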
{
"docid": "75e9253b7c6333db1aa3cef2ab364f99",
"text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.",
"title": ""
},
{
"docid": "ff947ccb7efdd5517f9b60f9c11ade6a",
"text": "Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.",
"title": ""
},
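The coverage and agreement measures defined parenthetically in the abstract above can be computed directly; the snippet below follows those definitions. The message identifiers and label vocabulary are illustrative.

```python
def coverage_and_agreement(predictions, ground_truth):
    """predictions:  dict message_id -> 'positive' / 'negative' / None (None = no sentiment detected)
       ground_truth: dict message_id -> 'positive' / 'negative'
       Coverage  = fraction of messages for which the method outputs a polarity.
       Agreement = fraction of those detected polarities that match the ground truth."""
    detected = {m: p for m, p in predictions.items() if p is not None}
    coverage = len(detected) / len(ground_truth)
    agreement = (sum(1 for m, p in detected.items() if p == ground_truth[m]) / len(detected)
                 if detected else 0.0)
    return coverage, agreement

# Toy example with three labelled messages.
truth = {"m1": "positive", "m2": "negative", "m3": "positive"}
preds = {"m1": "positive", "m2": "positive", "m3": None}
print(coverage_and_agreement(preds, truth))  # (0.666..., 0.5)
```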
{
"docid": "68f38ad22fe2c9c24d329b181d1761d2",
"text": "Data mining approach can be used to discover knowledge by analyzing the patterns or correlations among of fields in large databases. Data mining approach was used to find the patterns of the data from Tanzania Ministry of Water. It is used to predict current and future status of water pumps in Tanzania. The data mining method proposed is XGBoost (eXtreme Gradient Boosting). XGBoost implement the concept of Gradient Tree Boosting which designed to be highly fast, accurate, efficient, flexible, and portable. In addition, Recursive Feature Elimination (RFE) is also proposed to select the important features of the data to obtain an accurate model. The best accuracy achieved with using 27 input factors selected by RFE and XGBoost as a learning model. The achieved result show 80.38% in accuracy. The information or knowledge which is discovered from data mining approach can be used by the government to improve the inspection planning, maintenance, and identify which factor that can cause damage to the water pumps to ensure the availability of potable water in Tanzania. Using data mining approach is cost-effective, less time consuming and faster than manual inspection.",
"title": ""
},
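A minimal sketch of the RFE-plus-XGBoost pipeline described above, assuming a scikit-learn/xgboost setup. The file name, label column, train/test split and hyper-parameters are placeholders, not details taken from the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("water_pumps.csv")                      # hypothetical file name
X = pd.get_dummies(df.drop(columns=["status_group"]))    # "status_group" assumed to be the label
y = df["status_group"].astype("category").cat.codes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Recursive Feature Elimination keeps the 27 most important factors,
# ranked by XGBoost feature importances.
selector = RFE(estimator=XGBClassifier(n_estimators=100), n_features_to_select=27)
selector.fit(X_tr, y_tr)

model = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
model.fit(X_tr.loc[:, selector.support_], y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te.loc[:, selector.support_])))
```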
{
"docid": "997993e389cdb1e40714e20b96927890",
"text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.",
"title": ""
},
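A hedged sketch of the kind of multiple regression analysis described above, using statsmodels OLS. The CSV file and the column names stand in for the paper's sixteen factors and are assumptions, not the actual variable names.

```python
import pandas as pd
import statsmodels.api as sm

questions = pd.read_csv("stackoverflow_questions.csv")   # hypothetical export
factors = ["code_length", "accepted_answer_score", "num_tags",
           "view_count", "comment_count", "answer_count"]  # illustrative subset of factors

X = sm.add_constant(questions[factors])          # add intercept term
model = sm.OLS(questions["score"], X).fit()      # regress question score on the factors
print(model.summary())                           # coefficients, p-values, R^2
```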
{
"docid": "f8e4db50272d14f026d0956ac25d39d6",
"text": "Automated estimation of the allocation of a driver's visual attention could be a critical component of future advanced driver assistance systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. But in practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects but can't provide as fine-grained of a resolution in localizing the gaze. For the purpose of keeping the driver safe, it's sufficient to partition gaze into regions. In this effort, a proposed system extracts facial features and classifies their spatial configuration into six regions in real time. The proposed method achieves an average accuracy of 91.4 percent at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.",
"title": ""
},
{
"docid": "18bbb75b46f6397a6abab7e0d4af4735",
"text": "This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The method is based on a physical model which can also be used in solving for example sensor fusion problems. The experimental results show that the method works well in practice, both for perspective and spherical cameras.",
"title": ""
},
{
"docid": "e69ecf0d4d04a956b53f34673e353de3",
"text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other publicand private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.",
"title": ""
},
{
"docid": "fada1434ec6e060eee9a2431688f82f3",
"text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.",
"title": ""
},
{
"docid": "e649c3a48eccb6165320356e94f5ed7d",
"text": "There have been several attempts to create scalable and hardware independent software architectures for Unmanned Aerial Vehicles (UAV). In this work, we propose an onboard architecture for UAVs where hardware abstraction, data storage and communication between modules are efficiently maintained. All processing and software development is done on the UAV while state and mission status of the UAV is monitored from a ground station. The architecture also allows rapid development of mission-specific third party applications on the vehicle with the help of the core module.",
"title": ""
},
{
"docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5",
"text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.",
"title": ""
},
{
"docid": "d2aebe4f8d8d90427bee7c8b71b1361f",
"text": "Automated vehicles are complex systems with a high degree of interdependencies between its components. This complexity sets increasing demands for the underlying software framework. This paper firstly analyzes the requirements for software frameworks. Afterwards an overview on existing software frameworks, that have been used for automated driving projects, is provided with an in-depth introduction into an emerging open-source software framework, the Robot Operating System (ROS). After discussing the main features, advantages and disadvantages of ROS, the communication overhead of ROS is analyzed quantitatively in various configurations showing its applicability for systems with a high data load.",
"title": ""
},
{
"docid": "c8bd7e1e70ac2dbe613c6eb8efe3bd5f",
"text": "This work aims at constructing a semiotic framework for an expanded evolutionary synthesis grounded on Peirce's universal categories and the six space/time/function relations [Taborsky, E., 2004. The nature of the sign as a WFF--a well-formed formula, SEED J. (Semiosis Evol. Energy Dev.) 4 (4), 5-14] that integrate the Lamarckian (internal/external) and Darwinian (individual/population) cuts. According to these guide lines, it is proposed an attempt to formalize developmental systems theory by using the notion of evolving developing agents (EDA) that provides an internalist model of a general transformative tendency driven by organism's need to cope with environmental uncertainty. Development and evolution are conceived as non-programmed open-ended processes of information increase where EDA reach a functional compromise between: (a) increments of phenotype's uniqueness (stability and specificity) and (b) anticipation to environmental changes. Accordingly, changes in mutual information content between the phenotype/environment drag subsequent changes in mutual information content between genotype/phenotype and genotype/environment at two interwoven scales: individual life cycle (ontogeny) and species time (phylogeny), respectively. Developmental terminal additions along with increment minimization of developmental steps must be positively selected.",
"title": ""
},
{
"docid": "207d3e95d3f04cafa417478ed9133fcc",
"text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. Information on urban growth, land use and land cover change study is very useful to local government and urban planners for the betterment of future plans of sustainable development of the city. 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.",
"title": ""
},
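The Markov chain analysis mentioned above projects future land-use shares from a transition matrix. The toy sketch below shows the mechanics of one projection step; the classes, shares and transition probabilities are invented for illustration and do not come from the study.

```python
import numpy as np

classes = ["built-up", "agriculture", "water/other"]
shares_2010 = np.array([0.30, 0.62, 0.08])      # current class proportions (assumed)

# P[i, j] = probability that land in class i converts to class j per 25-year step (assumed).
P = np.array([[0.97, 0.02, 0.01],
              [0.15, 0.83, 0.02],
              [0.05, 0.05, 0.90]])

shares_2035 = shares_2010 @ P                    # one projection step forward
for name, now, future in zip(classes, shares_2010, shares_2035):
    print(f"{name:12s} {now:.2f} -> {future:.2f}")
```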
{
"docid": "9e6bfc7b5cc87f687a699c62da013083",
"text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.",
"title": ""
}
] | scidocsrr |
bed598d119ebf08545c93e7c90802bc1 | Mash: fast genome and metagenome distance estimation using MinHash | [
{
"docid": "6059b4bbf5d269d0a5f1f596b48c1acb",
"text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.",
"title": ""
},
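The abstract above describes estimating resemblance from small fixed-size sketches and then applying a threshold test. A minimal Python sketch of that idea follows; the word-shingle size, the 64 salted hashes standing in for independent permutations, and the 0.5 threshold are simplifying assumptions rather than the paper's parameters.

```python
import hashlib

def shingles(text, w=4):
    """Contiguous word w-grams ('shingles') of a document."""
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}

def sketch(doc, num_perm=64):
    """Min-wise sketch: for each of num_perm hash functions (simulated here by
    salting a single hash), keep the minimum hash value over all shingles."""
    return [min(int(hashlib.sha1(f"{salt}:{s}".encode()).hexdigest(), 16)
                for s in shingles(doc))
            for salt in range(num_perm)]

def estimated_resemblance(sk_a, sk_b):
    """Fraction of positions where the two sketches agree estimates the
    Jaccard resemblance of the underlying shingle sets."""
    return sum(a == b for a, b in zip(sk_a, sk_b)) / len(sk_a)

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy cat near the river bank"
r = estimated_resemblance(sketch(doc1), sketch(doc2))
print("near-duplicate" if r > 0.5 else "distinct", round(r, 2))
```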
{
"docid": "faac043b0c32bad5a44d52b93e468b78",
"text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.",
"title": ""
},
{
"docid": "252f4bcaeb5612a3018578ec2008dd71",
"text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .",
"title": ""
}
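To illustrate the k-mer lookup idea behind the classifier described above, here is a toy sketch that assigns a read to the taxon with the most exact k-mer matches. The tiny reference database, k = 5, and the majority-vote rule (instead of the weighted root-to-leaf path scoring over a taxonomy tree that Kraken actually uses) are simplifications for illustration only.

```python
from collections import Counter

K = 5
kmer_db = {}
reference = {"taxonA": "ACGTACGTGGTACC", "taxonB": "TTGACCGTAGGCTA"}  # made-up sequences
for taxon, seq in reference.items():
    for i in range(len(seq) - K + 1):
        kmer_db.setdefault(seq[i:i + K], taxon)       # k-mer -> taxon lookup table

def classify(read):
    """Count which taxon each k-mer of the read maps to and take the majority."""
    hits = Counter(kmer_db[read[i:i + K]]
                   for i in range(len(read) - K + 1)
                   if read[i:i + K] in kmer_db)
    return hits.most_common(1)[0][0] if hits else "unclassified"

print(classify("ACGTACGTGG"))   # -> taxonA
print(classify("GGGGGGGGGG"))   # -> unclassified
```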
] | [
{
"docid": "15de232c8daf22cf1a1592a21e1d9df3",
"text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.",
"title": ""
},
{
"docid": "51e307584d6446ba2154676d02d2cc84",
"text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.",
"title": ""
},
{
"docid": "7db6124dc1f196ec2067a2d9dc7ba028",
"text": "We describe a graphical representation of probabilistic relationships-an alternative to the Bayesian network-called a dependency network. Like a Bayesian network, a dependency network has a graph and a probability component. The graph component is a (cyclic) directed graph such that a node's parents render that node independent of all other nodes in the network. The probability component consists of the probability of a node given its parents for each node (as in a Bayesian network). We identify several basic properties of this representation, and describe its use in collaborative filtering (the task of predicting preferences) and the visualization of predictive relationships.",
"title": ""
},
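A minimal sketch of the dependency-network idea described above: one conditional model per variable given all the others, with inference by ordered Gibbs sampling over those local models. Using decision trees as the per-node conditional models, and the toy binary data, are assumptions of this sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 4))          # toy binary dataset, 4 variables

# One conditional model per variable: P(X_i | X_-i).
models = []
for i in range(data.shape[1]):
    others = np.delete(data, i, axis=1)
    models.append(DecisionTreeClassifier(max_depth=3).fit(others, data[:, i]))

def gibbs_sample(models, n_vars, steps=100):
    """Ordered Gibbs sampling: repeatedly resample each variable from its local model."""
    state = rng.integers(0, 2, size=n_vars)
    for _ in range(steps):
        for i in range(n_vars):
            others = np.delete(state, i).reshape(1, -1)
            p = models[i].predict_proba(others)[0]
            state[i] = rng.choice(models[i].classes_, p=p)
    return state

print(gibbs_sample(models, data.shape[1]))
```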
{
"docid": "789a9d6e2a007938fa8f1715babcabd2",
"text": "We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. Our framework is general purpose and scalable, and is based on a crossplatform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the τ (tau) lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Highenergy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.",
"title": ""
},
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
},
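The abstract above uses an N-gram model over chord sequences. The toy sketch below shows a smoothed bigram chord model; the training progressions, smoothing constant and vocabulary are illustrative assumptions, and the full system's feature construction for emotion classification is not reproduced.

```python
from collections import defaultdict

# Tiny bigram chord model: estimate P(next chord | previous chord) from labelled sequences.
training = [["C", "G", "Am", "F", "C"], ["C", "F", "G", "C"], ["Am", "F", "C", "G"]]

counts = defaultdict(lambda: defaultdict(int))
for seq in training:
    for prev, cur in zip(seq, seq[1:]):
        counts[prev][cur] += 1

vocab = sorted({c for seq in training for c in seq})
alpha = 0.1                                        # add-alpha smoothing (assumed)

def bigram_prob(prev, cur):
    total = sum(counts[prev].values()) + alpha * len(vocab)
    return (counts[prev][cur] + alpha) / total

def most_likely_next(prev):
    return max(vocab, key=lambda c: bigram_prob(prev, c))

print(most_likely_next("F"))               # 'C' given the toy data
print(round(bigram_prob("C", "G"), 3))
```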
{
"docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2",
"text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet",
"title": ""
},
{
"docid": "566913d3a3d2e8fe24d6f5ff78440b94",
"text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.",
"title": ""
},
{
"docid": "e2d0a4d2c2c38722d9e9493cf506fc1c",
"text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.",
"title": ""
},
{
"docid": "56a2279c9c3bcbddf03561bec2508f81",
"text": "The article introduces a framework for users' design quality judgments based on Adaptive Decision Making theory. The framework describes judgment on quality attributes (usability, content/functionality, aesthetics, customisation and engagement) with dependencies on decision making arising from the user's background, task and context. The framework is tested and refined by three experimental studies. The first two assessed judgment of quality attributes of websites with similar content but radically different designs for aesthetics and engagement. Halo effects were demonstrated whereby attribution of good quality on one attribute positively influenced judgment on another, even in the face of objective evidence to the contrary (e.g., usability errors). Users' judgment was also shown to be susceptible to framing effects of the task and their background. These appear to change the importance order of the quality attributes; hence, quality assessment of a design appears to be very context dependent. The third study assessed the influence of customisation by experiments on mobile services applications, and demonstrated that evaluation of customisation depends on the users' needs and motivation. The results are discussed in the context of the literature on aesthetic judgment, user experience and trade-offs between usability and hedonic/ludic design qualities.",
"title": ""
},
{
"docid": "8477b50ea5b4dd76f0bf7190ba05c284",
"text": "It is shown how Conceptual Graphs and Formal Concept Analysis may be combined to obtain a formalization of Elementary Logic which is useful for knowledge representation and processing. For this, a translation of conceptual graphs to formal contexts and concept lattices is described through an example. Using a suitable mathematization of conceptual graphs, basics of a uniied mathematical theory for Elementary Logic are proposed.",
"title": ""
},
{
"docid": "f48ee93659a25bee9a49e8be6c789987",
"text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles",
"title": ""
},
{
"docid": "f3b1e1c9effb7828a62187e9eec5fba7",
"text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.",
"title": ""
},
{
"docid": "5203f520e6992ae6eb2e8cb28f523f6a",
"text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.",
"title": ""
},
{
"docid": "9fdd2b84fc412e03016a12d951e4be01",
"text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.",
"title": ""
},
{
"docid": "2e0585860c1fa533412ff1fea76632cb",
"text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.",
"title": ""
},
{
"docid": "5447d3fe8ed886a8792a3d8d504eaf44",
"text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.",
"title": ""
},
{
"docid": "f10d79d1eb6d3ec994c1ec7ec3769437",
"text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]",
"title": ""
},
{
"docid": "2da44919966d841d4a1d6f3cc2a648e9",
"text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR les 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66lambdam in diameter and 0.16lambdam in height, where lambdam is the free-space wavelength at lower edge of the operating frequency band. The problem about the distorted patterns in the upper frequency band for wideband cavity-backed antennas is solved in our work.",
"title": ""
},
{
"docid": "228678ad5d18d21d4bc7c1819329274f",
"text": "Intentional frequency perturbation by recently researched active islanding detection techniques for inverter based distributed generation (DG) define new threshold settings for the frequency relays. This innovation has enabled the modern frequency relays to operate inside the non-detection zone (NDZ) of the conventional frequency relays. However, the effect of such perturbation on the performance of the rate of change of frequency (ROCOF) relays has not been researched so far. This paper evaluates the performance of ROCOF relays under such perturbations for an inverter interfaced DG and proposes an algorithm along with the new threshold settings to enable it work under the NDZ. The proposed algorithm is able to differentiate between an islanding and a non-islanding event. The operating principle of relay is based on low frequency current injection through grid side voltage source converter (VSC) control of doubly fed induction generator (DFIG) and therefore, the relay is defined as “active ROCOF relay”. Simulations are done in MATLAB.",
"title": ""
},
{
"docid": "4e6ca2d20e904a0eb72fcdcd1164a5e2",
"text": "Fraudulent activities (e.g., suspicious credit card transaction, financial reporting fraud, and money laundering) are critical concerns to various entities including bank, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims such as a financial loss. Over the years, fraud analysis techniques underwent a rigorous development. However, lately, the advent of Big data led to vigorous advancement of these techniques since Big Data resulted in extensive opportunities to combat financial frauds. Given that the massive amount of data that investigators need to sift through, massive volumes of data integrated from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach.",
"title": ""
}
] | scidocsrr |
44ed08dbcd7d51cc6ac8befc167739fe | A belief-desire-intention model for blog users' negative emotional norm compliance: Decision-making in crises | [
{
"docid": "9665c72fd804d630791fdd0bc381d116",
"text": "Social Sharing of Emotion (SSE) occurs when one person shares an emotional experience with another and is considered potentially beneficial. Though social sharing has been shown prevalent in interpersonal communication, research on its occurrence and communication structure in online social networks is lacking. Based on a content analysis of blog posts (n = 540) in a blog social network site (Live Journal), we assess the occurrence of social sharing in blog posts, characterize different types of online SSE, and present a theoretical model of online SSE. A large proportion of initiation expressions were found to conform to full SSE, with negative emotion posts outnumbering bivalent and positive posts. Full emotional SSE posts were found to prevail, compared to partial feelings or situation posts. Furthermore, affective feedback predominated to cognitive and provided emotional support, empathy and admiration. The study found evidence that the process of social sharing occurs in Live Journal, replicating some features of face to face SSE. Instead of a superficial view of online social sharing, our results support a prosocial and beneficial character to online SSE. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "7c3bd683a927626c97ec9ae31b0bae3e",
"text": "Project portfolio management in relation to innovation has increasingly gained the attention of practitioners and academics during the last decade. While significant progress has been made in the pursuit of a process approach to achieve an effective project portfolio management, limited attention has been paid to the issue of how to integrate sustainability into innovation portfolio management decision making. The literature is lacking insights on how to manage the innovation project portfolio throughout the strategic analysis phase to the monitoring of the portfolio performance in relation to sustainability during the development phase of projects. This paper presents a 5step framework for integrating sustainability in the innovation project portfolio management process in the field of product development. The framework can be applied for the management of a portfolio of three project categories that involve breakthrough projects, platform projects and derivative projects. It is based on the assessment of various methods of project evaluation and selection, and a case analysis in the automotive industry. It enables the integration of the three dimensions of sustainability into the innovation project portfolio management process within firms. The three dimensions of sustainability involve ecological sustainability, social sustainability and economic sustainability. Another benefit is enhancing the ability of firms to achieve an effective balance of investment between the three dimensions of sustainability, taking the competitive approach of a firm toward the marketplace into account. 2014 Published by Elsevier B.V. * Corresponding author. Tel.: +31 6 12990878. E-mail addresses: [email protected] (J.W. Brook), [email protected] (F. Pagnanelli). G Models ENGTEC-1407; No. of Pages 17 Please cite this article in press as: Brook, J.W., Pagnanelli, F., Integrating sustainability into innovation project portfolio management – A strategic perspective. J. Eng. Technol. Manage. (2014), http://dx.doi.org/10.1016/j.jengtecman.2013.11.004",
"title": ""
},
{
"docid": "6df12ee53551f4a3bd03bca4ca545bf1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
},
{
"docid": "88d8783128ea9052f99b7f491d042029",
"text": "1.1 UWB antennas in the field of high pulsed power For the last few years, the generation of high-power electromagnetic waves has been one of the major applications of high pulsed power (HPP). It has aroused great interest in the scientific community since it is at the origin of several technological advances. Several kinds of high power radiation sources have been created. There currently appears to be a strong inclination towards compact and autonomous sources of high power microwaves (HPM) (Cadilhon et al., 2010; Pécastaing et al., 2009). The systems discussed here always consist of an electrical high pulsed power generator combined with an antenna. The HPP generator consists of a primary energy source, a power-amplification system and a pulse forming stage. It sends the energy to a suitable antenna. When this radiating element has good electromagnetic characteristics over a wide band of frequency and high dielectric strength, it is possible to generate high power electromagnetic waves in the form of pulses. The frequency band of the wave that is radiated can cover a very broad spectrum of over one decade in frequency. In this case, the technique is of undoubted interest for a wide variety of civil and military applications. Such applications can include, for example, ultra-wideband (UWB) pulse radars to detect buried mines or to rescue buried people, the production of nuclear electromagnetic pulse (NEMP) simulators for electromagnetic compatibility and vulnerability tests on electronic and IT equipment, and UWB communications systems and electromagnetic jamming, the principle of which consists of focusing high-power electromagnetic waves on an identified target to compromise the target’s mission by disrupting or destroying its electronic components. Over the years, the evolution of the R&D program for the development of HPM sources has evidenced the technological difficulties intrinsic to each elementary unit and to each of the physical parameters considered. Depending on the wave form chosen, there is in fact a very wide range of possibilities for the generation of microwave power. The only real question is",
"title": ""
},
{
"docid": "106915eaac271c255aef1f1390577c64",
"text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. The evaluation of this proposed system proves its efficiency.",
"title": ""
},
{
"docid": "fcf6136271b04ac78717799d43017d74",
"text": "STUDY DESIGN\nPragmatic, multicentered randomized controlled trial, with 12-month follow-up.\n\n\nOBJECTIVE\nTo evaluate the effect of adding specific spinal stabilization exercises to conventional physiotherapy for patients with recurrent low back pain (LBP) in the United Kingdom.\n\n\nSUMMARY OF BACKGROUND DATA\nSpinal stabilization exercises are a popular form of physiotherapy management for LBP, and previous small-scale studies on specific LBP subgroups have identified improvement in outcomes as a result.\n\n\nMETHODS\nA total of 97 patients (18-60 years old) with recurrent LBP were recruited. Stratified randomization was undertaken into 2 groups: \"conventional,\" physiotherapy consisting of general active exercise and manual therapy; and conventional physiotherapy plus specific spinal stabilization exercises. Stratifying variables used were laterality of symptoms, duration of symptoms, and Roland Morris Disability Questionnaire score at baseline. Both groups received The Back Book, by Roland et al. Back-specific functional disability (Roland Morris Disability Questionnaire) at 12 months was the primary outcome. Pain, quality of life, and psychologic measures were also collected at 6 and 12 months. Analysis was by intention to treat.\n\n\nRESULTS\nA total of 68 patients (70%) provided 12-month follow-up data. Both groups showed improved physical functioning, reduced pain intensity, and an improvement in the physical component of quality of life. Mean change in physical functioning, measured by the Roland Morris Disability Questionnaire, was -5.1 (95% confidence interval -6.3 to -3.9) for the specific spinal stabilization exercises group and -5.4 (95% confidence interval -6.5 to -4.2) for the conventional physiotherapy group. No statistically significant differences between the 2 groups were shown for any of the outcomes measured, at any time.\n\n\nCONCLUSIONS\nPatients with LBP had improvement with both treatment packages to a similar degree. There was no additional benefit of adding specific spinal stabilization exercises to a conventional physiotherapy package for patients with recurrent LBP.",
"title": ""
},
{
"docid": "cff8ae2635684a6f0e07142175b7fbf1",
"text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.",
"title": ""
},
{
"docid": "32f96ae1a99ed2ade25df0792d8d3779",
"text": "The success of software development depends on the proper estimation of the effort required to develop the software. Project managers require a reliable approach for software effort estimation. It is especially important during the early stages of the software development life cycle. Accurate software effort estimation is a major concern in software industries. Stochastic Gradient Boosting (SGB) is one of the machine learning techniques that helps in getting improved estimated values. SGB is used for improving the accuracy of models built on decision trees. In this paper, the main goal is to estimate the effort required to develop various software projects using the class point approach. Then, optimization of the effort parameters is achieved using the SGB technique to obtain better prediction accuracy. Further- more, performance comparisons of the models obtained using the SGB technique with the Multi Layer Perceptron and the Radial Basis Function Network are presented in order to highlight the performance achieved by each method.",
"title": ""
},
{
"docid": "a88dc240c7cbb2570c1fc7c22a813ef3",
"text": "The Acropolis of Athens is one of the most prestigious ancient monuments in the world, attracting daily many visitors, and therefore its structural integrity is of paramount importance. During the last decade an accelerographic array has been installed at the Archaeological Site, in order to monitor the seismic response of the Acropolis Hill and the dynamic behaviour of the monuments (including the Circuit Wall), while several optical fibre sensors have been attached at a middle-vertical section of the Wall. In this study, indicative real time recordings of strain and acceleration on the Wall and the Hill with the use of optical fibre sensors and accelerographs, respectively, are presented and discussed. The records aim to investigate the static and dynamic behaviour – distress of the Wall and the Acropolis Hill, taking also into account the prevailing geological conditions. The optical fibre technology, the location of the sensors, as well as the installation methodology applied is also presented. Emphasis is given to the application of real time instrumental monitoring which can be used as a valuable tool to predict potential structural",
"title": ""
},
{
"docid": "e57168f624200cdfd6798cfd42ecce23",
"text": "Recurrent neural networks (RNNs) are typically considered as relatively simple architectures, which come along with complicated learning algorithms. This paper has a different view: We start from the fact that RNNs can model any high dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a well-known example of this modeling philosophy. Here a temporal algorithm is transferred into an architectural framework such that the learning can be performed by an extension of standard error backpropagation. We introduce 12 tricks that not only provide deeper insights in the functioning of RNNs but also improve the identification of underlying dynamical system from data.",
"title": ""
},
{
"docid": "7d0bbf3a83881a97b0217b427b596b76",
"text": "This paper proposes a novel tracker which is controlled by sequentially pursuing actions learned by deep reinforcement learning. In contrast to the existing trackers using deep networks, the proposed tracker is designed to achieve a light computation as well as satisfactory tracking accuracy in both location and scale. The deep network to control actions is pre-trained using various training sequences and fine-tuned during tracking for online adaptation to target and background changes. The pre-training is done by utilizing deep reinforcement learning as well as supervised learning. The use of reinforcement learning enables even partially labeled data to be successfully utilized for semi-supervised learning. Through evaluation of the OTB dataset, the proposed tracker is validated to achieve a competitive performance that is three times faster than state-of-the-art, deep network–based trackers. The fast version of the proposed method, which operates in real-time on GPU, outperforms the state-of-the-art real-time trackers.",
"title": ""
},
{
"docid": "99511c1267d396d3745f075a40a06507",
"text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. (1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …",
"title": ""
},
{
"docid": "046ae00fa67181dff54e170e48a9bacf",
"text": "For the evaluation of grasp quality, different measures have been proposed that are based on wrench spaces. Almost all of them have drawbacks that derive from the non-uniformity of the wrench space, composed of force and torque dimensions. Moreover, many of these approaches are computationally expensive. We address the problem of choosing a proper task wrench space to overcome the problems of the non-uniform wrench space and show how to integrate it in a well-known, high precision and extremely fast computable grasp quality measure.",
"title": ""
},
{
"docid": "eb228251938f240cdcf7fed80e3079a6",
"text": "We introduce an approach to biasing language models towards known contexts without requiring separate language models or explicit contextually-dependent conditioning contexts. We do so by presenting an alternative ASR objective, where we predict the acoustics and words given the contextual cue, such as the geographic location of the speaker. A simple factoring of the model results in an additional biasing term, which effectively indicates how correlated a hypothesis is with the contextual cue (e.g., given the hypothesized transcript, how likely is the user’s known location). We demonstrate that this factorization allows us to train relatively small contextual models which are effective in speech recognition. An experimental analysis shows a perplexity reduction of up to 35% and a relative reduction in word error rate of 1.6% on a targeted voice search dataset when using the user’s coarse location as a contextual cue.",
"title": ""
},
{
"docid": "53ab46387cb1c04e193d2452c03a95ad",
"text": "Real time control of five-axis machine tools requires smooth generation of feed, acceleration and jerk in CNC systems without violating the physical limits of the drives. This paper presents a feed scheduling algorithm for CNC systems to minimize the machining time for five-axis contour machining of sculptured surfaces. The variation of the feed along the five-axis tool-path is expressed in a cubic B-spline form. The velocity, acceleration and jerk limits of the five axes are considered in finding the most optimal feed along the toolpath in order to ensure smooth and linear operation of the servo drives with minimal tracking error. The time optimal feed motion is obtained by iteratively modulating the feed control points of the B-spline to maximize the feed along the tool-path without violating the programmed feed and the drives’ physical limits. Long tool-paths are handled efficiently by applying a moving window technique. The improvement in the productivity and linear operation of the five drives is demonstrated with five-axis simulations and experiments on a CNC machine tool. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6987e20daf52bcf25afe6a7f0a95a730",
"text": "Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance thereby favoring better sparsities and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary, and uses it to remove aliasing and noise in one step, and subsequently restores and fills-in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.",
"title": ""
},
{
"docid": "65b2d6ea5e1089c52378b4fd6386224c",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "3d337d14e8f8cd2ded2c9f00cd3f2274",
"text": "BACKGROUND AND PURPOSE\nThe primary objective of this study is to establish the validity and reliability of a perceived medication knowledge and confidence survey instrument (Okere-Renier Survey).\n\n\nMETHODS\nTwo-stage psychometric analyses were conducted to assess reliability (Cronbach's alpha > .70) of the associated knowledge scale. To evaluate the construct validity, exploratory and confirmatory factor analyses were performed.\n\n\nRESULTS\nExploratory factor analysis (EFA) revealed three subscale measures and confirmatory factor analysis (CFA) indicated an acceptable fit to the data (goodness-of-fit index [GFI = 0.962], adjusted goodness-of-fit index [AGFI = 0.919], root mean square residual [RMR = 0.065], root mean square error of approximation [RMSEA] = 0.073). A high internal consistency with Cronbach's a of .833 and .744 were observed in study Stages 1 and 2, respectively.\n\n\nCONCLUSIONS\nThe Okere-Renier Survey is a reliable instrument for predicting patient-perceived level of medication knowledge and confidence.",
"title": ""
},
{
"docid": "16bce7d93ebedbf136bc6f6bbd6278c6",
"text": "Conversational systems have come a long way since their inception in the 1960s. After decades of research and development, we have seen progress from Eliza and Parry in the 1960s and 1970s, to task-completion systems as in the Defense Advanced Research Projects Agency (DARPA) communicator program in the 2000s, to intelligent personal assistants such as Siri, in the 2010s, to today’s social chatbots like XiaoIce. Social chatbots’ appeal lies not only in their ability to respond to users’ diverse requests, but also in being able to establish an emotional connection with users. The latter is done by satisfying users’ need for communication, affection, as well as social belonging. To further the advancement and adoption of social chatbots, their design must focus on user engagement and take both intellectual quotient (IQ) and emotional quotient (EQ) into account. Users should want to engage with a social chatbot; as such, we define the success metric for social chatbots as conversation-turns per session (CPS). Using XiaoIce as an illustrative example, we discuss key technologies in building social chatbots from core chat to visual awareness to skills. We also show how XiaoIce can dynamically recognize emotion and engage the user throughout long conversations with appropriate interpersonal responses. As we become the first generation of humans ever living with artificial intelligenc (AI), we have a responsibility to design social chatbots to be both useful and empathetic, so they will become ubiquitous and help society as a whole.",
"title": ""
},
{
"docid": "d47c543f396059cc0ab6c5d98f8db35c",
"text": "Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they miti-gate the need of task specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets.",
"title": ""
},
{
"docid": "3f2081f9c1cf10e9ec27b2541f828320",
"text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. However, aircraft engine systems are complex with both intangible and uncertain factors, it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.",
"title": ""
}
] | scidocsrr |
b067267dc90e3f058449faafd07a9fda | A Framework for Investigating the Impact of Information Systems Capability on Strategic Information Systems Planning Outcomes | [
{
"docid": "5ebd4fc7ee26a8f831f7fea2f657ccdd",
"text": "1 This article was reviewed and accepted by all the senior editors, including the editor-in-chief. Articles published in future issues will be accepted by just a single senior editor, based on reviews by members of the Editorial Board. 2 Sincere thanks go to Anna Dekker and Denyse O’Leary for their assistance with this research. Funding was generously provided by the Advanced Practices Council of the Society for Information Management and by the Social Sciences and Humanities Research Council of Canada. An earlier version of this manuscript was presented at the Academy of Management Conference in Toronto, Canada, in August 2000. 3 In this article, the terms information systems (IS) and information technology (IT) are used interchangeably. 4 Regardless of whether IS services are provided internally (in a centralized, decentralized, or federal manner) or are outsourced, we assume the boundaries of the IS function can be identified. Thus, the fit between the unit(s) providing IS services and the rest of the organization can be examined. and books have been written on the subject, firms continue to demonstrate limited alignment.",
"title": ""
}
] | [
{
"docid": "8dfa68e87eee41dbef8e137b860e19cc",
"text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.",
"title": ""
},
{
"docid": "e8cf458c60dc7b4a8f71df2fabf1558d",
"text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.",
"title": ""
},
{
"docid": "fc6726bddf3d70b7cb3745137f4583c1",
"text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.",
"title": ""
},
{
"docid": "9efd74df34775bc4c7a08230e67e990b",
"text": "OBJECTIVE\nFirearm violence is a significant public health problem in the United States, and alcohol is frequently involved. This article reviews existing research on the relationships between alcohol misuse; ownership, access to, and use of firearms; and the commission of firearm violence, and discusses the policy implications of these findings.\n\n\nMETHOD\nNarrative review augmented by new tabulations of publicly-available data.\n\n\nRESULTS\nAcute and chronic alcohol misuse is positively associated with firearm ownership, risk behaviors involving firearms, and risk for perpetrating both interpersonal and self-directed firearm violence. In an average month, an estimated 8.9 to 11.7 million firearm owners binge drink. For men, deaths from alcohol-related firearm violence equal those from alcohol-related motor vehicle crashes. Enforceable policies restricting access to firearms for persons who misuse alcohol are uncommon. Policies that restrict access on the basis of other risk factors have been shown to reduce risk for subsequent violence.\n\n\nCONCLUSION\nThe evidence suggests that restricting access to firearms for persons with a documented history of alcohol misuse would be an effective violence prevention measure. Restrictions should rely on unambiguous definitions of alcohol misuse to facilitate enforcement and should be rigorously evaluated.",
"title": ""
},
{
"docid": "384943f815aadbad990cc42ca9f6f9d0",
"text": "The N×N queen's puzzle is the problem of placing N chess queen on an N×N chess board so that no two queens attack each other. This approach is a classical problem in the artificial intelligence area. A solution requires that no two queens share the same row, column or diagonal. These problems for computer scientists present practical solution to many useful applications and have become an important issue. In this paper we proposed new resolution for solving n-Queens used combination of depth firs search (DFS) and breathe first search (BFS) techniques. The proposed algorithm act based on placing queens on chess board directly. This is possible by regular pattern on the basis of the work law of minister. The results show that performance and run time in this approach better then back tracking methods and hill climbing modes.",
"title": ""
},
{
"docid": "a45e7855be4a99ef2d382e914650e8bc",
"text": "We propose a novel type inference technique for Python programs. Type inference is difficult for Python programs due to their heavy dependence on external APIs and the dynamic language features. We observe that Python source code often contains a lot of type hints such as attribute accesses and variable names. However, such type hints are not reliable. We hence propose to use probabilistic inference to allow the beliefs of individual type hints to be propagated, aggregated, and eventually converge on probabilities of variable types. Our results show that our technique substantially outperforms a state-of-the-art Python type inference engine based on abstract interpretation.",
"title": ""
},
{
"docid": "0289858bb9002e00d753e1ed2da8b204",
"text": "This paper presents a motion planning method for mobile manipulators for which the base locomotion is less precise than the manipulator control. In such a case, it is advisable to move the base to discrete poses from which the manipulator can be deployed to cover a prescribed trajectory. The proposed method finds base poses that not only cover the trajectory but also meet constraints on a measure of manipulability. We propose a variant of the conventional manipulability measure that is suited to the trajectory control of the end effector of the mobile manipulator along an arbitrary curve in three space. Results with implementation on a mobile manipulator are discussed.",
"title": ""
},
{
"docid": "9d34171c2fcc8e36b2fb907fe63fc08d",
"text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed, rather a simple commercial webcam working in visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees , comparable with the vast majority of existing remote gaze trackers.",
"title": ""
},
{
"docid": "a8f5f7c147c1ac8cabf86d4809aa3f65",
"text": "Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest.",
"title": ""
},
{
"docid": "2cfb782a527b1806eda302c4c7b63219",
"text": "The latest version of the ISO 26262 standard from 2016 represents the state of the art for a safety-guided development of safety-critical electric/electronic vehicle systems. These vehicle systems include advanced driver assistance systems and vehicle guidance systems. The development process proposed in the ISO 26262 standard is based upon multiple V-models, and defines activities and work products for each process step. In many of these process steps, scenario based approaches can be applied to achieve the defined work products for the development of automated driving functions. To accomplish the work products of different process steps, scenarios have to focus on various aspects like a human understandable notation or a description via state variables. This leads to contradictory requirements regarding the level of detail and way of notation for the representation of scenarios. In this paper, the authors discuss requirements for the representation of scenarios in different process steps defined by the ISO 26262 standard, propose a consistent terminology based on prior publications for the identified levels of abstraction, and demonstrate how scenarios can be systematically evolved along the phases of the development process outlined in the ISO 26262 standard.",
"title": ""
},
{
"docid": "54396daee78bb3ca974925159d6dec15",
"text": "Classroom Salon is an on-line social collaboration tool that allows instructors to create, manage, and analyze social net- works (called Salons) to enhance student learning. Students in a Salon can cooperatively create, comment on, and modify documents. Classroom Salon provides tools that allow the instructor to monitor the social networks and gauge both student participation and individual effectiveness. This pa- per describes Classroom Salon, provides some use cases that we have developed for introductory computer science classes and presents some preliminary observations of using this tool in several computer science courses at Carnegie Mellon University.",
"title": ""
},
{
"docid": "ddecb743bc098a3e31ca58bc17810cf1",
"text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.",
"title": ""
},
{
"docid": "99582c5c50f5103f15a6777af94c6584",
"text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues— such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.",
"title": ""
},
{
"docid": "a9f8f3946dd963066006f19a251eef7c",
"text": "Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desk-top interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe Atmosphere and the pedagogical affordances and constraints of the inscription tools, discourse tools, experiential tools, and resource tools of each application. The purpose of this review is to discuss the implications of using each application for educational initiatives by exploring how the various design features of each may support and enhance the design of interactive learning environments.",
"title": ""
},
{
"docid": "f631ceda1a738c12ea71846650a11372",
"text": "An object recognition engine needs to extract discriminative features from data representing an object and accurately classify the object to be of practical use in robotics. Furthermore, the classification of the object must be rapidly performed in the presence of a voluminous stream of data. These conditions call for a distributed and scalable architecture that can utilize a cloud computing infrastructure for performing object recognition. This paper introduces a Cloud-based Object Recognition Engine (CORE) to address these needs. CORE is able to train on large-scale datasets, perform classification of 3D point cloud data, and efficiently transfer data in a robotic network.",
"title": ""
},
{
"docid": "cdee55e977d5809b87f3e8be98acaaa3",
"text": "Proximity effects caused by uneven distribution of current among the insulated wire strands of stator multi-strand windings can contribute significant bundle-level proximity losses in permanent magnet (PM) machines operating at high speeds. Three-dimensional finite element analysis is used to investigate the effects of transposition of the insulated strands in stator winding bundles on the copper losses in high-speed machines. The investigation confirms that the bundle proximity losses must be considered in the design of stator windings for high-speed machines, and the amplitude of these losses decreases monotonically as the level of transposition is increased from untransposed to fully-transposed (360°) wire bundles. Analytical models are introduced to estimate the currents in strands in a slot for a high-speed machine.",
"title": ""
},
{
"docid": "4308a50a3a7cf1d426cb476545147f50",
"text": "In this paper, the relationship between the numbers of stator slots, winding polarities, and rotor poles for variable reluctance resolvers is derived and verified, which makes it possible for the same stator and winding to be shared by the rotors with different poles. Based on the established relationship, a simple factor is introduced to evaluate the level of voltage harmonics as an index for choosing appropriate stator slot and rotor pole combinations. With due account for easy manufacturing, alternate windings are proposed without apparent deterioration in voltage harmonics of a resolver. In particular, alternate windings with nonoverlapping uniform coils are proved to be possible for output windings in some stator slot and rotor pole combinations, which further simplify the manufacture process. Finite element method is adopted to verify the proposed design, together with experiments on the prototypes.",
"title": ""
},
{
"docid": "c8db1af44dccc23bf0e06dcc8c43bca6",
"text": "A reconfigurable mechanism for varying the footprint of a four-wheeled omnidirectional vehicle is developed and applied to wheelchairs. The variable footprint mechanism consists of a pair of beams intersecting at a pivotal point in the middle. Two pairs of ball wheels at the diagonal positions of the vehicle chassis are mounted, respectively, on the two beams intersecting in the middle. The angle between the two beams varies actively so that the ratio of the wheel base to the tread may change. Four independent servo motors driving the four ball wheels allow the vehicle to move in an arbitrary direction from an arbitrary configuration as well as to change the angle between the two beams and thereby change the footprint. The objective of controlling the beam angle is threefold. One is to augment static stability by varying the footprint so that the mass centroid of the vehicle may be kept within the footprint at all times. The second is to reduce the width of the vehicle when going through a narrow doorway. The third is to apparently change the gear ratio relating the vehicle speed to individual actuator speeds. First the concept of the varying footprint mechanism is described, and its kinematic behavior is analyzed, followed by the three control algorithms for varying the footprint. A prototype vehicle for an application as a wheelchair platform is designed, built, and tested.",
"title": ""
},
{
"docid": "cfdee8bd0802872f4bd216df226f9c35",
"text": "Single-unit recording studies in the macaque have carefully documented the modulatory effects of attention on the response properties of visual cortical neurons. Attention produces qualitatively different effects on firing rate, depending on whether a stimulus appears alone or accompanied by distracters. Studies of contrast gain control in anesthetized mammals have found parallel patterns of results when the luminance contrast of a stimulus increases. This finding suggests that attention has co-opted the circuits that mediate contrast gain control and that it operates by increasing the effective contrast of the attended stimulus. Consistent with this idea, microstimulation of the frontal eye fields, one of several areas that control the allocation of spatial attention, induces spatially local increases in sensitivity both at the behavioral level and among neurons in area V4, where endogenously generated attention increases contrast sensitivity. Studies in the slice have begun to explain how modulatory signals might cause such increases in sensitivity.",
"title": ""
},
{
"docid": "82e6da590f8f836c9a06c26ef4440005",
"text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.",
"title": ""
}
] | scidocsrr |
5cf491216962a850c261749dc519155d | Deep Learning For Video Saliency Detection | [
{
"docid": "b716af4916ac0e4a0bf0b040dccd352b",
"text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.",
"title": ""
},
{
"docid": "ed9d6571634f30797fb338a928cc8361",
"text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"title": ""
}
] | [
{
"docid": "6b79d1db9565fc7540d66ff8bf5aae1f",
"text": "Recognizing sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and real world facts. Most of the current sarcasm detection systems consider only the utterance in isolation. There are some limited attempts toward taking into account the conversational context. In this paper, we propose an interpretable end-to-end model that combines information from both the utterance and the conversational context to detect sarcasm, and demonstrate its effectiveness through empirical evaluations. We also study the behavior of the proposed model to provide explanations for the model’s decisions. Importantly, our model is capable of determining the impact of utterance and conversational context on the model’s decisions. Finally, we provide an ablation study to illustrate the impact of different components of the proposed model.",
"title": ""
},
{
"docid": "904454a191da497071ee9b835561c6e6",
"text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.",
"title": ""
},
{
"docid": "6669f61c302d79553a3e49a4f738c933",
"text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.",
"title": ""
},
{
"docid": "d922dbcdd2fb86e7582a4fb78990990e",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "c2453816adf52157fca295274a4d8627",
"text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. Low-cost gas sensors are used to effectively perceive the environment by mounting them on top of mobile vehicles, for example, using a public transport network. Thus, these sensors are part of a mobile network and perform from time to time measurements in each others vicinity. In this paper, we study three calibration algorithms that exploit co-located sensor measurements to enhance sensor calibration and consequently the quality of the pollution measurements on-the-fly. Forward calibration, based on a traditional approach widely used in the literature, is used as performance benchmark for two novel algorithms: backward and instant calibration. We validate all three algorithms with real ozone pollution measurements carried out in an urban setting by comparing gas sensor output to high-quality measurements from analytical instruments. We find that both backward and instant calibration reduce the average measurement error by a factor of two compared to forward calibration. Furthermore, we unveil the arising difficulties if sensor calibration is not based on reliable reference measurements but on sensor readings of low-cost gas sensors which is inevitable in a mobile scenario with only a few reliable sensors. We propose a solution and evaluate its effect on the measurement accuracy in experiments and simulation.",
"title": ""
},
{
"docid": "c0ee7fef7f96db6908f49170c6c75b2c",
"text": "Improving Neural Networks with Dropout Nitish Srivastava Master of Science Graduate Department of Computer Science University of Toronto 2013 Deep neural nets with a huge number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from a neural network during training. This prevents the units from co-adapting too much. Dropping units creates thinned networks during training. The number of possible thinned networks is exponential in the number of units in the network. At test time all possible thinned networks are combined using an approximate model averaging procedure. Dropout training followed by this approximate model combination significantly reduces overfitting and gives major improvements over other regularization methods. In this work, we describe models that improve the performance of neural networks using dropout, often obtaining state-of-the-art results on benchmark datasets.",
"title": ""
},
{
"docid": "b73a9a7770a2bbd5edcc991d7b848371",
"text": "This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnet, as well as with variable flux capability. In addition, this paper also describes their relationships with doubly-salient permanent magnet machines and flux reversal permanent magnet machines.",
"title": ""
},
{
"docid": "89526592b297342697c131daba388450",
"text": "Fundamental and advanced developments in neum-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuuy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The f u u y models under the framework of adaptive networks is called Adaptive-Network-based Fuzzy Inference System (ANFIS), which possess certain advantages over neural networks. We introduce the design methods f o r ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed.",
"title": ""
},
{
"docid": "e36e0c8659b8bae3acf0f178fce362c3",
"text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.",
"title": ""
},
{
"docid": "35812bda0819769efb1310d1f6d5defd",
"text": "Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.",
"title": ""
},
{
"docid": "75952f3945628c15a66b7288e6c1d1a7",
"text": "Most of the samples discovered are variations of known malicious programs and thus have similar structures, however, there is no method of malware classification that is completely effective. To address this issue, the approach proposed in this paper represents a malware in terms of a vector, in which each feature consists of the amount of APIs called from a Dynamic Link Library (DLL). To determine if this approach is useful to classify malware variants into the correct families, we employ Euclidean Distance and a Multilayer Perceptron with several learning algorithms. The experimental results are analyzed to determine which method works best with the approach. The experiments were conducted with a database that contains real samples of worms and trojans and show that is possible to classify malware variants using the number of functions imported per library. However, the accuracy varies depending on the method used for the classification.",
"title": ""
},
{
"docid": "a2d7fc045b1c8706dbfe3772a8f6ef70",
"text": "This paper is concerned with the problem of domain adaptation with multiple sources from a causal point of view. In particular, we use causal models to represent the relationship between the features X and class label Y , and consider possible situations where different modules of the causal model change with the domain. In each situation, we investigate what knowledge is appropriate to transfer and find the optimal target-domain hypothesis. This gives an intuitive interpretation of the assumptions underlying certain previous methods and motivates new ones. We finally focus on the case where Y is the cause for X with changing PY and PX|Y , that is, PY and PX|Y change independently across domains. Under appropriate assumptions, the availability of multiple source domains allows a natural way to reconstruct the conditional distribution on the target domain; we propose to model PX|Y (the process to generate effect X from cause Y ) on the target domain as a linear mixture of those on source domains, and estimate all involved parameters by matching the target-domain feature distribution. Experimental results on both synthetic and real-world data verify our theoretical results. Traditional machine learning relies on the assumption that both training and test data are from the same distribution. In practice, however, training and test data are probably sampled under different conditions, thus violating this assumption, and the problem of domain adaptation (DA) arises. Consider remote sensing image classification as an example. Suppose we already have several data sets on which the class labels are known; they are called source domains here. For a new data set, or a target domain, it is usually difficult to find the ground truth reference labels, and we aim to determine the labels by making use of the information from the source domains. Note that those domains are usually obtained in different areas and time periods, and that the corresponding data distribution various due to the change in illumination conditions, physical factors related to ground (e.g., different soil moisture or composition), vegetation, and atmospheric conditions. Other well-known instances of this situation include sentiment data analysis (Blitzer, Dredze, and Pereira 2007) and flow cytometry data analysis (Blanchard, Lee, and Scott 2011). DA approaches have Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. many applications in varies areas including natural language processing, computer vision, and biology. For surveys on DA, see, e.g., (Jiang 2008; Pan and Yang 2010; Candela et al. 2009). In this paper, we consider the situation with n source domains on which both the features X and label Y are given, i.e., we are given (x,y) = (x k , y (i) k ) mi k=1, where i = 1, ..., n, and mi is the sample size of the ith source domain. Our goal is to find the classifier for the target domain, on which only the features x = (xk) m k=1 are available. Here we are concerned with a difficult scenario where no labeled point is available in the target domain, known as unsupervised domain adaptation. Since PXY changes across domains, we have to find what knowledge in the source domains should be transferred to the target one. Previous work in domain adaptation has usually assumed that PX changes but PY |X remain the same, i.e., the covariate shift situation; see, e.g., (Shimodaira 2000; Huang et al. 2007; Sugiyama et al. 
2008; Ben-David, Shalev-Shwartz, and Urner 2012). It is also known as sample selection bias (particularly on the features X) in (Zadrozny 2004). In practice it is very often the case that both PX and PY|X change simultaneously across domains. For instance, both of them are likely to change over time and location for a satellite image classification system. If the data distribution changes arbitrarily across domains, clearly knowledge from the sources may not help in predicting Y on the target domain (Rosenstein et al. 2005). One has to find what type of information should be transferred from sources to the target. One possibility is to assume that the change in both PX and PY|X is due to the change in PY, while PX|Y remains the same, also known as prior probability shift (Storkey 2009; Plessis and Sugiyama 2012) or target shift (Zhang et al. 2013). The latter further models the change in PX|Y caused by a location-scale (LS) transformation of the features for each class. The constraint of the LS transformation renders PX|Y on the target domain, denoted by P^t_{X|Y}, identifiable; however, it might be too restrictive. Fortunately, the availability of multiple source domains provides more hints for finding P^t_{X|Y}, as well as P^t_{Y|X}. Several algorithms have been proposed to combine knowledge from multiple source domains. For instance, (Mansour, Mohri, and Rostamizadeh 2008) proposed to form the target hypothesis by combining source hypotheses with a distribution weighted rule. (Gao et al. 2008), (Duan et al. 2009), and (Chattopadhyay et al. 2011) combine the predictions made by the source hypotheses, with the weights determined in different ways. An intuitive interpretation of the assumptions underlying those algorithms would facilitate choosing or developing DA methods for the problem at hand. To the best of our knowledge, however, it is still missing in the literature. One of our contributions in this paper is to provide such an interpretation. This paper studies the multi-source DA problem from a causal point of view where we consider the underlying data generating process behind the observed domains. We are particularly interested in what types of information stay the same, what types of information change, and how they change across domains. This enables us to construct the optimal hypothesis for the target domain in various situations. To this end, we use causal models to represent the relationship between X and Y, because they provide a compact description of the properties of the change in the data distribution. They, for instance, help characterize transportability of experimental findings (Pearl and Bareinboim 2011) or recoverability from selection bias (Bareinboim, Tian, and Pearl 2014). As another contribution, we further focus on a typical DA scenario where both PY and PX|Y (or the causal mechanism to generate effect X from cause Y) change across domains, but their changes are independent of each other, as implied by the causal model Y → X. We assume that the source domains contain rich information such that for each class, P^t_{X|Y} can be approximated by a linear mixture of PX|Y on the source domains. Together with other mild conditions on PX|Y, we then show that P^t_{X|Y}, as well as P^t_Y, is identifiable (or can be uniquely recovered). We present a computationally efficient method to estimate the involved parameters based on kernel mean distribution embedding (Smola et al. 2007; Gretton et al. 2007), followed by several approaches to constructing the target classifier using those parameters. One might wonder how to find the causal information underlying the data to facilitate domain adaptation. We note that in practice, background causal knowledge is usually available, helping to formulate how to transfer the knowledge from source domains to the target. Even if this is not the case, multiple source domains with different data distributions may allow one to identify the causal structure, since the causal knowledge can be seen from the change in data distributions; see, e.g., (Tian and Pearl 2001). 1. Possible DA Situations and Their Solutions. DA can be considered as a learning problem in nonstationary environments (Sugiyama and Kawanabe 2012). It is helpful to find how the data distribution changes; this provides the clues for finding the learning machine for the target domain. The causal model also describes how the components of the joint distribution are related to each other, which, for instance, gives a causal explanation of the behavior of semi-supervised learning (Schölkopf et al. 2012). Table 1 lists the notation used in the paper.",
"title": ""
},
{
"docid": "bd590555337d3ada2c641c5f1918cf2c",
"text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.",
"title": ""
},
{
"docid": "9ff6d7a36646b2f9170bd46d14e25093",
"text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.",
"title": ""
},
{
"docid": "d24331326c59911f9c1cdc5dd5f14845",
"text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zero-voltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters",
"title": ""
},
{
"docid": "30de4ba4607cfcc106361fa45b89a628",
"text": "The purpose of this study is to characterize and understand the long-term behavior of the output from megavoltage radiotherapy linear accelerators. Output trends of nine beams from three linear accelerators over a period of more than three years are reported and analyzed. Output, taken during daily warm-up, forms the basis of this study. The output is measured using devices having ion chambers. These are not calibrated by accredited dosimetry laboratory, but are baseline-compared against monthly output which is measured using calibrated ion chambers. We consider the output from the daily check devices as it is, and sometimes normalized it by the actual output measured during the monthly calibration of the linacs. The data show noisy quasi-periodic behavior. The output variation, if normalized by monthly measured \"real' output, is bounded between ± 3%. Beams of different energies from the same linac are correlated with a correlation coefficient as high as 0.97, for one particular linac, and as low as 0.44 for another. These maximum and minimum correlations drop to 0.78 and 0.25 when daily output is normalized by the monthly measurements. These results suggest that the origin of these correlations is both the linacs and the daily output check devices. Beams from different linacs, independent of their energies, have lower correlation coefficient, with a maximum of about 0.50 and a minimum of almost zero. The maximum correlation drops to almost zero if the output is normalized by the monthly measured output. Some scatter plots of pairs of beam output from the same linac show band-like structures. These structures are blurred when the output is normalized by the monthly calibrated output. Fourier decomposition of the quasi-periodic output is consistent with a 1/f power law. The output variation appears to come from a distorted normal distribution with a mean of slightly greater than unity. The quasi-periodic behavior is manifested in the seasonally averaged output, showing annual variability with negative variations in the winter and positive in the summer. This trend is weakened when the daily output is normalized by the monthly calibrated output, indicating that the variation of the periodic component may be intrinsic to both the linacs and the daily measurement devices. Actual linac output was measured monthly. It needs to be adjusted once every three to six months for our tolerance and action levels. If these adjustments are artificially removed, then there is an increase in output of about 2%-4% per year.",
"title": ""
},
{
"docid": "ed0444685c9a629c7d1fda7c4912fd55",
"text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.",
"title": ""
},
{
"docid": "adb2d3d17599c0aa36236f923adc934f",
"text": "This paper reports, to our knowledge, the first spherical induction motor (SIM) operating with closed loop control. The motor can produce up to 4 Nm of torque along arbitrary axes with continuous speeds up to 300 rpm. The motor's rotor is a two-layer copper-over-iron spherical shell. The stator has four independent inductors that generate thrust forces on the rotor surface. The motor is also equipped with four optical mouse sensors that measure surface velocity to estimate the rotor's angular velocity, which is used for vector control of the inductors and control of angular velocity and orientation. Design considerations including torque distribution for the inductors, angular velocity sensing, angular velocity control, and orientation control are presented. Experimental results show accurate tracking of velocity and orientation commands.",
"title": ""
},
{
"docid": "40083241b498dc6ac14de7dcc0b38399",
"text": "We report on an automated runtime anomaly detection method at the application layer of multi-node computer systems. Although several network management systems are available in the market, none of them have sufficient capabilities to detect faults in multi-tier Web-based systems with redundancy. We model a Web-based system as a weighted graph, where each node represents a \"service\" and each edge represents a dependency between services. Since the edge weights vary greatly over time, the problem we address is that of anomaly detection from a time sequence of graphs.In our method, we first extract a feature vector from the adjacency matrix that represents the activities of all of the services. The heart of our method is to use the principal eigenvector of the eigenclusters of the graph. Then we derive a probability distribution for an anomaly measure defined for a time-series of directional data derived from the graph sequence. Given a critical probability, the threshold value is adaptively updated using a novel online algorithm.We demonstrate that a fault in a Web application can be automatically detected and the faulty services are identified without using detailed knowledge of the behavior of the system.",
"title": ""
},
{
"docid": "aa3e8c4e4695d8c372987c8e409eb32f",
"text": "We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system.",
"title": ""
}
] | scidocsrr |
c0c7f6e365f2bdd184a9df5cfc5f8587 | A Practical Wireless Attack on the Connected Car and Security Protocol for In-Vehicle CAN | [
{
"docid": "8d041241f1a587b234c8784dea9088a4",
"text": "Modern intelligent vehicles have electronic control units containing firmware that enables various functions in the vehicle. New firmware versions are constantly developed to remove bugs and improve functionality. Automobile manufacturers have traditionally performed firmware updates over cables but in the near future they are aiming at conducting firmware updates over the air, which would allow faster updates and improved safety for the driver. In this paper, we present a protocol for secure firmware updates over the air. The protocol provides data integrity, data authentication, data confidentiality, and freshness. In our protocol, a hash chain is created of the firmware, and the first packet is signed by a trusted source, thus authenticating the whole chain. Moreover, the packets are encrypted using symmetric keys. We discuss the practical considerations that exist for implementing our protocol and show that the protocol is computationally efficient, has low memory overhead, and is suitable for wireless communication. Therefore, it is well suited to the limited hardware resources in the wireless vehicle environment.",
"title": ""
}
] | [
{
"docid": "ae83a2258907f00500792178dc65340d",
"text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes: namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positive is 7.3/scan and the location of all detected nodules are recognized correctly.",
"title": ""
},
{
"docid": "ee6612fa13482f7e3bbc7241b9e22297",
"text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.",
"title": ""
},
{
"docid": "5cd68b483657180231786dc5a3407c85",
"text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leiber distance.",
"title": ""
},
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
},
{
"docid": "a9121a1211704006dc8de14a546e3bdc",
"text": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "4d2fa4e81281f40626028192cf2f71ff",
"text": "In this tutorial paper, we present a general architecture for digital clock and data recovery (CDR) for high-speed binary links. The architecture is based on replacing the analog loop filter and voltage-controlled oscillator (VCO) in a typical analog phase-locked loop (PLL)-based CDR with digital components. We provide a linearized analysis of the bang-bang phase detector and CDR loop including the effects of decimation and self-noise. Additionally, we provide measured results from an implementation of the digital CDR system which are directly comparable to the linearized analysis, plus measurements of the limit cycle behavior which arises in these loops when incoming jitter is small. Finally, the relative advantages of analog and digital implementations of the CDR for high-speed binary links is considered",
"title": ""
},
{
"docid": "ebf92a0faf6538f1d2b85fb2aa497e80",
"text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is the reverse redundancy effect is true for multimedia learning of English as a foreign language for students where textual information was foreign to them. In such situations, the redundant synchronized on-screen text did not impede learning; rather it reduced the cognitive load and thereby enhanced learning.",
"title": ""
},
{
"docid": "1a153e0afca80aaf35ffa1b457725fa3",
"text": "Cloud computing can reduce mainframe management costs, so more and more users choose to build their own cloud hosting environment. In cloud computing, all the commands through the network connection, therefore, information security is particularly important. In this paper, we will explore the types of intrusion detection systems, and integration of these types, provided an effective and output reports, so system administrators can understand the attacks and damage quickly. With the popularity of cloud computing, intrusion detection system log files are also increasing rapidly, the effect is limited and inefficient by using the conventional analysis system. In this paper, we use Hadoop's MapReduce algorithm analysis of intrusion detection System log files, the experimental results also confirmed that the calculation speed can be increased by about 89%. For the system administrator, IDS Log Cloud Analysis System (called ICAS) can provide fast and high reliability of the system.",
"title": ""
},
{
"docid": "ed4d6179e2e432e752d7598c0db6ec59",
"text": "In image deblurring, a fundamental problem is that the blur kernel suppresses a number of spatial frequencies that are difficult to recover reliably. In this paper, we explore the potential of a class-specific image prior for recovering spatial frequencies attenuated by the blurring process. Specifically, we devise a prior based on the class-specific subspace of image intensity responses to band-pass filters. We learn that the aggregation of these subspaces across all frequency bands serves as a good class-specific prior for the restoration of frequencies that cannot be recovered with generic image priors. In an extensive validation, our method, equipped with the above prior, yields greater image quality than many state-of-the-art methods by up to 5 dB in terms of image PSNR, across various image categories including portraits, cars, cats, pedestrians and household objects.",
"title": ""
},
{
"docid": "705eca342fb014d0ae943a17c60a47c0",
"text": "This is a critical design paper offering a possible scenario of use intended to provoke reflection about values and politics of design in persuasive computing. We describe the design of a system - Fit4Life - that encourages individuals to address the larger goal of reducing obesity in society by promoting individual healthy behaviors. Using the Persuasive Systems Design Model [26], this paper outlines the Fit4Life persuasion context, the technology, its use of persuasive messages, and an experimental design to test the system's efficacy. We also contribute a novel discussion of the ethical and sociocultural considerations involved in our design, an issue that has remained largely unaddressed in the existing persuasive technologies literature [29].",
"title": ""
},
{
"docid": "8d432d8fd4a6d0f368a608ebca5d67d7",
"text": "The origin and continuation of mankind is based on water. Water is one of the most abundant resources on earth, covering three-fourths of the planet’s surface. However, about 97% of the earth’s water is salt water in the oceans, and a tiny 3% is fresh water. This small percentage of the earth’s water—which supplies most of human and animal needs—exists in ground water, lakes and rivers. The only nearly inexhaustible sources of water are the oceans, which, however, are of high salinity. It would be feasible to address the water-shortage problem with seawater desalination; however, the separation of salts from seawater requires large amounts of energy which, when produced from fossil fuels, can cause harm to the environment. Therefore, there is a need to employ environmentally-friendly energy sources in order to desalinate seawater. After a historical introduction into desalination, this paper covers a large variety of systems used to convert seawater into fresh water suitable for human use. It also covers a variety of systems, which can be used to harness renewable energy sources; these include solar collectors, photovoltaics, solar ponds and geothermal energy. Both direct and indirect collection systems are included. The representative example of direct collection systems is the solar still. Indirect collection systems employ two subsystems; one for the collection of renewable energy and one for desalination. For this purpose, standard renewable energy and desalination systems are most often employed. Only industrially-tested desalination systems are included in this paper and they comprise the phase change processes, which include the multistage flash, multiple effect boiling and vapour compression and membrane processes, which include reverse osmosis and electrodialysis. The paper also includes a review of various systems that use renewable energy sources for desalination. Finally, some general guidelines are given for selection of desalination and renewable energy systems and the parameters that need to be considered. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b77d257b62ee7af929b64168c62fd785",
"text": "The analysis of time series data is of interest to many application domains. But this analysis is challenging due to many reasons such as missing data in the series, unstructured nature of the data and errors in the data collection procedure, measuring equipment, etc. The problem of missing data while matching two time series is dealt with either by predicting a value for the missing data using the already collected data, or by completely ignoring the missing values. In this paper, we present an approach where we make use of the characteristics of the Mahalanobis Distance to inherently accommodate the missing values while finding the best match between two time series. Using this approach, we have designed two algorithms which can find the best match for a given query series in a candidate series, without imputing the missing values in the candidate. The initial algorithm finds the best nonwarped match between the candidate and the query time series, while the second algorithm is an extension of the initial algorithm to find the best match in the case of warped data using a Dynamic Time Warping (DTW) like algorithm. Thus, with experimental results we go on to conclude that the proposed warping algorithm is a good method for matching between two time series with warping and missing data.",
"title": ""
},
{
"docid": "e299966eded9f65f6446b3cd7ab41f49",
"text": "BACKGROUND Asthma is the most common chronic pulmonary disease during pregnancy. Several previous reports have documented reversible electrocardiographic changes during severe acute asthma attacks, including tachycardia, P pulmonale, right bundle branch block, right axis deviation, and ST segment and T wave abnormalities. CASE REPORT We present the case of a pregnant patient with asthma exacerbation in which acute bronchospasm caused S1Q3T3 abnormality on an electrocardiogram (ECG). The complete workup of ECG findings of S1Q3T3 was negative and correlated with bronchospasm. The S1Q3T3 electrocardiographic abnormality can be seen in acute bronchospasm in pregnant women. The other causes like pulmonary embolism, pneumothorax, acute lung disease, cor pulmonale, and left posterior fascicular block were excluded. CONCLUSIONS Asthma exacerbations are of considerable concern during pregnancy due to their adverse effect on the fetus, and optimization of asthma treatment during pregnancy is vital for achieving good outcomes. Prompt recognition of electrocardiographic abnormality and early treatment can prevent adverse perinatal outcomes.",
"title": ""
},
{
"docid": "f4222d776f90050c15032e802d294d1a",
"text": "We study the design and optimization of polyhedral patterns, which are patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns on the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. We define and analyze the deformations these tiles must undertake to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries into a plethora of aesthetic and feasible patterns.",
"title": ""
},
{
"docid": "c20393a25f4e53be6df2bd49abf6635f",
"text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.",
"title": ""
},
{
"docid": "d003deabc7748959e8c5cc220b243e70",
"text": "INTRODUCTION In Britain today, children by the age of 10 years have regular access to an average of five different screens at home. In addition to the main family television, for example, many very young children have their own bedroom TV along with portable handheld computer game consoles (eg, Nintendo, Playstation, Xbox), smartphone with games, internet and video, a family computer and a laptop and/or a tablet computer (eg, iPad). Children routinely engage in two or more forms of screen viewing at the same time, such as TV and laptop. Viewing is starting earlier in life. Nearly one in three American infants has a TV in their bedroom, and almost half of all infants watch TV or DVDs for nearly 2 h/day. Across the industrialised world, watching screen media is the main pastime of children. Over the course of childhood, children spend more time watching TV than they spend in school. When including computer games, internet and DVDs, by the age of seven years, a child born today will have spent one full year of 24 h days watching screen media. By the age of 18 years, the average European child will have spent 3 years of 24 h days watching screen media; at this rate, by the age of 80 years, they will have spent 17.6 years glued to media screens. Yet, irrespective of the content or educational value of what is being viewed, the sheer amount of average daily screen time (ST) during discretionary hours after school is increasingly being considered an independent risk factor for disease, and is recognised as such by other governments and medical bodies but not, however, in Britain or in most of the EU. To date, views of the British and European medical establishments on increasingly high levels of child ST remain conspicuous by their absence. This paper will highlight the dramatic increase in the time children today spend watching screen media. It will provide a brief overview of some specific health and well-being concerns of current viewing levels, explain why screen viewing is distinct from other forms of sedentary behaviour, and point to the potential public health benefits of a reduction in ST. It is proposed that Britain and Europe’s medical establishments now offer guidance on the average number of hours per day children spend viewing screen media, and the age at which they start.",
"title": ""
},
{
"docid": "94b85074da2eedcff74b9ad16c5b562c",
"text": "The purpose of the paper is to investigate the design of rectangular patch antenna arrays fed by miscrostrip and coaxial lines at 28 GHz for future 5G applications. Our objective is to design a four element antenna array with a bandwidth higher than 1 GHz and a maximum radiation gain. The performances of the rectangular 4∗1 and 2∗2 patch antenna arrays designed on Rogers RT/Duroid 5880 substrate were optimized and the simulation results reveal that the performance of 4∗1 antenna array fed by microstrip line is better than 2∗2 antenna array fed by coaxial cable. We obtained for the topology of 4∗1 rectangular patch array antenna a bandwidth of 2.15 GHz and 1.3 GHz respectively with almost similar gains of the order of 13.3 dBi.",
"title": ""
},
{
"docid": "3e6aac2e0ff6099aabeee97dc1292531",
"text": "A lthough ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.",
"title": ""
},
{
"docid": "fd27a21d2eaf5fc5b37d4cba6bd4dbef",
"text": "RICHARD M. FELDER and JONI SPURLIN North Carolina State University, Raleigh, North Carolina 27695±7905, USA. E-mail: [email protected] The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument. This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?",
"title": ""
}
] | scidocsrr |
2b9f73c78aaddbc89a10cbf5f6094d04 | Perception , Planning , and Execution for Mobile Manipulation in Unstructured Environments | [
{
"docid": "5c4a81dd06b5c80ba7c32a9ac1673a4f",
"text": "We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long running experiments where the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS).",
"title": ""
}
] | [
{
"docid": "198967b505c9ded9255bff7b82fb2781",
"text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.",
"title": ""
},
{
"docid": "1368ea6ddef1ac1c37261a532d630b7a",
"text": "Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most of the established recognition methods are supervised, which have strong dependence on image labels. However, obtaining the labels of radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method that is based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator that is used in DCGANs and utilize the two discriminators for joint training. In this process, we introduce a noisy data learning theory to reduce the negative impact of the incorrectly labeled samples on the performance of the networks. We replace the last layer of the classic discriminators with the standard softmax function to output a vector of class probabilities so that we can recognize multiple objects. We subsequently modify the loss function in order to adapt to the revised network structure. In our model, the two discriminators share the same generator, and we take the average value of them when computing the loss function of the generator, which can improve the training stability of DCGANs to some extent. We also utilize images of higher quality from the generated images for training in order to improve the performance of the networks. Our method has achieved state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we have proved that using the generated images to train the networks can improve the recognition accuracy with a small number of labeled samples.",
"title": ""
},
{
"docid": "b852aad0d205aff17cd8a9b7c21ed99f",
"text": "In present investigation, two glucose based smart tumor-targeted drug delivery systems coupled with enzyme-sensitive release strategy are introduced. Magnetic nanoparticles (Fe3O4) were grafted with carboxymethyl chitosan (CS) and β-cyclodextrin (β-CD) as carriers. Prodigiosin (PG) was used as the model anti-tumor drug, targeting aggressive tumor cells. The morphology, properties and composition and grafting process were characterized by transmission electron microscope (TEM), Fourier transform infrared spectroscopy (FT-IR), vibration sample magnetometer (VSM), X-ray diffraction (XRD) analysis. The results revealed that the core crystal size of the nanoparticles synthesized were 14.2±2.1 and 9.8±1.4nm for β-CD and CS-MNPs respectively when measured using TEM; while dynamic light scattering (DLS) gave diameters of 121.1 and 38.2nm. The saturation magnetization (Ms) of bare magnetic nanoparticles is 50.10emucm-3, while modification with β-CD and CS gave values of 37.48 and 65.01emucm-3, respectively. The anticancer compound, prodigiosin (PG) was loaded into the NPs with an encapsulation efficiency of approximately 81% for the β-CD-MNPs, and 92% for the CS-MNPs. This translates to a drug loading capacity of 56.17 and 59.17mg/100mg MNPs, respectively. Measurement of in vitro release of prodigiosin from the loaded nanocarriers in the presence of the hydrolytic enzymes, alpha-amylase and chitosanase showed that 58.1 and 44.6% of the drug was released after one-hour of incubation. Cytotoxicity studies of PG-loaded nanocarriers on two cancer cell lines, MCF-7 and HepG2, and on a non-cancerous control, NIH/3T3 cells, revealed that the drug loaded nanoparticles had greater efficacy on the cancer cell lines. The selective index (SI) for free PG on MCF-7 and HepG2 cells was 1.54 and 4.42 respectively. This parameter was reduced for PG-loaded β-CD-MNPs to 1.27 and 1.85, while the SI for CS-MNPs improved considerably to 7.03 on MCF-7 cells. Complementary studies by fluorescence and confocal microscopy and flow cytometry confirm specific targeting of the nanocarriers to the cancer cells. The results suggest that CS-MNPs have higher potency and are better able to target the prodigiosin toxicity effect on cancerous cells than β-CD-MNPs.",
"title": ""
},
{
"docid": "020f05ae272da143bc6e0d3096925582",
"text": "In this study we describe a new approach to relate simulator sickness ratings with the main frequency component of the simulator motion mismatch, that is, the computed difference between the time histories of simulator motion and vehicle motion, respectively. During two driving simulator experiments in the TNO moving-base driving simulatorthat were performed for other reasons than the purpose of this studywe collected simulator sickness questionnaires from in total 58 subjects. The main frequency component was computed by means of the power spectrum density of the computed mismatch signal. We hypothesized that simulator sickness incidence depends on this frequency component, in a similar way as the incidence of real motion sickness, such as sea sickness, depends on motion frequency. The results show that the simulator sickness ratings differed between both driving simulator experiments. The experiment with its main frequency component of the mismatch signal of 0.08 Hz had significantly higher simulator sickness incidence than the experiment with its main frequency at 0.46 Hz. Since the experimental design differed between both experiments, we cannot exclusively attribute the difference in sickness ratings to the frequency component, but the observation does suggest that quantitative analysis of the mismatch between the motion profiles of the simulator and the vehicle may greatly improve our understanding of the causal mechanism of simulator sickness.",
"title": ""
},
{
"docid": "d41365a01ad3ce3ec30141ed08132aa1",
"text": "This paper reports on a mixed-method study in progress. The qualitative part has been completed and the quantitative part is underway. The findings of the qualitative study -- the theory of Integral Decision-Making (IDM) -- are introduced, and the research method to test IDM is discussed. It is expected that the integration of the qualitative and quantitative studies will provide insight into how data, information, and knowledge capacities can lead to more effective management decisions by incorporating more human inputs in the decision-and policy-making process. Implications for theory and practice will be suggested.",
"title": ""
},
{
"docid": "1880bb9c3229cab3e614ca39079c7781",
"text": "Emerging low-power radio triggering techniques for wireless motes are a promising approach to prolong the lifetime of Wireless Sensor Networks (WSNs). By allowing nodes to activate their main transceiver only when data need to be transmitted or received, wake-up-enabled solutions virtually eliminate the need for idle listening, thus drastically reducing the energy toll of communication. In this paper we describe the design of a novel wake-up receiver architecture based on an innovative pass-band filter bank with high selectivity capability. The proposed concept, demonstrated by a prototype implementation, combines both frequency-domain and time-domain addressing space to allow selective addressing of nodes. To take advantage of the functionalities of the proposed receiver, as well as of energy-harvesting capabilities modern sensor nodes are equipped with, we present a novel wake-up-enabled harvesting-aware communication stack that supports both interest dissemination and converge casting primitives. This stack builds on the ability of the proposed WuR to support dynamic address assignment, which is exploited to optimize system performance. Comparison against traditional WSN protocols shows that the proposed concept allows to optimize performance tradeoffs with respect to existing low-power communication stacks.",
"title": ""
},
{
"docid": "46a8022eea9ed7bcfa1cd8041cab466f",
"text": "In this paper, a bidirectional converter with a uniform controller for Vehicle to grid (V2G) application is designed. The bidirectional converter consists of two stages one is ac-dc converter and second is dc-dc converter. For ac-dc converter bipolar modulation is used. Two separate controller systems are designed for converters which follow active and reactive power commands from grid. Uniform controller provides reactive power support to the grid. The charger operates in two quadrants I and IV. There are three modes of operation viz. charging only operation, charging-capacitive operation and charging-inductive operation. During operation under these three operating modes vehicle's battery is not affected. The whole system is tested using MATLAB/SIMULINK.",
"title": ""
},
{
"docid": "0206cbec556e66fd19aa42c610cdccfa",
"text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.",
"title": ""
},
{
"docid": "1eec132681c491da9f2384a1df6b4790",
"text": "Projective analysis is an important solution in three-dimensional (3D) shape retrieval, since human visual perceptions of 3D shapes rely on various 2D observations from different viewpoints. Although multiple informative and discriminative views are utilized, most projection-based retrieval systems suffer from heavy computational cost, and thus cannot satisfy the basic requirement of scalability for search engines. In the past three years, shape retrieval contest (SHREC) pays much attention to the scalability of 3D shape retrieval algorithms, and organizes several large scale tracks accordingly [1]– [3]. However, the experimental results indicate that conventional algorithms cannot be directly applied to large datasets. In this paper, we present a real-time 3D shape search engine based on the projective images of 3D shapes. The real-time property of our search engine results from the following aspects: 1) efficient projection and view feature extraction using GPU acceleration; 2) the first inverted file, called F-IF, is utilized to speed up the procedure of multiview matching; and 3) the second inverted file, which captures a local distribution of 3D shapes in the feature manifold, is adopted for efficient context-based reranking. As a result, for each query the retrieval task can be finished within one second despite the necessary cost of IO overhead. We name the proposed 3D shape search engine, which combines GPU acceleration and inverted file (t wice), as GIFT. Besides its high efficiency, GIFT also outperforms state-of-the-art methods significantly in retrieval accuracy on various shape benchmarks (ModelNet40 dataset, ModelNet10 dataset, PSB dataset, McGill dataset) and competitions (SHREC14LSGTB, ShapeNet Core55, WM-SHREC07).",
"title": ""
},
{
"docid": "d4c7efe10b1444d0f9cb6032856ba4e1",
"text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. * To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.",
"title": ""
},
{
"docid": "7ac2f63821256491f45e2a9666333853",
"text": "Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracybased performance assessment, many researchers have taken to report PrecisionRecall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a PrecisionRecall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ . We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "f55ac9e319ad8b9782a34251007a5d06",
"text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",
"title": ""
},
{
"docid": "b2c67b5d74cc293293162fe750dc3f2a",
"text": "As object-oriented class libraries evolve, classes are occasionally deprecated in favor of others with roughly the same functionality. In Java's standard libraries, for example, class Hashtable has been superseded by HashMap, and Iterator is now preferred over Enumeration. Migrating client applications to use the new idioms is often desirable, but making the required changes to declarations and allocation sites can be quite labor-intensive. Moreover, migration becomes complicated---and sometimes impossible---if an application interacts with external components, if a legacy class is not completely equivalent to its replacement, or if multiple interdependent classes must be migrated simultaneously. We present an approach in which mappings between legacy classes and their replacements are specified by the programmer. Then, an analysis based on type constraints determines where declarations and allocation sites can be updated. The method was implemented in Eclipse, and evaluated on a number of Java applications. On average, our tool could migrate more than 90% of the references to legacy classes.",
"title": ""
},
{
"docid": "28df21c82806cb660713d720f6a6d324",
"text": "Neural Networks are very successful in acquiring hidden knowledge in datasets. Their most important weakness is that the knowled ge they acquire is represented in a form not understandable to humans. Understandability problem of Neural Networks can be solved by extracti ng Decision Rules or Decision Trees from the trained network. There are s everal Decision Rule extraction methods and Mark Craven’s TREPAN which extracts MofN type Decision Trees from trained networks. We introduced new splitting techniques for extracting classical Decision Trees fr om trained Neural Networks. We showed that the new method (DecText) is effecti ve in extracting high fidelity trees from trained networks. We also introduced a new discretization technique to make DecText be able to hand le continuous features and a new pruning technique for finding simplest tree with the highest fidelity.",
"title": ""
},
{
"docid": "74593565b633d29041637d877428a0a4",
"text": "The kinematics of contact describe the motion of a point of contact over the surfaces of two contacting objects in response to a relative motion of these objects. Using concepts from differential geometry, I derive a set of equations, called the contact equations, that embody this relationship. I employ the contact equations to design the following applications to be executed by an end-effector with tactile sensing capability: (1) determining the curvature form of an unknown object at a point of contact; and (2) following the surface of an unknown object. The contact equations also serve as a basis for an investigation of the kinematics of grasp. I derive the relationship between the relative motion of two fingers grasping an object and the motion of the points of contact over the object surface. Based on this analysis, we explore the following applications: (1) rolling a sphere between two arbitrarily shaped fingers ; (2) fine grip adjustment (i.e., having two fingers that grasp an unknown object locally optimize their grip for maximum stability ).",
"title": ""
},
{
"docid": "9c98b0652776a8402979134e753a8b86",
"text": "In this paper, the shielded coil structure using the ferrites and the metallic shielding is proposed. It is compared with the unshielded coil structure (i.e. a pair of circular loop coils only) to demonstrate the differences in the magnetic field distributions and system performance. The simulation results using the 3D Finite Element Analysis (FEA) tool show that it can considerably suppress the leakage magnetic field from 100W-class wireless power transfer (WPT) system with the enhanced system performance.",
"title": ""
},
{
"docid": "c8e83a1eb803d9e091c2cb3418577aa7",
"text": "We review the literature on pathological narcissism and narcissistic personality disorder (NPD) and describe a significant criterion problem related to four inconsistencies in phenotypic descriptions and taxonomic models across clinical theory, research, and practice; psychiatric diagnosis; and social/personality psychology. This impedes scientific synthesis, weakens narcissism's nomological net, and contributes to a discrepancy between low prevalence rates of NPD and higher rates of practitioner-diagnosed pathological narcissism, along with an enormous clinical literature on narcissistic disturbances. Criterion issues must be resolved, including clarification of the nature of normal and pathological narcissism, incorporation of the two broad phenotypic themes of narcissistic grandiosity and narcissistic vulnerability into revised diagnostic criteria and assessment instruments, elimination of references to overt and covert narcissism that reify these modes of expression as distinct narcissistic types, and determination of the appropriate structure for pathological narcissism. Implications for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders and the science of personality disorders are presented.",
"title": ""
},
{
"docid": "10c861ca1bdd7133d05f659efc7c9874",
"text": "Based on land use and land cover (LULC) datasets in the late 1970s, the early 1990s, 2004 and 2012, we analyzed characteristics of LULC change in the headwaters of the Yangtze River and Yellow River over the past 30 years contrastively, using the transition matrix and LULC change index. The results showed that, in 2012, the LULC in the headwaters of the Yellow River were different compared to those of the headwaters of the Yangtze River, with more grassland and wetand marshland. In the past 30 years, the grassland and wetand marshland increasing at the expense of sand, gobi, and bare land and desert were the main LULC change types in the headwaters of the Yangtze River, with the macro-ecological situation experiencing a process of degeneration, slight melioration, and continuous melioration, in that order. In the headwaters of the Yellow River, severe reduction of grassland coverage, shrinkage of wetand marshland and the consequential expansion of sand, gobi and bare land were noticed. The macro-ecological situation experienced a process of degeneration, obvious degeneration, and slight melioration, in that order, and the overall change in magnitude was more dramatic than that in the headwaters of the Yangtze River. These different LULC change courses were jointly driven by climate change, grassland-grazing pressure, and the implementation of ecological construction projects.",
"title": ""
},
{
"docid": "14693800dc75380784dcf0227d96174a",
"text": "Forecasting of time series that have seasonal and other variations remains an important problem for forecasters. This paper presents a neural network (NN) approach to forecasting quarterly time series. With a large data set of 756 quarterly time series from the M3 forecasting competition, we conduct a comprehensive investigation of the effectiveness of several data preprocessing and modeling approaches. We consider two data preprocessing methods and 48 NN models with different possible combinations of lagged observations, seasonal dummy variables, trigonometric variables, and time index as inputs to the NN. Both parametric and nonparametric statistical analyses are performed to identify the best models under different circumstances and categorize similar models. Results indicate that simpler models, in general, outperform more complex models. In addition, data preprocessing especially with deseasonalization and detrending is very helpful in improving NN performance. Practical guidelines are also provided.",
"title": ""
}
] | scidocsrr |
0d88654662c00b9af81177886c26ca45 | ARGUS: An Automated Multi-Agent Visitor Identification System | [
{
"docid": "ab34db97483f2868697f7b0abab8daaa",
"text": "This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.",
"title": ""
}
] | [
{
"docid": "07239163734357138011bbcc7b9fd38f",
"text": "Open cross-section, thin-walled, cold-formed steel columns have at least three competing buckling modes: local, dis and Euler~i.e., flexural or flexural-torsional ! buckling. Closed-form prediction of the buckling stress in the local mode, includ interaction of the connected elements, and the distortional mode, including consideration of the elastic and geometric stiffne web/flange juncture, are provided and shown to agree well with numerical methods. Numerical analyses and experiments postbuckling capacity in the distortional mode is lower than in the local mode. Current North American design specificati cold-formed steel columns ignore local buckling interaction and do not provide an explicit check for distortional buckling. E experiments on cold-formed channel, zed, and rack columns indicate inconsistency and systematic error in current design me provide validation for alternative methods. A new method is proposed for design that explicitly incorporates local, distortional an buckling, does not require calculations of effective width and/or effective properties, gives reliable predictions devoid of systema and provides a means to introduce rational analysis for elastic buckling prediction into the design of thin-walled columns. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:3~289! CE Database keywords: Thin-wall structures; Columns; Buckling; Cold-formed steel.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "6adb6b6a60177ed445a300a2a02c30b9",
"text": "Barrier coverage is a critical issue in wireless sensor networks for security applications (e.g., border protection) where directional sensors (e.g., cameras) are becoming more popular than omni-directional scalar sensors (e.g., microphones). However, barrier coverage cannot be guaranteed after initial random deployment of sensors, especially for directional sensors with limited sensing angles. In this paper, we study how to efficiently use mobile sensors to achieve \\(k\\) -barrier coverage. In particular, two problems are studied under two scenarios. First, when only the stationary sensors have been deployed, what is the minimum number of mobile sensors required to form \\(k\\) -barrier coverage? Second, when both the stationary and mobile sensors have been pre-deployed, what is the maximum number of barriers that could be formed? To solve these problems, we introduce a novel concept of weighted barrier graph (WBG) and prove that determining the minimum number of mobile sensors required to form \\(k\\) -barrier coverage is related with finding \\(k\\) vertex-disjoint paths with the minimum total length on the WBG. With this observation, we propose an optimal solution and a greedy solution for each of the two problems. Both analytical and experimental studies demonstrate the effectiveness of the proposed algorithms.",
"title": ""
},
{
"docid": "531d387a14eefa6a8c45ad64039f29be",
"text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.",
"title": ""
},
{
"docid": "44928aa4c5b294d1b8f24eaab14e9ce7",
"text": "Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this current limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare performance of the parallel algorithm against a serial implementation Restricted Region.",
"title": ""
},
{
"docid": "e082b7792f72d54c63ed025ae5c7fa0f",
"text": "Cloud computing's pay-per-use model greatly reduces upfront cost and also enables on-demand scalability as service demand grows or shrinks. Hybrid clouds are an attractive option in terms of cost benefit, however, without proper elastic resource management, computational resources could be over-provisioned or under-provisioned, resulting in wasting money or failing to satisfy service demand. In this paper, to accomplish accurate performance prediction and cost-optimal resource management for hybrid clouds, we introduce Workload-tailored Elastic Compute Units (WECU) as a measure of computing resources analogous to Amazon EC2's ECUs, but customized for a specific workload. We present a dynamic programming-based scheduling algorithm to select a combination of private and public resources which satisfy a desired throughput. Using a loosely-coupled benchmark, we confirmed WECUs have 24 (J% better runtime prediction ability than ECUs on average. Moreover, simulation results with a real workload distribution of web service requests show that our WECU-based algorithm reduces costs by 8-31% compared to a fixed provisioning approach.",
"title": ""
},
{
"docid": "c998ec64fed8be3657f26f4128593581",
"text": "In recent years, data mining researchers have developed efficient association rule algorithms for retail market basket analysis. Still, retailers often complain about how to adopt association rules to optimize concrete retail marketing-mix decisions. It is in this context that, in a previous paper, the authors have introduced a product selection model called PROFSET. This model selects the most interesting products from a product assortment based on their cross-selling potential given some retailer defined constraints. However this model suffered from an important deficiency: it could not deal effectively with supermarket data, and no provisions were taken to include retail category management principles. Therefore, in this paper, the authors present an important generalization of the existing model in order to make it suitable for supermarket data as well, and to enable retailers to add category restrictions to the model. Experiments on real world data obtained from a Belgian supermarket chain produce very promising results and demonstrate the effectiveness of the generalized PROFSET model.",
"title": ""
},
{
"docid": "12a5fb7867cddaca43c3508b0c1a1ed2",
"text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.",
"title": ""
},
{
"docid": "8671020512161e35818565450e557f77",
"text": "Continuous vector representations of words and objects appear to carry surprisingly rich semantic content. In this paper, we advance both the conceptual and theoretical understanding of word embeddings in three ways. First, we ground embeddings in semantic spaces studied in cognitivepsychometric literature and introduce new evaluation tasks. Second, in contrast to prior work, we take metric recovery as the key object of study, unify existing algorithms as consistent metric recovery methods based on co-occurrence counts from simple Markov random walks, and propose a new recovery algorithm. Third, we generalize metric recovery to graphs and manifolds, relating co-occurence counts on random walks in graphs and random processes on manifolds to the underlying metric to be recovered, thereby reconciling manifold estimation and embedding algorithms. We compare embedding algorithms across a range of tasks, from nonlinear dimensionality reduction to three semantic language tasks, including analogies, sequence completion, and classification.",
"title": ""
},
{
"docid": "2037a51f2f67fd480f327ecd1b906a2a",
"text": "Electronic fashion (eFashion) garments use technology to augment the human body with wearable interaction. In developing ideas, eFashion designers need to prototype the role and behavior of the interactive garment in context; however, current wearable prototyping toolkits require semi-permanent construction with physical materials that cannot easily be altered. We present Bod-IDE, an augmented reality 'mirror' that allows eFashion designers to create virtual interactive garment prototypes. Designers can quickly build, refine, and test on-the-body interactions without the need to connect or program electronics. By envisioning interaction with the body in mind, eFashion designers can focus more on reimagining the relationship between bodies, clothing, and technology.",
"title": ""
},
{
"docid": "556f33d199e6516a4aa8ebca998facf2",
"text": "R ecommender systems have become important tools in ecommerce. They combine one user’s ratings of products or services with ratings from other users to answer queries such as “Would I like X?” with predictions and suggestions. Users thus receive anonymous recommendations from people with similar tastes. While this process seems innocuous, it aggregates user preferences in ways analogous to statistical database queries, which can be exploited to identify information about a particular user. This is especially true for users with eclectic tastes who rate products across different types or domains in the systems. These straddlers highlight the conflict between personalization and privacy in recommender systems. While straddlers enable serendipitous recommendations, information about their existence could be used in conjunction with other data sources to uncover identities and reveal personal details. We use a graph-theoretic model to study the benefit from and risk to straddlers.",
"title": ""
},
{
"docid": "26f2c0ca1d9fc041a6b04f0644e642dd",
"text": "During the last years, in particular due to the Digital Humanities, empirical processes, data capturing or data analysis got more and more popular as part of humanities research. In this paper, we want to show that even the complete scientific method of natural science can be applied in the humanities. By applying the scientific method to the humanities, certain kinds of problems can be solved in a confirmable and replicable manner. In particular, we will argue that patterns may be perceived as the analogon to formulas in natural science. This may provide a new way of representing solution-oriented knowledge in the humanities. Keywords-pattern; pattern languages; digital humanities;",
"title": ""
},
{
"docid": "a9de4aa3f0268f23d77f882425afbcd5",
"text": "This paper describes a CMOS-based time-of-flight depth sensor and presents some experimental data while addressing various issues arising from its use. Our system is a single-chip solution based on a special CMOS pixel structure that can extract phase information from the received light pulses. The sensor chip integrates a 64x64 pixel array with a high-speed clock generator and ADC. A unique advantage of the chip is that it can be manufactured with an ordinary CMOS process. Compared with other types of depth sensors reported in the literature, our solution offers significant advantages, including superior accuracy, high frame rate, cost effectiveness and a drastic reduction in processing required to construct the depth maps. We explain the factors that determine the resolution of our system, discuss various problems that a time-of-flight depth sensor might face, and propose practical solutions.",
"title": ""
},
{
"docid": "c1017772d5986b4676fc36b14791f735",
"text": "Research and industry increasingly make use of large amounts of data to guide decision-making. To do this, however, data needs to be analyzed in typically nontrivial refinement processes, which require technical expertise about methods and algorithms, experience with how a precise analysis should proceed, and knowledge about an exploding number of analytic approaches. To alleviate these problems, a plethora of different systems have been proposed that “intelligently” help users to analyze their data.\n This article provides a first survey to almost 30 years of research on intelligent discovery assistants (IDAs). It explicates the types of help IDAs can provide to users and the kinds of (background) knowledge they leverage to provide this help. Furthermore, it provides an overview of the systems developed over the past years, identifies their most important features, and sketches an ideal future IDA as well as the challenges on the road ahead.",
"title": ""
},
{
"docid": "179c8e5dba09c4bac35522b0f8a2be88",
"text": "The purpose of this research was to explore the experiences of left-handed adults. Four semistructured interviews were conducted with left-handed adults (2 men and 2 women) about their experiences. After transcribing the data, interpretative phenomenological analysis (IPA), which is a qualitative approach, was utilized to analyze the data. The analysis highlighted some major themes which were organized to make a model of life experiences of left-handers. The highlighted themes included Left-handers‟ Development: Interplay of Heredity Basis and Environmental Influences, Suppression of Left-hand, Support and Consideration, Feeling It Is Okay, Left-handers as Being Particular, Physical and Psychological Health Challenges, Agonized Life, Struggle for Maintaining Identity and Transforming Attitude, Attitudinal Barriers to Equality and Acceptance. Implications of the research for parents, teachers, and psychologists are discussed.",
"title": ""
},
{
"docid": "246faf136fc925f151f7006a08fcee2d",
"text": "A key goal of smart grid initiatives is significantly increasing the fraction of grid energy contributed by renewables. One challenge with integrating renewables into the grid is that their power generation is intermittent and uncontrollable. Thus, predicting future renewable generation is important, since the grid must dispatch generators to satisfy demand as generation varies. While manually developing sophisticated prediction models may be feasible for large-scale solar farms, developing them for distributed generation at millions of homes throughout the grid is a challenging problem. To address the problem, in this paper, we explore automatically creating site-specific prediction models for solar power generation from National Weather Service (NWS) weather forecasts using machine learning techniques. We compare multiple regression techniques for generating prediction models, including linear least squares and support vector machines using multiple kernel functions. We evaluate the accuracy of each model using historical NWS forecasts and solar intensity readings from a weather station deployment for nearly a year. Our results show that SVM-based prediction models built using seven distinct weather forecast metrics are 27% more accurate for our site than existing forecast-based models.",
"title": ""
},
{
"docid": "8ea9944c2326e3aba41701919045779f",
"text": "Accurate keypoint localization of human pose needs diversified features: the high level for contextual dependencies and the low level for detailed refinement of joints. However, the importance of the two factors varies from case to case, but how to efficiently use the features is still an open problem. Existing methods have limitations in preserving low level features, adaptively adjusting the importance of different levels of features, and modeling the human perception process. This paper presents three novel techniques step by step to efficiently utilize different levels of features for human pose estimation. Firstly, an inception of inception (IOI) block is designed to emphasize the low level features. Secondly, an attention mechanism is proposed to adjust the importance of individual levels according to the context. Thirdly, a cascaded network is proposed to sequentially localize the joints to enforce message passing from joints of stand-alone parts like head and torso to remote joints like wrist or ankle. Experimental results demonstrate that the proposed method achieves the state-of-the-art performance on both MPII and",
"title": ""
},
{
"docid": "7264274530c5e42e09f91592f606364b",
"text": "In this paper, we propose a novel NoC architecture, called darkNoC, where multiple layers of architecturally identical, but physically different routers are integrated, leveraging the extra transistors available due to dark silicon. Each layer is separately optimized for a particular voltage-frequency range by the adroit use of multi-Vt circuit optimization. At a given time, only one of the network layers is illuminated while all the other network layers are dark. We provide architectural support for seamless integration of multiple network layers, and a fast inter-layer switching mechanism without dropping in-network packets. Our experiments on a 4 × 4 mesh with multi-programmed real application workloads show that darkNoC improves energy-delay product by up to 56% compared to a traditional single layer NoC with state-of-the-art DVFS. This illustrates darkNoC can be used as an energy-efficient communication fabric in future dark silicon chips.",
"title": ""
},
{
"docid": "ca7fa11ff559a572499fc0f319cba83c",
"text": "54-year-old male with a history of necrolytic migratory erythema (NME) (Figs. 1 and 2) and glossitis. Routine blood tests were normal except for glucose: 145 mg/dL. Baseline plasma glucagon levels were 1200 pg/mL. Serum zinc was 97 mcg/dL. An abdominal CT scan showed a large mass involving the body and tail of the pancreas. A dis-tal pancreatectomy with splenectomy was performed and no evidence of metastatic disease was observed. The skin rash cleared within a week after the operation and the patient remains free of disease at 38 months following surgery. COMMENTS Glucagonoma syndrome is a paraneoplastic phenomenon comprising a pancreatic glucagon-secreting insular tumor, necrolytic migratory erythema (NME), diabetes, weight loss, anemia, stomatitis, thrombo-embolism, dyspepsia, and neu-ropsychiatric disturbances. The occurrence of one or more of these symptoms associated with a proven pancreatic neo-plasm fits this diagnosis (1). Other skin and mucosal changes such as atrophic glossitis, cheylitis, and inflammation of the oral mucosa may be found (2). Hyperglycemia may be included –multiple endocrine neoplasia syndrome, i.e., Zollinger-Ellison syndrome– or the disease may result from a glucagon-secreting tumor alone. These tumors are of slow growth and present with non-specific symptoms in early stages. At least 50% are metasta-tic at the time of diagnosis, and therefore have a poor prognosis. Fig. 2.-NME. Necrolytic migratory erythema. ENM. Eritema necrolítico migratorio.",
"title": ""
},
{
"docid": "3789f0298f0ad7935e9267fa64c33a59",
"text": "We study session key distribution in the three-party setting of Needham and Schroeder. (This is the trust model assumed by the popular Kerberos authentication system.) Such protocols are basic building blocks for contemporary distributed systems|yet the underlying problem has, up until now, lacked a de nition or provably-good solution. One consequence is that incorrect protocols have proliferated. This paper provides the rst treatment of this problem in the complexitytheoretic framework of modern cryptography. We present a de nition, protocol, and a proof that the protocol satis es the de nition, assuming the (minimal) assumption of a pseudorandom function. When this assumption is appropriately instantiated, our protocols are simple and e cient. Abstract appearing in Proceedings of the 27th ACM Symposium on the Theory of Computing, May 1995.",
"title": ""
}
] | scidocsrr |
0b9b8da4a6c3e07a7dd33b61b6d44a3e | Classifying and visualizing motion capture sequences using deep neural networks | [
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
}
] | [
{
"docid": "3b07476ebb8b1d22949ec32fc42d2d05",
"text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.",
"title": ""
},
{
"docid": "d3b24655e01cbb4f5d64006222825361",
"text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close function connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "89b8317509d27a6f13d8ba38f52f4816",
"text": "The merging of optimization and simulation technologies has seen a rapid growth in recent years. A Google search on \"Simulation Optimization\" returns more than six thousand pages where this phrase appears. The content of these pages ranges from articles, conference presentations and books to software, sponsored work and consultancy. This is an area that has sparked as much interest in the academic world as in practical settings. In this paper, we first summarize some of the most relevant approaches that have been developed for the purpose of optimizing simulated systems. We then concentrate on the metaheuristic black-box approach that leads the field of practical applications and provide some relevant details of how this approach has been implemented and used in commercial software. Finally, we present an example of simulation optimization in the context of a simulation model developed to predict performance and measure risk in a real world project selection problem.",
"title": ""
},
{
"docid": "0ca703e4379b89bd79b1c33d6cc0ce3e",
"text": "PET image reconstruction is challenging due to the ill-poseness of the inverse problem and limited number of detected photons. Recently, the deep neural networks have been widely and successfully used in computer vision tasks and attracted growing interests in medical imaging. In this paper, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method. Quantification results show that our proposed iterative neural network method can outperform the neural network denoising and conventional penalized maximum likelihood methods.",
"title": ""
},
{
"docid": "5d8bc7d7c3ca5f8ebef7cbdace5a5db2",
"text": "The concept of knowledge management (KM) as a powerful competitive weapon has been strongly emphasized in the strategic management literature, yet the sustainability of the competitive advantage provided by KM capability is not well-explained. To fill this gap, this paper develops the concept of KM as an organizational capability and empirically examines the association between KM capabilities and competitive advantage. In order to provide a better presentation of significant relationships, through resource-based view of the firm explicitly recognizes important of KM resources and capabilities. Firm specific KM resources are classified as social KM resources, and technical KM resources. Surveys collected from 177 firms were analyzed and tested. The results confirmed the impact of social KM resource on competitive advantage. Technical KM resource is negatively related with competitive advantage, and KM capability is significantly related with competitive advantage. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "da5c56f30c9c162eb80c418ba9dbc31a",
"text": "Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from the Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC) which has been held since 2003 both in ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, the wide variety of scenes. The dataset is comprised of 18,000 images which contain text belonging to 9 languages. The challenge is comprised of three tasks related to text detection and script classification. We have received a total of 16 participations from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge.",
"title": ""
},
{
"docid": "0f7f8557ffa238a529f28f9474559cc4",
"text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of multilayer perception, one of the artificial neural network techniques, based on real benchmarking data. q 2005 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "973426438175226bb46c39cc0a390d97",
"text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.",
"title": ""
},
{
"docid": "458e4b5196805b608e15ee9c566123c9",
"text": "For the first half century of animal virology, the major problem was lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK",
"title": ""
},
{
"docid": "d6d07f50778ba3d99f00938b69fe0081",
"text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.",
"title": ""
},
{
"docid": "c53c60ceb793e8bc986837b5d82145fd",
"text": "Within the information overload on the web and the diversity of the user interests, it is increasingly difficult for search engines to satisfy the user information needs. Personalized search tackles this problem by considering the user profile during the search. This paper describes a personalized search approach involving a semantic graph-based user profile issued from ontology. User profile refers to the user interest in a specific search session defined as a sequence of related queries. It is built using a score propagation that activates a set of semantically related concepts and maintained in the same search session using a graph-based merging scheme. We also define a session boundary recognition mechanism based on tracking changes in the dominant concepts held by the user profile relatively to a new submitted query using the Kendall rank correlation measure. Then, personalization is achieved by re-ranking the search results of related queries using the user profile. Our experimental evaluation is carried out using the HARD 2003 TREC collection and shows that our approach is effective.",
"title": ""
},
{
"docid": "9d9665a21e5126ba98add5a832521cd1",
"text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.",
"title": ""
},
{
"docid": "80a61f27dab6a8f71a5c27437254778b",
"text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.",
"title": ""
},
{
"docid": "b02bcb7e0d7669b69130604157c27c08",
"text": "The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.",
"title": ""
},
{
"docid": "77be4363f9080eb8a3b73c9237becca4",
"text": "Aim: The purpose of this paper is to present findings of an integrative literature review related to employees’ motivational practices in organizations. Method: A broad search of computerized databases focusing on articles published in English during 1999– 2010 was completed. Extensive screening sought to determine current literature themes and empirical research evidence completed in employees’ focused specifically on motivation in organization. Results: 40 articles are included in this integrative literature review. The literature focuses on how job characteristics, employee characteristic, management practices and broader environmental factors influence employees’ motivation. Research that links employee’s motivation is both based on qualitative and quantitative studies. Conclusion: This literature reveals widespread support of motivation concepts in organizations. Theoretical and editorial literature confirms motivation concepts are central to employees. Job characteristics, management practices, employee characteristics and broader environmental factors are the key variables influence employees’ motivation in organization.",
"title": ""
},
{
"docid": "47eef1318d313e2f89bb700f8cd34472",
"text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.",
"title": ""
},
{
"docid": "5ffe358766049379b0910ac1181100af",
"text": "A novel one-section bandstop filter (BSF), which possesses the characteristics of compact size, wide bandwidth, and low insertion loss is proposed and fabricated. This bandstop filter was constructed by using single quarter-wavelength resonator with one section of anti-coupled lines with short circuits at one end. The attenuation-pole characteristics of this type of bandstop filters are investigated through TEM transmission-line model. Design procedures are clearly presented. The 3-dB bandwidth of the first stopband and insertion loss of the first passband of this BSF is from 2.3 GHz to 9.5 GHz and below 0.3 dB, respectively. There is good agreement between the simulated and experimental results.",
"title": ""
},
{
"docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
},
{
"docid": "2da1279270b3e8925100f281447bfb6b",
"text": "Consideration of confounding is fundamental to the design and analysis of studies of causal effects. Yet, apart from confounding in experimental designs, the topic is given little or no discussion in most statistics texts. We here provide an overview of confounding and related concepts based on a counterfactual model for causation. Special attention is given to definitions of confounding, problems in control of confounding, the relation of confounding to exchangeability and collapsibility, and the importance of distinguishing confounding from noncollapsibility.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
}
] | scidocsrr |
05ca07b1c09ce719ff3b9996f71b078e | SPAM: Signal Processing to Analyze Malware | [
{
"docid": "73b239e6449d82c0d9b1aaef0e9e1d23",
"text": "While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a contextbased vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.",
"title": ""
},
{
"docid": "746caf3de0977c7362b5963f1f956d05",
"text": "In this work, we propose SigMal, a fast and precise malware detection framework based on signal processing techniques. SigMal is designed to operate with systems that process large amounts of binary samples. It has been observed that many samples received by such systems are variants of previously-seen malware, and they retain some similarity at the binary level. Previous systems used this notion of malware similarity to detect new variants of previously-seen malware. SigMal improves the state-of-the-art by leveraging techniques borrowed from signal processing to extract noise-resistant similarity signatures from the samples. SigMal uses an efficient nearest-neighbor search technique, which is scalable to millions of samples. We evaluate SigMal on 1.2 million recent samples, both packed and unpacked, observed over a duration of three months. In addition, we also used a constant dataset of known benign executables. Our results show that SigMal can classify 50% of the recent incoming samples with above 99% precision. We also show that SigMal could have detected, on average, 70 malware samples per day before any antivirus vendor detected them.",
"title": ""
}
] | [
{
"docid": "e9940668ce12749d7b6ee82ea1e1e2e4",
"text": "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into “task-specific” and “robot-specific” modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel approach to train modular neural networks, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.",
"title": ""
},
{
"docid": "c9cae26169a89ad8349889b3fd221d32",
"text": "Dense kernel matrices Θ ∈ RN×N obtained from point evaluations of a covariance function G at locations {xi}1≤i≤N arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green’s functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset S ⊂ {1, . . . , N} × {1, . . . , N}, with #S = O(N log(N) log(N/ )), such that the zero fill-in incomplete Cholesky factorisation of Θi,j1(i,j)∈S is an -approximation of Θ. This blockfactorisation can provably be obtained in complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time. The algorithm only needs to know the spatial configuration of the xi and does not require an analytic representation of G. Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a solver for elliptic PDE with complexity O(N log(N) log(N/ )) in space and O(N log(N) log(N/ )) in time.",
"title": ""
},
{
"docid": "e642f9a626652da7917766257ca5bccf",
"text": "The work presented in this paper addresses the problem of the stitching task in laparoscopic surgery using a circular needle and a conventional 4 DOFs needle-holder. This task is particularly difficult for the surgeons because of the kinematic constraints generated by the fulcrum in the abdominal wall of the patient. In order to assist the surgeons during suturing, we propose to compute possible pathes for the needle through the tissues, which limit as much as possible tissues deformations while driving the needle towards the desired target. The paper proposes a kinematic analysis and a geometric modeling of the stitching task from which useful information can be obtained to assist the surgeon. The description of the task with appropriate state variables allows to propose a simple practical method for computing optimal pathes. Simulations show that the obtained pathes are satisfactory even under difficult configurations and short computation times make the technique useful for intra-operative planning. The use of the stitching planning is shown for manual as well as for robot-assisted interventions in in vitro experiments. 1 Clinical and technical motivations Computer assisted interventions have been widely developed during the last decade. Minimally invasive surgery is one of the surgical fields where the computer assistance is particularly useful for intervention planning as well as for gesture guidance because the access to the areas of interest is considerably restricted. In keyhole surgery such as laparoscopy or thoracoscopy, the surgical instruments are introduced into the abdomen through small 1 tissue to be sutured needle O∗ exit point I∗ entry point endoscope abdominal wall trocar needle-holder Figure 1: Outline of the laparoscopic setup for suturing with desired entry point (I*) and exit point (O*) incisions in the abdominal wall. The vision of the surgical scene is given to the surgeon via an endoscopic camera also introduced in the abdomen through a similar incision (see figure 1). This kind of surgery has many advantages over conventional open surgery : less pain for the patient, reduced risk of infection, decreased hospital stays and recovery times. However the surgical tasks are more difficult than in conventional surgery which often results in longer interventions and tiredness for the surgeon. As a consequence, laparoscopic surgery requires a specific training of the surgeon [12]. The main origins of the difficulties for the surgeon are well identified [30]. Firstly, the fulcrum reduces the possible motions of the instruments and creates kinematic constraints. Secondly, surgeons encounter perceptual limitations : tactile sensing is higly reduced and the visual feedback given by the endoscopic camera (laparoscope) is bidimendional and with a restricted field of view. Hence, it is difficult for the surgeon to estimate depths, relative positions and angles inside the abdomen. Finally there are perceptual motor difficulties which affect the coordination between what the surgeon sees and the movements he performs. According to laparoscopic surgery specialists, the suturing task is one of the most difficult gestures in laparoscopic surgery[4]. The task is usually performed using two instruments : a needle, often circular, and a needle-holder (see figure 2). Suturing can be",
"title": ""
},
{
"docid": "6fa8f69736b79639f390bc94ae569973",
"text": "We present results of an empirical study of the usefulness of different types of features in selecting extractive summaries of news broadcasts for our Broadcast News Summarization System. We evaluate lexical, prosodic, structural and discourse features as predictors of those news segments which should be included in a summary. We show that a summarization system that uses a combination of these feature sets produces the most accurate summaries, and that a combination of acoustic/prosodic and structural features are enough to build a ‘good’ summarizer when speech transcription is not available.",
"title": ""
},
{
"docid": "06abf2a7c6d0c25cfe54422268300e58",
"text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.",
"title": ""
},
{
"docid": "eed3f46ca78b6fbbb235fecf71d28f47",
"text": "The popularity of location-based social networks available on mobile devices means that large, rich datasets that contain a mixture of behavioral (users visiting venues), social (links between users), and spatial (distances between venues) information are available for mobile location recommendation systems. However, these datasets greatly differ from those used in other online recommender systems, where users explicitly rate items: it remains unclear as to how they capture user preferences as well as how they can be leveraged for accurate recommendation. This paper seeks to bridge this gap with a three-fold contribution. First, we examine how venue discovery behavior characterizes the large check-in datasets from two different location-based social services, Foursquare and Go Walla: by using large-scale datasets containing both user check-ins and social ties, our analysis reveals that, across 11 cities, between 60% and 80% of users' visits are in venues that were not visited in the previous 30 days. We then show that, by making constraining assumptions about user mobility, state-of-the-art filtering algorithms, including latent space models, do not produce high quality recommendations. Finally, we propose a new model based on personalized random walks over a user-place graph that, by seamlessly combining social network and venue visit frequency data, obtains between 5 and 18% improvement over other models. Our results pave the way to a new approach for place recommendation in location-based social systems.",
"title": ""
},
{
"docid": "733d0b179ec81a283166830be087546c",
"text": "This paper presents an interactive face retrieval framework for clarifying an image representation envisioned by a user. Our system is designed for a situation in which the user wishes to find a person but has only visual memory of the person. We address a critical challenge of image retrieval across the user's inputs. Instead of target-specific information, the user can select several images (or a single image) that are similar to an impression of the target person the user wishes to search for. Based on the user's selection, our proposed system automatically updates a deep convolutional neural network. By interactively repeating these process (human-in-the-loop optimization), the system can reduce the gap between human-based similarities and computer-based similarities and estimate the target image representation. We ran user studies with 10 subjects on a public database and confirmed that the proposed framework is effective for clarifying the image representation envisioned by the user easily and quickly.",
"title": ""
},
{
"docid": "7190c91917d1e1280010c66139837568",
"text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.",
"title": ""
},
{
"docid": "faac043b0c32bad5a44d52b93e468b78",
"text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.",
"title": ""
},
{
"docid": "438094ef7913de0236b57a85e7d511c2",
"text": "Magnetic resonance (MR) is the best way to assess the new anatomy of the pelvis after male to female (MtF) sex reassignment surgery. The aim of the study was to evaluate the radiological appearance of the small pelvis after MtF surgery and to compare it with the normal women's anatomy. Fifteen patients who underwent MtF surgery were subjected to pelvic MR at least 6 months after surgery. The anthropometric parameters of the small pelvis were measured and compared with those of ten healthy women (control group). Our personal technique (creation of the mons Veneris under the pubic skin) was performed in all patients. In patients who underwent MtF surgery, the mean neovaginal depth was slightly superior than in women (P=0.009). The length of the inferior pelvic aperture and of the inlet of pelvis was higher in the control group (P<0.005). The inclination between the axis of the neovagina and the inferior pelvis aperture, the thickness of the mons Veneris and the thickness of the rectovaginal septum were comparable between the two study groups. MR consents a detailed assessment of the new pelvic anatomy after MtF surgery. The anthropometric parameters measured in our patients were comparable with those of women.",
"title": ""
},
{
"docid": "ce32b34898427802abd4cc9c99eac0bc",
"text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.",
"title": ""
},
{
"docid": "cafc0d64162aad586e8b8cc2edccaae8",
"text": "This paper focuses on estimation of the vertical velocity of the vehicle chassis and the relative velocity between chassis and wheel. These velocities are important variables in semi-active suspension control. A model-based estimator is proposed including a Kalman filter and a non-linear model of the damper. Inputs to the estimator are signals from wheel displacement sensors and from accelerometers placed at the chassis. In addition, the control signal is used as input to the estimator. The Kalman filter is analyzed in the frequency domain and compared with a conventional filter solution including derivation of the displacement signal and integration of the acceleration signal.",
"title": ""
},
{
"docid": "06b99205e1dc53e5120a22dc4f927aa0",
"text": "The last 2 decades witnessed a surge in empirical studies on the variables associated with achievement in higher education. A number of meta-analyses synthesized these findings. In our systematic literature review, we included 38 meta-analyses investigating 105 correlates of achievement, based on 3,330 effect sizes from almost 2 million students. We provide a list of the 105 variables, ordered by the effect size, and summary statistics for central research topics. The results highlight the close relation between social interaction in courses and achievement. Achievement is also strongly associated with the stimulation of meaningful learning by presenting information in a clear way, relating it to the students, and using conceptually demanding learning tasks. Instruction and communication technology has comparably weak effect sizes, which did not increase over time. Strong moderator effects are found for almost all instructional methods, indicating that how a method is implemented in detail strongly affects achievement. Teachers with high-achieving students invest time and effort in designing the microstructure of their courses, establish clear learning goals, and employ feedback practices. This emphasizes the importance of teacher training in higher education. Students with high achievement are characterized by high self-efficacy, high prior achievement and intelligence, conscientiousness, and the goal-directed use of learning strategies. Barring the paucity of controlled experiments and the lack of meta-analyses on recent educational innovations, the variables associated with achievement in higher education are generally well investigated and well understood. By using these findings, teachers, university administrators, and policymakers can increase the effectivity of higher education. (PsycINFO Database Record",
"title": ""
},
{
"docid": "73b85a4948faf5b4d9a6a4019d3048ea",
"text": "In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination dispositive. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.",
"title": ""
},
{
"docid": "371be25b5ae618c599e551784641bbcb",
"text": "The paper presents proposal of practical implementation simple IoT gateway based on Arduino microcontroller, dedicated to use in home IoT environment. Authors are concentrated on research of performance and security aspects of created system. By performed load tests and denial of service attack were investigated performance and capacity limits of implemented gateway.",
"title": ""
},
{
"docid": "5efa00e0b5973515dff10d8267ac025f",
"text": "Porokeratosis, a disorder of keratinisation, is clinically characterized by the presence of annular plaques with a surrounding keratotic ridge. Clinical variants include linear, disseminated superficial actinic, verrucous/hypertrophic, disseminated eruptive, palmoplantar and porokeratosis of Mibelli (one or two typical plaques with atrophic centre and guttered keratotic rim). All of these subtypes share the histological feature of a cornoid lamella, characterized by a column of 'stacked' parakeratosis with focal absence of the granular layer, and dysmaturation (prematurely keratinised cells in the upper spinous layer). In recent years, a proposed new subtype, follicular porokeratosis (FP_, has been described, in which the cornoid lamella are exclusively located in the follicular ostia. We present four new cases that showed typical histological features of FP.",
"title": ""
},
{
"docid": "6214002bfc73399b233452fc1ac5f85e",
"text": "Touchscreen-based mobile devices (TMDs) are one of the most popular and widespread kind of electronic device. Many manufacturers have published its own design principles as a guideline for developers. Each platform has specific constrains and recommendations for software development; specially in terms of user interface. Four sets of design principles from iOS, Windows Phone, Android and Tizen OS has been mapped against a set of usability heuristics for TMDs. The map shows that the TMDs usability heuristics cover almost every design pattern with the addition of two new dimensions: user experience and cognitive load. These new dimensions will be considered when updating the proposal of usability heuristics for TMDs.",
"title": ""
},
{
"docid": "3b63cbc770c5e616eeee6b90814c64a0",
"text": "The present study investigated the relationships between adolescents' online communication and compulsive Internet use, depression, and loneliness. The study had a 2-wave longitudinal design with an interval of 6 months. The sample consisted of 663 students, 318 male and 345 female, ages 12 to 15 years. Questionnaires were administered in a classroom setting. The results showed that instant messenger use and chatting in chat rooms were positively related to compulsive Internet use 6 months later. Moreover, in agreement with the well-known HomeNet study (R. Kraut et al., 1998), instant messenger use was positively associated with depression 6 months later. Finally, loneliness was negatively related to instant messenger use 6 months later.",
"title": ""
},
{
"docid": "e96eaf2bde8bf50605b67fb1184b760b",
"text": "In response to your recent publication comparing subjective effects of D9-tetrahydrocannabinol and herbal cannabis (Wachtel et al. 2002), a number of comments are necessary. The first concerns the suitability of the chosen “marijuana” to assay the issues at hand. NIDA cannabis has been previously characterized in a number of studies (Chait and Pierri 1989; Russo et al. 2002), as a crude lowgrade product (2–4% THC) containing leaves, stems and seeds, often 3 or more years old after processing, with a stale odor lacking in terpenoids. This contrasts with the more customary clinical cannabis employed by patients in Europe and North America, composed solely of unseeded flowering tops with a potency of up to 20% THC. Cannabis-based medicine extracts (CBME) (Whittle et al. 2001), employed in clinical trials in the UK (Notcutt 2002; Robson et al. 2002), are extracted from flowering tops with abundant glandular trichomes, and retain full terpenoid and flavonoid components. In the study at issue (Wachtel et al. 2002), we are informed that marijuana contained 2.11% THC, 0.30% cannabinol (CBN), and 0.05% (CBD). The concentration of the latter two cannabinoids is virtually inconsequential. Thus, we are not surprised that no differences were seen between NIDA marijuana with essentially only one cannabinoid, and pure, synthetic THC. In comparison, clinical grade cannabis and CBME customarily contain high quantities of CBD, frequently equaling the percentage of THC (Whittle et al. 2001). Carlini et al. (1974) determined that cannabis extracts produced effects “two or four times greater than that expected from their THC content, based on animal and human studies”. Similarly, Fairbairn and Pickens (1981) detected the presence of unidentified “powerful synergists” in cannabis extracts, causing 330% greater activity in mice than THC alone. The clinical contribution of other CBD and other cannabinoids, terpenoids and flavonoids to clinical cannabis effects has been espoused as an “entourage effect” (Mechoulam and Ben-Shabat 1999), and is reviewed in detail by McPartland and Russo (2001). Briefly summarized, CBD has anti-anxiety effects (Zuardi et al. 1982), anti-psychotic benefits (Zuardi et al. 1995), modulates metabolism of THC by blocking its conversion to the more psychoactive 11-hydroxy-THC (Bornheim and Grillo 1998), prevents glutamate excitotoxicity, serves as a powerful anti-oxidant (Hampson et al. 2000), and has notable anti-inflammatory and immunomodulatory effects (Malfait et al. 2000). Terpenoid cannabis components probably also contribute significantly to clinical effects of cannabis and boil at comparable temperatures to THC (McPartland and Russo 2001). Cannabis essential oil demonstrates serotonin receptor binding (Russo et al. 2000). Its terpenoids include myrcene, a potent analgesic (Rao et al. 1990) and anti-inflammatory (Lorenzetti et al. 1991), betacaryophyllene, another anti-inflammatory (Basile et al. 1988) and gastric cytoprotective (Tambe et al. 1996), limonene, a potent inhalation antidepressant and immune stimulator (Komori et al. 1995) and anti-carcinogenic (Crowell 1999), and alpha-pinene, an anti-inflammatory (Gil et al. 1989) and bronchodilator (Falk et al. 1990). Are these terpenoid effects significant? A dried sample of drug-strain cannabis buds was measured as displaying an essential oil yield of 0.8% (Ross and ElSohly 1996), or a putative 8 mg per 1000 mg cigarette. Buchbauer et al. 
(1993) demonstrated that 20–50 mg of essential oil in the ambient air in mouse cages produced measurable changes in behavior, serum levels, and bound to cortical cells. Similarly, Komori et al. (1995) employed a gel of citrus fragrance with limonene to produce a significant antidepressant benefit in humans, obviating the need for continued standard medication in some patients, and also improving CD4/8 immunologic ratios. These data would",
"title": ""
}
] | scidocsrr |
bb69c348c491a9e51967331ec007799f | Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks | [
{
"docid": "27ad413fa5833094fb2e557308fa761d",
"text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.",
"title": ""
},
{
"docid": "e37b3a68c850d1fb54c9030c22b5792f",
"text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.",
"title": ""
}
] | [
{
"docid": "f9de4041343fb6c570e5cbce4cb1ff66",
"text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.",
"title": ""
},
{
"docid": "b41d8ca866268133f2af88495dad6482",
"text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.",
"title": ""
},
{
"docid": "846f8f33181c3143bb8f54ce8eb3e5cc",
"text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.",
"title": ""
},
{
"docid": "55160cc3013b03704555863c710e6d21",
"text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than M L . CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than M L . In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.",
"title": ""
},
{
"docid": "7d117525263c970c7c23f2a8ba0357d6",
"text": "Entity search is an emerging IR and NLP task that involves the retrieval of entities of a specific type in response to a query. We address the similar researcher search\" or the \"researcher recommendation\" problem, an instance of similar entity search\" for the academic domain. In response to a researcher name' query, the goal of a researcher recommender system is to output the list of researchers that have similar expertise as that of the queried researcher. We propose models for computing similarity between researchers based on expertise profiles extracted from their publications and academic homepages. We provide results of our models for the recommendation task on two publicly-available datasets. To the best of our knowledge, we are the first to address content-based researcher recommendation in an academic setting and demonstrate it for Computer Science via our system, ScholarSearch.",
"title": ""
},
{
"docid": "77d94447b208e5a9fe441c4bda31dc25",
"text": "The author reviewed cultural competence models and cultural competence assessment instruments developed and published by nurse researchers since 1982. Both models and instruments were examined in terms of their components, theoretical backgrounds, empirical validation, and psychometric evaluation. Most models were not empirically tested; only a few models developed model-based instruments. About half of the instruments were tested with varying levels of psychometric properties. Other related issues were discussed, including the definition of cultural competence and its significance in model and instrument development, limitations of existing models and instruments, impact of cultural competence on health disparities, and further work in cultural competence research and practice.",
"title": ""
},
{
"docid": "3f5627bb20164666317ba4783ed4eddb",
"text": "The agriculture sector is the backbone of an economy which provides the basic ingredients to mankind and raw materials for industrialization. With the increasing number of the population over the world, the demand for agricultural products is also increased. In order to increase the production rate, irrigation technique should be more efficient. The irrigation techniques used till date are not in satisfactory level, especially in a developing country like Bangladesh. This paper has proposed a line follower robot for irrigation based application which may be considered as a cost-effective solution by minimizing water loss as well as an efficient system for irrigation purposes. This proposed system does not require an operator to accomplish its task. This gardening robot is completely portable and is equipped with a microcontroller, an on-board water reservoir, and an attached water pump. The area to be watered by the robot can be any field with plants, placed in a predefined path. It is capable of comparing movable objects and stationary plants to minimize water loss and finally watering them autonomously without any human intervention. The designed robot was tested and it performed nicely.",
"title": ""
},
{
"docid": "d9e39f2513e74917023d508596cff6c7",
"text": "In recent years the data mining is data analyzing techniques that used to analyze crime data previously stored from various sources to find patterns and trends in crimes. In additional, it can be applied to increase efficiency in solving the crimes faster and also can be applied to automatically notify the crimes. However, there are many data mining techniques. In order to increase efficiency of crime detection, it is necessary to select the data mining techniques suitably. This paper reviews the literatures on various data mining applications, especially applications that applied to solve the crimes. Survey also throws light on research gaps and challenges of crime data mining. In additional to that, this paper provides insight about the data mining for finding the patterns and trends in crime to be used appropriately and to be a help for beginners in the research of crime data mining.",
"title": ""
},
{
"docid": "987c22f071f91b09b4e4e698c454f16a",
"text": "There is much current debate about the existence of mirror neurons in humans. To identify mirror neurons in the inferior frontal gyrus (IFG) of humans, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging. Subjects either executed or observed a series of actions. Here we show that in the IFG, responses were suppressed both when an executed action was followed by the same rather than a different observed action and when an observed action was followed by the same rather than a different executed action. This pattern of responses is consistent with that predicted by mirror neurons and is evidence of mirror neurons in the human IFG.",
"title": ""
},
{
"docid": "0b2f0b36bb458221b340b5e4a069fe2b",
"text": "The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, and abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratorybased immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population based algorithm, with each agent in the system represented as an ‘artificial DC’. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.",
"title": ""
},
{
"docid": "1dee93ec9e8de1cf365534581fb19623",
"text": "The term “Business Model”started to gain momentum in the early rise of the new economy and it is currently used both in business practice and scientific research. Under a general point of view BMs are considered as a contact point among technology, organization and strategy used to describe how an organization gets value from technology and uses it as a source of competitive advantage. Recent contributions suggest to use ontologies to define a shareable conceptualization of BM. The aim of this study is to investigate the role of BM Ontologies as a conceptual tool for the cooperation of subjects interested in achieving a common goal and operating in complex and innovative environments. This is the case for example of those contexts characterized by the deployment of e-services from multiple service providers in cross border environments. Through an extensive literature review on BM we selected the most suitable conceptual tool and studied its application to the LD-CAST project during a participatory action research activity in order to analyse the BM design process of a new organisation based on the cooperation of service providers (the Chambers of Commerce from Italy, Romania, Poland and Bulgaria) with different needs, legal constraints and cultural background.",
"title": ""
},
{
"docid": "52b8e748a87a114f5d629f8dcd9a7dfc",
"text": "Delay-tolerant networks (DTNs) rely on the mobility of nodes and their contacts to make up with the lack of continuous connectivity and, thus, enable message delivery from source to destination in a “store-carry-forward” fashion. Since message delivery consumes resource such as storage and power, some nodes may choose not to forward or carry others' messages while relying on others to deliver their locally generated messages. These kinds of selfish behaviors may hinder effective communications over DTNs. In this paper, we present an efficient incentive-compatible (IC) routing protocol (ICRP) with multiple copies for two-hop DTNs based on the algorithmic game theory. It takes both the encounter probability and transmission cost into consideration to deal with the misbehaviors of selfish nodes. Moreover, we employ the optimal sequential stopping rule and Vickrey-Clarke-Groves (VCG) auction as a strategy to select optimal relay nodes to ensure that nodes that honestly report their encounter probability and transmission cost can maximize their rewards. We attempt to find the optimal stopping time threshold adaptively based on realistic probability model and propose an algorithm to calculate the threshold. Based on this threshold, we propose a new method to select relay nodes for multicopy transmissions. To ensure that the selected relay nodes can receive their rewards securely, we develop a signature scheme based on a bilinear map to prevent the malicious nodes from tampering. Through simulations, we demonstrate that ICRP can effectively stimulate nodes to forward/carry messages and achieve higher packet delivery ratio with lower transmission cost.",
"title": ""
},
{
"docid": "2ae76fff668bc448e841a27cb951f046",
"text": "We present efficient algorithms for the problem of contextual bandits with i.i.d. covariates, an arbitrary sequence of rewards, and an arbitrary class of policies. Our algorithm BISTRO requires d calls to the empirical risk minimization (ERM) oracle per round, where d is the number of actions. The method uses unlabeled data to make the problem computationally simple. When the ERM problem itself is computationally hard, we extend the approach by employing multiplicative approximation algorithms for the ERM. The integrality gap of the relaxation only enters in the regret bound rather than the benchmark. Finally, we show that the adversarial version of the contextual bandit problem is learnable (and efficient) whenever the full-information supervised online learning problem has a non-trivial regret guarantee (and efficient).",
"title": ""
},
{
"docid": "30394ae468bc521e8e00db030f19e983",
"text": "A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable for holding secret-shared data. An external blockchain is utilized as the controller of the network, manages access control, identities and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.",
"title": ""
},
{
"docid": "2b8c0923372e97ca5781378b7e220021",
"text": "Motivated by requirements of Web 2.0 applications, a plethora of non-relational databases raised in recent years. Since it is very difficult to choose a suitable database for a specific use case, this paper evaluates the underlying techniques of NoSQL databases considering their applicability for certain requirements. These systems are compared by their data models, query possibilities, concurrency controls, partitioning and replication opportunities.",
"title": ""
},
{
"docid": "6a2e5831f2a2e1625be2bfb7941b9d1b",
"text": "Benefited from cloud storage services, users can save their cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile they lost the physical control of their data. So effective methods are needed to verify the correctness of the data stored at cloud servers, which are the research issues the Provable Data Possession (PDP) faced. The most important features in PDP are: 1) supporting for public, unlimited numbers of times of verification; 2) supporting for dynamic data update; 3) efficiency of storage space and computing. In mobile cloud computing, mobile end-users also need the PDP service. However, the computing workloads and storage burden of client in existing PDP schemes are too heavy to be directly used by the resource-constrained mobile devices. To solve this problem, with the integration of the trusted computing technology, this paper proposes a novel public PDP scheme, in which the trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. By using bilinear signature and Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce communication and storage burden. MHT is also helpful to support dynamic data update. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform model (TPM) chips, and the needed computing workload and storage space is fit for mobile devices. Our scheme realizes provable secure storage service for resource-constrained mobile devices in mobile cloud computing.",
"title": ""
},
{
"docid": "9b085f5cd0a080560d7ae17b7d4d6878",
"text": "The commercial roll-type corona-electrostatic separators, which are currently employed for the recovery of metals and plastics from mm-size granular mixtures, are inappropriate for the processing of finely-grinded wastes. The aim of the present work is to demonstrate that a belt-type corona-electrostatic separator could be an appropriate solution for the selective sorting of conductive and non-conductive products contained in micronized wastes. The experiments are carried out on a laboratory-scale multi-functional electrostatic separator designed by the authors. The corona discharge is generated between a wire-type dual electrode and the surface of the metal belt conveyor. The distance between the wire and the belt and the applied voltage are adjusted to values that permit particles charging without having an electric wind that puts them into motion on the surface of the belt. The separation is performed in the electric field generated between a high-voltage roll-type electrode (diameter 30 mm) and the grounded belt electrode. The study is conducted according to experimental design methodology, to enable the evaluation of the effects of the various factors that affect the efficiency of the separation: position of the roll-type electrode and applied high-voltage. The conclusions of this study will serve at the optimum design of an industrial belt-type corona-electrostatic separator for the recycling of metals and plastics from waste electric and electronic equipment.",
"title": ""
},
{
"docid": "ece8f2f4827decf0c440ca328ee272b4",
"text": "We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formulated as a constrained optimization problem that is computationally inexpensive to solve. We discuss various properties of the algorithm and provide proof of convergence for two different optimization criteria We demonstrate the performance and the speed of the algorithm on linear classifiers learned from real-world datasets, including a medical dataset on detection of lung cancer from medical images. The ability to convert SVM's and other \"black-box\" classifiers into a set of human-understandable rules, is critical not only for physician acceptance, but also to reducing the regulatory barrier for medical-decision support systems based on such classifiers.",
"title": ""
},
{
"docid": "7f8ff45fd3006e9635c0814bd5c11a3e",
"text": "On 4 August 2016, DARPA conducted the final event of the Cyber Grand Challenge (CGC). The challenge in CGC was to build an autonomous system capable of playing in a capture-the-flag hacking competition. The final event pitted the systems from seven finalists against each other, with each system attempting to defend its own network services while proving vulnerabilities in other systems’ defended services. Xandra, our automated cyber reasoning system, took second place overall in the final event. Xandra placed first in security (preventing exploits), second in availability (keeping services operational and efficient), and fourth in evaluation (proving vulnerabilities in competitor services). Xandra also drew the least power of any of the competitor systems. In this article, we describe the high-level strategies applied by Xandra, their realization in Xandra’s architecture, the synergistic interplay between offense and defense, and finally, lessons learned via post-mortem analysis of the final event.",
"title": ""
},
{
"docid": "cd1cfbdae08907e27a4e1c51e0508839",
"text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.",
"title": ""
}
] | scidocsrr |
62abf4c7902fc30c9910a6923efa26b4 | SciHadoop: Array-based query processing in Hadoop | [
{
"docid": "bf56462f283d072c4157d5c5665eead3",
"text": "Various scientific computations have become so complex, and thus computation tools play an important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show the performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.",
"title": ""
},
{
"docid": "c5e37e68f7a7ce4b547b10a1888cf36f",
"text": "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.",
"title": ""
}
] | [
{
"docid": "de22ed244b7b2c5fb9da0981ec1b9852",
"text": "Knowledge graph embedding aims to construct a low-dimensional and continuous space, which is able to describe the semantics of high-dimensional and sparse knowledge graphs. Among existing solutions, translation models have drawn much attention lately, which use a relation vector to translate the head entity vector, the result of which is close to the tail entity vector. Compared with classical embedding methods, translation models achieve the state-of-the-art performance; nonetheless, the rationale and mechanism behind them still aspire after understanding and investigation. In this connection, we quest into the essence of translation models, and present a generic model, namely, GTrans, to entail all the existing translation models. In GTrans, each entity is interpreted by a combination of two states—eigenstate and mimesis. Eigenstate represents the features that an entity intrinsically owns, and mimesis expresses the features that are affected by associated relations. The weighting of the two states can be tuned, and hence, dynamic and static weighting strategies are put forward to best describe entities in the problem domain. Besides, GTrans incorporates a dynamic relation space for each relation, which not only enables the flexibility of our model but also reduces the noise from other relation spaces. In experiments, we evaluate our proposed model with two benchmark tasks—triplets classification and link prediction. Experiment results witness significant and consistent performance gain that is offered by GTrans over existing alternatives.",
"title": ""
},
{
"docid": "8b6246f6e328bed5fa1064ee8478dda1",
"text": "This study examined the relationships between attachment styles, drama viewing, parasocial interaction, and romantic beliefs. A survey of students revealed that drama viewing was weakly but negatively related to romantic beliefs, controlling for parasocial interaction and attachment styles. Instead, parasocial interaction mediated the effect of drama viewing on romantic beliefs: Those who viewed television dramas more heavily reported higher levels of parasocial interaction, which lead to stronger romantic beliefs. Regarding the attachment styles, anxiety was positively related to drama viewing and parasocial interaction, while avoidance was negatively related to parasocial interaction and romantic beliefs. These findings indicate that attachment styles and parasocial interaction are important for the association of drama viewing and romantic beliefs.",
"title": ""
},
{
"docid": "34896de2fb07161ecfa442581c1c3260",
"text": "Predicting the location of a video based on its content is a very meaningful, yet very challenging problem. Most existing work has focused on developing representative visual features and then searching for visually nearest neighbors in the development set to achieve a prediction. Interestingly, the relationship between scenes has been overlooked in prior work. Two scenes that are visually different, but frequently co-occur in same location, should naturally be considered similar for the geotagging problem. To build upon the above ideas, we propose to model the geo-spatial distributions of scenes by Gaussian Mixture Models (GMMs) and measure the distribution similarity by the Jensen-Shannon divergence (JSD). Subsequently, we present the Spatial Relationship Model (SRM) for geotagging which integrates the geo-spatial relationship of scenes into a hierarchical framework. We segment the Earth's surface into multiple levels of grids and measure the likelihood of input videos with an adaptation to region granularities. We have evaluated our approach using the YFCC100M dataset in the context of the MediaEval 2014 placing task. The total set of 35,000 geotagged videos is further divided into a training set of 25,000 videos and a test set of 10,000 videos. Our experimental results demonstrate the effectiveness of our proposed framework, as our solution achieves good accuracy and outperforms existing visual approaches for video geotagging.",
"title": ""
},
{
"docid": "71cb9b8dd8a19d67d64cdd14d4f82a48",
"text": "This paper illustrates the development of the automatic measurement and control system (AMCS) based on control area network (CAN) bus for the engine electronic control unit (ECU). The system composes of the ECU, test ECU (TECU), simulation ECU (SIMU), usb-CAN communication module (CRU ) and AMCS software platform, which is applied for engine control, data collection, transmission, engine signals simulation and the data conversion from CAN to USB and master control simulation and measurement. The AMCS platform software is designed with VC++ 6.0.This system has been applied in the development of engine ECU widely.",
"title": ""
},
{
"docid": "d93eb3d262bfe5dee777d27d21407cc6",
"text": "Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to \"rationalize,\" or if implausible assumptions are necessary to explain it within the paradigm. This column presents a series of such anomalies. Readers are invited to suggest topics for future columns by sending a note with some reference to (or better yet copies oO the relevant research. Comments on anomalies printed here are also welcome. The address is: Richard Thaler, c/o Journal of Economic Perspectives, Johnson Graduate School of Management, Malott Hall, Cornell University, Ithaca, NY 14853. After this issue, the \"Anomalies\" column will no longer appear in every issue and instead will appear occasionally, when a pressing anomaly crosses Dick Thaler's desk. However, suggestions for new columns and comments on old ones are still welcome. Thaler would like to quash one rumor before it gets started, namely that he is cutting back because he has run out of anomalies. Au contraire, it is the dilemma of choosing which juicy anomaly to discuss that lakes so much time.",
"title": ""
},
{
"docid": "b2e5a2395641c004bdc84964d2528b13",
"text": "We propose a novel probabilistic model for visual question answering (Visual QA). The key idea is to infer two sets of embeddings: one for the image and the question jointly and the other for the answers. The learning objective is to learn the best parameterization of those embeddings such that the correct answer has higher likelihood among all possible answers. In contrast to several existing approaches of treating Visual QA as multi-way classification, the proposed approach takes the semantic relationships (as characterized by the embeddings) among answers into consideration, instead of viewing them as independent ordinal numbers. Thus, the learned embedded function can be used to embed unseen answers (in the training dataset). These properties make the approach particularly appealing for transfer learning for open-ended Visual QA, where the source dataset on which the model is learned has limited overlapping with the target dataset in the space of answers. We have also developed large-scale optimization techniques for applying the model to datasets with a large number of answers, where the challenge is to properly normalize the proposed probabilistic models. We validate our approach on several Visual QA datasets and investigate its utility for transferring models across datasets. The empirical results have shown that the approach performs well not only on in-domain learning but also on transfer learning.",
"title": ""
},
{
"docid": "94d40100337b2f6721cdc909513e028d",
"text": "Data deduplication systems detect redundancies between data blocks to either reduce storage needs or to reduce network traffic. A class of deduplication systems splits the data stream into data blocks (chunks) and then finds exact duplicates of these blocks.\n This paper compares the influence of different chunking approaches on multiple levels. On a macroscopic level, we compare the chunking approaches based on real-life user data in a weekly full backup scenario, both at a single point in time as well as over several weeks.\n In addition, we analyze how small changes affect the deduplication ratio for different file types on a microscopic level for chunking approaches and delta encoding. An intuitive assumption is that small semantic changes on documents cause only small modifications in the binary representation of files, which would imply a high ratio of deduplication. We will show that this assumption is not valid for many important file types and that application-specific chunking can help to further decrease storage capacity demands.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "3785c5a330da1324834590c8ed92f743",
"text": "We address the problem of joint detection and segmentation of multiple object instances in an image, a key step towards scene understanding. Inspired by data-driven methods, we propose an exemplar-based approach to the task of instance segmentation, in which a set of reference image/shape masks is used to find multiple objects. We design a novel CRF framework that jointly models object appearance, shape deformation, and object occlusion. To tackle the challenging MAP inference problem, we derive an alternating procedure that interleaves object segmentation and shape/appearance adaptation. We evaluate our method on two datasets with instance labels and show promising results.",
"title": ""
},
{
"docid": "8c2e69380cebdd6affd43c6bfed2fc51",
"text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.",
"title": ""
},
{
"docid": "e84699f276c807eb7fddb49d61bd8ae8",
"text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "129dd084e485da5885e2720a4bddd314",
"text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to its innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.",
"title": ""
},
{
"docid": "cc220d8ae1fa77b9e045022bef4a6621",
"text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.",
"title": ""
},
{
"docid": "5575b83290bec9b0c5da065f30762eb5",
"text": "The resource-based view can be positioned relative to at least three theoretical traditions: SCPbased theories of industry determinants of firm performance, neo-classical microeconomics, and evolutionary economics. In the 1991 article, only the first of these ways of positioning the resourcebased view is explored. This article briefly discusses some of the implications of positioning the resource-based view relative to these other two literatures; it also discusses some of the empirical implications of each of these different resource-based theories. © 2001 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "5689528f407b9a32320e0f303f682b04",
"text": "Previous studies have shown that there is a non-trivial amount of duplication in source code. This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub representing over 428 million files written in Java, C++, Python, and JavaScript. We found that this corpus has a mere 85 million unique files. In other words, 70% of the code on GitHub consists of clones of previously created files. There is considerable variation between language ecosystems. JavaScript has the highest rate of file duplication, only 6% of the files are distinct. Java, on the other hand, has the least duplication, 60% of files are distinct. Lastly, a project-level analysis shows that between 9% and 31% of the projects contain at least 80% of files that can be found elsewhere. These rates of duplication have implications for systems built on open source software as well as for researchers interested in analyzing large code bases. As a concrete artifact of this study, we have created DéjàVu, a publicly available map of code duplicates in GitHub repositories.",
"title": ""
},
{
"docid": "8e5c2bfb2ef611c94a02c2a214c4a968",
"text": "This paper defines and explores a somewhat different type of genetic algorithm (GA) a messy genetic algorithm (mGA). Messy GAs process variable-length strings that may be either underor overspecified with respect to the problem being solved . As nature has formed its genotypes by progressing from simple to more complex life forms, messy GAs solve problems by combining relatively short, well-tested building blocks to form longer, more complex strings that increasingly cover all features of a problem. This approach stands in contrast to the usual fixed-length, fixed-coding genetic algorithm, where the existence of the requisite tight linkage is taken for granted or ignored altogether. To compare the two approaches, a 3D-bit, orderthree-deceptive problem is searched using a simple GA and a messy GA. Using a random but fixed ordering of the bits, the simple GA makes errors at roughly three-quarters of its positions; under a worstcase ordering, the simple GA errs at all positions. In contrast to the simple GA results, the messy GA repeatedly solves the same problem to optimality. Prior to this time, no GA had ever solved a provably difficult problem to optimality without prior knowledge of good string arrangements. The mGA presented herein repeatedly achieves globally optimal results without such knowledge, and it does so at the very first generation in which strings are long enough to cover the problem. The solution of a difficult nonlinear problem to optimality suggests that messy GAs can solve more difficult problems than has been possible to date with other genetic algorithms. The ramifications of these techniques in search and machine learning are explored, including the possibility of messy floating-point codes, messy permutations, and messy classifiers. © 1989 Complex Systems Publications , Inc. 494 David E. Goldberg, Bradley Kotb, an d Kalyanmoy Deb",
"title": ""
},
{
"docid": "1a8d7f9766de483fc70bb98d2925f9b1",
"text": "Data mining applications are becoming a more common tool in understanding and solving educational and administrative problems in higher education. In general, research in educational mining focuses on modeling student's performance instead of instructors' performance. One of the common tools to evaluate instructors' performance is the course evaluation questionnaire to evaluate based on students' perception. In this paper, four different classification techniques - decision tree algorithms, support vector machines, artificial neural networks, and discriminant analysis - are used to build classifier models. Their performances are compared over a data set composed of responses of students to a real course evaluation questionnaire using accuracy, precision, recall, and specificity performance metrics. Although all the classifier models show comparably high classification performances, C5.0 classifier is the best with respect to accuracy, precision, and specificity. In addition, an analysis of the variable importance for each classifier model is done. Accordingly, it is shown that many of the questions in the course evaluation questionnaire appear to be irrelevant. Furthermore, the analysis shows that the instructors' success based on the students' perception mainly depends on the interest of the students in the course. The findings of this paper indicate the effectiveness and expressiveness of data mining models in course evaluation and higher education mining. Moreover, these findings may be used to improve the measurement instruments.",
"title": ""
},
{
"docid": "7db1b370d0e14e80343cbc7718bbb6c9",
"text": "T free-riding problem occurs if the presales activities needed to sell a product can be conducted separately from the actual sale of the product. Intuitively, free riding should hurt the retailer that provides that service, but the author shows analytically that free riding benefits not only the free-riding retailer, but also the retailer that provides the service when customers are heterogeneous in terms of their opportunity costs for shopping. The service-providing retailer has a postservice advantage, because customers who have resolved their matching uncertainty through sales service incur zero marginal shopping cost if they purchase from the service-providing retailer rather than the free-riding retailer. Moreover, allowing free riding gives the free rider less incentive to compete with the service provider on price, because many customers eventually will switch to it due to their own free riding. In turn, this induced soft strategic response enables the service provider to charge a higher price and enjoy the strictly positive profit that otherwise would have been wiped away by head-to-head price competition. Therefore, allowing free riding can be regarded as a necessary mechanism that prevents an aggressive response from another retailer and reduces the intensity of price competition.",
"title": ""
},
{
"docid": "994a674367471efecd38ac22b3c209fc",
"text": "In vehicular ad hoc networks (VANETs), trust establishment among vehicles is important to secure integrity and reliability of applications. In general, trust and reliability help vehicles to collect correct and credible information from surrounding vehicles. On top of that, a secure trust model can deal with uncertainties and risk taking from unreliable information in vehicular environments. However, inaccurate, incomplete, and imprecise information collected by vehicles as well as movable/immovable obstacles have interrupting effects on VANET. In this paper, a fuzzy trust model based on experience and plausibility is proposed to secure the vehicular network. The proposed trust model executes a series of security checks to ensure the correctness of the information received from authorized vehicles. Moreover, fog nodes are adopted as a facility to evaluate the level of accuracy of event’s location. The analyses show that the proposed solution not only detects malicious attackers and faulty nodes, but also overcomes the uncertainty and imprecision of data in vehicular networks in both line of sight and non-line of sight environments.",
"title": ""
}
] | scidocsrr |
abe3acf92e474cdec6fd595d6388dd7b | Sensor for Measuring Strain in Textile | [
{
"docid": "13d8ce0c85befb38e6f2da583ac0295b",
"text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.",
"title": ""
}
] | [
{
"docid": "df96263c86a36ed30e8a074354b09239",
"text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: [email protected] Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010",
"title": ""
},
{
"docid": "f90471c5c767c40be52c182055c9ebca",
"text": "Deep intracranial tumor removal can be achieved if the neurosurgical robot has sufficient flexibility and stability. Toward achieving this goal, we have developed a spring-based continuum robot, namely a minimally invasive neurosurgical intracranial robot (MINIR-II) with novel tendon routing and tunable stiffness for use in a magnetic resonance imaging (MRI) environment. The robot consists of a pair of springs in parallel, i.e., an inner interconnected spring that promotes flexibility with decoupled segment motion and an outer spring that maintains its smooth curved shape during its interaction with the tissue. We propose a shape memory alloy (SMA) spring backbone that provides local stiffness control and a tendon routing configuration that enables independent segment locking. In this paper, we also present a detailed local stiffness analysis of the SMA backbone and model the relationship between the resistive force at the robot tip and the tension in the tendon. We also demonstrate through experiments, the validity of our local stiffness model of the SMA backbone and the correlation between the tendon tension and the resistive force. We also performed MRI compatibility studies of the three-segment MINIR-II robot by attaching it to a robotic platform that consists of SMA spring actuators with integrated water cooling modules.",
"title": ""
},
{
"docid": "ff04d4c2b6b39f53e7ddb11d157b9662",
"text": "Chiu proposed a clustering algorithm adjusting the numeric feature weights automatically for k-anonymity implementation and this approach gave a better clustering quality over the traditional generalization and suppression methods. In this paper, we propose an improved weighted-feature clustering algorithm which takes the weight of categorical attributes and the thesis of optimal k-partition into consideration. To show the effectiveness of our method, we do some information loss experiments to compare it with greedy k-member clustering algorithm.",
"title": ""
},
{
"docid": "921cb9021dc606af3b63116c45e093b2",
"text": "Since its introduction, the orbitrap has proven to be a robust mass analyzer that can routinely deliver high resolving power and mass accuracy. Unlike conventional ion traps such as the Paul and Penning traps, the orbitrap uses only electrostatic fields to confine and to analyze injected ion populations. In addition, its relatively low cost, simple design and high space-charge capacity make it suitable for tackling complex scientific problems in which high performance is required. This review begins with a brief account of the set of inventions that led to the orbitrap, followed by a qualitative description of ion capture, ion motion in the trap and modes of detection. Various orbitrap instruments, including the commercially available linear ion trap-orbitrap hybrid mass spectrometers, are also discussed with emphasis on the different methods used to inject ions into the trap. Figures of merit such as resolving power, mass accuracy, dynamic range and sensitivity of each type of instrument are compared. In addition, experimental techniques that allow mass-selective manipulation of the motion of confined ions and their potential application in tandem mass spectrometry in the orbitrap are described. Finally, some specific applications are reviewed to illustrate the performance and versatility of the orbitrap mass spectrometers.",
"title": ""
},
{
"docid": "c385054322970c86d3f08b298aa811e2",
"text": "Recently, a small number of papers have appeared in which the authors implement stochastic search algorithms, such as evolutionary computation, to generate game content, such as levels, rules and weapons. We propose a taxonomy of such approaches, centring on what sort of content is generated, how the content is represented, and how the quality of the content is evaluated. The relation between search-based and other types of procedural content generation is described, as are some of the main research challenges in this new field. The paper ends with some successful examples of this approach.",
"title": ""
},
{
"docid": "93625a1cc77929e98a3bdbf30ac16f3a",
"text": "The performance of rasterization-based rendering on current GPUs strongly depends on the abilities to avoid overdraw and to prevent rendering triangles smaller than the pixel size. Otherwise, the rates at which highresolution polygon models can be displayed are affected significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative rendering pipeline implementation that uses rasterization and ray-casting in every frame simultaneously to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure which gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most optimal rendering technique can be selected at run-time for each part. For very large triangle meshes our method can outperform pure rasterization and requires a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to efficiently render isosurfaces in scalar volume fields. The compactness of the data structure allows rendering from GPU memory when alternative techniques already require exhaustive paging.",
"title": ""
},
{
"docid": "e263a93a9d936a90f4e513ac65317541",
"text": "Neuronal power attenuation or enhancement in specific frequency bands over the sensorimotor cortex, called Event-Related Desynchronization (ERD) or Event-Related Synchronization (ERS), respectively, is a major phenomenon in brain activities involved in imaginary movement of body parts. However, it is known that the nature of motor imagery-related electroencephalogram (EEG) signals is non-stationary and highly timeand frequency-dependent spatial filter, which we call ‘non-homogeneous filter.’ We adaptively select bases of spatial filters over time and frequency. By taking both temporal and spectral features of EEGs in finding a spatial filter into account it is beneficial to be able to consider non-stationarity of EEG signals. In order to consider changes of ERD/ERS patterns over the time–frequency domain, we devise a spectrally and temporally weighted classification method via statistical analysis. Our experimental results on the BCI Competition IV dataset II-a and BCI Competition II dataset IV clearly presented the effectiveness of the proposed method outperforming other competing methods in the literature. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5e6990d8f1f81799e2e7fdfe29d14e4d",
"text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.",
"title": ""
},
{
"docid": "73b0a5820c8268bb5911e1b44401273b",
"text": "In typical reinforcement learning (RL), the environment is assumed given and the goal of the learning is to identify an optimal policy for the agent taking actions through its interactions with the environment. In this paper, we extend this setting by considering the environment is not given, but controllable and learnable through its interaction with the agent at the same time. This extension is motivated by environment design scenarios in the realworld, including game design, shopping space design and traffic signal design. Theoretically, we find a dual Markov decision process (MDP) w.r.t. the environment to that w.r.t. the agent, and derive a policy gradient solution to optimizing the parametrized environment. Furthermore, discontinuous environments are addressed by a proposed general generative framework. Our experiments on a Maze game design task show the effectiveness of the proposed algorithms in generating diverse and challenging Mazes against various agent settings.",
"title": ""
},
{
"docid": "83b5da6ab8ab9a906717fda7aa66dccb",
"text": "Image quality assessment (IQA) tries to estimate human perception based image visual quality in an objective manner. Existing approaches target this problem with or without reference images. For no-reference image quality assessment, there is no given reference image or any knowledge of the distortion type of the image. Previous approaches measure the image quality from signal level rather than semantic analysis. They typically depend on various features to represent local characteristic of an image. In this paper we propose a new no-reference (NR) image quality assessment (IQA) framework based on semantic obviousness. We discover that semantic-level factors affect human perception of image quality. With such observation, we explore semantic obviousness as a metric to perceive objects of an image. We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristic. Then the two kinds of features are combined for image quality estimation. The principles proposed in our approach can also be incorporated with many existing IQA algorithms to boost their performance. We evaluate our approach on the LIVE dataset. Our approach is demonstrated to be superior to the existing NR-IQA algorithms and comparable to the state-of-the-art full-reference IQA (FR-IQA) methods. Cross-dataset experiments show the generalization ability of our approach.",
"title": ""
},
{
"docid": "63115b12e4a8192fdce26eb7e2f8989a",
"text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.",
"title": ""
},
{
"docid": "18bc45b89f56648176ab3e1b9658ec16",
"text": "In this experiment, it was defined a protocol of fluorescent probes combination: propidium iodide (PI), fluorescein isothiocyanate-conjugated Pisum sativum agglutinin (FITC-PSA), and JC-1. For this purpose, four ejaculates from three different rams (n=12), all showing motility 80% and abnormal morphology 10%, were diluted in TALP medium and split into two aliquots. One of the aliquots was flash frozen and thawed in three continuous cycles, to induce damage in cellular membranes and to disturb mitochondrial function. Three treatments were prepared with the following fixed ratios of fresh semen:flash frozen semen: 0:100 (T0), 50:50 (T50), and 100:0 (T100). Samples were stained in the proposal protocol and evaluated by epifluorescence microscopy. For plasmatic membrane integrity, detected by PI probe, it was obtained the equation: Ŷ=1.09+0.86X (R 2 =0.98). The intact acrosome, verified by the FITC-PSA probe, produced the equation: Ŷ=2.76+0.92X (R 2 =0.98). The high mitochondrial membrane potential, marked in red-orange by JC-1, was estimated by the equation: Ŷ=1.90+0.90X (R 2 =0.98). The resulting linear equations demonstrate that this technique is efficient and practical for the simultaneous evaluations of the plasmatic, acrosomal, and mitochondrial membranes in ram spermatozoa.",
"title": ""
},
{
"docid": "4c200a1452ef7951cdbad6fe98cc379a",
"text": "A key challenge for timeline summarization is to generate a concise, yet complete storyline from large collections of news stories. Previous studies in extractive timeline generation are limited in two ways: first, most prior work focuses on fully-observable ranking models or clustering models with hand-designed features that may not generalize well. Second, most summarization corpora are text-only, which means that text is the sole source of information considered in timeline summarization, and thus, the rich visual content from news images is ignored. To solve these issues, we leverage the success of matrix factorization techniques from recommender systems, and cast the problem as a sentence recommendation task, using a representation learning approach. To augment text-only corpora, for each candidate sentence in a news article, we take advantage of top-ranked relevant images from the Web and model the image using a convolutional neural network architecture. Finally, we propose a scalable low-rank approximation approach for learning joint embeddings of news stories and images. In experiments, we compare our model to various competitive baselines, and demonstrate the stateof-the-art performance of the proposed textbased and multimodal approaches.",
"title": ""
},
{
"docid": "111e34b97cbefb43dc611efe434bbc30",
"text": "This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows to formulate complex measures – involving expected as well as accumulated rewards – in a precise and succinct way. Algorithms to efficiently analyze such formulae are introduced. The approach is illustrated by model-checking a probabilistic cost model of the IPv4 zeroconf protocol for distributed address assignment in ad-hoc networks.",
"title": ""
},
{
"docid": "9fb27226848da6b18fdc1e3b3edf79c9",
"text": "In the last few years thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiments, including lexical-based and supervised machine learning methods. Despite the vast interest on the theme and wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apple-to-apple comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originated from different data sources. Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we open the methods’ codes and datasets used in this article, deploying them in a benchmark system, which provides an open API for accessing and comparing sentence-level sentiment analysis methods.",
"title": ""
},
{
"docid": "6ca59955cbf84ab9dce3c444709040b7",
"text": "We introduce SPHINCS-Simpira, which is a variant of the SPHINCS signature scheme with Simpira as a building block. SPHINCS was proposed by Bernstein et al. at EUROCRYPT 2015 as a hash-based signature scheme with post-quantum security. At ASIACRYPT 2016, Gueron and Mouha introduced the Simpira family of cryptographic permutations, which delivers high throughput on modern 64-bit processors by using only one building block: the AES round function. The Simpira family claims security against structural distinguishers with a complexity up to 2 using classical computers. In this document, we explain why the same claim can be made against quantum computers as well. Although Simpira follows a very conservative design strategy, our benchmarks show that SPHINCS-Simpira provides a 1.5× speed-up for key generation, a 1.4× speed-up for signing 59-byte messages, and a 2.0× speed-up for verifying 59-byte messages compared to the originally proposed SPHINCS-256.",
"title": ""
},
{
"docid": "e016d5fc261def252f819f350b155c1a",
"text": "Risk reduction is one of the key objectives pursued by transport safety policies. Particularly, the formulation and implementation of transport safety policies needs the systematic assessment of the risks, the specification of residual risk targets and the monitoring of progresses towards those ones. Risk and safety have always been considered critical in civil aviation. The purpose of this paper is to describe and analyse safety aspects in civil airports. An increase in airport capacity usually involves changes to runways layout, route structures and traffic distribution, which in turn effect the risk level around the airport. For these reasons third party risk becomes an important issue in airports development. To avoid subjective interpretations and to increase model accuracy, risk information are colleted and evaluated in a rational and mathematical manner. The method may be used to draw risk contour maps so to provide a guide to local and national authorities, to population who live around the airport, and to airports operators. Key-Words: Risk Management, Risk assessment methodology, Safety Civil aviation.",
"title": ""
},
{
"docid": "45881ab3fc9b2d09f211808e8c9b0a3c",
"text": "Nowadays a large number of user-adaptive systems has been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. to be able to exchange user model portions and to use the information that has been exchanged to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to face interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.",
"title": ""
},
{
"docid": "e291628468d30d72b75bcd407382c8eb",
"text": "In order to implement reliable digital system, it is becoming important making tests and finding bugs by setting up a verification environment. It is possible to set up effective verification environment by using Universal Verification Methodology which is standardized and used in worldwide chip industry. In this work, the slave circuit of Serial Peripheral Interface, which is commonly used for communication integrated circuits, have been designed with hardware description and verification language SystemVerilog and creating test environment with UVM.",
"title": ""
},
{
"docid": "7ad4f52279e85f8e20239e1ea6c85bbb",
"text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.",
"title": ""
}
] | scidocsrr |