Columns: _id — string of length 40; text — string of length 0 to 10k.
dcd72f0a9cdc37450379f401fc2f4f87e30f5021
c1b66422b1dab3eeee6d6c760f4bd227a8bb16c5
e75c5d1b7ecd71cd9f1fdc3d07f56290517ef1e5
The two areas of online transaction processing (OLTP) and online analytical processing (OLAP) present different challenges for database architectures. Currently, customers with high rates of mission-critical transactions have split their data into two separate systems, one database for OLTP and one so-called data warehouse for OLAP. While allowing for decent transaction rates, this separation has many disadvantages, including data freshness issues due to the delay caused by only periodically initiating the extract-transform-load (ETL) data staging, and excessive resource consumption due to maintaining two separate information systems. We present an efficient hybrid system, called HyPer, that can handle both OLTP and OLAP simultaneously by using hardware-assisted replication mechanisms to maintain consistent snapshots of the transactional data. HyPer is a main-memory database system that guarantees the ACID properties of OLTP transactions and executes OLAP query sessions (multiple queries) on the same, arbitrarily current and consistent snapshot. The utilization of the processor-inherent support for virtual memory management (address translation, caching, copy on update) yields both at the same time: unprecedentedly high transaction rates of up to 100,000 per second and very fast OLAP query response times on a single system executing both workloads in parallel. The performance analysis is based on a combined TPC-C and TPC-H benchmark.
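The snapshot mechanism referred to above relies on the operating system's copy-on-write support. A minimal sketch of that idea, assuming a Unix-like system and a toy in-memory store (this is not HyPer's actual code), is:

```python
# Minimal sketch (not HyPer's implementation): using the OS's copy-on-write
# fork() to give an OLAP reader a consistent snapshot of in-memory data
# while the parent process keeps applying OLTP updates.
import os
import time

store = {"balance_a": 100, "balance_b": 200}  # toy in-memory "database"

def run_olap_queries(snapshot):
    # Child process: sees the state as of fork() time, unaffected by
    # later OLTP writes in the parent (pages are copied on update).
    time.sleep(0.1)
    print("OLAP snapshot total:", sum(snapshot.values()))

pid = os.fork()
if pid == 0:                       # child = OLAP query session
    run_olap_queries(store)
    os._exit(0)
else:                              # parent = OLTP transaction processing
    store["balance_a"] -= 50       # transfer between accounts
    store["balance_b"] += 50
    store["balance_c"] = 75        # a new account inserted after the snapshot
    os.waitpid(pid, 0)
    print("OLTP current total:", sum(store.values()))
```

The child prints the total as of the snapshot (300), while the parent sees the newer state (375), illustrating how copy-on-write keeps the two workloads isolated.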
732212be0e6c5216158a7470c79fa2ff98a2da06
We present a stacked-FET monolithic millimeter-wave (mmW) integrated circuit Doherty power amplifier (DPA). The DPA employs a novel asymmetrical stack gate bias to achieve high power and high efficiency at 6-dB power back-off (PBO). The circuit is fabricated in a 0.15-µm enhancement mode (E-mode) Gallium Arsenide (GaAs) process. Experimental results demonstrate output power at 1-dB gain compression (P1dB) of 28.2 dBm, peak power added efficiency (PAE) of 37% and PAE at 6-dB PBO of 27% at 28 GHz. Measured small signal gain is 15 dB while the 3-dB bandwidth covers from 25.5 to 29.5 GHz. Using digital predistortion (DPD) with a 20 MHz 64 QAM modulated signal, an adjacent channel power ratio (ACPR) of −46 dBc has been observed.
03837b659b4a8878c2a2dbef411cd986fecfef8e
We introduce an autoregressive attention mechanism for parallelizable character-level sequence modeling. We use this method to augment a neural model consisting of blocks of causal convolutional layers connected by highway network skip connections. We denote the models with and without the proposed attention mechanism respectively as Highway Causal Convolution (Causal Conv) and Autoregressive-attention Causal Convolution (ARA-Conv). The autoregressive attention mechanism crucially maintains causality in the decoder, allowing for parallel implementation. We demonstrate that these models, compared to their recurrent counterparts, enable fast and accurate learning in character-level NLP tasks. In particular, these models outperform recurrent neural network models in natural language correction and language modeling tasks, and run in a fraction of the time.
fe419be5c53e2931e1d6370c914ce166be29ff6e
0c3078bf214cea52669ec13962a0a242243d0e09
A broadband printed quadrifilar helical antenna employing a novel compact feeding circuit is proposed in this paper. This antenna presents an excellent axial ratio over a wide beamwidth, with a 29% bandwidth. A specific feeding circuit based on an aperture-coupled transition and including two 90° surface mount hybrids has been designed to be integrated with the quadrifilar antenna. Over the bandwidth, the measured reflection coefficient of the antenna fed by the wideband compact circuit has been found to be equal to or lower than -12 dB and the maximum gain varies between 1.5 and 2.7 dBic from 1.18 to 1.58 GHz. The half-power beamwidth is 150°, with an axial ratio below 3 dB over this range. The compactness of the feeding circuit allows small element spacing in array arrangements.
0c3751db5a24c636c1aa8abfd9d63321b38cfce5
Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoys strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.
24424918dc93c016deeaeb86a01c8bfc01253c9b
Many classical algorithms are found, sometimes only years later, to outlive the confines in which they were conceived and to remain relevant in unforeseen settings. In this paper, we show that SVRG is one such method: although originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. If f(x) is a sum of smooth, convex functions but f is not strongly convex (such as Lasso or logistic regression), we propose a variant of SVRG that makes a novel choice of growing epoch length on top of SVRG; this variant is a direct, faster version of SVRG in this setting. If f(x) is a sum of non-convex functions but f is strongly convex, we show that the convergence of SVRG depends linearly on the non-convexity parameter of the summands. This improves the best known result in this setting, and gives a better running time for stochastic PCA.
680cbbc88d537bd6f5a68701b1bb0080a77faa00
Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.
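For reference, a minimal SVRG sketch on a toy least-squares objective; the step size, epoch counts and synthetic data below are illustrative assumptions, not the paper's experimental setup:

```python
# Illustrative SVRG sketch for minimizing (1/n) * sum_i f_i(w) with
# f_i(w) = 0.5 * (x_i . w - y_i)^2 (least squares).
import numpy as np

def svrg(X, y, eta=0.05, epochs=20, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner = inner or 2 * n
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, computed once per epoch.
        mu = X.T @ (X @ w_snap - y) / n
        for _ in range(inner):
            i = rng.integers(n)
            g_i = X[i] * (X[i] @ w - y[i])            # stochastic gradient at w
            g_i_snap = X[i] * (X[i] @ w_snap - y[i])  # same sample at the snapshot
            w = w - eta * (g_i - g_i_snap + mu)       # variance-reduced step
    return w

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true
print(np.round(svrg(X, y), 2))   # should be close to w_true
```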
9fe5a7a24ff81ba2b6769e811b6ab47188a45242
Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving nonconvex and nonsmooth optimization problems remains a big challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming. However, it is still unknown whether the usual APG can ensure the convergence to a critical point in nonconvex programming. In this paper, we extend APG for general nonconvex and nonsmooth programs by introducing a monitor that satisfies the sufficient descent property. Accordingly, we propose a monotone APG and a nonmonotone APG. The latter waives the requirement on monotonic reduction of the objective function and needs less computation in each iteration. To the best of our knowledge, we are the first to provide APG-type algorithms for general nonconvex and nonsmooth problems ensuring that every accumulation point is a critical point, and the convergence rates remain O(1/k^2) when the problems are convex, where k is the number of iterations. Numerical results testify to the advantage of our algorithms in speed.
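The monotone and nonmonotone APG variants for nonconvex problems are more involved than can be shown here; as a point of reference only, the following is a standard monotone accelerated proximal gradient (FISTA-style) sketch for the convex Lasso problem, with a simple safeguard that never lets the objective increase. All names, parameter values and data are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(A, b, lam, x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def monotone_apg_lasso(A, b, lam, iters=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        x_old, t_old = x, t
        grad = A.T @ (A @ z - b)
        y = soft_threshold(z - grad / L, lam / L)   # proximal gradient step
        # Monotone safeguard: keep whichever point has the lower objective.
        x = y if objective(A, b, lam, y) <= objective(A, b, lam, x_old) else x_old
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2))
        z = x + (t_old / t) * (y - x) + ((t_old - 1.0) / t) * (x - x_old)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 30))
x_true = np.zeros(30); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=100)
print(np.round(monotone_apg_lasso(A, b, lam=1.0), 2)[:5])
```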
3e36eb936002a59b81d8abb4548dc2c42a29b743
Security is often seen as an add-on service for automation systems that frequently conflicts with other goals such as efficient transmission or resource limitations. This article takes a practice-oriented approach to security in automation systems. It analyses common threats to automation systems, and to automation networks in particular, sets up a model to classify systems with respect to security, and discusses common measures available at different system levels. The description of measures should make it possible to rate their effects on overall system security.
8b74a32cebb5faf131595496f6470ff9c2c33468
Facebook is quickly becoming one of the most popular tools for social communication. However, Facebook is somewhat different from other Social Networking Sites as it demonstrates an offline-to-online trend; that is, the majority of Facebook Friends are met offline and then added later. The present research investigated how the Five-Factor Model of personality relates to Facebook use. Despite some expected trends regarding Extraversion and Openness to Experience, results indicated that personality factors were not as influential as previous literature would suggest. The results also indicated that a motivation to communicate was influential in terms of Facebook use. It is suggested that different motivations may be influential in the decision to use tools such as Facebook, especially when individual functions of Facebook are being considered.
423fff94db2be3ddef5e3204338d2111776eafea
We have analyzed the fully-anonymized headers of 362 million messages exchanged by 4.2 million users of Facebook, an online social network of college students, during a 26 month interval. The data reveal a number of strong daily and weekly regularities which provide insights into the time use of college students and their social lives, including seasonal variations. We also examined how factors such as school affiliation and informal online “friend” lists affect the observed behavior and temporal patterns. Finally, we show that Facebook users appear to be clustered by school with respect to their temporal messaging patterns.
1bed30d161683d279780aee34619f94a860fa973
Our analysis shows that many "big-memory" server workloads, such as databases, in-memory caches, and graph analytics, pay a high cost for page-based virtual memory. They consume as much as 10% of execution cycles on TLB misses, even using large pages. On the other hand, we find that these workloads use read-write permission on most pages, are provisioned not to swap, and rarely benefit from the full flexibility of page-based virtual memory. To remove the TLB miss overhead for big-memory workloads, we propose mapping part of a process's linear virtual address space with a direct segment, while page mapping the rest of the virtual address space. Direct segments use minimal hardware---base, limit and offset registers per core---to map contiguous virtual memory regions directly to contiguous physical memory. They eliminate the possibility of TLB misses for key data structures such as database buffer pools and in-memory key-value stores. Memory mapped by a direct segment may be converted back to paging when needed. We prototype direct-segment software support for x86-64 in Linux and emulate direct-segment hardware. For our workloads, direct segments eliminate almost all TLB misses and reduce the execution time wasted on TLB misses to less than 0.5%.
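A toy sketch of the translation decision that direct segments imply: a virtual address inside the segment is translated with the base, limit and offset registers, and anything else falls back to paging. The specific base, limit and offset values and the dictionary page table below are made up for illustration.

```python
# Toy model of direct-segment translation alongside conventional paging.
BASE, LIMIT, OFFSET = 0x0000_2000_0000, 0x0000_6000_0000, 0x1_0000_0000

def translate(vaddr, page_table):
    if BASE <= vaddr < LIMIT:                # direct segment: no TLB lookup needed
        return vaddr + OFFSET
    page, off = divmod(vaddr, 4096)          # fall back to paged translation
    return page_table[page] * 4096 + off

page_table = {0x1234: 0x9876}
print(hex(translate(0x0000_3000_0000, page_table)))    # inside the segment
print(hex(translate(0x1234 * 4096 + 42, page_table)))  # paged mapping
```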
86dc975f9cbd9a205f8e82fb1db3b61c6b738fa5
As increasingly powerful techniques emerge for machine tagging multimedia content, it becomes ever more important to standardize the underlying vocabularies. Doing so provides interoperability and lets the multimedia community focus ongoing research on a well-defined set of semantics. This paper describes a collaborative effort of multimedia researchers, library scientists, and end users to develop a large standardized taxonomy for describing broadcast news video. The large-scale concept ontology for multimedia (LSCOM) is the first of its kind designed to simultaneously optimize utility to facilitate end-user access, cover a large semantic space, make automated extraction feasible, and increase observability in diverse broadcast news video data sets
2b2ba9b0022ff45939527836a150959fe388ee23
c39d04c6f3b84c77ad379d0358bfbe7148ad4fd2
8db7f5a54321e1a4cd51d0666607279556a57404
BACKGROUND Meditative techniques are sought frequently by patients coping with medical and psychological problems. Because of their increasingly widespread appeal and use, and the potential for use as medical therapies, a concise and thorough review of the current state of scientific knowledge of these practices as medical interventions was conducted. PURPOSE To systematically review the evidence supporting efficacy and safety of meditative practices in treating illnesses, and examine areas warranting further study. Studies on normal healthy populations are not included. METHODS Searches were performed using PubMed, PsycInfo, and the Cochrane Database. Keywords were Meditation, Meditative Prayer, Yoga, Relaxation Response. Qualifying studies were reviewed and independently rated based on quality by two reviewers. Mid-to-high-quality studies (those scoring above 0.65 or 65% on a validated research quality scale) were included. RESULTS From a total of 82 identified studies, 20 randomized controlled trials met our criteria. The studies included 958 subjects total (397 experimentally treated, 561 controls). No serious adverse events were reported in any of the included or excluded clinical trials. Serious adverse events are reported in the medical literature, though rare. The strongest evidence for efficacy was found for epilepsy, symptoms of the premenstrual syndrome and menopausal symptoms. Benefit was also demonstrated for mood and anxiety disorders, autoimmune illness, and emotional disturbance in neoplastic disease. CONCLUSIONS The results support the safety and potential efficacy of meditative practices for treating certain illnesses, particularly in nonpsychotic mood and anxiety disorders. Clear and reproducible evidence supporting efficacy from large, methodologically sound studies is lacking.
46b2cd0ef7638dcb4a6220a52232712beb2fa850
Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though those methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction.
3c094494f6a911de3087ed963d3d893f6f2b1d71
OBJECTIVE The aim of this study was to review the literature on clinical applications of the Hybrid Assistive Limb system for gait training. METHODS A systematic literature search was conducted using Web of Science, PubMed, CINAHL and clinicaltrials.gov, and an additional search was performed using reference lists in identified reports. Abstracts were screened, relevant articles were reviewed and subject to quality assessment. RESULTS Out of 37 studies, 7 studies fulfilled the inclusion criteria. Six studies were single group studies and 1 was an explorative randomized controlled trial. In total, these studies involved 140 participants, of whom 118 completed the interventions and 107 used HAL for gait training. Five studies concerned gait training after stroke, 1 after spinal cord injury (SCI) and 1 study after stroke, SCI or other diseases affecting walking ability. Minor and transient side effects occurred but no serious adverse events were reported in the studies. Beneficial effects on gait function variables and independence in walking were observed. CONCLUSIONS The accumulated findings demonstrate that the HAL system is feasible when used for gait training of patients with lower extremity paresis in a professional setting. Beneficial effects on gait function and independence in walking were observed, but the data do not allow firm conclusions. Further controlled studies are recommended.
515e34476452bbfeb111ce5480035ae1f7aa4bee
Good indoor air quality is a vital part of human health. Poor indoor air quality can contribute to the development of chronic respiratory diseases such as asthma, heart disease, and lung cancer. Complicating matters, poor air quality is extremely difficult for humans to detect through sight and smell alone and existing sensing equipment is designed to be used by and provide data for scientists rather than everyday citizens. We propose inAir, a tool for measuring, visualizing, and learning about indoor air quality. inAir provides historical and real-time visualizations of indoor air quality by measuring tiny hazardous airborne particles as small as 0.5 microns in size. Through user studies we demonstrate how inAir promotes greater awareness and motivates individual actions to improve indoor air quality.
44db0c2f729661e7b30af484a1ad5df4e70cb22a
Bluetooth worms currently pose relatively little danger compared to Internet scanning worms. The BlueBag project shows targeted attacks through Bluetooth malware using proof-of-concept code and mobile devices.
0a87428c6b2205240485ee6bb9cfb00fd9ed359c
The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark.
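A schematic illustration of the median-filtering heuristic discussed above, assuming SciPy and a synthetic flow field; in the classical coarse-to-fine formulation the filter is applied to intermediate flow fields after each warping/update step. This is not the authors' full estimation pipeline.

```python
# Median filtering of an intermediate flow field to suppress outliers,
# as done heuristically between optimization/warping iterations.
import numpy as np
from scipy.ndimage import median_filter

def denoise_flow(u, v, size=5):
    # Apply a spatial median filter independently to both flow components.
    return median_filter(u, size=size), median_filter(v, size=size)

# Toy usage: a smooth horizontal flow corrupted by impulsive outliers.
rng = np.random.default_rng(0)
u_clean = np.tile(np.linspace(0, 2, 64), (64, 1))
v = np.zeros((64, 64))
u = u_clean.copy()
outliers = rng.random((64, 64)) < 0.02
u[outliers] += rng.normal(0, 10, outliers.sum())
u_f, v_f = denoise_flow(u, v)
print("mean abs error after filtering:", float(np.abs(u_f - u_clean).mean()))
```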
90930683f4ef3da8c51ed7d2553774c196172cb3
919bd86eb5fbccd3862e3e2927d4a0d468c7c591
73e51b9820e90eb6525fc953c35c9288527cecfd
Existing neural dependency parsers usually encode each word in a sentence with bi-directional LSTMs, and estimate the score of an arc from the LSTM representations of the head and the modifier, possibly missing relevant context information for the arc being considered. In this study, we propose a neural feature extraction method that learns to extract arc-specific features. We apply a neural network-based attention method to collect evidence for and against each possible head-modifier pair, with which our model computes certainty scores of belief and disbelief, and determines the final arc score by subtracting the score of disbelief from that of belief. By explicitly introducing two kinds of evidence, the arc candidates can compete against each other based on more relevant information, especially for the cases where they share the same head or modifier. This makes it possible to better discriminate two or more competing arcs by presenting their rivals (disbelief evidence). Experiments on various datasets show that our arc-specific feature extraction mechanism significantly improves the performance of bi-directional LSTM-based models by explicitly modeling long-distance dependencies. For both English and Chinese, the proposed model achieves a higher accuracy on the dependency parsing task than most existing neural attention-based models.
7bdb08efd640311ad18466a80498c78267f886ca
26d92017242e51238323983eba0fad22bac67505
This paper studies people recommendations designed to help users find known, offline contacts and discover new friends on social networking sites. We evaluated four recommender algorithms in an enterprise social networking site using a personalized survey of 500 users and a field study of 3,000 users. We found all algorithms effective in expanding users' friend lists. Algorithms based on social network information were able to produce better-received recommendations and find more known contacts for users, while algorithms using similarity of user-created content were stronger in discovering new friends. We also collected qualitative feedback from our survey users and draw several meaningful design implications.
3621bc359003e36707733650cccadf4333683293
54c32d432fb624152da7736543f2685840860a57
We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks.
3d8650c28ae2b0f8d8707265eafe53804f83f416
In an earlier paper [9], we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a “pseudo-loss”, which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman’s [1] “bagging” method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem.
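For reference, a minimal AdaBoost sketch with decision stumps as the weak learner, on synthetic two-class data; the round count, toy data and function names are illustrative assumptions, not the experimental setup of the paper.

```python
# Minimal AdaBoost-style sketch with decision stumps as weak learners.
import numpy as np

def stump_predict(X, feat, thresh, sign):
    return sign * np.where(X[:, feat] <= thresh, 1, -1)

def fit_stump(X, y, w):
    best = (np.inf, None)
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1, -1):
                err = w @ (stump_predict(X, feat, thresh, sign) != y)
                if err < best[0]:
                    best = (err, (feat, thresh, sign))
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, stump = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        pred = stump_predict(X, *stump)
        w *= np.exp(-alpha * y * pred)          # re-weight the training examples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(X, *s) for a, s in ensemble)
    return np.sign(score)

# Toy usage: two Gaussian blobs labeled +1 / -1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
model = adaboost(X, y)
print("training accuracy:", (predict(model, X) == y).mean())
```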
85379baf4972e15cd7b9f5e06ce177e693b35f53
In this paper, we propose a semi-supervised kernel matching method to address domain adaptation problems where the source distribution substantially differs from the target distribution. Specifically, we learn a prediction function on the labeled source data while mapping the target data points to similar source data points by matching the target kernel matrix to a submatrix of the source kernel matrix based on a Hilbert Schmidt Independence Criterion. We formulate this simultaneous learning and mapping process as a non-convex integer optimization problem and present a local minimization procedure for its relaxed continuous form. Our empirical results show the proposed kernel matching method significantly outperforms alternative methods on the task of cross domain sentiment classification.
7eeb362f11bfc1c89996e68e3a7c5678e271f95b
893167546c870eac602d81874c6473fd3cd8bd21
The skyline of a set of multi-dimensional points (tuples) consists of those points for which no clearly better point exists in the given set, using component-wise comparison on domains of interest. Skyline queries, i.e., queries that involve computation of a skyline, can be computationally expensive, so it is natural to consider parallelized approaches which make good use of multiple processors. We approach this problem by using hyperplane projections to obtain useful partitions of the data set for parallel processing. These partitions not only ensure small local skyline sets, but enable efficient merging of results as well. Our experiments show that our method consistently outperforms similar approaches for parallel skyline computation, regardless of data distribution, and provides insights on the impacts of different optimization strategies.
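A small sketch of the skyline definition used above, where "better" means smaller in every dimension and strictly smaller in at least one; the hotel example (price, distance) is illustrative only.

```python
# Naive skyline computation: keep the points not dominated by any other point.
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy usage: (price, distance-to-beach) pairs for hotels.
hotels = [(50, 8), (60, 3), (80, 1), (55, 9), (90, 2)]
print(skyline(hotels))   # -> [(50, 8), (60, 3), (80, 1)]
```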
b87d5f9b8013386f4ff5ad1a130efe6e924dca5c
bc18ee4a0f26320a86852b057077e8eca78b0c13
This study extends the technology acceptance model to identify factors that influence technology acceptance among pre-service teachers in Ghana. Data from 380 usable questionnaires were tested against the research model. Utilising the extended technology acceptance model (TAM) as a research framework, the study found pre-service teachers’ pedagogical beliefs, perceived ease of use, perceived usefulness of computer technology and attitude towards computer use to be significant determinants of actual use of computer technology. Results obtained employing multiple stepwise regression analysis revealed that: (1) pre-service teachers’ pedagogical beliefs significantly influenced both perceived ease of use and perceived usefulness, (2) both perceived ease of use and perceived usefulness influence attitude towards computer use, and attitude towards computer use significantly influences pre-service teachers’ actual use of computers. However, statistically, perceived ease of use did not significantly influence perceived usefulness. The findings contribute to the literature by validating the TAM in the Ghanaian context and provide several prominent implications for the research and practice of technology integration development.
04e5b276da90c8181d6ad8397f763a181baae949
Cross-validation is a mainstay for measuring performance and progress in machine learning. There are subtle differences in how exactly to compute accuracy, F-measure and Area Under the ROC Curve (AUC) in cross-validation studies. However, these details are not discussed in the literature, and incompatible methods are used by various papers and software packages. This leads to inconsistency across the research literature. Anomalies in performance calculations for particular folds and situations go undiscovered when they are buried in aggregated results over many folds and datasets, without ever a person looking at the intermediate performance measurements. This research note clarifies and illustrates the differences, and it provides guidance for how best to measure classification performance under cross-validation. In particular, there are several divergent methods used for computing F-measure, which is often recommended as a performance measure under class imbalance, e.g., for text classification domains and in one-vs.-all reductions of datasets having many classes. We show by experiment that all but one of these computation methods leads to biased measurements, especially under high class imbalance. This paper is of particular interest to those designing machine learning software libraries and researchers focused on high class imbalance.
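The difference between averaging per-fold F-measure and computing it over pooled predictions can be seen with a short experiment. The imbalanced synthetic dataset and classifier below are illustrative assumptions using scikit-learn, not the paper's data.

```python
# Two common ways to aggregate F1 over cross-validation folds:
# (a) mean of per-fold F1, (b) F1 over pooled predictions from all folds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, weights=[0.95], random_state=0)
per_fold, pooled_true, pooled_pred = [], [], []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    pred = LogisticRegression(max_iter=1000).fit(X[tr], y[tr]).predict(X[te])
    per_fold.append(f1_score(y[te], pred, zero_division=0))
    pooled_true.extend(y[te]); pooled_pred.extend(pred)

print("mean of per-fold F1:     ", np.mean(per_fold))
print("F1 on pooled predictions:", f1_score(pooled_true, pooled_pred))
```

Under high class imbalance the two numbers can differ noticeably, which is exactly the kind of divergence the note above discusses.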
8efac913ff430ef698dd3fa5df4cbb7ded3cab50
We present an unsupervised clustering tool, Principal Direction Divisive Partitioning, which is a scalable and versatile top-down method applicable to any set of data that can be represented as numerical vectors. A description of the basic method, a summary of the main application areas where this has been used, and some recent results on the selection of significant words as well as the process of updating clusters as new data arrives are discussed.
1e20f9de45d26950ecd11965989d2b15a5d0d86b
Model-based methods and deep neural networks have both been tremendously successful paradigms in machine learning. In model-based methods, we can easily express our problem domain knowledge in the constraints of the model at the expense of difficulties during inference. Deterministic deep neural networks are constructed in such a way that inference is straightforward, but we sacrifice the ability to easily incorporate problem domain knowledge. The goal of this paper is to provide a general strategy to obtain the advantages of both approaches while avoiding many of their disadvantages. The general idea can be summarized as follows: given a model-based approach that requires an iterative inference method, we unfold the iterations into a layer-wise structure analogous to a neural network. We then de-couple the model parameters across layers to obtain novel neural-network-like architectures that can easily be trained discriminatively using gradient-based methods. The resulting formulation combines the expressive power of a conventional deep network with the internal structure of the model-based approach, while allowing inference to be performed in a fixed number of layers that can be optimized for best performance. We show how this framework can be applied to non-negative matrix factorization to obtain a novel non-negative deep neural network architecture that can be trained with a multiplicative back-propagation-style update algorithm. We present experiments in the domain of speech enhancement, where we show that the resulting model is able to outperform a conventional neural network while only requiring a fraction of the number of parameters. We believe this is due to the ability afforded by our framework to incorporate problem-level assumptions into the architecture of the deep network.
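As a point of reference for the kind of iterative inference that deep unfolding turns into layers, the following is the standard multiplicative-update iteration for non-negative matrix factorization (Lee–Seung); unfolding would untie and discriminatively train the parameters of each such iteration. This sketch is not the authors' architecture, and the rank, iteration count and toy data are assumptions.

```python
# Standard multiplicative updates for V ~= W @ H with W, H >= 0 (Frobenius loss).
import numpy as np

def nmf_multiplicative(V, rank=5, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update bases
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(40, 60)))
W, H = nmf_multiplicative(V)
print("reconstruction error:", float(np.linalg.norm(V - W @ H)))
```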
26f4f07696a3828f5eeb0d8bb8944da80228b77d
The application of boosting procedures to decision tree algorithms has been shown to produce very accurate classifiers. These classifiers are in the form of a majority vote over a number of decision trees. Unfortunately, these classifiers are often large, complex and difficult to interpret. This paper describes a new type of classification rule, the alternating decision tree, which is a generalization of decision trees, voted decision trees and voted decision stumps. At the same time classifiers of this type are relatively easy to interpret. We present a learning algorithm for alternating decision trees that is based on boosting. Experimental results show it is competitive with boosted decision tree algorithms such as C5.0, and generates rules that are usually smaller in size and thus easier to interpret. In addition these rules yield a natural measure of classification confidence which can be used to improve the accuracy at the cost of abstaining from predicting examples that are hard to classify.
5e28e81e757009d2f76b8674e0da431f5845884a
This paper describes the automatic selection of features from an image training set using the theories of multi-dimensional linear discriminant analysis and the associated optimal linear projection. We demonstrate the effectiveness of these Most Discriminating Features for view-based class retrieval from a large database of widely varying real-world objects presented as "well-framed" views, and compare it with that of the principal component analysis.
dbe8c61628896081998d1cd7d10343a45b7061bd
Several strategies are described that overcome limitations of basic network models as steps towards the design of large connectionist speech recognition systems. The two major areas of concern are the problem of time and the problem of scaling. Speech signals continuously vary over time and encode and transmit enormous amounts of human knowledge. To decode these signals, neural networks must be able to use appropriate representations of time and it must be possible to extend these nets to almost arbitrary sizes and complexity within finite resources. The problem of time is addressed by the development of a Time-Delay Neural Network; the problem of scaling by Modularity and Incremental Design of large nets based on smaller subcomponent nets. It is shown that small networks trained to perform limited tasks develop time invariant, hidden abstractions that can subsequently be exploited to train larger, more complex nets efficiently. Using these techniques, phoneme recognition networks of increasing complexity can be constructed that all achieve superior recognition performance.
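A minimal numpy sketch of a single time-delay layer (a 1-D convolution over time), where each output frame depends on the current and a small window of delayed input frames; the layer sizes, nonlinearity and toy data are illustrative assumptions, not the original network.

```python
# One time-delay layer: output[t] = f( sum_d W[d] applied to x[t + d] over a window ).
import numpy as np

def time_delay_layer(x, W, b):
    """x: (T, d_in) input frames; W: (delays, d_in, d_out); b: (d_out,)."""
    delays, d_in, d_out = W.shape
    T = x.shape[0]
    out = np.zeros((T - delays + 1, d_out))
    for t in range(T - delays + 1):
        window = x[t:t + delays]                       # current + delayed frames
        out[t] = np.tanh(np.einsum("di,dio->o", window, W) + b)
    return out

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 16))                    # e.g. 16 spectral features per frame
W = rng.normal(scale=0.1, size=(3, 16, 8))             # window of 3 frames, 8 hidden units
print(time_delay_layer(frames, W, np.zeros(8)).shape)  # -> (98, 8)
```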
eeb8c7a22f731839755a4e820b608215e9885276
01905a9c0351aad54ee7dbba1544cd9db06ca935
Risk management is today a major steering tool for any organisation wanting to deal with information system (IS) security. However, IS security risk management (ISSRM) remains a difficult process to establish and maintain, mainly in a context of multi-regulations with complex and inter-connected IS. We claim that a connection with enterprise architecture management (EAM) contributes to deal with these issues. A first step towards a better integration of both domains is to define an integrated EAM-ISSRM conceptual model. This paper is about the elaboration and validation of this model. To do so, we improve an existing ISSRM domain model, i.e. a conceptual model depicting the domain of ISSRM, with the concepts of EAM. The validation of the EAM-ISSRM integrated model is then performed with the help of a validation group assessing the utility and usability of the model.
1976c9eeccc7115d18a04f1e7fb5145db6b96002
Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.
77b99e0a3a6f99537a4b497c5cd67be95c1b7088
Autonomous vehicle research has been prevalent for well over a decade but only recently has there been a small amount of research conducted on the human interaction that occurs in autonomous vehicles. Although functional software and sensor technology is essential for safe operation, which has been the main focus of autonomous vehicle research, handling all elements of human interaction is also a very salient aspect of their success. This paper will provide an overview of the importance of human vehicle interaction in autonomous vehicles, while considering relevant related factors that are likely to impact adoption. Particular attention will be given to prior research conducted on germane areas relating to control in the automobile, in addition to the different elements that are expected to affect the likelihood of success for these vehicles initially developed for human operation. This paper will also include a discussion of the limited research conducted to consider interactions with humans and the current state of published functioning software and sensor technology that exists.
31f3a12fb25ddb0a27ebdda7dd8d014996debd74
We collected usage information from 12,500 Android devices in the wild over the course of nearly 2 years. Our dataset contains 53 billion data points from 894 models of devices running 687 versions of Android. Processing the collected data presents a number of challenges ranging from scalability to consistency and privacy considerations. We present our system architecture for collection and analysis of this highly-distributed dataset, discuss how our system can reliably collect time-series data in the presence of unreliable timing information, and discuss issues and lessons learned that we believe apply to many other big data collection projects.
408a8e250316863da94ffb3eab077175d08c01bf
5656fa5aa6e1beeb98703fc53ec112ad227c49ca
We introduce the multi-prediction deep Boltzmann machine (MP-DBM). The MP-DBM can be seen as a single probabilistic model trained to maximize a variational approximation to the generalized pseudolikelihood, or as a family of recurrent nets that share parameters and approximately solve different inference problems. Prior methods of training DBMs either do not perform well on classification tasks or require an initial learning pass that trains the DBM greedily, one layer at a time. The MP-DBM does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.
4c99b87df6385bd945a00633f829e4a9ec5ce314
Social networks produce an enormous quantity of data. Facebook consists of over 400 million active users sharing over 5 billion pieces of information each month. Analyzing this vast quantity of unstructured data presents challenges for software and hardware. We present GraphCT, a Graph Characterization Toolkit for massive graphs representing social network data. On a 128-processor Cray XMT, GraphCT estimates the betweenness centrality of an artificially generated (R-MAT) 537 million vertex, 8.6 billion edge graph in 55 minutes and a real-world graph (Kwak, et al.) with 61.6 million vertices and 1.47 billion edges in 105 minutes. We use GraphCT to analyze public data from Twitter, a microblogging network. Twitter's message connections appear primarily tree-structured as a news dissemination system. Within the public data, however, are clusters of conversations. Using GraphCT, we can rank actors within these conversations and help analysts focus attention on a much smaller data subset.
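GraphCT itself targets massive graphs on the Cray XMT; as a small-scale illustration of the metric it estimates, the following computes exact betweenness centrality for a toy conversation graph with networkx. The user names are made up for illustration.

```python
# Exact betweenness centrality on a small toy graph (GraphCT estimates this
# metric approximately at massive scale).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
    ("bob", "dave"), ("dave", "erin"), ("erin", "frank"),
])
centrality = nx.betweenness_centrality(G)
for user, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{user:6s} {score:.3f}")
```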
7b1e18688dae102b8702a074f71bbea8ba540998
The ever increasing complexity of automotive vehicular systems, their connection to external networks and to the internet of things, as well as their greater internal networking, opens doors to hacking and malicious attacks. Security and privacy risks in modern automotive vehicular systems are well publicized by now. That a violation of security could lead to safety violations is a well-argued and accepted argument. The safety discipline has matured over decades, but the security discipline is much younger. There are arguments, and rightfully so, that the security engineering process is similar to the functional safety engineering process (formalized by the norm ISO 26262) and that they could be laid side-by-side and performed together, but by a different set of experts. There are moves to define a security engineering process along the lines of a functional safety engineering process for automotive vehicular systems. But are these efforts at formalizing safety and security sufficient to produce safe and secure systems? When one sets out on this path with the idea of building safe and secure systems, one realizes that there are quite a few challenges, contradictions, dissimilarities and concerns to be addressed before safe and secure systems start coming out of production lines. The effort of this paper is to bring some such challenge areas to the notice of the community and to suggest a way forward.
a608bd857a131fe0d9e10c2219747b9fa03c5afc
Modern automobiles are pervasively computerized, and hence potentially vulnerable to attack. However, while previous research has shown that the internal networks within some modern cars are insecure, the associated threat model — requiring prior physical access — has justifiably been viewed as unrealistic. Thus, it remains an open question if automobiles can also be susceptible to remote compromise. Our work seeks to put this question to rest by systematically analyzing the external attack surface of a modern automobile. We discover that remote exploitation is feasible via a broad range of attack vectors (including mechanics tools, CD players, Bluetooth and cellular radio), and further, that wireless communications channels allow long distance vehicle control, location tracking, in-cabin audio exfiltration and theft. Finally, we discuss the structural characteristics of the automotive ecosystem that give rise to such problems and highlight the practical challenges in mitigating them.
cdbb46785f9b9acf8d03f3f8aba58b201f06639f
The IT security of automotive systems is an evolving area of research. To analyse the current situation and the potentially growing tendency of arising threats we performed several practical tests on recent automotive technology. With a focus on automotive systems based on CAN bus technology, this article summarises the results of four selected tests performed on the control systems for the window lift, warning light and airbag control system as well as the central gateway. These results are supplemented in this article by a classification of these four attack scenarios using the established CERT taxonomy and an analysis of underlying security vulnerabilities, and especially, potential safety implications. With respect to the results of these tests, in this article we further discuss two selected countermeasures to address basic weaknesses exploited in our tests. These are adaptations of intrusion detection (discussing three exemplary detection patterns) and IT-forensic measures (proposing proactive measures based on a forensic model). This article discusses both in light of the four attack scenarios introduced before, covering their capabilities and restrictions. While these reactive approaches are short-term measures, which could already be added to today’s automotive IT architecture, long-term concepts are also briefly introduced, which are mainly preventive but will require a major redesign. Besides a short overview of the respective research approaches, we discuss their individual requirements, potential and restrictions.
13b44d1040bf8fc1edb9de23f50af1f324e63697
We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low/high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate high-quality results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces high-quality and interesting results on identity transfer. We demonstrate three applications of identity-guided conditional CycleGAN: identity-preserving face super-resolution, face swapping, and frontal face generation, which consistently show the advantage of our new method.
8a7b0520de8d9af82617bb13d7aef000aae26119
A mixed characterisation by means of the generalized admittance matrix and the generalized scattering matrices, obtained with mode matching is proposed to design dual-band orthomode transducer (OMT) components. Accurate and efficient fullwave analysis software based on this procedure has been developed. A dual frequency OMT in the Ku band with high performance has been fully designed with the developed software. The good agreement between the numerical and experimental results validates the design process.
17168ca2262960c57ee141b5d7095022e038ddb4
Activity recognition from smart devices and wearable sensors is an active area of research due to the widespread adoption of smart devices and the benefits they provide for supporting people in their daily lives. Many of the available datasets for fine-grained primitive activity recognition focus on locomotion or sports activities with less emphasis on real-world day-to-day behavior. This paper presents a new dataset for activity recognition in a realistic unmodified kitchen environment. Data was collected using only smart-watches from 10 lay participants while they prepared food in an unmodified rented kitchen. The paper also provides baseline performance measures for different classifiers on this dataset. Moreover, a deep feature learning system is compared with more traditional approaches based on statistical features. This analysis shows that - for all evaluation criteria - data-driven feature learning allows the classifier to achieve the best performance compared with hand-crafted features.
62a6cf246c9bec56babab9424fa36bfc9d4a47e8
How can we enable computers to automatically answer questions like “Who created the character Harry Potter”? Carefully built knowledge bases provide rich sources of facts. However, it remains a challenge to answer factoid questions raised in natural language due to numerous expressions of one question. In particular, we focus on the most common questions — ones that can be answered with a single fact in the knowledge base. We propose CFO, a Conditional Focused neural-network-based approach to answering factoid questions with knowledge bases. Our approach first zooms in on a question to find more probable candidate subject mentions, and infers the final answers with a unified conditional probabilistic framework. Powered by deep recurrent neural networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7% on a dataset of 108k questions – the largest public one to date. It outperforms the current state of the art by an absolute margin of 11.8%.
7bbacae9177e5349090336c23718a51bc94f6bfc
We seek to recognize the place depicted in a query image using a database of “street side” images annotated with geolocation information. This is a challenging task due to changes in scale, viewpoint and lighting between the query and the images in the database. One of the key problems in place recognition is the presence of objects such as trees or road markings, which frequently occur in the database and hence cause significant confusion between different places. As the main contribution, we show how to avoid features leading to confusion of particular places by using geotags attached to database images as a form of supervision. We develop a method for automatic detection of image-specific and spatially-localized groups of confusing features, and demonstrate that suppressing them significantly improves place recognition performance while reducing the database size. We show the method combines well with the state of the art bag-of-features model including query expansion, and demonstrate place recognition that generalizes over wide range of viewpoints and lighting conditions. Results are shown on a geotagged database of over 17K images of Paris downloaded from Google Street View.
d72b366e1d45cbcddfe5c856b77a2801d8d0c11f
Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency graphs or constituent trees. In this paper, we first propose to use the syntactic graph to represent three types of syntactic information, i.e., word order, dependency and constituency features. We further employ a graph-to-sequence model to encode the syntactic graph and decode a logical form. Experimental results on benchmark datasets show that our model is comparable to the state-of-the-art on Jobs640, ATIS and Geo880. Experimental results on adversarial examples demonstrate that the robustness of the model is also improved by encoding more syntactic information.
32cde90437ab5a70cf003ea36f66f2de0e24b3ab
Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
7c5f143adf1bf182bf506bd31f9ddb0f302f3ce9
ceb7784d1bebbc8e97e97cbe2b3b76bce1e708a5
Business Intelligence (BI) is on everyone's lips nowadays, since it provides businesses with the possibility to analyze their business practices and improve them. However, Small and Medium Enterprises (SME) often cannot leverage the positive effects of BI because of missing resources like personnel, knowledge, or money. Since SMEs are a major form of business organization, this shortcoming has to be overcome. As the retail industry is a substantial part of the SME branch, we propose an inter-organizational approach for a BI system for retail SME, which allows them to collaboratively collect data and perform analysis tasks. The aim of our ongoing research effort is the development of such a system following the Design Science Research Methodology. Within this article, the status quo of current BI practices in SME in the retail industry is analyzed through qualitative interviews with ten SME managers. Afterwards, adoption and success factors of BI systems and Inter-organizational Information Systems are worked out in a comprehensive structured literature review. Based on the status quo and the adoption and success factors, first requirements for the acceptance of an inter-organizational BI system are identified and validated in another round of qualitative interviews. This leads to nine functional requirements and three non-functional requirements, which can be used for designing and implementing an inter-organizational BI system for SME in the following research efforts.
43e33e80d74205e860dd4b8e26b7c458c60e201a
We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike “deep kernels”, has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters.
1e38c680492a958a2bd616a9a7121f905746a37e
The Bitcoin system (https://bitcoin.org) is a pseudo-anonymous currency that can dissociate a user from any real-world identity. In that context, a successful breach of the virtual and physical divide represents a significant flaw in the Bitcoin system [1]. In this project we demonstrate how to glean information about the real-world users behind Bitcoin transactions. We analyze publicly available data about the cryptocurrency. In particular, we focus on determining information about a Bitcoin user's physical location by examining that user's spending habits.
2c0a239caa3c2c590e4d6f23ad01c1f77adfc7a0
5f6d9b8461a9d774da12f1b363eede4b7088cf5d
Previous research results showed that UHF passive CMOS RFID tags had difficulty to achieve sensitivity less than -20 dBm. This paper presents a dual-channel 15-bit UHF passive CMOS RFID tag prototype that can work at sensitivity lower than -20 dBm. The proposed tag chip harvests energy and backscatters uplink data at 866.4-MHz (for ETSI) or 925-MHz (for FCC) channel and receives downlink data at 433-MHz channel. Consequently, the downlink data transmission does not interrupt our tag from harvesting RF energy. To use the harvested energy efficiently, we design a tag chip that includes neither a regulator nor a VCO such that the harvested energy is completely used in receiving, processing, and backscattering data. Without a regulator, our tag uses as few active analog circuits as possible in the receiver front-end. Instead, our tag uses a novel digital circuit to decode the received data. Without a VCO, the design of our tag can extract the required clock signal from the downlink data. Measurement result shows that the sensitivity of the proposed passive tag chip can reach down to -21.2 dBm. Such result corresponds to a 19.6-m reader-to-tag distance under 36-dBm EIRP and 0.4-dBi tag antenna gain. The chip was fabricated in TSMC 0.18- μm CMOS process. The die area is 0.958 mm ×0.931mm.
4991785cb0e6ee3d0b7823b59e144fb80ca3a83e
2f3a6728b87283ccf0f8822f7a60bca8280f0957
Aggregated search is the task of integrating results from potentially multiple specialized search services, or verticals, into the Web search results. The task requires predicting not only which verticals to present (the focus of most prior research), but also predicting where in the Web results to present them (i.e., above or below the Web results, or somewhere in between). Learning models to aggregate results from multiple verticals is associated with two major challenges. First, because verticals retrieve different types of results and address different search tasks, results from different verticals are associated with different types of predictive evidence (or features). Second, even when a feature is common across verticals, its predictiveness may be vertical-specific. Therefore, approaches to aggregating vertical results require handling an inconsistent feature representation across verticals, and, potentially, a vertical-specific relationship between features and relevance. We present 3 general approaches that address these challenges in different ways and compare their results across a set of 13 verticals and 1070 queries. We show that the best approaches are those that allow the learning algorithm to learn a vertical-specific relationship between features and relevance.
b8945cfb7ed72c0fd70263379c328b8570bd763f
a2770a51760a134dbb77889d5517550943ea7b81
A compact dual-polarized dual-band omnidirectional antenna with high gain is presented for 2G/3G/LTE communications, which comprises two horizontal polarization (HP) elements and a vertical polarization (VP) element. The upper HP element consists of four pairs of modified printed magneto-electric (ME) dipoles that are fed by a four-way power divider feeding network, and eight pieces of arc-shaped parasitic patches that are printed on both sides of the circular printed circuit board alternately. The four-way power divider feeding network together with the four pairs of ME dipoles mainly provide a stable 360° radiation pattern and high gain, while the eight pieces of patches are used to enhance the bandwidth. The lower HP element is similar to the upper one except that it does not have the parasitic patches. The VP element consists of four pairs of cone-shaped patches. Different from the HP element, the upper VP element provides the lower frequency band while the lower VP one yields the upper frequency band. The VP element and the HP element are perpendicularly arranged to obtain the compact and dual-polarized features. Measured results show that a bandwidth of 39.6% (0.77–1.15 GHz) with a gain of about 2.6 dBi and another bandwidth of 55.3% (1.66–2.93 GHz) with a gain of about 4.5 dBi can be achieved for the HP direction, while a bandwidth of 128% (0.7–3.2 GHz) with a gain of around 4.4 dBi can be acquired for the VP direction. Port isolation larger than 20 dB and low gain-variation levels within 2 dBi are also obtained. Hence, the proposed antenna is suitable for 2G/3G/LTE indoor communications.
33a1ee51cc5d51609943896a95c1371538f2d017
1eb0bf4b9bf04e870962b742c4fc6cb330d1235a
Definitions of business process given in much of the literature on Business Process Management are limited in depth, and their related models of business processes are correspondingly constrained. After giving a brief history of the progress of business process modeling techniques from production systems to the office environment, this paper proposes that most definitions are based on machine-metaphor views of a process. While these techniques are often rich and illuminating, it is suggested that they are too limited to express the true nature of business processes that need to develop and adapt to today's challenging environment.
bc018fc951c124aa4519697f1884fd5afaf43439
Theoretical and experimental results of a wide-band planar antenna are presented. This antenna can achieve a wide bandwidth, low cross-polarization levels, and low backward radiation levels. For wide bandwidth and easy integration with active circuits, it uses aperture-coupled stacked square patches. The coupling aperture is an H-shaped aperture. Based on the finite-difference time-domain method, a parametric study of the input impedance of the antenna is presented, and effects of each parameter on the antenna impedance are illustrated. One antenna is also designed, fabricated, and measured. The measured return loss exhibits an impedance bandwidth of 21.7%. The cross-polarization levels in both the E- and H-planes are better than 23 dB. The front-to-back ratio of the antenna radiation pattern is better than 22 dB. Both theoretical and experimental results of parameters and radiation patterns are presented and discussed.
5adcac7d15ec8999fa2beb62f0ddc6893884e080
Fingerprint orientation plays important roles in fingerprint enhancement, fingerprint classification, and fingerprint recognition. This paper critically reviews the primary advances on fingerprint orientation estimation. Advantages and limitations of existing methods have been addressed. Issues on future development have been discussed.
568cff415e7e1bebd4769c4a628b90db293c1717
Vast quantities of videos are now being captured at astonishing rates, but the majority of these are not labelled. To cope with such data, we consider the task of content-based activity recognition in videos without any manually labelled examples, also known as zero-shot video recognition. To achieve this, videos are represented in terms of detected visual concepts, which are then scored as relevant or irrelevant according to their similarity with a given textual query. In this paper, we propose a more robust approach for scoring concepts in order to alleviate many of the brittleness and low-precision problems of previous work. We jointly consider the semantic relatedness, visual reliability, and discriminative power of the concepts. To handle noise and non-linearities in the ranking scores of the selected concepts, we propose a novel pairwise order matrix approach for score aggregation. Extensive experiments on the large-scale TRECVID Multimedia Event Detection data show the superiority of our approach.
a62ac71cd51124973ac57c87d09a3461ecbd8e61
New steepest descent algorithms for adaptive filtering have been devised which allow error minimization in the mean fourth and mean sixth, etc., sense. During adaptation, the weights undergo exponential relaxation toward their optimal solutions. Time constants have been derived, and surprisingly they turn out to be proportional to the time constants that would have been obtained if the steepest descent least mean square (LMS) algorithm of Widrow and Hoff had been used. The new gradient algorithms are insignificantly more complicated to program and to compute than the LMS algorithm. Their general form is W_{j+1} = W_j + 2μ ε_j^{2K-1} X_j, where W_j is the present weight vector, W_{j+1} is the next weight vector, ε_j is the present error, X_j is the present input vector, μ is a constant controlling stability and rate of convergence, and 2K is the exponent of the error being minimized. Conditions have been derived for weight-vector convergence of the mean and of the variance for the new gradient algorithms. The behavior of the least mean fourth (LMF) algorithm is of special interest. In comparing this algorithm to the LMS algorithm, when both are set to have exactly the same time constants for the weight relaxation process, the LMF algorithm, under some circumstances, will have a substantially lower weight noise than the LMS algorithm. It is possible, therefore, that a minimum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm. This intriguing concept has implications for all forms of adaptive algorithms, whether they are based on steepest descent or otherwise.
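To make the update rule concrete, here is a minimal NumPy sketch of the general 2K-norm steepest descent update quoted above (K = 2 gives the LMF algorithm, K = 1 reduces to LMS). The filter length, step size, and toy system-identification setup are our own illustrative choices, not values from the paper.

```python
import numpy as np

def lmf_filter(x, d, num_taps=8, mu=5e-4, K=2):
    """Adaptive filter with the 2K-norm update W_{j+1} = W_j + 2*mu*e_j**(2K-1)*X_j.

    K = 2 gives the least mean fourth (LMF) algorithm; K = 1 reduces to LMS.
    """
    w = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for j in range(num_taps - 1, len(x)):
        X_j = x[j - num_taps + 1 : j + 1][::-1]    # present input vector (most recent sample first)
        e_j = d[j] - w @ X_j                       # present error
        w = w + 2 * mu * e_j ** (2 * K - 1) * X_j  # steepest descent on the mean 2K-th error power
        errors[j] = e_j
    return w, errors

# Toy system identification: recover an unknown 8-tap FIR filter from noisy observations
rng = np.random.default_rng(0)
true_w = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, true_w)[: len(x)] + 0.01 * rng.normal(size=len(x))
w_est, _ = lmf_filter(x, d)                        # w_est approaches true_w as adaptation proceeds
```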
5896b9299d100bdd10fee983fe365dc3bcf35a67
This paper presents a noninvasive wireless sensor platform for continuous health monitoring. The sensor system integrates a loop antenna, wireless sensor interface chip, and glucose sensor on a polymer substrate. The IC consists of power management, readout circuitry, wireless communication interface, LED driver, and energy storage capacitors in a 0.36-mm² CMOS chip with no external components. The sensitivity of our glucose sensor is 0.18 μA·mm⁻²·mM⁻¹. The system is wirelessly powered and achieves a measured glucose range of 0.05–1 mM with a sensitivity of 400 Hz/mM while consuming 3 μW from a regulated 1.2-V supply.
622c5da12c87ecc3ea8be91f79192b6e0ee559d2
In this theoretical synthesis, we juxtapose three traditions of prior research on user participation and involvement: the survey and experimental literature on the relationship between user participation and IS success, the normative literature on alternative development approaches, and qualitative studies that examine user participation from a variety of theoretical perspectives. We also assess progress made in the three bodies of literature, and identify gaps and directions of future research for improving user participation.
24beb987b722d4a25d3157a43000e685aa8f8874
This paper presents a statistical model which trains from a corpus annotated with part-of-speech tags and assigns them to previously unseen text with state-of-the-art accuracy. The model can be classified as a maximum entropy model and simultaneously uses many contextual features to predict the POS tag. Furthermore, this paper demonstrates the use of specialized features to model difficult tagging decisions, discusses the corpus consistency problems discovered during the implementation of these features, and proposes a training strategy that mitigates these problems.
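To illustrate what "many contextual features" means in a maximum entropy tagger, the sketch below extracts a feature dictionary for one word position. The specific feature templates (word form, affixes, neighboring words, previously assigned tags, spelling flags) are our illustrative choices in the spirit of such taggers, not a verbatim list from the paper.

```python
def contextual_features(words, tags, i):
    """Contextual features for predicting the tag of words[i]; templates are illustrative."""
    w = words[i]
    return {
        "word=" + w.lower(): 1,
        "prefix3=" + w[:3]: 1,
        "suffix3=" + w[-3:]: 1,
        "prev_word=" + (words[i - 1] if i > 0 else "<BOS>"): 1,
        "next_word=" + (words[i + 1] if i + 1 < len(words) else "<EOS>"): 1,
        "prev_tag=" + (tags[i - 1] if i > 0 else "<BOS>"): 1,
        "prev_two_tags=" + "+".join(tags[max(0, i - 2):i] or ["<BOS>"]): 1,
        "has_digit": int(any(c.isdigit() for c in w)),
        "has_upper": int(any(c.isupper() for c in w)),
        "has_hyphen": int("-" in w),
    }

# Example: features available when tagging "books" in "She books flights"
print(contextual_features(["She", "books", "flights"], ["PRP"], 1))
```

In practice such feature dictionaries would be fed to a conditional log-linear (maximum entropy) classifier, for instance scikit-learn's DictVectorizer followed by LogisticRegression, with a beam search over tag sequences at decoding time.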
6a2fe560574b76994ab1148b4dae0bfb89e3a3e3
An important aspect of human perception is anticipation, and anticipating which activities a human will do next (and how to do them) is useful for many applications; for example, anticipation enables an assistive robot to plan ahead for reactive responses in human environments. In this work, we present a constructive approach for generating various possible future human activities by reasoning about the rich spatial-temporal relations through object affordances. We represent each possible future using an anticipatory temporal conditional random field (ATCRF), where we sample the nodes and edges corresponding to future object trajectories and human poses from a generative model. We then represent the distribution over the potential futures using a set of constructed ATCRF particles. In an extensive evaluation on the CAD-120 human activity RGB-D dataset, for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of the top three predictions actually happened) of 75.4%, 69.2%, and 58.1% for anticipation times of 1, 3, and 10 seconds, respectively.
ea38789c6687e7ccb483693046fff5293e903c51
We present a batch reinforcement learning (RL) algorithm that provides probabilistic guarantees about the quality of each policy that it proposes, and which has no hyper-parameters that require expert tuning. The user may select any performance lower-bound, ρ−, and confidence level, δ, and our algorithm will ensure that the probability that it returns a policy with performance below ρ− is at most δ. We then propose an incremental algorithm that executes our policy improvement algorithm repeatedly to generate multiple policy improvements. We show the viability of our approach with a simple gridworld and the standard mountain car problem, as well as with a digital marketing application that uses real world data.
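To make the guarantee concrete, here is a minimal sketch of the kind of safety test such an algorithm performs before returning a candidate policy: compute a (1 - δ)-confidence lower bound on the candidate's performance from importance-weighted returns in the batch and return the policy only if the bound clears ρ−. The use of a simple Hoeffding bound on clipped importance-weighted returns is our simplification; the paper's actual estimator and bound may differ.

```python
import numpy as np

def passes_safety_test(weighted_returns, rho_minus, delta, b):
    """Return True only if a (1 - delta)-confidence lower bound on expected return >= rho_minus.

    weighted_returns: per-trajectory importance-weighted returns from the batch,
                      clipped to lie in [0, b] so that Hoeffding's inequality applies.
    The Hoeffding bound is a simplified stand-in for the paper's actual estimator.
    """
    returns = np.asarray(weighted_returns, dtype=float)
    n = len(returns)
    lower_bound = returns.mean() - b * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return bool(lower_bound >= rho_minus)

# Example: 5000 clipped importance-weighted returns, target rho_- = 0.4, confidence 95%
rng = np.random.default_rng(1)
ok = passes_safety_test(rng.uniform(0.0, 1.0, size=5000), rho_minus=0.4, delta=0.05, b=1.0)
```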
780b05a35f2c7dd4b4d6e2a844ef5e145f1972ae
In multi-turn dialogs, natural language understanding models can introduce obvious errors by being blind to contextual information. To incorporate dialog history, we present a neural architecture with Speaker-Sensitive Dual Memory Networks which encode utterances differently depending on the speaker. This addresses the different extents of information available to the system — the system knows only the surface form of user utterances while it has the exact semantics of system output. We performed experiments on real user data from Microsoft Cortana, a commercial personal assistant. The result showed a significant performance improvement over the state-of-the-art slot tagging models using contextual information.
259bbc822121df705bf3d5898ae031cd712505ea
1Department of Mobile Communications, School of Electrical Engineering and Computer Sciences, Technical University of Berlin, Berlin, Germany 2Wireless Networking, Signal Processing and Security Lab, Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA 3Division of Communication Systems, Department of Electrical Engineering (ISY), Linköping University, SE-581 83 Linköping, Sweden 4Communications Laboratory, Faculty of Electrical Engineering and Information Technology, Dresden University of Technology, 01062 Dresden, Germany
4eca7aa4a96300caf8622d666ecf5635d8b72132
The ability to accurately identify human activities is essential for developing automatic rehabilitation and sports training systems. In this paper, large-scale exercise motion data obtained from a forearm-worn wearable sensor are classified with a convolutional neural network (CNN). Time-series data consisting of accelerometer and orientation measurements are formatted as images, allowing the CNN to automatically extract discriminative features. A comparative study on the effects of image formatting and different CNN architectures is also presented. The best performing configuration classifies 50 gym exercises with 92.1% accuracy.
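To illustrate the "time-series as images" idea described above, the sketch below formats a window of accelerometer and orientation samples as a single-channel image (rows = sensor channels, columns = time steps) and feeds it to a small convolutional network. The window length, channel count, and layer sizes are our illustrative choices, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

# Format a sensor window as a 1-channel "image": rows = sensor channels
# (3 accelerometer + 4 orientation axes assumed here), columns = time steps.
def window_to_image(window):            # window: (time_steps, 7) float tensor
    return window.t().unsqueeze(0)      # -> (1, 7, time_steps)

class ExerciseCNN(nn.Module):
    """Small CNN over formatted sensor windows; depths and kernel sizes are illustrative."""
    def __init__(self, num_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):               # x: (batch, 1, 7, time_steps)
        return self.classifier(self.features(x).flatten(1))

model = ExerciseCNN()
dummy = torch.randn(4, 1, 7, 128)       # batch of 4 formatted windows, 128 time steps each
logits = model(dummy)                    # (4, 50) class scores, one per gym exercise
```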
1b1a829c43f1a4f3a3d70f033a1b8e7bee1f7112
6abac64862f7d207cac58c6a93f75dc80d74e575
5fb874a1c8106a5b2b2779ee8e1433149109ba00
Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reflecting the goodness-of-fit of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et al. (1995) introduce a Bayesian metric, called the BDe metric, that computes the relative posterior probability of a network structure given data. In this paper, we show that the search problem of identifying a Bayesian network (among those where each node has at most K parents) that has a relative posterior probability greater than a given constant is NP-complete when the BDe metric is used. 12.1 Introduction: Recently, many researchers have begun to investigate methods for learning Bayesian networks. Many of these approaches have the same basic components: a scoring metric and a search procedure. The scoring metric takes a database of observed cases D and a network structure B_S, and returns a score reflecting the goodness-of-fit of the data to the structure. A search procedure generates networks for evaluation by the scoring metric. These approaches use the two components to identify a network structure or set of structures that can be used to predict future events or infer causal relationships. Cooper and Herskovits (1992), herein referred to as CH, derive a Bayesian metric, which we call the BD metric, from a set of reasonable assumptions about learning Bayesian networks containing only discrete variables. Heckerman et al. (1995), herein referred to as HGC, expand upon the work of CH to derive a new metric, which we call the BDe metric, which has the desirable property of likelihood equivalence. Likelihood equivalence says that the data cannot help to discriminate equivalent structures. We now present the BD metric derived by CH. We use B_S^h to denote the hypothesis that B_S is an I-map of the distribution that generated the database. Given a belief-network structure B_S, we use Pi_i to denote the parents of x_i. We use r_i to denote the number of states of variable x_i, and q_i = prod_{x_l in Pi_i} r_l to denote the number of instances of Pi_i. We use the integer j to index these instances; that is, we write Pi_i = j to denote the observation of the jth instance of the parents of x_i.
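As an illustration of how a scoring metric of this kind is computed in practice, the sketch below evaluates a per-node score in the CH/BD family under uniform Dirichlet priors (the K2 special case), using the r_i, q_i, and parent-instantiation notation defined above. The counting scheme and prior choice are our simplifications and do not reproduce the BDe derivation itself.

```python
import numpy as np
from math import lgamma

def log_bd_node_score(data, child, parents, r):
    """Log of the CH/BD family score for one node, with uniform priors (K2 special case).

    data:    (num_cases, num_vars) integer array; variable i takes states 0..r[i]-1
    child:   index of x_i
    parents: list of parent indices (Pi_i)
    r:       r[i] = number of states of variable x_i
    """
    r_i = r[child]
    if parents:
        radices = [r[p] for p in parents]
        q_i = int(np.prod(radices))
        # Encode each case's parent instantiation as an integer j in [0, q_i)
        j_index = np.ravel_multi_index(tuple(data[:, p] for p in parents), radices)
    else:
        q_i, j_index = 1, np.zeros(len(data), dtype=int)
    score = 0.0
    for j in range(q_i):
        child_states = data[j_index == j, child]
        N_ij = len(child_states)
        score += lgamma(r_i) - lgamma(N_ij + r_i)          # log (r_i - 1)! / (N_ij + r_i - 1)!
        for k in range(r_i):
            N_ijk = int(np.sum(child_states == k))
            score += lgamma(N_ijk + 1)                      # log N_ijk!
    return score

# Example: score node 2 with parents {0, 1} on a toy binary dataset
data = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 1]])
print(log_bd_node_score(data, child=2, parents=[0, 1], r=[2, 2, 2]))
```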
7783fd2984ac139194d21c10bd83b4c9764826a3
Probabilistic methods to create the areas, of computational tools. But I needed to get canned, bayesian networks worked recently strongly. Recently I tossed this book was published. In intelligent systems is researchers in, ai operations research excellence award for graduate. Too concerned about how it i've been. Apparently daphne koller and learning structures evidential reasoning. Pearl is a language for i've. Despite its early publication date it, is not great give the best references.
5c386d601ffcc75f7635a4a5c6066824b37b9425
Nowadays, it is hard to find a popular Web site with a registration form that is not protected by an automated human proof test which displays a sequence of characters in an image and requests the user to enter the sequence into an input field. This security mechanism is based on the Turing Test—one of the oldest concepts in Artificial Intelligence—and it is most often called Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). This kind of test has been conceived to prevent automated access to an important Web resource, for example, a Web mail service or a Social Network. There are currently hundreds of these tests, which are served millions of times a day, thus involving a huge amount of human work. On the other hand, a number of these tests have been broken; that is, automated programs designed by researchers, hackers, and spammers have been able to automatically serve the correct answer. In this chapter, we present the history and the concept of CAPTCHAs, along with their applications and a wide review of their instantiations. We also discuss their evaluation, both from the user and the security perspectives, including usability, attacks, and countermeasures. We expect this chapter provides the reader with a good overview of this interesting field. Contents: 1. Introduction (1.1 The Turing Test and the Origin of CAPTCHAs); 2. Motivation and Applications (2.1 General Description of CAPTCHAs; 2.2 Desirable Properties of CAPTCHAs; 2.3 Implementation and Deployment; 2.4 Applications and the Rise of the Robots); 3. Types of CAPTCHAs (3.1 OCR; 3.2 Image; 3.3 Audio; 3.4 Cognitive); 4. Evaluation of CAPTCHAs (4.1 Efficiency; 4.2 Accessibility Problems; 4.3 Practical Considerations); 5. Security and Attacks on CAPTCHAs (5.1 Attacks on CAPTCHAs; 5.2 Security Requirements on CAPTCHAs); 6. Alternatives to CAPTCHAs; 7. Conclusions and Future Trends; References.
941a668cb77010e032a809861427fa8b1bee8ea0
Signal processing today is performed in the vast majority of systems for ECG analysis and interpretation. The objective of ECG signal processing is manifold and comprises the improvement of measurement accuracy and reproducibility (when compared with manual measurements) and the extraction of information not readily available from the signal through visual assessment. In many situations, the ECG is recorded during ambulatory or strenuous conditions such that the signal is corrupted by different types of noise, sometimes originating from another physiological process of the body. Hence, noise reduction represents another important objective of ECG signal processing; in fact, the waveforms of interest are sometimes so heavily masked by noise that their presence can only be revealed once appropriate signal processing has first been applied. Electrocardiographic signals may be recorded on a long timescale (i.e., several days) for the purpose of identifying intermittently occurring disturbances in the heart rhythm. As a result, the produced ECG recording amounts to huge data sizes that quickly fill up available storage space. Transmission of signals across public telephone networks is another application in which large amounts of data are involved. For both situations, data compression is an essential operation and, consequently, represents yet another objective of ECG signal processing. Signal processing has contributed significantly to a new understanding of the ECG and its dynamic properties as expressed by changes in rhythm and beat morphology. For example, techniques have been developed that characterize oscillations related to the cardiovascular system and reflected by subtle variations in heart rate. The detection of low-level, alternating changes in T wave amplitude is another example of oscillatory behavior that has been established as an indicator of increased risk for sudden, life-threatening arrhythmias. Neither of these two oscillatory signal properties can be perceived by the naked eye from a standard ECG printout. Common to all types of ECG analysis—whether it concerns resting ECG interpretation, stress testing, ambulatory monitoring, or intensive care monitoring—is a basic set of algorithms that condition the signal with respect to different types of noise and artifacts, detect heartbeats, extract basic ECG measurements of wave amplitudes and durations, and compress the data for efficient storage or transmission; the block diagram in Fig. 1 presents this set of signal processing algorithms. Although these algorithms are frequently implemented to operate in sequential order, information on the occurrence time of a heartbeat, as produced by the QRS detector, is sometimes incorporated into the other algorithms to improve performance. The complexity of each algorithm varies from application to application so that, for example, noise filtering performed in ambulatory monitoring is much more sophisticated than that required in resting ECG analysis. Once the information produced by the basic set of algorithms is available, a wide range of ECG applications exist where it is of interest to use signal processing for quantifying heart rhythm and beat morphology properties. The signal processing associated with two such applications—high-resolution ECG and T wave alternans—are briefly described at the end of this article. The interested reader is referred to, for example, Ref. 1, where a detailed description of other ECG applications can be found.
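As a concrete example of the basic set of algorithms described above, the following sketch implements a deliberately simplified QRS (heartbeat) detector: band-pass filtering, squaring, moving-window integration, and thresholding in the spirit of classic detectors. The sampling rate, band edges, and threshold rule are our illustrative choices, not values taken from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_qrs(ecg, fs=360.0):
    """Simplified QRS detector: band-pass, square, integrate, threshold."""
    # 1. Band-pass 5-15 Hz to emphasize the QRS complex and suppress baseline wander and T waves
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Square to make all deflections positive and accentuate large slopes
    squared = filtered ** 2
    # 3. Moving-window integration (~150 ms) to obtain a smooth QRS energy envelope
    win = int(0.15 * fs)
    envelope = np.convolve(squared, np.ones(win) / win, mode="same")
    # 4. Fixed fraction-of-maximum threshold with a 250-ms refractory period between detections
    threshold = 0.5 * np.max(envelope)
    refractory = int(0.25 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > threshold and envelope[i] >= envelope[i - 1]
                and envelope[i] > envelope[i + 1] and i - last > refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)   # sample indices of detected heartbeats
```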
b681da8d4be586f6ed6658038c81cdcde1d54406
In this letter, a novel dual-band and polarization-flexible substrate integrated waveguide (SIW) cavity antenna is proposed. The SIW cavity used for the antenna is excited by a conventional TE120 mode for its first resonance. With the intervention of the slot, a second resonance excited by a modified TE120 mode is also generated, thereby providing a broadside radiation pattern at the two resonant frequencies. In addition, the proposed antenna has two orthogonal feeding lines, so it can provide any of the six major polarization states. In this letter, three major polarization cases are simulated and compared to measured results. Since modern communication systems require multifunctional antennas, the proposed antenna concept is a promising candidate.
cf18287e79b1fd73cd333fc914bb24c00a537f4c
In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction. If a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robot learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skip-connections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robot learning.
89701a3b04c3f102ebec83db3249b20791eacb38
Context awareness is a key property for enabling context-aware services. For a mobile device, the user's location or trajectory is one of the crucial contexts. One common challenge in detecting location or trajectory on mobile devices is managing the tradeoff between accuracy and power consumption. Typical approaches are (1) controlling the frequency of sensor usage and (2) sensor fusion techniques. The algorithm proposed in this paper takes a different approach: it improves accuracy by merging repeatedly measured, coarse, and inaccurate location data from cell towers. The experimental results show that the mean error distance between the detected trajectory and the ground truth is improved from 44 m to 10.9 m by merging data from 41 days of measurements.
a85ad1a2ee829c315be6ded0eee8a1dadc21a666
Autonomous and assisted driving are undoubtedly hot topics in computer vision. However, the driving task is extremely complex and a deep understanding of drivers' behavior is still lacking. Several researchers are now investigating the attention mechanism in order to define computational models for detecting salient and interesting objects in the scene. Nevertheless, most of these models only refer to bottom-up visual saliency and are focused on still images. Instead, during the driving experience the temporal nature and peculiarity of the task influence the attention mechanisms, leading to the conclusion that real-life driving data is mandatory. In this paper we propose a novel and publicly available dataset acquired during actual driving. Our dataset, composed of more than 500,000 frames, contains drivers' gaze fixations and their temporal integration, providing task-specific saliency maps. Geo-referenced locations, driving speed, and course complete the set of released data. To the best of our knowledge, this is the first publicly available dataset of this kind and can foster new discussions on better understanding, exploiting, and reproducing the driver's attention process in the autonomous and assisted cars of future generations.
a0ff514a8a64ba5a7cd7430ca04245fd037d040c
This paper builds on academic and industry discussions from the 2012 and 2013 pre-ICIS events: BI Congress III and the Special Interest Group on Decision Support Systems (SIGDSS) workshop, respectively. Recognizing the potential of “big data” to offer new insights for decision making and innovation, panelists at the two events discussed how organizations can use and manage big data for competitive advantage. In addition, expert panelists helped to identify research gaps. While emerging research in the academic community identifies some of the issues in acquiring, analyzing, and using big data, many of the new developments are occurring in the practitioner community. We bridge the gap between academic and practitioner research by presenting a big data analytics framework that depicts a process view of the components needed for big data analytics in organizations. Using practitioner interviews and literature from both academia and practice, we identify the current state of big data research guided by the framework and propose potential areas for future research to increase the relevance of academic research to practice.
34d03cfb02806e668f9748ee60ced1b269d1db6c
0607acbb450d2afef7f2aa5b53bb05966bd065ed
While Deep Neural Networks (DNNs) have achieved tremendous success for large vocabulary continuous speech recognition (LVCSR) tasks, training of these networks is slow. One reason is that DNNs are trained with a large number of training parameters (i.e., 10-50 million). Because networks are trained with a large number of output targets to achieve good performance, the majority of these parameters are in the final weight layer. In this paper, we propose a low-rank matrix factorization of the final weight layer. We apply this low-rank technique to DNNs for both acoustic modeling and language modeling. We show on three different LVCSR tasks ranging between 50-400 hrs, that a low-rank factorization reduces the number of parameters of the network by 30-50%. This results in roughly an equivalent reduction in training time, without a significant loss in final recognition accuracy, compared to a full-rank representation.
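To illustrate the idea, the sketch below replaces a full-rank final layer with two smaller linear layers and compares parameter counts. The layer sizes and rank are made-up numbers chosen only to show the parameter reduction, not the configurations used in the paper.

```python
import torch.nn as nn

hidden, targets, rank = 2048, 10000, 256   # illustrative sizes, not the paper's

full_rank_layer = nn.Linear(hidden, targets)             # 2048 x 10000 weights (+ bias): ~20.5M parameters
low_rank_layer = nn.Sequential(                           # 2048 x 256 + 256 x 10000 (+ bias): ~3.1M parameters
    nn.Linear(hidden, rank, bias=False),                   # rank-r bottleneck, no nonlinearity in between
    nn.Linear(rank, targets),
)

def num_params(module):
    return sum(p.numel() for p in module.parameters())

print(num_params(full_rank_layer), num_params(low_rank_layer))
# In this toy configuration the bottleneck cuts final-layer parameters by roughly 85%.
```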
56c16d9e2a5270ba6b1d83271e2c10916591968d
56c2fb2438f32529aec604e6fc3b06a595ddbfcc
Recently, several machine learning methods for gender classification from frontal facial images have been proposed. Their variety suggests that there is not a unique or generic solution to this problem. In addition to the diversity of methods, there is also a diversity of benchmarks used to assess them. This gave us the motivation for our work: to select and compare in a concise but reliable way the main state-of-the-art methods used in automatic gender recognition. As expected, there is no overall winner. The winner, based on the accuracy of the classification, depends on the type of benchmarks used.