_id | text |
---|---|
bdf67ee2a13931ca2d5eac458714ed98148d1b34 | A model of a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system. |
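A minimal sketch of the profile idea this abstract describes: a per-subject statistical profile over audit-record metrics, with a record flagged when it deviates from the profile. The metrics, the 3-sigma threshold, and all names are illustrative assumptions, not the paper's actual statistical models.

```python
import numpy as np

def build_profile(history):
    """Per-metric mean/std profile from a subject's past audit records."""
    h = np.asarray(history, dtype=float)
    return h.mean(axis=0), h.std(axis=0) + 1e-9  # guard against zero variance

def is_anomalous(record, profile, k=3.0):
    """Flag an audit record whose metrics deviate more than k sigma from the profile."""
    mean, std = profile
    z = np.abs((np.asarray(record, dtype=float) - mean) / std)
    return bool((z > k).any())

# Hypothetical metrics: [logins per day, CPU seconds per session]
history = [[3, 12.0], [4, 10.5], [2, 11.2], [3, 13.1]]
profile = build_profile(history)
print(is_anomalous([25, 11.0], profile))  # abnormal login count -> True
```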
eeb1a1e0cab8d809b5789d04418dc247dca956cc | Lee, Stolfo, and Mok have previously reported the use of association rules and frequency episodes for mining audit data to gain knowledge for intrusion detection. The integration of association rules and frequency episodes with fuzzy logic can produce more abstract and flexible patterns for intrusion detection, since many quantitative features are involved in intrusion detection and security itself is fuzzy. We present a modification of a previously reported algorithm for mining fuzzy association rules, define the concept of fuzzy frequency episodes, and present an original algorithm for mining fuzzy frequency episodes. We add a normalization step to the procedure for mining fuzzy association rules in order to prevent one data instance from contributing more than others. We also modify the procedure for mining frequency episodes to learn fuzzy frequency episodes. Experimental results show the utility of fuzzy association rules and fuzzy frequency episodes in intrusion detection. Draft: Updated version published in the International Journal of Intelligent Systems, Volume 15, No. 1, August 2000 |
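A hedged sketch of the normalization step this abstract adds: a quantitative feature is fuzzified into overlapping sets, and each record's memberships are normalized to sum to one so that no single data instance contributes more support than others. The triangular sets and the feature are invented for illustration, not taken from the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a <= b <= c."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzify_normalized(value, sets):
    """Memberships for one record, normalized to sum to 1 so that no record
    contributes more than others when fuzzy supports are accumulated."""
    m = np.array([tri(value, *abc) for abc in sets])
    s = m.sum()
    return m / s if s > 0 else m

# Hypothetical LOW/MEDIUM/HIGH sets for 'connections per 2 seconds'
sets = [(0, 0, 10), (5, 15, 25), (20, 40, 40)]
print(fuzzify_normalized(8.0, sets))  # partly LOW, mostly MEDIUM, sums to 1
```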
0b07f84c22ce01309981a02c23d5cd1770cad48b | Table partitioning splits a table into smaller parts that can be accessed, stored, and maintained independently of one another. From their traditional use in improving query performance, partitioning strategies have evolved into a powerful mechanism to improve the overall manageability of database systems. Table partitioning simplifies administrative tasks like data loading, removal, backup, statistics maintenance, and storage provisioning. Query language extensions now enable applications and user queries to specify how their results should be partitioned for further use. However, query optimization techniques have not kept pace with the rapid advances in usage and user control of table partitioning. We address this gap by developing new techniques to generate efficient plans for SQL queries involving multiway joins over partitioned tables. Our techniques are designed for easy incorporation into bottom-up query optimizers that are in wide use today. We have prototyped these techniques in the PostgreSQL optimizer. An extensive evaluation shows that our partition-aware optimization techniques, with low optimization overhead, generate plans that can be an order of magnitude better than plans produced by current optimizers. |
26d673f140807942313545489b38241c1f0401d0 | The amount of data in the world and in our lives seems ever-increasing, with no end in sight. The Weka workbench is an organized collection of state-of-the-art machine learning algorithms and data pre-processing tools. The basic way of interacting with these methods is by invoking them from the command line. However, convenient interactive graphical user interfaces are provided for data exploration, for setting up large-scale experiments on distributed computing platforms, and for designing configurations for streamed data processing. These interfaces constitute an advanced environment for experimental data mining. Classification is an important data mining technique with broad applications; it classifies data of various kinds. This paper presents a performance evaluation of the REPTree, Simple Cart and RandomTree classification algorithms. It sets out to make a comparative evaluation of these classifiers on a dataset of Indian news, aiming to maximize the true positive rate and minimize the false positive rate. The Weka API was used for processing. The results on the Indian news dataset show that RandomTree achieves better efficiency and accuracy than REPTree and Simple Cart. Keywords— Simple Cart, RandomTree, REPTree, Weka, WWW |
6e633b41d93051375ef9135102d54fa097dc8cf8 | Recently there has been a lot of interest in “ensemble learning” — methods that generate many classifiers and aggregate their results. Two well-known methods are boosting (see, e.g., Schapire et al., 1998) and bagging (Breiman, 1996) of classification trees. In boosting, successive trees give extra weight to points incorrectly predicted by earlier predictors. In the end, a weighted vote is taken for prediction. In bagging, successive trees do not depend on earlier trees — each is independently constructed using a bootstrap sample of the data set. In the end, a simple majority vote is taken for prediction. Breiman (2001) proposed random forests, which add an additional layer of randomness to bagging. In addition to constructing each tree using a different bootstrap sample of the data, random forests change how the classification or regression trees are constructed. In standard trees, each node is split using the best split among all variables. In a random forest, each node is split using the best among a subset of predictors randomly chosen at that node. This somewhat counterintuitive strategy turns out to perform very well compared to many other classifiers, including discriminant analysis, support vector machines and neural networks, and is robust against overfitting (Breiman, 2001). In addition, it is very user-friendly in the sense that it has only two parameters (the number of variables in the random subset at each node and the number of trees in the forest), and is usually not very sensitive to their values. The randomForest package provides an R interface to the Fortran programs by Breiman and Cutler (available at http://www.stat.berkeley.edu/users/breiman/). This article provides a brief introduction to the usage and features of the R functions. |
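The two tuning parameters the article highlights map directly onto most implementations. A sketch using scikit-learn rather than the R randomForest package the article documents (a Python stand-in, assumed for illustration): `n_estimators` plays the role of ntree and `max_features` the role of mtry, the size of the random predictor subset tried at each split.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# ntree -> n_estimators; mtry -> max_features (sqrt(p) is a common default)
rf = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())
```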
8cfe24108b7f73aa229be78f9108e752e8210c36 | Although data mining has been successfully implemented in the business world for some time now, its use in higher education is still relatively new; there, it is intended for the identification and extraction of new and potentially valuable knowledge from the data. The aim was to use data mining to develop a model which can draw conclusions about students' academic success. Different methods and techniques of data mining were compared during the prediction of students' success, applying the data collected from the surveys conducted during the summer semester at the University of Tuzla, the Faculty of Economics, academic year 2010-2011, among first year students and the data taken during the enrollment. Success was evaluated with the passing grade at the exam. The impact of students' socio-demographic variables, achieved results from high school and from the entrance exam, and attitudes towards studying, all of which can have an effect on success, was investigated. In future investigations, by identifying and evaluating variables associated with the process of studying, and with a larger sample, it would be possible to produce a model which would stand as a foundation for the development of a decision support system in higher education. |
cc5c84c1c876092e6506040cde7d2a5b9e9065ff | This paper compares the accuracy of decision tree and Bayesian network algorithms for predicting the academic performance of undergraduate and postgraduate students at two very different academic institutes: Can Tho University (CTU), a large national university in Viet Nam; and the Asian Institute of Technology (AIT), a small international postgraduate institute in Thailand that draws students from 86 different countries. Although the diversity of these two student populations is very different, the data-mining tools were able to achieve similar levels of accuracy for predicting student performance: 73/71% for {fail, fair, good, very good} and 94/93% for {fail, pass} at the CTU/AIT respectively. These predictions are most useful for identifying and assisting failing students at CTU (64% accurate), and for selecting very good students for scholarships at the AIT (82% accurate). In this analysis, the decision tree was consistently 3-12% more accurate than the Bayesian network. The results of these case studies give insight into techniques for accurately predicting student performance, compare the accuracy of data mining algorithms, and demonstrate the maturity of open source tools. |
9d0f09e343ebc9d5e896528273b79a1f13aa5c07 | |
2cb6d78e822ca7fd0e29670ec7e26e37ae3d3e8f | This paper presents a novel compact low-temperature cofired ceramic (LTCC) bandpass filter (BPF) with wide stopband and high selectivity. The proposed circuit consists of two coupled λg/4 transmission-line resonators. A special coupling region is selected to realize a novel discriminating coupling scheme for generating a transmission zero (TZ) at the third harmonic frequency. The mechanism is analyzed and the design guideline is described. The source-load coupling is introduced to generate two TZs near the passband and one in the stopband. Thus, wide stopband can be obtained without extra circuits. Due to the LTCC multilayer structures, the filter size is 0.058 λg × 0.058 λg × 0.011 λg, or 2.63 mm × 2.61 mm × 0.5 mm. The simulated and measured results of the demonstrated LTCC BPF are presented to validate the proposed design. |
52c9eb70c55685b349126ed907e037f383673cf3 | We propose a novel approach to abstractive Web summarization based on the observation that summaries for similar URLs tend to be similar in both content and structure. We leverage existing URL clusters and construct per-cluster word graphs that combine known summaries while abstracting out URL-specific attributes. The resulting topology, conditioned on URL features, allows us to cast the summarization problem as a structured learning task using a lowest cost path search as the decoding step. Early experimental results on a large number of URL clusters show that this approach is able to outperform previously proposed Web summarizers. |
8947ca4949fc66eb65f863dfb825ebd90ab01772 | Many applications in text processing require significant human effort for either labeling large document collections (when learning statistical models) or extrapolating rules from them (when using knowledge engineering). In this work, we describe a way to reduce this effort, while retaining the methods' accuracy, by constructing a hybrid classifier that utilizes human reasoning over automatically discovered text patterns to complement machine learning. Using a standard sentiment-classification dataset and real customer feedback data, we demonstrate that the resulting technique results in significant reduction of the human effort required to obtain a given classification accuracy. Moreover, the hybrid text classifier also results in a significant boost in accuracy over machine-learning based classifiers when a comparable amount of labeled data is used. |
563384a5aa6111610ac4939f645d1125a5a0ac7f | Automatic recognition of people has received much attention during the recent years due to its many applications in different fields such as law enforcement, security applications or video indexing. Face recognition is an important and very challenging technique for automatic people recognition. To date, there is no technique that provides a robust solution to all situations and different applications that face recognition may encounter. In general, the performance of a face recognition system is determined by how accurately feature vectors are extracted and how accurately they are classified. It is therefore necessary to look closely at the feature extractor and the classifier. In this paper, Principal Component Analysis (PCA) plays the key role in feature extraction and SVMs are used to tackle the face recognition problem. Support Vector Machines (SVMs) have recently been proposed as a new classifier for pattern recognition. We illustrate the potential of SVMs on the Cambridge ORL Face database, which consists of 400 images of 40 individuals, containing quite a high degree of variability in expression, pose, and facial details. The SVMs that have been used include the Linear (LSVM), Polynomial (PSVM), and Radial Basis Function (RBFSVM) SVMs. We provide experimental evidence which shows that Polynomial and Radial Basis Function (RBF) SVMs perform better than the Linear SVM on the ORL Face Dataset when used with one-against-all classification. We also compared the SVM-based recognition with the standard eigenface approach using the Multi-Layer Perceptron (MLP) classification criterion. |
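A hedged sketch of the pipeline evaluated here: PCA features followed by SVMs with the three kernels the paper tries, run on scikit-learn's copy of the ORL (Olivetti) faces. Two assumptions to note: scikit-learn's SVC handles multi-class internally (one-vs-one) whereas the paper used one-against-all, and the 50 components and train/test split are illustrative choices.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()  # the ORL database: 400 images, 40 subjects
Xtr, Xte, ytr, yte = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

for kernel in ("linear", "poly", "rbf"):  # LSVM, PSVM, RBFSVM
    clf = make_pipeline(PCA(n_components=50), SVC(kernel=kernel))
    print(kernel, clf.fit(Xtr, ytr).score(Xte, yte))
```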
47daf9cc8fb15b3a4b7c3db4498d29a5a8b84c22 | 3D object categorization is a non-trivial task in computer vision encompassing many real-world applications. We pose the problem of categorizing 3D polygon meshes as learning appearance evolution from multi-view 2D images. Given a corpus of 3D polygon meshes, we first render the corresponding RGB and depth images from multiple viewpoints on a uniform sphere. Using rank pooling, we propose two methods to learn the appearance evolution of the 2D views. Firstly, we train view-invariant models based on a deep convolutional neural network (CNN) using the rendered RGB-D images and learn to rank the first fully connected layer activations and, therefore, capture the evolution of these extracted features. The parameters learned during this process are used as the 3D shape representations. In the second method, we learn the aggregation of the views from the outset by applying the ranking machine to the rendered RGB-D images directly, which produces aggregated 2D images that we term "3D shape images". We then learn CNN models on this novel shape representation for both RGB and depth which encode salient geometrical structure of the polygon. Experiments on the ModelNet40 and ModelNet10 datasets show that the proposed method consistently outperforms existing state-of-the-art algorithms in 3D shape recognition. |
58156d27f80ee450ba43651a780ebd829b70c363 | Previous research on kernel monitoring and protection widely relies on higher privileged system components, such as hardware virtualization extensions, to isolate security tools from potential kernel attacks. These approaches increase both the maintenance effort and the code base size of privileged system components, which consequently increases the risk of having security vulnerabilities. SKEE, which stands for Secure Kernel-level Execution Environment, solves this fundamental problem. SKEE is a novel system that provides an isolated lightweight execution environment at the same privilege level as the kernel. SKEE is designed for commodity ARM platforms. Its main goal is to allow secure monitoring and protection of the kernel without active involvement of higher privileged software. SKEE provides a set of novel techniques to guarantee isolation. It creates a protected address space that is not accessible to the kernel, which is challenging to achieve when both the kernel and the isolated environment share the same privilege level. SKEE solves this challenge by preventing the kernel from managing its own memory translation tables. Hence, the kernel is forced to switch to SKEE to modify the system’s memory layout. In turn, SKEE verifies that the requested modification does not compromise the isolation of the protected address space. Switching from the OS kernel to SKEE exclusively passes through a well-controlled switch gate. This switch gate is carefully designed so that its execution sequence is atomic and deterministic. These properties combined guarantee that a potentially compromised kernel cannot exploit the switching sequence to compromise the isolation. If the kernel attempts to violate these properties, it will only cause the system to fail without exposing the protected address space. SKEE exclusively controls access permissions of the entire OS memory. Hence, it prevents attacks that attempt to inject unverified code into the kernel. Moreover, it can be easily extended to intercept other system events in order to support various intrusion detection and integrity verification tools. This paper presents a SKEE prototype that runs on both 32-bit ARMv7 and 64-bit ARMv8 architectures. Performance evaluation results demonstrate that SKEE is a practical solution for real world systems. |
698902ce1a836d353d4ff955c826095e28506e05 | |
da09bc42bbf5421b119abea92716186a1ca3f02f | We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω′, if and only if the identities ω and ω′ are close to each other as measured by the “set overlap” distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term “attribute-based encryption”. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model. |
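The error-tolerance condition described above, written out in symbols: a key for identity ω decrypts a ciphertext for identity ω′ exactly when the set overlap reaches the scheme's threshold d (the notation Dec/Enc/sk is generic, not the paper's).

```latex
\mathrm{Dec}\bigl(\mathrm{sk}_{\omega},\ \mathrm{Enc}(\omega', m)\bigr) = m
\quad\Longleftrightarrow\quad
\lvert \omega \cap \omega' \rvert \;\ge\; d
```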
b3baba6c34a2946b999cc0f6be6bb503d303073e | This paper describes a simple, non-parametric and generic test of the equivalence of Receiver Operating Characteristic (ROC) curves based on a modified Kolmogorov-Smirnov (KS) test. The test is described in relation to the commonly used techniques such as the Area Under the ROC curve (AUC) and the Neyman-Pearson method. We first review how the KS test is used to test the null hypotheses that the class labels predicted by a classifier are no better than random. We then propose an interval mapping technique that allows us to use two KS tests to test the null hypothesis that two classifiers have ROC curves that are equivalent. We demonstrate that this test discriminates different ROC curves both when one curve dominates another and when the curves cross and so are not discriminated by AUC. The interval mapping technique is then used to demonstrate that, although AUC has its limitations, it can be a model-independent and coherent measure of classifier performance. |
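A small sketch of the first test the paper reviews, under the standard reading that the two-sample KS statistic between positive-class and negative-class score distributions equals the ROC curve's maximum TPR − FPR gap; the score distributions below are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 500)   # classifier scores for the negative class
pos = rng.normal(0.8, 1.0, 500)   # classifier scores for the positive class

# KS statistic = max |TPR - FPR| over thresholds; the null hypothesis is
# that predictions are no better than random (same score distribution).
stat, pvalue = ks_2samp(pos, neg)
print(stat, pvalue)
```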
090f4b588ba58c36a21eddd67ea33d59614480c1 | Syntactic simplification is the process of reducing the grammatical complexity of a text, while retaining its information content and meaning. The aim of syntactic simplification is to make text easier to comprehend for human readers, or process by programs. In this thesis, I describe how syntactic simplification can be achieved using shallow robust analysis, a small set of hand-crafted simplification rules and a detailed analysis of the discourse-level aspects of syntactically rewriting text. I offer a treatment of relative clauses, apposition, coordination and subordination. I present novel techniques for relative clause and appositive attachment. I argue that these attachment decisions are not purely syntactic. My approaches rely on a shallow discourse model and on animacy information obtained from a lexical knowledge base. I also show how clause and appositive boundaries can be determined reliably using a decision procedure based on local context, represented by part-of-speech tags and noun chunks. I then formalise the interactions that take place between syntax and discourse during the simplification process. This is important because the usefulness of syntactic simplification in making a text accessible to a wider audience can be undermined if the rewritten text lacks cohesion. I describe how various generation issues like sentence ordering, cue-word selection, referring-expression generation, determiner choice and pronominal use can be resolved so as to preserve conjunctive and anaphoric cohesive-relations during syntactic simplification. In order to perform syntactic simplification, I have had to address various natural language processing problems, including clause and appositive identification and attachment, pronoun resolution and referring-expression generation. I evaluate my approaches to solving each problem individually, and also present a holistic evaluation of my syntactic simplification system. |
a9a7168b5b45fcf63e7f8904f68f6a90f8062443 | |
6d7c6c8828c7ac91cc74a79fdc06b5783102a784 | This article gives an overview of the activities of the company Microwave Vision, formerly Satimo, oriented to health-related applications. The existing products in terms of Specific Absorption Rate (SAR) measurement and RF safety are described in detail. The progress of the development of a new imaging modality for breast pathology detection using microwaves is briefly reported. |
0c1a55e0e02c1dbf6cf363ec022ca17925586e16 | Identification of tracked objects is a key capability of automated surveillance and information systems for air, surface and subsurface (maritime), and ground environments, improving situational awareness and offering decision support to operational users. The Bayesian-based identification data combining process (IDCP) provides an effective instrument for fusion of uncertain identity indications from various sources. A user-oriented approach to configuration of the process is introduced, which enables operators to adapt the IDCP to changing identification needs in varying operational scenarios. Application of results from cognitive psychology and decision theory provides good access to retrieval of Bayesian data and makes configuration easily feasible to operational experts. |
2636bff7d3bdccf9b39c5e1e7d86a77690f1c07d | Reward shaping is one of the most effective methods to tackle the crucial yet challenging problem of credit assignment in Reinforcement Learning (RL). However, designing shaping functions usually requires much expert knowledge and hand-engineering, and the difficulties are further exacerbated given multiple similar tasks to solve. In this paper, we consider reward shaping on a distribution of tasks, and propose a general meta-learning framework to automatically learn the efficient reward shaping on newly sampled tasks, assuming only shared state space but not necessarily action space. We first derive the theoretically optimal reward shaping in terms of credit assignment in model-free RL. We then propose a value-based meta-learning algorithm to extract an effective prior over the optimal reward shaping. The prior can be applied directly to new tasks, or provably adapted to the task-posterior while solving the task within few gradient updates. We demonstrate the effectiveness of our shaping through significantly improved learning efficiency and interpretable visualizations across various settings, including notably a successful transfer from DQN to DDPG. |
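For orientation, a sketch of the classical potential-based form on which learned shaping of this kind builds: F(s, a, s′) = γΦ(s′) − Φ(s) leaves optimal policies unchanged, so a meta-learner in this setting effectively has to supply a good potential Φ per task. The chain-world Φ below is an invented example, not the paper's learned prior.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Potential-based shaping: adds gamma*phi(s') - phi(s) to the reward,
    a policy-invariant credit-assignment signal. A meta-learned shaping
    would supply (or adapt) phi for each sampled task."""
    return r + gamma * phi(s_next) - phi(s)

# Toy example: distance-to-goal potential on a 1-D chain with the goal at state 10
phi = lambda s: -abs(10 - s)
print(shaped_reward(0.0, s=3, s_next=4, phi=phi))  # positive: moved closer to goal
```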
0309ec1f0e139cc10090c4fefa08a83a2644530a | |
42771aede47980ae8eeebac246c7a8b941d11414 | We present and evaluate methods for diversifying search results to improve personalized web search. A common personalization approach involves reranking the top N search results such that documents likely to be preferred by the user are presented higher. The usefulness of reranking is limited in part by the number and diversity of results considered. We propose three methods to increase the diversity of the top results and evaluate the effectiveness of these methods. |
22a8979b53315fad7f98781328cc0326b5147cca | An artificial neural network-based synthesis model is proposed for the design of single-feed circularly-polarized square microstrip antenna (CPSMA) with truncated corners. To obtain the training data sets, the resonant frequency and Q-factor of square microstrip antennas are calculated by empirical formulae. Then the size of the truncated corners and the operation frequency with the best axial ratio are obtained. Using the Levenberg-Marquardt (LM) algorithm, a network with three hidden layers is trained to achieve an accurate synthesis model. Finally, the model is validated by comparing its results with electromagnetic simulation and measurement. It is extremely useful to antenna engineers for directly obtaining patch physical dimensions of the single-feed CPSMA with truncated corners. |
93f962a46b24030bf4486a77b282f567529e7782 | This paper presents a compact and power-efficient 5 GHz in-band full-duplex (FD) design in ANSYS HFSS using the 180-degree ring hybrid coupler. The proposed design achieves an excellent isolation of 57 dB by taking advantage of destructive interference between two radiating antennas attached to the coupler, leading to a large reduction in self-interference. The design is passive and hence avoids the additional power requirement of adaptive channel estimation. In addition, it has a very workable physical size for the desired frequency of operation. The proposed FD design is therefore compact and power-efficient, and can be used in mobile devices, such as cell phones or tablet/phablet devices, for more flexible use of scarce RF resources. |
023cc7f9f3544436553df9548a7d0575bb309c2e | This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute. |
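A usage sketch with the released fastText Python bindings; the training and validation file names are placeholders, and each line of the supervised input is assumed to follow fastText's `__label__<class> <text>` convention.

```python
import fasttext

# Each line of train.txt: "__label__<class> <text>" (fastText's label format)
model = fasttext.train_supervised(
    input="train.txt", lr=0.1, epoch=5, wordNgrams=2)

print(model.predict("this movie was surprisingly good"))
print(model.test("valid.txt"))  # returns (N, precision@1, recall@1)
```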
d80e7da055f9c25e29f732d0a829daf172eb1fa0 | This article summarizes an extensive literature review addressing the question, How can we spread and sustain innovations in health service delivery and organization? It considers both content (defining and measuring the diffusion of innovation in organizations) and process (reviewing the literature in a systematic and reproducible way). This article discusses (1) a parsimonious and evidence-based model for considering the diffusion of innovations in health service organizations, (2) clear knowledge gaps where further research should be focused, and (3) a robust and transferable methodology for systematically reviewing health service policy and management. Both the model and the method should be tested more widely in a range of contexts. |
3343d1d78f2a14045b52b71428efaf43073d616d | OBJECTIVE
Rising obesity rates have been linked to the consumption of energy-dense diets. We examined whether dietary energy density was associated with obesity and related disorders including insulin resistance and the metabolic syndrome.
RESEARCH DESIGN AND METHODS
We conducted a cross-sectional study using nationally representative data of U.S. adults ≥20 years of age from the 1999-2002 National Health and Nutrition Examination Survey (n = 9,688). Dietary energy density was calculated based on foods only. We used a series of multivariate linear regression models to determine the independent association between dietary energy density and obesity measures (BMI [in kilograms per meters squared] and waist circumference [in centimeters]), glycemia, and insulinemia. We used multivariate Poisson regression models to determine the independent association between dietary energy density and the metabolic syndrome as defined by the National Cholesterol Education Program (Adult Treatment Panel III).
RESULTS
Dietary energy density was independently and significantly associated with higher BMI in women (beta = 0.44 [95% CI 0.14-0.73]) and trended toward a significant association in men (beta = 0.37 [-0.007 to 0.74], P = 0.054). Dietary energy density was associated with higher waist circumference in women (beta = 1.11 [0.42-1.80]) and men (beta = 1.33 [0.46-2.19]). Dietary energy density was also independently associated with elevated fasting insulin (beta = 0.65 [0.18-1.12]) and the metabolic syndrome (prevalence ratio = 1.10 [95% CI 1.03-1.17]).
CONCLUSIONS
Dietary energy density is an independent predictor of obesity, elevated fasting insulin levels, and the metabolic syndrome in U.S. adults. Intervention studies to reduce dietary energy density are warranted. |
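The exposure variable, under the usual kcal-per-gram convention that the methods imply (energy from foods only, divided by the weight of those foods):

```latex
\mathrm{ED}\ (\text{kcal/g}) \;=\;
\frac{\text{total energy from foods (kcal)}}{\text{total weight of foods (g)}}
```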
3e597e492c1ed6e7bbd539d5f2e5a6586c6074cd | Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information. In this paper, we improve this model by explicitly incorporating source-side syntactic trees. More specifically, we propose (1) a bidirectional tree encoder which learns both sequential and tree structured representations; (2) a tree-coverage model that lets the attention depend on the source-side syntax. Experiments on Chinese-English translation demonstrate that our proposed models outperform the sequential attentional model as well as a stronger baseline with a bottom-up tree encoder and word coverage. |
4e88de2930a4435f737c3996287a90ff87b95c59 | Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank). |
6411da05a0e6f3e38bcac0ce57c28038ff08081c | Semantic representations have long been argued as potentially useful for enforcing meaning preservation and improving generalization performance of machine translation methods. In this work, we are the first to incorporate information about predicate-argument structure of source sentences (namely, semantic-role representations) into neural machine translation. We use Graph Convolutional Networks (GCNs) to inject a semantic bias into sentence encoders and achieve improvements in BLEU scores over the linguistically-agnostic and syntax-aware versions on the English–German language pair. |
9f291ce2d0fc1d76206139a40a859283674d8f65 | Neural Machine Translation (NMT) based on the encoder-decoder architecture has recently achieved state-of-the-art performance. Researchers have proven that extending word level attention to phrase level attention by incorporating source-side phrase structure can enhance the attention model and achieve promising improvement. However, word dependencies that can be crucial to correctly understand a source sentence are not always in a consecutive fashion (i.e. phrase structure); sometimes they can span long distances. Phrase structures are not the best way to explicitly model long distance dependencies. In this paper we propose a simple but effective method to incorporate source-side long distance dependencies into NMT. Our method based on dependency trees enriches each source state with global dependency structures, which can better capture the inherent syntactic structure of source sentences. Experiments on Chinese-English and English-Japanese translation tasks show that our proposed method outperforms state-of-the-art SMT and NMT baselines. |
d12c173ea92fc33dc276d1da90dc72a660f7ea12 | The main objective of Linked Data is linking and integration, and a major step for evaluating whether this target has been reached is to find all the connections among the Linked Open Data (LOD) Cloud datasets. Connectivity among two or more datasets can be achieved through common Entities, Triples, Literals, and Schema Elements, while more connections can occur due to equivalence relationships between URIs, such as owl:sameAs, owl:equivalentProperty and owl:equivalentClass, since many publishers use such equivalence relationships for declaring that their URIs are equivalent with URIs of other datasets. However, there are no available connectivity measurements (and indexes) involving more than two datasets that cover the whole content (e.g., entities, schema, triples) or “slices” (e.g., triples for a specific entity) of datasets, although they can be of primary importance for several real world tasks, such as Information Enrichment, Dataset Discovery and others. Generally, it is not an easy task to find the connections among the datasets, since there exists a large number of LOD datasets and the transitive and symmetric closure of equivalence relationships must be computed so that connections are not missed. For this reason, we introduce scalable methods and algorithms, (a) for performing the computation of transitive and symmetric closure for equivalence relationships (since they can produce more connections between the datasets); (b) for constructing dedicated global semantics-aware indexes that cover the whole content of datasets; and (c) for measuring the connectivity among two or more datasets. Finally, we evaluate the speedup of the proposed approach, while we report comparative results for over two billion triples. |
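A minimal sketch of step (a): taking the symmetric and transitive closure of owl:sameAs-style pairs via union-find, so that URIs across datasets collapse into equivalence classes before connectivity is measured. The URIs are invented; a scalable version would partition the input, but the core operation is the same.

```python
def equivalence_classes(pairs):
    """Union-find closure of symmetric/transitive equivalences (e.g. owl:sameAs)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two classes

    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

pairs = [("d1:Athens", "d2:Athina"), ("d2:Athina", "d3:Athenai")]
print(equivalence_classes(pairs))  # one class containing all three URIs
```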
d6020bdf3b03f209174cbc8fb4ecbe6208eb9ff1 | We begin with a retrospective reflection on the first author’s research career, which in large part is devoted to research about the implications of information technology (IT) for organizational change. Although IT has long been associated with organizational change, our historical review of the treatment of technology in organization theory demonstrates how easily the material aspects of organizations can disappear into the backwaters of theory development. This is an unfortunate result since the material characteristics of IT initiatives distinguish them from other organizational change initiatives. Our aim is to restore materiality to studies of IT impact by tracing the reasons for its disappearance and by offering options in which IT’s materiality plays a more central theoretical role. We adopt a socio-technical perspective that differs from a strict sociomaterial perspective insofar as we wish to preserve the ontological distinction between material artifacts and their social context of use. Our analysis proceeds using the concept of “affordance” as a relational concept consistent with the socio-technical perspective. We then propose extensions of organizational routines theory that incorporate material artifacts in the generative system known as routines. These contributions exemplify two of the many challenges inherent in adopting materiality as a new research focus in the study of IT’s organizational impacts. |
7039b7c97bd0e59693f2dc4ed7b40e8790bf2746 | We describe a neural network model that jointly learns distributed representations of texts and knowledge base (KB) entities. Given a text in the KB, we train our proposed model to predict entities that are relevant to the text. Our model is designed to be generic with the ability to address various NLP tasks with ease. We train the model using a large corpus of texts and their entity annotations extracted from Wikipedia. We evaluated the model on three important NLP tasks (i.e., sentence textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. As a result, we achieved state-of-the-art results on all three of these tasks. Our code and trained models are publicly available for further academic research. |
42f75b297aed474599c8e598dd211a1999804138 | We describe AutoClass, an approach to unsupervised classification based upon the classical mixture model, supplemented by a Bayesian method for determining the optimal classes. We include a moderately detailed exposition of the mathematics behind the AutoClass system. We emphasize that no current unsupervised classification system can produce maximally useful results when operated alone. It is the interaction between domain experts and the machine searching over the model space that generates new knowledge. Both bring unique information and abilities to the database analysis task, and each enhances the other's effectiveness. We illustrate this point with several applications of AutoClass to complex real world databases, and describe the resulting successes and failures. 6.1 Introduction This chapter is a summary of our experience in using an automatic classification program (AutoClass) to extract useful information from databases. It also gives an outline of the principles that underlie automatic classification in general, and AutoClass in particular. We are concerned with the problem of automatic discovery of classes in data (sometimes called clustering, or unsupervised learning), rather than the generation of class descriptions from labeled examples (called supervised learning). In some sense, automatic classification aims at discovering the “natural” classes in the data. These classes reflect basic causal mechanisms that make some cases look more like each other than the rest of the cases. The causal mechanisms may be as boring as sample biases in the data, or could reflect some major new discovery in the domain. Sometimes, these classes were well known to experts in the field, but unknown to AutoClass, and other times |
32aea4c9fb9eb7cf2b6869efa83cf73420374628 | |
091778f43d947affb69dbccc2c3251abfa852ad2 | A semantic file system is an information storage system that provides flexible associative access to the system's contents by automatically extracting attributes from files with file type specific transducers. Associative access is provided by a conservative extension to existing tree-structured file system protocols, and by protocols that are designed specifically for content based access. Compatibility with existing file system protocols is provided by introducing the concept of a virtual directory. Virtual directory names are interpreted as queries, and thus provide flexible associative access to files and directories in a manner compatible with existing software. Rapid attribute-based access to file system contents is implemented by automatic extraction and indexing of key properties of file system objects. The automatic indexing of files and directories is called "semantic" because user programmable transducers use information about the semantics of updated file system objects to extract the properties for indexing. Experimental results from a semantic file system implementation support the thesis that semantic file systems present a more effective storage abstraction than do traditional tree structured file systems for information sharing and command level programming. |
096db7e8d2b209fb6dca9c7495ac84405c40e507 | In the paper we present new Alternating Least Squares (ALS) algorithms for Nonnegative Matrix Factorization (NMF) and their extensions to 3D Nonnegative Tensor Factorization (NTF) that are robust in the presence of noise and have many potential applications, including multi-way Blind Source Separation (BSS), multi-sensory or multi-dimensional data analysis, and nonnegative neural sparse coding. We propose to use local cost functions whose simultaneous or sequential (one by one) minimization leads to a very simple ALS algorithm which works under some sparsity constraints both for an under-determined (a system which has fewer sensors than sources) and an over-determined model. The extensive experimental results confirm the validity and high performance of the developed algorithms, especially with the use of multi-layer hierarchical NMF. Extension of the proposed algorithm to multidimensional Sparse Component Analysis and Smooth Component Analysis is also proposed. |
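A minimal sketch of the ALS core for V ≈ WH with nonnegativity imposed by projection after each least-squares solve; the local cost functions, sparsity constraints, and multi-layer hierarchy the paper develops are omitted.

```python
import numpy as np

def nmf_als(V, rank, iters=200, eps=1e-9):
    """Alternating least squares for V ~= W @ H with W, H >= 0 via clipping."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        # Solve for H given W, then for W given H; project onto the nonnegative orthant
        H = np.linalg.lstsq(W, V, rcond=None)[0].clip(min=eps)
        W = np.linalg.lstsq(H.T, V.T, rcond=None)[0].T.clip(min=eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_als(V, rank=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```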
339888b357e780c6e80fc135ec48a14c3b524f7d | A Bloom filter is a simple space-efficient randomized data structure for representing a set in order to support membership queries. Bloom filters allow false positives but the space savings often outweigh this drawback when the probability of an error is controlled. Bloom filters have been used in database applications since the 1970s, but only in recent years have they become popular in the networking literature. The aim of this paper is to survey the ways in which Bloom filters have been used and modified in a variety of network problems, with the aim of providing a unified mathematical and practical framework for understanding them and stimulating their use in future applications. |
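A compact Bloom filter sketch using the standard sizing from the analysis this survey unifies: m ≈ −n ln p / (ln 2)² bits and k ≈ (m/n) ln 2 hash functions for n items at false-positive rate p. Deriving the k hashes from salted SHA-256 is an implementation convenience assumed here, not something the survey prescribes.

```python
import hashlib
import math

class BloomFilter:
    """Space-efficient set membership: false positives possible, no false negatives."""
    def __init__(self, n_items, fp_rate=0.01):
        self.m = math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        for i in range(self.k):  # k independent-ish hashes via salting
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

bf = BloomFilter(n_items=1000)
bf.add("10.0.0.1")
print("10.0.0.1" in bf, "10.0.0.2" in bf)  # True, (almost surely) False
```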
dc3e8bea9ef0c9a2df20e4d11860203eaf795b6a | Ground reaction forces generated during normal walking have recently been used to identify and/or classify individuals based upon the pattern of the forces observed over time. One feature that can be extracted from vertical ground reaction forces is body mass. This single feature has identifying power comparable to other studies that use multiple and more complex features. This study contributes to understanding the role of body mass in identification by (1) quantifying the accuracy and precision with which body mass can be obtained using vertical ground reaction forces, (2) quantifying the distribution of body mass across a population larger than has previously been studied in relation to gait analysis, and (3) quantifying the expected identification capabilities of systems using body mass as a weak biometric. Our results show that body mass can be measured in a fraction of a second with less than a 1 kilogram standard deviation of error. |
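In the simplest reading, the extraction this paper studies reduces to Newton's second law: averaged over whole gait cycles, the vertical ground reaction force must equal body weight, so mass ≈ mean(Fz)/g. The synthetic double-bump signal below is illustrative; real precision hinges on windowing and sensor calibration, which is what the paper quantifies.

```python
import numpy as np

G = 9.81  # m/s^2

def body_mass_from_vgrf(fz, fs, seconds=1.0):
    """Estimate mass (kg) as mean vertical GRF / g over ~whole gait cycles."""
    n = int(fs * seconds)
    return float(np.mean(fz[:n])) / G

# Synthetic 75 kg walker: body weight plus a double-bump loading pattern
fs, t = 1000, np.linspace(0, 1, 1000)
fz = 75 * G * (1 + 0.25 * np.sin(2 * np.pi * 2 * t))
print(body_mass_from_vgrf(fz, fs))  # ~75
```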
1b2f2bb90fb08d0e02eabb152120dbf1d6e5837e | We present a family of neural-network–inspired models for computing continuous word representations, specifically designed to exploit both monolingual and multilingual text. This framework allows us to perform unsupervised training of embeddings that exhibit higher accuracy on syntactic and semantic compositionality, as well as multilingual semantic similarity, compared to previous models trained in an unsupervised fashion. We also show that such multilingual embeddings, optimized for semantic similarity, can improve the performance of statistical machine translation with respect to how it handles words not present in the parallel data. |
396945dabf79f4a8bf36ca408a137d6e961306e7 | |
6010c2d8eb5b6c5da3463d0744203060bdcc07a7 | Salmon lice, Lepeophtheirus salmonis (Krøyer, 1837), are fish ectoparasites causing significant economic damage in the mariculture of Atlantic salmon, Salmo salar Linnaeus, 1758. The control of L. salmonis at fish farms relies to a large extent on treatment with anti-parasitic drugs. A problem related to chemical control is the potential for development of resistance, which in L. salmonis is documented for a number of drug classes including organophosphates, pyrethroids and avermectins. The ATP-binding cassette (ABC) gene superfamily is found in all biota and includes a range of drug efflux transporters that can confer drug resistance to cancers and pathogens. Furthermore, some ABC transporters are recognised to be involved in conferral of insecticide resistance. While a number of studies have investigated ABC transporters in L. salmonis, no systematic analysis of the ABC gene family exists for this species. This study presents a genome-wide survey of ABC genes in L. salmonis for which, ABC superfamily members were identified through homology searching of the L. salmonis genome. In addition, ABC proteins were identified in a reference transcriptome of the parasite generated by high-throughput RNA sequencing (RNA-seq) of a multi-stage RNA library. Searches of both genome and transcriptome allowed the identification of a total of 33 genes / transcripts coding for ABC proteins, of which 3 were represented only in the genome and 4 only in the transcriptome. Eighteen sequences were assigned to ABC subfamilies known to contain drug transporters, i.e. subfamilies B (4 sequences), C (11) and G (2). The results suggest that the ABC gene family of L. salmonis possesses fewer members than recorded for other arthropods. The present survey of the L. salmonis ABC gene superfamily will provide the basis for further research into potential roles of ABC transporters in the toxicity of salmon delousing agents and as potential mechanisms of drug resistance. |
4a3235a542f92929378a11f2df2e942fe5674c0e | This paper introduces the Unsupervised Neural Net based Intrusion Detector (UNNID) system, which detects network-based intrusions and attacks using unsupervised neural networks. The system has facilities for training, testing, and tuning of unsupervised nets to be used in intrusion detection. Using the system, we tested two types of unsupervised Adaptive Resonance Theory (ART) nets (ART-1 and ART-2). Based on the results, such nets can efficiently classify network traffic into normal and intrusive. The system uses a hybrid of misuse and anomaly detection approaches, so it is capable of detecting known attack types as well as new attack types as anomalies. |
10a9abb4c78f0be5cc85847f248d3e8277b3c810 | The Conference on Computational Natural Language Learning features a shared task, in which participants train and test their learning systems on the same data sets. In 2007, as in 2006, the shared task has been devoted to dependency parsing, this year with both a multilingual track and a domain adaptation track. In this paper, we define the tasks of the different tracks and describe how the data sets were created from existing treebanks for ten languages. In addition, we characterize the different approaches of the participating systems, report the test results, and provide a first analysis of these results. |
14626b05a5ec7ec2addc512f0dfa8db60d817c1b | In this paper we explore acceleration techniques for large scale nonconvex optimization problems with a special focus on deep neural networks. The extrapolation scheme is a classical approach for accelerating stochastic gradient descent for convex optimization, but typically it does not work well for nonconvex optimization. Alternatively, we propose an interpolation scheme to accelerate nonconvex optimization and call the method Interpolatron. We explain the motivation behind Interpolatron and conduct a thorough empirical analysis. Empirical results on DNNs of great depths (e.g., 98-layer ResNet and 200-layer ResNet) on CIFAR-10 and ImageNet show that Interpolatron can converge much faster than state-of-the-art methods such as SGD with momentum and Adam. Furthermore, Anderson’s acceleration, in which mixing coefficients are computed by least-squares estimation, can also be used to improve the performance. Both Interpolatron and Anderson’s acceleration are easy to implement and tune. We also show that Interpolatron has a linear convergence rate under certain regularity assumptions. |
55baef0d54403387f5cf28e2ae1ec850355cf60a | Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal, Beygelzimer, Dudik, Langford, and Wallach [ICML 2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice. |
6be461dd5869d00fc09975a8f8e31eb5f86be402 | Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a time scale on the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of Adaboost and SVMs enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6], measuring generalization performance to new subjects in a 7-way forced choice. Most interestingly, the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors and assessment of human-robot interaction. |
15f932d189b13786ca54b1dc684902301d34ef65 | A highly efficient LLCC-type resonant dc-dc converter is discussed in this paper for a low-power photovoltaic application. Emphasis is put on the different design mechanisms of the resonant tank. At the same time, soft switching of both the inverter and the rectifier bridge is considered. Concerning the design rules, a new challenge is solved in designing an LLCC converter with voltage-source output. Instead of the resonant elements themselves, ratios of them, e.g. the inductance ratio Ls/Lp, are considered as design parameters first. Furthermore, the derived design rule for the transformer-inductor device fits directly into the overall LLCC design. Due to the nature of transformers, the inductance ratio Ls/Lp is a function of geometry alone, so this design parameter is determined directly by the geometry. Experimental results demonstrate the high efficiency. |
f13902eb6429629179419c95234ddbd555eb2bb6 | |
07d138a54c441d6ae9bff073025f8f5eeaac4da4 | Big deep neural network (DNN) models trained on large amounts of data have recently achieved the best accuracy on hard tasks, such as image and speech recognition. Training these DNNs using a cluster of commodity machines is a promising approach since training is time consuming and compute-intensive. To enable training of extremely large DNNs, models are partitioned across machines. To expedite training on very large data sets, multiple model replicas are trained in parallel on different subsets of the training examples with a global parameter server maintaining shared weights across these replicas. The correct choice for model and data partitioning and overall system provisioning is highly dependent on the DNN and distributed system hardware characteristics. These decisions currently require significant domain expertise and time consuming empirical state space exploration.
This paper develops performance models that quantify the impact of these partitioning and provisioning decisions on overall distributed system performance and scalability. Also, we use these performance models to build a scalability optimizer that efficiently determines the optimal system configuration that minimizes DNN training time. We evaluate our performance models and scalability optimizer using a state-of-the-art distributed DNN training framework on two benchmark applications. The results show our performance models estimate DNN training time with high estimation accuracy and our scalability optimizer correctly chooses the best configurations, minimizing the training time of distributed DNNs. |
eee686b822950a55f31d4c9c33d02c1942424785 | Abstract— This paper describes a 2 x 2 triangular microstrip patch antenna array using a T-junction with quarter-wave transformer. By adjusting the spacing between patch elements and the feed position, the desired bandwidth can be obtained, and by using an array, directivity is enhanced. The requirements of large bandwidth, high directivity, and minimum size lead to the design of a 2 x 2 triangular microstrip patch antenna array fed by a T-junction network operating at 5.5 GHz. The antenna was designed on an FR4 substrate with a dielectric constant (εr) of 4.4, a loss tangent of 0.02 and a thickness of 1.6 mm. Simulated results showed that the designed antenna has a directivity of 12.91 dB and a bandwidth of 173 MHz with a VSWR of 1.07 using the T-junction feeding network. The proposed 2 x 2 triangular array has the benefits of light weight, simplicity of fabrication, single-layer structure, and high directivity. Keywords— Bandwidth, Corporate feeding, Return Loss, T-junction, VSWR. |
c707938422b60bf827ec161872641468ec1ffe00 | We establish geometric and topological properties of the space of value functions in finite state-action Markov decision processes. Our main contribution is the characterization of the nature of its shape: a general polytope (Aigner et al., 2010). To demonstrate this result, we exhibit several properties of the structural relationship between policies and value functions including the line theorem, which shows that the value functions of policies constrained on all but one state describe a line segment. Finally, we use this novel perspective to introduce visualizations to enhance the understanding of the dynamics of reinforcement learning algorithms. |
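A numeric illustration (an assumed toy setup, not the paper's code): sampling random policies in a 2-state MDP and solving V^π = (I − γP^π)⁻¹ r^π; scatter-plotting the resulting (V(s₀), V(s₁)) pairs traces out the polytope the paper characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 2, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a next-state distribution
R = rng.random((S, A))

def value(policy):
    """V^pi = (I - gamma P_pi)^(-1) r_pi for a stochastic policy[s, a]."""
    P_pi = np.einsum("sa,sat->st", policy, P)
    r_pi = np.einsum("sa,sa->s", policy, R)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Sample many random policies; their V vectors fill a polytope in R^2
points = np.array([value(rng.dirichlet(np.ones(A), size=S)) for _ in range(1000)])
print(points.min(axis=0), points.max(axis=0))  # bounding box of the sampled polytope
```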
86854374c13516a8ad0dc28ffd9cd4be2bca9bfc | In recent years there has been a growing interest in problems, where either the observed data or hidden state variables are confined to a known Riemannian manifold. In sequential data analysis this interest has also been growing, but rather crude algorithms have been applied: either Monte Carlo filters or brute-force discretisations. These approaches scale poorly and clearly show a missing gap: no generic analogues to Kalman filters are currently available in non-Euclidean domains. In this paper, we remedy this issue by first generalising the unscented transform and then the unscented Kalman filter to Riemannian manifolds. As the Kalman filter can be viewed as an optimisation algorithm akin to the Gauss-Newton method, our algorithm also provides a general-purpose optimisation framework on manifolds. We illustrate the suggested method on synthetic data to study robustness and convergence, on a region tracking problem using covariance features, an articulated tracking problem, a mean value optimisation and a pose optimisation problem. |
a075a513b2b1e8dbf9b5d1703a401e8084f9df9c | Uniplanar compact electromagnetic bandgap (UC-EBG) substrate has been proven to be an effective measure to reduce surface wave excitation in printed antenna geometries. This paper investigates the performance of a microstrip antenna phased array embedded in an UC-EBG substrate. The results show a reduction in mutual coupling between elements and provide a possible solution to the "blind spots" problem in phased array applications with printed elements. A novel and efficient UC-EBG array configuration is proposed. A probe fed patch antenna phased array of 7/spl times/5 elements on a high dielectric constant substrate was designed, built and tested. Simulation and measurement results show improvement in the active return loss and active pattern of the array center element. The tradeoffs used to obtain optimum performance are discussed. |
16a0fde5a8ab5591a9b2985f60a04fdf50a18dc4 | Gait has been considered as an efficient biometric trait for user authentication. Although there are some studies that address the task of securing gait templates/models in gait-based authentication systems, they do not take into account the low discriminability and high variation of gait data, which significantly affects the security and practicality of the proposed systems. In this paper, we focus on addressing the aforementioned deficiencies in an inertial-sensor based gait cryptosystem. Specifically, we leverage Linear Discriminant Analysis to enhance the discrimination of gait templates, and Gray code quantization to extract a highly discriminative and stable binary template. The experimental results on 38 different users showed that our proposed method significantly improves the performance and security of the gait cryptosystem. In particular, we achieved a False Acceptance Rate of 6×10⁻⁵% (i.e., 1 failure in 16,983 trials) and a False Rejection Rate of 9.2% with 148-bit security. |
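A small sketch of the Gray-code quantization step, assuming 4-bit quantization over an illustrative value range (the paper's actual level count and range may differ); adjacent Gray codes differ in a single bit, which is what makes the extracted binary template stable under small gait variations.

```python
# Sketch: Gray-code quantization of a projected gait feature vector.
# Level count and value range are illustrative assumptions.
import numpy as np

def to_gray(n: int) -> int:
    """Binary-reflected Gray code: adjacent levels differ in exactly one bit."""
    return n ^ (n >> 1)

def quantize_gray(x, lo=-3.0, hi=3.0, bits=4):
    levels = 2 ** bits
    idx = np.clip(((x - lo) / (hi - lo) * levels).astype(int), 0, levels - 1)
    codes = [format(to_gray(int(i)), f"0{bits}b") for i in idx]
    return "".join(codes)  # concatenated binary template for the cryptosystem

feat = np.array([-0.42, 1.87, 0.03, -2.5])  # e.g. LDA-projected features
print(quantize_gray(feat))
```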
d7fd575c7fae05e055e47d898a5d9d2766f742b9 | |
84ade3cb5b57624baee89d9e617bb5847ee07375 | |
9e5158222c911bec96d4f533cd0d7a1a0cff1731 | Next generation RF sensor modules for multifunction active electronically steered antenna (AESA) systems will need a combination of different operating modes, such as radar, electronic warfare (EW) functionalities and communications/datalinks within the same antenna frontend. They typically operate in C-Band, X-Band and Ku-Band and imply a bandwidth requirement of more than 10 GHz. For the realisation of modern active electronically steered antennas, the transmit/receive (T/R) modules have to match strict geometry demands. A major challenge for these future multifunction RF sensor modules is dictated by the half-wavelength antenna grid spacing, that limits the physical channel width to < 12 mm or even less, depending on the highest frequency of operation with accordant beam pointing requirements. A promising solution to overcome these geometry demands is the reduction of the total monolithic microwave integrated circuit (MMIC) chip area, achieved by integrating individual RF functionalities, which are commonly achieved through individual integrated circuits (ICs), into new multifunctional (MFC) MMICs. Various concepts, some of them already implemented, towards next generation RF sensor modules will be discussed and explained in this work. |
77a9473256f6841d40cb9198feb5b91dccf9ffd1 | This paper presents a dimmable charge-pump driver to power light-emitting diodes (LEDs) with power factor correction (PFC) and Zero Voltage Switching (ZVS). The proposed LED driver does not utilize electrolytic capacitors, providing a long useful lifetime, and it can stabilize the output current in open-loop control without needing current sensors, which reduces the cost. The output power is proportional to the switching frequency, which allows dimming of the LEDs. A 22 W prototype was implemented and experimental results are discussed. The prototype presented a power factor of 0.996 and an efficiency of 89.5%. The driver output power was reduced by more than 40% by varying the switching frequency from 53 kHz to 30 kHz, and the converter continued to operate in ZVS. |
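A quick consistency check of the frequency-proportional dimming claim, assuming strict proportionality between output power and switching frequency:

```python
# If P ~ f (the stated relation), the reported numbers should be consistent.
P_nom, f_nom, f_min = 22.0, 53e3, 30e3
P_min = P_nom * f_min / f_nom
print(f"P at 30 kHz: {P_min:.1f} W, reduction: {100 * (1 - P_min / P_nom):.0f}%")
# -> about 12.5 W, a ~43% reduction, consistent with the reported "more than 40%".
```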
b5fe4731ff6a7a7f1ad8232186e84b1f944162e0 | Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. This is motivated by the facts that a) multi-modal data is widespread, e.g., the web images on Flickr are associated with tags, and b) hashing is an effective technique towards large-scale high-dimensional data processing, which is exactly the situation of cross-media retrieval. Inspired by recent advances in deep learning, we propose a cross-media hashing approach based on multi-modal neural networks. By restricting in the learning objective a) the hash codes for relevant cross-media data being similar, and b) the hash codes being discriminative for predicting the class labels, the learned Hamming space is expected to well capture the cross-media semantic relationships and to be semantically discriminative. The experiments on two real-world data sets show that our approach achieves superior cross-media retrieval performance compared with the state-of-the-art methods. |
9814dd00440b08caf0df96988edb4c56cfcf7bd1 | Active SLAM poses the challenge for an autonomous robot to plan efficient paths simultaneously with the SLAM process. The uncertainties of the robot, map and sensor measurements, and the dynamic and motion constraints need to be considered in the planning process. In this paper, the active SLAM problem is formulated as an optimal trajectory planning problem. A novel technique is introduced that utilises an attractor combined with local planning strategies such as model predictive control (a.k.a. receding horizon) to solve this problem. An attractor provides high-level task intentions and incorporates global information about the environment for the local planner, thereby eliminating the need for costly global planning with longer horizons. It is demonstrated that trajectory planning with an attractor results in improved performance over systems that have local planning alone. |
bc32313c5b10212233007ebb38e214d713db99f9 | Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. |
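A minimal sketch of the preprocessing chain described above (band-pass filtering, optional mains notch, normalization); the cutoff frequencies, notch quality factor, and sampling rate are illustrative assumptions, not the Challenge entry's exact settings.

```python
# Sketch of the abdominal-ECG preprocessing chain: band-pass, optional
# 50/60 Hz notch, then normalization. Cutoffs, Q and fs are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 1000.0  # Hz, assumed sampling rate

def preprocess(ecg, mains=50.0, apply_notch=True):
    # Band-pass to remove baseline wander and high-frequency noise.
    b, a = butter(4, [3.0, 100.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, ecg)
    if apply_notch:  # remove power-line interference if present
        bn, an = iirnotch(mains, Q=30.0, fs=fs)
        x = filtfilt(bn, an, x)
    return (x - x.mean()) / (x.std() + 1e-12)  # normalization step

sig = np.random.randn(int(10 * fs))  # placeholder for a real abdominal channel
print(preprocess(sig).std())
```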
09f13c590f19dce53dfd8530f8cbe8044cce33ed | In recent years, many user-interface devices have appeared for managing a variety of physical interactions. The Microsoft Kinect camera is a revolutionary and useful depth camera giving a new user experience of interactive gaming on the Xbox platform through gesture or motion detection. In this paper we present an approach for controlling the AR.Drone quadrotor using the Microsoft Kinect sensor. |
ca78c8c4dbe4c92ba90c8f6e1399b78ced3cf997 | In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention. On five translation and two morphological inflection tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms. |
abdb694ab4b1cb4f54f07ed16a657765ce8c47f5 | A review and meta-analysis was performed of seventy-five articles concerned with innovation characteristics and their relationship to innovation adoption and implementation. One part of the analysis consisted of constructing a methodological profile of the existing studies, and contrasting this with a hypothetical optimal approach. A second part of the study employed meta-analytic statistical techniques to assess the generality and consistency of existing empirical findings. Three innovation characteristics (compatibility, relative advantage, and complexity) had the most consistent significant relationship to innovation adoption. Suggestions for future research in the area were made. |
518fd110bbf86df5259fb99126173d626a2ff744 | We consider the problem of learning preferences over trajectories for mobile manipulators such as personal robots and assembly line robots. The preferences we learn are more intricate than simple geometric constraints on trajectories; they are rather governed by the surrounding context of various objects and human interactions in the environment. We propose a coactive online learning framework for teaching preferences in contextually rich environments. The key novelty of our approach lies in the type of feedback expected from the user: the human user does not need to demonstrate optimal trajectories as training data, but merely needs to iteratively provide trajectories that slightly improve over the trajectory currently proposed by the system. We argue that this coactive preference feedback can be more easily elicited than demonstrations of optimal trajectories. Nevertheless, theoretical regret bounds of our algorithm match the asymptotic rates of optimal trajectory algorithms. We implement our algorithm on two high-degree-of-freedom robots, PR2 and Baxter, and present three intuitive mechanisms for providing such incremental feedback. In our experimental evaluation we consider two context rich settings, household chores and grocery store checkout, and show that users are able to train the robot with just a few feedbacks (taking only a few minutes). |
9f927249d7b33b91ca23f8820e21b22a6951a644 | Enabling the high data rates of millimeter wave (mmWave) cellular systems requires deploying large antenna arrays at both the basestations and mobile users. Prior work on coverage and rate of mmWave cellular networks focused on the case when basestations and mobile beamforming vectors are predesigned for maximum beamforming gains. Designing beamforming/combining vectors, though, requires training, which may impact both the SINR coverage and rate of mmWave systems. This paper evaluates mmWave cellular network performance while accounting for the beam training/association overhead. First, a model for the initial beam association is developed based on beam sweeping and downlink control pilot reuse. To incorporate the impact of beam training, a new metric, called the effective reliable rate, is defined and adopted. Using stochastic geometry, the effective rate of mmWave cellular networks is derived for two special cases: near-orthogonal pilots and full pilot reuse. Analytical and simulation results provide insights into the answers of two important questions. First, what is the impact of beam association on mmWave network performance? Then, should orthogonal or reused pilots be employed? The results show that unless the employed beams are very wide, initial beam training with full pilot reuse is nearly as good as perfect beam alignment. |
6bd1f2782d6c8c3066d4e7d7e3afb995d79fa3dd | A semantic segmentation algorithm must assign a label to every pixel in an image. Recently, semantic segmentation of RGB imagery has advanced significantly due to deep learning. Because creating datasets for semantic segmentation is laborious, these datasets tend to be significantly smaller than object recognition datasets. This makes it difficult to directly train a deep neural network for semantic segmentation, because it will be prone to overfitting. To cope with this, deep learning models typically use convolutional neural networks pre-trained on large-scale image classification datasets, which are then fine-tuned for semantic segmentation. For non-RGB imagery, this is currently not possible because large-scale labeled non-RGB datasets do not exist. In this paper, we developed two deep neural networks for semantic segmentation of multispectral remote sensing imagery. Prior to training on the target dataset, we initialize the networks with large amounts of synthetic multispectral imagery. We show that this significantly improves results on real-world remote sensing imagery, and we establish a new state-of-the-art result on the challenging Hamlin Beach State Park Dataset. |
9e9b8832b9e727d5f7a61cedfa4bdf44e8969623 | An efficient optimization method called ‘Teaching–Learning-Based Optimization (TLBO)’ is proposed in this paper for finding the global solutions of large-scale non-linear optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population-based methods. |
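A minimal sketch of the two TLBO phases on the sphere benchmark; population size, iteration budget, and the per-iteration teaching factor are arbitrary choices for illustration.

```python
# Minimal TLBO sketch: teacher phase + learner phase on the sphere function.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x ** 2, axis=-1)          # benchmark objective (sphere)
pop = rng.uniform(-5, 5, size=(20, 10))

for _ in range(200):
    fit = f(pop)
    teacher = pop[np.argmin(fit)]
    # Teacher phase: move toward the teacher, away from the class mean.
    TF = rng.integers(1, 3)                    # teaching factor in {1, 2}
    new = pop + rng.random(pop.shape) * (teacher - TF * pop.mean(axis=0))
    improved = f(new) < fit
    pop[improved] = new[improved]              # greedy acceptance
    # Learner phase: each learner interacts with a random peer.
    fit = f(pop)
    j = rng.permutation(len(pop))
    towards = np.where((fit < fit[j])[:, None], pop - pop[j], pop[j] - pop)
    new = pop + rng.random(pop.shape) * towards
    improved = f(new) < fit
    pop[improved] = new[improved]

print(f(pop).min())  # should approach 0
```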
3ca6ab58ae015860098d800a9942af9df4d1e090 | Search-based graph queries, such as finding short paths and isomorphic subgraphs, are dominated by memory latency. If input graphs can be partitioned appropriately, large cluster-based computing platforms can run these queries. However, the lack of compute-bound processing at each vertex of the input graph and the constant need to retrieve neighbors implies low processor utilization. Furthermore, graph classes such as scale-free social networks lack the locality to make partitioning clearly effective. Massive multithreading is an alternative architectural paradigm, in which a large shared memory is combined with processors that have extra hardware to support many thread contexts. The processor speed is typically slower than normal, and there is no data cache. Rather than mitigating memory latency, multithreaded machines tolerate it. This paradigm is well aligned with the problem of graph search, as the high ratio of memory requests to computation can be tolerated via multithreading. In this paper, we introduce the multithreaded graph library (MTGL), generic graph query software for processing semantic graphs on multithreaded computers. This library currently runs on serial machines and the Cray MTA-2, but Sandia is developing a run-time system that will make it possible to run MTGL-based code on symmetric multiprocessors. We also introduce a multithreaded algorithm for connected components and a new heuristic for inexact subgraph isomorphism. We explore the performance of these and other basic graph algorithms on large scale-free graphs. We conclude with a performance comparison between the Cray MTA-2 and Blue Gene/Light for s-t connectivity. |
d4c65ee21bb8d64b8e4380f80ad856a1629b5949 | A waveguide divider with folded lateral arms is presented for separating dual orthogonal linear polarizations in broadband ortho-mode transducers. The structure is based on a well-known double symmetry junction, where the metallic pins have been eliminated and the lateral outputs have been folded to achieve a combined effect: matching for the vertical polarization and a very significant size reduction. In addition, since the path for the lateral branches has been reduced, the insertion losses for the different polarizations are balanced. The isolation between orthogonal polarizations is kept because of the double symmetry of the junction. From the mechanical point of view, the proposed junction allows simpler manufacture and assembly of the ortho-mode transducer parts, which has been shown with a Ku-band design covering the full Ku-band from 12.6 to 18.25 GHz. The experimental prototype has demonstrated a measured return loss better than 28 dB in the design band and insertion loss smaller than 0.15 dB for both polarizations. |
db3259ae9e7f18a319cc24229662da9bf400221a | |
10dae7fca6b65b61d155a622f0c6ca2bc3922251 | |
5021c5f6d94ffaf735ab941241ab21e0c491ffa1 | MSER features are redefined to improve their performance in matching and retrieval tasks. The proposed SIMSER features (i.e. scale-insensitive MSERs) are the extremal regions which are maximally stable not only under threshold changes (like MSERs) but, additionally, under image rescaling (smoothing). Theoretical advantages of such a modification are discussed. It is also preliminarily verified experimentally that the modification preserves the fundamental properties of MSERs, i.e. the average numbers of features, repeatability, and computational complexity (which is only multiplicatively increased by the number of scales used), while performance (measured by typical CBVIR metrics) can be significantly improved. In particular, results on benchmark datasets indicate significant increments in recall values, both for descriptor-based matching and word-based matching. In general, SIMSERs seem particularly suitable for use with large visual vocabularies, e.g. they can prospectively be applied to improve the quality of BoW pre-retrieval operations in large-scale databases. |
e23c9687ba0bf15940af76b7fa0e0c1af9d3156e | The consumer electronics industry is a $240 billion global industry with a small number of highly competitive global players. We describe many of the risks associated with any global supply chain in this industry. As illustration, we also list steps that Samsung Electronics and its subsidiary, Samsung Electronics UK, have taken to mitigate these risks. Our description of the risks and illustration of mitigation efforts provides the backdrop to identify areas of future research. |
2f52cbef51a6a8a2a74119ad821526f9e0b57b39 | The SAP HANA database is positioned as the core of the SAP HANA Appliance to support complex business analytical processes in combination with transactionally consistent operational workloads. Within this paper, we outline the basic characteristics of the SAP HANA database, emphasizing the distinctive features that differentiate the SAP HANA database from other classical relational database management systems. On the technical side, the SAP HANA database consists of multiple data processing engines with a distributed query processing environment to provide the full spectrum of data processing -- from classical relational data supporting both row- and column-oriented physical representations in a hybrid engine, to graph and text processing for semi- and unstructured data management within the same system. From a more application-oriented perspective, we outline the specific support provided by the SAP HANA database of multiple domain-specific languages with a built-in set of natively implemented business functions. SQL -- as the lingua franca for relational database systems -- can no longer be considered to meet all requirements of modern applications, which demand the tight interaction with the data management layer. Therefore, the SAP HANA database permits the exchange of application semantics with the underlying data management platform that can be exploited to increase query expressiveness and to reduce the number of individual application-to-database round trips. |
3a011bd31f1de749210b2b188ffb752d9858c6a6 | We consider extending decision support facilities toward large sophisticated networks, upon which multidimensional attributes are associated with network entities, thereby forming the so-called multidimensional networks. Data warehouses and OLAP (Online Analytical Processing) technology have proven to be effective tools for decision support on relational data. However, they are not well-equipped to handle the new yet important multidimensional networks. In this paper, we introduce Graph Cube, a new data warehousing model that supports OLAP queries effectively on large multidimensional networks. By taking account of both attribute aggregation and structure summarization of the networks, Graph Cube goes beyond the traditional data cube model involved solely with numeric value based group-by's, thus resulting in a more insightful and structure-enriched aggregate network within every possible multidimensional space. Besides traditional cuboid queries, a new class of OLAP queries, crossboid, is introduced that is uniquely useful in multidimensional networks and has not been studied before. We implement Graph Cube by combining special characteristics of multidimensional networks with the existing well-studied data cube techniques. We perform extensive experimental studies on a series of real world data sets and Graph Cube is shown to be a powerful and efficient tool for decision support on large multidimensional networks. |
4b573416043cf9cff42cbb7b753993c907a2be4a | Many traditional and new business applications work with inherently graph-structured data and therefore benefit from graph abstractions and operations provided in the data management layer. The property graph data model not only offers schema flexibility but also permits managing and processing data and metadata jointly. By having typical graph operations implemented directly in the database engine and exposing them both in the form of an intuitive programming interface and a declarative language, complex business application logic can be expressed more easily and executed very efficiently. In this paper we describe our ongoing work to extend the SAP HANA database with built-in graph data support. We see this as a next step on the way to provide an efficient and intuitive data management platform for modern business applications with SAP HANA. |
16af753e94919ca257957cee7ab6c1b30407bb91 | |
cc75568885ab99851cc0e0ea5679121606121e5d | Training and handling working dogs is a costly process and requires specialized skills and techniques. Less subjective and lower-cost training techniques would not only improve our partnership with these dogs but also enable us to benefit from their skills more efficiently. To facilitate this, we are developing a canine body-area-network (cBAN) to combine sensing technologies and computational modeling to provide handlers with a more accurate interpretation for dog training. As the first step of this, we used inertial measurement units (IMU) to remotely detect the behavioral activity of canines. Decision tree classifiers and Hidden Markov Models were used to detect static postures (sitting, standing, lying down, standing on two legs and eating off the ground) and dynamic activities (walking, climbing stairs and walking down a ramp) based on the heuristic features of the accelerometer and gyroscope data provided by the wireless sensing system deployed on a canine vest. Data was collected from 6 Labrador Retrievers and a Kai Ken. The analysis of IMU location and orientation helped to achieve high classification accuracies for static and dynamic activity recognition. |
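A sketch of the decision-tree half of this pipeline, assuming windowed heuristic features (mean, standard deviation, mean absolute difference) over 6-axis IMU data; the window length, feature set, and the placeholder random data are assumptions standing in for the labeled canine recordings.

```python
# Sketch: windowed heuristic features + a decision tree for posture/activity
# classification from IMU data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(imu, win=128):
    """imu: (T, 6) array of accel xyz + gyro xyz; returns per-window features."""
    feats = []
    for s in range(0, len(imu) - win + 1, win):
        w = imu[s:s + win]
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)

rng = np.random.default_rng(1)
X = window_features(rng.standard_normal((128 * 50, 6)))  # placeholder signals
y = rng.integers(0, 5, size=len(X))   # e.g. sit / stand / lie / eat / walk
clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(clf.score(X, y))
```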
694a40785f480cc0d65bd94a5e44f570aff5ea37 | Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are considerably difficult to learn in large-scale environments. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms—grid-based and topological—the approach presented here gains the best of both worlds: accuracy/consistency and efficiency. The paper gives results for autonomously operating a mobile robot equipped with sonar sensors in populated multi-room environments. |
a512385be058b1e2e1d8b418a097065707622ecd | The global burden of cancer continues to increase largely because of the aging and growth of the world population alongside an increasing adoption of cancer-causing behaviors, particularly smoking, in economically developing countries. Based on the GLOBOCAN 2008 estimates, about 12.7 million cancer cases and 7.6 million cancer deaths are estimated to have occurred in 2008; of these, 56% of the cases and 64% of the deaths occurred in the economically developing world. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% of the total cancer cases and 14% of the cancer deaths. Lung cancer is the leading cancer site in males, comprising 17% of the total new cancer cases and 23% of the total cancer deaths. Breast cancer is now also the leading cause of cancer death among females in economically developing countries, a shift from the previous decade during which the most common cause of cancer death was cervical cancer. Further, the mortality burden for lung cancer among females in developing countries is as high as the burden for cervical cancer, with each accounting for 11% of the total female cancer deaths. Although overall cancer incidence rates in the developing world are half those seen in the developed world in both sexes, the overall cancer mortality rates are generally similar. Cancer survival tends to be poorer in developing countries, most likely because of a combination of a late stage at diagnosis and limited access to timely and standard treatment. A substantial proportion of the worldwide burden of cancer could be prevented through the application of existing cancer control knowledge and by implementing programs for tobacco control, vaccination (for liver and cervical cancers), and early detection and treatment, as well as public health campaigns promoting physical activity and a healthier dietary intake. Clinicians, public health professionals, and policy makers can play an active role in accelerating the application of such interventions globally. |
37fa040ec0c4bc1b85f3ca2929445f3229ed7f72 | We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format. |
a1a1c4fb58a2bc056a056795609a2be307b6b9bf | Cloud storage has rapidly become a cornerstone of many IT infrastructures, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. Putting user data in the direct control of cloud service providers, however, raises security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. Furthermore, even if the cloud provider is trusted, users having access to outsourced files might be malicious and misbehave. These concerns are particularly serious in sensitive applications like personal health records and credit score systems. To tackle this problem, we present GORAM, a cryptographic system that protects the secrecy and integrity of outsourced data with respect to both an untrusted server and malicious clients, guarantees the anonymity and unlinkability of accesses to such data, and allows the data owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. In the process of designing an efficient construction, we developed two new, generally applicable cryptographic schemes, namely, batched zero-knowledge proofs of shuffle and an accountability technique based on chameleon signatures, which we consider of independent interest. We implemented GORAM in Amazon Elastic Compute Cloud (EC2) and ran a performance evaluation demonstrating the scalability and efficiency of our construction. |
32527d9fcbfb0c84daf715d7e9a375f647b33c2c | |
269ed5ba525519502123b58472e069d77c5bda14 | An interactive Question Answering (QA) system frequently encounters non-sentential (incomplete) questions. These non-sentential questions may not make sense to the system when a user asks them without the context of conversation. The system thus needs to take into account the conversation context to process the incomplete question. In this work, we present a recurrent neural network (RNN) based encoder decoder network that can generate a complete (intended) question, given an incomplete question and conversation context. RNN encoder decoder networks have been shown to work well when trained on a parallel corpus with millions of sentences, however it is extremely hard to obtain conversation data of this magnitude. We therefore propose to decompose the original problem into two separate simplified problems where each problem focuses on an abstraction. Specifically, we train a semantic sequence model to learn semantic patterns, and a syntactic sequence model to learn linguistic patterns. We further combine syntactic and semantic sequence models to generate an ensemble model. Our model achieves a BLEU score of 30.15 as compared to 18.54 using a standard RNN encoder decoder model. |
27099ec9ea719f8fd919fb69d66af677a424143b | Adaptive control of thought-rational (ACT-R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT-R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes. A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert. |
6fdbf20f50dfd6276d9b89e494f86fbcc7b0b9b7 | We have designed and tested a novel electronic tracking antenna array that is formed by 2 × 2 microstrip sub-arrays. Through time sequence phase weighting on each sub-array, the amplitude and phase on each sub-array can be recovered from the output of the resultant single channel. The amplitude and phase on each array can be used to produce the sum and difference radiation pattern by digital signal processing. In comparison with the monopulse system, the RF comparator is eliminated and the number of the receiver channels is reduced from 3 to 1. A proof-of-concept prototype was fabricated and tested. The measured results confirmed the validity and advantages of the proposed scheme. The procedure of channel correction is given. |
3701bdb05b6764b09a5735cdc3cb9c40736d9765 | We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among synchronous or asynchronous methods — on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems. |
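For intuition, here is the synchronous, single-worker special case of the block updates SAPALM parallelises: PALM-style proximal-gradient steps on a nonnegative matrix factorization, minimizing 0.5*||A - UV||_F^2 subject to U, V >= 0, with step sizes from the block Lipschitz constants. Problem sizes are arbitrary, and the paper's asynchronous, stochastic machinery is not shown.

```python
# PALM-style block proximal-gradient steps for nonnegative matrix factorization.
import numpy as np

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((60, 40)))
U = np.abs(rng.standard_normal((60, 5)))
V = np.abs(rng.standard_normal((5, 40)))

for _ in range(300):
    # Block U: gradient of 0.5*||A - UV||_F^2 w.r.t. U, step 1/L_U, then the
    # prox of the nonnegativity indicator (a projection onto U >= 0).
    G = (U @ V - A) @ V.T
    L_U = np.linalg.norm(V @ V.T, 2) + 1e-12
    U = np.maximum(U - G / L_U, 0.0)
    # Block V, analogously.
    G = U.T @ (U @ V - A)
    L_V = np.linalg.norm(U.T @ U, 2) + 1e-12
    V = np.maximum(V - G / L_V, 0.0)

print(np.linalg.norm(A - U @ V))  # residual decreases monotonically
```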
5cd28cdc4c82f788dee27cb73d7d9280cf9c7343 | This paper presents a method for recognizing aerial image categories based on matching graphlets (i.e., small connected subgraphs) extracted from aerial images. By constructing a Region Adjacency Graph (RAG) to encode the geometric property and the color distribution of each aerial image, we cast aerial image category recognition as RAG-to-RAG matching. Based on graph theory, RAG-to-RAG matching is conducted by matching all their respective graphlets. Towards an effective graphlet matching process, we develop a manifold embedding algorithm to transfer different-sized graphlets into equal-length feature vectors and further integrate these feature vectors into a kernel. This kernel is used to train an SVM [8] classifier for aerial image category recognition. Experimental results demonstrate our method outperforms several state-of-the-art object/scene recognition models. |
b2dac341df54e5f744d5b6562d725d254aae8e80 | This study introduces OpenHAR, a free Matlab toolbox to combine and unify publicly open data sets. It provides easy access to accelerometer signals of ten publicly open human activity data sets. Data sets are easy to access as OpenHAR provides all the data sets in the same format. In addition, units, measurement range and labels are unified, as well as body position IDs. Moreover, data sets with different sampling rates are unified using downsampling. What is more, data sets have been visually inspected to find visible errors, such as a sensor in the wrong orientation. OpenHAR improves re-usability of data sets by fixing these errors. Altogether OpenHAR contains over 65 million labeled data samples. This is equivalent to over 280 hours of data from 3D accelerometers. This includes data from 211 study subjects performing 17 daily human activities and wearing sensors in 14 different body positions. |
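A sketch of the sampling-rate unification step in Python (OpenHAR itself is a Matlab toolbox); the source and target rates here are assumptions for illustration.

```python
# Sketch: resample accelerometer blocks from a source rate to a common target
# rate via polyphase filtering.
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def unify_rate(sig, fs_in, fs_out=50):
    """sig: (T, 3) accelerometer block; returns the signal at fs_out Hz."""
    frac = Fraction(fs_out, fs_in).limit_denominator(1000)
    return resample_poly(sig, frac.numerator, frac.denominator, axis=0)

x100 = np.random.randn(1000, 3)   # e.g. a 10 s window recorded at 100 Hz
x50 = unify_rate(x100, 100)       # -> 500 samples at the common 50 Hz rate
print(x50.shape)
```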
7347b4601078bd52eec80d5de29f801890f82de3 | A coupled-Gysel broadband combiner/divider is proposed and demonstrated. The new concept relies on using a single coupled line segment in the design. Significant improvements in bandwidth are realized while maintaining low loss, ease of design, and flexibility. The coupled-Gysel is demonstrated with a 2.5 - 8 GHz (105% fractional bandwidth) divider with 0.1 dB divider loss, and a 3.4 - 10.2 GHz (100% fractional bandwidth) divider with 0.2 dB divider loss. |
a05d984443d62575c097ad65b747aae859a5f8b0 | The effects of video games on children's psychosocial development remain the focus of debate. At two timepoints, 1 year apart, 194 children (7.27-11.43 years old; male = 98) reported their gaming frequency, and their tendencies to play violent video games, and to game (a) cooperatively and (b) competitively; likewise, parents reported their children's psychosocial health. Gaming at time one was associated with increases in emotion problems. Violent gaming was not associated with psychosocial changes. Cooperative gaming was not associated with changes in prosocial behavior. Finally, competitive gaming was associated with decreases in prosocial behavior, but only among children who played video games with high frequency. Thus, gaming frequency was related to increases in internalizing but not externalizing, attention, or peer problems, violent gaming was not associated with increases in externalizing problems, and for children playing approximately 8 h or more per week, frequent competitive gaming may be a risk factor for decreasing prosocial behavior. We argue that replication is needed and that future research should better distinguish between different forms of gaming for more nuanced and generalizable insight. |
5a47e047d4d41b61204255e1b265d704b7f265f4 | The term big data has become ubiquitous. Owing to a shared origin between academia, industry and the media there is no single unified definition, and various stakeholders provide diverse and often contradictory definitions. The lack of a consistent definition introduces ambiguity and hampers discourse relating to big data. This short paper attempts to collate the various definitions which have gained some degree of traction and to furnish a clear and concise definition of an otherwise ambiguous term. |
7065e6b496af41bba16971246a02986f5e388860 | Managing and improving organizational capabilities is a significant and complex issue for many companies. To support management and enable improvement, performance assessments are commonly used. One way of assessing organizational capabilities is by means of maturity grids. While maturity grids may share a common structure, their content differs and very often they are developed anew. This paper presents both a reference point and guidance for developing maturity grids. This is achieved by reviewing 24 existing maturity grids and by suggesting a roadmap for their development. The review places particular emphasis on embedded assumptions about organizational change in the formulation of the maturity ratings. The suggested roadmap encompasses four phases: planning, development, evaluation, and maintenance. Each phase discusses a number of decision points for development, such as the selection of process areas, maturity levels, and the delivery mechanism. An example demonstrating the roadmap's utility in industrial practice is provided. The roadmap can also be used to evaluate existing approaches. In concluding the paper, implications for management practice and research are presented. |
5dd79167d714ff3907ffbba102b8e6fba49f053e | This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building to building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon’s classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bit/cycle at the 99% level. For say a symbol rate equal to the channel bandwidth, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised. |
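A Monte Carlo sketch of the capacity statements above for i.i.d. Rayleigh channels unknown at the transmitter, using C = log2 det(I_n + (SNR/n) H H^H) with the total transmit power split across the n antennas; the 1% outage percentile below corresponds to the "over 99% of the channels" figure quoted in the abstract.

```python
# Monte Carlo estimate of MIMO capacity at the 1% outage level.
import numpy as np

rng = np.random.default_rng(0)
snr = 10 ** (21 / 10)  # 21 dB average received SNR

def capacity_1pct_outage(n, trials=20000):
    caps = np.empty(trials)
    for t in range(trials):
        # H with unit-variance complex Gaussian (Rayleigh-fading) entries.
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        M = np.eye(n) + (snr / n) * (H @ H.conj().T)
        caps[t] = np.log2(np.linalg.det(M).real)
    return np.percentile(caps, 1)  # capacity exceeded for 99% of channels

for n in (1, 2, 4, 16):
    # Compare against the quoted ~1.2, 7, 19 and 88 bits/cycle.
    print(n, round(capacity_1pct_outage(n), 1))
```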
4f911fe6ee5040e6e46e84a9f1e211153943cd9b |