_id | text |
---|---|
a0117ec4cd582974d06159644d12f65862a8daa3 | Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression. |
274946a974bc2bbbfe89c7f6fd3751396f295625 | In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we will highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering. |
573cd4046fd8b899a7753652cd0f4cf6e351c5ae | We present an approach to the recognition of complex-shaped objects in cluttered environments based on edge information. We first use example images of a target object in typical environments to train a classifier cascade that determines whether edge pixels in an image belong to an instance of the desired object or the clutter. Presented with a novel image, we use the cascade to discard clutter edge pixels and group the object edge pixels into overall detections of the object. The features used for the edge pixel classification are localized, sparse edge density operations. Experiments validate the effectiveness of the technique for recognition of a set of complex objects in a variety of cluttered indoor scenes under arbitrary out-of-image-plane rotation. Furthermore, our experiments suggest that the technique is robust to variations between training and testing environments and is efficient at runtime. |
e9b7367c63ba970cc9a0360116b160dbe1eb1bb4 | We present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefits of being more easily interpreted than neural networks, and being amenable to verification by symbolic methods. We propose a new method, called Neurally Directed Program Search (NDPS), for solving the challenging nonsmooth optimization problem of finding a programmatic policy with maximal reward. NDPS works by first learning a neural policy network using DRL, and then performing a local search over programmatic policies that seeks to minimize a distance from this neural "oracle". We evaluate NDPS on the task of learning to drive a simulated car in the TORCS car-racing environment. We demonstrate that NDPS is able to discover human-readable policies that pass some significant performance bars. We also show that PIRL policies can have smoother trajectories, and can be more easily transferred to environments not encountered during training, than corresponding policies discovered by DRL. |
48d103f81e9b70dc2de82508973bc35f61f8ed01 | This document presents the state of the art of TeS Ku-band antennas for mobile satellite communications on board high-speed trains and ground vehicles, and its evolution in terms of Ku-band antenna performance improvements and the upgrade to Ka-band terminals. |
7294d9fa5c5524a43619ad7e52d132a90cfe91bb | The general issue of this letter deals with the design of a phased array antenna for high-data-rate SATCOM. A final demonstrator antenna could be installed on an unmanned aerial vehicle (UAV) to communicate with a satellite in Ka-band. First, a compact reflection-type phase shifter is designed and realized. Second, the conception of a phased array antenna prototype is detailed. Third, a new calibration method is involved that can provide the bias voltage to be applied to each phase shifter in order to scan the beam in the desired direction. |
b40b8a2b528a88f45bba5ecd23cec02840798e72 | A 2D-periodic leaky-wave antenna for a Ka-band satcom-on-the-move ground user terminal is presented. The antenna panel operates at the 20 GHz downlink as well as the 30 GHz uplink bands with the respective circular polarizations, using a common radiation aperture and a common phase centre. The dual-band performance is achieved by a carefully designed stacked dual-layer frequency selective surface, with one layer operating at 20 GHz and being transparent at 30 GHz, and the second layer acting vice versa. The paper describes the design of the circularly polarized primary feed, the dual-layer structures, and the complete compact leaky-wave antenna panel. The measured radiation performance reveals realized-gain values above 22 dBi and efficiencies above 60%. The cross-polarization discrimination and sidelobe level are suitable to meet the power spectral requirements for satellite communications at Ka-band. |
1f009366a901c403a8aad65c94ec2fecf3428081 | Previous neural machine translation models used heuristic search algorithms (e.g., beam search) to avoid solving the maximum a posteriori problem over translation sentences at test time. In this paper, we propose Gumbel-Greedy Decoding, which trains a generative network to predict translations under a trained model. We solve this problem using the Gumbel-Softmax reparameterization, which makes our generative network differentiable and trainable through standard stochastic gradient methods. We empirically demonstrate that our proposed model is effective for generating sequences of discrete words. |
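To make the reparameterization trick in the abstract above concrete, here is a minimal sketch of Gumbel-Softmax sampling over a toy vocabulary; the logits, temperature `tau`, and vocabulary size are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Differentiable approximation of a categorical sample: add i.i.d.
    Gumbel(0, 1) noise to the logits and apply a tempered softmax.
    As tau -> 0 the output approaches a one-hot vector."""
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel_noise) / tau
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    expy = np.exp(y)
    return expy / expy.sum(axis=-1, keepdims=True)

# Toy vocabulary of five "words": the soft sample can feed a downstream
# network while gradients flow back through the logits.
logits = np.array([1.0, 2.0, 0.5, 0.1, 1.5])
print(gumbel_softmax_sample(logits))
```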
8e49caba006e1832a70162f3a93a31be25927349 | This article discusses a new idea called cognitive radar. Three ingredients are basic to the constitution of cognitive radar: 1) intelligent signal processing, which builds on learning through interactions of the radar with the surrounding environment; 2) feedback from the receiver to the transmitter, which is a facilitator of intelligence; and 3) preservation of the information content of radar returns, which is realized by the Bayesian approach to target detection through tracking. All three of these ingredients feature in the echo-location system of a bat, which may be viewed as a physical realization (albeit in neurobiological terms) of cognitive radar. Radar is a remote-sensing system that is widely used for surveillance, tracking, and imaging applications, for both civilian and military needs. In this article, we focus on future possibilities of radar with particular emphasis on the issue of cognition. As an illustrative case study along the way, we consider the problem of radar surveillance applied to an ocean environment. |
f584c3d1c2d0e4baa7f8fc72fcab7b9395970ef5 | Little has been done in the study of these intriguing questions, and I do not wish to give the impression that any extensive set of ideas exists that could be called a "theory." What is quite surprising, as far as the histories of science and philosophy are concerned, is that the major impetus for the fantastic growth of interest in brain processes, both psychological and physiological, has come from a device, a machine, the digital computer. In dealing with a human being and a human society, we enjoy the luxury of being irrational, illogical, inconsistent, and incomplete, and yet of coping. In operating a computer, we must meet the rigorous requirements for detailed instructions and absolute precision. If we understood the ability of the human mind to make effective decisions when confronted by complexity, uncertainty, and irrationality then we could use computers a million times more effectively than we do. Recognition of this fact has been a motivation for the spurt of research in the field of neurophysiology. The more we study the information processing aspects of the mind, the more perplexed and impressed we become. It will be a very long time before we understand these processes sufficiently to reproduce them. In any case, the mathematician sees hundreds and thousands of formidable new problems in dozens of blossoming areas, puzzles galore, and challenges to his heart's content. He may never resolve some of these, but he will never be bored. What more can he ask? |
9d2222506f6a076e2c9803ac4ea5414eda881c73 | INTRODUCTION: Driver drowsiness is a significant contributing factor to road crashes. One approach to tackling this issue is to develop technological countermeasures for detecting driver drowsiness, so that a driver can be warned before a crash occurs. METHOD: The goal of this review is to assess, given the current state of knowledge, whether vehicle measures can be used to reliably predict drowsiness in real time. RESULTS: Several behavioral experiments have shown that drowsiness can have a serious impact on driving performance in controlled, experimental settings. However, most of those studies have investigated simple functions of performance (such as standard deviation of lane position), and results are often reported as averages across drivers and across time. CONCLUSIONS: Further research is necessary to examine more complex functions, as well as individual differences between drivers. IMPACT ON INDUSTRY: A successful countermeasure for predicting driver drowsiness will probably require the setting of multiple criteria and the use of multiple measures. |
7bbd56f4050eb9f8b63f0eacb58ad667aaf49f25 | The phenomenal growth in mobile data traffic calls for a drastic increase in mobile network capacity beyond current 3G/4G networks. In this paper, we propose a millimeter wave mobile broadband (MMB) system for the next generation mobile communication system (5G). MMB taps into the vast spectrum in the 3-300 GHz range to meet this growing demand. We reason why the millimeter wave spectrum is suitable for mobile broadband applications. We discuss the unique advantages of millimeter waves, such as spectrum availability and large beamforming gain in small form factors. We also describe a practical MMB system design capable of providing Gb/s data rates at distances up to 500 meters and supporting mobility up to 350 km/h. By means of system simulations, we show that a basic MMB system is capable of delivering average cell throughput and cell-edge throughput 10-100 times better than current 20 MHz LTE-Advanced systems. |
3bc9f8eb5ba303816fd5f642f2e7408f0752d3c4 | |
0c83eeceee8f55fb47aed1420b5510aa185feace | Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data, and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made to a mobile directory service. Novel combinations of interaction and visual encoding are developed, including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used, and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here. |
11e907ef1dad5daead606ce6cb69ade18828cc39 | We study an acceleration method for point-to-point shortest-path computations in large and sparse directed graphs with given nonnegative arc weights. The acceleration method is called the arc-flag approach and is based on Dijkstra's algorithm. In the arc-flag approach, we allow a preprocessing of the network data to generate additional information, which is then used to speed up shortest-path queries. In the preprocessing phase, the graph is divided into regions and information is gathered on whether an arc is on a shortest path into a given region. The arc-flag method combined with an appropriate partitioning and a bidirected search achieves an average speedup factor of more than 500 compared to the standard algorithm of Dijkstra on large networks (1 million nodes, 2.5 million arcs). This combination narrows the search space of Dijkstra's algorithm down to almost the size of the corresponding shortest path for long-distance shortest-path queries. We conduct an experimental study that evaluates which partitionings are best suited for the arc-flag method. In particular, we examine partitioning algorithms from computational geometry and a multiway arc separator partitioning. The evaluation was done on German road networks. The impacts of different partitions on the speedup of the shortest-path algorithm are compared. Furthermore, we present an extension of the speedup technique to multiple levels of partitions. With this multilevel variant, the same speedup factors can be achieved with smaller space requirements. It can, therefore, be seen as a compression of the precomputed data that preserves the correctness of the computed shortest paths. |
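As an informal sketch of the arc-flag pruning described above (assuming the region partition and per-arc flags were already computed in a preprocessing phase), consider the query-time search below; the toy graph, regions, and flags are invented for illustration.

```python
import heapq

def arcflag_dijkstra(graph, source, target, region_of):
    """Dijkstra that relaxes an arc only if its precomputed flag for the
    target's region is set (i.e. the arc lies on some shortest path into
    that region).

    graph: {u: [(v, weight, flags)]} where flags is a dict region -> bool
           produced in the preprocessing phase.
    """
    target_region = region_of[target]
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w, flags in graph[u]:
            if not flags.get(target_region, False):
                continue  # arc pruned by its flag
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Tiny two-region example (region 0 = {a, b}, region 1 = {c}).
region_of = {"a": 0, "b": 0, "c": 1}
graph = {
    "a": [("b", 1.0, {0: True, 1: True}), ("c", 5.0, {1: True})],
    "b": [("c", 1.0, {1: True})],
    "c": [],
}
print(arcflag_dijkstra(graph, "a", "c", region_of))  # -> 2.0
```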
09be020a9738464799740602d7cf3273c1416c6a | Procedural texture generation enables the creation of richer and more detailed virtual environments without the help of an artist. However, finding a flexible generative model of real-world textures remains an open problem. We present a novel Convolutional Neural Network based texture model consisting of two summary statistics (the Gramian and Translation Gramian matrices), as well as spectral constraints. We investigate applying the spectral constraints via either the Fourier Transform or the Windowed Fourier Transform, and find that the Windowed Fourier Transform improves the quality of the generated textures. We demonstrate the efficacy of our system by comparing generated output with that of related state-of-the-art systems. |
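The Gramian summary statistic at the heart of such CNN texture models is simple to state; below is a minimal sketch, with a random array standing in for real CNN activations, and with the Translation Gramian and windowed-Fourier spectral constraints omitted.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map: channel-by-channel correlations
    pooled over all spatial positions (the classic texture statistic).

    features: array of shape (channels, height, width).
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

# Random activations stand in for a real CNN layer here.
rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 16, 16))
print(gram_matrix(fmap).shape)  # (8, 8)
```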
9fa3c3f1fb6f1566638f97fcb993fe121646433e | |
f8f92624c8794d54e08b3a8f94910952ae03cade | Person re-identification (re-ID) is a cross-camera retrieval task that suffers from image style variations caused by different cameras. Prior art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle). CamStyle can serve as a data augmentation approach that reduces the risk of deep network overfitting and that smooths the camera style disparities. Specifically, with a style transfer model, labeled training images can be style-transferred to each camera and, along with the original training samples, form the augmented training set. This method, while increasing data diversity against overfitting, also incurs a considerable level of noise. To alleviate the impact of noise, label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which overfitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of overfitting. We also report competitive accuracy compared with the state of the art on Market-1501 and DukeMTMC-reID. Importantly, CamStyle can be employed for the challenging problems of one-view learning and unsupervised domain adaptation (UDA) in person re-ID, both of which have critical research and application significance. The former only has labeled data in one camera view, and the latter only has labeled data in the source domain. Experimental results show that CamStyle significantly improves the performance of the baseline in both problems. Specifically, for UDA, CamStyle achieves state-of-the-art accuracy based on a baseline deep re-ID model on Market-1501 and DukeMTMC-reID. Our code is available at: https://github.com/zhunzhong07/CamStyle. |
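As a rough sketch of the label smooth regularization (LSR) used above to soften the penalty on noisy style-transferred samples, consider the generic formulation below; the class count and epsilon are illustrative, not the authors' exact training code.

```python
import numpy as np

def lsr_cross_entropy(log_probs, label, epsilon=0.1):
    """Cross-entropy against a smoothed target: the true class gets
    weight 1 - epsilon and the remaining mass is spread uniformly,
    which softens the penalty on noisy (e.g. style-transferred) labels.

    log_probs: (num_classes,) log-probabilities from the model.
    """
    k = log_probs.shape[0]
    target = np.full(k, epsilon / k)
    target[label] += 1.0 - epsilon
    return -np.sum(target * log_probs)

logits = np.array([2.0, 0.5, -1.0])
log_probs = logits - np.log(np.exp(logits).sum())  # log-softmax
print(lsr_cross_entropy(log_probs, label=0))
```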
61ec08b1fd5dc1a0e000d9cddee6747f58d928ec | This article surveys bootstrap methods for producing good approximate confidence intervals. The goal is to improve by an order of magnitude upon the accuracy of the standard intervals $\hat{\theta} \pm z_{\alpha}\hat{\sigma}$, in a way that allows routine application even to very complicated problems. Both theory and examples are used to show how this is done. The first seven sections provide a heuristic overview of four bootstrap confidence interval procedures: BCa, bootstrap-t, ABC and calibration. Sections 8 and 9 describe the theory behind these methods, and their close connection with the likelihood-based confidence interval theory developed by Barndorff-Nielsen, Cox and Reid and others. |
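For orientation, here is a minimal sketch of the plain percentile bootstrap, a simpler relative of the BCa, bootstrap-t, ABC and calibration procedures surveyed above; the data and statistic are illustrative.

```python
import numpy as np

def percentile_interval(data, stat, alpha=0.05, n_boot=2000, seed=0):
    """Plain percentile bootstrap interval: resample with replacement,
    recompute the statistic, and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

sample = np.random.default_rng(1).exponential(size=50)
print(percentile_interval(sample, np.mean))  # ~95% interval for the mean
```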
a9a6322f5d6575adb04d9ed670ffdef741840958 | Fordyce spots are ectopic sebaceous glands, ranging between 2 and 3 mm in diameter. These benign lesions are most frequently located in the oral mucosa and the genital skin. Especially in the male genital region they can cause itching, discomfort during sexual activities and are aesthetically unpleasant. So far, a variety of therapeutic procedures have been reported with varying success and recurrence rates. In the present retrospective study (n = 23 patients between 2003 and 2011), we present our surgical approach by means of the micro-punch technique. Using this effective method, we achieved very satisfactory functional and cosmetic results. There were no signs of recurrence during postoperative observations from 12 up to 84 months (median = 51.3 months). |
970698bf0a66ddf935b14e433c81e1175c0e8307 | A direct interpretation of the term Internet of Things refers to the use of standard Internet protocols for human-to-thing or thing-to-thing communication in embedded networks. Although the security needs are well recognized in this domain, it is still not fully understood how existing IP security protocols and architectures can be deployed. In this paper, we discuss the applicability and limitations of existing Internet protocols and security architectures in the context of the Internet of Things. First, we give an overview of the deployment model and general security needs. We then present challenges and requirements for IP-based security solutions and highlight specific technical limitations of standard IP security protocols. |
140df6ceb211239b36ff1a7cfdc871f06d787d11 | Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Although the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided. Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages $\mathsf{m}_1, \ldots, \mathsf{m}_T$ together with decryption keys corresponding to functions $f_1, \ldots, f_T$ reveal essentially no information other than the values $\{f_i(\mathsf{m}_j)\}_{i,j\in[T]}$. Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products) or offer somewhat weak notions of function privacy. We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based on the learning with errors assumption, on obfuscation assumptions, on simple multilinear-maps assumptions, or even on the existence of any one-way function (offering various trade-offs between security and efficiency). |
d0895e18d0553b9a35cff80bd7bd5619a19d51fb | We report a 107 GHz baseband differential transimpedance amplifier IC for high speed optical communication links. The amplifier, comprised of two Darlington resistive feedback stages, was implemented in a 500 nm InP HBT process and demonstrates 55 dBΩ differential transimpedance gain, 30 ps group delay, P1dB = 1 dBm, and is powered by a 5.2 V supply. Differential input and output impedances are 50Ω. The IC interfaces to -2V DC at the input for connections to high-speed photodiodes and -450 mV DC at the output for interfaces to Gilbert-cell mixers and to ECL logic. |
b9d5f2da9408d176a5cfc6dc0912b6d72e0ea989 | |
6910f307fef66461f5cb561b4d4fc8caf8594af5 | In the last two years, there has been a surge of word embedding algorithms and research on them. However, evaluation has mostly been carried out on a narrow set of tasks, mainly word similarity/relatedness and word relation similarity, and on a single language, namely English. We propose an approach to evaluate embeddings on a variety of languages that also yields insights into the structure of the embedding space by investigating how well word embeddings cluster along different syntactic features. We show that all embedding approaches behave similarly in this task, with dependency-based embeddings performing best. This effect is even more pronounced when generating low-dimensional embeddings. |
8d31dbda7c58de30ada8616e1fcb011d32d5cf83 | The controller area network with flexible data rate (CAN-FD) is attracting attention as the next generation of in-vehicle network technology. However, security issues have not been completely taken into account when designing CAN-FD, although every bit of information transmitted could be critical to driver safety. If we fail to solve the security vulnerabilities of CAN-FD, we cannot expect Vehicle-Information and Communications Technology (Vehicle-ICT) convergence to continue to develop. Fortunately, secure in-vehicle CAN-FD communication environments can be constructed using the larger data payload of CAN-FD. In this paper, we propose a security architecture for in-vehicle CAN-FD as a countermeasure (designed in accordance with CAN-FD specifications). We considered the characteristics of the International Organization for Standardization (ISO) 26262 Automotive Safety Integrity Level and the in-vehicle subnetwork to design a practical security architecture. We also evaluated the feasibility of the proposed security architecture using three kinds of microcontroller unit and the CANoe software. Our evaluation findings may be used as an indicator of the performance level of electronic control units for manufacturing next-generation vehicles. |
122ab8b1ac332bceacf556bc50268b9d80552bb3 | A widely accepted premise is that complex software frequently contains bugs that can be remotely exploited by attackers. When this software is on an electronic control unit (ECU) in a vehicle, exploitation of these bugs can have life or death consequences. Since software for vehicles is likely to proliferate and grow more complex in time, the number of exploitable vulnerabilities will increase. As a result, manufacturers are keenly aware of the need to quickly and efficiently deploy updates so that software vulnerabilities can be remedied as soon as possible. |
3a257a87ab5d1e317336a6cefb50fee1958bd84a | Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized. |
7dfce578644bc101ae4ffcd0184d2227c6d07809 | Polymorphic encryption and Pseudonymisation, abbreviated as PEP, form a novel approach for the management of sensitive personal data, especially in health care. Traditional encryption is rather rigid: once encrypted, only one key can be used to decrypt the data. This rigidity is becoming an every greater problem in the context of big data analytics, where different parties who wish to investigate part of an encrypted data set all need the one key for decryption. Polymorphic encryption is a new cryptographic technique that solves these problems. Together with the associated technique of polymorphic pseudonymi-sation new security and privacy guarantees can be given which are essential in areas such as (personalised) healthcare, medical data collection via self-measurement apps, and more generally in privacy-friendly identity management and data analytics. The key ideas of polymorphic encryption are: 1. Directly after generation, data can be encrypted in a 'polymorphic' manner and stored at a (cloud) storage facility in such a way that the storage provider cannot get access. Crucially, there is no need to a priori fix who gets to see the data, so that the data can immediately be protected. For instance a PEP-enabled self-measurement device will store all its measurement data in polymorphically encrypted form in a back-end data base. 2. Later on it can be decided who can decrypt the data. This decision will be made on the basis of a policy, in which the data subject should play a key role. The user of the PEP-enabled device can, for instance, decide that doctors X, Y, Z may at some stage decrypt to use the data in their diagnosis, or medical researcher groups A, B, C may use it for their investigations, or third parties U, V, W may use it for additional services, etc. 3. This 'tweaking' of the encrypted data to make it decryptable by a specific party can be done in a blind manner. It will have to be done by a trusted party who knows how to tweak the ciphertext for whom. This PEP technology can provide the necessary security and privacy infrastructure for big data analytics. People can entrust their data in polymorphically encrypted form, and each time decide later to make (parts of) it available (de-cryptable) for specific parties, for specific analysis purposes. In this way users remain in control, and can monitor which of their data is used where by whom for which purposes. The … |
0426408774fea8d724609769d6954dd75454a97e | Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in training algorithms for variational autoencoders, for the first time allowing us to train deep models of up to five stochastic layers: (1) using a structure similar to the Ladder network as the inference model, (2) a warm-up period to support stochastic units staying active in early training, and (3) the use of batch normalization. Using these improvements we show state-of-the-art log-likelihood results for generative modeling on several benchmark datasets. |
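The warm-up in point (2) is often implemented as a linear annealing of the weight on the KL term; a minimal sketch under that assumption follows (the schedule length is illustrative, not the paper's setting).

```python
def kl_warmup_weight(epoch, warmup_epochs=100):
    """Linear warm-up for the KL term of the VAE objective:
    loss = reconstruction + beta * KL, with beta ramped from 0 to 1
    so stochastic units stay active early in training."""
    return min(1.0, epoch / warmup_epochs)

for epoch in (0, 25, 50, 100, 200):
    print(epoch, kl_warmup_weight(epoch))
```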
feebbb3378245c28c708a290a888248026b06ca8 | A novel multi-frequency printed quadrifilar helix antenna based on a multiple-arm technique is presented in this paper. Dual-frequency operation and satisfactory antenna characteristics are achieved. The antenna has a relatively compact size and a hemispherical pattern with excellent circularly polarized coverage. The antenna is designed and simulated with the HFSS software. The simulation results and analyses are presented. |
1ee72ed1db4ddbc49922255194890037c7a2f797 | A broadband monopulse comparator MMIC (Monolithic Microwave Integrated Circuit) based on a GaAs process is presented in this Letter. The comparator network, constructed from three magic tees and one lumped power divider, provides one sum channel and two delta channels. The measurement results show that a very wide frequency band from 15 to 30 GHz (66.7% relative bandwidth) with less than 2.5-dB loss can be achieved for the sum channel, and the null depth is more than 22 dB in 15-27 GHz and 17 dB in 27-30 GHz for the two delta channels. The total chip size is 3.4 mm × 3.4 mm ($0.26\lambda_{0} \times 0.26\lambda_{0}$ at the center frequency of 22.5 GHz). |
202b3b3bb4a5190ce53b77564f9ae1dc65f3489b | |
8eea0da60738a54c0fc6a092aecf0daf0c51cee3 | This study investigated user acceptance, concerns, and willingness to buy partially, highly, and fully automated vehicles. By means of a 63-question Internet-based survey, we collected 5000 responses from 109 countries (40 countries with at least 25 respondents). We determined cross-national differences, and assessed correlations with personal variables, such as age, gender, and personality traits as measured with a short version of the Big Five Inventory. Results showed that respondents, on average, found manual driving the most enjoyable mode of driving. Responses were diverse: 22% of the respondents did not want to pay more than $0 for a fully automated driving system, whereas 5% indicated they would be willing to pay more than $30,000, and 33% indicated that fully automated driving would be highly enjoyable. 69% of respondents estimated that fully automated driving will reach a 50% market share between now and 2050. Respondents were found to be most concerned about software hacking/misuse, and were also concerned about legal issues and safety. Respondents scoring higher on neuroticism were slightly less comfortable about data transmitting, whereas respondents scoring higher on agreeableness were slightly more comfortable with this. Respondents from more developed countries (in terms of lower accident statistics, higher education, and higher income) were less comfortable with their vehicle transmitting data, with cross-national correlations between q = 0.80 and q = 0.90. The present results indicate the major areas of promise and concern among the international public, and could be useful for vehicle developers and other stakeholders. |
2d4f10ccd2503c37ec32aa0033d3e5b3559f4404 | Situational awareness has become an increasingly salient factor contributing to flight safety and operational performance, and the research has burgeoned to cope with the human performance challenges associated with the installation of advanced avionics systems in modern aircraft. The systematic study and application of situational awareness has also extended beyond the cockpit to include air traffic controllers and personnel operating within other complex, high consequence work domains. This volume offers a collection of essays that have made important contributions to situational awareness research and practice. To this end, it provides unique access to key readings that address the conceptual development of situational awareness, methods for its assessment, and applications to enhance situational awareness through training and design. |
a7621b4ec18719b08f3a2a444b6d37a2e20227b7 | Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges. |
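The core identity behind the speedup, convolution as pointwise multiplication in the Fourier domain, can be sketched in a few lines of NumPy; this toy version does a single circular 2-D convolution and omits the batching, feature-map reuse and GPU specifics on which the paper's gains depend.

```python
import numpy as np

def conv2d_fft(image, kernel):
    """Circular 2-D convolution via the FFT: transform once, multiply
    pointwise, transform back. Reusing the transformed feature map
    across many kernels is where the reported speedups come from."""
    kh, kw = kernel.shape
    kpad = np.zeros_like(image)
    kpad[:kh, :kw] = kernel  # zero-pad kernel to the image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kpad)))

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
ker = rng.normal(size=(3, 3))
print(conv2d_fft(img, ker).shape)  # (32, 32)
```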
0ee6f663f89e33eb84093ea9cd94212d1a8170c9 | A leaky-wave antenna (LWA) with circular polarization based on the composite right/left-handed (CRLH) substrate integrated waveguide (SIW) is investigated and presented. Series interdigital capacitors have been introduced into the circuit by etching the slots on the waveguide surface achieving a CRLH functionality. Two symmetrical leaky traveling-wave transmission lines with orthogonal polarizations are placed side-by-side and excited with 90° phase difference generating a pure circular polarization mode. The main beam of this antenna can be steered continuously by varying the frequency while maintaining a low axial ratio (below 3 dB) within the main beam direction. The performance of this LWA is verified through both full-wave simulation and measurement of a fabricated prototype showing a good agreement. |
50bc77f3ec070940b1923b823503a4c2b09e9921 | |
48b38420f9c39c601dcf81621609d131b8035f94 | Health monitoring systems have rapidly evolved during the past two decades and have the potential to change the way health care is currently delivered. Although smart health monitoring systems automate patient monitoring tasks and thereby improve patient workflow management, their efficiency in clinical settings is still debatable. This paper presents a review of smart health monitoring systems and an overview of their design and modeling. Furthermore, a critical analysis of the efficiency and clinical acceptability of current health monitoring systems is presented, along with strategies and recommendations for improving them. The main aim is to review current state-of-the-art monitoring systems and to perform an extensive, in-depth analysis of the findings in the area of smart health monitoring systems. In order to achieve this, over fifty different monitoring systems have been selected, categorized, classified and compared. Finally, major advances at the system design level are discussed, and current issues facing health care providers, as well as potential challenges to the health monitoring field, are identified and compared with those of similar systems. |
66336d0b89c3eca3dec0a41d2696a0fda23b6957 | A high-gain, broadband, and low-profile continuous transverse stub antenna array is presented in E-band. This array comprises 32 long slots excited in parallel by a uniform corporate parallel-plate-waveguide beamforming network combined with a pillbox coupler. The radiating slots and the corporate feed network are built in aluminum, whereas the pillbox coupler and its focal source are fabricated in printed circuit board technology. Specific transitions have been designed to combine both fabrication technologies. The design, fabrication, and measurement results are detailed, and a simple design methodology is proposed. The antenna is well matched ( $S_{11} < -13.6$ dB) between 71 and 86 GHz, and an excellent agreement is found between simulations and measurements, thus validating the proposed design. The antenna gain is higher than 29.3 dBi over the entire bandwidth, with a peak gain of 30.8 dBi at 82.25 GHz, and a beam having roughly the same half-power beamwidth in the E- and H-planes. This antenna architecture is considered an innovative solution for long-distance millimeter-wave telecommunication applications such as fifth-generation backhauling in E-band. |
07d9dd5c25c944bf009256cdcb622feda53dabba | Markov chain Monte Carlo (e.g., the Metropolis algorithm and the Gibbs sampler) is a general tool for the simulation of complex stochastic processes useful in many types of statistical inference. The basics of Markov chain Monte Carlo are reviewed, including the choice of algorithms and variance estimation, and some new methods are introduced. The use of Markov chain Monte Carlo for maximum likelihood estimation is explained, and its performance is compared with that of maximum pseudolikelihood estimation. |
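A minimal random-walk Metropolis sampler, the first of the algorithms named above, sketched for a standard normal target; the step size and chain length are illustrative.

```python
import numpy as np

def metropolis(log_target, x0=0.0, n_steps=10_000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    x, chain = x0, []
    lp = log_target(x0)
    for _ in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# Target: standard normal density, up to an additive log-constant.
draws = metropolis(lambda x: -0.5 * x * x)
print(draws.mean(), draws.std())  # roughly 0 and 1
```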
2ffd5c401a958e88c80291c48738d21d96942c1a | We are interested in how the concept of affordances can affect our view of autonomous robot control, and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies of the concept of affordances. In this paper, we studied how a mobile robot, equipped with a 3D laser scanner, can learn to perceive the traversability affordance and use it to wander in a room filled with spheres, cylinders and boxes. The results showed that after learning, the robot can wander around avoiding contact with non-traversable objects (i.e. boxes, upright cylinders, or lying cylinders in certain orientations), but moving over traversable objects (such as spheres, and lying cylinders in a rollable orientation with respect to the robot), rolling them out of its way. We have shown that for each action approximately 1% of the perceptual features were relevant to determine whether it is afforded or not, and that these relevant features are positioned in certain regions of the range image. The experiments are conducted both in a physics-based simulator and on a real robot. |
788121f29d86021a99a4d1d8ba53bb1312334b16 | THIS PAPER is concerned with the nature of the tutorial process; the means whereby an adult or "expert" helps somebody who is less adult or less expert. Though its aim is general, it is expressed in terms of a particular task: a tutor seeks to teach children aged 3, 4 and 5 yr to build a particular three-dimensional structure that requires a degree of skill that is initially beyond them. It is the usual type of tutoring situation in which one member "knows the answer" and the other does not, rather like a "practical" in which only the instructor "knows how". The changing interaction of tutor and children provides our data. A great deal of early problem solving by the developing child is of this order. Although from the earliest months of life he is a "natural" problem solver in his own right (e.g. Bruner, 1973) it is often the case that his efforts are assisted and fostered by others who are more skilful than he is (Kaye, 1970). Whether he is learning the procedures that constitute the skills of attending, communicating, manipulating objects, locomoting, or, indeed, a more effective problem solving procedure itself, there are usually others in attendance who help him on his way. Tutorial interactions are, in short, a crucial feature of infancy and childhood. Our species, moreover, appears to be the only one in which any "intentional" tutoring goes on (Bruner, 1972; Hinde, 1971). For although it is true that many of the higher primate species learn by observation of their elders (Hamburg, 1968; van Lawick-Goodall, 1968), there is no evidence that those elders do anything to instruct their charges in the performance of the skill in question. What distinguishes man as a species is not only his capacity for learning, but for teaching as well. It is the main aim of this paper to examine some of the major implications of this interactive, instructional relationship between the developing child and his elders for the study of skill acquisition and problem solving. The acquisition of skill in the human child can be fruitfully conceived as a hierarchical program in which component skills are combined into "higher skills" by appropriate orchestration to meet new, more complex task requirements (Bruner, 1973). The process is analogous to problem solving in which mastery of "lower order" or constituent problems is a sine qua non for success with a larger problem, each level influencing the other—as with reading where the deciphering of words makes possible the deciphering of sentences, and sentences then aid in the deciphering of particular words (F. Smith, 1971). Given persistent intention in the young learner, given a "lexicon" of constituent skills, the crucial task is often one of com- |
13eba30632154428725983fcd8343f3f1b3f0695 | Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving about a 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank. |
c22f9e2f3cc1c2296f7edb4cf780c6503e244a49 | |
3dd9793bc7b1f97115c45e90c8874f786262f466 | Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate. |
e7b4ea66dff3966fc9da581f32cb69132a7bbd99 | The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities. |
09168f7259e0df1484115bfd44ce4fdcafdc15f7 | In a two tier cellular network - comprised of a central macrocell underlaid with shorter range femtocell hotspots - cross-tier interference limits overall capacity with universal frequency reuse. To quantify near-far effects with universal frequency reuse, this paper derives a fundamental relation providing the largest feasible cellular Signal-to-Interference-Plus-Noise Ratio (SINR), given any set of feasible femtocell SINRs. We provide a link budget analysis which enables simple and accurate performance insights in a two-tier network. A distributed utility- based SINR adaptation at femtocells is proposed in order to alleviate cross-tier interference at the macrocell from cochannel femtocells. The Foschini-Miljanic (FM) algorithm is a special case of the adaptation. Each femtocell maximizes their individual utility consisting of a SINR based reward less an incurred cost (interference to the macrocell). Numerical results show greater than 30% improvement in mean femtocell SINRs relative to FM. In the event that cross-tier interference prevents a cellular user from obtaining its SINR target, an algorithm is proposed that reduces transmission powers of the strongest femtocell interferers. The algorithm ensures that a cellular user achieves its SINR target even with 100 femtocells/cell-site (with typical cellular parameters) and requires a worst case SINR reduction of only 16% at femtocells. These results motivate design of power control schemes requiring minimal network overhead in two-tier networks with shared spectrum. |
5309b8f4723d44de2fa51cd2c15bffebf541ef57 | The simplicity and intuitive design of traditional planar printed quasi-Yagi antennas, together with their good directivity, have led to their widespread popularity. In this paper, a novel quasi-Yagi antenna with a single director and a concave parabolic reflector, operating in S-band, is proposed. The impedance characteristic and radiation characteristic are simulated with CST Microwave Studio, and the antenna is fabricated and measured. The measured results indicate that the antenna, operating at 2.28-2.63 GHz, achieves an average gain of 6.5 dBi within the operating frequency range, with a peak gain of 7.5 dBi at 2.5 GHz. The proposed antenna can be widely used in WLAN/TD-LTE/BD1 applications and so on. |
c03c3583153d213f696f1cbd4cd65c57437473a5 | This paper proposes an LLC resonant converter based LED (Light Emitting Diode) lamp driver with high power factor. The proposed circuit uses a boost converter for PFC (Power Factor Correction), which operates in continuous conduction mode (CCM), and a quasi half bridge resonant converter to drive the LED lamp load. The LLC converter is designed such that the solid-state switches of the quasi half bridge operate under zero voltage switching (ZVS) to reduce switching losses. The analysis, design, modeling and simulation of a 50 W LED driver are carried out using the MATLAB/Simulink tool for universal AC mains. Power quality indices such as the total harmonic distortion of the AC mains current (THDi), power factor (PF) and crest factor (CF) are calculated to evaluate the performance of the proposed LED lamp driver. |
94a62f470aeea69af436e2dd0b54cd50eaaa4b23 | As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, model-based, and hybrid CF algorithms (which combine CF with other recommendation techniques), with examples of representative algorithms in each category and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state of the art, we attempt to present a comprehensive survey of CF techniques, which can serve as a roadmap for research and practice in this area. |
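As a concrete instance of the memory-based category, here is a minimal user-based CF sketch with cosine similarity on a toy ratings matrix; the matrix and the zero-means-unrated convention are illustrative assumptions.

```python
import numpy as np

def predict_user_based(ratings, user, item):
    """Memory-based CF: predict a missing rating as the similarity-
    weighted average of other users' ratings for the same item.

    ratings: (users, items) matrix with 0 marking 'unrated'.
    """
    norms = np.linalg.norm(ratings, axis=1) + 1e-12
    sims = ratings @ ratings[user] / (norms * norms[user])  # cosine
    mask = ratings[:, item] > 0   # only users who rated the item
    mask[user] = False
    if not mask.any():
        return 0.0
    return sims[mask] @ ratings[mask, item] / (np.abs(sims[mask]).sum() + 1e-12)

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 5, 4]], dtype=float)
print(predict_user_based(R, user=1, item=1))
```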
ced981c28215dd218f05ecbba6512671b22d1cc6 | Nowadays social media information, such as news, links, images, or videos, is shared extensively. However, information disseminated through social media often lacks quality: there is little fact checking, more bias, and many rumors. Many researchers have investigated credibility on Twitter, but there are no research reports about information credibility on Facebook. This paper proposes features for measuring the credibility of Facebook information, and we developed a system for this purpose. First, we developed an FB credibility evaluator for measuring the credibility of each post via manual human labelling, and collected training data for creating a model using a Support Vector Machine (SVM). Second, we developed a Chrome extension of FB credibility that lets Facebook users evaluate the credibility of each post. Based on the usage analysis of our FB credibility Chrome extension, about 81% of users' responses agree with the suggested credibility automatically computed by the proposed system. |
1b2a8dc42d6eebc937c9642799a6de87985c3da6 | The social media network phenomenon creates massive amounts of valuable data that is available online and easy to access. Many users share images, videos, comments, reviews, news and opinions on different social network sites, with Twitter being one of the most popular ones. Data collected from Twitter is highly unstructured, and extracting useful information from tweets is a challenging task. Twitter has a huge number of Arabic users, who mostly post and write their tweets using the Arabic language. While there has been a lot of research on sentiment analysis in English, the amount of research and the number of datasets in the Arabic language are limited. This paper introduces an Arabic-language dataset of opinions on health services, collected from Twitter. The paper first details the process of collecting the data from Twitter and the processes of filtering, pre-processing and annotating the Arabic text in order to build a large sentiment analysis dataset in Arabic. Several machine learning algorithms (Naïve Bayes, Support Vector Machine and Logistic Regression), alongside deep and convolutional neural networks, were utilized in our sentiment analysis experiments on this health dataset. |
d228e3e200c2c6f757b9b3579fa058b2953083c0 | |
f87b713182d39297e930c41e23ff26394cbdcade | |
838ec45eeb7f63875742a76aa1080563f44af619 | This article defines and discusses one of these qualitative methods--the case research strategy. Suggestions are provided for researchers who wish to undertake research employing this approach. Criteria for the evaluation of case research are established, and several characteristics useful for categorizing the studies are identified. A sample of papers drawn from information systems journals is reviewed. The paper concludes with examples of research areas that are particularly well-suited to investigation using the case research approach. ACM categories: H.O., J.O. |
9cf2c6d3ab15c1f23fc708e74111324fa82a8169 | This article discusses the roles of ICT in education. Information and communication technologies (ICT) at present influence every aspect of human life. They play salient roles in workplaces, business, education, and entertainment. Moreover, many people recognize ICTs as catalysts for change: change in working conditions, in handling and exchanging information, in teaching methods, learning approaches, scientific research, and in accessing information. Therefore, this review article discusses the roles of ICTs and the promises, limitations and key challenges of their integration into education systems. The review attempts to answer the following questions: (1) What are the benefits of ICTs in education? (2) What are the existing promises of ICT use in the education systems of some developing countries? (3) What are the limitations and key challenges of ICT integration into education systems? The review concludes that, regardless of all the limitations characterizing it, ICT benefits education systems by helping to provide quality education in alignment with constructivism, a contemporary paradigm of learning. |
bb73ea8dc36030735c1439acf93a0e77ac8a907c | This letter presents an internal uniplanar small-size multiband antenna for tablet/laptop computer applications. The proposed antenna, in addition to common LTE/WWAN channels, covers the commercial GPS/GLONASS frequency bands. The antenna is comprised of three sections: coupled-fed, shorting and low-frequency spiral strips, with a size of 50 × 11 × 0.8 mm². With the aid of the spiral strip, lower-band operation at 900 MHz is achieved. The two operating frequency bands cover 870-965 and 1556-2480 MHz. In order to validate the simulation results, a prototype of the proposed printed antenna is fabricated and tested. Good agreement between the simulation and measurement results is obtained. |
8308e7b39d1f556e4041b4630a41aa8435fe1a49 | MIMO (multiple-input multiple-output) radar refers to an architecture that employs multiple, spatially distributed transmitters and receivers. While, in a general sense, MIMO radar can be viewed as a type of multistatic radar, the separate nomenclature suggests unique features that set MIMO radar apart from the multistatic radar literature and that have a close relation to MIMO communications. This article reviews some recent work on MIMO radar with widely separated antennas. Widely separated transmit/receive antennas capture the spatial diversity of the target's radar cross section (RCS). Unique features of MIMO radar are explained and illustrated by examples. It is shown that with noncoherent processing, a target's RCS spatial variations can be exploited to obtain a diversity gain for target detection and for estimation of various parameters, such as angle of arrival and Doppler. For target location, it is shown that coherent processing can provide a resolution far exceeding that supported by the radar's waveform. |
958d165f8bb77838ec915d4f214a2310e3adde19 | Distributed representations of words as real-valued vectors in a relatively low-dimensional space aim at extracting syntactic and semantic features from large text corpora. A recently introduced neural network, named word2vec (Mikolov et al., 2013a; Mikolov et al., 2013b), was shown to encode semantic information in the direction of the word vectors. In this brief report, it is proposed to use the length of the vectors, together with the term frequency, as a measure of word significance in a corpus. Experimental evidence using a domain-specific corpus of abstracts is presented to support this proposal. A useful visualization technique for text corpora emerges, where words are mapped onto a two-dimensional plane and automatically ranked by significance. |
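A minimal sketch of the proposed significance measure follows; combining vector length and term frequency by a simple product is an assumption made here for illustration (the report treats them as two coordinates), and the random vectors merely stand in for trained word2vec embeddings, so the resulting ranking carries no real meaning.

```python
import numpy as np

def rank_by_significance(vectors, term_freq):
    """Rank words by vector length times term frequency, one simple way
    to combine the two signals proposed in the report.

    vectors: {word: np.ndarray}; term_freq: {word: int}.
    """
    scores = {w: np.linalg.norm(v) * term_freq[w] for w, v in vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(0)
vocab = ["gene", "protein", "the", "assay", "of"]
vecs = {w: rng.normal(size=50) for w in vocab}           # stand-in embeddings
freqs = {"gene": 120, "protein": 90, "the": 5000, "assay": 40, "of": 4500}
print(rank_by_significance(vecs, freqs))
```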
bb9e418469d018be7f5ac2c4b2435ccac50088a3 | The multimedia community has witnessed the rise of deep learning–based techniques in analyzing multimedia content more effectively. In the past decade, the convergence of deep-learning and multimedia analytics has boosted the performance of several traditional tasks, such as classification, detection, and regression, and has also fundamentally changed the landscape of several relatively new areas, such as semantic segmentation, captioning, and content generation. This article aims to review the development path of major tasks in multimedia analytics and take a look into future directions. We start by summarizing the fundamental deep techniques related to multimedia analytics, especially in the visual domain, and then review representative high-level tasks powered by recent advances. Moreover, the performance review of popular benchmarks gives a pathway to technology advancement and helps identify both milestone works and future directions. |
c481fb721531640e047ac7f598bd7714a5e62b33 | Teachers have tried to teach their students by introducing textbooks along with verbal instructions in the traditional education system. However, teaching and learning methods can change with developing information and communication technology (ICT). It is time to adapt students to interactive learning systems so that they can improve their learning, grasping, and memorizing capabilities. It is indispensable to create a high-quality and realistic learning environment for students, and visual learning can make material easier to understand and work with. We developed visual learning materials (an overview of the solar system) in the form of video for students at the primary level using different multimedia application tools. The objective of this paper is to examine students' abilities to acquire new knowledge or skills through visual learning materials and through blended learning, that is, the integration of visual learning materials with teachers' instructions. We visited a primary school in Dhaka city for this study and conducted teaching with three different groups of students: (i) a teacher taught students by the traditional system on the same materials and marked the students' ability to adapt using a set of questions; (ii) another group was taught with only the visual learning material, and assessment was done with 15 questionnaires; (iii) the third group was taught with the video of the solar system combined with the teacher's instructions and assessed with the same questionnaires. This integration of visual materials (the solar system) with verbal instructions is a blended approach to learning. The interactive blended approach greatly promoted students' ability to acquire knowledge and skills. Students' responses and perceptions were more positive towards the blended technique than towards the other two methods. This interactive blended learning system may be an appropriate method, especially for school children. |
bde40c638fd03b685114d8854de2349969f2e091 | Urban black holes, as traffic anomalies, have caused many catastrophic accidents in big cities. Traditional detection methods depend on a single data source (e.g., taxi trajectories) and view the problem from one perspective, which is too incomplete to describe regional crowd flow. In this paper, we model the urban black holes in each region of New York City (NYC) at different time intervals with a 3-dimensional tensor by fusing cross-domain data sources. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we leverage the knowledge from geographical features, 311 complaint features and human mobility features to recover the black-hole situation throughout NYC. This information can facilitate local residents' and officials' decision making. We evaluate our model with five datasets related to NYC, diagnosing urban black holes that cannot be identified by any single dataset (or detecting them earlier). Experimental results demonstrate its advantages over four baseline methods. |
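A toy sketch of the tensor-completion step using a plain CP decomposition in TensorLy; the paper's context-aware decomposition additionally couples the factors with the geographical, 311-complaint and mobility features, which is omitted here, and all dimensions and the rank are made up:

```python
# Sketch: complete missing entries of a region x time x feature tensor
# with a masked CP (PARAFAC) decomposition. Plain CP, not the paper's
# context-aware variant.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

X = np.random.rand(50, 24, 6)              # toy tensor: regions x hours x features
mask = np.random.rand(*X.shape) > 0.3      # True where entries are observed
cp = parafac(X * mask, rank=5, mask=mask)  # fit only to observed entries
X_hat = tl.cp_to_tensor(cp)                # completed tensor estimate
```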
09f83b83fd3b0114c2c902212101152c2d2d1259 | |
e905396dce34e495b32e40b93195deeba7096476 | This communication presents a wide-band and low-profile H-plane horn antenna based on ridged substrate integrated waveguide (SIW) with a large conducting ground. The horn antenna is implemented in a single substrate with a thickness of 0.13 λ0 at the center frequency. Despite its low profile, the new H-plane horn antenna achieves a very wide bandwidth by employing an arc-shaped copper taper printed on the extended dielectric slab and a three-step ridged SIW transition. The ridged SIW is critical for widening the operating bandwidth and lowering the characteristic impedance so that an excellent impedance match from the coaxial probe to the narrow SIW can be obtained over a wide frequency range. The measured VSWR of the fabricated horn antenna is below 2.5 from 6.6 GHz to 18 GHz. The antenna also exhibits a stable radiation beam over the same frequency range, and the measured results agree well with the simulated ones. |
a33a1c0f69327b9bc112ee4857112312c41b13ff | We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of convolutional neural networks (CNNs) on both classification and regression tasks. During training, our style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization, sampling input style embeddings from a multivariate normal distribution instead of inferring them from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift, and can be used as a simple, domain-agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating consistent improvements in generalization. |
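A minimal sketch of the style-randomization step, assuming a pretrained arbitrary style transfer network exposed as a hypothetical `stylize(content, embedding)` function, with 100-dimensional style embeddings whose empirical mean `mu` and covariance `sigma` have been estimated beforehand:

```python
# Sketch: sample a style embedding from a multivariate normal instead of
# encoding a style image; `stylize`, `mu` and `sigma` are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_style_embedding(mu, sigma, alpha=0.5):
    # alpha interpolates toward the mean to control augmentation strength
    # (an illustrative knob, not necessarily the paper's parameterization).
    z = rng.multivariate_normal(mu, sigma)
    return alpha * z + (1.0 - alpha) * mu

# augmented = stylize(content_image, random_style_embedding(mu, sigma))
```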
91acde3f3db1f793070d9e58b05c48401ff46925 | Decision trees are a popular technique in statistical data classification. They recursively partition the feature space into disjoint sub-regions until each sub-region becomes homogeneous with respect to a particular class. The basic Classification and Regression Tree (CART) algorithm partitions the feature space using axis-parallel splits. When the true decision boundaries are not aligned with the feature axes, this approach can produce a complicated boundary structure. Oblique decision trees use oblique decision boundaries to potentially simplify the boundary structure. The major limitation of this approach is that the tree induction algorithm is computationally expensive. In this article we present a new decision tree algorithm, called HHCART. The method utilizes a series of Householder matrices to reflect the training data at each node during tree construction. Each reflection is based on the directions of the eigenvectors of each class's covariance matrix. Considering axis-parallel splits in the reflected training data provides an efficient way of finding oblique splits in the unreflected training data. Experimental results show that the accuracy and size of HHCART trees are comparable with some benchmark methods in the literature. An appealing feature of HHCART is that it can handle both qualitative and quantitative features in the same oblique split. |
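A sketch of a single HHCART-style reflection: build a Householder matrix from a dominant eigenvector of one class's covariance matrix and reflect the data, so that axis-parallel splits in the reflected space correspond to oblique splits in the original space (the full algorithm repeats this per node and per class):

```python
# Sketch: one Householder reflection step. An axis-parallel split on the
# returned reflected data is an oblique split on the original data.
import numpy as np

def householder_reflect(X_class, X_all):
    cov = np.cov(X_class, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    d = eigvecs[:, -1]                       # dominant eigenvector
    e1 = np.zeros_like(d)
    e1[0] = 1.0
    u = e1 - d
    if np.linalg.norm(u) < 1e-12:            # already axis-aligned: no reflection
        return X_all, np.eye(len(d))
    u /= np.linalg.norm(u)
    H = np.eye(len(d)) - 2.0 * np.outer(u, u)  # Householder matrix: reflects d onto e1
    return X_all @ H, H
```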
896e160b98d52d13a97caa664038e37e86075ee4 | Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by data sets, such as AVA and TID2013. Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being significantly simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks. Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing/enhancement algorithms in a photographic pipeline. All this is done without need for a “golden” reference image, consequently allowing for single-image, semantic- and perceptually-aware, no-reference quality assessment. |
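A small sketch of the distribution-based scoring idea: collapse a predicted 10-bin histogram of human ratings (AVA-style) into a mean opinion score, with a CDF-based earth mover's distance for comparing predicted and ground-truth distributions; the exact loss and bin count used in the paper are assumptions here:

```python
# Sketch: mean opinion score from a predicted rating distribution, plus a
# simple CDF-based earth mover's distance between two histograms.
import numpy as np

def mean_score(p):
    # p: predicted probabilities over discrete scores 1..10 (sums to 1)
    return float(np.sum(np.arange(1, 11) * p))

def emd(p, q, r=2):
    cdf_diff = np.cumsum(p) - np.cumsum(q)
    return float(np.mean(np.abs(cdf_diff) ** r) ** (1.0 / r))
```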
8cfb316b3233d9b598265e3b3d40b8b064014d63 | The current state-of-the-art in video classification is based on Bag-of-Words using local visual descriptors. Most commonly these are histogram of oriented gradients (HOG), histogram of optical flow (HOF) and motion boundary histogram (MBH) descriptors. While this approach is very powerful for classification, it is also computationally expensive. This paper addresses the problem of computational efficiency. Specifically: (1) We propose several speed-ups for densely sampled HOG, HOF and MBH descriptors and release Matlab code; (2) We investigate the trade-off between accuracy and computational efficiency of descriptors in terms of frame sampling rate and type of optical flow method; (3) We investigate the trade-off between accuracy and computational efficiency for computing the feature vocabulary, using and comparing most of the commonly adopted vector quantization techniques: k-means, hierarchical k-means, Random Forests, Fisher Vectors and VLAD. |
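A compact sketch of the vocabulary and encoding stage for one of the compared quantization schemes (plain k-means, here in its mini-batch form for speed); `train_descriptors` stands in for pre-extracted HOG/HOF/MBH descriptors and is an assumption:

```python
# Sketch: build a Bag-of-Words vocabulary with mini-batch k-means and
# encode a video's descriptors as an L1-normalized word histogram.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

vocab = MiniBatchKMeans(n_clusters=4000, batch_size=10000, n_init=3)
vocab.fit(np.vstack(train_descriptors))   # train_descriptors: list of (n_i, d) arrays

def bow_histogram(descriptors):
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```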
9992626e8e063c1b23e1920efd63ab4f008710ac | |
1a8fd4b2f127d02f70f1c94f330628be31d18681 | |
d880d303ee0bfdbc80fc34df0978088cd15ce861 | We present a novel end-to-end partially supervised deep learning approach for video anomaly detection and localization using only normal samples. The insight that motivates this study is that normal samples can be associated with at least one Gaussian component of a Gaussian Mixture Model (GMM), while anomalies do not belong to any Gaussian component. The method is based on a Gaussian Mixture Variational Autoencoder, which can learn feature representations of the normal samples as a Gaussian Mixture Model trained using deep learning. A Fully Convolutional Network (FCN) that does not contain a fully-connected layer is employed for the encoder-decoder structure to preserve relative spatial coordinates between the input image and the output feature map. Based on the joint probabilities of each of the Gaussian mixture components, we introduce a sample-energy-based method to score the anomaly of image test patches. A two-stream network framework is employed to combine the appearance and motion anomalies, using RGB frames for the former and dynamic flow images for the latter. We test our approach on two popular benchmarks (UCSD Dataset and Avenue Dataset). The experimental results verify the superiority of our method compared to the state of the art. |
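A minimal sketch of the sample-energy score, assuming encoded feature vectors and already-fitted GMM parameters (mixture weights, means and covariances); higher energy means the patch is unlikely under every normal component:

```python
# Sketch: sample energy of an encoded patch z under a fitted GMM.
import numpy as np
from scipy.stats import multivariate_normal

def sample_energy(z, weights, means, covs):
    likelihood = sum(
        w * multivariate_normal.pdf(z, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    )
    return -np.log(likelihood + 1e-12)   # epsilon avoids log(0)

# Patches whose energy exceeds a validation-chosen threshold are flagged
# as anomalous.
```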
09f02eee625b7aa6ba7e6f31cfb56f6d4ddd0fdd | The evolution of the World Wide Web (WWW) and smart-phone technologies has played a key role in revolutionizing our daily lives. Location-based social networks (LBSN) have emerged and facilitate users in sharing check-in information and multimedia contents. A Point of Interest (POI) recommendation system uses the check-in information to predict the most likely check-in locations. Different aspects of the check-in information, for instance the geographical distance, category, and temporal popularity of a POI, the temporal check-in trends of a user, and the social (friendship) information of a user, play a crucial role in an efficient recommendation. In this paper, we propose a fused recommendation model termed MAPS (Multi-Aspect Personalized POI Recommender System), which is, to our knowledge, the first to fuse the categorical, temporal, social and spatial aspects in a single model. The major contributions of this paper are: (i) it formulates the problem as a graph of location nodes with constraints on the category and distance aspects (i.e., the edge between two locations is constrained by a threshold distance and the category of the locations), (ii) it proposes a multi-aspect fused POI recommendation model, and (iii) it extensively evaluates the model with two real-world data sets. |
04d7b7851683809cab561d09b5c5c80bd5c33c80 | QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference. |
248040fa359a9f18527e28687822cf67d6adaf16 | We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state-to-action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude, we discuss LfD limitations and related promising areas for future research. |
38b1eb892e51661cd0e3c9f6c38f1f7f8def1317 | Smartphones and "app" markets are raising concerns about how third-party applications may misuse or improperly handle users' privacy-sensitive data. Fortunately, unlike in the PC world, we have a unique opportunity to improve the security of mobile applications thanks to the centralized nature of app distribution through popular app markets. Thorough validation of apps applied as part of the app market admission process has the potential to significantly enhance mobile device security. In this paper, we propose AppInspector, an automated security validation system that analyzes apps and generates reports of potential security and privacy violations. We describe our vision for making smartphone apps more secure through automated validation and outline key challenges such as detecting and analyzing security and privacy violations, ensuring thorough test coverage, and scaling to large numbers of apps. |
74640bdf33a1e8b7a319fbbbaeccf681f80861cc | |
b7634a0ac84902b135b6073b61ed6a1909f89bd2 | |
c7b007d546d24322152719898c2836910f0d3939 | The present research seeks to extend existing theory on self-disclosure to the online arena in higher educational institutions and contribute to the knowledge base and understanding about the use of a popular social networking site (SNS), Facebook, by college students. We conducted a non-experimental study to investigate how university students (N = 463) use Facebook, and examined the roles that personality and culture play in disclosure of information in online SNS-based environments. Results showed that individuals do disclose differently online vs. in-person, and that both culture and personality matter. Specifically, it was found that collectivistic individuals low on extraversion and interacting in an online environment disclosed the least honest and the most audience-relevant information, as compared to others. Exploratory analyses also indicate that students use sites such as Facebook primarily to maintain existing personal relationships and selectively use privacy settings to control their self-presentation on SNSs. The findings of this study offer insight into understanding college students' self-disclosure on SNSs, add to the literature on personality and self-disclosure, and shape future directions for research and practice on online self-presentation. |
c83abfeb5a2f7d431022cd1f8dd7da41431c4810 | We present a framework for the estimation of driver behavior at intersections, with applications to autonomous driving and vehicle safety. The framework is based on modeling the driver behavior and vehicle dynamics as a hybrid-state system (HSS), with driver decisions being modeled as a discrete-state system and the vehicle dynamics modeled as a continuous-state system. The proposed estimation method uses observable parameters to track the instantaneous continuous state and estimates the most likely behavior of a driver given these observations. This paper describes a framework that encompasses the hybrid structure of vehicle-driver coupling and uses hidden Markov models (HMMs) to estimate driver behavior from filtered continuous observations. Such a method is suitable for scenarios that involve unknown decisions of other vehicles, such as lane changes or intersection access. Such a framework requires extensive data collection, and the authors describe the procedure used in collecting and analyzing vehicle driving data. For illustration, the proposed hybrid architecture and driver behavior estimation techniques are trained and tested near intersections with exemplary results provided. Comparison is made between the proposed framework, simple classifiers, and naturalistic driver estimation. Obtained results show promise for using the HSS-HMM framework. |
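A rough sketch of the discrete-state estimation step with hmmlearn, assuming one Gaussian HMM per driver behavior; the behavior names and the training arrays `X_stop` and `X_go` of filtered speed/acceleration observations are hypothetical:

```python
# Sketch: train one HMM per behavior and pick the behavior whose model
# best explains a window of filtered continuous observations.
from hmmlearn.hmm import GaussianHMM

training_data = {"stop": X_stop, "go": X_go}   # hypothetical (T, d) arrays
models = {name: GaussianHMM(n_components=3).fit(X)
          for name, X in training_data.items()}

def estimate_behavior(obs_window):
    # obs_window: (t, d) array of recent filtered observations
    scores = {name: m.score(obs_window) for name, m in models.items()}
    return max(scores, key=scores.get)   # most likely behavior
```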
a8c1347b82ba3d7ce03122955762db86d44186d0 | This paper develops a novel framework for efficient large-scale video retrieval. We aim to find videos according to higher-level similarities, which is beyond the scope of traditional near-duplicate search. Following the popular hashing technique, we employ compact binary codes to facilitate nearest neighbor search. Unlike previous methods, which capitalize on only one type of hash code for retrieval, this paper combines heterogeneous hash codes to effectively describe the diverse and multi-scale visual contents in videos. Our method integrates feature pooling and hashing in a single framework. In the pooling stage, we cast video frames into a set of pre-specified components, which capture a variety of semantics of video contents. In the hashing stage, we represent each video component as a compact hash code, and combine multiple hash codes into hash tables for effective search. To speed up the retrieval while retaining the most informative codes, we propose a graph-based influence maximization method to bridge the pooling and hashing stages. We show that the influence maximization problem is submodular, which allows a greedy optimization method to achieve a nearly optimal solution. Our method works very efficiently, retrieving thousands of video clips from the TRECVID dataset in about 0.001 second. For a larger-scale synthetic dataset with 1M samples, it uses less than 1 second in response to 100 queries. Our method is extensively evaluated in both unsupervised and supervised scenarios, and the results on the TRECVID Multimedia Event Detection and Columbia Consumer Video datasets demonstrate the success of our proposed technique. |
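A generic sketch of the greedy strategy that the submodularity result licenses: repeatedly add the candidate hash code with the largest marginal influence gain. The influence function itself is abstracted as a callable, since the paper's definition is not reproduced here:

```python
# Sketch: greedy maximization of a monotone submodular influence function;
# gives a (1 - 1/e)-approximation to the optimal size-k selection.
def greedy_select(candidates, influence, k):
    selected, current = [], 0.0
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for c in candidates:
            if c in selected:
                continue
            gain = influence(selected + [c]) - current
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        current += best_gain
    return selected
```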
0faccce84266d2a8f0c4fa08c33b357b42cf17f2 | Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks. |
ac569822882547080d3dc51fed10c746946a6cfd | |
e70ea58d023df2c31325a9b409ee4493e38b6768 | |
3895912b187adee599b1ea662da92865dd0b197d | OBJECTIVE
To describe the promise and potential of big data analytics in healthcare.
METHODS
The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions.
RESULTS
The paper provides a broad overview of big data analytics for healthcare researchers and practitioners.
CONCLUSIONS
Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome. |
73a19026fb8a6ef5bf238ff472f31100c33753d0 | In this paper, we review the basic concepts of association rule mining and survey the existing association rule mining techniques. A single article cannot, of course, completely review all the algorithms, yet we hope that the references cited will cover the major theoretical issues and guide the researcher toward interesting research directions that have yet to be explored. |
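For concreteness, a tiny sketch of the two measures underlying most of the surveyed algorithms: support and confidence of a rule X -> Y over a set of transactions.

```python
# Sketch: support and confidence for association rules over transactions
# represented as Python sets of items.
def support(itemset, transactions):
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(X, Y, transactions):
    return support(X | Y, transactions) / support(X, transactions)

transactions = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
print(confidence({"bread"}, {"milk"}, transactions))   # 0.666..., i.e. 2/3
```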
75e5ba7621935b57b2be7bf4a10cad66a9c445b9 | We develop a parameter-free face recognition algorithm which is insensitive to large variations in lighting, expression, occlusion, and age using a single gallery sample per subject. We take advantage of the observation that an equidistant-prototypes embedding is an optimal embedding that maximizes the minimum one-against-the-rest margin between the classes. Rather than preserving the global or local structure of the training data, our method, called linear regression analysis (LRA), applies a least-squares regression technique to map gallery samples to equally distant locations, regardless of the true structure of the training data. Further, a novel generic learning method, which maps the intra-class facial differences of generic faces to the zero vector, is incorporated to enhance the generalization capability of LRA. Using this method, learning based on only a handful of generic classes can largely improve face recognition performance, even when the generic data are collected from a different database and camera set-up. Incremental learning based on the Greville algorithm allows the mapping matrix to be efficiently updated from newly arriving gallery classes, training samples, or generic variations. Although it is fairly simple and parameter-free, LRA, combined with commonly used local descriptors such as Gabor representations and local binary patterns, outperforms state-of-the-art methods in several standard experiments on the Extended Yale B, CMU PIE, and AR databases. |
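A minimal sketch of the core LRA step under the assumption that one-hot class indicators serve as the equidistant prototypes (any set of mutually equidistant targets works); the ridge term is added here purely for numerical stability and is not part of the parameter-free description:

```python
# Sketch: least-squares mapping of gallery features to equidistant
# prototypes, with nearest-prototype classification.
import numpy as np

def lra_fit(X, y, n_classes, lam=1e-3):
    # X: (n, d) gallery features; y: integer class labels in [0, n_classes)
    T = np.eye(n_classes)[y]                 # one-hot equidistant targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

def lra_predict(X, W):
    return np.argmax(X @ W, axis=1)          # strongest prototype response
```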
b1cfe7f8b8557b03fa38036030f09b448d925041 | This paper presents a texture segmentation algorithm inspired by the multi-channel filtering theory for visual information processing in the early stages of the human visual system. The channels are characterized by a bank of Gabor filters that nearly uniformly covers the spatial-frequency domain, and a systematic filter selection scheme is proposed, which is based on reconstruction of the input image from the filtered images. Texture features are obtained by subjecting each (selected) filtered image to a nonlinear transformation and computing a measure of "energy" in a window around each pixel. A square-error clustering algorithm is then used to integrate the feature images and produce a segmentation. A simple procedure to incorporate spatial information in the clustering process is proposed. A relative index is used to estimate the "true" number of texture categories.
Keywords: texture segmentation, multi-channel filtering, clustering, clustering index, Gabor filters, wavelet transform.
1. INTRODUCTION
Image segmentation is a difficult yet very important task in many image analysis or computer vision applications. Differences in the mean gray level or in color in small neighborhoods alone are not always sufficient for image segmentation. Rather, one has to rely on differences in the spatial arrangement of gray values of neighboring pixels, that is, on differences in texture. The problem of segmenting an image based on textural cues is referred to as the texture segmentation problem. Texture segmentation involves identifying regions with "uniform" textures in a given image. Appropriate measures of texture are needed in order to decide whether a given region has uniform texture. Sklansky(1) has suggested the following definition of texture, which is appropriate in the segmentation context: "A region in an image has a constant texture if a set of local statistics or other local properties of the picture are constant, slowly varying, or approximately periodic". Texture, therefore, has both local and global connotations: it is characterized by invariance of certain local measures or properties over an image region. The diversity of natural and artificial textures makes it impossible to give a universal definition of texture. A large number of techniques for analyzing image texture have been proposed in the past two decades.(2,3) In this paper, we focus on a particular approach to texture analysis which is referred to as the multi-channel filtering approach. This approach is inspired by a multi-channel filtering theory for processing visual information in the early stages of the human visual system. First proposed by Campbell and Robson,(4) the theory holds that the visual system decomposes the retinal image into a number of filtered images, each of which contains intensity variations over a narrow range of frequency (size) and orientation. The psychophysical experiments that suggested such a decomposition used various grating patterns as stimuli and were based on adaptation techniques. Subsequent psychophysiological experiments provided additional evidence supporting the theory. De Valois et al.,(5) for example, recorded the response of simple cells in the visual cortex of the Macaque monkey to sinusoidal gratings with different frequencies and orientations. It was observed that each cell responds to a narrow range of frequency and orientation only. Therefore, it appears that there are mechanisms in the visual cortex of mammals that are tuned to combinations of frequency and orientation in a narrow range. These mechanisms are often referred to as channels, and are appropriately interpreted as band-pass filters. The multi-channel filtering approach to texture analysis is intuitively appealing because it allows us to exploit differences in dominant sizes and orientations of different textures. Today, the need for a multi-resolution approach to texture analysis is well recognized. While other approaches to texture analysis have had to be extended to accommodate this paradigm, the multi-channel filtering approach is inherently multi-resolutional. |
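A short sketch of the described pipeline up to the feature images, using scikit-image's Gabor filters with a tanh nonlinearity and a local averaging window; filter frequencies, orientations and window size are illustrative, and the reconstruction-based filter selection is omitted:

```python
# Sketch: Gabor filter bank -> nonlinearity -> local "energy" per pixel,
# yielding one feature image per channel for subsequent clustering.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import gabor

def gabor_energy_features(image, frequencies=(0.1, 0.25), n_orient=4, win=9):
    feature_images = []
    for f in frequencies:
        for k in range(n_orient):
            real, _ = gabor(image, frequency=f, theta=np.pi * k / n_orient)
            energy = uniform_filter(np.abs(np.tanh(2.0 * real)), size=win)
            feature_images.append(energy)
    return np.stack(feature_images, axis=-1)   # (H, W, n_channels) features
```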
5757dd57950f6b3c4d90a342a170061c8c535536 | This paper presents a novel approach to the problem of computing the matching cost for stereo vision. The approach is based upon a Convolutional Neural Network that is used to compute the similarity of input patches from stereo image pairs. In combination with state-of-the-art stereo pipeline steps, the method achieves top results in major stereo benchmarks. The paper introduces the problem of stereo matching, discusses the proposed method and shows results from recent stereo datasets. |
4b65024cd376067156a5ac967899a7748fa31f6f | Unbounded, unordered, global-scale datasets are increasingly common in day-to-day business (e.g. Web logs, mobile usage statistics, and sensor networks). At the same time, consumers of these datasets have evolved sophisticated requirements, such as event-time ordering and windowing by features of the data themselves, in addition to an insatiable hunger for faster answers. Meanwhile, practicality dictates that one can never fully optimize along all dimensions of correctness, latency, and cost for these types of input. As a result, data processing practitioners are left with the quandary of how to reconcile the tensions between these seemingly competing propositions, often resulting in disparate implementations and systems. We propose that a fundamental shift of approach is necessary to deal with these evolved requirements in modern data processing. We as a field must stop trying to groom unbounded datasets into finite pools of information that eventually become complete, and instead live and breathe under the assumption that we will never know if or when we have seen all of our data, only that new data will arrive, old data may be retracted, and the only way to make this problem tractable is via principled abstractions that allow the practitioner the choice of appropriate tradeoffs along the axes of interest: correctness, latency, and cost. In this paper, we present one such approach, the Dataflow Model, along with a detailed examination of the semantics it enables, an overview of the core principles that guided its design, and a validation of the model itself via the real-world experiences that led to its development. We use the term "Dataflow Model" to describe the processing model of Google Cloud Dataflow [20], which is based upon technology from FlumeJava [12] and MillWheel [2]. |
40c3b350008ada8f3f53a758e69992b6db8a8f95 | Object detection has over the past few years converged on using linear SVMs over HOG features. Training linear SVMs however is quite expensive, and can become intractable as the number of categories increases. In this work we revisit a much older technique, viz. Linear Discriminant Analysis, and show that LDA models can be trained almost trivially, and with little or no loss in performance. The covariance matrices we estimate capture properties of natural images. Whitening HOG features with these covariances thus removes naturally occurring correlations between the HOG features. We show that these whitened features (which we call WHO) are considerably better than the original HOG features for computing similarities, and prove their usefulness in clustering. Finally, we use our findings to produce an object detection system that is competitive on PASCAL VOC 2007 while being considerably easier to train and test. |
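A sketch of the closed-form detector this result makes possible: with a background mean `mu0` and a covariance `S` estimated once from natural images, each new category needs only the mean of its positive HOG features and one linear solve; the small ridge term is a stability assumption added here:

```python
# Sketch: LDA detector w = S^{-1} (mu_pos - mu0) over HOG features;
# no per-category SVM training required.
import numpy as np

def lda_detector(pos_feats, mu0, S, eps=1e-2):
    mu_pos = pos_feats.mean(axis=0)              # mean HOG of category positives
    S_reg = S + eps * np.eye(S.shape[0])         # regularize before inversion
    return np.linalg.solve(S_reg, mu_pos - mu0)  # whitened-HOG weight vector
```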
28e0c6088cf444e8694e511148a8f19d9feaeb44 | This paper explores the behavior of a self-deploying helical pantograph antenna for CubeSats. The helical pantograph concept is described along with concepts for attachment to the satellite bus. Finite element folding simulations of a pantograph consisting of eight helices are presented and compared to compaction force experiments done on a prototype antenna. Reflection coefficient tests are also presented, demonstrating the operating frequency range of the prototype antenna. The helical pantograph is shown to be a promising alternative to current small satellite antenna solutions. |
1aad2da473888cb7ebc1bfaa15bfa0f1502ce005 | This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably. |
97876c2195ad9c7a4be010d5cb4ba6af3547421c | |
259c25242db4a0dc1e1b5e61fd059f8949bdb79d | Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points in Delaunay triangulations for mesh generation algorithms or simply computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on the Computational Geometry Algorithms Library (CGAL, http://www.cgal.org/). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention. |
ac4a2337afdf63e9b3480ce9025736d71f8cec1a | BACKGROUND
About 50% of the patients with advanced Parkinson's disease (PD) suffer from freezing of gait (FOG), which is a sudden and transient inability to walk. It often causes falls, interferes with daily activities and significantly impairs quality of life. Because gait deficits in PD patients are often resistant to pharmacologic treatment, effective non-pharmacologic treatments are of special interest.
OBJECTIVES
The goal of our study is to evaluate the concept of a wearable device that can obtain real-time gait data, process them and provide assistance based on pre-determined specifications.
METHODS
We developed a real-time wearable FOG detection system that automatically provides a cueing sound when FOG is detected, which persists until the subject resumes walking. We evaluated our wearable assistive technology in a study with 10 PD patients. Over eight hours of data were recorded and a questionnaire was filled out by each patient.
RESULTS
Two hundred and thirty-seven FOG events were identified by professional physiotherapists in a post-hoc video analysis. The device detected the FOG events online with a sensitivity of 73.1% and a specificity of 81.6% in a 0.5-second frame-based evaluation.
CONCLUSIONS
With this study we show that online assistive feedback for PD patients is possible. We present and discuss the patients' and physiotherapists' perspectives on wearability and performance of the wearable assistant as well as their gait performance when using the assistant and point out the next research steps. Our results demonstrate the benefit of such a context-aware system and motivate further studies. |
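A small sketch of the frame-based evaluation described above: both the ground-truth annotation and the detector output are discretized into 0.5 s frames labeled FOG / no-FOG, and sensitivity and specificity are computed over frames:

```python
# Sketch: frame-based sensitivity and specificity from boolean per-frame
# labels (True = FOG) for ground truth and detector output.
import numpy as np

def frame_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    sensitivity = tp / max(np.sum(y_true), 1)
    specificity = tn / max(np.sum(~y_true), 1)
    return sensitivity, specificity
```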
2a68c39e3586f87da501bc2a5ae6138469f50613 | A large body of research in supervised learning deals with the analysis of single-label data, where training examples are associated with a single label λ from a set of disjoint labels L. However, training examples in several application domains are often associated with a set of labels Y ⊆ L. Such data are called multi-label. Textual data, such as documents and web pages, are frequently annotated with more than a single label. For example, a news article concerning the reactions of the Christian church to the release of the "Da Vinci Code" film can be labeled as both religion and movies. The categorization of textual data is perhaps the dominant multi-label application. Recently, the issue of learning from multi-label data has attracted significant attention from many researchers, motivated by an increasing number of new applications, such as semantic annotation of images [1, 2, 3] and video [4, 5], functional genomics [6, 7, 8, 9, 10], music categorization into emotions [11, 12, 13, 14] and directed marketing [15]. Table 1 presents a variety of applications that are discussed in the literature. This chapter reviews past and recent work on the rapidly evolving research area of multi-label data mining. Section 2 defines the two major tasks in learning from multi-label data and presents a significant number of learning methods. Section 3 discusses dimensionality reduction methods for multi-label data. Sections 4 and 5 discuss two important research challenges, which, if successfully met, can significantly expand the real-world applications of multi-label learning methods: a) exploiting label structure and b) scaling up to domains with large numbers of labels. Section 6 introduces benchmark multi-label datasets and their statistics, while Section 7 presents the most frequently used evaluation measures for multi-label learning. |
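As a concrete baseline for the learning task defined above, a sketch of binary relevance, the simplest transformation method such surveys cover: one independent binary classifier per label. The feature matrix `X`, label indicator matrix `Y` and `X_new` are placeholders:

```python
# Sketch: binary relevance for multi-label classification with scikit-learn;
# Y is an (n, |L|) 0/1 indicator matrix of label sets.
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)                 # fits one classifier per label in L
Y_pred = clf.predict(X_new)   # predicted label set for each example
```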
60686a80b91ce9518428e00dea95dfafadadd93c | This communication presents a dual-port reconfigurable square patch antenna with polarization diversity for 2.4 GHz. By controlling the states of four p-i-n diodes on the patch, the polarization of the proposed antenna can be switched among linear polarization (LP), left- or right-hand circular polarization (CP) at each port. The air substrate and aperture-coupled feed structure are employed to simplify the bias circuit of p-i-n diodes. With high isolation and low cross-polarization level in LP modes, both ports can work simultaneously as a dual linearly polarized antenna for polarimetric radars. Different CP waves are obtained at each port, which are suitable for addressing challenges ranging from mobility, adverse weather conditions and non-line-of-sight applications. The antenna has advantages of simple biasing network, easy fabrication and adjustment, which can be widely applied in polarization diversity applications. |
0d11248c42d5a57bb28b00d64e21a32d31bcd760 | On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the Code-Red (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, we collected and analyzed data over a period of 45 days beginning July 2nd, 2001 to determine the characteristics of the spread of Code-Red throughout the Internet. In this paper, we describe the methodology we use to trace the spread of Code-Red, and then describe the results of our trace analyses. We first detail the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates. Even without being optimized for spread of infection, Code-Red infection rates peaked at over 2,000 hosts per minute. We then examine the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and ISPs. We demonstrate that the worm was an international event, infection activity exhibited time-of-day effects, and found that, although most attention focused on large corporations, the Code-Red worm primarily preyed upon home and small business users. We also quantified the effects of DHCP on measurements of infected hosts and determined that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the experience of the Code-Red worm demonstrates that wide-spread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and that techniques other than host patching are required to mitigate Internet worms. |
0462a4fcd991f8d6f814337882da182c504d1d7b | We present a new edition of the Google Books Ngram Corpus, which describes how often words and phrases were used over a period of five centuries, in eight languages; it reflects 6% of all books ever published. This new edition introduces syntactic annotations: words are tagged with their part-of-speech, and head-modifier relationships are recorded. The annotations are produced automatically with statistical models that are specifically adapted to historical text. The corpus will facilitate the study of linguistic trends, especially those related to the evolution of syntax. |