Dataset schema (one record per paper):
- title: string, length 7 to 239
- abstract: string, length 7 to 2.76k
- cs, phy, math, stat, quantitative biology, quantitative finance: int64, each 0 or 1
Each record below gives the paper title, the abstract, and a Labels line with the six binary subject indicators.
Fooling Sets and the Spanning Tree Polytope
In the study of extensions of polytopes of combinatorial optimization problems, a notorious open question is that of the size of the smallest extended formulation of the Minimum Spanning Tree problem on a complete graph with $n$ nodes. The best known lower bound is $\Omega(n^2)$; the best known upper bound is $O(n^3)$. In this note we show that the venerable fooling set method cannot be used to improve the lower bound: every fooling set for the Spanning Tree polytope has size $O(n^2)$.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
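For context (an editorial addition giving the standard definition, which the abstract does not spell out): a fooling set for a nonnegative matrix $M$, here the slack matrix of the polytope, is a set of entries $\{(i_1,j_1),\ldots,(i_t,j_t)\}$ with $M_{i_r j_r} \neq 0$ for all $r$ and $M_{i_r j_s}\,M_{i_s j_r} = 0$ for all $r \neq s$; the size $t$ of any fooling set lower-bounds the nonnegative rank of $M$ and hence the extension complexity of the polytope, which is why the $O(n^2)$ bound above rules out improving the $\Omega(n^2)$ lower bound by this method.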
From Azéma supermartingales of finite honest times to optional semimartingales of class-$(\Sigma)$
Given a finite honest time, we derive representations for the additive and multiplicative decompositions of its Azéma supermartingale in terms of optional supermartingales and their running supremum. We then extend the notion of semimartingales of class-$(\Sigma)$ to optional semimartingales with jumps in their finite variation parts, allowing one to establish formulas similar to the Madan-Roynette-Yor option pricing formulas for a larger class of processes. Finally, we introduce the optional multiplicative systems associated with positive submartingales and apply them to construct random times with a given Azéma supermartingale.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Single versus Double Blind Reviewing at WSDM 2017
In this paper we study the implications for conference program committees of using single-blind reviewing, in which committee members are aware of the names and affiliations of paper authors, versus double-blind reviewing, in which this information is not visible to committee members. WSDM 2017, the 10th ACM International Conference on Web Search and Data Mining, performed a controlled experiment in which each paper was reviewed by four committee members. Two of these four reviewers were chosen from a pool of committee members who had access to author information; the other two were chosen from a disjoint pool who did not have access to this information. This information asymmetry persisted through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers, and preferentially bid for papers from top institutions. Once papers were allocated to reviewers, single-blind reviewers were significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors and top institutions. The estimated odds multipliers are 1.63 for famous authors and 1.58 and 2.10 for top universities and companies respectively, so the result is tangible. For female authors, the associated odds multiplier of 0.78 is not statistically significant in our study. However, a meta-analysis places this value in line with that of other experiments, and in the context of this larger aggregate the gender effect is also statistically significant.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Scale-variant Topological Information for Characterizing Complex Networks
Real-world networks are difficult to characterize because of the variation of topological scales, the non-dyadic complex interactions, and the fluctuations. Here, we propose a general framework to address these problems via a methodology grounded in topological data analysis. By observing the diffusion process in a network at a single specified timescale, we can map the network nodes to a point cloud, which contains the topological information of the network at a single scale. We then calculate the point clouds constructed over variable timescales, which provide scale-variant topological information and enable a deep understanding of the network structure and functionality. Experiments on synthetic and real-world data demonstrate the effectiveness of our framework in identifying network models, classifying real-world networks and detecting transition points in time-evolving networks. Our work presents a unified analysis that is potentially applicable to more complicated network structures such as multilayer and multiplex networks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Topological orbital superfluid with chiral d-wave order in a rotating optical lattice
A topological superfluid is an exotic state of quantum matter that possesses a nodeless superfluid gap in the bulk and Andreev edge modes at the boundary of a finite system. Here, we study a multi-orbital superfluid driven by attractive s-wave interaction in a rotating optical lattice. Interestingly, we find that the rotation induces inter-orbital hybridization and drives the system into a topological orbital superfluid with intrinsically chiral d-wave pairing characteristics. Thanks to the conservation of spin, the topological orbital superfluid supports four rather than two chiral Andreev edge modes at the boundary of the lattice. Moreover, we find that the intrinsic harmonic confining potential forms a circular spatial barrier which accumulates atoms and supports a mass current under injection of small angular momentum as an external driving force. This feature provides an experimentally detectable phenomenon to verify the topological orbital superfluid with chiral d-wave order in a rotating optical lattice.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimal client recommendation for market makers in illiquid financial products
The process of liquidity provision in financial markets can result in prolonged exposure to illiquid instruments for market makers. In this case, where a proprietary position is not desired, pro-actively targeting the right client who is likely to be interested can be an effective means to offset this position, rather than relying on commensurate interest arising through natural demand. In this paper, we consider the inference of a client profile for the purpose of corporate bond recommendation, based on typical recorded information available to the market maker. Given a historical record of corporate bond transactions and bond meta-data, we use a topic-modelling analogy to develop a probabilistic technique for compiling a curated list of client recommendations for a particular bond that needs to be traded, ranked by probability of interest. We show that a model based on Latent Dirichlet Allocation offers promising performance to deliver relevant recommendations for sales traders.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
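An illustrative sketch (editorial, not the authors' code) of the topic-modelling recipe this abstract describes, using scikit-learn's LDA on a synthetic client-by-bond trade-count matrix; all names and numbers are hypothetical:

```python
# Editorial sketch: LDA-based client recommendation in the spirit of the abstract.
# Synthetic data; entry (i, j) counts client i's past trades in bond j.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_clients, n_bonds, n_topics = 50, 200, 8
counts = rng.poisson(0.3, size=(n_clients, n_bonds))

lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
client_topics = lda.fit_transform(counts)  # per-client topic mixture, rows sum to 1
topic_bonds = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # P(bond | topic)

def recommend_clients(bond_id: int, top_k: int = 5):
    """Rank clients by estimated probability of interest in one bond."""
    scores = client_topics @ topic_bonds[:, bond_id]  # P(bond | client) under the model
    return np.argsort(scores)[::-1][:top_k]

print(recommend_clients(bond_id=17))
```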
Design of an Audio Interface for Patmos
This paper describes the design and implementation of an audio interface for the Patmos processor, which runs on an Altera DE2-115 FPGA board. This board includes an audio codec, the WM8731. The interface described in this work makes it possible to receive audio from and send audio to the WM8731, and to synthesize, store or manipulate audio signals by writing C programs for Patmos. The audio interface described in this paper is intended to be used with the Patmos processor. Patmos is an open-source RISC ISA with a load-store architecture that is optimized for real-time systems. Patmos is part of a project funded by the European Union called T-CREST (Time-predictable Multi-Core Architecture for Embedded Systems) [5]. The structure of this project is integrated with the Patmos project: new hardware modules have been added as IOs, which allow communication between the processor and the audio codec. These modules include a clock generator for the audio chip, ADC and DAC modules for the audio conversion from analog to digital and vice versa, and an I2C module which allows setting configuration parameters on the audio codec. Moreover, a top module has been created, which connects all the aforementioned modules to one another, to Patmos and to the WM8731, using the external pins of the FPGA.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Classifying Variational Autoencoder with Application to Polyphonic Music Generation
The variational autoencoder (VAE) is a popular probabilistic generative model. However, one shortcoming of VAEs is that the latent variables cannot be discrete, which makes it difficult to generate data from different modes of a distribution. Here, we propose an extension of the VAE framework that incorporates a classifier to infer the discrete class of the modeled data. To model sequential data, we can combine our Classifying VAE with a recurrent neural network such as an LSTM. We apply this model to algorithmic music generation, where our model learns to generate musical sequences in different keys. Most previous work in this area avoids modeling key by transposing data into only one or two keys, as opposed to the 10+ different keys in the original music. We show that our Classifying VAE and Classifying VAE+LSTM models outperform the corresponding non-classifying models in generating musical samples that stay in key. This benefit is especially apparent when trained on untransposed music data in the original keys.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On Estimation of Isotonic Piecewise Constant Signals
Consider a sequence of real data points $X_1,\ldots, X_n$ with underlying means $\theta^*_1,\dots,\theta^*_n$. This paper starts from studying the setting that $\theta^*_i$ is both piecewise constant and monotone as a function of the index $i$. For this, we establish the exact minimax rate of estimating such monotone functions, and thus give a non-trivial answer to an open problem in the shape-constrained analysis literature. The minimax rate involves an interesting iterated logarithmic dependence on the dimension, a phenomenon that is revealed through characterizing the interplay between the isotonic shape constraint and model selection complexity. We then develop a penalized least-squares procedure for estimating the vector $\theta^*=(\theta^*_1,\dots,\theta^*_n)^T$. This estimator is shown to achieve the derived minimax rate adaptively. For the proposed estimator, we further allow the model to be misspecified and derive oracle inequalities with the optimal rates, and show there exists a computationally efficient algorithm to compute the exact solution.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
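A minimal synthetic illustration (editorial) of the signal class studied above, fit here with scikit-learn's generic isotonic least squares rather than the paper's penalized, adaptive estimator:

```python
# Editorial sketch: a monotone, piecewise-constant mean vector plus noise, fit by
# plain isotonic regression. The paper's penalized estimator is NOT implemented here.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n = 300
theta = np.repeat([0.0, 1.0, 1.5, 3.0], n // 4)   # monotone, piecewise-constant means
x = theta + rng.normal(scale=0.5, size=n)         # observed sequence X_1, ..., X_n

fit = IsotonicRegression().fit_transform(np.arange(n), x)
print("mean squared error:", np.mean((fit - theta) ** 2))
```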
Deep Stacked Stochastic Configuration Networks for Non-Stationary Data Streams
The concept of stochastic configuration networks (SCNs) offers a solid framework for fast implementation of feedforward neural networks through randomized learning. Unlike conventional randomized approaches, SCNs provide an avenue to select an appropriate scope of random parameters to ensure the universal approximation property. In this paper, a deep version of stochastic configuration networks, namely the deep stacked stochastic configuration network (DSSCN), is proposed for modeling non-stationary data streams. As an extension of evolving stochastic configuration networks (eSCNs), this work contributes a way to grow and shrink the structure of deep stochastic configuration networks autonomously from data streams. The performance of DSSCN is evaluated on six benchmark datasets. Simulation results, compared with prominent data stream algorithms, show that the proposed method is capable of achieving comparable accuracy and of evolving a compact and parsimonious deep stacked network architecture.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Extracting Syntactic Patterns from Databases
Many database columns contain string or numerical data that conforms to a pattern, such as phone numbers, dates, addresses, product identifiers, and employee ids. These patterns are useful in a number of data processing applications, including understanding what a specific field represents when field names are ambiguous, identifying outlier values, and finding similar fields across data sets. One way to express such patterns would be to learn regular expressions for each field in the database. Unfortunately, existing techniques on regular expression learning are slow, taking hundreds of seconds for columns of just a few thousand values. In contrast, we develop XSystem, an efficient method to learn patterns over database columns in significantly less time. We show that these patterns can not only be built quickly, but are expressive enough to capture a number of key applications, including detecting outliers, measuring column similarity, and assigning semantic labels to columns (based on a library of regular expressions). We evaluate these applications with datasets that range from chemical databases (based on a collaboration with a pharmaceutical company) to our university data warehouse and open data from MassData.gov.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
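A toy sketch (editorial; far simpler than XSystem itself) of the underlying idea of summarizing a column by syntactic patterns, here via character-class signatures:

```python
# Editorial sketch: summarize a database column by mapping each value to a
# character-class signature and counting the signatures. Rare signatures can
# then be flagged as outliers, and signature overlap gives column similarity.
from collections import Counter

def signature(value: str) -> str:
    out = []
    for ch in value:
        if ch.isdigit():
            out.append("d")
        elif ch.isalpha():
            out.append("a")
        else:
            out.append(ch)  # keep punctuation literally
    return "".join(out)

column = ["617-253-1000", "415-555-0199", "202-555-0144", "N/A"]
patterns = Counter(signature(v) for v in column)
print(patterns.most_common())  # [('ddd-ddd-dddd', 3), ('a/a', 1)]
```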
Aggressive Sampling for Multi-class to Binary Reduction with Applications to Text Classification
We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in the majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
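A rough sketch (editorial; the paper's exact joint feature map and double sampling scheme differ) of turning a multi-class problem into binary examples over pairs, with a few sampled negative classes per example:

```python
# Editorial sketch of a multi-class -> binary reduction with per-example class
# sampling. The joint features and sampling rule are simple stand-ins, not the
# paper's scheme.
import numpy as np

rng = np.random.default_rng(2)
n, d, K, neg_per_example = 100, 20, 50, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, K, size=n)
W = rng.normal(size=(K, d))  # stand-in class embeddings: phi(x, k) = x * W[k]

pairs, labels = [], []
for i in range(n):
    sampled = rng.choice([k for k in range(K) if k != y[i]],
                         size=neg_per_example, replace=False)
    for k in sampled:  # one binary example per (true class, sampled class) pair
        diff = X[i] * (W[y[i]] - W[k])  # elementwise joint-feature difference
        pairs.append(diff);  labels.append(1)   # true class should win
        pairs.append(-diff); labels.append(0)   # reversed orientation
binary_X, binary_y = np.array(pairs), np.array(labels)
print(binary_X.shape, binary_y.shape)  # (1000, 20) (1000,)
```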
Understanding and predicting travel time with spatio-temporal features of network traffic flow, weather and incidents
Travel time on a route varies substantially by time of day and from day to day. It is critical to understand to what extent this variation is correlated with various factors, such as weather, incidents, events or travel demand level in the context of dynamic networks. This supports better decision-making for infrastructure planning and real-time traffic operation. We propose a data-driven approach to understand and predict highway travel time using spatio-temporal features of those factors, all of which are acquired from multiple data sources. The prediction model holistically selects the most related features from a high-dimensional feature space by correlation analysis, principal component analysis and LASSO. We test and compare the performance of several regression models in predicting travel time 30 min in advance via two case studies: (1) a 6-mile highway corridor of I-270N in the D.C. region, and (2) a 2.3-mile corridor of I-376E in the Pittsburgh region. We found that some bottlenecks scattered in the network can imply congestion on those corridors at least 30 minutes in advance, including those on the alternative route to the corridors of study. In addition, real-time travel time is statistically related to incidents at some specific locations, morning/afternoon travel demand, visibility, precipitation, wind speed/gust and the weather type. All this spatio-temporal information together helps improve prediction accuracy, compared to using only speed data. In both case studies, random forest shows the most promise, reaching a root-mean-squared error of 16.6\% and 17.0\% respectively in afternoon peak hours for the entire year of 2014.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
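An illustrative pipeline (editorial, on synthetic data with hypothetical feature semantics) in the spirit of the abstract: LASSO-based feature selection followed by a random forest regressor:

```python
# Editorial sketch: sparse feature selection with LassoCV, then a random forest.
# Features stand in for speeds, weather, incidents, demand, etc.; data is synthetic.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n, p = 500, 40
X = rng.normal(size=(n, p))
travel_time = 20 + 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(scale=1.0, size=n)

lasso = LassoCV(cv=5).fit(X, travel_time)        # sparse linear fit selects features
selected = np.flatnonzero(lasso.coef_ != 0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:, selected], travel_time)              # in the real setup: predict 30 min ahead
pred = rf.predict(X[:, selected])
rmse = np.sqrt(np.mean((pred - travel_time) ** 2))
print(f"selected {selected.size} features, in-sample RMSE {rmse:.2f}")
```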
The power of sum-of-squares for detecting hidden structures
We study planted problems---finding hidden structures in random noisy inputs---through the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems, often achieving the best known polynomial-time guarantees in terms of accuracy of recovered solutions and robustness to noise. One theme in recent work is the design of spectral algorithms which match the guarantees of SoS algorithms for planted problems. Classical spectral algorithms are often unable to accomplish this: the twist in these new spectral algorithms is the use of spectral structure of matrices whose entries are low-degree polynomials of the input variables. We prove that for a wide class of planted problems, including refuting random constraint satisfaction problems, tensor and sparse PCA, densest-k-subgraph, community detection in stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials are as powerful as SoS semidefinite programs of roughly degree d. For such problems it is therefore always possible to match the guarantees of SoS without solving a large semidefinite program. Using related ideas on SoS algorithms and low-degree matrix polynomials (and inspired by recent work on SoS and the planted clique problem by Barak et al.), we prove new nearly-tight SoS lower bounds for the tensor and sparse principal component analysis problems. Our lower bounds for sparse principal component analysis are the first to suggest that going beyond existing algorithms for this problem may require sub-exponential time.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An active-learning algorithm that combines sparse polynomial chaos expansions and bootstrap for structural reliability analysis
Polynomial chaos expansions (PCE) have seen widespread use in the context of uncertainty quantification. However, their application to structural reliability problems has been hindered by the limited performance of PCE in the tails of the model response and due to the lack of local metamodel error estimates. We propose a new method to provide local metamodel error estimates based on bootstrap resampling and sparse PCE. An initial experimental design is iteratively updated based on the current estimation of the limit-state surface in an active learning algorithm. The greedy algorithm uses the bootstrap-based local error estimates for the polynomial chaos predictor to identify the best candidate set of points to enrich the experimental design. We demonstrate the effectiveness of this approach on a well-known analytical benchmark representing a series system, on a truss structure and on a complex realistic frame structure problem.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Experimental Tests of Spirituality
We currently harness technologies that could shed new light on old philosophical questions, such as whether our mind entails anything beyond our body or whether our moral values reflect universal truth.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Chern classes and Gromov--Witten theory of projective bundles
We prove that the Gromov--Witten theory (GWT) of a projective bundle can be determined by the Chern classes and the GWT of the base. This completely answers a question raised in a previous paper (arXiv:1607.00740). Its consequences include that the GWT of the blow-up of X at a smooth subvariety Z is uniquely determined by the GWT of X and Z, plus some topological data.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Numerical Gaussian Processes for Time-dependent and Non-linear Partial Differential Equations
We introduce the concept of numerical Gaussian processes, which we define as Gaussian processes with covariance functions resulting from temporal discretization of time-dependent partial differential equations. Numerical Gaussian processes, by construction, are designed to deal with cases where: (1) all we observe are noisy data on black-box initial conditions, and (2) we are interested in quantifying the uncertainty associated with such noisy data in our solutions to time-dependent partial differential equations. Our method circumvents the need for spatial discretization of the differential operators by proper placement of Gaussian process priors. This is an attempt to construct structured and data-efficient learning machines, which are explicitly informed by the underlying physics that possibly generated the observed data. The effectiveness of the proposed approach is demonstrated through several benchmark problems involving linear and nonlinear time-dependent operators. In all examples, we are able to recover accurate approximations of the latent solutions, and consistently propagate uncertainty, even in cases involving very long time integration.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Learning from Experience: A Dynamic Closed-Loop QoE Optimization for Video Adaptation and Delivery
The quality of experience (QoE) is known to be subjective and context-dependent. Identifying and calculating the factors that affect QoE is indeed a difficult task. Recently, a lot of effort has been devoted to estimating users' QoE in order to improve video delivery. In the literature, most of the QoE-driven optimization schemes that realize trade-offs among different quality metrics have been addressed under the assumption of homogeneous populations. Nevertheless, people's perceptions of a given video quality may not be the same, which makes QoE optimization harder. This paper aims to take a step further in order to address this limitation and meet users' profiles. To do so, we propose a closed-loop control framework based on the users' (subjective) feedback to learn the QoE function and optimize it at the same time. Our simulation results show that our system converges to a steady state, where the resulting QoE function noticeably improves the users' feedback.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deep CNN based feature extractor for text-prompted speaker recognition
Deep learning is still not a very common tool in the speaker verification field. We study deep convolutional neural network performance in the text-prompted speaker verification task. The prompted passphrase is segmented into word states, i.e. digits, to test each digit utterance separately. We train a single high-level feature extractor for all states and use the cosine similarity metric for scoring. The key feature of our network is the Max-Feature-Map activation function, which acts as an embedded feature selector. By using a multitask learning scheme to train the high-level feature extractor, we were able to surpass the classic baseline systems in terms of quality and achieved impressive results for such a novel approach, obtaining 2.85% EER on the RSR2015 evaluation set. Fusion of the proposed and the baseline systems improves this result.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
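A minimal sketch (editorial) of the cosine-similarity scoring step mentioned in the abstract; the embeddings below are random stand-ins for the CNN's high-level features:

```python
# Editorial sketch: cosine scoring between an enrollment embedding and a test
# utterance embedding produced by a speaker feature extractor.
import numpy as np

def cosine_score(enroll: np.ndarray, test: np.ndarray) -> float:
    return float(enroll @ test / (np.linalg.norm(enroll) * np.linalg.norm(test)))

rng = np.random.default_rng(4)
enroll_emb = rng.normal(size=256)          # stand-in for the extractor's output
test_emb = enroll_emb + 0.3 * rng.normal(size=256)
print(cosine_score(enroll_emb, test_emb))  # close to 1.0 for a matching speaker/digit
```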
Data-Injection Attacks in Stochastic Control Systems: Detectability and Performance Tradeoffs
Consider a stochastic process being controlled across a communication channel. The control signal that is transmitted across the control channel can be replaced by a malicious attacker. The controller is allowed to implement any arbitrary detection algorithm to detect if an attacker is present. This work characterizes some fundamental limitations of when such an attack can be detected, and quantifies the performance degradation that an attacker that seeks to be undetected or stealthy can introduce.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Efficient Version-Space Reduction for Visual Tracking
Discriminative trackers employ a classification approach to separate the target from its background. To cope with variations of the target shape and appearance, the classifier is updated online with different samples of the target and the background. Sample selection, labeling and updating the classifier are prone to various sources of errors that drift the tracker. We introduce the use of an efficient version-space shrinking strategy to reduce the labeling errors and enhance the sampling strategy by measuring the uncertainty of the tracker about the samples. The proposed tracker utilizes an ensemble of classifiers that represents different hypotheses about the target, diversifies them using boosting to provide a larger and more consistent coverage of the version space, and tunes the classifiers' weights in voting. The proposed system adjusts the model update rate by promoting the co-training of the short-memory ensemble with a long-memory oracle. The proposed tracker outperformed state-of-the-art trackers on different sequences bearing various tracking challenges.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Context2Name: A Deep Learning-Based Approach to Infer Natural Variable Names from Usage Contexts
Most of the JavaScript code deployed in the wild has been minified, a process in which identifier names are replaced with short, arbitrary and meaningless names. Minified code occupies less space, but also makes the code extremely difficult to manually inspect and understand. This paper presents Context2Name, a deep learning-based technique that partially reverses the effect of minification by predicting natural identifier names for minified names. The core idea is to predict from the usage context of a variable a name that captures the meaning of the variable. The approach combines a lightweight, token-based static analysis with an auto-encoder neural network that summarizes usage contexts and a recurrent neural network that predicts natural names for a given usage context. We evaluate Context2Name with a large corpus of real-world JavaScript code and show that it successfully predicts 47.5% of all minified identifiers while taking only 2.9 milliseconds on average to predict a name. A comparison with the state-of-the-art tools JSNice and JSNaughty shows that our approach performs comparably in terms of accuracy while improving in terms of efficiency. Moreover, Context2Name complements the state-of-the-art by predicting 5.3% additional identifiers that are missed by both existing tools.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Reconstructing fluid dynamics with micro-finite element
In the theory of the Navier-Stokes equations, the viscous fluid in incompressible flow is modelled as a homogeneous and dense assemblage of constituent "fluid particles" with viscous stress proportional to rate of strain. The crucial concept of fluid flow is the velocity of the particle that is accelerated by the pressure and viscous interaction around it. In this paper, by virtue of the alternative constituent "micro-finite element", we introduce a set of new intrinsic quantities, called the vortex fields, to characterise the relative orientation between elements and the feature of micro-eddies in the element, while the description of viscous interaction in fluid returns to the initial intuition that the interlayer friction is proportional to the slip strength. Such a framework enables us to reconstruct the dynamics theory of viscous fluid, in which the flowing fluid can be modelled as a finite covering of elements and consequently indicated by a space-time differential manifold that admits complex topological evolution.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Towards Black-box Iterative Machine Teaching
In this paper, we make an important step towards black-box machine teaching by considering cross-space machine teaching, where the teacher and the learner use different feature representations and the teacher cannot fully observe the learner's model. In such a scenario, we study how the teacher is still able to teach the learner to achieve a faster convergence rate than traditional passive learning. We propose an active teacher model that can actively query the learner (i.e., make the learner take exams) to estimate the learner's status and provably guide the learner to achieve faster convergence. The sample complexities for both teaching and query are provided. In the experiments, we compare the proposed active teacher with the omniscient teacher and verify the effectiveness of the active teacher model.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On Generalization and Regularization in Deep Learning
Why do large neural networks generalize so well on complex tasks such as image classification or speech recognition? What exactly is the role of regularization for them? These are arguably among the most important open questions in machine learning today. In a recent and thought-provoking paper [C. Zhang et al.], several authors performed a number of numerical experiments that hint at the need for novel theoretical concepts to account for this phenomenon. The paper stirred quite a lot of excitement in the machine learning community, but at the same time it created some confusion, as discussions on OpenReview.net testify. The aim of this pedagogical paper is to make this debate accessible to a wider audience of data scientists without advanced theoretical knowledge in statistical learning. The focus here is on explicit mathematical definitions and on a discussion of relevant concepts, not on proofs, for which we provide references.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Nonparametric Inference for Auto-Encoding Variational Bayes
We would like to learn latent representations that are low-dimensional and highly interpretable. A model that has these characteristics is the Gaussian Process Latent Variable Model (GP-LVM). The benefits and drawbacks of the GP-LVM are complementary to those of the Variational Autoencoder (VAE): the former provides interpretable low-dimensional latent representations, while the latter is able to handle large amounts of data and can use non-Gaussian likelihoods. Our aim in this paper is to marry these two approaches and reap the benefits of both. To do so, we introduce a novel approximate inference scheme inspired by the GP-LVM and the VAE. We show experimentally that the approximation allows the capacity of the generative bottleneck (Z) of the VAE to be arbitrarily large without losing a highly interpretable representation: reconstruction quality is not limited by Z, while at the same time a low-dimensional space can be used both for ancestral sampling and as a means to reason about the embedded data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Imbalanced Malware Images Classification: a CNN based Approach
Deep convolutional neural networks (CNNs) can be applied to malware binary detection through image classification. The performance, however, is degraded due to the imbalance of malware families (classes). To mitigate this issue, we propose a simple yet effective weighted softmax loss which can be employed as the final layer of deep CNNs. The original softmax loss is weighted, and the weight value can be determined according to class size. A scaling parameter is also included in computing the weight. Proper selection of this parameter has been studied and an empirical option is given. The weighted loss aims at alleviating the impact of data imbalance in an end-to-end learning fashion. To validate its efficacy, we deploy the proposed weighted loss in a pre-trained deep CNN model and fine-tune it to achieve promising results on malware image classification. Extensive experiments also indicate that the new loss function can fit other typical CNNs with improved classification performance.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
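A sketch (editorial) of a class-size-weighted softmax loss in PyTorch; the weighting rule w_k = (N / N_k)^beta with scaling parameter beta is one plausible choice, not necessarily the paper's formula:

```python
# Editorial sketch: weighted softmax (cross-entropy) loss with weights derived
# from class sizes. The weighting rule here is an assumption, not the paper's.
import torch
import torch.nn as nn

class_counts = torch.tensor([5000., 1200., 300., 80.])  # imbalanced malware families
beta = 0.5                                              # scaling parameter
weights = (class_counts.sum() / class_counts) ** beta   # rarer class -> larger weight
weights = weights / weights.mean()                      # normalize around 1

criterion = nn.CrossEntropyLoss(weight=weights)         # weighted softmax loss layer
logits = torch.randn(16, 4)                             # a batch of CNN outputs
targets = torch.randint(0, 4, (16,))
print(criterion(logits, targets))
```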
Integrated optical force sensors using focusing photonic crystal arrays
Mechanical oscillators are at the heart of many sensor applications. Recently several groups have developed oscillators that are probed optically, fabricated from high-stress silicon nitride films. They exhibit outstanding force sensitivities of a few aN/Hz$^{1/2}$ and can also be made highly reflective, for efficient detection. The optical read-out usually requires complex experimental setups, including positioning stages and bulky cavities, making them impractical for real applications. In this paper we propose a novel way of building fully integrated all-optical force sensors based on low-loss silicon nitride mechanical resonators with a photonic crystal reflector. We can circumvent previous limitations in stability and complexity by simulating a suspended focusing photonic crystal, purely made of silicon nitride. Our design allows for an all integrated sensor, built out of a single block that integrates a full Fabry-Pérot cavity, without the need for assembly or alignment. The presented simulations will allow for a radical simplification of sensors based on high-Q silicon nitride membranes. Our results comprise, to the best of our knowledge, the first simulations of a focusing mirror made from a mechanically suspended flat membrane with subwavelength thickness. Cavity lengths between a few hundred $\mu$m and mm should be directly realizable.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Comparison of Signaling and Media Approaches to Detect VoIP SPIT Attack
IP networks have become the most dominant type of information networks nowadays. They provide a number of services and make it easy for users to stay connected. IP networks provide an efficient means of voice communication with a large number of services compared to other approaches. This has led to the migration of voice calls to IP networks. Despite the wide range of IP network services, their availability and capabilities, there is still a large number of security threats that affect IP networks, and these certainly affect the services built on them, voice being one of them. This paper discusses the reasons for migrating voice calls from legacy networks to IP networks, the requirements an IP network must meet to support voice transport, and, in particular, the SPIT attack and its detection methods. Experiments took place to compare the different approaches used to detect spam over VoIP networks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Coupling functions: Universal insights into dynamical interaction mechanisms
The dynamical systems found in Nature are rarely isolated. Instead they interact and influence each other. The coupling functions that connect them contain detailed information about the functional mechanisms underlying the interactions and prescribe the physical rule specifying how an interaction occurs. Here, we aim to present a coherent and comprehensive review encompassing the rapid progress made recently in the analysis, understanding and applications of coupling functions. The basic concepts and characteristics of coupling functions are presented through demonstrative examples of different domains, revealing the mechanisms and emphasizing their multivariate nature. The theory of coupling functions is discussed through gradually increasing complexity from strong and weak interactions to globally-coupled systems and networks. A variety of methods that have been developed for the detection and reconstruction of coupling functions from measured data is described. These methods are based on different statistical techniques for dynamical inference. Stemming from physics, such methods are being applied in diverse areas of science and technology, including chemistry, biology, physiology, neuroscience, social sciences, mechanics and secure communications. This breadth of application illustrates the universality of coupling functions for studying the interaction mechanisms of coupled dynamical systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Uncertainty quantification for radio interferometric imaging: II. MAP estimation
Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately $10^5$ times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Semialgebraic Invariant Synthesis for the Kannan-Lipton Orbit Problem
The \emph{Orbit Problem} consists of determining, given a linear transformation $A$ on $\mathbb{Q}^d$, together with vectors $x$ and $y$, whether the orbit of $x$ under repeated applications of $A$ can ever reach $y$. This problem was famously shown to be decidable by Kannan and Lipton in the 1980s. In this paper, we are concerned with the problem of synthesising suitable \emph{invariants} $\mathcal{P} \subseteq \mathbb{R}^d$, \emph{i.e.}, sets that are stable under $A$ and contain $x$ and not $y$, thereby providing compact and versatile certificates of non-reachability. We show that whether a given instance of the Orbit Problem admits a semialgebraic invariant is decidable, and moreover in positive instances we provide an algorithm to synthesise suitable invariants of polynomial size. It is worth noting that the existence of \emph{semilinear} invariants, on the other hand, is (to the best of our knowledge) not known to be decidable.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Design and characterization of the Large-Aperture Experiment to Detect the Dark Age (LEDA) radiometer systems
The Large-Aperture Experiment to Detect the Dark Age (LEDA) was designed to detect the predicted O(100)mK sky-averaged absorption of the Cosmic Microwave Background by Hydrogen in the neutral pre- and intergalactic medium just after the cosmological Dark Age. The spectral signature would be associated with emergence of a diffuse Ly$\alpha$ background from starlight during 'Cosmic Dawn'. Recently, Bowman et al. (2018) have reported detection of this predicted absorption feature, with an unexpectedly large amplitude of 530 mK, centered at 78 MHz. Verification of this result by an independent experiment, such as LEDA, is pressing. In this paper, we detail design and characterization of the LEDA radiometer systems, and a first-generation pipeline that instantiates a signal path model. Sited at the Owens Valley Radio Observatory Long Wavelength Array, LEDA systems include the station correlator, five well-separated redundant dual polarization radiometers and backend electronics. The radiometers deliver a 30-85MHz band (16<z<34) and operate as part of the larger interferometric array, for purposes ultimately of in situ calibration. Here, we report on the LEDA system design, calibration approach, and progress in characterization as of January 2016. The LEDA systems are currently being modified to improve performance near 78 MHz in order to verify the purported absorption feature.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Automatic Bayesian Density Analysis
Making sense of a dataset in an automatic and unsupervised fashion is a challenging problem in statistics and AI. Classical approaches for density estimation are usually not flexible enough to deal with the uncertainty inherent to real-world data: they are often restricted to fixed latent interaction models and homogeneous likelihoods; they are sensitive to missing, corrupt and anomalous data; moreover, their expressiveness generally comes at the price of intractable inference. As a result, supervision from statisticians is usually needed to find the right model for the data. However, as domain experts do not necessarily have to be experts in statistics, we propose Automatic Bayesian Density Analysis (ABDA) to make density estimation accessible at large. ABDA automates the selection of adequate likelihood models from arbitrarily rich dictionaries while modeling their interactions via a deep latent structure adaptively learned from data as a sum-product network. ABDA casts uncertainty estimation at these local and global levels into a joint Bayesian inference problem, providing robust and yet tractable inference. Extensive empirical evidence shows that ABDA is a suitable tool for automatic exploratory analysis of heterogeneous tabular data, allowing for missing value estimation, statistical data type and likelihood discovery, anomaly detection and dependency structure mining, on top of providing accurate density estimation.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
MLCapsule: Guarded Offline Deployment of Machine Learning as a Service
With the widespread use of machine learning (ML) techniques, ML as a service has become increasingly popular. In this setting, an ML model resides on a server and users can query the model with their data via an API. However, if the user's input is sensitive, sending it to the server is not an option. Equally, the service provider does not want to share the model by sending it to the client for protecting its intellectual property and pay-per-query business model. In this paper, we propose MLCapsule, a guarded offline deployment of machine learning as a service. MLCapsule executes the machine learning model locally on the user's client and therefore the data never leaves the client. Meanwhile, MLCapsule offers the service provider the same level of control and security of its model as the commonly used server-side execution. In addition, MLCapsule is applicable to offline applications that require local execution. Beyond protecting against direct model access, we demonstrate that MLCapsule allows for implementing defenses against advanced attacks on machine learning models such as model stealing/reverse engineering and membership inference.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Brownian forgery of statistical dependences
The balance held by Brownian motion between temporal regularity and randomness is embodied in a remarkable way by Lévy's forgery of continuous functions. Here we describe how this property can be extended to forge arbitrary dependences between two statistical systems, and then establish a new Brownian independence test based on fluctuating random paths. We also argue that this result allows revisiting the theory of Brownian covariance from a physical perspective and opens the possibility of engineering nonlinear correlation measures from more general functional integrals.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Internal sizes in $\mu$-abstract elementary classes
Working in the context of $\mu$-abstract elementary classes ($\mu$-AECs) - or, equivalently, accessible categories with all morphisms monomorphisms - we examine the two natural notions of size that occur, namely cardinality of underlying sets and internal size. The latter, purely category-theoretic, notion generalizes e.g. density character in complete metric spaces and cardinality of orthogonal bases in Hilbert spaces. We consider the relationship between these notions under mild set-theoretic hypotheses, including weakenings of the singular cardinal hypothesis. We also establish preliminary results on the existence and categoricity spectra of $\mu$-AECs, including specific examples showing dramatic failures of the eventual categoricity conjecture (with categoricity defined using cardinality) in $\mu$-AECs.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Seasonal forecasts of the summer 2016 Yangtze River basin rainfall
The Yangtze River has been subject to heavy flooding throughout history, and in recent times severe floods such as those in 1998 have resulted in heavy loss of life and livelihoods. Dams along the river help to manage flood waters, and are important sources of electricity for the region. Being able to forecast high-impact events at long lead times therefore has enormous potential benefit. Recent improvements in seasonal forecasting mean that dynamical climate models can start to be used directly for operational services. The teleconnection from El Niño to Yangtze River basin rainfall meant that the strong El Niño in winter 2015/2016 provided a valuable opportunity to test the application of a dynamical forecast system. This paper therefore presents a case study of a real time seasonal forecast for the Yangtze River basin, building on previous work demonstrating the retrospective skill of such a forecast. A simple forecasting methodology is presented, in which the forecast probabilities are derived from the historical relationship between hindcast and observations. Its performance for 2016 is discussed. The heavy rainfall in the May-June-July period was correctly forecast well in advance. August saw anomalously low rainfall, and the forecasts for the June-July-August period correctly showed closer to average levels. The forecasts contributed to the confidence of decision-makers across the Yangtze River basin. Trials of climate services such as this help to promote appropriate use of seasonal forecasts, and highlight areas for future improvements.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Modeling Interference Via Symmetric Treatment Decomposition
Classical causal inference assumes a treatment meant for a given unit does not have an effect on other units. When this "no interference" assumption is violated, new types of spillover causal effects arise, and causal inference becomes much more difficult. In addition, interference introduces a unique complication where outcomes may transmit treatment influences to each other, which is a relationship that has some features of a causal one, but is symmetric. In settings where detailed temporal information on outcomes is not available, addressing this complication using statistical inference methods based on Directed Acyclic Graphs (DAGs) (Ogburn & VanderWeele, 2014) leads to conceptual difficulties. In this paper, we develop a new approach to decomposing the spillover effect into direct (also known as the contagion effect) and indirect (also known as the infectiousness effect) components that extends the DAG based treatment decomposition approach to mediation found in (Robins & Richardson, 2010) to causal chain graph models (Lauritzen & Richardson, 2002). We show that when these components of the spillover effect are identified in these models, they have an identifying functional, which we call the symmetric mediation formula, that generalizes the mediation formula in DAGs (Pearl, 2011). We further show that, unlike assumptions in classical mediation analysis, an assumption permitting identification in our setting leads to restrictions on the observed data law, making the assumption empirically falsifiable. Finally, we discuss statistical inference for the components of the spillover effect in the special case of two interacting outcomes, and discuss a maximum likelihood estimator, and a doubly robust estimator.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Bayesian algorithm for distributed network localization using distance and direction data
A reliable, accurate, and affordable positioning service is highly required in wireless networks. In this paper, the novel Message Passing Hybrid Localization (MPHL) algorithm is proposed to solve the problem of cooperative distributed localization using distance and direction estimates. This hybrid approach combines two sensing modalities to reduce the uncertainty in localizing the network nodes. A statistical model is formulated for the problem, and approximate minimum mean square error (MMSE) estimates of the node locations are computed. The proposed MPHL is a distributed algorithm based on belief propagation (BP) and Markov chain Monte Carlo (MCMC) sampling. It improves the identifiability of the localization problem and reduces its sensitivity to the anchor node geometry, compared to distance-only or direction-only localization techniques. For example, the unknown location of a node can be found if it has only a single neighbor; and a whole network can be localized using only a single anchor node. Numerical results are presented showing that the average localization error is significantly reduced in almost every simulation scenario, about 50% in most cases, compared to the competing algorithms.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Bounds for multivariate residues and for the polynomials in the elimination theorem
We present several upper bounds for the height of global residues of rational forms on an affine variety. As a consequence, we deduce upper bounds for the height of the coefficients in the Bergman-Weil trace formula. We also present upper bounds for the degree and the height of the polynomials in the elimination theorem on an affine variety. This is an arithmetic analogue of Jelonek's effective elimination theorem, that plays a crucial role in the proof of our bounds for the height of global residues.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An adsorbed gas estimation model for shale gas reservoirs via statistical learning
Shale gas plays an important role in reducing pollution and adjusting the structure of world energy supply. Gas content estimation is particularly significant in shale gas resource evaluation. There exist various estimation methods, such as first-principles methods and empirical models. However, resource evaluation presents many challenges, especially the insufficient accuracy of existing models and the high cost resulting from time-consuming adsorption experiments. In this research, a low-cost and high-accuracy model based on geological parameters is constructed through statistical learning methods to estimate adsorbed shale gas content.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Semantic Loss Function for Deep Learning with Symbolic Knowledge
This paper develops a novel methodology for using symbolic knowledge in deep learning. From first principles, we derive a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. An experimental evaluation shows that it effectively guides the learner to achieve (near-)state-of-the-art results on semi-supervised multi-class classification. Moreover, it significantly increases the ability of the neural network to predict structured objects, such as rankings and paths. These discrete concepts are tremendously difficult to learn, and benefit from a tight integration of deep learning and symbolic reasoning methods.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
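A numeric sketch (editorial) of the semantic loss for an "exactly one output is true" constraint, computed as the negative log-probability that independently sampled Boolean outputs satisfy the constraint, which is one standard instantiation consistent with the abstract's description:

```python
# Editorial sketch: semantic loss for the one-hot ("exactly one true") constraint.
# L = -log( sum_i p_i * prod_{j != i} (1 - p_j) ); lower when outputs are near-valid.
import numpy as np

def semantic_loss_exactly_one(p: np.ndarray) -> float:
    total = 0.0
    for i in range(len(p)):
        total += p[i] * np.prod(np.delete(1.0 - p, i))
    return -np.log(total)

print(semantic_loss_exactly_one(np.array([0.9, 0.05, 0.05])))  # small: near-valid output
print(semantic_loss_exactly_one(np.array([0.5, 0.5, 0.5])))    # larger: far from valid
```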
Chaotic properties of a turbulent isotropic fluid
By tracking the divergence of two initially close trajectories in phase space in an Eulerian approach to forced turbulence, the relation between the maximal Lyapunov exponent $\lambda$ and the Reynolds number $Re$ is measured using direct numerical simulations, performed on up to $2048^3$ collocation points. The Lyapunov exponent is found to depend solely on the Reynolds number, with $\lambda \propto Re^{0.53}$, and after a transient period the divergence of trajectories grows at the same rate at all scales. Finally, a linear divergence is seen that depends on the energy forcing rate. Links are made with other chaotic systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
White light emission from silicon nanoparticles
As one of the most important semiconductors, silicon (Si) has been used to fabricate electronic devices, waveguides, detectors, solar cells, etc. However, its indirect bandgap hinders the use of Si for making good emitters [1]. For integrated photonic circuits, Si-based emitters with sizes in the range of 100-300 nm are highly desirable. Here, we show that efficient white light emission can be realized in spherical and cylindrical Si nanoparticles with feature sizes of ~200 nm. The up-converted luminescence appears at the magnetic and electric multipole resonances when the nanoparticles are resonantly excited at their magnetic and electric dipole resonances by using femtosecond (fs) laser pulses with an ultralow energy of ~40 pJ. The lifetime of the white light is as short as ~52 ps, almost three orders of magnitude shorter than the state-of-the-art results reported so far for Si (~10 ns). Our finding paves the way for realizing efficient Si-based emitters compatible with current semiconductor fabrication technology, which can be integrated into photonic circuits.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The collapse of ecosystem engineer populations
Humans are the ultimate ecosystem engineers who have profoundly transformed the world's landscapes in order to enhance their survival. Somewhat paradoxically, however, sometimes the unforeseen effect of this ecosystem engineering is the very collapse of the population it intended to protect. Here we use a spatial version of a standard population dynamics model of ecosystem engineers to study the colonization of unexplored virgin territories by a small settlement of engineers. We find that during the expansion phase the population density reaches values much higher than those the environment can support in the equilibrium situation. When the colonization front reaches the boundary of the available space, the population density plunges sharply and attains its equilibrium value. The collapse takes place without warning and happens just after the population reaches its peak number. We conclude that overpopulation and the consequent collapse of an expanding population of ecosystem engineers is a natural consequence of the nonlinear feedback between the population and environment variables.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Highly Viscous Microjet Generator
This paper describes a simple yet novel system for generating a highly viscous microjet. The jet is produced inside a wettable thin tube partially submerged in a liquid. The gas-liquid interface inside the tube, which is initially concave, is kept much deeper than that outside the tube. An impulsive force applied at the bottom of a liquid container leads to significant acceleration of the liquid inside the tube followed by flow-focusing due to the concave interface. The jet generation process can be divided into two parts that occur in different time scales, i.e. the Impact time (impact duration $\le O(10^{-4})$ s) and Focusing time (focusing duration $\gg O(10^{-4})$ s). In Impact time, the liquid accelerates suddenly due to the impact. In Focusing time, the microjet emerges due to flow-focusing. In order to explain the sudden acceleration inside the tube in Impact time, we develop a physical model based on a pressure impulse approach. Numerical simulations confirm the proposed model, indicating that the basic mechanism of the acceleration of the liquid due to the impulsive force is elucidated. Remarkably, the viscous effect is negligible in Impact time. In contrast, in Focusing time, the viscosity plays an important role in the microjet generation. We experimentally and numerically investigate the velocity of microjets with various viscosities. We find that higher viscosities lead to reduction of the jet velocity, which can be described by using Reynolds number (the ratio between the inertia force and the viscous force). This novel device may be a starting point for next-generation technologies, such as high-viscosity inkjet printers including bioprinters and needle-free injection devices for minimally invasive medical treatments.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Section problems for configuration spaces of surfaces
In this paper we give a close-to-sharp answer to the basic questions: When is there a continuous way to add a point to a configuration of $n$ ordered points on a surface $S$ of finite type so that all the points are still distinct? When this is possible, what are all the ways to do it? More precisely, let PConf$_n(S)$ be the space of ordered $n$-tuples of distinct points in $S$. Let $f_n(S): \text{PConf}_{n+1}(S) \to \text{PConf}_n(S)$ be the map given by $f_n(x_0,x_1,\ldots,x_n):=(x_1,\ldots,x_n)$. We will classify all continuous sections of $f_n$ by proving: 1. If $S=\mathbb{R}^2$ and $n>3$, any section of $f_{n}(S)$ is either "adding a point at infinity" or "adding a point near $x_k$". (We define these two terms in Section 2.1, whether we can define "adding a point near $x_k$" or "adding a point at infinity" depends in a delicate way on properties of $S$.) 2. If $S=S^2$ a $2$-sphere and $n>4$, any section of $f_{n}(S)$ is "adding a point near $x_k$", if $S=S^2$ and $n=2$, the bundle $f_n(S)$ does not have a section. (We define this term in Section 3.2) 3. If $S=S_g$ a surface of genus $g>1$ and for $n>1$, the bundle $f_{n}(S)$ does not have a section.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Odd-triplet superconductivity in single-level quantum dots
We study the interplay of spin and charge coherence in a single-level quantum dot. A tunnel coupling to a superconducting lead induces superconducting correlations in the dot. With full spin symmetry retained, only even-singlet superconducting correlations are generated. An applied magnetic field or attached ferromagnetic leads partially or fully reduce the spin symmetry, and odd-triplet superconducting correlations are generated as well. For single-level quantum dots, no other superconducting correlations are possible. We analyze, with the help of a diagrammatic real-time technique, the interplay of spin symmetry and superconductivity and its signatures in electronic transport, in particular current and shot noise.
0
1
0
0
0
0
From Neuronal Models to Neuronal Dynamics and Image Processing
This paper is an introduction to the membrane potential equation for neurons. Its properties are described, as well as sample applications. Networks of these equations can be used to model neuronal systems, including systems that process images and video sequences. Specifically, we describe (i) a dynamic retina (based on a reaction-diffusion system), which predicts afterimages and simple visual illusions, (ii) a system for texture segregation (where texture elements are understood as even-symmetric contrast features), and (iii) a network for detecting object approaches (inspired by the locust visual system).
0
0
0
0
1
0
A compilation of LEGO Technic parts to support learning experiments on linkages
We present a compilation of LEGO Technic parts to provide easy-to-build constructions of basic planar linkages. Some technical issues and their possible solutions are discussed. To settle questions about fine details---such as deciding whether the motion traces an exact straight line or not---we refer to the dynamic mathematics software tool GeoGebra.
0
0
1
0
0
0
On the Casas-Alvero conjecture
The conjecture is formulated in an affine setting and linked to the one-dimensionality of the CA sets defined here. Some known results are then reproved in this context. The short intended proof relies on a direct yet not fully justified statement about homogeneous dependence of algebraic equations. This may not be a complete proof, or even one on the right track, but it is hoped to provoke further thought in this direction.
0
0
1
0
0
0
Effect of compressibility and aspect ratio on performance of long elastic seals
Recent experiments show no statistical impact of seal length on the performance of long elastomeric seals in relatively smooth test fixtures. Motivated by these results, we analytically and computationally investigate the combined effects of seal length and compressibility on the maximum differential pressure a seal can support. We present a Saint-Venant type analytic shear lag solution for slightly compressible seals with large aspect ratios, which compares well with nonlinear finite element simulations in regions far from the ends of the seal. However, at the high- and low-pressure ends, where fracture is observed experimentally, the analytic solution is in poor agreement with detailed finite element calculations. Nevertheless, we show that the analytic solution provides far-field stress measures that correlate, over a range of aspect ratios and bulk moduli, the calculated energy release rates for the growth of small cracks at the two ends of the seal. Thus a single finite element simulation coupled with the analytic solution can be used to determine tendencies for fracture at the two ends of the seal over a wide range of geometry and compressibility. Finally, using a hypothetical critical energy release rate, predictions for whether a crack on the high-pressure end will begin to grow before or after a crack on the low-pressure end begins to grow are made using the analytic solution and compared with finite element simulations for finite deformation, hyperelastic seals.
0
1
0
0
0
0
Integral models of reductive groups and integral Mumford-Tate groups
Let $G$ be a reductive algebraic group over a $p$-adic field or number field $K$, and let $V$ be a $K$-linear faithful representation of $G$. A lattice $\Lambda$ in the vector space $V$ defines a model $\hat{G}_{\Lambda}$ of $G$ over $\mathscr{O}_K$. One may wonder to what extent $\Lambda$ is determined by the group scheme $\hat{G}_{\Lambda}$. In this paper we prove that up to a natural equivalence relation on the set of lattices there are only finitely many $\Lambda$ corresponding to one model $\hat{G}_{\Lambda}$. Furthermore, we relate this fact to moduli spaces of abelian varieties as follows: let $\mathscr{A}_{g,n}$ be the moduli space of principally polarised abelian varieties of dimension $g$ with level $n$ structure. We prove that there are at most finitely many special subvarieties of $\mathscr{A}_{g,n}$ with a given integral generic Mumford-Tate group.
0
0
1
0
0
0
A proof of Boca's Theorem
We give a general method of extending unital completely positive maps to amalgamated free products of C*-algebras. As an application we give a dilation theoretic proof of Boca's Theorem.
0
0
1
0
0
0
Vico-Greengard-Ferrando quadratures in the tensor solver for integral equations
Convolution with the Green's function of a differential operator appears in many applications, e.g. the Lippmann-Schwinger integral equation. Algorithms for computing such convolutions are usually non-trivial and require a non-uniform mesh. Recently, however, Vico, Greengard and Ferrando developed a method for computing convolutions with smooth, compactly supported functions with spectral accuracy, requiring nothing more than the Fast Fourier Transform (FFT). Their approach is very well suited to a low-rank tensor implementation, which we develop using the Quantized Tensor Train (QTT) decomposition.
1
0
0
0
0
0
Mott insulators of hardcore bosons in 1D: many-body orders, entanglement, edge modes
Many-body phenomena have always been an integral part of physics, comprising collective behaviors that emerge through self-organization in systems consisting of many components and degrees of freedom. We investigate the collective behaviors of strongly interacting particles confined in one dimension. We show that many-body orders with topological characteristics can be found in the Mott insulator limit for hardcore bosons, at different fillings, without considering the spin degree of freedom or long-range microscopic interactions. These orders have unique properties such as weak or strong quantum correlations (entanglement), quantified by the entanglement entropy, edge excitations/modes, and a gapped energy spectrum with a highly degenerate ground state, bearing resemblance to topologically ordered phases of matter.
0
1
0
0
0
0
Fast Monte Carlo Markov chains for Bayesian shrinkage models with random effects
When performing Bayesian data analysis using a general linear mixed model, the resulting posterior density is almost always analytically intractable. However, if proper conditionally conjugate priors are used, there is a simple two-block Gibbs sampler that is geometrically ergodic in nearly all practical settings, including situations where $p > n$ (Abrahamsen and Hobert, 2017). Unfortunately, the (conditionally conjugate) multivariate normal prior on $\beta$ does not perform well in the high-dimensional setting where $p \gg n$. In this paper, we consider an alternative model in which the multivariate normal prior is replaced by the normal-gamma shrinkage prior developed by Griffin and Brown (2010). This change leads to a much more complex posterior density, and we develop a simple MCMC algorithm for exploring it. This algorithm, which has both deterministic and random scan components, is easier to analyze than the more obvious three-step Gibbs sampler. Indeed, we prove that the new algorithm is geometrically ergodic in most practical settings.
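The normal-gamma shrinkage sampler analyzed in the paper is more involved than can be shown here; as background only, here is a minimal sketch of the two-block Gibbs idea for a simplified Bayesian linear model (fixed normal prior on $\beta$, inverse-gamma prior on the error variance). Priors, dimensions and hyperparameters are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for a simple Bayesian linear model y = X @ beta + noise.
n, p = 50, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

tau2 = 1.0       # fixed prior variance of beta (assumption)
a, b = 2.0, 1.0  # inverse-gamma hyperparameters for sigma^2 (assumption)

sigma2 = 1.0
draws = []
for _ in range(2000):
    # Block 1: beta | sigma^2, y  ~  N(m, V)
    V = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
    m = V @ X.T @ y / sigma2
    beta = rng.multivariate_normal(m, V)
    # Block 2: sigma^2 | beta, y  ~  InvGamma(a + n/2, b + ||y - X beta||^2 / 2)
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + resid @ resid / 2))
    draws.append(beta)

print("posterior mean of beta:", np.mean(draws, axis=0).round(2))
```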
0
0
1
1
0
0
Distributed Optimal Vehicle Grid Integration Strategy with User Behavior Prediction
With the increase of electric vehicle (EV) adoption in recent years, the impact of EV charging activities on the power grid has become more and more significant. In this article, an optimal scheduling algorithm which combines smart EV charging and V2G grid service is developed to integrate EVs into the power grid as distributed energy resources, with improved system cost performance. Specifically, an optimization problem is formulated and solved at each EV charging station according to the control signal from an aggregated control center and user charging behavior prediction by mean estimation and linear regression. The control center collects the distributed optimization results and updates the control signal periodically. The iteration continues until it converges to the optimal schedule. Experimental results show that this algorithm helps fill the valleys and shave the peaks in electric load profiles within a microgrid, while the energy demand of individual drivers can be satisfied.
1
0
1
0
0
0
Horcrux: A Password Manager for Paranoids
Vulnerabilities in password managers are unremitting because current designs provide large attack surfaces, both at the client and server. We describe and evaluate Horcrux, a password manager that is designed holistically to minimize and decentralize trust, while retaining the usability of a traditional password manager. The prototype Horcrux client, implemented as a Firefox add-on, is split into two components, with code that has access to the user's master password and any key material isolated into a small auditable component, separate from the complexity of managing the user interface. Instead of exposing actual credentials to the DOM, a dummy username and password are autofilled by the untrusted component. The trusted component intercepts and modifies POST requests before they are encrypted and sent over the network. To avoid trusting a centralized store, stored credentials are secret-shared over multiple servers. To provide domain and username privacy, while maintaining resilience to off-line attacks on a compromised password store, we incorporate cuckoo hashing in a way that ensures an attacker cannot determine if a guessed master password is correct. Our approach only works for websites that do not manipulate entered credentials in the browser client, so we conducted a large-scale experiment that found the technique appears to be compatible with over 98% of tested login forms.
1
0
0
0
0
0
Cross-validation
This text is a survey on cross-validation. We define all classical cross-validation procedures, and we study their properties for two different goals: estimating the risk of a given estimator, and selecting the best estimator among a given family. For the risk estimation problem, we compute the bias (which can also be corrected) and the variance of cross-validation methods. For estimator selection, we first provide a first-order analysis (based on expectations). Then, we explain how to take into account second-order terms (arising from variance computations and from the usefulness of overpenalization). In the end, this allows us to provide some guidelines for choosing the best cross-validation method for a given learning problem.
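As a concrete companion to the two goals discussed above (risk estimation and estimator selection), here is a minimal K-fold sketch; the data, the ridge candidates and the grid of penalties are all hypothetical.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + rng.normal(size=200)

# Goal 1: K-fold cross-validation estimate of the risk of a given estimator.
def cv_risk(model, X, y, k=5):
    fold_risks = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model.fit(X[train_idx], y[train_idx])
        fold_risks.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    return np.mean(fold_risks)

# Goal 2: estimator selection -- pick the candidate with the smallest estimated risk.
candidates = {alpha: Ridge(alpha=alpha) for alpha in [0.01, 0.1, 1.0, 10.0]}
risks = {alpha: cv_risk(m, X, y) for alpha, m in candidates.items()}
print("selected alpha:", min(risks, key=risks.get), risks)
```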
0
0
1
1
0
0
Learning from Label Proportions in Brain-Computer Interfaces: Online Unsupervised Learning with Guarantees
Objective: Using traditional approaches, a Brain-Computer Interface (BCI) requires the collection of calibration data for new subjects prior to online use. Calibration time can be reduced or eliminated e.g.~by transfer of a pre-trained classifier or by unsupervised adaptive classification methods that learn from scratch and adapt over time. While such heuristics work well in practice, none of them can provide theoretical guarantees. Our objective is to modify an event-related potential (ERP) paradigm to work in unison with the machine learning decoder to achieve a reliable calibration-less decoding with a guarantee to recover the true class means. Method: We introduce learning from label proportions (LLP) to the BCI community as a new unsupervised and easy-to-implement classification approach for ERP-based BCIs. The LLP estimates the mean target and non-target responses based on known proportions of these two classes in different groups of the data. We modified a visual ERP speller to meet the requirements of the LLP. For evaluation, we ran simulations on artificially created data sets and conducted an online BCI study with N=13 subjects performing a copy-spelling task. Results: Theoretical considerations show that LLP is guaranteed to minimize the loss function similarly to a corresponding supervised classifier. It performed well in simulations and in the online application, where 84.5% of characters were spelled correctly on average without prior calibration. Significance: The continuously adapting LLP classifier is the first unsupervised decoder for ERP BCIs guaranteed to find the true class means. This makes it an ideal solution to avoid a tedious calibration and to tackle non-stationarities in the data. Additionally, LLP works on complementary principles compared to existing unsupervised methods, allowing for their further enhancement when combined with LLP.
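The mean-recovery idea behind LLP can be sketched as follows: if each group of epochs mixes the two classes with a known target proportion, the group mean is a convex combination of the class means, so two groups with distinct proportions determine both means. The toy means, proportions and noise level below are assumptions for illustration, not the paper's paradigm parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class means (unknown to the decoder) and noisy epochs.
mu_target, mu_nontarget = np.array([1.0, 2.0]), np.array([-0.5, 0.3])

def sample_group(pi, n=5000):
    """Epochs from a group with known target proportion pi."""
    labels = rng.random(n) < pi
    means = np.where(labels[:, None], mu_target, mu_nontarget)
    return means + rng.normal(scale=1.0, size=(n, 2))

# Two groups with known, distinct target proportions (assumption).
pi1, pi2 = 0.2, 0.7
m1 = sample_group(pi1).mean(axis=0)  # ~ pi1*mu_target + (1-pi1)*mu_nontarget
m2 = sample_group(pi2).mean(axis=0)

# Solve the 2x2 linear system for the two class means.
A = np.array([[pi1, 1 - pi1], [pi2, 1 - pi2]])
est = np.linalg.solve(A, np.stack([m1, m2]))
print("estimated target mean:", est[0], "non-target mean:", est[1])
```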
1
0
0
1
0
0
Generalized Index Coding Problem and Discrete Polymatroids
The index coding problem has been generalized recently to accommodate receivers which demand functions of messages and which possess functions of messages. The connections between index coding and matroid theory have been well studied in the recent past. Index coding solutions were first connected to multilinear representations of matroids. For vector linear index codes, discrete polymatroids, which can be viewed as a generalization of matroids, were used. It was shown that a vector linear solution to an index coding problem exists if and only if there exists a representable discrete polymatroid satisfying certain conditions. In this work we explore the connections between generalized index coding and discrete polymatroids. The conditions that need to be satisfied by a representable discrete polymatroid for a generalized index coding problem to have a vector linear solution are established. From a discrete polymatroid we construct an index coding problem with coded side information and show that if the index coding problem has a certain optimal-length solution then the discrete polymatroid satisfies certain properties. From a matroid we construct a similar generalized index coding problem and show that the index coding problem has a binary scalar linear solution of optimal length if and only if the matroid is binary representable.
1
0
1
0
0
0
Reconstruction via the intrinsic geometric structures of interior transmission eigenfunctions
We are concerned with the inverse scattering problem of extracting the geometric structures of an unknown/inaccessible inhomogeneous medium by using the corresponding acoustic far-field measurement. Using the intrinsic geometric properties of the so-called interior transmission eigenfunctions, we develop a novel inverse scattering scheme. The proposed method can efficiently capture the cusp singularities of the support of the inhomogeneous medium. If further a priori information is available on the support of the medium, say, it is a convex polyhedron, then one can actually recover its shape. Both theoretical analysis and numerical experiments are provided. Our reconstruction method is new to the literature and opens up a new direction in the study of inverse scattering problems.
0
0
1
0
0
0
Thermoelectric phase diagram of the SrTiO3-SrNbO3 solid solution system
Thermoelectric energy conversion - the exploitation of the Seebeck effect to convert waste heat into electricity - has attracted an increasing amount of research attention for energy harvesting technology. Niobium-doped strontium titanate (SrTi1-xNbxO3) is one of the most promising thermoelectric material candidates, particularly as it poses a much lower environmental risk than materials based on heavy metal elements. Two-dimensional electron confinement, e.g. through the formation of superlattices or two-dimensional electron gases, is recognized as an effective strategy to improve the thermoelectric performance of SrTi1-xNbxO3. Although electron confinement is closely related to the electronic structure, the fundamental electronic phase behavior of the SrTi1-xNbxO3 solid solution system has yet to be comprehensively investigated. Here, we present a thermoelectric phase diagram for the SrTi1-xNbxO3 (0.05 <= x <= 1) solid solution system, which we derived from the characterization of epitaxial films. We observed two thermoelectric phase boundaries in the system, which originate from the step-like decrease in carrier effective mass at x ~ 0.3, and from a local minimum in carrier relaxation time at x ~ 0.5. The origins of these phase boundaries are considered to be related to isovalent/heterovalent B-site substitution: parabolic Ti 3d orbitals dominate electron conduction for compositions with x < 0.3, whereas the Nb 4d orbital dominates when x > 0.3. At x ~ 0.5, a tetragonal distortion of the lattice, in which the B-site is composed of Ti4+ and Nb4+ ions, leads to the formation of tail-like impurity bands, which maximizes the electron scattering. These results provide a foundation for further research into improving the thermoelectric performance of SrTi1-xNbxO3.
0
1
0
0
0
0
AMPA, NMDA and GABAA receptor mediated network burst dynamics in cortical cultures in vitro
In this work we study the dynamical changes mediated by the excitatory AMPA and NMDA receptors and the inhibitory GABAA receptors in neuronal networks of neonatal rat cortex in vitro. Extracellular network-wide activity was recorded with 59 planar electrodes simultaneously under different pharmacological conditions. We analyzed the changes of overall network activity and network-wide burst frequency between baseline and AMPA receptor (AMPA-R) or NMDA receptor (NMDA-R) driven activity, as well as between the latter states and disinhibited activity. Additionally, the spatiotemporal structures of pharmacologically modified bursts and the recruitment of electrodes during the network bursts were studied. Our results show that AMPA-Rs and NMDA-Rs have clearly distinct roles in network dynamics. AMPA-Rs are chiefly responsible for initiating network-wide bursts, whereas NMDA-Rs maintain the already initiated activity. GABAA receptors (GABAA-Rs) inhibit AMPA-R driven network activity more strongly than NMDA-R driven activity during the bursts.
0
0
0
0
1
0
Coarse fundamental groups and box spaces
We use a coarse version of the fundamental group first introduced by Barcelo, Kramer, Laubenbacher and Weaver to show that box spaces of finitely presented groups detect the normal subgroups used to construct the box space, up to isomorphism. As a consequence we have that two finitely presented groups admit coarsely equivalent box spaces if and only if they are commensurable via normal subgroups. We also provide an example of two filtrations $(N_i)$ and $(M_i)$ of a free group $F$ such that $M_i>N_i$ for all $i$ with $[M_i:N_i]$ uniformly bounded, but with $\Box_{(N_i)}F$ not coarsely equivalent to $\Box_{(M_i)}F$. Finally, we give some applications of the main theorem for rank gradient and the first $\ell^2$ Betti number, and show that the main theorem can be used to construct infinitely many coarse equivalence classes of box spaces with various properties.
0
0
1
0
0
0
Algebraic entropy of (integrable) lattice equations and their reductions
We study the growth of degrees in many autonomous and non-autonomous lattice equations defined by quad rules with corner boundary values, some of which are known to be integrable by other characterisations. Subject to an enabling conjecture, we prove polynomial growth for a large class of equations which includes the Adler-Bobenko-Suris equations and Viallet's $Q_V$ and its non-autonomous generalization. Our technique is to determine the ambient degree growth of the projective version of the lattice equations and to conjecture the growth of their common factors at each lattice vertex, allowing the true degree growth to be found. The resulting degrees satisfy a linear partial difference equation which is universal, i.e. the same for all the integrable lattice equations considered. When we take periodic reductions of these equations, which includes staircase initial conditions, we obtain from this linear partial difference equation an ordinary difference equation for degrees that implies quadratic or linear degree growth. We also study growth of degree of several non-integrable lattice equations. Exponential growth of degrees of these equations, and their mapping reductions, is also proved subject to a conjecture.
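For orientation (a standard convention in this area, not a formula quoted from the paper): if $d_n$ denotes the degree of the $n$-th iterate, the algebraic entropy is
$$\varepsilon = \lim_{n\to\infty} \frac{1}{n}\log d_n,$$
so polynomial degree growth $d_n \sim C n^\nu$ gives $\varepsilon = 0$, the hallmark of integrability, while exponential growth gives $\varepsilon > 0$.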
0
1
0
0
0
0
Measurement of mirror birefringence with laser heterodyne polarimetry
A laser heterodyne polarimeter (LHP) designed for the measurement of the birefringence of dielectric super-mirrors is described and initial results are reported. The LHP does not require an optical resonator and so promises unprecedented accuracy in the measurement of the birefringence of individual mirrors. The working principle of the LHP can be applied to the measurement of vacuum birefringence and potentially ALPS (Any Light Particle Search).
0
1
0
0
0
0
Individual position diversity in dependence socioeconomic networks increases economic output
The availability of big data recorded from massively multiplayer online role-playing games (MMORPGs) allows us to gain a deeper understanding of the potential connection between individuals' network positions and their economic outputs. We use a statistical filtering method to construct dependence networks from weighted friendship networks of individuals. We investigate the 30 distinct motif positions in the 13 directed triadic motifs which represent microscopic dependences among individuals. Based on the structural similarity of motif positions, we further classify individuals into different groups. The node position diversity of individuals is found to be positively correlated with their economic outputs. We also find that the economic outputs of leaf nodes are significantly lower than those of the other nodes in the same motif. Our findings shed light on the influence of network structure on economic activities and outputs in socioeconomic systems.
1
1
0
0
0
0
A formula for the nonsymmetric Opdam's hypergeometric function of type $A_2$
The aim of this paper is to give an explicit formula for the nonsymmetric Heckman-Opdam's hypergeometric function of type $A_2$. This is obtained by differentiating the corresponding symmetric hypergeometric function.
0
0
1
0
0
0
A new algorithm for irreducible decomposition of representations of finite groups
An algorithm for irreducible decomposition of representations of finite groups over fields of characteristic zero is described. The algorithm uses the fact that the decomposition induces a partition of the invariant inner product into a complete set of mutually orthogonal projectors. By expressing the projectors through the basis elements of the centralizer ring of the representation, the problem is reduced to solving systems of quadratic equations. The current implementation of the algorithm is able to split representations of dimensions up to hundreds of thousands. Examples of calculations are given.
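As background (standard facts about complete orthogonal systems of projectors, not equations quoted from the paper): the decomposition corresponds to projectors satisfying
$$P_i^2 = P_i, \qquad P_i P_j = 0 \ (i \neq j), \qquad \sum_i P_i = \mathbb{1}.$$
Expanding each $P_i = \sum_k c_{ik} e_k$ in a basis $\{e_k\}$ of the centralizer ring, whose products $e_k e_l$ again expand in the basis via the structure constants of the ring, turns these conditions into quadratic equations in the coefficients $c_{ik}$, as described above.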
1
0
0
0
0
0
Stability and Grothendieck
This note is a commentary on the model-theoretic interpretation of Grothendieck's double limit characterization of weak relative compactness.
0
0
1
0
0
0
Visual Analogies between Atari Games for Studying Transfer Learning in RL
In this work, we ask the following question: Can visual analogies, learned in an unsupervised way, be used in order to transfer knowledge between pairs of games and even play one game using an agent trained for another game? We attempt to answer this research question by creating visual analogies between a pair of games: a source game and a target game. For example, given a video frame in the target game, we map it to an analogous state in the source game and then attempt to play using a trained policy learned for the source game. We demonstrate convincing visual mapping between four pairs of games (eight mappings), which are used to evaluate three transfer learning approaches.
0
0
0
1
0
0
Learning in the Repeated Secretary Problem
In the classical secretary problem, one attempts to find the maximum of an unknown and unlearnable distribution through sequential search. In many real-world searches, however, distributions are not entirely unknown and can be learned through experience. To investigate learning in such a repeated secretary problem we conduct a large-scale behavioral experiment in which people search repeatedly from fixed distributions. In contrast to prior investigations that find no evidence for learning in the classical scenario, in the repeated setting we observe substantial learning resulting in near-optimal stopping behavior. We conduct a Bayesian comparison of multiple behavioral models which shows that participants' behavior is best described by a class of threshold-based models that contains the theoretically optimal strategy. Fitting such a threshold-based model to data reveals players' estimated thresholds to be surprisingly close to the optimal thresholds after only a small number of games.
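For the classical single-shot problem that the abstract starts from, the well-known cutoff strategy is easy to simulate; the sketch below (hypothetical i.i.d. candidate scores) recovers the familiar ~37% success probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def play(values, cutoff):
    """Classical threshold strategy: observe `cutoff` values, then accept
    the first value exceeding the best seen so far (else take the last)."""
    best_seen = values[:cutoff].max() if cutoff > 0 else -np.inf
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

n, games = 20, 20000
cutoff = round(n / np.e)  # the classical ~n/e observation phase
wins = 0
for _ in range(games):
    values = rng.random(n)  # hypothetical i.i.d. candidate scores
    wins += play(values, cutoff) == values.max()
print(f"P(best candidate chosen) ~ {wins / games:.3f}")  # close to 1/e
```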
1
0
0
0
0
0
Flexible Mixture Modeling on Constrained Spaces
This paper addresses challenges in flexibly modeling multimodal data that lie on constrained spaces. Applications include climate or crime measurements in a geographical area, or flow-cytometry experiments, where unsuitable recordings are discarded. A simple approach to modeling such data is through the use of mixture models, with each component following an appropriate truncated distribution. Problems arise when the truncation involves complicated constraints, leading to difficulties in specifying the component distributions, and in evaluating their normalization constants. Bayesian inference over the parameters of these models results in posterior distributions that are doubly-intractable. We address this problem via an algorithm based on rejection sampling and data augmentation. We view samples from a truncated distribution as outcomes of a rejection sampling scheme, where proposals are made from a simple mixture model, and are rejected if they violate the constraints. Our scheme proceeds by imputing the rejected samples given mixture parameters, and then resampling parameters given all samples. We study two modeling approaches: mixtures of truncated components and truncated mixtures of components. In both situations, we describe exact Markov chain Monte Carlo sampling algorithms, as well as approximations that bound the number of rejected samples, achieving computational efficiency and lower variance at the cost of asymptotic bias. Overall, our methodology only requires practitioners to provide an indicator function for the set of interest. We present results on simulated data and apply our algorithm to two problems, one involving flow-cytometry data, and the other, crime recorded in the city of Chicago.
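The rejection-sampling view of truncated-mixture data can be sketched in a few lines; the constraint set, proposal mixture and sample sizes below are hypothetical, and the paper's full algorithm additionally imputes the rejected samples inside an MCMC loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# The practitioner supplies only an indicator function for the set of interest.
def indicator(x):
    return (x > 0.0) & (np.abs(x - 2.0) > 0.5)  # hypothetical constrained set

# Proposal: a simple (unconstrained) two-component Gaussian mixture.
weights = np.array([0.4, 0.6])
means, sds = np.array([0.0, 3.0]), np.array([1.0, 0.7])

def sample_truncated(n):
    """View truncated-mixture draws as rejection-sampling outcomes:
    propose from the simple mixture, reject anything outside the set."""
    accepted = []
    while len(accepted) < n:
        comp = rng.choice(2, size=n, p=weights)
        x = rng.normal(means[comp], sds[comp])
        accepted.extend(x[indicator(x)].tolist())
    return np.array(accepted[:n])

samples = sample_truncated(10000)
print(samples.min(), samples.mean())  # every sample satisfies the constraint
```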
0
0
0
1
0
0
Universal equilibrium scaling functions at short times after a quench
By analyzing spin-spin correlation functions at relatively short distances, we show that equilibrium near-critical properties can be extracted at short times after quenches into the vicinity of a quantum critical point. The time scales after which equilibrium properties can be extracted are sufficiently short so that the proposed scheme should be viable for quantum simulators of spin models based on ultracold atoms or trapped ions. Our results, analytic as well as numeric, are for one-dimensional spin models, either integrable or nonintegrable, but we expect our conclusions to be valid in higher dimensions as well.
0
1
0
0
0
0
First Results from CUORE: A Search for Lepton Number Violation via $0νββ$ Decay of $^{130}$Te
The CUORE experiment, a ton-scale cryogenic bolometer array, recently began operation at the Laboratori Nazionali del Gran Sasso in Italy. The array represents a significant advancement in this technology, and in this work we apply it for the first time to a high-sensitivity search for a lepton-number--violating process: $^{130}$Te neutrinoless double-beta decay. Examining a total TeO$_2$ exposure of 86.3 kg$\cdot$yr, characterized by an effective energy resolution of (7.7 $\pm$ 0.5) keV FWHM and a background in the region of interest of (0.014 $\pm$ 0.002) counts/(keV$\cdot$kg$\cdot$yr), we find no evidence for neutrinoless double-beta decay. The median statistical sensitivity of this search is $7.0\times10^{24}$ yr. Including systematic uncertainties, we place a lower limit on the decay half-life of $T^{0\nu}_{1/2}$($^{130}$Te) > $1.3\times 10^{25}$ yr (90% C.L.). Combining this result with those of two earlier experiments, Cuoricino and CUORE-0, we find $T^{0\nu}_{1/2}$($^{130}$Te) > $1.5\times 10^{25}$ yr (90% C.L.), which is the most stringent limit to date on this decay. Interpreting this result as a limit on the effective Majorana neutrino mass, we find $m_{\beta\beta}<(110 - 520)$ meV, where the range reflects the nuclear matrix element estimates employed.
0
1
0
0
0
0
Bernoulli-Carlitz and Cauchy-Carlitz numbers with Stirling-Carlitz numbers
Recently, the Cauchy-Carlitz number was defined as the counterpart of the Bernoulli-Carlitz number. Both numbers can be expressed explicitly in terms of so-called Stirling-Carlitz numbers. In this paper, we study the second analogue of Stirling-Carlitz numbers and give some general formulae, including Bernoulli and Cauchy numbers in formal power series with complex coefficients, and Bernoulli-Carlitz and Cauchy-Carlitz numbers in function fields. We also give some applications of the Hasse-Teichmüller derivative to hypergeometric Bernoulli and Cauchy numbers in terms of associated Stirling numbers.
0
0
1
0
0
0
Irreducible network backbones: unbiased graph filtering via maximum entropy
Networks provide an informative, yet non-redundant description of complex systems only if links represent truly dyadic relationships that cannot be directly traced back to node-specific properties such as size, importance, or coordinates in some embedding space. In any real-world network, some links may be reducible, and others irreducible, to such local properties. This dichotomy persists despite the steady increase in data availability and resolution, which actually determines an even stronger need for filtering techniques aimed at discerning essential links from non-essential ones. Here we introduce a rigorous method that, for any desired level of statistical significance, outputs the network backbone that is irreducible to the local properties of nodes, i.e. their degrees and strengths. Unlike previous approaches, our method employs an exact maximum-entropy formulation guaranteeing that the filtered network encodes only the links that cannot be inferred from local information. Extensive empirical analysis confirms that this approach uncovers essential backbones that are otherwise hidden amidst many redundant relationships and inaccessible to other methods. For instance, we retrieve the hub-and-spoke skeleton of the US airport network and many specialised patterns of international trade. Being irreducible to local transportation and economic constraints of supply and demand, these backbones single out genuinely higher-order wiring principles.
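For context (a standard maximum-entropy null model, stated here as background rather than as the paper's exact construction): constraining expected degrees yields connection probabilities of the form
$$p_{ij} = \frac{x_i x_j}{1 + x_i x_j},$$
where the fitness parameters $x_i$ are fixed by matching each node's expected degree $\sum_{j\neq i} p_{ij}$ to its observed degree; links far more likely under the data than under such a null are candidates for the irreducible backbone.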
1
1
0
0
0
0
Synkhronos: a Multi-GPU Theano Extension for Data Parallelism
We present Synkhronos, an extension to Theano for multi-GPU computations leveraging data parallelism. Our framework provides automated execution and synchronization across devices, allowing users to continue to write serial programs without risk of race conditions. The NVIDIA Collective Communication Library is used for high-bandwidth inter-GPU communication. Further enhancements to the Theano function interface include input slicing (with aggregation) and input indexing, which perform common data-parallel computation patterns efficiently. One example use case is synchronous SGD, which has recently been shown to scale well for a growing set of deep learning problems. When training ResNet-50, we achieve a near-linear speedup of 7.5x on an NVIDIA DGX-1 using 8 GPUs, relative to Theano-only code running a single GPU in isolation. Yet Synkhronos remains general to any data-parallel computation programmable in Theano. By implementing parallelism at the level of individual Theano functions, our framework uniquely addresses a niche between manual multi-device programming and prescribed multi-GPU training routines.
1
0
0
0
0
0
The PomXYZ Proteins Self-Organize on the Bacterial Nucleoid to Stimulate Cell Division
Cell division site positioning is precisely regulated to generate correctly sized and shaped daughters. We uncover a novel strategy to position the FtsZ cytokinetic ring at midcell in the social bacterium Myxococcus xanthus. PomX, PomY and the nucleoid-binding ParA/MinD ATPase PomZ self-assemble forming a large nucleoid-associated complex that localizes at the division site before FtsZ to directly guide and stimulate division. PomXYZ localization is generated through self-organized biased random motion on the nucleoid towards midcell and constrained motion at midcell. Experiments and theory show that PomXYZ motion is produced by diffusive PomZ fluxes on the nucleoid into the complex. Flux differences scale with the intracellular asymmetry of the complex and are converted into a local PomZ concentration gradient across the complex with translocation towards the higher PomZ concentration. At midcell, fluxes equalize resulting in constrained motion. Flux-based mechanisms may represent a general paradigm for positioning of macromolecular structures in bacteria.
0
0
0
0
1
0
Quantitative Results on Diophantine Equations in Many Variables
We consider a system of polynomials $f_1,\ldots, f_R\in \mathbb{Z}[x_1,\ldots, x_n]$ of the same degree with non-singular local zeros and in many variables. Generalising the work of Birch (1962) we find quantitative asymptotics (in terms of the maximum of the absolute value of the coefficients of these polynomials) for the number of integer zeros of this system within a growing box. Using a quantitative version of the Nullstellensatz, we obtain a quantitative strong approximation result, i.e. an upper bound on the smallest integer zero provided the system of polynomials is non-singular.
0
0
1
0
0
0
Generalized Springer correspondence for symmetric spaces associated to orthogonal groups
Let $G = GL_N$ over an algebraically closed field of odd characteristic, and $\theta$ an involutive automorphism on $G$ such that $H = (G^{\theta})^0$ is isomorphic to $SO_N$. Then $G^{\iota\theta} = \{ g \in G \mid \theta(g) = g^{-1} \}$ is regarded as a symmetric space $G/G^{\theta}$. Let $G^{\iota\theta}_{uni}$ be the set of unipotent elements in $G^{\iota\theta}$. $H$ acts on $G^{\iota\theta}_{uni}$ by conjugation. As an analogue of the generalized Springer correspondence in the case of reductive groups, we establish in this paper the generalized Springer correspondence between $H$-orbits in $G^{\iota\theta}_{uni}$ and irreducible representations of various symmetric groups.
0
0
1
0
0
0
An Ensemble Boosting Model for Predicting Transfer to the Pediatric Intensive Care Unit
Our work focuses on the problem of predicting the transfer of pediatric patients from the general ward of a hospital to the pediatric intensive care unit. Using data collected over 5.5 years from the electronic health records of two medical facilities, we develop classifiers based on adaptive boosting and gradient tree boosting. We further combine these learned classifiers into an ensemble model and compare its performance to a modified pediatric early warning score (PEWS) baseline that relies on expert defined guidelines. To gauge model generalizability, we perform an inter-facility evaluation where we train our algorithm on data from one facility and perform evaluation on a hidden test dataset from a separate facility. We show that improvements are witnessed over the PEWS baseline in accuracy (0.77 vs. 0.69), sensitivity (0.80 vs. 0.68), specificity (0.74 vs. 0.70) and AUROC (0.85 vs. 0.73).
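A rough sketch of the modeling setup (the EHR data is not public, so a synthetic imbalanced classification task stands in; hyperparameters are scikit-learn defaults, not the paper's tuned values):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in for the pediatric-ward features (class 1 = transfer).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft-voting ensemble of adaptive boosting and gradient tree boosting.
ensemble = VotingClassifier(
    estimators=[("ada", AdaBoostClassifier(random_state=0)),
                ("gbt", GradientBoostingClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```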
1
0
0
1
0
0
Effects of Network Structure on the Performance of a Modeled Traffic Network under Drivers' Bounded Rationality
We propose a minority route choice game to investigate the effect of the network structure on traffic network performance under the assumption of drivers' bounded rationality. We investigate ring-and-hub topologies to capture the nature of traffic networks in cities, and employ a minority game-based inductive learning process to model the characteristic behavior under the route choice scenario. Through numerical experiments, we find that topological changes in traffic networks induce a phase transition from an uncongested phase to a congested phase. Understanding this phase transition is helpful in planning new traffic networks.
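A drastically simplified sketch of minority-game route choice (two routes only, no ring-and-hub topology, and a naive reinforcement rule standing in for the paper's inductive learning process; all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each driver picks route 0 or 1 and "wins" when on the less crowded route;
# winning choices are reinforced, a crude form of bounded rationality.
n_drivers, n_rounds = 101, 500   # odd number of drivers avoids ties
scores = np.zeros((n_drivers, 2))

attendance = []
for _ in range(n_rounds):
    noise = rng.random((n_drivers, 2)) * 1e-6   # random tie-breaking
    choice = np.argmax(scores + noise, axis=1)
    n_route1 = choice.sum()
    minority = 1 if n_route1 < n_drivers - n_route1 else 0
    scores[np.arange(n_drivers), choice] += (choice == minority)
    attendance.append(n_route1)

# Fluctuations of route loads around the balanced point indicate congestion.
print("mean load on route 1:", np.mean(attendance), "std:", np.std(attendance))
```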
1
1
0
0
0
0
Sneak into Devil's Colony- A study of Fake Profiles in Online Social Networks and the Cyber Law
Massive content about users' social, personal and professional lives stored on Online Social Networks (OSNs) has attracted not only the attention of researchers and social analysts but also that of cyber criminals. These cyber criminals penetrate OSNs illegally by establishing fake profiles or by designing bots, and exploit the vulnerabilities of an OSN to carry out illegal activities. With the growth of technology, cyber crimes have been increasing manifold. Daily reports of security and privacy threats in OSNs demand not only intelligent automated detection systems that can identify and alleviate fake profiles in real time but also the reinforcement of security and privacy laws to curtail cyber crime. In this paper, we study various categories of fake profiles, such as compromised profiles, cloned profiles and online bots (spam-bots, social-bots, like-bots and influential-bots), on different OSN sites, along with existing cyber laws to mitigate their threats. To aid the design of fake profile detection systems, we highlight the categories of fake profile features that are capable of distinguishing different kinds of fake entities from real ones. Another major challenge faced by researchers while building fake profile detection systems is the unavailability of data specific to fake users. The paper addresses this challenge by providing helpful data collection techniques along with some existing data sources. Furthermore, an attempt is made to present several machine learning techniques employed to design different fake profile detection systems.
1
0
0
0
0
0
Preserving Data-Privacy with Added Noises: Optimal Estimation and Privacy Analysis
Networked systems often rely on distributed algorithms to achieve a global computation goal with iterative local information exchanges between neighbor nodes. To preserve data privacy, a node may add a random noise to its original data for information exchange at each iteration. Nevertheless, a neighbor node can estimate another node's original data based on the information it receives. The estimation accuracy and data privacy can be measured in terms of $(\epsilon, \delta)$-data-privacy, defined by requiring that the probability of an $\epsilon$-accurate estimation (one whose difference from the original data is within $\epsilon$) be no larger than $\delta$ (the disclosure probability). How to optimize the estimation and analyze data privacy is a critical and open issue. In this paper, a theoretical framework is developed to investigate how to optimize the estimation of a neighbor's original data using the local information received, named optimal distributed estimation. Then, we study the disclosure probability under the optimal estimation for data privacy analysis. We further apply the developed framework to analyze the data privacy of the privacy-preserving average consensus algorithm and identify the optimal noises for the algorithm.
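The $(\epsilon, \delta)$ notion can be illustrated with the simplest possible estimator, where a neighbor takes the received noisy value itself as its estimate (this is not the optimal distributed estimator developed in the paper; the noise level and accuracy radius are hypothetical):

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)

x = 1.5      # a node's original (private) data
sigma = 1.0  # std of the added Gaussian noise (assumption)
eps = 0.3    # accuracy radius of an "epsilon-accurate" estimation

# The node broadcasts x + noise; the neighbor's naive estimate is the
# received value itself.  Estimate the disclosure probability empirically.
trials = 100_000
received = x + rng.normal(scale=sigma, size=trials)
delta_hat = np.mean(np.abs(received - x) <= eps)
print(f"empirical disclosure probability: {delta_hat:.3f}")

# Gaussian closed form: P(|N(0, sigma^2)| <= eps) = erf(eps / (sigma*sqrt(2))).
print(f"closed form:                      {erf(eps / (sigma * sqrt(2))):.3f}")
```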
1
0
0
0
0
0
Geometric tuning of self-propulsion for Janus catalytic particles
Catalytic swimmers have attracted much attention as alternatives to biological systems for examining collective microscopic dynamics and the response to physico-chemical signals. Yet, understanding and predicting even the most fundamental characteristics of their individual propulsion still raises important challenges. While chemical asymmetry is widely recognized as the cornerstone of catalytic propulsion, different experimental studies have reported that particles with identical chemical properties may propel in opposite directions. Here, we show that, beyond its chemical properties, the detailed shape of a catalytic swimmer plays an essential role in determining its direction of motion, demonstrating the compatibility of the classical theoretical framework with experimental observations.
0
1
0
0
0
0
On the Bogolubov-de Gennes Equations
We consider the Bogolubov-de Gennes equations giving an equivalent formulation of the BCS theory of superconductivity. We are interested in the case when the magnetic field is present. We (a) discuss their general features, (b) isolate key physical classes of solutions (normal, vortex and vortex lattice states) and (c) prove existence of the normal, vortex and vortex lattice states and stability/instability of the normal states for large/small temperature or/and magnetic fields.
0
0
1
0
0
0
High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
Penalized likelihood methods are widely used for high-dimensional regression. Although many methods have been proposed and the associated theory is now well-developed, the relative efficacy of different methods in finite-sample settings, as encountered in practice, remains incompletely understood. There is therefore a need for empirical investigations in this area that can offer practical insight and guidance to users of these methods. In this paper we present a large-scale comparison of penalized regression methods. We distinguish between three related goals: prediction, variable selection and variable ranking. Our results span more than 1,800 data-generating scenarios, allowing us to systematically consider the influence of various factors (sample size, dimensionality, sparsity, signal strength and multicollinearity). We consider several widely-used methods (Lasso, Elastic Net, Ridge Regression, SCAD, the Dantzig Selector, as well as Stability Selection). We find considerable variation in performance between methods, with results dependent on details of the data-generating scenario and the specific goal. Our results support a `no panacea' view, with no unambiguous winner across all scenarios, even in this restricted setting where all data align well with the assumptions underlying the methods. Lasso is well-behaved, performing competitively in many scenarios, while SCAD is highly variable. Substantial benefits from a Ridge-penalty are only seen in the most challenging scenarios with strong multi-collinearity. The results are supported by semi-synthetic analyses using gene expression data from cancer samples. Our empirical results complement existing theory and provide a resource to compare methods across a range of scenarios and metrics.
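Three of the compared methods are available in scikit-learn, so one data-generating scenario of the kind studied can be sketched directly (the dimensions, sparsity and penalty levels below are hypothetical; SCAD, the Dantzig Selector and Stability Selection require other packages):

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# One hypothetical scenario: n=100, p=200, 10 true signals, unit noise.
n, p, k = 100, 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + rng.normal(scale=1.0, size=n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("lasso", Lasso(alpha=0.1)),
                    ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5)),
                    ("ridge", Ridge(alpha=1.0))]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))      # prediction
    selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)    # selection
    tp = np.sum(selected < k)  # how many true signals were selected
    print(f"{name}: test MSE {mse:.2f}, selected {selected.size}, true +{tp}")
```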
0
0
0
1
0
0
Using Human Brain Activity to Guide Machine Learning
Machine learning is a field of computer science that builds algorithms that learn. In many cases, machine learning algorithms are used to recreate a human ability like adding a caption to a photo, driving a car, or playing a game. While the human brain has long served as a source of inspiration for machine learning, little effort has been made to directly use data collected from working brains as a guide for machine learning algorithms. Here we demonstrate a new paradigm of "neurally-weighted" machine learning, which takes fMRI measurements of human brain activity from subjects viewing images, and infuses these data into the training process of an object recognition learning algorithm to make it more consistent with the human brain. After training, these neurally-weighted classifiers are able to classify images without requiring any additional neural data. We show that our neural-weighting approach can lead to large performance gains when used with traditional machine vision features, as well as to significant improvements with already high-performing convolutional neural network features. The effectiveness of this approach points to a path forward for a new class of hybrid machine learning algorithms which take both inspiration and direct constraints from neuronal data.
1
0
0
0
0
0
Anesthesiologist-level forecasting of hypoxemia with only SpO2 data using deep learning
We use a deep learning model trained only on a patient's blood oxygenation data (measurable with an inexpensive fingertip sensor) to predict impending hypoxemia (low blood oxygen) more accurately than trained anesthesiologists with access to all the data recorded in a modern operating room. We also provide a simple way to visualize the reason why a patient's risk is low or high by assigning weight to the patient's past blood oxygen values. This work has the potential to provide cutting-edge clinical decision support in low-resource settings, where rates of surgical complication and death are substantially greater than in high-resource areas.
1
0
0
1
0
0
Merging fragments of classical logic
We investigate the possibility of extending the non-functionally complete logic of a collection of Boolean connectives by the addition of further Boolean connectives that make the resulting set of connectives functionally complete. More precisely, we will be interested in checking whether an axiomatization for Classical Propositional Logic may be produced by merging Hilbert-style calculi for two disjoint incomplete fragments of it. We will prove that the answer to that problem is a negative one, unless one of the components includes only top-like connectives.
1
0
1
0
0
0
Variational Bayes Estimation of Discrete-Margined Copula Models with Application to Time Series
We propose a new variational Bayes estimator for high-dimensional copulas with discrete, or a combination of discrete and continuous, margins. The method is based on a variational approximation to a tractable augmented posterior, and is faster than previous likelihood-based approaches. We use it to estimate drawable vine copulas for univariate and multivariate Markov ordinal and mixed time series. These have dimension $rT$, where $T$ is the number of observations and $r$ is the number of series, and are difficult to estimate using previous methods. The vine pair-copulas are carefully selected to allow for heteroskedasticity, which is a feature of most ordinal time series data. When combined with flexible margins, the resulting time series models also allow for other common features of ordinal data, such as zero inflation, multiple modes and under- or over-dispersion. Using six example series, we illustrate both the flexibility of the time series copula models, and the efficacy of the variational Bayes estimator for copulas of up to 792 dimensions and 60 parameters. This far exceeds the size and complexity of copula models for discrete data that can be estimated using previous methods.
0
0
0
1
0
0
COSMO: Contextualized Scene Modeling with Boltzmann Machines
Scene modeling is crucial for robots that need to perceive, reason about and manipulate the objects in their environments. In this paper, we adapt and extend Boltzmann Machines (BMs) for contextualized scene modeling. Although there are many models on the subject, ours is the first to bring together objects, relations, and affordances in a highly-capable generative model. To this end, we introduce a hybrid version of BMs where relations and affordances are introduced with shared, tri-way connections into the model. Moreover, we contribute a dataset for relation estimation and modeling studies. We evaluate our method in comparison with several baselines on object estimation, out-of-context object detection, relation estimation, and affordance estimation tasks. Moreover, to illustrate the generative capability of the model, we show several example scenes that the model is able to generate.
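For background, the standard restricted Boltzmann machine (RBM) energy that such models build on (textbook form, stated here as context; the paper's hybrid adds shared tri-way connections for relations and affordances on top of the pairwise terms):
$$E(\mathbf{v},\mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j, \qquad P(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z}.$$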
1
0
0
0
0
0
Learning Large-Scale Topological Maps Using Sum-Product Networks
In order to perform complex actions in human environments, an autonomous robot needs the ability to understand the environment, that is, to gather and maintain spatial knowledge. Topological maps are commonly used for representing large-scale, global maps such as floor plans. Although much work has been done on topological map extraction, we have found little previous work on the problem of learning the topological map using a probabilistic model. Learning a topological map means learning the structure of the large-scale space and the dependency between places, for example, how the evidence of a group of places influences the attributes of other places. This is an important step towards planning complex actions in the environment. In this thesis, we consider the problem of using a probabilistic deep learning model to learn the topological map, which is essentially a sparse undirected graph where nodes represent places annotated with their semantic attributes (e.g. place category). We propose to use a novel probabilistic deep model, Sum-Product Networks (SPNs), due to their unique properties. We present two methods for learning topological maps using SPNs: the place grid method and the template-based method. We contribute an algorithm that builds SPNs for graphs using template models. Our experiments evaluate the ability of our models to enable robots to infer semantic attributes and detect maps with novel semantic attribute arrangements. Our results demonstrate their understanding of the topological map structure and spatial relations between places.
1
0
0
0
0
0
Entanglement scaling and spatial correlations of the transverse field Ising model with perturbations
We study numerically the entanglement entropy and spatial correlations of the one-dimensional transverse field Ising model with three different perturbations. First, we focus on the out-of-equilibrium, steady state with an energy current passing through the system. By employing a variety of matrix-product state based methods, we confirm the phase diagram and compute the entanglement entropy. Second, we consider a small perturbation that takes the system away from integrability and calculate the correlations, the central charge and the entanglement entropy. Third, we consider periodically weakened bonds, exploring the phase diagram and entanglement properties first in the situation where the weak and strong bonds alternate (period-two bonds) and then in the general situation of a period of $n$ bonds. In the latter case we find a critical weak bond that scales with the transverse field as $J'_c/J = (h/J)^n$, where $J$ is the strength of the strong bond, $J'$ that of the weak bond and $h$ the transverse field. We explicitly show that the energy current is not a conserved quantity in this case.
0
1
0
0
0
0
Anyonic excitations of hardcore anyons
Strongly interacting many-body systems consisting of fermions or bosons can host exotic quasiparticles with anyonic statistics. Here, we demonstrate that many-body systems of anyons can also form anyonic quasiparticles. The charge and statistics of the emergent anyons can be different from those of the original anyons.
0
1
0
0
0
0