Dataset schema (one record per paper: a title line, an abstract line, and six binary subject labels):

  title                 string  (length 7 to 239)
  abstract              string  (length 7 to 2.76k)
  cs                    int64   (0 or 1)
  phy                   int64   (0 or 1)
  math                  int64   (0 or 1)
  stat                  int64   (0 or 1)
  quantitative biology  int64   (0 or 1)
  quantitative finance  int64   (0 or 1)
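The records below follow this schema. As a hedged illustration (not part of the original dataset), the sketch below shows how such a table could be loaded and sanity-checked; the file name arxiv_abstracts.csv is hypothetical.

```python
# Hedged loading sketch: the file name "arxiv_abstracts.csv" is hypothetical;
# any source with the eight columns from the schema above would work.
import pandas as pd

LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_abstracts.csv")

# Sanity checks against the schema: string lengths and binary int64 labels.
assert df["title"].str.len().between(7, 239).all()
for col in LABELS:
    assert set(df[col].unique()) <= {0, 1}

# Papers may carry several subject labels at once (see the records below).
print(df[LABELS].sum().sort_values(ascending=False))
```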
Differentially Private ANOVA Testing
Modern society generates an incredible amount of data about individuals, and releasing summary statistics about this data in a manner that provably protects individual privacy would offer a valuable resource for researchers in many fields. We present the first algorithm for analysis of variance (ANOVA) that preserves differential privacy, allowing this important statistical test to be conducted (and the results released) on databases of sensitive information. In addition to our private algorithm for the F test statistic, we show a rigorous way to compute p-values that accounts for the added noise needed to preserve privacy. Finally, we present experimental results quantifying the statistical power of this differentially private version of the test, finding that a sample of several thousand observations is frequently enough to detect variation between groups. The differentially private ANOVA algorithm is a promising approach for releasing a common test statistic that is valuable across many fields in the sciences and social sciences.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
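As an illustrative aside to the abstract above, here is a toy sketch of a Laplace-perturbed F statistic. The noise scale below is a placeholder assumption; the paper's actual mechanism calibrates the noise to the sensitivity of the F statistic and adjusts p-values accordingly, neither of which is reproduced here.

```python
# Toy sketch only: Laplace noise on a classical one-way ANOVA F statistic.
# The sensitivity below is a PLACEHOLDER, not the paper's derived bound.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
groups = [rng.normal(loc=mu, size=1000) for mu in (0.0, 0.1, 0.2)]

f_stat, _ = f_oneway(*groups)      # classical, non-private F statistic
epsilon = 1.0                      # privacy budget
sensitivity = 1.0                  # placeholder for the calibrated sensitivity
private_f = f_stat + rng.laplace(scale=sensitivity / epsilon)
print(f"F = {f_stat:.2f}, privatized F = {private_f:.2f}")
```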
The $E$-cohomological Conley Index, Cup-Lengths and the Arnold Conjecture on $T^{2n}$
We give a new proof of the strong Arnold conjecture for $1$-periodic solutions of Hamiltonian systems on tori, that was first shown by C. Conley and E. Zehnder in 1983. Our proof uses other methods and is shorter than the previous one. We first show that the $E$-cohomological Conley index, that was introduced by the first author recently, has a natural module structure. This yields a new cup-length and a lower bound for the number of critical points of functionals. Then an existence result for the $E$-cohomological Conley index, which applies to the setting of the Arnold conjecture, paves the way to a new proof of it on tori.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Compiling Diderot: From Tensor Calculus to C
Diderot is a parallel domain-specific language for analysis and visualization of multidimensional scientific images, such as those produced by CT and MRI scanners. In particular, it supports algorithms where tensor fields (i.e., functions from 3D points to tensor values) are used to represent the underlying physical objects that were scanned by the imaging device. Diderot supports higher-order programming where tensor fields are first-class values and where differential operators and lifted linear-algebra operators can be used to express mathematical reasoning directly in the language. While such lifted field operations are central to the definition and computation of many scientific visualization algorithms, to date they have required extensive manual derivations and laborious implementation. The challenge for the Diderot compiler is to effectively translate the high-level mathematical concepts that are expressible in the surface language to a low-level and efficient implementation in C. This paper describes our approach to this challenge, which is based around the careful design of an intermediate representation (IR), called EIN, and a number of compiler transformations that lower the program from tensor calculus to C while avoiding combinatorial explosion in the size of the IR. We describe the challenges in compiling a language like Diderot, the design of EIN, and the transformation used by the compiler. We also present an evaluation of EIN with respect to both compiler efficiency and quality of generated code.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semi-Supervised and Active Few-Shot Learning with Prototypical Networks
We consider the problem of semi-supervised few-shot classification where a classifier needs to adapt to new tasks using a few labeled examples and (potentially many) unlabeled examples. We propose a clustering approach to the problem. The features extracted with Prototypical Networks are clustered using $K$-means with the few labeled examples guiding the clustering process. We note that in many real-world applications the adaptation performance can be significantly improved by requesting the few labels through user feedback. We demonstrate good performance of the active adaptation strategy using image data.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
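A minimal sketch of the clustering step described in the abstract above, on synthetic embeddings standing in for Prototypical Network features (the feature extractor itself is not reproduced): per-class prototypes computed from the few labeled examples seed K-means over all points.

```python
# Sketch: per-class prototypes from a few labeled embeddings seed K-means.
# Synthetic vectors stand in for Prototypical Network features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 16))                      # 3 latent classes
labeled_y = np.repeat(np.arange(3), 5)                  # 5 labels per class
labeled_x = centers[labeled_y] + 0.3 * rng.normal(size=(15, 16))
unlabeled_x = centers[rng.integers(0, 3, 200)] + 0.3 * rng.normal(size=(200, 16))

# Prototype of each class = mean of its labeled embeddings.
prototypes = np.stack([labeled_x[labeled_y == c].mean(axis=0) for c in range(3)])

# K-means over all points, initialized at the prototypes.
km = KMeans(n_clusters=3, init=prototypes, n_init=1).fit(
    np.vstack([labeled_x, unlabeled_x]))
print(km.labels_[:15])    # cluster assignments of the labeled examples
```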
Inhomogeneous exponential jump model
We introduce and study the inhomogeneous exponential jump model - an integrable stochastic interacting particle system on the continuous half line evolving in continuous time. An important feature of the system is the presence of arbitrary spatial inhomogeneity on the half line which does not break the integrability. We completely characterize the macroscopic limit shape and asymptotic fluctuations of the height function (= integrated current) in the model. In particular, we explain how the presence of inhomogeneity may lead to macroscopic phase transitions in the limit shape such as shocks or traffic jams. Away from these singularities the asymptotic fluctuations of the height function around its macroscopic limit shape are governed by the GUE Tracy-Widom distribution. A surprising result is that while the limit shape is discontinuous at a traffic jam caused by a macroscopic slowdown in the inhomogeneity, fluctuations on both sides of such a traffic jam still have the GUE Tracy-Widom distribution (but with different non-universal normalizations). The integrability of the model comes from the fact that it is a degeneration of the inhomogeneous stochastic higher spin six vertex models studied earlier in arXiv:1601.05770 [math.PR]. Our results on fluctuations are obtained via an asymptotic analysis of Fredholm determinantal formulas arising from contour integral expressions for the q-moments in the stochastic higher spin six vertex model. We also discuss "product-form" translation invariant stationary distributions of the exponential jump model which lead to an alternative hydrodynamic-type heuristic derivation of the macroscopic limit shape.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Evidence accumulation in a Laplace domain decision space
Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than noisy exemplars of a single mean value, this approach makes the novel prediction that firing rates grow to the bound exponentially; across neurons there should be a distribution of different rates. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis." This finding is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models for memory.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Improved Energy Pooling Efficiency Through Inhibited Spontaneous Emission
The radiative lifetime of molecules or atoms can be increased by placing them within a tuned conductive cavity that inhibits spontaneous emission. This was examined as a possible means of enhancing three-body, singlet-based upconversion, known as energy pooling. Achieving efficient upconversion of light has potential applications in the fields of photovoltaics, biofuels, and medicine. The effect of the photonically constrained environment on pooling efficiency was quantified using a kinetic model populated with data from molecular quantum electrodynamics, perturbation theory, and ab initio calculations. This model was applied to a system with fluorescein donors and a hexabenzocoronene acceptor. Placing the molecules within a conducting cavity was found to increase the efficiency of energy pooling by increasing both the donor lifetime and the acceptor emission rate, i.e. a combination of inhibited spontaneous emission and the Purcell effect. A model system with a free-space pooling efficiency of 23% was found to have an efficiency of 47% in a rectangular cavity.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A unified, mechanistic framework for developmental and evolutionary change
The two most fundamental processes describing change in biology, development and evolution, occur over drastically different timescales that are difficult to reconcile within a unified framework. Development involves a temporal sequence of cell states controlled by a hierarchy of regulatory structures. It occurs over the lifetime of a single individual, and is associated with the gene expression level change of a given genotype. Evolution, by contrast, entails genotypic change through the acquisition or loss of genes, and involves the emergence of new, environmentally selected phenotypes over the lifetimes of many individuals. Here we present a model of regulatory network evolution that accounts for both timescales. We extend the framework of boolean models of gene regulatory networks (GRNs), currently only applicable to describing development, to include evolutionary processes. As opposed to one-to-one maps to specific attractors, we identify the phenotypes of the cells as the relevant macrostates of the GRN. A phenotype may now correspond to multiple attractors, and its formal definition no longer requires a fixed size for the genotype. This opens the possibility for a quantitative study of the phenotypic change of a genotype, which is itself changing over evolutionary timescales. We show how the realization of specific phenotypes can be controlled by gene duplication events, and how successive events of gene duplication lead to new regulatory structures via selection. It is these structures that enable control of macroscale patterning, as in development. The proposed framework therefore provides a mechanistic explanation for the emergence of regulatory structures controlling development over evolutionary time.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Coinfection in a stochastic model for bacteriophage systems
A system modeling bacteriophage treatments with coinfections in a noisy context is analyzed. We prove that in a small noise regime, the system converges in the long term to a bacteria-free equilibrium. Moreover, we compare the treatment with coinfection to the treatment without coinfection, showing how coinfection affects the dose of bacteriophages needed to eliminate the bacteria and the speed of convergence to the bacteria-free equilibrium.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Field dependence of non-reciprocal magnons in chiral MnSi
Spin waves in chiral magnetic materials are strongly influenced by the Dzyaloshinskii-Moriya interaction resulting in intriguing phenomena like non-reciprocal magnon propagation and magnetochiral dichroism. Here, we study the non-reciprocal magnon spectrum of the archetypical chiral magnet MnSi and its evolution as a function of magnetic field covering the field-polarized and conical helix phase. Using inelastic neutron scattering, the magnon energies and their spectral weights are determined quantitatively after deconvolution with the instrumental resolution. In the field-polarized phase the imaginary part of the dynamical susceptibility $\chi''(\varepsilon, {\bf q})$ is shown to be asymmetric with respect to wavevectors ${\bf q}$ longitudinal to the applied magnetic field ${\bf H}$, which is a hallmark of chiral magnetism. In the helimagnetic phase, $\chi''(\varepsilon, {\bf q})$ becomes increasingly symmetric with decreasing ${\bf H}$ due to the formation of helimagnon bands and the activation of additional spinflip and non-spinflip scattering channels. The neutron spectra are in excellent quantitative agreement with the low-energy theory of cubic chiral magnets with a single fitting parameter being the damping rate of spin waves.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the Simpson index for the Moran process with random selection and immigration
Moran and Wright-Fisher processes are probably the best known models for studying the evolution of a population under various effects. Our object of study is the Simpson index, which measures the level of diversity of the population and is one of the key parameters for ecologists who study, for example, forest dynamics. Following ecological motivations, we consider here the case where there are various species whose fitness and immigration parameters are random processes (and thus time evolving). To measure biodiversity, ecologists generally use the Simpson index, which has no closed formula, except in the neutral (no selection) case via a backward approach, and which is difficult to evaluate even numerically when the population size is large. Our approach relies on the large population limit in the "weak" selection case, and gives a procedure which enables us to approximate, with a controlled rate, the expectation of the Simpson index at fixed time. Our approach is forward and valid for all times, which is the main difference from the historical approach of Kingman or Krone-Neuhauser. We also study the long time behaviour of the Wright-Fisher process in a simplified setting, allowing us to get a full picture of the approximation of the expectation of the Simpson index.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Small animal whole body imaging with metamaterial-inspired RF coil
Preclinical magnetic resonance imaging often requires the entire body of an animal to be imaged with sufficient quality. This is usually performed by combining regions scanned with small, high-sensitivity coils or by long scans using large coils with low sensitivity. Here, a metamaterial-inspired design employing a parallel array of wires operating on the principle of eigenmode hybridization is used to produce a small animal whole-body imaging coil. The coil field distribution, responsible for the coil field of view and sensitivity, is simulated in an electromagnetic simulation package and the coil geometrical parameters are optimized for the chosen application. A prototype coil is then manufactured and assembled using brass telescopic tubes and copper plates as distributed capacitance; its field distribution is measured experimentally using the B1+ mapping technique and found to be in close correspondence with simulated results. The coil field distribution is found to be suitable for whole-body small animal imaging, and coil image quality is compared with a number of commercially available coils by whole-body scanning of living mice. Signal-to-noise measurements in living mice show outstanding coil performance compared to commercially available coils with large receptive fields, and rivaling performance compared to small receptive field, high-sensitivity coils. The coil is deemed suitable for whole-body small animal preclinical applications.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Entropic Trace Estimates for Log Determinants
The scalable calculation of matrix determinants has been a bottleneck to the widespread application of many machine learning methods such as determinantal point processes, Gaussian processes, generalised Markov random fields, graph models and many others. In this work, we estimate log determinants under the framework of maximum entropy, given information in the form of moment constraints from stochastic trace estimation. The estimates demonstrate a significant improvement on state-of-the-art alternative methods, as shown on a wide variety of UFL sparse matrices. By taking the example of a general Markov random field, we also demonstrate how this approach can significantly accelerate inference in large-scale learning methods involving the log determinant.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
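For illustration of the moment constraints mentioned in the abstract above, here is a small sketch of Hutchinson-style stochastic trace estimation of tr(A^k); the maximum-entropy log-determinant step built on such moments is the paper's contribution and is not attempted here.

```python
# Hutchinson-style estimates of the moments tr(A^k) that would serve as
# constraints for a maximum-entropy log-determinant estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 300
B = rng.normal(size=(n, n)) / np.sqrt(n)
A = B @ B.T + np.eye(n)              # symmetric positive definite test matrix

def moment_estimates(A, k_max=4, probes=50):
    ests = np.zeros(k_max)
    for _ in range(probes):
        z = rng.choice([-1.0, 1.0], size=A.shape[0])   # Rademacher probe
        v = z.copy()
        for k in range(k_max):
            v = A @ v                                  # v = A^{k+1} z
            ests[k] += (z @ v) / probes                # E[z^T A^k z] = tr(A^k)
    return ests

print("estimated:", moment_estimates(A))
print("exact:    ", [np.trace(np.linalg.matrix_power(A, k)) for k in (1, 2, 3, 4)])
```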
A Mixture of Matrix Variate Bilinear Factor Analyzers
Over the years data has become increasingly higher dimensional, which has prompted an increased need for dimension reduction techniques. This is perhaps especially true for clustering (unsupervised classification) as well as semi-supervised and supervised classification. Although dimension reduction in the area of clustering for multivariate data has been quite thoroughly discussed within the literature, there is relatively little work in the area of three-way, or matrix variate, data. Herein, we develop a mixture of matrix variate bilinear factor analyzers (MMVBFA) model for use in clustering high-dimensional matrix variate data. This work can be considered both the first matrix variate bilinear factor analysis model as well as the first MMVBFA model. Parameter estimation is discussed, and the MMVBFA model is illustrated using simulated and real data.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The application of the competency-based approach to assess the training and employment adequacy problem
This review paper fits in the context of the adequate matching of training to employment, which is one of the main challenges that universities around the world strive to meet. In higher education, the revision of curricula necessitates a return to the skills required by the labor market in order to train a skilled workforce. In this research, we start with the presentation of the conceptual framework. Then we cite different currents of thought that have discussed the problem of the training-employment match from various perspectives. We proceed to select some studies that have attempted to remedy this problem by adopting the competency-based approach, which involves the referential line. The main characteristic of this approach is the attainment of a match between training and employment; it is therefore a relevant solution to this problem. We scrutinize the selected studies, presenting their objectives, methodologies and results, and provide our own analysis. Then, we focus on the Moroccan context through observations and studies already conducted. Finally, we introduce the problem addressed by our future project.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Prediction and Generation of Binary Markov Processes: Can a Finite-State Fox Catch a Markov Mouse?
Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Topological orders of strongly interacting particles
We investigate the self-organization of strongly interacting particles confined in 1D and 2D. We consider hardcore bosons in spinless Hubbard lattice models with short range interactions. We show that many-body orders with topological characteristics emerge at different energy bands separated by large gaps. These topological orders manifest in the way the particles organize in real space to form states with different energy. Each of these states contains topological defects/condensations whose Euler characteristic can be used as a topological number to categorize states belonging to the same energy band. We provide analytical formulas for this topological number and the full energy spectrum of the system for both sparsely and densely filled systems. Furthermore, we discuss the connection with the Gauss-Bonnet theorem of differential geometry, by using the curvature generated in real space by the particle structures. Our result is a demonstration of how topological orders can arise in strongly interacting many-body systems with simple underlying rules, without considering the spin, long-range microscopic interactions, or external fields.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Virtual unknotting numbers of certain virtual torus knots
The virtual unknotting number of a virtual knot is the minimal number of crossing changes needed to turn the virtual knot into the unknot; it is defined only for virtual knots virtually homotopic to the unknot. We focus on the virtual knot obtained from the standard $(p,q)$-torus knot diagram by replacing all crossings on one overstrand with virtual crossings, and prove that its virtual unknotting number is equal to the unknotting number of the $(p,q)$-torus knot, i.e. it is $(p-1)(q-1)/2$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Computing the Lambert W function in arbitrary-precision complex interval arithmetic
We describe an algorithm to evaluate all the complex branches of the Lambert W function with rigorous error bounds in interval arithmetic, which has been implemented in the Arb library. The classic 1996 paper on the Lambert W function by Corless et al. provides a thorough but partly heuristic numerical analysis which needs to be complemented with some explicit inequalities and practical observations about managing precision and branch cuts.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An Architecture for Embedded Systems Supporting Assisted Living
The rise in life expectancy is one of the great achievements of the twentieth century. This phenomenon has given rise to a still-increasing interest in Ambient Assisted Living (AAL) technological solutions that may support people in their daily routines, allowing an independent and safe lifestyle for as long as possible. AAL systems generally acquire data from the field and reason on them and the context to accomplish their tasks. Very often, AAL systems are vertical solutions, making it hard to reuse and adapt them to domains other than the ones for which they have been developed. In this paper we propose an architectural solution that allows the acquisition level of an AAL system to be easily built, configured, and extended without affecting the reasoning level of the system. We tested our proposal in a fall detection system.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hunting Rabbits on the Hypercube
We explore the Hunters and Rabbits game on the hypercube. In the process, we find the solution for all classes of graphs with an isoperimetric nesting property and find the exact hunter number of $Q^n$ to be $1+\sum\limits_{i=0}^{n-2} \binom{i}{\lfloor i/2 \rfloor}$. In addition, we extend results to the situation where we allow the rabbit to not move between shots.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
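A quick numeric evaluation of the closed form quoted in the abstract above (assuming the formula is read verbatim):

```python
# Evaluate the exact hunter number of Q^n quoted above:
# 1 + sum_{i=0}^{n-2} binom(i, floor(i/2)).
from math import comb

def hunter_number(n):
    return 1 + sum(comb(i, i // 2) for i in range(n - 1))

print([hunter_number(n) for n in range(2, 8)])   # n=3 gives 1 + 1 + 1 = 3
```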
Mixed Effect Dirichlet-Tree Multinomial for Longitudinal Microbiome Data and Weight Prediction
Quantifying the relation between gut microbiome and body weight can provide insights into personalized strategies for improving digestive health. In this paper, we present an algorithm that predicts weight fluctuations using gut microbiome in a healthy cohort of newborns from a previously published dataset. Microbial data has been known to present unique statistical challenges that defy most conventional models. We propose a mixed effect Dirichlet-tree multinomial (DTM) model to untangle these difficulties as well as incorporate covariate information and account for species relatedness. The DTM setup allows one to easily invoke empirical Bayes shrinkage on each node for enhanced inference of microbial proportions. Using these estimates, we subsequently apply random forest for weight prediction and obtain a microbiome-inferred weight metric. Our result demonstrates that microbiome-inferred weight is significantly associated with weight changes in the future and its non-trivial effect size makes it a viable candidate to forecast weight progression.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Approximating Throughput and Packet Decoding Delay in Linear Network Coded Wireless Broadcast
In this paper, we study a wireless packet broadcast system that uses linear network coding (LNC) to help receivers recover data packets that are missing due to packet erasures. We study two intertwined performance metrics, namely throughput and average packet decoding delay (APDD), and establish strong/weak approximation relations based on whether the approximation holds for the performance of every receiver (strong) or for the average performance across all receivers (weak). We prove an equivalence between strong throughput approximation and strong APDD approximation. We prove that throughput-optimal LNC techniques can strongly approximate APDD, and partition-based LNC techniques may weakly approximate throughput. We also prove that memoryless LNC techniques, including instantly decodable network coding techniques, are neither strong throughput/APDD approximation techniques nor weak throughput approximation techniques.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Search for electromagnetic super-preshowers using gamma-ray telescopes
Any considerations on propagation of particles through the Universe must involve particle interactions: processes leading to the production of particle cascades. While one expects the existence of such cascades, state-of-the-art cosmic-ray research is oriented purely toward the detection of single particles, gamma rays, or associated extensive air showers. A natural extension of cosmic-ray research with studies of ensembles of particles and air showers is being proposed by the CREDO Collaboration. Within the CREDO strategy the focus is put on generalized super-preshowers (SPS): spatially and/or temporally extended cascades of particles originating above the Earth's atmosphere, possibly even at astrophysical distances. With CREDO we want to find out whether SPS can be at least partially observed by a network of terrestrial and/or satellite detectors receiving a primary or secondary cosmic-ray signal. This paper addresses electromagnetic SPS, e.g. initiated by VHE photons interacting with the cosmic microwave background, and the SPS signatures that can be seen by gamma-ray telescopes, exploring the example of the Cherenkov Telescope Array. The energy spectrum of secondary electrons and photons in an electromagnetic super-preshower might be extended over a wide range of energy, down to TeV or even lower, as is evident from the simulation results. This means that electromagnetic showers induced by such particles in the Earth's atmosphere could be observed by imaging atmospheric Cherenkov telescopes. We present preliminary results from the study of the response of the Cherenkov Telescope Array to SPS events, including the analysis of the simulated shower images on the camera focal plane and implemented generic reconstruction chains based on the Hillas parameters.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Gromov-Hausdorff limit of Wasserstein spaces on point clouds
We consider a point cloud $X_n := \{ x_1, \dots, x_n \}$ uniformly distributed on the flat torus $\mathbb{T}^d : = \mathbb{R}^d / \mathbb{Z}^d $, and construct a geometric graph on the cloud by connecting points that are within distance $\epsilon$ of each other. We let $\mathcal{P}(X_n)$ be the space of probability measures on $X_n$ and endow it with a discrete Wasserstein distance $W_n$ as introduced independently by Maas and Zhou et al. for general finite Markov chains. We show that as long as $\epsilon= \epsilon_n$ decays towards zero slower than an explicit rate depending on the level of uniformity of $X_n$, then the space $(\mathcal{P}(X_n), W_n)$ converges in the Gromov-Hausdorff sense towards the space of probability measures on $\mathbb{T}^d$ endowed with the Wasserstein distance.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Minimal axiomatic frameworks for definable hyperreals with transfer
We modify the definable ultrapower construction of Kanovei and Shelah (2004) to develop a ZF-definable extension of the continuum with transfer provable using countable choice only, with an additional mild hypothesis on well-ordering implying properness. Under the same assumptions, we also prove the existence of a definable, proper elementary extension of the standard superstructure over the reals. Keywords: definability; hyperreal; superstructure; elementary embedding.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity
We present a novel notion of complexity that interpolates between and generalizes some classic existing complexity notions in learning theory: for estimators like empirical risk minimization (ERM) with arbitrary bounded losses, it is upper bounded in terms of data-independent Rademacher complexity; for generalized Bayesian estimators, it is upper bounded by the data-dependent information complexity (also known as stochastic or PAC-Bayesian, $\mathrm{KL}(\text{posterior} \,\|\, \text{prior})$, complexity). For (penalized) ERM, the new complexity reduces to (generalized) normalized maximum likelihood (NML) complexity, i.e. a minimax log-loss individual-sequence regret. Our first main result bounds excess risk in terms of the new complexity. Our second main result links the new complexity via Rademacher complexity to $L_2(P)$ entropy, thereby generalizing earlier results of Opper, Haussler, Lugosi, and Cesa-Bianchi who did the log-loss case with $L_\infty$. Together, these results recover optimal bounds for VC- and large (polynomial entropy) classes, replacing localized Rademacher complexity by a simpler analysis which almost completely separates the two aspects that determine the achievable rates: 'easiness' (Bernstein) conditions and model complexity.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Naive Algorithm for Feedback Vertex Set
Given a graph on $n$ vertices and an integer $k$, the feedback vertex set problem asks for the deletion of at most $k$ vertices to make the graph acyclic. We show that a greedy branching algorithm, which always branches on an undecided vertex with the largest degree, runs in single-exponential time, i.e., $O(c^k\cdot n^2)$ for some constant $c$.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
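For intuition about branching algorithms for feedback vertex set, here is a sketch of the classic cycle-hitting branching, which is correct but is not the paper's largest-degree greedy rule and does not carry its single-exponential bound:

```python
# Classic cycle-hitting branching for feedback vertex set (illustration only;
# NOT the paper's largest-degree rule, and without its O(c^k n^2) bound).
import networkx as nx

def has_fvs(G, k):
    try:
        cycle = nx.find_cycle(G)       # raises NetworkXNoCycle if acyclic
    except nx.NetworkXNoCycle:
        return True                    # already acyclic: nothing to delete
    if k == 0:
        return False                   # a cycle remains but the budget is spent
    # Every feedback vertex set must hit this cycle: branch over its vertices.
    for u, _ in cycle:
        H = G.copy()
        H.remove_node(u)
        if has_fvs(H, k - 1):
            return True
    return False

G = nx.petersen_graph()
print([k for k in range(6) if has_fvs(G, k)])    # Petersen graph needs k >= 3
```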
Query Complexity of Clustering with Side Information
Suppose we are given a set of $n$ elements to be clustered into $k$ (unknown) clusters, and an oracle/expert labeler that can interactively answer pair-wise queries of the form, "do two elements $u$ and $v$ belong to the same cluster?". The goal is to recover the optimum clustering by asking the minimum number of queries. In this paper, we initiate a rigorous theoretical study of this basic problem of query complexity of interactive clustering, and provide strong information theoretic lower bounds, as well as nearly matching upper bounds. Most clustering problems come with a similarity matrix, which is used by an automated process to cluster similar points together. Our main contribution in this paper is to show the dramatic power of side information, aka the similarity matrix, in reducing the query complexity of clustering. A similarity matrix represents noisy pair-wise relationships such as one computed by some function on attributes of the elements. A natural noisy model is where similarity values are drawn independently from some arbitrary probability distribution $f_+$ when the underlying pair of elements belong to the same cluster, and from some $f_-$ otherwise. We show that given such a similarity matrix, the query complexity reduces drastically from $\Theta(nk)$ (no similarity matrix) to $O(\frac{k^2\log{n}}{\mathcal{H}^2(f_+\|f_-)})$ where $\mathcal{H}^2$ denotes the squared Hellinger divergence. Moreover, this is also information-theoretically optimal within an $O(\log{n})$ factor. Our algorithms are all efficient and parameter free, i.e., they work without any knowledge of $k$, $f_+$ and $f_-$, and depend only logarithmically on $n$. Along the way, our work also reveals intriguing connections to popular community detection models such as the {\em stochastic block model}, significantly generalizes them, and opens up many avenues for interesting future research.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On Prediction Properties of Kriging: Uniform Error Bounds and Robustness
Kriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions which are computationally simple. However, in many applications one would like to predict for a range of untried points simultaneously. In this work we obtain some error bounds for the (simple) kriging predictor under the uniform metric. It works for a scattered set of input points in an arbitrary dimension, and also covers the case where the covariance function of the Gaussian process is misspecified. These results lead to a better understanding of the rate of convergence of kriging under the Gaussian or the Matérn correlation functions, the relationship between space-filling designs and kriging models, and the robustness of the Matérn correlation functions.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Robust Submodular Maximization: A Non-Uniform Partitioning Approach
We study the problem of maximizing a monotone submodular function subject to a cardinality constraint $k$, with the added twist that a number of items $\tau$ from the returned set may be removed. We focus on the worst-case setting considered in (Orlin et al., 2016), in which a constant-factor approximation guarantee was given for $\tau = o(\sqrt{k})$. In this paper, we solve a key open problem raised therein, presenting a new Partitioned Robust (PRo) submodular maximization algorithm that achieves the same guarantee for more general $\tau = o(k)$. Our algorithm constructs partitions consisting of buckets with exponentially increasing sizes, and applies standard submodular optimization subroutines on the buckets in order to construct the robust solution. We numerically demonstrate the performance of PRo in data summarization and influence maximization, demonstrating gains over both the greedy algorithm and the algorithm of (Orlin et al., 2016).
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Value Directed Exploration in Multi-Armed Bandits with Structured Priors
Multi-armed bandits are a quintessential machine learning problem requiring the balancing of exploration and exploitation. While there has been progress in developing algorithms with strong theoretical guarantees, there has been less focus on practical near-optimal finite-time performance. In this paper, we propose an algorithm for Bayesian multi-armed bandits that utilizes value-function-driven online planning techniques. Building on previous work on UCB and Gittins index, we introduce linearly-separable value functions that take both the expected return and the benefit of exploration into consideration to perform n-step lookahead. The algorithm enjoys a sub-linear performance guarantee and we present simulation results that confirm its strength in problems with structured priors. The simplicity and generality of our approach makes it a strong candidate for analyzing more complex multi-armed bandit problems.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Adaptive Mantel Test for Association Testing in Imaging Genetics Data
Mantel's test (MT) for association is conducted by testing the linear relationship of similarity of all pairs of subjects between two observational domains. Motivated by applications to neuroimaging and genetics data, and following the success of shrinkage and kernel methods for prediction with high-dimensional data, we here introduce the adaptive Mantel test as an extension of the MT. By utilizing kernels and penalized similarity measures, the adaptive Mantel test is able to achieve higher statistical power relative to the classical MT in many settings. Furthermore, the adaptive Mantel test is designed to simultaneously test over multiple similarity measures such that the correct type I error rate under the null hypothesis is maintained without the need to directly adjust the significance threshold for multiple testing. The performance of the adaptive Mantel test is evaluated on simulated data, and is used to investigate associations between genetic markers related to Alzheimer's Disease and healthy brain physiology with data from a working memory study of 350 college students from Beijing Normal University.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Approximation of Functions over Manifolds: A Moving Least-Squares Approach
We present an algorithm for approximating a function defined over a $d$-dimensional manifold utilizing only noisy function values at locations sampled from the manifold with noise. To produce the approximation we do not require any knowledge regarding the manifold other than its dimension $d$. The approximation scheme is based upon the Manifold Moving Least-Squares (MMLS). The proposed algorithm is resistant to noise in both the domain and function values. Furthermore, the approximant is shown to be smooth and of approximation order of $\mathcal{O}(h^{m+1})$ for non-noisy data, where $h$ is the mesh size with respect to the manifold domain, and $m$ is the degree of a local polynomial approximation utilized in our algorithm. In addition, the proposed algorithm is linear in time with respect to the ambient-space's dimension. Thus, in case of extremely large ambient space dimension, we are able to avoid the curse of dimensionality without having to perform non-linear dimension reduction, which introduces distortions to the manifold data. Using numerical experiments, we compare the presented method to state-of-the-art algorithms for regression over manifolds and show its potential.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Delay sober up drunkers: Control of diffusion in random walkers
Time delay in general leads to instability in some systems, while a specific feedback with delay can control fluctuating motion in nonlinear deterministic systems to a stable state. In this paper, we consider a non-stationary stochastic process, i.e., a random walk, and observe its diffusion phenomenon with time-delayed feedback. Surprisingly, the diffusion coefficient decreases with increasing delay time. We analytically illustrate this suppression of diffusion by using stochastic delay differential equations and justify the feasibility of this suppression by applying the time-delay feedback to a molecular dynamics model.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
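An illustrative simulation sketch of a random walk with time-delayed feedback. The Pyragas-type feedback term a*(x(t-tau) - x(t)) used below is an assumption for concreteness; the paper's exact feedback form may differ, and the diffusion coefficient is simply read off from the late-time mean-squared displacement.

```python
# Random walk with an ASSUMED Pyragas-type delayed feedback a*(x(t-tau)-x(t));
# the diffusion coefficient is estimated from the late-time MSD ~ 2*D*t.
import numpy as np

rng = np.random.default_rng(0)

def diffusion_coefficient(a, tau, n_walkers=1000, n_steps=2000, dt=0.02):
    lag = int(round(tau / dt))
    x = np.zeros((n_steps + 1, n_walkers))
    for t in range(n_steps):
        delayed = x[t - lag] if t >= lag else x[0]
        noise = rng.normal(scale=np.sqrt(dt), size=n_walkers)
        x[t + 1] = x[t] + a * (delayed - x[t]) * dt + noise
    return (x[-1] ** 2).mean() / (2 * n_steps * dt)

for tau in (0.0, 0.5, 1.0, 2.0):
    print(f"tau = {tau:3.1f}  D ~ {diffusion_coefficient(1.0, tau):.3f}")
```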
Gravitational Wave signatures of inflationary models from Primordial Black Hole Dark Matter
Primordial Black Holes (PBH) could be the cold dark matter of the universe. They could have arisen from large (order one) curvature fluctuations produced during inflation that reentered the horizon in the radiation era. At reentry, these fluctuations source gravitational waves (GW) via second order anisotropic stresses. These GW, together with those (possibly) sourced during inflation by the same mechanism responsible for the large curvature fluctuations, constitute a primordial stochastic GW background (SGWB) that unavoidably accompanies the PBH formation. We study how the amplitude and the range of frequencies of this signal depend on the statistics (Gaussian versus $\chi^2$) of the primordial curvature fluctuations, and on the evolution of the PBH mass function due to accretion and merging. We then compare this signal with the sensitivity of present and future detectors, at PTA and LISA scales. We find that this SGWB will help to probe, or strongly constrain, the early universe mechanism of PBH production. The comparison between the peak mass of the PBH distribution and the peak frequency of this SGWB will provide important information on the merging and accretion evolution of the PBH mass distribution from their formation to the present era. Different assumptions on the statistics and on the PBH evolution also result in different amounts of CMB $\mu$-distortions. Therefore the above results can be complemented by the detection (or the absence) of $\mu$-distortions with an experiment such as PIXIE.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Linear Parsing Expression Grammars
PEGs were formalized by Ford in 2004, and have several pragmatic operators (such as ordered choice and unlimited lookahead) for better expressing modern programming language syntax. Since these operators are not explicitly defined in classic formal language theory, it is significant and still challenging to characterize PEGs' expressiveness in the context of formal language theory. Since PEGs are relatively new, there are several unsolved problems. One of the problems is revealing a subclass of PEGs that is equivalent to DFAs. This allows application of some techniques from the theory of regular grammars to PEGs. In this paper, we define Linear PEGs (LPEGs), a subclass of PEGs that is equivalent to DFAs. Surprisingly, LPEGs are formalized by only excluding some patterns of recursive nonterminals in PEGs, and include the full set of ordered choice, unlimited lookahead, and greedy repetition, which are characteristic of PEGs. Although deciding whether a given parsing expression can be converted into a DFA is undecidable in general, the formalism of LPEGs allows for a syntactical judgement of parsing expressions.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Pre-Synaptic Pool Modification (PSPM): A Supervised Learning Procedure for Spiking Neural Networks
A central question in neuroscience is how to develop realistic models that predict output firing behavior based on a provided external stimulus. Given a set of external inputs and a set of output spike trains, the objective is to discover a network structure which can accomplish the transformation as accurately as possible. Due to the difficulty of this problem in its most general form, approximations have been made in previous work. Past approximations have sacrificed network size, recurrence, allowed spike count, or have imposed layered network structure. Here we present a learning rule without these sacrifices, which produces a weight matrix of a leaky integrate-and-fire (LIF) network to match the output activity of both deterministic LIF networks as well as probabilistic integrate-and-fire (PIF) networks. Inspired by synaptic scaling, our pre-synaptic pool modification (PSPM) algorithm outputs deterministic, fully recurrent spiking neural networks that can provide a novel generative model for given spike trains. Similarity in output spike trains is evaluated with a variety of metrics including a van Rossum-like measure and a numerical comparison of inter-spike interval distributions. Application of our algorithm to randomly generated networks improves similarity to the reference spike trains on both of these stated measures. In addition, we generated LIF networks that operate near criticality when trained on critical PIF outputs. Our results establish that learning rules based on synaptic homeostasis can be used to represent input-output relationships in fully recurrent spiking neural networks.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
The Bright and Dark Sides of High-Redshift starburst galaxies from {\it Herschel} and {\it Subaru} observations
We present rest-frame optical spectra from the FMOS-COSMOS survey of twelve $z \sim 1.6$ \textit{Herschel} starburst galaxies, with Star Formation Rate (SFR) elevated by $\times$8, on average, above the star-forming Main Sequence (MS). Comparing the H$\alpha$ to IR luminosity ratio and the Balmer Decrement we find that the optically-thin regions of the sources contain on average only $\sim 10$ percent of the total SFR whereas $\sim90$ percent comes from an extremely obscured component which is revealed only by far-IR observations and is optically-thick even in H$\alpha$. We measure the [NII]$_{6583}$/H$\alpha$ ratio, suggesting that the less obscured regions have a metal content similar to that of the MS population at the same stellar masses and redshifts. However, our objects appear to be metal-rich outliers from the metallicity-SFR anticorrelation observed at fixed stellar mass for the MS population. The [SII]$_{6732}$/[SII]$_{6717}$ ratio from the average spectrum indicates an electron density $n_{\rm e} \sim 1,100\ \mathrm{cm}^{-3}$, larger than what estimated for MS galaxies but only at the 1.5$\sigma$ level. Our results provide supporting evidence that high-$z$ MS outliers are the analogous of local ULIRGs, and are consistent with a major merger origin for the starburst event.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Edge Estimation with Independent Set Oracles
We study the task of estimating the number of edges in a graph with access to only an independent set oracle. Independent set queries draw motivation from group testing and have applications to the complexity of decision versus counting problems. We give two algorithms to estimate the number of edges in an $n$-vertex graph, using (i) $\mathrm{polylog}(n)$ bipartite independent set queries, or (ii) ${n}^{2/3} \cdot\mathrm{polylog}(n)$ independent set queries.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Threshold-activated transport stabilizes chaotic populations to steady states
We explore random scale-free networks of populations, modelled by chaotic Ricker maps, connected by transport that is triggered when the population density in a patch exceeds a critical threshold level. Our central result is that threshold-activated dispersal leads to stable fixed populations, for a wide range of threshold levels. Further, suppression of chaos is facilitated when the threshold-activated migration is more rapid than the intrinsic population dynamics of a patch. Additionally, networks with a large number of nodes open to the environment readily yield stable steady states. Lastly, we demonstrate that in networks with very few open nodes, the degree and betweenness centrality of the node open to the environment have a pronounced influence on control. All qualitative trends are corroborated by quantitative measures, reflecting the efficiency of control and the width of the steady state window.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bounding and Counting Linear Regions of Deep Neural Networks
We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions. In particular, we study the number of linear regions, i.e. pieces, that a PWL function represented by a DNN can attain, both theoretically and empirically. We present (i) tighter upper and lower bounds for the maximum number of linear regions on rectifier networks, which are exact for inputs of dimension one; (ii) a first upper bound for multi-layer maxout networks; and (iii) a first method to perform exact enumeration or counting of the number of regions by modeling the DNN with a mixed-integer linear formulation. These bounds come from leveraging the dimension of the space defining each linear region. The results also indicate that a deep rectifier network can only have more linear regions than every shallow counterpart with same number of neurons if that number exceeds the dimension of the input.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
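An empirical companion to the counting question in the abstract above, for the one-dimensional-input case where the stated bounds are exact: distinct ReLU activation patterns are counted along a fine 1-D grid, which can only undercount the true number of linear regions.

```python
# Count linear regions of a random two-hidden-layer ReLU net on a 1-D input
# by enumerating activation patterns on a fine grid (grid sampling undercounts).
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(20, 1)), rng.normal(size=20)    # first hidden layer
w2, b2 = rng.normal(size=(20, 20)), rng.normal(size=20)   # second hidden layer

x = np.linspace(-10, 10, 100_000).reshape(-1, 1)
h1 = np.maximum(x @ w1.T + b1, 0.0)
h2 = np.maximum(h1 @ w2 + b2, 0.0)
patterns = np.hstack([h1 > 0, h2 > 0])

# Adjacent grid points whose activation patterns differ delimit regions.
changes = (patterns[1:] != patterns[:-1]).any(axis=1).sum()
print("linear regions found:", changes + 1)
```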
The reactive-telegraph equation and a related kinetic model
We study the long-range, long-time behavior of the reactive-telegraph equation and a related reactive-kinetic model. The two problems are equivalent in one spatial dimension. We point out that the reactive-telegraph equation, meant to model a population density, does not preserve positivity in higher dimensions. In view of this, in dimensions larger than one, we consider a reactive-kinetic model and investigate the long-range, long-time limit of the solutions. We provide a general characterization of the speed of propagation and we compute it explicitly in one and two dimensions. We show that a phase transition between parabolic and hyperbolic behavior takes place only in one dimension. Finally, we investigate the hydrodynamic limit of the limiting problem.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Finiteness theorems for K3 surfaces and abelian varieties of CM type
We study abelian varieties and K3 surfaces with complex multiplication defined over number fields of fixed degree. We show that these varieties fall into finitely many isomorphism classes over an algebraic closure of the field of rational numbers. As an application we confirm finiteness conjectures of Shafarevich and Coleman in the CM case. In addition we prove the uniform boundedness of the Galois invariant subgroup of the geometric Brauer group for forms of a smooth projective variety satisfying the integral Mumford--Tate conjecture. When applied to K3 surfaces, this affirms a conjecture of Várilly-Alvarado in the CM case.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Rheology of inelastic hard spheres at finite density and shear rate
Considering a granular fluid of inelastic smooth hard spheres we discuss the conditions delineating the rheological regimes comprising Newtonian, Bagnoldian, shear thinning, and shear thickening behavior. Developing a kinetic theory, valid at finite shear rates and densities around the glass transition density, we predict the viscosity and Bagnold coefficient at practically relevant values of the control parameters. The determination of full flow curves relating the shear stress $\sigma$ to the shear rate $\dot\gamma$, and predictions of the yield stress complete our discussion of granular rheology derived from first principles.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Flash ADC system and PMT waveform reconstruction for the Daya Bay Experiment
To better understand the energy response of the Antineutrino Detector (AD), the Daya Bay Reactor Neutrino Experiment installed a full Flash ADC readout system on one AD that allowed for simultaneous data taking with the current readout system. This paper presents the design, data acquisition, and simulation of the Flash ADC system, and focuses on the PMT waveform reconstruction algorithms. For liquid scintillator calorimetry, the most critical requirement to waveform reconstruction is linearity. Several common reconstruction methods were tested but the linearity performance was not satisfactory. A new method based on the deconvolution technique was developed with 1% residual non-linearity, which fulfills the requirement. The performance was validated with both data and Monte Carlo (MC) simulations, and 1% consistency between them has been achieved.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quasitoric totally normally split representatives in unitary cobordism ring
The present paper generalises the results of Ray and Buchstaber-Ray, Buchstaber-Panov-Ray in unitary cobordism theory. I prove that any class $x\in \Omega^{*}_{U}$ of the unitary cobordism ring contains a quasitoric totally normally and tangentially split manifold.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Interpretable Feature Recommendation for Signal Analytics
This paper presents an automated approach for interpretable feature recommendation for solving signal data analytics problems. The method has been tested by performing experiments on datasets in the domain of prognostics where interpretation of features is considered very important. The proposed approach is based on Wide Learning architecture and provides means for interpretation of the recommended features. It is to be noted that such an interpretation is not available with feature learning approaches like Deep Learning (such as Convolutional Neural Network) or feature transformation approaches like Principal Component Analysis. Results show that the feature recommendation and interpretation techniques are quite effective for the problems at hand in terms of performance and drastic reduction in time to develop a solution. It is further shown by an example, how this human-in-loop interpretation system can be used as a prescriptive system.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Learning One-hidden-layer ReLU Networks via Gradient Descent
We study the problem of learning one-hidden-layer neural networks with Rectified Linear Unit (ReLU) activation function, where the inputs are sampled from standard Gaussian distribution and the outputs are generated from a noisy teacher network. We analyze the performance of gradient descent for training such kind of neural networks based on empirical risk minimization, and provide algorithm-dependent guarantees. In particular, we prove that tensor initialization followed by gradient descent can converge to the ground-truth parameters at a linear rate up to some statistical error. To the best of our knowledge, this is the first work characterizing the recovery guarantee for practical learning of one-hidden-layer ReLU networks with multiple neurons. Numerical experiments verify our theoretical findings.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
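A compact sketch of the teacher-student setting described in the abstract above: standard Gaussian inputs, a noisy one-hidden-layer ReLU teacher, and plain gradient descent on the empirical squared risk. The random initialization here is a simplification; the paper analyzes tensor initialization followed by gradient descent.

```python
# Teacher-student learning of a one-hidden-layer ReLU network with plain
# gradient descent (random init; the paper analyzes tensor initialization).
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 4, 5000
W_teacher = rng.normal(size=(k, d))

X = rng.normal(size=(n, d))                        # standard Gaussian inputs
y = np.maximum(X @ W_teacher.T, 0).sum(1) + 0.1 * rng.normal(size=n)

W = 0.5 * rng.normal(size=(k, d))                  # student parameters
lr = 0.02
for step in range(2001):
    pre = X @ W.T                                  # (n, k) pre-activations
    residual = np.maximum(pre, 0).sum(1) - y       # prediction errors
    grad = ((pre > 0) * residual[:, None]).T @ X / n
    W -= lr * grad
    if step % 500 == 0:
        print(step, float((residual ** 2).mean()))
```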
Exclusion of GNSS NLOS Receptions Caused by Dynamic Objects in Heavy Traffic Urban Scenarios Using Real-Time 3D Point Cloud: An Approach without 3D Maps
Absolute positioning is an essential factor for the arrival of autonomous driving, and a Global Navigation Satellite System (GNSS) receiver provides absolute localization for it. GNSS solutions can provide satisfactory positioning in open or sub-urban areas; however, their performance suffers in super-urbanized areas due to the well-known phenomena of multipath effects and NLOS receptions, which dominate GNSS positioning performance in such areas. The recently proposed 3D map aided (3DMA) GNSS can mitigate most of the multipath effects and NLOS receptions caused by buildings based on 3D city models. However, the same phenomenon caused by moving objects in urban areas is currently not modelled in the 3D geographic information system (GIS). Moving objects with tall height, such as double-decker buses, can also cause NLOS receptions because of the blockage of GNSS signals by the surfaces of the objects. Therefore, we present a novel method to exclude the NLOS receptions caused by double-decker buses in a highly urbanized area, Hong Kong. To estimate the geometry dimension and orientation relative to the GPS receiver, a Euclidean cluster algorithm and a classification method are used to detect the double-decker buses and calculate their relative locations. To increase the accuracy and reliability of the proposed NLOS exclusion method, an NLOS exclusion criterion is proposed to exclude the blocked satellites considering the elevation, signal-to-noise ratio (SNR) and horizontal dilution of precision (HDOP). Finally, GNSS positioning is estimated by the weighted least squares (WLS) method using the remaining satellites after the NLOS exclusion. A static experiment was performed near a double-decker bus stop in Hong Kong, which verified the effectiveness of the proposed method.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Energy Distribution in Intrinsically Coupled Systems: The Spring Pendulum Paradigm
Intrinsically nonlinear coupled systems present different oscillating components that exchange energy among themselves. We present a new approach to deal with such energy exchanges and to investigate how it depends on the system control parameters. The method consists in writing the total energy of the system, and properly identifying the energy terms for each component and, especially, their coupling. To illustrate the proposed approach, we work with the bi-dimensional spring pendulum, which is a paradigm to study nonlinear coupled systems, and is used as a model for several systems. For the spring pendulum, we identify three energy components, resembling the spring and pendulum like motions, and the coupling between them. With these analytical expressions, we analyze the energy exchange for individual trajectories, and we also obtain global characteristics of the spring pendulum energy distribution by calculating spatial and time average energy components for a great number of trajectories (periodic, quasi-periodic and chaotic) throughout the phase space. Considering an energy term due to the nonlinear coupling, we identify regions in the parameter space that correspond to strong and weak coupling. The presented procedure can be applied to nonlinear coupled systems to reveal how the coupling mediates internal energy exchanges, and how the energy distribution varies according to the system parameters.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Cayley deformations of compact complex surfaces
In this article, we consider Cayley deformations of a compact complex surface in a Calabi--Yau four-fold. We will study complex deformations of compact complex submanifolds of Calabi--Yau manifolds with a view to explaining why complex and Cayley deformations of a compact complex surface are the same. We in fact prove that the moduli space of complex deformations of any compact complex embedded submanifold of a Calabi--Yau manifold is a smooth manifold.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Ensemble Inhibition and Excitation in the Human Cortex: an Ising Model Analysis with Uncertainties
The pairwise maximum entropy model, also known as the Ising model, has been widely used to analyze the collective activity of neurons. However, controversy persists in the literature about seemingly inconsistent findings, whose significance is unclear due to lack of reliable error estimates. We therefore develop a method for accurately estimating parameter uncertainty based on random walks in parameter space using adaptive Markov Chain Monte Carlo after the convergence of the main optimization algorithm. We apply our method to the spiking patterns of excitatory and inhibitory neurons recorded with multielectrode arrays in the human temporal cortex during the wake-sleep cycle. Our analysis shows that the Ising model captures neuronal collective behavior much better than the independent model during wakefulness, light sleep, and deep sleep when both excitatory (E) and inhibitory (I) neurons are modeled; ignoring the inhibitory effects of I-neurons dramatically overestimates synchrony among E-neurons. Furthermore, information-theoretic measures reveal that the Ising model explains about 80%-95% of the correlations, depending on sleep state and neuron type. Thermodynamic measures show signatures of criticality, although we take this with a grain of salt as it may be merely a reflection of long-range neural correlations.
0
0
0
0
1
0
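A minimal sketch of the uncertainty-estimation idea: a random-walk Metropolis sampler over the Ising fields h and couplings J, with the partition function computed exactly by enumeration (feasible only for small N). The paper's adaptive proposal tuning is replaced by a fixed step size, and a flat prior is assumed, so this is only a simplified stand-in.

```python
import itertools
import numpy as np
from scipy.special import logsumexp

def log_lik(h, J, data):
    # Exact log-likelihood of +-1 spin patterns (data: shape (n_samples, N));
    # enumerating 2**N states is only feasible for small N.
    states = np.array(list(itertools.product([-1, 1], repeat=len(h))))
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    Ed = data @ h + 0.5 * np.einsum('si,ij,sj->s', data, J, data)
    return Ed.sum() - len(data) * logsumexp(E)

def metropolis(h, J, data, n_steps=5000, step=0.02,
               rng=np.random.default_rng(0)):
    samples, ll = [], log_lik(h, J, data)
    for _ in range(n_steps):
        h2 = h + step * rng.standard_normal(h.shape)
        J2 = np.triu(J + step * rng.standard_normal(J.shape), 1)
        J2 = J2 + J2.T                          # symmetric, zero diagonal
        ll2 = log_lik(h2, J2, data)
        if np.log(rng.random()) < ll2 - ll:     # flat prior: likelihood ratio
            h, J, ll = h2, J2, ll2
        samples.append((h.copy(), J.copy()))
    return samples                              # sample spread gives error bars
```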
Generalized Self-Concordant Functions: A Recipe for Newton-Type Methods
We study the smooth structure of convex functions by generalizing a powerful concept, so-called self-concordance, introduced by Nesterov and Nemirovskii in the early 1990s, to a broader class of convex functions, which we call generalized self-concordant functions. This notion allows us to develop a unified framework for designing Newton-type methods to solve convex optimization problems. The proposed theory provides a mathematical tool to analyze both local and global convergence of Newton-type methods without imposing unverifiable assumptions, as long as the underlying functionals fall into our generalized self-concordant function class. First, we introduce the class of generalized self-concordant functions, which covers standard self-concordant functions as a special case. Next, we establish several properties and key estimates of this function class, which can be used to design numerical methods. Then, we apply this theory to develop several Newton-type methods for solving a class of smooth convex optimization problems involving generalized self-concordant functions. We provide an explicit step size for the damped-step Newton-type scheme which guarantees global convergence without performing any globalization strategy. We also prove local quadratic convergence of this method and of its full-step variant without requiring Lipschitz continuity of the objective Hessian. Then, we extend our results to develop proximal Newton-type methods for a class of composite convex minimization problems involving generalized self-concordant functions. We again achieve both global and local convergence without additional assumptions. Finally, we verify our theoretical results via several numerical examples and compare them with existing methods.
0
0
1
1
0
0
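A minimal sketch of a damped-step Newton method with the classical self-concordant step size t = 1/(1 + lambda_k), where lambda_k is the Newton decrement; the paper derives analogous explicit step sizes for its broader generalized self-concordant class. Logistic loss is used here purely for illustration.

```python
import numpy as np

def damped_newton(A, y, tol=1e-8, max_iter=100):
    """A: (n,d) design matrix, y: (n,) labels in {0,1}."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(A @ x)))          # logistic predictions
        grad = A.T @ (p - y) / n
        H = (A.T * (p * (1 - p))) @ A / n + 1e-10 * np.eye(d)
        step = np.linalg.solve(H, grad)
        lam = np.sqrt(grad @ step)                  # Newton decrement
        if lam < tol:
            break
        x -= step / (1.0 + lam)                     # damped step, no line search
    return x
```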
On Joint Functional Calculus For Ritt Operators
In this paper, we study joint functional calculus for commuting $n$-tuple of Ritt operators. We provide an equivalent characterisation of boundedness for joint functional calculus for Ritt operators on $L^p$-spaces, $1< p<\infty$. We also investigate joint similarity problem and joint bounded functional calculus on non-commutative $L^p$-spaces for $n$-tuple of Ritt operators. We get our results by proving a suitable multivariable transfer principle between sectorial and Ritt operators as well as an appropriate joint dilation result in a general setting.
0
0
1
0
0
0
Controlling plasmon modes and damping in buckled two-dimensional material open systems
Full ranges of both hybrid plasmon-mode dispersions and their damping are studied systematically by our recently developed mean-field theory in open systems involving a conducting substrate and a two-dimensional (2D) material with a buckled honeycomb lattice, such as silicene, germanene, and group IV dichalcogenides. In this hybrid system, the single plasmon mode for a free-standing 2D layer is split into one acoustic-like and one optical-like mode, leading to a dramatic change in the damping of plasmon modes. In comparison with gapped graphene, critical features associated with plasmon modes and damping in silicene and molybdenum disulfide are found with various spin-orbit and lattice asymmetry energy bandgaps, doping types and levels, and coupling strengths between 2D materials and the conducting substrate. The obtained damping dependence on both spin and valley degrees of freedom is expected to facilitate measuring the open-system dielectric property and the spin-orbit coupling strength of individual 2D materials. The unique linear dispersion of the acoustic-like plasmon mode introduces additional damping from the intraband particle-hole modes which is absent for a free-standing 2D material layer, and the use of molybdenum disulfide with a large bandgap simultaneously suppresses the strong damping from the interband particle-hole modes.
0
1
0
0
0
0
A high-order nonconservative approach for hyperbolic equations in fluid dynamics
It is well known, thanks to the Lax-Wendroff theorem, that local conservation of a numerical scheme for a conservative hyperbolic system is a simple and systematic way to guarantee that, if stable, the scheme provides a sequence of solutions converging to a weak solution of the continuous problem. In [1], it is shown that a nonconservative scheme will not provide a good solution. The question of nevertheless using a nonconservative formulation of the system and still getting the correct solution has been a long-standing debate. In this paper, we show how to get a relevant weak solution from a pressure-based formulation of the Euler equations of fluid mechanics. This is useful when dealing with nonlinear equations of state, because it is easier to compute the internal energy from the pressure than the opposite. It also makes it possible to get oscillation-free solutions, contrary to classical conservative methods. An extension to multiphase flows is also discussed, as well as a multidimensional extension.
0
0
1
0
0
0
Improving Resilience of Autonomous Moving Platforms by Real Time Analysis of Their Cooperation
Environmental changes, failures, collisions, or even terrorist attacks can cause serious malfunctions of delivery systems. We present a novel approach to improving the resilience of Autonomous Moving Platforms (AMPs). The approach is based on multi-level state diagrams describing environmental trigger specifications, movement actions, and synchronization primitives. The upper-level diagrams allow us to model advanced interactions between autonomous AMPs and detect irregularities such as deadlocks and live-locks. Techniques are presented to verify and analyze combined AMP behaviors using model checking. The described system, the Dedan verifier, is still under development. In the near future, a graphical form of verified system representation is planned.
1
0
0
0
0
0
Power series expansions for the planar monomer-dimer problem
We compute the free energy of the planar monomer-dimer model. Unlike the classical planar dimer model, an exact solution is not known in this case. Even the computation of the low-density power series expansion requires heavy and nontrivial computations. Despite the exponential computational complexity, we compute almost three times more terms than were previously known. Such an expansion provides both lower and upper bounds for the free energy, and allows us to obtain more accurate numerical values than previously possible. We expect that our methods can be applied to other similar problems.
1
0
0
0
0
0
Neural Networks Compression for Language Modeling
In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g., LSTM-based networks in language modeling, are characterized by either high space complexity or substantial inference time. This problem is especially crucial for mobile applications, in which constant interaction with a remote server is inappropriate. Using the Penn Treebank (PTB) dataset, we compare pruning, quantization, low-rank factorization, and tensor train decomposition for LSTM networks in terms of model size and suitability for fast inference.
1
0
0
1
0
0
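A minimal sketch of two of the compared techniques, applied to a generic weight matrix: magnitude pruning and uniform k-bit quantization. Shapes and sparsity levels are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def prune_by_magnitude(W, sparsity=0.9):
    """Zero out the smallest-magnitude weights; keep (1 - sparsity) of them."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

def quantize_uniform(W, bits=8):
    """Uniform quantization to 2**bits levels over the weight range."""
    lo, hi = W.min(), W.max()
    levels = 2**bits - 1
    q = np.round((W - lo) / (hi - lo) * levels)
    return lo + q * (hi - lo) / levels

W = np.random.randn(650, 2600)            # e.g. one stacked LSTM gate block
W_sparse = prune_by_magnitude(W, 0.9)     # ~10x fewer nonzero weights
W_8bit = quantize_uniform(W, 8)           # 4x smaller than float32 storage
```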
Relativistic effects in the non-resonant two-photon K-shell ionization of neutral atoms
Relativistic effects in the non-resonant two-photon K-shell ionization of neutral atoms are studied theoretically within the framework of second-order perturbation theory. The non-relativistic results are compared with the relativistic calculations in the dipole and no-pair approximations as well as with the complete relativistic approach. The calculations are performed in both velocity and length gauges. Our results show a significant decrease of the total cross section for heavy atoms as compared to the non-relativistic treatment, which is mainly due to the relativistic wavefunction contraction. The effects of higher multipoles and negative continuum energy states counteract the relativistic contraction contribution, but are generally much weaker. While the effects beyond the dipole approximation are equally important in both gauges, the inclusion of negative continuum energy states visibly contributes to the total cross section only in the velocity gauge.
0
1
0
0
0
0
Local Okounkov bodies and limits in prime characteristic
This article is concerned with the asymptotic behavior of certain sequences of ideals in rings of prime characteristic. These sequences, which we call $p$-families of ideals, are ubiquitous in prime characteristic commutative algebra (e.g., they occur naturally in the theories of tight closure, Hilbert-Kunz multiplicity, and $F$-signature). We associate to each $p$-family of ideals an object in Euclidean space that is analogous to the Newton-Okounkov body of a graded family of ideals, which we call a $p$-body. Generalizing the methods used to establish volume formulas for the Hilbert-Kunz multiplicity and $F$-signature of semigroup rings, we relate the volume of a $p$-body to a certain asymptotic invariant determined by the corresponding $p$-family of ideals. We apply these methods to obtain new existence results for limits in positive characteristic, an analogue of the Brunn-Minkowski theorem for Hilbert-Kunz multiplicity, and a uniformity result concerning the positivity of a $p$-family.
0
0
1
0
0
0
Two-Dimensional Systolic Complexes Satisfy Property A
We show that 2-dimensional systolic complexes are quasi-isometric to quadric complexes with flat intervals. We use this fact along with the weight function of Brodzki, Campbell, Guentner, Niblo and Wright to prove that 2-dimensional systolic complexes satisfy Property A.
0
0
1
0
0
0
Application of the Computer Capacity to the Analysis of Processors Evolution
The notion of computer capacity was proposed in 2012, and this quantity has been estimated for computers of different kinds. In this paper we show that, when designing new processors, the manufacturers change the parameters that affect the computer capacity. This allows us to predict the values of parameters of future processors. As the main example we use Intel processors, due to the accessibility of detailed descriptions of all their technical characteristics.
1
0
0
0
0
0
On absolutely normal numbers and their discrepancy estimate
We construct the base $2$ expansion of an absolutely normal real number $x$ so that, for every integer $b$ greater than or equal to $2$, the discrepancy modulo $1$ of the sequence $(b^0 x, b^1 x, b^2 x , \ldots)$ is essentially the same as that realized by almost all real numbers.
1
0
1
0
0
0
Multispectral computational ghost imaging with multiplexed illumination
Computational ghost imaging is a robust and compact system that has drawn wide attentions over the last two decades. Multispectral imaging possesses spatial and spectral resolving abilities, is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate lights of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to red, green, blue colored information, respectively) and random patterns. The results of simulation and experiment have verified that our method can be effective to recover the colored object. Our method has two major advantages: one is that the binary encoded matrices as cipher keys can protect the security of private contents; the other is that multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of the data acquisition.
0
1
0
0
0
0
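A minimal simulation sketch of the scheme: each projected pattern is the product of a random binary pattern and one of three fixed color-encoding masks, a single bucket value is recorded per shot, and each channel is recovered from the same bucket sequence by correlation. The encoding masks, object, and reconstruction rule are simplified stand-ins for the paper's method, and cross-talk between channels is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
obj = {c: rng.random((H, W)) for c in 'rgb'}          # hypothetical colored object
enc = {c: rng.integers(0, 2, (H, W)) for c in 'rgb'}  # binary color-encoding keys

M = 8000
acc = {c: np.zeros((H, W)) for c in 'rgb'}            # running mean of y_i * P_i
pm = {c: np.zeros((H, W)) for c in 'rgb'}             # running pattern mean
ym = 0.0
for _ in range(M):
    base = rng.integers(0, 2, (H, W)).astype(float)   # fresh random pattern
    pats = {c: base * enc[c] for c in 'rgb'}
    y = sum((pats[c] * obj[c]).sum() for c in 'rgb')  # one bucket value per shot
    ym += y / M
    for c in 'rgb':
        acc[c] += y * pats[c] / M
        pm[c] += pats[c] / M

# per-channel correlation estimate <y P> - <y><P>
recon = {c: acc[c] - ym * pm[c] for c in 'rgb'}
```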
Combined MEG and fMRI Exponential Random Graph Modeling for inferring functional Brain Connectivity
Connectomes estimated by means of neuroimaging techniques have enriched our knowledge of the organizational properties of the brain, leading to the development of network-based clinical diagnostics. Unfortunately, to date, many of those network-based diagnostic tools, based on the mere description of isolated instances of observed connectomes, are noisy estimates of the true connectivity network. Modeling brain connectivity networks is therefore important to better explain the functional organization of the brain and to allow inference of specific brain properties. In this report, we present pilot results on the modeling of combined MEG and fMRI neuroimaging data acquired during an n-back memory task experiment. We adopted a pooled Exponential Random Graph Model (ERGM) as a network statistical model to capture the underlying process in functional brain networks, using MEG and fMRI data from 9 of 32 subjects in a 0-back vs. 2-back memory task experiment. Our results provide strong evidence that the functional connectomes of all 9 subjects have small-world properties. A group-level pairwise comparison of the conditions showed no significant difference in the functional connectomes across subjects. Our pooled ERGMs successfully reproduced important brain properties such as functional segregation and functional integration. However, the ERGMs reproducing the functional segregation of the brain networks discriminated between the 0-back and 2-back conditions, while the models reproducing both properties failed to discriminate between the conditions. Our results are promising and would gain robustness with a larger sample size. Nevertheless, our pilot results tend to support previous findings that functional segregation and integration are sufficient to statistically reproduce the main properties of brain networks.
0
0
0
0
1
0
An updated Type II supernova Hubble diagram
We present photometry and spectroscopy of nine Type II-P/L supernovae (SNe) with redshifts in the 0.045 < z < 0.335 range, with a view to re-examining their utility as distance indicators. Specifically, we apply the expanding photosphere method (EPM) and the standardized candle method (SCM) to each target, and find that both methods yield distances that are in reasonable agreement with each other. The current record-holder for the highest-redshift spectroscopically confirmed SN II-P is PS1-13bni (z = 0.335 +0.009 -0.012), and illustrates the promise of Type II SNe as cosmological tools. We updated existing EPM and SCM Hubble diagrams by adding our sample to those previously published. Within the context of Type II SN distance measuring techniques, we investigated two related questions. First, we explored the possibility of utilising spectral lines other than the traditionally used Fe II 5169 to infer the photospheric velocity of SN ejecta. Using local well-observed objects, we derive an epoch-dependent relation between the strong Balmer line and Fe II 5169 velocities that is applicable 30 to 40 days post-explosion. Motivated in part by the continuum of key observables such as rise time and decline rates exhibited from II-P to II-L SNe, we assessed the possibility of using Hubble-flow Type II-L SNe as distance indicators. These yield distances similar to those from the Type II-P SNe. Although these initial results are encouraging, a significantly larger sample of SNe II-L would be required to draw definitive conclusions.
0
1
0
0
0
0
Digital Identity: The Effect of Trust and Reputation Information on User Judgement in the Sharing Economy
The Sharing Economy (SE) is a growing ecosystem focusing on peer-to-peer enterprise. In the SE the information available to assist individuals (users) in making decisions focuses predominantly on community generated trust and reputation information. However, how such information impacts user judgement is still being understood. To explore such effects, we constructed an artificial SE accommodation platform where we varied the elements related to hosts' digital identity, measuring users' perceptions and decisions to interact. Across three studies, we find that trust and reputation information increases not only the users' perceived trustworthiness, credibility, and sociability of hosts, but also the propensity to rent a private room in their home. This effect is seen when providing users both with complete profiles and profiles with partial user-selected information. Closer investigations reveal that three elements relating to the host's digital identity are sufficient to produce such positive perceptions and increased rental decisions, regardless of which three elements are presented. Our findings have relevant implications for human judgment and privacy in the SE, and question its current culture of ever increasing information-sharing.
1
0
0
0
0
0
Can Deep Clinical Models Handle Real-World Domain Shifts?
The hypothesis that computational models can be reliable enough to be adopted in prognosis and patient care is revolutionizing healthcare. Deep learning, in particular, has been a game changer in building predictive models, thereby leading to community-wide data curation efforts. However, due to the inherent variabilities in population characteristics and biological systems, these models are often biased to the training datasets. This can be limiting when models are deployed in new environments, particularly when there are systematic domain shifts not known a priori. In this paper, we formalize these challenges by emulating a large class of domain shifts that can occur in clinical settings, and argue that evaluating the behavior of predictive models in light of those shifts is an effective way of quantifying the reliability of clinical models. More specifically, we develop an approach for building challenging scenarios, based on analysis of \textit{disease landscapes}, and utilize unsupervised domain adaptation to compensate for the domain shifts. Using the openly available MIMIC-III EHR dataset for phenotyping, we generate a large class of scenarios and evaluate the ability of deep clinical models in those cases. For the first time, our work sheds light on data regimes where deep clinical models can fail to generalize, due to significant changes in the disease landscapes between the source and target landscapes. This study emphasizes the need for sophisticated evaluation mechanisms driven by real-world domain shifts to build effective AI solutions for healthcare.
0
0
0
1
0
0
Fabrication of grain boundary junctions using NdFeAs(O,F) superconducting thin films
We report on the growth of NdFeAs(O,F) thin films on [001]-tilt MgO bicrystal substrates with misorientation angle theta_GB=6°, 12°, 24° and 45°, and their inter- and intra-grain transport properties. X-ray diffraction study confirmed that all our NdFeAs(O,F) films are epitaxially grown on the MgO bicrystals. The theta_GB dependence of the inter-grain critical current density Jc shows that, unlike Co-doped BaFe2As2 and Fe(Se,Te), its decay with theta_GB is rather significant. As a possible reason for this result, fluorine may have diffused preferentially to the grain boundary region and eroded the crystal structure.
0
1
0
0
0
0
Panchromatic Hubble Andromeda Treasury XVIII. The High-mass Truncation of the Star Cluster Mass Function
We measure the mass function for a sample of 840 young star clusters with ages between 10-300 Myr observed by the Panchromatic Hubble Andromeda Treasury (PHAT) survey in M31. The data show clear evidence of a high-mass truncation: only 15 clusters more massive than $10^4$ $M_{\odot}$ are observed, compared to $\sim$100 expected for a canonical $M^{-2}$ pure power-law mass function with the same total number of clusters above the catalog completeness limit. Adopting a Schechter function parameterization, we fit a characteristic truncation mass of $M_c = 8.5^{+2.8}_{-1.8} \times 10^3$ $M_{\odot}$. While previous studies have measured cluster mass function truncations, the characteristic truncation mass we measure is the lowest ever reported. Combining this M31 measurement with previous results, we find that the cluster mass function truncation correlates strongly with the characteristic star formation rate surface density of the host galaxy, where $M_c \propto$ $\langle \Sigma_{\mathrm{SFR}} \rangle^{\sim1.1}$. We also find evidence that suggests the observed $M_c$-$\Sigma_{\mathrm{SFR}}$ relation also applies to globular clusters, linking the two populations via a common formation pathway. If so, globular cluster mass functions could be useful tools for constraining the star formation properties of their progenitor host galaxies in the early Universe.
0
1
0
0
0
0
Cohomology and overconvergence for representations of powers of Galois groups
We show that the Galois cohomology groups of $p$-adic representations of a direct power of $\operatorname{Gal}(\overline{\mathbb{Q}_p}/\mathbb{Q}_p)$ can be computed via the generalization of Herr's complex to multivariable $(\varphi,\Gamma)$-modules. Using Tate duality and a pairing for multivariable $(\varphi,\Gamma)$-modules we extend this to analogues of the Iwasawa cohomology. We show that all $p$-adic representations of a direct power of $\operatorname{Gal}(\overline{\mathbb{Q}_p}/\mathbb{Q}_p)$ are overconvergent and, moreover, passing to overconvergent multivariable $(\varphi,\Gamma)$-modules is an equivalence of categories. Finally, we prove that the overconvergent Herr complex also computes the Galois cohomology groups.
0
0
1
0
0
0
Global algorithms for maximal eigenpair
This paper is a continuation of [cmf16], where an efficient algorithm for computing the maximal eigenpair was introduced first for tridiagonal matrices and then extended to the irreducible matrices with nonnegative off-diagonal elements. This paper introduces two global algorithms for computing the maximal eigenpair in a rather general setup, including even a class of real (with some negative off-diagonal elements) or complex matrices.
0
0
1
1
0
0
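For reference, a sketch of the textbook baseline for this problem: a shifted power iteration computing the maximal eigenpair of an irreducible matrix with nonnegative off-diagonal entries. The paper's contribution is a far better iteration and initial guess; this is only the standard method it improves upon.

```python
import numpy as np

def maximal_eigenpair(A, tol=1e-10, max_iter=10000):
    """A: square matrix with nonnegative off-diagonal entries."""
    s = 1.0 + max(0.0, -A.diagonal().min())  # shift so B is nonnegative, positive diagonal
    B = A + s * np.eye(A.shape[0])
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = v @ B @ v
    for _ in range(max_iter):
        w = B @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ B @ v_new          # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            lam, v = lam_new, v_new
            break
        lam, v = lam_new, v_new
    return lam - s, v                        # undo the shift
```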
Entanglement and entropy production in coupled single-mode Bose-Einstein condensates
We investigate the time evolution of the entanglement entropy of coupled single-mode Bose-Einstein condensates in a double well potential at $T=0$ temperature, by combining numerical results with analytical approximations. We find that the coherent oscillations of the condensates result in entropy oscillations on the top of a linear entropy generation at short time scales. Due to dephasing, the entropy eventually saturates to a stationary value, in spite of the lack of equilibration. We show that this long time limit of the entropy reflects the semiclassical dynamics of the system, revealing the self-trapping phase transition of the condensates at large interaction strength by a sudden entropy jump. We compare the stationary limit of the entropy to the prediction of a classical microcanonical ensemble, and find surprisingly good agreement in spite of the non-equilibrium state of the system. Our predictions should be experimentally observable on a Bose-Einstein condensate in a double well potential or on a two-component condensate with inter-state coupling.
0
1
0
0
0
0
Sparse Poisson Regression with Penalized Weighted Score Function
We propose a new penalized method to solve sparse Poisson regression problems. Unlike $\ell_1$ penalized log-likelihood estimation, our new method can be viewed as a penalized weighted score function method. We show that under mild conditions our estimator is $\ell_1$ consistent and the tuning parameter can be pre-specified, sharing the same desirable property as the square-root Lasso.
0
0
1
1
0
0
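For context, a sketch of the standard comparator in this setting: $\ell_1$ penalized Poisson log-likelihood estimation solved by proximal gradient descent. The paper's estimator penalizes a weighted score function instead; this snippet only sets up the baseline it is contrasted with, with a fixed step size assumed for brevity.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_poisson(X, y, lam, lr=1e-3, n_iter=5000):
    """X: (n,d) design matrix, y: (n,) counts, lam: l1 penalty level."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # Poisson mean under the log link
        grad = X.T @ (mu - y) / n             # negative log-likelihood gradient
        beta = soft_threshold(beta - lr * grad, lr * lam)
    return beta
```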
Bubble size statistics during reionization from 21-cm tomography
The upcoming SKA1-Low radio interferometer will be sensitive enough to produce tomographic imaging data of the redshifted 21-cm signal from the Epoch of Reionization. Due to the non-Gaussian distribution of the signal, a power spectrum analysis alone will not provide a complete description of its properties. Here, we consider an additional metric which could be derived from tomographic imaging data, namely the bubble size distribution of ionized regions. We study three methods that have previously been used to characterize bubble size distributions in simulation data for the hydrogen ionization fraction - the spherical-average, mean-free-path and friends-of-friends methods - and apply them to simulated 21-cm data cubes. Our simulated data cubes have the (sensitivity-dictated) resolution expected for the SKA1-Low reionization experiment and we study the impact of both the light-cone and redshift space distortion effects. To identify ionized regions in the 21-cm data we introduce a new, self-adjusting thresholding approach based on the K-Means algorithm. We find that the fraction of ionized cells identified in this way consistently falls below the mean volume-averaged ionized fraction. From a comparison of the three bubble size methods, we conclude that all three methods are useful, but that the mean-free-path method performs best in terms of tracking the progress of reionization and separating different reionization scenarios. The light-cone effect is found to affect data spanning more than about 10~MHz in frequency ($\Delta z\sim0.5$). We find that redshift space distortions only marginally affect the bubble size distributions.
0
1
0
0
0
0
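A minimal sketch of two ingredients described above: a self-adjusting two-cluster (K-Means-style) threshold for labeling ionized cells in a data cube, and a simplified mean-free-path bubble size estimator. Rays here are axis-aligned in a periodic box, and light-cone and redshift-space effects are omitted; all of this is an illustrative reduction of the paper's procedure.

```python
import numpy as np

def kmeans_threshold(cube, n_iter=50):
    """1-D two-means on pixel values; returns a boolean 'ionized' mask."""
    c0, c1 = cube.min(), cube.max()
    for _ in range(n_iter):
        t = 0.5 * (c0 + c1)
        c0, c1 = cube[cube < t].mean(), cube[cube >= t].mean()
    return cube < 0.5 * (c0 + c1)            # True = ionized (low 21-cm signal)

def mean_free_paths(ion, n_rays=20000, rng=np.random.default_rng(1)):
    """March rays along +x from random ionized cells to the first neutral cell."""
    idx = np.argwhere(ion)
    lengths = []
    for x, y, z in idx[rng.integers(0, len(idx), n_rays)]:
        d = 0
        while ion[(x + d) % ion.shape[0], y, z]:
            d += 1
            if d >= ion.shape[0]:            # fully ionized row: stop
                break
        lengths.append(d)
    return np.array(lengths)                 # histogram = bubble size distribution
```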
Robust Stochastic Configuration Networks with Kernel Density Estimation
Neural networks have been widely used as predictive models to fit data distributions, and they are implemented by learning from a collection of samples. In many applications, however, the given dataset may contain noisy samples or outliers, which may result in a learner model with poor generalization. This paper contributes a development of robust stochastic configuration networks (RSCNs) for resolving uncertain data regression problems. RSCNs are built on original stochastic configuration networks, with a weighted least squares method for evaluating the output weights; the input weights and biases are incrementally and randomly generated subject to a set of inequality constraints. The kernel density estimation (KDE) method is employed to set the penalty weight for each training sample, so that some negative impacts on the resulting learner model caused by noisy data or outliers can be reduced. An alternating optimization technique is applied to update the RSCN model with improved penalty weights computed from the kernel density estimation function. Performance evaluation is carried out on a function approximation task, four benchmark datasets, and a case study on an engineering application. Comparisons with other robust randomised neural modelling techniques, including the probabilistic robust learning algorithm for neural networks with random weights and improved RVFL networks, indicate that the proposed RSCNs with KDE perform favourably and demonstrate good potential for real-world applications.
1
0
0
1
0
0
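A minimal sketch of the KDE-reweighted output-weight update: hidden features come from fixed random input weights, a kernel density estimate over the residuals sets the per-sample penalty weights (outliers fall in low-density tails and get down-weighted), and the output weights are refit by weighted least squares. The incremental node-adding constraints of stochastic configuration networks are omitted, and the toy data is assumed.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(300)
y[::25] += 2.0                                    # inject outliers

W_in = rng.uniform(-2, 2, (1, 50))
b = rng.uniform(-1, 1, 50)
H = np.tanh(X @ W_in + b)                         # fixed random hidden features

beta = np.linalg.lstsq(H, y, rcond=None)[0]       # unweighted initial fit
for _ in range(5):                                # alternating optimization
    resid = y - H @ beta
    w = gaussian_kde(resid)(resid)                # density of each residual
    w /= w.max()                                  # low density -> low weight
    Hw = H * w[:, None]
    beta = np.linalg.solve(Hw.T @ H + 1e-8 * np.eye(50), Hw.T @ y)
```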
Density of the spectrum of Jacobi matrices with power asymptotics
We consider Jacobi matrices $J$ whose parameters have the power asymptotics $\rho_n=n^{\beta_1} \left( x_0 + \frac{x_1}{n} + {\rm O}(n^{-1-\epsilon})\right)$ and $q_n=n^{\beta_2} \left( y_0 + \frac{y_1}{n} + {\rm O}(n^{-1-\epsilon})\right)$ for the off-diagonal and diagonal, respectively. We show that for $\beta_1 > \beta_2$, or $\beta_1=\beta_2$ and $2x_0 > |y_0|$, the matrix $J$ is in the limit circle case and the convergence exponent of its spectrum is $1/\beta_1$. Moreover, we obtain upper and lower bounds for the upper density of the spectrum. When the parameters of the matrix $J$ have a power asymptotic with one more term, we characterise the occurrence of the limit circle case completely (including the exceptional case $\lim_{n\to \infty} |q_n|\big/ \rho_n = 2$) and determine the convergence exponent in almost all cases.
0
0
1
0
0
0
Modeling of a self-sustaining ignition in a solid energetic material
In the present work we analyze some necessary conditions for ignition of solid energetic materials by the low velocity impact ignition mechanism. Based on reported results of {\it ab initio} computations, we assume that the energetic activation barriers for the primary endothermic dissociation in some energetic materials may be locally lowered due to the effect of shear strain caused by the impact. We show that the ignition may be initiated in regions with the reduced activation barriers, even at moderately low exothermicity of the subsequent exothermic reactions, thus suggesting that the above regions may serve as "hot spots" for the ignition. We apply our results to analyze initial steps of ignition in DADNE and TATB molecular crystals.
0
1
0
0
0
0
Election Bias: Comparing Polls and Twitter in the 2016 U.S. Election
While the polls have been the most trusted source for election predictions for decades, in the recent presidential election they were called inaccurate and biased. How inaccurate were the polls in this election and can social media beat the polls as an accurate election predictor? Polls from several news outlets and sentiment analysis on Twitter data were used, in conjunction with the results of the election, to answer this question and outline further research on the best method for predicting the outcome of future elections.
1
0
0
0
0
0
Versality of the relative Fukaya category
Seidel introduced the notion of a Fukaya category `relative to an ample divisor', explained that it is a deformation of the Fukaya category of the affine variety that is the complement of the divisor, and showed how the relevant deformation theory is controlled by the symplectic cohomology of the complement. We elaborate on Seidel's definition of the relative Fukaya category, and give a criterion under which the deformation is versal.
0
0
1
0
0
0
Hot Phonon and Carrier Relaxation in Si(100) Determined by Transient Extreme Ultraviolet Spectroscopy
The thermalization of hot carriers and phonons gives direct insight into the scattering processes that mediate electrical and thermal transport. Obtaining the scattering rates for both hot carriers and phonons currently requires multiple measurements with incommensurate timescales. Here, transient extreme-ultraviolet (XUV) spectroscopy on the silicon 2p core level at 100 eV is used to measure hot carrier and phonon thermalization in Si(100) from tens of femtoseconds to 200 ps following photoexcitation of the indirect transition to the {\Delta} valley at 800 nm. The ground state XUV spectrum is first theoretically predicted using a combination of a single plasmon pole model and the Bethe-Salpeter equation (BSE) with density functional theory (DFT). The excited state spectrum is predicted by incorporating the electronic effects of photo-induced state-filling, broadening, and band-gap renormalization into the ground state XUV spectrum. A time-dependent lattice deformation and expansion is also required to describe the excited state spectrum. The kinetics of these structural components match the kinetics of phonons excited from the electron-phonon and phonon-phonon scattering processes following photoexcitation. Separating the contributions of electronic and structural effects on the transient XUV spectra allows the carrier population, the population of phonons involved in inter- and intra-valley electron-phonon scattering, and the population of phonons involved in phonon-phonon scattering to be quantified as a function of delay time.
0
1
0
0
0
0
Counting Quasi-Idempotent Irreducible Integral Matrices
Given any polynomial $p$ in $C[X]$, we show that the set of irreducible matrices satisfying $p(A)=0$ is finite. In the specific case $p(X)=X^2-nX$, we count the number of irreducible matrices in this set and analyze the arising sequences and their asymptotics. Such matrices turn out to be related to generalized compositions and generalized partitions.
0
0
1
0
0
0
On sound-based interpretation of neonatal EEG
Significant training is required to visually interpret neonatal EEG signals. This study explores alternative sound-based methods for EEG interpretation which are designed to allow for intuitive and quick differentiation between healthy background activity and abnormal activity such as seizures. A novel method based on frequency and amplitude modulation (FM/AM) is presented. The algorithm is tuned to facilitate the audio-domain perception of rhythmic activity which is specific to neonatal seizures. The method is compared with the previously developed phase vocoder algorithm for different time-compression factors. A survey is conducted amongst a cohort of non-EEG experts to quantitatively and qualitatively examine the performance of sound-based methods in comparison with visual interpretation. It is shown that both sonification methods perform similarly well, with smaller inter-observer variability than visual interpretation. A post-survey analysis of results is performed by examining the sensitivity of the ear to frequency evolution in audio.
0
0
0
1
1
0
OAuthGuard: Protecting User Security and Privacy with OAuth 2.0 and OpenID Connect
Millions of users routinely use Google to log in to websites supporting OAuth 2.0 or OpenID Connect; the security of OAuth 2.0 and OpenID Connect is therefore of critical importance. As revealed in previous studies, in practice RPs often implement OAuth 2.0 incorrectly, and so many real-world OAuth 2.0 and OpenID Connect systems are vulnerable to attack. However, users of such flawed systems are typically unaware of these issues, and so are at risk of attacks which could result in unauthorised access to the victim user's account at an RP. In order to address this threat, we have developed OAuthGuard, an OAuth 2.0 and OpenID Connect vulnerability scanner and protector, that works with RPs using Google OAuth 2.0 and OpenID Connect services. It protects user security and privacy even when RPs do not implement OAuth 2.0 or OpenID Connect correctly. We used OAuthGuard to survey the 1000 top-ranked websites supporting Google sign-in for the possible presence of five OAuth 2.0 or OpenID Connect security and privacy vulnerabilities, of which one has not previously been described in the literature. Of the 137 sites in our study that employ Google Sign-in, 69 were found to suffer from at least one serious vulnerability. OAuthGuard was able to protect user security and privacy for 56 of these 69 RPs, and for the other 13 was able to warn users that they were using an insecure implementation.
1
0
0
0
0
0
Aspiration dynamics generate robust predictions in structured populations
Evolutionary game dynamics in structured populations are strongly affected by updating rules. Previous studies usually focus on imitation-based rules, which rely on payoff information of social peers. Recent behavioral experiments suggest that whether individuals use such social information for strategy updating may be crucial to the outcomes of social interactions. This hints at the importance of considering updating rules without dependence on social peers' payoff information, which, however, is rarely investigated. Here, we study aspiration-based self-evaluation rules, with which individuals self-assess the performance of strategies by comparing own payoffs with an imaginary value they aspire, called the aspiration level. We explore the fate of strategies on population structures represented by graphs or networks. Under weak selection, we analytically derive the condition for strategy dominance, which is found to coincide with the classical condition of risk-dominance. This condition holds for all networks and all distributions of aspiration levels, and for individualized ways of self-evaluation. Our condition can be intuitively interpreted: one strategy prevails over the other if the strategy brings more satisfaction to individuals than the other does. Our work thus sheds light on the intrinsic difference between evolutionary dynamics induced by aspiration-based and imitation-based rules.
0
0
0
0
1
0
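A minimal simulation sketch of aspiration-based self-evaluation dynamics on a graph: each agent plays a 2x2 game with all neighbors, compares its average payoff with an aspiration level, and switches strategy with a probability given by a Fermi function of the shortfall. The payoff matrix, graph, and parameter values are illustrative assumptions.

```python
import numpy as np
import networkx as nx

G = nx.random_regular_graph(4, 100, seed=0)
A = np.array([[3.0, 0.0],          # row player payoffs: C vs C, C vs D
              [5.0, 1.0]])         # D vs C, D vs D (strategies: 0=C, 1=D)
rng = np.random.default_rng(0)
s = rng.integers(0, 2, 100)        # initial strategies
aspiration, beta = 2.0, 1.0        # aspiration level, selection intensity

for _ in range(2000):
    i = rng.integers(100)
    payoff = np.mean([A[s[i], s[j]] for j in G.neighbors(i)])
    # dissatisfied agents (payoff below aspiration) are likely to switch
    p_switch = 1.0 / (1.0 + np.exp(-beta * (aspiration - payoff)))
    if rng.random() < p_switch:
        s[i] = 1 - s[i]

print("final fraction of cooperators:", 1 - s.mean())
```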
Memory Efficient Experience Replay for Streaming Learning
In supervised machine learning, an agent is typically trained once and then deployed. While this works well for static settings, robots often operate in changing environments and must quickly learn new things from data streams. In this paradigm, known as streaming learning, a learner is trained online, in a single pass, from a data stream that cannot be assumed to be independent and identically distributed (iid). Streaming learning will cause conventional deep neural networks (DNNs) to fail for two reasons: 1) they need multiple passes through the entire dataset; and 2) non-iid data will cause catastrophic forgetting. An old fix to both of these issues is rehearsal. To learn a new example, rehearsal mixes it with previous examples, and then this mixture is used to update the DNN. Full rehearsal is slow and memory intensive because it stores all previously observed examples, and its effectiveness for preventing catastrophic forgetting has not been studied in modern DNNs. Here, we describe the ExStream algorithm for memory efficient rehearsal and compare it to alternatives. We find that full rehearsal can eliminate catastrophic forgetting in a variety of streaming learning settings, with ExStream performing well using far less memory and computation.
0
0
0
1
0
0
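A minimal sketch of memory-bounded rehearsal: each stream item enters a fixed-capacity per-class buffer, and on overflow the two closest stored vectors are merged into their average, a prototype-merging rule in the spirit of ExStream (details here are simplified assumptions). Each training step then mixes the new example with replayed memories.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

class RehearsalBuffer:
    """Fixed-capacity per-class memory; overflow merges the nearest pair."""

    def __init__(self, capacity=50):
        self.capacity, self.store = capacity, {}

    def add(self, x, label):
        buf = self.store.setdefault(label, [])
        buf.append(np.asarray(x, dtype=float))
        if len(buf) > self.capacity:
            D = squareform(pdist(np.array(buf)))
            np.fill_diagonal(D, np.inf)
            i, j = np.unravel_index(D.argmin(), D.shape)
            buf[i] = 0.5 * (buf[i] + buf[j])   # average the closest pair
            buf.pop(j)

    def rehearsal_batch(self, x, label, rng, n_replay=16):
        mem = [(v, c) for c, b in self.store.items() for v in b]
        if not mem:
            return np.array([x]), np.array([label])
        picks = [mem[k] for k in rng.integers(0, len(mem), n_replay)]
        Xs = np.array([x] + [v for v, _ in picks])
        ys = np.array([label] + [c for _, c in picks])
        return Xs, ys       # one mixed mini-batch per stream item
```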
New integrable semi-discretizations of the coupled nonlinear Schrodinger equations
We have undertaken an algorithmic search for new integrable semi-discretizations of physically relevant nonlinear partial differential equations. The search is performed by using a compatibility condition for the discrete Lax operators and symbolic computations. We have discovered a new integrable system of coupled nonlinear Schrodinger equations which combines elements of the Ablowitz-Ladik lattice and the triangular-lattice ribbon studied by Vakhnenko. We show that the continuum limit of the new integrable system is given by uncoupled complex modified Korteweg-de Vries equations and uncoupled nonlinear Schrodinger equations.
0
1
1
0
0
0
Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations
Non-negative matrix factorization is a basic tool for decomposing data into the feature and weight matrices under non-negativity constraints, and in practice is often solved in the alternating minimization framework. However, it is unclear whether such algorithms can recover the ground-truth feature matrix when the weights for different features are highly correlated, which is common in applications. This paper proposes a simple and natural alternating gradient descent based algorithm, and shows that with a mild initialization it provably recovers the ground-truth in the presence of strong correlations. In most interesting cases, the correlation can be in the same order as the highest possible. Our analysis also reveals its several favorable features including robustness to noise. We complement our theoretical results with empirical studies on semi-synthetic datasets, demonstrating its advantage over several popular methods in recovering the ground-truth.
1
0
0
1
0
0
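A minimal sketch of alternating (projected) gradient descent for NMF: gradient steps on the feature matrix A and weight matrix W in turn, each followed by projection onto the non-negative orthant. The step size and random initialization are illustrative, not the paper's provable initialization scheme.

```python
import numpy as np

def agd_nmf(Y, r, n_iter=500, lr=1e-3, rng=np.random.default_rng(0)):
    """Factor Y (m x n) as A @ W with A >= 0 (m x r) and W >= 0 (r x n)."""
    m, n = Y.shape
    A = np.abs(rng.standard_normal((m, r)))
    W = np.abs(rng.standard_normal((r, n)))
    for _ in range(n_iter):
        R = A @ W - Y                            # residual
        A = np.maximum(A - lr * (R @ W.T), 0.0)  # gradient step + projection
        R = A @ W - Y
        W = np.maximum(W - lr * (A.T @ R), 0.0)
    return A, W
```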
Bayesian mean-variance analysis: Optimal portfolio selection under parameter uncertainty
The paper solves the problem of optimal portfolio choice when the parameters of the asset returns distribution, like the mean vector and the covariance matrix, are unknown and have to be estimated using historical data of the asset returns. The new approach employs the Bayesian posterior predictive distribution, which is the distribution of the future realization of the asset returns given the observable sample. The parameters of the posterior predictive distributions are functions of the observed data values and, consequently, the solution of the optimization problem is expressed in terms of data only and does not depend on unknown quantities. In contrast, the optimization problem of the traditional approach is based on unknown quantities which are estimated in a second step, leading to a suboptimal solution. We also derive a very useful stochastic representation of the posterior predictive distribution whose application not only leads to the solution of the considered optimization problem, but also provides the posterior predictive distribution of the optimal portfolio return used to construct a prediction interval. A Bayesian efficient frontier, a set of optimal portfolios obtained by employing the posterior predictive distribution, is constructed as well. Theoretically and using real data we show that the Bayesian efficient frontier outperforms the sample efficient frontier, a common estimator of the set of optimal portfolios known to be overoptimistic.
0
0
0
0
0
1
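A minimal sketch of posterior predictive mean-variance weights under a standard diffuse prior: the predictive mean is the sample mean and the predictive covariance is the sample covariance inflated by a factor accounting for parameter uncertainty. The inflation factor below follows the usual diffuse-prior multivariate-t result; the paper's derivation and frontier construction are more general, and the toy data is assumed.

```python
import numpy as np

def bayes_mv_weights(R, gamma=5.0):
    """R: (n, k) matrix of historical returns; gamma: risk aversion."""
    n, k = R.shape
    mu = R.mean(axis=0)                          # predictive mean
    S = np.cov(R, rowvar=False)
    c = (1 + 1 / n) * (n - 1) / (n - k - 2)      # predictive covariance inflation
    Sigma = c * S                                # requires n > k + 2
    return np.linalg.solve(Sigma, mu) / gamma    # unconstrained MV weights

rng = np.random.default_rng(0)
R = rng.multivariate_normal([0.05, 0.03, 0.04],
                            np.diag([0.04, 0.02, 0.03]), size=120)
print(bayes_mv_weights(R))
```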
Selective inference after likelihood- or test-based model selection in linear models
Statistical inference after model selection requires an inference framework that takes the selection into account in order to be valid. Following recent work on selective inference, we derive analytical expressions for inference after likelihood- or test-based model selection for linear models.
0
0
0
1
0
0
Sampling and Reconstruction of Graph Signals via Weak Submodularity and Semidefinite Relaxation
We study the problem of sampling a bandlimited graph signal in the presence of noise, where the objective is to select a node subset of prescribed cardinality that minimizes the signal reconstruction mean squared error (MSE). To that end, we formulate the task at hand as the minimization of MSE subject to binary constraints, and approximate the resulting NP-hard problem via semidefinite programming (SDP) relaxation. Moreover, we provide an alternative formulation based on maximizing a monotone weak submodular function and propose a randomized-greedy algorithm to find a sub-optimal subset. We then derive a worst-case performance guarantee on the MSE returned by the randomized greedy algorithm for general non-stationary graph signals. The efficacy of the proposed methods is illustrated through numerical simulations on synthetic and real-world graphs. Notably, the randomized greedy algorithm yields an order-of-magnitude speedup over state-of-the-art greedy sampling schemes, while incurring only a marginal MSE performance loss.
1
0
0
1
0
0
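A minimal sketch of the randomized-greedy selection loop: at each step a random subset of the remaining nodes is scored, and the candidate giving the smallest reconstruction-MSE proxy is added. U is assumed to hold the first r graph Laplacian eigenvectors (the bandlimited basis) with iid noise of variance sigma2; the candidate fraction and regularization are illustrative.

```python
import numpy as np

def randomized_greedy(U, k, sigma2=0.1, cand_frac=0.3,
                      rng=np.random.default_rng(0)):
    """Select k sampling nodes for a bandlimited signal in the span of U."""
    n, r = U.shape
    S = []
    for _ in range(k):
        rest = [v for v in range(n) if v not in S]
        cand = rng.choice(rest, max(1, int(cand_frac * len(rest))),
                          replace=False)
        best, best_mse = None, np.inf
        for v in cand:
            rows = U[S + [v]]
            # MSE proxy: trace of the LS estimator's error covariance
            M = rows.T @ rows / sigma2 + 1e-9 * np.eye(r)
            mse = np.trace(np.linalg.inv(M))
            if mse < best_mse:
                best, best_mse = int(v), mse
        S.append(best)
    return S
```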
DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images
We present DLTK, a toolkit providing baseline implementations for efficient experimentation with deep learning methods on biomedical images. It builds on top of TensorFlow and its high modularity and easy-to-use examples allow for a low-threshold access to state-of-the-art implementations for typical medical imaging problems. A comparison of DLTK's reference implementations of popular network architectures for image segmentation demonstrates new top performance on the publicly available challenge data "Multi-Atlas Labeling Beyond the Cranial Vault". The average test Dice similarity coefficient of $81.5$ exceeds the previously best performing CNN ($75.7$) and the accuracy of the challenge winning method ($79.0$).
1
0
0
0
0
0
A homotopy theory of Nakaoka twin cotorsion pairs
We show that the Verdier quotients can be realized as subfactors by the homotopy theory of additive categories with suspensions developed in \cite{ZWLi2, ZWLi3}. As applications, we develop the homotopy theory of Nakaoka twin cotorsion pairs of triangulated categories and prove that Iyama-Yoshino triangulated subfactors are Verdier quotients under suitable conditions.
0
0
1
0
0
0
Categorical relations between Langlands dual quantum affine algebras: Doubly laced types
We prove that the Grothendieck ring of the category $\mathcal{C}^{(t)}_Q$ over the quantum affine algebra $U_q'(\g^{(t)})$ $(t=1,2)$ associated to each Dynkin quiver $Q$ of finite type $A_{2n-1}$ (resp. $D_{n+1}$) is isomorphic to that of the category $\mathcal{C}_{\mQ}$ over the Langlands dual $U_q'({^L}\g^{(2)})$ of $U_q'(\g^{(2)})$ associated to any twisted adapted class $[\mQ]$ of $A_{2n-1}$ (resp. $D_{n+1}$). These results provide partial answers to conjectures of Frenkel-Hernandez on Langlands duality for finite-dimensional representations of quantum affine algebras.
0
0
1
0
0
0
Strategic Dynamic Pricing with Network Effects
We study the optimal pricing strategy of a monopolist selling homogeneous goods to customers over multiple periods. The customers choose their time of purchase to maximize their payoff that depends on their valuation of the product, the purchase price, and the utility they derive from past purchases of others, termed the network effect. We first show that the optimal price sequence is non-decreasing. Therefore, by postponing purchase to future rounds, customers trade-off a higher utility from the network effects with a higher price. We then show that a customer's equilibrium strategy can be characterized by a threshold rule in which at each round a customer purchases the product if and only if her valuation exceeds a certain threshold. This implies that customers face an inference problem regarding the valuations of others, i.e., observing that a customer has not yet purchased the product, signals that her valuation is below a threshold. We consider a block model of network interactions, where there are blocks of buyers subject to the same network effect. A natural benchmark, this model allows us to provide an explicit characterization of the optimal price sequence asymptotically as the number of agents goes to infinity, which notably is linearly increasing in time with a slope that depends on the network effect through a scalar given by the sum of entries of the inverse of the network weight matrix. Our characterization shows that increasing the "imbalance" in the network defined as the difference between the in and out degree of the nodes increases the revenue of the monopolist. We further study the effects of price discrimination and show that in earlier periods monopolist offers lower prices to blocks with higher Bonacich centrality to encourage them to purchase, which in turn further incentivizes other customers to buy in subsequent periods.
1
0
0
0
0
0
Semisimple Leibniz algebras and their derivations and automorphisms
The present paper is devoted to the description of finite-dimensional semisimple Leibniz algebras over complex numbers, their derivations and automorphisms.
0
0
1
0
0
0
Vandermonde Matrices with Nodes in the Unit Disk and the Large Sieve
We derive bounds on the extremal singular values and the condition number of NxK, with N>=K, Vandermonde matrices with nodes in the unit disk. The mathematical techniques we develop to prove our main results are inspired by a link---first established by Selberg [1] and later extended by Moitra [2]---between the extremal singular values of Vandermonde matrices with nodes on the unit circle and large sieve inequalities. Our main conceptual contribution lies in establishing a connection between the extremal singular values of Vandermonde matrices with nodes in the unit disk and a novel large sieve inequality involving polynomials in z \in C with |z|<=1. Compared to Bazán's upper bound on the condition number [3], which, to the best of our knowledge, constitutes the only analytical result available in the literature on the condition number of Vandermonde matrices with nodes in the unit disk, our bound not only takes a much simpler form, but is also sharper for certain node configurations. Moreover, our bound can be evaluated consistently in a numerically stable fashion, whereas the evaluation of Bazán's bound requires the solution of a linear system of equations which has the same condition number as the Vandermonde matrix under consideration and can therefore lead to numerical instability in practice. As a byproduct, our result---when particularized to the case of nodes on the unit circle---slightly improves upon the Selberg-Moitra bound.
1
0
1
0
0
0
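A quick numerical check of the object studied here: the extremal singular values and condition number of an N x K Vandermonde matrix with nodes in the closed unit disk. The node placement is an arbitrary assumption for illustration.

```python
import numpy as np

def vandermonde_cond(nodes, N):
    """Build the N x K matrix V[n, k] = nodes[k]**n and return its extremal
    singular values and condition number."""
    V = np.vander(nodes, N, increasing=True).T
    s = np.linalg.svd(V, compute_uv=False)
    return s[0], s[-1], s[0] / s[-1]

rng = np.random.default_rng(0)
K, N = 8, 64
radii = rng.uniform(0.8, 1.0, K)                 # nodes strictly inside the disk
phases = np.exp(2j * np.pi * rng.random(K))
smax, smin, cond = vandermonde_cond(radii * phases, N)
print(f"sigma_max={smax:.3f}, sigma_min={smin:.3e}, cond={cond:.3e}")
```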
High-buckled R3 stanene with topologically nontrivial energy gap
Stanene has been predicted to be a two-dimensional topological insulator (2DTI). Its low-buckled atomic geometry and enhanced spin-orbit coupling are expected to cause a prominent quantum spin Hall (QSH) effect. However, most of the experimentally grown stanene to date displays a metallic state without a real gap, possibly due to chemical coupling with the substrate and the stress applied by the substrate. Here, we demonstrate an efficient way of tuning the atomic buckling in stanene to open a topologically nontrivial energy gap. By tuning the growth kinetics, we obtain not only the low-buckled 1x1 stanene but also an unexpected high-buckled R3xR3 stanene on the Bi(111) substrate. A scanning tunneling microscopy (STM) study combined with density functional theory (DFT) calculations confirms that the R3xR3 stanene is a distorted 1x1 structure with a high-buckled Sn in every three 1x1 unit cells. The high-buckled R3xR3 stanene favors a large band inversion at the {\Gamma} point, and the spin-orbit coupling opens a topologically nontrivial energy gap. The existence of edge states, verified in both STM measurements and DFT calculations, further confirms the topology of the R3xR3 stanene. This study provides an alternative way to tune the topology of monolayer 2DTI materials.
0
1
0
0
0
0