Dataset schema (column name, type, observed range):
- ID: int64, 1 to 21k
- TITLE: string, 7 to 239 characters
- ABSTRACT: string, 7 to 2.76k characters
- Topic flags (int64, 0 or 1): Computer Science, Physics, Mathematics, Statistics, Quantitative Biology, Quantitative Finance
Each record below gives the ID, title, and abstract, followed by a "Labels:" line listing the topic flags set to 1.
20,201
Pitfalls and Best Practices in Algorithm Configuration
Good parameter settings are crucial to achieve high performance in many areas of artificial intelligence (AI), such as propositional satisfiability solving, AI planning, scheduling, and machine learning (in particular deep learning). Automated algorithm configuration methods have recently received much attention in the AI community since they replace tedious, irreproducible and error-prone manual parameter tuning and can lead to new state-of-the-art performance. However, practical applications of algorithm configuration are prone to several (often subtle) pitfalls in the experimental design that can render the procedure ineffective. We identify several common issues and propose best practices for avoiding them. As one possibility for automatically handling as many of these as possible, we also propose a tool called GenericWrapper4AC.
Labels: Computer Science
20,202
Critical behaviors in contagion dynamics
We study the critical behavior of a general contagion model where nodes are either active (e.g. with opinion A, or functioning) or inactive (e.g. with opinion B, or damaged). The transitions between these two states are determined by (i) spontaneous transitions independent of the neighborhood, (ii) transitions induced by neighboring nodes and (iii) spontaneous reverse transitions. The resulting dynamics is extremely rich, including limit cycles and random phase switching. We derive a unifying mean-field theory. Specifically, we analytically show that the critical behavior of systems whose dynamics is governed by processes (i)-(iii) can only exhibit three distinct regimes: (a) uncorrelated spontaneous transition dynamics, (b) contact process dynamics, and (c) cusp catastrophes. This ends a long-standing debate on the universality classes of complex contagion dynamics in mean-field and substantially deepens its mathematical understanding.
Labels: Physics, Mathematics
20,203
Machine Learning in Appearance-based Robot Self-localization
An appearance-based robot self-localization problem is considered in the machine learning framework. The appearance space is composed of all possible images, which can be captured by a robot's visual system under all robot localizations. Using recent manifold learning and deep learning techniques, we propose a new geometrically motivated solution based on training data consisting of a finite set of images captured in known locations of the robot. The solution includes estimation of the robot localization mapping from the appearance space to the robot localization space, as well as estimation of the inverse mapping for modeling visual image features. The latter allows solving the robot localization problem as the Kalman filtering problem.
Labels: Computer Science, Statistics
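Since this abstract casts localization as a Kalman filtering problem, a minimal sketch may help make that step concrete. Everything below (a 2D constant-velocity motion model, the noise levels, the assumption that the learned appearance-to-location mapping supplies noisy position fixes) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

# Hypothetical setup: state s = [x, y, vx, vy]; the learned localization
# mapping decodes each camera image into a noisy position measurement z.
dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # constant-velocity motion model
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # we observe position only
Q = 0.01 * np.eye(4)                            # process noise (assumed)
R = 0.25 * np.eye(2)                            # measurement noise (assumed)

def kalman_step(s, P, z):
    """One predict/update cycle of the standard linear Kalman filter."""
    s_pred, P_pred = F @ s, F @ P @ F.T + Q          # predict
    y = z - H @ s_pred                               # innovation
    S = H @ P_pred @ H.T + R                         # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
    return s_pred + K @ y, (np.eye(4) - K @ H) @ P_pred

s, P = np.zeros(4), np.eye(4)
for z in [np.array([0.10, 0.00]), np.array([0.21, 0.04])]:  # toy position fixes
    s, P = kalman_step(s, P, z)
print(s)  # filtered position/velocity estimate
```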
20,204
Towards Communication-Aware Robust Topologies
We currently witness the emergence of interesting new network topologies optimized towards the traffic matrices they serve, such as demand-aware datacenter interconnects (e.g., ProjecToR) and demand-aware overlay networks (e.g., SplayNets). This paper introduces a formal framework and approach to reason about and design such topologies. We leverage a connection between the communication frequency of two nodes and the path length between them in the network, which depends on the entropy of the communication matrix. Our main contribution is a novel robust, yet sparse, family of network topologies which guarantee an expected path length that is proportional to the entropy of the communication patterns.
Labels: Computer Science
20,205
Zero-shot Domain Adaptation without Domain Semantic Descriptors
We propose a method to infer domain-specific models such as classifiers for unseen domains, from which no data are given in the training phase, without domain semantic descriptors. When training and test distributions are different, standard supervised learning methods perform poorly. Zero-shot domain adaptation attempts to alleviate this problem by inferring models that generalize well to unseen domains by using training data in multiple source domains. Existing methods use observed semantic descriptors characterizing domains such as time information to infer the domain-specific models for the unseen domains. However, it cannot always be assumed that such metadata can be used in real-world applications. The proposed method can infer appropriate domain-specific models without any semantic descriptors by introducing the concept of latent domain vectors, which are latent representations for the domains and are used for inferring the models. The latent domain vector for the unseen domain is inferred from the set of the feature vectors in the corresponding domain, which is given in the testing phase. The domain-specific models consist of two components: the first is for extracting a representation of a feature vector to be predicted, and the second is for inferring model parameters given the latent domain vector. The posterior distributions of the latent domain vectors and the domain-specific models are parametrized by neural networks, and are optimized by maximizing the variational lower bound using stochastic gradient descent. The effectiveness of the proposed method was demonstrated through experiments using one regression and two classification tasks.
Labels: Statistics
20,206
Network modelling of topological domains using Hi-C data
Genome-wide chromosome conformation capture techniques such as Hi-C enable the generation of 3D genome contact maps and offer new pathways toward understanding the spatial organization of the genome. One specific feature of the 3D organization is known as topologically associating domains (TADs), which are densely interacting, contiguous chromatin regions playing important roles in regulating gene expression. A few algorithms have been proposed to detect TADs. In particular, the structure of Hi-C data naturally inspires application of community detection methods. However, one of the drawbacks of community detection is that most methods take exchangeability of the nodes in the network for granted, whereas the nodes in this case, i.e. the positions on the chromosomes, are not exchangeable. We propose a network model for detecting TADs using Hi-C data that takes into account this non-exchangeability. In addition, our model explicitly makes use of cell-type specific CTCF binding sites as biological covariates and can be used to identify conserved TADs across multiple cell types. The model leads to a likelihood objective that can be efficiently optimized via relaxation. We also prove that when suitably initialized, this model finds the underlying TAD structure with high probability. Using simulated data, we show the advantages of our method and the caveats of popular community detection methods, such as spectral clustering, in this application. Applying our method to real Hi-C data, we demonstrate that the domains identified have desirable epigenetic features and compare them across different cell types. The code is available upon request.
Labels: Statistics
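To make the exchangeability caveat concrete, the sketch below applies off-the-shelf spectral clustering (the popular baseline the abstract criticizes, not the paper's model) to a synthetic Hi-C-like contact matrix; the matrix construction and all parameters are invented for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Toy contact matrix: three "TADs" as dense diagonal blocks plus background noise.
sizes = [30, 40, 30]
n = sum(sizes)
A = rng.poisson(1.0, size=(n, n)).astype(float)
start = 0
for s in sizes:
    A[start:start + s, start:start + s] += rng.poisson(8.0, size=(s, s))
    start += s
A = (A + A.T) / 2  # contact maps are symmetric

labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(A)

# The caveat: spectral clustering treats positions as exchangeable, so nothing
# forces each recovered cluster to be a contiguous stretch of the chromosome.
for k in range(3):
    idx = np.where(labels == k)[0]
    print(k, "contiguous" if np.all(np.diff(idx) == 1) else "not contiguous")
```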
20,207
On a Possible Giant Impact Origin for the Colorado Plateau
It is proposed and substantiated that an extraterrestrial object of the approximate size and mass of the planet Mars, impacting the Earth at an oblique angle along an approximately NE-SW route (with respect to the current orientation of the North American continent) around 750 million years ago (750 Ma), is likely to be the direct cause of a chain of events which led to the rifting of the Rodinia supercontinent and the severing of the foundation of the Colorado Plateau from its surrounding craton. It is further argued that the impactor most likely originated as a rogue exoplanet produced during one of the past crossings of our Solar System through the Galactic spiral arms in its orbital motion around the center of the Milky Way Galaxy. Recent work has shown that the sites of galactic spiral arms are locations of density-wave collisionless shocks. The perturbations from such shocks are known to lead to the formation of massive stars, which evolve quickly and die as supernovae. The blastwaves from supernova explosions, in addition to the collisionless shocks at the spiral arms, can perturb the orbits of the streaming disk matter, occasionally producing rogue exoplanets that can reach the inner confines of our Solar System. The similarity of the period of spiral-arm crossings of our Solar System to the period of major extinction events in the Phanerozoic Eon of the Earth's history, as well as to the period of the supercontinent cycle (the so-called Wilson Cycle), indicates that the global environment of the Milky Way Galaxy may have played a major role in initiating Earth's past tectonic activities.
Labels: Physics
20,208
Reversing Parallel Programs with Blocks and Procedures
We show how to reverse a while language extended with blocks, local variables, procedures and the interleaving parallel composition. Annotation is defined along with a set of operational semantics capable of storing necessary reversal information, and identifiers are introduced to capture the interleaving order of an execution. Inversion is defined with a set of operational semantics that use saved information to undo an execution. We prove that annotation does not alter the behaviour of the original program, and that inversion correctly restores the initial program state.
Labels: Computer Science
20,209
Detection of the Stellar Intracluster Medium in Perseus (Abell 426)
Hubble Space Telescope photometry from the ACS/WFC and WFPC2 cameras is used to detect and measure globular clusters (GCs) in the central region of the rich Perseus cluster of galaxies. A detectable population of intragalactic GCs is found extending out to at least 500 kpc from the cluster center. These objects display luminosity and color (metallicity) distributions that are entirely normal for GC populations. Extrapolating from the limited spatial coverage of the HST fields, we estimate very roughly that the entire Perseus cluster should contain ~50000 or more IGCs, but a targeted wide-field survey will be needed for a more definitive answer. Separate brief results are presented for the rich GC systems in NGC 1272 and NGC 1275, the two largest Perseus ellipticals. For NGC 1272 we find a specific frequency S_N = 8, while for the central giant NGC 1275, S_N ~ 12. In both these giant galaxies, the GC colors are well matched by bimodal distributions, with the majority in the blue (metal-poor) component. This preliminary study suggests that Perseus is a prime target for a more comprehensive deep imaging survey of intragalactic GCs.
Labels: Physics
20,210
Binets: fundamental building blocks for phylogenetic networks
Phylogenetic networks are a generalization of evolutionary trees that are used by biologists to represent the evolution of organisms which have undergone reticulate evolution. Essentially, a phylogenetic network is a directed acyclic graph having a unique root in which the leaves are labelled by a given set of species. Recently, some approaches have been developed to construct phylogenetic networks from collections of networks on 2 and 3 leaves, which are known as binets and trinets, respectively. Here we study in more depth properties of collections of binets, one of the simplest possible types of networks into which a phylogenetic network can be decomposed. More specifically, we show that if a collection of level-1 binets is compatible with some binary network, then it is also compatible with a binary level-1 network. Our proofs are based on useful structural results concerning lowest stable ancestors in networks. In addition, we show that, although the binets do not determine the topology of the network, they do determine the number of reticulations in the network, which is one of its most important parameters. We also consider algorithmic questions concerning binets. We show that deciding whether an arbitrary set of binets is compatible with some network is at least as hard as the well-known Graph Isomorphism problem. However, if we restrict to level-1 binets, it is possible to decide in polynomial time whether there exists a binary network that displays all the binets. We also show that finding a network that displays a maximum number of the binets is NP-hard, but that there exists a simple polynomial-time 1/3-approximation algorithm for this problem. It is hoped that these results will eventually assist in the development of new methods for constructing phylogenetic networks from collections of smaller networks.
Labels: Computer Science, Mathematics
20,211
G-Deformations of maps into projective space
$G$-deformability of maps into projective space is characterised by the existence of certain Lie algebra valued 1-forms. This characterisation gives a unified way to obtain well known results regarding deformability in different geometries.
Labels: Mathematics
20,212
Cloudless atmospheres for young low-gravity substellar objects
Atmospheric modeling of low-gravity (VL-G) young brown dwarfs remains a challenge. The presence of very thick clouds has been suggested because of their extremely red near-infrared (NIR) spectra, but no cloud models provide a good fit to the data with a radius compatible with evolutionary models for these objects. We show that cloudless atmospheres assuming a temperature gradient reduction caused by fingering convection provide a very good model matching the observed VL-G NIR spectra. The sequence of extremely red colors in the NIR for atmospheres with effective temperature from ~2000 K down to ~1200 K is very well reproduced with predicted radii typical of young low-gravity objects. Future observations with NIRSPEC and MIRI on the James Webb Space Telescope (JWST) will provide more constraints in the mid-infrared, helping to confirm or refute whether the NIR reddening is caused by fingering convection. We suggest that the presence or absence of clouds will be directly determined by the silicate absorption features that can be observed with MIRI. JWST will therefore be able to better characterize the atmosphere of these hot young brown dwarfs and their low-gravity exoplanet analogues.
Labels: Physics
20,213
On the State of the Art of Evaluation in Neural Language Models
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
Labels: Computer Science
20,214
Weakly Supervised Audio Source Separation via Spectrum Energy Preserved Wasserstein Learning
Separating audio mixtures into individual instrument tracks has been a long-standing challenge. We introduce a novel weakly supervised audio source separation approach based on deep adversarial learning. Specifically, our loss function adopts the Wasserstein distance, which directly measures the distribution distance between the separated sources and the real sources for each individual source. Moreover, a global regularization term is added to fulfill the spectrum energy preservation property regardless of separation. Unlike state-of-the-art weakly supervised models, which often involve deliberately devised constraints or careful model selection, our approach needs little prior model specification on the data and can be learned straightforwardly in an end-to-end fashion. We show that the proposed method performs competitively on public benchmarks against state-of-the-art weakly supervised methods.
Labels: Computer Science
20,215
Proof of a conjecture of Kløve on permutation codes under the Chebychev distance
Let $d$ be a positive integer and $x$ a real number. Let $A_{d, x}$ be a $d\times 2d$ matrix with entries $$ a_{i,j}=\left\{ \begin{array}{ll} x & \mbox{for}\ 1\leqslant j\leqslant d+1-i,\\ 1 & \mbox{for}\ d+2-i\leqslant j\leqslant d+i,\\ 0 & \mbox{for}\ d+1+i\leqslant j\leqslant 2d. \end{array} \right. $$ Further, let $R_d$ be the set of integer sequences $$R_d=\{(\rho_1, \rho_2,\ldots, \rho_d) \mid 1\leqslant \rho_i\leqslant d+i,\ 1\leqslant i \leqslant d,\ \mbox{and}\ \rho_r\neq \rho_s\ \mbox{for}\ r\neq s\},$$ and define $$\Omega_d(x)=\sum_{\rho\in R_d}a_{1,\rho_1}a_{2, \rho_2}\cdots a_{d,\rho_d}.$$ In order to give a better bound on the size of spheres of permutation codes under the Chebychev distance, Kløve introduced the above function and conjectured that $$\Omega_d(x)=\sum_{m=0}^d{d\choose m}(m+1)^d(x-1)^{d-m}.$$ In this paper, we settle this conjecture in the affirmative.
Labels: Computer Science
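Because the abstract states every definition explicitly, the conjectured identity can be sanity-checked numerically. The brute-force sketch below enumerates $R_d$ directly, so it is only feasible for small $d$; it is a check of the statement, not the paper's proof.

```python
from itertools import permutations
from math import comb

def omega_bruteforce(d, x):
    """Sum of a_{1,rho_1} * ... * a_{d,rho_d} over all admissible sequences rho."""
    def a(i, j):  # entries of A_{d,x}, 1-indexed as in the abstract
        if j <= d + 1 - i:
            return x
        return 1 if j <= d + i else 0
    total = 0
    for rho in permutations(range(1, 2 * d + 1), d):       # distinct rho_i
        if all(rho[k] <= d + k + 1 for k in range(d)):     # rho_i <= d + i
            p = 1
            for k in range(d):
                p *= a(k + 1, rho[k])
            total += p
    return total

def omega_closed(d, x):
    return sum(comb(d, m) * (m + 1) ** d * (x - 1) ** (d - m) for m in range(d + 1))

for d in range(1, 5):
    for x in (0, 1, 2, 5):
        assert omega_bruteforce(d, x) == omega_closed(d, x)
print("identity verified for d <= 4")
```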
20,216
Herschel observations of the Galactic HII region RCW 79
Triggered star formation around HII regions could be an important process. The Galactic HII region RCW 79 is a prototypical object for triggered high-mass star formation. We take advantage of Herschel data from the surveys HOBYS, "Evolution of Interstellar Dust", and Hi-Gal to extract compact sources in this region, complemented with archival 2MASS, Spitzer, and WISE data to determine the physical parameters of the sources (e.g., envelope mass, dust temperature, and luminosity) by fitting the spectral energy distribution. We obtained a sample of 50 compact sources, 96% of which are situated in the ionization-compressed layer of cold and dense gas that is characterized by the column density PDF with a double-peaked lognormal distribution. The 50 sources have sizes of 0.1-0.4 pc with a typical value of 0.2 pc, temperatures of 11-26 K, envelope masses of 6-760 $M_\odot$, densities of 0.1-44 $\times$ $10^5$ cm$^{-3}$, and luminosities of 19-12712 $L_\odot$. The sources are classified into 16 class 0, 19 intermediate, and 15 class I objects. Their distribution follows the evolutionary tracks in the diagram of bolometric luminosity versus envelope mass (Lbol-Menv) well. A mass threshold of 140 $M_\odot$, determined from the Lbol-Menv diagram, yields 12 candidate massive dense cores that may form high-mass stars. The core formation efficiency (CFE) for the 8 massive condensations shows an increasing trend with density. This suggests that the denser the condensation, the higher the fraction of its mass transformed into dense cores, as previously observed in other high-mass star-forming regions.
Labels: Physics
20,217
Software-Defined Robotics -- Idea & Approach
Software-Defined Robotics is a hierarchical, stand-alone framework that can be designed and implemented to program and control different sets of robots with unified commands and communications, regardless of the manufacturers' parameters and specifications. This approach increases the capability of (re)programming a specific group of robots during runtime without affecting the others, as desired in critical missions and industrial operations. It also expands the shared bandwidth, enhances the reusability of code, leverages computational processing power, and reduces the need to analyze the vast supplemental electrical components of each robot, while taking advantage of state-of-the-art industrial trends in cloud-based computing, Virtual Machines (VM), and Robot-as-a-Service (RaaS) technologies.
Labels: Computer Science
20,218
Stability interchanges in a curved Sitnikov problem
We consider a curved Sitnikov problem, in which an infinitesimal particle moves on a circle under the gravitational influence of two equal masses in Keplerian motion within a plane perpendicular to that circle. There are two equilibrium points, whose stability we study. We show that one of the equilibrium points undergoes stability interchanges as the semi-major axis of the Keplerian ellipses approaches the diameter of that circle. To derive this result, we first formulate and prove a general theorem on stability interchanges, and then we apply it to our model. The motivation for our model resides with the $n$-body problem in spaces of constant curvature.
Labels: Physics, Mathematics
20,219
Hardy Spaces over Half-strip Domains
We define Hardy spaces $H^p(\Omega_\pm)$ on the half-strip domains $\Omega_+$ and $\Omega_-= \mathbb{C}\setminus\overline{\Omega_+}$, where $0<p<\infty$, and prove that functions in $H^p(\Omega_\pm)$ have non-tangential boundary limits a.e. on $\Gamma$, the common boundary of $\Omega_\pm$. We then prove that the Cauchy integral of a function in $L^p(\Gamma)$ lies in $H^p(\Omega_\pm)$, where $1<p<\infty$; that is, the Cauchy transform is bounded. Moreover, if $1\leqslant p<\infty$, then $H^p(\Omega_\pm)$ functions are the Cauchy integrals of their non-tangential boundary limits. We also establish an isomorphism between $H^p(\Omega_\pm)$ and $H^p(\mathbb{C}_\pm)$, the classical Hardy spaces over the upper and lower half complex planes.
Labels: Mathematics
20,220
An Optimization Framework with Flexible Inexact Inner Iterations for Nonconvex and Nonsmooth Programming
In recent years, numerous vision and learning tasks have been (re)formulated as nonconvex and nonsmooth programming problems (NNPs). Although some algorithms have been proposed for particular problems, designing fast and flexible optimization schemes with theoretical guarantees for general NNPs remains challenging. It has been observed that performing inexact inner iterations often benefits particular applications, case by case, but the convergence behavior of such schemes is still unclear. Motivated by these practical experiences, this paper designs a novel algorithmic framework, named the inexact proximal alternating direction method (IPAD), for solving general NNPs. We demonstrate that a wide range of numerical algorithms can be incorporated into IPAD for solving the subproblems, and that the convergence of the resulting hybrid schemes can be consistently guaranteed by a series of simple error conditions. Beyond the theoretical guarantees, numerical experiments on both synthetic and real-world data further demonstrate the superiority and flexibility of the IPAD framework for practical use.
Labels: Computer Science, Mathematics
20,221
Attitude and angular velocity tracking for a rigid body using geometric methods on the two-sphere
The control task of tracking a reference pointing direction (the attitude about the pointing direction is irrelevant) while obtaining a desired angular velocity (PDAV) around the pointing direction is addressed here using geometric techniques. Existing geometric controllers developed on the two-sphere only address tracking of a reference pointing direction while driving the angular velocity about the pointing direction to zero. In this paper, a tracking controller on the two-sphere able to address the PDAV control task is developed globally in a geometric framework, avoiding problems related to other attitude representations, such as unwinding (quaternions) or singularities (Euler angles). An attitude error function is constructed, resulting in a control system with desired tracking performance for rotational maneuvers with large initial attitude/angular-velocity errors and the ability to negotiate bounded modeling inaccuracies. The tracking ability of the developed control system is evaluated by comparing its performance with an existing geometric controller on the two-sphere and by numerical simulations, showing improved performance for large initial attitude errors, smooth transitions between desired angular velocities, and the ability to negotiate bounded modeling inaccuracies.
Labels: Computer Science, Mathematics
20,222
Miscomputation in software: Learning to live with errors
Computer programs do not always work as expected. In fact, ominous warnings about the desperate state of the software industry continue to be released with almost ritualistic regularity. In this paper, we look at the 60-year history of programming and at the different practical methods that the software community has developed for living with programming errors. We do so by observing a class of students discussing different approaches to programming errors. While learning about the different methods for dealing with errors, we uncover the basic assumptions that proponents of the different paradigms follow. We learn about the mathematical attempt to eliminate errors through formal methods, the scientific method based on testing, a way of building reliable systems through engineering methods, as well as an artistic approach to live coding that accepts errors as creative inspiration. This way, we can explore the differences and similarities among the different paradigms. By inviting proponents of different methods into a single discussion, we hope to open up the potential for new thinking about errors. When should we use which of the approaches? And what can software development learn from mathematics, science, engineering, and art? When programming or studying programming, we are often enclosed in small communities and take our basic assumptions for granted. Through the discussion in this paper, we attempt to map the large and rich space of programming ideas and provide reference points for exploring, perhaps foreign, ideas that can challenge some of our assumptions.
Labels: Computer Science
20,223
The Combinatorics of Weighted Vector Compositions
A vector composition of a vector $\mathbf{\ell}$ is a matrix $\mathbf{A}$ whose rows sum to $\mathbf{\ell}$. We define a weighted vector composition as a vector composition in which the column values of $\mathbf{A}$ may appear in different colors. We study vector compositions from different viewpoints: (1) We show how they are related to sums of random vectors and (2) how they allow one to derive formulas for partial derivatives of composite functions. (3) We study congruence properties of the number of weighted vector compositions, for fixed and arbitrary number of parts, many of which are analogous to those of ordinary binomial coefficients and related quantities. Via the Central Limit Theorem and their multivariate generating functions, (4) we also investigate the asymptotic behavior of several special cases of numbers of weighted vector compositions. Finally, (5) we conjecture an extension of a primality criterion due to Mann and Shanks in the context of weighted vector compositions.
Labels: Computer Science, Mathematics
20,224
Fast Linear Model for Knowledge Graph Embeddings
This paper shows that a simple baseline based on a Bag-of-Words (BoW) representation learns surprisingly good knowledge graph embeddings. By casting knowledge base completion and question answering as supervised classification problems, we observe that modeling co-occurrences of entities and relations leads to state-of-the-art performance with a training time of a few minutes using the open-source library fastText.
Labels: Computer Science, Statistics
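As a rough illustration of casting knowledge base completion as supervised classification over entity/relation tokens, here is a sketch using the fastText Python bindings. The file format, the choice of the tail entity as the label, and the hyperparameters are all assumptions for illustration, not the paper's exact recipe.

```python
import fasttext

# Hypothetical training file: one triple per line, with the tail entity as the
# label and the (head entity, relation) tokens as the bag-of-words input.
with open("kb_train.txt", "w") as f:
    f.write("__label__paris france has_capital\n")
    f.write("__label__berlin germany has_capital\n")
    f.write("__label__france paris capital_of\n")

# Supervised fastText is essentially a fast linear classifier over BoW features.
model = fasttext.train_supervised(input="kb_train.txt", epoch=50, lr=0.5, dim=16)

# Link prediction: rank candidate tail entities for a (head, relation) query.
labels, probs = model.predict("germany has_capital", k=2)
print(labels, probs)
```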
20,225
Deep learning for plasma tomography using the bolometer system at JET
Deep learning is having a profound impact in many fields, especially those that involve some form of image processing. Deep neural networks excel in turning an input image into a set of high-level features. On the other hand, tomography deals with the inverse problem of recreating an image from a number of projections. In plasma diagnostics, tomography aims at reconstructing the cross-section of the plasma from radiation measurements. This reconstruction can be computed with neural networks. However, previous attempts have focused on learning a parametric model of the plasma profile. In this work, we use a deep neural network to produce a full, pixel-by-pixel reconstruction of the plasma profile. For this purpose, we use the overview bolometer system at JET, and we introduce an up-convolutional network that has been trained and tested on a large set of sample tomograms. We show that this network is able to reproduce existing reconstructions with a high level of accuracy, as measured by several metrics.
Labels: Physics, Statistics
20,226
Assembly Bias and Splashback in Galaxy Clusters
We use publicly available data for the Millennium Simulation to explore the implications of the recent detection of assembly bias and splashback signatures in a large sample of galaxy clusters. These were identified in the SDSS/DR8 photometric data by the redMaPPer algorithm and split into high- and low-concentration subsamples based on the projected positions of cluster members. We use simplified versions of these procedures to build cluster samples of similar size from the simulation data. These match the observed samples quite well and show similar assembly bias and splashback signals. Previous theoretical work has found the logarithmic slope of halo density profiles to have a well-defined minimum whose depth decreases and whose radius increases with halo concentration. Projected profiles for the observed and simulated cluster samples show trends with concentration which are opposite to these predictions. In addition, for high-concentration clusters the minimum slope occurs at significantly smaller radius than predicted. We show that these discrepancies all reflect confusion between splashback features and features imposed on the profiles by the cluster identification and concentration estimation procedures. The strong apparent assembly bias is not reflected in the three-dimensional distribution of matter around clusters. Rather it is a consequence of the preferential contamination of low-concentration clusters by foreground or background groups.
Labels: Physics
20,227
Accelerating Kernel Classifiers Through Borders Mapping
Support vector machines (SVM) and other kernel techniques represent a family of powerful statistical classification methods with high accuracy and broad applicability. Because they use all or a significant portion of the training data, however, they can be slow, especially for large problems. Piecewise linear classifiers are similarly versatile, yet have the additional advantages of simplicity, ease of interpretation and, if the number of component linear classifiers is not too large, speed. Here we show how a simple, piecewise linear classifier can be trained from a kernel-based classifier in order to improve the classification speed. The method works by finding the root of the difference in conditional probabilities between pairs of opposite classes to build up a representation of the decision boundary. When tested on 17 different datasets, it succeeded in improving the classification speed of an SVM for 9 of them, by factors of up to 88 or more. The method is best suited to problems with continuous feature data and smooth probability functions. Because the component linear classifiers are built up individually from an existing classifier, rather than through a simultaneous optimization procedure, the classifier is also fast to train.
Labels: Computer Science, Statistics
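One plausible way to realize the root-finding idea is sketched below: border points are located by root-finding along segments joining opposite-class pairs, using an SVM decision function as a stand-in for the difference of conditional probabilities, and each border point carries a local normal defining a linear classifier. All modeling choices here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import brentq
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
svm = SVC(kernel="rbf", gamma=2.0).fit(X, y)

pos = X[svm.decision_function(X) > 0]   # points on the positive side
neg = X[svm.decision_function(X) < 0]   # points on the negative side
rng = np.random.default_rng(0)
borders, normals = [], []
for _ in range(40):
    a, b = neg[rng.integers(len(neg))], pos[rng.integers(len(pos))]
    f = lambda t: svm.decision_function(((1 - t) * a + t * b).reshape(1, -1))[0]
    t0 = brentq(f, 0.0, 1.0)            # sign change guaranteed by construction
    borders.append((1 - t0) * a + t0 * b)
    normals.append((b - a) / np.linalg.norm(b - a))
borders, normals = np.array(borders), np.array(normals)

def fast_predict(q):
    """Piecewise-linear surrogate: nearest border point plus its local normal."""
    i = np.argmin(np.linalg.norm(borders - q, axis=1))
    return int(normals[i] @ (q - borders[i]) > 0)

agree = np.mean([fast_predict(q) == p for q, p in zip(X, svm.predict(X))])
print(f"agreement with the SVM: {agree:.2f}")
```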
20,228
Information Criterion for Minimum Cross-Entropy Model Selection
This paper considers the problem of approximating a density when it can be evaluated up to a normalizing constant at a finite number of points. This density approximation problem is ubiquitous in machine learning, such as approximating a posterior density for Bayesian inference and estimating an optimal density for importance sampling. Approximating the density with a parametric model can be cast as a model selection problem. This problem cannot be addressed with traditional approaches that maximize the (marginal) likelihood of a model, for example, using the Akaike information criterion (AIC) or Bayesian information criterion (BIC). We instead aim to minimize the cross-entropy that gauges the deviation of a parametric model from the target density. We propose a novel information criterion called the cross-entropy information criterion (CIC) and prove that the CIC is an asymptotically unbiased estimator of the cross-entropy (up to a multiplicative constant) under some regularity conditions. We propose an iterative method to approximate the target density by minimizing the CIC. We demonstrate that the proposed method selects a parametric model that well approximates the target density.
Labels: Statistics
20,229
Scaling relations in large-Prandtl-number natural thermal convection
In this study we follow Grossmann and Lohse, Phys. Rev. Lett. 86 (2001), who derived various scaling regimes for the dependence of the Nusselt number $Nu$ and the Reynolds number $Re$ on the Rayleigh number $Ra$ and the Prandtl number $Pr$. We focus on theoretical arguments as well as on numerical simulations for the case of large-$Pr$ natural thermal convection. Based on an analysis of self-similarity of the boundary layer equations, we derive that in this case the limiting large-$Pr$ boundary-layer dominated regime is I$_\infty^<$, introduced and defined in [1], with the scaling relations $Nu\sim Pr^0\,Ra^{1/3}$ and $Re\sim Pr^{-1}\,Ra^{2/3}$. Our direct numerical simulations for $Ra$ from $10^4$ to $10^9$ and $Pr$ from 0.1 to 200 show that the regime I$_\infty^<$ is almost indistinguishable from the regime III$_\infty$, where the kinetic dissipation is bulk-dominated. With increasing $Ra$, the scaling relations undergo a transition to those in IV$_u$ of reference [1], where the thermal dissipation is determined by its bulk contribution.
Labels: Physics
20,230
The renormalization method from continuous to discrete dynamical systems: asymptotic solutions, reductions and invariant manifolds
The renormalization method based on the Taylor expansion for asymptotic analysis of differential equations is generalized to difference equations. The proposed renormalization method is based on the Newton-Maclaurin expansion. Several basic theorems on the renormalization method are proven. Some interesting applications are given, including asymptotic solutions of the quantum anharmonic oscillator and the discrete boundary layer, and the reductions and invariant manifolds of some discrete dynamical systems. Furthermore, the homotopy renormalization method based on the Newton-Maclaurin expansion is proposed and applied to difference equations that contain no small parameter.
Labels: Mathematics
20,231
A Stress/Displacement Virtual Element Method for Plane Elasticity Problems
The numerical approximation of 2D elasticity problems is considered, in the framework of the small strain theory and in connection with the mixed Hellinger-Reissner variational formulation. A low-order Virtual Element Method (VEM) with a-priori symmetric stresses is proposed. Several numerical tests are provided, along with a rigorous stability and convergence analysis.
Labels: Mathematics
20,232
Estimation of the lead-lag parameter between two stochastic processes driven by fractional Brownian motions
In this paper, we consider the problem of estimating the lead-lag parameter between two stochastic processes driven by fractional Brownian motions (fBMs) with Hurst parameter greater than 1/2. First we propose a lead-lag model between two stochastic processes involving fBMs, and then construct a consistent estimator of the lead-lag parameter together with its possible convergence rate. Our estimator has the following two features. Firstly, we can construct the lead-lag estimator without using the Hurst parameters of the underlying fBMs. Secondly, our estimator can deal with some non-synchronous and irregular observations. We explicitly calculate the possible convergence rate when the observation times are (1) synchronous and equidistant, and (2) given by the Poisson sampling scheme. We also present numerical simulations of our results using the R package YUIMA.
Labels: Mathematics, Statistics
20,233
Multi-resolution polymer Brownian dynamics with hydrodynamic interactions
A polymer model given in terms of beads, interacting through Hookean springs and hydrodynamic forces, is studied. Brownian dynamics description of this bead-spring polymer model is extended to multiple resolutions. Using this multiscale approach, a modeller can efficiently look at different regions of the polymer in different spatial and temporal resolutions with scalings given for the number of beads, statistical segment length and bead radius in order to maintain macro-scale properties of the polymer filament. The Boltzmann distribution of a Gaussian chain for differing statistical segment lengths gives a Langevin equation for the multi-resolution model with a mobility tensor for different bead sizes. Using the pre-averaging approximation, the translational diffusion coefficient is obtained as a function of the inverse of a matrix and then in closed form in the long-chain limit. This is then confirmed with numerical experiments.
Labels: Physics
20,234
Learning Theory of Distributed Regression with Bias Corrected Regularization Kernel Network
Distributed learning is an effective way to analyze big data. In distributed regression, a typical approach is to divide the big data into multiple blocks, apply a base regression algorithm on each of them, and then simply average the output functions learnt from these blocks. Since the average process will decrease the variance, not the bias, bias correction is expected to improve the learning performance if the base regression algorithm is a biased one. Regularization kernel network is an effective and widely used method for nonlinear regression analysis. In this paper we will investigate a bias corrected version of regularization kernel network. We derive the error bounds when it is applied to a single data set and when it is applied as a base algorithm in distributed regression. We show that, under certain appropriate conditions, the optimal learning rates can be reached in both situations.
Labels: Computer Science, Statistics
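A minimal divide-and-conquer sketch follows. The bias-correction step shown (refitting a second kernel ridge model on the training residuals of each block) is an assumed illustrative form, not necessarily the estimator analyzed in the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)

def fit_block(Xb, yb, alpha=1e-2):
    """Base learner plus a simple residual-refit bias correction (assumed form)."""
    f1 = KernelRidge(kernel="rbf", gamma=1.0, alpha=alpha).fit(Xb, yb)
    f2 = KernelRidge(kernel="rbf", gamma=1.0, alpha=alpha).fit(Xb, yb - f1.predict(Xb))
    return f1, f2

blocks = np.array_split(rng.permutation(len(X)), 10)      # divide into 10 blocks
models = [fit_block(X[b], y[b]) for b in blocks]          # learn locally

Xt = np.linspace(-3, 3, 200).reshape(-1, 1)
pred = np.mean([f1.predict(Xt) + f2.predict(Xt) for f1, f2 in models], axis=0)
print("RMSE:", np.sqrt(np.mean((pred - np.sin(Xt[:, 0])) ** 2)))
```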
20,235
Cluster-based Kriging Approximation Algorithms for Complexity Reduction
Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging (cubic and quadratic in the number of data points, respectively) becomes a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for the complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.
Labels: Computer Science, Statistics
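A minimal sketch of the partition-then-fit idea, assuming KMeans for the partitioning and a nearest-cluster delegation rule at prediction time (the paper proposes four more refined combination algorithms):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(3000, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(len(X))

# One Kriging model per cluster: each GP sees only ~n/k points, which cuts the
# cubic training cost roughly by a factor of k^2.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
gps = [GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-3)
           .fit(X[km.labels_ == j], y[km.labels_ == j]) for j in range(k)]

def predict(Xq):
    """Delegate each query to the model of its nearest cluster."""
    j = km.predict(Xq)
    out = np.empty(len(Xq))
    for c in range(k):
        m = j == c
        if m.any():
            out[m] = gps[c].predict(Xq[m])
    return out

print(predict(rng.uniform(0, 10, size=(5, 2))))
```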
20,236
How to model fake news
Over the past three years it has become evident that fake news is a danger to democracy. However, until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced. The first approach, based on the idea of a representative voter, is shown to be suitable to obtain a qualitative understanding of phenomena associated with fake news at a macroscopic level. The second approach, based on the idea of an election microstructure, describes the collective behaviour of the electorate by modelling the preferences of individual voters. It is shown through a simulation study that the mere knowledge that pieces of fake news may be in circulation goes a long way towards mitigating the impact of fake news.
Labels: Computer Science, Quantitative Finance
20,237
Analysis of Dropout in Online Learning
Deep learning is the state-of-the-art in fields such as visual object recognition and speech recognition. This learning uses a large number of layers and a huge number of units and connections, so overfitting is a serious problem with it, and dropout, a kind of regularization tool, is used. However, in online learning, the effect of dropout is not well known. This paper presents our investigation on the effect of dropout in online learning. We analyzed the effect of dropout on convergence speed near the singular point. Our results indicated that dropout is effective in online learning: dropout tends to avoid the singular point, improving convergence speed near that point.
Labels: Computer Science, Statistics
20,238
TADPOLE Challenge: Prediction of Longitudinal Evolution in Alzheimer's Disease
The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge compares the performance of algorithms at predicting future evolution of individuals at risk of Alzheimer's disease. TADPOLE Challenge participants train their models and algorithms on historical data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study or any other datasets to which they have access. Participants are then required to make monthly forecasts over a period of 5 years from January 2018, of three key outcomes for ADNI-3 rollover participants: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog13), and total volume of the ventricles. These individual forecasts are later compared with the corresponding future measurements in ADNI-3 (obtained after the TADPOLE submission deadline). The first submission phase of TADPOLE was open for prize-eligible submissions between 15 June and 15 November 2017. The submission system remains open via the website: this https URL, although since 15 November 2017 submissions are not eligible for the first round of prizes. This paper describes the design of the TADPOLE Challenge.
Labels: Statistics, Quantitative Biology
20,239
A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction
The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow. Inspired by recent advances in deep learning, we propose a framework for reconstructing MR images from undersampled data using a deep cascade of convolutional neural networks to accelerate the data acquisition process. We show that for Cartesian undersampling of 2D cardiac MR images, the proposed method outperforms the state-of-the-art compressed sensing approaches, such as dictionary learning-based MRI (DLMRI) reconstruction, in terms of reconstruction error, perceptual quality and reconstruction speed for both 3-fold and 6-fold undersampling. Compared to DLMRI, the error produced by the proposed method is approximately half as large, allowing it to preserve anatomical structures more faithfully. Using our method, each image can be reconstructed in 23 ms, which is fast enough to enable real-time applications.
Labels: Computer Science
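Deep-cascade reconstructions of this kind typically interleave CNNs with steps enforcing consistency with the measured k-space samples. The numpy sketch below shows Cartesian undersampling and one such data-consistency step; the identity map stands in for a trained CNN, and all sizes and rates are assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) + 0j     # stand-in for a complex MR slice

# Cartesian undersampling: keep roughly one third of the k-space lines.
mask = np.zeros((64, 64), dtype=bool)
mask[rng.permutation(64)[:21], :] = True
k_full = np.fft.fft2(img)
k_meas = k_full * mask

zero_filled = np.fft.ifft2(k_meas)           # aliased input to the first CNN

def data_consistency(x, k_meas, mask):
    """Re-insert the measured k-space samples into the network output."""
    k = np.fft.fft2(x)
    k[mask] = k_meas[mask]                   # trust the scanner where sampled
    return np.fft.ifft2(k)

recon = data_consistency(zero_filled, k_meas, mask)     # identity "CNN" here
print(np.abs(np.fft.fft2(recon) - k_full)[mask].max())  # ~0 on measured lines
```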
20,240
Advances in Joint CTC-Attention based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM
We present a state-of-the-art end-to-end Automatic Speech Recognition (ASR) model. We learn to listen and write characters with a joint Connectionist Temporal Classification (CTC) and attention-based encoder-decoder network. The encoder is a deep Convolutional Neural Network (CNN) based on the VGG network. The CTC network sits on top of the encoder and is jointly trained with the attention-based decoder. During the beam search process, we combine the CTC predictions, the attention-based decoder predictions and a separately trained LSTM language model. We achieve a 5-10\% error reduction compared to prior systems on spontaneous Japanese and Chinese speech, and our end-to-end model beats out traditional hybrid ASR systems.
Labels: Computer Science
20,241
The Social and Work Structure of an Afterschool Math Club
This study focuses on the social structure and interpersonal dynamics of an afterschool math club for middle schoolers. Using social network analysis, two networks were formed and analyzed: the network of friendship relationships and the network of working relationships. The interconnections and correlations between friendship relationships, working relationships, and student opinion surveys are studied. A core working group of talented students emerged from the network of working relations. This group acted as a central go-between for other members in the club. This core working group expanded into the largest friendship group in the friendship network. Although there were working isolates, they were not found to be socially isolated. Students who were less popular tended to report a greater favorable impact from club participation. Implications for the study of the social structure of afterschool STEM clubs and classrooms are discussed.
Labels: Mathematics
20,242
Fourier Transform of Schwartz Algebras on Groups in the Harish-Chandra class
It is well-known that the Harish-Chandra transform, $f\mapsto\mathcal{H}f,$ is a topological isomorphism of the spherical (Schwartz) convolution algebra $\mathcal{C}^{p}(G//K)$ (where $K$ is a maximal compact subgroup of any arbitrarily chosen group $G$ in the Harish-Chandra class and $0<p\leq2$) onto the (Schwartz) multiplication algebra $\bar{\mathcal{Z}}({\mathfrak{F}}^{\epsilon})$ (of $\mathfrak{w}-$invariant members of $\mathcal{Z}({\mathfrak{F}}^{\epsilon}),$ with $\epsilon=(2/p)-1$). The same cannot however be said of the full Schwartz convolution algebra $\mathcal{C}^{p}(G),$ except for few specific examples of groups (notably $G=SL(2,\mathbb{R})$) and for some notable values of $p$ (with restrictions on $G$ and/or on $\mathcal{C}^{p}(G)$). Nevertheless the full Harish-Chandra Plancherel formula on $G$ is known for all of $\mathcal{C}^{2}(G)=:\mathcal{C}(G).$ In order to then understand the structure of Harish-Chandra transform more clearly and to compute the image of $\mathcal{C}^{p}(G)$ under it (without any restriction) we derive an absolutely convergent series expansion (in terms of known functions) for the Harish-Chandra transform by an application of the full Plancherel formula on $G.$ This leads to a computation of the image of $\mathcal{C}(G)$ under the Harish-Chandra transform which may be seen as a concrete realization of Arthur's result and be easily extended to all of $\mathcal{C}^{p}(G)$ in much the same way as it is known in the work of Trombi and Varadarajan.
Labels: Mathematics
20,243
Mutual Information and Optimality of Approximate Message-Passing in Random Linear Estimation
We consider the estimation of a signal from the knowledge of its noisy linear random Gaussian projections. A few examples where this problem is relevant are compressed sensing, sparse superposition codes, and code division multiple access. There have been a number of works considering the mutual information for this problem using the replica method from statistical physics. Here we put these considerations on a firm rigorous basis. First, we show, using a Guerra-Toninelli type interpolation, that the replica formula yields an upper bound to the exact mutual information. Secondly, for many relevant practical cases, we present a converse lower bound via a method that uses spatial coupling, state evolution analysis and the I-MMSE theorem. This yields a single letter formula for the mutual information and the minimal-mean-square error for random Gaussian linear estimation of all discrete bounded signals. In addition, we prove that the low complexity approximate message-passing algorithm is optimal outside of the so-called hard phase, in the sense that it asymptotically reaches the minimal-mean-square error. In this work spatial coupling is used primarily as a proof technique. However our results also prove two important features of spatially coupled noisy linear random Gaussian estimation. First, there is no algorithmically hard phase, which means that for such systems approximate message-passing always reaches the minimal-mean-square error. Secondly, in a proper limit the mutual information associated to such systems is the same as the one of uncoupled linear random Gaussian estimation.
Labels: Computer Science, Physics, Mathematics
20,244
A Memristor-Based Optimization Framework for AI Applications
Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at the two terminals of memristors make them promising candidates to perform matrix-vector multiplications and solve systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used for efficiently solving optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merit of memristor crossbars with the advantages of an operator splitting method, alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The capability of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based PI method can further be applied to principal component analysis (PCA). The use of memristor crossbars yields a significant speed-up in computation, and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence (AI).
Labels: Computer Science, Statistics
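The central point, that ADMM reduces a problem such as sparse optimization to repeated linear-system solves (exactly the operation a memristor crossbar accelerates), can be sketched in plain software. Below is a standard ADMM loop for the lasso, with parameters chosen purely for illustration.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax-b||^2 + lam*||x||_1. The x-update solves a linear
    system; in the proposed framework that solve maps onto the crossbar."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))    # factor once, reuse
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))               # linear-system subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # shrinkage
        u = u + x - z                               # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.nonzero(np.abs(admm_lasso(A, b)) > 0.1)[0])  # should recover {0,...,4}
```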
20,245
Estimating parameters of a directed weighted graph model with beta-distributed edge-weights
We introduce a directed, weighted random graph model, where the edge-weights are independent and beta-distributed with parameters depending on their endpoints. We show that the row- and column-sums of the transformed edge-weight matrix are sufficient statistics for the parameters, and use the theory of exponential families to prove that the ML estimate of the parameters exists and is unique. An algorithm to find this estimate is then introduced, together with a convergence proof that uses properties of the digamma function. Simulation results and applications are also presented.
Labels: Mathematics, Statistics
20,246
On the Performance of Reduced-Complexity Transmit/Receive Diversity Systems over MIMO-V2V Channel Model
In this letter, we investigate the performance of multiple-input multiple-output techniques in a vehicle-to-vehicle communication system. We consider both transmit antenna selection with maximal-ratio combining and transmit antenna selection with selection combining. The channel propagation model between two vehicles is represented as n*Rayleigh distribution, which has been shown to be a realistic model for vehicle-to-vehicle communication scenarios. We derive tight analytical expressions for the outage probability and amount of fading of the post-processing signal-to-noise ratio.
Labels: Computer Science
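The closed-form expressions are not reproduced in the abstract, but the system model is easy to cross-check by Monte Carlo simulation; in the sketch below the antenna counts, cascading order, average SNR, and outage threshold are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, n = 3, 2, 2             # transmit antennas, receive antennas, cascade order
snr = 10 ** (10.0 / 10)         # 10 dB average SNR (assumed)
gamma_th = 1.0                  # outage threshold (assumed)
trials = 200_000

# n*Rayleigh fading: each link amplitude is a product of n independent Rayleigh
# factors, each scaled to unit mean-square.
h = np.prod(rng.rayleigh(scale=np.sqrt(0.5), size=(trials, Nt, Nr, n)), axis=-1)

gamma_mrc = snr * np.sum(h ** 2, axis=2)   # MRC across receive antennas
gamma_tas = np.max(gamma_mrc, axis=1)      # select the best transmit antenna
print("outage probability ~", np.mean(gamma_tas < gamma_th))
```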
20,247
Internal migration and education: A cross-national comparison
Migration is the main process shaping patterns of human settlement within and between countries. It is widely acknowledged to be integral to the process of human development, as it plays a significant role in enhancing educational outcomes. At regional and national levels, internal migration underpins the efficient functioning of the economy by bringing knowledge and skills to the locations where they are needed. It is the multi-dimensional nature of migration that underlines its significance in the process of human development. Human mobility extends in the spatial domain from local travel to international migration, and in the temporal dimension from short-term stays to permanent relocations. Classification and measurement of such phenomena is inevitably complex, which has severely hindered progress in comparative research, with very few large-scale cross-national comparisons of migration. The linkages between migration and education have been explored in a separate line of inquiry that has predominantly focused on country-specific analyses of the ways in which migration affects educational outcomes and how educational attainment affects migration behaviour. A recurrent theme has been the educational selectivity of migrants, which in turn leads to an increase of human capital in some regions, primarily cities, at the expense of others. Questions have long been raised as to the links between education and migration in response to educational expansion, but have not yet been fully answered because of the absence, until recently, of adequate data for comparative analysis of migration. In this paper, we bring these two separate strands of research together to systematically explore links between internal migration and education across a global sample of 57 countries at various stages of development, using data drawn from the IPUMS database.
Labels: Quantitative Finance
20,248
Free differential Lie Rota-Baxter algebras and Gröbner-Shirshov bases
We establish the Gröbner-Shirshov bases theory for differential Lie $\Omega$-algebras. As an application, we give a linear basis of a free differential Lie Rota-Baxter algebra on a set.
Labels: Mathematics
20,249
Conversion Rate Optimization through Evolutionary Computation
Conversion optimization means designing a web interface so that as many users as possible take a desired action on it, such as register or purchase. Such design is usually done by hand, testing one change at a time through A/B testing, or a limited number of combinations through multivariate testing, making it possible to evaluate only a small fraction of designs in a vast design space. This paper describes Sentient Ascend, an automatic conversion optimization system that uses evolutionary optimization to create effective web interface designs. Ascend makes it possible to discover and utilize interactions between the design elements that are difficult to identify otherwise. Moreover, evaluation of design candidates is done in parallel online, i.e. with a large number of real users interacting with the system. A case study on an existing media site shows that significant improvements (i.e. over 43%) are possible beyond human design. Ascend can therefore be seen as an approach to massively multivariate conversion optimization, based on a massively parallel interactive evolution.
Labels: Computer Science
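A toy version of the underlying evolutionary loop is sketched below. The simulated conversion-rate landscape, including pairwise element interactions (what one-at-a-time A/B testing cannot see), is entirely invented for illustration; Ascend evaluates candidates on live traffic instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elems, n_vars = 8, 4          # e.g. 8 page elements with 4 variants each
main = rng.normal(0, 0.010, size=(n_elems, n_vars))
pair = rng.normal(0, 0.005, size=(n_elems, n_elems, n_vars, n_vars))

def conversion_rate(d):
    """Simulated conversion rate with element interactions."""
    cr = 0.05 + sum(main[i, d[i]] for i in range(n_elems))
    return cr + sum(pair[i, j, d[i], d[j]]
                    for i in range(n_elems) for j in range(i + 1, n_elems))

pop = rng.integers(0, n_vars, size=(30, n_elems))   # random initial designs
for _ in range(40):
    fitness = np.array([conversion_rate(d) for d in pop])
    parents = pop[np.argsort(fitness)[-10:]]         # keep the 10 best designs
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(n_elems) < 0.5, a, b)   # uniform crossover
        mut = rng.random(n_elems) < 0.1                     # light mutation
        child[mut] = rng.integers(0, n_vars, size=mut.sum())
        children.append(child)
    pop = np.array(children)
print("best simulated conversion rate:", max(conversion_rate(d) for d in pop))
```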
20,250
Emergence of spatial curvature
This paper investigates the phenomenon of emergence of spatial curvature. This phenomenon is absent in the Standard Cosmological Model, which has a flat and fixed spatial curvature (small perturbations are considered in the Standard Cosmological Model but their global average vanishes, leading to spatial flatness at all times). This paper shows that with the nonlinear growth of cosmic structures the global average deviates from zero. The analysis is based on the {\em silent universes} (a wide class of inhomogeneous cosmological solutions of the Einstein equations). The initial conditions are set in the early universe as perturbations around the $\Lambda$CDM model with $\Omega_m = 0.31$, $\Omega_\Lambda = 0.69$, and $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$. As the growth of structures becomes nonlinear, the model deviates from the $\Lambda$CDM model, and at the present instant if averaged over a domain ${\cal D}$ with volume $V = (2150\,{\rm Mpc})^3$ (at these scales the cosmic variance is negligibly small) gives: $\Omega_m^{\cal D} = 0.22$, $\Omega_\Lambda^{\cal D} = 0.61$, $\Omega_{\cal R}^{\cal D} = 0.15$ (in the FLRW limit $\Omega_{\cal R}^{\cal D} \to \Omega_k$), and $\langle H \rangle_{\cal D} = 72.2$ km s$^{-1}$ Mpc$^{-1}$. Given the fact that low-redshift observations favor higher values of the Hubble constant and lower values of matter density, compared to the CMB constraints, the emergence of the spatial curvature in the low-redshift universe could be a possible solution to these discrepancies.
Labels: Physics
20,251
Asymptotic generalized bivariate extreme with random index
In many biological, agricultural, and military activity problems, and in some quality control problems, it is almost impossible to have a fixed sample size, because some observations are always lost for various reasons. Therefore, the sample size itself is frequently considered to be a random variable (rv). The class of limit distribution functions (df's) of the random bivariate extreme generalized order statistics (GOS) from independent and identically distributed rv's is fully characterized. When the random sample size is assumed to be independent of the basic variables and its df is assumed to converge weakly to a non-degenerate limit, the necessary and sufficient conditions for the weak convergence of the random bivariate extreme GOS are obtained. Furthermore, when the interrelation of the random size and the basic rv's is not restricted, sufficient conditions for the convergence and the forms of the limit df's are deduced. Illustrative examples are given which lend further support to our theoretical results.
0
0
1
1
0
0
20,252
On problems in the calculus of variations in increasingly elongated domains
We consider minimization problems in the calculus of variations set in a sequence of domains the size of which tends to infinity in certain directions and such that the data only depend on the coordinates in the directions that remain constant. We study the asymptotic behavior of minimizers in various situations and show that they converge in an appropriate sense toward minimizers of a related energy functional in the constant directions.
0
0
1
0
0
0
20,253
Inequalities related to Symmetrized Harmonic Convex Functions
In this paper, we extend the Hermite-Hadamard type $\dot{I}$scan inequality to the class of symmetrized harmonic convex functions. The corresponding version for harmonic h-convex functions is also investigated. Furthermore, we establish Hermite-Hadamard type inequalities for the product of a harmonic convex function with a symmetrized harmonic convex function.
0
0
1
0
0
0
20,254
Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter
This paper provides several statistical estimators for the drift and volatility parameters of an Ornstein-Uhlenbeck process driven by fractional Brownian motion, whose observations can be made either continuously or at discrete time instants. First and higher order power variations are used to estimate the volatility parameter. The almost sure convergence of the estimators and the corresponding central limit theorems are obtained for the entire range of the Hurst parameter $H\in (0, 1)$. The least squares estimator is used for the drift parameter. A central limit theorem is proved when the Hurst parameter $H \in (0, 1/2)$ and a noncentral limit theorem is proved for $H\in[3/4, 1)$. This completely solves the open problem left in the paper by Hu and Nualart (2010), where a central limit theorem for the least squares estimator was proved for $H\in [1/2, 3/4)$.
0
0
1
1
0
0
20,255
E-polynomials of $PGL(2,\mathbb{C})$-character varieties of surface groups
In this paper, we compute the E-polynomials of the $PGL(2,\mathbb{C})$-character varieties associated to surfaces of genus $g$ with one puncture, for any holonomy around it, and compare it with its Langlands dual case, $SL(2,\mathbb{C})$. The study is based on the stratification of the space of representations and on the analysis of the behaviour of the E-polynomial under fibrations.
0
0
1
0
0
0
20,256
Security Analysis of Cache Replacement Policies
Modern computer architectures share physical resources between different programs in order to increase area-, energy-, and cost-efficiency. Unfortunately, sharing often gives rise to side channels that can be exploited for extracting or transmitting sensitive information. We currently lack techniques for systematic reasoning about this interplay between security and efficiency. In particular, there is no established way for quantifying security properties of shared caches. In this paper, we propose a novel model that enables us to characterize important security properties of caches. Our model encompasses two aspects: (1) The amount of information that can be absorbed by a cache, and (2) the amount of information that can effectively be extracted from the cache by an adversary. We use our model to compute both quantities for common cache replacement policies (FIFO, LRU, and PLRU) and to compare their isolation properties. We further show how our model for information extraction leads to an algorithm that can be used to improve the bounds delivered by the CacheAudit static analyzer.
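As a concrete (and much simplified) illustration of the extraction side of such a model, the sketch below shows a prime-and-probe pattern against a tiny fully-associative LRU cache. The 4-line cache, the attacker's line set, and the victim's secret-dependent access are all invented for the example; the paper's model quantifies this kind of leakage in information-theoretic terms rather than simulating it.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny fully-associative cache with LRU replacement."""
    def __init__(self, size):
        self.size, self.lines = size, OrderedDict()

    def access(self, line):
        hit = line in self.lines
        if hit:
            self.lines.move_to_end(line)          # refresh recency
        else:
            if len(self.lines) >= self.size:
                self.lines.popitem(last=False)    # evict least recently used
            self.lines[line] = True
        return hit

cache = LRUCache(size=4)
attacker_lines = ["a0", "a1", "a2", "a3"]

def victim(secret_bit):
    # The victim touches a memory line only when the secret bit is 1.
    if secret_bit:
        cache.access("v0")

for secret in (0, 1):
    for line in attacker_lines:                   # prime: fill the cache
        cache.access(line)
    victim(secret)
    misses = sum(not cache.access(line) for line in attacker_lines)  # probe
    print(f"secret={secret} -> probe misses={misses}")  # any miss reveals the bit
```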
1
0
0
0
0
0
20,257
Supercongruences related to ${}_3F_2(1)$ involving harmonic numbers
We show various supercongruences for truncated series which involve central binomial coefficients and harmonic numbers. The corresponding infinite series are also evaluated.
0
0
1
0
0
0
20,258
Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition
Layer normalization is a recently introduced technique for normalizing the activities of neurons in deep neural networks to improve the training speed and stability. In this paper, we introduce a new layer normalization technique called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling in speech recognition. By dynamically generating the scaling and shifting parameters in layer normalization, DLN adapts neural acoustic models to the acoustic variability arising from various factors such as speakers, channel noises, and environments. Unlike other adaptive acoustic models, our proposed approach does not require additional adaptation data or speaker information such as i-vectors. Moreover, the model size is fixed as it dynamically generates adaptation parameters. We apply our proposed DLN to deep bidirectional LSTM acoustic models and evaluate them on two benchmark datasets for large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The experimental results show that our DLN improves neural acoustic models in terms of transcription accuracy by dynamically adapting to various speakers and environments.
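A minimal numpy sketch of the core idea follows. It assumes, as a simplification, that the gain and bias of layer normalization are generated from a summary of the current hidden activations by small linear maps; the summary function, shapes, and initialization are illustrative stand-ins for the paper's utterance-level summarization network, not its exact architecture.

```python
import numpy as np

def dynamic_layer_norm(h, W_g, b_g, W_b, b_b, eps=1e-5):
    """Layer norm whose scale and shift are produced from the input itself.

    h        : (hidden_dim,) activations for one time step
    W_g, b_g : generate the per-feature gain from a summary of h
    W_b, b_b : generate the per-feature bias from a summary of h
    """
    summary = np.tanh(h)               # stand-in for an utterance-level summary
    gain = W_g @ summary + b_g         # dynamically generated scaling parameters
    bias = W_b @ summary + b_b         # dynamically generated shifting parameters
    mu, sigma = h.mean(), h.std()
    return gain * (h - mu) / (sigma + eps) + bias

rng = np.random.default_rng(0)
d = 8
h = rng.normal(size=d)
out = dynamic_layer_norm(h,
                         W_g=0.1 * rng.normal(size=(d, d)), b_g=np.ones(d),
                         W_b=0.1 * rng.normal(size=(d, d)), b_b=np.zeros(d))
print(out.shape)  # (8,)
```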
1
0
0
0
0
0
20,259
First-Order vs. Second-Order Encodings for LTLf-to-Automata Translation
Translating formulas of Linear Temporal Logic (LTL) over finite traces, or LTLf, to symbolic Deterministic Finite Automata (DFA) plays an important role not only in LTLf synthesis, but also in synthesis for Safety LTL formulas. The translation is enabled by using MONA, a powerful tool for symbolic, BDD-based, DFA construction from logic specifications. Recent works used a first-order encoding of LTLf formulas to translate LTLf to First Order Logic (FOL), which is then fed to MONA to get the symbolic DFA. This encoding was shown to perform well, but other encodings have not been studied. Specifically, the natural question of whether second-order encoding, which has a significantly simpler quantificational structure, can outperform first-order encoding remained open. In this paper we address this challenge and study second-order encodings for LTLf formulas. We first introduce a specific MSO encoding that captures the semantics of LTLf in a natural way and prove its correctness. We then explore a Compact MSO encoding, which benefits from automata-theoretic minimization, thus suggesting a possible practical advantage. To that end, we propose a formalization of symbolic DFA in second-order logic, thus developing a novel connection between BDDs and MSO. We then show by empirical evaluations that the first-order encoding does perform better than both second-order encodings. The conclusion is that first-order encoding is a better choice than second-order encoding in LTLf-to-Automata translation.
1
0
0
0
0
0
20,260
Regularity results and parametrices of semi-linear boundary problems of product type
This short note describes the benefit one obtains from a specific construction of a family of parametrices for a class of elliptic boundary value problems perturbed by non-linear terms of product type. The construction is based on the Boutet de Monvel calculus of pseudo-differential boundary operators for the linear elliptic parts, and on paradifferential operators for the product terms.
0
0
1
0
0
0
20,261
$\left( β, \varpi \right)$-stability for cross-validation and the choice of the number of folds
In this paper, we introduce a new concept of stability for cross-validation, called the $\left( \beta, \varpi \right)$-stability, and use it as a new perspective to build the general theory for cross-validation. The $\left( \beta, \varpi \right)$-stability mathematically connects the generalization ability and the stability of the cross-validated model via the Rademacher complexity. Our result reveals mathematically the effect of cross-validation from two sides: on one hand, cross-validation picks the model with the best empirical generalization ability by validating all the alternatives on test sets; on the other hand, cross-validation may compromise the stability of the model selection by causing subsampling error. Moreover, the difference between training and test errors in the q\textsuperscript{th} round, sometimes referred to as the generalization error, might be autocorrelated over q. Guided by the ideas above, the $\left( \beta, \varpi \right)$-stability helps us derive a new class of Rademacher bounds, referred to as the one-round/convoluted Rademacher bounds, for the stability of cross-validation in both the i.i.d.\ and non-i.i.d.\ cases. For both light-tail and heavy-tail losses, the new bounds quantify the stability of the one-round/average test error of the cross-validated model in terms of its one-round/average training error, the sample size $n$, the number of folds $K$, the tail property of the loss (encoded as Orlicz-$\Psi_\nu$ norms) and the Rademacher complexity of the model class $\Lambda$. The new class of bounds not only quantitatively reveals the stability of the generalization ability of the cross-validated model, it also shows empirically the optimal choice for the number of folds $K$, at which the upper bound of the one-round/average test error is lowest, or, to put it another way, where the test error is most stable.
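The trade-off the bounds formalize can be seen empirically with a minimal sketch: for each K, run K-fold cross-validation of a ridge regressor on synthetic data and inspect the mean and spread of the per-round test errors. The data, the model, and the regularization constant are invented for illustration and are unrelated to the paper's bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.5 * rng.normal(size=n)

def kfold_test_errors(X, y, K, lam=1e-2):
    """Per-round test MSE of closed-form ridge regression under K-fold CV."""
    folds = np.array_split(rng.permutation(len(y)), K)
    errors = []
    for q in range(K):
        test = folds[q]
        train = np.concatenate([folds[j] for j in range(K) if j != q])
        A = X[train].T @ X[train] + lam * np.eye(X.shape[1])
        w = np.linalg.solve(A, X[train].T @ y[train])
        errors.append(np.mean((X[test] @ w - y[test]) ** 2))
    return np.array(errors)

for K in (2, 5, 10, 20):
    e = kfold_test_errors(X, y, K)
    # The mean tracks generalization ability; the std reflects subsampling error.
    print(f"K={K:2d}  mean test MSE={e.mean():.3f}  std={e.std():.3f}")
```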
1
0
1
1
0
0
20,262
Directional convexity of harmonic mappings
The convolution properties are discussed for the complex-valued harmonic functions in the unit disk $\mathbb{D}$ constructed from the harmonic shearing of the analytic function $\phi(z):=\int_0^z (1/(1-2\xi\textit{e}^{\textit{i}\mu}\cos\nu+\xi^2\textit{e}^{2\textit{i}\mu}))\textit{d}\xi$, where $\mu$ and $\nu$ are real numbers. For any real number $\alpha$ and harmonic function $f=h+\overline{g}$, define an analytic function $f_{\alpha}:=h+\textit{e}^{-2\textit{i}\alpha}g$. Let $\mu_1$ and $\mu_2$ $(\mu_1+\mu_2=\mu)$ be real numbers, and $f=h+\overline{g}$ and $F=H+\overline{G}$ be locally-univalent and sense-preserving harmonic functions such that $f_{\mu_1}*F_{\mu_2}=\phi$. It is shown that the convolution $f*F$ is univalent and convex in the direction of $-\mu$, provided it is locally univalent and sense-preserving. Also, local univalence of the above convolution $f*F$ is shown for some specific analytic dilatations of $f$ and $F$. Furthermore, if $g\equiv0$ and both the analytic functions $f_{\mu_1}$ and $F_{\mu_2}$ are convex, then the convolution $f*F$ is shown to be convex. These results extend the work of Dorff \textit{et al.} to a larger class of functions.
0
0
1
0
0
0
20,263
Optimal Non-uniform Deployments in Ultra-Dense Finite-Area Cellular Networks
Network densification and heterogenisation through the deployment of small cellular access points (picocells and femtocells) are seen as key mechanisms in handling the exponential increase in cellular data traffic. Modelling such networks by leveraging tools from Stochastic Geometry has proven particularly useful in understanding the fundamental limits imposed on network coverage and capacity by co-channel interference. Most of these works, however, assume infinitely large networks uniformly distributed on the Euclidean plane. In contrast, we study finite-sized, non-uniformly distributed networks, and find the optimal non-uniform distribution of access points which maximises network coverage for a given non-uniform distribution of mobile users, and vice versa.
1
0
0
0
0
0
20,264
Plan, Attend, Generate: Character-level Neural Machine Translation with Planning in the Decoder
We investigate the integration of a planning mechanism into an encoder-decoder architecture with explicit alignment for character-level machine translation. We develop a model that plans ahead when it computes alignments between the source and target sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the strategic attentive reader and writer (STRAW) model. Our proposed model is end-to-end trainable with fully differentiable operations. We show that it outperforms a strong baseline on three character-level decoder neural machine translation tasks from the WMT'15 corpus. Our analysis demonstrates that our model can compute qualitatively intuitive alignments and achieves superior performance with fewer parameters.
1
0
0
0
0
0
20,265
Deep Recurrent NMF for Speech Separation by Unfolding Iterative Thresholding
In this paper, we propose a novel recurrent neural network architecture for speech separation. This architecture is constructed by unfolding the iterations of a sequential iterative soft-thresholding algorithm (ISTA) that solves the optimization problem for sparse nonnegative matrix factorization (NMF) of spectrograms. We name this network architecture deep recurrent NMF (DR-NMF). The proposed DR-NMF network has three distinct advantages. First, DR-NMF provides better interpretability than other deep architectures, since the weights correspond to NMF model parameters, even after training. This interpretability also provides principled initializations that enable faster training and convergence to better solutions compared to conventional random initialization. Second, like many deep networks, DR-NMF is an order of magnitude faster at test time than NMF, since computation of the network output only requires evaluating a few layers at each time step. Third, when a limited amount of training data is available, DR-NMF exhibits stronger generalization and separation performance compared to sparse NMF and state-of-the-art long short-term memory (LSTM) networks. When a large amount of training data is available, DR-NMF achieves lower yet competitive separation performance compared to LSTM networks.
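The unfolding idea can be sketched in a few lines of numpy: each nonnegative soft-thresholding ISTA iteration for the sparse NMF coefficients becomes one "layer", and in DR-NMF the quantities appearing in the iteration become trainable weights. The dictionary, data, and hyperparameters below are invented for the illustration, and no training is shown.

```python
import numpy as np

def nonneg_ista(x, W, lam=0.1, eta=None, n_layers=10):
    """Unrolled ISTA for sparse nonnegative coefficients h with x ~ W h.

    Each loop iteration plays the role of one network layer; in DR-NMF
    the entries of W (and the step size) would be trainable weights.
    """
    if eta is None:
        eta = 1.0 / np.linalg.norm(W, 2) ** 2   # step size from the Lipschitz constant
    h = np.zeros(W.shape[1])
    for _ in range(n_layers):
        grad = W.T @ (W @ h - x)
        h = np.maximum(0.0, h - eta * (grad + lam))  # nonnegative soft threshold
    return h

rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(32, 16)))            # nonnegative dictionary (e.g. spectral bases)
h_true = np.maximum(0, rng.normal(size=16)) * (rng.random(16) < 0.3)
x = W @ h_true
h_hat = nonneg_ista(x, W)
print(np.count_nonzero(h_hat), "active atoms")
```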
1
0
0
1
0
0
20,266
Birth of isolated nested cylinders and limit cycles in 3D piecewise smooth vector fields with symmetry
Our starting point is a 3D piecewise smooth vector field defined in two zones and presenting a shared fold curve for the two smooth vector fields considered. Moreover, these smooth vector fields are symmetric relative to the fold curve, giving rise to a continuum of nested topological cylinders such that each orthogonal section of these cylinders is filled by centers. First we prove that the normal form considered represents a whole class of piecewise smooth vector fields. Then we perturb the initial model in order to obtain exactly $\mathcal{L}$ invariant planes containing centers. A second perturbation of the initial model is also considered in order to obtain exactly $k$ isolated cylinders filled by periodic orbits. Finally, joining the two previous bifurcations we are able to exhibit a model, preserving the symmetry relative to the fold curve, and having exactly $k\cdot\mathcal{L}$ limit cycles.
0
0
1
0
0
0
20,267
The difficulty of folding self-folding origami
Why is it difficult to refold a previously folded sheet of paper? We show that even crease patterns with only one designed folding motion inevitably contain an exponential number of `distractor' folding branches accessible from a bifurcation at the flat state. Consequently, refolding a sheet requires finding the ground state in a glassy energy landscape with an exponential number of other attractors of higher energy, much like in models of protein folding (Levinthal's paradox) and other NP-hard satisfiability (SAT) problems. As in these problems, we find that refolding a sheet requires actuation at multiple carefully chosen creases. We show that seeding successful folding in this way can be understood in terms of sub-patterns that fold when cut out (`folding islands'). Besides providing guidelines for the placement of active hinges in origami applications, our results point to fundamental limits on the programmability of energy landscapes in sheets.
0
1
0
0
0
0
20,268
Integrable Trotterization: Local Conservation Laws and Boundary Driving
We discuss a general procedure to construct an integrable real-time trotterization of interacting lattice models. As an illustrative example we consider a spin-$1/2$ chain, with continuous time dynamics described by the isotropic ($XXX$) Heisenberg Hamiltonian. For periodic boundary conditions local conservation laws are derived from an inhomogeneous transfer matrix and a boost operator is constructed. In the continuous time limit these local charges reduce to the known integrals of motion of the Heisenberg chain. In a simple Kraus representation we also examine the nonequilibrium setting, where our integrable cellular automaton is driven by stochastic processes at the boundaries. We show explicitly how an exact nonequilibrium steady state density matrix can be written in terms of a staggered matrix product ansatz. This simple trotterization scheme, in particular in the open system framework, could prove to be a useful tool for experimental simulations of lattice models in trapped ion and atom optics setups.
0
1
0
0
0
0
20,269
The Multivariate Hawkes Process in High Dimensions: Beyond Mutual Excitation
The Hawkes process is a class of point processes whose future depends on its own history. Previous theoretical work on the Hawkes process is limited to the case of a mutually-exciting process, in which a past event can only increase the occurrence of future events. However, in neuronal networks and other real-world applications, inhibitory relationships may be present. In this paper, we develop a new approach for establishing the properties of the Hawkes process without the restriction to mutual excitation. To this end, we employ a thinning process representation and a coupling construction to bound the dependence coefficient of the Hawkes process. Using recent developments on weakly dependent sequences, we establish a concentration inequality for second-order statistics of the Hawkes process. We apply this concentration inequality in order to establish theoretical results for penalized regression and clustering analysis in the high-dimensional regime. Our theoretical results are corroborated by simulation studies and an application to a neuronal spike train data set.
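For intuition, a univariate Hawkes process with an exponential kernel can be simulated by Ogata's thinning algorithm, which is also the representation the paper builds on. The sketch below allows a negative excitation parameter (with the intensity clipped at zero) to illustrate inhibition; all parameter values are arbitrary, and this is not the paper's estimation procedure.

```python
import numpy as np

def simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, T=50.0, seed=0):
    """Ogata thinning for a Hawkes process with exponential kernel:
    lambda(t) = max(0, mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))).
    A negative alpha models inhibition (the regime beyond mutual excitation).
    """
    rng = np.random.default_rng(seed)
    events, t = [], 0.0

    def intensity(u):
        return max(0.0, mu + sum(alpha * np.exp(-beta * (u - s)) for s in events))

    while True:
        # Valid upper bound for all future times: keep only excitatory terms.
        lam_bar = mu + sum(max(alpha, 0.0) * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            break
        if rng.random() <= intensity(t) / lam_bar:  # accept with prob lambda / lam_bar
            events.append(t)
    return np.array(events)

print(len(simulate_hawkes(alpha=0.5)), "events (excitation)")
print(len(simulate_hawkes(alpha=-0.5)), "events (inhibition)")
```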
0
0
0
1
0
0
20,270
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first-responders in such situations. However, first-person view teleoperation is sub-optimal in difficult terrains, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. Thereby, one MAV can support multiple ground robots on a demand basis. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios.
1
0
0
0
0
0
20,271
A universal thin film model for Ginzburg-Landau energy with dipolar interaction
We present an analytical treatment of a three-dimensional variational model of a system that exhibits a second-order phase transition in the presence of dipolar interactions. Within the framework of Ginzburg-Landau theory, we concentrate on the case in which the domain occupied by the sample has the shape of a flat thin film and obtain a reduced two-dimensional, non-local variational model that describes the energetics of the system in terms of the order parameter averages across the film thickness. Namely, we show that the reduced two-dimensional model is in a certain sense asymptotically equivalent to the original three-dimensional model for small film thicknesses. Using this asymptotic equivalence, we analyze two different thin film limits for the full three-dimensional model via the methods of $\Gamma$-convergence applied to the reduced two-dimensional model. In the first regime, in which the film thickness vanishes while all other parameters remain fixed, we recover the local two-dimensional Ginzburg-Landau model. On the other hand, when the film thickness vanishes while the sample's lateral dimensions diverge at the right rate, we show that the system exhibits a transition from homogeneous to spatially modulated global energy minimizers. We identify a sharp threshold for this transition.
0
1
1
0
0
0
20,272
LOCATA challenge: speaker localization with a planar array
This document describes our submission to the 2018 LOCalization And TrAcking (LOCATA) challenge (Tasks 1, 3, 5). We estimate the 3D position of a speaker using the Global Coherence Field (GCF) computed from multiple microphone pairs of a DICIT planar array. One of the main challenges when using such an array with omnidirectional microphones is the front-back ambiguity, which is particularly evident in Task 5. We address this challenge by post-processing the peaks of the GCF and exploiting the attenuation introduced by the frame of the array. Moreover, the intermittent nature of speech and the changing orientation of the speaker make localization difficult. For Tasks 3 and 5, we also employ a Particle Filter (PF) that favors the spatio-temporal continuity of the localization results.
1
0
0
0
0
0
20,273
Less Is More: A Comprehensive Framework for the Number of Components of Ensemble Classifiers
The number of component classifiers chosen for an ensemble greatly impacts the prediction ability. In this paper, we use a geometric framework for a priori determining the ensemble size, which is applicable to most existing batch and online ensemble classifiers. There are only a limited number of studies on the ensemble size examining Majority Voting (MV) and Weighted Majority Voting (WMV). Almost all of them are designed for batch mode, hardly addressing online environments. Big data dimensions and resource limitations, in terms of time and memory, make the determination of ensemble size crucial, especially for online environments. For the MV aggregation rule, our framework proves that the more strong components we add to the ensemble, the more accurate predictions we can achieve. For the WMV aggregation rule, our framework proves the existence of an ideal number of components, which is equal to the number of class labels, with the premise that components are completely independent of each other and strong enough. While giving the exact definition for a strong and independent classifier in the context of an ensemble is a challenging task, our proposed geometric framework provides a theoretical explanation of diversity and its impact on the accuracy of predictions. We conduct a series of experimental evaluations to show the practical value of our theorems and existing challenges.
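The MV claim has a classical flavor: under the (strong) assumption of independent voters, each correct with probability p > 1/2, majority-vote accuracy increases monotonically with the number of voters. The sketch below computes this binomial quantity directly; it illustrates the direction of the framework's MV result, not the geometric framework itself.

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that a majority of n independent classifiers,
    each correct with probability p, is correct (binary task, n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 11, 51, 101):
    print(f"n={n:3d}  MV accuracy={majority_vote_accuracy(0.6, n):.4f}")
# Accuracy rises monotonically toward 1 as strong (p > 0.5) voters are added.
```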
1
0
0
1
0
0
20,274
Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers
Low-rank modeling has many important applications in computer vision and machine learning. While the matrix rank is often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical performance. However, the resulting optimization problem is much more challenging. Recent state-of-the-art methods require an expensive full SVD in each iteration. In this paper, we show that for many commonly-used nonconvex low-rank regularizers, a cutoff can be derived to automatically threshold the singular values obtained from the proximal operator. This allows the operator to be efficiently approximated by the power method. Based on it, we develop a proximal gradient algorithm (and its accelerated variant) with inexact proximal splitting and prove that a convergence rate of O(1/T), where T is the number of iterations, is guaranteed. Furthermore, we show the proposed algorithm can be well parallelized, achieving nearly linear speedup w.r.t. the number of threads. Extensive experiments are performed on matrix completion and robust principal component analysis, which show a significant speedup over the state-of-the-art. Moreover, the matrix solution obtained is more accurate and has a lower rank than that of the nuclear norm regularizer.
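The cutoff idea can be illustrated with a toy proximal step on the singular values: everything below the cutoff maps exactly to zero, so only the leading singular pairs matter, which is what makes a power-method approximation possible. The sketch below uses a full SVD for clarity and a generic shrink/cutoff pair; the actual thresholding rule depends on the chosen nonconvex regularizer, and the data are invented.

```python
import numpy as np

def svt_prox(Z, shrink, cutoff):
    """Proximal step on singular values: zero those below `cutoff`,
    shrink the rest. With cutoff == shrink this recovers the nuclear-norm
    prox; nonconvex regularizers typically shrink large values less.
    Because everything below the cutoff maps to zero, only the leading
    singular pairs are needed; a power-method routine would compute just
    those instead of the full SVD used here for clarity.
    """
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_new = np.where(s > cutoff, np.maximum(s - shrink, 0.0), 0.0)
    rank = np.count_nonzero(s_new)
    return (U[:, :rank] * s_new[:rank]) @ Vt[:rank], rank

rng = np.random.default_rng(0)
# Low-rank signal plus small noise.
Z = 5 * np.outer(rng.normal(size=50), rng.normal(size=40)) \
    + 0.1 * rng.normal(size=(50, 40))
X, r = svt_prox(Z, shrink=1.0, cutoff=3.0)
print("estimated rank:", r)  # 1: only the planted component survives
```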
1
0
0
1
0
0
20,275
Computational Eco-Systems for Handwritten Digits Recognition
Inspired by the importance of diversity in biological systems, we built a heterogeneous system that can achieve this goal. Our architecture can be summarized in two basic steps. First, we generate a diverse set of classification hypotheses using Convolutional Neural Networks, currently the state-of-the-art technique for this task, along with other traditional and innovative machine learning techniques. Then, we optimally combine them through Meta-Nets, a family of recently developed and well-performing ensemble methods.
0
0
0
1
0
0
20,276
Compressing networks with super nodes
Community detection is a commonly used technique for identifying groups in a network based on similarities in connectivity patterns. To facilitate community detection in large networks, we recast the network to be partitioned into a smaller network of 'super nodes', each super node comprising one or more nodes in the original network. To define the seeds of our super nodes, we apply the 'CoreHD' ranking from dismantling and decycling. We test our approach through the analysis of two common methods for community detection: modularity maximization with the Louvain algorithm and maximum likelihood optimization for fitting a stochastic block model. Our results highlight that applying community detection to the compressed network of super nodes is significantly faster while successfully producing partitions that are more aligned with the local network connectivity, more stable across multiple (stochastic) runs within and between community detection algorithms, and overlap well with the results obtained using the full network.
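A minimal sketch of the compression step is given below, assuming networkx is available. Node degree is used as a stand-in for the paper's CoreHD ranking, and each remaining node is attached to its nearest seed by shortest-path distance; both choices are simplifications for illustration only.

```python
import networkx as nx

def compress_to_super_nodes(G, n_seeds=4):
    """Contract G into a small graph of super nodes: pick high-ranking
    seeds (degree here stands in for the CoreHD ranking) and attach
    every other node to its nearest seed by shortest-path distance."""
    seeds = sorted(G.nodes, key=G.degree, reverse=True)[:n_seeds]
    dist = {s: nx.single_source_shortest_path_length(G, s) for s in seeds}
    assign = {v: min(seeds, key=lambda s: dist[s].get(v, float("inf")))
              for v in G}
    S = nx.Graph()
    S.add_nodes_from(seeds)
    for u, v in G.edges:
        if assign[u] != assign[v]:        # keep only inter-group edges
            S.add_edge(assign[u], assign[v])
    return S, assign

G = nx.karate_club_graph()
S, assign = compress_to_super_nodes(G)
print(S.number_of_nodes(), "super nodes,", S.number_of_edges(), "super edges")
# Community detection would then run on S instead of the full graph G.
```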
1
1
0
0
0
0
20,277
Mode specific electronic friction in dissociative chemisorption on metal surfaces: H$_2$ on Ag(111)
Electronic friction and the ensuing nonadiabatic energy loss play an important role in chemical reaction dynamics at metal surfaces. Using molecular dynamics with electronic friction evaluated on-the-fly from Density Functional Theory, we find strong mode dependence and a dominance of nonadiabatic energy loss along the bond stretch coordinate for scattering and dissociative chemisorption of H$_2$ on the Ag(111) surface. Exemplary trajectories with varying initial conditions indicate that this mode-specificity translates into modulated energy loss during a dissociative chemisorption event. Despite minor nonadiabatic energy loss of about 5\%, the directionality of friction forces induces dynamical steering that affects individual reaction outcomes, specifically for low-incidence energies and vibrationally excited molecules. Mode-specific friction induces enhanced loss of rovibrational rather than translational energy and will be most visible in its effect on final energy distributions in molecular scattering experiments.
0
1
0
0
0
0
20,278
The quantum auxiliary linear problem & quantum Darboux-Backlund transformations
We explore the notion of the quantum auxiliary linear problem and the associated problem of quantum Backlund transformations (BT). In this context we systematically construct the analogue of the classical formula that provides the whole hierarchy of the time components of Lax pairs at the quantum level for both closed and open integrable lattice models. The generic time evolution operator formula is particularly interesting and novel at the quantum level when dealing with systems with open boundary conditions. In the same frame we show that the reflection K-matrix can also be viewed as a particular type of BT, fixed at the boundaries of the system. The q-oscillator (q-boson) model, a variant of the Ablowitz-Ladik model, is then employed as a paradigm to illustrate the method. Particular emphasis is given to the time part of the quantum BT as possible connections and applications to the problem of quantum quenches as well as the time evolution of local quantum impurities are evident. A discussion on the use of Bethe states as well as coherent states for the study of the time evolution is also presented.
0
1
0
0
0
0
20,279
Simplified Minimal Gated Unit Variations for Recurrent Neural Networks
Recurrent neural networks with various types of hidden units have been used to solve a diverse range of problems involving sequence data. Two of the most recent proposals, gated recurrent units (GRU) and minimal gated units (MGU), have shown comparably promising results on example public datasets. In this paper, we introduce three model variants of the minimal gated unit (MGU) which further simplify that design by reducing the number of parameters in the forget-gate dynamic equation. These three model variants, referred to simply as MGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset and from the Reuters Newswire Topics (RNT) dataset. The new models have shown similar accuracy to the MGU model while using fewer parameters and thus lowering training expense. One model variant, namely MGU2, performed better than MGU on the datasets considered, and thus may be used as an alternative to MGU or GRU in recurrent neural networks.
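For orientation, one recurrent step of the MGU is sketched below in numpy. The three variant gate equations follow our reading of the stated goal (each removes parameters from the forget-gate equation only) and should be treated as illustrative assumptions rather than the paper's exact definitions; all weights and the input sequence are random placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mgu_step(x, h, Wf, Uf, bf, Wh, Uh, bh, variant="MGU"):
    """One recurrent step of the minimal gated unit and simplified variants
    (variant equations are our illustrative reading, not verified)."""
    if variant == "MGU":        # forget gate from input, state, and bias
        f = sigmoid(Wf @ x + Uf @ h + bf)
    elif variant == "MGU1":     # drop the input term
        f = sigmoid(Uf @ h + bf)
    elif variant == "MGU2":     # drop input term and bias
        f = sigmoid(Uf @ h)
    else:                       # "MGU3": bias only
        f = sigmoid(bf)
    h_tilde = np.tanh(Wh @ x + Uh @ (f * h) + bh)
    return (1.0 - f) * h + f * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 6
params = dict(Wf=rng.normal(size=(d_h, d_in)), Uf=rng.normal(size=(d_h, d_h)),
              bf=np.zeros(d_h), Wh=rng.normal(size=(d_h, d_in)),
              Uh=rng.normal(size=(d_h, d_h)), bh=np.zeros(d_h))
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):   # a length-5 input sequence
    h = mgu_step(x, h, variant="MGU2", **params)
print(h)
```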
1
0
0
1
0
0
20,280
Parallel G-duplex and C-duplex DNA with Uninterrupted Spines of AgI-Mediated Base Pairs
Hydrogen bonding between nucleobases produces diverse DNA structural motifs, including canonical duplexes, guanine (G) quadruplexes and cytosine (C) i-motifs. Incorporating metal-mediated base pairs into nucleic acid structures can introduce new functionalities and enhanced stabilities. Here we demonstrate, using mass spectrometry (MS), ion mobility spectrometry (IMS) and fluorescence resonance energy transfer (FRET), that parallel-stranded structures consisting of up to 20 G-Ag(I)-G contiguous base pairs are formed when natural DNA sequences are mixed with silver cations in aqueous solution. FRET indicates that duplexes formed by poly(cytosine) strands with 20 contiguous C-Ag(I)-C base pairs are also parallel. Silver-mediated G-duplexes form preferentially over G-quadruplexes, and the ability of Ag+ to convert G-quadruplexes into silver-paired duplexes may provide a new route to manipulating these biologically relevant structures. IMS indicates that G-duplexes are linear and more rigid than B-DNA. DFT calculations were used to propose structures compatible with the IMS experiments. Such inexpensive, defect-free and soluble DNA-based nanowires open new directions in the design of novel metal-mediated DNA nanotechnology.
0
0
0
0
1
0
20,281
Constraints on Vacuum Energy from Structure Formation and Nucleosynthesis
This paper derives an upper limit on the density $\rho_{\scriptstyle\Lambda}$ of dark energy based on the requirement that cosmological structure forms before being frozen out by the eventual acceleration of the universe. By allowing for variations in both the cosmological parameters and the strength of gravity, the resulting constraint is a generalization of previous limits. The specific parameters under consideration include the amplitude $Q$ of the primordial density fluctuations, the Planck mass $M_{\rm pl}$, the baryon-to-photon ratio $\eta$, and the density ratio $\Omega_M/\Omega_b$. In addition to structure formation, we use considerations from stellar structure and Big Bang Nucleosynthesis (BBN) to constrain these quantities. The resulting upper limit on the dimensionless density of dark energy becomes $\rho_{\scriptstyle\Lambda}/M_{\rm pl}^4<10^{-90}$, which is $\sim30$ orders of magnitude larger than the value in our universe $\rho_{\scriptstyle\Lambda}/M_{\rm pl}^4\sim10^{-120}$. This new limit is much less restrictive than previous constraints because additional parameters are allowed to vary. With these generalizations, a much wider range of universes can develop cosmic structure and support observers. To constrain the constituent parameters, new BBN calculations are carried out in the regime where $\eta$ and $G=M_{\rm pl}^{-2}$ are much larger than in our universe. If the BBN epoch were to process all of the protons into heavier elements, no hydrogen would be left behind to make water, and the universe would not be viable. However, our results show that some hydrogen is always left over, even under conditions of extremely large $\eta$ and $G$, so that a wide range of alternate universes are potentially habitable.
0
1
0
0
0
0
20,282
Dynamic Mobile Edge Caching with Location Differentiation
Mobile edge caching enables content delivery directly within the radio access network, which effectively alleviates the backhaul burden and reduces round-trip latency. To fully exploit the edge resources, the most popular contents should be identified and cached. Observing that content popularity varies greatly at different locations, to maximize the local hit rate, this paper proposes an online learning algorithm that dynamically predicts the content hit rate and makes location-differentiated caching decisions. Specifically, a linear model is used to estimate the future hit rate. Considering the variations in user demand, a perturbation is added to the estimation to account for uncertainty. The proposed learning algorithm requires no training phase, and hence is adaptive to the time-varying content popularity profile. Theoretical analysis indicates that the proposed algorithm asymptotically approaches the optimal policy in the long term. Extensive simulations based on real-world traces show that the proposed algorithm achieves a higher hit rate and better adaptiveness to content popularity fluctuation, compared with other schemes.
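A stripped-down version of such a learning loop is sketched below: estimate each content's local popularity online, add a perturbation that shrinks as observations accumulate, and cache the top-k contents by perturbed score. The empirical-frequency estimator and the UCB-style perturbation are stand-ins for the paper's linear model and its perturbation; all sizes and the demand distribution are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_contents, cache_size, T = 20, 5, 2000
true_pop = rng.dirichlet(np.ones(n_contents))    # unknown local popularity profile

counts = np.zeros(n_contents)
hits = 0
for t in range(1, T + 1):
    freq = counts / t                             # online popularity estimate
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))  # uncertainty perturbation
    cached = np.argpartition(freq + bonus, -cache_size)[-cache_size:]
    req = rng.choice(n_contents, p=true_pop)      # one user request in this slot
    hits += req in cached
    counts[req] += 1

best = np.sort(true_pop)[-cache_size:].sum()      # hit rate of a clairvoyant cache
print(f"achieved hit rate {hits / T:.2f} vs clairvoyant {best:.2f}")
```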
1
0
0
0
0
0
20,283
Online Learning for Distribution-Free Prediction
We develop an online learning method for prediction, which is important in problems with large and/or streaming data sets. We formulate the learning approach using a covariance-fitting methodology, and show that the resulting predictor has desirable computational and distribution-free properties: It is implemented online with a runtime that scales linearly in the number of samples; has a constant memory requirement; avoids local minima problems; and prunes away redundant feature dimensions without relying on restrictive assumptions on the data distribution. In conjunction with the split conformal approach, it also produces distribution-free prediction confidence intervals in a computationally efficient manner. The method is demonstrated on both real and synthetic datasets.
1
0
0
1
0
0
20,284
Thermal transitions, pseudogap behavior and BCS-BEC crossover in Fermi-Fermi mixtures
We study the mass-imbalanced Fermi-Fermi mixture within the framework of a two-dimensional lattice fermion model. Based on the thermodynamic and species-dependent quasiparticle behavior we map out the finite temperature phase diagram of this system and show that, unlike for the balanced Fermi superfluid, there are now two different pseudogap regimes, PG-I and PG-II. While within the PG-I regime both fermionic species are pseudogapped, PG-II corresponds to the regime where the pseudogap feature survives only in the light species. We believe that the single-particle spectral features that we discuss in this paper are observable through species-resolved radio frequency spectroscopy and momentum-resolved photoemission spectroscopy measurements on systems such as the $^{6}$Li-$^{40}$K mixture. We further investigate the interplay between the population and mass imbalances and report that at a fixed population imbalance the BCS-BEC crossover in a Fermi-Fermi mixture would require a critical interaction (U$_{c}$) for the realization of the uniform superfluid state. The effect of mass imbalance on the exotic Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superfluid phase has been probed in detail in terms of the thermodynamic and quasiparticle behavior of this phase. It has been observed that in spite of the s-wave symmetry of the pairing field a nodal superfluid gap is realized in the LO regime. Our results on the various thermal scales and regimes are expected to serve as benchmarks for experimental observations on the $^{6}$Li-$^{40}$K mixture.
0
1
0
0
0
0
20,285
Spectral Approximation for Ergodic CMV Operators with an Application to Quantum Walks
We establish concrete criteria for fully supported absolutely continuous spectrum for ergodic CMV matrices and purely absolutely continuous spectrum for limit-periodic CMV matrices. We proceed by proving several variational estimates on the measure of the spectrum and the vanishing set of the Lyapunov exponent for CMV matrices, which represent CMV analogues of results obtained for Schrödinger operators due to Y.\ Last in the early 1990s. Having done so, we combine those estimates with results from inverse spectral theory to obtain purely absolutely continuous spectrum.
0
0
1
0
0
0
20,286
Quantitative Photoacoustic Imaging in the Acoustic Regime using SPIM
While in standard photoacoustic imaging the propagation of sound waves is modeled by the standard wave equation, our approach is based on a generalized wave equation with spatially varying sound speed and material density. In this paper we present an approach for photoacoustic imaging which, in addition to recovering the absorption density parameter, the imaging parameter of standard photoacoustics, also allows reconstruction of the spatially varying sound speed and density of the medium. We provide analytical reconstruction formulas for all three parameters in a linearized model based on single plane illumination microscopy (SPIM) techniques.
0
0
1
0
0
0
20,287
Integrated Deep and Shallow Networks for Salient Object Detection
Deep convolutional neural network (CNN) based salient object detection methods have achieved state-of-the-art performance and outperform unsupervised methods by a wide margin. In this paper, we propose to integrate deep and unsupervised saliency for salient object detection under a unified framework. Specifically, our method takes results of unsupervised saliency (Robust Background Detection, RBD) and normalized color images as inputs, and directly learns an end-to-end mapping between inputs and the corresponding saliency maps. The color images are fed into a Fully Convolutional Neural Network (FCNN) adapted from semantic segmentation to exploit high-level semantic cues for salient object detection. Then the results from the deep FCNN and RBD are concatenated and fed into a shallow network that maps the concatenated feature maps to saliency maps. Finally, to obtain a spatially consistent saliency map with sharp object boundaries, we fuse superpixel-level saliency maps at multiple scales. Extensive experimental results on 8 benchmark datasets demonstrate that the proposed method outperforms state-of-the-art approaches by a clear margin.
1
0
0
0
0
0
20,288
Feedback Capacity over Networks
In this paper, we investigate the fundamental limitations of feedback mechanisms in dealing with uncertainties for network systems. The study of the maximum capability of feedback control was pioneered in Xie and Guo (2000) for scalar systems with nonparametric nonlinear uncertainty. In a network setting, nodes with unknown and nonlinear dynamics are interconnected through a directed interaction graph. Nodes can design feedback controls based on all available information, where the objective is to stabilize the network state. Using information structure and decision pattern as criteria, we specify three categories of network feedback laws, namely the global-knowledge/global-decision, network-flow/local-decision, and local-flow/local-decision feedback. We establish a series of network capacity characterizations for these three fundamental types of network control laws. First of all, we prove that for global-knowledge/global-decision and network-flow/local-decision control where nodes know the information flow across the entire network, there exists a critical number $\big(3/2+\sqrt{2}\big)/\|A_{\mathrm{G}}\|_\infty$, where $3/2+\sqrt{2}$ is known as the Xie-Guo constant and $A_{\mathrm{G}}$ is the network adjacency matrix, defining exactly how much uncertainty in the node dynamics can be overcome by feedback. Interestingly enough, the same feedback capacity can be achieved under max-consensus enhanced local flows where nodes only observe information flows from neighbors as well as extreme (max and min) states in the network. Next, for local-flow/local-decision control, we prove that there exists a structure-determined value that serves as a lower bound on the network feedback capacity. These results reveal the important connection between network structure and fundamental capabilities of in-network feedback control.
1
0
0
0
0
0
20,289
Flexural phonons in supported graphene: from pinning to localization
We identify a graphene layer on a disordered substrate as a possible system where Anderson localization of phonons can be observed. Generally, observation of localization for scattering waves is not simple, because Rayleigh scattering is inversely proportional to a high power of the wavelength. The situation is radically different for the out-of-plane vibrations, so-called flexural phonons, scattered by pinning centers induced by a substrate. In this case, the scattering time for vanishing wave vector tends to a finite limit. One may, therefore, expect that the physics of flexural phonons exhibits features characteristic of electron localization in two dimensions, albeit without the complications caused by electron-electron interactions. We confirm this idea by calculating statistical properties of the Anderson localization of flexural phonons for a model of an elastic sheet in the presence of pinning centers. Finally, we discuss possible manifestations of the flexural phonons, including the localized ones, in the electronic thermal conductance.
0
1
0
0
0
0
20,290
Test them all, is it worth it? Assessing configuration sampling on the JHipster Web development stack
Many approaches for testing configurable software systems start from the same assumption: it is impossible to test all configurations. This motivated the definition of variability-aware abstractions and sampling techniques to cope with large configuration spaces. Yet, there is no theoretical barrier that prevents the exhaustive testing of all configurations by simply enumerating them, if the effort required to do so remains acceptable. Not only this: we believe there is a lot to be learned by systematically and exhaustively testing a configurable system. In this case study, we report on the first ever endeavour to test all possible configurations of an industry-strength, open source configurable software system, JHipster, a popular code generator for web applications. We built a testing scaffold for the 26,000+ configurations of JHipster using a cluster of 80 machines over 4 nights, for a total of 4,376 hours (182 days) of CPU time. We find that 35.70% of configurations fail and we identify the feature interactions that cause the errors. We show that sampling strategies (like dissimilarity and 2-wise): (1) are more effective at finding faults than the 12 default configurations used in the JHipster continuous integration; (2) can be too costly and exceed the available testing budget. We cross this quantitative analysis with the qualitative assessment of JHipster's lead developers.
1
0
0
0
0
0
20,291
Q-Learning Algorithm for VoLTE Closed-Loop Power Control in Indoor Small Cells
We propose a reinforcement learning (RL) based closed-loop power control algorithm for the downlink of the voice over LTE (VoLTE) radio bearer for an indoor environment served by small cells. The main contributions of our paper are to 1) use RL to solve performance tuning problems in an indoor cellular network for voice bearers and 2) show that our derived lower bound on the loss in effective signal to interference plus noise ratio due to neighboring cell failure is sufficient for VoLTE power control purposes in practical cellular networks. In our simulation, the proposed RL-based power control algorithm significantly improves both voice retainability and mean opinion score compared to current industry standards. The improvement is due to maintaining an effective downlink signal to interference plus noise ratio against adverse network operational issues and faults.
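For readers unfamiliar with tabular RL, a minimal Q-learning loop for a power-control-flavored toy problem is sketched below. The SINR-bucket state space, the action set, the toy environment dynamics, and the reward are all invented for the illustration; the paper's algorithm, state definition, and reward are its own.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sinr_states = 10
actions = np.array([-1.0, 0.0, 1.0])          # power step in dB
Q = np.zeros((n_sinr_states, len(actions)))
alpha, gamma, eps = 0.2, 0.9, 0.1
target_state = 7                              # SINR bucket we aim to hold

def step(state, a_idx):
    """Toy environment: the power step nudges the SINR bucket, plus noise.
    The reward favors sitting at the target SINR (a proxy for voice quality)."""
    drift = int(actions[a_idx]) + int(rng.integers(-1, 2))
    s_next = int(np.clip(state + drift, 0, n_sinr_states - 1))
    return s_next, -abs(s_next - target_state)

s = int(rng.integers(n_sinr_states))
for _ in range(5000):
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # Q-learning update
    s = s_next

print(np.argmax(Q, axis=1))  # learned power action per SINR bucket
```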
1
0
0
1
0
0
20,292
Ancient shrinking spherical interfaces in the Allen-Cahn flow
We consider the parabolic Allen-Cahn equation in $\mathbb{R}^n$, $n\ge 2$, $$u_t= \Delta u + (1-u^2)u \quad \hbox{ in } \mathbb{R}^n \times (-\infty, 0].$$ We construct an ancient radially symmetric solution $u(x,t)$ with any given number $k$ of transition layers between $-1$ and $+1$. At main order they consist of $k$ time-traveling copies of $w$ with spherical interfaces at mutual distance $O(\log |t|)$ as $t\to -\infty$. At main order these interfaces resemble copies of the {\em shrinking sphere} ancient solution to the mean curvature flow of surfaces: $|x| = \sqrt{- 2(n-1)t}$. More precisely, if $w(s)$ denotes the heteroclinic 1-dimensional solution of $w'' + (1-w^2)w=0$, $w(\pm \infty)= \pm 1$, given by $w(s) = \tanh \left(\frac s{\sqrt{2}} \right)$, we have $$ u(x,t) \approx \sum_{j=1}^k (-1)^{j-1}w(|x|-\rho_j(t)) - \frac 12 (1+ (-1)^{k}) \quad \hbox{ as } t\to -\infty $$ where $$\rho_j(t)=\sqrt{-2(n-1)t}+\frac{1}{\sqrt{2}}\left(j-\frac{k+1}{2}\right)\log\left(\frac {|t|}{\log |t| }\right)+ O(1),\quad j=1,\ldots ,k.$$
0
0
1
0
0
0
20,293
Measuring the Eccentricity of Items
The long-tail phenomenon tells us that there are many items in the tail. However, not all tail items are the same. Each item acquires different kinds of users. Some items are loved by the general public, while some items are consumed by eccentric fans. In this paper, we propose a novel metric, item eccentricity, to incorporate this difference between consumers of the items. Eccentric items are defined as items that are consumed by eccentric users. We used this metric to analyze two real-world datasets of music and movies and observed the characteristics of items in terms of eccentricity. The results showed that the eccentricity of an item does not change much over time, and that items classified as eccentric and noneccentric present significantly distinct characteristics. The proposed metric effectively separates the eccentric and noneccentric items mixed in the tail, which previous measures, considering only the popularity of items, could not do.
1
0
0
0
0
0
20,294
Dataset: Rare Event Classification in Multivariate Time Series
A real-world dataset is provided from a pulp-and-paper manufacturing industry. The dataset comes from a multivariate time series process. The data contains a rare event of paper break that commonly occurs in the industry. The data contains sensor readings at regular time-intervals (x's) and the event label (y). The primary purpose of the data is thought to be building a classification model for early prediction of the rare event. However, it can also be used for multivariate time series data exploration and building other supervised and unsupervised models.
0
0
0
1
0
0
20,295
Experimental verification of stopping-power prediction from single- and dual-energy computed tomography in biological tissues
An experimental setup for consecutive measurement of ion and x-ray absorption in tissue or other materials is introduced. With this setup using a 3D-printed sample container, the reference stopping-power ratio (SPR) of materials can be measured with an uncertainty of below 0.1%. A total of 65 porcine and bovine tissue samples were prepared for measurement, comprising five samples each of 13 tissue types representing about 80% of the total body mass (three different muscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone). Using a standard stoichiometric calibration for single-energy CT (SECT) as well as a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all tissues and then compared to the measured reference. With the SECT approach, the SPRs of all tissues were predicted with a mean error of (-0.84 $\pm$ 0.12)% and a mean absolute error of (1.27 $\pm$ 0.12)%. In contrast, the DECT-based SPR predictions were overall consistent with the measured reference with a mean error of (-0.02 $\pm$ 0.15)% and a mean absolute error of (0.10 $\pm$ 0.15)%. Thus, in this study, the potential of DECT to decrease range uncertainty could be confirmed in biological tissue.
0
1
0
0
0
0
20,296
How to Beat Science and Influence People: Policy Makers and Propaganda in Epistemic Networks
In their recent book Merchants of Doubt [New York: Bloomsbury, 2010], Naomi Oreskes and Erik Conway describe the "tobacco strategy", which was used by the tobacco industry to influence policy makers regarding the health risks of tobacco products. The strategy involved two parts, consisting of (1) promoting and sharing independent research supporting the industry's preferred position and (2) funding additional research, but selectively publishing the results. We introduce a model of the Tobacco Strategy, and use it to argue that both prongs of the strategy can be extremely effective--even when policy makers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the Tobacco Strategy is particularly successful. In addition, we show how journalists engaged in "fair" reporting can inadvertently mimic the effects of industry on public belief.
1
0
0
0
0
0
20,297
A complete and partial integrability technique of the Lorenz system
In this paper we deal with the well-known nonlinear Lorenz system that describes the deterministic chaos phenomenon. We consider an interesting problem with time-varying phenomena in quantum optics. Then we establish from the equations of motion the passage to the Lorenz system. Furthermore, we show that the reduction to a third-order nonlinear equation can be performed. Therefore, the obtained differential equation can be analytically solved in some special cases and transformed to Abel, Duffing, Painlevé and generalized Emden-Fowler equations. Finally, we present a motivating technique that permits complete and partial integrability of the Lorenz system.
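For reference, the classical Lorenz equations that the abstract presupposes, with the standard chaotic parameter values, can be integrated numerically in a few lines. This is a generic fixed-step RK4 sketch, not part of the paper's analytical technique; the initial condition and step size are arbitrary.

```python
import numpy as np

def lorenz_rk4(sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01, steps=5000):
    """Integrate the classical Lorenz system
        x' = sigma (y - x),  y' = x (rho - z) - y,  z' = x y - beta z
    with a fixed-step fourth-order Runge-Kutta scheme."""
    def f(u):
        x, y, z = u
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    u = np.array([1.0, 1.0, 1.0])
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = f(u)
        k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2)
        k4 = f(u + dt * k3)
        u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = u
    return traj

print(lorenz_rk4()[-1])  # a point on the chaotic attractor
```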
0
1
0
0
0
0
20,298
Comparison of the definitions of generalized solution of the Cauchy problem for a quasi-linear equation
In this preprint we consider and compare different definitions of generalized solution of the Cauchy problem for a 1d scalar quasilinear equation (conservation law). We start from the classical approaches going back to I.M. Gelfand, O.A. Oleinik, and S.N. Kruzhkov, and move on to the modern finite-difference approximation approaches due to A.A. Shananin and G.M. Henkin. We discuss the conditions under which these definitions are equivalent.
0
0
1
0
0
0
20,299
On The Complexity of Sparse Label Propagation
This paper investigates the computational complexity of sparse label propagation which has been proposed recently for processing network structured data. Sparse label propagation amounts to a convex optimization problem and might be considered as an extension of basis pursuit from sparse vectors to network structured datasets. Using a standard first-order oracle model, we characterize the number of iterations for sparse label propagation to achieve a prescribed accuracy. In particular, we derive an upper bound on the number of iterations required to achieve a certain accuracy and show that this upper bound is sharp for datasets having a chain structure (e.g., time series).
0
0
0
1
0
0
20,300
Detecting Qualia in Natural and Artificial Agents
The Hard Problem of consciousness has been dismissed as an illusion. By showing that computers are capable of having experiences, we argue that they are at least rudimentarily conscious, with the potential to eventually reach superconsciousness. The main contribution of the paper is a test for confirming certain subjective experiences in a tested agent. We follow with an analysis of the benefits and problems of conscious machines and the implications of such a capability for the future of computing, machine rights, and artificial intelligence safety.
1
0
0
0
0
0