Dataset schema (each record below lists its ID, title, abstract, and a Labels line naming the categories flagged 1):

- ID: int64, values 1 to 21k
- TITLE: string, lengths 7 to 239
- ABSTRACT: string, lengths 7 to 2.76k
- Computer Science: int64, 0 or 1
- Physics: int64, 0 or 1
- Mathematics: int64, 0 or 1
- Statistics: int64, 0 or 1
- Quantitative Biology: int64, 0 or 1
- Quantitative Finance: int64, 0 or 1
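To make the schema concrete, here is a minimal Python sketch (hypothetical helper code, not shipped with the dataset) showing how one row might be represented and how its positive category labels can be read off the six binary flags:

```python
# Hypothetical helper for this dataset's row format; the field names
# mirror the schema above, and the sample row abbreviates record 18601.
CATEGORIES = [
    "Computer Science", "Physics", "Mathematics",
    "Statistics", "Quantitative Biology", "Quantitative Finance",
]

def positive_labels(row):
    """Return the names of the categories whose 0/1 flag is set."""
    return [c for c in CATEGORIES if row[c] == 1]

row = {
    "ID": 18601,
    "TITLE": "A Consistent Bayesian Formulation for Stochastic Inverse Problems ...",
    "ABSTRACT": "We formulate, and present a numerical method for solving, ...",
    "Computer Science": 0, "Physics": 0, "Mathematics": 1,
    "Statistics": 1, "Quantitative Biology": 0, "Quantitative Finance": 0,
}

print(positive_labels(row))  # ['Mathematics', 'Statistics']
```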
ID: 18601
A Consistent Bayesian Formulation for Stochastic Inverse Problems Based on Push-forward Measures
We formulate, and present a numerical method for solving, an inverse problem for inferring parameters of a deterministic model from stochastic observational data (quantities of interest). The solution, given as a probability measure, is derived using a Bayesian updating approach for measurable maps that finds a posterior probability measure that, when propagated through the deterministic model, produces a push-forward measure exactly matching the observed probability measure on the data. Our approach for finding such posterior measures, which we call consistent Bayesian inference, is simple and only requires the computation of the push-forward probability measure induced by the combination of a prior probability measure and the deterministic model. We establish existence and uniqueness of observation-consistent posteriors and present stability and error analysis. We also discuss the relationships between consistent Bayesian inference, classical/statistical Bayesian inference, and a recently developed measure-theoretic approach for inference. Finally, analytical and numerical results are presented to highlight certain properties of the consistent Bayesian approach and the differences between this approach and the two aforementioned alternatives for inference.
Labels: Mathematics, Statistics

ID: 18602
Coherent State Mapping Ring-Polymer Molecular Dynamics for Non-Adiabatic Quantum Propagations
We introduce the coherent state mapping ring-polymer molecular dynamics (CS-RPMD), a new method that accurately describes electronic non-adiabatic dynamics with explicit nuclear quantization. This new approach is derived by using the coherent state mapping representation for the electronic degrees of freedom (DOF) and the ring-polymer path-integral representation for the nuclear DOF. The CS-RPMD Hamiltonian does not contain any inter-bead coupling term in the state-dependent potential, which is a key feature that ensures correct electronic Rabi oscillations. Hamilton's equations of motion are used to sample initial configurations and propagate the trajectories, preserving the distribution with classical symplectic evolution. In the special one-bead limit for mapping variables, CS-RPMD preserves detailed balance. Numerical tests of this method with a two-state model system show very good agreement with exact quantum results over a broad range of electronic couplings.
Labels: Physics

ID: 18603
Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem
Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem (CLT) for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L$^p$ convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
Labels: Mathematics, Statistics

ID: 18604
KMS states on $C^*$-algebras associated to a family of $*$-commuting local homeomorphisms
We consider a family of $*$-commuting local homeomorphisms on a compact space, and build a compactly aligned product system of Hilbert bimodules (in the sense of Fowler). This product system has a Nica-Toeplitz algebra and a Cuntz-Pimsner algebra. Both algebras carry a gauge action of a higher-dimensional torus, and there are many possible dynamics obtained by composing with different embeddings of the real line in this torus. We study the KMS states of these dynamics. For large inverse temperatures, we describe the simplex of KMS states on the Nica-Toeplitz algebra. To study KMS states for smaller inverse temperature, we consider a preferred dynamics for which there is a single critical inverse temperature. We find a KMS state on the Nica-Toeplitz algebra at this critical inverse temperature which factors through the Cuntz-Pimsner algebra. We then illustrate our results by considering backward shifts on the infinite-path spaces of a class of $k$-graphs.
Labels: Mathematics

ID: 18605
Multi-Labelled Value Networks for Computer Go
This paper proposes a novel value network architecture for the game of Go, called a multi-labelled (ML) value network. In the ML value network, different values (win rates) are trained simultaneously for different settings of komi, a compensation given to balance the initiative of playing first. The ML value network has three advantages: (a) it outputs values for different komi, (b) it supports dynamic komi, and (c) it lowers the mean squared error (MSE). This paper also proposes a new dynamic komi method to improve game-playing strength, and performs experiments to demonstrate the merits of the architecture. First, the MSE of the ML value network is generally lower than that of the value network alone. Second, the program based on the ML value network wins at a rate of 67.6% against the program based on the value network alone. Third, the program with the proposed dynamic komi method significantly improves playing strength over the baseline that does not use dynamic komi, especially for handicap games. To our knowledge, no handicap games have yet been played openly by programs using value networks. This paper provides these programs with a useful approach to playing handicap games.
Labels: Computer Science

ID: 18606
Lifting high-dimensional nonlinear models with Gaussian regressors
We study the problem of recovering a structured signal $\mathbf{x}_0$ from high-dimensional data $\mathbf{y}_i=f(\mathbf{a}_i^T\mathbf{x}_0)$ for some nonlinear (and potentially unknown) link function $f$, when the regressors $\mathbf{a}_i$ are iid Gaussian. Brillinger (1982) showed that ordinary least-squares estimates $\mathbf{x}_0$ up to a constant of proportionality $\mu_\ell$, which depends on $f$. Recently, Plan & Vershynin (2015) extended this result to the high-dimensional setting, deriving sharp error bounds for the generalized Lasso. Unfortunately, both least-squares and the Lasso fail to recover $\mathbf{x}_0$ when $\mu_\ell=0$. For example, this includes all even link functions. We resolve this issue by proposing and analyzing an alternative convex recovery method. In a nutshell, our method treats such link functions as if they were linear in a lifted space of higher dimension. Interestingly, our error analysis captures the effect of both the nonlinearity and the problem's geometry in a few simple summary parameters.
Labels: Statistics

ID: 18607
Functional Conceptual Substratum as a New Cognitive Mechanism for Mathematical Creation
We describe a new cognitive ability, i.e., functional conceptual substratum, used implicitly in the generation of several mathematical proofs and definitions. Furthermore, we present an initial (first-order) formalization of this mechanism together with its relation to classic notions like primitive positive definability and Diophantiveness. Additionally, we analyze the semantic variability of functional conceptual substratum when small syntactic modifications are done. Finally, we describe mathematically natural inference rules for definitions inspired by functional conceptual substratum and show that they are sound and complete w.r.t. standard calculi.
Labels: Mathematics

ID: 18608
Applying Gromov's Amenable Localization to Geodesic Flows
Let $M$ be a compact connected smooth Riemannian $n$-manifold with boundary. We combine Gromov's amenable localization technique with the Poincaré duality to study the traversally generic geodesic flows on $SM$, the space of the spherical tangent bundle. Such flows generate stratifications of $SM$, governed by rich universal combinatorics. The stratification reflects the ways in which the geodesic flow trajectories interact with the boundary $\partial(SM)$. Specifically, we get lower estimates of the numbers of connected components of these flow-generated strata of any given codimension $k$. These universal bounds are expressed in terms of the normed homology $H_k(M; \mathbb{R})$ and $H_k(DM; \mathbb{R})$, where $DM = M\cup_{\partial M} M$ denotes the double of $M$. The norms here are the Gromov simplicial semi-norms in homology. The more complex the metric on $M$ is, the more numerous the strata of $SM$ and $S(DM)$ are. So one may regard our estimates as analogues of the Morse inequalities for the geodesics on manifolds with boundary. It turns out that some close relatives of the normed homology spaces form obstructions to the existence of globally $k$-convex traversally generic metrics on $M$.
Labels: Mathematics

ID: 18609
Modeling the Ellsberg Paradox by Argument Strength
We present a formal measure of argument strength, which combines the ideas that conclusions of strong arguments are (i) highly probable and (ii) their uncertainty is relatively precise. Likewise, arguments are weak when their conclusion probability is low or when it is highly imprecise. We show how the proposed measure provides a new model of the Ellsberg paradox. Moreover, we further substantiate the psychological plausibility of our approach by an experiment (N = 60). The data show that the proposed measure predicts human inferences in the original Ellsberg task and in corresponding argument strength tasks. Finally, we report qualitative data taken from structured interviews on folk psychological conceptions on what argument strength means.
Labels: Computer Science, Mathematics

ID: 18610
Correction to the paper "Some remarks on Davie's uniqueness theorem"
Property 4 in Proposition 2.3 from the paper "Some remarks on Davie's uniqueness theorem" is replaced with a weaker assertion that is sufficient for the proof of the main results. Technical details and improvements are given.
Labels: Mathematics

ID: 18611
An Estimation and Analysis Framework for the Rasch Model
The Rasch model is widely used for item response analysis in applications ranging from recommender systems to psychology, education, and finance. While a number of estimators have been proposed for the Rasch model over the last decades, the available analytical performance guarantees are mostly asymptotic. This paper provides a framework that relies on a novel linear minimum mean-squared error (L-MMSE) estimator which enables an exact, nonasymptotic, and closed-form analysis of the parameter estimation error under the Rasch model. The proposed framework provides guidelines on the number of items and responses required to attain low estimation errors in tests or surveys. We furthermore demonstrate its efficacy on a number of real-world collaborative filtering datasets, which reveals that the proposed L-MMSE estimator performs on par with state-of-the-art nonlinear estimators in terms of predictive performance.
Labels: Statistics

ID: 18612
Do planets remember how they formed?
One of the most directly observable features of a transiting multi-planet system is the size-ordering of the planets when ranked in orbital separation. Kepler has revealed a rich diversity of outcomes, from perfectly ordered systems, like Kepler-80, to ostensibly disordered systems, like Kepler-20. Under the hypothesis that systems are born via preferred formation pathways, one might reasonably expect non-random size-orderings reflecting these processes. However, subsequent dynamical evolution, often chaotic and turbulent in nature, may erode this information, and so here we ask: do systems remember how they formed? To address this, we devise a model to define the entropy of a planetary system's size-ordering, by first comparing differences between neighboring planets and then extending to accommodate differences across the chain. We derive closed-form solutions for many of the microstate occupancies and provide public code with look-up tables to compute entropy for up to ten-planet systems. All three proposed entropy definitions exhibit the expected property that their credible interval increases with respect to a proxy for time. We find that the observed Kepler multis display a highly significant deficit in entropy compared to a randomly generated population. Incorporating a filter for systems deemed likely to be dynamically packed, we show that this result is robust against the possibility of missing planets too. Put together, our work establishes that Kepler systems do indeed remember something of their younger years and highlights the value of information theory for exoplanetary science.
Labels: Physics

ID: 18613
Demarcating circulation regimes of synchronously rotating terrestrial planets within the habitable zone
We investigate the atmospheric dynamics of terrestrial planets in synchronous rotation within the habitable zone of low-mass stars using the Community Atmosphere Model (CAM). The surface temperature contrast between the day and night hemispheres decreases with an increase in incident stellar flux, which is opposite to the trend seen on gas giants. We define three dynamical regimes in terms of the equatorial Rossby deformation radius and the Rhines length. The slow rotation regime has a mean zonal circulation that spans from the day side to the night side, with both the Rossby deformation radius and the Rhines length exceeding the planetary radius; this occurs for planets around stars with effective temperatures of 3300 K to 4500 K (rotation period > 20 days). Rapid rotators have a mean zonal circulation that only partially spans a hemisphere, with banded cloud formation beneath the substellar point and a Rossby deformation radius less than the planetary radius; this occurs for planets orbiting stars with effective temperatures of less than 3000 K (rotation period < 5 days). In between is the Rhines rotation regime, which retains a thermally-direct circulation from day to night side but also features midlatitude turbulence-driven zonal jets. Rhines rotators occur for planets around stars in the range of 3000 K to 3300 K (rotation period ~ 5 to 20 days), where the Rhines length is greater than the planetary radius but the Rossby deformation radius is less than the planetary radius. The dynamical state can be observationally inferred by comparing the morphology of the thermal emission phase curves of synchronously rotating planets.
Labels: Physics

ID: 18614
Versatile Large-Area Custom-Feature van der Waals Epitaxy of Topological Insulators
As the focus of applied research in topological insulators (TI) evolves, the need to synthesize large-area TI films for practical device applications takes center stage. However, constructing scalable and adaptable processes for high-quality TI compounds remains a challenge. To this end, a versatile van der Waals epitaxy (vdWE) process for custom-feature Bismuth Telluro-Sulfide TI growth and fabrication is presented, achieved through selective-area fluorination and modification of surface free-energy on mica. The TI features grow epitaxially in large single-crystal trigonal domains, exhibiting armchair or zigzag crystalline edges highly oriented with the underlying mica lattice and only two preferred domain orientations mirrored at $180^\circ$. A dependence of as-grown feature thickness on lateral dimensions, as well as denuded zones at feature boundaries, is observed and explained by a semi-empirical two-species surface migration model that yields robust estimates of growth parameters and elucidates the role of selective-area surface modification. Topological surface states contribute up to 60% of device conductance at room temperature, indicating excellent electronic quality. High-yield microfabrication and the adaptable vdWE growth mechanism, with readily alterable precursor and substrate combinations, lend the process versatility to realize crystalline TI synthesis in arbitrary shapes and arrays suitable for facile integration with processes ranging from rapid prototyping to scalable manufacturing.
Labels: Physics

ID: 18615
Sharp off-diagonal weighted norm estimates for the Bergman projection
We prove that for $1<p\le q<\infty$, $qp\geq {p'}^2$ or $p'q'\geq q^2$, $\frac{1}{p}+\frac{1}{p'}=\frac{1}{q}+\frac{1}{q'}=1$, $$\|\omega P_\alpha(f)\|_{L^p(\mathcal{H},y^{\alpha+(2+\alpha)(\frac{q}{p}-1)}dxdy)}\le C_{p,q,\alpha}[\omega]_{B_{p,q,\alpha}}^{(\frac{1}{p'}+\frac{1}{q})\max\{1,\frac{p'}{q}\}}\|\omega f\|_{L^p(\mathcal{H},y^{\alpha}dxdy)}$$ where $P_\alpha$ is the weighted Bergman projection of the upper-half plane $\mathcal{H}$, and $$[\omega]_{B_{p,q,\alpha}}:=\sup_{I\subset \mathbb{R}}\left(\frac{1}{|I|^{2+\alpha}}\int_{Q_I}\omega^{q}dV_\alpha\right)\left(\frac{1}{|I|^{2+\alpha}}\int_{Q_I}\omega^{-p'}dV_\alpha\right)^{\frac{q}{p'}},$$ with $Q_I=\{z=x+iy\in \mathbb{C}: x\in I, 0<y<|I|\}$.
Labels: Mathematics

ID: 18616
An Automated Scalable Framework for Distributing Radio Astronomy Processing Across Clusters and Clouds
The Low Frequency Array (LOFAR) radio telescope is an international aperture synthesis radio telescope used to study the Universe at low frequencies. One of the goals of the LOFAR telescope is to conduct deep wide-field surveys. Here we will discuss a framework for the processing of the LOFAR Two-metre Sky Survey (LoTSS). This survey will produce close to 50 PB of data within five years. These data rates require processing at locations with high-speed access to the archived data. To complete the LoTSS project, the processing software needs to be made portable and moved to clusters with a high-bandwidth connection to the data archive. This work presents a framework that makes the LOFAR software portable and is used to scale out LOFAR data reduction. Previous work was successful in preprocessing LOFAR data on a cluster of isolated nodes. This framework builds upon it and is currently operational. It is designed to be portable, scalable, automated and general. This paper describes its design and high-level operation, and the initial results of processing LoTSS data.
Labels: Physics

ID: 18617
Learning multiple visual domains with residual adapters
There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another network, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly across them.
Labels: Computer Science, Statistics
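The adapter residual modules mentioned above can be sketched roughly as follows. This is my schematic reading of the abstract, not the paper's exact module: assuming channels-last features and a 1x1 convolution, each domain gets a small residual map y = x + x W_d while the large backbone weights stay shared.

```python
import numpy as np

def residual_adapter(x, W_domain):
    """Apply a domain-specific 1x1-conv adapter with a residual connection.

    x: features of shape (height, width, channels); W_domain: a small
    per-domain (channels, channels) matrix. Only W_domain changes across
    visual domains; the shared backbone stays fixed. Schematic sketch,
    not the paper's exact architecture.
    """
    return x + x @ W_domain

x = np.random.randn(8, 8, 16)
W_digits = 0.01 * np.random.randn(16, 16)   # hypothetical adapter for one domain
print(residual_adapter(x, W_digits).shape)  # (8, 8, 16)
```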
ID: 18618
Cosmic quantum optical probing of quantum gravity through a gravitational lens
We consider the nonunitary quantum dynamics of neutral massless scalar particles used to model photons around a massive gravitational lens. The gravitational interaction between the lensing mass and asymptotically free particles is described by their second-quantized scattering wavefunctions. Remarkably, the zero-point spacetime fluctuations can induce significant decoherence of the scattered states with spontaneous emission of gravitons, thereby reducing the particles' coherence as well as energy. This new effect suggests that, when photon polarizations are negligible, such quantum gravity phenomena could lead to measurable anomalous redshift of recently studied astrophysical lasers through a gravitational lens in the range of black holes and galaxy clusters.
Labels: Physics

ID: 18619
Combinatorial Miller-Hagberg Algorithm for Randomization of Dense Networks
We propose a slightly revised Miller-Hagberg (MH) algorithm that efficiently generates a random network from a given expected degree sequence. The revision was to replace the approximated edge probability between a pair of nodes with a combinatorially calculated edge probability that better captures the likelihood of edge presence, especially where edges are dense. The computational complexity of this combinatorial MH algorithm is still of the same order as the original one. We evaluated the proposed algorithm through several numerical experiments. The results demonstrated that the proposed algorithm was particularly good at accurately representing high-degree nodes in dense, heterogeneous networks. This algorithm may be a useful alternative to other more established network randomization methods, given that data are increasingly becoming larger and denser in today's network science research.
Labels: Computer Science
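For context, here is a minimal sketch of the classical approximated edge probability that the abstract says is being replaced: in Chung-Lu-style expected-degree models, a pair (i, j) is connected with probability roughly w_i * w_j / sum(w), capped at 1. The paper's combinatorial replacement, and the efficient sampling that makes MH fast, are not reproduced here.

```python
import random

def chung_lu_graph(w):
    """Naive O(n^2) sketch of an expected-degree random graph.

    Uses the classical approximated edge probability
    p_ij = min(1, w_i * w_j / sum(w)); this is the quantity the paper
    above replaces with a combinatorially calculated probability.
    """
    total = sum(w)
    n = len(w)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, w[i] * w[j] / total)
            if random.random() < p:
                edges.append((i, j))
    return edges

print(chung_lu_graph([3, 3, 2, 2, 1, 1]))  # one sampled edge list
```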
ID: 18620
Topological Insulators in Random Lattices
Our understanding of topological insulators is based on an underlying crystalline lattice where the local electronic degrees of freedom at different sites hybridize with each other in ways that produce nontrivial band topology, and the search for material systems to realize such phases has been strongly influenced by this. Here we theoretically demonstrate topological insulators in systems with a random distribution of sites in space, i.e., a random lattice. This is achieved by constructing hopping models on random lattices whose ground states possess nontrivial topological nature (characterized, e.g., by Bott indices) that manifests as quantized conductances in systems with a boundary. By tuning parameters such as the density of sites (for a given range of fermion hopping), we can achieve transitions from trivial to topological phases. We discuss interesting features of these transitions. In two spatial dimensions, we show this for all five symmetry classes (A, AII, D, DIII and C) that are known to host nontrivial topology in crystalline systems. We expect similar physics to be realizable in any dimension and provide an explicit example of a $Z_2$ topological insulator on a random lattice in three spatial dimensions. Our study not only provides a deeper understanding of the topological phases of non-interacting fermions, but also suggests new directions in the pursuit of the laboratory realization of topological quantum matter.
Labels: Physics

ID: 18621
DiGrad: Multi-Task Reinforcement Learning with Shared Actions
Most reinforcement learning algorithms are inefficient for learning multiple tasks in complex robotic systems, where different tasks share a set of actions. In such environments, a compound policy may be learnt with shared neural network parameters, which performs multiple tasks concurrently. However, such a compound policy may become biased towards one task, or the gradients from different tasks may negate each other, making the learning unstable and sometimes less data-efficient. In this paper, we propose a new approach for simultaneous training of multiple tasks sharing a set of common actions in continuous action spaces, which we call DiGrad (Differential Policy Gradient). The proposed framework is based on differential policy gradients and can accommodate multi-task learning in a single actor-critic network. We also propose a simple heuristic in the differential policy gradient update to further improve the learning. The proposed architecture was tested on an 8-link planar manipulator and a 27 degrees-of-freedom (DoF) humanoid for learning multi-goal reachability tasks for 3 and 2 end effectors, respectively. We show that our approach supports efficient multi-task learning in complex robotic systems, outperforming related methods in continuous action spaces.
Labels: Computer Science, Statistics

ID: 18622
Narratives of Quantum Theory in the Age of Quantum Technologies
Quantum technologies can be presented to the public with or without introducing a strange trait of quantum theory responsible for their non-classical efficiency. Traditionally the message was centered on the superposition principle, while entanglement and properties such as contextuality have been gaining ground recently. A less theoretical approach is focused on simple protocols that enable technological applications. It results in a pragmatic narrative built with the help of the resource paradigm and principle-based reconstructions. I discuss the advantages and weaknesses of these methods. To illustrate the importance of new metaphors beyond the Schrödinger cat, I briefly describe a non-mathematical narrative about entanglement that conveys an idea of some of its unusual properties. If quantum technologists are to succeed in building trust in their work, they ought to provoke an aesthetic perception in the public commensurable with the mathematical beauty of quantum theory experienced by the physicist. The power of the narrative method lies in its capacity to do so.
Labels: Physics

ID: 18623
Is Information in the Brain Represented in Continuous or Discrete Form?
The question of continuous-versus-discrete information representation in the brain is a fundamental yet unresolved physiological question. Historically, most analyses assume a continuous representation without considering the alternative possibility of a discrete representation. Our work explores the plausibility of both representations, and answers the question from a communications engineering perspective. Drawing on Shannon's well-established communications theory, we posit that information in the brain is represented in a discrete form. Using a computer simulation, we show that information cannot be communicated reliably between neurons using a continuous representation, due to the presence of noise; neural information has to be in a discrete form. In addition, we designed three (human) behavioral experiments on probability estimation and analyzed the data using a novel discrete (quantized) model of probability. Under a discrete model of probability, two distinct probabilities (say, 0.57 and 0.58) are treated indifferently. We found that data from all participants were better fit to discrete models than continuous ones. Furthermore, we re-analyzed the data from a published (human) behavioral study on intertemporal choice using a novel discrete (quantized) model of intertemporal choice. Under such a model, two distinct time delays (say, 16 days and 17 days) are treated indifferently. We found corroborating results, showing that data from all participants were better fit to discrete models than continuous ones. In summary, all results reported here support our discrete hypothesis of information representation in the brain, which signifies a major demarcation from the current understanding of the brain's physiology.
Labels: Quantitative Biology
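A tiny sketch of the quantization idea in the abstract above (my illustration; the paper's actual discrete model may differ): on a coarse enough probability grid, 0.57 and 0.58 land on the same grid point and are therefore treated indifferently.

```python
def quantize(p, n_levels=10):
    """Snap a probability to the nearest of n_levels + 1 grid points."""
    return round(p * n_levels) / n_levels

# 0.57 and 0.58 both map to 0.6 on an 11-point grid, so a discrete
# (quantized) model treats them indifferently, as described above.
print(quantize(0.57), quantize(0.58))  # 0.6 0.6
```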
ID: 18624
Real-Time Background Subtraction Using Adaptive Sampling and Cascade of Gaussians
Background-foreground classification is a fundamental, well-studied problem in computer vision. Due to the pixel-wise nature of modeling and processing in the algorithm, it is usually difficult to satisfy real-time constraints. There is a trade-off between speed (because of model complexity) and accuracy. Inspired by the rejection cascade of the Viola-Jones classifier, we decompose the Gaussian Mixture Model (GMM) into an adaptive cascade of classifiers. This way we achieve a good improvement in speed without compromising accuracy. In the training phase, we learn multiple KDEs for different durations to be used as a strong prior distribution and detect probable oscillating pixels, which usually result in misclassifications. We propose a confidence measure for the classifier based on temporal consistency and the prior distribution. The confidence measure thus derived is used to adapt the learning rate and the thresholds of the model, to improve accuracy. The confidence measure is also employed to perform temporal and spatial sampling in a principled way. We demonstrate a speed-up factor of 5x to 10x and a 17 percent average improvement in accuracy over several standard videos.
Labels: Computer Science, Statistics

ID: 18625
Multilevel nested simulation for efficient risk estimation
We investigate the problem of computing a nested expectation of the form $\mathbb{P}[\mathbb{E}[X|Y] \!\geq\!0]\!=\!\mathbb{E}[\textrm{H}(\mathbb{E}[X|Y])]$ where $\textrm{H}$ is the Heaviside function. This nested expectation appears, for example, when estimating the probability of a large loss from a financial portfolio. We present a method that combines the idea of using Multilevel Monte Carlo (MLMC) for nested expectations with the idea of adaptively selecting the number of samples in the approximation of the inner expectation, as proposed by (Broadie et al., 2011). We propose and analyse an algorithm that adaptively selects the number of inner samples on each MLMC level and prove that the resulting MLMC method with adaptive sampling has an $\mathcal{O}\left( \varepsilon^{-2}|\log\varepsilon|^2 \right)$ complexity to achieve a root mean-squared error $\varepsilon$. The theoretical analysis is verified by numerical experiments on a simple model problem. We also present a stochastic root-finding algorithm that, combined with our adaptive methods, can be used to compute other risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), with the latter being achieved with $\mathcal{O}\left(\varepsilon^{-2}\right)$ complexity.
Labels: Quantitative Finance
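To make the nested expectation above concrete, here is a toy illustration of the plain (non-multilevel) estimator of $\mathbb{P}[\mathbb{E}[X|Y] \geq 0]$: an outer loop samples Y, an inner loop averages X given Y. The model below is my assumption for illustration only; the paper's MLMC method with adaptive inner sampling is far more efficient and is not reproduced here.

```python
import random

def nested_mc(n_outer=2000, n_inner=200):
    """Plain nested Monte Carlo estimate of P[E[X|Y] >= 0].

    Toy model (an assumption for illustration): Y ~ N(0, 1) and
    X | Y ~ N(Y, 1), so E[X|Y] = Y and the true probability is 0.5.
    """
    hits = 0
    for _ in range(n_outer):
        y = random.gauss(0.0, 1.0)
        inner = sum(random.gauss(y, 1.0) for _ in range(n_inner)) / n_inner
        hits += inner >= 0.0  # Heaviside H applied to the inner sample mean
    return hits / n_outer

print(nested_mc())  # roughly 0.5 for this toy model
```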
ID: 18626
Construction of a relativistic Ornstein-Uhlenbeck process
Based on a version of Dudley's Wiener process on the mass shell in the momentum Minkowski space of a massive point particle, a model of a relativistic Ornstein--Uhlenbeck process is constructed by addition of a specific drift term. The invariant distribution of this momentum process as well as other associated processes are computed.
Labels: Mathematics

ID: 18627
Safe Trajectory Synthesis for Autonomous Driving in Unforeseen Environments
Path planning for autonomous vehicles in arbitrary environments requires a guarantee of safety, but this can be impractical to ensure in real-time when the vehicle is described with a high-fidelity model. To address this problem, this paper develops a method to perform trajectory design by considering a low-fidelity model that accounts for model mismatch. The presented method begins by computing a conservative Forward Reachable Set (FRS) of a high-fidelity model's trajectories produced when tracking trajectories of a low-fidelity model over a finite time horizon. At runtime, the vehicle intersects this FRS with obstacles in the environment to eliminate trajectories that can lead to a collision, then selects an optimal plan from the remaining safe set. By bounding the time for this set intersection and subsequent path selection, this paper proves a lower bound for the FRS time horizon and sensing horizon to guarantee safety. This method is demonstrated in simulation using a kinematic Dubins car as the low-fidelity model and a dynamic unicycle as the high-fidelity model.
Labels: Computer Science

ID: 18628
A Data and Model-Parallel, Distributed and Scalable Framework for Training of Deep Networks in Apache Spark
Training deep networks is expensive and time-consuming, with the training period increasing with data size and growth in model parameters. In this paper, we provide a framework for distributed training of deep networks over a cluster of CPUs in Apache Spark. The framework implements both Data Parallelism and Model Parallelism, making it suitable for deep networks that require huge training data and model parameters too big to fit into the memory of a single machine. It can be scaled easily over a cluster of cheap commodity hardware to attain significant speedup and obtain better results, making it quite economical as compared to a farm of GPUs and supercomputers. We have proposed a new algorithm for training of deep networks for the case when the network is partitioned across the machines (Model Parallelism), along with detailed cost analysis and proof of convergence of the same. We have developed implementations for Fully-Connected Feedforward Networks, Convolutional Neural Networks, Recurrent Neural Networks and Long Short-Term Memory architectures. We present the results of extensive simulations demonstrating the speedup and accuracy obtained by our framework for different sizes of the data and model parameters with variation in the number of worker cores/partitions, thereby showing that our proposed framework can achieve significant speedup (up to 11X for CNN) and is also quite scalable.
Labels: Computer Science, Statistics

ID: 18629
ICLabel: An automated electroencephalographic independent component classifier, dataset, and website
The electroencephalogram (EEG) provides a non-invasive, minimally restrictive, and relatively low cost measure of mesoscale brain dynamics with high temporal resolution. Although signals recorded in parallel by multiple, near-adjacent EEG scalp electrode channels are highly correlated and combine signals from many different sources, biological and non-biological, independent component analysis (ICA) has been shown to isolate the various source generator processes underlying those recordings. Independent components (IC) found by ICA decomposition can be manually inspected, selected, and interpreted, but doing so requires both time and practice as ICs have no particular order or intrinsic interpretations and therefore require further study of their properties. Alternatively, sufficiently accurate automated IC classifiers can be used to classify ICs into broad source categories, speeding the analysis of EEG studies with many subjects and enabling the use of ICA decomposition in near-real-time applications. While many such classifiers have been proposed recently, this work presents the ICLabel project, comprising (1) an IC dataset containing spatiotemporal measures for over 200,000 ICs from more than 6,000 EEG recordings, (2) a website for collecting crowdsourced IC labels and educating EEG researchers and practitioners about IC interpretation, and (3) the automated ICLabel classifier. The classifier improves upon existing methods in two ways: by improving the accuracy of the computed label estimates and by enhancing its computational efficiency. The ICLabel classifier outperforms or performs comparably to the previous best publicly available method for all measured IC categories while computing those labels ten times faster than that classifier, as shown in a rigorous comparison against all other publicly available EEG IC classifiers.
Labels: Computer Science, Statistics

ID: 18630
Inference on a New Class of Sample Average Treatment Effects
We derive new variance formulas for inference on a general class of estimands of causal average treatment effects in a Randomized Control Trial (RCT). We generalize Robins (1988) and show that when the estimand of interest is the Sample Average Treatment Effect of the Treated (SATT, or SATC for controls), a consistent variance estimator exists. Although these estimands are equal to the Sample Average Treatment Effect (SATE) in expectation, potentially large differences in both accuracy and coverage can occur by the change of estimand, even asymptotically. Inference on the SATE, even using a conservative confidence interval, provides incorrect coverage of the SATT or SATC. We derive the variance and limiting distribution of a new and general class of estimands---any mixing between SATT and SATC---for which the SATE is a specific case. We demonstrate the applicability of the new theoretical results using Monte Carlo simulations and an empirical application with hundreds of online experiments with an average sample size of approximately one hundred million observations per experiment. An R package, estCI, that implements all the proposed estimation procedures is available.
Labels: Mathematics, Statistics

ID: 18631
A split step Fourier/discontinuous Galerkin scheme for the Kadomtsev--Petviashvili equation
In this paper we propose a method to solve the Kadomtsev--Petviashvili equation based on splitting the linear part of the equation from the nonlinear part. The linear part is treated using FFTs, while the nonlinear part is approximated using a semi-Lagrangian discontinuous Galerkin approach of arbitrary order. We demonstrate the efficiency and accuracy of the numerical method by providing a range of numerical simulations. In particular, we find that our approach can outperform the numerical methods considered in the literature by up to a factor of five. Although we focus on the Kadomtsev--Petviashvili equation in this paper, the proposed numerical scheme can be extended to a range of related models as well.
Labels: Physics, Mathematics

ID: 18632
Dense Transformer Networks
The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by network architecture instead of learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules are inserted into each of the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, thus the entire network can be trained. We apply the proposed networks on natural and biological image segmentation tasks and show superior performance is achieved in comparison to baseline methods.
Labels: Computer Science, Statistics

ID: 18633
Tetragonal CH3NH3PbI3 Is Ferroelectric
Halide perovskite (HaP) semiconductors are revolutionizing photovoltaic (PV) solar energy conversion by showing remarkable performance in solar cells, especially those made with tetragonal methylammonium lead tri-iodide (MAPbI3). In particular, the low voltage loss of these cells implies a remarkably low recombination rate of photogenerated carriers. It was suggested that low recombination can be due to spatial separation of electrons and holes, a possibility if MAPbI3 is a semiconducting ferroelectric, which, however, requires clear experimental evidence. As a first step, we show that, in operando, MAPbI3 (unlike MAPbBr3) is pyroelectric, which implies it can be ferroelectric. The next step, proving it is (not) ferroelectric, is challenging, because of the material's relatively high electrical conductance (a consequence of an optical band gap suitable for PV conversion!) and low stability under high applied bias voltage. This excludes normal measurements of a ferroelectric hysteresis loop to prove ferroelectricity's hallmark, switchable polarization. By adopting an approach suitable for electrically leaky materials such as MAPbI3, we show here ferroelectric hysteresis from well-characterized single crystals at low temperature (still within the tetragonal phase, which is the room-temperature stable phase). Using chemical etching, we also image polar domains, the structural fingerprint for ferroelectricity, periodically stacked along the polar axis of the crystal, which, as predicted by theory, scale with the overall crystal size. We also succeeded in detecting clear second-harmonic generation, direct evidence for the material's non-centrosymmetry. We note that the material's ferroelectric nature can, but need not, be important in a PV cell operating around room temperature.
Labels: Physics

ID: 18634
A Simple and Efficient MapReduce Algorithm for Data Cube Materialization
Data cube materialization is a classical database operator introduced in Gray et al. (Data Mining and Knowledge Discovery, Vol. 1), which is critical for many analysis tasks. Nandi et al. (Transactions on Knowledge and Data Engineering, Vol. 6) first studied cube materialization for large scale datasets using the MapReduce framework, and proposed a sophisticated modification of a simple broadcast algorithm to handle a dataset with a 216GB cube size within 25 minutes with 2k machines in 2012. We take a different approach, and propose a simple MapReduce algorithm which (1) minimizes the total number of copy-add operations, (2) leverages locality of computation, and (3) balances work evenly across machines. As a result, the algorithm shows excellent performance, and materialized a real dataset with a cube size of 35.0G tuples and 1.75T bytes in 54 minutes, with 0.4k machines in 2014.
Labels: Computer Science
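To make "data cube materialization" concrete, here is a small single-machine sketch (my illustration, not the paper's MapReduce algorithm): materializing the cube of a table over dimensions D means computing an aggregate for every subset of D.

```python
from itertools import combinations
from collections import defaultdict

def materialize_cube(rows, dims, measure):
    """Aggregate `measure` over every subset of `dims` (the full cube).

    Single-machine illustration only; the paper distributes this
    computation with MapReduce while minimizing copy-add operations.
    """
    cube = {}
    for k in range(len(dims) + 1):
        for group in combinations(dims, k):
            agg = defaultdict(int)
            for r in rows:
                key = tuple(r[d] for d in group)
                agg[key] += r[measure]
            cube[group] = dict(agg)
    return cube

rows = [
    {"country": "US", "year": 2014, "sales": 3},
    {"country": "US", "year": 2015, "sales": 5},
    {"country": "DE", "year": 2014, "sales": 2},
]
cube = materialize_cube(rows, ("country", "year"), "sales")
print(cube[()])            # {(): 10} -- grand total
print(cube[("country",)])  # {('US',): 8, ('DE',): 2}
```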
ID: 18635
Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations
The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.
Labels: Computer Science, Statistics

ID: 18636
Completion of High Order Tensor Data with Missing Entries via Tensor-train Decomposition
In this paper, we aim at the completion problem of high order tensor data with missing entries. The existing tensor factorization and completion methods suffer from the curse of dimensionality when the tensor order N >> 3. To overcome this problem, we propose an efficient algorithm called TT-WOPT (Tensor-train Weighted OPTimization) to find the latent core tensors of tensor data and recover the missing entries. Tensor-train decomposition, which has powerful representation ability with linear scalability in tensor order, is employed in our algorithm. The experimental results on synthetic data and natural image completion demonstrate that our method significantly outperforms the other related methods. Especially when the missing rate of data is very high, e.g., 85% to 99%, our algorithm can achieve much better performance than other state-of-the-art algorithms.
Labels: Computer Science
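A minimal sketch of the tensor-train format underlying TT-WOPT (the format only, not the authors' weighted optimization): an order-N tensor is represented by cores G_k of shape (r_{k-1}, n_k, r_k) with r_0 = r_N = 1, and any entry is a product of the selected core slices, which is why storage scales linearly in the tensor order.

```python
import numpy as np

def tt_entry(cores, index):
    """Evaluate one entry of a tensor stored in tensor-train format.

    cores[k] has shape (r_{k-1}, n_k, r_k) with r_0 = r_N = 1, so the
    chained product of the selected slices is a 1x1 matrix.
    """
    m = cores[0][:, index[0], :]          # shape (1, r_1)
    for k in range(1, len(cores)):
        m = m @ cores[k][:, index[k], :]  # chain of small matrix products
    return m[0, 0]

# Random TT cores for a 4x5x6 tensor with TT-ranks (1, 2, 3, 1).
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]
print(tt_entry(cores, (1, 2, 3)))
```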
ID: 18637
On the inner products of some Deligne--Lusztig type representations
In this paper we introduce a family of Deligne--Lusztig type varieties attached to connected reductive groups over quotients of discrete valuation rings, naturally generalising the higher Deligne--Lusztig varieties and some constructions related to the algebraisation problem raised by Lusztig. We establish the inner product formula between the representations associated to these varieties and the higher Deligne--Lusztig representations.
Labels: Mathematics

ID: 18638
An invariant for embedded Fano manifolds covered by linear spaces
For an embedded Fano manifold $X$, we introduce a new invariant $S_X$ related to the dimension of covering linear spaces. The aim of this paper is to classify Fano manifolds $X$ which have large $S_X$.
Labels: Mathematics

ID: 18639
Delivery Latency Trade-Offs of Heterogeneous Contents in Fog Radio Access Networks
A Fog Radio Access Network (F-RAN) is a cellular wireless system that enables content delivery via the caching of popular content at edge nodes (ENs) and cloud processing. The existing information-theoretic analyses of F-RAN systems, and special cases thereof, make the assumption that all requests should be guaranteed the same delivery latency, which results in identical latency for all files in the content library. In practice, however, contents may have heterogeneous timeliness requirements depending on the applications that operate on them. Given a per-EN cache capacity constraint, there exists a fundamental trade-off among the delivery latencies of different users' requests, since contents that are allocated more cache space generally enjoy lower delivery latencies. For the case with two ENs and two users, the optimal latency trade-off is characterized in the high-SNR regime in terms of the Normalized Delivery Time (NDT) metric. The main results are illustrated by numerical examples.
Labels: Computer Science

ID: 18640
Leaking Uninitialized Secure Enclave Memory via Structure Padding (Extended Abstract)
Intel Software Guard Extensions (SGX) aims to provide an isolated execution environment, known as an enclave, for a user-level process to maximize its confidentiality and integrity. In this paper, we study how uninitialized data inside a secure enclave can be leaked via structure padding. We found that, during ECALL and OCALL, proxy functions that are automatically generated by the Intel SGX Software Development Kit (SDK) fully copy structure variables from an enclave to the normal memory to return the result of an ECALL function and to pass input parameters to an OCALL function. If the structure variables contain padding bytes, uninitialized enclave memory, which might contain confidential data like a private key, can be copied to the normal memory through the padding bytes. We also consider potential countermeasures against these security threats.
Labels: Computer Science

ID: 18641
Asymptotic and numerical analysis of a stochastic PDE model of volume transmission
Volume transmission is an important neural communication pathway in which neurons in one brain region influence the neurotransmitter concentration in the extracellular space of a distant brain region. In this paper, we apply asymptotic analysis to a stochastic partial differential equation model of volume transmission to calculate the neurotransmitter concentration in the extracellular space. Our model involves the diffusion equation in a three-dimensional domain with interior holes that randomly switch between being either sources or sinks. These holes model nerve varicosities that alternate between releasing and absorbing neurotransmitter, according to when they fire action potentials. In the case that the holes are small, we compute analytically the first two nonzero terms in an asymptotic expansion of the average neurotransmitter concentration. The first term shows that the concentration is spatially constant to leading order and that this constant is independent of many details in the problem. Specifically, this constant first term is independent of the number and location of nerve varicosities, neural firing correlations, and the size and geometry of the extracellular space. The second term shows how these factors affect the concentration at second order. Interestingly, the second term is also spatially constant under some mild assumptions. We verify our asymptotic results by high-order numerical simulation using radial basis function-generated finite differences.
Labels: Quantitative Biology

ID: 18642
Mixtures of Hidden Truncation Hyperbolic Factor Analyzers
The mixture of factor analyzers model was first introduced over 20 years ago and, in the meantime, has been extended to several non-Gaussian analogues. In general, these analogues account for situations with heavy tailed and/or skewed clusters. An approach is introduced that unifies many of these approaches into one very general model: the mixture of hidden truncation hyperbolic factor analyzers (MHTHFA) model. In the process of doing this, a hidden truncation hyperbolic factor analysis model is also introduced. The MHTHFA model is illustrated for clustering as well as semi-supervised classification using two real datasets.
Labels: Statistics

ID: 18643
Representations on Partially Holomorphic Cohomology Spaces, Revisited
This is a semi-expository update and rewrite of my 1974 AMS Memoir describing Plancherel formulae and partial Dolbeault cohomology realizations for standard tempered representations of general real reductive Lie groups. Even after so many years, much of that Memoir is up to date, but of course there have been a number of refinements, advances and new developments, most of which have applied to smaller classes of real reductive Lie groups. Here we rewrite that AMS Memoir in view of these advances and indicate the ties with some of the more recent (or at least less classical) approaches to geometric realization of unitary representations.
Labels: Mathematics

ID: 18644
Underground tests of quantum mechanics. Whispers in the cosmic silence?
By performing X-ray measurements in the "cosmic silence" of the underground laboratory of Gran Sasso, LNGS-INFN, we test a basic principle of quantum mechanics: the Pauli Exclusion Principle (PEP) for electrons. We present the results achieved by the VIP experiment and the ongoing VIP2 measurement, which aims to gain two orders of magnitude improvement in testing PEP. We also use a similar experimental technique to search for radiation (X and gamma) predicted by continuous spontaneous localization models, which aim to solve the "measurement problem".
Labels: Physics

ID: 18645
Implications for Post-Processing Nucleosynthesis of Core-Collapse Supernova Models with Lagrangian Particles
We investigate core-collapse supernova (CCSN) nucleosynthesis with self-consistent, axisymmetric (2D) simulations performed using the radiation-hydrodynamics code Chimera. Computational costs have traditionally constrained the evolution of the nuclear composition within multidimensional CCSN models to, at best, a 14-species $\alpha$-network capable of tracking only $(\alpha,\gamma)$ reactions from $^{4}$He to $^{60}$Zn. Such a simplified network limits the ability to accurately evolve detailed composition and neutronization or calculate the nuclear energy generation rate. Lagrangian tracer particles are commonly used to extend the nuclear network evolution by incorporating more realistic networks in post-processing nucleosynthesis calculations. However, limitations such as poor spatial resolution of the tracer particles, inconsistent thermodynamic evolution, including misestimation of expansion timescales, and uncertain determination of the multidimensional mass-cut at the end of the simulation impose uncertainties inherent to this approach. We present a detailed analysis of the impact of such uncertainties for four self-consistent axisymmetric CCSN models initiated from stellar metallicity, non-rotating progenitors of 12 $M_\odot$, 15 $M_\odot$, 20 $M_\odot$, and 25 $M_\odot$ and evolved with the smaller $\alpha$-network to more than 1 s after the launch of an explosion.
Labels: Physics

ID: 18646
High Performance Parallel Image Reconstruction for New Vacuum Solar Telescope
Many technologies have been developed to help improve spatial resolution of observational images for ground-based solar telescopes, such as adaptive optics (AO) systems and post-processing reconstruction. As any AO system correction is only partial, it is indispensable to use post-processing reconstruction techniques. In the New Vacuum Solar Telescope (NVST), speckle masking method is used to achieve the diffraction limited resolution of the telescope. Although the method is very promising, the computation is quite intensive, and the amount of data is tremendous, requiring several months to reconstruct observational data of one day on a high-end computer. To accelerate image reconstruction, we parallelize the program package on a high performance cluster. We describe parallel implementation details for several reconstruction procedures. The code is written in C language using Message Passing Interface (MPI) and optimized for parallel processing in a multi-processor environment. We show the excellent performance of parallel implementation, and the whole data processing speed is about 71 times faster than before. Finally, we analyze the scalability of the code to find possible bottlenecks, and propose several ways to further improve the parallel performance. We conclude that the presented program is capable of executing in real-time reconstruction applications at NVST.
Labels: Physics

ID: 18647
Approximation of Bandwidth for the Interactive Operation in Video on Demand System
An interactive session of a video-on-demand (VOD) streaming service requires smooth data transport to the viewer, irrespective of geographic location. Managing bandwidth for the transport of video objects during an interactive session is a mandatory prerequisite for accessing the required video. This need arises in domains such as movie on demand, electronic encyclopedias, interactive games, and educational resources. The required data is imported from distributed storage servers through the high-speed backbone network. This paper presents a viewer-driven, session-based multi-user model with respect to an overlay mesh network. This work shows that the required bandwidth is a decisive factor in a video-on-demand system. The analytic session-based single-viewer model gives the bandwidth requirement for any interactive operation, such as pause, slow motion, rewind, skipping some number of frames, or fast forward by a constant number of frames. The resulting bandwidth requirement model exposes the trade-off between data-transport and storage costs for different system resources and for various system configurations.
Labels: Computer Science

ID: 18648
Twisted Recurrence via Polynomial Walks
In this paper we show how polynomial walks can be used to establish a twisted recurrence for sets of positive density in $\mathbb{Z}^d$. In particular, we prove that if $\Gamma \leq \operatorname{GL}_d(\mathbb{Z})$ is finitely generated by unipotents and acts irreducibly on $\mathbb{R}^d$, then for any set $B \subset \mathbb{Z}^d$ of positive density, there exists $k \geq 1$ such that for any $v \in k \mathbb{Z}^d$ one can find $\gamma \in \Gamma$ with $\gamma v \in B - B$. Our method does not require the linearity of the action, and we prove a twisted recurrence for semigroups of maps from $\mathbb{Z}^d$ to $\mathbb{Z}^d$ satisfying some irreducibility and polynomial assumptions. As one of the consequences, we prove a non-linear analog of Bogolubov's theorem -- for any set $B \subset \mathbb{Z}^2$ of positive density, and $p(n) \in \mathbb{Z}[n]$, with $p(0) = 0$ and $\operatorname{deg}(p) \geq 2$, there exists $k \geq 1$ such that $k \mathbb{Z} \subset \{ x - p(y) \, | \, (x,y) \in B-B \}$. Unlike the previous works on twisted recurrence that used recent results of Benoist-Quint and Bourgain-Furman-Lindenstrauss-Mozes on equidistribution of random walks on automorphism groups of tori, our method relies on the classical Weyl equidistribution for polynomial orbits on tori.
Labels: Mathematics

ID: 18649
Well-posedness and scattering for the Boltzmann equations: Soft potential with cut-off
We prove the global existence of the unique mild solution for the Cauchy problem of the cut-off Boltzmann equation for soft potential model $\gamma=2-N$ with initial data small in $L^N_{x,v}$ where $N=2,3$ is the dimension. The proof relies on the existing inhomogeneous Strichartz estimates for the kinetic equation by Ovcharov and convolution-like estimates for the gain term of the Boltzmann collision operator by Alonso, Carneiro and Gamba. The global dynamics of the solution is also characterized by showing that the small global solution scatters with respect to the kinetic transport operator in $L^N_{x,v}$. Also the connection between function spaces and cut-off soft potential model $-N<\gamma<2-N$ is characterized in the local well-posedness result for the Cauchy problem with large initial data.
Labels: Mathematics

ID: 18650
Wikipedia in academia as a teaching tool: from averse to proactive faculty profiles
This study concerned the active use of Wikipedia as a teaching tool in the classroom in higher education, trying to identify different usage profiles and their characterization. A questionnaire survey was administered to all full-time and part-time teachers at the Universitat Oberta de Catalunya and the Universitat Pompeu Fabra, both in Barcelona, Spain. The questionnaire was designed using the Technology Acceptance Model as a reference, including items about teachers' web 2.0 profile, Wikipedia usage, expertise, perceived usefulness, ease of use, visibility and quality, as well as Wikipedia's status among colleagues and incentives to use it more actively. Clustering and statistical analysis were carried out using the k-medoids algorithm, and differences between clusters were assessed by means of contingency tables and generalized linear models (logit). The respondents were classified in four clusters, from less to more likely to adopt and use Wikipedia in the classroom, namely averse (25.4%), reluctant (17.9%), open (29.5%) and proactive (27.2%). Proactive faculty are mostly men teaching part-time in STEM fields, mainly engineering, while averse faculty are mostly women teaching full-time in non-STEM fields. Nevertheless, questionnaire items related to visibility, quality, image, usefulness and expertise determine the main differences between clusters, rather than age, gender or domain. Clusters involving a positive view of Wikipedia and at least some frequency of use clearly outnumber those with a strictly negative stance. This goes against the common view that faculty members are mostly sceptical about Wikipedia. Environmental factors such as academic culture and colleagues' opinions are more important than faculty members' personal characteristics, especially with respect to what they think about Wikipedia's quality.
1
0
0
0
0
0
18,651
A trans-disciplinary review of deep learning research for water resources scientists
Deep learning (DL), a new generation of artificial neural network research, has transformed industries, daily lives and various scientific disciplines in recent years. DL represents significant progress in the ability of neural networks to automatically engineer problem-relevant features and capture highly complex data distributions. I argue that DL can help address several major new and old challenges facing research in water sciences such as inter-disciplinarity, data discoverability, hydrologic scaling, equifinality, and the need for parameter regionalization. This review paper is intended to provide water resources scientists, and hydrologists in particular, with a simple technical overview, a trans-disciplinary progress update, and a source of inspiration about the relevance of DL to water. The review reveals that various physical and geoscientific disciplines have utilized DL to address data challenges, improve efficiency, and gain scientific insights. DL is especially suited for information extraction from image-like data and sequential data. Techniques and experiences presented in other disciplines are of high relevance to water research. Meanwhile, it is less noticed that DL may also serve as a scientific exploratory tool. A new area termed 'AI neuroscience', where scientists interpret the decision process of deep networks and derive insights, has been born. This budding sub-discipline has demonstrated methods including correlation-based analysis, inversion of network-extracted features, reduced-order approximations by interpretable models, and attribution of network decisions to inputs. Moreover, DL can also use data to condition neurons that mimic problem-specific fundamental organizing units, thus revealing emergent behaviors of these units. Vast opportunities exist for DL to propel advances in water sciences.
1
0
0
1
0
0
18,652
A new NS3 Implementation of the CCNx 1.0 Protocol
The ccns3Sim project is an open source implementation of the CCNx 1.0 protocols for the NS3 simulator. We describe the implementation and several important features, including modularity and process-delay simulation. The ccns3Sim implementation is a fresh NS3-specific implementation. Like NS3 itself, it uses the C++98 standard, NS3 code style, NS3 smart pointers, NS3 xUnit, and integrates with the NS3 documentation and manual. A user or developer does not need to learn two systems. If one knows NS3, one should be able to get started with the CCNx code right away. A developer can easily use their own implementation of the layer 3 protocol, layer 4 protocol, forwarder, routing protocol, Pending Interest Table (PIT), Forwarding Information Base (FIB), or Content Store (CS). A user may configure or specify a new implementation for any of these features at runtime in the simulation script. In this paper, we describe the software architecture and give examples of using the simulator. We evaluate the implementation with several example experiments on ICN caching.
1
0
0
0
0
0
18,653
MultiRefactor: Automated Refactoring To Improve Software Quality
In this paper, a new approach is proposed for automated software maintenance. The tool is able to perform 26 different refactorings. It also contains a large selection of metrics to measure the impact of the refactorings on the software and six different search-based optimization algorithms to improve the software. This tool contains both mono-objective and multi-objective search techniques for software improvement and is fully automated. The paper describes the various capabilities of the tool and its unique aspects, and also presents some research results from experimentation. The individual metrics are tested across five different codebases to deduce the most effective metrics for general quality improvement. It is found that the metrics that relate to more specific elements of the code are more useful for driving change in the search. The mono-objective genetic algorithm is also tested against the multi-objective algorithm to see how the results compare when three separate objectives are used. When comparing the best solutions for each individual objective, the multi-objective approach generates suitable improvements in quality in less time, allowing for rapid maintenance cycles.
1
0
0
0
0
0
18,654
Learning Postural Synergies for Categorical Grasping through Shape Space Registration
Every time a person encounters an object with a given degree of familiarity, he/she immediately knows how to grasp it. Adaptation of the movement of the hand according to the object geometry happens effortlessly because of the accumulated knowledge of previous experiences grasping similar objects. In this paper, we present a novel method for inferring grasp configurations based on the object shape. Grasping knowledge is gathered in a synergy space of the robotic hand built by following a human grasping taxonomy. The synergy space is constructed through human demonstrations employing an exoskeleton that provides force feedback, which offers the advantage of evaluating the quality of the grasp. The shape descriptor is obtained by means of a categorical non-rigid registration that encodes typical intra-class variations. This approach is especially suitable for on-line scenarios where only a portion of the object's surface is observable. This method is demonstrated through simulation and real robot experiments by grasping objects never seen before by the robot.
1
0
0
0
0
0
18,655
Perturbations of self-adjoint operators in semifinite von Neumann algebras: Kato-Rosenblum theorem
In the paper, we prove an analogue of the Kato-Rosenblum theorem in a semifinite von Neumann algebra. Let $\mathcal{M}$ be a countably decomposable, properly infinite, semifinite von Neumann algebra acting on a Hilbert space $\mathcal{H}$ and let $\tau$ be a faithful normal semifinite tracial weight of $\mathcal M$. Suppose that $H$ and $H_1$ are self-adjoint operators affiliated with $\mathcal{M}$. We show that if $H-H_1$ is in $\mathcal{M}\cap L^{1}\left(\mathcal{M},\tau\right)$, then the norm absolutely continuous parts of $H$ and $H_1$ are unitarily equivalent. This implies that the real part of a non-normal hyponormal operator in $\mathcal M$ is not a perturbation by $\mathcal{M}\cap L^{1}\left(\mathcal{M},\tau\right)$ of a diagonal operator. Meanwhile, for $n\ge 2$ and $1\leq p<n$, by modifying Voiculescu's invariant we give examples of commuting $n$-tuples of self-adjoint operators in $\mathcal{M}$ that are not arbitrarily small perturbations of commuting diagonal operators modulo $\mathcal{M}\cap L^{p}\left(\mathcal{M},\tau\right)$.
0
0
1
0
0
0
18,656
Dust radiative transfer modelling of the infrared ring around the magnetar SGR 1900$+$14
A peculiar infrared ring-like structure was discovered by {\em Spitzer} around the strongly magnetised neutron star SGR 1900$+$14. This infrared structure was suggested to be due to a dust-free cavity, produced by the SGR Giant Flare that occurred in 1998, and kept illuminated by surrounding stars. Using a 3D dust radiative transfer code, we aimed at reproducing the emission morphology and the integrated emission flux of this structure assuming different spatial distributions and densities for the dust, and different positions for the illuminating stars. We found that a dust-free ellipsoidal cavity can reproduce the shape, flux, and spectrum of the ring-like infrared emission, provided that the illuminating stars are inside the cavity and that the interstellar medium has a high gas density ($n_H\sim$1000 cm$^{-3}$). We further constrain the emitting region to have a sharp inner boundary and to be significantly extended in the radial direction, possibly even just a cavity in a smooth molecular cloud. We discuss possible scenarios for the formation of the dustless cavity and the particular geometry that allows it to be IR-bright.
0
1
0
0
0
0
18,657
Application of transfer matrix and transfer function analysis to grating-type dielectric laser accelerators: ponderomotive focusing of electrons
The question of the suitability of a transfer matrix description of electrons traversing grating-type dielectric laser acceleration (DLA) structures is addressed. It is shown that although matrix considerations lead to interesting insights, the basic transfer properties of DLA cells cannot be described by a matrix. A more general notion of a transfer function is shown to be a simple and useful tool for formulating problems of particle dynamics in DLA. As an example, a focusing structure is proposed which works simultaneously for all electron phases.
0
1
0
0
0
0
18,658
Discretization-free Knowledge Gradient Methods for Bayesian Optimization
This paper studies Bayesian ranking and selection (R&S) problems with correlated prior beliefs and continuous domains, i.e. Bayesian optimization (BO). Knowledge gradient methods [Frazier et al., 2008, 2009] have been widely studied for discrete R&S problems, where they sample the one-step Bayes-optimal point. When used over continuous domains, previous work on the knowledge gradient [Scott et al., 2011, Wu and Frazier, 2016, Wu et al., 2017] often relies on a discretized finite approximation. However, the discretization introduces error and scales poorly as the dimension of the domain grows. In this paper, we develop a fast discretization-free knowledge gradient method for Bayesian optimization. Our method is not restricted to the fully sequential setting, but is useful in all settings where the knowledge gradient can be used over continuous domains. We show how our method can be generalized to handle (i) the suggestion of batches of points (parallel knowledge gradient); (ii) the setting where derivative information is available in the optimization process (derivative-enabled knowledge gradient). In numerical experiments, we demonstrate that the discretization-free knowledge gradient method finds global optima significantly faster than previous Bayesian optimization algorithms on both synthetic test functions and real-world applications, especially when function evaluations are noisy; derivative-enabled knowledge gradient can further improve performance, even outperforming gradient-based optimizers such as BFGS when derivative information is available.
1
0
1
1
0
0
18,659
Measuring the Robustness of Graph Properties
In this paper, we propose a perturbation framework to measure the robustness of graph properties. Although there are already perturbation methods proposed to tackle this problem, they are limited by the fact that the strength of the perturbation cannot be well controlled. We first provide a perturbation framework on graphs by introducing weights on the nodes, in which the magnitude of the perturbation can be easily controlled through the variance of the weights. Meanwhile, the topology of the graphs is also preserved to avoid uncontrollable strength in the perturbation. We then extend the measure of robustness in the robust statistics literature to graph properties.
1
0
0
1
0
0
18,660
Better Software Analytics via "DUO": Data Mining Algorithms Using/Used-by Optimizers
This paper claims that a new field of empirical software engineering research and practice is emerging: data mining using/used-by optimizers for empirical studies, or DUO. For example, data miners can generate the models that are explored by optimizers. Also, optimizers can advise how to best adjust the control parameters of a data miner. This combined approach acts like an agent leaning over the shoulder of an analyst that advises "ask this question next" or "ignore that problem, it is not relevant to your goals". Further, those agents can help us build "better" predictive models, where "better" can mean either greater predictive accuracy or faster modeling time (which, in turn, enables the exploration of a wider range of options). We also caution that the era of papers that just use data miners is coming to an end. Results obtained from an unoptimized data miner can be quickly refuted, just by applying an optimizer to produce a different (and better performing) model. Our conclusion, hence, is that for software analytics it is possible, useful and necessary to combine data mining and optimization using DUO.
1
0
0
0
0
0
18,661
Lesion detection and Grading of Diabetic Retinopathy via Two-stage Deep Convolutional Neural Networks
We propose an automatic diabetic retinopathy (DR) analysis algorithm based on a two-stage deep convolutional neural network (DCNN). Compared to existing DCNN-based DR detection methods, the proposed algorithm has the following advantages: (1) Our method can point out the location and type of lesions in the fundus images, as well as giving the severity grades of DR. Moreover, since retina lesions and DR severity appear at different scales in fundus images, the integration of both local and global networks learns more complete and specific features for DR analysis. (2) By introducing an imbalanced weighting map, more attention is given to lesion patches for DR grading, which significantly improves the performance of the proposed algorithm. In this study, we label 12,206 lesion patches and re-annotate the DR grades of 23,595 fundus images from the Kaggle competition dataset. Under the guidance of clinical ophthalmologists, the experimental results show that our local lesion detection net achieves comparable performance with trained human observers, and the proposed imbalanced weighting scheme is also shown to significantly improve the capability of our DCNN-based DR grading algorithm.
1
0
0
0
0
0
18,662
DRYVR: Data-driven verification and compositional reasoning for automotive systems
We present the DRYVR framework for verifying hybrid control systems that are described by a combination of a black-box simulator for trajectories and a white-box transition graph specifying mode switches. The framework includes (a) a probabilistic algorithm for learning sensitivity of the continuous trajectories from simulation data, (b) a bounded reachability analysis algorithm that uses the learned sensitivity, and (c) reasoning techniques based on simulation relations and sequential composition, that enable verification of complex systems under long switching sequences, from the reachability analysis of a simpler system under shorter sequences. We demonstrate the utility of the framework by verifying a suite of automotive benchmarks that include powertrain control, automatic transmission, and several autonomous and ADAS features like automatic emergency braking, lane-merge, and auto-passing controllers.
1
0
0
0
0
0
18,663
Deep Reinforcement Learning for Programming Language Correction
Novice programmers often struggle with the formal syntax of programming languages. To assist them, we design a novel programming language correction framework amenable to reinforcement learning. The framework allows an agent to mimic human actions for text navigation and editing. We demonstrate that the agent can be trained through self-exploration directly from the raw input, that is, program text itself, without any knowledge of the formal syntax of the programming language. We leverage expert demonstrations for one tenth of the training data to accelerate training. The proposed technique is evaluated on 6975 erroneous C programs with typographic errors, written by students during an introductory programming course. Our technique fixes 14% more programs and 29% more compiler error messages relative to those fixed by a state-of-the-art tool, DeepFix, which uses a fully supervised neural machine translation approach.
1
0
0
0
0
0
18,664
Dispersionless and multicomponent BKP hierarchies with quantum torus symmetries
In this article, we construct the additional perturbative quantum torus symmetry of the dispersionless BKP hierarchy based on the $W_{\infty}$ infinite dimensional Lie symmetry. These results show that the complete quantum torus symmetry is broken from the BKP hierarchy to its dispersionless hierarchy. Further, a series of additional flows of the multicomponent BKP hierarchy will be defined, and these flows constitute an $N$-fold direct product of the positive half of the quantum torus symmetries.
0
1
1
0
0
0
18,665
A Sampling Framework for Solving Physics-driven Inverse Source Problems
Partial differential equations are central to describing many physical phenomena. In many applications these phenomena are observed through a sensor network, with the aim of inferring their underlying properties. Leveraging certain results in sampling and approximation theory, we present a new framework for solving a class of inverse source problems for physical fields governed by linear partial differential equations. Specifically, we demonstrate that the unknown field sources can be recovered from a sequence of so-called generalised measurements by using multidimensional frequency estimation techniques. Next we show that---for physics-driven fields---this sequence of generalised measurements can be estimated by computing a linear weighted-sum of the sensor measurements; whereby the exact weights (of the sums) correspond to those that reproduce multidimensional exponentials, when used to linearly combine translates of a particular prototype function related to the Green's function of the underlying field. Explicit formulae are then derived for the sequence of weights that map sensor samples to the exact sequence of generalised measurements when the Green's function satisfies the generalised Strang-Fix condition. Otherwise, the same mapping yields a close approximation of the generalised measurements. Based on this new framework we develop practical, noise-robust sensor network strategies for solving the inverse source problem, and then present numerical simulation results to verify their performance.
1
0
1
0
0
0
18,666
An Application of $h$-principle to Manifold Calculus
Manifold calculus is a form of functor calculus that analyzes contravariant functors from some categories of manifolds to topological spaces by providing analytic approximations to them. In this paper we apply the theory of the $h$-principle to construct several examples of analytic functors in this sense. We prove that the analytic approximation of the Lagrangian embeddings functor $\mathrm{emb}_{\mathrm{Lag}}(-,N)$ is the totally real embeddings functor $\mathrm{emb}_{\mathrm{TR}}(-,N)$. Under certain conditions we provide a geometric construction for the homotopy fiber of $ \mathrm{emb}(M,N) \rightarrow \mathrm{imm}(M,N)$. This construction also provides an example of a functor which is itself empty when evaluated on most manifolds but whose analytic approximation is almost always non-empty.
0
0
1
0
0
0
18,667
Grasping Unknown Objects in Clutter by Superquadric Representation
In this paper, a quick and efficient method is presented for grasping unknown objects in clutter. The grasping method relies on real-time superquadric (SQ) representation of partial-view objects and incomplete object modelling, well suited for unknown symmetric objects in cluttered scenarios, followed by optimized antipodal grasping. The incomplete object models are processed through a mirroring algorithm that assumes symmetry to first create an approximate complete model and then fit an SQ representation. The grasping algorithm is designed for maximum force balance and stability, taking advantage of the quick retrieval of dimension and surface curvature information from the SQ parameters. The pose of the SQs with respect to the direction of gravity is calculated and used, together with the parameters of the SQs and the specification of the gripper, to select the best direction of approach and contact points. The SQ fitting method has been tested on custom datasets containing objects in isolation as well as in clutter. The grasping algorithm is evaluated on a PR2 and real-time results are presented. Initial results indicate that though the method is based on simplistic shape information, it outperforms other learning-based grasping algorithms that also work in clutter in terms of time-efficiency and accuracy.
1
0
0
0
0
0
18,668
Learning to Play with Intrinsically-Motivated Self-Aware Agents
Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a "world-model" network that learns to predict the dynamic consequences of the agent's actions. Simultaneously, we train a separate explicit "self-model" that allows the agent to track the error map of its own world-model, and then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in complex novel physical environments.
0
0
0
1
0
0
18,669
$\ell_1$-minimization method for link flow correction
A computational method, based on $\ell_1$-minimization, is proposed for the problem of link flow correction, when the available traffic flow data on many links in a road network are inconsistent with respect to the flow conservation law. Without extra information, the problem is generally ill-posed when a large portion of the link sensors are unhealthy. It is possible, however, to correct the corrupted link flows \textit{accurately} with the proposed method under a recoverability condition if there are only a few bad sensors which are located at certain links. We analytically identify the links that are robust to miscounts and relate them to the geometric structure of the traffic network by introducing the recoverability concept and an algorithm for computing it. The recoverability condition for corrupted links is simply that the associated recoverability be greater than 1. In a more realistic setting, besides the unhealthy link sensors, small measurement noises may be present at the other sensors. Under the same recoverability condition, our method is guaranteed to give an estimated traffic flow fairly close to the ground-truth data and leads to a bound for the correction error. Both synthetic and real-world examples are provided to demonstrate the effectiveness of the proposed method.
1
1
0
0
0
0
18,670
Current Flow Group Closeness Centrality for Complex Networks
Current flow closeness centrality (CFCC) has a better discriminating ability than the ordinary closeness centrality based on shortest paths. In this paper, we extend this notion to a group of vertices in a weighted graph, and then study the problem of finding a subset $S$ of $k$ vertices to maximize its CFCC $C(S)$, both theoretically and experimentally. We show that the problem is NP-hard, but propose two greedy algorithms for minimizing the reciprocal of $C(S)$ with provable guarantees using monotonicity and supermodularity. The first is a deterministic algorithm with an approximation factor $(1-\frac{k}{k-1}\cdot\frac{1}{e})$ and cubic running time; while the second is a randomized algorithm with a $(1-\frac{k}{k-1}\cdot\frac{1}{e}-\epsilon)$-approximation and nearly-linear running time for any $\epsilon > 0$. Extensive experiments on model and real networks demonstrate that our algorithms are effective and efficient, with the second algorithm being scalable to massive networks with more than a million vertices.
1
0
0
0
0
0
18,671
Visualization of Constraint Handling Rules: Semantics and Applications
The work in the paper presents an animation extension ($CHR^{vis}$) to Constraint Handling Rules (CHR). Visualizations have always helped programmers understand data and debug programs. A picture is worth a thousand words. It can help identify where a problem is or show how something works. It can even illustrate a relation that was not clear otherwise. $CHR^{vis}$ aims at embedding animation and visualization features into CHR programs. It thus enables users, while executing programs, to have such executions animated. The paper aims at providing the operational semantics for $CHR^{vis}$. The correctness of $CHR^{vis}$ programs is also discussed. Some applications of the new extension are also introduced.
1
0
0
0
0
0
18,672
Simplified Energy Landscape for Modularity Using Total Variation
Networks capture pairwise interactions between entities and are frequently used in applications such as social networks, food networks, and protein interaction networks, to name a few. Communities, cohesive groups of nodes, often form in these applications, and identifying them gives insight into the overall organization of the network. One common quality function used to identify community structure is modularity. In Hu et al. [SIAM J. App. Math., 73(6), 2013], it was shown that modularity optimization is equivalent to minimizing a particular nonconvex total variation (TV) based functional over a discrete domain. They solve this problem, assuming the number of communities is known, using a Merriman-Bence-Osher (MBO) scheme. We show that modularity optimization is equivalent to minimizing a convex TV-based functional over a discrete domain, again assuming the number of communities is known. Furthermore, we show that modularity has no convex relaxation satisfying certain natural conditions. We therefore find a manageable non-convex approximation using a Ginzburg-Landau functional, which provably converges to the correct energy in the limit of a certain parameter. We then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et al. and which is 7 times faster at solving the associated diffusion equation due to the fact that the underlying discretization is unconditionally stable. Our numerical tests include a hyperspectral video whose associated graph has $2.9\times 10^{7}$ edges, which is roughly 37 times larger than was handled in the paper of Hu et al.
0
0
1
1
0
0
18,673
A Hard Look at the Neutron Stars and Accretion Disks in 4U 1636-53, GX 17+2, and 4U 1705-44 with $\emph{NuSTAR}$
We present $\emph{NuSTAR}$ observations of neutron star (NS) low-mass X-ray binaries: 4U 1636-53, GX 17+2, and 4U 1705-44. We observed 4U 1636-53 in the hard state, with an Eddington fraction, $F_{\mathrm{Edd}}$, of 0.01; GX 17+2 and 4U 1705-44 were in the soft state with fractions of 0.57 and 0.10, respectively. Each spectrum shows evidence for a relativistically broadened Fe K$_{\alpha}$ line. Through accretion disk reflection modeling, we constrain the radius of the inner disk in 4U 1636-53 to be $R_{in}=1.03\pm0.03$ ISCO (innermost stable circular orbit) assuming a dimensionless spin parameter $a_{*}=cJ/GM^{2}=0.0$, and $R_{in}=1.08\pm0.06$ ISCO for $a_{*}=0.3$ (errors quoted at 1 $\sigma$). This value proves to be model independent. For $a_{*}=0.3$ and $M=1.4\ M_{\odot}$, for example, $1.08\pm0.06$ ISCO translates to a physical radius of $R=10.8\pm0.6$ km, and the neutron star would have to be smaller than this radius (other outcomes are possible for allowed spin parameters and masses). For GX 17+2, $R_{in}=1.00-1.04$ ISCO for $a_{*}=0.0$ and $R_{in}=1.03-1.30$ ISCO for $a_{*}=0.3$. For $a_{*}=0.3$ and $M=1.4\ M_{\odot}$, $R_{in}=1.03-1.30$ ISCO translates to $R=10.3-13.0$ km. The inner accretion disk in 4U 1705-44 may be truncated just above the stellar surface, perhaps by a boundary layer or magnetosphere; reflection models give a radius of 1.46-1.64 ISCO for $a_{*}=0.0$ and 1.69-1.93 ISCO for $a_{*}=0.3$. We discuss the implications that our results may have on the equation of state of ultradense, cold matter and our understanding of the innermost accretion flow onto neutron stars with low surface magnetic fields, and systematic errors related to the reflection models and spacetime metric around less idealized neutron stars.
0
1
0
0
0
0
18,674
Approximating Weighted Duo-Preservation in Comparative Genomics
Motivated by comparative genomics, Chen et al. [9] introduced the Maximum Duo-preservation String Mapping (MDSM) problem, in which we are given two strings $s_1$ and $s_2$ over the same alphabet and the goal is to find a mapping $\pi$ between them so as to maximize the number of duos preserved. A duo is any two consecutive characters in a string, and it is preserved in the mapping if its two consecutive characters in $s_1$ are mapped to the same two consecutive characters in $s_2$. The MDSM problem is known to be NP-hard and there are approximation algorithms for this problem [3, 5, 13], but all of them consider only the "unweighted" version of the problem in the sense that a duo from $s_1$ is preserved by mapping to any same duo in $s_2$ regardless of their positions in the respective strings. However, it is highly desirable in comparative genomics to find mappings that preserve duos that are "closer" to each other under some distance measure [19]. In this paper, we introduce a generalized version of the problem, called the Maximum-Weight Duo-preservation String Mapping (MWDSM) problem, that captures both duo-preservation and duo-distance measures in the sense that mapping a duo from $s_1$ to each preserved duo in $s_2$ has a weight, indicating the "closeness" of the two duos. The objective of the MWDSM problem is to find a mapping so as to maximize the total weight of preserved duos. We give a polynomial-time 6-approximation algorithm for this problem.
1
0
0
0
0
0
18,675
On the virtual singular braid monoid
We study the algebraic structures of the virtual singular braid monoid, $VSB_n$, and the virtual singular pure braid monoid, $VSP_n$. The monoid $VSB_n$ is the splittable extension of $VSP_n$ by the symmetric group $S_n$. We also construct a representation of $VSB_n$.
0
0
1
0
0
0
18,676
Discrete Games in Endogenous Networks: Equilibria and Policy
In games of friendship links and behaviors, I propose $k$-player Nash stability---a family of equilibria, indexed by a measure of robustness given by the number of permitted link changes, which is (ordinally and cardinally) ranked in a probabilistic sense. Application of the proposed framework to adolescents' tobacco smoking and friendship decisions suggests that: (a.) friendship networks respond to increases of tobacco prices and this response amplifies the intended policy effect on smoking, (b.) racially desegregating high-schools, via stimulating the social interactions of students with different intrinsic propensity to smoke, decreases the overall smoking prevalence, (c.) adolescents are averse to sharing friends so that there is a rivalry for friendships, (d.) when data on individuals' friendship network is not available, the importance of price centered policy tools is underestimated.
1
1
0
0
0
0
18,677
Evaluating regulatory reform of network industries: a survey of empirical models based on categorical proxies
Proxies for regulatory reforms based on categorical variables are increasingly used in empirical evaluation models. We surveyed 63 studies that rely on such indices to analyze the effects of entry liberalization, privatization, unbundling, and independent regulation of the electricity, natural gas, and telecommunications sectors. We highlight methodological issues related to the use of these proxies. Next, taking stock of the literature, we provide practical advice for the design of the empirical strategy and discuss the selection of control and instrumental variables to attenuate endogeneity problems undermining identification of the effects of regulatory reforms.
0
0
0
0
0
1
18,678
Theory of ground states for classical Heisenberg spin systems II
We apply the previously published theory of ground states for classical, finite Heisenberg spin systems to a couple of spin systems that can be considered as finite models $K_{12},\,K_{15}$ and $K_{18}$ of the AF Kagome lattice. The model $K_{12}$ is isomorphic to the cuboctahedron. In particular, we find three-dimensional ground states that cannot be viewed as resulting from the well-known independent rotation of subsets of spin vectors. For a couple of ground states with translational symmetry we calculate the corresponding wave numbers. Finally, we study the model $K_{12w}$ without boundary conditions, which exhibits new phenomena such as, e.g., two-dimensional families of three-dimensional ground states.
0
1
0
0
0
0
18,679
Relative Property (T) for Nilpotent Subgroups
We show that relative Property (T) for the abelianization of a nilpotent normal subgroup implies relative Property (T) for the subgroup itself. This and other results are a consequence of a theorem of independent interest, which states that if $H$ is a closed subgroup of a locally compact group $G$, and $A$ is a closed subgroup of the center of $H$, such that $A$ is normal in $G$, and $(G/A, H/A)$ has relative Property (T), then $(G, H^{(1)})$ has relative Property (T), where $H^{(1)}$ is the closure of the commutator subgroup of $H$. In fact, the assumption that $A$ is in the center of $H$ can be replaced with the weaker assumption that $A$ is abelian and every $H$-invariant finite measure on the unitary dual of $A$ is supported on the set of fixed points.
0
0
1
0
0
0
18,680
Discriminant analysis in small and large dimensions
We study the distributional properties of the linear discriminant function under the assumption of normality by comparing two groups with the same covariance matrix but different mean vectors. A stochastic representation for the discriminant function coefficients is derived, which is then used to obtain their asymptotic distribution under the high-dimensional asymptotic regime. We investigate the performance of the classification analysis based on the discriminant function in both small and large dimensions. A stochastic representation is established which allows us to compute the error rate in an efficient way. We further compare the calculated error rate with the optimal one obtained under the assumption that the covariance matrix and the two mean vectors are known. Finally, we present an analytical expression for the error rate calculated in the high-dimensional asymptotic regime. The finite-sample properties of the derived theoretical results are assessed via an extensive Monte Carlo study.
0
0
1
1
0
0
18,681
Sparse Bayesian Inference for Dense Semantic Mapping
Despite impressive advances in simultaneous localization and mapping, dense robotic mapping remains challenging due to its inherent nature of being a high-dimensional inference problem. In this paper, we propose a dense semantic robotic mapping technique that exploits sparse Bayesian models, in particular, the relevance vector machine, for high-dimensional sequential inference. The technique is based on the principle of automatic relevance determination and produces sparse models that use a small subset of the original dense training set as the dominant basis. The resulting map posterior is continuous, and queries can be made efficiently at any resolution. Moreover, the technique has probabilistic outputs per semantic class through Bayesian inference. We evaluate the proposed relevance vector semantic map using publicly available benchmark datasets, NYU Depth V2 and KITTI; and the results show promising improvements over the state-of-the-art techniques.
1
0
0
0
0
0
18,682
Singularities and Semistable Degenerations for Symplectic Topology
We overview our recent work defining and studying normal crossings varieties and subvarieties in symplectic topology. This work answers a question of Gromov on the feasibility of introducing singular (sub)varieties into symplectic topology in the case of normal crossings singularities. It also provides a necessary and sufficient condition for smoothing normal crossings symplectic varieties. In addition, we explain some connections with other areas of mathematics and discuss a few directions for further research.
0
0
1
0
0
0
18,683
One-step Local M-estimator for Integrated Jump-Diffusion Models
In this paper, robust nonparametric estimators, instead of local linear estimators, are adapted for the infinitesimal coefficients associated with integrated jump-diffusion models to avoid the impact of outliers on accuracy. Furthermore, considering the complexity of iterating the solution of the local M-estimator, we propose one-step local M-estimators to relieve the computational burden. Under appropriate regularity conditions, we prove that the one-step local M-estimators and the fully iterative M-estimators have the same performance in consistency and asymptotic normality. Through simulation, our method presents advantages in bias reduction, robustness and computational cost. In addition, the estimators are illustrated empirically with stock index data under different sampling frequencies.
0
0
1
1
0
0
18,684
Python Open Source Waveform Extractor (POWER): An open source, Python package to monitor and post-process numerical relativity simulations
Numerical simulations of Einstein's field equations provide unique insights into the physics of compact objects moving at relativistic speeds and driven by strong gravitational interactions. Numerical relativity has played a key role in firmly establishing gravitational wave astrophysics as a new field of research, and it is now paving the way to establish whether gravitational wave radiation emitted from compact binary mergers is accompanied by electromagnetic and astro-particle counterparts. As numerical relativity continues to blend in with routine gravitational wave data analyses to validate the discovery of gravitational wave events, it is essential to develop open source tools to streamline these studies. Motivated by our own experience as users and developers of the open source community software the Einstein Toolkit, we present an open source Python package that is ideally suited to monitor and post-process the data products of numerical relativity simulations, and compute the gravitational wave strain at future null infinity in high performance environments. We showcase the application of this new package to post-process a large numerical relativity catalog and extract higher-order waveform modes from numerical relativity simulations of eccentric binary black hole mergers and neutron star mergers. This new software fills a critical void in the arsenal of tools provided by the Einstein Toolkit Consortium to the numerical relativity community.
1
1
0
0
0
0
18,685
Uniform confidence bands for nonparametric errors-in-variables regression
This paper develops a method to construct uniform confidence bands for a nonparametric regression function where a predictor variable is subject to a measurement error. We allow for the distribution of the measurement error to be unknown, but assume that there is an independent sample from the measurement error distribution. The sample from the measurement error distribution need not be independent from the sample on response and predictor variables. The availability of a sample from the measurement error distribution is satisfied if, for example, either 1) validation data or 2) repeated measurements (panel data) on the latent predictor variable with measurement errors, one of which is symmetrically distributed, are available. The proposed confidence band builds on the deconvolution kernel estimation and a novel application of the multiplier (or wild) bootstrap method. We establish asymptotic validity of the proposed confidence band under ordinary smooth measurement error densities, showing that the proposed confidence band contains the true regression function with probability approaching the nominal coverage probability. To the best of our knowledge, this is the first paper to derive asymptotically valid uniform confidence bands for nonparametric errors-in-variables regression. We also propose a novel data-driven method to choose a bandwidth, and conduct simulation studies to verify the finite sample performance of the proposed confidence band. Applying our method to a combination of two empirical data sets, we draw confidence bands for nonparametric regressions of medical costs on the body mass index (BMI), accounting for measurement errors in BMI. Finally, we discuss extensions of our results to specification testing, cases with additional error-free regressors, and confidence bands for conditional distribution functions.
0
0
1
1
0
0
18,686
Unconventional superconductivity in the BiS$_2$-based layered superconductor NdO$_{0.71}$F$_{0.29}$BiS$_2$
We investigate the superconducting-gap anisotropy in one of the recently discovered BiS$_2$-based superconductors, NdO$_{0.71}$F$_{0.29}$BiS$_2$ ($T_c$ $\sim$ 5 K), using laser-based angle-resolved photoemission spectroscopy. Whereas the previously discovered high-$T_c$ superconductors such as copper oxides and iron-based superconductors, which are believed to have unconventional superconducting mechanisms, have $3d$ electrons in their conduction bands, the conduction band of BiS$_2$-based superconductors mainly consists of Bi 6$p$ electrons, and hence a conventional superconducting mechanism might be expected. Contrary to this expectation, we observe a strongly anisotropic superconducting gap. This result strongly suggests that the pairing mechanism for NdO$_{0.71}$F$_{0.29}$BiS$_2$ is an unconventional one, and we attribute the observed anisotropy to competitive or cooperative multiple pairing interactions.
0
1
0
0
0
0
18,687
Low noise sensitivity analysis of Lq-minimization in oversampled systems
The class of $L_q$-regularized least squares (LQLS) estimators is considered for estimating a $p$-dimensional vector $\beta$ from its $n$ noisy linear observations $y = X\beta + w$. The performance of these schemes is studied under the high-dimensional asymptotic setting in which $p$ grows linearly with $n$. In this asymptotic setting, phase transition (PT) diagrams are often used for comparing the performance of different estimators. Although phase transition analysis is shown to provide useful information for compressed sensing, the fact that it ignores the measurement noise not only limits its applicability in many application areas, but also may lead to misunderstandings. For instance, consider a linear regression problem in which $n > p$ and the signal is not exactly sparse. If the measurement noise is ignored in such systems, regularization techniques, such as LQLS, seem to be irrelevant since even ordinary least squares (OLS) returns the exact solution. However, it is well known that if $n$ is not much larger than $p$ then regularization techniques improve the performance of OLS. In response to this limitation of PT analysis, we consider low-noise sensitivity analysis. We show that this analysis framework (i) reveals the advantage of LQLS over OLS, (ii) captures the difference between different LQLS estimators even when $n > p$, and (iii) provides a fair comparison among different estimators at high signal-to-noise ratios. As an application of this framework, we show that under mild conditions LASSO outperforms other LQLS estimators even when the signal is dense. Finally, by a simple transformation we connect our low-noise sensitivity framework to the classical asymptotic regime in which $n/p$ goes to infinity, and characterize how and when regularization techniques offer improvements over ordinary least squares, and which regularizer gives the most improvement when the sample size is large.
0
0
1
1
0
0
18,688
Robust Transceiver Design Based on Interference Alignment for Multi-User Multi-Cell MIMO Networks with Channel Uncertainty
In this paper, we first exploit the inter-user interference (IUI) and inter-cell interference (ICI) as useful references to develop a robust transceiver design based on interference alignment for a downlink multi-user multi-cell multiple-input multiple-output (MIMO) interference network under channel estimation error. At the transmitters, we propose a two-tier transmit beamforming strategy: we first obtain the inner beamforming direction and the allocated power by minimizing the interference leakage and maximizing the system energy efficiency, respectively. Then, for the outer beamformer design, we develop an efficient conjugate gradient Grassmann manifold subspace tracking algorithm to minimize the distances between the subspace spanned by interference and the interference subspace in the time-varying channel. At the receivers, we propose a practical interference alignment scheme based on the fast and robust fast data projection method (FDPM) subspace tracking algorithm, to achieve the receive beamformer under channel uncertainty. Numerical results show that our proposed robust transceiver design achieves better performance compared with some existing methods in terms of the sum rate and the energy efficiency.
1
0
0
0
0
0
18,689
Machine learning quantum mechanics: solving quantum mechanics problems using radial basis function networks
Inspired by the recent work of Carleo and Troyer [1], we apply machine learning methods to quantum mechanics in this article. The radial basis function network in a discrete basis is used as the variational wavefunction for the ground state of a quantum system. Variational Monte Carlo (VMC) calculations are carried out for some simple Hamiltonians. The results are in good agreement with theoretical values. The smallest eigenvalue of a Hermitian matrix can also be acquired using VMC calculations. Our results demonstrate that machine learning techniques are capable of solving quantum mechanical problems.
0
1
0
0
0
0
18,690
Scale-free networks are rare
A central claim in modern network science is that real-world networks are typically "scale free," meaning that the fraction of nodes with degree $k$ follows a power law, decaying like $k^{-\alpha}$, often with $2 < \alpha < 3$. However, empirical evidence for this belief derives from a relatively small number of real-world networks. We test the universality of scale-free structure by applying state-of-the-art statistical tools to a large corpus of nearly 1000 network data sets drawn from social, biological, technological, and informational sources. We fit the power-law model to each degree distribution, test its statistical plausibility, and compare it via a likelihood ratio test to alternative, non-scale-free models, e.g., the log-normal. Across domains, we find that scale-free networks are rare, with only 4% exhibiting the strongest-possible evidence of scale-free structure and 52% exhibiting the weakest-possible evidence. Furthermore, evidence of scale-free structure is not uniformly distributed across sources: social networks are at best weakly scale free, while a handful of technological and biological networks can be called strongly scale free. These results undermine the universality of scale-free networks and reveal that real-world networks exhibit a rich structural diversity that will likely require new ideas and mechanisms to explain.
1
0
0
1
1
0
18,691
Enlargeability, foliations, and positive scalar curvature
We extend the deep and important results of Lichnerowicz, Connes, and Gromov-Lawson which relate geometry and characteristic numbers to the existence and non-existence of metrics of positive scalar curvature (PSC). In particular, we show: that a spin foliation with Hausdorff homotopy groupoid of an enlargeable manifold admits no PSC metric; that any metric of PSC on such a foliation is bounded by a multiple of the reciprocal of the foliation K-area of the ambient manifold; and that Connes' vanishing theorem for characteristic numbers of PSC foliations extends to a vanishing theorem for Haefliger cohomology classes.
0
0
1
0
0
0
18,692
A simple mathematical model for unemployment: a case study in Portugal with optimal control
We propose a simple mathematical model for unemployment. Despite its simplicity, we claim that the model is more realistic and useful than recent models available in the literature. A case study with real data from Portugal supports our claim. An optimal control problem is formulated and solved, which provides some non-trivial and interesting conclusions.
0
0
0
0
0
1
18,693
Interpreting Deep Neural Networks Through Variable Importance
While the success of deep neural networks (DNNs) is well-established across a variety of domains, our ability to explain and interpret these methods is limited. Unlike previously proposed local methods which try to explain particular classification decisions, we focus on global interpretability and ask a universally applicable question: given a trained model, which features are the most important? In the context of neural networks, a feature is rarely important on its own, so our strategy is specifically designed to leverage partial covariance structures and incorporate variable dependence into feature ranking. Our methodological contributions in this paper are two-fold. First, we propose an effect size analogue for DNNs that is appropriate for applications with highly collinear predictors (ubiquitous in computer vision). Second, we extend the recently proposed "RelATive cEntrality" (RATE) measure (Crawford et al., 2019) to the Bayesian deep learning setting. RATE applies an information theoretic criterion to the posterior distribution of effect sizes to assess feature significance. We apply our framework to three broad application areas: computer vision, natural language processing, and social science.
1
0
0
1
0
0
18,694
Conformality of $1/N$ corrections in SYK-like models
The Sachdev-Ye-Kitaev model is a quantum mechanical model of $N$ Majorana fermions which displays a number of appealing features -- solvability in the strong coupling regime, near-conformal invariance and maximal chaos -- which make it a suitable model for black holes in the context of the AdS/CFT holography. In this paper we show for the colored SYK model and several of its tensor model cousins that the next-to-leading order in the $1/N$ expansion preserves the conformal invariance of the $2$-point function in the strong coupling regime, up to the contribution of the Goldstone bosons associated with the spontaneous breaking of the symmetry, which are already seen in the leading order $4$-point function. We also comment on the composite field approach for computing correlation functions in colored tensor models.
0
1
0
0
0
0
18,695
Segal-type models of higher categories
Higher category theory is an exceedingly active area of research, whose rapid growth has been driven by its penetration into a diverse range of scientific fields. Its influence extends through key mathematical disciplines, notably homotopy theory, algebraic geometry and algebra, mathematical physics, to encompass important applications in logic, computer science and beyond. Higher categories provide a unifying language whose greatest strength lies in its ability to bridge between diverse areas and uncover novel applications. In this foundational work we introduce a new approach to higher categories. It builds upon the theory of iterated internal categories, one of the simplest possible higher categorical structures available, by adopting a novel and remarkably simple "weak globularity" postulate and demonstrating that the resulting model provides a fully general theory of weak n-categories. The latter are among the most complex of the higher structures, and are crucial for applications. We show that this new model of "weakly globular n-fold categories" is suitably equivalent to the well studied model of weak n-categories due to Tamsamani and Simpson.
0
0
1
0
0
0
18,696
Low-Mass Dark Matter Search with CDMSlite
The SuperCDMS experiment is designed to directly detect weakly interacting massive particles (WIMPs) that may constitute the dark matter in our Galaxy. During its operation at the Soudan Underground Laboratory, germanium detectors were run in the CDMSlite mode to gather data sets with sensitivity specifically for WIMPs with masses ${<}$10 GeV/$c^2$. In this mode, a higher detector-bias voltage is applied to amplify the phonon signals produced by drifting charges. This paper presents studies of the experimental noise and its effect on the achievable energy threshold, which is demonstrated to be as low as 56 eV$_{\text{ee}}$ (electron equivalent energy). The detector-biasing configuration is described in detail, with analysis corrections for voltage variations at the level of a few percent. Detailed studies of the electric-field geometry, and the resulting successful development of a fiducial parameter, eliminate poorly measured events, yielding an energy resolution ranging from ${\sim}$9 eV$_{\text{ee}}$ at 0 keV to 101 eV$_{\text{ee}}$ at ${\sim}$10 keV$_{\text{ee}}$. New results are derived for astrophysical uncertainties relevant to the WIMP-search limits, specifically examining how they are affected by variations in the most probable WIMP velocity and the Galactic escape velocity. These variations become more important for WIMP masses below 10 GeV/$c^2$. Finally, new limits on spin-dependent low-mass WIMP-nucleon interactions are derived, with new parameter space excluded for WIMP masses $\lesssim$3 GeV/$c^2$.
0
1
0
0
0
0
18,697
Gravity with free initial conditions: a solution to the cosmological constant problem testable by CMB B-mode polarization
In standard general relativity the universe cannot be started with arbitrary initial conditions, because four of the ten components of Einstein's field equations (EFE) are constraints on initial conditions. In previous work it was proposed to extend the gravity theory to allow free initial conditions, with a motivation to solve the cosmological constant problem. This was done by setting four constraints on metric variations in the action principle, which is reasonable because gravity's physical degrees of freedom are at most six. However, there are two problems with this theory: the three constraints in addition to the unimodular condition were introduced without clear physical meanings, and the flat Minkowski spacetime is unstable against perturbations. Here a new set of gravitational field equations is derived by replacing the three constraints with new ones requiring that geodesic paths remain geodesic against metric variations. The instability problem is then naturally solved. Implications for the cosmological constant $\Lambda$ are unchanged; the theory converges to EFE with nonzero $\Lambda$ through inflation, but $\Lambda$ varies on scales much larger than the present Hubble horizon. Then galaxies are formed only in small-$\Lambda$ regions, and the cosmological constant problem is solved by the anthropic argument. Because of the increased degrees of freedom in metric dynamics, the theory predicts new non-oscillatory modes of metric anisotropy generated by quantum fluctuations during inflation, and CMB B-mode polarization would be observed differently from the standard predictions of general relativity.
0
1
0
0
0
0
18,698
Gradual Learning of Recurrent Neural Networks
Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence modeling tasks. However, RNNs are difficult to train and tend to suffer from overfitting. Motivated by the Data Processing Inequality (DPI), we formulate the multi-layered network as a Markov chain, introducing a training method that comprises training the network gradually and using layer-wise gradient clipping. We found that applying our methods, combined with previously introduced regularization and optimization methods, resulted in improvements in state-of-the-art architectures operating in language modeling tasks.
1
0
0
1
0
0
18,699
Borel subsets of the real line and continuous reducibility
We study classes of Borel subsets of the real line $\mathbb{R}$ such as levels of the Borel hierarchy and the class of sets that are reducible to the set $\mathbb{Q}$ of rationals, endowed with the Wadge quasi-order of reducibility with respect to continuous functions on $\mathbb{R}$. Notably, we explore several structural properties of Borel subsets of $\mathbb{R}$ that diverge from those of Polish spaces with dimension zero. Our first main result is on the existence of embeddings of several posets into the restriction of this quasi-order to any Borel class that is strictly above the classes of open and closed sets, for instance the linear order $\omega_1$, its reverse $\omega_1^\star$ and the poset $\mathcal{P}(\omega)/\mathsf{fin}$ of inclusion modulo finite error. As a consequence of its proof, it is shown that there are no complete sets for these classes. We further extend the previous theorem to targets that are reducible to $\mathbb{Q}$. These non-structure results motivate the study of further restrictions of the Wadge quasi-order. In our second main theorem, we introduce a combinatorial property that is shown to characterize those $F_\sigma$ sets that are reducible to $\mathbb{Q}$. This is applied to construct a minimal set below $\mathbb{Q}$ and prove its uniqueness up to Wadge equivalence. We finally prove several results concerning gaps and cardinal characteristics of the Wadge quasi-order and thereby answer questions of Brendle and Geschke.
0
0
1
0
0
0
18,700
Pairing from dynamically screened Coulomb repulsion in bismuth
Recently, Prakash et al. have discovered bulk superconductivity in single crystals of bismuth, which is a semimetal with extremely low carrier density. At such low density, we argue that conventional electron-phonon coupling is too weak to be responsible for the binding of electrons into Cooper pairs. We study a dynamically screened Coulomb interaction with effective attraction generated on the scale of the collective plasma modes. We model the electronic states in bismuth to include three Dirac pockets with high velocity and one hole pocket with a significantly smaller velocity. We find a weak coupling instability, which is greatly enhanced by the presence of the hole pocket. Therefore, we argue that bismuth is the first material to exhibit superconductivity driven by retardation effects of Coulomb repulsion alone. By using realistic parameters for bismuth we find that the acoustic plasma mode does not play the central role in pairing. We also discuss a matrix element effect, resulting from the Dirac nature of the conduction band, which may affect $T_c$ in the $s$-wave channel without breaking time-reversal symmetry.
0
1
0
0
0
0