Dataset schema (one row per paper; six binary topic labels):
- ID: int64, values 1 to 21k
- TITLE: string, 7 to 239 characters
- ABSTRACT: string, 7 to 2.76k characters
- Computer Science: int64, 0 or 1
- Physics: int64, 0 or 1
- Mathematics: int64, 0 or 1
- Statistics: int64, 0 or 1
- Quantitative Biology: int64, 0 or 1
- Quantitative Finance: int64, 0 or 1
18,001
Separation of the charge density wave and superconducting states by an intermediate semimetal phase in pressurized TaTe2
In layered transition metal dichalcogenides (LTMDCs) that display both charge density waves (CDWs) and superconductivity, the superconducting state generally emerges directly on suppression of the CDW state. Here, however, we report a different observation for pressurized TaTe2, a non-superconducting CDW-bearing LTMDC at ambient pressure. We find that a superconducting state does not occur in TaTe2 after the full suppression of its CDW state, which we observe at about 3 GPa; rather, a non-superconducting semimetal state is observed. At a higher pressure, ~21 GPa, where both the semimetal state and the corresponding positive magnetoresistance effect are destroyed, superconductivity finally emerges and remains present up to ~50 GPa, the high-pressure limit of our measurements. Our pressure-temperature phase diagram for TaTe2 demonstrates that the CDW and superconducting phases do not transform directly into one another but are instead separated by a semimetal state: the first experimental case in LTMDC systems where the CDW and superconducting states are separated by an intermediate phase.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,002
Zero distribution for Angelesco Hermite--Padé polynomials
We consider the problem of zero distribution of the first kind Hermite--Padé polynomials associated with a vector function $\vec f = (f_1, \dots, f_s)$ whose components $f_k$ are functions with a finite number of branch points in the plane. We assume that the branch sets of the component functions are sufficiently well separated (which constitutes the Angelesco case). Under this condition we prove a theorem on the limit zero distribution of such polynomials. The limit measures are defined in terms of a known vector equilibrium problem. The proof of the theorem is based on methods developed by H.~Stahl, A.~A.~Gonchar and the author. These methods are further generalized here and applied to systems of polynomials defined by systems of complex orthogonality relations. Together with the characterization of the limit zero distributions of Hermite--Padé polynomials by a vector equilibrium problem, we consider an alternative characterization using a Riemann surface $\mathcal R(\vec f)$ associated with $\vec f$. In these terms we present a more general conjecture (without the Angelesco condition) on the zero distribution of Hermite--Padé polynomials. Bibliography: 72 items.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,003
Atomic-scale identification of novel planar defect phases in heteroepitaxial YBa$_2$Cu$_3$O$_{7-δ}$ thin films
We have discovered two novel types of planar defects that appear in heteroepitaxial YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO123) thin films, grown by pulsed-laser deposition (PLD) either with or without a La$_{2/3}$Ca$_{1/3}$MnO$_3$ (LCMO) overlayer, using the combination of high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) imaging and electron energy loss spectroscopy (EELS) mapping for unambiguous identification. These planar lattice defects are based on the intergrowth of either a BaO plane between two CuO chains or multiple Y-O layers between two CuO$_2$ planes, resulting in non-stoichiometric layer sequences that could directly impact the high-$T_c$ superconductivity.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,004
Self-consistent assessment of Englert-Schwinger model on atomic properties
Our manuscript investigates a self-consistent solution of the statistical atom model proposed by Berthold-Georg Englert and Julian Schwinger (the ES model) and benchmarks it against atomic Kohn-Sham calculations and two orbital-free models of the Thomas-Fermi-Dirac (TFD)-$\lambda$vW family. Results show that the ES model generally offers the same accuracy as the well-known TFD-$\frac{1}{5}$vW model; however, the ES model corrects the failure of the Pauli potential in the near-nucleus region. We also point to the inability to describe low-$Z$ atoms as the foremost concern in improving the present model.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,005
Target volatility option pricing in lognormal fractional SABR model
We examine in this article the pricing of target volatility options in the lognormal fractional SABR model. A decomposition formula obtained by Ito's calculus yields a theoretical replicating strategy for the target volatility option, assuming the accessibility of all variance swaps and swaptions. The same formula also suggests an approximation formula for the price of a target volatility option at small times, by the technique of freezing the coefficient. Alternatively, we derive closed-form expressions for a small volatility-of-volatility expansion of the target volatility option price. Numerical experiments show the accuracy of the approximations over a reasonably wide range of parameters.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=1
18,006
Robust Blind Deconvolution via Mirror Descent
We revisit the Blind Deconvolution problem with a focus on understanding its robustness and convergence properties. Provable robustness to noise and other perturbations is receiving recent interest in vision, from obtaining immunity to adversarial attacks to assessing and describing failure modes of algorithms in mission-critical applications. Further, many blind deconvolution methods based on deep architectures internally make use of or optimize the basic formulation, so a clearer understanding of how this sub-module behaves, when it can be solved, and what noise injection it can tolerate is a first-order requirement. We derive new insights into the theoretical underpinnings of blind deconvolution. The algorithm that emerges has strong convergence guarantees and is provably robust in a sense we formalize in the paper. Interestingly, these technical results play out very well in practice, where on standard datasets our algorithm yields results competitive with or superior to the state of the art. Keywords: blind deconvolution, robust continuous optimization
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
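The abstract above names mirror descent but does not spell out the algorithm. As a hedged illustration of the mirror descent machinery such methods build on, here is a minimal entropic mirror descent (exponentiated gradient) loop minimizing a toy convex quadratic over the probability simplex; the objective, step size, and iteration count are assumptions for the sketch, not the paper's blind deconvolution formulation.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.5, steps=1000):
    """Mirror descent with the entropic mirror map (exponentiated
    gradient): x_{t+1} is proportional to x_t * exp(-eta * grad(x_t)),
    so every iterate stays on the probability simplex."""
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()
    return x

# Toy convex objective f(x) = 0.5 * ||x - c||^2; since c lies on the
# simplex, the constrained minimizer is c itself.
c = np.array([0.7, 0.2, 0.1])
grad = lambda x: x - c
x_star = entropic_mirror_descent(grad, np.ones(3) / 3)
```

The multiplicative update is what makes the method "mirror" rather than projected gradient descent: positivity and normalization are preserved for free, with no explicit projection step.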
18,007
One-Dimensional Symmetry Protected Topological Phases and their Transitions
We present a unified perspective on symmetry protected topological (SPT) phases in one dimension and address the open question of what characterizes their phase transitions. In the first part of this work we use symmetry as a guide to map various well-known fermionic and spin SPTs to a Kitaev chain with coupling of range $\alpha \in \mathbb Z$. This unified picture uncovers new properties of old models -- such as how the cluster state is the fixed-point limit of the Affleck-Kennedy-Lieb-Tasaki state in disguise -- and elucidates the connection between fermionic and bosonic phases -- with the Hubbard chain interpolating between four Kitaev chains and a spin chain in the Haldane phase. In the second part, we study the topological phase transitions between these models in the presence of interactions. This leads us to conjecture that the critical point between any SPT with $d$-dimensional edge modes and the trivial phase has a central charge $c \geq \log_2 d$. We analytically verify this for many known transitions. This agrees with the intuitive notion that the phase transition is described by a delocalized edge mode, and that the central charge of a conformal field theory is a measure of the gapless degrees of freedom.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,008
Proceedings Eighth International Symposium on Games, Automata, Logics and Formal Verification
This volume contains the proceedings of the Eighth International Symposium on Games, Automata, Logic and Formal Verification (GandALF 2017). The symposium took place in Roma, Italy, from the 20th to the 22nd of September 2017. The GandALF symposium was established by a group of Italian computer scientists interested in mathematical logic, automata theory, game theory, and their applications to the specification, design, and verification of complex systems. Its aim is to provide a forum where people from different areas, and possibly with different backgrounds, can fruitfully interact. GandALF has a truly international spirit, as witnessed by the composition of the program and steering committee and by the country distribution of the submitted papers.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,009
Valence Bonds in Random Quantum Magnets: Theory and Application to YbMgGaO4
We analyze the effect of quenched disorder on spin-1/2 quantum magnets in which magnetic frustration promotes the formation of local singlets. Our results include a theory for 2d valence-bond solids subject to weak bond randomness, as well as extensions to stronger disorder regimes where we make connections with quantum spin liquids. We find, on various lattices, that the destruction of a valence-bond solid phase by weak quenched disorder leads inevitably to the nucleation of topological defects carrying spin-1/2 moments. This renormalizes the lattice into a strongly random spin network with interesting low-energy excitations. Similarly when short-ranged valence bonds would be pinned by stronger disorder, we find that this putative glass is unstable to defects that carry spin-1/2 magnetic moments, and whose residual interactions decide the ultimate low energy fate. Motivated by these results we conjecture Lieb-Schultz-Mattis-like restrictions on ground states for disordered magnets with spin-1/2 per statistical unit cell. These conjectures are supported by an argument for 1d spin chains. We apply insights from this study to the phenomenology of YbMgGaO$_4$, a recently discovered triangular lattice spin-1/2 insulator which was proposed to be a quantum spin liquid. We instead explore a description based on the present theory. Experimental signatures, including unusual specific heat, thermal conductivity, and dynamical structure factor, and their behavior in a magnetic field, are predicted from the theory, and compare favorably with existing measurements on YbMgGaO$_4$ and related materials.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,010
TuckER: Tensor Factorization for Knowledge Graph Completion
Knowledge graphs are structured representations of real-world facts. However, they typically contain only a small subset of all possible facts. Link prediction is the task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on the Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model and derive the bound on its entity and relation embedding dimensionality for full expressiveness, which is several orders of magnitude smaller than the bounds of the previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
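The TuckER scoring function is the Tucker decomposition named in the abstract: a shared core tensor contracted with the subject, relation, and object embeddings. A minimal numpy sketch (embedding sizes, random initialization, and the toy graph are assumptions; the contraction itself is the model's scoring form):

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_r = 4, 3              # entity / relation embedding dims (toy sizes)
n_ent, n_rel = 5, 2

E = rng.normal(size=(n_ent, d_e))      # entity embeddings
R = rng.normal(size=(n_rel, d_r))      # relation embeddings
W = rng.normal(size=(d_e, d_r, d_e))   # shared core tensor

def score(s, r, o):
    """TuckER score: core tensor W contracted with the subject
    embedding (mode 1), relation embedding (mode 2), and object
    embedding (mode 3)."""
    return np.einsum('i,j,k,ijk->', E[s], R[r], E[o], W)

# Score subject 0 with relation 1 against every candidate object,
# then squash with a per-triple sigmoid.
logits = np.array([score(0, 1, o) for o in range(n_ent)])
probs = 1 / (1 + np.exp(-logits))
```

Because the core tensor is shared across all relations, knowledge is pooled between them; relation-specific models such as DistMult fall out as special cases by constraining W.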
18,011
Taming Non-stationary Bandits: A Bayesian Approach
We consider the multi-armed bandit problem in non-stationary environments. Based on the Bayesian method, we propose a variant of Thompson Sampling which can be used in both rested and restless bandit scenarios. By applying discounting to the parameters of the prior distribution, we describe a way to systematically reduce the effect of past observations. Further, we derive the exact expression for the probability of picking sub-optimal arms. By increasing the exploitative value of Bayes' samples, we also provide an optimistic version of the algorithm. Extensive empirical analysis is conducted under various scenarios to validate the utility of the proposed algorithms. A comparison study with various state-of-the-art algorithms is also included.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
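The idea of discounting the prior's parameters can be sketched for Bernoulli bandits: multiply every arm's Beta posterior counts by a factor gamma before each update, so old observations fade and the sampler can track drifting reward probabilities. This is an illustrative sketch of the discounting mechanism, with arm means, gamma, and horizon chosen arbitrarily, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def discounted_thompson(means, gamma=0.95, horizon=2000):
    """Thompson Sampling for Bernoulli bandits with discounted Beta
    posteriors: each step samples a mean per arm from Beta(alpha, beta),
    pulls the argmax, discounts all counts by gamma, then adds the
    new observation."""
    k = len(means)
    alpha = np.ones(k)
    beta = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(alpha, beta)))
        reward = rng.random() < means[arm]
        alpha *= gamma          # discount every arm's counts...
        beta *= gamma
        alpha[arm] += reward    # ...then record the new observation
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = discounted_thompson([0.3, 0.7])
```

With gamma < 1 the effective memory is roughly 1/(1 - gamma) observations, which is what lets the algorithm stay responsive in restless settings at the cost of never fully concentrating its posterior.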
18,012
Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization
We propose a new randomized coordinate descent method for a convex optimization template with broad applications. Our analysis relies on a novel combination of four ideas applied to the primal-dual gap function: smoothing, acceleration, homotopy, and coordinate descent with non-uniform sampling. As a result, our method is the first coordinate descent method whose convergence rate guarantees are the best known under a variety of common structural assumptions on the template. We provide numerical evidence to support the theoretical results, with a comparison to state-of-the-art algorithms.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
18,013
Guarantees for Greedy Maximization of Non-submodular Functions with Applications
We investigate the performance of the standard Greedy algorithm for cardinality constrained maximization of non-submodular nondecreasing set functions. While there are strong theoretical guarantees on the performance of Greedy for maximizing submodular functions, there are few guarantees for non-submodular ones. However, Greedy enjoys strong empirical performance for many important non-submodular functions, e.g., the Bayesian A-optimality objective in experimental design. We prove theoretical guarantees supporting the empirical performance. Our guarantees are characterized by a combination of the (generalized) curvature $\alpha$ and the submodularity ratio $\gamma$. In particular, we prove that Greedy enjoys a tight approximation guarantee of $\frac{1}{\alpha}(1- e^{-\gamma\alpha})$ for cardinality constrained maximization. In addition, we bound the submodularity ratio and curvature for several important real-world objectives, including the Bayesian A-optimality objective, the determinantal function of a square submatrix and certain linear programs with combinatorial constraints. We experimentally validate our theoretical findings for both synthetic and real-world applications.
Labels: Computer Science=1, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
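The guarantee quoted in the abstract, $\frac{1}{\alpha}(1 - e^{-\gamma\alpha})$, is easy to evaluate directly; the snippet below checks that the submodular case ($\alpha = \gamma = 1$) recovers the classic $1 - 1/e$ bound and that a weaker submodularity ratio degrades the guarantee. The example values of alpha and gamma are arbitrary.

```python
import math

def greedy_guarantee(alpha, gamma):
    """Approximation guarantee (1/alpha) * (1 - exp(-gamma * alpha))
    for Greedy on cardinality-constrained maximization of a
    nondecreasing set function with generalized curvature alpha and
    submodularity ratio gamma, both in (0, 1]."""
    return (1.0 / alpha) * (1.0 - math.exp(-gamma * alpha))

# Submodular case (gamma = alpha = 1) recovers the classic 1 - 1/e.
classic = greedy_guarantee(1.0, 1.0)
# A weaker submodularity ratio gives a weaker guarantee.
weaker = greedy_guarantee(1.0, 0.5)
```
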
18,014
Affine Metrics and Associated Algebroid Structures: Application to General Relativity
In this paper, the algebroid bundle associated with affine metrics provides a structure for the unification of gravity and electromagnetism and for the geometrization of matter.
Labels: Computer Science=0, Physics=0, Mathematics=1, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,015
Controllability of temporal networks: An analysis using higher-order networks
The control of complex networks is a significant challenge, especially when the network topology of the system to be controlled is dynamic. Addressing this challenge, here we introduce a novel approach that allows exploring the controllability of temporal networks. Studying six empirical data sets, we show in particular that order correlations in the sequence of interactions can either increase or decrease the time needed to achieve full controllability. Counter-intuitively, we find that this effect can be opposite to the effect of order correlations on other dynamical processes. Specifically, we show that order correlations that speed up a diffusion process in a given system can slow down the control of the same system, and vice versa. Building on the higher-order graphical modeling framework introduced in recent works, we further demonstrate that spectral properties of higher-order network topologies can be used to analytically explain this phenomenon.
Labels: Computer Science=1, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,016
Mapping momentum-dependent electron-phonon coupling and non-equilibrium phonon dynamics with ultrafast electron diffuse scattering
Despite their fundamental role in determining material properties, detailed momentum-dependent information on the strength of electron-phonon and phonon-phonon coupling (EPC and PPC, respectively) across the entire Brillouin zone (BZ) has proved difficult to obtain. Here we demonstrate that ultrafast electron diffuse scattering (UEDS) directly provides such information. By exploiting symmetry-based selection rules and time-resolution, scattering from different phonon branches can be distinguished even without energy resolution. Using graphite as a model system, we show that UEDS patterns map the relative EPC and PPC strength through their profound sensitivity to photoinduced changes in phonon populations. We measure strong EPC to the $K$-point transverse optical phonon of $A_1'$ symmetry ($K-A_1'$) and along the entire longitudinal optical branch between $\Gamma-K$, not only to the $\Gamma-E_{2g}$ phonon as previously emphasized. We also determine that the subsequent phonon relaxation pathway involves three stages: decay via several identifiable channels to transverse acoustic (TA) and longitudinal acoustic (LA) phonons (1-2 ps), intraband thermalization of the non-equilibrium TA/LA phonon populations (30-40 ps), and interband relaxation of the LA/TA modes (115 ps). Combining UEDS with ultrafast angle-resolved photoelectron spectroscopy will yield a complete picture of the dynamics within and between electron and phonon subsystems, helping to unravel complex phases in which the intertwined nature of these systems has a strong influence on emergent properties.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,017
Meridional Circulation Dynamics in a Cyclic Convective Dynamo
Surface observations indicate that the speed of the solar meridional circulation in the photosphere varies in anti-phase with the solar cycle. The current explanation for the source of this variation is that inflows into active regions alter the global surface pattern of the meridional circulation. When these localized inflows are integrated over a full hemisphere, they contribute to the slowdown of the axisymmetric poleward horizontal component. The behavior of this large-scale flow deep inside the convection zone remains largely unknown. Present helioseismic techniques are not sensitive enough to capture the dynamics of this weak large-scale flow. Moreover, the long integration times needed to map the meridional circulation inside the convection zone also mask some of the possible dynamics on shorter timescales. In this work we examine the dynamics of the meridional circulation that emerges from a 3D MHD global simulation of the solar convection zone. Our aim is to assess and quantify the behavior of the meridional circulation deep inside the convection zone, where the cyclic large-scale magnetic field can reach considerable strength. Our analyses indicate that the meridional circulation morphology and amplitude are both highly influenced by the magnetic field, via the impact of magnetic torques on the global angular momentum distribution. A dynamic feature induced by these magnetic torques is the development of a prominent upward flow at mid latitudes in the lower convection zone that occurs near the equatorward edge of the toroidal bands and that peaks during cycle maximum. Globally, the dynamo-generated large-scale magnetic field drives variations in the meridional flow, in stark contrast to the conventional kinematic flux transport view of the magnetic field being advected passively by the flow.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,018
Bayesian analysis of 210Pb dating
In many studies of environmental change of the past few centuries, 210Pb dating is used to obtain chronologies for sedimentary sequences. One of the most commonly used approaches to estimate the ages of depths in a sequence is to assume a constant rate of supply (CRS) or influx of `unsupported' 210Pb from the atmosphere, together with a constant or varying amount of `supported' 210Pb. Current 210Pb dating models do not use a proper statistical framework and thus provide poor estimates of errors. Here we develop a new model for 210Pb dating, where both ages and values of supported and unsupported 210Pb form part of the parameters. We apply our model to a case study from Canada as well as to some simulated examples. Our model can extend beyond the current CRS approach, deal with asymmetric errors and mix 210Pb with other types of dating, thus obtaining more robust, realistic and statistically better defined estimates.
Labels: Computer Science=0, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
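The constant-rate-of-supply (CRS) model the abstract builds on has a closed-form age: if A(x) is the cumulative unsupported 210Pb inventory below depth x and A0 the total inventory, then t(x) = (1/lambda) * ln(A0 / A(x)) with lambda the 210Pb decay constant (half-life about 22.3 yr). A minimal sketch of that classic calculation, with made-up layer inventories (this is the baseline model, not the paper's new Bayesian one):

```python
import numpy as np

LAMBDA = np.log(2) / 22.3   # 210Pb decay constant, half-life ~22.3 yr

def crs_ages(layer_inventory):
    """CRS ages at the top of each layer of a sediment core.
    `layer_inventory` lists unsupported-210Pb inventory per layer,
    top to bottom; the age of each layer boundary follows
    t = (1/lambda) * ln(A0 / A(x)), where A(x) is the inventory
    below that boundary and A0 the total inventory."""
    inv = np.asarray(layer_inventory, dtype=float)
    below = inv[::-1].cumsum()[::-1]   # inventory below each boundary
    A0 = below[0]
    return np.log(A0 / below) / LAMBDA

# Hypothetical core: unsupported 210Pb decreasing with depth.
ages = crs_ages([120, 80, 45, 20, 8])
```

The surface boundary gets age zero by construction, and ages grow monotonically with depth; the paper's point is that this deterministic recipe gives no honest error estimates, which is what the Bayesian treatment supplies.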
18,019
Analytic continuation with Padé decomposition
The ill-posed analytic continuation problem for Green's functions or self-energies can be done using the Padé rational polynomial approximation. However, to extract accurate results from this approximation, high precision input data of the Matsubara Green's function are needed. The calculation of the Matsubara Green's function generally involves a Matsubara frequency summation which cannot be evaluated analytically. Numerical summation is requisite but it converges slowly with the increase of the Matsubara frequency. Here we show that this slow convergence problem can be significantly improved by utilizing the Padé decomposition approach to replace the Matsubara frequency summation by a Padé frequency summation, and high precision input data can be obtained to successfully perform the Padé analytic continuation.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,020
Imperative Functional Programs that Explain their Work
Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work by Perera et al., where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach and a proof-of-concept implementation and experimental evaluation.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,021
Synthetic dimensions in ultracold molecules: quantum strings and membranes
Synthetic dimensions alter one of the most fundamental properties in nature, the dimension of space. They allow, for example, a real three-dimensional system to act as effectively four-dimensional. Driven by such possibilities, synthetic dimensions have been engineered in ongoing experiments with ultracold matter. We show that rotational states of ultracold molecules can be used as synthetic dimensions extending to many - potentially hundreds of - synthetic lattice sites. Microwaves coupling rotational states drive fully controllable synthetic inter-site tunnelings, enabling, for example, topological band structures. Interactions lead to even richer behavior: when molecules are frozen in a real space lattice with uniform synthetic tunnelings, dipole interactions cause the molecules to aggregate into a narrow strip in the synthetic direction beyond a critical interaction strength, resulting in a quantum string or a membrane, with an emergent condensate that lives on this string or membrane. All these phases can be detected using measurements of rotational state populations.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,022
Self-Organization and The Origins of Life: The Managed-Metabolism Hypothesis
The managed-metabolism hypothesis suggests that a cooperation barrier must be overcome if self-producing chemical organizations are to transition from non-life to life. This barrier prevents un-managed, self-organizing, autocatalytic networks of molecular species from individuating into complex, cooperative organizations. The barrier arises because molecular species that could otherwise make significant cooperative contributions to the success of an organization will often not be supported within the organization, and because side reactions and other free-riding processes will undermine cooperation. As a result, the barrier seriously limits the possibility space that can be explored by un-managed organizations, impeding individuation, complex functionality and the transition to life. The barrier can be overcome comprehensively by appropriate management which implements a system of evolvable constraints. The constraints support beneficial co-operators and suppress free riders. In this way management can manipulate the chemical processes of an autocatalytic organization, producing novel processes that serve the interests of the organization as a whole and that could not arise and persist spontaneously in an un-managed chemical organization. Management self-organizes because it is able to capture some of the benefits that are produced when its management of an autocatalytic organization promotes beneficial cooperation. Selection therefore favours the emergence of managers that take over and manage chemical organizations so as to overcome the cooperation barrier. The managed-metabolism hypothesis shows that if management is to overcome the cooperation barrier comprehensively, its interventions must be digitally coded. In this way, the hypothesis accounts for the two-tiered structure of all living cells in which a digitally-coded genetic apparatus manages an analogically-informed metabolism.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,023
Pile-up Reduction, Bayesian Decomposition and Applications of Silicon Drift Detectors at LCLS
Silicon drift detectors (SDDs) revolutionized spectroscopy in fields as diverse as geology and dentistry. For a subset of experiments at ultra-fast, x-ray free-electron lasers (FELs), SDDs can make substantial contributions. Often the unknown spectrum is interesting, carrying science data, or the background measurement is useful to identify unexpected signals. Many measurements involve only several discrete photon energies known a priori. We designed a pulse function (a combination of a gradual step and an exponential decay function) and demonstrated that for individual pulses the signal amplitude, peaking time, and pulse amplitude are interrelated, and the signal amplitude and peaking time are obtained for each pulse by fitting. Avoiding pulse shaping reduced peaking times to tens of nanoseconds, resulting in reduced pulse pile-up and allowing decomposition of remaining pulse pile-up at photon separation times down to 100~ns while yielding time-of-arrival information with a precision of 10~ns. At pulsed sources or high photon rates, photon pile-up still occurs. We showed that the area of the one-photon peaks is not suitable for estimating high photon rates, while pile-up spectrum fitting is relatively simple and preferable to pile-up spectrum deconvolution. We developed a photon pile-up model for constant intensity sources, extended it to variable intensity sources (typical for FELs) and used it to fit a complex pile-up spectrum, demonstrating its accuracy. Based on the pile-up model, we developed a Bayesian pile-up decomposition method that allows decomposing pile-up of single events with up to 6 photons from 6 monochromatic lines with 99% accuracy. The usefulness of SDDs will continue into the x-ray FEL era of science. Their successors, the ePixS hybrid pixel detectors, already offer hundreds of pixels, each with similar performance to an SDD, in a compact, robust and affordable package.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
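The constant-intensity pile-up model mentioned in the abstract can be sketched in a few lines: if the number of photons per pulse is Poisson with mean mu, the recorded-energy spectrum is a Poisson-weighted sum of k-fold convolutions of the single-photon spectrum. The code below is a simplified illustration of that idea (the channel grid, mu, and the monochromatic test line are assumptions, not the authors' fitted model).

```python
import numpy as np

def pileup_spectrum(single, mu, kmax=6):
    """Pile-up spectrum for a constant-intensity pulsed source:
    Poisson-weighted sum over k = 1..kmax of the k-fold convolution
    of the normalized single-photon spectrum, conditioned on at
    least one photon arriving."""
    single = np.asarray(single, dtype=float)
    single = single / single.sum()
    n = len(single)
    out = np.zeros(kmax * n)
    conv = np.array([1.0])               # 0-fold convolution: delta at 0
    pk = np.exp(-mu)                     # Poisson P(k = 0)
    weight_sum = 0.0
    for k in range(1, kmax + 1):
        conv = np.convolve(conv, single)  # k-fold convolution
        pk *= mu / k                      # Poisson recursion: P(k)
        out[:len(conv)] += pk * conv
        weight_sum += pk
    return out / weight_sum

# Monochromatic line at channel 10: pile-up peaks at channels 20, 30, ...
line = np.zeros(11)
line[10] = 1.0
spec = pileup_spectrum(line, mu=0.5)
```

For a single line the model predicts satellite peaks at integer multiples of the photon energy with rapidly decreasing Poisson weights, which is exactly the structure pile-up spectrum fitting exploits.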
18,024
Online Learning to Rank in Stochastic Click Models
Online learning to rank is a core problem in information retrieval and machine learning. Many provably efficient algorithms have been recently proposed for this problem in specific click models. The click model is a model of how the user interacts with a list of documents. Though these results are significant, their impact on practice is limited, because all proposed algorithms are designed for specific click models and lack convergence guarantees in other models. In this work, we propose BatchRank, the first online learning to rank algorithm for a broad class of click models. The class encompasses the two most fundamental click models, the cascade and position-based models. We derive a gap-dependent upper bound on the $T$-step regret of BatchRank and evaluate it on a range of web search queries. We observe that BatchRank outperforms ranked bandits and is more robust than CascadeKL-UCB, an existing algorithm for the cascade model.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
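The cascade click model named in the abstract is simple enough to simulate directly: the user scans the list from the top, clicks the first attractive document, and stops, so lower positions are examined only if everything above went unclicked. A minimal sketch (the attraction probabilities and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def cascade_click(attractions):
    """Cascade click model: examine positions top-down, click the
    first attractive document and stop. Returns the clicked position,
    or -1 if nothing was clicked."""
    for pos, a in enumerate(attractions):
        if rng.random() < a:
            return pos
    return -1

# With attractions (0.5, 0.5), position 1 is examined only when
# position 0 is not clicked, so its click-through rate is halved.
clicks = [cascade_click([0.5, 0.5]) for _ in range(20000)]
ctr0 = clicks.count(0) / len(clicks)
ctr1 = clicks.count(1) / len(clicks)
```

This position bias, clicks depending on examination and not just relevance, is exactly why ranking algorithms need to reason about the click model rather than raw click counts.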
18,025
Transforming Coroutining Logic Programs into Equivalent CHR Programs
We extend a technique called Compiling Control. The technique transforms coroutining logic programs into logic programs that, when executed under the standard left-to-right selection rule (and not using any delay features), have the same computational behavior as the coroutining program. In recent work, we revised Compiling Control and reformulated it as an instance of Abstract Conjunctive Partial Deduction. This work was mostly focused on the program analysis performed in Compiling Control. In the current paper, we focus on the synthesis of the transformed program. Instead of synthesizing a new logic program, we synthesize a CHR(Prolog) program which mimics the coroutining program. The synthesis to CHR yields programs containing only simplification rules, which are particularly amenable to certain static analysis techniques. The programs are also more concise and readable, and can be ported to CHR implementations embedded in languages other than Prolog.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,026
Gap and rings carved by vortices in protoplanetary dust
Large-scale vortices in protoplanetary disks are thought to form and survive for long periods of time. Hence, they can significantly change the global disk evolution and particularly the distribution of the solid particles embedded in the gas, possibly explaining asymmetries and dust concentrations recently observed at sub-millimeter and millimeter wavelengths. We investigate the spatial distribution of dust grains using a simple model of a protoplanetary disk hosting a giant gaseous vortex. We explore the dependence of the results on grain size and deduce possible consequences and predictions for observations of the dust thermal emission at sub-millimeter and millimeter wavelengths. Global 2D simulations with a bi-fluid code are used to follow the evolution of a single population of solid particles aerodynamically coupled to the gas. Possible observational signatures of the dust thermal emission are obtained using simulators of ALMA and ngVLA observations. We find that a giant vortex not only captures dust grains with Stokes number St < 1 but can also affect the distribution of larger grains (with St ~ 1), carving a gap associated with a ring composed of incompletely trapped particles. The results are presented for different particle sizes and associated with their possible signatures in disk observations. Gap clearing in the dust spatial distribution could be due to the interaction with a giant gaseous vortex and its associated spiral waves, without the gravitational assistance of a planet. Hence, strong dust concentrations at short sub-mm wavelengths associated with a gap and an irregular ring at longer mm and cm wavelengths could indicate the presence of an unseen gaseous vortex.
Labels: Computer Science=0, Physics=1, Mathematics=0, Statistics=0, Quantitative Biology=0, Quantitative Finance=0
18,027
Wasserstein Dictionary Learning: Optimal Transport-based unsupervised non-linear dictionary learning
This paper introduces a new nonlinear dictionary learning method for histograms in the probability simplex. The method leverages optimal transport theory, in the sense that our aim is to reconstruct histograms using so-called displacement interpolations (a.k.a. Wasserstein barycenters) between dictionary atoms; such atoms are themselves synthetic histograms in the probability simplex. Our method simultaneously estimates such atoms, and, for each datapoint, the vector of weights that can optimally reconstruct it as an optimal transport barycenter of such atoms. Our method is computationally tractable thanks to the addition of an entropic regularization to the usual optimal transportation problem, leading to an approximation scheme that is efficient, parallel and simple to differentiate. Both atoms and weights are learned using a gradient-based descent method. Gradients are obtained by automatic differentiation of the generalized Sinkhorn iterations that yield barycenters with entropic smoothing. Because of its formulation relying on Wasserstein barycenters instead of the usual matrix product between dictionary and codes, our method allows for nonlinear relationships between atoms and the reconstruction of input data. We illustrate its application in several different image processing settings.
Labels: Computer Science=1, Physics=0, Mathematics=0, Statistics=1, Quantitative Biology=0, Quantitative Finance=0
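The "generalized Sinkhorn iterations that yield barycenters with entropic smoothing" can be sketched with the standard iterative Bregman projection scheme for entropic Wasserstein barycenters of histograms on a shared grid. This is a textbook version of those iterations under assumed settings (grid, regularization eps, iteration count, two synthetic bumps), not the authors' differentiable dictionary-learning implementation.

```python
import numpy as np

def sinkhorn_barycenter(hists, weights, eps=0.02, iters=200):
    """Entropic Wasserstein barycenter of 1D histograms via iterative
    Bregman projections: scalings u enforce the input-histogram
    marginals, scalings v enforce the shared barycenter marginal,
    and the barycenter is the weighted geometric mean of the
    smoothed marginals."""
    n = hists.shape[1]
    grid = np.linspace(0, 1, n)
    C = (grid[:, None] - grid[None, :]) ** 2   # squared ground cost
    K = np.exp(-C / eps)                       # Gibbs kernel
    v = np.ones_like(hists)
    for _ in range(iters):
        u = hists / (K @ v.T).T                # match input marginals
        marg = (K.T @ u.T).T                   # smoothed marginals
        bary = np.exp(weights @ np.log(marg))  # weighted geometric mean
        v = bary[None, :] / marg               # match barycenter marginal
    return bary

# Barycenter of two separated bumps lands in between (displacement
# interpolation), rather than averaging the two bumps bin-wise.
x = np.linspace(0, 1, 60)
h1 = np.exp(-((x - 0.25) ** 2) / 0.002); h1 /= h1.sum()
h2 = np.exp(-((x - 0.75) ** 2) / 0.002); h2 /= h2.sum()
bary = sinkhorn_barycenter(np.stack([h1, h2]), np.array([0.5, 0.5]))
```

The closing example is the key contrast the paper exploits: a Euclidean average of the two bumps stays bimodal, while the Wasserstein barycenter is a single bump midway between them, which is what makes the dictionary nonlinear.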
18,028
A Generalization of Smillie's Theorem on Strongly Cooperative Tridiagonal Systems
Smillie (1984) proved an interesting result on the stability of nonlinear, time-invariant, strongly cooperative, and tridiagonal dynamical systems. This result has found many applications in models from various fields including biology, ecology, and chemistry. Smith (1991) has extended Smillie's result and proved entrainment in the case where the vector field is time-varying and periodic. We use the theory of linear totally nonnegative differential systems developed by Schwarz (1970) to give a generalization of these two results. This is based on weakening the requirement for strong cooperativity to cooperativity, and adding an additional observability-type condition.
1
0
0
0
0
0
18,029
Geospatial Semantics
Geospatial semantics is a broad field that involves a variety of research areas. The term semantics refers to the meaning of things, and is in contrast with the term syntactics. Accordingly, studies on geospatial semantics usually focus on understanding the meaning of geographic entities as well as their counterparts in the cognitive and digital world, such as cognitive geographic concepts and digital gazetteers. Geospatial semantics can also facilitate the design of geographic information systems (GIS) by enhancing the interoperability of distributed systems and developing more intelligent interfaces for user interactions. During the past years, a lot of research has been conducted, approaching geospatial semantics from different perspectives, using a variety of methods, and targeting different problems. Meanwhile, the arrival of big geo data, especially the large amount of unstructured text data on the Web, and the fast development of natural language processing methods enable new research directions in geospatial semantics. This chapter, therefore, provides a systematic review on the existing geospatial semantic research. Six major research areas are identified and discussed, including semantic interoperability, digital gazetteers, geographic information retrieval, geospatial Semantic Web, place semantics, and cognitive geographic concepts.
1
0
0
0
0
0
18,030
Stability of the sum of two solitary waves for (gDNLS) in the energy space
In this paper, we continue the study in \cite{MiaoTX:DNLS:Stab}. We use the perturbation argument, modulational analysis and the energy argument in \cite{MartelMT:Stab:gKdV, MartelMT:Stab:NLS} to show the stability of the sum of two solitary waves with weak interactions for the generalized derivative Schrödinger equation (gDNLS) in the energy space. Here (gDNLS) does not enjoy the Galilean invariance, the pseudo-conformal invariance, or the gauge transformation invariance, and the case $\sigma>1$ that we consider corresponds to the $L^2$-supercritical case.
0
0
1
0
0
0
18,031
A BERT Baseline for the Natural Questions
This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks, respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions and we plan to open-source the code for it in the near future.
1
0
0
0
0
0
18,032
Knowledge Evolution in Physics Research: An Analysis of Bibliographic Coupling Networks
Even as we advance the frontiers of physics knowledge, our understanding of how this knowledge evolves remains at the descriptive levels of Popper and Kuhn. Using the APS publications data sets, we ask in this letter how new knowledge is built upon old knowledge. We do so by constructing year-to-year bibliographic coupling networks, and identify in them validated communities that represent different research fields. We then visualize their evolutionary relationships in the form of alluvial diagrams, and show how they remain intact through APS journal splits. Quantitatively, we see that most fields undergo weak Popperian mixing, and it is rare for a field to remain isolated/undergo strong mixing. The sizes of fields obey a simple linear growth with recombination. We can also reliably predict the merging between two fields, but not for the considerably more complex splitting. Finally, we report a case study of two fields that underwent repeated merging and splitting around 1995, and how these Kuhnian events are correlated with breakthroughs on BEC, quantum teleportation, and slow light. This impact showed up quantitatively in the citations of the BEC field as a larger proportion of references from during and shortly after these events.
1
1
0
0
0
0
18,033
On a lower bound for the energy functional on a family of Hamiltonian minimal Lagrangian tori in $\mathbb{C}P^2$
We study the energy functional on the set of Lagrangian tori in $\mathbb{C}P^2$. We prove that the value of the energy functional on a certain family of Hamiltonian minimal Lagrangian tori in $\mathbb{C}P^2$ is strictly larger than the energy of the Clifford torus.
0
0
1
0
0
0
18,034
Dynamic Mortality Risk Predictions in Pediatric Critical Care Using Recurrent Neural Networks
Viewing the trajectory of a patient as a dynamical system, a recurrent neural network was developed to learn the course of patient encounters in the Pediatric Intensive Care Unit (PICU) of a major tertiary care center. Data extracted from Electronic Medical Records (EMR) of about 12000 patients who were admitted to the PICU over a period of more than 10 years were leveraged. The RNN model ingests a sequence of measurements which include physiologic observations, laboratory results, administered drugs and interventions, and generates temporally dynamic predictions for in-ICU mortality at user-specified times. The RNN's ICU mortality predictions offer significant improvements over those from two clinically-used scores and static machine learning algorithms.
1
0
1
1
0
0
18,035
Global Strichartz estimates for the Schrödinger equation with non zero boundary conditions and applications
We consider the Schrödinger equation on a half space in any dimension with a class of nonhomogeneous boundary conditions including Dirichlet, Neumann and the so-called transparent boundary conditions. Building upon recent local in time Strichartz estimates (for Dirichlet boundary conditions), we obtain global Strichartz estimates for initial data in $H^s,\ 0\leq s\leq 2$ and boundary data in a natural space $\mathcal{H}^s$. For $s\geq 1/2$, the issue of compatibility conditions requires a thorough analysis of the $\mathcal{H}^s$ space. As an application we solve nonlinear Schrödinger equations and construct global asymptotically linear solutions for small data. A discussion is included on the appropriate notion of scattering in this framework, and the optimality of the $\mathcal{H}^s$ space.
0
0
1
0
0
0
18,036
Structure of $^{20}$Ne states in the resonance $^{16}$O+$α$ elastic scattering
Background: The nuclear structure of the cluster bands in $^{20}$Ne presents a challenge for different theoretical approaches. It is especially difficult to explain the broad 0$^+$, 2$^+$ states at 9 MeV excitation energy. Simultaneously, it is important to obtain more reliable experimental data for these levels in order to quantitatively assess the theoretical framework. Purpose: To obtain new data on the $^{20}$Ne $\alpha$ cluster structure. Method: The thick target inverse kinematics (TTIK) technique was used to study the $^{16}$O+$\alpha$ resonance elastic scattering, and the data were analyzed using an $R$-matrix approach. The $^{20}$Ne spectrum, the cluster and nucleon spectroscopic factors were calculated using the cluster-nucleon configuration interaction model (CNCIM). Results: We determined the parameters of the broad resonances in $^{20}$Ne: a 0$^+$ level at 8.77 $\pm$ 0.150 MeV with a width of 750 (+500/-220) keV; a 2$^+$ level at 8.75 $\pm$ 0.100 MeV with a width of 695 $\pm$ 120 keV; the width of the 9.48 MeV level of 65 $\pm$ 20 keV; and we showed that the 9.19 MeV, 2$^+$ level (if it exists) should have a width $\leq$ 10 keV. A detailed comparison of the theoretical CNCIM predictions with the experimental data on cluster states was made. Conclusions: Our experimental results by the TTIK method generally confirm the adopted data on $\alpha$ cluster levels in $^{20}$Ne. The CNCIM gives a good description of the $^{20}$Ne positive parity states up to an excitation energy of $\sim$ 7 MeV, predicting reasonably well the excitation energy of the states and their cluster and single particle properties. At higher excitations, the qualitative disagreement with the experimentally observed structure is evident, especially for broad resonances.
0
1
0
0
0
0
18,037
Terminal-Pairability in Complete Bipartite Graphs
We investigate the terminal-pairability problem in the case when the base graph is a complete bipartite graph, and the demand graph is also bipartite with the same color classes. We improve the lower bound on the maximum value of $\Delta(D)$ which still guarantees that the demand graph $D$ is terminal-pairable in this setting. We also prove a sharp theorem on the maximum number of edges such a demand graph can have.
0
0
1
0
0
0
18,038
Rate-optimal Meta Learning of Classification Error
Meta learning of optimal classifier error rates allows an experimenter to empirically estimate the intrinsic ability of any estimator to discriminate between two populations, circumventing the difficult problem of estimating the optimal Bayes classifier. To this end we propose a weighted nearest neighbor (WNN) graph estimator for a tight bound on the Bayes classification error; the Henze-Penrose (HP) divergence. Similar to recently proposed HP estimators [berisha2016], the proposed estimator is non-parametric and does not require density estimation. However, unlike previous approaches the proposed estimator is rate-optimal, i.e., its mean squared estimation error (MSEE) decays to zero at the fastest possible rate of $O(1/M+1/N)$ where $M,N$ are the sample sizes of the respective populations. We illustrate the proposed WNN meta estimator for several simulated and real data sets.
0
0
0
1
0
0
18,039
Fast, Robust, and Versatile Event Detection through HMM Belief State Gradient Measures
Event detection is a critical feature in data-driven systems as it assists with the identification of nominal and anomalous behavior. Event detection is increasingly relevant in robotics as robots operate with greater autonomy in increasingly unstructured environments. In this work, we present an accurate, robust, fast, and versatile measure for skill and anomaly identification. A theoretical proof establishes the link between the derivative of the log-likelihood of the HMM filtered belief state and the latest emission probabilities. The key insight is the inverse relationship in which gradient analysis is used for skill and anomaly identification. Our measure showed better performance across all metrics than related state-of-the art works. The result is broadly applicable to domains that use HMMs for event detection.
1
0
0
0
0
0
18,040
Geometric features of Vessiot--Guldberg Lie algebras of conformal and Killing vector fields on $\mathbb{R}^2$
This paper locally classifies finite-dimensional Lie algebras of conformal and Killing vector fields on $\mathbb{R}^2$ relative to an arbitrary pseudo-Riemannian metric. Several results about their geometric properties are detailed, e.g. their invariant distributions and induced symplectic structures. Findings are illustrated with two examples of physical nature: the Milne--Pinney equation and the projective Schrödinger equation on the Riemann sphere.
0
1
0
0
0
0
18,041
Fermion inter-particle potentials in 5D and a dimensional restriction prescription to 4D
This work sets out to compute and discuss effects of spin, velocity and dimensionality on inter-particle potentials systematically derived from gauge field-theoretic models. We investigate the interaction of fermionic particles by the exchange of a vector field in a parity-preserving description in five-dimensional $(5D)$ space-time. A particular dimensional reduction prescription is adopted $-$ reduction by dimensional restriction $-$ and special effects, like a pseudo-spin dependence, show up in four dimensions $(4D)$. What we refer to as pseudo-spin shall be duly explained. The main idea we try to convey is that the calculation of the potentials in $5D$ and the consequent reduction to $4D$ exhibits new effects that are not present if the potential is calculated in $4D$ after the action has been reduced.
0
1
0
0
0
0
18,042
Characterizations of idempotent discrete uninorms
In this paper we provide an axiomatic characterization of the idempotent discrete uninorms by means of three conditions only: conservativeness, symmetry, and nondecreasing monotonicity. We also provide an alternative characterization involving the bisymmetry property. Finally, we provide a graphical characterization of these operations in terms of their contour plots, and we mention a few open questions for further research.
1
0
1
0
0
0
18,043
Constraints on the sum of neutrino masses using cosmological data including the latest extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample
We investigate the constraints on the sum of neutrino masses ($\Sigma m_\nu$) using the most recent cosmological data, which combines the distance measurement from baryonic acoustic oscillation in the extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample with the power spectra of temperature and polarization anisotropies in the cosmic microwave background from the Planck 2015 data release. We also use other low-redshift observations including the baryonic acoustic oscillation at relatively low redshifts, the supernovae of type Ia and the local measurement of the Hubble constant. In the standard cosmological constant $\Lambda$ cold dark matter plus massive neutrino model, we obtain the $95\%$ confidence level upper limit to be $\Sigma m_\nu<0.129~\mathrm{eV}$ for the degenerate mass hierarchy, $\Sigma m_{\nu}<0.159~\mathrm{eV}$ for the normal mass hierarchy, and $\Sigma m_{\nu}<0.189~\mathrm{eV}$ for the inverted mass hierarchy. Based on Bayesian evidence, we find that the degenerate hierarchy is positively supported, and the current data combination cannot distinguish normal and inverted hierarchies. Assuming the degenerate mass hierarchy, we extend our study to non-standard cosmological models including the generic dark energy, the spatial curvature, and the extra relativistic degrees of freedom, respectively, but find these models not favored by the data.
0
1
0
0
0
0
18,044
The effect of the choice of neural network depth and breadth on the size of its hypothesis space
We show that the number of unique function mappings in a neural network hypothesis space is inversely proportional to $\prod_lU_l!$, where $U_{l}$ is the number of neurons in the hidden layer $l$.
0
0
0
1
0
0
18,045
Fairwashing: the risk of rationalization
Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden to the auditor and generally complex -- produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the perception that a machine learning model respects some ethical values while it might not be the case. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair black-box model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably less unfair at the same time.
1
0
0
1
0
0
18,046
Volume Dependence of N-Body Bound States
We derive the finite-volume correction to the binding energy of an N-particle quantum bound state in a cubic periodic volume. Our results are applicable to bound states with arbitrary composition and total angular momentum, and in any number of spatial dimensions. The only assumption is that the interactions have finite range. The finite-volume correction is a sum of contributions from all possible breakup channels. In the case where the separation is into two bound clusters, our result gives the leading volume dependence up to exponentially small corrections. If the separation is into three or more clusters, there is a power-law factor that is beyond the scope of this work; however, our result again determines the leading exponential dependence. We also present two independent methods that use finite-volume data to determine asymptotic normalization coefficients. The coefficients are useful to determine low-energy capture reactions into weakly bound states relevant for nuclear astrophysics. Using the techniques introduced here, one can even extract the infinite-volume energy limit using data from a single-volume calculation. The derived relations are tested using several exactly solvable systems and numerical examples. We anticipate immediate applications to lattice calculations of hadronic, nuclear, and cold atomic systems.
0
1
1
0
0
0
18,047
Tree-Structured Boosting: Connections Between Gradient Boosted Stumps and Full Decision Trees
Additive models, such as produced by gradient boosting, and full interaction models, such as classification and regression trees (CART), are widely used algorithms that have been investigated largely in isolation. We show that these models exist along a spectrum, revealing never-before-known connections between these two approaches. This paper introduces a novel technique called tree-structured boosting for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although tree-structured boosting is designed primarily to provide both the model interpretability and predictive performance needed for high-stake applications like medicine, it also can produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
1
0
0
1
0
0
18,048
Magnetic field influenced electron-impurity scattering and magnetotransport
We formulate a quasiclassical theory ($\omega_c\tau \lesssim 1$ with $\omega_c$ as the cyclotron frequency and $\tau$ as the relaxation time) to study the influence of magnetic field on electron-impurity scattering process in the two-dimensional electron gas. We introduce a general recipe based on an abstraction of the detailed impurity scattering process to define the scattering parameter such as the incoming and outgoing momentum and coordinate jump. In this picture, we can conveniently describe the skew scattering and coordinate jump, which will eventually modify the Boltzmann equation. We find an anomalous Hall resistivity different from the conventional Boltzmann-Drude result and a negative magnetoresistivity parabolic in magnetic field. The origin of these results has been analyzed. The relevance between our theory and recent simulation and experimental works is also discussed. Our theory dominates in dilute impurity system where the correlation effect is negligible.
0
1
0
0
0
0
18,049
Evidence for depletion of heavy silicon isotopes at comet 67P/Churyumov-Gerasimenko
Context. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) was designed to measure the composition of the gas in the coma of comet 67P/Churyumov-Gerasimenko, the target of the European Space Agency's Rosetta mission. In addition to the volatiles, ROSINA measured refractories sputtered off the comet by the interaction of solar wind protons with the surface of the comet. Aims. The origin of different solar system materials is still heavily debated. Isotopic ratios can be used to distinguish between different reservoirs and investigate processes occurring during the formation of the solar system. Methods. ROSINA consisted of two mass spectrometers and a pressure sensor. In the ROSINA Double Focusing Mass Spectrometer (DFMS), the neutral gas of cometary origin was ionized and then deflected in an electric and a magnetic field that separated the ions based on their mass-to-charge ratio. The DFMS had a high mass resolution, dynamic range, and sensitivity that allowed detection of rare species and the known major volatiles. Results. We measured the relative abundance of all three stable silicon isotopes with the ROSINA instrument on board the Rosetta spacecraft. Furthermore, we measured $^{13}$C/$^{12}$C in C$_2$H$_4$, C$_2$H$_5$, and CO. The DFMS in situ measurements indicate that the average silicon isotopic composition shows depletion in the heavy isotopes $^{29}$Si and $^{30}$Si with respect to $^{28}$Si and solar abundances, while $^{13}$C to $^{12}$C is analytically indistinguishable from bulk planetary and meteorite compositions. Although the origin of the deficiency of the heavy silicon isotopes cannot be explained unambiguously, we discuss mechanisms that could have contributed to the measured depletion of the isotopes $^{29}$Si and $^{30}$Si.
0
1
0
0
0
0
18,050
Design, Simulation, and Testing of a Flexible Actuated Spine for Quadruped Robots
Walking quadruped robots face challenges in positioning their feet and lifting their legs during gait cycles over uneven terrain. The robot Laika is under development as a quadruped with a flexible, actuated spine designed to assist with foot movement and balance during these gaits. This paper presents the first set of hardware designs for the spine of Laika, a physical prototype of those designs, and tests in both hardware and simulations that show the prototype's capabilities. Laika's spine is a tensegrity structure, used for its advantages with weight and force distribution, and represents the first working prototype of a tensegrity spine for a quadruped robot. The spine bends by adjusting the lengths of the cables that separate its vertebrae, and twists using an actuated rotating vertebra at its center. The current prototype of Laika has stiff legs attached to the spine, and is used as a test setup for evaluation of the spine itself. This work shows the advantages of Laika's spine by demonstrating the spine lifting each of the robot's four feet, both as a form of balancing and as a precursor for a walking gait. These foot motions, using specific combinations of bending and rotation movements of the spine, are measured in both simulation and hardware experiments. Hardware data are used to calibrate the simulations, such that the simulations can be used for control of balancing or gait cycles in the future. Future work will attach actuated legs to Laika's spine, and examine balancing and gait cycles when combined with leg movements.
1
0
0
0
0
0
18,051
The landscape of NeuroImage-ing research
As the field of neuroimaging grows, it can be difficult for scientists within the field to gain and maintain a detailed understanding of its ever-changing landscape. While collaboration and citation networks highlight important contributions within the field, the roles of and relations among specific areas of study can remain quite opaque. Here, we apply techniques from network science to map the landscape of neuroimaging research documented in the journal NeuroImage over the past decade. We create a network in which nodes represent research topics, and edges give the degree to which these topics tend to be covered in tandem. The network displays small-world architecture, with communities characterized by common imaging modalities and medical applications, and with bridges that integrate these distinct subfields. Using node-level analysis, we quantify the structural roles of individual topics within the neuroimaging landscape, and find high levels of clustering within the structural MRI subfield as well as increasing participation among topics related to psychiatry. The overall prevalence of a topic is unrelated to the prevalence of its neighbors, but the degree to which a topic becomes more or less popular over time is strongly related to changes in the prevalence of its neighbors. Broadly, this work presents a cohesive model for understanding the landscape of neuroimaging research across the field, in broad subfields, and within specific topic areas.
1
0
0
1
0
0
18,052
To tune or not to tune the number of trees in random forest?
The number of trees T in the random forest (RF) algorithm for supervised learning has to be set by the user. It is controversial whether T should simply be set to the largest computationally manageable value or whether a smaller T may in some cases be better. While the principle underlying bagging is that "more trees are better", in practice the classification error rate sometimes reaches a minimum before increasing again for increasing number of trees. The goal of this paper is four-fold: (i) providing theoretical results showing that the expected error rate may be a non-monotonous function of the number of trees and explaining under which circumstances this happens; (ii) providing theoretical results showing that such non-monotonous patterns cannot be observed for other performance measures such as the Brier score and the logarithmic loss (for classification) and the mean squared error (for regression); (iii) illustrating the extent of the problem through an application to a large number (n = 306) of datasets from the public database OpenML; (iv) finally arguing in favor of setting T to a computationally feasible large number, depending on convergence properties of the desired performance measure.
1
0
0
1
0
0
18,053
SPH Modeling of Short-crested Waves
This study investigates short-crested wave breaking over a planar beach by using the mesh-free Smoothed Particle Hydrodynamics model, GPUSPH. The short-crested waves are created by generating intersecting wave trains in a numerical wave basin. We examine the influence of beach slope, incident wave height, and incident wave angle on the generated short-crested waves. Short-crested wave breaking over a steeper beach generates stronger rip currents, and larger circulation cells in front of the beach. Intersecting wave trains with a larger incident wave height drive a more complicated short-crested wave field including isolated breakers and wave amplitude diffraction. Nearshore circulation induced by short-crested wave breaking is greatly influenced by the incident wave angle (or the rip current spacing). There is no secondary circulation cell between the nodal line and the antinodal line if the rip current spacing is narrow. However, there are multiple secondary circulation cells observed when the rip current spacing is relatively large.
0
1
0
0
0
0
18,054
Tunneling Field-Effect Junctions with WS$_2$ barrier
Transition metal dichalcogenides (TMDCs), with their two-dimensional structures and sizable bandgaps, are good candidates for barrier materials in tunneling field-effect transistor (TFET) formed from atomic precision vertical stacks of graphene and insulating crystals of a few atomic layers in thickness. We report first-principles study of the electronic properties of the Graphene/WS$_2$/Graphene sandwich structure revealing strong interface effects on dielectric properties and predicting a high ON/OFF ratio with an appropriate WS$_2$ thickness and a suitable range of the gate voltage. Both the band spin-orbit coupling splitting and the dielectric constant of the WS$_2$ layer depend on its thickness when in contact with the graphene electrodes, indicating strong influence from graphene across the interfaces. The dielectric constant is significantly reduced from the bulk WS$_2$ value. The effective barrier height varies with WS$_2$ thickness and can be tuned by a gate voltage. These results are critical for future nanoelectronic device designs.
0
1
0
0
0
0
18,055
Topological Larkin-Ovchinnikov phase and Majorana zero mode chain in bilayer superconducting topological insulator films
We theoretically study a bilayer superconducting topological insulator film, in which superconductivity exists for both top and bottom surface states. We show that an in-plane magnetic field can drive the system into a Larkin-Ovchinnikov (LO) phase, where electrons are paired with finite momenta. The LO phase is topologically non-trivial and characterized by a $Z_2$ topological invariant, leading to a Majorana zero mode chain along the edge perpendicular to in-plane magnetic fields.
0
1
0
0
0
0
18,056
Optimization and Analysis of Wireless Powered Multi-antenna Cooperative Systems
In this paper, we consider a three-node cooperative wireless powered communication system consisting of a multi-antenna hybrid access point (H-AP), a single-antenna relay, and a single-antenna user. The energy constrained relay and user first harvest energy in the downlink, and then the relay assists the user using the harvested power for information transmission in the uplink. The optimal energy beamforming vector and the time split between harvest and cooperation are investigated. To reduce the computational complexity, suboptimal designs are also studied, where closed-form expressions are derived for the energy beamforming vector and the time split. For comparison purposes, we also present a detailed performance analysis in terms of the achievable outage probability and the average throughput of an intuitive energy beamforming scheme, where the H-AP directs all the energy towards the user. The findings of the paper suggest that implementing multiple antennas at the H-AP can significantly improve the system performance, and the closed-form suboptimal energy beamforming vector and time split yield near optimal performance. Also, for the intuitive beamforming scheme, a diversity order of (N+1)/2 can be achieved, where N is the number of antennas at the H-AP.
1
0
0
0
0
0
18,057
Asai cube L-functions and the local Langlands conjecture
Let $F$ be a non-archimedean locally compact field. We study a class of Langlands-Shahidi pairs $({\bf H},{\bf L})$, consisting of a quasi-split connected reductive group $\bf H$ over $F$ and a Levi subgroup $\bf L$ which is closely related to a product of restriction of scalars of ${\rm GL}_1$'s or ${\rm GL}_2$'s. We prove the compatibility of the resulting local factors with the Langlands correspondence. In particular, let $E$ be a cubic separable extension of $F$. We consider a simply connected quasi-split semisimple group $\bf H$ over $F$ of type $D_4$, with triality corresponding to $E$, and let $\bf L$ be its Levi subgroup with derived group ${\rm Res}_{E/F} {\rm SL}_2$. In this way we obtain Asai cube local factors attached to irreducible smooth representations of ${\rm GL}_2(E)$; we prove that they are Weil-Deligne factors obtained via the local Langlands correspondence for ${\rm GL}_2(E)$ and tensor induction from $E$ to $F$. A consequence is that Asai cube $\gamma$- and $\varepsilon$-factors become stable under twists by highly ramified characters.
0
0
1
0
0
0
18,058
What's in a game? A theory of game models
Game semantics is a rich and successful class of denotational models for programming languages. Most game models feature a rather intuitive setup, yet surprisingly difficult proofs of such basic results as associativity of composition of strategies. We set out to unify these models into a basic abstract framework for game semantics, game settings. Our main contribution is the generic construction, for any game setting, of a category of games and strategies. Furthermore, we extend the framework to deal with innocence, and prove that innocent strategies form a subcategory. We finally show that our constructions cover many concrete cases, mainly among the early models and the very recent sheaf-based ones.
1
0
0
0
0
0
18,059
Proposal for the Detection of Magnetic Monopoles in Spin Ice via Nanoscale Magnetometry
We present a proposal for applying nanoscale magnetometry to the search for magnetic monopoles in the spin ice materials holmium and dysprosium titanate. Employing Monte Carlo simulations of the dipolar spin ice model, we find that when cooled to below $1.5\,$K these materials exhibit a sufficiently low monopole density to enable the direct observation of magnetic fields from individual monopoles. At these temperatures we demonstrate that noise spectroscopy can capture the intrinsic fluctuations associated with monopole dynamics, allowing one to isolate the qualitative effects associated with both the Coulomb interaction between monopoles and the topological constraints implied by Dirac strings. We describe in detail three different nanoscale magnetometry platforms (muon spin rotation, nitrogen vacancy defects, and nanoSQUID arrays) that can be used to detect monopoles in these experiments, and analyze the advantages of each.
0
1
0
0
0
0
18,060
Rule-Based Spanish Morphological Analyzer Built From Spell Checking Lexicon
Preprocessing tools for automated text analysis have become more widely available in major languages, but non-English tools are often still limited in their functionality. When working with Spanish-language text, researchers can easily find tools for tokenization and stemming, but may not have the means to extract more complex word features like verb tense or mood. Yet Spanish is a morphologically rich language in which such features are often identifiable from word form. Conjugation rules are consistent, but many special verbs and nouns take on different rules. While building a complete dictionary of known words and their morphological rules would be labor intensive, resources to do so already exist, in spell checkers designed to generate valid forms of known words. This paper introduces a set of tools for Spanish-language morphological analysis, built using the COES spell checking tools, to label person, mood, tense, gender and number, derive a word's root noun or verb infinitive, and convert verbs to their nominal form.
1
0
0
0
0
0
18,061
Learning Edge Representations via Low-Rank Asymmetric Projections
We propose a new method for embedding graphs while preserving directed edge information. Learning such continuous-space vector representations (or embeddings) of nodes in a graph is an important first step for using network information (from social networks, user-item graphs, knowledge bases, etc.) in many machine learning tasks. Unlike previous work, we (1) explicitly model an edge as a function of node embeddings, and we (2) propose a novel objective, the "graph likelihood", which contrasts information from sampled random walks with non-existent edges. Individually, both of these contributions improve the learned representations, especially when there are memory constraints on the total size of the embeddings. When combined, our contributions enable us to significantly improve the state-of-the-art by learning more concise representations that better preserve the graph structure. We evaluate our method on a variety of link-prediction tasks, including social networks, collaboration networks, and protein interactions, showing that our proposed method learns representations with error reductions of up to 76% and 55% on directed and undirected graphs, respectively. In addition, we show that the representations learned by our method are quite space efficient, producing embeddings which have higher structure-preserving accuracy but are 10 times smaller.
1
0
0
1
0
0
18,062
Inference, Prediction, and Control of Networked Epidemics
We develop a feedback control method for networked epidemic spreading processes. In contrast to most prior works which consider mean field, open-loop control schemes, the present work develops a novel framework for feedback control of epidemic processes which leverages incomplete observations of the stochastic epidemic process in order to control the exact dynamics of the epidemic outbreak. We develop an observation model for the epidemic process, and demonstrate that if the set of observed nodes is sufficiently well structured, then the random variables which denote the process' infections are conditionally independent given the observations. We then leverage the attained conditional independence property to construct tractable mechanisms for the inference and prediction of the process state, avoiding the need to use mean field approximations or combinatorial representations. We conclude by formulating a one-step lookahead controller for the discrete-time Susceptible-Infected-Susceptible (SIS) epidemic process which leverages the developed Bayesian inference and prediction mechanisms, and causes the epidemic to die out at a chosen rate.
0
0
1
0
0
0
18,063
The Weisfeiler-Leman algorithm and the diameter of Schreier graphs
We prove that the number of iterations taken by the Weisfeiler-Leman algorithm for configurations coming from Schreier graphs is closely linked to the diameter of the graphs themselves: an upper bound is found for general Schreier graphs, and a lower bound holds for particular cases, such as for Schreier graphs with $G=\mbox{SL}_{n}({\mathbb F}_{q})$ ($q>2$) acting on $k$-tuples of vectors in ${\mathbb F}_{q}^{n}$; moreover, an exact expression is found in the case of Cayley graphs.
0
0
1
0
0
0
18,064
Correlations between primes in short intervals on curves over finite fields
We prove an analogue of the Hardy-Littlewood conjecture on the asymptotic distribution of prime constellations in the setting of short intervals in function fields of smooth projective curves over finite fields.
0
0
1
0
0
0
18,065
Extracting and Exploiting Inherent Sparsity for Efficient IoT Support in 5G: Challenges and Potential Solutions
Besides enabling enhanced mobile broadband, the next generation of mobile networks (5G) is envisioned to support massive connectivity of heterogeneous Internet of Things (IoT) devices. These IoT devices are envisioned for a large number of use-cases including smart cities, environment monitoring, smart vehicles, etc. Unfortunately, most IoT devices have very limited computing and storage capabilities and need cloud services. Hence, connecting these devices through 5G systems requires huge spectrum resources in addition to handling the massive connectivity and improved security. This article discusses the challenges facing the support of IoT devices through 5G systems. The focus is devoted to discussing physical layer limitations in terms of spectrum resources and radio access channel connectivity. We show how sparsity can be exploited for addressing these challenges, especially in terms of enabling wideband spectrum management and handling the connectivity by exploiting device-to-device communications and the edge cloud. Moreover, we identify major open problems and research directions that need to be explored towards enabling the support of massive heterogeneous IoT devices through 5G systems.
1
0
0
0
0
0
18,066
Existence and convexity of local solutions to degenerate Hessian equations
In this work, we prove the existence of a local convex solution to the degenerate Hessian equation
0
0
1
0
0
0
18,067
Superexponential estimates and weighted lower bounds for the square function
We prove the following superexponential distribution inequality: for any integrable $g$ on $[0,1)^{d}$ with zero average, and any $\lambda>0$ \[ |\{ x \in [0,1)^{d} \; :\; g \geq\lambda \}| \leq e^{- \lambda^{2}/(2^{d}\|S(g)\|_{\infty}^{2})}, \] where $S(g)$ denotes the classical dyadic square function in $[0,1)^{d}$. The estimate is sharp when dimension $d$ tends to infinity in the sense that the constant $2^{d}$ in the denominator cannot be replaced by $C2^{d}$ with $0<C<1$ independent of $d$ when $d \to \infty$. For $d=1$ this is a classical result of Chang--Wilson--Wolff [4]; however, in the case $d>1$ they work with a special square function $S_\infty$, and their result does not imply the estimates for the classical square function. Using good $\lambda$ inequalities technique we then obtain unweighted and weighted $L^p$ lower bounds for $S$; to get the corresponding good $\lambda$ inequalities we need to modify the classical construction. We also show how to obtain our superexponential distribution inequality (although with worse constants) from the weighted $L^2$ lower bounds for $S$, obtained in [5].
0
0
1
0
0
0
18,068
Reframing the S\&P500 Network of Stocks along the \nth{21} Century
Since the beginning of the new millennium, stock markets went through every state from long-time troughs, trade suspensions to all-time highs. The literature on asset pricing hence assumes random processes to be underlying the movement of stock returns. Observed procyclicality and time-varying correlation of stock returns tried to give the apparently random behavior some sort of structure. However, common misperceptions about the co-movement of asset prices in the years preceding the \emph{Great Recession} and the \emph{Global Commodity Crisis}, is said to have even fueled the crisis' economic impact. Here we show how a varying macroeconomic environment influences stocks' clustering into communities. From a sample of 296 stocks of the S\&P 500 index, distinct periods in between 2004 and 2011 are used to develop networks of stocks. The Minimal Spanning Tree analysis of those time-varying networks of stocks demonstrates that the crises of 2007-2008 and 2010-2011 drove the market to clustered community structures in both periods, helping to restore the stock market's ceased order of the pre-crises era. However, a comparison of the emergent clusters with the \textit{General Industry Classification Standard} conveys the impression that industry sectors do not play a major role in that order.
0
0
0
0
0
1
18,069
Point-contact spectroscopy of superconducting energy gap in $\rm DyNi_2B_2C$
The superconducting energy gap in $\rm DyNi_2B_2C$ has been investigated using a point-contact technique based on the Andreev reflection from a normal (N)-superconductor (S) boundary, where N is Ag. The observed differential resistance $dV/dI$ is well described by the Blonder-Tinkham-Klapwijk (BTK) theory based on the BCS density of states with zero broadening parameter. Typically, the intensity of the gap structure amounts to several percent of the normal state resistance, which is an order of magnitude less than predicted by the theory. For $\rm DyNi_2B_2C$ with $T_c<T_N$ (the Neel temperature), we found gap values satisfying the ratio $2\Delta_0/k_BT_c=3.63\pm 0.05$, similar to other superconducting nickel-borocarbides, both nonmagnetic and magnetic with $T_c\geq T_N$. The superconducting gap nonlinearity is superimposed on the antiferromagnetic structure in $dV/dI(V)$, which is suppressed at a magnetic field of the order of 3 T applied nominally in the $ab$-plane and at temperatures $\geq 11~K$. The observed superconducting properties depend on the exact composition and structure at the surface of the crystal.
0
1
0
0
0
0
18,070
Parameterized Complexity of Safe Set
In this paper we study the problem of finding a small safe set $S$ in a graph $G$, i.e. a non-empty set of vertices such that no connected component of $G[S]$ is adjacent to a larger component in $G - S$. We enhance our understanding of the problem from the viewpoint of parameterized complexity by showing that (1) the problem is W[2]-hard when parameterized by the pathwidth $pw$ and cannot be solved in time $n^{o(pw)}$ unless the ETH is false, (2) it admits no polynomial kernel parameterized by the vertex cover number $vc$ unless $\mathrm{PH} = \Sigma^{\mathrm{p}}_{3}$, but (3) it is fixed-parameter tractable (FPT) when parameterized by the neighborhood diversity $nd$, and (4) it can be solved in time $n^{f(cw)}$ for some double exponential function $f$ where $cw$ is the clique-width. We also present (5) a faster FPT algorithm when parameterized by solution size.
1
0
0
0
0
0
18,071
A Matrix Variate Skew-t Distribution
Although there is ample work in the literature dealing with skewness in the multivariate setting, there is a relative paucity of work in the matrix variate paradigm. Such work is, for example, useful for modelling three-way data. A matrix variate skew-t distribution is derived based on a mean-variance matrix normal mixture. An expectation-conditional maximization algorithm is developed for parameter estimation. Simulated data are used for illustration.
0
0
1
1
0
0
18,072
Status Updates Through Multicast Networks
Using age of information as the freshness metric, we examine a multicast network in which real-time status updates are generated by the source and sent to a group of $n$ interested receivers. We show that in order to keep the information freshness at each receiver, the source should terminate the transmission of the current update and start sending a new update packet as soon as it receives the acknowledgements back from any $k$ out of $n$ nodes. As the source stopping threshold $k$ increases, a node is more likely to get the latest generated update, but the age of the most recent update is more likely to become outdated. We derive the age minimized stopping threshold $k$ that balances the likelihood of getting the latest update and the freshness of the latest update for shifted exponential link delay. Through numerical evaluations for different stopping strategies, we find that waiting for the acknowledgements from the earliest $k$ out of $n$ nodes leads to lower average age than waiting for a pre-selected group of $k$ nodes. We also observe that a properly chosen threshold $k$ can prevent information staleness for increasing number of nodes $n$ in the multicast network.
1
0
0
0
0
0
18,073
Extension of Convolutional Neural Network with General Image Processing Kernels
We applied pre-defined kernels, also known as filters or masks, developed for image processing to convolutional neural networks. Instead of letting neural networks find their own kernels, we used 41 different general-purpose kernels for blurring, edge detection, sharpening, discrete cosine transformation, etc. for the first layer of the convolutional neural networks. This architecture, thus named general filter convolutional neural network (GFNN), can reduce training time by 30% with better accuracy compared to the regular convolutional neural network (CNN). GFNN can also be trained to achieve 90% accuracy with only 500 samples. Furthermore, even though these kernels are not specialized for the MNIST dataset, we achieved 99.56% accuracy without ensembling or any other special algorithms.
1
0
0
1
0
0
18,074
Sound event detection using weakly labeled dataset with stacked convolutional and recurrent neural network
This paper proposes a neural network architecture and training scheme to learn the start and end times of sound events (strong labels) in an audio recording given just the list of sound events existing in the audio without time information (weak labels). We achieve this by using a stacked convolutional and recurrent neural network with two prediction layers in sequence, one for the strong labels followed by one for the weak labels. The network is trained using frame-wise log mel-band energy as the input audio feature, and the weak labels provided in the dataset as labels for the weak label prediction layer. Strong labels are generated by replicating the weak labels as many times as there are frames in the input audio feature, and are used for the strong label layer during training. We propose to control what the network learns from the weak and strong labels through different weightings of the loss computed in the two prediction layers. The proposed method is evaluated on a publicly available dataset of 155 hours with 17 sound event classes. The method achieves the best error rate of 0.84 for strong labels and F-score of 43.3% for weak labels on the unseen test split.
1
0
0
0
0
0
18,075
Categoricity and Universal Classes
Let $(\mathcal{K} ,\subseteq )$ be a universal class with $LS(\mathcal{K})=\lambda$ categorical in regular $\kappa >\lambda^+$ with arbitrarily large models, and let $\mathcal{K}^*$ be the class of all $\mathcal{A}\in\mathcal{K}_{>\lambda}$ for which there is $\mathcal{B} \in \mathcal{K}_{\ge\kappa}$ such that $\mathcal{A}\subseteq\mathcal{B}$. We prove that $\mathcal{K}^*$ is categorical in every $\xi >\lambda^+$, $\mathcal{K}_{\ge\beth_{(2^{\lambda^+})^+}} \subseteq \mathcal{K}^{*}$, and the models of $\mathcal{K}^*_{>\lambda^+}$ are essentially vector spaces (or trivial i.e. disintegrated).
0
0
1
0
0
0
18,076
Water, High-Altitude Condensates, and Possible Methane Depletion in the Atmosphere of the Warm Super-Neptune WASP-107b
The super-Neptune exoplanet WASP-107b is an exciting target for atmosphere characterization. It has an unusually large atmospheric scale height and a small, bright host star, raising the possibility of precise constraints on its current nature and formation history. We report the first atmospheric study of WASP-107b, a Hubble Space Telescope measurement of its near-infrared transmission spectrum. We determined the planet's composition with two techniques: atmospheric retrieval based on the transmission spectrum and interior structure modeling based on the observed mass and radius. The interior structure models set a $3\,\sigma$ upper limit on the atmospheric metallicity of $30\times$ solar. The transmission spectrum shows strong evidence for water absorption ($6.5\,\sigma$ confidence), and the retrieved water abundance is consistent with expectations for a solar abundance pattern. The inferred carbon-to-oxygen ratio is subsolar at $2.7\,\sigma$ confidence, which we attribute to possible methane depletion in the atmosphere. The spectral features are smaller than predicted for a cloud-free composition, crossing less than one scale height. A thick condensate layer at high altitudes (0.1 - 3 mbar) is needed to match the observations. We find that physically motivated cloud models with moderate sedimentation efficiency ($f_\mathrm{sed} = 0.3$) or hazes with a particle size of 0.3 $\mu$m reproduce the observed spectral feature amplitude. Taken together, these findings serve as an illustration of the diversity and complexity of exoplanet atmospheres. The community can look forward to more such results with the high precision and wide spectral coverage afforded by future observing facilities.
0
1
0
0
0
0
18,077
Partial Order on the set of Boolean Regulatory Functions
Logical models have been successfully used to describe regulatory and signaling networks without requiring quantitative data. However, existing data is insufficient to adequately define a unique model, rendering the parametrization of a given model a difficult task. Here, we focus on the characterization of the set of Boolean functions compatible with a given regulatory structure, i.e. the set of all monotone nondegenerate Boolean functions. We then propose an original set of rules to locally explore the direct neighboring functions of any function in this set, without explicitly generating the whole set. Also, we provide relationships between the regulatory functions and their corresponding dynamics. Finally, we illustrate the usefulness of this approach by revisiting Probabilistic Boolean Networks with the model of T helper cell differentiation from Mendoza & Xenarios.
1
0
0
0
1
0
18,078
Scalable solvers for complex electromagnetics problems
In this work, we present scalable balancing domain decomposition by constraints methods for linear systems arising from arbitrary order edge finite element discretizations of multi-material and heterogeneous 3D problems. In order to enforce the continuity across subdomains of the method, we use a partition of the interface objects (edges and faces) into sub-objects determined by the variation of the physical coefficients of the problem. For multi-material problems, a constant coefficient condition is enough to define this sub-partition of the objects. For arbitrarily heterogeneous problems, a relaxed version of the method is defined, where we only require that the maximal contrast of the physical coefficient in each object is smaller than a predefined threshold. Besides, the addition of perturbation terms to the preconditioner is empirically shown to be effective in order to deal with the case where the two coefficients of the model problem jump simultaneously across the interface. The new method, in contrast to existing approaches for problems in curl-conforming spaces, preserves the simplicity of the original preconditioner, i.e., no spectral information is required, whilst providing robustness with regard to coefficient jumps and heterogeneous materials. A detailed set of numerical experiments, which includes the application of the preconditioner to 3D realistic cases, shows excellent weak scalability properties of the implementation of the proposed algorithms.
1
0
0
0
0
0
18,079
A flux-splitting method for hyperbolic-equation system of magnetized electron fluids in quasi-neutral plasmas
A flux-splitting method is proposed for the hyperbolic-equation system (HES) of magnetized electron fluids in quasi-neutral plasmas. The numerical fluxes are split into four categories, which are computed by using an upwind method which incorporates a flux-vector splitting (FVS) and advection upstream splitting method (AUSM). The method is applied to a test calculation condition of uniformly distributed and angled magnetic lines of force. All of the pseudo-time advancement terms converge monotonically and the conservation laws are strictly satisfied in the steady state. The calculation results are compared with those computed by using the elliptic-parabolic-equation system (EPES) approach using a magnetic-field-aligned mesh (MFAM). Both qualitative and quantitative comparisons yield good agreements of results, indicating that the HES approach with the flux-splitting method attains a high computational accuracy.
0
1
0
0
0
0
18,080
TransFlow: Unsupervised Motion Flow by Joint Geometric and Pixel-level Estimation
We address unsupervised optical flow estimation for ego-centric motion. We argue that optical flow can be cast as a geometrical warping between two successive video frames and devise a deep architecture to estimate such transformation in two stages. First, a dense pixel-level flow is computed with a geometric prior imposing strong spatial constraints. Such prior is typical of driving scenes, where the point of view is coherent with the vehicle motion. We show how such global transformation can be approximated with an homography and how spatial transformer layers can be employed to compute the flow field implied by such transformation. The second stage then refines the prediction feeding a second deeper network. A final reconstruction loss compares the warping of frame X(t) with the subsequent frame X(t+1) and guides both estimates. The model, which we named TransFlow, performs favorably compared to other unsupervised algorithms, and shows better generalization compared to supervised methods with a 3x reduction in error on unseen data.
1
0
0
0
0
0
18,081
A General Pipeline for 3D Detection of Vehicles
Autonomous driving requires 3D perception of vehicles and other objects in the environment. Most current methods support only 2D vehicle detection. This paper proposes a flexible pipeline to adopt any 2D detection network and fuse it with a 3D point cloud to generate 3D information with minimal changes to the 2D detection networks. To identify the 3D box, an effective model fitting algorithm is developed based on generalised car models and score maps. A two-stage convolutional neural network (CNN) is proposed to refine the detected 3D box. This pipeline is tested on the KITTI dataset using two different 2D detection networks. The 3D detection results based on these two networks are similar, demonstrating the flexibility of the proposed pipeline. The results rank second among the 3D detection algorithms, indicating its competencies in 3D detection.
0
0
0
1
0
0
18,082
Search for CII Emission on Cosmological Scales at Redshift Z~2.6
We present a search for CII emission over cosmological scales at high-redshifts. The CII line is a prime candidate to be a tracer of star formation over large-scale structure since it is one of the brightest emission lines from galaxies. Redshifted CII emission appears in the submillimeter regime, meaning it could potentially be present in the higher frequency intensity data from the Planck satellite used to measure the cosmic infrared background (CIB). We search for CII emission over redshifts z=2-3.2 in the Planck 545 GHz intensity map by cross-correlating the 3 highest frequency Planck maps with spectroscopic quasars and CMASS galaxies from the Sloan Digital Sky Survey III (SDSS-III), which we then use to jointly fit for CII intensity, CIB parameters, and thermal Sunyaev-Zeldovich (SZ) emission. We report a measurement of an anomalous emission $\mathrm{I_\nu}=6.6^{+5.0}_{-4.8}\times10^4$ Jy/sr at 95% confidence, which could be explained by CII emission, favoring collisional excitation models of CII emission that tend to be more optimistic than models based on CII luminosity scaling relations from local measurements; however, a comparison of Bayesian information criteria reveal that this model and the CIB & SZ only model are equally plausible. Thus, more sensitive measurements will be needed to confirm the existence of large-scale CII emission at high redshifts. Finally, we forecast that intensity maps from Planck cross-correlated with quasars from the Dark Energy Spectroscopic Instrument (DESI) would increase our sensitivity to CII emission by a factor of 5, while the proposed Primordial Inflation Explorer (PIXIE) could increase the sensitivity further.
0
1
0
0
0
0
18,083
Ricci flow and diffeomorphism groups of 3-manifolds
We complete the proof of the Generalized Smale Conjecture, apart from the case of $RP^3$, and give a new proof of Gabai's theorem for hyperbolic 3-manifolds. We use an approach based on Ricci flow through singularities, which applies uniformly to spherical space forms other than $S^3$ and $RP^3$ and hyperbolic manifolds, to prove that the moduli space of metrics of constant sectional curvature is contractible. As a corollary, for such a 3-manifold $X$, the inclusion $\text{Isom} (X,g)\to \text{Diff}(X)$ is a homotopy equivalence for any Riemannian metric $g$ of constant sectional curvature.
0
0
1
0
0
0
18,084
RLlib: Abstractions for Distributed Reinforcement Learning
Reinforcement learning (RL) algorithms involve the deep nesting of highly irregular computation patterns, each of which typically exhibits opportunities for distributed computation. We argue for distributing RL components in a composable way by adapting algorithms for top-down hierarchical control, thereby encapsulating parallelism and resource requirements within short-running compute tasks. We demonstrate the benefits of this principle through RLlib: a library that provides scalable software primitives for RL. These primitives enable a broad range of algorithms to be implemented with high performance, scalability, and substantial code reuse. RLlib is available at this https URL.
1
0
0
0
0
0
18,085
A Multiscale-Analysis of Stochastic Bistable Reaction-Diffusion Equations
A multiscale analysis of 1D stochastic bistable reaction-diffusion equations with additive noise is carried out w.r.t. travelling waves within the variational approach to stochastic partial differential equations. It is shown with explicit error estimates on appropriate function spaces that up to lower order w.r.t. the noise amplitude, the solution can be decomposed into the orthogonal sum of a travelling wave moving with random speed and into Gaussian fluctuations. A stochastic differential equation describing the speed of the travelling wave and a linear stochastic partial differential equation describing the fluctuations are derived in terms of the coefficients. Our results extend corresponding results obtained for stochastic neural field equations to the present class of stochastic dynamics.
0
0
1
0
0
0
18,086
MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
Interpretability has emerged as a crucial aspect of machine learning, aimed at providing insights into the working of complex neural networks. However, existing solutions vary vastly based on the nature of the interpretability task, with each use case requiring substantial time and effort. This paper introduces MARGIN, a simple yet general approach to address a large set of interpretability tasks ranging from identifying prototypes to explaining image predictions. MARGIN exploits ideas rooted in graph signal analysis to determine influential nodes in a graph, which are defined as those nodes that maximally describe a function defined on the graph. By carefully defining task-specific graphs and functions, we demonstrate that MARGIN outperforms existing approaches in a number of disparate interpretability challenges.
1
0
0
1
0
0
18,087
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
We propose a new technique, Singular Vector Canonical Correlation Analysis (SVCCA), a tool for quickly comparing two representations in a way that is both invariant to affine transform (allowing comparison between different layers and networks) and fast to compute (allowing more comparisons to be calculated than with previous methods). We deploy this tool to measure the intrinsic dimensionality of layers, showing in some cases needless over-parameterization; to probe learning dynamics throughout training, finding that networks converge to final representations from the bottom up; to show where class-specific information in networks is formed; and to suggest new training regimes that simultaneously save computation and overfit less. Code: this https URL
1
0
0
1
0
0
18,088
Discrete Choice and Rational Inattention: a General Equivalence Result
This paper establishes a general equivalence between discrete choice and rational inattention models. Matejka and McKay (2015, AER) showed that when information costs are modelled using the Shannon entropy function, the resulting choice probabilities in the rational inattention model take the multinomial logit form. By exploiting convex-analytic properties of the discrete choice model, we show that when information costs are modelled using a class of generalized entropy functions, the choice probabilities in any rational inattention model are observationally equivalent to some additive random utility discrete choice model and vice versa. Thus any additive random utility model can be given an interpretation in terms of boundedly rational behavior. This includes empirically relevant specifications such as the probit and nested logit models.
0
0
0
1
0
0
18,089
A Hardy inequality for ultraspherical expansions with an application to the sphere
We prove a Hardy inequality for ultraspherical expansions by using a proper ground state representation. From this result we deduce some uncertainty principles for this kind of expansions. Our result also implies a Hardy inequality on spheres with a potential having a double singularity.
0
0
1
0
0
0
18,090
Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors
Bayesian Neural Networks (BNNs) have recently received increasing attention for their ability to provide well-calibrated posterior uncertainties. However, model selection---even choosing the number of nodes---remains an open question. Recent work has proposed the use of a horseshoe prior over node pre-activations of a Bayesian neural network, which effectively turns off nodes that do not help explain the data. In this work, we propose several modeling and inference advances that consistently improve the compactness of the model learned while maintaining predictive performance, especially in smaller-sample settings including reinforcement learning.
0
0
0
1
0
0
18,091
Efficient Reinforcement Learning via Initial Pure Exploration
In several realistic situations, an interactive learning agent can practice and refine its strategy before going on to be evaluated. For instance, consider a student preparing for a series of tests. She would typically take a few practice tests to know which areas she needs to improve upon. Based on the scores she obtains in these practice tests, she would formulate a strategy for maximizing her scores in the actual tests. We treat this scenario in the context of an agent exploring a fixed-horizon episodic Markov Decision Process (MDP), where the agent can practice on the MDP for some number of episodes (not necessarily known in advance) before starting to incur regret for its actions. During practice, the agent's goal must be to maximize the probability of following an optimal policy. This is akin to the problem of Pure Exploration (PE). We extend the PE problem of Multi Armed Bandits (MAB) to MDPs and propose a Bayesian algorithm called Posterior Sampling for Pure Exploration (PSPE), which is similar to its bandit counterpart. We show that the Bayesian simple regret converges at an optimal exponential rate when using PSPE. When the agent starts being evaluated, its goal would be to minimize the cumulative regret incurred. This is akin to the problem of Reinforcement Learning (RL). The agent uses the Posterior Sampling for Reinforcement Learning algorithm (PSRL) initialized with the posteriors of the practice phase. We hypothesize that this PSPE + PSRL combination is an optimal strategy for minimizing regret in RL problems with an initial practice phase. We show empirical results indicating that having a lower simple regret at the end of the practice phase results in lower cumulative regret during evaluation.
1
0
0
1
0
0
18,092
Stochastic Evolution of Augmented Born--Infeld Equations
This paper compares the results of applying a recently developed method of stochastic uncertainty quantification designed for fluid dynamics to the Born-Infeld model of nonlinear electromagnetism. The similarities in the results are striking. Namely, the introduction of Stratonovich cylindrical noise into each of their Hamiltonian formulations introduces stochastic Lie transport into their dynamics in the same form for both theories. Moreover, the resulting stochastic partial differential equations (SPDE) retain their unperturbed form, except for an additional term representing induced Lie transport by the set of divergence-free vector fields associated with the spatial correlations of the cylindrical noise. The explanation for this remarkable similarity lies in the method of construction of the Hamiltonian for the Stratonovich stochastic contribution to the motion in both cases; which is done via pairing spatial correlation eigenvectors for cylindrical noise with the momentum map for the deterministic motion. This momentum map is responsible for the well-known analogy between hydrodynamics and electromagnetism. The momentum map for the Maxwell and Born-Infeld theories of electromagnetism treated here is the 1-form density known as the Poynting vector. Two Appendices treat the Hamiltonian structures underlying these results.
0
1
1
0
0
0
18,093
T-matrix evaluation of acoustic radiation forces on nonspherical objects in Bessel beams
Acoustic radiation force (ARF) induced by a single Bessel beam of arbitrary order and location on a nonspherical object is studied, with emphasis on the physical mechanism and parameter conditions for negative (pulling) forces. Numerical experiments are conducted to verify the T-matrix method (TMM) for axial ARFs. This study may guide experimental set-ups to find negative axial ARF quickly and effectively based on the parameters predicted with TMM, and could be extended to lateral forces. The present work could help in designing an acoustic tweezers numerical toolbox, providing an alternative to optical tweezers.
0
1
0
0
0
0
18,094
On the commutative center of Moufang loops
We construct two infinite series of Moufang loops of exponent $3$ whose commutative center (i.e. the set of elements that commute with all elements of the loop) is not a normal subloop. In particular, we obtain examples of such loops of orders $3^8$ and $3^{11}$ one of which can be defined as the Moufang triplication of the free Burnside group $B(3,3)$.
0
0
1
0
0
0
18,095
The Kontsevich tetrahedral flow in 2D: a toy model
In the paper "Formality conjecture" (1996) Kontsevich designed a universal flow $\dot{\mathcal{P}}=\mathcal{Q}_{a:b}(\mathcal{P})=a\Gamma_{1}+b\Gamma_{2}$ on the spaces of Poisson structures $\mathcal{P}$ on all affine manifolds of dimension $n \geqslant 2$. We prove a claim from $\textit{loc. cit.}$ stating that if $n=2$, the flow $\mathcal{Q}_{1:0}=\Gamma_{1}(\mathcal{P})$ is Poisson-cohomology trivial: $\Gamma_{1}(\mathcal{P})$ is the Schouten bracket of $\mathcal{P}$ with $\mathcal{X}$, for some vector field $\mathcal{X}$; we examine the structure of the space of solutions $\mathcal{X}$. Both the construction of differential polynomials $\Gamma_{1}(\mathcal{P})$ and $\Gamma_{2}(\mathcal{P})$ and the technique to study them remain valid in higher dimensions $n \geqslant 3$, but neither the trivializing vector field $\mathcal{X}$ nor the setting $b:=0$ survive at $n\geqslant 3$, where the balance is $a:b=1:6$.
0
0
1
0
0
0
18,096
Twin Learning for Similarity and Clustering: A Unified Kernel Approach
Many similarity-based clustering methods work in two separate steps including similarity matrix computation and subsequent spectral clustering. However, similarity measurement is challenging because it is usually impacted by many factors, e.g., the choice of similarity metric, neighborhood size, scale of data, noise and outliers. Thus the learned similarity matrix is often not suitable, let alone optimal, for the subsequent clustering. In addition, nonlinear similarity often exists in many real world data which, however, has not been effectively considered by most existing methods. To tackle these two challenges, we propose a model to simultaneously learn cluster indicator matrix and similarity information in kernel spaces in a principled way. We show theoretical relationships to kernel k-means, k-means, and spectral clustering methods. Then, to address the practical issue of how to select the most suitable kernel for a particular clustering task, we further extend our model with a multiple kernel learning ability. With this joint model, we can automatically accomplish three subtasks of finding the best cluster indicator matrix, the most accurate similarity relations and the optimal combination of multiple kernels. By leveraging the interactions between these three subtasks in a joint framework, each subtask can be iteratively boosted by using the results of the others towards an overall optimal solution. Extensive experiments are performed to demonstrate the effectiveness of our method.
1
0
0
1
0
0
18,097
Big Data Analysis Using Shrinkage Strategies
In this paper, we apply shrinkage strategies to estimate regression coefficients efficiently for the high-dimensional multiple regression model, where the number of samples is smaller than the number of predictors. We assume that, in the sparse linear model, some of the predictors have a very weak influence on the response of interest. We propose to shrink estimators more than usual. Specifically, we use integrated estimation strategies in sub and full models and shrink the integrated estimators by incorporating a bounded measurable function of some weights. The resulting double-shrunken estimators significantly improve the prediction performance of submodels selected by existing Lasso-type variable selection methods. Monte Carlo simulation studies as well as real data examples (eye data and riboflavin data) confirm the superior performance of the estimators in the high-dimensional regression model.
0
0
0
1
0
0
18,098
Interpreter for topologists
Let M be a transitive model of set theory. There is a canonical interpretation functor between the category of regular Hausdorff, continuous open images of Čech-complete spaces of M and the same category in V, preserving many concepts of topology, functional analysis, and dynamics. The functor can be further canonically extended to the category of Borel subspaces. This greatly simplifies and extends similar results of Fremlin.
0
0
1
0
0
0
18,099
Parallel-Data-Free Voice Conversion Using Cycle-Consistent Adversarial Networks
We propose a parallel-data-free voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is general purpose and high quality, and works without any extra data, modules, or alignment procedure. It also avoids over-smoothing, which occurs in many conventional statistical model-based VC methods. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from unpaired data. Furthermore, the adversarial loss contributes to reducing over-smoothing of the converted feature sequence. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a parallel-data-free VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based method under advantageous conditions with parallel data and twice the amount of data.
1
0
0
1
0
0
18,100
Cross-validation in high-dimensional spaces: a lifeline for least-squares models and multi-class LDA
Least-squares models such as linear regression and Linear Discriminant Analysis (LDA) are amongst the most popular statistical learning techniques. However, since their computation time increases cubically with the number of features, they are inefficient in high-dimensional neuroimaging datasets. Fortunately, for k-fold cross-validation, an analytical approach has been developed that yields the exact cross-validated predictions in least-squares models without explicitly training the model. Its computation time grows with the number of test samples. Here, this approach is systematically investigated in the context of cross-validation and permutation testing. LDA is used exemplarily but results hold for all other least-squares methods. Furthermore, a non-trivial extension to multi-class LDA is formally derived. The analytical approach is evaluated using complexity calculations, simulations, and permutation testing of an EEG/MEG dataset. Depending on the ratio between features and samples, the analytical approach is up to 10,000x faster than the standard approach (retraining the model on each training set). This allows for a fast cross-validation of least-squares models and multi-class LDA in high-dimensional data, with obvious applications in multi-dimensional datasets, Representational Similarity Analysis, and permutation testing.
0
0
0
1
0
0