In this paper, we propose a deep convolutional neural network-based acoustic word embedding system for code-switching query-by-example spoken term detection. Unlike previous configurations, we combine audio data in two languages for training instead of using only a single language. We transform the acoustic features of keyword templates and search content into fixed-dimensional vectors and calculate the distances between keyword segments and search-content segments obtained in a sliding manner. An auxiliary variability-invariant loss is also applied to training data within the same word but from different speakers. This strategy is used to prevent the extractor from encoding undesired speaker- or accent-related information into the acoustic word embeddings. Experimental results show that our proposed system produces promising search results in the code-switching test scenario. With an increased number of templates and the employment of the variability-invariant loss, the search performance is further enhanced.
electrical engineering and systems science
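As a rough sketch of the sliding-window matching described in the abstract above: an embedding extractor maps variable-length feature segments to fixed-dimensional vectors, and the search content is scored against a query template window by window. The function names (`embed`, `qbe_search`), the window parameters, and the use of cosine distance are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def qbe_search(embed, query_feats, content_feats, win_len, hop):
    """Slide a window over the search content, embed each segment,
    and score it against the embedded query template."""
    q = embed(query_feats)                      # fixed-dimensional query vector
    scores = []
    for start in range(0, len(content_feats) - win_len + 1, hop):
        seg = content_feats[start:start + win_len]
        scores.append(cosine_distance(q, embed(seg)))
    return np.array(scores)                     # low distance => likely hit
```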
We consider shape optimization problems for elasticity systems in architecture. A typical question in this context is to identify a structure of maximal stability close to an initially proposed one. We show the existence of such an optimally shaped structure within classes of bounded Lipschitz domains and within wider classes of bounded uniform domains with boundaries that may be fractal. In the first case the optimal shape realizes the infimum of a given energy functional over the class, in the second case it realizes the minimum. As a concrete application we discuss the existence of maximally stable roof structures under snow loads.
mathematics
A small n, sequential, multiple assignment, randomized trial (snSMART) is a small-sample, two-stage design in which participants receive up to two treatments sequentially, with the second treatment depending on the response to the first. The treatment effect of interest in an snSMART is the first-stage response rate, but outcomes from both stages can be used to obtain more information from a small sample. A novel way to incorporate the outcomes from both stages applies power prior models, in which first-stage outcomes from an snSMART are regarded as the primary data and second-stage outcomes are regarded as supplemental. We apply existing power prior models to snSMART data, and we also develop new extensions of power prior models. All methods are compared to each other and to the Bayesian joint stage model (BJSM) via simulation studies. By comparing the biases and the efficiency of the response rate estimates among all proposed power prior methods, we suggest applying Fisher's exact test or the Bhattacharyya overlap measure to estimate the treatment effect in an snSMART; both have performance mostly as good as or better than that of the BJSM. We describe the situations in which each of these suggested approaches is preferred.
statistics
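For context, the generic power prior (a standard construction, not specific to the paper above) discounts the supplemental-data likelihood by a power $a_0$. In the snSMART setting, the primary data $D$ would be the first-stage outcomes and the supplemental data $D_0$ the second-stage outcomes:

\[
\pi(\theta \mid D, D_0, a_0) \;\propto\; L(\theta \mid D)\, L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), \qquad 0 \le a_0 \le 1,
\]

where $a_0 = 0$ discards the supplemental stage and $a_0 = 1$ pools both stages fully. One natural reading of the abstract is that the Fisher's exact test and Bhattacharyya overlap statistics inform the choice of $a_0$ from the agreement between the two stages.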
The multi-tracer technique employs a ratio of densities of two differently biased galaxy samples that trace the same underlying matter density field, and was proposed to alleviate the cosmic variance problem. Here we propose a novel application of this approach, applying it to two different tracers, one of which is the 21-cm signal of neutral hydrogen from the epochs of reionization and cosmic dawn. The second tracer is assumed to be a sample of high-redshift galaxies, but the approach can be generalized and applied to other high-redshift tracers. We show that the anisotropy of the ratio of the two density fields can be used to measure the sky-averaged 21-cm signal, probe the spectral energy distribution of radiative sources that drive this signal, and extract large-scale properties of the second tracer, e.g., the galaxy bias. Using simulated 21-cm maps and mock galaxy samples, we find that the method works well for an idealized galaxy survey. However, in the case of a more realistic galaxy survey which only probes highly biased luminous galaxies, the inevitable Poisson noise makes the reconstruction far more challenging. This difficulty can be mitigated with the greater sensitivity of future telescopes along with larger survey volumes.
astrophysics
Finding an effective medical treatment often requires a search by trial and error. Making this search more efficient by minimizing the number of unnecessary trials could lower both costs and patient suffering. We formalize this problem as learning a policy for finding a near-optimal treatment in a minimum number of trials using a causal inference framework. We give a model-based dynamic programming algorithm which learns from observational data while being robust to unmeasured confounding. To reduce time complexity, we suggest a greedy algorithm which bounds the near-optimality constraint. The methods are evaluated on synthetic and real-world healthcare data and compared to model-free reinforcement learning. We find that our methods compare favorably to the model-free baseline while offering a more transparent trade-off between search time and treatment efficacy.
computer science
The calculation of accurate photoelectron spectra (PES) for strong-field laser-atom experiments is a demanding computational task, even in the single-active-electron approximation. The QPROP code, published in 2006, was extended in 2016 to allow the calculation of PES using the so-called t-SURFF approach [L. Tao, A. Scrinzi, New J. Phys. 14, 013021 (2012)]. In t-SURFF, the flux through a surface is monitored while the laser is on. Calculating PES from this flux through a surface enclosing a relatively small computational grid is much more efficient than calculating it from the widely spread wavefunction at the end of the laser pulse on a much larger grid. However, the smaller the minimum photoelectron energy of interest, the more post-propagation after the actual laser pulse is required. This drawback of t-SURFF was overcome by Morales et al. [F. Morales, T. Bredtmann, S. Patchkovskii, J. Phys. B: At. Mol. Opt. Phys. 49, 245001 (2016)], who noticed that the propagation of the wavefunction from the end of the laser pulse to infinity can be performed very efficiently in a single step. In this work, we introduce QPROP 3.0, in which this single-step post-propagation (dubbed i-SURFV) is added. Examples illustrating the new feature are discussed. A few other improvements, mainly concerning the parameter files, are also explained.
physics
The presence of Lipschitzian properties for solution mappings associated with nonlinear parametric optimization problems is desirable in the context of stability analysis or bilevel optimization. An example of such a Lipschitzian property for set-valued mappings, whose graph is the solution set of a system of nonlinear inequalities and equations, is R-regularity. Based on the so-called relaxed constant positive linear dependence constraint qualification, we provide a criterion ensuring the presence of the R-regularity property. In this regard, our analysis generalizes earlier results of that type which exploited the stronger Mangasarian-Fromovitz or constant rank constraint qualification. Afterwards, we apply our findings in order to derive new sufficient conditions which guarantee the presence of R-regularity for solution mappings in parametric optimization. Finally, our results are used to derive an existence criterion for solutions in pessimistic bilevel optimization and a sufficient condition for the presence of the so-called partial calmness property in optimistic bilevel optimization.
mathematics
This note describes a \emph{Macaulay2} package for computations in prime characteristic commutative algebra. This includes Frobenius powers and roots, $p^{-e}$-linear and $p^{e}$-linear maps, singularities defined in terms of these maps, different types of test ideals and modules, and ideals compatible with a given $p^{-e}$-linear map.
mathematics
Individual trapped atomic qubits represent one of the most promising technologies to scale quantum computers, owing to their negligible idle errors and the ability to implement a full set of reconfigurable gate operations via focused optical fields. However, the fidelity of quantum gate operations can be limited by weak confinement of the atoms transverse to the laser. We present measurements of this effect by performing individually-addressed entangling gates in chains of up to 25 trapped atomic ions that are weakly confined along the chain axis. We present a model that accurately describes the observed decoherence from the residual heating of the ions caused by noisy electric fields. We propose to suppress these effects through the use of ancilla ions interspersed in the chain to sympathetically cool the qubit ions throughout a quantum circuit.
quantum physics
Italy was one of the first countries to be strongly impacted by the COVID-19 pandemic. The adoption of social distancing and strict lockdown measures is placing a heavy burden on the population and the economy. The timing of the measures has crucial policy-making implications. Using publicly available data for the pandemic progression in Italy, we quantitatively assess the effect of the intervention time on the pandemic expansion, with a methodology that combines a generalized susceptible-exposed-infectious-recovered (SEIR) model with statistical learning methods. The modeling shows that the lockdown strongly deflected the pandemic trajectory in Italy. However, the difference between the forecasts and the real data up to 20 April 2020 can be explained only by the existence of a time lag between the actual issuance date and the full effect of the measures. To understand the relative importance of intervention with respect to other factors, a thorough uncertainty quantification of the model predictions is performed. Global sensitivity indices show that the time of intervention is four times more relevant than quarantine, and eight times more important than intrinsic features of the pandemic such as protection and infection rates. The relevance of their interactions is also quantified and studied.
statistics
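For reference, the baseline (non-generalized) SEIR compartmental model underlying approaches like the one above is the ODE system

\[
\frac{dS}{dt} = -\beta \frac{SI}{N}, \qquad
\frac{dE}{dt} = \beta \frac{SI}{N} - \sigma E, \qquad
\frac{dI}{dt} = \sigma E - \gamma I, \qquad
\frac{dR}{dt} = \gamma I,
\]

where $N = S + E + I + R$, $\beta$ is the transmission rate, $\sigma$ the rate of progression from exposed to infectious, and $\gamma$ the removal rate. The generalized model used in the paper adds further compartments and parameters (e.g., for protection and quarantine) not shown here.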
We derive a mathematical model and the corresponding computational scheme to study deflection of a two-dimensional elastic cantilever beam immersed in a channel, where one end of the beam is fixed to the channel wall. The immersed boundary method has been employed to simulate numerically the fluid-structure interaction problem. We investigate how variations in physical and numerical parameters change the effective material properties of the elastic beam and compare the results qualitatively with linear beam theory. We also pay careful attention to "corner effects" -- irregularities in beam shape near the free and fixed ends -- and show how this can be remedied by smoothing out the corners with a "fillet" or rounded shape. Finally, we extend the immersed boundary formulation to include porosity in the beam and investigate the effect that the resultant porous flow has on beam deflection.
physics
We summarize our results for light (pseudo-)scalar mesons at finite chemical potential and vanishing temperature. We extract the meson bound state wave functions, masses, and decay constants up to the first-order phase transition from the homogeneous Bethe-Salpeter equation and confirm the validity of the Silver-Blaze property. For this purpose, we solve a coupled set of truncated Dyson-Schwinger equations for the quark and gluon propagators of QCD in Landau gauge.
high energy physics phenomenology
Many-body localized systems, in which interactions and disorder come together, defy the expectations of quantum statistical mechanics: in contrast to ergodic systems, they do not thermalize when undergoing nonequilibrium dynamics. What is less clear, however, is how topological features interplay with many-body localized phases, as well as the nature of the transition between a topological and a trivial state within the latter. In this work, we numerically address these questions, using a combination of extensive tensor network calculations, specifically DMRG-X, as well as exact diagonalization, leading to a comprehensive characterization of Hamiltonian spectra and eigenstate entanglement properties.
condensed matter
We study the problem of quasisymmetrically embedding spaces homeomorphic to the Sierpi\'nski carpet into the plane. In the case of so-called dyadic slit carpets, several characterizations are obtained. One characterization is in terms of a Transboundary Loewner Property (TLP), which is a transboundary analogue of the Loewner property of Heinonen and Koskela. We show that a dyadic slit carpet can be quasisymmetrically embedded into the plane if and only if it is TLP. Moreover, every dyadic slit carpet $X$ can be associated to a "pillowcase sphere" $\widehat{X}$ which is a metric space homeomorphic to the sphere $\mathbb{S}^2$. We show that $X$ quasisymmetrically embeds into the plane if and only if $\widehat{X}$ is quasisymmetric to $\mathbb{S}^2$ if and only if $\widehat{X}$ is Ahlfors $2$-regular.
mathematics
We present an integrated approach for perception and control for an autonomous vehicle and demonstrate this approach in a high-fidelity urban driving simulator. Our approach first builds a model for the environment, then trains a policy exploiting the learned model to identify the action to take at each time-step. To build a model for the environment, we leverage several deep learning algorithms. To that end, we first train a variational autoencoder to encode the input image into an abstract latent representation. We then utilize a recurrent neural network to predict the latent representation of the next frame and handle temporal information. Finally, we utilize an evolutionary-based reinforcement learning algorithm to train a controller based on these latent representations to identify the action to take. We evaluate our approach in CARLA, a high-fidelity urban driving simulator, and conduct an extensive generalization study. Our results demonstrate that our approach outperforms several previously reported approaches in terms of the percentage of successfully completed episodes for a lane keeping task.
electrical engineering and systems science
Integrating solid-state quantum emitters with nanophotonic resonators is essential for efficient spin-photon interfacing and optical networking applications. While diamond color centers have proven to be excellent candidates for emerging quantum technologies, their integration with optical resonators remains challenging. Conventional approaches based on etching resonators into diamond often negatively impact color center performance and offer low device yield. Here, we developed an integrated photonics platform based on templated atomic layer deposition of TiO2 on diamond membranes. Our fabrication method yields high-performance nanophotonic devices while avoiding etching wavelength-scale features into diamond. Moreover, this technique generates highly reproducible optical resonances and can be iterated on individual diamond samples, a unique processing advantage. Our approach is suitable for a broad range of both wavelengths and substrates and can enable high-cooperativity interfacing between cavity photons and coherent defects in diamond or silicon carbide, rare earth ions, or other material systems.
quantum physics
We brute-force evaluate the vacuum character for $\mathcal N=2$ vertex operator algebras labelled by crystallographic complex reflection groups $G(k,1,1)=\mathbb Z_k$, $k=3,4,6$, and $G(3,1,2)$. For $\mathbb Z_{3,4}$ and $G(3,1,2)$ these vacuum characters have been conjectured to respectively reproduce the Macdonald limit of the superconformal index for rank one and rank two S-fold $\mathcal N=3$ theories in four dimensions. For the $\mathbb Z_3$ case, and in the limit where the Macdonald index reduces to the Schur index, we find agreement with predictions from the literature.
high energy physics theory
We consider a $(q,y)$-analogue of Laguerre polynomials $L^{(\alpha)}_n(x;y;q)$ for integral $\alpha\geq -1$, which turns out to be a rescaled version of the Al-Salam--Chihara polynomials. A combinatorial interpretation for the $(q,y)$-Laguerre polynomials is given using a colored version of Foata and Strehl's Laguerre configurations with suitable statistics. When $\alpha\geq 0$, the corresponding moments are described using certain classical statistics on permutations, and the linearization coefficients are proved to be polynomials in $y$ and $q$ with nonnegative integral coefficients.
mathematics
Collins' (2002) statement "correspondence analysis makes you blind" followed his seriation-like description of a brand-attribute count data set analyzed by Whitlark and Smith (2001), who applied correspondence analysis. In this essay we comment on Collins' statement within the taxicab correspondence analysis framework by simultaneously decomposing the covariance matrix and its associated density matrix, thus interpreting two interrelated maps for contingency tables: the TCov map and the TCA map.
statistics
We provide a new perspective on parallel 2-transport and principal 2-group bundles with 2-connection. We define parallel 2-transport as a 2-functor from the thin fundamental 2-groupoid to the 2-category of 2-group torsors. The definition of the 2-category of 2-group torsors is new, and we develop the tools necessary for computations in this 2-category. We prove a version of the non-Abelian Stokes Theorem and the Ambrose-Singer Theorem for 2-transport. This definition is motivated by the fact that principal $G$-bundles with connection are equivalent to functors from the thin fundamental groupoid to the category of $G$-torsors. Along the same lines we deduce a notion of principal 2-bundle with 2-connection, and show it is equivalent to our notion of 2-transport functors. This gives a notion that is stricter, but also more concrete, than those appearing in the literature. It allows for computations of 2-holonomy which will be exploited in a companion paper to define Wilson surface observables. Furthermore, this notion can be generalized to a concrete but strict notion of $n$-transport for arbitrary $n$.
mathematics
In the last two decades, tools have been implemented to more formally specify the semantic analysis phase of a compiler instead of relying on handwritten code. In this paper, we introduce patterns and a method to translate a formal definition of a language's type system into a specification for JastAdd, which is one of the aforementioned tools based on Reference Attribute Grammars. This methodological approach will help language designers and compiler engineers to more systematically use such tools for semantic analysis. As an example, we use a simple, yet complete imperative language and provide an outlook on how the method can be extended to cover further language constructs or even type inference.
computer science
Excitation by light pulses enables the manipulation of phases of quantum condensed matter. Here, we photoexcite high-energy holon-doublon pairs as a way to alter the magnetic free energy landscape of the Kitaev-Heisenberg magnet $\alpha$-RuCl$_3$, with the aim to dynamically stabilize a proximate spin liquid phase. The holon-doublon pair recombination through multimagnon emission is tracked through the time-evolution of the magnetic linear dichroism originating from the competing zigzag spin ordered ground state. A small holon-doublon density suffices to reach a spin disordered state. The phase transition is described within a dynamic Ginzburg-Landau framework, corroborating the quasistationary nature of the transient spin disordered phase. Our work provides insight into the coupling between the electronic and magnetic degrees of freedom in $\alpha$-RuCl$_3$ and suggests a new route to reach a proximate spin liquid phase in Kitaev-Heisenberg magnets.
condensed matter
Abelian Chern-Simons theory, characterized by the so-called $K$ matrix, has been quite successful in characterizing and classifying the Abelian fractional quantum Hall effect (FQHE) as well as symmetry protected topological (SPT) phases, especially bosonic SPT phases. However, there are still some puzzles in dealing with fermionic SPT (fSPT) phases. In this paper, we utilize Abelian Chern-Simons theory to study fSPT phases protected by an arbitrary Abelian total symmetry $G_f$. Compared to bosonic SPT phases, fSPT phases with Abelian total symmetry $G_f$ have three new features: (1) they may support gapless Majorana fermion edge modes, (2) some nontrivial bosonic SPT phases may be trivialized if $G_f$ is a nontrivial extension of the bosonic symmetry $G_b$ over $\mathbb{Z}_2^f$, and (3) certain intrinsic fSPT phases can only be realized in interacting fermionic systems. We obtain edge theories for various fSPT phases, which can also be regarded as conformal field theories (CFT) with proper symmetry anomaly. In particular, we discover the construction of Luttinger liquid edge theories with central charge $n-1$ for Type-III bosonic SPT phases protected by $(\mathbb{Z}_n)^3$ symmetry and Luttinger liquid edge theories for intrinsically interacting fSPT phases protected by unitary Abelian symmetry. The ideas and methods used here might be generalized to derive the edge theories of fSPT phases with arbitrary unitary finite Abelian total symmetry $G_f$.
condensed matter
We formulate approximate Bayesian inference in non-conjugate temporal and spatio-temporal Gaussian process models as a simple parameter update rule applied during Kalman smoothing. This viewpoint encompasses most inference schemes, including expectation propagation (EP), the classical (Extended, Unscented, etc.) Kalman smoothers, and variational inference. We provide a unifying perspective on these algorithms, showing how replacing the power EP moment matching step with linearisation recovers the classical smoothers. EP provides some benefits over the traditional methods via introduction of the so-called cavity distribution, and we combine these benefits with the computational efficiency of linearisation, providing extensive empirical analysis demonstrating the efficacy of various algorithms under this unifying framework. We provide a fast implementation of all methods in JAX.
statistics
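To make the building block of the abstract above concrete, here is a minimal sketch (plain NumPy rather than the paper's JAX implementation) of the classical Kalman filter predict/update step for a linear-Gaussian state-space model; the smoothers and EP-style variants discussed above modify the update step to handle non-conjugate likelihoods. The function name and interface are illustrative assumptions.

```python
import numpy as np

def kf_step(m, P, y, A, Q, H, R):
    """One Kalman filter predict/update step for a linear-Gaussian
    state-space model x_t = A x_{t-1} + q,  y_t = H x_t + r."""
    # predict
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # update
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new
```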
In this paper, we study the problem of robust global synchronization of resetting clocks in multi-agent networked systems, where by robust global synchronization we mean synchronization that is insensitive to arbitrarily small disturbances, and which is achieved from all initial conditions. In particular, we aim to address the following question: Given a set of homogeneous agents with periodic clocks sharing the same parameters, what kind of information flow topologies will guarantee that the resulting networked systems can achieve robust global synchronization? To address this question, we rely on the framework of robust hybrid dynamical systems and a class of distributed hybrid resetting algorithms. Using the hybrid-system approach, we provide a partial solution to the question: Specifically, we show that one can achieve robust global synchronization with no purely discrete-time solutions in any networked system whose underlying information flow topology is a rooted acyclic digraph. Such a result is complementary to the existing result [1] in which strongly connected digraphs are considered as the underlying information flow topologies of the networked systems. We further compute the convergence time for a networked system to reach global synchronization. In particular, the computation reveals the relationship between convergence time and the structure of the underlying digraph. We illustrate our theoretical findings via numerical simulations towards the end of the paper.
computer science
Can a computer determine a piano player's skill level? Is it preferable to base this assessment on visual analysis of the player's performance, or should we trust our ears over our eyes? Since current CNNs have difficulty processing long videos, how can shorter clips be sampled to best reflect the player's skill level? In this work, we collect and release a first-of-its-kind dataset for multimodal skill assessment focusing on piano players' skill levels, answer these questions, initiate work in automated evaluation of piano playing skills, and provide baselines for future work.
computer science
This paper reports the design and implementation of a three-link brachiation robot. The robot is able to travel along horizontal monkey bars using continuous arm swings. We build a full-order dynamics model for the robot and formulate each cycle of robot swing motion as an optimal control problem. The iterative Linear Quadratic Regulator (iLQR) algorithm is used to find the optimal control strategy during one swing. We select suitable robot design parameters by comparing the cost of robot motion generated by the iLQR algorithm for different robot designs. In particular, using this approach we show the importance of having a body link and low-inertia arms for efficient brachiation. Further, we propose a trajectory tracking controller that combines a cascaded PID controller and an input-output linearization controller to enable the robot to track desired trajectories precisely and reject external disturbances during brachiation. Experiments on the simulated robot and the real robot demonstrate that the robot can robustly swing between monkey bars with the same or different spacing of handholds.
electrical engineering and systems science
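For reference, the per-swing problem solved by iLQR in work like the above has the generic finite-horizon discrete-time optimal control form (the specific cost terms and dynamics are the paper's and are not shown here):

\[
\min_{u_0,\dots,u_{T-1}} \; \ell_f(x_T) + \sum_{t=0}^{T-1} \ell(x_t, u_t)
\quad \text{subject to} \quad x_{t+1} = f(x_t, u_t),
\]

where $f$ is the discretized full-order swing dynamics; iLQR alternates forward rollouts with backward passes on local linear-quadratic approximations of $f$ and $\ell$.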
Two-dimensional materials with hexagonal symmetry, such as graphene and transition metal dichalcogenides, are unique materials to study light-field-controlled electron dynamics inside of a solid. Around the $K$-point, the dispersion relation represents an ideal system to study intricately coupled intraband motion and interband (Landau-Zener) transitions driven by the optical field of phase-controlled few-cycle laser pulses. Based on the coupled nature of the intraband and interband processes, we have recently observed in graphene repeated coherent Landau-Zener transitions between valence and conduction band separated by around half an optical period of ~1.3 fs [Higuchi et al., Nature 550, 224 (2017)]. Due to the low temporal symmetry of the applied laser pulse, a residual current density and a net electron polarization are formed. Here we show extended numerical data on the temporal evolution of the conduction band population of 2D materials with hexagonal symmetry during the light-matter interaction, yielding deep insights into attosecond-fast electron dynamics. In addition, we show that a residual ballistic current density is formed, which strongly increases when a band gap is introduced. Both the sub-cycle electron dynamics and the resulting residual current are relevant for the fundamental understanding and future applications of strongly driven electrons in two-dimensional materials, including graphene and transition metal dichalcogenide monolayers.
physics
Accelerator-based terahertz (THz) radiation is expected to enable a high-power broadband source. Employing a low-emittance, short-bunch electron beam at a high repetition rate, a scheme of coherent diffraction radiation in an optical cavity layout is proposed. The scheme's stimulated radiation process between bunches can greatly enhance the efficiency of the radiation emission. We performed an experiment with a superconducting linac constructed as an energy recovery linac (ERL) test facility. The electron beam passes through small holes in the cavity mirrors without being destroyed. A sharp THz resonance signal, which indicates broadband stimulated radiation correlated with beam deceleration, was observed while scanning the round-trip length of the cavity. This observation proves the efficient beam-to-radiation energy conversion due to the stimulated radiation process.
physics
We study the eigenvalue distribution of a GUE matrix with a variance profile that is perturbed by an additive random matrix that may possess spikes. Our approach is guided by Voiculescu's notion of freeness with amalgamation over the diagonal and by the notion of deterministic equivalent. This allows us to derive a fixed point equation to approximate the spectral distribution of certain deformed GUE matrices with a variance profile and to characterize the location of potential outliers in such models in a non-asymptotic setting. We also consider the singular value distribution of a rectangular Gaussian random matrix with a variance profile in a similar setting of additive perturbation. We discuss the application of this approach to the study of low-rank matrix denoising models in the presence of heteroscedastic noise, that is, when the amount of variance in the observed data matrix may change from entry to entry. Numerical experiments are used to illustrate our results.
mathematics
The James Webb Space Telescope will offer high-angular-resolution observing capability in the near-infrared with masking interferometry on NIRISS, and coronagraphic imaging on NIRCam & MIRI. Full-aperture kernel-phase-based interferometry complements these observing modes, probing for companions at small separations while preserving the telescope throughput. Our goal is to derive both theoretical and operational contrast detection limits for the kernel-phase analysis of JWST NIRISS full-pupil observations by using tools from hypothesis testing theory, applied to observations of faint brown dwarfs with this instrument; the tools and methods introduced here are, however, applicable in a wide variety of contexts. We construct a statistically independent set of observables from aberration-robust kernel phases. Three detection tests based on these observable quantities are designed and analysed, all guaranteeing a constant false alarm rate for small phase aberrations. One of these tests, the Likelihood Ratio or Neyman-Pearson test, provides a theoretical performance bound for any detection test. The operational detection method considered here is shown to exhibit only marginal power loss with respect to the theoretical bound. In principle, with the test set to a false alarm probability of 1%, companions at contrasts reaching 10^3 at separations of 200 mas around objects of magnitude 14.1 are detectable. With JWST NIRISS, contrasts of up to 10^4 at separations of 200 mas could ultimately be achieved, barring significant wavefront drift. The proposed detection method is close to the ultimate bound and offers guarantees over the probability of making a false detection for binaries, as well as over the error bars for the estimated parameters of the binaries detectable by JWST NIRISS. This method is applicable not only to JWST NIRISS but to any imaging system with adequate sampling.
astrophysics
The magnetic fields of low-mass stars are observed to be variable on decadal timescales, ranging in behaviour from cyclic to stochastic. The changing strength and geometry of the magnetic field should modify the efficiency of angular momentum loss by stellar winds, but this has not been well quantified. In Finley et al. (2018) we investigated the variability of the Sun, and calculated the time-varying angular momentum loss rate in the solar wind. In this work, we focus on four low-mass stars that have all had their surface magnetic fields mapped for multiple epochs. Using mass loss rates determined from astrospheric Lyman-$\alpha$ absorption, in conjunction with scaling relations from the MHD simulations of Finley & Matt (2018), we calculate the torque applied to each star by their magnetised stellar winds. The variability of the braking torque can be significant. For example, the largest torque for $\epsilon$ Eri is twice its decadal averaged value. This variation is comparable to that observed in the solar wind, when sparsely sampled. On average, the torques in our sample range from 0.5-1.5 times their average value. We compare these results to the torques of Matt et al. (2015), which use observed stellar rotation rates to infer the long-time averaged torque on stars. We find that our stellar wind torques are systematically lower than the long-time average values, by a factor of ~3-30. Stellar wind variability appears unable to resolve this discrepancy, implying that there remain some problems with observed wind parameters, stellar wind models, or the long-term evolution models, which have yet to be understood.
astrophysics
This paper is devoted to the study of some connections between coadjoint orbits in infinite dimensional Lie algebras, isospectral deformations and linearization of dynamical systems. We explain how results from deformation theory, cohomology groups and algebraic geometry can be used to obtain insight into the dynamics of integrable systems. Another part will be dedicated to the study of infinite continued fractions, orthogonal polynomials, the isospectral deformation of periodic Jacobi matrices and general difference operators from an algebraic geometrical point of view. Some connections with the Cauchy-Stieltjes transform of a suitable measure and Abelian integrals are given. Finally the notion of algebraically completely integrable systems is explained, techniques to solve such systems are presented, and some interesting cases appear as coverings of such dynamical systems. These results are exemplified by several problems of dynamical systems of relevance in mathematical physics.
mathematics
The self-excited spanwise homogeneous perturbations arising in shock-wave/boundary-layer interaction (SWBLI) system formed in a hypersonic flow of molecular nitrogen over a double wedge are investigated using the kinetic Direct Simulation Monte Carlo (DSMC) method. The flow has transitional Knudsen and unit Reynolds numbers of 3.4 x 10$^{-3}$ and 5.2 x 10$^4$ m$^{-1}$, respectively. Strong thermal nonequilibrium exists downstream of the Mach 7 detached (bow) shock generated due to the upper wedge surface. A linear instability mechanism is expected to make the pre-computed 2-D base flow potentially unstable under spanwise perturbations. The specific intent is to assess the growth rates of unstable modes, the wavelength, location, and origin of spanwise periodic flow structures, and the characteristic frequencies present in this interaction.
physics
Different algorithms can be used for clustering purposes with data sets. One of these algorithms uses topological features extracted from the data set to base the clusters on. The complexity of this algorithm is, however, exponential in the number of data points. Recently a quantum algorithm was proposed by Lloyd, Garnerone, and Zanardi with claimed polynomial complexity, hence an exponential improvement over classical algorithms. However, we show that this algorithm in general cannot be used to compute these topological features in any dimension but the zeroth. We also give pointers on how to still use the algorithm for clustering purposes.
quantum physics
Rigorous use of the SUSYQM approach applied to the Klein-Gordon equation with scalar and vector potentials is discussed. The method is applied to solve exactly, for bound states, two models with position-dependent masses and $\mathcal{PT}$-symmetric vector potentials, depending on some parameters. The necessary conditions on the parameters to obtain physical solutions are described. Some special cases are also derived by adjusting the parameters of the models.
quantum physics
Dual-comb spectroscopy can provide broad spectral bandwidth and high spectral resolution in a short acquisition time, enabling time-resolved measurements. Specifically, spectroscopy in the mid-infrared wavelength range is of particular interest, since most of the molecules have their strongest rotational-vibrational transitions in this "fingerprint" region. Here we report time-resolved mid-infrared dual-comb spectroscopy for the first time, covering ~300 nm bandwidth around 3.3 {\mu}m with 6 GHz spectral resolution and 20 {\mu}s temporal resolution. As a demonstration, we study a CH4/He gas mixture in an electric discharge, while the discharge is modulated between dark and glow regimes. We simultaneously monitor the production of C2H6 and the vibrational excitation of CH4 molecules, observing the dynamics of both processes. This approach to broadband, high-resolution, and time-resolved mid-infrared spectroscopy provides a new tool for monitoring the kinetics of fast chemical reactions, with potential applications in various fields such as physical chemistry and plasma/combustion analysis.
physics
We study gauge fields of arbitrary spin in de Sitter space. These include Yang-Mills fields and gravitons, as well as the higher-spin fields of Vasiliev theory. We focus on antipodally symmetric solutions to the field equations, i.e. ones that live on "elliptic" de Sitter space dS_4/Z_2. For free fields, we find spanning sets of such solutions, including boundary-to-bulk propagators. We find that free solutions on dS_4/Z_2 can only have one of the two types of boundary data at infinity, meaning that the boundary 2-point functions vanish. In Vasiliev theory, this property persists order by order in the interaction, i.e. the boundary n-point functions in dS_4/Z_2 all vanish. This implies that a higher-spin dS/CFT based on the Lorentzian dS_4/Z_2 action is empty. For more general interacting theories, such as ordinary gravity and Yang-Mills, we can use the free-field result to define a well-posed perturbative initial value problem in dS_4/Z_2.
high energy physics theory
Using SPITZER 3.6$\mu$m imaging, we investigate the physical and data-driven origins of up-bending (Type III) disk breaks. We apply a robust new break-finding algorithm to 175 low-inclination disk galaxies previously identified as containing Type III breaks, classify each galaxy by its outermost break type as re-classified by our new algorithm, and compare the local environments of each resulting subgroup. Using three different measures of the local density of galaxies, we find that galaxies with extended outer spheroids (Type IIIs) occupy the highest-density environments in our sample, while those with extended down-bending (Type II) disks and symmetric outskirts occupy the lowest-density environments. Among outermost breaks, the most common origin of Type III breaks in our sample is methodological: the use of elliptical apertures to measure the radial profiles of asymmetric galaxies usually results in features akin to Type III breaks.
astrophysics
In the Next-to-Minimal Supersymmetric Standard Model (NMSSM) with extra heavy neutrino superfields, the neutrino may acquire its mass via a seesaw mechanism and the sneutrino may act as a viable dark matter (DM) candidate. Given the strong tension between the naturalness of the $Z$ boson mass and the DM direct detection experiments for the customary neutralino DM candidate, we augment the NMSSM with the Type-I seesaw mechanism, which is the simplest extension of the theory to predict neutrino mass, and study scenarios of sneutrino DM. We construct a likelihood function with LHC Higgs data, B-physics measurements, DM relic density and its direct and indirect search limits, and perform a comprehensive scan over the parameter space of the theory by the Nested Sampling method. We adopt both Bayesian and frequentist statistical quantities to illustrate the favored parameter space of the scenarios, the DM annihilation mechanism, as well as the features of DM-nucleon scattering. We find that the scenarios are viable over broad parameter regions; in particular, the Higgsino mass $\mu$ can be below about $250 {\rm GeV}$ for a significant part of the region, which predicts the $Z$ boson mass in a natural way. We also find that the DM usually co-annihilates with the Higgsinos to get the measured relic density, and consequently the DM-nucleon scattering rate is naturally suppressed to coincide with the recent XENON-1T results even for light Higgsinos. Other issues, such as the LHC search for the Higgsinos, are also addressed.
high energy physics phenomenology
Reduced-rank nonlinear filters are increasingly utilized in data assimilation of geophysical flows, but often require a set of ensemble forward simulations to estimate the forecast covariance. On the other hand, predictor-corrector type nudging approaches remain attractive due to their simplicity of implementation when more complex methods need to be avoided. However, optimal estimation of the nudging gain matrix can be cumbersome. In this paper, we put forth a fully nonintrusive recurrent neural network approach based on a long short-term memory (LSTM) embedding architecture to estimate the nudging term, which not only forces the state trajectories toward the observations but also acts as a stabilizer. Furthermore, our approach relies on the power of archival data, and the trained model can be retrained effectively thanks to the power of transfer learning in neural network applications. In order to verify the feasibility of the proposed approach, we perform twin experiments using the Lorenz 96 system. Our results demonstrate that the proposed LSTM nudging approach yields more accurate estimates than both the extended Kalman filter (EKF) and the ensemble Kalman filter (EnKF) when only sparse observations are available. With the availability of emerging AI-friendly and modular hardware technologies and heterogeneous computing platforms, we argue that our simple nudging framework turns out to be computationally more efficient than either the EKF or EnKF approaches.
physics
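For reference, the Lorenz 96 twin-experiment system and the generic nudged forecast equation have the standard forms (the gain notation $G$ is illustrative; in the paper above, the LSTM supplies the correction term):

\[
\frac{dx_i}{dt} = (x_{i+1} - x_{i-2})\,x_{i-1} - x_i + F, \qquad i = 1, \dots, n \ \text{(cyclic indices)},
\]

\[
\frac{d\hat{x}}{dt} = f(\hat{x}) + G\,\big(y - h(\hat{x})\big),
\]

where $F$ is the constant forcing (often $F = 8$), $y$ denotes the sparse observations, $h$ the observation operator, and $G$ the nudging gain whose role the learned LSTM correction takes over.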
We calculate the entanglement entropy of a slab of finite width in the pure Maxwell theory. We find that a large part of entropy is contributed by the entanglement of a mode, nonlocal in terms of the transverse magnetic field degrees of freedom. Even though the entangled mode is nonlocal, its contribution to the entropy is local in the sense that the entropy of a slab of a finite thickness is equal to the entropy of the boundary plus a correction exponential in thickness of the slab.
high energy physics theory
In bipedal gait design literature, one of the common ways of generating stable 3D walking gait is by designing the frontal and sagittal controllers as decoupled dynamics. The study of the decoupled frontal dynamics is, however, still understudied if compared with the sagittal dynamics. In this paper it is presented a formal approach to the problem of frontal dynamics stabilization by extending the hybrid zero dynamics framework to deal with the frontal gait design problem.
computer science
This work proposes a semi-parametric approach to estimate the evolution of Covid-19 (SARS-CoV-2) in Spain. Considering the sequences of 14-day cumulative incidence of all Spanish regions, it combines modern Deep Learning (DL) techniques for analyzing sequences with the usual Bayesian Poisson-Gamma model for counts. The DL model provides a suitable description of the observed sequences, but no reliable uncertainty quantification around it can be obtained. To overcome this, we use the prediction from DL as an expert elicitation of the expected number of counts along with its uncertainty, thus obtaining the posterior predictive distribution of counts in an orthodox Bayesian analysis using the well-known Poisson-Gamma model. The overall resulting model allows us both to predict the future evolution of the sequences in all regions and to estimate the consequences of eventual scenarios.
statistics
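For reference, the Poisson-Gamma conjugate update used in such an analysis is standard. With counts $y_1, \dots, y_n$, and reading the DL forecast as eliciting the prior hyperparameters $a$ and $b$ (my interpretation of the elicitation step, not a detail stated above),

\[
y_i \mid \lambda \sim \mathrm{Poisson}(\lambda), \qquad \lambda \sim \mathrm{Gamma}(a, b)
\;\Longrightarrow\;
\lambda \mid y_{1:n} \sim \mathrm{Gamma}\!\Big(a + \textstyle\sum_{i=1}^n y_i,\; b + n\Big),
\]

with the posterior predictive distribution of a new count being Negative-Binomial.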
$J$-holomorphic curves in nearly K\"ahler $\mathbb{CP}^3$ are related to minimal surfaces in $S^4$ as well as associative submanifolds in $\Lambda^2_-(S^4)$. We introduce the class of transverse $J$-holomorphic curves and establish a Bonnet-type theorem for them. We classify flat tori in $S^4$ and construct moment-type maps from $\mathbb{CP}^3$ to relate them to the theory of $\mathrm{U}(1)$-invariant minimal surfaces on $S^4$.
mathematics
Graph representation learning is a ubiquitous task in machine learning where the goal is to embed each vertex into a low-dimensional vector space. We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution. The bipartite graph is assumed to be generated by a semiparametric exponential family distribution, whose parametric component is given by the proximity of outputs of two one-layer neural networks, while the nonparametric (nuisance) component is the base measure. Neural networks take high-dimensional features as inputs and output embedding vectors. In this setting, the representation learning problem is equivalent to recovering the weight matrices. The main challenges of estimation arise from the nonlinearity of activation functions and the nonparametric nuisance component of the distribution. To overcome these challenges, we propose a pseudo-likelihood objective based on the rank-order decomposition technique and focus on its local geometry. We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves a linear convergence rate. Moreover, we prove that the sample complexity of the problem is linear in dimensions (up to logarithmic factors), which is consistent with parametric Gaussian models. However, our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
statistics
Conditions for the existence of a fixed spectrum (i.e., the set of fixed modes) for a multi-channel linear system have been known for a long time. The aim of this paper is to reestablish one of these conditions using a new and transparent approach based on matroid theory.
computer science
End-to-end neural network models (E2E) have shown significant performance benefits on different INTERSPEECH ComParE tasks. Prior work has applied either a single instance of an E2E model for a task or the same E2E architecture for different tasks. However, applying a single model is unstable, and using the same architecture under-utilizes task-specific information. On the ComParE 2020 tasks, we investigate applying an ensemble of E2E models for robust performance and developing task-specific modifications for each task. ComParE 2020 introduces three sub-challenges: the breathing sub-challenge, to predict the output of a respiratory belt worn by a patient while speaking; the elderly sub-challenge, to estimate an elderly speaker's arousal and valence levels; and the mask sub-challenge, to classify whether the speaker is wearing a mask or not. On each of these tasks, an ensemble outperforms the single E2E model. On the breathing sub-challenge, we study the impact of multi-loss strategies on task performance. On the elderly sub-challenge, predicting the valence and arousal levels prompts us to investigate multi-task training and implement data sampling strategies to handle class imbalance. On the mask sub-challenge, using an E2E system without feature engineering is competitive with feature-engineered baselines and provides substantial gains when combined with them.
electrical engineering and systems science
We consider the problem of consistently matching multiple sets of elements to each other, which is a common task in fields such as computer vision. To solve the underlying NP-hard objective, existing methods often relax or approximate it, but end up with unsatisfying empirical performance due to a misaligned objective. We propose a coordinate update algorithm that directly optimizes the target objective. By using pairwise alignment information to build an undirected graph and initializing the permutation matrices along the edges of its Maximum Spanning Tree, our algorithm successfully avoids bad local optima. Theoretically, with high probability our algorithm guarantees an optimal solution under reasonable noise assumptions. Empirically, our algorithm consistently and significantly outperforms existing methods on several benchmark tasks on real datasets.
statistics
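As a small illustration of the initialization step described in the abstract above (the interface is hypothetical; `pairwise_scores` and how alignments are scored are my assumptions, not the paper's API):

```python
import networkx as nx

def mst_init_edges(pairwise_scores):
    """Build an undirected graph whose edge weights are pairwise
    alignment-quality scores between element sets, and return the
    edges of its maximum spanning tree. Permutation matrices can
    then be initialized by composing pairwise alignments along
    these (most reliable) edges, helping avoid bad local optima."""
    G = nx.Graph()
    for (i, j), score in pairwise_scores.items():
        G.add_edge(i, j, weight=score)
    T = nx.maximum_spanning_tree(G)   # keeps the highest-confidence alignments
    return list(T.edges())

# Example: four element sets with illustrative alignment scores
edges = mst_init_edges({(0, 1): 0.9, (0, 2): 0.4, (1, 2): 0.8, (2, 3): 0.7})
```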
Nonlocal boxes are conceptual tools that capture the essence of the phenomenon of quantum non-locality, central to modern quantum theory and quantum technologies. We introduce network nonlocal boxes tailored for quantum networks under the natural assumption that these networks connect independent sources and do not allow signaling. Hence, these boxes satisfy the No-Signaling and Independence (NSI) principle. For the case of boxes without inputs, connecting pairs of sources and producing binary outputs, we prove that there is an essentially unique network nonlocal box with local random outputs and maximal 2-box correlations: $E_2=\sqrt{2}-1$.
quantum physics
We address two shortcomings in online travel time estimation methods for congested urban traffic. The first shortcoming is related to the determination of the number of mixture modes, which can change dynamically, within a day and from day to day. The second shortcoming is the widespread use of Gaussian probability densities as mixture components. Gaussian densities fail to capture the positive skew in travel time distributions; consequently, large numbers of components are needed for reasonable fitting accuracy, and positive probabilities are assigned to negative travel times. To address these issues, this paper derives a mixture distribution with Gamma component densities, which are asymmetric and supported on the positive numbers. We use sparse estimation techniques to ensure parsimonious models and propose a generalization of Gamma mixture densities using Mittag-Leffler functions, which provides enhanced fitting flexibility and improved parsimony. In order to accommodate within-day variability and allow for online implementation of the proposed methodology (i.e., fast computations on streaming travel time data), we introduce a recursive algorithm which efficiently updates the fitted distribution whenever new data become available. Experimental results using real-world travel time data illustrate the efficacy of the proposed methods.
statistics
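For reference, a $K$-component Gamma mixture density of the kind described above has the standard form

\[
f(t) = \sum_{k=1}^{K} \pi_k\, \frac{\beta_k^{\alpha_k}}{\Gamma(\alpha_k)}\, t^{\alpha_k - 1} e^{-\beta_k t},
\qquad t > 0, \quad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1,
\]

with shape parameters $\alpha_k$ and rate parameters $\beta_k$. Each component is supported on positive travel times and can be right-skewed, addressing both drawbacks of Gaussian components noted above.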
We consider an infinite homogeneous tree $V$ endowed with the usual metric $d$ defined on graphs and a weighted measure $\mu$. The metric measure space $(V,d,\mu)$ is nondoubling and of exponential growth; hence the classical theory of Hardy spaces does not apply in this setting. We construct an atomic Hardy space $H^1$ on $(V,d,\mu)$ and investigate some of its properties, focusing in particular on real interpolation properties and on the boundedness of singular integrals on $H^1$.
mathematics
We discuss a new neural network-based direction of arrival estimation scheme that tackles the estimation task as a multidimensional classification problem. The proposed estimator uses a classification chain with as many stages as the number of sources. Each stage is a multiclass classification network that estimates the position of one of the sources. This approach can be interpreted as the approximation of a successive evaluation of the maximum a posteriori estimator. By means of simulations for fully sampled antenna arrays and systems with subarray sampling, we show that it is able to outperform existing estimation techniques in terms of accuracy, while maintaining a very low computational complexity.
electrical engineering and systems science
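A minimal sketch of the chained estimation described in the abstract above, under the assumption (suggested by the successive-MAP interpretation) that each stage is conditioned on the decisions of earlier stages; the `predict` interface and the one-hot feedback are illustrative, not the paper's design:

```python
import numpy as np

def chain_estimate(stage_models, array_features):
    """Estimate one source direction per stage: each stage is a
    multiclass classifier over a grid of candidate angles, fed the
    array features augmented with one-hot encodings of the earlier
    stages' decisions (approximating successive MAP estimation)."""
    estimates, context = [], np.asarray(array_features, dtype=float)
    for model in stage_models:             # one stage per source
        probs = model.predict(context)     # assumed to return class posteriors
        k = int(np.argmax(probs))          # MAP decision for this source
        estimates.append(k)
        onehot = np.zeros(len(probs))
        onehot[k] = 1.0
        context = np.concatenate([context, onehot])
    return estimates                       # grid indices, one per source
```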
Because the cochlea is very small and complex, vibration data for the whole basilar membrane are not yet available from existing experiments. To address this problem, this work adopts mathematical and biological methods to establish a theoretical analytical model of the spiral cochlea, combined with medical and modern light-source imaging experimental data. In addition, a numerical model of a real human ear is also established. Through numerous calculations, the results reproduce the known travelling-wave vibration of the basilar membrane. Meanwhile, we obtain an exciting finding that reveals a new vibration mode. More importantly, this newly discovered mode intrinsically explains many experimental observations that cannot be explained by travelling-wave theory, resolving long-standing questions among researchers about travelling-wave vibration. These results not only complement vibration data that are inaccessible through experiments but also reveal a new hearing mechanism.
physics
Large-scale relativistic calculations are performed for the transition energy and line strength of the $ 1s^{2} 2s 2p$ $^1P_{1} \,-\ 1s^{2} 2s^{2}$ $^1S_{0} $ transition in Be-like carbon. Based on the multiconfiguration Dirac-Hartree-Fock~(MCDHF) approach, different correlation models are developed to account for all major electron-electron correlation contributions. These correlation models are tested with various sets of the initial- and final-state wave functions. The uncertainty of the predicted line strength due to missing correlation effects is estimated from the differences between the results obtained with those models. The finite nuclear mass effect is accurately calculated, taking into account the energy, wave function, and operator contributions. As a result, a reliable theoretical benchmark of the $E1$ line strength is provided to support the high-precision lifetime measurement of the $ 1s^{2} 2s 2p$ $^1P_{1} $ state in Be-like carbon.
physics
This paper shows that the topological structures of particle orbits generated by a generic class of vector fields on spherical surfaces, called the flow of finite type, are in one-to-one correspondence with discrete structures such as trees/graphs and sequence of letters. The flow of finite type is an extension of structurally stable Hamiltonian vector fields, which appear in many theoretical and numerical investigations of 2D incompressible fluid flows. Moreover, it contains compressible 2D vector fields such as the Morse--Smale vector fields and the projection of 3D vector fields onto 2D sections. The discrete representation is not only a simple symbolic identifier for the topological structure of complex flows, but it also gives rise to a new methodology of topological data analysis for flows, called Topological Flow Data Analysis (TFDA), when applied to data brought by measurements, experiments and numerical simulations of complex flows. As a proof of concept, we provide some applications of the representation theory to 2D compressible vector fields and a 3D vector field arising in an industrial problem.
mathematics
We prove that an infinite (bounded) involution lattice and even pseudo--Kleene algebra can have any number of congruences between $2$ and its number of elements or equalling its number of subsets, regardless of whether it has as many ideals as elements or as many ideals as subsets; consequently, the same holds for antiortholattices. Under the Generalized Continuum Hypothesis, this means that an infinite (bounded) involution lattice, pseudo--Kleene algebra or antiortholattice can have any number of congruences between $2$ and its number of subsets, regardless of its number of ideals.
mathematics
We consider efficient estimation of flexible transformation models with interval-censored data. To reduce the dimension of semi-parametric models, the unknown monotone transformation function is approximated via monotone splines. A penalization technique is used to provide more computationally efficient estimation of all parameters. To accomplish model fitting, a computationally efficient nested iterative expectation-maximization (EM) based algorithm is developed for estimation, and an easily implemented variance-covariance approach is proposed for inference on regression parameters. Theoretically, we show that the estimator of the transformation function achieves the optimal rate of convergence and the estimators of regression parameters are asymptotically normal and efficient. The penalized procedure is assessed through extensive numerical experiments and further illustrated via two real applications.
statistics
We construct ${\cal N}=2$ supersymmetric low-energy effective action of $5D, {\cal N}=2$ supersymmetric Yang-Mills theory in $5D, {\cal N}=1$ harmonic superspace. It is obtained as a hypermultiplet completion of the leading $W \ln W$-term in the ${\cal N}=1$ SYM low-energy effective action by invoking the second implicit on-shell ${\cal N}=1$ supersymmetry. After passing to components, the ${\cal N}=2$ effective action constructed displays, along with other terms, the $SO(5)$-invariant $F^4/X^3$ term. Though we specialize to the case of $SU(2)$ gauge group spontaneously broken to $U(1)$, our consideration is applicable to any gauge symmetry broken to some abelian subgroup.
high energy physics theory
Parameter servers (PSs) facilitate the implementation of distributed training for large machine learning tasks. A key challenge for PS performance is that parameter access is non-uniform in many real-world machine learning tasks, i.e., different parameters exhibit drastically different access patterns. We identify skew and nondeterminism as two major sources of this non-uniformity. Existing PSs are ill-suited for managing such non-uniform access because they uniformly apply the same parameter management technique to all parameters. As a consequence, the performance of existing PSs is negatively affected and may even fall behind that of single-node baselines. In this paper, we explore how PSs can manage non-uniform access efficiently. We find that it is key for PSs to support multiple management techniques and to leverage a well-suited management technique for each parameter. We present Lapse2, a PS that replicates hot-spot parameters, relocates less frequently accessed parameters, and employs specialized techniques to manage the nondeterminism that arises from random sampling. In our experimental study, Lapse2 outperformed existing single-technique PSs by up to one order of magnitude and provided near-linear scalability across multiple machine learning tasks.
computer science
We examine the impact of black hole jet feedback on the properties of the low-redshift intergalactic medium (IGM) in the SIMBA simulation, with a focus on the Ly$\alpha$ forest mean flux decrement $D_A$. Without jet feedback, we confirm the Photon Underproduction Crisis (PUC) in which $\Gamma_{\rm HI}$ at $z=0$ must be increased by $\times6$ over the Haardt & Madau value in order to match the observed $D_{A}$. Turning on jet feedback lowers this discrepancy to $\sim\times 2.5$, and additionally using the recent Faucher-Gigu\`ere background mostly resolves the PUC, along with producing a flux probability distribution function in accord with observations. The PUC becomes apparent at late epochs ($z \lesssim 1$) where the jet and no-jet simulations diverge; at higher redshifts SIMBA reproduces the observed $D_{A}$ with no adjustment, with or without jets. The main impact of jet feedback is to lower the cosmic baryon fraction in the diffuse IGM from 39% to 16% at $z=0$, while increasing the warm-hot intergalactic medium (WHIM) baryon fraction from 30% to 70%; the lowering of the diffuse IGM content directly translates into a lowering of $D_{A}$ by a similar factor. Comparing to the older MUFASA simulation that employs different quenching feedback but is otherwise similar to SIMBA, MUFASA matches $D_{A}$ less well than SIMBA, suggesting that low-redshift measurements of $D_{A}$ and $\Gamma_{\rm HI}$ could provide constraints on feedback mechanisms. Our results suggest that widespread IGM heating at late times is a plausible solution to the PUC, and that SIMBA's jet AGN feedback model, included to quench massive galaxies, approximately yields this required heating.
astrophysics
A novel method is presented for extracting the threshold voltage and substrate-effect parameters of MOSFETs with a constant current bias at all levels of inversion. This generalized constant-current (GCC) method exploits the charge-based model of MOSFETs to extract the threshold voltage and other substrate-effect related parameters. The method is applicable over a wide range of currents throughout weak and moderate inversion, and to some extent in strong inversion. It is particularly useful when applied to MOSFETs exhibiting the edge-conduction effect (subthreshold hump) in CMOS processes using Shallow Trench Isolation (STI).
physics
On small scales there have been a number of claims of discrepancies between the standard Cold Dark Matter (CDM) model and observations. The 'missing satellites problem' infamously describes the overabundance of subhalos in CDM simulations compared to the number of satellites observed in the Milky Way. A variety of solutions to this discrepancy have been proposed; however, the impact of the specific properties of the Milky Way halo, relative to the typical halo of its mass, has yet to be explored. Motivated by recent studies that identified ways in which the Milky Way is atypical (e.g., Licquia et al. 2015), we investigate how the properties of dark matter halos with mass comparable to our Galaxy's, including concentration, spin, shape, and the scale factor of the last major merger, correlate with the subhalo abundance. Using zoom-in simulations of Milky Way-like halos, we build two models of subhalo abundance as functions of host halo properties and conclude that the Milky Way should be expected to have 22%-44% fewer subhalos with low maximum rotation velocities ($V_{\rm max}^{\rm sat} \sim 10$ km s$^{-1}$) at the 95% confidence level, and up to 72% fewer subhalos than average with high rotation velocities ($V_{\rm max}^{\rm sat} \gtrsim 30$ km s$^{-1}$, comparable to the Magellanic Clouds), than would be expected for a typical halo of the Milky Way's mass. Concentration is the most informative single parameter for predicting subhalo abundance. Our results imply that models tuned to explain the missing satellites problem assuming typical subhalo abundances for our Galaxy will be over-correcting.
astrophysics
For a domain $\Omega$ in a geodesically convex surface, we introduce a scattering energy $\mathcal{E}(\Omega)$, which measures the asymmetry of $\Omega$ by quantifying its incompatibility with an isometric circle action. We prove several sharp quantitative isoperimetric inequalities involving $\mathcal{E}(\Omega)$ and characterize the domains with vanishing scattering energy by their convexity and rotational symmetry. We also give a new proof of the sharp Sobolev inequality for Riemannian surfaces which is independent of the isoperimetric inequality.
mathematics
We continue our solution of the inverse problem started by the first author in [Int. J. Mod. Phys. A 35, xxxx (2020), in production]. Additional potential functions for exactly solvable problems that correspond to the same energy spectrum formula but for different energy polynomials and bases are found. In this work, we obtain a class of potential functions associated with the Wilson polynomial and "Jacobi basis".
quantum physics
This paper is concerned with the problem of estimating the response quantile with high-dimensional covariates when the response is missing at random. Some existing methods define root-n consistent estimators of the response quantile, but these methods require correct specification of both the conditional distribution of the response given the covariates and the selection probability function. In this paper, a debiased method is proposed that solves a convex program. The resulting estimator is asymptotically normal given a correctly specified parametric model for the conditional distribution function, without the need to specify and estimate the selection probability function. Moreover, the proposed estimator is asymptotically more efficient than the existing estimators. The proposed method is evaluated in a simulation study and illustrated with a real data example.
statistics
This paper examines the performance of an outdoor visible light communication (OVLC) system for the Internet of Vehicles (IoV). In the proposed system, an amplify-and-forward (AF) opportunistic scheme is used to extend the range of information broadcast from a traffic light to vehicles. The Gamma-Gamma channel gain and the Lambertian direct current (DC) channel gain model the fading coefficient of each transmission, and shot and thermal noise models represent the system noise. The statistics of the system are examined by deriving closed-form expressions for the cumulative distribution function (CDF) and probability density function (PDF) of the equivalent end-to-end signal-to-noise ratio (SNR). Novel closed-form equations are also developed for the outage probability and ergodic capacity. Numerical simulations show that the system performance, in terms of both outage probability and ergodic capacity, improves with decreasing turbulence intensity. The results also illustrate that the system performance deteriorates further as the distance between the vehicles increases.
electrical engineering and systems science
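To make the reported trend concrete, here is a minimal Monte Carlo sketch of a two-hop outage probability under Gamma-Gamma turbulence. It relies on the standard construction of a Gamma-Gamma variate as the product of two independent unit-mean Gamma variates, a square-law (IM/DD) SNR mapping, and the common harmonic-mean-style bound on the AF end-to-end SNR; the turbulence parameters and thresholds are illustrative assumptions, not the closed-form results derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma(alpha, beta, size, rng):
    # Gamma-Gamma irradiance as the product of two independent
    # unit-mean Gamma random variables (standard construction).
    return rng.gamma(alpha, 1.0 / alpha, size) * rng.gamma(beta, 1.0 / beta, size)

def outage_probability(avg_snr_db, th_db, alpha, beta, n=10**6, rng=rng):
    """Monte Carlo outage estimate for a two-hop AF link where each hop
    sees independent Gamma-Gamma turbulence; gamma1*gamma2/(gamma1+gamma2+1)
    stands in for the exact end-to-end AF SNR."""
    g, th = 10 ** (avg_snr_db / 10), 10 ** (th_db / 10)
    g1 = g * gamma_gamma(alpha, beta, n, rng) ** 2   # IM/DD: SNR ~ I^2
    g2 = g * gamma_gamma(alpha, beta, n, rng) ** 2
    geq = g1 * g2 / (g1 + g2 + 1.0)
    return np.mean(geq < th)

# Weaker turbulence (larger alpha and beta) should lower the outage.
for alpha, beta, tag in [(2.0, 1.5, "strong"), (4.0, 3.5, "moderate"), (8.0, 7.0, "weak")]:
    print(tag, outage_probability(avg_snr_db=20.0, th_db=5.0, alpha=alpha, beta=beta))
```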
We consider $C$-compact orthogonally additive operators in vector lattices. After providing some examples of $C$-compact orthogonally additive operators on a vector lattice with values in a Banach space, we show that the set of those operators is a projection band in the Dedekind complete vector lattice of all regular orthogonally additive operators. In the second part of the article we introduce a new class of vector lattices, called $C$-complete, and show that any laterally-to-norm continuous $C$-compact orthogonally additive operator from a $C$-complete vector lattice to a Banach space is narrow, which generalizes a result of Pliev and Popov.
mathematics
Astrophysical observations currently provide the only robust, empirical measurements of dark matter. In the coming decade, astrophysical observations will guide other experimental efforts, while simultaneously probing unique regions of dark matter parameter space. This white paper summarizes astrophysical observations that can constrain the fundamental physics of dark matter in the era of LSST. We describe how astrophysical observations will inform our understanding of the fundamental properties of dark matter, such as particle mass, self-interaction strength, non-gravitational interactions with the Standard Model, and compact object abundances. Additionally, we highlight theoretical work and experimental/observational facilities that will complement LSST to strengthen our understanding of the fundamental characteristics of dark matter.
astrophysics
Convolution plays a crucial role in various applications in signal and image processing, analysis, and recognition. It is also the main building block of convolutional neural networks (CNNs). Designing appropriate convolutional neural networks on manifold-structured point clouds can carry recent advances in CNNs over to the analysis and processing of point cloud data. However, one of the major challenges is to define a proper way to "sweep" filters through the point cloud as a natural generalization of the planar convolution, while reflecting the point cloud's geometry at the same time. In this paper, we consider generalizing convolution by adapting parallel transport on the point cloud. Inspired by a triangulated surface-based method [Stefan C. Schonsheck, Bin Dong, and Rongjie Lai, arXiv:1805.07857], we propose the Narrow-Band Parallel Transport Convolution (NPTC) using a specifically defined connection on a voxel-based narrow-band approximation of point cloud data. With that, we further propose a deep convolutional neural network based on NPTC (called NPTC-net) for point cloud classification and segmentation. Comprehensive experiments show that the proposed NPTC-net achieves similar or better results than current state-of-the-art methods on point cloud classification and segmentation.
computer science
The Ruijsenaars-Schneider models are conventionally regarded as relativistic generalizations of the Calogero integrable systems. Surprisingly enough, their supersymmetric generalizations escaped attention. In this work, N=2 supersymmetric extensions of the rational and hyperbolic Ruijsenaars-Schneider three-body models are constructed within the framework of the Hamiltonian formalism. It is also known that the rational model can be described by the geodesic equations associated with a metric connection. We demonstrate that the hyperbolic systems are linked to non-metric connections.
high energy physics theory
We present new HI data for the dwarf galaxy KK 69, obtained with the Giant Metrewave Radio Telescope (GMRT) with a signal-to-noise ratio that almost doubles that of previous observations. We carried out a Gaussian spectral decomposition and applied stacking methods to identify the cold neutral medium (CNM) and the warm neutral medium (WNM) of the HI gas. We found that 30% of the total HI gas, corresponding to a mass of approx. $10^7\,M_\odot$, is in the CNM phase. The distribution of the HI in KK 69 is not symmetric. Our GMRT HI intensity map of KK 69 overlaid onto a Hubble Space Telescope image reveals an offset of approx. 4 kpc between the HI high-density region and the stellar body, indicating that it may be a dwarf transitional galaxy. The offset, along with the potential truncation of the HI body, is evidence of interaction with the central group spiral galaxy NGC 2683, indicating that the HI gas is being stripped from KK 69. Additionally, we detected extended HI emission from a dwarf galaxy member of the group, as well as a possible new galaxy located near the north-eastern part of the NGC 2683 HI disk.
astrophysics
We calculate the rapidity gap survival probability associated with the Higgs decay and Higgs plus dijet production in proton-proton collisions by resumming the leading non-global logarithms without any approximation to the number of colors. For dijet production, depending on partonic subprocesses, the probability involves various `color multipoles', i.e., the product of 4 ($qq\to qq$) or 6 ($qg\to qg$) or 8 ($gg\to gg$) Wilson lines. We calculate all these multipoles for a fixed dijet configuration and discuss the factorization of higher multipoles into lower multipoles as well as the validity of the large-$N_c$ approximation.
high energy physics phenomenology
This brief introduction to Model Predictive Control specifically addresses stochastic Model Predictive Control, where probabilistic constraints are considered. A simple linear system subject to uncertainty serves as an example. The Matlab code for this stochastic Model Predictive Control example is available online.
electrical engineering and systems science
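Since the referenced Matlab code is only available online, the following is a minimal Python sketch of the kind of example the abstract describes: a scalar linear system with an additive Gaussian disturbance, where a probabilistic state constraint is enforced by tightening the nominal constraint with a Gaussian quantile. All numerical values, and the use of cvxpy as the QP solver, are assumptions made for illustration.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

a, b, sigma = 0.9, 0.5, 0.1      # scalar dynamics x+ = a x + b u + w, w ~ N(0, sigma^2)
N, x_max, eps = 10, 1.0, 0.05    # horizon, state bound, allowed violation level
z = norm.ppf(1 - eps)            # Gaussian quantile used for tightening

x0 = 0.8
x = cp.Variable(N + 1)           # nominal (mean) state trajectory
u = cp.Variable(N)

# Standard deviation of the accumulated disturbance at step k+1:
# Var[x_{k+1}] = sigma^2 * sum_{i=0}^{k} a^{2i}.
std = sigma * np.sqrt(np.cumsum(a ** (2 * np.arange(N))))

cons = [x[0] == x0]
for k in range(N):
    cons += [x[k + 1] == a * x[k] + b * u[k]]   # nominal model
    cons += [x[k + 1] <= x_max - z * std[k]]    # tightened chance constraint
    cons += [cp.abs(u[k]) <= 1.0]               # hard input bound

cost = cp.sum_squares(x) + 0.1 * cp.sum_squares(u)
cp.Problem(cp.Minimize(cost), cons).solve()
print("first input of the MPC plan:", u.value[0])
```

In closed loop, only the first input of the plan would be applied and the optimization repeated at the next time step.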
This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both the image and the latent feature probability information (known as the hyperprior) to capture local and global correlations. It applies an attention mechanism to generate masks that weight the features of the image and hyperprior, implicitly adapting the bit allocation to the importance of different features. Furthermore, both the hyperpriors and the spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms existing methods on the Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, under both PSNR and MS-SSIM distortion metrics.
electrical engineering and systems science
Searching for and making decisions about products is becoming increasingly easy in the e-commerce space, thanks to the evolution of recommender systems. Personalization and recommender systems have gone hand-in-hand to help customers fulfill their shopping needs and improve their experiences in the process. With the growing adoption of conversational platforms for shopping, it has become important to build personalized models at scale to handle the large influx of data and perform inference in real time. In this work, we present an end-to-end machine learning system for personalized conversational voice commerce. We include components for implicit feedback to the model, model training, evaluation upon model updates, and a real-time inference engine. Our system personalizes voice shopping for Walmart Grocery customers and is currently available via Google Assistant, Siri, and Google Home devices.
computer science
The light mediator scenario of self-interacting dark matter is strongly constrained in many ways. After summarizing the various constraints, we discuss minimal options and models that nevertheless allow all of these constraints to be satisfied. One straightforward possibility arises if the dark matter and light mediator particles have a temperature sizably smaller than that of the SM particles. Another simple possibility arises if dark matter does not annihilate dominantly into a pair of light mediators but into heavier particles. Both possibilities are discussed with scalar as well as vector-boson light mediators. Further possibilities, such as a hierarchy of quartic scalar couplings, are also identified.
high energy physics phenomenology
We define a nature-inspired model for entanglement optimization in the quantum Internet. The optimization model aims to maximize the entanglement fidelity and the relative entropy of entanglement for the entangled connections of the entangled network structure of the quantum Internet. The cost functions are the subject of a minimization defined to cover and integrate the physical attributes of entanglement transmission, purification, and the storage of entanglement in quantum memories. The method can be implemented with low complexity, which allows straightforward application in quantum Internet and quantum networking scenarios.
quantum physics
Electron-positron pair production in a combined Sauter potential, in which an oscillating potential well is imposed on a static Sauter potential well, is investigated using computational quantum field theory. We find that the gain number (the difference between the pair number under the combined potentials and the simple sum of the pair numbers for each potential) of the created pairs depends strongly on the depth of the static potential and the frequency of the oscillating potential. In particular, it is more sensitive to the frequency than to the depth. In the low-frequency multiphoton regime, the gain is almost always positive and exhibits interesting nonlinear characteristics in both depth and frequency. In the single-photon regime, however, the gain is almost always negative and decreases nearly linearly with depth, while it oscillates with frequency. Furthermore, the frequency and depth that optimize the gain number are found and discussed.
quantum physics
In this paper we propose a novel approach to computing the Koopman operator from sparse time series data. In recent years there has been considerable interest in operator-theoretic methods for data-driven analysis of dynamical systems. Existing techniques for approximating the Koopman operator require sufficiently large data sets, but in many applications the data set may not be large enough to approximate the operator to acceptable accuracy. In this paper, using ideas from robust optimization, we propose an algorithm to compute the Koopman operator from sparse data. We enrich the sparse data set with artificial data points, generated by adding bounded artificial noise, formulate the noisy robust learning problem as a robust optimization problem, and show that the optimal solution is the Koopman operator with the smallest error. We illustrate the efficiency of our proposed approach on three different dynamical systems: a linear system, a nonlinear system, and a dynamical system governed by a partial differential equation.
mathematics
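A minimal sketch of the idea described above, under simplifying assumptions: the robust optimization step is replaced by its crude surrogate of enriching the sparse snapshot pairs with bounded artificial noise, after which the Koopman matrix is fit by least squares on identity observables (EDMD-style). The linear test system A is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(X, Y, delta, copies, rng):
    """Enrich sparse snapshot pairs with artificial points obtained by
    adding bounded (uniform) noise, a simple surrogate for the robust
    optimization formulation described in the abstract."""
    Xs = [X] + [X + rng.uniform(-delta, delta, X.shape) for _ in range(copies)]
    Ys = [Y] + [Y + rng.uniform(-delta, delta, Y.shape) for _ in range(copies)]
    return np.hstack(Xs), np.hstack(Ys)

def koopman_lsq(X, Y):
    # EDMD with identity observables: K minimizes ||Y - K X||_F.
    return Y @ np.linalg.pinv(X)

# Sparse data from a linear system x_{k+1} = A x_k (A is an assumption),
# for which the Koopman matrix on identity observables is A itself.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
X = rng.standard_normal((2, 5))          # only five snapshot pairs
Y = A @ X

Xa, Ya = augment(X, Y, delta=0.05, copies=20, rng=rng)
K = koopman_lsq(Xa, Ya)
print("recovered Koopman matrix:\n", K)  # should be close to A
```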
Sensing the local environment through the motional response of small molecules lays the foundation of many fundamental technologies. Information on local viscosity, for example, is contained in the random rotational Brownian motions of molecules. However, detection of these motions is challenging for molecules of sub-nanometer size or with high motional rates. Here we propose and experimentally demonstrate a novel method of detecting the fast rotational Brownian motions of small magnetic molecules. With electronic spins as sensors, we are able to detect changes in motional rates, which yield different noise spectra and therefore different relaxation signals of the sensors. As a proof-of-principle demonstration, we experimentally implemented this method to detect the motions of gadolinium (Gd) complex molecules with nitrogen-vacancy (NV) centers in nanodiamonds. With all-optical measurements of the NV centers' longitudinal relaxation, we distinguished binary solutions with varying viscosities. Our method paves the way for detecting fast motions of sub-nanometer-sized magnetic molecules with better spatial resolution than conventional optical methods. It also provides a new tool for designing better contrast agents for magnetic resonance imaging.
quantum physics
New-generation wireless communication systems will employ large-scale antenna arrays to satisfy the increasing capacity demand. This massive scenario brings new challenges to the channel equalization problem due to the increased signal processing complexity. We present a novel low-rank tensor equalizer to tackle the high computational demands of the classical linear approach. Specifically, we propose a method to design a canonical polyadic tensor filter to minimize the mean square error criterion. Our simulation results indicate that the proposed equalizer needs fewer calculations and is more robust to short training sequences than the benchmark.
electrical engineering and systems science
The monolithic integration of electromechanical transduction at the nanoscale with advanced CMOS is among the most important challenges of semiconductor electronic systems, needed to leverage the multi-domain sensing, actuation, and resonance properties of nano-mechanical systems. Here we report on the demonstration of vibrating devices enabled by atomically engineered ferroelectric Hf0.5Zr0.5O2 thin films with a variety of mechanical resonance modes, with frequencies (f0) between 340 kHz and 13 GHz and frequency-quality factor products (f0 x Q) up to 3.97 x 10^12. Experiments based on electrical and optical probing elucidate and quantify the role of the electrostrictive effect in the electromechanical transduction behavior of the Hf0.5Zr0.5O2 film. We further demonstrate the role of nonlinear electromechanical scattering in the operation of Hf0.5Zr0.5O2-transduced resonators. This investigation also highlights the potential of atomically engineered ferroelectric Hf0.5Zr0.5O2 transducers for new classes of CMOS-monolithic linear and nonlinear nanomechanical resonators at centimeter- and millimeter-wave frequencies.
condensed matter
Distributed energy resources are ideal candidates for providing the additional flexibility required by the power system to support the increasing penetration of renewable energy sources. Integrating a large number of resources into the existing market structure, particularly in the light of providing flexibility services, is envisioned through the concept of the virtual power plant (VPP). To this end, it is crucial to establish a clear methodology for VPP flexibility modelling. In this context, this paper first puts forward the need to clarify the difference between the feasibility and flexibility potential of a VPP, and then proposes a methodology for the evaluation of the relevant operating regions. Similar concepts can also be used to model TSO/DSO interface operation. Several case studies are designed to reflect the distinct information conveyed by feasibility and flexibility operating regions in the presence of "slow" and "fast" responding resources for a VPP partaking in the provision of energy and grid support services. The results also highlight the impact of flexible load and, importantly, network topology on the VPP feasibility operating region (FOR) and flexibility operating region (FXOR).
electrical engineering and systems science
We study a set of run-and-tumble particle (RTP) dynamics in two spatial dimensions. In the first case the orientation $\theta$ of the particle can assume a set of $n$ possible discrete values, while in the second case $\theta$ is a continuous variable. We calculate exactly the marginal position distributions for $n = 3, 4$ and for the continuous case, and show that in all cases the RTP exhibits a crossover from a ballistic to a diffusive regime. The ballistic regime is a typical signature of the active nature of these systems and is characterized by non-trivial position distributions that depend on the specific model. We also show that the signature of activity at long times can be found in the atypical fluctuations, which we characterize by computing the large deviation functions explicitly.
condensed matter
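A minimal simulation sketch of the discrete-orientation model described above (here with n = 4 orientations and illustrative parameter values); it exhibits the ballistic-to-diffusive crossover through the slope of the mean squared displacement on log-log axes.

```python
import numpy as np

rng = np.random.default_rng(2)

def rtp_msd(n_dirs=4, v=1.0, gamma=1.0, dt=0.01, steps=20000, walkers=2000, rng=rng):
    """2D run-and-tumble particles whose orientation takes one of n_dirs
    discrete values; at rate gamma a new orientation is drawn uniformly.
    Returns the mean squared displacement at each time step."""
    angles = 2 * np.pi * np.arange(n_dirs) / n_dirs
    theta = rng.choice(angles, size=walkers)
    pos = np.zeros((walkers, 2))
    msd = np.empty(steps)
    for k in range(steps):
        pos[:, 0] += v * dt * np.cos(theta)
        pos[:, 1] += v * dt * np.sin(theta)
        tumble = rng.random(walkers) < gamma * dt        # tumble events
        theta[tumble] = rng.choice(angles, size=tumble.sum())
        msd[k] = np.mean(np.sum(pos**2, axis=1))
    return msd

msd = rtp_msd()
t = 0.01 * np.arange(1, len(msd) + 1)
# Ballistic regime: slope ~ 2; diffusive regime: slope ~ 1.
print("early slope:", np.polyfit(np.log(t[:20]), np.log(msd[:20]), 1)[0])
print("late slope: ", np.polyfit(np.log(t[-2000:]), np.log(msd[-2000:]), 1)[0])
```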
Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this outperforms the current state-of-the-art on standard datasets such as Mini-ImageNet, Tiered-ImageNet, CIFAR-FS and FC-100 with the same hyper-parameters. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the ImageNet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.
computer science
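As a rough illustration of what the transductive fine-tuning baseline could look like in code, here is a PyTorch sketch that minimizes cross-entropy on the labeled support set plus a Shannon-entropy penalty on the unlabeled query set. The embed module (any pretrained feature extractor), the loss weighting, and the optimizer settings are assumptions; the paper's exact objective and hyper-parameters may differ.

```python
import torch
import torch.nn.functional as F

def transductive_finetune(embed, xs, ys, xq, n_way, steps=100, lr=1e-3):
    """Fine-tune a pretrained feature extractor `embed` plus a fresh linear
    head on a few-shot episode: support inputs xs with labels ys, and
    unlabeled query inputs xq. Returns predicted query labels."""
    head = torch.nn.Linear(embed(xs[:1]).shape[-1], n_way)
    opt = torch.optim.Adam(list(embed.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(head(embed(xs)), ys)       # supervised term
        p = F.softmax(head(embed(xq)), dim=-1)            # query predictions
        loss = loss - (p * p.clamp_min(1e-8).log()).sum(-1).mean()  # + entropy
        loss.backward()
        opt.step()
    return head(embed(xq)).argmax(dim=-1)
```

Minimizing the query entropy pushes the network toward confident, consistent predictions on the episode at hand, which is what makes the procedure transductive.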
The ability of phase-change materials to reversibly and rapidly switch between two stable phases has driven their use in a number of applications such as data storage and optical modulators. Incorporating such materials into metasurfaces enables new approaches to the control of optical fields. In this article we present the design of novel switchable metasurfaces that enable the control of nonclassical two-photon quantum interference. These structures require no static power consumption, operate at room temperature, and have high switching speed. For the first adaptive metasurface presented in this article, tunable nonclassical two-photon interference from -97.7% (anti-coalescence) to 75.48% (coalescence) is predicted. For the second adaptive geometry, the quantum interference switches from -59.42% (anti-coalescence) to 86.09% (coalescence) upon a thermally driven crystallographic phase transition. The development of compact and rapidly controllable quantum devices is opening up promising paths to brand-new quantum applications, as well as the possibility of improving free-space quantum logic gates, linear-optics Bell experiments, and quantum phase estimation systems.
quantum physics
Given two graphs $H_1$ and $H_2$, a graph $G$ is $(H_1,H_2)$-free if it contains no subgraph isomorphic to $H_1$ or $H_2$. Let $P_t$ and $C_s$ be the path on $t$ vertices and the cycle on $s$ vertices, respectively. In this paper we show that for any $(P_6,C_4)$-free graph $G$ it holds that $\chi(G)\le \frac{3}{2}\omega(G)$, where $\chi(G)$ and $\omega(G)$ are the chromatic number and clique number of $G$, respectively. Our bound is attained by several graphs, for instance, the five-cycle, the Petersen graph, the Petersen graph with an additional universal vertex, and all $4$-critical $(P_6,C_4)$-free graphs other than $K_4$ (see \cite{HH17}). The new result unifies previously known results on the existence of linear $\chi$-binding functions for several graph classes. Our proof is based on a novel structure theorem for $(P_6,C_4)$-free graphs that do not contain clique cutsets. Using this structure theorem we also design a polynomial-time $3/2$-approximation algorithm for coloring $(P_6,C_4)$-free graphs. Our algorithm computes a coloring with $\frac{3}{2}\omega(G)$ colors for any $(P_6,C_4)$-free graph $G$ in $O(n^2m)$ time.
mathematics
This paper derives the asymptotic behavior of the following ruin probability $$P\{\exists t \in G(\delta):B_H(t)-c_1t>q_1u,B_H(t)-c_2t>q_2u\}, \ \ \ u \rightarrow \infty,$$ where $B_H$ is a standard fractional Brownian motion, $c_1,q_1,c_2,q_2>0$ and $G(\delta)$ denotes a regular grid $\{0,\delta, 2\delta,...\}$ for some $\delta>0$. The approximation depends on $H$, $\delta$ (only when $H\leq 1/2$) and the relations between parameters $c_1,q_1,c_2,q_2$.
mathematics
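A Monte Carlo sketch that numerically probes the probability studied above: fractional Brownian motion is sampled on a truncated grid via Cholesky factorization of its exact covariance, and the joint exceedance event is checked pathwise. The truncation length and all parameter values are illustrative assumptions, and plain Monte Carlo can only hint at the $u \rightarrow \infty$ asymptotics derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def fbm_on_grid(H, delta, n, samples, rng):
    """Sample fBm on the grid {delta, 2*delta, ..., n*delta} by Cholesky
    factorization of the covariance 0.5*(s^2H + t^2H - |t - s|^2H)."""
    t = delta * np.arange(1, n + 1)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return t, (L @ rng.standard_normal((n, samples))).T

def ruin_prob(u, H=0.7, delta=0.1, n=500, samples=10000,
              c1=1.0, q1=1.0, c2=2.0, q2=0.5, rng=rng):
    # The infinite grid is truncated at n*delta; for large u the event is
    # dominated by a finite time window, so the truncation is mild.
    t, B = fbm_on_grid(H, delta, n, samples, rng)
    hit = (B - c1 * t > q1 * u) & (B - c2 * t > q2 * u)
    return hit.any(axis=1).mean()

for u in (1.0, 2.0, 3.0):
    print(u, ruin_prob(u))
```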
In this paper we explore a novel method for collecting survey data following a natural disaster and then combine these data with device-derived mobility information to explore demographic outcomes. Using social media as a survey platform for measuring demographic outcomes, especially those that are challenging or expensive to field traditional surveys for, is increasingly of interest to the demographic community. Recent work by Schneider and Harknett (2019) explores the use of Facebook targeted advertisements to collect data on low-income shift workers in the United States. Other work has addressed immigrant assimilation (Stewart et al, 2019), world fertility (Ribeiro et al, 2020), and world migration stocks (Zagheni et al, 2017). We build on this work by introducing a rapid-response survey of post-disaster demographic and economic outcomes fielded through the Facebook app itself. We use these survey responses to augment app-derived mobility data comprising Facebook Displacement Maps, in order to assess the validity of, and the drivers underlying, the observed behavioral trends. This survey was deployed following the 2019 Australia bushfires to better understand how these events displaced residents. In doing so we are able to test a number of key hypotheses around displacement and demographics. In particular, we uncover several gender differences in key areas, including displacement decision-making and timing, and access to protective equipment such as smoke masks. We conclude with a brief discussion of research and policy implications.
computer science
Sextic double solids, double covers of $\mathbb P^3$ branched along a sextic surface, are the lowest degree Gorenstein Fano 3-folds, hence are expected to behave very rigidly in terms of birational geometry. Smooth sextic double solids, and those which are $\mathbb Q$-factorial with ordinary double points, are known to be birationally rigid. In this article, we study sextic double solids with an isolated compound $A_n$ singularity. We prove a sharp bound $n \leq 8$, describe models for each $n$ explicitly and prove that sextic double solids with $n > 3$ are birationally non-rigid.
mathematics
Performing imperfect or noisy measurements on a quantum system impacts both the measurement outcome and the state of the system after the measurement. In this paper we are concerned with imperfect calorimetric measurements. In calorimetric measurements one typically measures the energy of a thermal environment to extract information about the system. The measurement is imperfect in the sense that we simultaneously measure the energy of the calorimeter and an additional noise bath. Under weak-coupling assumptions, we find that the presence of the noise bath manifests itself by modifying the jump rates of the reduced system dynamics. We study the example of a driven qubit interacting with a calorimeter of resonant bosons and find that increasing the noise leads to a reduction in the power flowing from the qubit to the calorimeter, and thus an apparent heating up of the calorimeter.
quantum physics
We introduce a variation of the Ziv-Lempel and Crochemore factorizations of words by requiring each factor to be a palindrome. We compute these factorizations for the Fibonacci word, and more generally, for all $m$-bonacci words.
computer science
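A small sketch of the flavor of computation involved: the Fibonacci word is generated by the substitution 0 -> 01, 1 -> 0, and then factored greedily into longest palindromic prefixes. This greedy rule is an illustrative stand-in, not necessarily the exact Ziv-Lempel or Crochemore variant defined in the paper.

```python
def fibonacci_word(n):
    """First n letters of the Fibonacci word over {0,1}, generated by the
    substitution 0 -> 01, 1 -> 0 (using '#' as a temporary placeholder)."""
    w = "0"
    while len(w) < n:
        w = w.replace("0", "0#").replace("1", "0").replace("#", "1")
    return w[:n]

def palindromic_factorization(w):
    # Greedily take the longest palindromic prefix of the remaining
    # suffix; single letters are palindromes, so this always succeeds.
    factors, i = [], 0
    while i < len(w):
        for j in range(len(w), i, -1):
            if w[i:j] == w[i:j][::-1]:
                factors.append(w[i:j])
                i = j
                break
    return factors

print(palindromic_factorization(fibonacci_word(21)))
```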
A detailed study is presented of the relativistic Wigner function for a quantum spinless particle evolving in time according to the Salpeter equation.
quantum physics
In space-like separated experiments and other scenarios where multiple parties share a classical common cause but no cause-effect relations, quantum theory allows a variety of nonsignaling resources which are useful for distributed quantum information processing. These include quantum states, nonlocal boxes, steering assemblages, teleportages, channel steering assemblages, and so on. Such resources are often studied using nonlocal games, semiquantum games, entanglement-witnesses, teleportation experiments, and similar tasks. We introduce a unifying framework which subsumes the full range of nonsignaling resources, as well as the games and experiments which probe them, into a common resource theory: that of local operations and shared randomness (LOSR). Crucially, we allow these LOSR operations to locally change the type of a resource, so that players can convert resources of any type into resources of any other type, and in particular into strategies for the specific type of game they are playing. We then prove several theorems relating resources and games of different types. These theorems generalize a number of seminal results from the literature, and can be applied to lessen the assumptions needed to characterize the nonclassicality of resources. As just one example, we prove that semiquantum games are able to perfectly characterize the LOSR nonclassicality of every resource of any type (not just quantum states, as was previously shown). As a consequence, we show that any resource can be characterized in a measurement-device-independent manner.
quantum physics
In this paper, we present an implicit finite difference method for the numerical solution of the Black-Scholes model for American put options without dividend payments. We combine the proposed numerical method with a front-fixing approach in which the option price and the early exercise boundary are computed simultaneously. Consistency and stability properties of the method are studied. We improve the accuracy of the computed solution via a mesh refinement based on Richardson's extrapolation. Comparisons with other proposed methods for the American option problem are carried out to validate the obtained numerical results and to show the efficiency of the proposed numerical method. Finally, using an \textit{a posteriori} error estimator, we find a suitable computational grid on which the computed solution meets a prefixed tolerance.
mathematics
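A minimal sketch of the mesh-refinement step based on Richardson's extrapolation: for a method of order p, the solutions computed with mesh sizes h and h/2 are combined to cancel the leading error term. The quadratic error model below is an assumed stand-in for the option-price solver.

```python
def richardson(coarse, fine, p):
    """Richardson extrapolation for a method of order p: combine solutions
    at mesh sizes h (coarse) and h/2 (fine) to cancel the leading C*h^p
    error term; `fine` must be evaluated at the same nodes as `coarse`."""
    return fine + (fine - coarse) / (2 ** p - 1)

# Toy check on a quantity with a known h^2 error model, standing in for
# the computed option price: f(h) = f_exact + 0.3*h^2 + 0.05*h^4.
f_exact = 1.0
f = lambda h: f_exact + 0.3 * h**2 + 0.05 * h**4
h = 0.1
print(abs(f(h) - f_exact))                             # O(h^2) error
print(abs(richardson(f(h), f(h / 2), p=2) - f_exact))  # O(h^4) error
```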
The Waring rank of the generic $d \times d$ determinant is bounded above by $d \cdot d!$. This improves previous upper bounds, which were of the form of an exponential factor times the factorial. Our upper bound comes from an explicit power sum decomposition. We describe some of the symmetries of the decomposition and set-theoretic defining equations for the terms of the decomposition.
mathematics
A periodic spatial modulation, as created by a moir\'e pattern, has been studied extensively with a view to engineering and tuning the properties of graphene. Graphene encapsulated in hexagonal boron nitride (hBN), when slightly misaligned with both the top and bottom hBN layers, experiences two interfering moir\'e patterns, resulting in a so-called super-moir\'e (SM). This leads to a reconstruction of the lattice and of the electronic spectrum. A geometrical construction of the non-relaxed SM patterns allows us to indicate qualitatively the induced changes in the electronic properties and to locate the SM features in the density of states and in the conductivity. To emphasize the effect of lattice relaxation, we report band gaps at all Dirac-like points in the hole-doped part of the reconstructed spectrum, which are expected to be enhanced when interaction effects are included. Our result distinguishes effects due to lattice relaxation from those due to the interfering SM, and provides a clear picture of the origin of recently observed effects in such trilayer heterostructures.
condensed matter
The high-energy scattering of massive electroweak bosons, known as vector boson scattering (VBS), is a sensitive probe of new physics. VBS signatures will be thoroughly and systematically investigated at the LHC with the large data samples already available and those that will be collected in the near future. Searches for deviations from Standard Model (SM) expectations in VBS facilitate tests of the Electroweak Symmetry Breaking (EWSB) mechanism. Current state-of-the-art tools and theory developments, together with the latest experimental results and the studies foreseen for the near future, are summarized. Existing Beyond the SM (BSM) models that could be tested with such studies are reviewed, as are data analysis strategies for understanding the interplay between models and the effective field theory paradigm used to interpret experimental results. This document is a summary of the EU COST network "VBScan" workshop on the sensitivity of VBS processes to BSM frameworks, which took place December 4-5, 2019 at the LIP facilities in Lisbon, Portugal. In this manuscript we outline the scope of the workshop, summarize the different contributions from theory and experiment, and discuss the relevant findings.
high energy physics phenomenology
We develop a unified framework for the construction of soft dressings at boundaries of spacetime, such as the null infinity of Minkowski spacetime and the horizon of a Schwarzschild black hole. The construction is based on an old proposal of Mandelstam for quantizing QED and considers matter fields dressed by Wilson lines. Along time-like paths, the Wilson lines puncturing the boundary are the analogs of flat space Faddeev-Kulish dressings. We focus on the Schwarzschild black hole where our framework provides a quantum-field-theoretical perspective of the Hawking-Perry-Strominger viewpoint that black holes carry soft hair, through a study of the Wilson line dressings, localized on the horizon.
high energy physics theory