Dataset columns: text (string, 11 to 9.77k characters) and label (string, 2 to 104 characters).
We propose a black-box algorithm called {\it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF). AdVIL employs two variational distributions to approximately infer the latent variables and estimate the partition function of an MRF, respectively. The two variational distributions provide an estimate of the negative log-likelihood of the MRF as a minimax optimization problem, which is solved by stochastic gradient descent. AdVIL is proven convergent under certain conditions. On one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs. On the other hand, compared with existing black-box methods, AdVIL provides a tighter estimate of the log partition function and achieves much better empirical results.
computer science
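The minimax-plus-SGD optimization pattern mentioned in the AdVIL abstract above can be illustrated on a toy saddle-point problem. The sketch below uses alternating gradient steps on a hand-picked objective; it does not reproduce AdVIL's actual variational bound, which involves two learned variational distributions.

```python
# Generic illustration of solving a minimax problem by alternating stochastic
# gradient steps. The toy objective f(x, y) = x*y + 0.1*x^2 - 0.1*y^2 has its
# saddle point at the origin; AdVIL's actual NLL-bound objective is not shown.
import numpy as np

def grad_x(x, y):   # gradient for the minimizing player
    return y + 0.2 * x

def grad_y(x, y):   # gradient for the maximizing player
    return x - 0.2 * y

x, y, lr = 2.0, -1.5, 0.05
for step in range(2000):
    x -= lr * grad_x(x, y)        # descend in x
    y += lr * grad_y(x, y)        # ascend in y
print(round(x, 4), round(y, 4))   # both approach 0.0, the saddle point
```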
If the early universe is dominated by an energy density which evolves other than radiation-like, the normal Hubble-temperature relation $H\propto T^2$ is broken, and dark matter relic density calculations in this era can be significantly different. We first highlight that for a population of states $\phi$ sourcing an initial expansion rate of the form $H\propto T^{2+n/2}$ for $n\geq-4$, during the period of appreciable $\phi$ decays the evolution transitions to $H\propto T^4$. The decays of $\phi$ imply a source of entropy production in the thermal bath which alters the Boltzmann equations and impacts the dark matter relic abundance. We show that the form of the initial expansion rate leaves a lasting imprint on relic densities established while $H\propto T^4$, since the value of the exponent $n$ changes the temperature evolution of the thermal bath. In particular, a dark matter relic density set via freeze-in or non-thermal production is highly sensitive to the temperature dependence of the initial expansion rate. This work generalises earlier studies which assumed initial expansion rates due to matter or kination domination.
high energy physics phenomenology
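The exponent in the $H\propto T^{2+n/2}$ ansatz quoted above can be read as a statement about how the dominant energy density redshifts. Assuming the parametrization $\rho_\phi\propto a^{-(4+n)}$ and the usual scaling $T\propto 1/a$ (assumptions made here for illustration, not taken from the paper), the relation follows in one line:

```latex
% Assuming \rho_\phi \propto a^{-(4+n)} and T \propto 1/a:
\[
  H^2 \;\propto\; \rho_\phi \;\propto\; a^{-(4+n)} \;\propto\; T^{4+n}
  \quad\Longrightarrow\quad
  H \;\propto\; T^{\,2+n/2}.
\]
% Consistency checks: n = 0 recovers radiation-like expansion (H \propto T^2),
% n = 2 corresponds to kination (H \propto T^3), and n = -1 to matter domination.
```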
Within the holographic cosmology paradigm, specifically the model of McFadden and Skenderis, but more generally than that, the cosmological constant is found to naturally flow from a large value to a small value, which can even be as low as the needed $\sim 10^{-120}$ times the original, along with the dual RG flow. Within this context, then, the cosmological constant problem is mapped to a simple quantum field theory property, even though the exact mechanism for it in gravity is still obscure. I consider several examples of gravity duals to explain the mechanism.
high energy physics theory
While the interpretability of machine learning models is often equated with their mere syntactic comprehensibility, we think that interpretability goes beyond that, and that human interpretability should also be investigated from the point of view of cognitive science. In particular, the goal of this paper is to discuss to what extent cognitive biases may affect human understanding of interpretable machine learning models, in particular of logical rules discovered from data. Twenty cognitive biases are covered, as are possible debiasing techniques that can be adopted by designers of machine learning algorithms and software. Our review transfers results obtained in cognitive psychology to the domain of machine learning, aiming to bridge the current gap between these two areas. It needs to be followed by empirical studies specifically focused on the machine learning domain.
statistics
We evaluate the spin-wave spectrum and dynamic susceptibility in layered superconductors with helical interlayer magnetic structure. We especially focus on the structure in which the moments rotate 90$^{\circ}$ from layer to layer, as realized in the iron pnictide RbEuFe$_{4}$As$_{4}$. The spin-wave spectrum in superconductors is strongly renormalized due to the long-range electromagnetic interactions between the oscillating magnetic moments. This leads to a strong enhancement of the frequency of the mode coupled with the uniform field, and this enhancement exists only within a narrow range of the c-axis wave vectors of the order of the inverse London penetration depth. The key feature of materials like RbEuFe$_{4}$As$_{4}$ is that this uniform mode corresponds to the maximum frequency of the spin-wave spectrum with respect to the c-axis wave vector. As a consequence, the high-frequency surface resistance acquires a very distinct asymmetric feature spreading between the bare and renormalized frequencies. We also consider excitation of spin waves via the Josephson effect in a tunneling contact between helical-magnetic and conventional superconductors and study the interplay between the spin-wave features and geometrical cavity resonances in the current-voltage characteristics.
condensed matter
We investigate the dynamic evolution and thermodynamic process of a driven quantum system immersed in a finite-temperature heat bath. A Born-Markovian quantum master equation is formally derived for the time-dependent system with discrete energy levels. This quantum master equation can be applied to situations with a broad range of driving speeds and bath temperatures and thus be used to study the finite-time quantum thermodynamics even when nonadiabatic transition and dissipation coexist. The dissipative Landau-Zener model is analyzed as an example. The population evolution and transition probability of the model reveal the importance of the competition between driving and dissipation beyond the adiabatic regime. Moreover, local maximums of irreversible entropy production occur at intermediate sweep velocity and finite temperature, which the low-dissipation model cannot describe.
quantum physics
We present a comparison of the dispersion relations derived for anti-plane surface waves using the two distinct approaches of surface elasticity vis-a-vis lattice dynamics. We consider an elastic half-space with surface stresses described within the Gurtin-Murdoch model, and present a formulation of its discrete counterpart, that is, a square-lattice half-plane with a surface row of particles having mass and elastic bonds different from the ones in the bulk. As both models possess anti-plane surface waves, we discuss similarities between the continuum and discrete viewpoints. In particular, in the context of the behaviour of the phase velocity, we discuss the possible characterization of the surface shear modulus through the parameters involved in the lattice formulation.
physics
The magnetic flux cancellation on the Sun plays a crucial role in determining the manner in which the net magnetic flux changes in every solar cycle, affecting the large scale evolution of the coronal magnetic fields and heliospheric environment. We investigate, in this paper, the correlation between the solar magnetic flux cancelled at the equator and the solar magnetic flux transported to the poles by comparing the net amount of magnetic flux in the latitude belts 0${^{\circ}}$-5${^{\circ}}$ and 45${^{\circ}}$-60${^{\circ}}$, estimated using synoptic magnetograms from the National Solar Observatory at Kitt Peak, during Solar Cycles 21-24. We find a good correlation between the net flux in the latitude bands 0${^{\circ}}$-5${^{\circ}}$ and 55${^{\circ}}$-60${^{\circ}}$ for the Northern as well as for the Southern hemispheres. However, we find a poor correlation if the net flux for the Northern and Southern hemispheres is considered together. In addition, we investigate the correlation between the net flux cancelled at the equator and the strength of the solar polar field at cycle minimum, and find a good correlation between the two. We discuss the implication of the correlation of flux transported across the equator and to the poles, which has an important bearing on the estimation of the residual polar cap field strength at the cycle minimum. This can be used as a predictive tool for estimating the amplitude of subsequent cycles, and we use this to estimate maximum smoothed sunspot numbers of 77$\pm$5 and 85$\pm$5 for the Northern and Southern hemispheres, respectively, for the upcoming Solar Cycle 25.
astrophysics
The icosahedral-like polyhedral fraction (ICO-like fraction) has been studied as a criterion for predicting the glass-forming ability of bulk ternary metallic glasses, Al90Sm8X2 (X = Al (binary), Cu, Ag, Au), using ab initio molecular dynamics (AIMD) simulations. We found that the ICO-like fraction can be determined with adequate precision to explore correlations with AIMD simulations. We then demonstrated that the ICO-like fraction correlates with the critical cooling rate, which is a widely used intrinsic measure of glass-forming ability. These results suggest that the ICO-like fraction from AIMD simulations may offer a useful guide for searching and screening for good glass formers.
condensed matter
Stopping criteria for Stochastic Gradient Descent (SGD) methods play important roles from enabling adaptive step size schemes to providing rigor for downstream analyses such as asymptotic inference. Unfortunately, current stopping criteria for SGD methods are often heuristics that rely on asymptotic normality results or convergence to stationary distributions, which may fail to exist for nonconvex functions and, thereby, limit the applicability of such stopping criteria. To address this issue, in this work, we rigorously develop two stopping criteria for SGD that can be applied to a broad class of nonconvex functions, which we term Bottou-Curtis-Nocedal functions. Moreover, as a prerequisite for developing these stopping criteria, we prove that the gradient function evaluated at SGD's iterates converges strongly to zero for Bottou-Curtis-Nocedal functions, which addresses an open question in the SGD literature. As a result of our work, our rigorously developed stopping criteria can be used to develop new adaptive step size schemes or bolster other downstream analyses for nonconvex functions.
mathematics
We experimentally demonstrate the generation of a temporal pulse doublet from the propagation of an initial super-Gaussian waveform in a nonlinear focusing medium. The picosecond structures are characterized both in amplitude and phase and their close-to-Gaussian Fourier-transform limited shape is found in excellent agreement with numerical simulations. This nonlinear fiber-based single-stage reshaping scheme is energy efficient, can sustain GHz repetition rates and temporal compression factors around 10 are demonstrated.
physics
Two novel visual cryptography (VC) schemes are proposed by combining VC with single-pixel imaging (SPI) for the first time. It is pointed out that the overlapping of visual key images in VC is similar to the superposition of pixel intensities by a single-pixel detector in SPI. In the first scheme, QR-code VC is designed by using opaque sheets instead of transparent sheets. The secret image can be recovered when identical illumination patterns are projected onto multiple visual key images and a single detector is used to record the total light intensities. In the second scheme, the secret image is shared by multiple illumination pattern sequences and it can be recovered when the visual key patterns are projected onto identical items. The application of VC can be extended to more diversified scenarios by our proposed schemes.
electrical engineering and systems science
We propose a postselecting parity-swap amplifier for Schr\"odinger cat states that does not require the amplified state to be known a priori. The device is based on a previously-implemented state comparison amplifier for coherent states. It consumes only Gaussian resource states, which provides an advantage over some cat state amplifiers. It requires simple Geiger-mode photodetectors and works with high fidelity and approximately twofold gain.
quantum physics
One main limitation of the existing optimal scaling results for Metropolis--Hastings algorithms is that the assumptions on the target distribution are unrealistic. In this paper, we consider optimal scaling of random-walk Metropolis algorithms on general target distributions in high dimensions arising from practical MCMC models in Bayesian statistics. For optimal scaling by maximizing expected squared jumping distance (ESJD), we show the asymptotically optimal acceptance rate $0.234$ can be obtained under general realistic sufficient conditions on the target distribution. The new sufficient conditions are easy to verify and may hold for some general classes of MCMC models arising from Bayesian statistics applications, which substantially generalize the product i.i.d. condition required in most of the existing literature on optimal scaling. Furthermore, we show one-dimensional diffusion limits can be obtained under slightly stronger conditions, which still allow dependent coordinates of the target distribution. We also connect the new diffusion limit results to complexity bounds of Metropolis algorithms in high dimensions.
statistics
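The $0.234$ acceptance rate discussed above is easy to reproduce numerically in the classical product i.i.d. setting. The sketch below (not the paper's construction) runs a random-walk Metropolis chain on a high-dimensional Gaussian target with the standard $\ell/\sqrt{d}$ proposal scaling:

```python
# Minimal sketch: random-walk Metropolis on a d-dimensional standard Gaussian
# target with proposal scale sigma = ell / sqrt(d). Near the classic optimum
# ell ~ 2.38, the empirical acceptance rate approaches 0.234.
import numpy as np

def rwm_acceptance_rate(d=100, ell=2.38, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    log_target = lambda x: -0.5 * np.dot(x, x)   # i.i.d. N(0,1) product target
    x = rng.standard_normal(d)                   # start in stationarity
    sigma = ell / np.sqrt(d)
    accepted = 0
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(d)
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
            accepted += 1
    return accepted / n_iter

print(rwm_acceptance_rate())   # typically prints a value close to 0.234
```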
This paper investigates the problem of classification of unmanned aerial vehicles (UAVs) from radio frequency (RF) fingerprints in the low signal-to-noise ratio (SNR) regime. We use convolutional neural networks (CNNs) trained with both RF time-series images and the spectrograms of 15 different off-the-shelf drone controller RF signals. When using time-series signal images, the CNN extracts features from the signal transient and envelope. As the SNR decreases, this approach fails dramatically because the information in the transient is lost in the noise, and the envelope is distorted heavily. In contrast to the time-series representation of the RF signals, with spectrograms, it is possible to focus only on the desired frequency interval, i.e., the 2.4 GHz ISM band, and filter out any other signal component outside of this band. These advantages provide a notable performance improvement over the time-series signals-based methods. To further increase the classification accuracy of the spectrogram-based CNN, we denoise the spectrogram images by truncating them to a limited spectral density interval. Creating a single model using spectrogram images of noisy signals and tuning the CNN model parameters, we achieve a classification accuracy varying from 92% to 100% for an SNR range from -10 dB to 30 dB, which, to the best of our knowledge, significantly outperforms the existing approaches.
electrical engineering and systems science
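The spectrogram-plus-truncation preprocessing described above can be sketched with standard tools. The snippet below uses a synthetic signal and placeholder sampling rate and clipping threshold rather than the paper's actual RF captures or settings, and the downstream CNN is not shown.

```python
# Illustrative sketch only: build a spectrogram image from a (synthetic) RF
# capture and keep a limited spectral-density interval, mimicking the
# denoising-by-truncation step described above.
import numpy as np
from scipy.signal import spectrogram

fs = 20e6                                   # assumed baseband sampling rate
t = np.arange(int(1e5)) / fs
rf = np.cos(2 * np.pi * 2e6 * t) + 0.5 * np.random.randn(t.size)  # tone + noise

f, tt, Sxx = spectrogram(rf, fs=fs, nperseg=1024, noverlap=512)
Sxx_db = 10 * np.log10(Sxx + 1e-12)         # spectral density in dB

# Truncate to a limited dB interval so the classifier sees less noise-dominated area.
Sxx_db = np.clip(Sxx_db, a_min=Sxx_db.max() - 60, a_max=None)

# The frequency-by-time image Sxx_db would then be fed to a CNN classifier.
print(Sxx_db.shape)
```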
There is renewed interest in using the coherence between beams generated in separate down-converter sources for new applications in imaging, spectroscopy, microscopy and optical coherence tomography (OCT). These schemes make use of continuous wave (CW) pumping in the low parametric gain regime, which produces frequency correlations, and frequency entanglement, between signal-idler pairs generated in each single source. But can induced coherence still be observed if there is no frequency correlation, so the biphoton wavefunction is factorable? We will show that this is the case, and this might be an advantage for OCT applications. High axial resolution requires a large bandwidth. For CW pumping this requires the use of short nonlinear crystals. This is detrimental since short crystals generate small photon fluxes. We show that the use of ultrashort pump pulses allows improving axial resolution even for long crystals that produce higher photon fluxes.
quantum physics
In this paper, we study the propagation speeds of reaction-diffusion-advection (RDA) fronts in time-periodic cellular and chaotic flows with Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We first apply the variational principle to reduce the computation of KPP front speeds to a principal eigenvalue problem of a linear advection-diffusion operator with space-time periodic coefficients on a periodic domain. To this end, we develop efficient Lagrangian particle methods to compute the principal eigenvalue through the Feynman-Kac formula. By estimating the convergence rate of Feynman-Kac semigroups and the operator splitting methods for approximating the linear advection-diffusion solution operators, we obtain convergence analysis for the proposed numerical methods. Finally, we present numerical results to demonstrate the accuracy and efficiency of the proposed method in computing KPP front speeds in time-periodic cellular and chaotic flows, especially the time-dependent Arnold-Beltrami-Childress (ABC) flow and time-dependent Kolmogorov flow in three-dimensional space.
mathematics
We evaluate the effectiveness of different classical potentials to predict the thermodynamics of a number of organic solid form polymorphs relative to experimentally reported values using the quasi-harmonic approximation. Using the polarizable potential AMOEBA we are able to predict the correct sign of the enthalpy difference for 71+/-12 % of the polymorphs. Alternatively, all point charge potentials perform on par with random chance (50%) of predicting the correct sign for the enthalpy. We find that the entropy is less sensitive to the accuracy of the potential, with all force fields, excluding CGenFF, reporting the correct sign of the entropy for 64+/-13 - 75+/-11 % of the systems. Predicting the correct sign of the enthalpy and entropy differences can help indicate the low and high temperature stability of the polymorphs; unfortunately, the error relative to experiment in these predicted values can be as large as 1 - 2.5 kcal/mol at the transition temperature.
condensed matter
We propose an adiabatic method for optimal phonon temperature estimation using trapped ions which can be operated beyond the Lamb-Dicke regime. The quantum sensing technique relies on a time-dependent red-sideband transition of phonon modes, described by the non-linear Jaynes-Cummings model in general. A unique feature of our sensing technique is that the relevant information of the phonon thermal distributions can be transferred to the collective spin degree of freedom. We show that each of the thermal state probabilities is adiabatically mapped onto the respective collective spin-excitation configuration and thus the temperature estimation is carried out simply by performing a spin-dependent laser fluorescence measurement at the end of the adiabatic transition. We characterize the temperature uncertainty in terms of the Fisher information and show that the state projection measurement saturates the fundamental quantum Cram\'er-Rao bound for a quantum oscillator at thermal equilibrium.
quantum physics
A terahertz (THz) waveguide based on 3D-printed metallic photonic crystals, consisting of periodic metal rod arrays (MRAs), is experimentally and numerically demonstrated over 0.1-0.6 THz. The results demonstrate that such a waveguide supports two waveguide modes: a fundamental and a high-order TM mode. The high-order TM mode shows high field confinement, and it is sensitive to geometry changes. By tuning the metal rod interspace, the spectral positions, bandwidths, and transmittances of the high-frequency band can be optimized. The investigation shows that a mode conversion between high-order modes occurs when the MRA symmetry is broken by changing the air interspace.
physics
We compute the effect of the chiral phase transition of QCD on the axion mass and self-coupling; the coupling of the axion to the quarks at finite temperature is described within the Nambu-Jona-Lasinio model. We find that the axion mass decreases with temperature, following the response of the topological susceptibility, in agreement with previous results obtained within chiral perturbation theory at low and intermediate temperatures. As expected, the comparison with lattice data shows that chiral perturbation theory fails to reproduce the topological susceptibility around the chiral critical temperature, while the Nambu$-$Jona-Lasinio model offers a better qualitative agreement with these data, hence a more reliable estimate of the temperature dependence of the axion mass in the presence of a hot quark medium. We complete our study by computing the temperature dependence of the self-coupling of the axion, finding that this coupling decreases at and above the phase transition. The model used in our work as well as the results presented here pave the way to the computation of the in-medium effects of hot and/or dense quark-gluon plasma on the axion properties.
high energy physics phenomenology
Let $p$ be an odd prime. For any $p$-adic integer $a$ we let $\overline{a}$ denote the unique integer $x$ with $-p/2<x<p/2$ and $x-a$ divisible by $p$. In this paper we study some permutations involving quadratic residues modulo $p$. For instance, we consider the following three sequences. \begin{align*} &A_0: \overline{1^2},\ \overline{2^2},\ \cdots,\ \overline{((p-1)/2)^2},\\ &A_1: \overline{a_1},\ \overline{a_2},\ \cdots,\ \overline{a_{(p-1)/2}},\\ &A_2: \overline{g^2},\ \overline{g^4},\ \cdots,\ \overline{g^{p-1}}, \end{align*} where $g\in\mathbb{Z}$ is a primitive root modulo $p$ and $1\le a_1<a_2<\cdots<a_{(p-1)/2}\le p-1$ are all the quadratic residues modulo $p$. Obviously $A_i$ is a permutation of $A_j$ and we call this permutation $\sigma_{i,j}$. Sun obtained the sign of $\sigma_{0,1}$ when $p\equiv 3\pmod4$. In this paper we give the sign of $\sigma_{0,1}$ and determine the sign of $\sigma_{0,2}$ when $p\equiv 1\pmod 4$.
mathematics
Motivated by the HRRT-formula for holographic entanglement entropy, we consider the following question: what are the position and the surface area of extremal surfaces in a perturbed geometry, given their anchor on the asymptotic boundary? We derive explicit expressions for the change in position and surface area, thereby providing a closed form expression for the canonical energy. We find that a perturbation governed by some small parameter $\lambda$ yields an expansion of the surface area in terms of a highly non-local expression involving multiple integrals of geometric quantities over the original extremal surface.
high energy physics theory
We present the first public version of Caravel, a C++17 framework for the computation of multi-loop scattering amplitudes in quantum field theory, based on the numerical unitarity method. Caravel is composed of modules for the $D$-dimensional decomposition of integrands of scattering amplitudes into master and surface terms, the computation of tree-level amplitudes in floating-point or finite-field arithmetic, the numerical computation of one- and two-loop amplitudes in QCD and Einstein gravity, and functional reconstruction tools. We provide programs that showcase Caravel's main functionalities and allow one to compute selected one- and two-loop amplitudes.
high energy physics phenomenology
We study co-dimension two monodromy defects in theories of conformally coupled scalars and free Dirac fermions in arbitrary $d$ dimensions. We characterise this family of conformal defects by computing the one-point functions of the stress-tensor and conserved current for Abelian flavour symmetries as well as two-point functions of the displacement operator. In the case of $d=4$, the normalisation of these correlation functions is related to defect Weyl anomaly coefficients, and thus provides crucial information about the defect conformal field theory. We provide explicit checks on the values of the defect central charges by calculating the universal part of the defect contribution to entanglement entropy, and further, we use our results to extract the universal part of the vacuum R\'enyi entropy. Moreover, we leverage the non-supersymmetric free field results to compute a novel defect Weyl anomaly coefficient in a $d=4$ theory of free $\mathcal{N}=2$ hypermultiplets. Including singular modes in the defect operator product expansion of fundamental fields, we identify notable relevant deformations in the singular defect theories and show that they trigger a renormalisation group flow towards an IR fixed point with the most regular defect OPE. We also study Gukov-Witten defects in free $d=4$ Maxwell theory and show that their central charges vanish.
high energy physics theory
The rapid increase in the number of photovoltaic systems (PVSs) across low-voltage distribution systems requires real-time monitoring of those systems. However, considering the necessary investment cost, the installation of a real-time monitoring infrastructure does not seem achievable in the near future. Thus, alternative approaches to meet this need are crucial. In this study, a method is proposed to nowcast the power generation of behind-the-meter PVSs. The method relies on the strong correlation between the harmonic current injection of the PVS inverter and the power generation of the PVS. An artificial neural network (ANN) is utilized to model the relation between the predictors and the aggregated PVS output, i.e., the generated power. Finally, the effectiveness of the approach is assessed with real field data.
electrical engineering and systems science
This paper aims to provide a selective survey about the knowledge distillation (KD) framework for researchers and practitioners to take advantage of it for developing new optimized models in the deep neural network field. To this end, we give a brief overview of knowledge distillation and some related works, including learning using privileged information (LUPI) and generalized distillation (GD). Even though knowledge distillation based on the teacher-student architecture was initially devised as a model compression technique, it has found versatile applications over various frameworks. In this paper, we review the characteristics of knowledge distillation from the hypothesis that the three important ingredients of knowledge distillation are the distilled knowledge and loss, the teacher-student paradigm, and the distillation process. In addition, we survey the versatility of knowledge distillation by studying its direct applications and its usage in combination with other deep learning paradigms. Finally, we present some future works in knowledge distillation, including explainable knowledge distillation, where the analytical analysis of the performance gain is studied, and self-supervised learning, which is a hot research topic in the deep learning community.
computer science
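As background for the survey above, the classic teacher-student distillation loss (soft targets at temperature T mixed with the hard-label cross-entropy) can be written in a few lines. This is the generic recipe, not code from the paper, and the temperature and mixing weight below are illustrative placeholders.

```python
# Hinton-style distillation loss: a soft-target cross-entropy against the
# temperature-softened teacher (equivalent to the KL term up to an additive
# constant) mixed with the usual hard-label cross-entropy.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    p_t = softmax(teacher_logits, T)                  # softened teacher targets
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    kd = -(p_t * log_p_s).sum(axis=-1).mean() * T**2  # soft-target term (scaled by T^2)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a batch of 2 examples with 3 classes.
s = np.array([[1.0, 0.2, -0.5], [0.1, 2.0, 0.3]])
t = np.array([[2.0, 0.0, -1.0], [0.0, 3.0, 0.5]])
print(distillation_loss(s, t, labels=np.array([0, 1])))
```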
Motivated by the recent progress in solving the large charge sector of conformal field theories, we revisit the mass-charge relation of boson stars asymptotic to global AdS. We construct and classify a large number of electrically charged boson star solutions in a toy model and two supergravity models arising from the $SU(3)$ and $U(1)^4$ truncations of $D=4$ $SO(8)$ gauged maximal supergravity. We find a simple ansatz for the chemical potential that can fit the numerical data with striking accuracy for the full range of charge. Combining with the first law of thermodynamics, we can then evaluate the mass as a function of the charge and obtain the free energy in the fixed charge ensemble. We show that in the toy model, the ground state can be either the extremal RN black hole or the boson stars depending on the parameter region. For the $SU(3)$ truncation, there always exists a boson star that has smaller free energy than the extremal RN black hole, in contrast to the $U(1)^4$ model where the extremal RN black hole is always the ground state. In all models, for boson star solutions with arbitrarily large charge, we show that the large charge expansion of the mass reproduces the same structure exhibited on the CFT side.
high energy physics theory
In this paper, we extend the T-duality isomorphism by Gualtieri and Cavalcanti, from invariant exact Courant algebroids, to exotic exact Courant algebroids such that the momentum and winding numbers are exchanged, filling in a gap in the literature.
high energy physics theory
We describe several methods to construct minimal foliations by hyperbolic surfaces on closed 3-manifolds, and discuss the properties of the examples thus obtained.
mathematics
Congruential pseudorandom number generators rely on good multipliers, that is, integers that have good performance with respect to the spectral test. We provide lists of multipliers with a good lattice structure up to dimension eight and up to lag eight for generators with typical power-of-two moduli, analyzing in detail multipliers close to the square root of the modulus, whose product can be computed quickly.
computer science
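For context, the generators the abstract above refers to are ordinary linear congruential generators with a power-of-two modulus. The sketch below implements one; the multiplier and increment are Knuth's well-known MMIX constants, used purely as an example rather than values taken from the paper's tables.

```python
# A 64-bit linear congruential generator: x <- (a*x + c) mod 2^64.
# Full period requires a ≡ 1 (mod 4) and an odd increment c.
MASK64 = (1 << 64) - 1

class LCG64:
    def __init__(self, multiplier, increment=1442695040888963407, seed=1):
        self.a = multiplier
        self.c = increment | 1          # force the increment to be odd
        self.state = seed & MASK64

    def next_u64(self):
        self.state = (self.a * self.state + self.c) & MASK64
        return self.state

# Knuth's MMIX multiplier, used only as an illustration; see the paper's lists
# for multipliers selected by the spectral test.
rng = LCG64(multiplier=6364136223846793005)
print([hex(rng.next_u64()) for _ in range(3)])
```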
In this paper, we present an analysis of the pulsating behavior of the $Kepler$ target KIC 10284901. The Fourier transform of the high-precision light curve reveals seven independent frequencies for its light variations. Among them, the first two frequencies are the main pulsation modes: F0 = 18.994054(1) $\rm{day^{-1}}$ and F1 = 24.335804(4) $\rm{day^{-1}}$; the ratio F0/F1 = 0.7805 classifies this star as a double-mode high-amplitude $\delta$ Scuti (HADS) star. Another two frequencies, $f_{m1}$ = 0.4407 day$^{-1}$ and $f_{m2}$ = 0.8125 day$^{-1}$, are detected directly, and the modulations of $f_{m1}$ and $f_{m2}$ to the F0 and F1 modes (seen as quintuplet structures centered on these two modes in the frequency spectrum) are also found. This is the first detection of a double-modulation effect in HADS stars. The features of the frequency pattern and the ratio ($f_{m1}$/$f_{m2}$ $\approx$ 1:2), as well as the cyclic variation of the amplitude of the two dominant pulsation modes, which seem to be similar to that in Blazhko RR Lyrae stars, indicate this modulation might be related to the Blazhko effect. A preliminary analysis suggests that KIC 10284901 is at the bottom of the HADS instability strip and situated on the main sequence.
astrophysics
Multi-component dark matter scenarios are studied in the model with $U(1)_X$ dark gauge symmetry that is broken into its product subgroup $Z_2 \times Z_3$ \`{a} la the Krauss-Wilczek mechanism. In this setup, there exist two types of dark matter fields, $X$ and $Y$, distinguished by different $Z_2 \times Z_3$ charges. The real and imaginary parts of the $Z_2$-charged field, $X_R$ and $X_I$, get different masses from the $U(1)_X$ symmetry breaking. The field $Y$, which is another dark matter candidate due to the unbroken $Z_3$ symmetry, belongs to the Strongly Interacting Massive Particle (SIMP)-type dark matter. Both $X_I$ and $X_R$ may contribute to $Y$'s $3\rightarrow 2$ annihilation processes, opening a new class of SIMP models with local dark gauge symmetry. Depending on the mass difference between $X_I$ and $X_R$, we have either two-component or three-component dark matter scenarios. In particular, two- or three-component SIMP scenarios can be realised not only for a small mass difference between $X$ and $Y$, but also for a large mass hierarchy between them, which is a new and unique feature of the present model. We consider both theoretical and experimental constraints, and present four case studies of the multi-component dark matter scenarios.
high energy physics phenomenology
Automatic testing of mobile applications has been a well-researched area in recent years. However, testing in industry is still a very manual practice, as research results have not been fully transferred and adopted. Considering mobile applications, manual testing has the additional burden of adequate testing posed by a large number of available devices and different configurations, as well as the maintenance and setup of such devices. In this paper, we propose and evaluate the use of a model-based test generation approach, where generated tests are executed on a set of cloud-hosted real mobile devices. By using a model-based approach, we generate dynamic, less brittle, and simple-to-implement test cases. The test execution on multiple real devices with different configurations increases the confidence in the implementation of the system under test. Our evaluation shows that the used approach produces a high coverage of the parts of the application related to user interactions. Nevertheless, the inclusion of external services in test generation is required in order to additionally increase the coverage of the complete application. Furthermore, we present the lessons learned while transferring and implementing this approach in an industrial context and applying it to the real product.
computer science
Traffic speed data imputation is a fundamental challenge for data-driven transport analysis. In recent years, with the ubiquity of GPS-enabled devices and the widespread use of crowdsourcing alternatives for the collection of traffic data, transportation professionals increasingly look to such user-generated data for many analysis, planning, and decision support applications. However, due to the mechanics of the data collection process, crowdsourced traffic data such as probe-vehicle data is highly prone to missing observations, making accurate imputation crucial for the success of any application that makes use of that type of data. In this article, we propose the use of multi-output Gaussian processes (GPs) to model the complex spatial and temporal patterns in crowdsourced traffic data. While the Bayesian nonparametric formalism of GPs allows us to model observation uncertainty, the multi-output extension based on convolution processes effectively enables us to capture complex spatial dependencies between nearby road segments. Using 6 months of crowdsourced traffic speed data or "probe vehicle data" for several locations in Copenhagen, the proposed approach is empirically shown to significantly outperform popular state-of-the-art imputation methods.
statistics
In this work a theoretical analysis of the lateral resolution limits of an optical setup in a microscopy setting with an entangled photon pair source is performed. A correlated biphoton wavefunction of a general Gaussian form is propagated through a given experimental setup. Next, the signal photon's spatial mode profile width and central position are investigated in the heralding scenario, which means the information about the idler photon is not neglected. The impact of the correlations on the signal photon's spatial mode profile is compared in the heralding and non-heralding scenarios. A realistic experimental scheme is considered, taking into account finite-size optics and single-photon detectors. This method allows us to significantly alleviate the photon loss problem in the optical setup, which is a crucial factor limiting practical applications of single-photon-based techniques. It is achieved by an effect similar to the phase shaping introduced by spatial light modulators.
quantum physics
Because of the relatively low electron mobility of Ga2O3, it is important to identify proper current spreading materials. Fluorine-doped SnO2 (FTO) offers superior properties to those of indium tin oxide (ITO), including higher thermal stability, larger bandgap, and lower cost. However, the Ga2O3/FTO heterojunction, including the important band offset and the I-V characteristics, has not been reported. In this work, we have grown the Ga2O3/FTO heterojunction and performed X-ray photoelectron spectroscopy (XPS) measurements. The conduction and valence band offsets were determined to be 0.11 and 0.42 eV, indicating a minor barrier for electron transport and type-I characteristics. The subsequent I-V measurement of the Ga2O3/FTO heterojunction exhibited ohmic behavior. The results of this work demonstrate the excellent candidacy of FTO for current spreading layers of Ga2O3 devices for high temperature and UV applications.
physics
Recent studies of pairing and charge order in materials such as FeSe, SrTiO$_3$, and 2H-NbSe$_2$ have suggested that momentum dependence of the electron-phonon coupling plays an important role in their properties. Initial attempts to study Hamiltonians which either do not include or else truncate the range of Coulomb repulsion have noted that the resulting spatial non-locality of the electron-phonon interaction leads to a dominant tendency to phase separation. Here we present Quantum Monte Carlo results for such models in which we incorporate both on-site and intersite electron-electron interactions. We show that these can stabilize phases in which the density is homogeneous and determine the associated phase boundaries. As a consequence, the physics of momentum dependent electron-phonon coupling can be determined outside of the trivial phase separated regime.
condensed matter
Superconducting stacks and bulks can act as very strong magnets (more than 17 T), but they lose their magnetization in the presence of alternating (or ripple) transverse magnetic fields, due to the dynamic magneto-resistance. This demagnetization is a major concern for applications requiring long run times, such as motors and generators, where ripple fields are of high amplitude and frequency. We have developed a numerical model based on dynamic magneto-resistance that is much faster than the conventional power-law-resistivity model, enabling us to simulate a high number of cycles with the same accuracy. We simulate the demagnetization behavior of superconducting stacks made of 10-100 tapes for up to 2 million cycles of applied ripple field. We found that for a high number of cycles, the trapped field reaches non-zero stationary values for both superconducting bulks and stacks, as long as the ripple field amplitudes are below the parallel penetration field, which in stacks is determined by the penetration field of a single tape. Bulks keep substantial stationary values for much higher ripple field amplitudes than stacks, which is relevant for a high number of cycles. However, for a low number of cycles, stacks lose much less magnetization than bulks.
physics
A book about turning high-degree optimization problems into quadratic optimization problems that maintain the same global minimum (ground state). This book explores quadratizations for pseudo-Boolean optimization, perturbative gadgets used in QMA completeness theorems, and also non-perturbative k-local to 2-local transformations used for quantum mechanics, quantum annealing and universal adiabatic quantum computing. The book contains ~70 different Hamiltonian transformations, each of them on a separate page, where the cost (in number of auxiliary binary variables or auxiliary qubits, or number of sub-modular terms, or in graph connectivity, etc.), pros, cons, examples, and references are given. One can therefore look up a quadratization appropriate for the specific term(s) that need to be quadratized, much like using an integral table to look up the integral that needs to be done. This book is therefore useful for writing compilers to transform general optimization problems, into a form that quantum annealing or universal adiabatic quantum computing hardware requires; or for transforming quantum chemistry problems written in the Jordan-Wigner or Bravyi-Kitaev form, into a form where all multi-qubit interactions become 2-qubit pairwise interactions, without changing the desired ground state. Applications cited include computer vision problems (e.g. image de-noising, un-blurring, etc.), number theory (e.g. integer factoring), graph theory (e.g. Ramsey number determination), and quantum chemistry. The book is open source, and anyone can make modifications here: https://github.com/HPQC-LABS/Book_About_Quadratization.
quantum physics
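A minimal worked instance of the quadratization theme of the book above: the standard Freedman-Drineas gadget replaces a negative cubic monomial by a quadratic function of one extra binary variable while preserving the minimum. The brute-force check below confirms the identity on all assignments; this particular gadget is a textbook example, not necessarily the form used on any specific page of the book.

```python
# Verify that -x1*x2*x3 = min over z in {0,1} of z*(2 - x1 - x2 - x3),
# i.e. a degree-3 term is replaced by a quadratic with one auxiliary variable.
from itertools import product

def cubic(x1, x2, x3):
    return -x1 * x2 * x3

def quadratized(x1, x2, x3, z):
    # Expanding gives 2z - z*x1 - z*x2 - z*x3: only linear and pairwise terms.
    return z * (2 - x1 - x2 - x3)

for x in product((0, 1), repeat=3):
    assert cubic(*x) == min(quadratized(*x, z) for z in (0, 1))
print("quadratization reproduces -x1*x2*x3 on all 8 assignments")
```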
A Viking-class Europa Lander is a high-risk, high-cost venture. In its place, Europa should be explored by a series of low-cost scouts. These will be landers and small flyby craft. These missions will ascertain the nature of Europa's surface at a scale of meters to centimeters. Some will search for the presence of organic molecules. All of them will precede a large Europa Lander.
astrophysics
Nodal-line semimetals have attracted immense interest due to their unique electronic structures, such as the linear dispersion and the vanishing density of states as the Fermi energy approaches the nodes. Here, we report temperature-dependent transport and scanning tunneling microscopy/spectroscopy (STM/STS) measurements on the nodal-line semimetal ZrSiSe. Our experimental results and theoretical analyses consistently demonstrate that temperature induces Lifshitz transitions at 80 and 106 K in ZrSiSe, which results in transport anomalies at the same temperatures. More strikingly, we observe a V-shaped dip structure around the Fermi energy in the STS spectrum at low temperature, which can be attributed to the combined effect of spin-orbit coupling and an excitonic instability. Our observations indicate that correlation interactions may play an important role in ZrSiSe, which possesses quasi-two-dimensional electronic structures.
condensed matter
Graph sparsification underlies a large number of algorithms, ranging from approximation algorithms for cut problems to solvers for linear systems in the graph Laplacian. In its strongest form, "spectral sparsification" reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. In this work we demonstrate a polynomial quantum speedup for spectral sparsification and many of its applications. In particular, we give a quantum algorithm that, given a weighted graph with $n$ nodes and $m$ edges, outputs a classical description of an $\epsilon$-spectral sparsifier in sublinear time $\tilde{O}(\sqrt{mn}/\epsilon)$. This contrasts with the optimal classical complexity $\tilde{O}(m)$. We also prove that our quantum algorithm is optimal up to polylog-factors. The algorithm builds on a string of existing results on sparsification, graph spanners, quantum algorithms for shortest paths, and efficient constructions for $k$-wise independent random strings. Our algorithm implies a quantum speedup for solving Laplacian systems and for approximating a range of cut problems such as min cut and sparsest cut.
quantum physics
We propose an algorithm, HPREF (Hierarchical Partitioning by Repeated Features), that produces a hierarchical partition of a set of clusterings of a fixed dataset, such as sets of clusterings produced by running a clustering algorithm with a range of parameters. This gives geometric structure to such sets of clusterings, and can be used to visualize the set of results one obtains by running a clustering algorithm with a range of parameters.
statistics
Along with the rapid growth of Industrial Internet-of-Things (IIoT) applications and their penetration into many industry sectors, real-time wireless networks (RTWNs) have been playing a more critical role in providing real-time, reliable and secure communication services for such applications. A key challenge in RTWN management is how to ensure real-time Quality of Services (QoS) especially in the presence of unexpected disturbances and lossy wireless links. Most prior work takes centralized approaches for handling disturbances, which are slow and subject to single-point failure, and do not scale. To overcome these drawbacks, this paper presents a fully distributed packet scheduling framework called FD-PaS. FD-PaS aims to provide guaranteed fast response to unexpected disturbances while achieving minimum performance degradation for meeting the timing and reliability requirements of all critical tasks. To combat the scalability challenge, FD-PaS incorporates several key advances in both algorithm design and data link layer protocol design to enable individual nodes to make on-line decisions locally without any centralized control. Our extensive simulation and testbed results have validated the correctness of the FD-PaS design and demonstrated its effectiveness in providing fast response for handling disturbances while ensuring the designated QoS requirements.
computer science
It is known that limits on baryon-violating nucleon decays do not, in general, imply corresponding suppression of $n - \bar n$ transitions. In the context of a model with fermions propagating in higher dimensions, we investigate a related question, namely the implications of limits on $\Delta L=-1$ proton and bound neutron decays mediated by four-fermion operators for rates of nucleon decays mediated by $k$-fermion operators with $k =6$ and $k=8$. These include a variety of nucleon and dinucleon decays to dilepton and trilepton final states with $\Delta L=-3, \ -2, \ 1$, and $2$. We carry out a low-energy effective field theory analysis of relevant operators for these decays and show that, in this extra-dimensional model, the rates for these decays are strongly suppressed and hence are in accord with experimental limits.
high energy physics phenomenology
Motivated by extended black hole thermodynamics, we generalize the R\'enyi entropy of charged holographic conformal field theories (CFTs) in $d$-dimensions. Specifically, following (1807.09215), we extend the quench description of the R\'enyi entropy of globally charged holographic CFTs by including pressure variations of charged hyperbolically sliced anti de Sitter black holes. We provide an exhaustive analysis of the new type of charged R\'enyi entropy, where we find an interesting interplay between a parameter controlling the pressure of the black hole and its charge. A field theoretic interpretation of this extended charged R\'enyi entropy is given. In particular, in $d=2$, where the bulk geometry becomes the charged Ba\~nados, Teitelboim, Zanelli black hole, we write down the extended charged R\'enyi entropy in terms of the twist operators of the charged field theory. An area law prescription for the extended R\'enyi entropy is formulated. We comment on several avenues for future work, including how global charge conservation relates to black hole super-entropicity.
high energy physics theory
Building upon recent advances in entropy-regularized optimal transport, and upon Fenchel duality between measures and continuous functions, we propose a generalization of the logistic loss that incorporates a metric or cost between classes. Unlike previous attempts to use optimal transport distances for learning, our loss results in unconstrained convex objective functions, supports infinite (or very large) class spaces, and naturally defines a geometric generalization of the softmax operator. The geometric properties of this loss make it suitable for predicting sparse and singular distributions, for instance supported on curves or hyper-surfaces. We study the theoretical properties of our loss and showcase its effectiveness on two applications: ordinal regression and drawing generation.
statistics
We classify toric log del Pezzo surfaces of Picard number one by introducing the notion, cascades. As an application, we show that if such a surface is K\"ahler-Einstein, then it should admit a special cascade, and it satisfies the equality of the orbifold Bogomolov-Miyaoka-Yau inequality, i.e., $K^2 = 3e_{orb}.$
mathematics
We investigate the exact-WKB analysis for quantum mechanics in a periodic potential, with $N $ minima on $S^{1}$. We describe the Stokes graphs of a general potential problem as a network of Airy-type or degenerate Weber-type building blocks, and provide a dictionary between the two. The two formulations are equivalent, but with their own pros and cons. Exact-WKB produces the quantization condition consistent with the known conjectures and mixed anomaly. The quantization condition for the case of $N$-minima on the circle factorizes over the Hilbert sub-spaces labeled by discrete theta angle (or Bloch momenta), and is consistent with 't Hooft anomaly for even $N$ and global inconsistency for odd $N$. By using Delabaere-Dillinger-Pham formula, we prove that the resurgent structure is closed in these Hilbert subspaces, built on discrete theta vacua, and by a transformation, this implies that fixed topological sectors (columns of resurgence triangle) are also closed under resurgence.
quantum physics
A numerical method for simulating three-phase flows with moving contact lines on arbitrarily complex surfaces is developed in the framework of lattice Boltzmann method. In this method, the immiscible three-phase flow is modeled through a multiple-relaxation-time color-gradient model, which not only allows for a full range of interfacial tensions but also produces stable outcomes for a wide range of viscosity ratios. A characteristic line model is introduced to implement the wetting boundary condition, which is not only easy to implement but also able to handle arbitrarily complex boundaries with prescribed contact angles. The developed method is first validated by the simulation of a Janus droplet resting on a flat surface, a perfect Janus droplet deposited on a cylinder, and the capillary intrusion of ternary fluids for various viscosity ratios. It is then used to study a compound droplet subject to a uniform incoming flow passing through a multi-pillar structure, where three different values of surface wettability are considered. The simulated results show that the surface wettability has significant impact on the droplet dynamic behavior and final fluid distribution.
physics
In this survey paper, we present \v{C}ech and sheaf cohomologies -- themes that were presented by Koszul at the University of S\~ao Paulo during his visit in the late 1950s -- and we present expansions for categories of generalized sheaves (i.e., Grothendieck toposes), with examples of applications in other cohomology theories and other areas of mathematics, besides providing motivations and historical notes. We conclude by explaining the difficulties in establishing a cohomology theory for elementary toposes, presenting alternative approaches by considering constructions over quantales, which provide structures similar to sheaves, and indicating research related to logic: constructive (intuitionistic and linear) logic for toposes, sheaves over quantales, and homological algebra.
mathematics
We experimentally demonstrate stimulated four-wave mixing in two linearly uncoupled integrated Si$_3$N$_4$ micro-resonators. In our structure the resonance combs of each resonator can be tuned independently, with the energy transfer from one resonator to the other occurring in the presence of a nonlinear interaction. This method allows flexible and efficient on-chip control of the nonlinear interaction, and is readily applicable to other third-order nonlinear phenomena.
physics
Recently, data-selective adaptive Volterra filters have been proposed; however, up to now, there have been no theoretical analyses of their behavior other than numerical simulations. Therefore, in this paper, we analyze the robustness (in the sense of l2-stability) of the data-selective Volterra normalized least-mean-square (DS-VNLMS) algorithm. First, we study the local robustness of this algorithm at any iteration, then we propose a global bound for the error/discrepancy in the coefficient vector. Also, we demonstrate that the DS-VNLMS algorithm improves the parameter estimation for the majority of the iterations in which an update is implemented. Moreover, we prove that if the noise bound is known, we can set the DS-VNLMS so that it never degrades the estimate. The simulation results corroborate the validity of the executed analysis and demonstrate that the DS-VNLMS algorithm is robust against noise, no matter how its parameters are adopted.
computer science
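To make the object of study above concrete, the sketch below implements a simplified data-selective update on a second-order Volterra expansion: the filter is adapted only when the instantaneous error exceeds a threshold. The threshold test and normalized step are generic set-membership-style choices, not necessarily the exact DS-VNLMS update rule analyzed in the paper.

```python
# Simplified data-selective NLMS on Volterra features: skip the update when
# the new sample is deemed uninformative (|error| <= gamma).
import numpy as np

def volterra_features(x_buf):
    """Constant and linear terms plus all second-order products of the buffer."""
    quad = np.outer(x_buf, x_buf)[np.triu_indices(len(x_buf))]
    return np.concatenate(([1.0], x_buf, quad))

def ds_vnlms(x, d, memory=3, mu=0.5, gamma=0.05, eps=1e-8):
    n_feat = 1 + memory + memory * (memory + 1) // 2
    w = np.zeros(n_feat)
    updates = 0
    for k in range(memory - 1, len(x)):
        phi = volterra_features(x[k - memory + 1:k + 1])
        e = d[k] - w @ phi
        if abs(e) > gamma:                       # data-selective test
            w += mu * e * phi / (phi @ phi + eps)
            updates += 1
    return w, updates

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = 0.5 * x - 0.2 * np.roll(x, 1) + 0.1 * x**2 + 0.01 * rng.standard_normal(2000)
w, updates = ds_vnlms(x, d)
print(updates, "selective updates out of", len(x) - 2)
```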
Excitations of impurity complexes in semiconductors can not only provide a route to fill the terahertz gap in optical technologies, but can also connect local quantum bits to scale up solid-state quantum-computing devices. However, taking into account both the interactions among electrons/holes and the host band structures is challenging. Here we combine first-principles band-structure calculations with quantum-chemistry methodology to evaluate the ground and excited states of a pair of phosphorus donors in silicon within a single framework. We use a broken-symmetry Hartree-Fock approach, followed by a time-dependent Hartree-Fock method to compute the excited states. Our Hamiltonian for each valley includes an anisotropic kinetic energy term, which splits the 2p_0 and 2p_+- transitions of isolated donors by ~4 meV, in good agreement with experiments. Our single-valley calculations show the optical response is a strong function of the optical polarisation, and suggest the use of valley polarisation to control optics and reduce oscillations in exchange interactions. When taking into account all valleys, including valley-orbital interactions, we find a gap opens between the 1s to 2p transition and low-energy charge-transfer states within the 1s manifolds (which become optically allowed because of inter-donor interactions). In contrast to the single-valley case, we find charge-transfer excited states also in the triplet sector, thanks to the valley degrees of freedom. These states have a qualitatively correct energy as compared with previous experiments; additionally, we predict new excitations below 20 meV that have not been analysed previously. A statistical average of nearest-neighbour pairs at different separations suggests that THz radiation could be used to excite pairs spin-selectively. Our approach can readily be extended to other donors and to other semiconducting hosts.
condensed matter
We study the Landau--Streater quantum channel $\Phi: \mathcal{B}(\mathcal{H}_d) \mapsto \mathcal{B}(\mathcal{H}_d)$, whose Kraus operators are proportional to the irreducible unitary representation of $SU(2)$ generators of dimension $d$. We establish $SU(2)$ covariance for all $d$ and $U(3)$ covariance for $d=3$. Using the theory of angular momentum, we explicitly find the spectrum and the minimal output entropy of $\Phi$. Negative eigenvalues in the spectrum of $\Phi$ indicate that the channel cannot be obtained as a result of Hermitian Markovian quantum dynamics. Degradability and antidegradability of the Landau--Streater channel is fully analyzed. We calculate classical and entanglement-assisted capacities of $\Phi$. Quantum capacity of $\Phi$ vanishes if $d=2,3$ and is strictly positive if $d \geqslant 4$. We show that the channel $\Phi \otimes \Phi$ does not annihilate entanglement and preserves entanglement of some states with Schmidt rank $2$ if $d \geqslant 3$.
quantum physics
The symmetric sparse matrix-vector multiplication (SymmSpMV) is an important building block for many numerical linear algebra kernel operations or graph traversal applications. Parallelizing SymmSpMV on today's multicore platforms with up to 100 cores is difficult due to the need to manage conflicting updates on the result vector. Coloring approaches can be used to solve this problem without data duplication, but existing coloring algorithms do not take load balancing and deep memory hierarchies into account, hampering scalability and full-chip performance. In this work, we propose the recursive algebraic coloring engine (RACE), a novel coloring algorithm and open-source library implementation, which eliminates the shortcomings of previous coloring methods in terms of hardware efficiency and parallelization overhead. We describe the level construction, distance-k coloring, and load balancing steps in RACE, use it to parallelize SymmSpMV, and compare its performance on 31 sparse matrices with other state-of-the-art coloring techniques and Intel MKL on two modern multicore processors. RACE outperforms all other approaches substantially and behaves in accordance with the Roofline model. Outliers are discussed and analyzed in detail. While we focus on SymmSpMV in this paper, our algorithm and software is applicable to any sparse matrix operation with data dependencies that can be resolved by distance-k coloring.
computer science
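For background on why coloring is needed at all, the sketch below is a serial symmetric SpMV that stores only the upper triangle: every off-diagonal entry also scatters into y[j], and those scattered writes are the conflicting updates that RACE's distance-k coloring resolves when rows are distributed over threads. This is a plain reference implementation, not the RACE scheme itself.

```python
# Serial SymmSpMV reference: y = A x using only the upper triangle of A in CSR.
import numpy as np
import scipy.sparse as sp

def symm_spmv_upper(A_upper_csr, x):
    """y = A x where only the upper triangle (incl. diagonal) of A is stored."""
    y = np.zeros_like(x)
    indptr, indices, data = A_upper_csr.indptr, A_upper_csr.indices, A_upper_csr.data
    for i in range(A_upper_csr.shape[0]):
        for idx in range(indptr[i], indptr[i + 1]):
            j, a = indices[idx], data[idx]
            y[i] += a * x[j]
            if j != i:
                y[j] += a * x[i]   # conflicting update when rows run in parallel
    return y

A = sp.random(200, 200, density=0.05, random_state=0)
A = (A + A.T) * 0.5                       # symmetrize
x = np.random.default_rng(0).standard_normal(200)
y_ref = A.dot(x)                          # full-matrix reference
y = symm_spmv_upper(sp.triu(A, format="csr"), x)
print(np.allclose(y, y_ref))              # True
```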
For real-world deployment of automatic speech recognition (ASR), the system is desired to be capable of fast inference while relieving the requirement of computational resources. The recently proposed end-to-end ASR system based on mask-predict with connectionist temporal classification (CTC), Mask-CTC, fulfills this demand by generating tokens in a non-autoregressive fashion. While Mask-CTC achieves remarkably fast inference speed, its recognition performance falls behind that of conventional autoregressive (AR) systems. To boost the performance of Mask-CTC, we first propose to enhance the encoder network architecture by employing a recently proposed architecture called Conformer. Next, we propose new training and decoding methods by introducing auxiliary objective to predict the length of a partial target sequence, which allows the model to delete or insert tokens during inference. Experimental results on different ASR tasks show that the proposed approaches improve Mask-CTC significantly, outperforming a standard CTC model (15.5% $\rightarrow$ 9.1% WER on WSJ). Moreover, Mask-CTC now achieves competitive results to AR models with no degradation of inference speed ($<$ 0.1 RTF using CPU). We also show a potential application of Mask-CTC to end-to-end speech translation.
electrical engineering and systems science
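As a rough sketch (ours) of the mask-predict idea underlying Mask-CTC in the entry above: keep confident tokens from the greedy CTC output, mask the rest, and let a conditional masked language model refill them. The length-prediction extension that also allows insertions and deletions is omitted, and `mlm_fill` is a hypothetical model callable.

```python
MASK = "<mask>"

def mask_ctc_decode(tokens, confidences, mlm_fill, threshold=0.99, iterations=5):
    """Sketch of mask-predict inference over a greedy CTC hypothesis
    (repeats collapsed, blanks removed)."""
    # 1. keep confident CTC tokens, mask the rest
    hyp = [t if c >= threshold else MASK for t, c in zip(tokens, confidences)]
    # 2. iteratively re-predict the masked positions conditioned on the rest
    for _ in range(iterations):
        if MASK not in hyp:
            break
        hyp = mlm_fill(hyp)
    return hyp

# demonstration with a dummy filler that writes "?" into every masked slot
demo = mask_ctc_decode(["h", "e", "l", "o"], [1.0, 0.3, 1.0, 0.2],
                       mlm_fill=lambda h: [t if t != MASK else "?" for t in h])
print(demo)   # ['h', '?', 'l', '?']
```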
We propose a new, complete method, based on the Wigner distributions of photons, for calculating differential distributions of dileptons created via photon-photon fusion in semicentral ($b<2R_A$) $AA$ collisions. The formalism is used to calculate distributions of invariant mass, dilepton transverse momentum and acoplanarity for different regions of centrality. The results of the calculation are compared with recent STAR, ALICE and ATLAS experimental data. Very good agreement with the data is achieved without free parameters and without including additional mechanisms such as a possible rescattering of leptons in the quark-gluon plasma.
high energy physics phenomenology
In this work, we study a recently proposed operational measure of nonlocality by Fonseca and Parisio~[Phys. Rev. A 92, 030101(R) (2015)] which describes the probability of violation of local realism under randomly sampled observables, and the strength of such a violation as described by resistance to white noise admixture. While our knowledge concerning these quantities is well established from a theoretical point of view, the experimental counterpart is a considerably harder task and very little has been done in this field. This difficulty is caused by the lack of complete knowledge about the facets of the local polytope required for the analysis. In this paper, we propose a simple procedure towards experimentally determining both quantities for $N$-qubit pure states, based on an incomplete set of tight Bell inequalities. We show that the imprecision arising from this approach is of a magnitude similar to that of the potential measurement errors. We also show that even with both a randomly chosen $N$-qubit pure state and randomly chosen measurement bases, a violation of local realism can be detected experimentally almost $100\%$ of the time. Among other applications, our work provides a feasible alternative for the witnessing of genuine multipartite entanglement without aligned reference frames.
quantum physics
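As a toy illustration (ours) of estimating a probability of violation numerically for the entry above, restricted to two qubits and the CHSH facets only, so the numbers it produces are not the ones discussed there:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def random_state(dim):
    """Haar-like random pure state."""
    v = np.random.randn(dim) + 1j * np.random.randn(dim)
    return v / np.linalg.norm(v)

def random_observable():
    """Dichotomic qubit observable n.sigma with n uniform on the sphere."""
    n = np.random.randn(3)
    n /= np.linalg.norm(n)
    return n[0] * sx + n[1] * sy + n[2] * sz

def max_chsh(psi):
    """Largest |CHSH| value over the four sign placements, for one random
    choice of two measurement settings per party."""
    A = [random_observable() for _ in range(2)]
    B = [random_observable() for _ in range(2)]
    E = np.array([[np.real(psi.conj() @ np.kron(a, b) @ psi) for b in B] for a in A])
    signs = [(1, 1, 1, -1), (1, 1, -1, 1), (1, -1, 1, 1), (-1, 1, 1, 1)]
    return max(abs(s0 * E[0, 0] + s1 * E[0, 1] + s2 * E[1, 0] + s3 * E[1, 1])
               for s0, s1, s2, s3 in signs)

psi = random_state(4)                         # one random two-qubit pure state
trials = 20000
violations = sum(max_chsh(psi) > 2 for _ in range(trials))
print("fraction of random settings violating CHSH:", violations / trials)
```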
We consider inference from non-random samples in data-rich settings where high-dimensional auxiliary information is available both in the sample and in the target population, with survey inference being a special case. We propose a regularized prediction approach that predicts the outcomes in the population using a large number of auxiliary variables, such that the ignorability assumption is reasonable while the Bayesian framework makes quantification of uncertainty straightforward. Besides the auxiliary variables, inspired by Little & An (2004), we further extend the approach by estimating the propensity score for a unit to be included in the sample and including it as a predictor in the machine learning models. We show through simulation studies that the regularized predictions using soft Bayesian additive regression trees yield valid inference for the population means and coverage rates close to the nominal levels. We demonstrate the application of the proposed methods using two different real data applications, one in a survey and one in an epidemiology study.
statistics
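A minimal sketch (ours) of the prediction pipeline in the entry above, with a generic gradient-boosted regressor standing in for the soft Bayesian additive regression trees and the Bayesian uncertainty quantification omitted; the toy data-generating mechanism and variable names are our own.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

# Toy population: auxiliary variables X everywhere, outcome y observed only in a
# non-random sample whose selection depends on X.
rng = np.random.default_rng(0)
N = 20000
X = rng.normal(size=(N, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=N)
p_select = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 2])))      # selection mechanism
in_sample = rng.random(N) < p_select

# step 1: estimate the propensity of being sampled from the auxiliary variables
ps = LogisticRegression().fit(X, in_sample).predict_proba(X)[:, 1]

# step 2: predict outcomes from the auxiliary variables plus the propensity score
features = np.column_stack([X, ps])
reg = GradientBoostingRegressor().fit(features[in_sample], y[in_sample])
y_hat = reg.predict(features)

# step 3: population mean = observed outcomes where sampled, predictions elsewhere
pop_mean = np.where(in_sample, y, y_hat).mean()
print("estimated population mean:", pop_mean, " truth:", y.mean())
```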
The production of very high energy muons inside an extensive air shower is observable at $\nu$ telescopes and sensitive to the composition of the primary cosmic ray. Here we discuss five different sources of these muons: pion and kaon decays; charmed hadron decays; rare decays of unflavored mesons; photon conversion into a muon pair; and photon conversion into a $J/\psi$ vector meson decaying into muons. We solve the cascade equations for a $10^{10.5}$ GeV proton primary and find that unflavored mesons and gamma conversions are the two main sources of $E\ge 10^{8.5}$ GeV muons, while charm decays dominate at $10^{5.5}\,{\rm GeV}< E< 10^{8.5}\,{\rm GeV}$. In inclined events one of these muons may deposit a large fraction of its energy near the surface, implying fluctuations in the longitudinal profile of the shower and in the muon to electron count at ground level. In particular, we show that 1 out of 6 proton showers of $10^{10.5}$ GeV include an $E>10^6$ GeV deposition within 500 g/cm$^2$, while only in 1 out of 330 showers it is above $10^7$ GeV. We also show that the production of high energy muons is very different in proton, iron or photon showers (e.g., conversions $\gamma\to \mu^+ \mu^-$ are the main source of $E\ge 10^4$ GeV muons in photon showers). Finally, we use Monte Carlo simulations to discuss the validity of our results.
high energy physics phenomenology
We present a new method for the spectral characterization of pulsed twin beam sources in the high gain regime, using cascaded stimulated emission. We show an implementation of this method for a ppKTP spontaneous parametric down-conversion source generating up to 60 photon pairs per pulse, and demonstrate excellent agreement between our experiments and our theory. This work enables the complete and accurate experimental characterization of high gain effects in parametric down conversion, including self and cross-phase modulation. Moreover, our theory allows the exploration of designs with the goal of improving the specifications of twin beam sources for application in quantum information, computation, sampling, and metrology.
quantum physics
We introduce a phase space with spinorial momenta, corresponding to fermionic derivatives, for a 2d supersymmetric (1, 1) sigma model. We show that there is a generalisation of the covariant De Donder-Weyl Hamiltonian formulation on this phase space with canonical equations equivalent to the Lagrangian formulation, and find the corresponding multisymplectic form and Hamiltonian multivectors. The covariance of the formulation makes it possible to see how additional non-manifest supersymmetries arise in analogy to those of the Lagrangian formulation. We then observe that an intermediate phase space Lagrangian defined on the sum of the tangent and cotangent spaces is a first order Lagrangian for the sigma model and derive additional supersymmetries for this Lagrangian.
high energy physics theory
We introduce an efficient decoder of the color code in $d\geq 2$ dimensions, the Restriction Decoder, which uses any $d$-dimensional toric code decoder combined with a local lifting procedure to find a recovery operation. We prove that the Restriction Decoder successfully corrects errors in the color code if and only if the corresponding toric code decoding succeeds. We also numerically estimate the Restriction Decoder threshold for the color code in two and three dimensions against bit-flip and phase-flip noise with perfect syndrome extraction. We report that the 2D color code threshold $p_{\textrm{2D}} \approx 10.2\%$ on the square-octagon lattice is on a par with the toric code threshold on the square lattice.
quantum physics
Bayesian inference without access to the likelihood, or likelihood-free inference, has become a key research topic in simulation-based modeling, as it yields more realistic generation results. Recent likelihood-free inference updates an approximate posterior sequentially using a dataset of cumulative simulation input-output pairs gathered over inference rounds. The dataset is therefore gathered through iterative simulations with inputs sampled from a proposal distribution by MCMC, which becomes the key to inference quality in this sequential framework. This paper introduces a new proposal model, named the Implicit Surrogate Proposal (ISP), to generate a cumulative dataset with further sample efficiency. ISP constructs the cumulative dataset in the most diverse way by drawing i.i.d. samples in a feed-forward fashion, so the posterior inference does not suffer from the disadvantages of MCMC caused by its non-i.i.d. nature, such as auto-correlation and slow mixing. We analyze the convergence property of ISP in both theoretical and empirical aspects to guarantee that ISP provides an asymptotically exact sampler. We demonstrate that ISP outperforms the baseline inference algorithms on simulations with multi-modal posteriors.
statistics
In this work, we present a computational analysis of the planar wave propagation behavior of a one-dimensional periodic multi-stable cellular material. Wave propagation in these materials is interesting because they combine the ability of periodic cellular materials to exhibit stop and pass bands with the ability to dissipate energy through cell-level elastic instabilities. Here, we use Bloch periodic boundary conditions to compute the dispersion curves and introduce a new approach for computing wide band directionality plots. Also, we deconstruct the wave propagation behavior of this material to identify the contributions from its various structural elements by progressively building the unit cell, structural element by element, from a simple, homogeneous, isotropic primitive. Direct integration time domain analyses of a representative volume element at a few salient frequencies in the stop and pass bands are used to confirm the existence of partial band gaps in the response of the cellular material. Insights gained from the above analyses are then used to explore modifications of the unit cell that allow the user to tune the band gaps in the response of the material. We show that this material behaves like a locally resonant material that exhibits low frequency band gaps for small amplitude planar waves. Moreover, modulating the geometry or material of the central bar in the unit cell provides a path to adjust the position of the band gaps in the material response.
physics
In contextual continuum-armed bandits, the contexts $x$ and the arms $y$ are both continuous and drawn from high-dimensional spaces. The payoff function to be learned, $f(x,y)$, does not have a particular parametric form. The literature has shown that for Lipschitz-continuous functions, the optimal regret is $\tilde{O}(T^{\frac{d_x+d_y+1}{d_x+d_y+2}})$, where $d_x$ and $d_y$ are the dimensions of contexts and arms, and thus suffers from the curse of dimensionality. We develop an algorithm that achieves regret $\tilde{O}(T^{\frac{d_x+1}{d_x+2}})$ when $f$ is globally concave in $y$. Global concavity is a common assumption in many applications. The algorithm is based on stochastic approximation and estimates the gradient information in an online fashion. Our results yield the valuable insight that the curse of dimensionality in the arms can be overcome given some mild structure of the payoff function.
statistics
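To illustrate the core primitive of the entry above, here is a toy sketch (ours) of zeroth-order gradient ascent over the arm, the kind of quantity a stochastic-approximation scheme estimates online; the context dependence, the exact step-size schedules and the regret analysis of the actual algorithm are omitted.

```python
import numpy as np

def bandit_play(f_noisy, d_y, T, step=0.5, probe=0.1):
    """Two-point (zeroth-order) gradient ascent over the arm y for a payoff that
    is concave in y.  Step-size and probe schedules are illustrative choices."""
    y = np.zeros(d_y)
    for t in range(1, T + 1):
        u = np.random.randn(d_y)
        u /= np.linalg.norm(u)
        delta, eta = probe / t ** 0.25, step / t ** 0.5        # decaying schedules
        g = (f_noisy(y + delta * u) - f_noisy(y - delta * u)) / (2 * delta) * u
        y = np.clip(y + eta * g, -1.0, 1.0)                    # project onto the arm set
    return y

# toy concave payoff, maximized at y* = (0.3, -0.2), observed with noise
f = lambda y: -np.sum((y - np.array([0.3, -0.2])) ** 2) + 0.05 * np.random.randn()
print("final arm:", bandit_play(f, d_y=2, T=5000))
```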
We study the possibility of realising cosmic inflation, dark matter (DM), the baryon asymmetry of the universe (BAU) and light neutrino masses in a non-supersymmetric minimal gauged $B-L$ extension of the standard model with three right handed neutrinos. The singlet scalar field responsible for spontaneous breaking of the $B-L$ gauge symmetry also plays the role of the inflaton by virtue of its non-minimal coupling to gravity. While the lightest right handed neutrino is the DM candidate, being stabilised by an additional $Z_2$ symmetry, we show by performing a detailed renormalisation group evolution (RGE) improved study of inflationary dynamics that thermal DM is generally overproduced due to insufficient annihilations through gauge and scalar portals. This happens due to strict upper limits obtained on gauge and other dimensionless couplings responsible for DM annihilation while assuming the non-minimal coupling to gravity to be at most of order unity. The non-thermal DM scenario is viable, with or without the $Z_2$ symmetry, although in such a case the $B-L$ gauge sector remains decoupled from the inflationary dynamics due to tiny couplings. We also show that the reheat temperature predicted by the model prefers non-thermal leptogenesis with hierarchical right handed neutrinos while being consistent with other requirements.
high energy physics phenomenology
In high mountains, the effects of climate change are manifesting most rapidly. This is especially critical for the high-altitude carbon cycle, for which new feedbacks could be triggered. However, mountain carbon dynamics is only partially known. In particular, models of the processes driving carbon fluxes in high-altitude grasslands and Alpine tundra need to be improved. Here, we propose a comparison of three empirical approaches using systematic statistical analysis, to identify the environmental variables controlling $CO_2$ fluxes. The methods were applied to a complete dataset of simultaneous in situ measurements of the net $CO_2$ exchange, ecosystem respiration and basic environmental variables at three sampling sites in the same catchment. Large year-to-year variations in the gross primary production (GPP) and ecosystem respiration (ER) dependences on solar irradiance and temperature were observed. We thus implemented a multi-regression model in which additional variables were introduced as perturbations of the standard exponential and rectangular hyperbolic functions for ER and GPP, respectively. A comparison of this model with other common modelling strategies showed the benefits of this approach, resulting in large explained variances (83% to 94%). The optimum ensemble of variables explaining the inter- and intra-annual flux variability included solar irradiance, soil moisture and day of the year for GPP, and air temperature, soil moisture, air pressure and day of the year for ER, in agreement with other studies. The modelling approach discussed here provides a basis for selecting drivers of carbon fluxes and understanding their role in high-altitude Alpine ecosystems, also allowing for future short-range assessments of local trends.
physics
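For concreteness, a minimal sketch (ours) of fitting the standard base functions mentioned in the entry above, an exponential temperature response for ER and a rectangular hyperbolic light response for GPP, to synthetic data; the paper's full model additionally perturbs these functions with soil moisture, air pressure and day of the year, and the parameter values and units below are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def er_model(T, r0, b):
    """Exponential temperature response for ecosystem respiration."""
    return r0 * np.exp(b * T)

def gpp_model(I, alpha, p_max):
    """Rectangular hyperbolic light response for gross primary production."""
    return alpha * I * p_max / (alpha * I + p_max)

# synthetic data standing in for the measured fluxes
rng = np.random.default_rng(1)
T = rng.uniform(0, 20, 200)                      # air temperature (deg C)
I = rng.uniform(0, 1500, 200)                    # solar irradiance (W m^-2)
er_obs = er_model(T, 1.2, 0.08) * np.exp(0.1 * rng.normal(size=200))
gpp_obs = gpp_model(I, 0.02, 12.0) + rng.normal(scale=0.5, size=200)

(r0, b), _ = curve_fit(er_model, T, er_obs, p0=[1.0, 0.05])
(alpha, p_max), _ = curve_fit(gpp_model, I, gpp_obs, p0=[0.01, 10.0])
print(f"ER: r0={r0:.2f}, b={b:.3f}   GPP: alpha={alpha:.3f}, Pmax={p_max:.1f}")
```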
We introduce two scalar leptoquarks, the SU$(2)_L$ isosinglet denoted $\phi\sim(\mathbf{3}, \mathbf{1}, -1/3)$ and the isotriplet $\varphi\sim(\mathbf{3}, \mathbf{3}, -1/3)$, to explain observed deviations from the standard model in semi-leptonic $B$-meson decays. We explore the regions of parameter space in which this model accommodates the persistent tensions in the decay observables $R_{D^{(*)}}$, $R_{K^{(*)}}$, and angular observables in $b\to s \mu\mu$ transitions. Additionally, we exploit the role of these exotics in existing models for one-loop neutrino mass generation derived from $\Delta L=2$ effective operators. Introducing the vector-like quark $\chi \sim (\mathbf{3}, \mathbf{2}, -5/6)$ necessary for lepton-number violation, we consider the contribution of both leptoquarks to the generation of radiative neutrino mass. We find that constraints permit simultaneously accommodating the flavour anomalies while also explaining the relative smallness of neutrino mass without the need for cancellation between leptoquark contributions. A characteristic prediction of our model is a rate of muon--electron conversion in nuclei fixed by the anomalies in $b \to s \mu \mu$ and neutrino mass; the COMET experiment will thus test and potentially falsify our scenario. The model also predicts signatures that will be tested at the LHC and Belle II.
high energy physics phenomenology
Many snakes live in deserts, forests, and river valleys and traverse challenging 3-D terrain like rocks, felled trees, and rubble, with obstacles as large as themselves and variable surface properties. By contrast, apart from branch cantilevering, burrowing, swimming, and gliding, laboratory studies of snake locomotion have focused on locomotion on simple flat surfaces. Here, to begin to understand snake locomotion in complex 3-D terrain, we study how the variable kingsnake, a terrestrial generalist, traversed a large step of variable surface friction and step height (up to 30% snout-vent length). The snake traversed by partitioning its body into three sections with distinct functions. Body sections below and above the step oscillated laterally on horizontal surfaces for propulsion, while the body section in between cantilevered in a vertical plane to bridge the large height increase. As the animal progressed, these three sections traveled down its body, conforming the overall body shape to the step. In addition, the snake adjusted the partitioned gait in response to an increase in step height and a decrease in surface friction, at the cost of reduced speed. As surface friction decreased, body movement below and above the step changed from a continuous lateral undulation with little slip to an intermittent oscillatory movement with much slip, and initial head lift-off became closer to the step. Given these adjustments, body partitioning allowed the snake to be always stable, even when initially cantilevering but before reaching the surface above. Such a partitioned gait may be generally useful for diverse, complex 3-D terrain.
physics
We describe two natural scenarios in which both dark matter WIMPs (weakly interacting massive particles) and a variety of supersymmetric partners should be discovered in the foreseeable future. In the first scenario, the WIMPs are neutralinos, but they are only one component of the dark matter, which is dominantly composed of other relic particles such as axions. (This is the multicomponent model of Baer, Barger, Sengupta, and Tata.) In the second scenario, the WIMPs result from an extended Higgs sector and may be the only dark matter component. In either scenario, both the dark matter WIMP and a plethora of other neutral and charged particles await discovery at many experimental facilities. The new particles in the second scenario have far weaker cross-sections for direct and indirect detection via their gauge interactions, which are either momentum-dependent or second-order. However, as we point out here, they should have much stronger interactions via the Higgs. We estimate that their interactions with fermions will then be comparable to (although not equal to) those of neutralinos with a corresponding Higgs interaction. It follows that these newly proposed dark matter particles should be within reach of emerging and proposed facilities for direct, indirect, and collider-based detection.
high energy physics phenomenology
$SU(2)_L$-invariance links charged dilepton and dineutrino couplings. This relation allows us to perform tests of lepton universality (LU) and charged lepton flavor conservation (cLFC) with flavor-summed dineutrino observables, assuming only standard model (SM)-like light neutrinos. We obtain model-independent upper limits on $|\Delta c|=|\Delta u|=1$ branching ratios for $D \to P \,\nu \bar \nu$, $D \to P P' \,\nu \bar \nu$, $P,P'=\pi, K$, the baryonic modes $\Lambda_c^+ \to p \,\nu \bar \nu$ and $\Xi_c^+ \to \Sigma^+ \, \nu \bar \nu$, and inclusive decays; the largest of these do not exceed a few $\times 10^{-5}$, and $10^{-5}$ if cLFC holds, and $10^{-6}$ if LU is intact.
high energy physics phenomenology
We study a method to simulate quantum many-body dynamics of spin ensembles using measurement-based feedback. By performing a weak collective measurement on a large ensemble of two-level quantum systems and applying global rotations conditioned on the measurement outcome, one can simulate the dynamics of a mean-field quantum kicked top, a standard paradigm of quantum chaos. We analytically show that there exists a regime in which individual quantum trajectories adequately recover the classical limit, and show the transition from noisy quantum dynamics to full deterministic chaos described by classical Lyapunov exponents. We also analyze the effects of decoherence, and show that the proposed scheme represents a robust method to explore the emergence of chaos from complex quantum dynamics in a realistic experimental platform based on an atom-light interface.
quantum physics
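For the kicked-top entry above, here is a minimal sketch (ours) of the classical mean-field map that such a feedback scheme targets; conventions for the kick strength and rotation angle vary across the literature, so the particular form below is one common choice rather than necessarily the one used in the paper.

```python
import numpy as np

def classical_kicked_top(state, k, p=np.pi / 2):
    """One period of a classical kicked top map on the unit sphere:
    linear rotation by angle p about y, then torsion about z with strength k."""
    X, Y, Z = state
    # rotation about the y axis by angle p
    X, Z = X * np.cos(p) + Z * np.sin(p), -X * np.sin(p) + Z * np.cos(p)
    # nonlinear torsion about the z axis by angle k * Z
    phi = k * Z
    X, Y = X * np.cos(phi) - Y * np.sin(phi), X * np.sin(phi) + Y * np.cos(phi)
    return np.array([X, Y, Z])

# iterate one trajectory; k around 3 gives a mixed phase space in this convention
state = np.array([np.sin(2.25) * np.cos(1.1), np.sin(2.25) * np.sin(1.1), np.cos(2.25)])
for _ in range(1000):
    state = classical_kicked_top(state, k=3.0)
print("point after 1000 kicks:", state, " |r| =", np.linalg.norm(state))
```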
The balance between stretching and bending deformations characterizes shape transitions of thin elastic sheets. While stretching dominates the mechanical response in tension, bending dominates in compression after an abrupt buckling transition. Recently, experimental results in suspended living epithelial monolayers have shown that, due to the asymmetry in surface stresses generated by molecular motors across the thickness $e$ of the epithelium, the free edges of such tissues spontaneously curl out-of-plane, stretching the sheet in-plane as a result. This suggests that a competition between bending and stretching sets the morphology of the tissue margin. In this study, we use the framework of non-Euclidean plates to incorporate active pre-strain and spontaneous curvature into the theory of thin elastic shells. We show that, when the spontaneous curvature of the sheet scales like $1/e$, stretching and bending energies have the same scaling in the limit of a vanishingly small thickness and therefore both compete, in a way that is continuously altered by an external tension, to define the three-dimensional shape of the tissue.
physics
Coherent states are normally used to describe the state of a laser field in experiments that generate and detect squeezed states of light. Nevertheless, since the laser field absolute phase is unknown, its quantum state can be described by a statistical mixture of coherent states with random phases, which is equivalent to a statistical mixture of Fock states. Here we describe single-mode squeezed vacuum experiments using this mixed quantum state for the laser field. Representing the laser state in the Fock basis, we predict the usual experimental results without using the squeezing concept in the analysis and concluding that no squeezed state is generated in the experiments. We provide a general physical explanation for the noise reduction in the experiments in terms of a better definition of the relative phase between the signal and local oscillator fields. This explanation is valid in any description of the laser field (in terms of coherent or Fock states), thus providing a deeper understanding of the phenomenon.
quantum physics
The use of a silicon-pixel camera with very good time resolution ($\sim$nanosecond) for detecting multiple, bunched optical photons is explored. We present characteristics of the camera and describe experiments proving its counting capabilities. We use a spontaneous parametric down-conversion source to generate correlated photon pairs, and exploit the Hong-Ou-Mandel interference effect in a fiber-coupled beam splitter to bunch the pair onto the same output fiber. It is shown that the time and spatial resolution of the camera enables independent detection of two photons emerging simultaneously from a single spatial mode.
quantum physics
We prove some results about existence of connecting and closed geodesics in a manifold endowed with a Kropina metric. These have applications to both null geodesics of spacetimes endowed with a null Killing vector field and Zermelo's navigation problem with critical wind.
mathematics
While a growing body of research indicates that relativistic magnetic reconnection is a prodigious source of particle acceleration in high-energy astrophysical systems, the dominant acceleration mechanism remains controversial. Using a combination of fully kinetic simulations and theoretical analysis, we demonstrate that Fermi-type acceleration within the large-scale motional electric fields dominates over direct acceleration from non-ideal electric fields within small-scale diffusion regions. This result has profound implications for modeling particle acceleration in large-scale astrophysical problems, since it opens up the possibility of modeling the energetic spectra without resolving microscopic diffusion regions.
astrophysics
In this paper we propose a new class of iterative regularization methods for solving ill-posed linear operator equations. The prototype of these iterative regularization methods takes the form of a second-order evolution equation with a linear vanishing damping term, which can be viewed not only as an extension of asymptotical regularization, but also as a continuous analog of Nesterov's acceleration scheme. New iterative regularization methods are derived from this continuous model in combination with damped symplectic numerical schemes. The regularization property, as well as convergence rates and acceleration effects under H\"older-type source conditions, are proven for both the continuous and the discretized methods. The second part of this paper is concerned with the application of the newly developed accelerated iterative regularization methods to diffusion-based bioluminescence tomography, which is modeled as an inverse source problem in elliptic partial differential equations with both Dirichlet and Neumann boundary data. A relaxed mathematical formulation is proposed so that the discrepancy principle can be applied to the iterative scheme without the use of Sobolev embedding constants. Several numerical examples, as well as a comparison with state-of-the-art methods, are given to show the accuracy and the acceleration effect of the new methods.
mathematics
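A rough sketch (ours) of the idea in the entry above for a discrete ill-posed problem $Ax = y^\delta$: a damped second-order flow discretized with a simple semi-implicit step and stopped by the discrepancy principle. The damping, step size and toy problem are illustrative choices, not the schemes analyzed in the paper.

```python
import numpy as np

def accelerated_regularization(A, y_delta, delta, alpha=3.0, h=0.5, tau=1.5,
                               max_iter=200000):
    """Damped second-order flow  x'' + (alpha/t) x' + A^T (A x - y) = 0,
    discretized semi-implicitly and stopped once ||A x - y|| <= tau * delta."""
    x = np.zeros(A.shape[1])
    v = np.zeros_like(x)
    for k in range(1, max_iter + 1):
        t = k * h
        grad = A.T @ (A @ x - y_delta)
        damp = alpha * h / (2 * t)
        v = ((1 - damp) * v - h * grad) / (1 + damp)
        x = x + h * v
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            break
    return x, k

# toy ill-posed problem with rapidly decaying singular values
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 80)) @ np.diag(1.0 / np.arange(1, 81) ** 2)
A /= np.linalg.norm(A, 2)                       # normalize so h = 0.5 is stable
x_true = np.ones(80)
noise = rng.normal(size=80)
delta = 1e-2
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)
x_rec, iters = accelerated_regularization(A, y_delta, delta)
print("stopped after", iters, "iterations, residual:",
      np.linalg.norm(A @ x_rec - y_delta))
```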
Breast cancer screening is one of the most common radiological tasks, with over 39 million exams performed each year. While breast cancer screening has been one of the most studied medical imaging applications of artificial intelligence, the development and evaluation of the algorithms are hindered by the lack of well-annotated, large-scale publicly available datasets. This is particularly an issue for digital breast tomosynthesis (DBT), which is a relatively new breast cancer screening modality. We have curated and made publicly available a large-scale dataset of digital breast tomosynthesis images. It contains 22,032 reconstructed DBT volumes belonging to 5,610 studies from 5,060 patients. These studies fall into four groups: (1) 5,129 normal studies, (2) 280 studies where additional imaging was needed but no biopsy was performed, (3) 112 benign biopsied studies, and (4) 89 studies with cancer. Our dataset included masses and architectural distortions which were annotated by two experienced radiologists. Additionally, we developed a single-phase deep learning detection model and tested it using our dataset to serve as a baseline for future research. Our model reached a sensitivity of 65% at 2 false positives per breast. Our large, diverse, and highly curated dataset will facilitate the development and evaluation of AI algorithms for breast cancer screening by providing data for training as well as a common set of cases for model validation. The performance of the model developed in our study shows that the task remains challenging and will serve as a baseline for future model development.
electrical engineering and systems science
This is the first of two papers in which we investigate the properties of the displacement functions of automorphisms of free groups (more generally, free products) on Culler-Vogtmann Outer space and its simplicial bordification - the free splitting complex - with respect to the Lipschitz metric. The theory for irreducible automorphisms being well-developed, we concentrate on the reducible case. Since we deal with the bordification, we develop all the needed tools in the more general setting of deformation spaces and their associated free splitting complexes. In the present paper we study the local properties of the displacement function. In particular, we study its convexity properties and the behaviour at bordification points, by geometrically characterising its continuity points. We prove that the global-simplex-displacement spectrum of $Aut(F_n)$ is a well-ordered subset of $\mathbb R$, which is helpful for algorithmic purposes. We introduce a weaker notion of train tracks, which we call {\em partial train tracks} (a notion that coincides with the usual one for irreducible automorphisms), and we prove that, for any automorphism, points of minimal displacement - minpoints - coincide with the marked metric graphs that support partial train tracks. We show that any automorphism, reducible or not, has a partial train track (hence a minpoint) either in the outer space or its bordification. We show that, given an automorphism, any of its invariant free factors is seen in a partial train track map. In a subsequent paper we will prove that level sets of the displacement functions are connected, and we will apply that result to solve certain decision problems.
mathematics
Recognizing wild faces is extremely hard as they appear with all kinds of variations. Traditional methods either train with specifically annotated variation data from target domains, or introduce unlabeled target variation data to adapt from the training data. Instead, we propose a universal representation learning framework that can deal with larger variations unseen in the given training data without leveraging target domain knowledge. We first synthesize training data alongside some semantically meaningful variations, such as low resolution, occlusion and head pose. However, directly feeding the augmented data into training does not converge well, as the newly introduced samples are mostly hard examples. We propose to split the feature embedding into multiple sub-embeddings, and to associate different confidence values with each sub-embedding to smooth the training procedure. The sub-embeddings are further decorrelated by regularizing a variation classification loss and a variation adversarial loss on different partitions of them. Experiments show that our method achieves top performance on general face recognition datasets such as LFW and MegaFace, while performing significantly better on extreme benchmarks such as TinyFace and IJB-S.
computer science
Are neutrinos with definite masses Majorana or Dirac particles? This is one of the most fundamental problems of modern neutrino physics. The solution of this problem could be crucial for understanding the origin of small neutrino masses. We review here the basic arguments in favor of the Majorana nature of massive neutrinos. The phenomenological theory of $0\nu\beta\beta$-decay is briefly discussed, and recent experimental data and the sensitivity of future experiments are presented.
high energy physics phenomenology
Coherent-one-way (COW) quantum key distribution (QKD) held the promise of distributing secret keys over long distances with a simple experimental setup. Indeed, this scheme is currently used in commercial applications. Surprisingly, however, it has been recently shown that its secret key rate scales at most quadratically with the system's transmittance and, thus, it is not appropriate for long distance QKD transmission. Such a pessimistic result was derived by employing a so-called zero-error attack, in which the eavesdropper does not introduce any error, but still the legitimate users of the system cannot distill a secure key. Here, we present a zero-error attack against COW-QKD that is essentially optimal, in the sense that no other attack can restrict further its maximum achievable distance in the absence of errors. This translates into an upper bound on its secret key rate that is more than an order of magnitude lower than previously known upper bounds.
quantum physics
Software for mixed-integer linear programming can return incorrect results for a number of reasons, one being the use of inexact floating-point arithmetic. Even solvers that employ exact arithmetic may suffer from programming or algorithmic errors, motivating the desire for a way to produce independently verifiable certificates of claimed results. Due to the complex nature of state-of-the-art MILP solution algorithms, the ideal form of such a certificate is not entirely clear. This paper proposes such a certificate format, illustrating its capabilities and structure through examples. The certificate format is designed with simplicity in mind and is composed of a list of statements that can be sequentially verified using a limited number of simple yet powerful inference rules. We present a supplementary verification tool for compressing and checking these certificates independently of how they were created. We report computational results on a selection of mixed-integer linear programming instances from the literature. To this end, we have extended the exact rational version of the MIP solver SCIP to produce such certificates.
mathematics
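To convey the flavor of sequentially checkable statements as described in the entry above (this is not the actual certificate format proposed in the paper), here is a toy verifier (ours) for one kind of inference, deriving an objective bound by a nonnegative aggregation of constraints, carried out in exact rational arithmetic.

```python
from fractions import Fraction

def check_lower_bound(A, b, c, multipliers, claimed_bound):
    """Check one 'derived bound' statement: if lambda >= 0 and lambda^T A <= c
    componentwise, then every x >= 0 with A x >= b satisfies
    c^T x >= lambda^T b, so lambda^T b is a valid lower bound for  min c^T x.
    This mimics, in a much reduced form, the kind of simple inference rule a
    certificate file records for independent verification."""
    lam = [Fraction(v) for v in multipliers]
    if any(l < 0 for l in lam):
        return False
    m, n = len(A), len(A[0])
    for j in range(n):                                   # verify lambda^T A <= c
        if sum(lam[i] * Fraction(A[i][j]) for i in range(m)) > Fraction(c[j]):
            return False
    derived = sum(lam[i] * Fraction(b[i]) for i in range(m))
    return derived >= Fraction(claimed_bound)

# min x0 + x1  subject to  x0 + 2*x1 >= 3,  2*x0 + x1 >= 3,  x >= 0
A, b, c = [[1, 2], [2, 1]], [3, 3], [1, 1]
print(check_lower_bound(A, b, c, multipliers=["1/3", "1/3"], claimed_bound=2))  # True
```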
In this paper, we propose a semi-Lagrangian discontinuous Galerkin method coupled with Runge-Kutta exponential integrators (SLDG-RKEI) for nonlinear Vlasov dynamics. The commutator-free Runge-Kutta (RK) exponential integrators (EI) were proposed by Celledoni et al. (FGCS, 2003). In the nonlinear transport setting, the RKEI can be used to decompose the evolution of the nonlinear transport into a composition of a sequence of linearized dynamics. The resulting linearized transport equations can be solved by the semi-Lagrangian (SL) discontinuous Galerkin (DG) method proposed in Cai et al. (JSC, 2017). The proposed method can achieve high order spatial accuracy via the SLDG framework, and high order temporal accuracy via the RKEI. Due to its SL nature, the proposed SLDG-RKEI method is not subject to the CFL condition and thus has the potential to use larger time-stepping sizes than the Eulerian approach. Inheriting advantages from the SLDG method, the proposed SLDG-RKEI schemes are mass conservative, positivity-preserving, have no dimensional splitting error, perform well in resolving complex solution structures, and can be evolved with adaptive time-stepping sizes. We show the performance of the SLDG-RKEI algorithm on classical test problems for the nonlinear Vlasov-Poisson system, as well as the guiding center Vlasov model. Although it is not the focus of this paper to explore the SLDG-RKEI scheme for nonlinear hyperbolic conservation laws that develop shocks, we show some preliminary results on the schemes' performance on the Burgers equation.
mathematics
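As a minimal illustration of the semi-Lagrangian ingredient of the entry above only (ours; the actual method couples this with a DG projection and RK exponential integrators for nonlinear problems), consider constant-coefficient 1D advection on a periodic grid, where the time step is not constrained by a CFL condition.

```python
import numpy as np

def semi_lagrangian_step(u, a, dt, dx):
    """One semi-Lagrangian step for u_t + a u_x = 0 on a periodic grid: trace the
    characteristics back by a*dt and interpolate (linearly here; SLDG instead
    projects onto a local DG basis, retaining high order and mass conservation)."""
    n = u.size
    x = np.arange(n) * dx
    x_foot = (x - a * dt) % (n * dx)              # departure points
    idx = np.floor(x_foot / dx).astype(int)
    w = x_foot / dx - idx                         # linear interpolation weights
    return (1 - w) * u[idx % n] + w * u[(idx + 1) % n]

# advect a smooth bump with a CFL number of 5.3
n, dx, a = 200, 1.0 / 200, 1.0
x = np.arange(n) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
dt, steps = 5.3 * dx / a, 40
for _ in range(steps):
    u = semi_lagrangian_step(u, a, dt, dx)
u_exact = np.exp(-200 * ((x - a * steps * dt) % 1.0 - 0.5) ** 2)
print("max error:", np.abs(u - u_exact).max())
```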
We measure the output power of an Er/Yb fiber laser with twelve different SMF-28 narrowband output couplers and demonstrate experimentally that the optimal reflectivity is ~ 1 %. The fiber laser efficiency with the optimal output coupler is ~ 38 %. In addition, we successfully inscribe a similar output coupler in-situ during laser operation with 800 nm femtosecond pulses and the phase mask technique. An output power very close to the optimum was obtained with the in-situ inscribed output coupler.
physics
We investigate the quench dynamics of strongly coupled superconductors within the time-dependent Gutzwiller approximation from the BCS to the BEC regime and evaluate the out-of-equilibrium transient spectral density and optical conductivity relevant for pump probe experiments. Fourier transformation of the order parameter dynamics reveals a frequency $\Omega_J$ which, as in the BCS case, is controlled by the spectral gap. However, we find a crossover from the BCS dynamics to a new strong coupling regime where a characteristic frequency $\Omega_U$, associated to double occupancy fluctuations controls the order parameter dynamics. The change of regime occurs close to a dynamical phase transition. Both, $\Omega_J$ and $\Omega_U$ give rise to a complex structure of self-driven slow Rabi oscillations which are visible in the non-equilibrium optical conductivity where also side bands appear due to the modulation of the double occupancy by superconducting amplitude oscillations. Analogous results apply to CDW and SDW systems.
condensed matter
Levasseur and Stafford described the rings of differential operators on various classical invariant rings of characteristic zero; in each of the cases they considered, the differential operators form a simple ring. Towards an attack on the simplicity of rings of differential operators on invariant rings of linearly reductive groups over the complex numbers, Smith and Van den Bergh asked if differential operators on the corresponding rings of positive prime characteristic lift to characteristic zero differential operators. We prove that, in general, this is not the case for determinantal hypersurfaces, as well as for Pfaffian and symmetric determinantal hypersurfaces. We also prove that, with few exceptions, these hypersurfaces do not admit a mod $p^2$ lift of the Frobenius endomorphism.
mathematics
Dynamical encirclement of an exceptional point (EP) and the corresponding time-asymmetric mode evolution due to the breakdown of the adiabatic theorem have been key to a range of exotic physical effects in various open atomic, molecular and optical systems. Here, exploiting a gain-loss assisted dual-mode optical waveguide that hosts a dynamical EP-encirclement scheme, we explore an enhanced nonreciprocal effect in the dynamics of light with the onset of saturable nonlinearity in the optical medium. We propose a prototype waveguide-based isolation scheme in which judicious tuning of the nonlinearity level lets only a chosen mode pass in either of the desired directions, as per device requirements. The deliberate presence of the EP enormously enhances the nonreciprocal transmission contrast, up to 40 dB over the proposed device length, with scope for further scalability. This exclusive, topologically robust, mode-selective all-optical isolation scheme will certainly offer opportunities in integrated photonic circuits for efficient coupling from external sources and improved device performance.
physics
Let $H$ be a Krull monoid with finite class group $G$ such that every class contains a prime divisor. We consider the system $\mathcal L (H)$ of all sets of lengths of $H$ and study when $\mathcal L (H)$ contains or is contained in a system $\mathcal L (H')$ of a Krull monoid $H'$ with finite class group $G'$, prime divisors in all classes and Davenport constant $\mathsf D (G')=\mathsf D (G)$. Among others, we show that if $G$ is either cyclic of order $m \ge 7$ or an elementary $2$-group of rank $m-1 \ge 6$, and $G'$ is any group which is non-isomorphic to $G$ but with Davenport constant $\mathsf D (G')=\mathsf D (G)$, then the systems $\mathcal L (H)$ and $\mathcal L (H')$ are incomparable.
mathematics
We investigate charge pumping in the vicinity of order-obstructed topological phases, i.e. symmetry protected topological phases masked by spontaneous symmetry breaking in the presence of strong correlations. To explore this, we study a prototypical Su-Schrieffer-Heeger model with finite-range interaction that gives rise to orbital charge density wave order, and characterize the impact of this order on the model's topological properties. In the ordered phase, where the many-body topological invariant loses quantization, we find that not only is quantized charge pumping still possible, but it is even assisted by the collective nature of the orbital charge density wave order. Remarkably, we show that the Thouless pump scenario may be used to uncover the underlying topology of order-obstructed phases.
condensed matter
Accurate detection of pathological conditions in human subjects can be achieved through off-line analysis of recorded biological signals such as electrocardiograms (ECGs). However, human diagnosis is time-consuming and expensive, as it requires the time of medical professionals. This is especially inefficient when indicative patterns in the biological signals are infrequent. Moreover, patients with suspected pathologies are often monitored for extended periods, requiring the storage and examination of large amounts of non-pathological data, and entailing a difficult visual search task for diagnosing professionals. In this work we propose a compact and sub-mW low power neural processing system that can be used to perform on-line and real-time preliminary diagnosis of pathological conditions, to raise warnings for the existence of possible pathological conditions, or to trigger an off-line data recording system for further analysis by a medical professional. We apply the system to real-time classification of ECG data for distinguishing between healthy heartbeats and pathological rhythms. Multi-channel analog ECG traces are encoded as asynchronous streams of binary events and processed using a spiking recurrent neural network operated in a reservoir computing paradigm. An event-driven neuron output layer is then trained to recognize one of several pathologies. Finally, the filtered activity of this output layer is used to generate a binary trigger signal indicating the presence or absence of a pathological pattern. We validate the approach proposed using a Dynamic Neuromorphic Asynchronous Processor (DYNAP) chip, implemented using a standard 180 nm CMOS VLSI process, and present experimental results measured from the chip.
electrical engineering and systems science
We argue that the `island conjecture' and the replica wormhole derivation of the Page curve break monogamy of entanglement by allowing black hole interior states to be non-classically correlated while also pairwise entangled with radiation states. The reason is that quantum degrees of freedom (present in any half of a Hawking pair) cannot all be identified with the environment at semi-classical pair production, and can only be fixed relative to a subsystem, as required for the Page curve, by correlations equivalent to entanglement - regardless of what those correlations are attributed to. This implies that the recent gravity (replica wormhole) and holographic (island conjecture) derivations of the Page curve entail new physics not yet properly taken into account.
high energy physics theory
We consider $U(N)_k$ Chern-Simons theory on $S^3$ in Seifert framing and write down the partition function as a unitary matrix model. In the large $k$ and large $N$ limit the eigenvalue density satisfies an upper bound $\frac{1}{2\pi\lambda}$ where $\lambda=N/(k+N)$. We study the partition function under saddle point approximation and find that the saddle point equation admits a gapped solution for the eigenvalue density. The on-shell partition function on this solution matches with the partition function in the canonical framing up to a phase. However the eigenvalue density saturates the upper cap at a critical value of $\lambda$ and ceases to exist beyond that. We find a new phase (called cap-gap phase) in this theory for $\lambda$ beyond the critical value and see that the on-shell free energy for the cap-gap phase is less than that of the gapped phase. We also check the level-rank duality in the theory and observe that the level-rank dual of the gapped phase is a \emph{capped} phase whereas the cap-gap phase is level-rank dual to itself.
high energy physics theory
A small dielectric object with positive permittivity may resonate when the free-space wavelength is large in comparison with the object dimensions if the permittivity is sufficiently high. We show that these resonances are described by the magnetoquasistatic approximation of Maxwell's equations, in which the normal component of the displacement current density field vanishes on the surface of the particle. They are associated with values of permittivities and frequencies for which source-free quasistatic magnetic fields exist, which are connected to the eigenvalues of a magnetostatic integral operator. We present the general physical properties of magnetoquasistatic resonances in dielectrics with arbitrary shape. They arise from the interplay between the polarization energy stored in the dielectric and the energy stored in the magnetic field. Our findings improve the understanding of resonances in high-permittivity dielectric objects and provide a powerful tool that greatly simplifies the analysis and design of high index resonators.
physics
Lossy image compression is often limited by the simplicity of the chosen loss measure. Recent research suggests that generative adversarial networks have the ability to overcome this limitation and serve as a multi-modal loss, especially for textures. Together with learned image compression, these two techniques can be used to great effect when relaxing the commonly employed tight measures of distortion. However, convolutional neural network based algorithms have a large computational footprint. Ideally, an existing conventional codec should stay in place, which would ensure faster adoption and adherence to a balanced computational envelope. As a possible avenue to this goal, in this work, we propose and investigate how learned image coding can be used as a surrogate to optimize an image for encoding. The image is altered by a learned filter to optimize for a different performance measure or a particular task. Extending this idea with a generative adversarial network, we show how entire textures are replaced by ones that are less costly to encode but preserve a sense of detail. Our approach can remodel a conventional codec to adjust for the MS-SSIM distortion with over 20% rate improvement without any decoding overhead. On task-aware image compression, we perform favourably against a similar but codec-specific approach.
electrical engineering and systems science
We interpret the neutrino anomalies in neutrino oscillation experiments and the high energy neutrino events at IceCube in terms of neutrino oscillations in an extension of the standard model where three sterile neutrinos are introduced so as to make two light neutrinos pseudo-Dirac particles and one light neutrino a Majorana particle. Our model is different from the so-called $3+n$ model with $n$ sterile neutrinos suggested to interpret short baseline anomalies in terms of neutrino oscillations. While the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix in the $3+n$ model is simply extended to an $n\times n$ unitary matrix, the neutrino mixing matrix in our model is parameterized so as to keep the $3\times3$ PMNS mixing matrix for three active neutrinos unitary. There are also no flavor changing neutral current interactions leading to the conversion of active neutrinos to sterile ones or vice versa. We derive new forms of neutrino oscillation probabilities containing the new interference between the active and sterile neutrinos, which is characterized by the additional new parameters $\Delta m^2$ and $\theta$. Based on the new formulae derived, we show how the short baseline neutrino anomalies can be explained in terms of oscillations, and study the implications of the high energy neutrino events detected at IceCube for probing pseudo-Dirac neutrinos. New phenomenological effects attributed to the existence of the sterile neutrinos are discussed.
high energy physics phenomenology
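For orientation only, one commonly quoted limiting form for oscillations of pseudo-Dirac neutrinos (not the new formulae derived in the entry above, whose interference terms differ), in which each mass eigenstate $\nu_j$ is split into a nearly degenerate pair with tiny splitting $\delta m_j^2$, is
$$ P_{\alpha\beta}(L,E) \;\simeq\; \Big| \sum_{j} U_{\beta j}\, U^{*}_{\alpha j}\; e^{-i m_j^2 L/2E}\, \cos\!\Big(\frac{\delta m_j^2 L}{4E}\Big) \Big|^2 , $$
so that at terrestrial baselines ($\delta m_j^2 L/4E \ll 1$) the standard three-flavor result is recovered, while over astrophysical distances the cosine factors can deplete the flux observed at IceCube.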