Modified gravity theories with an effective Newton constant that varies over cosmological timescales generally predict a different gravitational wave luminosity distance than General Relativity. While this holds for a uniform variation, we show that if locally screened at the source and at the observer, as required to pass stringent astrophysical tests of gravity, the General Relativistic distance is restored. In the absence of such a screening, the same effect must modify electromagnetic luminosity distances inferred from Type Ia supernovae, to the extent that the effects can cancel in the comparison. Hence, either the modifications considered employ screening, which leaves no signature in Standard Sirens of a cosmological modification of gravity, or screening does not operate, in which case there can be a signal that is however well below the foreseeable sensitivity of the probe when astrophysical bounds are employed. We recover these results both in the Jordan and Einstein frames, paying acute attention to peculiarities of each frame such as the notion of redshift or geodesic motion. We emphasise that despite these limitations, Standard Sirens provide valuable independent tests of gravity that differ fundamentally from other probes, a circumstance that is generally important for the wider scope of gravitational modifications and related scenarios. Finally, we use our results to show that the gravitational wave propagation is not affected by dark sector interactions, which restores a dark degeneracy between conformal and disformal couplings that enables observationally viable cosmic self-acceleration to emanate from those.
astrophysics
In this article we describe the development of machine learning models to assist the CLAS12 tracking algorithm by identifying tracks through inferring missing segments in the drift chambers. Autoencoders are used to reconstruct missing segments from the track trajectory. The implemented neural network was able to reliably reconstruct missing segment positions with an accuracy of $\approx 0.35$ wires and led to the recovery of missing tracks with an accuracy of $>99.8\%$.
computer science
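To make the reconstruction idea above concrete, here is a minimal sketch of a denoising-autoencoder-style network that infers one missing segment position from the remaining ones. The six-segment track representation, the 112-wire range, and the synthetic straight-track generator are illustrative assumptions, not the CLAS12 geometry or the authors' model.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a track crosses 6 drift-chamber regions ("segments"),
# each summarized by one mean wire position in [0, 112). One segment is
# dropped at random and the network must infer it from the remaining five.
N_SEG, N_WIRES = 6, 112

def synthetic_tracks(n):
    # toy straight tracks: wire position varies linearly across segments
    a = torch.rand(n, 1) * 40 + 30            # intercept
    b = torch.rand(n, 1) * 8 - 4              # slope per segment
    x = a + b * torch.arange(N_SEG).float()   # (n, 6) wire positions
    return x.clamp(0, N_WIRES - 1) / N_WIRES  # normalize to [0, 1]

class SegmentAutoencoder(nn.Module):
    def __init__(self, hidden=32, code=8):
        super().__init__()
        # input = 6 (possibly masked) positions + 6 mask flags
        self.enc = nn.Sequential(nn.Linear(2 * N_SEG, hidden), nn.ReLU(),
                                 nn.Linear(hidden, code), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                 nn.Linear(hidden, N_SEG))

    def forward(self, x_masked, mask):
        return self.dec(self.enc(torch.cat([x_masked, mask], dim=1)))

model = SegmentAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x = synthetic_tracks(256)
    mask = torch.ones_like(x)
    drop = torch.randint(0, N_SEG, (x.size(0),))
    mask[torch.arange(x.size(0)), drop] = 0.0       # hide one segment per track
    pred = model(x * mask, mask)
    loss = ((pred - x) ** 2 * (1 - mask)).sum() / (1 - mask).sum()
    opt.zero_grad(); loss.backward(); opt.step()

# reconstruction error in wire units on fresh tracks, with segment 3 hidden
with torch.no_grad():
    x = synthetic_tracks(1000)
    mask = torch.ones_like(x); mask[:, 3] = 0.0
    err = (model(x * mask, mask)[:, 3] - x[:, 3]).abs().mean() * N_WIRES
print(f"mean |error| ~ {err.item():.2f} wires")
```

On such toy data the network typically reaches sub-wire mean error, mirroring the kind of accuracy quoted above, although the actual figure depends entirely on the assumed track model.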
Heavy-ion collisions at low beam energies explore the high density regime of strongly-interacting matter. The dynamical evolution of these collisions can be successfully described by hadronic transport approaches. In March 2019, the HADES collaboration took data for AgAg collisions at $E_{\rm Kin}=1.58A$ GeV and in this work, we provide predictions for particle production and spectra within the Simulating Many Accelerated Strongly-interacting Hadrons (SMASH) approach. The multiplicities and spectra of strange and non-strange particles follow the expected trends as a function of system size. In particular, in AuAu collisions, much higher yields of double-strange baryons were observed experimentally than expected from a thermal model. Therefore, we incorporate a previously suggested mechanism to produce $\Xi$ baryons via rare decays of high mass $N^*$ resonances and predict the multiplicities. In addition, we predict the invariant mass spectrum for dilepton emission and explore the most important sources of dileptons above 1 GeV, which are expected to indicate the temperature of the medium. Interestingly, the overall dilepton emission is very similar to that in AuAu collisions at $1.23 A$ GeV, a hint that the smaller system at a higher energy behaves very similarly to the larger system at lower beam energy.
high energy physics phenomenology
We introduce a novel hybrid algorithm to simulate the real-time evolution of quantum systems using parameterized quantum circuits. The method, named "projected - Variational Quantum Dynamics" (p-VQD), realizes an iterative, global projection of the exact time evolution onto the parameterized manifold. In the small time-step limit, this is equivalent to McLachlan's variational principle. Our approach is efficient in the sense that it exhibits an optimal linear scaling with the total number of variational parameters. Furthermore, it is global in the sense that it uses the variational principle to optimize all parameters at once. The global nature of our approach then significantly extends the scope of existing efficient variational methods, which instead typically rely on the iterative optimization of a restricted subset of variational parameters. Through numerical experiments, we also show that our approach is particularly advantageous over existing global optimization algorithms based on the time-dependent variational principle that, due to a demanding quadratic scaling with parameter numbers, are unsuitable for large parameterized quantum circuits.
quantum physics
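A deliberately trivial sketch of the p-VQD projection step, assuming a single-qubit Ry ansatz and Hamiltonian H = Y so that the exact evolution stays inside the variational manifold; the real algorithm applies the same fidelity-maximization step to large parameterized circuits, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

# Toy p-VQD: ansatz |psi(theta)> = Ry(theta)|0>, Hamiltonian H = Y.
# Exact evolution keeps the state inside the ansatz manifold
# (theta(t) = theta0 + 2t), so the projection should recover it.
Y = np.array([[0, -1j], [1j, 0]])

def psi(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def pvqd_step(theta, dt):
    U = expm(-1j * Y * dt)                       # exact short-time propagator
    target = U @ psi(theta)
    # maximize fidelity |<psi(theta + d)|target>|^2 over the parameter update d
    infid = lambda d: 1 - abs(np.vdot(psi(theta + d), target)) ** 2
    res = minimize_scalar(infid, bounds=(-0.5, 0.5), method="bounded")
    return theta + res.x

theta, dt = 0.3, 0.05
for n in range(40):
    theta = pvqd_step(theta, dt)
print("variational theta:", theta)               # ~ 0.3 + 2 * 40 * 0.05 = 4.3
print("exact theta      :", 0.3 + 2 * 40 * dt)
```

Because the manifold here is one-dimensional, the projection simply tracks theta(t) = theta_0 + 2t; the value of the example is only to show the structure of the iteration, not the circuits used in the paper.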
We use self-similarity in N-body simulations of scale-free models to test for resolution dependence in the mass function and two-point correlation functions of dark matter halos. We use 1024$^3$ particle simulations performed with ABACUS, and compare results obtained with two halo finders: friends-of-friends (FOF) and ROCKSTAR. The FOF mass functions show a systematic deviation from self-similarity which is explained by resolution dependence of the FOF mass assignment previously reported in the literature. Weak evidence for convergence is observed only starting from halos of several thousand particles, and mass functions are overestimated by at least as much as 20-25 percent for halos of 50 particles. The mass function of the default ROCKSTAR halo catalog (with bound virial spherical overdensity mass), on the other hand, shows good convergence from of order 50 to 100 particles per halo, with no detectable evidence at the few percent level of any systematic dependence for larger particle number. Tests show that the mass unbinding procedure in ROCKSTAR is the key factor in obtaining this much improved resolution. Applying the same analysis to the halo-halo two point correlation function, we find again strong evidence for convergence only for ROCKSTAR halos, at separations sufficiently large so that halos do not overlap. At these separations we can exclude dependence on resolution at the 5-10 percent level once halos have of order 50 to 100 particles. At smaller separations results are not converged even at significantly larger particle number, and bigger simulations would be required to establish the resolution required for convergence.
astrophysics
Let $\mathfrak{g}$ be the Lie algebra of a compact Lie group. For a $\mathfrak{g}$-valued 1-form $A$, consider the Yang-Mills action \begin{equation} S_{{\rm YM}}(A) = \int_{\mathbb{R}^4} \left|dA + A \wedge A \right|^2 \nonumber \end{equation} using the standard metric on $T\mathbb{R}^4$. When we consider the Lie group $U(1)$, the Lie algebra $\mathfrak{g}$ is isomorphic to $\mathbb{R} \otimes i$, thus $A \wedge A = 0$. For some simple closed loop $C$, we want to make sense of the following path integral, \begin{equation} \frac{1}{Z}\ \int_{A \in \mathcal{A} /\mathcal{G}} \exp \left[ \int_{C} A\right] e^{-\frac{1}{2}\int_{\mathbb{R}^4}|dA|^2}\ DA, \nonumber \end{equation} where $DA$ is some Lebesgue type of measure on the space of $\mathfrak{g}$-valued 1-forms, modulo gauge transformations, $\mathcal{A} /\mathcal{G}$, and $Z$ is some partition function. We will construct an Abstract Wiener space for which we can define the above Yang-Mills path integral rigorously, using renormalization techniques found in lattice gauge theory. We will further show that the Area Law formula does not hold in the abelian Yang-Mills theory.
mathematics
We develop a possibility of generating tensor non-Gaussianity in a kind of anisotropic inflation, where a $U(1)$ gauge field is kinetically coupled to a spectator scalar field. Owing to this coupling, the coherent mode of the electric field appears and softly breaks the isotropy of the Universe. We compute the bispectrum of linearly-polarized tensor perturbations sourced by the gauge field and find that it is strongly red-tilted and has distinctive statistical anisotropies including higher-order multipole moments. Interestingly, the tensor bispectra with the specific combinations of linear polarization modes are dominant, and their amplitudes depend on the different sets of multipole moments. This new type of statistically-anisotropic tensor non-Gaussianity can be potentially testable with the upcoming cosmic microwave background B-mode polarization experiments.
astrophysics
This paper deals with the design of scheduling logics for Networked Control Systems (NCSs) whose shared communication networks have limited capacity. We assume that among \(N\) plants, only \(M\:(< N)\) plants can communicate with their controllers at any time instant. We present an algorithm to allocate the network to the plants periodically such that stability of each plant is preserved. The main apparatus for our analysis is a switched systems representation of the individual plants in an NCS. We rely on multiple Lyapunov-like functions and graph-theoretic arguments to design our scheduling logics. The set of results presented in this paper is a continuous-time counterpart of the results proposed in [15]. We present a set of numerical experiments to demonstrate the performance of our techniques.
electrical engineering and systems science
Developing resource allocation algorithms with strong real-time performance and high efficiency has been an imperative topic in wireless networks. Conventional optimization-based iterative resource allocation algorithms often suffer from slow convergence, especially for massive multiple-input-multiple-output (MIMO) beamforming problems. This paper studies learning-based efficient massive beamforming methods for multi-user MIMO networks. The considered massive beamforming problem is challenging in two aspects. First, the beamforming matrix to be learned is quite high-dimensional in the case of a massive number of antennas. Second, the objective is often time-varying and the solution space is not fixed due to some communication requirements. All these challenges make learning representations for massive beamforming an extremely difficult task. In this paper, by exploiting the structure of the most popular WMMSE beamforming solution, we propose convolutional massive beamforming neural networks (CMBNN) using both supervised and unsupervised learning schemes with a particular design of network structure and input/output. Numerical results demonstrate the efficacy of the proposed CMBNN in terms of running time and system throughput.
electrical engineering and systems science
We use a stacking method to study the radial light profiles of luminous red galaxies (LRGs) at redshift $\sim 0.62$ and $\sim 0.25$, out to a radial range of 200 kpc. We do not find noticeable evolution of the profiles at the two redshifts. The LRG profiles appear to be well approximated by a single Sersic profile, although some excess light can be seen outside 60 kpc. We quantify the excess light by measuring the integrated flux and find that the excess is about 10\% -- a non-dominant but still nonnegligible component.
astrophysics
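A short sketch of fitting a single Sersic profile to a stacked radial light profile and quantifying the integrated excess beyond 60 kpc, as described above; the synthetic profile, noise level, and parameter values are placeholders rather than the measured LRG data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def sersic(r, I_e, r_e, n):
    # Sersic profile; b_n approximated by the standard 2n - 1/3 expansion
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# synthetic "stacked" profile out to 200 kpc with 5% noise (placeholder values)
r = np.logspace(0, np.log10(200), 30)                 # kpc
rng = np.random.default_rng(0)
I_obs = sersic(r, I_e=1.0, r_e=15.0, n=5.5) * (1 + 0.05 * rng.standard_normal(r.size))

popt, pcov = curve_fit(sersic, r, I_obs, p0=[1.0, 10.0, 4.0],
                       sigma=0.05 * I_obs, absolute_sigma=True)
I_e, r_e, n = popt
print(f"best fit: I_e={I_e:.2f}, r_e={r_e:.1f} kpc, n={n:.2f}")

# fractional excess light outside 60 kpc relative to the Sersic model,
# integrating 2*pi*r*I(r) dr (the quantity quantified as ~10% in the abstract)
mask = r > 60
excess = trapezoid(2 * np.pi * r[mask] * (I_obs[mask] - sersic(r[mask], *popt)), r[mask])
total = trapezoid(2 * np.pi * r * I_obs, r)
print(f"excess fraction beyond 60 kpc: {excess / total:.2%}")
```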
Quantifying the confidence (or conversely the uncertainty) of a prediction is a highly desirable trait of an automatic system, as it improves the robustness and usefulness in downstream tasks. In this paper we investigate confidence estimation for end-to-end automatic speech recognition (ASR). Previous work has addressed confidence measures for lattice-based ASR, while current machine learning research mostly focuses on confidence measures for unstructured deep learning. However, as the ASR systems are increasingly being built upon deep end-to-end methods, there is little work that tries to develop confidence measures in this context. We fill this gap by providing an extensive benchmark of popular confidence methods on four well-known speech datasets. There are two challenges we overcome in adapting existing methods: working on structured data (sequences) and obtaining confidences at a coarser level than the predictions (words instead of tokens). Our results suggest that a strong baseline can be obtained by scaling the logits by a learnt temperature, followed by estimating the confidence as the negative entropy of the predictive distribution and, finally, sum pooling to aggregate at word level.
electrical engineering and systems science
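The baseline recipe described above (temperature-scaled logits, negative-entropy confidence, sum pooling to word level) can be sketched in a few lines; the token logits, word grouping, and temperature value below are illustrative stand-ins, not the benchmarked systems.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def word_confidences(token_logits, word_ids, T=1.5):
    """token_logits: (num_tokens, vocab) raw decoder scores,
    word_ids: token -> word index, T: learnt temperature (placeholder value)."""
    probs = softmax(token_logits / T, axis=-1)                    # temperature scaling
    neg_entropy = np.sum(probs * np.log(probs + 1e-12), axis=-1)  # -H per token
    n_words = max(word_ids) + 1
    conf = np.zeros(n_words)
    for tok, w in enumerate(word_ids):
        conf[w] += neg_entropy[tok]                               # sum pooling per word
    return conf

# toy example: 5 tokens forming 2 words, vocabulary of 100
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))
logits[np.arange(5), rng.integers(0, 100, 5)] += 6.0   # make one class dominant
print(word_confidences(logits, word_ids=[0, 0, 0, 1, 1]))
```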
Label noise in multiclass classification is a major obstacle to the deployment of learning systems. However, unlike the widely used class-conditional noise (CCN) assumption that the noisy label is independent of the input feature given the true label, label noise in real-world datasets can be aleatory and heavily dependent on individual instances. In this work, we investigate the instance-dependent noise (IDN) model and propose an efficient approximation of IDN to capture the instance-specific label corruption. Concretely, noting the fact that most columns of the IDN transition matrix have only limited influence on the class-posterior estimation, we propose a variational approximation that uses a single-scalar confidence parameter. To cope with the situation where the mapping from the instance to its confidence value could vary significantly for two adjacent instances, we suggest using instance embedding that assigns a trainable parameter to each instance. The resulting instance-confidence embedding (ICE) method not only performs well under label noise but also can effectively detect ambiguous or mislabeled instances. We validate its utility on various image and text classification tasks.
computer science
Perceptron model updating with backpropagation has become the routine of deep learning. A continuous feed-forward procedure is required for backward propagation to function properly. Doubting the underlying physical interpretation of transformer-based models such as GPT brought about by this routine explanation, a new method of training is proposed in order to keep the physics self-consistent. By treating the GPT model as a space-time diagram and tracing the worldlines of signals, we identify the possible paths of signals required for a self-attention event to occur. With a slight modification, self-attention can be viewed as an Ising model interaction, which enables the goal to be designed as the energy of the system. The target is treated as an external magnetic field inducing signals modeled as magnetic dipoles. A probability network is designed to pilot input signals travelling for different durations through different routes. A rule for updating the probabilities is designed in order to form constructive interference at target locations so that the instantaneous energy can be maximised. An experiment was conducted on a 4-class classification problem extracted from MNIST. The results exhibit interesting but expected behaviours, which do not exist in a backpropagation-updated network and are more like learning in a real human, especially in the few-shot scenario.
electrical engineering and systems science
Doping is a widely used method to tune the physical properties of ferroelectric perovskites. Since doping can induce charges due to the substitution of certain elements, charge effects need to be considered in doped samples. To understand how charges can affect the system, we incorporate the dipole-charge interaction into our simulations, where the pinched hysteresis loops can be well reproduced. Two charge compensation models are proposed and numerically investigated to understand how lanthanum doping affects BaTiO$_{3}$'s ferroelectric phase transition temperature and hysteresis loop. The consequences of the two charge compensation models are compared and discussed.
condensed matter
We prove that each simple polygonal arc $\gamma$ attains at most two pairs of support lines with a given angle difference such that each pair admits parameters $s_1 < s_2 < s_3$ for which $\gamma(s_1)$ and $\gamma(s_3)$ lie on one of the lines and $\gamma(s_2)$ lies on the other.
mathematics
I examine the regime of forward scattering of an energetic particle in a plasma medium in thermal equilibrium. Treating the particle as an open quantum system interacting with a bath, I look at the time evolution of the reduced density matrix of the system. The kinematic and dynamical time scales that emerge can exist in several possible hierarchies which can lead to different EFT formulations. I show that in certain hierarchies, it becomes necessary to account for an arbitrary number of coherent exchanges between the system and the bath, going beyond the independent scattering paradigm. Analytic results are obtained in certain limits and the formalism is applied to the measurement of transverse momentum broadening of a quark in a Quark Gluon Plasma medium.
high energy physics phenomenology
We present evidence for the existence of substructure in the cold front cluster A2554 based on a 20.14 ks Chandra observation. Using centroid shift and X-ray brightness concentration parameters, we confirm that A2554 is a dynamically disturbed system. We detect two dominant structures: a main cluster at z = 0.1108 and a foreground northern substructure at z = 0.1082. The analysis reveals an X-ray surface brightness edge at $r \simeq 60$ kpc from the cluster core. The thermodynamical profiles across the edge rule out the shock scenario. The temperature jump (from $\sim 6$ keV to $\sim 10$ keV) and pressure equilibrium ($P_0/P_1 = 1.01 \pm 0.23$) across the edge are consistent with the definition of a cold front with a Mach number $M=0.94^{+0.13}_{-0.17}$. We also observed a weak bow shock at $\sim 100$ kpc in front of the cold cloud, corresponding to an upper limit on the Mach number of $M \sim 1.1$. If the northern substructure is not related to the cold front, we conclude that the transonic motion of the cloud is caused by a merger, which was weak or occurred long ago.
astrophysics
In this paper we study the increment of the entanglement entropy and of the (replica) logarithmic negativity in a zero-density excited state of a free massive bosonic theory, compared to the ground state. This extends the work of two previous publications by the same authors. We consider the case of two disconnected regions and find that the change in the entanglement entropy depends only on the combined size of the regions and is independent of their connectivity. We subsequently generalize this result to any number of disconnected regions. For the replica negativity we find that its increment is a polynomial with integer coefficients depending only on the sizes of the two regions. The logarithmic negativity turns out to have a more complicated functional structure than its replica version, typically involving roots of polynomials on the sizes of the regions. We obtain our results by two methods already employed in previous work: from a qubit picture and by computing four-point functions of branch point twist fields in finite volume. We test our results against numerical simulations on a harmonic chain and find excellent agreement.
high energy physics theory
In this paper we find analytical solutions for the scalar and gauge fields in the Friedmann-Robertson-Walker multiply warped braneworld scenario. With these we find the precise mass spectra for these fields. We compare these spectra with those previously found in the literature for the static case.
high energy physics theory
A photoelectron forced to pass through two atomic energy levels before receding from the residual ion shows interference fringes in its angular distribution as manifestation of a two-slit-type interference experiment in wave-vector space. This scenario was experimentally realized by irradiating a Rubidium atom by two low-intensity continuous-wave lasers [Pursehouse et al., Phys. Rev. Lett. 122, 053204 (2019)]. In a one-photon process the first laser excites the 5p level while the second uncorrelated photon elevates the excited population to the continuum. This same continuum state can also be reached when the second laser excites the 6p state and the first photon then triggers the ionization. As the two lasers are weak and their relative phases uncorrelated, the coherence needed for generating the interference stems from the atom itself. Increasing the intensity or shortening the laser pulses enhances the probability that two photons from both lasers act at the same time, and hence the coherence properties of the applied lasers are expected to affect the interference fringes. Here, this aspect is investigated in detail, and it is shown how tuning the temporal shapes of the laser pulses allows for tracing the time-dependence of the interference fringes. We also study the influence of applying a third laser field with a random amplitude, resulting in a random fluctuation of one of the ionization amplitudes and discuss how the interference fringes are affected.
quantum physics
Recent gamma-ray and radio observations provide stringent constraints for annihilating dark matter. The current $2\sigma$ lower limits of dark matter mass can be constrained to $\sim 100$ GeV for the thermal relic annihilation cross section. In this article, we use the radio continuum spectral data of the nearby galaxy NGC 4214 and differentiate the thermal contribution, dark matter annihilation contribution and cosmic-ray contribution. We can thereby obtain more stringent constraints on dark matter mass and annihilation cross sections. The $5\sigma$ lower limits of thermal relic annihilating dark matter mass obtained are 300 GeV, 220 GeV, 220 GeV, 500 GeV and 600 GeV for the $e^+e^-$, $\mu^+\mu^-$, $\tau^+\tau^-$, $W^+W^-$ and $b\bar{b}$ channels respectively. These limits challenge the dark matter interpretation of the gamma-ray, positron and antiproton excess in our Milky Way.
astrophysics
This paper describes the new QuickFind method in LcTools for finding signals and associated TTVs (Transit Timing Variations) in light curves from NASA space missions. QuickFind is adept at finding medium to large sized signals (generally those with S/N ratios above 15) extremely fast, significantly reducing overall processing time for a light curve as compared to the BLS detection method. For example, on the lead author's computer, QuickFind was able to detect both KOI signals for star 10937029 in a 14 quarter Kepler light curve spanning 1,459 days in roughly 2 seconds whereas BLS took about 155 seconds to find both signals making QuickFind in this example about 77 times faster than BLS. This paper focuses on the user interfaces, data processing algorithm, and performance tests for the QuickFind method in LcTools.
astrophysics
Graph neural networks have achieved state-of-the-art accuracy for graph node classification. However, GNNs are difficult to scale to large graphs, for example frequently encountering out-of-memory errors on even moderate size graphs. Recent works have sought to address this problem using a two-stage approach, which first aggregates data along graph edges, then trains a classifier without using additional graph information. These methods can run on much larger graphs and are orders of magnitude faster than GNNs, but achieve lower classification accuracy. We propose a novel two-stage algorithm based on a simple but effective observation: we should first train a classifier then aggregate, rather than the other way around. We show our algorithm is faster and can handle larger graphs than existing two-stage algorithms, while achieving comparable or higher accuracy than popular GNNs. We also present a theoretical basis to explain our algorithm's improved accuracy, by giving a synthetic nonlinear dataset in which performing aggregation before classification actually decreases accuracy compared to doing classification alone, while our classify then aggregate approach substantially improves accuracy compared to classification alone.
computer science
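A minimal sketch of the classify-then-aggregate idea: train a graph-agnostic classifier on node features, then propagate its predicted class probabilities along the edges. The toy dataset, choice of classifier, and number of propagation hops are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_then_aggregate(X, edges, train_idx, y_train, n_nodes, hops=2):
    """X: (n_nodes, d) features; edges: list of undirected (u, v) pairs."""
    # Stage 1: train a plain classifier on the labeled nodes only (no graph used).
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y_train)
    probs = clf.predict_proba(X)                      # (n_nodes, n_classes)

    # Row-normalized adjacency (with self-loops), dense for clarity.
    A = np.eye(n_nodes)
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    A /= A.sum(axis=1, keepdims=True)

    # Stage 2: aggregate the *predictions* over the graph, not the raw features.
    for _ in range(hops):
        probs = A @ probs
    return probs.argmax(axis=1)

# tiny toy graph: two feature clusters, each connected as a chain
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(3, 1, (10, 4))])
edges = [(i, i + 1) for i in range(9)] + [(10 + i, 11 + i) for i in range(9)]
pred = classify_then_aggregate(X, edges, train_idx=[0, 5, 10, 15],
                               y_train=[0, 0, 1, 1], n_nodes=20)
print(pred)
```

The aggregation stage only multiplies a probability matrix by a sparse (here dense for brevity) normalized adjacency, which is why this family of methods scales to graphs that do not fit in GPU memory.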
Under the Markov decision process (MDP) congestion game framework, we study the problem of enforcing global constraints using tolls on a population of players with stochastic dynamics and coupled congestion costs. Existing work demonstrates that by enforcing identical tolls on every player, the optimal joint strategy for the playing population can be shifted to satisfy global design constraints. However, computing the minimum tolling value for constraint satisfaction requires explicit modelling of the congestion cost as a function of the playing population. In this paper, we assume that both the playing population and the constraint-enforcing authority, the game designer, lack such a model. Instead, the game designer can enforce tolls on a gaming instance that responds by approximating the optimal joint strategy under any toll. Under these assumptions, we develop a myopic algorithm that enables the game designer to compute the minimum tolling value, and prove that, up to the approximation error made by the gaming instance, our algorithm not only converges to the correct toll, but will guarantee average constraint satisfaction during the iterative process. Finally, we demonstrate how our model and algorithm can be applied to the profit-seeking ride-share driver population of Manhattan, New York City to optimally reduce traffic congestion using tolls.
computer science
Significant scientific and technological progress in the field of spintronics is based on trilayer magnetic tunnel junction devices which principally rely on the physics of single barrier tunneling. While technologically relevant devices have been prototyped, the physics of single barrier tunneling poses ultimate limitations on the performance of magnetic tunnel junction devices. Here, we propose a fresh route toward high performance magnetic tunnel junctions by making electronic analogs of optical phenomena such as anti-reflections and Fabry-P\'erot resonances. The devices we propose feature anti-reflection enabled superlattice heterostructures sandwiched between the fixed and the free ferromagnets of the magnetic tunnel junction structure. Our predictions are based on the non-equilibrium Green's function spin transport formalism coupled self-consistently with the Landau-Lifshitz-Gilbert-Slonczewski equation. Owing to the physics of bandpass spin filtering in the bandpass superlattice magnetic tunnel junction device, we demonstrate an ultra-high boost in the tunnel magneto-resistance (TMR$\approx5\times10^4\%$) and nearly 92% suppression of the spin transfer torque switching bias in comparison to a traditional trilayer magnetic tunnel junction device. We rationalize the improved spin transfer torque switching via analysis of the Slonczewski spin current transmission spectra. The proof of concepts presented here can lead to next-generation spintronics device design harvesting the rich physics of superlattice heterostructures and exploiting spintronic analogs of optical phenomena.
condensed matter
Radiative transfer describes the propagation of electromagnetic radiation through an interacting medium. This process is often simulated by the use of the Monte Carlo method, which involves the probabilistic determination and tracking of simulated photon packages. In the regime of high optical depths, this approach encounters difficulties since a proper representation of the various physical processes can only be achieved by considering high numbers of simulated photon packages. As a consequence, the demand for computation time rises accordingly and thus practically puts a limit on the optical depth of models that can be simulated. Here we present a method that aims to solve the problem of high optical depths in dusty media, which relies solely on the use of unbiased Monte Carlo radiative transfer. To that end, we identified and precalculated repeatedly occurring and simulated processes, stored their outcome in a multidimensional cumulative distribution function, and immediately replaced the basic Monte Carlo transfer during a simulation by that outcome. During the precalculation, we generated emission spectra as well as deposited energy distributions of photon packages traveling from the center of a sphere to its rim. We carried out a performance test of the method to confirm its validity and gain a boost in computation speed by up to three orders of magnitude. We then applied the method to a simple model of a viscously heated circumstellar disk, and we discuss the necessity of finding a solution for the optical depth problem with regard to a proper temperature calculation. We find that the impact of an incorrect treatment of photon packages in highly optically thick regions extends even to optically thin regions, thus changing the overall observational appearance of the disk.
astrophysics
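The core acceleration step, replacing repeated Monte Carlo transport by draws from a precalculated cumulative distribution function, can be sketched as follows; the tabulated quantity is a placeholder scalar standing in for the precomputed sphere-escape spectra and energy distributions described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Precalculation (done once, offline): brute-force Monte Carlo of some expensive
# outcome -- here a stand-in scalar "exit quantity" -- tabulated as an empirical CDF.
def expensive_mc_outcome(n):
    # placeholder for "photon package travels from sphere center to rim"
    return rng.exponential(scale=2.0, size=n)

samples = np.sort(expensive_mc_outcome(200_000))
cdf = np.arange(1, samples.size + 1) / samples.size

# During the actual simulation: replace the full random walk by inverse-CDF
# sampling of the precomputed table (one searchsorted lookup per draw).
def draw_from_table(n):
    u = rng.random(n)
    idx = np.searchsorted(cdf, u)
    return samples[np.minimum(idx, samples.size - 1)]

fast = draw_from_table(100_000)
print("table mean:", fast.mean(), " direct mean:", expensive_mc_outcome(100_000).mean())
```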
We introduce a new method for generating color images from sketches or edge maps. Current methods either require some form of additional user-guidance or are limited to the "paired" translation approach. We argue that segmentation information could provide valuable guidance for sketch colorization. To this end, we propose to leverage semantic image segmentation, as provided by a general purpose panoptic segmentation network, to create an additional adversarial loss function. Our loss function can be integrated to any baseline GAN model. Our method is not limited to datasets that contain segmentation labels, and it can be trained for "unpaired" translation tasks. We show the effectiveness of our method on four different datasets spanning scene level indoor, outdoor, and children book illustration images using qualitative, quantitative and user study analysis. Our model improves its baseline up to 35 points on the FID metric. Our code and pretrained models can be found at https://github.com/giddyyupp/AdvSegLoss.
computer science
We present an extension of the renormalisation procedure based on the R-operation in $D$ dimensions at two-loop level, in which the numerators of all Feynman diagrams can be constructed in four dimensions, and the rational terms stemming from the interplay of $(D-4)$-dimensional numerator parts and UV poles are fully reconstructed from a finite set of universal local counterterms. This represents an extension of the concept of rational terms of type $R_2$ to two loops. We provide a general method to compute one and two-loop rational counterterms from massive one-scale tadpole integrals. Finally, we present the full set of rational counterterms of UV origin for QED up to two-loop order.
high energy physics phenomenology
Gravitationally lensed curved arcs provide a wealth of information about the underlying lensing distortions. Extracting precise lensing information from extended sources is a key component in many studies aiming to answer fundamental questions about the Universe. To maintain accuracy with increased precision, it is of vital importance to characterize and understand the impact of degeneracies inherent in lensing observables. In this work, we present a formalism to describe the gravitational lensing distortion effects resulting in curved extended arcs based on the eigenvectors and eigenvalues of the local lensing Jacobian and their directional differentials. We identify a non-local and non-linear extended deflector basis that inherits these local properties. Our parameterization is tightly linked to observable features in extended sources and allows one to accurately extract the lensing information of extended images without imposing an explicit global deflector model. We quantify which degeneracies can be broken based on specific assumptions on the local lensing nature and assumed intrinsic source shape. Our formalism is applicable from the weak linear regime and the semi-linear regime all the way up to the highly non-linear regime of highly magnified arcs of multiple images. The methodology and implementation presented in this work provide a framework for assessing systematics, for guiding inference efforts toward the right choice of complexity based on the data at hand, and for quantifying the lensing information extracted in a model-independent way.
astrophysics
The multipartite Greenberger-Horne-Zeilinger (GHZ) states are indispensable elements for various quantum information processing tasks. Here we put forward two deterministic proposals to dissipatively prepare tripartite GHZ states in a neutral atom system. The first scheme can be considered as an extension of a recent work [T. M. Wintermantel, Y. Wang, G. Lochead, \textit{et al}, {Phys. Rev. Lett. \textbf{124}, 070503 (2020)}]. By virtue of the polychromatic driving fields and the engineered spontaneous emission, a multipartite GHZ state with an odd number of atoms is generated with a high efficiency. This scheme effectively overcomes the problem of dependence on the initial state but is sensitive to the decay of the Rydberg state. In the second scenario, we exploit the spontaneous emission of the Rydberg states as a resource, so that a steady tripartite GHZ state with fidelity around $98\%$ can be obtained by simultaneously integrating the switching driving of unconventional Rydberg pumping and the Rydberg antiblockade effect.
quantum physics
In this paper, we investigate the decay of the Higgs boson to $h_c$ plus a photon in the NRQCD theoretical framework. Compared with the Higgs decay to $J/\psi$ plus a photon, this process has no indirect contribution and can be used to probe the Yukawa coupling of the Higgs to charm quarks. The results show that the branching ratio of this process is about $10^{-8}$. If we take into account the $10^{-3}$ efficiency in the $h_c$ detection, no events will be available even in the case of $30\,{\rm ab}^{-1}$ luminosity at the FCC-pp with 100 TeV center-of-mass energy. However, if the detection efficiency of $h_c$ is greatly improved in the future, this process will play an important role at future linear $e^+e^-$ colliders and at LHCb. Moreover, this process should also play an important role when the anomalous charm Yukawa couplings are larger, offering direct sensitivity to them.
high energy physics phenomenology
R\"odl, Ruci\'nski, and Szemer\'edi determined the minimum $(k-1)$-degree threshold for the existence of fractional perfect matchings in $k$-uniform hypergrahs, and K\"uhn, Osthus, and Townsend extended this result by asymptotically determining the $d$-degree threshold for the range $k-1>d\ge k/2$. In this note, we prove the following exact degree threshold: Let $k,d$ be positive integers with $k\ge 4$ and $k-1>d\geq k/2$, and let $n$ be any integer with $n\ge k^2$. Then any $n$-vertex $k$-uniform hypergraph with minimum $d$-degree $\delta_d(H)>{n-d\choose k-d} -{n-d-(\lceil n/k\rceil-1)\choose k-d}$ contains a fractional perfect matching. This lower bound on the minimum $d$-degree is best possible. We also determine optimal minimum $d$-degree conditions which guarantees the existence of fractional matchings of size $s$, where $0<s\le n/k$ (when $k/2\le d\le k-1$), or with $s$ large enough and $s\le n/k$ (when $2k/5<d<k/2$).
mathematics
In this paper we have discussed the convergence of power series both in the p-adic norm as well as in the real norm. We have investigated rational summability of power series with respect to both the p-adic norm and the real norm under certain conditions. Then we have studied the convergence of specially constructed power series and derived a summation formula. Finally, we have studied the adele, the idele and some results regarding them with the help of convergent power series.
mathematics
Cross-lingual voice conversion (CLVC) is a quite challenging task since the source and target speakers speak different languages. This paper proposes a CLVC framework based on bottleneck features and deep neural network (DNN). In the proposed method, the bottleneck features extracted from a deep auto-encoder (DAE) are used to represent speaker-independent features of speech signals from different languages. A DNN model is trained to learn the mapping between bottleneck features and the corresponding spectral features of the target speaker. The proposed method can capture speaker-specific characteristics of a target speaker, and hence requires no speech data from source speaker during training. The performance of the proposed method is evaluated using data from three Indian languages: Telugu, Tamil and Malayalam. The experimental results show that the proposed method outperforms the baseline Gaussian mixture model (GMM)-based CLVC approach.
electrical engineering and systems science
Bang-bang control is often used to implement a minimal-time shortcut to adiabaticity for efficient transport of atoms in a moving harmonic trap. However, drastic changes of the on-off controller, leading to high transport-mode excitation and energy consumption, become infeasible under realistic experimental conditions. To circumvent these problems, we propose smooth bang-bang protocols with near-minimal time, by setting the physical constraints on the relative displacement, speed, and acceleration between the mass center of the atom and the trap center. We adopt Pontryagin's maximum principle to obtain the analytical solutions of smooth bang-bang protocol for near-time-minimal control. More importantly, it is found that the energy excitation and sloshing amplitude are significantly reduced at the expense of operation time. We also present a multiple shooting method for the self-consistent numerical analysis. Finally, this method is applied to other tasks, e.g., energy minimization, where obtaining smooth analytical form is complicated.
quantum physics
Quantum devices, such as quantum simulators, quantum annealers, and quantum computers, may be exploited to solve problems beyond what is tractable with classical computers. This may be achieved as the Hilbert space available to perform such `calculations' is far larger than that which may be classically simulated. In practice, however, quantum devices have imperfections, which may limit the accessibility to the whole Hilbert space. We thus determine that the dimension of the space of quantum states that are available to a quantum device is a meaningful measure of its functionality, though unfortunately this quantity cannot be directly experimentally determined. Here we outline an experimentally realisable approach to obtaining the required Hilbert space dimension of such a device to compute its time evolution, by exploiting the thermalization dynamics of a probe qubit. This is achieved by obtaining a fluctuation-dissipation theorem for high-temperature chaotic quantum systems, which facilitates the extraction of information on the Hilbert space dimension via measurements of the decay rate, and time-fluctuations.
quantum physics
We present the first metal abundance profiles for a representative sample of massive clusters. Our measurements extend to $R_{500}$ and are corrected for a systematic error plaguing previous outskirt estimates. Our profiles flatten out at large radii, admittedly not a new result; however, the radial range and representative nature of our sample extend its import well beyond previous findings. We find no evidence of segregation between cool-core and non-cool-core systems beyond $\sim 0.3 R_{500}$, implying that, as was found for thermodynamic properties (Ghirardini et al. 2019), the physical state of the core does not affect global cluster properties. Our mean abundance within $R_{500}$ shows a very modest scatter, $<$15%, suggesting the enrichment process must be quite similar in all these massive systems. This is a new finding and has significant implications for feedback processes. Together with results from thermodynamic properties presented in a previous X-COP paper, it affords a coherent picture where feedback effects do not vary significantly from one system to another. By combining ICM with stellar measurements we have found the amount of Fe diffused in the ICM to be about ten times higher than that locked in stars. Although our estimates suggest, with some strength, that the measured iron mass in clusters is well in excess of the predicted one, systematic errors prevent us from making a definitive statement. Further advancements will only be possible when systematic uncertainties, principally those associated with stellar masses, both within and beyond $R_{500}$, can be reduced.
astrophysics
In recent years, the manipulation of Fano resonances in the time domain has unlocked deep insights into a broad spectrum of systems' coherent dynamics. Here, inelastic scattering of light with coherent acoustic phonons is harnessed to achieve complex Fano resonances. The sudden change of phonon momentum during reflection leads to a transition from anti-Stokes to Stokes light scattering, producing two different resonances that interfere in the measurement process. We highlight the conditions necessary to achieve such interference, revealing an underlying symmetry between photons and phonons, and verify the theory experimentally. Then, we demonstrate the possibility to characterize energy and coherence losses at rough interfaces, thus providing a mechanism for nondestructive testing of interface quality. Our results describe numerous unexplained observations in ultrafast acoustics and can be generalized to the scattering of light with any waves.
physics
In a noiseless linear estimation problem, one aims to reconstruct a vector x* from the knowledge of its linear projections y=Phi x*. There have been many theoretical works concentrating on the case where the matrix Phi is a random i.i.d. one, but a body of heuristic evidence suggests that many of these results are universal and extend well beyond this restricted case. Here we revisit this problem through the prism of the development of message passing methods, and consider not only the universality of the l1 transition, as previously addressed, but also that of the optimal Bayesian reconstruction. We observe that the universality extends to the Bayes-optimal minimum mean-squared error (MMSE), and to a range of structured matrices.
statistics
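As a concrete instance of the reconstruction problem above, here is a short iterative soft-thresholding (ISTA) solver for the l1-regularized estimate of x* from y = Phi x* with an i.i.d. Gaussian Phi; the problem sizes, sparsity level, and regularization weight are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10                       # signal dim, measurements, sparsity
x_star = np.zeros(n)
x_star[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix
y = Phi @ x_star                             # noiseless projections

def ista(Phi, y, lam=1e-3, iters=5000):
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x + Phi.T @ (y - Phi @ x) / L                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold
    return x

x_hat = ista(Phi, y)
print("relative l2 error:", np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star))
```

Swapping the Gaussian Phi for a structured matrix (e.g. a randomly subsampled orthogonal transform) is a simple way to probe empirically the kind of universality discussed above.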
Learning to predict scene depth and camera motion from RGB inputs only is a challenging task. Most existing learning based methods deal with this task in a supervised manner which requires ground-truth data that is expensive to acquire. More recent approaches explore the possibility of estimating scene depth and camera pose in a self-supervised learning framework. Although encouraging results have been shown, current methods either learn from monocular videos for depth and pose and typically do so without enforcing multi-view geometry constraints between scene structure and camera motion, or require stereo sequences as input where the ground-truth between-frame motion parameters need to be known. In this paper we propose to jointly optimize the scene depth and camera motion by incorporating a differentiable Bundle Adjustment (BA) layer that minimizes the feature-metric error, and then form the photometric consistency loss with view synthesis as the final supervisory signal. The proposed approach only needs unlabeled monocular videos as input, and extensive experiments on the KITTI and Cityscapes datasets show that our method achieves state-of-the-art results among self-supervised approaches using monocular videos as input, and even gains an advantage over the line of methods that learn from calibrated stereo sequences (i.e. with pose supervision).
computer science
Class-imbalance is an inherent characteristic of multi-label data which affects the prediction accuracy of most multi-label learning methods. One efficient strategy to deal with this problem is to employ resampling techniques before training the classifier. Existing multilabel sampling methods alleviate the (global) imbalance of multi-label datasets. However, performance degradation is mainly due to rare subconcepts and overlapping of classes that could be analysed by looking at the local characteristics of the minority examples, rather than the imbalance of the whole dataset. We propose a new method for synthetic oversampling of multi-label data that focuses on local label distribution to generate more diverse and better labeled instances. Experimental results on 13 multi-label datasets demonstrate the effectiveness of the proposed approach in a variety of evaluation measures, particularly in the case of an ensemble of classifiers trained on repeated samples of the original data.
computer science
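A minimal sketch in the spirit of local-neighborhood multi-label oversampling: seed instances carrying a minority label are interpolated toward nearby seeds, and the synthetic instance is labeled from the label distribution of that neighborhood. The neighborhood size and the majority-vote label-assignment rule are illustrative assumptions, not the exact procedure proposed in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def oversample_minority_label(X, Y, label, k=5, n_new=20, seed=0):
    """X: (n, d) features; Y: (n, L) binary label matrix; label: minority column."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(Y[:, label] == 1)               # minority seed instances
    if len(idx) < 2:
        raise ValueError("need at least two instances with the minority label")
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(idx))).fit(X[idx])
    _, neigh = nn.kneighbors(X[idx])                      # neighbors among the seeds

    Xs, Ys = [], []
    for _ in range(n_new):
        i = rng.integers(len(idx))
        j = neigh[i, rng.integers(1, neigh.shape[1])]     # skip self (column 0)
        alpha = rng.random()
        Xs.append(X[idx[i]] + alpha * (X[idx[j]] - X[idx[i]]))
        # label the synthetic point by majority vote over the seed's neighborhood
        local = Y[idx[neigh[i]]]
        Ys.append((local.mean(axis=0) >= 0.5).astype(int))
    return np.vstack(Xs), np.vstack(Ys)

# toy multi-label data: 50 instances, 4 features, 3 labels, label 2 is rare
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Y = (rng.random((50, 3)) < [0.5, 0.4, 0.08]).astype(int)
X_new, Y_new = oversample_minority_label(X, Y, label=2)
print(X_new.shape, Y_new.sum(axis=0))
```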
In a structural health monitoring (SHM) system that uses digital cameras to monitor cracks of structural surfaces, techniques for reliable and effective data compression are essential to ensure stable and energy efficient transmission of crack images in wireless devices, e.g., drones and robots with high definition cameras installed. Compressive sensing (CS) is a signal processing technique that allows accurate recovery of a signal from a sampling rate much smaller than the limitation of the Nyquist sampling theorem. The conventional CS method is based on the principle that, through a regularized optimization, the sparsity property of the original signals in some domain can be exploited to get the exact reconstruction with a high probability. However, the strong assumption of the signals being highly sparse in an invertible space is relatively hard to satisfy for real crack images. In this paper, we present a new approach of CS that replaces the sparsity regularization with a generative model that is able to effectively capture a low dimension representation of targeted images. We develop a recovery framework for automatic crack segmentation of compressed crack images based on this new CS method and demonstrate the remarkable performance of the method, which takes advantage of the strong capability of generative models to capture the necessary features required in the crack segmentation task even when the backgrounds of the generated images are not well reconstructed. The superior performance of our recovery framework is illustrated by comparing with three existing CS algorithms. Furthermore, we show that our framework is extensible to other common problems in automatic crack segmentation, such as defect recovery from motion blurring and occlusion.
electrical engineering and systems science
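The replacement of the sparsity prior by a generative model can be sketched as follows: the signal is assumed to lie in the range of a generator G, and recovery searches the low-dimensional latent space for the code whose image matches the compressed measurements. The toy random decoder and problem sizes are placeholders; the paper's generator is a model trained on crack images.

```python
import torch

torch.manual_seed(0)
n, m, k = 400, 80, 10            # image dim (flattened), measurements, latent dim

# Toy "generative model": a fixed random decoder G(z) = tanh(W2 relu(W1 z)).
W1 = torch.randn(64, k)
W2 = torch.randn(n, 64)
def G(z):
    return torch.tanh(W2 @ torch.relu(W1 @ z))

z_true = torch.randn(k)
x_true = G(z_true)                          # ground-truth signal in range(G)
A = torch.randn(m, n) / m ** 0.5            # compressive sensing matrix
y = A @ x_true                              # compressed measurements

# Recovery: min_z || A G(z) - y ||^2 by gradient descent in the latent space.
z = (0.1 * torch.randn(k)).requires_grad_()
opt = torch.optim.Adam([z], lr=0.05)
for step in range(3000):
    loss = torch.sum((A @ G(z) - y) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

rel_err = torch.norm(G(z) - x_true) / torch.norm(x_true)
print(f"relative reconstruction error: {rel_err.item():.3f}")
```

The latent optimization is non-convex, so in practice several restarts are used; the point of the sketch is only the structure of the objective, in which the generator replaces the soft-thresholding step of sparsity-based CS.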
Probing optical excitations with nanometer resolution is important for understanding their dynamics and interactions down to the atomic scale. Electron microscopes currently offer the unparalleled ability of rendering spatially-resolved electron spectra with combined meV and sub-nm resolution, while the use of ultrafast optical pulses enables fs temporal resolution and exposure of the electrons to ultraintense confined optical fields. Here, we theoretically investigate fundamental aspects of the interaction of fast electrons with localized optical modes that are made possible by these advances. We use a quantum-optics description of the optical field to predict that the resulting electron spectra strongly depend on the statistics of the sample excitations (bosonic or fermionic) and their population (Fock, coherent, or thermal), whose autocorrelation functions are directly retrieved from the ratios of electron gain intensities. We further explore feasible experimental scenarios to probe the quantum characteristics of the sampled excitations and their populations.
quantum physics
Since the discovery of the Verwey transition in magnetite, transition metal compounds with pyrochlore structures have been intensively studied as a platform for realizing remarkable electronic phase transitions. We report the discovery of a unique phase transition that preserves the cubic symmetry of the beta-pyrochlore oxide CsW$_2$O$_6$, where each of the W 5d electrons is confined in a regular-triangle W3 trimer. This trimer formation is an unprecedented self-organization of d electrons, which can be resolved into a charge order satisfying the Anderson condition in a nontrivial way, an orbital order caused by the distortion of WO6 octahedra, and the formation of a spin-singlet pair in a regular-triangle trimer. Electronic instability due to the unusual three-dimensional nesting of Fermi surfaces and the localized nature of the 5d electrons characteristic of the pyrochlore oxides were found to play important roles in this unique charge-orbital-spin coupled phenomenon.
condensed matter
Uranus and Neptune are the last unexplored planets of the Solar System. I show that they hold crucial keys to understand the atmospheric dynamics and structure of planets with hydrogen atmospheres. Their atmospheres are active and storms are believed to be fueled by methane condensation which is both extremely abundant and occurs at low optical depth. This means that mapping temperature and methane abundance as a function of position and depth will inform us on how convection organizes in an atmosphere with no surface and condensates that are heavier than the surrounding air, a general feature of gas giants. Using this information will be essential to constrain the interior structure of Uranus and Neptune themselves, but also of Jupiter, Saturn and numerous exoplanets with hydrogen atmospheres. Owing to the spatial and temporal variability of these atmospheres, an orbiter is required. A probe would provide a reference profile to lift ambiguities inherent to remote observations. It would also measure abundances of noble gases which can be used to reconstruct the history of planet formation in the Solar System. Finally, mapping the planets' gravity and magnetic fields will be essential to constrain their global composition, structure and evolution.
astrophysics
We present the confirmation of the eccentric warm giant planet TOI-201 b, first identified as a candidate in \textit{TESS} photometry (Sectors 1-8, 10-13, and 27-28) and confirmed using ground-based photometry from NGTS and radial velocities from FEROS, HARPS, CORALIE, and \textsc{Minerva}-Australis. TOI-201 b orbits a young ($\mathrm{0.87^{+0.46}_{-0.49} \, Gyr}$) and bright (V = 9.07 mag) F-type star with a $\mathrm{52.9781 \, d}$ period. The planet has a mass of $\mathrm{0.42^{+0.05}_{-0.03}\, M_J}$, a radius of $\mathrm{1.008^{+0.012}_{-0.015}\, R_J}$, and an orbital eccentricity of $0.28^{+0.06}_{-0.09}$; it appears to still be undergoing fairly rapid cooling, as expected given the youth of the host star. The star also shows long-term variability in both the radial velocities and several activity indicators, which we attribute to stellar activity. The discovery and characterization of warm giant planets such as TOI-201 b is important for constraining formation and evolution theories for giant planets.
astrophysics
We construct an ${\cal N}{=}\,2$ supersymmetric extension of $n$-particle Ruijsenaars-Schneider models. The guiding feature is a deformation of the phase space. The supercharges have a "free" form linear in the fermions but produce an interacting four-fermion Hamiltonian. A field-dependent unitary transformation maps to standard fermions obeying conventional Poisson brackets. In this frame, the supercharges and Hamiltonian have long "fermionic tails". We also comment on previous attempts in this direction.
high energy physics theory
We investigate the strongly coupled minimal walking technicolor model (MWT) in the framework of a bottom-up holographic model, where the global $SU(4)$ symmetry breaks to an $SO(4)$ subgroup. In the holographic model, we find that a 125 GeV composite Higgs particle and a small Peskin-Takeuchi $S$ parameter can be achieved simultaneously. In addition, the model predicts a large number of particles at the TeV scale, including the dark matter candidate Technicolor Interacting Massive Particles (TIMPs). If we consider a dark matter nuclear spin-independent cross-section in the range of $10^{-45}\sim 10^{-48}$ cm$^2$, which can be detected by future experiments, the mass range of TIMPs predicted by the holographic technicolor model is 2 $\sim$ 4 TeV.
high energy physics phenomenology
We consider the problem of multiple hypothesis testing when there is a logical nested structure to the hypotheses. When one hypothesis is nested inside another, the outer hypothesis must be false if the inner hypothesis is false. We model the nested structure as a directed acyclic graph, including chain and tree graphs as special cases. Each node in the graph is a hypothesis and rejecting a node requires also rejecting all of its ancestors. We propose a general framework for adjusting node-level test statistics using the known logical constraints. Within this framework, we study a smoothing procedure that combines each node with all of its descendants to form a more powerful statistic. We prove a broad class of smoothing strategies can be used with existing selection procedures to control the familywise error rate, false discovery exceedance rate, or false discovery rate, so long as the original test statistics are independent under the null. When the null statistics are not independent but are derived from positively-correlated normal observations, we prove control for all three error rates when the smoothing method is arithmetic averaging of the observations. Simulations and an application to a real biology dataset demonstrate that smoothing leads to substantial power gains.
statistics
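A small sketch of the smoothing step for arithmetic averaging: each node's z-statistic is averaged with those of all of its descendants in the DAG, converted to a p-value assuming independent standard normal nulls, and passed to a standard selection rule (Benjamini-Hochberg here). The graph and statistics are toy values, and the ancestor-rejection constraint discussed above is omitted for brevity.

```python
import numpy as np
from scipy.stats import norm

def descendants(dag, node, memo=None):
    """dag: dict node -> list of children (a DAG); returns the set of all descendants."""
    memo = {} if memo is None else memo
    if node not in memo:
        out = set()
        for child in dag.get(node, []):
            out |= {child} | descendants(dag, child, memo)
        memo[node] = out
    return memo[node]

def smoothed_pvalue(dag, z, node):
    # arithmetic averaging of the node with all of its descendants;
    # under independent N(0,1) nulls the average of m statistics has sd 1/sqrt(m)
    group = [node] + sorted(descendants(dag, node))
    zbar = np.mean([z[u] for u in group])
    return norm.sf(zbar * np.sqrt(len(group)))

def benjamini_hochberg(pvals, alpha=0.1):
    order = np.argsort(pvals)
    thresh = alpha * np.arange(1, len(pvals) + 1) / len(pvals)
    below = np.nonzero(np.sort(pvals) <= thresh)[0]
    k = below.max() + 1 if below.size else 0
    return set(order[:k])

# toy DAG: rejecting "root" is only meaningful if its subtree carries signal
dag = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
z = {"root": 0.5, "a": 1.8, "a1": 2.5, "a2": 2.2, "b": -0.3}   # one-sided z-scores
nodes = list(z)
p = np.array([smoothed_pvalue(dag, z, v) for v in nodes])
rejected = {nodes[i] for i in benjamini_hochberg(p)}
print(dict(zip(nodes, p.round(4))), "->", rejected)
```

In this toy example the root's own statistic is unremarkable, but averaging it with its strongly signalling descendants pulls its smoothed p-value down, which is the power gain the abstract describes.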
The entanglement spectrum (ES) provides a barometer of quantum entanglement and encodes physical information beyond that contained in the entanglement entropy. In this paper, we explore the ES of stabilizer codes, which furnish exactly solvable models for a plethora of gapped quantum phases of matter. Studying the ES for stabilizer Hamiltonians in the presence of arbitrary weak local perturbations thus allows us to develop a general framework within which the entanglement features of gapped topological phases can be computed and contrasted. In particular, we study models harboring fracton order, both type-I and type-II, and compare the resulting ES with that of both conventional topological order and of (strong) subsystem symmetry protected topological (SSPT) states. We find that non-local surface stabilizers (NLSS), a set of symmetries of the Hamiltonian which form on the boundary of the entanglement cut, act as purveyors of universal non-local features appearing in the entanglement spectrum. While in conventional topological orders and fracton orders, the NLSS retain a form of topological invariance with respect to the entanglement cut, subsystem symmetric systems---fracton and SSPT phases---additionally show a non-trivial geometric dependence on the entanglement cut, corresponding to the subsystem symmetry. This sheds further light on the interplay between geometric and topological effects in fracton phases of matter and demonstrates that strong SSPT phases harbour a measure of quasi-local entanglement beyond that encountered in conventional SPT phases. We further show that a version of the edge-entanglement correspondence, established earlier for gapped two-dimensional topological phases, also holds for gapped three-dimensional fracton models.
quantum physics
We develop a formalism with two different UV cutoff scales, one for space and one for time, appropriate for the richer structure of non-Lorentz invariant quantum field theories. In this formalism there are two different beta-functions for each coupling constant, arising from independent variations of the energy or momentum cutoffs. For holographic non-relativistic theories with rotational invariance, we develop the technique to calculate such beta-functions using a generalization of the superpotential formalism developed in [JHEP ${\bf 1208}$, 164 (2012)]. We then proceed and compute the beta-function around a Lifshitz critical point, as well as for general Lifshitz-invariant theories with hyperscaling violation. Finally, we do a similar computation in a weakly-coupled Lifshitz invariant QFT.
high energy physics theory
The hadronic two-body weak decays of the doubly charmed baryons $\Xi_{cc}^{++}, \Xi_{cc}^+$ and $\Omega_{cc}^+$ are studied in this work. To estimate the nonfactorizable contributions, we work in the pole model for the $P$-wave amplitudes and current algebra for $S$-wave ones. For the $\Xi_{cc}^{++}\to \Xi_c^+\pi^+$ mode, we find a large destructive interference between factorizable and nonfactorizable contributions for both $S$- and $P$-wave amplitudes. Our prediction of $\sim 0.70\%$ for its branching fraction is smaller than the earlier estimates in which nonfactorizable effects were not considered, but agrees nicely with the result based on an entirely different approach, namely, the covariant confined quark model. On the contrary, a large constructive interference was found in the $P$-wave amplitude by Dhir and Sharma, leading to a branching fraction of order $(7-16)\%$. Using the current results for the absolute branching fractions of $(\Lambda_c^+,\Xi_c^+)\to p K^-\pi^+$ and the LHCb measurement of $\Xi_{cc}^{++}\to\Xi_c^+\pi^+$ relative to $\Xi_{cc}^{++}\to\Lambda_c^+ K^- \pi^+\pi^+$, we obtain $\B(\Xi_{cc}^{++}\to\Xi_c^+\pi^+)_{\rm expt}\approx (1.83\pm1.01)\%$ after employing the latest prediction of $\B(\Xi_{cc}^{++}\to\Sigma_c^{++}\overline{K}^{*0})$. Our prediction of $\mathcal{B}(\Xi_{cc}^{++}\to\Xi_c^+\pi^+)\approx 0.7\%$ is thus consistent with the experimental value but in the lower end. It is important to pin down the branching fraction of this mode in future study. Factorizable and nonfactorizable $S$-wave amplitudes interfere constructively in $\Xi_{cc}^+\to\Xi_c^0\pi^+$. Its large branching fraction of order 4\% may enable experimentalists to search for the $\Xi_{cc}^+$ through this mode. That is, the $\Xi_{cc}^+$ is reconstructed through the $\Xi_{cc}^+\to\Xi_c^0\pi^+$ followed by the decay chain $\Xi_c^0\to \Xi^-\pi^+\to p\pi^-\pi^-\pi^+$.
high energy physics phenomenology
The Finiteness Problem is shown to be unsolvable for any sufficiently large class of modular lattices.
mathematics
Let $g$ be a \emph{Hermitian }$l2$\emph{-structure} on a nonempty set $V$, that is a map from $E_{2}(V)=\{(x,y):x\neq y\in V\}$ to the \emph{complex} field $\mathbb{C}$ such that $g(u,v)=\overline{g(v,u)}$ for $u\neq v\in V$. With respect to an ordering $v_{1},v_{2},\ldots,v_{n}$\ of $V$, the \emph{adjacency matrix} of $g$ is the $n\times n$ Hermitian matrix $M=[m_{ij}]_{1\leq i,j\leq n}$ in which $m_{ij}=0$ if $i=j$ and $m_{ij}=g(v_{i},v_{j})$ otherwise. The \emph{characteristic polynomial} of $g$ is defined as the characteristic polynomial of its adjacency matrix. We say that $g$ is $k$-\emph{spectrally monomorphic} if all its substructures with $k$ vertices have the same characteristic polynomial. In this work, we characterize the class of $k$-spectrally monomorphic Hermitian $l2$-structures of order $n$, for $k=3,\ldots,n-3$.
mathematics
Electron-phonon coupling, diagonal in a real-space formulation, leads to the polaron paradigm of smoothly varying properties. However, fundamental changes, namely the singular behavior of polarons, occur if non-diagonal pairing is taken into consideration. The study of polaron transformations and related properties of matter is of particular interest for realistic models, since the competition between diagonal and non-diagonal electron-phonon contributions in the presence of other strong interactions can result in unconventional behavior of the system. Here we consider the multiband pd-model of cuprate superconductors with electron-phonon interaction and analyze the features of the system that are caused by the competition of diagonal and non-diagonal electron-phonon contributions in the limit of strong electron correlations. Using the polaronic version of the generalized tight-binding method, we describe the evolution of the band structure, Fermi surface, density of states at the Fermi level, and phonon spectral function in the space of electron-phonon parameters, ranging from weak to strong coupling strength in the adiabatic limit. On the phase diagram of polaron properties we reveal two quantum phase transitions and show how the electron-phonon interaction gives rise to a Fermi surface transformation (i) from hole pockets to Fermi arcs and (ii) from hole to electron type of conductivity. We also demonstrate the emergence of new states in the phonon spectral function of the polaron and discuss their origin.
condensed matter
This paper presents a games-in-games approach to provide design guidelines for mosaic command and control that enables secure and resilient multi-domain operations. Under the mosaic design, pieces or agents in the network are equipped with flexible interoperability and the capability of self-adaptability, self-healing, and resiliency, so that they can reconfigure their responses to achieve the global mission in spite of failures of nodes and links in the adversarial environment. The proposed games-in-games approach provides a system-of-systems science for the mosaic distributed design of large-scale systems. Specifically, the framework integrates three layers of design for each agent, including the strategic, tactical, and mission layers. Each layer in the established model corresponds to a game of a different scale that enables the integration of threat models and the achievement of self-mitigation and resilience capabilities. The solution concept of the developed multi-layer multi-scale mosaic design is characterized by a Gestalt Nash equilibrium (GNE), which considers the interactions between agents across different layers. The developed approach is applicable to modern battlefield networks, which are composed of heterogeneous assets that access highly diverse and dynamic information sources over multiple domains. By leveraging mosaic design principles, we can achieve the desired operational goals of deployed networks in a case study and ensure connectivity among entities for the exchange of information to accomplish the mission.
computer science
We present a sample of 151 massive ($M_* > 10^{10}\mathrm{M_\odot}$) quiescent galaxies at $2 < z < 5$, based on a sophisticated Bayesian spectral energy distribution fitting analysis of the CANDELS UDS and GOODS-South fields. Our sample includes a robust sub-sample of 61 objects for which we confidently exclude low-redshift and star-forming solutions. We identify 10 robust objects at $z>3$, of which 2 are at $z>4$. We report formation redshifts, demonstrating that the oldest objects formed at $z > 6$; however, individual ages from our photometric data have significant uncertainties, typically $\sim0.5$ Gyr. We demonstrate that the UVJ colours of the quiescent population evolve with redshift at $z>3$, becoming bluer and more similar to post-starburst galaxies at lower redshift. Based upon this, we construct a model for the time-evolution of quiescent galaxy UVJ colours, concluding that the oldest objects are consistent with forming the bulk of their stellar mass at $z\sim6-7$ and quenching at $z\sim5$. We report spectroscopic redshifts for two of our objects at $z=3.440$ and $3.396$, which exhibit extremely weak Ly$\alpha$ emission in ultra-deep VANDELS spectra. We calculate star-formation rates based on these line fluxes, finding that these galaxies are consistent with our quiescent selection criteria, provided their Ly$\alpha$ escape fractions are $>3$ and $>10$ per cent, respectively. We finally report that our highest-redshift robust object exhibits a continuum break at $\lambda\sim7000$A in a spectrum from VUDS, consistent with our photometric redshift of $z_\mathrm{phot}=4.72^{+0.06}_{-0.04}$. If confirmed as quiescent, this object would be the highest-redshift known quiescent galaxy. To obtain stronger constraints on the times of the earliest quenching events, high-SNR spectroscopy must be extended to $z\gtrsim3$ quiescent objects.
astrophysics
The ALICE muon trigger (MTR) system consists of 72 Resistive Plate Chamber (RPC) detectors arranged in two stations, each composed of two planes with 18 RPCs per plane. The detectors are operated in maxi-avalanche mode using a mixture of 89.7% C$_2$H$_2$F$_4$, 10% i-C$_4$H$_{10}$ and 0.3% SF$_6$. A number of detector performance indicators, such as efficiency and dark current, have been monitored over time throughout the LHC Run2 (2015-18). While the efficiency showed very good stability, a steady increase in the absorbed dark current was observed. Since the end of 2018, the LHC has entered a phase of long shutdown, during which the ALICE experiment will be upgraded to cope with the next phase of data taking, expected in 2021. The MTR is undergoing a major upgrade of the front-end and readout electronics, and will change its functionalities, becoming a Muon Identifier. Only the replacement of the most irradiated RPCs is planned during the upgrade. It is therefore important to perform dedicated studies to gain further insights into the status of the detector. In particular, two RPCs were flushed with pure Ar gas for a prolonged period of time and a plasma was created by fully ionizing the gas. The output gas was analyzed using a Gas Chromatograph combined with a Mass Spectrometer and the possible presence of fluorinated compounds originating from the interaction of the plasma with the inner surfaces of the detector has been assessed using an Ion-Selective Electrode station. This contribution will include a detailed review of the ALICE muon RPC performance at the LHC. The procedure and results of the argon plasma test, described above, are also discussed.
physics
We develop a stochastic theory that treats time-dependent exciton-exciton s-wave scattering and that accounts for dynamic Coulomb screening, which we describe within a mean-field limit. With this theory, we model excitation-induced dephasing effects on time-resolved two-dimensional coherent optical lineshapes, and we identify a number of features that can be attributed to the many-body dynamics occurring in the background of the exciton, including dynamic line narrowing, mixing of real and imaginary spectral components, and multi-quantum states. We test the model by means of multidimensional coherent spectroscopy on a two-dimensional metal-halide semiconductor that hosts tightly bound excitons and biexcitons that feature strong polaronic character. We find that the exciton nonlinear coherent lineshape reflects many-body correlations that give rise to excitation-induced dephasing. Furthermore, we observe that the exciton lineshape evolves with population time over time windows in which the population itself is static, in a manner that reveals the evolution of the multi-exciton many-body couplings. Specifically, the dephasing dynamics slow down with time, at a rate that is governed by the strength of exciton many-body interactions and by the dynamic Coulomb screening potential. The real part of the coherent optical lineshape displays strong dispersive character at zero time, which transforms to an absorptive lineshape on the dissipation timescale of excitation-induced dephasing effects, while the imaginary part displays converse behavior. Our microscopic theoretical approach is sufficiently flexible to allow for a wide exploration of how system-bath dynamics contribute to linear and non-linear time-resolved spectral behavior.
physics
We construct an unbounded representative for the shriek class associated to the embeddings of spheres into Euclidean space. We equip this unbounded Kasparov cycle with a connection and compute the unbounded Kasparov product with the Dirac operator on $\mathbb R^{n+1}$. We find that the resulting spectral triple for the algebra $C(\mathbb S^n)$ differs from the Dirac operator on the round sphere by a so-called index cycle, whose class in $KK_0(\mathbb C, \mathbb C)$ represents the multiplicative unit. At all points we check that our construction involving the unbounded Kasparov product is compatible with the bounded Kasparov product using Kucerovsky's criterion and we thus capture the composition law for the shriek map for these immersions at the unbounded KK-theoretical level.
mathematics
We report the superconducting properties of new hexagonal Nb$_{10+2x}$Mo$_{35-x}$Ru$_{35-x}$Rh$_{10}$Pd$_{10}$ high-entropy alloys (HEAs) ($0 \leq x \leq 5$). With increasing $x$, the superconducting transition temperature $T_{\rm c}$ shows a maximum of 6.19 K at $x = 2.5$, while the zero-temperature upper critical field $B_{\rm c2}(0)$ increases monotonically, reaching 8.3 T at $x = 5$. For all $x$ values, the specific heat jump deviates from the Bardeen-Cooper-Schrieffer behavior. In addition, we show that $T_{\rm c}$ of these HEAs is not determined mainly by the density of states at the Fermi level and would be enhanced by lowering the valence electron concentration.
condensed matter
This paper concerns a number of diagram categories, namely the partition, planar partition, Brauer, partial Brauer, Motzkin and Temperley-Lieb categories. If $\mathcal K$ denotes any of these categories, and if $\sigma\in\mathcal K_{nm}$ is a fixed morphism, then an associative operation $\star_\sigma$ may be defined on $\mathcal K_{mn}$ by $\alpha\star_\sigma\beta=\alpha\sigma\beta$. The resulting semigroup $\mathcal K_{mn}^\sigma=(\mathcal K_{mn},\star_\sigma)$ is called a sandwich semigroup. We conduct a thorough investigation of these sandwich semigroups, with an emphasis on structural and combinatorial properties such as Green's relations and preorders, regularity, stability, mid-identities, ideal structure, (products of) idempotents, and minimal generation. It turns out that the Brauer category has many remarkable properties not shared by any of the other diagram categories we study. Because of these unique properties, we may completely classify isomorphism classes of sandwich semigroups in the Brauer category, calculate the rank (smallest size of a generating set) of an arbitrary sandwich semigroup, enumerate Green's classes and idempotents, and calculate ranks (and idempotent ranks, where appropriate) of the regular subsemigroup and its ideals, as well as the idempotent-generated subsemigroup. Several illustrative examples are considered throughout, partly to demonstrate the sometimes-subtle differences between the various diagram categories.
mathematics
The string vertices of closed string field theory are subsets of the moduli spaces of punctured Riemann surfaces that satisfy a geometric version of the Batalin-Vilkovisky master equation. We present a homological proof of existence of string vertices and their uniqueness up to canonical transformations. Using hyperbolic metrics on surfaces with geodesic boundaries we give an exact construction of string vertices as sets of surfaces with systole greater than or equal to $L$ with $L\leq 2\, \hbox{arcsinh}\, 1$. Intrinsic hyperbolic collars prevent the appearance of short geodesics upon sewing. The surfaces generated by Feynman diagrams are naturally endowed with Thurston metrics: hyperbolic on the vertices and flat on the propagators. For the classical theory the length $L$ is arbitrary and, as $L\to \infty$ hyperbolic vertices become the minimal-area vertices of closed string theory.
high energy physics theory
Forest fires are the outcome of a complex interaction between environmental factors, topography and socioeconomic factors (Bedia et al., 2014). Therefore, understanding causality and enabling early prediction are crucial for controlling this phenomenon and saving lives. The aim of this study is to build a spatio-temporal model to understand the causality of forest fires in Europe, at the NUTS2 level between 2012 and 2016, using environmental and socioeconomic variables. We have considered a disease mapping approach, commonly used in small area studies, to assess the spatial pattern and to identify areas characterised by unusually high or low relative risk.
statistics
Density waves in Saturn's rings are usually tightly wrapped spiral patterns generated by resonances with either Saturn's moons or structures inside the planet. However, between the Barnard and Bessel Gaps in the Cassini Division (i.e. between 120,240 and 120,300 km from Saturn's spin axis), there are density variations that appear to form an axisymmetric density wave consisting of concentric zones of varying densities that propagate radially through the rings. Axisymmetric waves cannot be generated directly by a satellite resonance, but instead appear to be excited by interference between a nearby satellite resonance and normal mode oscillations on the inner edge of the Barnard Gap. Similar axisymmetric waves may exist just interior to other resonantly confined edges that exhibit a large number of normal modes, including the Dawes ringlet in the outer C ring and the outermost part of the B ring.
astrophysics
`In-memory computing' is being widely explored as a novel computing paradigm to mitigate the well known memory bottleneck. This emerging paradigm aims at embedding some aspects of computations inside the memory array, thereby avoiding frequent and expensive movement of data between the compute unit and the storage memory. In-memory computing with respect to Silicon memories has been widely explored on various memory bit-cells. Embedding computation inside the 6 transistor (6T) SRAM array is of special interest since it is the most widely used on-chip memory. In this paper, we present a novel in-memory multiplication followed by accumulation operation capable of performing parallel dot products within 6T SRAM without any changes to the standard bitcell. We, further, study the effect of circuit non-idealities and process variations on the accuracy of the LeNet-5 and VGG neural network architectures against the MNIST and CIFAR-10 datasets, respectively. The proposed in-memory dot-product mechanism achieves 88.8% and 99% accuracy for the CIFAR-10 and MNIST, respectively. Compared to the standard von Neumann system, the proposed system is 6.24x better in energy consumption and 9.42x better in delay.
computer science
A mixed quantum state is represented by a Hermitian positive semi-definite operator $\rho$ with unit trace. The positivity requirement is responsible for a highly nontrivial geometry of the set of quantum states. A known way to satisfy this requirement automatically is to use the map $\rho=\tau^2 / \mathrm {tr} \, \tau^2$, where $\tau$ can be an arbitrary Hermitian operator. We elaborate a parametrization of the set of quantum states induced by the parametrization of the linear space of Hermitian operators by virtue of this map. In particular, we derive an equation for the boundary of the set. Further, we discuss how this parametrization can be applied to a set of quantum states constrained by some symmetry, or, more generally, some linear condition. As an example, we consider the parametrization of sets of Werner states of qubits.
quantum physics
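The parametrization in the abstract above is easy to exercise numerically. The following minimal NumPy sketch (purely illustrative, not taken from the paper) draws an arbitrary Hermitian matrix $\tau$, applies the map $\rho=\tau^2/\mathrm{tr}\,\tau^2$, and checks that the result is a unit-trace, positive semi-definite density matrix.

import numpy as np

def random_hermitian(d, rng):
    # Arbitrary complex matrix, symmetrized into a Hermitian operator tau.
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def density_from_tau(tau):
    # The map rho = tau^2 / tr(tau^2) automatically yields a positive
    # semi-definite, unit-trace operator for any nonzero Hermitian tau.
    tau2 = tau @ tau
    return tau2 / np.trace(tau2).real

rng = np.random.default_rng(0)
tau = random_hermitian(3, rng)
rho = density_from_tau(tau)

eigvals = np.linalg.eigvalsh(rho)
print("trace:", np.trace(rho).real)      # ~1 by construction
print("min eigenvalue:", eigvals.min())  # >= 0, so rho is a valid state

Because positivity is automatic for any Hermitian $\tau$, the whole linear space of Hermitian operators serves as an unconstrained parameter space, which is the point the abstract exploits.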
In a regular full exponential family, the maximum likelihood estimator (MLE) need not exist in the traditional sense. However, the MLE may exist in the completion of the exponential family. Existing algorithms for finding the MLE in the completion solve many linear programs; they are slow in small problems and too slow for large problems. We provide new, fast, and scalable methodology for finding the MLE in the completion of the exponential family. This methodology is based on conventional maximum likelihood computations which come close, in a sense, to finding the MLE in the completion of the exponential family. These conventional computations construct a likelihood maximizing sequence of canonical parameter values which goes uphill on the likelihood function until it meets a convergence criterion. Nonexistence of the MLE in this context results from a degeneracy of the canonical statistic of the exponential family: the canonical statistic is on the boundary of its support. There is a correspondence between this boundary and the null eigenvectors of the Fisher information matrix. Convergence of Fisher information along a likelihood maximizing sequence follows from cumulant generating function (CGF) convergence along a likelihood maximizing sequence, conditions for which are given. This allows for the construction of necessarily one-sided confidence intervals for mean value parameters when the MLE exists in the completion. We demonstrate our methodology on three examples in the main text and three additional examples in the Appendix. We show that when the MLE exists in the completion of the exponential family, our methodology provides statistical inference that is much faster than existing techniques.
mathematics
In this thesis we will work under the premises of the Cellular Automata Interpretation of QM, by Gerard 't Hooft, according to whom particles evolve following the rules of Cellular Automata (CA), a mathematical model consisting of discrete units that evolve following deterministic laws in discrete space and time. The states of a Cellular Automaton are, by definition, classical and thus deterministic and do not form superpositions. Since it is not known at present how to demonstrate the underlying classical deterministic structure and dynamics at the smallest microscopic scales, what we pursue in this thesis, besides summarizing the concept of the Cellular Automaton Interpretation, is to show that quantum phenomena, in particular superposition states, can arise in a deterministic model because of the limited precision of measurements. In order to do that, we follow the path of a recent article by Elze, considering a triplet of Ising spins and their dynamics in a CA context, which can be formalized using Pauli matrices and quantum-mechanical operators. We will thus observe how the system shifts from a triplet of Ising spins to a triplet of qubits due to the emergence of superpositions after applying perturbations to the Hamiltonian and the dynamics operators.
quantum physics
Small adversarial perturbations of input data are able to drastically change performance of machine learning systems, thereby challenging the validity of such systems. We present the very first end-to-end adversarial attacks on a music instrument classification system allowing to add perturbations directly to audio waveforms instead of spectrograms. Our attacks are able to reduce the accuracy close to a random baseline while at the same time keeping perturbations almost imperceptible and producing misclassifications to any desired instrument.
electrical engineering and systems science
Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance. To counteract this, modern estimators either introduce bias, rely on multiple function evaluations, or use learned, input-dependent baselines. Thus, there is a need for estimators that require minimal tuning, are computationally cheap, and have low mean squared error. In this paper, we show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization without increasing the number of function evaluations. This provably reduces the mean squared error. We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
statistics
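For readers unfamiliar with the estimator being Rao-Blackwellized in the abstract above, here is a hedged NumPy sketch (not the authors' code): it computes the straight-through Gumbel-Softmax surrogate gradient for a simple linear objective f(z) = c.z, and then averages that surrogate over Gumbel draws sharing the same discrete outcome, one way to realize the conditional averaging that Rao-Blackwellization refers to; the conditioning is done here by naive rejection purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(size=4)            # logits of the categorical distribution
c = rng.normal(size=4)                # objective f(z) = c . z, z one-hot
tau = 0.5                             # softmax temperature

def gumbel(shape):
    return -np.log(-np.log(rng.uniform(size=shape)))

def st_surrogate_grad(theta, g):
    # Straight-through surrogate: the forward pass uses the hard one-hot sample,
    # the backward pass differentiates the tempered softmax s = softmax((theta+g)/tau).
    y = (theta + g) / tau
    s = np.exp(y - y.max())
    s /= s.sum()
    # d f(s) / d theta for f(s) = c . s, via the softmax Jacobian s_i (delta_ij - s_j).
    return (s * (c - c @ s)) / tau, int(np.argmax(theta + g))

# One straight-through sample ...
grad, k = st_surrogate_grad(theta, gumbel(4))

# ... and a Rao-Blackwellized version: average the surrogate gradient over
# Gumbel draws that lead to the SAME discrete outcome k.
grads = []
while len(grads) < 50:
    gr, kk = st_surrogate_grad(theta, gumbel(4))
    if kk == k:
        grads.append(gr)

print("single-sample grad:        ", grad)
print("conditionally averaged grad:", np.mean(grads, axis=0))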
A graph $G$ is a cocomparability graph if there exists an acyclic transitive orientation of the edges of its complement graph $\overline{G}$. LBFS$^{+}$ is a variant of the generic Lexicographic Breadth First Search (LBFS), which uses a specific tie-breaking mechanism. Starting with some ordering $\sigma_{0}$ of $G$, let $\{\sigma_{i}\}_{i\geq 1}$ be the sequence of orderings such that $\sigma_{i}=$LBFS$^{+}(G, \sigma_{i-1})$. LexCycle($G$) is defined as the maximum length of a cycle of vertex orderings of $G$ obtained via such a sequence of LBFS$^{+}$ sweeps. Dusart and Habib conjectured in 2017 that LexCycle($G$)=2 if $G$ is a cocomparability graph and proved that it holds for interval graphs. In this paper, we show that LexCycle($G$)=2 if $G$ is a $\overline{P_{2}\cup P_{3}}$-free cocomparability graph, where a $\overline{P_{2}\cup P_{3}}$ is the graph whose complement is the disjoint union of $P_{2}$ and $P_{3}$. As corollaries, the result applies to diamond-free cocomparability graphs, cocomparability graphs with girth at least 4, and interval graphs.
mathematics
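Since LBFS$^{+}$ is the engine behind the LexCycle statistic above, a small didactic Python version may help. This is an O(n^2) formulation with explicit lexicographic labels, not the linear-time partition-refinement implementation, and the example graph is an arbitrary illustrative choice.

def lbfs_plus(adj, sigma_prev):
    # adj: dict mapping each vertex to the set of its neighbours.
    # sigma_prev: the previous ordering (a list containing every vertex).
    n = len(adj)
    pos = {v: i for i, v in enumerate(sigma_prev)}   # position in sigma_prev
    label = {v: [] for v in adj}                     # lexicographic labels
    unvisited = set(adj)
    order = []
    for step in range(n):
        # Lexicographically largest label; ties broken by the LATEST
        # position in the previous sweep (the "+" rule).
        v = max(unvisited, key=lambda u: (label[u], pos[u]))
        order.append(v)
        unvisited.discard(v)
        for w in adj[v]:
            if w in unvisited:
                label[w].append(n - step)            # earlier visits weigh more
    return order

# Example: iterate LBFS+ sweeps on a small interval graph (a path) and
# watch the orderings settle into a short cycle.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
sigma = [1, 2, 3, 4]
for _ in range(4):
    sigma = lbfs_plus(adj, sigma)
    print(sigma)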
We study ridge correlations of the glasma in pp collisions at $\sqrt{s_{\mathrm{NN}}}=7$ TeV by using the color glass condensate (CGC) formalism. The azimuthal collimation at long-range rapidity is intrinsic to glasma dynamics and is reproduced here. When the rapidity window is enlarged, ridge correlations in the two-dimensional $\Delta y$-$\Delta\phi$ distribution and in the one-dimensional $\Delta\phi$ distribution at a long-range rapidity gap are enhanced. The enhancements are demonstrated to be the contributions of source gluons. The quantum evolution of the gluons presents unique correlation patterns in the differential correlation function. These characteristics of two-gluon correlations open a way of testing the production mechanism with experimental measurements.
high energy physics phenomenology
Let $A$ be a set and $V$ a real Hilbert space. Let $H$ be a real Hilbert space of functions $f:A\to V$ and assume $H$ is continuously embedded in the Banach space of bounded functions. For $i=1,\cdots,n$, let $(x_i,y_i)\in A\times V$ comprise our dataset. Let $0<q<1$ and $f^*\in H$ be the unique global minimizer of the functional \begin{equation*} u(f) = \frac{q}{2}\Vert f\Vert_{H}^{2} + \frac{1-q}{2n}\sum_{i=1}^{n}\Vert f(x_i)-y_i\Vert_{V}^{2}. \end{equation*} For $x\in A$ and $v\in V$ let $\Phi(x,v)\in H$ be the unique element such that $(\Phi(x,v),f)_{H}=(f(x),v)_{V}$ for all $f\in H$. In this paper we show that for each $k\in\mathbb{N}$, $k\geq 2$ one has a random function $F_{k}\in H$ with the structure \begin{equation*} F_{k} = \sum_{h=1}^{N_k} \Lambda_{k, h} \Phi(x_{I_h}, \mathcal{E}_{h}) \end{equation*} (where $0\leq N_k\leq k-1$ are Binomially distributed with success probability $1-q$, $\Lambda_{k, h}\in\mathbb{R}$ are random coefficients, $1\leq I_{h}\leq n$ are independent and uniformly distributed and $\mathcal{E}_{h}\in V$ are random vectors) such that asymptotically for large $k$ we have \begin{equation*} E\left[ \Vert F_{k}-f^*\Vert_{H}^{2} \right] = O(\frac{1}{k}). \end{equation*} Thus we achieve the Monte Carlo type error estimate with no metric or measurability structure on $A$, possibly infinite dimensional $V$ and the ingredients of approximating functions are just the Riesz representatives $\Phi(x,v)\in H$. We obtain this result by considering the stochastic gradient descent sequence in the Hilbert space $H$ to minimize the functional $u$.
mathematics
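The functional u(f) above is an ordinary regularized least-squares objective in a reproducing kernel Hilbert space, so a concrete feel for it can be had with plain kernel-coefficient stochastic gradient descent. The sketch below is only loosely related to the randomized construction F_k analyzed in the abstract: it assumes scalar outputs, a Gaussian kernel on A = [0,1], and a standard 1/(qk) step size, and merely checks that the stochastic iterate approaches the exact minimizer.

import numpy as np

rng = np.random.default_rng(2)
n, q = 40, 0.1
x = rng.uniform(size=n)                       # data sites in A = [0, 1]
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.1**2))   # Gaussian kernel matrix

# Stochastic gradient descent on
#   u(f) = q/2 ||f||_H^2 + (1-q)/(2n) sum_i |f(x_i) - y_i|^2,
# with f_k = sum_j a[j] K(x_j, .), starting from f_0 = 0, so every iterate
# stays in the span of the representers Phi(x_i, .) = K(x_i, .).
a = np.zeros(n)
for k in range(1, 20001):
    i = rng.integers(n)                       # uniform data index
    eta = 1.0 / (q * (k + 1))                 # decaying step size
    f_xi = K[i] @ a                           # current value f_k(x_i)
    # Unbiased stochastic gradient: q f_k + (1-q)(f_k(x_i) - y_i) K(x_i, .)
    a *= (1.0 - eta * q)
    a[i] -= eta * (1.0 - q) * (f_xi - y[i])

# The exact minimizer satisfies (q n/(1-q) I + K) a* = y.
a_star = np.linalg.solve(q * n / (1.0 - q) * np.eye(n) + K, y)
print("H-norm-squared gap:", float((a - a_star) @ K @ (a - a_star)))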
The accuracy of quantum dynamics simulation is usually measured by the error of the unitary evolution operator in the operator norm, which in turn depends on certain norm of the Hamiltonian. For unbounded operators, after suitable discretization, the norm of the Hamiltonian can be very large, which significantly increases the simulation cost. However, the operator norm measures the worst-case error of the quantum simulation, while practical simulation concerns the error with respect to a given initial vector at hand. We demonstrate that under suitable assumptions of the Hamiltonian and the initial vector, if the error is measured in terms of the vector norm, the computational cost may not increase at all as the norm of the Hamiltonian increases using Trotter type methods. In this sense, our result outperforms all previous error bounds in the quantum simulation literature. Our result extends that of [Jahnke, Lubich, BIT Numer. Math. 2000] to the time-dependent setting. We also clarify the existence and the importance of commutator scalings of Trotter and generalized Trotter methods for time-dependent Hamiltonian simulations.
quantum physics
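The distinction drawn above between operator-norm and vector-norm simulation error is easy to probe numerically. The following small NumPy/SciPy experiment is illustrative only; the Hamiltonian, step count, and the choice of the ground state as the "low-energy" initial vector are assumptions, not the paper's setting. It compares the two error measures for a first-order Trotter formula.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
d, t, r = 16, 1.0, 10                       # dimension, evolution time, Trotter steps

def rand_herm(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

A, B = rand_herm(d), rand_herm(d)
H = A + B

U_exact = expm(-1j * H * t)
# First-order Lie-Trotter step repeated r times: (e^{-iA t/r} e^{-iB t/r})^r.
step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
U_trot = np.linalg.matrix_power(step, r)

# Worst-case (operator-norm) error over all input states ...
op_err = np.linalg.norm(U_exact - U_trot, 2)

# ... versus the error for one concrete initial vector, here the ground state
# of H, standing in for the low-energy initial vector of the abstract.
w, v = np.linalg.eigh(H)
psi = v[:, 0]
vec_err = np.linalg.norm((U_exact - U_trot) @ psi)

print("operator-norm error:              ", op_err)
print("vector-norm error on ground state:", vec_err)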
We propose an entirely redesigned framework of bandlimited signal reconstruction for the time encoding machine (TEM) introduced by Lazar and T\'oth. As the encoding part of TEM consists in obtaining integral values of a bandlimited input over known time intervals, it theoretically amounts to applying a known linear operator on the input. We then approach the general question of signal reconstruction by pseudo-inversion of this operator. We perform this task numerically and iteratively using projections onto convex sets (POCS). The algorithm can be implemented exactly in discrete time with multiplications that are all reduced to scaling by signed powers of two, thanks to the use of relaxation coefficients. Meanwhile, the algorithm achieves a rate of convergence similar to that of Lazar and T\'oth. For real-time processing, we propose an approximate time-varying FIR implementation, which avoids the splitting of the input into blocks. We finally propose some preliminary analysis of semi-convergence of the algorithm under data noise.
electrical engineering and systems science
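A discrete-time toy version of the POCS reconstruction described above can be written in a few lines. In the sketch below the details are assumed for illustration (a periodic bandlimited test signal on a uniform grid, equal-length integration windows standing in for the time-encoding intervals, unrelaxed alternating projections, and none of the power-of-two scaling or FIR machinery of the paper): it alternates the orthogonal projection onto the set of signals consistent with the window integrals with the projection onto the bandlimited subspace.

import numpy as np

rng = np.random.default_rng(4)
N, bw = 512, 12                                  # grid size, one-sided bandwidth (bins)

# Bandlimited periodic test signal on a fine grid.
spec = np.zeros(N, complex)
spec[1:bw + 1] = rng.normal(size=bw) + 1j * rng.normal(size=bw)
spec[-bw:] = spec[1:bw + 1][::-1].conj()
x = np.fft.ifft(spec).real

# Time-encoding-like measurements: sums of x over consecutive windows of m samples.
m = 8
edges = np.arange(0, N + 1, m)
meas = np.add.reduceat(x, edges[:-1])

def project_bandlimited(f):
    F = np.fft.fft(f)
    F[bw + 1:N - bw] = 0.0                       # keep only the low-frequency bins
    return np.fft.ifft(F).real

def project_measurements(f):
    # Orthogonal projection onto {f : sum of f over each window = measured value}:
    # distribute each window's residual equally over its samples.
    g = f.copy()
    for k in range(len(meas)):
        sl = slice(edges[k], edges[k + 1])
        g[sl] += (meas[k] - g[sl].sum()) / m
    return g

f = np.zeros(N)
for _ in range(200):
    f = project_bandlimited(project_measurements(f))

print("relative reconstruction error:", np.linalg.norm(f - x) / np.linalg.norm(x))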
We present a method to control collisions between ultracold neutral atoms in the electronic ground state and trapped ions. During the collision, the neutral atom is resonantly excited by a laser to a low-field-seeking Rydberg state, which is repelled by the ion. As the atom is reflected from the ion, it is de-excited back into its electronic ground level. The efficiency of shielding is analyzed as a function of laser frequency and power, initial atom-ion collision energy, and collision angle. The suitability of several Rydberg levels of Na and Rb for shielding is discussed. Useful applications of shielding include the suppression of unwanted chemical reactions between atoms and ions, a prerequisite for controlled atom-ion interactions.
condensed matter
We introduce a vortex phase transform with a lenslet-array to accompany shallow, dense, ``small-brain'' neural networks for high-speed and low-light imaging. Our single-shot ptychographic approach exploits the coherent diffraction, compact representation, and edge enhancement of Fourier-transformed spiral-phase gradients. With vortex spatial encoding, a small brain is trained to deconvolve images at rates 5-20 times faster than those achieved with random encoding schemes, where greater advantages are gained in the presence of noise. Once trained, the small brain reconstructs an object from intensity-only data, solving an inverse mapping without performing iterations on each image and without deep-learning schemes. With this hybrid, optical-digital, vortex Fourier encoded, small-brain scheme, we reconstruct MNIST Fashion objects illuminated with low-light flux (5 nJ/cm$^2$) at a rate of several thousand frames per second on a 15 W central processing unit, two orders of magnitude faster than convolutional neural networks.
electrical engineering and systems science
When interference affecting various communication and sensor systems contains clearly identifiable outliers (e.g. an impulsive component), it can be efficiently mitigated in real time by intermittently nonlinear filters developed in our earlier work, achieving improvements in the signal quality otherwise unattainable. However, apparent amplitude outliers in the interference can disappear and reappear due to various filtering effects, including fading and multipass, as the signal propagates through media and/or the signal processing chain. In addition, the outlier structure of the interference can be obscured by strong non-outlier interfering signals, such as thermal noise and/or adjacent channel interference, or by the signal of interest itself. In this paper, we first outline the overall approach to using intermittently nonlinear filters for in-band, real-time mitigation of such interference with hidden outlier components in practical complex interference scenarios. We then introduce Complementary Intermittently Nonlinear Filters (CINFs) and focus on the particular task of mitigating the outlier noise obscured by the signal of interest itself. We describe practical implementations of such nonlinear filtering arrangements for mitigation of hidden outlier interference, in the process of analog-to-digital conversion, for wide ranges of interference powers and the rates of outlier generating events. To emphasize the effectiveness and versatility of this approach, in our examples we use particularly challenging waveforms that severely obscure low-amplitude outlier noise, such as broadband chirp signals (e.g. used in radar, sonar, and spread-spectrum communications) and ``bursty," high crest factor signals (e.g. OFDM).
electrical engineering and systems science
We would like to develop model theory for T, a complete theory in L_{theta,theta}(tau) when theta is a compact cardinal. We already have bare bones stability theory and it seemed we can go no further. Dealing with ultrapowers (and ultraproducts) naturally we restrict ourselves to "D a theta-complete ultrafilter on I, probably (I,theta)-regular". The basic theorems of model theory work and can be generalized (like Los theorem), but can we generalize deeper parts of model theory? The first section tries to sort out what occurs to the notion of stable T for complete L_{theta,theta}-theories T. We generalize several properties of complete first order T, equivalent to being stable (see [Sh:c]) and find out which implications hold and which fail. In particular, can we generalize stability enough to generalize [Sh:c, Ch. VI]? Let us concentrate on saturation in the local sense (types consisting of instances of one formula). We prove that at least we can characterize the T's (of cardinality < theta for simplicity) which are minimal for appropriate cardinal lambda > 2^kappa +|T| in each of the following two senses. One is generalizing Keisler order which measures how saturated are ultrapowers. Another asks: Is there an L_{theta,theta}-theory T_1 supseteq T of cardinality |T| + 2^theta such that for every model M_1 of T_1 of cardinality > lambda, the tau(T)-reduct M of M_1 is lambda^+-saturated. Moreover, the two versions of stable used in the characterization are different.
mathematics
We propose a quantum repeater protocol and architecture that mitigates decoherence of the entangled states by optimizing the quantum memory buffer time. The protocol maximizes the rate of distillable entanglement in the average accessed state at all nesting levels. The achievable rate is higher by orders of magnitude in comparison to a canonical protocol that does not optimize the buffer time. The advantage of the proposed design is observed for all nesting levels of the repeater for technologically feasible memory quality, entanglement generation and swapping success probabilities.
quantum physics
We propose an indoor localization algorithm for visible light systems by considering effects of non-line-of-sight (NLOS) propagation. The proposed algorithm, named database assisted nonlinear least squares (DA-NLS), utilizes ideas from both the classical NLS algorithm and the fingerprinting algorithm to achieve accurate and robust localization performance in NLOS environments. In particular, a database is used to learn NLOS effects, and then an NLS algorithm is employed to estimate the position. The performance of the proposed algorithm is compared against that of the fingerprinting and NLS algorithms.
electrical engineering and systems science
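The abstract does not spell out the algorithmic details, so the following Python sketch is only a generic nonlinear-least-squares positioning example under assumed conditions (known anchor positions, range-like measurements, a constant NLOS bias on some links, and scipy.optimize.least_squares as the solver). It is meant to illustrate the NLS step that DA-NLS combines with a fingerprint-style database, not the proposed method itself.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Known anchor (e.g. LED luminaire) positions and the true receiver position.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
p_true = np.array([1.2, 2.1])

# Range-like measurements; the last two links get a positive NLOS bias.
d_true = np.linalg.norm(anchors - p_true, axis=1)
nlos_bias = np.array([0.0, 0.0, 0.4, 0.4])
d_meas = d_true + nlos_bias + 0.02 * rng.normal(size=len(anchors))

def residuals(p, bias_correction=None):
    d_model = np.linalg.norm(anchors - p, axis=1)
    if bias_correction is not None:
        d_model = d_model + bias_correction   # e.g. biases learned from a database
    return d_model - d_meas

# Plain NLS, ignoring NLOS ...
p_nls = least_squares(residuals, x0=np.array([2.0, 1.5])).x
# ... and NLS with a bias correction (assumed known here), mimicking the role
# a database can play in compensating NLOS effects.
p_da = least_squares(residuals, x0=np.array([2.0, 1.5]), args=(nlos_bias,)).x

print("error without NLOS correction:", np.linalg.norm(p_nls - p_true))
print("error with NLOS correction:   ", np.linalg.norm(p_da - p_true))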
Q-learning, which seeks to learn the optimal Q-function of a Markov decision process (MDP) in a model-free fashion, lies at the heart of reinforcement learning. When it comes to the synchronous setting (such that independent samples for all state-action pairs are drawn from a generative model in each iteration), substantial progress has been made recently towards understanding the sample efficiency of Q-learning. Take a $\gamma$-discounted infinite-horizon MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$: to yield an entrywise $\varepsilon$-accurate estimate of the optimal Q-function, state-of-the-art theory for Q-learning proves that a sample size on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^5\varepsilon^{2}}$ is sufficient, which, however, fails to match with the existing minimax lower bound. This gives rise to natural questions: what is the sharp sample complexity of Q-learning? Is Q-learning provably sub-optimal? In this work, we settle these questions by (1) demonstrating that the sample complexity of Q-learning is at most on the order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ (up to some log factor) for any $0<\varepsilon <1$, and (2) developing a matching lower bound to confirm the sharpness of our result. Our findings unveil both the effectiveness and limitation of Q-learning: its sample complexity matches that of speedy Q-learning without requiring extra computation and storage, albeit still being considerably higher than the minimax lower bound.
statistics
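A compact NumPy sketch of synchronous Q-learning with a generative model, the setting analyzed above, is given below; the random MDP, the discount factor, and the rescaled linear learning rate are illustrative choices rather than anything prescribed by the paper.

import numpy as np

rng = np.random.default_rng(6)
S, A, gamma, T = 20, 4, 0.9, 5000

# Random MDP: P[s, a] is a distribution over next states, R[s, a] a reward.
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(size=(S, A))

# Reference solution via value iteration on the known model.
Q_star = np.zeros((S, A))
for _ in range(2000):
    Q_star = R + gamma * P @ Q_star.max(axis=1)

# Synchronous Q-learning: one generative-model sample per (s, a) per iteration.
Q = np.zeros((S, A))
for t in range(1, T + 1):
    eta = 1.0 / (1.0 + (1.0 - gamma) * t)      # rescaled linear learning rate
    # Draw s' ~ P(. | s, a) independently for every state-action pair.
    next_states = np.array([[rng.choice(S, p=P[s, a]) for a in range(A)]
                            for s in range(S)])
    target = R + gamma * Q.max(axis=1)[next_states]
    Q = (1.0 - eta) * Q + eta * target

print("max |Q - Q*|:", np.abs(Q - Q_star).max())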
Non-Gaussian quantum states have been deterministically prepared and autonomously stabilized in single- and two-mode circuit quantum electrodynamics architectures via engineered dissipation. However, it is currently unknown how to scale up this technique to multi-mode non-Gaussian systems. Here, we upgrade dissipation engineering to collective (normal) modes of nonlinear resonator arrays and show how to stabilize multi-mode Schrodinger cat states. These states are multi-photon and multi-mode quantum superpositions of coherent states in a single normal mode delocalized over an arbitrary number of cavities. We consider tailored dissipative coupling between resonators that are parametrically driven and feature an on-site nonlinearity, which is either a Kerr-type nonlinearity or an engineered two-photon loss. For both types of nonlinearity, we find the same exact closed-form solutions for the two-dimensional steady-state manifold spanned by superpositions of multi-mode Schrodinger cat states. We further show that, in the Zeno limit of strong dissipative coupling, the even parity multi-mode cat state can be deterministically prepared from the vacuum. Remarkably, engineered two-photon loss gives rise to a fast relaxation towards the steady state, protecting the state preparation against decoherence due to intrinsic single-photon losses, which sets in at longer times. The relaxation time is independent of system size making the state preparation scalable. Multi-mode cat states are naturally endowed with a noise bias that increases exponentially with system size and can thus be exploited for enhanced robust encoding of quantum information.
quantum physics
In this paper we investigate operator Hilbert systems and their separable morphisms. We prove that the operator Hilbert space of Pisier is an operator system, which possesses the self-duality property. We establish a link between unital positive maps and Pietsch factorizations, which allows us to describe all separable morphisms from an abelian C*-algebra to an operator Hilbert system. Finally, we prove a key property of entanglement breaking maps that involves operator Hilbert systems.
mathematics
The galaxy-scale gravitational lens B0128+437 generates a quadrupole-image configuration of a background quasar that shows milli-arcsecond-scale subcomponents in the multiple images observed with VLBI. As this multiple-image configuration including the subcomponents has eluded a parametric lens-model characterisation so far, we determine local lens properties at the positions of the multiple images with our model-independent approach. Using PixeLens, we also succeed in setting up a global free-form mass density reconstruction including all subcomponents as constraints. We compare the model-independent local lens properties with those obtained by PixeLens and those obtained by the parametric modelling algorithm Lensmodel. A comparison of all three approaches and a model-free analysis based on the relative polar angles of the multiple images corroborate the hypothesis that elliptically symmetric models are too simplistic to characterise the asymmetric mass density distribution of this lenticular or late-type galaxy. In addition, the model-independent approach efficiently determines local lens properties on the scale of the quasar subcomponents, which are computationally intensive to obtain by free-form model-based approaches. As only 40% of the small-scale subcomponent local lens properties overlap within the 1-$\sigma$ confidence bounds, mass density gradients on milli-arcsecond scales cannot be excluded. Hence, aiming at a global reconstruction of the deflecting mass density distribution, increasingly detailed observations require flexible free-form models that allow for density fluctuations on milli-arcsecond scale to replace parametric ones, especially for asymmetric lenses or lenses with localised inhomogeneities like B0128.
astrophysics
In a recent paper, Rathie and Pogany established thirty-two novel and general reductions of generalized hypergeometric functions of two and three variables. In this paper we provide twenty-four further novel and general reduction formulas. The results are established by the application of Beta and Gamma integral methods to the three identities involving products of generalized hypergeometric functions obtained earlier by Kim and Rathie. As special cases, we mention some interesting results.
mathematics
In this note we study CFT$_2$ Virasoro conformal blocks with heavy operators in the large-$c$ limit in the context of AdS$_3$/CFT$_2$ correspondence. We compute the lengths of the holographic Steiner trees dual to the $5$-point and $6$-point conformal blocks using the superlight approximation when one or more dimensions are much less than the others. These results are generalized for $N$-point holographic Steiner trees dual to $(N+1)$-point conformal blocks with superlight weights.
high energy physics theory
In this paper, we generally expressed the virial expansion of ideal quantum gases by the heat kernel coefficients for the corresponding Laplace type operator. As examples, we give the virial coefficients for quantum gases in $d$-dimensional confined space and spheres, respectively. Our results show that, the relative correction from the boundary to the second virial coefficient is independent of the dimension and it always enhances the quantum exchange interaction. In $d$-dimensional spheres, however, the influence of the curvature enhances the quantum exchange interaction in two dimensions, but weakens it in higher dimensions ($d>3$).
condensed matter
Metasurfaces (MSs) constitute effective media for manipulating and transforming impinging EM waves. Related studies have explored a series of impactful MS capabilities and applications in sectors such as wireless communications, medical imaging and energy harvesting. A key gap in the existing body of work is that the attributes of the EM waves to be controlled (e.g., direction, polarity, phase) are known in advance. The present work proposes a practical solution to the EM wave sensing problem using the intelligent and networked MS counterparts, the HyperSurfaces (HSFs), without requiring dedicated field sensors. A nano-network embedded within the HSF iterates over the possible MS configurations, finding the one that fully absorbs the impinging EM wave, hence maximizing the energy distribution within the HSF. Using a distributed consensus approach, the nano-network then matches the found configuration to the most probable EM wave traits, via a static lookup table that can be created during HSF manufacturing. Realistic simulations demonstrate the potential of the proposed scheme. Moreover, we show that the proposed workflow is the first-of-its-kind embedded EM compiler, i.e., an autonomic HSF that can translate high-level EM behavior objectives into the corresponding low-level EM actuation commands.
electrical engineering and systems science
In this paper, we investigate the effect of supersymmetry on the symmetry classification of random matrix theory ensembles. We mainly consider the random matrix behaviors in the $\mathcal{N}=1$ supersymmetric generalization of the Sachdev-Ye-Kitaev (SYK) model, a toy model for the two-dimensional quantum black hole with supersymmetric constraint. Some analytical arguments and numerical results are given to show that the statistics of the supersymmetric SYK model could be interpreted as random matrix theory ensembles, with a different eight-fold classification from the original SYK model and some new features. The time-dependent evolution of the spectral form factor is also investigated, where predictions from random matrix theory are governing the late time behavior of the chaotic Hamiltonian with supersymmetry.
high energy physics theory
In this paper we address a fundamental question in communication: in the presence of various noise scenarios, such as white/colored Gaussian noise and impulsive-type noise, how can a set of signals, whether speech or images, be transmitted efficiently and accurately from one side to the other via a communication channel? We need to manipulate the signals so that they are more robust to different types of noise while, at the same time, the set of signals can be transmitted efficiently and accurately to the other side via a communication channel. Here we propose a spatial-temporal spreading method for a set of signals and show its applications in communications.
electrical engineering and systems science
The second author studied arithmetic properties of a class of sequences that generalize the sequence of derangements. The aim of the following paper is to disprove two conjectures stated in \cite{miska}. The first conjecture regards the set of prime divisors of their terms. The latter one is devoted to the order of magnitude of considered sequences.
mathematics
Axion-like particles (ALPs) with couplings to electromagnetism have long been postulated as extensions to the Standard Model. String theory predicts an "axiverse" of many light axions, some of which may make up the dark matter in the universe and/or solve the strong CP problem. We propose a new experiment using superconducting radiofrequency (SRF) cavities which is sensitive to light ALPs independent of their contribution to the cosmic dark matter density. Off-shell ALPs will source cubic nonlinearities in Maxwell's equations, such that if a SRF cavity is pumped at frequencies $\omega_1$ and $\omega_2$, in the presence of ALPs there will be power in modes with frequencies $2\omega_1 \pm \omega_2$. Our setup is similar in spirit to light-shining-through-walls (LSW) experiments, but because the pump field itself effectively converts the ALP back to photons inside a single cavity, our sensitivity scales differently with the strength of the external fields, allowing for superior reach as compared to experiments like OSQAR while utilizing current technology. Furthermore, a well-defined program of increasing sensitivity has a guaranteed physics result: the first observation of the Euler-Heisenberg term of low-energy QED at energies below the electron mass. We discuss how the ALP contribution may be separated from the QED contribution by a suitable choice of pump modes and cavity geometry, and conclude by describing the ultimate sensitivity of our proposed program of experiments to ALPs.
high energy physics phenomenology
Place a droplet of mineral oil on water and the oil will spread to cover the water surface in a thin film -- a phenomenon familiar to many, owing to the rainbow-faced puddles left behind leaking buses on rainy days. In this paper we study the everted problem: an aqueous droplet deposited onto a deep layer of silicone oil. As it is energetically favourable for the oil phase to spread to cover the droplet surface completely, the droplet is ultimately engulfed in the oil layer. We present a detailed study of engulfment dynamics, from the instant the droplet first impacts the oil surface until it finally sediments into the less dense oil. We study a broad range of droplet sizes (micrometric to millimetric) and oil kinematic viscosities ($10^2$ to $10^5$ cSt), corresponding to a viscosity-dominated parameter regime with relevance to oil spills. Our investigation primarily examines droplet engulfment dynamics over two distinct stages: a rapid earlier stage in which the droplet is almost entirely submerged, driven by capillary forces in the oil surface, and cloaked by a thin layer of oil; and a much slower later stage in which gravity pulls on the drop adhered to the oil surface, thus driving a peeling flow. This means that gravitational effects are essential to complete the engulfment of the droplet, even for micrometric droplets. We observe the longest engulfment times for droplets of intermediate size. Experiments at fixed droplet size reveal a power-law dependence of engulfment time on oil kinematic viscosity.
physics
The study of bulge morphology and evolutionary history, together with the associated scaling relations, is essential in studies of galaxy formation and evolution. Following from our previous work (Pastrav 2020), we present a detailed structural analysis of a representative sample of nearby spiral and early-type galaxies taken from the KINGFISH/SINGS survey. The photometric parameters of bulges are obtained from bulge-disk decompositions using the GALFIT data analysis algorithm. The method and the corrections previously obtained in Pastrav et al. (2013a,b) are used to derive intrinsic photometric and structural bulge parameters, corrected for projection and dust effects. We show the main bulge scaling relations and the black-hole relations, both the observed and the intrinsic ones, in the optical regime. We find dust and inclination effects to produce more important changes in the slope and zero-point of the Kormendy relation for spiral galaxies than in that for early-type galaxies (ETGs), with bulges of spiral galaxies residing on a relation with a steeper slope than that of the early-type galaxies. We observe that the Kormendy relation in combination with a threshold in bulge S\'{e}rsic index ($n_{b}$) does not clearly produce an accurate morphological separation of bulges, while the $n_{b}$-bulge-to-total flux ratio ($B/T$) and $B/T$-stellar mass ($M_{\star}$) relations can be used to discriminate between bulges with different morphologies or between early- and late-type galaxies. We confirm the existence of two distinct intrinsic relations for bulges of spirals and early-type galaxies, between the bulge luminosity (or absolute magnitude) and S\'{e}rsic index, and between the mass of the central black-hole and bulge luminosity, respectively. Within errors, we confirm a unified intrinsic black hole mass-$n_{b}$ relation for all bulges. The parameters of all the aforementioned relations are consistent with values found in other works.
astrophysics
We study open topological gravity in two dimensions, or, the intersection theory on the moduli space of open Riemann surfaces initiated by Pandharipande, Solomon and Tessler. The open free energy, the generating function for the open intersection numbers, obeys the open KdV equations and Buryak's differential equation and is related by a formal Fourier transformation to the Baker-Akhiezer wave function of the KdV hierarchy. Using these properties we study the genus expansion of the free energy in detail. We construct explicitly the genus zero part of the free energy. We then formulate a method of computing higher genus corrections by solving Buryak's equation and obtain them up to high order. This method is much more efficient than our previous approach based on the saddle point calculation. Along the way we show that the higher genus corrections are polynomials in variables that are expressed in terms of genus zero quantities only, generalizing the constitutive relation of closed topological gravity.
high energy physics theory
Due to the mass gap between the Standard Model and possible New Physics states, electroweak effective approaches are appropriate. Although a linear realization of the electroweak symmetry breaking (EWSB), with the Higgs forming a doublet together with the Goldstone bosons of the EWSB, is a first possibility (SMEFT), we adopt the more general non-linear realization, where the Higgs is a singlet with independent couplings (EWET, HEFT or EWChL). We present the effective Lagrangian at low energies (the EWET, with only the SM fields) and at high energies (the resonance theory, which also contains a set of resonances). Taking into account the high scale of these resonances, their experimental searches seem to be more accessible by considering their imprints at low energies, i.e., their imprints in the Low Energy Constants (LECs) of the EWET at energies lower than the resonance masses. We give some examples of these phenomenological connections.
high energy physics phenomenology
Stochastic field distortions caused by atmospheric turbulence are a fundamental limitation to the astrometric accuracy of ground-based imaging. This distortion field is measurable at the locations of stars with accurate positions provided by the Gaia DR2 catalog; we develop the use of Gaussian process regression (GPR) to interpolate the distortion field to arbitrary locations in each exposure. We introduce an extension to standard GPR techniques that exploits the knowledge that the 2-dimensional distortion field is curl-free. Applied to several hundred 90-second exposures from the Dark Energy Survey as a testbed, we find that the GPR correction reduces the variance of the turbulent distortions $\approx12\times$, on average, with better performance in denser regions of the Gaia catalog. The RMS per-coordinate distortion in the $riz$ bands is typically $\approx7$ mas before any correction, and $\approx2$ mas after application of the GPR model. The GPR astrometric corrections are validated by the observation that their use reduces, from 10 to 5 mas RMS, the residuals to an orbit fit to $riz$-band observations over 5 years of the $r=18.5$ trans-Neptunian object Eris. We also propose a GPR method, not yet implemented, for simultaneously estimating the turbulence fields and the 5-dimensional stellar solutions in a stack of overlapping exposures, which should yield further turbulence reductions in future deep surveys.
astrophysics
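The interpolation step described above can be illustrated with a bare-bones Gaussian process regression in NumPy. The sketch below uses an isotropic squared-exponential kernel with made-up hyperparameters and simulated "stellar" residuals; it omits the curl-free extension and all survey-specific tuning, and is only meant to show how the distortion measured at reference stars is carried to arbitrary field positions.

import numpy as np

rng = np.random.default_rng(7)

def sqexp(X1, X2, amp, ell):
    # Isotropic squared-exponential covariance between 2-D positions.
    d2 = ((X1[:, None, :] - X2[None, :, :])**2).sum(-1)
    return amp**2 * np.exp(-0.5 * d2 / ell**2)

# "Stars" with measured astrometric residuals (one distortion component, in mas).
n_star = 300
X = rng.uniform(0, 1, size=(n_star, 2))              # positions (arbitrary units)
amp, ell, noise = 7.0, 0.1, 1.0                       # illustrative hyperparameters
L = np.linalg.cholesky(sqexp(X, X, amp, ell) + 1e-9 * np.eye(n_star))
d = L @ rng.normal(size=n_star) + noise * rng.normal(size=n_star)   # simulated field

# GP posterior mean at arbitrary target positions (e.g. all detected objects).
X_new = rng.uniform(0, 1, size=(1000, 2))
K = sqexp(X, X, amp, ell) + noise**2 * np.eye(n_star)
alpha = np.linalg.solve(K, d)
d_pred = sqexp(X_new, X, amp, ell) @ alpha

# On the stars themselves the corrected residual scatter shrinks toward the
# measurement-noise floor, the analogue of the RMS reduction quoted above.
d_fit = sqexp(X, X, amp, ell) @ alpha
print("raw RMS (mas):          ", d.std())
print("corrected RMS (mas):    ", (d - d_fit).std())
print("field-correction RMS:   ", d_pred.std())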
In estimating an unknown parameter of a quantum state, the quantum Fisher information (QFI) is a pivotal quantity, which depends on the state and its derivative with respect to the unknown parameter. We prove the continuity property for the QFI in the sense that two close states with close first derivatives have close QFIs. This property is completely general, irrespective of the dynamics, of how the states acquire their parameter dependence, and of the form of the parameter dependence; indeed, this continuity is basically a feature of the classical Fisher information that in the case of the QFI naturally carries over from the manifold of probability distributions onto the manifold of density matrices. We demonstrate that in the special case where the dependence of the states on the unknown parameter comes from one dynamical map (quantum channel), the continuity holds in its reduced form with respect to the initial states. In addition, we show that when one initial state evolves through two different quantum channels, the continuity relation applies in its general form. A situation in which such a scenario can occur is open-system metrology, where one of the maps represents the ideal dynamics whereas the other map represents the real (noisy) dynamics. In the making of our main result, we also introduce a regularized representation for the symmetric logarithmic derivative which works for general states even with incomplete rank, and which features continuity similarly to the QFI.
quantum physics