Columns: text (string, lengths 11 to 9.77k); label (string, lengths 2 to 104)
One of the suggested thick disc formation mechanisms is that they were born quickly and in situ from a turbulent clumpy disc. Subsequently, thin discs formed slowly within them from leftovers of the turbulent phase and from material accreted through cold flows and minor mergers. In this letter, I propose an observational test to verify this hypothesis. By combining thick disc and total stellar masses of edge-on galaxies with galaxy stellar mass functions calculated in the redshift range of $z\leq3.0$, I derived a positive correlation between the age of the youngest stars in thick discs and the stellar mass of the host galaxy; galaxies with a present-day stellar mass of $\mathcal{M}_\star(z=0)<10^{10}\,\mathcal{M}_\odot$ have thick disc stars as young as $4-6\,{\rm Gyr}$, whereas the youngest stars in the thick discs of Milky-Way-like galaxies are $\sim10\,{\rm Gyr}$ old. I tested this prediction against the few available thick disc age estimates, all of which come from galaxies with $\mathcal{M}_\star(z=0)\gtrsim10^{10}\,\mathcal{M}_\odot$, and found that field spiral galaxies seem to follow the expectation. On the other hand, my derivation predicts ages that are too low for the thick discs in lenticular galaxies, indicating a fast early evolution for S0 galaxies. I propose to conclusively test whether thick discs formed quickly and in situ by obtaining the ages of thick discs in field galaxies with masses of $\mathcal{M}_\star(z=0)\sim10^{9.5}\,\mathcal{M}_\odot$ and by checking whether they contain $\sim5\,{\rm Gyr}$-old stars.
astrophysics
We consider the 1-loop effective potential in type I string theory compactified on a torus, with supersymmetry broken by the Scherk-Schwarz mechanism. At fixed supersymmetry breaking scale M, and up to exponentially suppressed terms, we show that the potential admits local minima of arbitrary sign, in dimension d<=5. While the open string Wilson lines are massive, the closed string moduli are flat directions. In a T-dual picture, the relevant backgrounds involve isolated 1/2-branes, whose positions are frozen on orientifold planes, thus decreasing the rank of the gauge group, and introducing massless fermions in fundamental representations.
high energy physics theory
Transverse momentum dependent (TMD) distributions at small x exhibit a rich infinite twist structure that encompasses the leading twist (partonic) distributions as well as the physics of gluon saturation. Progress has recently been made in furthering the connection between the standard TMD framework at moderate x and the small-x regime. In this context, we show that light-cone Wilson line operators at small x can be formulated in terms of transverse gauge links. This new formulation of small-x operators allows a direct matching with the standard leading twist gluon TMD distributions and provides an efficient and general prescription for computing TMD distributions at small x beyond leading twist.
high energy physics phenomenology
In this paper, we present a simple approach to train Generative Adversarial Networks (GANs) in order to avoid a \textit{mode collapse} issue. Implicit models such as GANs tend to generate better samples compared to explicit models that are trained on tractable data likelihood. However, GANs overlook the explicit data density characteristics, which leads to undesirable quantitative evaluations and mode collapse. To bridge this gap, we propose a hybrid generative adversarial network (HGAN) in which we enforce data density estimation via an autoregressive model and support both the adversarial and likelihood frameworks in a joint training manner, which diversifies the estimated density in order to cover different modes. We propose to use an adversarial network to \textit{transfer knowledge} from an autoregressive model (teacher) to the generator (student) of a GAN model. A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to the standard GAN training approach. We conduct extensive experiments on real-world datasets (i.e., MNIST, CIFAR-10, STL-10) to demonstrate the effectiveness of the proposed HGAN under qualitative and quantitative evaluations. The experimental results show the superiority and competitiveness of our method compared to the baselines.
computer science
This paper aims at giving a novel approach to investigate the behavior of the renormalization group flow for tensorial group field theories to all orders of perturbation theory. From an appropriate choice of the kinetic kernel, we build an infinite family of just-renormalizable models, for tensor fields with arbitrary rank $d$. Investigating the large-$d$ limit, we show that the self-energy melonic amplitude is decomposed as a product of loop-vertex functions depending only on the dimensionless mass. The corresponding melonic amplitudes may be mapped as trees in the so-called Hubbard-Stratonovich representation, and we show that only trees with edges of different colors survive in the large-$d$ limit. These two key features allow us to resum the perturbative expansion for the self-energy, providing an explicit expression for arbitrary external momenta in terms of the Lambert function. Finally, inserting this resummed solution into the Callan-Symanzik equations, and taking into account the strong relation between two- and four-point functions arising from melonic Ward-Takahashi identities, we deduce an explicit expression for the relevant and marginal $\beta$-functions, valid to all orders of the perturbative expansion. By investigating the solutions of the resulting flow, we conclude that no fixed point exists in the investigated region of the full phase space.
high energy physics theory
Improved understanding of charge-transport in single molecules is essential for harnessing the potential of molecules e.g. as circuit components at the ultimate size limit. However, interpretation and analysis of the large, stochastic datasets produced by most quantum transport experiments remains an ongoing challenge to discovering much-needed structure-property relationships. Here, we introduce Segment Clustering, a novel unsupervised hypothesis generation tool for investigating single molecule break junction distance-conductance traces. In contrast to previous machine learning approaches for single molecule data, Segment Clustering identifies groupings of similar pieces of traces instead of entire traces. This offers a new and advantageous perspective into dataset structure because it facilitates the identification of meaningful local trace behaviors that may otherwise be obscured by random fluctuations over longer distance scales. We illustrate the power and broad applicability of this approach with two case studies that address common challenges encountered in single molecule studies: First, Segment Clustering is used to extract primary molecular features from a varying background to increase the precision and robustness of conductance measurements, enabling small changes in conductance in response to molecular design to be identified with confidence. Second, Segment Clustering is applied to a known data mixture to qualitatively separate distinct molecular features in a rigorous and unbiased manner. These examples demonstrate two powerful ways in which Segment Clustering can aid in the development of structure-property relationships in molecular quantum transport, an outstanding challenge in the field of molecular electronics.
condensed matter
We propose a clear definition of the gluon condensate within the large-$\beta_0$ approximation as an attempt toward a systematic argument on the gluon condensate. We define the gluon condensate such that it is free from a renormalon uncertainty, consistent with the renormalization scale independence of each term of the operator product expansion (OPE), and an identical object irrespective of observables. The renormalon uncertainty of $\mathcal{O}(\Lambda^4)$, which renders the gluon condensate ambiguous, is separated from a perturbative calculation by using a recently suggested analytic formulation. The renormalon uncertainty is absorbed into the gluon condensate in the OPE, which makes the gluon condensate free from the renormalon uncertainty. As a result, we can define the OPE in a renormalon-free way. Based on this renormalon-free OPE formula, we discuss numerical extraction of the gluon condensate using the lattice data of the energy density operator defined by the Yang--Mills gradient flow.
high energy physics phenomenology
The mirrors installed on Imaging Atmospheric Cherenkov Telescopes like the MAGIC telescopes in La Palma, Canary Islands, are constantly exposed to the harsh environment. They have to withstand wind-induced corrosion from dust and sand, changing temperatures, and rain. Because of the size of the telescope, protecting the structure with a dome is not practical. The current mirrors used in MAGIC are aluminum front-coated glass mirrors, covered by a thin quartz layer. But even with this protective layer, a significant decrease in reflectivity can be seen on timescales of several years. The quartz layer is very delicate and can be easily scratched or damaged, which also makes cleaning the mirrors almost impossible. We have tested a novel design of glass mirrors that can be easily cleaned and should show almost no degradation in reflectivity due to environmental influences. The protective layer is an ultra-thin glass sheet which is back-coated with aluminum, making it possible to simply wipe the mirror with household cleaning tools. In this contribution we will present results from laboratory tests of reflectivity and focusing properties of prototype mirrors, as well as long-term tests on-site at the MAGIC telescopes. We will also outline plans for exchanging a large fraction of MAGIC mirrors with this novel design, guaranteeing peak performance of MAGIC for the coming years.
astrophysics
In our analysis, we show that Efrati et al.'s publication is inconsistent with the mathematics of plate theory; it is more consistent with the mathematics of shell theory, but with an incorrect strain tensor. Thus, the authors' numerical results imply that a thin object can be stretched substantially with very little force, which is physically unrealistic and mathematically disprovable. All the theoretical work of the authors, i.e. the nonlinear plate equations in curvilinear coordinates, can easily be rectified with the inclusion of both a sufficiently differentiable diffeomorphism and a set of external loadings, such as an external strain field.
physics
We present the first study on the amplification of magnetic fields by the turbulent dynamo in the highly subsonic regime, with Mach numbers ranging from $10^{-3}$ to $0.4$. We find that for the lower Mach numbers the saturation efficiency of the dynamo, $(E_{\mathrm{mag}}/E_{\mathrm{kin}})_{\mathrm{sat}}$, increases as the Mach number decreases. Even in the case when injection of energy is purely through longitudinal forcing modes, $(E_{\mathrm{mag}}/E_{\mathrm{kin}})_{\mathrm{sat}}$ $\gtrsim 10^{-2}$ at a Mach number of $10^{-3}$. We apply our results to magnetic field amplification in the early Universe and predict that a turbulent dynamo can amplify primordial magnetic fields to $\gtrsim$ $10^{-16}$ Gauss on scales up to 0.1 pc and $\gtrsim$ $10^{-13}$ Gauss on scales up to 100 pc. This produces fields compatible with lower limits of the intergalactic magnetic field inferred from blazar $\gamma$-ray observations.
astrophysics
In a Cartesian coordinate system, the fields generated by a point charge moving parallel to the axis of a rectangular vacuum chamber can be formulated in terms of eigenfunctions of the rectangular waveguide using the mode expansion method. In combination with the conventional impedance theory, the Green's-function forms of the wake functions and impedance for space-charge effects can be obtained, and are found to be functions of the positions of both the source and test particles. Using the Green's functions calculated for a point charge, the wake fields and impedance of a beam with various distributions can be calculated, which should be useful for modeling three-dimensional space-charge effects. This paper summarizes our findings and also shows comparisons to existing theories.
physics
Implementation security is a critical problem in quantum key distribution (QKD). With the advent of measurement-device-independent QKD, all security loopholes of the measurement unit have been closed. Securing the source, however, remains an elusive issue. Despite the tremendous progress made by developing security proofs that accommodate most typical source imperfections, such proofs usually disregard the effect of pulse correlations. That is, they disregard the fact that the state of an emitted signal can depend on the signals selected previously. Here, we close this gap by introducing a simple yet general method to prove the security of QKD with arbitrary pulse correlations. Our method is compatible with those security proofs that accommodate all the other source imperfections, thus paving the way towards achieving implementation security in QKD with arbitrary flawed devices. Moreover, we introduce a new security proof, which we call the reference technique, that provides high performance in the presence of source imperfections.
quantum physics
Software packages usually report the results of statistical tests using p-values. Users often interpret these by comparing them to standard thresholds, e.g. 0.1%, 1% and 5%, which is sometimes reinforced by a star rating (***, **, *). We consider an arbitrary statistical test whose p-value p is not available explicitly, but can be approximated by Monte Carlo samples, e.g. by bootstrap or permutation tests. The standard implementation of such tests usually draws a fixed number of samples to approximate p. However, the probability that the exact and the approximated p-value lie on different sides of a threshold (the resampling risk) can be high, particularly for p-values close to a threshold. We present a method to overcome this. We consider a finite set of user-specified intervals which cover [0,1] and which can be overlapping. We call these p-value buckets. We present algorithms that, with arbitrarily high probability, return a p-value bucket containing p. We prove that for both a bounded resampling risk and a finite runtime, overlapping buckets need to be employed, and that our methods both bound the resampling risk and guarantee a finite runtime for such overlapping buckets. To interpret decisions with overlapping buckets, we propose an extension of the star rating system. We demonstrate that our methods are suitable for use in standard software, including for low p-value thresholds occurring in multiple testing settings, and that they can be computationally more efficient than standard implementations.
statistics
TeV/m acceleration gradients using crystals, as originally envisioned by R. Hofstadter, an early pioneer of HEP, have remained unrealizable. Fundamental obstacles that have hampered efforts on particle acceleration using bulk crystals arise from collisional energy loss and emittance degradation, in addition to severe beam disruption, despite the favorable effect of particle channeling along interatomic planes in bulk. We aspire for the union of nanoscience with accelerator science not only to overcome these problems using nanostructured tubes to avoid direct impact of the beam on the bulk ion lattice but also to utilize the highly tunable characteristics of nanomaterials. We pioneer a novel surface wave mechanism in nanostructured materials with a strong electrostatic component which not only attains tens of TeV/m gradients but also provides focusing fields. Under our initiative, the proof-of-principle demonstration of tens of TeV/m gradients and beam nanomodulation is underway. Realizable nanostructure accelerators naturally promise new horizons in HEP as well as in a wide range of areas of research that utilize beams of high-energy particles or photons.
physics
The mean field approach, although a generally reliable tool that captures major short range correlations, often fails in symmetric low dimensional strongly correlated electronic systems like those described by the Hubbard model. In these situations a symmetry is "almost broken". The problem is linked to the restoration of the symmetry due to strong fluctuations (both quantum and thermal) on all scales. The restoration of symmetry in statistical models of scalar "order parameter" fields was recently treated successfully at the Gaussian approximation level by symmetrization of the correlators. Here the idea is extended to fermionic systems in which the order parameter is composite. Furthermore, the precision of the correlators can be improved perturbatively. Such a scheme (based on the covariant Gaussian approximation) is demonstrated on the 1D and 2D one-band Hubbard models by comparison of the correlator with exact diagonalization and MC simulations, respectively.
condensed matter
We derive a manifestly duality-symmetric formulation of the action principle for conformal gravity linearized around Minkowski space-time. The analysis is performed in the Hamiltonian formulation, the fourth-order character of the equations of motion requiring the formal treatment of the three-dimensional metric perturbation and the extrinsic curvature as independent dynamical variables. The constraints are solved in terms of two symmetric potentials that are interpreted as a dual three-dimensional metric and a dual extrinsic curvature. The action principle can be written in terms of these four dynamical variables, duality acting as simultaneous rotations in the respective spaces spanned by the three-dimensional metrics and the extrinsic curvatures. A twisted self-duality formulation of the equations of motion is also provided.
high energy physics theory
The 1-D mean-field equation describing the evolution of the subsurface toroidal field can be used together with the observed surface radial field to model the subsurface toroidal flux density. We aim to test this model and determine the relationship between the observationally inferred surface toroidal field (as a proxy for flux emergence) and the modelled subsurface toroidal flux density. We use a combination of sunspot area observations and the surface toroidal field inferred from WSO line-of-sight magnetic field observations, and compare them with the results of a 1-D mean-field evolution equation for the subsurface toroidal field, driven by the observed radial field from the National Solar Observatory/Kitt Peak and SOLIS observations. We derive calibration curves relating the subsurface toroidal flux density to the observed surface toroidal field strengths and sunspot areas. The calibration curves apply to two regimes, one corresponding to ephemeral region emergence outside of the butterfly wings, the other to active region emergence in the butterfly wings. We discuss this in terms of the size and vertical velocity associated with the two types of flux emergence.
astrophysics
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, we propose a projected primal-dual dynamics using only the objective functions' approximate subgradients. We first prove that the formulated optimization problem can only be solved with an approximation error depending upon the accuracy of the available subgradients. Then, we show the exact solvability of this optimization problem if the accumulated approximation error is not too large. After that, we also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of our algorithms is verified by a numerical example.
mathematics
The most general operator product expansion in conformal field theory is obtained using the embedding space formalism and a new uplift for general quasi-primary operators. The uplift introduced here, based on quasi-primary operators with spinor indices only and standard projection operators, allows a unified treatment of all quasi-primary operators irrespective of their Lorentz group irreducible representations. This unified treatment works at the level of the operator product expansion and hence applies to all correlation functions. A very useful differential operator appearing in the operator product expansion is established and its action on appropriate products of embedding space coordinates is explicitly computed. This computation leads to tensorial generalizations of the usual Exton function for all correlation functions. Several important identities and contiguous relations are also demonstrated for these new tensorial functions. From the operator product expansion all correlation functions for all quasi-primary operators, irrespective of their Lorentz group irreducible representations, can be computed recursively in a systematic way. The resulting answer can be expressed in terms of tensor structures that carry all the Lorentz group information and linear combinations of the new tensorial functions. Finally, a summary of the well-defined rules allowing the computation of all correlation functions constructively is presented.
high energy physics theory
Extrasolar debris disks are the dust disks found around nearby main sequence stars arising from the break-up of asteroids and comets orbiting the stars. Far-IR surveys (e.g., with Herschel) showed that ~20% of stars host detectable dust levels. While dust temperatures suggest a location at tens of au, comparable with our Kuiper belt, orders of magnitude more dust is required, implying a planetesimal population more comparable with the primordial Kuiper belt. High resolution imaging (e.g., with ALMA) has mapped the nearest and brightest disks, providing evidence for structures shaped by an underlying planetary system. Some of these are analogous to structures in our own Kuiper belt (e.g., the hot and cold classical, resonant, scattered disk and cometary populations), while others have no Solar System counterpart. CO gas is seen in some debris disks, and inferred to originate in the destruction of planetesimals with a similar volatile-rich composition to Solar System comets. This chapter reviews our understanding of extrasolar Kuiper belts and of how our own Kuiper belt compares with those of neighbouring stars.
astrophysics
Single image super-resolution (SISR) is one of the most challenging problems in the field of computer vision. Among the deep convolutional neural network based methods, the attention mechanism has shown enormous potential. However, due to the diverse network architectures, there is a lack of a universal attention mechanism for the SISR task. In this paper, we propose a lightweight and efficient Balanced Attention Mechanism (BAM), which is generally applicable to different SISR networks. It consists of an Avgpool Channel Attention Module (ACAM) and a Maxpool Spatial Attention Module (MSAM). These two modules are connected in parallel to minimize error accumulation and crosstalk. To reduce the undesirable effect of redundant information on the attention generation, we only apply Avgpool for channel attention, because Maxpool could pick up illusive extreme points in the feature map across the spatial dimensions, and we only apply Maxpool for spatial attention, because the useful features along the channel dimension usually exist in the form of maximum values for the SISR task. To verify the efficiency and robustness of BAM, we apply it to 12 state-of-the-art SISR networks: eight without attention, into which we plug BAM, and four with attention, whose original attention modules we replace with BAM. We experiment on the Set5, Set14 and BSD100 benchmark datasets with scale factors of x2, x3 and x4. The results demonstrate that BAM can generally improve the network performance. Moreover, we conduct ablation experiments to prove the minimalism of BAM. Our results show that the parallel structure of BAM can better balance channel and spatial attentions, thus outperforming the series structure of the prior Convolutional Block Attention Module (CBAM).
electrical engineering and systems science
The N-body problem has become one of the hottest topics in the fields of computational dynamics and cosmology. The large dynamical range in some astrophysical problems led to the use of adaptive time steps to integrate particle trajectories; however, the search for optimal strategies is still challenging. We quantify the performance of the hierarchical time step integrator Hamiltonian Splitting (HamSp) for collisionless multistep simulations. We compare with the constant step Leap-Frog (LeapF) integrator and the adaptive one (AKDK). Additionally, we explore the impact of different time step assigning functions. There is a computational overhead in HamSp; however, there are two interesting advantages: choosing a convenient time-step function may compensate for it and even turn around the efficiency compared with AKDK. We test both reversibility and time symmetry. The symmetrized nature of the HamSp integration is able to provide time-reversible integration for medium time scales and overall deliver better energy conservation for long integration times, while the linear and angular momentum are preserved at machine precision. We address the impact of using different integrators in astrophysical systems. We found that in most situations both AKDK and HamSp are able to correctly simulate the problems. We conclude that HamSp is an attractive and competitive alternative to AKDK, being in some cases faster and offering better energy and momentum conservation. The use of recently discussed Bridge splitting techniques with HamSp may allow considerably higher efficiency to be reached.
astrophysics
We derive the recent star formation histories of 23 active dwarf galaxies using HST observations from the Legacy Extragalactic UV Survey (LEGUS). We apply a color-magnitude diagram fitting technique using two independent sets of stellar models, PARSEC-COLIBRI and MIST. Despite the non-negligible recent activity, none of the 23 star forming dwarfs show enhancements in the last 100 Myr larger than three times the 100-Myr-average. The unweighted mean of the individual SFHs in the last 100 Myr is also consistent with a rather constant activity, irrespective of the atomic gas fraction. We confirm previous results that for dwarf galaxies the CMD-based average star formation rates (SFRs) are generally higher than the FUV-based SFR. For half of the sample, the 60-Myr-average CMD-based SFR is more than two times the FUV SFR. In contrast, we find remarkable agreement between the 10-Myr-average CMD-based SFR and the H${\alpha}$-based SFR. Finally, using core helium burning stars of intermediate mass we study the pattern of star formation spatial progression over the past 60 Myr, and speculate on the possible triggers and connections of the star formation activity with the environment in which these galaxies live. Approximately half of our galaxies show spatial progression of star formation in the last 60 Myr, and/or very recent diffuse and off-center activity compared to RGB stars.
astrophysics
Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, but rigorous results about the approximation error of neural networks after training are few. Here we establish conditions for global convergence of the standard optimization algorithm used in machine learning applications, stochastic gradient descent (SGD), and quantify the scaling of its error with the size of the network. This is done by reinterpreting SGD as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number $n$ of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of $n$, with a resulting approximation error that universally scales as $O(n^{-1})$. These properties are established in the form of a Law of Large Numbers and a Central Limit Theorem for the empirical distribution. Our analysis also quantifies the scale and nature of the noise introduced by SGD and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as $d=25$.
statistics
Change detection is a fundamental task in computer vision. Although significant advances have been made, most of the change detection methods fail to work well in challenging scenes due to ubiquitous noise and interferences. Nowadays, post-processing methods (e.g. MRF and CRF) aiming to enhance the binary change detection results still fall short of the requirements on universality for distinctive scenes, applicability for different types of detection methods, accuracy, and real-time performance. Inspired by the nature of image filtering, which separates noise from pixel observations and recovers the real structure of patches, we consider utilizing image filters to enhance the detection masks. In this paper, we present an integrated filter which comprises a weighted local guided image filter and a weighted spatiotemporal tree filter. The spatiotemporal tree filter leverages the global spatiotemporal information of adjacent video frames while the guided filter carries out local window filtering of pixels, for enhancing the coarse change detection masks. The main contributions are threefold: (i) the proposed filter can make full use of the information of the same object in consecutive frames to improve its current detection mask by computations on a spatiotemporal minimum spanning tree; (ii) the integrated filter possesses both advantages of local filtering and global filtering; it not only has a good edge-preserving property but also can handle heavily textured and colorful foreground regions; and (iii) unlike some popular enhancement methods (MRF and CRF) that require either a priori background probabilities or a posteriori foreground probabilities for every pixel to improve the coarse detection masks, our method is a versatile enhancement filter that can be applied after many different types of change detection methods, and is particularly suitable for video sequences.
computer science
We present a detailed description of the special procedures for calibration and quality assurance of Atacama Large Millimeter/submillimeter Array (ALMA) observations in Very Long Baseline Interferometry (VLBI) mode. These procedures are required to turn the phased ALMA array into a fully calibrated VLBI station. As an illustration of these methodologies, we present full-polarization observations carried out with ALMA as a phased array at 3mm (Band 3) and 1.3mm (Band 6) as part of Cycle-4. These are the first VLBI science observations conducted with ALMA and were obtained during a 2017 VLBI campaign in concert with other telescopes worldwide as part of the Global mm-VLBI Array (GMVA, April 1-3) and the Event Horizon Telescope (EHT, April 5-11) in ALMA Bands 3 and 6, respectively.
astrophysics
This paper studies the convergence rate of a message-passing distributed algorithm for solving a large-scale linear system. This problem is generalised from the celebrated Gaussian Belief Propagation (BP) problem for statistical learning and distributed signal processing, and the message-passing algorithm is generalised from the well-known Gaussian BP algorithm. Under the assumption of generalised diagonal dominance, we reveal, through painstaking derivations, several bounds on the convergence rate of the message-passing algorithm. In particular, we show clearly how the convergence rate of the algorithm can be explicitly bounded using the diagonal dominance properties of the system. When specialised to the Gaussian BP problem, our work also offers new theoretical insight into the behaviour of the BP algorithm because we use a purely linear algebraic approach for convergence analysis.
electrical engineering and systems science
This work studies the dynamics controlling the transition between different microstates of two-charge D1-D5 black holes by network methods, in which microstates of the system are defined as network nodes, while transitions between them are defined as edges. It is found that the eigenspectrum of this network's Laplacian matrix, which is identified with the Hamiltonian of the microstate system, has exactly the same Nearest-Neighbor Spacing Distribution as that of the general Gaussian Orthogonal Ensemble of Random Matrices. According to the BGS (Bohigas, Giannoni and Schmit) conjecture, this forms evidence for chaotic features of the D1-D5 microstate dynamics. This evidence is further strengthened by the observation that the inverse of the first (minimal) nonzero eigenvalue of the Laplacian matrix is proportional to the logarithm of the number of microstates of the system. Following Sekino and Susskind, this means that the dynamics of the D1-D5 black hole microstates are not only chaotic, but also the fastest scrambling in nature.
high energy physics theory
In this paper, we propose an AC power flow based cascading failure model that explicitly considers external weather conditions, extreme temperatures in particular, and evaluates the impact of extreme temperature on the initiation and propagation of cascading blackouts. Specifically, load and dynamic line rating changes due to temperature disturbance are modeled, the probabilities for transmission line and generator outages are evaluated, and the timing for each type of event is carefully calculated to decide the actual event sequence. It should be emphasized that the correlated events, in the presence of external temperature changes, could together contribute to voltage instability. Besides, we model undervoltage load shedding and operator re-dispatch as control strategies for preventing the propagation of cascading failures. The effectiveness of the proposed model is verified by simulation results on the RTS-96 3-area system, and it is found that temperature disturbances can lead to correlated load changes and line/generator tripping, which together will greatly increase the risk of cascading and voltage instability. Critical temperature change, critical areas with temperature disturbance, identification of the most vulnerable buses, and comparison of different control strategies are also carefully investigated.
electrical engineering and systems science
We construct a family of holographic duals to anisotropic states in a strongly coupled gauge theory. On the field theory side the anisotropy is generated by giving a vacuum expectation value to a dimension-three operator. We obtain our gravity duals by considering the geometry corresponding to the intersection of D3- and D5-branes along 2+1 dimensions. Our backgrounds are supersymmetric and solve the fully backreacted equations of motion of ten-dimensional supergravity with smeared D5-brane sources. In all cases the geometry flows to $AdS_{5}\times {\mathbb S}^5$ in the UV, signaling an isotropic UV fixed point of the dual field theory. In the IR, depending on the parameters of the solution, we find two possible behaviors: an isotropic fixed point or a geometry with anisotropic Lifshitz-like scaling symmetry. We study several properties of the solutions, including the entanglement entropy of strips. We show that any natural extension of existing $c$-functions will display non-monotonic behavior, conforming with the presence of new degrees of freedom only at intermediate energy scales.
high energy physics theory
Mixed cooperative-competitive control scenarios, such as human-machine interaction with individual goals of the interacting partners, are very challenging for reinforcement learning agents. In order to contribute towards intuitive human-machine collaboration, we focus on problems in the continuous state and control domain where no explicit communication is considered and the agents do not know the others' goals or control laws but only sense their control inputs retrospectively. Our proposed framework combines a learned partner model based on online data with a reinforcement learning agent that is trained in a simulated environment including the partner model. Thus, we overcome drawbacks of independent learners and, in addition, benefit from a reduced amount of real-world data required for reinforcement learning, which is vital in the human-machine context. We finally analyze an example that demonstrates the merits of our proposed framework, which learns fast due to the simulated environment and adapts to the continuously changing partner due to the partner approximation.
electrical engineering and systems science
Recently, $\alpha$-Rank, a graph-based algorithm, has been proposed as a solution to ranking joint policy profiles in large scale multi-agent systems. $\alpha$-Rank claimed tractability through a polynomial time implementation with respect to the total number of pure strategy profiles. Here, we note that inputs to the algorithm were not clearly specified in the original presentation; as such, we deem the complexity claims not grounded, and conjecture that solving $\alpha$-Rank is NP-hard. The authors of $\alpha$-Rank suggested that the input to $\alpha$-Rank can be an exponentially-sized payoff matrix, a claim promised to be clarified in subsequent manuscripts. Even though $\alpha$-Rank exhibits a polynomial-time solution with respect to such an input, we highlight additional critical problems. We demonstrate that, due to the need of constructing an exponentially large Markov chain, $\alpha$-Rank is infeasible beyond a small finite number of agents. We ground these claims by adopting the amount of dollars spent as a non-refutable evaluation metric. Realising this scalability issue, we present a stochastic implementation of $\alpha$-Rank with a double oracle mechanism allowing for reductions in joint strategy spaces. Our method, $\alpha^\alpha$-Rank, does not need to store the exponentially large transition matrix, and can terminate early under the required precision. Although theoretically our method exhibits similar worst-case complexity guarantees compared to $\alpha$-Rank, it allows us, for the first time, to practically conduct large-scale multi-agent evaluations. On $10^4 \times 10^4$ random matrices, we achieve a $1000$x reduction in running time. Furthermore, we also show successful results on large joint strategy profiles with a maximum size in the order of $\mathcal{O}(2^{25})$ ($\approx 33$ million joint strategies) -- a setting not evaluable using $\alpha$-Rank with a reasonable computational budget.
computer science
We present a general model-independent approach to study coupled dark energy and the string Swampland criteria. We show how the dark sector interaction is degenerate with the equation of state of dark energy in the context of the expansion of the Universe. With priors for either of them, the dynamics of dark energy and the dark sector interactions can be reconstructed together with the bounds of the Swampland criteria. Combining cosmic chronometers, baryon acoustic oscillations (BAO), and Type Ia supernovae, our results suggest a mild $1\sigma$ significance of dark sector interactions at low redshift for the coupled quintessence. The Lyman-$\alpha$ BAO at $z=2.34$ leads to a $2\sigma$ signal of nonzero interactions at high redshift. The implications for coupled quintessence are discussed.
astrophysics
Tensor models generalize the matrix-model approach to 2-dimensional quantum gravity to higher dimensions. Some models allowing a $1/N$ expansion have been explored, most of them generating branched-polymer geometries. Recently, enhancements yielding an additional 2d quantum-gravity (planar) phase and an intermediate regime of proliferating baby-universes have been found. It remains an open issue to find models escaping these lower dimensionality universality classes. Here we analyse the dominant regime and critical behaviour of a range of new models which are candidates for such effective geometries, in particular interactions based on the utility graph $K_{3,3}$. We find that, upon proper enhancement, the two-phase structure of a branched-polymer and a 2d gravity regime is the common case in $U(N)$-invariant rank $D=4$ tensor models of small orders. Not only the well-known so-called necklace interactions but also $K_{3,3}$-type interactions turn out to be the source of the planar regime. We give a systematic account of the enhancement scaling, the counting of leading-order diagrams and the multi-critical behaviour of a wide range of interactions, in particular for all order-6 interactions of rank 3 and 4. These findings support the claim of universality of such mixtures of branched-polymer and planar diagrams at criticality. In particular, this hints at the necessity to consider new ingredients, or interactions of higher order and rank, in order to obtain higher dimensional continuum geometry from tensor models.
high energy physics theory
We report on the variability of the rotation periods of solar coronal layers with respect to temperature (or height). For this purpose, we have used observations from the Atmospheric Imaging Assembly (AIA) telescope on board the Solar Dynamics Observatory (SDO) space mission. The images used are at the wavelengths 94 {\AA}, 131 {\AA}, 171 {\AA}, 193 {\AA}, 211 {\AA}, and 335 {\AA} for the period from 2012 to 2018. Analysis of solar full disk images obtained at these wavelengths by AIA is carried out using the flux modulation method. Seventeen rectangular strips/bins at equal intervals of 10 degrees (extending from 80 degrees South to 80 degrees North on the Sun) are selected to extract a time series of extreme ultraviolet (EUV) intensity variations and obtain the auto-correlation coefficient. The peak of a Gaussian fit to the first secondary maximum in the autocorrelogram gives the synodic rotation period. Our analysis shows differential rotation with respect to latitude as well as temperature (or height). In the present study, we find that the sidereal rotation periods of different coronal layers decrease with increasing temperature (or height). The average sidereal rotation period at the lowest temperature (~600,000 Kelvin), corresponding to AIA-171 {\AA}, which originates from the upper transition region/quiet corona, is 27.03 days. The sidereal rotation period decreases with temperature (or height) to 25.47 days at the highest temperature (~10 million Kelvin), corresponding to the flaring regions of the solar corona as seen in AIA-131 {\AA} observations.
astrophysics
The success of deep convolutional neural networks is partially attributed to the massive amount of annotated training data. However, in practice, medical data annotations are usually expensive and time-consuming to obtain. Considering that multi-modality data with the same anatomic structures are widely available in clinical routine, in this paper, we aim to exploit the prior knowledge (e.g., shape priors) learned from one modality (aka. assistant modality) to improve the segmentation performance on another modality (aka. target modality) to make up for annotation scarcity. To alleviate the learning difficulties caused by modality-specific appearance discrepancy, we first present an Image Alignment Module (IAM) to narrow the appearance gap between assistant and target modality data. We then propose a novel Mutual Knowledge Distillation (MKD) scheme to thoroughly exploit the modality-shared knowledge to facilitate the target-modality segmentation. To be specific, we formulate our framework as an integration of two individual segmentors. Each segmentor not only explicitly extracts one modality's knowledge from the corresponding annotations, but also implicitly explores another modality's knowledge from its counterpart in a mutual-guided manner. The ensemble of the two segmentors further integrates the knowledge from both modalities and generates reliable segmentation results on the target modality. Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation by utilizing additional MRI data and outperforms other state-of-the-art multi-modality learning methods.
electrical engineering and systems science
In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning concrete distributions over these parameters. Our results show that regular networks with a learnt structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require a retraining of the model, thus keeping the computational overhead at minimum.
statistics
We present a tidal model for treating the rotational evolution in the general three-body problem with arbitrary viscosities, in which all the masses are considered to be extended and all the tidal interactions between pairs are taken into account. Based on the creep tide theory, we present the set of differential equations that describes the rotational evolution of each body, in a formalism that is easily extensible to the N tidally-interacting body problem. We apply our model to the case of a circumbinary planet and use a Kepler-38-like binary system as a working example. We find that, in this low planetary eccentricity case, the most likely final stationary rotation state is the 1:1 spin-orbit resonance, considering an arbitrary planetary viscosity inside the estimated range for the solar system planets. We derive analytical expressions for the mean rotational stationary state, based on high-order power series of the semimajor axis ratio a1/a2 and low-order expansions of the eccentricities. These are found to reproduce very accurately the mean behaviour of the low-eccentricity numerical integrations for arbitrary planetary relaxation factors, and up to a1/a2 ~ 0.4. Our analytical model is used to predict the stationary rotation of the Kepler circumbinary planets, and we find that most of them are probably rotating in a sub-synchronous state, although the synchrony shift is much less important than the one estimated in Zoppetti et al. (2019, 2020). We present a comparison of our results with those obtained with the Constant Time Lag model and find that, unlike what we assumed in our previous works, the cross torques have a non-negligible net secular contribution, and must be taken into account when computing the tides over each body in an N-extended-body system from an arbitrary reference frame. These torques are naturally taken into account in the creep theory.
astrophysics
Recent multi-channel astrophysics observations and the soon-to-be-published new electromagnetic and gravitational measurements provide information on the inner structure of compact stars. These macroscopic observations can significantly increase our knowledge of neutron star interiors, providing constraints on the microscopic physical properties. On the other hand, due to the masquerade problem, there are still uncertainties in the various nuclear-matter models and their parameters as well. For calculating the properties of dense nuclear matter, effective field theories are the most widely used tools. However, the values of the microscopic parameters need to be set consistently with nuclear and astrophysical measurements. In this work we investigate how uncertainties are induced by the variation of the microscopic parameters. We use symmetric nuclear matter in an extended $\sigma$-$\omega$ model. We calculate the dense-matter equation of state and give the mass-radius diagram. We show that the Landau mass and the compressibility modulus of the nuclear matter have a definite linear relation to the maximum mass of a Schwarzschild neutron star.
high energy physics phenomenology
We investigate how the addition of quantum resources changes the statistical complexity of quantum circuits by utilizing the framework of quantum resource theories. Measures of statistical complexity that we consider include the Rademacher complexity and the Gaussian complexity, which are well-known measures in computational learning theory that quantify the richness of classes of real-valued functions. We derive bounds for the statistical complexities of quantum circuits that have limited access to certain resources and apply our results to two special cases: (1) stabilizer circuits that are supplemented with a limited number of T gates and (2) instantaneous quantum polynomial-time Clifford circuits that are supplemented with a limited number of CCZ gates. We show that the increase in the statistical complexity of a quantum circuit when an additional quantum channel is added to it is upper bounded by the free robustness of the added channel. Finally, we derive bounds for the generalization error associated with learning from training data arising from quantum circuits.
quantum physics
Let $p$ be a prime number. Every $n$-variable polynomial $f(\underline x)$ over a finite field of characteristic $p$ defines an Artin--Schreier--Witt tower of varieties whose Galois group is isomorphic to $\mathbb{Z}_p$. The goal of this paper is to study the Newton polygon of the $L$-function associated to a finite character of $\mathbb{Z}_p$ and a generic polynomial whose convex hull is an $n$-dimensional parallelotope $\Delta$. We denote this polygon by $\mathrm{GNP}(\Delta)$. We prove a lower bound of $\mathrm{GNP}(\Delta)$, which is called the improved Hodge polygon $\mathrm{IHP}(\Delta)$. We show that $\mathrm{IHP}(\Delta)$ lies above the usual Hodge polygon $\mathrm{HP}(\Delta)$ at certain infinitely many points, and that when $p$ is larger than a fixed number determined by $\Delta$, it coincides with $\mathrm{GNP}(\Delta)$ at these points. As a corollary, we roughly determine the distribution of the slopes of $\mathrm{GNP}(\Delta)$.
mathematics
Our ability to manipulate the behavior of complex networks depends on the design of efficient control algorithms and, critically, on the availability of an accurate and tractable model of the network dynamics. While the design of control algorithms for network systems has seen notable advances in the past few years, knowledge of the network dynamics is a ubiquitous assumption that is difficult to satisfy in practice, especially when the network topology is large and, possibly, time-varying. In this paper we overcome this limitation, and develop a data-driven framework to control a complex dynamical network optimally and without requiring any knowledge of the network dynamics. Our optimal controls are constructed using a finite set of experimental data, where the unknown complex network is stimulated with arbitrary and possibly random inputs. In addition to optimality, we show that our data-driven formulas enjoy favorable computational and numerical properties even compared to their model-based counterpart. Although our controls are provably correct for networks with linear dynamics, we also characterize their performance against noisy experimental data and in the presence of nonlinear dynamics, as they arise when mitigating cascading failures in power-grid networks and when manipulating neural activity in brain networks.
electrical engineering and systems science
Modern engineering systems include many components of different types and functions. Verifying that these systems satisfy given specifications can be an arduous task, as most formal verification methods are limited to systems of moderate size. Recently, contract theory has been proposed as a modular framework for defining specifications. In this paper, we present a contract theory for discrete-time dynamical control systems relying on assume/guarantee contracts, which prescribe assumptions on the input of the system and guarantees on the output. We then focus on contracts defined by linear constraints, and develop efficient computational tools for verification of satisfaction and refinement based on linear programming. We exemplify these tools in a simulation example, proving a certain safety specification for a two-vehicle autonomous driving setting.
electrical engineering and systems science
Sound field separation methods can separate the target field from interfering noises, facilitating the study of the acoustic characteristics of a target source placed in a noisy environment. However, most of the existing sound field separation methods are derived in the frequency domain and are thus best suited for separating stationary sound fields. In this paper, a time-domain sound field separation method is developed that can separate the non-stationary sound field generated by the target source over a sphere in real-time. A spherical array sets up a boundary between the target source and the interfering sources, such that the outgoing field on the array is only generated by the target source. The proposed method decomposes the pressure and the radial particle velocity measured by the array into spherical harmonic coefficients, and recovers the target outgoing field based on the time-domain relationship between the decomposition coefficients and the theoretically derived spatial filter responses. Simulations show the proposed method can separate non-stationary sound fields both in free-field and room environments, and over long durations with small errors. The proposed method could serve as a foundation for developing future time-domain spatial sound field manipulation algorithms.
electrical engineering and systems science
Recent work highlights that tens of Galactic double neutron stars are likely to be detectable in the millihertz band of the space-based gravitational-wave observatory, LISA. Kyutoku and Nishino point out that some of these binaries might be detectable as radio pulsars using the Square Kilometer Array (SKA). We point out that the joint LISA+SKA detection of a $f_\text{gw}\gtrsim$1 mHz binary, corresponding to a binary period of $\lesssim$400 s, would enable precision measurements of ultra-relativistic phenomena. We show that, given plausible assumptions, multi-messenger observations of ultra-relativistic binaries can be used to constrain the neutron star equation of state with remarkable fidelity. It may be possible to measure the mass-radius relation with a precision of $\approx$0.2% after 10 yr of observations with the SKA. Such a measurement would be roughly an order of magnitude more precise than possible with other proposed observations. We summarize some of the other remarkable science made possible with multi-messenger observations of millihertz binaries, and discuss the prospects for the detection of such objects.
astrophysics
The fundamental interaction between free electrons and photons stands at the base of both classical and quantum physics. In classical physics, this interaction has recently found renewed interest in the fields of free-electron acceleration and radiation sources. In quantum physics, this interaction has recently facilitated shaping the wavefunction of a single electron in time and space, giving rise to coherent attosecond single-electron combs. Yet, to this day -- in all experiments involving the interaction between free electrons and light -- the light was considered as a classical wave, disregarding its quantum nature. Here, we observe the effect of the quantum statistics of photons on free-electron interactions with light. We demonstrate interactions passing continuously from Poissonian to super-Poissonian and up to thermal statistics, unveiling a surprising manifestation of Bohr's Correspondence Principle: the continuous transition from quantum walk to classical random walk of a free electron on the energy ladder. The electron walker serves as the probe in non-destructive quantum detection experiments, measuring the photon statistics as well as degrees of coherence ${g^{(2)} (0)}$ and higher-orders ${g^{(n)} (0)}$. Unlike conventional quantum-optical detectors, the electron can perform both quantum weak measurements and projective measurements of light by evolving into an entangled joint-state with the photons. Our findings suggest hitherto inaccessible concepts in quantum optics: free-electron-based non-destructive quantum tomography of light, even with ultrafast modulation of the photon statistics or the field quadratures. The study of high-efficiency electron-light coupling constitutes an important step towards combined attosecond-temporal and sub-{\AA}-spatial resolution microscopy.
quantum physics
Several theories of particle physics beyond the Standard Model consider that neutrinos can decay. In this work we assume that the standard mechanism of neutrino oscillations is altered by the decay of the heaviest neutrino mass state into a sterile neutrino and, depending on the model, a scalar or a Majoron. We study the sensitivity of the forthcoming KM3NeT-ORCA experiment to this scenario and find that it could improve the current bounds coming from oscillation experiments, where three-neutrino oscillations have been considered, by roughly two orders of magnitude. We also study how the presence of this neutrino decay can affect the determination of the atmospheric oscillation parameters $\sin^2\theta_{23}$ and $\Delta m_{31}^2$, as well as the sensitivity to the neutrino mass ordering.
high energy physics phenomenology
Visual perception is the most critical input for driving decisions. In this study, our aim is to understand the relationship between saliency and driving decisions. We present a novel attention-based saliency map prediction model for making braking decisions. This approach constructs a holistic model of the driving task and can be extended to other driving decisions such as steering and acceleration. The proposed model is a deep neural network that feeds features extracted from the input image to a recurrent neural network with an attention mechanism. The predicted saliency map is then used to make the braking decision. We trained and evaluated the model using the driving attention dataset BDD-A and the saliency dataset CAT2000.
computer science
We propose an algorithm that is capable of synthesizing a high-quality singing voice for a target speaker given only their normal speech samples. The proposed algorithm first integrates speech and singing synthesis into a unified framework, and learns universal speaker embeddings that are shareable between the speech and singing synthesis tasks. Specifically, the speaker embeddings learned from normal speech via the speech synthesis objective are shared with those learned from singing samples via the singing synthesis objective in the unified training framework. This makes the learned speaker embedding a transferable representation for both speaking and singing. We evaluate the proposed algorithm on a singing voice conversion task, where the content of the original singing is covered with the timbre of another speaker's voice learned purely from their normal speech samples. Our experiments indicate that the proposed algorithm generates high-quality singing voices that sound highly similar to the target speaker's voice given only his or her normal speech samples. We believe that the proposed algorithm will open up new opportunities for singing synthesis and conversion for broader users and applications.
computer science
The perturbative approach to quantum field theories has made it possible to obtain incredibly accurate theoretical predictions in high-energy physics. Although various techniques have been developed to boost the efficiency of these calculations, some ingredients remain especially challenging. This is the case of multiloop scattering amplitudes, which constitute a hard bottleneck to solve. In this paper, we delve into the application of a disruptive technique based on the loop-tree duality theorem, which is aimed at an efficient computation of such objects by opening the loops to nondisjoint trees. We study the multiloop topologies that first appear at four loops and assemble them into a clever and general expression, the N$^4$MLT {\it universal topology}. This general expression makes it possible to open any scattering amplitude of up to four loops, and also describes a subset of higher-order configurations to all orders. These results confirm the conjecture of a factorized opening in terms of simpler known subtopologies, which also determines how the causal structure of the entire loop amplitude is characterized by the causal structure of its subtopologies. In addition, we confirm that the loop-tree duality representation of the N$^4$MLT universal topology is manifestly free of noncausal thresholds, thus pointing towards a remarkably more stable numerical implementation of multiloop scattering amplitudes.
high energy physics phenomenology
Coherent forward scattering processes by neutrino-scalar nonstandard interactions (SNSI) induce an effective neutrino mass. In the Early Universe, a large neutrino effective mass restricts the production of neutrinos. The SNSI effect is modulated by two effective couplings, which account for the coupling between neutrinos and electrons/positrons, $G_{\rm eff}$, and the neutrino self-interaction, $G_{\rm S}$. These parameters are directly related to the effective number of relativistic species, and non-zero values imply a smaller than expected $N_{\rm eff}$. We employ big bang nucleosynthesis to constrain the SNSI effect. We find that $ G_{\rm eff }< 1.2$ MeV$^{-2}$ and $ G_{\rm S }< 2.0 \times 10^{7}$ MeV$^{-2}$ at 68\% CL. For a scalar mass in the range $10^{-15} {\rm eV}\lesssim m_{\phi}\lesssim 10^{-5}{\rm eV}$, our neutrino-scalar coupling constraint is more restrictive than any previous result.
high energy physics phenomenology
The main goal of galaxy surveys is to map the distribution of galaxies, for the purpose of understanding the properties of this distribution and its implications for the content and evolution of the universe. However, in order to realise the potential of these surveys, we need to ensure that we are using the correct analysis: the relativistic analysis, which has been widely studied recently. In this work, the known relativistic overdensity of galaxy surveys is re-examined. Subtle, yet crucial, parameters which appear to have been missed by previous works are uncovered. The possible implication of these parameters for the observed galaxy power spectrum is demonstrated for a generic survey, in the cosmological concordance model. The results show that these parameters can alter the predictions of galaxy clustering on all scales, and hence also the relativistic effects (on ultra-large scales). In particular, the results show that ignoring these parameters - as in the analysis in previous works - will lead to an overestimation of the galaxy clustering strength in structure formation.
astrophysics
We present a novel deep zero-shot learning (ZSL) model for inferring human-object interactions from verb-object (VO) queries. While previous two-stream ZSL approaches only use the semantic/textual information fed into the query stream, we seek to incorporate and embed the semantics into the visual representation stream as well. Our approach is powered by a Semantics-to-Space (S2S) architecture, where semantics derived from the residing objects are embedded into a spatial space of the visual stream. This architecture allows the co-capturing of the semantic attributes of the human and the objects along with their location/size/silhouette information. To validate our approach, we have constructed a new dataset, Verb-Transferability 60 (VT60). VT60 provides 60 different VO pairs with overlapping verbs, tailored for testing two-stream ZSL approaches with VO queries. Experimental evaluations show that our approach not only outperforms the state-of-the-art, but also shows the capability of consistently improving performance regardless of which ZSL baseline architecture is used.
computer science
Euclidean quantum gravity might be defined by stochastic quantisation that is governed by a higher order Langevin equation rather than a first order stochastic equation. In a transitory phase where the Lorentz time cannot be defined, the parameter that orders the evolution of quantum gravity phenomena is the stochastic time. This changes the definition of causality in the period of primordial cosmology. The prediction of stochastically quantised gravity is that there will be a transition from an oscillating quantum phase to a semi-classical one, when the Lorentz time emerges. The end of the transition, as it can be observed from now and described by inflation models, is a diluted Universe, following the inflation phenomenological evolution. It is filled at the beginning with scattered classical primordial black holes. The smallest ones will quickly decay into matter, with a standard quantum field theory evolution until our period. The stable heavier black holes will remain, forming a good fraction of the dark matter and the large black holes observed in the galaxies. In a theoretically related way, this framework suggests the possibility of a gravitational parton content for "point-like" particles, in the same five dimensional quantum field theory context as in the primordial cosmology, with a (+----) signature for the 5d metric. The very precise and explicit result expressed in this paper is actually far more modest than its motivation. We compute explicitly the meaning of a second order Langevin equation in zero dimensions and define precisely what second order stochastic quantisation is in a soluble case.
high energy physics theory
The syndecans represent an ongoing research field focused on their regulatory roles in normal and pathological conditions. The syndecans' role in cancer progression has become well-documented, implicating their importance in diagnosis and even prompting proposals of various potential cancer treatments. Thus, the characterization of their unbinding properties at the single-molecule level will support their use as targets for therapeutics. In our study, syndecan-1 and syndecan-4 were measured during their interaction with the vitronectin HEP II binding site. Our findings show that syndecans are calcium-ion-dependent molecules that reveal distinct unbinding properties indicating alterations in heparan sulfate chain structure, possibly in the chain sequence or sulfation pattern. We therefore suppose that the HS chain affinity to ECM proteins may govern cancer invasion by altering the syndecans' ability to interact with cancer-related receptors present in the tumor microenvironment, thereby promoting the activation of various signaling cascades regulating tumor cell behavior.
physics
The synthetic control method (SCM) allows estimation of the causal effect of an intervention in settings where panel data on just a few treated units and control units are available. We show that the existing SCM as well as its extensions can be easily modified to estimate how much of the "total" effect goes through observed causal channels. The additional assumptions needed are arguably very mild in many settings. Furthermore, in an illustrative empirical application we estimate the effects of adopting the euro on labor productivity in several countries and show that a reduction in the Economic Complexity Index helped to mitigate the negative short run effects of adopting the new currency in some countries and boosted the positive effects in others.
statistics
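As a hedged illustration of the synthetic control step underlying the abstract above, the sketch below fits the standard SCM donor weights (nonnegative and summing to one) to pre-intervention outcomes by constrained least squares. The data are synthetic placeholders and the mediation decomposition of the "total" effect is only indicated in a comment, so this is a minimal sketch of the general method rather than the paper's estimator.

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(Y0_pre, y1_pre):
    """Standard synthetic-control weights: nonnegative, summing to one, chosen
    so the weighted donor units track the treated unit's pre-intervention
    outcomes as closely as possible."""
    J = Y0_pre.shape[1]                                   # number of donor units
    loss = lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    res = minimize(loss, np.full(J, 1.0 / J), method="SLSQP",
                   bounds=[(0.0, 1.0)] * J, constraints=cons)
    return res.x

# Toy panel (hypothetical): 12 pre-intervention periods, 5 donor countries.
rng = np.random.default_rng(0)
Y0_pre = rng.normal(size=(12, 5)).cumsum(axis=0) + 100    # donor outcomes
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y1_pre = Y0_pre @ true_w + rng.normal(scale=0.1, size=12) # treated unit
w = scm_weights(Y0_pre, y1_pre)
print(np.round(w, 2))
# Post-intervention, the estimated "total" effect is y1_post - Y0_post @ w;
# the paper then decomposes this effect into the part transmitted through an
# observed causal channel (e.g. the Economic Complexity Index).
```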
The interaction between a linear electron beam and a guided electromagnetic wave is studied in the context of exceptional points of degeneracy (EPD) supported by such an interactive system. The study focuses on the case of a linear-beam traveling wave tube (TWT) with a realistic helix waveguide slow-wave structure (SWS). The interaction is formulated by an analytical model that is a generalization of the Pierce model, assuming a one-dimensional electron flow along a dispersive single-mode guiding SWS and taking into account space-charge effects in the system. The augmented model, using the phase velocity and characteristic impedance obtained via full-wave simulations, is validated by calculating gain versus frequency and comparing it with results from more complex electron beam simulators. This comparison also shows the accuracy of our new model with respect to the non-dispersive Pierce model. EPDs are then investigated using the augmented model, observing the coalescence of complex-valued wavenumbers and of the system's eigenvectors. The point in the complex dispersion diagram at which the TWT system starts/ceases to exhibit a convection instability, i.e., a mode starts/ceases to grow exponentially along the TWT, is the EPD. We also demonstrate the EPD existence by showing that the Puiseux fractional power series expansion well approximates the bifurcation of the dispersion diagram at the EPD. This latter concept also explains the "exceptional" sensitivity of the TWT system to changes in the beam's electron velocity when operating near an EPD.
physics
We advance the vortex cell approach to turbulence \cite{TSVS} by elaborating the Clebsch field dynamics on the surface of vortex cells. We argue that the resulting statistical system can be described as a 3D Ising model interacting with a Compactified Bosonic String on the Riemann surface describing the phase boundary. The cells come out as the $\sigma=1$ phase of the Ising model in a strong magnetic field. At large circulation around a large loop, the dominant Ising configuration corresponds to minimal-volume cells covering the area inside the loop, which naturally leads to the Area law and exponential PDF tails.
high energy physics theory
A fair assignment of credit for multi-authored publications is a long-standing issue in scientometrics. In the calculation of the $h$-index, for instance, all co-authors receive equal credit for a given publication, independent of a given author's contribution to the work or of the total number of co-authors. Several attempts have been made to distribute the credit in a more appropriate manner. In a recent paper, Hirsch has suggested a new way of credit assignment that is fundamentally different from the previous ones: All credit for a multi-author paper goes to a single author, the so-called ``$\alpha$-author'', defined as the person with the highest current $h$-index (not the highest $h$-index at the time of the paper's publication) (J. E. Hirsch, Scientometrics 118, 673 (2019)). The collection of papers this author has received credit for as $\alpha$-author is then used to calculate a new index, $h_{\alpha}$, following the same recipe as for the usual $h$-index. The objective of this new assignment is not a fairer distribution of credit, but rather the determination of an altogether different property, the degree of a person's scientific leadership. We show that given the complex time dependence of $h$ for individual scientists, the approach of using the current $h$ value instead of the historic one is problematic, and we argue that it would be feasible to determine the $\alpha$-author at the time of the paper's publication instead. On the other hand, there are other practical considerations that make the calculation of the proposed $h_{\alpha}$ very difficult. As an alternative, we explore other ways of crediting papers to a single author in order to test early career achievement or scientific leadership.
computer science
Motivated by the successful interpretation of the observed $P_c$ and $P_{cs}$ states in the meson-baryon molecular picture, we systematically investigate the possible hidden-charm molecular pentaquark states with triple strangeness that arise from the $\Omega_{c}^{(*)}\bar{D}_s^{(*)}$ interactions. We perform a dynamical calculation of the possible hidden-charm molecular pentaquarks with triple strangeness using the one-boson-exchange model, where the $S$-$D$ wave mixing effect and the coupled-channel effect are taken into account. Our results suggest that the $\Omega_{c}\bar D_s^*$ state with $J^P={3}/{2}^{-}$ and the $\Omega_{c}^{*}\bar D_s^*$ state with $J^P={5}/{2}^{-}$ can be recommended as candidates for hidden-charm molecular pentaquarks with triple strangeness. Furthermore, we discuss the two-body hidden-charm strong decay behaviors of these possible hidden-charm molecular pentaquarks with triple strangeness by adopting the quark-interchange model. These predictions are expected to be tested at LHCb, and they constitute a potential research topic as more experimental data are accumulated in the near future.
high energy physics phenomenology
Determining the number of common factors is an important and practical topic in high-dimensional factor models. The existing literature is mainly based on the eigenvalues of the covariance matrix. Due to the incomparability of the eigenvalues of the covariance matrix caused by the heterogeneous scales of the observed variables, it is very difficult to give an accurate relationship between these eigenvalues and the number of common factors. To overcome this limitation, we appeal to the correlation matrix and show, surprisingly, that the number of eigenvalues of the population correlation matrix greater than $1$ is the same as the number of common factors under some mild conditions. To utilize such a relationship, we study the random matrix theory of the sample correlation matrix in order to correct the biases in estimating the top eigenvalues and to take into account estimation errors in eigenvalue estimation. This leads us to propose adjusted correlation thresholding (ACT) for determining the number of common factors in high-dimensional factor models, taking into account the sampling variabilities and biases of the top sample eigenvalues. We also establish the optimality of the proposed method in terms of minimal signal strength and optimal threshold. Simulation studies lend further support to our proposed method and show that our estimator outperforms other competing methods in most of our test cases.
statistics
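As a rough illustration of the idea in the abstract above, the NumPy sketch below counts the eigenvalues of the sample correlation matrix that exceed a threshold near one. The finite-sample adjustment used here is a placeholder assumption, not the exact ACT bias correction or threshold derived in the paper.

```python
import numpy as np

def count_factors_correlation(X, adjust=True):
    """Count sample correlation-matrix eigenvalues above a threshold near 1.
    Illustrative only: the simple sqrt(p/n) inflation below is a placeholder
    for the paper's adjusted correlation thresholding."""
    n, p = X.shape                         # n observations, p variables
    R = np.corrcoef(X, rowvar=False)       # p x p sample correlation matrix
    eigvals = np.linalg.eigvalsh(R)
    threshold = 1.0 + (np.sqrt(p / n) if adjust else 0.0)
    return int(np.sum(eigvals > threshold))

# Toy example: 3 common factors driving 50 heterogeneously scaled variables.
rng = np.random.default_rng(0)
n, p, k = 2000, 50, 3
F = rng.standard_normal((n, k))
L = rng.standard_normal((k, p))
scales = rng.uniform(0.5, 5.0, size=p)     # heterogeneous variable scales
X = (F @ L + rng.standard_normal((n, p))) * scales
print(count_factors_correlation(X))        # expected: 3 for strong factors
```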
The annual temperature cycle of the earth closely follows the annual cycle of solar flux. At temperate latitudes, both driving and response cycles are well described by a strong annual sinusoidal component and a non-vanishing semiannual component. A new analysis of historical weather station records in the United States determines persistent annual and semiannual variation with high precision. Historical annual temperature ranges are consistent with prior studies. Semiannual temperature cycles were much stronger than expected based on the semiannual solar driving. Instead, these cycles were consistent with multiplicative effects of two annual cycles. Our methods provide a quantitative window into the climate's nonlinear response to solar driving, which is of potential value in testing climate models.
physics
We present an analysis of the chemical abundance properties of $\approx$650 star-forming galaxies at $z \approx0.6-1.8$. Using integral-field observations from the $K$-band Multi-Object Spectrograph (KMOS), we quantify the [NII]/H$\alpha$ emission-line ratio, a proxy for the gas-phase Oxygen abundance within the interstellar medium. We define the stellar mass-metallicity relation at $z \approx0.6-1.0$ and $z \approx1.2-1.8$ and analyse the correlation between the scatter in the relation and fundamental galaxy properties (e.g. H$\alpha$ star-formation rate, H$\alpha$ specific star-formation rate, rotation dominance, stellar continuum half-light radius and Hubble-type morphology). We find that for a given stellar mass, more highly star-forming, larger and irregular galaxies have lower gas-phase metallicities, which may be attributable to their lower surface mass densities and the higher gas fractions of irregular systems. We measure the radial dependence of gas-phase metallicity in the galaxies, establishing a median, beam smearing-corrected, metallicity gradient of $ \Delta Z/ \Delta R=0.002 \pm0.004$ dex kpc$^{-1}$, indicating on average there is no significant dependence on radius. The metallicity gradient of a galaxy is independent of its rest-frame optical morphology, whilst correlating with its stellar mass and specific star-formation rate, in agreement with an inside-out model of galaxy evolution, as well as its rotation dominance. We quantify the evolution of metallicity gradients, comparing the distribution of $\Delta Z/ \Delta R$ in our sample with numerical simulations and observations at $z \approx0-3$. Galaxies in our sample exhibit flatter metallicity gradients than local star-forming galaxies, in agreement with numerical models in which stellar feedback plays a crucial role redistributing metals.
astrophysics
We investigate in depth the relation between the first detection time of an isolated quantum system that is repeatedly perturbed by strong local measurements with a large fixed frequency $1/\tau$, determining whether it is in some given state $| \psi_\text{d} \rangle$, and the time of absorption to the same state of the same system with the added imaginary potential $2i\hbar | \psi_\text{d} \rangle \langle \psi_\text{d} | / \tau$. As opposed to previous works, we compare directly the solutions of both problems in the small $\tau$, i.e., Zeno, limit. We find a scaling collapse in $F(t)$ with respect to $\tau$ and compute the total detection probability as well as the moments of the first detection time probability density $F(t)$ in the Zeno limit. We show that both solutions approach the same result in this small $\tau$ limit, as long as the initial state $| \psi_\text{in} \rangle$ is not parallel to the detection state, i.e. as long as $| \langle \psi_\text{d} | \psi_\text{in} \rangle | < 1$. However, when this condition is violated, the small probability density to detect the state on time scales much larger than $\tau$ is precisely a factor of four different for all such times. We express the solution of the Zeno limit of both problems formally in terms of an electrostatic analogy. Our results are corroborated with numerical simulations.
quantum physics
We study the photoproduction of exclusive $2\pi^+2\pi^-$ mesons in ultra-peripheral heavy-ion collisions at RHIC and LHC energies. Predictions for photon-nucleus interactions are calculated for various resonances at central and forward rapidities. The recent H1 preliminary data are utilized to improve the description of the poorly known $\gamma p \to 4\pi^\pm p$ process. We present comparisons of our results to the available STAR data at RHIC, and make predictions for LHC energies.
high energy physics phenomenology
This study addresses the accurate detection of internal faults and the classification of transients in a 5-bus interconnected system for Phase Angle Regulators (PAR) and Power Transformers. The analysis prevents mal-operation of differential relays in the case of transients other than faults, which include magnetizing inrush, sympathetic inrush, external faults with CT saturation, capacitor switching, non-linear load switching, and ferroresonance. A gradient boosting classifier (GBC) is used to distinguish the internal faults from the transient disturbances based on 1.5 cycles of 3-phase differential currents registered by a change detector. After the detection of an internal fault, GBCs are used to locate the faulty unit (Power Transformer, PAR series, or exciting unit) and identify the type of fault. In case a transient disturbance is detected, another GBC classifies it as one of the six transient disturbances. The five most relevant frequency- and time-domain features, obtained using Information Gain, are used to train and test the classifiers. The proposed algorithm distinguishes the internal faults from the other transients with a balanced accuracy of 99.95%. The faulty transformer unit is located with a balanced accuracy of 99.5%, and the different transient disturbances are identified with a balanced accuracy of 99.3%. Moreover, the reliability of the scheme is verified for different ratings and connections of the transformers involved, CT saturation, and noise levels in the signals. These GBC classifiers can work together with a conventional differential relay and offer supervisory control over its operation. PSCAD/EMTDC software is used for simulation of the transients and to develop the two- and three-winding transformer models for creating the internal faults, including inter-turn and inter-winding faults.
electrical engineering and systems science
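A minimal scikit-learn sketch of the two-stage recipe in the abstract above is given below: select the few most informative features with a mutual-information criterion (used here as a stand-in for Information Gain) and train a gradient boosting classifier on them. The feature matrix is a synthetic placeholder and the hyperparameters are illustrative assumptions; in the paper the features come from PSCAD/EMTDC-simulated differential currents.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical feature matrix: each row is a 1.5-cycle window of 3-phase
# differential currents summarized by time/frequency-domain features.
rng = np.random.default_rng(1)
X = rng.standard_normal((600, 20))
y = rng.integers(0, 2, size=600)   # 0: internal fault, 1: other transient
# (Replace the synthetic X, y above with features extracted from simulations.)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: keep the 5 most relevant features (mutual information ~ information gain).
# Stage 2: gradient boosting classifier to separate faults from other transients.
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=5),
    GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0),
)
clf.fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```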
We study semi-dynamical systems associated to delay differential equations. We give a simple criterion to obtain weak and strong persistence and provide sufficient conditions to guarantee uniform persistence. Moreover, we show the existence of non-trivial $T$-periodic solutions via topological degree techniques. Finally, we prove that, in some sense, the conditions are also necessary.
mathematics
We present numerical simulations, using two complementary setups, of rotating Boussinesq thermal convection in a three-dimensional Cartesian geometry with misaligned gravity and rotation vectors. This model represents a small region at a non-polar latitude in the convection zone of a star or planet. We investigate the effects of rotation on the bulk properties of convection at different latitudes, focusing on determining the relation between the heat flux and temperature gradient. We show that our results may be interpreted using rotating mixing length theory (RMLT). The simplest version of RMLT (due to Stevenson) considers the single mode that transports the most heat. This works reasonably well in explaining our results, but there is a systematic departure from these predictions (up to approximately $30\%$ in the temperature gradient) at mid-latitudes. We develop a more detailed treatment of RMLT that includes the transport afforded by multiple modes, and we show that this accounts for most of the systematic differences. We also show that convectively-generated zonal flows and meridional circulations are produced in our simulations, and that their properties depend strongly on the dimensions of the box. These flows also affect the heat transport, contributing to departures from RMLT at some latitudes. However, we find the theoretical predictions of the multi-mode theory for the mid-layer temperature gradient, the root-mean-square (RMS) vertical velocity, the RMS temperature fluctuation, and the spatial spectrum of the heat transport at different latitudes, are all in reasonably good agreement with our numerical results when zonal flows are small.
astrophysics
LaCrGe$_3$ is an itinerant ferromagnet with a Curie temperature of $T_{\rm c}$ = 85 K and exhibits an avoided ferromagnetic quantum critical point under pressure through a modulated antiferromagnetic phase, as well as a tri-critical wing structure in its temperature-pressure-magnetic field ($T$-$p$-$H$) phase diagram. In order to understand the static and dynamical magnetic properties of LaCrGe$_3$, we carried out $^{139}$La nuclear magnetic resonance (NMR) measurements. Based on the analysis of the NMR data using the self-consistent renormalization (SCR) theory, the spin fluctuations in the paramagnetic state are revealed to be isotropic ferromagnetic and three dimensional (3D) in nature. Moreover, the system is found to follow the generalized Rhodes-Wohlfarth relation, which is expected in 3D itinerant ferromagnetic systems. As compared to other similar itinerant ferromagnets, the Cr 3$d$ electrons and their spin fluctuations are characterized by a relatively high degree of localization in real space.
condensed matter
The speed of sound waves in gases and liquids is governed by medium compressibility. There exists another type of non-dispersive wave whose speed depends on stress instead of medium elasticity. A well-known example is the Alfven wave, which propagates, with a speed determined by the magnetic tension, in a plasma permeated by a magnetic field. Later, an elastic analog of the Alfven wave was predicted in a flow of dilute polymer solution, where the elastic stress engendered by polymer stretching determines the elastic wave speed. Here, we present quantitative evidence of elastic Alfven waves observed in the elastic turbulence of a viscoelastic creeping flow between two obstacles hindering a channel flow. The key finding in the experimental proof is a nonlinear dependence of the elastic wave speed $c_{\mathrm{el}}$ on the Weissenberg number $\mathrm{Wi}$, which deviates from the prediction based on a model of linear polymer elasticity.
physics
Let $R$ be a discrete valuation ring, and $K$ its fraction field. In 1967, Raynaud initiated the notion of maximal $R$-model for torsors over $K$, and it was further developed by Lewin-M\'en\'egaux. In this paper, motivated by a conjectural ramification theory for infinitesimal torsors, we investigate this notion of maximal model in greater detail. We prove the maximality, the compatibility along inductions, and an existence result for group schemes of semi-direct products.
mathematics
We investigate R-optimal designs for multi-response regression models with multiple factors, where the random errors in these models are correlated. Several theoretical results are derived for R-optimal designs, including scale invariance, reflection symmetry, line and plane symmetry, and dependence on the covariance matrix of the errors. All the results can be applied to linear and nonlinear models. In addition, an efficient algorithm based on an interior point method is developed for finding R-optimal designs on discrete design spaces. The algorithm is very flexible, and can be applied to any multi-response regression model.
statistics
We study the thermal evolution of neutron stars described within the equation of state with induced surface tension (IST), which reproduces the properties of normal nuclear matter, fulfills the proton flow constraint, provides a high-quality description of the hadron multiplicities created in nuclear-nuclear collision experiments, and is equally compatible with the constraints from astrophysical observations and the GW170817 event. The model features strong direct Urca processes for stars above $1.91~M_{\odot}$. The IST equation of state shows very good agreement with the available cooling data, even without introducing nuclear pairing. We also analysed the effect of singlet proton/neutron and triplet neutron pairing on the cooling of neutron stars of different masses. We show that the description of the compact object in the center of Cassiopeia A does not necessarily require the inclusion of neutron superfluidity and/or proton superconductivity. Our results indicate that the data of Cassiopeia A can be adequately reproduced by a $1.66~M_{\odot}$ star with an atmosphere of light elements. Moreover, the IST EoS reproduces each of the observational datasets for the surface temperature of Cassiopeia A either by a rapidly cooling $\sim 1.955~M_{\odot}$ star with paired and unpaired matter or by a $1.91~M_{\odot}$ star with the inclusion of neutron and proton pairing in the singlet channel.
astrophysics
Social recommendation is effective in improving recommendation performance by leveraging social relations from online social networking platforms. Social relations among users provide friends' information for modeling users' interest in candidate items and help items reach potential consumers (i.e., item attraction). However, two issues have not been well studied. First, for user interests, existing methods typically aggregate friends' information contextualized on the candidate item only, and this shallow context-aware aggregation makes them suffer from limited friends' information. Second, for item attraction, if an item's past consumers are friends of the targeted user, or have consumption habits similar to the targeted user's, the item may be more attractive to the targeted user, but most existing methods neglect such relation-enhanced context-aware item attraction. To address the above issues, we propose DICER (Dual Side Deep Context-aware Modulation for Social Recommendation). Specifically, we first propose a novel graph neural network to model the social relation and collaborative relation, and on top of high-order relations, a dual-side deep context-aware modulation is introduced to capture the friends' information and item attraction. Empirical results on two real-world datasets show the effectiveness of the proposed model, and further experiments are conducted to help understand how the dual context-aware modulation works.
computer science
In this paper, we address the problem of direction finding using coprime array, which is one of the most preferred sparse array configurations. Motivated by the fact that non-uniform element spacing hinders full utilization of the underlying information in the receive signals, we propose a direction-of-arrival (DoA) estimation algorithm based on low-rank reconstruction of the Toeplitz covariance matrix. The atomic-norm representation of the measurements from the interpolated virtual array is considered, and the equivalent dual-variable rank minimization problem is formulated and solved using a cyclic optimization approach. The recovered covariance matrix enables the application of conventional subspace-based spectral estimation algorithms, such as MUSIC, to achieve enhanced DoA estimation performance. The estimation performance of the proposed approach, in terms of the degrees-of-freedom and spatial resolution, is examined. We also show the superiority of the proposed method over the competitive approaches in the root-mean-square error sense.
electrical engineering and systems science
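To make the last step of the abstract above concrete, the sketch below runs MUSIC on a Toeplitz-structured covariance estimate for a plain half-wavelength uniform linear array. The Toeplitz structure is imposed here by simple diagonal averaging as a stand-in; the coprime-array interpolation and the atomic-norm/dual-variable rank minimization of the proposed method are not implemented, so this only illustrates the subspace-estimation stage under those assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import find_peaks

def music_spectrum(R, n_sources, grid):
    """MUSIC pseudospectrum from a (reconstructed) covariance matrix R,
    assuming a half-wavelength uniform linear array."""
    M = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]            # noise subspace
    P = np.empty(len(grid))
    for i, theta in enumerate(grid):
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
        P[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return P

# Toy data: 10-element ULA, two far-field sources, 500 snapshots.
rng = np.random.default_rng(0)
M, T = 10, 500
doas = np.deg2rad([-20.0, 15.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + N
R_hat = X @ X.conj().T / T

# Impose Hermitian Toeplitz structure by diagonal averaging -- a simple
# surrogate for the low-rank Toeplitz covariance reconstruction in the paper.
first_col = np.array([np.diagonal(R_hat, -k).mean() for k in range(M)])
R_toep = toeplitz(first_col)                    # second argument defaults to conj(first_col)

grid = np.deg2rad(np.linspace(-60, 60, 721))
P = music_spectrum(R_toep, n_sources=2, grid=grid)
peaks, _ = find_peaks(P)
top = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DoAs (deg):", np.sort(np.rad2deg(grid[top])))
```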
The recent success of question answering systems is largely attributed to pre-trained language models. However, as language models are mostly pre-trained on general domain corpora such as Wikipedia, they often have difficulty in understanding biomedical questions. In this paper, we investigate the performance of BioBERT, a pre-trained biomedical language model, in answering biomedical questions including factoid, list, and yes/no type questions. BioBERT uses almost the same structure across various question types and achieved the best performance in the 7th BioASQ Challenge (Task 7b, Phase B). BioBERT pre-trained on SQuAD or SQuAD 2.0 easily outperformed previous state-of-the-art models. BioBERT obtains the best performance when it uses the appropriate pre-/post-processing strategies for questions, passages, and answers.
computer science
We introduce the notion of compatibility dimension for a set of quantum measurements: it is the largest dimension of a Hilbert space on which the given measurements are compatible. In the Schr\"odinger picture, this notion corresponds to testing compatibility with ensembles of quantum states supported on a subspace, using the incompatibility witnesses of Carmeli, Heinosaari, and Toigo. We provide several bounds for the compatibility dimension, using approximate quantum cloning or algebraic techniques inspired by quantum error correction. We analyze in detail the case of two orthonormal bases, and, in particular, that of mutually unbiased bases.
quantum physics
The thermodynamic properties of quantum heat engines are stochastic owing to the presence of thermal and quantum fluctuations. We here experimentally investigate the efficiency and nonequilibrium entropy production statistics of a spin-1/2 quantum Otto cycle. We first study the correlations between work and heat within a cycle by extracting their joint distribution for different driving times. We show that near perfect anticorrelation, corresponding to the tight-coupling condition, can be achieved. In this limit, the reconstructed efficiency distribution is peaked at the macroscopic efficiency and fluctuations are strongly suppressed. We further test the second law in the form of a joint fluctuation relation for work and heat. Our results characterize the statistical features of a small-scale thermal machine in the quantum domain and provide means to control them.
quantum physics
We initiate the study of applications of machine learning to Seiberg duality, focusing on the case of quiver gauge theories, a problem also of interest in mathematics in the context of cluster algebras. Within the general theme of Seiberg duality, we define and explore a variety of interesting questions, broadly divided into the binary determination of whether a pair of theories picked from a series of duality classes are dual to each other, as well as the multi-class determination of the duality class to which a given theory belongs. We study how the performance of machine learning depends on several variables, including number of classes and mutation type (finite or infinite). In addition, we evaluate the relative advantages of Naive Bayes classifiers versus Convolutional Neural Networks. Finally, we also investigate how the results are affected by the inclusion of additional data, such as ranks of gauge/flavor groups and certain variables motivated by the existence of underlying Diophantine equations. In all questions considered, high accuracy and confidence can be achieved.
high energy physics theory
[ABRIDGED] The purpose of this work is to evaluate how several elements produced by different nucleosynthesis processes behave with stellar age and to provide empirical relations to derive stellar ages from chemical abundances. We derive different sets of ages using Gaia parallaxes for a sample of more than 1000 FGK dwarf stars for which we have spectra from the HARPS-GTO program. We analyze the temporal evolution of different abundance ratios to find the best chemical clocks. We find that the [$\alpha$/Fe] ratio (average of Mg, Si and Ti), [O/Fe] and [Zn/Fe] are good age proxies with a lower dispersion than the age-metallicity dispersion. Several abundance ratios present a significant correlation with age for chemically separated thin disk stars (i.e. low-$\alpha$), but in the case of the chemically defined thick disk stars (i.e. high-$\alpha$) only the elements Mg, Si, Ca and TiII show a clear correlation with age. We find that the thick disk stars are more enriched in light-s elements than thin disk stars of similar age. The maximum enrichment of s-process elements in the thin disk occurs in the youngest stars, which in turn have solar metallicity. The slopes of the [X/Fe]-age relations are quite constant for O, Mg, Si, Ti, Zn, Sr and Eu regardless of the metallicity. However, this is not the case for Al, Ca, Cu and most of the s-process elements, which display very different trends depending on the metallicity. This demonstrates the limitations of using simple linear relations based on certain abundance ratios to obtain ages for stars of different metallicities. Finally, we show that by using 3D relations with a chemical clock and two stellar parameters (either Teff, [Fe/H] or stellar mass) we can explain up to 89% of the age variance of a star. A similar result is obtained when using 2D relations with a chemical clock and one stellar parameter, with up to 87% of the variance explained.
astrophysics
We report the experimental implementation of the Dicke model in the semiclassical approximation, which describes a large number of two-level atoms interacting with a single-mode electromagnetic field in a perfectly reflecting cavity. This is managed by making use of two non-linearly coupled active, synthetic LC circuits, implemented by means of analog electrical components. The simplicity and versatility of our platform allows us not only to experimentally explore the coexistence of regular and chaotic trajectories in the Dicke model but also to directly observe the so-called ground-state and excited-state ``quantum'' phase transitions. In this analysis, the trajectories in phase space, Lyapunov exponents and the recently introduced Out-of-Time-Order-Correlator (OTOC) are used to identify the different operating regimes of our electronic device. Exhaustive numerical simulations are performed to show the quantitative and qualitative agreement between theory and experiment.
quantum physics
The response of the antenna is a source of uncertainty in measurements with the Experiment to Detect the Global EoR Signature (EDGES). We aim to validate the beam model of the low-band (50-100 MHz) dipole antenna with comparisons between models and against data. We find that simulations of a simplified model of the antenna over an infinite perfectly conducting ground plane are, with one exception, robust to changes of numerical electromagnetic solver code or algorithm. For simulations of the antenna with the actual finite ground plane and realistic soil properties, we find that two out of three numerical solvers agree well. Applying our analysis pipeline to a simulated driftscan observation from an early EDGES low-band instrument that had a 10 m $\times$ 10 m ground plane, we find residual levels after fitting and removing a five-term foreground model to data binned in Local Sidereal Time (LST) average about 250 mK with $\pm$40 mK variation between numerical solvers. A similar analysis of the primary 30 m $\times$ 30 m sawtooth ground plane reduced the LST-averaged residuals to about 90 mK with $\pm$10 mK between the two viable solvers. More broadly we show that larger ground planes generally perform better than smaller ground planes. Simulated data have a power which is within 4$\%$ of real observations, a limitation of net accuracy of the sky and beam models. We observe that residual spectral structures after foreground model fits match qualitatively between simulated data and observations, suggesting that the frequency dependence of the beam is reasonably represented by the models. We find that soil conductivity of 0.02 Sm$^{-1}$ and relative permittivity of 3.5 yield good agreement between simulated spectra and observations. This is consistent with the soil properties reported by Sutinjo et al. (2015) for the Murchison Radio-astronomy Observatory, where EDGES is located.
astrophysics
The single-scatter approximation is fundamental in many tomographic imaging problems including x-ray scatter imaging and optical scatter imaging for certain media. In all cases, noisy measurements are affected by both local scatter events and nonlocal attenuation. Prior works focus on reconstructing one of two images: scatter density or total attenuation. However, both images are media specific and useful for object identification. Nonlocal effects of the attenuation image on the data are summarized by the broken ray transform (BRT). While analytic inversion formulas exist, poor conditioning of the inverse problem is only exacerbated by noisy measurements and sampling errors. This has motivated interest in the related star transforms incorporating BRT measurements from multiple source-detector pairs. However, all analytic methods operate on the log of the data. For media comprising regions with no scatter a new approach is required. We are the first to present a joint estimation algorithm based on Poisson data models for a single-scatter measurement geometry. Monotonic reduction of the log-likelihood function is guaranteed for our iterative algorithm while alternating image updates. We also present a fast algorithm for computing the discrete BRT forward operator. Our generalized approach can incorporate both transmission and scatter measurements from multiple source-detector pairs. Transmission measurements resolve low-frequency ambiguity in the joint image estimation problem, while multiple scatter measurements resolve the attenuation image. The benefits of joint estimation, over single-image estimation, vary with problem scaling. Our results quantify these benefits and should inform design of future acquisition systems.
electrical engineering and systems science
In this paper, we prove the absence of temperature chaos for the two-dimensional discrete Gaussian free field using the convergence of the full extremal process, which has been obtained recently by Biskup and Louidor. This means that the overlap of two points chosen under Gibbs measures at different temperatures has a nontrivial distribution. Whereas this distribution is the same as for the random energy model when the two points are sampled at the same temperature, we point out here that they are different when the temperatures are distinct: more precisely, we prove that the mean overlap of two points chosen under Gibbs measures at different temperatures for the DGFF is strictly smaller than that of the REM. Therefore, although neither of these models exhibits temperature chaos, one could say that the DGFF is more chaotic in temperature than the REM.
mathematics
In perturbative amplitudes in quantum field theory and string field theory, Cutkosky rule expresses the anti-hermitian part of a Feynman diagram in terms of sum over all its cut diagrams, and this in turn is used to prove unitarity of the theory. For D-instanton contribution to a string theory amplitude, the cutting rule needed for the proof of unitarity is somewhat different; we need to sum over only those cut diagrams for which all the world-sheet boundaries ending on some particular D-instanton lie on the same side of the cut. By working with the closed string effective action, obtained after integrating out the open string modes, we prove that the D-instanton amplitudes actually satisfy these cutting rules, provided the effective action is real. The violation of unitarity in the closed string sector of two dimensional string theory can be traced to the failure of this reality condition. In the critical superstring theory, multi-instanton and multi anti-instanton amplitudes satisfy the reality condition. Contribution to the amplitudes from the instanton anti-instanton sector satisfies the reality condition if we make a specific choice of integration cycle over the configuration space of string fields, whereas contribution due to the non-BPS D-instantons will need to either vanish or have an overall real normalization in order for it to give real contribution. We use Picard-Lefschetz theory to argue that these conditions are indeed satisfied in superstring theories.
high energy physics theory
High-multiplicity pp collisions at Large Hadron Collider (LHC) energies have acquired special importance in view of the Underlying Event (UE) observables. Recent LHC results, such as long-range angular correlations, flow-like patterns, and strangeness enhancement in high-multiplicity events, are not yet completely understood. In this context, understanding the multiplicity dependence of J/$\psi$ production is highly desirable. Transverse spherocity, an event shape variable, helps to investigate particle production by isolating the hard and soft components. In the present study, we investigate the multiplicity dependence of J/$\psi$ production at mid-rapidity and forward rapidity through a transverse spherocity analysis, and try to understand the role of jets by separating the isotropic and jetty events from the minimum bias collisions. We analyze J/$\psi$ production at mid-rapidity and forward rapidity via the dielectron and dimuon channels, respectively, using the 4C-tuned PYTHIA8 event generator. The analysis is performed at two different center-of-mass energies, $\sqrt{s}$ = 5.02 and 13 TeV, to study the energy dependence of the jet contribution to the multiplicity dependence of J/$\psi$ production. Furthermore, we study the production dynamics through the dependence of thermodynamic parameters on event multiplicity and transverse spherocity.
high energy physics phenomenology
Quantum machine learning (QML) can complement the growing trend of using learned models for a myriad of classification tasks, from image recognition to natural speech processing. A quantum advantage arises due to the intractability of quantum operations on a classical computer. Many datasets used in machine learning are crowdsourced or contain some private information. To the best of our knowledge, no current QML models are equipped with privacy-preserving features, which raises concerns, as it is paramount that models do not expose sensitive information. Thus, privacy-preserving algorithms need to be implemented with QML. One solution is to make the machine learning algorithm differentially private, meaning the effect of a single data point on the training dataset is minimized. Differentially private machine learning models have been investigated, but differential privacy has yet to be studied in the context of QML. In this study, we develop a hybrid quantum-classical model that is trained to preserve privacy using a differentially private optimization algorithm. This marks the first proof-of-principle demonstration of privacy-preserving QML. The experiments demonstrate that differentially private QML can protect user-sensitive information without diminishing model accuracy. Although the quantum model is simulated and tested on a classical computer, it demonstrates potential to be efficiently implemented on near-term quantum devices (noisy intermediate-scale quantum [NISQ]). The approach's success is illustrated via the classification of spatially classed two-dimensional datasets and a binary MNIST classification. This implementation of privacy-preserving QML will ensure confidentiality and accurate learning on NISQ technology.
quantum physics
Through magneto-transport measurements and analysis of the observed Shubnikov-de Haas oscillations in (010) (AlxGa1-x)2O3/Ga2O3 heterostructures, spin-splitting of the Landau levels in the (010) Ga2O3 two-dimensional electron gas (2DEG) has been studied. The analysis indicates that the spin-splitting results from the Zeeman effect. By fitting both the first and second harmonics of the oscillations as a function of magnetic field, we determine the magnitude of the Zeeman splitting to be 0.4$\hbar\omega_c$, with a corresponding effective g-factor of 2.7, for magnetic field perpendicular to the 2DEG.
condensed matter
The Lieb-Robinson theorem states that information propagates with a finite velocity in quantum systems on a lattice with nearest-neighbor interactions. What are the speed limits on information propagation in quantum systems with power-law interactions, which decay as $1/r^\alpha$ at distance $r$? Here, we present a definitive answer to this question for all exponents $\alpha>2d$ and all spatial dimensions $d$. Schematically, information takes time at least $r^{\min\{1, \alpha-2d\}}$ to propagate a distance~$r$. As recent state transfer protocols saturate this bound, our work closes a decades-long hunt for optimal Lieb-Robinson bounds on quantum information dynamics with power-law interactions.
quantum physics
In this note we consider smooth elliptic Calabi-Yau four-folds whose fiber ceases to be flat over compact Riemann surfaces of genus $g$ in the base. These non-flat fibers contribute Kaehler moduli to the four-fold but also add to the three-form cohomology for $g>0$. In F-/M-theory these sectors are to be interpreted as compactifications of six/five dimensional $\mathcal{N}=(1,0)$ superconformal matter theories. The three-form cohomology leads to additional chiral singlets proportional to the dimension of the five dimensional Coulomb branch of those sectors. We construct explicit examples for E-string theories as well as higher rank cases. For the E-string theories we further investigate conifold transitions that remove those non-flat fibers. First, we show how non-flat fibers can be deformed from curves down to isolated points in the base. This removes the chiral singlet of the three-forms and leads to non-perturbative four-point couplings among matter fields, which can be understood as remnants of the former E-string. Alternatively, the non-flat fibers can be avoided by performing birational base changes, analogous to 6D tensor branches. For compact bases these transitions alter all Hodge numbers but leave the Euler number invariant.
high energy physics theory
The Coulomb drag effect arises due to electron-electron interactions, when two metallic conductors are placed in close vicinity to each other. It manifests itself as a charge current or voltage drop induced in one of the conductors, if the current flows through the second one. Often it can be interpreted as an effect of rectification of the non-equilibrium $quantum$ noise of current. Here, we investigate the Coulomb drag effect in mesoscopic electrical circuits and show that it can be mediated by $classical$ fluctuations of the circuit collective mode. Moreover, by considering this phenomenon in the context of the full counting statistics of charge transport we demonstrate that not only the noise power, but also the third cumulant of current may contribute to the drag current. We discuss the situations, where this contribution becomes dominant.
condensed matter
Let $A$ be a real $n\times n$ matrix and $z,b\in \mathbb R^n$. The piecewise linear equation system $z-A\vert z\vert = b$ is called an \textit{absolute value equation}. We consider two solvers for this problem, one direct, one semi-iterative, and extend their previously known ranges of convergence.
mathematics
The novel coronavirus disease (COVID-19), which started in Wuhan, China in December 2019, continues to spread rapidly, affecting the whole world. It is essential to have a highly sensitive diagnostic screening tool to detect the disease as early as possible. Currently, chest CT imaging is preferred as the primary screening tool for evaluating COVID-19 pneumonia by radiological imaging. However, CT imaging requires larger radiation doses, longer exposure times, and higher cost, and may suffer from patient movement. X-ray imaging is fast, cheap, more patient-friendly, and available in almost every healthcare facility. Therefore, we have focused on X-ray images and developed an end-to-end deep learning model, Ensemble-CVDNet, to distinguish COVID-19 pneumonia from non-COVID pneumonia and healthy cases in this work. The proposed model is based on a combination of three lightweight pre-trained models, SqueezeNet, ShuffleNet, and EfficientNet-B0, at different depths, and combines feature maps at different abstraction levels. In the proposed end-to-end model, the networks are used as parallel feature extractors after fine-tuning, and some additional layers are added on top of them. The proposed model is evaluated on the COVID-19 Radiography Database, a public dataset consisting of 219 COVID-19, 1341 healthy, and 1345 viral pneumonia chest X-ray images. Experimental results show that our lightweight Ensemble-CVDNet model achieves 98.30% accuracy, 97.78% sensitivity, and a 97.61% F1 score using only 5.62M parameters. Moreover, it takes about 10 ms to process and predict an X-ray image using the proposed method on a mid-level GPU. We believe that the method proposed in this study can be a helpful diagnostic screening tool for radiologists in the early diagnosis of the disease.
electrical engineering and systems science
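The PyTorch sketch below illustrates the general idea described in the abstract above: three lightweight pretrained backbones used as parallel feature extractors whose pooled features are concatenated and fed to a small classification head. The layer choices, feature dimensions, dropout, and head are assumptions made for illustration and do not reproduce the exact Ensemble-CVDNet architecture or its training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

class EnsembleFeatureNet(nn.Module):
    """Three lightweight pretrained backbones as parallel feature extractors;
    pooled features are concatenated and classified by a small head
    (COVID-19 / viral pneumonia / healthy).  Illustrative sketch only."""
    def __init__(self, n_classes=3):
        super().__init__()
        # weights="DEFAULT" downloads ImageNet weights; use weights=None offline.
        self.squeeze = models.squeezenet1_0(weights="DEFAULT").features   # 512 maps
        shuffle = models.shufflenet_v2_x1_0(weights="DEFAULT")
        shuffle.fc = nn.Identity()                                         # 1024-dim
        self.shuffle = shuffle
        efficient = models.efficientnet_b0(weights="DEFAULT")
        efficient.classifier = nn.Identity()                               # 1280-dim
        self.efficient = efficient
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(nn.Dropout(0.3),
                                  nn.Linear(512 + 1024 + 1280, n_classes))

    def forward(self, x):
        f1 = self.pool(self.squeeze(x)).flatten(1)
        f2 = self.shuffle(x)
        f3 = self.efficient(x)
        return self.head(torch.cat([f1, f2, f3], dim=1))

model = EnsembleFeatureNet()
logits = model(torch.randn(2, 3, 224, 224))   # dummy batch of chest X-ray tensors
print(logits.shape)                           # torch.Size([2, 3])
```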
Recent years have witnessed growing interest in reduced cost radar systems operating with low power. Multiple-input multiple-output (MIMO) radar technology is known to achieve high performance sensing by probing with multiple orthogonal waveforms. However, implementing a low cost low power MIMO radar is challenging. One of the reasons for this difficulty stems from the increased cost and power consumption required by analog-to-digital convertors (ADCs) in acquiring the multiple waveforms at the radar receiver. In this work we study reduced cost MIMO radar receivers restricted to operate with low resolution ADCs. We design bit-limited MIMO radar (BiLiMO) receivers which are capable of accurately recovering their targets while operating under strict resolution constraints. This is achieved by applying an additional analog filter to the acquired waveforms, and designing the overall hybrid analog-digital system to facilitate target identification using task-based quantization methods. In particular, we exploit the fact that the target parameters can be recovered from a compressed representation of the received waveforms. We thus tune the acquisition system to recover this representation with minimal distortion, from which the targets can be extracted in digital, and characterize the achievable error in identifying the targets. Our numerical results demonstrate that the proposed BiLiMO receiver operating with a bit budget of one bit per sample achieves target recovery performance which approaches that of costly MIMO radars operating with unlimited resolution ADCs, while substantially outperforming MIMO receivers operating only in the digital domain under the same bit limitations.
electrical engineering and systems science
This work considers methods for imposing sparsity in Bayesian regression with applications in nonlinear system identification. We first review automatic relevance determination (ARD) and analytically demonstrate the need for additional regularization or thresholding to achieve sparse models. We then discuss two classes of methods, regularization based and thresholding based, which build on ARD to learn parsimonious solutions to linear problems. In the case of orthogonal covariates, we analytically demonstrate favorable performance with regard to learning a small set of active terms in a linear system with a sparse solution. Several example problems are presented to compare the proposed methods with ARD, in terms of advantages and limitations, in bases with hundreds of elements. The aim of this paper is to analyze and understand the assumptions that lead to several algorithms and to provide theoretical and empirical results so that the reader may gain insight and make more informed choices regarding sparse Bayesian regression.
statistics
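As a minimal illustration of the thresholding-based route discussed above, the following sketch runs ARD, hard-thresholds the small coefficients, and refits on the selected support. The threshold value and the refit step are assumptions for the example, not the specific algorithms analyzed in the paper.

```python
# Minimal sketch: ARD alone tends to leave small nonzero weights, so a hard
# threshold followed by a refit yields a sparse model.
import numpy as np
from sklearn.linear_model import ARDRegression, LinearRegression

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[3, 17, 42]] = [1.5, -2.0, 0.7]             # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

ard = ARDRegression().fit(X, y)                   # Bayesian ARD regression
active = np.abs(ard.coef_) > 0.1                  # hard threshold (assumed value)
refit = LinearRegression().fit(X[:, active], y)   # refit on the selected support

print("selected terms:", np.flatnonzero(active))
print("refit coefficients:", refit.coef_)
```

The same pattern extends to nonlinear system identification by letting the columns of X be candidate basis functions (polynomials, trigonometric terms, etc.) evaluated on the measured states.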
In many settings it is important for one to be able to understand why a model made a particular prediction. In NLP this often entails extracting snippets of an input text `responsible for' the corresponding model output; when such a snippet comprises tokens that indeed informed the model's prediction, it is a faithful explanation. In some settings, faithfulness may be critical to ensure transparency. Lei et al. (2016) proposed a model that produces faithful rationales for neural text classification by defining independent snippet-extraction and prediction modules. However, the discrete selection over input tokens performed by this method complicates training, leading to high variance and requiring careful hyperparameter tuning. We propose a simpler variant of this approach that provides faithful explanations by construction. In our scheme, named FRESH, arbitrary feature-importance scores (e.g., gradients from a trained model) are used to induce binary labels over token inputs, which an extractor can be trained to predict. An independent classifier module is then trained exclusively on snippets provided by the extractor; these snippets thus constitute faithful explanations, even if the classifier is arbitrarily complex. In both automatic and manual evaluations we find that variants of this simple framework yield predictive performance superior to `end-to-end' approaches, while being more general and easier to train. Code is available at https://github.com/successar/FRESH
computer science
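A schematic sketch of the FRESH pipeline described above follows. The model objects and their methods (`support_model.token_scores`, `extractor.fit`/`extract`) and the top-fraction binarization rule are hypothetical placeholders; only the extract-then-classify structure, which makes the snippets faithful by construction, follows the abstract.

```python
# Schematic sketch of the FRESH extract-then-classify pipeline; model interfaces
# are hypothetical placeholders, not an actual library API.
from typing import List

def binarize_scores(scores: List[float], keep_fraction: float = 0.2) -> List[int]:
    """Keep the top fraction of tokens by importance as the pseudo-rationale."""
    k = max(1, int(len(scores) * keep_fraction))
    cutoff = sorted(scores, reverse=True)[k - 1]
    return [1 if s >= cutoff else 0 for s in scores]

def fresh_pipeline(texts, labels, support_model, extractor, classifier):
    # (1)-(2) importance scores from a trained support model, binarized per document
    rationale_labels = [binarize_scores(support_model.token_scores(t)) for t in texts]
    # (3) the extractor learns to predict the binary rationale mask from raw text
    extractor.fit(texts, rationale_labels)
    # (4) the classifier sees only the extracted snippets, so its inputs are
    #     faithful explanations by construction
    snippets = [extractor.extract(t) for t in texts]
    classifier.fit(snippets, labels)
    return extractor, classifier
```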
This Science White Paper, prepared in response to the ESA Voyage 2050 call for long-term mission planning, aims to describe the various science possibilities that can be realized with an L-class space observatory that is dedicated to the study of the interactions of cosmic microwave background (CMB) photons with the cosmic web. Our aim is specifically to use the CMB as a backlight -- and survey the gas, total mass, and stellar content of the entire observable Universe by means of analyzing the spatial and spectral distortions imprinted on it. These distortions result from two major processes that act on CMB photons: scattering by electrons (the Sunyaev-Zeldovich effect in its diverse forms, Rayleigh scattering, resonant scattering) and deflection by gravitational potentials (the lensing effect). Even though the list of topics collected in this White Paper is not exhaustive, it helps to illustrate the exceptional diversity of major scientific questions that can be addressed by a space mission that will reach an angular resolution of 1.5 arcmin (goal 1 arcmin), have an average sensitivity better than 1 µK-arcmin, and span the microwave frequency range from roughly 50 GHz to 1 THz. The current paper also highlights the synergy of our BACKLIGHT mission concept with several upcoming and proposed ground-based CMB experiments.
astrophysics
Nonlinear metasurfaces incorporate many of the functionalities of their linear counterparts, such as wavefront shaping, but simultaneously perform nonlinear optical transformations. This dual functionality leads to rather unintuitive physical behavior that remains largely unexplored for many photonic applications. The nonlinear processes render some basic principles governing the functionality of linear metasurfaces, such as the superposition principle and the geometric optics approximation, not directly applicable. On the other hand, nonlinear metasurfaces facilitate new phenomena that are not possible in the linear regime. Here, we study the imaging of objects through a dielectric nonlinear metalens. We illuminate objects with infrared light and record their images generated at the visible third-harmonic wavelengths. We revisit classical lens theory and suggest a generalized Gaussian lens equation for nonlinear imaging, which we verify both experimentally and analytically. We also experimentally demonstrate higher-order spatial correlations facilitated by the nonlinear metalens, resulting in additional image features.
physics
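For reference, the classical (linear) Gaussian lens equation that the abstract above says is being generalized relates the object distance $s_o$, image distance $s_i$, and focal length $f$; the nonlinear, third-harmonic generalization itself is not reproduced here.

```latex
% Classical thin-lens (Gaussian) imaging equation; the nonlinear metalens work
% generalizes this relation to harmonic-generation imaging.
\begin{equation}
  \frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}
\end{equation}
```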
The computational requirements posed by multi-dimensional simulations of type Ia supernovae make it difficult to incorporate complex nuclear networks to follow the release of nuclear energy along with the propagation of the flame. Instead, these codes usually model the flame and use simplified nuclear kinetics, with the goal of determining a sufficiently accurate rate of nuclear energy generation and, afterwards, post-processing the thermodynamic trajectories with a large nuclear network to obtain more reliable nuclear yields. In this work, I study the performance of simplified nuclear networks with respect to reproducing the nuclear yields obtained with a one-dimensional supernova code equipped with a large nuclear network. I start by defining a strategy to follow the properties of matter in nuclear statistical equilibrium (NSE). I propose to use published tables of NSE properties, together with a careful interpolation routine. Short networks (iso7 and 13$\alpha$) are able to give an accurate yield of $^{56}$Ni after post-processing, but can fail by an order of magnitude in predicting the ejected mass of even mildly abundant species (>0.001 solar masses). A network of 21 species reproduces the nucleosynthesis of the Chandrasekhar and sub-Chandrasekhar explosions studied here, with average errors better than 20% for the whole set of stable elements and isotopes followed in the models.
astrophysics
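The table-plus-interpolation strategy for NSE properties mentioned above can be sketched as follows. The grid ranges and the tabulated quantity are placeholders (random values), not the published NSE tables, and plain linear interpolation stands in for the more careful routine used in the work.

```python
# Minimal sketch of interpolating tabulated NSE properties over a
# (log density, log temperature) grid along a thermodynamic trajectory.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_rho = np.linspace(7.0, 10.0, 61)     # log10 density [g/cm^3]
log_T = np.linspace(9.3, 10.0, 36)       # log10 temperature [K]
# nse_table[i, j] would hold, e.g., the NSE mean binding energy per nucleon;
# random placeholder values here.
nse_table = np.random.default_rng(2).random((log_rho.size, log_T.size))

nse_interp = RegularGridInterpolator(
    (log_rho, log_T), nse_table, method="linear", bounds_error=True
)

# Query along a thermodynamic trajectory, point by point.
trajectory = [[8.5, 9.6], [9.1, 9.8]]
print(nse_interp(trajectory))
```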
Distributed change-point detection is a fundamental problem in real-time monitoring with sensor networks. We propose a distributed detection algorithm in which each sensor exchanges only its CUSUM statistic with its neighbors, based on an average consensus scheme, and an alarm is raised when a local consensus statistic exceeds a pre-specified global threshold. We provide theoretical performance bounds showing that the performance of the fully distributed scheme can match that of centralized algorithms under some mild conditions. Numerical experiments demonstrate the good performance of the algorithm, especially in detecting asynchronous changes.
electrical engineering and systems science
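A minimal NumPy sketch of the consensus-CUSUM scheme described above: each sensor runs a local CUSUM recursion on its own observations, then averages its statistic with its neighbors through a doubly stochastic weight matrix, and an alarm is raised when any local consensus statistic crosses the global threshold. The ring topology, weights, Gaussian mean-shift model, and threshold are illustrative assumptions, not the settings analyzed in the paper.

```python
# Toy consensus-based CUSUM change-point detection on a 4-sensor ring network.
import numpy as np

rng = np.random.default_rng(3)
num_sensors, change_time, threshold = 4, 50, 20.0
mu0, mu1, sigma = 0.0, 1.0, 1.0                          # pre/post-change means, noise std

# Ring network: each sensor averages with its two neighbors (doubly stochastic W).
W = np.zeros((num_sensors, num_sensors))
for i in range(num_sensors):
    W[i, i] = 0.5
    W[i, (i - 1) % num_sensors] = 0.25
    W[i, (i + 1) % num_sensors] = 0.25

S = np.zeros(num_sensors)                                # consensus CUSUM statistics
for t in range(200):
    mean = mu1 if t >= change_time else mu0
    x = rng.normal(mean, sigma, num_sensors)
    llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2  # Gaussian log-likelihood ratio
    S = np.maximum(S + llr, 0.0)                         # local CUSUM recursion
    S = W @ S                                            # one average-consensus exchange
    if np.any(S > threshold):
        print(f"alarm at t={t}, statistics={np.round(S, 2)}")
        break
```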