text (string, lengths 11–9.77k) | label (string, lengths 2–104) |
---|---|
Formation control algorithms for multi-agent systems have gained much attention in recent years due to the increasing number of mobile and aerial robotic swarms. The design of safe controllers for these vehicles is a crucial aspect for an increasing range of application domains. However, parts of the vehicle's dynamics and external disturbances are often unknown or very time-consuming to model. To overcome this issue, we present a safe formation control law for multi-agent systems based on double integrator dynamics, using Gaussian Processes for online learning of the unknown dynamics. The presented approach guarantees a bounded error to desired formations with high probability, where the bound is explicitly given. A numerical example highlights the effectiveness of the learning-based formation control law. | electrical engineering and systems science |
The major simplification in a number of quantum integrable systems is the existence of special coordinates in which the eigenstates take a factorised form. Despite many years of study, the basis realising the separation of variables (SoV) remains unknown in N=4 SYM and similar models, even though it is widely believed they are integrable. In this paper we initiate the SoV approach for observables with nontrivial coupling dependence in a close cousin of N=4 SYM, the fishnet 4D CFT. We develop the functional SoV formalism in this theory, which allows us to compute non-perturbatively some nontrivial observables in a form suitable for numerical evaluation. We present some applications of these methods. In particular, we discuss the possible SoV structure of the one-point correlators in the presence of a defect, and write down a SoV-type expression for diagonal OPE coefficients involving an arbitrary state and the Lagrangian density operator. We believe that many of the findings of this paper can be applied in the N=4 SYM case, as we speculate in the last part of the article. | high energy physics theory |
We introduce a method to arbitrarily rotate the excitation profile of universal broadband composite pulse sequences for robust high-fidelity population inversion. These pulses compensate deviations in any experimental parameter (e.g. pulse amplitude, pulse duration, detuning from resonance, Stark shifts, unwanted frequency chirp, etc.) and are applicable with any pulse shape. The rotation allows one to achieve higher-order robustness to any combination of pulse area and detuning errors at no additional cost. The latter can be particularly useful, e.g., when detuning errors are due to Stark shifts that are correlated with the power of the applied field. We demonstrate the efficiency and universality of these composite pulses by experimental implementation for rephasing of atomic coherences in a $\text{Pr}^{3+}\text{:}\text{Y}_2\text{SiO}_5$ crystal. | quantum physics |
In this workshop paper, we use an empirical example from our ongoing fieldwork to showcase the complexity and situatedness of the process of making sense of algorithmic results, i.e., how to evaluate, validate, and contextualize algorithmic outputs. So far, in our research work, we have focused on such sense-making processes in data analytic learning environments such as classrooms and training workshops. Multiple moments in our fieldwork suggest that meaning, in data analytics, is constructed through an iterative and reflexive dialogue between data, code, assumptions, prior knowledge, and algorithmic results. A data analytic result is nothing short of a sociotechnical accomplishment - one in which it is extremely difficult, if not at times impossible, to clearly distinguish between 'human' and 'technical' forms of data analytic work. We conclude this paper with a set of questions that we would like to explore further in this workshop. | computer science |
The t-distributed Stochastic Neighbor Embedding (tSNE) algorithm has become in recent years one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. tSNE reveals clusters of high-dimensional data points at different scales while requiring only minimal tuning of its parameters. Despite these advantages, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of tSNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the tSNE embedding for large datasets. In this work, we present a novel approach to the minimization of the tSNE objective function that heavily relies on modern graphics hardware and has linear computational complexity. Our technique not only beats the state of the art, but can even be executed on the client side in a browser. We propose to approximate the repulsion forces between data points using adaptive-resolution textures that are drawn at every iteration with WebGL. This approximation allows us to reformulate the tSNE minimization problem as a series of tensor operations that are computed with TensorFlow.js, a JavaScript library for scalable tensor computations. | computer science |
The fundamental solution of the classical approximation of a three-wave kinetic equation that arises in the kinetic theory of a condensed gas of bosons near the critical temperature is obtained. It is also proved to be unique in a suitable space of distributions, and several of its properties are described. The fundamental solution is used to solve the initial value problem for a general class of initial data. | mathematics |
We use the stellar evolution code MESA-binary and follow the evolution of three exoplanets and two brown dwarfs (BDs) to determine their potential role in the future evolution of their parent star on the red giant branch (RGB) and on the asymptotic giant branch (AGB). We limit this study to exoplanets and BDs with orbits that have a semi-major axis of 1 AU < a0 < 20 AU and a high eccentricity, e0 > 0.25, and to parent stars of mass M > 1 Mo. We find that the star HIP 75458 will engulf its planet HIP 75458 b during its RGB phase. The planet will remove the envelope and terminate the RGB evolution, leaving a bare helium core of mass 0.4 Mo that will evolve to form a helium white dwarf. In only one system out of five does a planet, beta Pic c, enter the envelope of its parent star during the AGB phase. For that to occur, we have to reduce the wind mass-loss rate by a factor of about four from its commonly used value. This strengthens an earlier conclusion, based on exoplanets with circular orbits, that to have a non-negligible fraction of AGB stars that engulf planets we should consider lower wind mass-loss rates for isolated AGB stars (before they are spun up by a companion). Such an engulfed planet might lead to the shaping of the AGB mass-loss geometry to form an elliptical planetary nebula. | astrophysics |
Segment routing (SR) combines the advantages of source routing supported by the centralized software-defined networking (SDN) paradigm and hop-by-hop routing applied in distributed IP network infrastructure. However, because of computational inefficiency, it is nearly impossible, using conventional approaches, to evaluate whether various types of networks will benefit from SR with multiple segments. In this paper, we propose a flexible $Q$-SR model as well as its formulation in order to fully explore the potential of SR from an algorithmic perspective. The model leads to a highly extensible framework for designing and evaluating algorithms that can be adapted to various network topologies and traffic matrices. For the offline setting, we develop a fully polynomial time approximation scheme (FPTAS) which finds a $(1+\omega)$-approximation solution for any specified $\omega>0$ in time that is a polynomial function of the network size. To the best of our knowledge, the proposed FPTAS is the first algorithm that can compute arbitrarily accurate solutions. For the online setting, we develop an online primal-dual algorithm that is provably $O(1)$-competitive and violates link capacities by a factor of $O(\log n)$, where $n$ is the number of nodes. We also prove performance bounds for the proposed algorithms. We conduct simulations on realistic topologies to validate SR parameters and algorithmic parameters in both offline and online scenarios. | computer science |
Given a set of strings, the shortest common superstring problem is to find the shortest possible string that contains all the input strings. The problem is NP-hard, but a lot of work has gone into designing approximation algorithms for solving it. We present the first time- and space-efficient implementation of the classic greedy heuristic which merges strings in decreasing order of overlap length. Our implementation works in $O(n \log \sigma)$ time and $O(n \log \sigma)$ bits of space, where $n$ is the total length of the input strings in characters, and $\sigma$ is the size of the alphabet. After index construction, a practical implementation of our algorithm uses roughly $5 n \log \sigma$ bits of space and reasonable time for a real dataset that consists of DNA fragments. | computer science |
The cosmological axions/axion-like particles can compose a significant part of dark matter; however, the uncertainty in their mass is large. Here we propose to search for axions using a cylindrical capacitor, in which the static electric field converts dark matter axions into an oscillating magnetic field. Using a static electric field can greatly reduce the magnetic field background compared to using a static $\vec B$ field, where the magnetic noise from thermal currents in the magnet coil is hard to eliminate. A cylindrical setup shields the electric field from the laboratory as well as encompasses the axion-induced magnetic field within the capacitor, which results in an increased magnetic field strength. The induced oscillating magnetic field can then be picked up by a SQUID-based magnetometer. Adding a superconductor ring-coil system into the induced magnetic field region can further boost the sensitivity and maintain the axion dark matter inherent bandwidth. The proposed setup is capable of searches over a wide mass range, as the signal can also be modulated by adjusting the angle between the electric field and the axion flow. | high energy physics phenomenology |
We review Lie polynomials as a mathematical framework that underpins the structure of the so-called double copy relationship between gauge and gravity theories (and a network of other theories besides). We explain how Lie polynomials naturally arise in the geometry and cohomology of $\mathcal{M}_{0,n}$, the moduli space of $n$ points on the Riemann sphere up to M\"obius transformation. We introduce a twistorial correspondence between the cotangent bundle $T^*_D\mathcal{M}_{0,n}$, the bundle of forms with logarithmic singularities on the divisor $D$ as the twistor space, and $\mathcal{K}_n$, the space of momentum invariants of $n$ massless particles subject to momentum conservation, as the analogue of space-time. This gives a natural framework for Cachazo, He and Yuan (CHY) and ambitwistor-string formulae for scattering amplitudes of gauge and gravity theories as the corresponding Penrose transform. In particular, we show that it gives a natural correspondence between CHY half-integrands and scattering forms, certain $(n-3)$-forms on $\mathcal{K}_n$, introduced by Arkani-Hamed, Bai, He and Yan (ABHY). We also give a generalization and more invariant description of the associahedral $(n-3)$-planes in $\mathcal{K}_n$ introduced by ABHY. | high energy physics theory |
We obtain a Lorentz covariant wave equation whose complex wave function transforms under a Lorentz boost according to the following rule, $\Psi(x)\rightarrow e^{\frac{i}{\hbar}f(x)}\Psi(x)$. We show that the spacetime dependent phase $f(x)$ is the most natural relativistic extension of the phase associated with the transformation rule for the non-relativistic Schroedinger wave function when it is subjected to a Galilean transformation. We then generalize the previous analysis by postulating that $\Psi(x)$ transforms according to the above rule under proper Lorentz transformations (boosts or spatial rotations). This is the most general transformation rule compatible with a Lorentz invariant physical theory whose observables are bilinear functions of the field $\Psi(x)$. We use the previous wave equations to describe several physical systems. In particular, we solve the bound state and scattering problems of two particles which interact both electromagnetically and gravitationally (static electromagnetic and gravitational fields). The former interaction is modeled via the minimal coupling prescription while the latter enters via an external potential. We also formulate logically consistent classical and quantum field theories associated with these Lorentz covariant wave equations. We show that it is possible to make those theories equivalent to the Klein-Gordon theory whenever we have self-interacting terms that do not break their Lorentz invariance or if we introduce electromagnetic interactions via the minimal coupling prescription. For interactions that break Lorentz invariance, we show that the present theories imply that particles and antiparticles behave differently in decay processes, with the latter being more unstable. This suggests a possible connection between Lorentz invariance-breaking interactions and the matter-antimatter asymmetry problem. | high energy physics phenomenology |
Recent studies found that the diffusive transport of conserved quantities in non-integrable many-body systems has an imprint on quantum entanglement: while the von Neumann entropy of a state grows linearly in time $t$ under a global quench, all $n$th R\'enyi entropies with $n > 1$ grow with a diffusive scaling $\sqrt{t}$. To understand this phenomenon, we introduce an amplitude $A(t)$, which is the overlap of the time-evolution operator $U(t)$ of the entire system with the tensor product of the two evolution operators of the subsystems of a spatial bipartition. As long as $|A(t)| \ge e^{-\sqrt{Dt}}$, which we argue holds true for generic diffusive non-integrable systems, all $n$th R\'enyi entropies with $n >1$ (annealed-averaged over initial product states) are bounded from above by $\sqrt{t}$. We prove the following inequality for the disorder average of the amplitude, $\overline{|A(t)|} \ge e^{ - \sqrt{Dt}} $, in a local spin-$\frac{1}{2}$ random circuit with a $\text{U}(1)$ conservation law by mapping to the survival probability of a symmetric exclusion process. Furthermore, we numerically show that the typical decay behaves asymptotically, for long times, as $|A(t)| \sim e^{ - \sqrt{Dt}} $ in the same random circuit as well as in a prototypical non-integrable model with diffusive energy transport but no disorder. | condensed matter |
Second only to initial mass, the rate of wind-driven mass loss determines the final mass of a massive star and the nature of its remnant. Motivated by the need to reconcile observational values and theory, we use a recently vetted technique to analyze the mass-loss rates in a sample of OB stars that generate bowshock nebulae. We measure peculiar velocities from new Gaia parallax and proper motion data and determine spectral types from new optical and infrared spectroscopy. All 67 central stars in our sample of morphologically selected bowshock nebulae are OB stars. The median peculiar velocity is 11 km/s, significantly smaller than classical `runaway star' velocities. Mass-loss rates for these O and early B stars agree with recently lowered theoretical predictions, ranging from ~10^-7 Msun/yr for mid-O dwarfs to 10^-9 Msun/yr for late-O dwarfs---a factor of about 2.7 lower than the often-used Vink et al. (2001) formulation. Our results provide the first observational mass-loss rates for B0--B3 dwarfs and giants---10^-9 to 10^-8 Msun/yr. We find evidence for an increase in the mass-loss rates below a critical effective temperature, consistent with predictions of the bi-stability phenomenon in the range Teff=19,000--27,000 K. The sample exhibits a correlation between modified wind momentum and luminosity, consistent in slope but lower by 0.43 dex in magnitude compared to canonical wind-luminosity relations. We identify a small subset of objects deviating most significantly from theoretical expectations as probable radiation-driven bow wave nebulae by virtue of their low stellar-to-nebular luminosity ratios. For these, the inferred mass-loss rates must be regarded as upper limits. | astrophysics |
We study wireless power transmission by an energy source to multiple energy harvesting nodes with the aim of maximizing the energy efficiency. The source transmits energy to the nodes using one of the available power levels in each time slot, and the nodes transmit information back to the energy source using the harvested energy. The source does not have any channel state information and only knows whether a received codeword from a given node was successfully decoded or not. With this limited information, the source has to learn the optimal power level that maximizes the energy efficiency of the network. We model the problem as a stochastic Multi-Armed Bandits problem and develop an Upper Confidence Bound based algorithm, which learns the optimal transmit power of the energy source that maximizes the energy efficiency. Numerical results validate the performance guarantees of the proposed algorithm and show significant gains compared to the benchmark schemes. | electrical engineering and systems science |
The Earth's ocean mass is only 2.3 x 10^{-4} of the whole planet mass. Even including water in the interior, it would be at most 10^{-3}-10^{-2}. Ancient Mars may have had a similar or slightly smaller water fraction. It is important to clarify the water delivery mechanism to rocky planets in habitable zones in exoplanetary systems, as well as that to the Earth and Mars. Here, we consider water delivery to planets by icy pebbles after the snowline inwardly passes the planetary orbits, derive the water mass fraction (f_{water}) of the final planet as a function of disk parameters, and discuss the parameters that reproduce f_{water} comparable to that inferred for the Earth and ancient Mars. We calculate the growth of icy pebbles and their radial drift with a 1D model, and accretion of icy pebbles onto planets, by simultaneously solving the snowline migration and the disk dissipation, to evaluate f_{water} of the planets. We find that f_{water} is regulated by the total mass (M_{res}) of icy dust materials preserved in the outer disk regions at the time (t = t_{snow}) of the snowline passage of the planetary orbit. Because M_{res} decays rapidly after the pebble formation front reaches the disk outer edge (at t = t_{pff}), f_{water} is sensitive to the ratio t_{snow}/t_{pff}, which is determined by the disk parameters. We find that whether t_{snow}/t_{pff} is smaller or larger than 10 is important. Deriving an analytical formula for f_{water} that reproduces the numerical results, we find that f_{water} of a rocky planet near 1 au is ~ 10^{-4}-10^{-2}, in disks with initial disk size ~ 30-50 au and initial disk mass accretion rate ~ (10^{-8}-10^{-7}) M_sun/y. Because these disks may be median or slightly compact/massive disks, the water fraction of rocky planets in habitable zones may often be similar to that of the Earth, if icy pebble accretion is responsible for the water delivery. | astrophysics |
Let R be a commutative ring, M an R-module. In this paper, we will introduce the concept of n-pure submodules of M as a generalization of pure submodules and obtain some related results. | mathematics |
We classify phases of a bosonic lattice model based on the computational complexity of classically simulating the system. We show that the system transitions from being classically simulable to classically hard to simulate as it evolves in time, extending previous results to include on-site number-conserving interactions and long-range hopping. Specifically, we construct a "complexity phase diagram" with "easy" and "hard" phases, and derive analytic bounds on the location of the phase boundary with respect to the evolution time and the degree of locality. We find that the location of the phase transition is intimately related to upper bounds on the spread of quantum correlations and protocols to transfer quantum information. Remarkably, although the location of the transition point is unchanged by on-site interactions, the nature of the transition point changes dramatically. Specifically, we find that there are two kinds of transitions, sharp and coarse, broadly corresponding to interacting and noninteracting bosons, respectively. Our work motivates future studies of complexity in many-body systems and its interplay with the associated physical phenomena. | quantum physics |
The CDEX (China Dark Matter Experiment) now deploys ~10 kg pPCGe (p-type Point Contact Germanium) detectors in CJPL (China Jinping Underground Laboratory). It aims to detect rare events such as dark matter and 0vbb (neutrinoless double beta decay). The discrimination of bulk and very bulk events is essential for improving the analysis threshold of dark matter searches. Very bulk events are generated near the p+ point surface of the pPCGe, usually from radioactive materials in electronic devices. Due to the different locations of charge collection, bulk and very bulk events have different pulse shapes. This paper presents two linear PSD (Pulse Shape Discrimination) methods, CCM (Charge Comparison Method) and Fisher's LDA (Linear Discriminant Analysis), to realize the discrimination of bulk and very bulk events. The results show that the FOMs (Figures of Merit) are 1.38 $\pm$ 0.33 and 1.62 $\pm$ 0.18 for CCM and Fisher's LDA, respectively. | physics |
We study the effect of the Rashba spin-orbit coupling on the Fermi arcs of topological Dirac semimetals. The Rashba coupling is induced by breaking the inversion symmetry at the surface. Remarkably, this coupling could be enhanced by the interaction with the substrate and controlled by an external electric field. We study analytically and numerically the rotation of the spin of the surface states as a function of the electron's momentum and the coupling strength. Furthermore, a detailed analysis of the spin-dependent two-terminal conductance is presented in the clean limit and with the addition of a random distribution of impurities. Depending on the magnitude of the quadratic terms in the Hamiltonian, the spin-flip conductance may become dominant, thus showing the potential of the system for spintronic applications, since the effect is robust even in the presence of disorder. | condensed matter |
In this letter we propose a new methodology for crystal structure prediction, which is based on the evolutionary algorithm USPEX and machine-learning interatomic potentials actively learning on-the-fly. Our methodology allows for automated construction of an interatomic interaction model from scratch, replacing expensive DFT calculations with a speedup of several orders of magnitude. Predicted low-energy structures are then tested with DFT, ensuring that our machine-learning model does not introduce any prediction error. We tested our methodology on the problems of predicting carbon allotropes, dense sodium structures, and boron allotropes, including those which have more than 100 atoms in the primitive cell. All the main allotropes have been reproduced and a new 54-atom structure of boron has been found at very modest computational cost. | condensed matter |
We consider the baby Skyrme model in a physically motivated limit in which it approaches the restricted, or BPS, baby Skyrme model, a model that enjoys area-preserving diffeomorphism invariance. The perturbation consists of the kinetic Dirichlet term with a small coefficient $\epsilon$ as well as the standard pion mass term, with coefficient $\epsilon m_1^2$. The pions remain lighter than the soliton for any $\epsilon$ and therefore the model is physically acceptable, even in the $\epsilon \to 0$ limit. The version of the BPS baby Skyrme model we use has BPS solutions with Gaussian tails. We perform full numerical computations in the $\epsilon\to 0$ limit and even reach the strict $\epsilon=0$ case, finding new nontrivial BPS solutions for which we do not yet know the analytic form. | high energy physics theory |
A large number of the most-subscribed YouTube channels target children of very young age. Hundreds of toddler-oriented channels on YouTube feature inoffensive, well produced, and educational videos. Unfortunately, inappropriate content that targets this demographic is also common. YouTube's algorithmic recommendation system regrettably suggests inappropriate content because some of it mimics or is derived from otherwise appropriate content. Considering the risk for early childhood development, and an increasing trend in toddlers' consumption of YouTube media, this is a worrisome problem. In this work, we build a classifier able to discern inappropriate content that targets toddlers on YouTube with 84.3% accuracy, and leverage it to perform a first-of-its-kind, large-scale, quantitative characterization that reveals some of the risks of YouTube media consumption by young children. Our analysis reveals that YouTube is still plagued by such disturbing videos and its currently deployed counter-measures are ineffective in terms of detecting them in a timely manner. Alarmingly, using our classifier we show that young children are not only able, but likely to encounter disturbing videos when they randomly browse the platform starting from benign videos. | computer science |
Hyperloop is a high-speed ground-based transportation system utilizing sealed tubes, with the aim of ultimately transporting passengers between metropolitan cities in efficiently designed autonomous capsules. In recent years, the design and development of sub-scale prototypes for these Hyperloop pods has set the foundation for realizing more practical and scalable pod architectures. This paper proposes a practical, power and space optimized on-board electronics architecture, coupled with an end-to-end computationally efficient pose estimation algorithm. Considering the high energy density and discharge rate of on-board batteries, this work additionally presents a robust system for fault detection, protection and management of batteries, along with the design of the surrounding electrical system. Performance evaluation and verification of the proposed algorithms and circuits have been carried out by software simulations using both Python and Simulink. | electrical engineering and systems science |
We discuss the compatibility of the combined annual modulation effect measured by DAMA/LIBRA-phase1 and DAMA/LIBRA-phase2 with an explanation in terms of inelastic scattering events induced by the most general Galilean-invariant effective contact interaction of a Weakly Interacting Massive Particle (WIMP) dark matter particle of spin 0, 1/2 or 1. We take into account all the possible interferences among operators by studying the intersections among the ellipsoidal surfaces of constant signal of DAMA and other experiments in the space of the coupling constants of the effective theory. In our analysis we assume a standard Maxwellian velocity distribution in the Galaxy. We find that, compared to the elastic case, inelastic scattering partially relieves but does not eliminate the existing tension between the DAMA effect and the constraints from the null results of other experiments. Such tension is very large in all the parameter space with the exception of a small region for WIMP mass $m_{\chi}\simeq$ 10 GeV and mass splitting $\delta\gtrsim$ 20 keV, where it is partially, but not completely relieved. In this region the bounds from fluorine targets are evaded in a kinematic way because the minimal WIMP incoming speed required to trigger upscatters off fluorine exceeds the maximal WIMP velocity in the Galaxy, or is very close to it. As a consequence, we also find that the residual tension between DAMA and other results is more sensitive to the astrophysical parameters compared to the elastic case. We find that the configurations with the smallest tension can produce enough yearly modulation in some of the DAMA bins in compliance with the constraints from other experiments, but the ensuing shape of the modulation spectrum is too steep compared to the measured one. For such configurations the recent COSINE-100 bound is evaded in a natural way due to their large expected modulation fractions. | high energy physics phenomenology |
The 'Arcsine' laws of Brownian particles in one dimension describe the distributions of three quantities: the time $t_m$ to reach the maximum position, the time $t_r$ spent on the positive side and the time $t_\ell$ of the last visit to the origin. Interestingly, the cumulative distributions of all three quantities are the same and given by the arcsine function. In this paper, we study the distributions of these three times $t_m,~t_r$ and $t_\ell$ in the context of a single run-and-tumble particle in one dimension, which is a simple non-Markovian process. We compute exact distributions of these three quantities for arbitrary time and find that all three distributions have a delta function part and a non-delta function part. Interestingly, we find that the distributions of $t_m$ and $t_r$ are identical (reminiscent of the Brownian particle case) when the initial velocities of the particle are chosen with equal probability. On the other hand, for $t_\ell$, only the non-delta function part is the same as in the other two. In addition, we find explicit expressions for the joint distribution of the maximum displacement and the time at which this maximum occurs. We verify all our analytical results through numerical simulations. | condensed matter |
By introducing some new tricks, we prove that the nonlinear problem of Kirchhoff type \begin{equation*} \left\{ \begin{array}{ll} -\left(a+b\int_{\mathbb{R}^3}|\nabla u|^2\,\mathrm{d}x\right)\triangle u+V(x)u=f(u), & x\in \mathbb{R}^3; \\ u\in H^1(\mathbb{R}^3), \end{array} \right. \end{equation*} admits two classes of ground state solutions under the general "Berestycki-Lions assumptions" on the nonlinearity $f$, which are almost necessary conditions, as well as some weak assumptions on the potential $V$. Moreover, we also give a simple minimax characterization of the ground state energy. Our results improve and complement previous ones in the literature. | mathematics |
We explore ways to quantify multipartite correlations, in quantum information and in holography. We focus on optimized correlation measures, linear combinations of entropies minimized over all possible purifications of a state that satisfy monotonicity conditions. These contain far more information about correlations than entanglement entropy alone. We present a procedure to derive such quantities, and construct a menagerie of symmetric optimized correlation measures on three parties. These include tripartite generalizations of the entanglement of purification, the squashed entanglement, and the recently introduced Q-correlation and R-correlation. Some correlation measures vanish only on product states, and thus quantify both classical and quantum correlations; others vanish on any separable state, capturing quantum correlations alone. We then use a procedure motivated by the surface-state correspondence to construct holographic duals for the correlation measures as linear combinations of bulk surfaces. The geometry of the surfaces can preserve, partially break, or fully break the symmetry of the correlation measure. The optimal purification is encoded in the locations of certain points, whose locations are fixed by constraints on the areas of combinations of surfaces. This gives a new concrete connection between information theoretic quantities evaluated on a boundary state and detailed geometric properties of its dual. | high energy physics theory |
The paper presents the method of attractive cylinders -- a generalization of the attractive ellipsoid method to the cases of tracking and observation. Based on the developed method, an algorithm for calculating the parameters of the controller, which ensures the boundedness of tracking or observation errors in the presence of bounded external disturbances, is proposed. The effectiveness of the proposed method is demonstrated by examples. | electrical engineering and systems science |
The Marchenko method retrieves the responses to virtual sources in the Earth's subsurface from reflection data at the surface, accounting for all orders of multiple reflections. The method is based on two integral representations for focusing and Green's functions. In discretized form, these integrals are represented by finite summations over the acquisition geometry. Consequently, the method requires ideal geometries of regularly sampled and co-located sources and receivers. Recently, new representations were derived which handle imperfectly sampled data. These new representations use point-spread functions (PSFs) that reconstruct results as if they were acquired using a perfect geometry. Here, the iterative Marchenko scheme is adapted, using these new representations, to account for imperfect sampling. This new methodology is tested on a 2D numerical example. The results show clear improvement between the proposed scheme and the standard iterative scheme. By removing the requirement for perfect geometries, the Marchenko method can be more widely applied to field data. | physics |
One of the most striking manifestations of electronic properties of topological insulators is the dependence of the photocurrent direction on the helicity of circularly polarized optical excitation. The helicity dependent photocurrents, underpinned by spin-momentum locking of surface Dirac electrons, are weak and easily overshadowed by bulk contributions. Here we show that the chiral response can be enhanced by nanostructuring. The tight confinement of electromagnetic fields in the resonant nanostructure enhances the photoexcitation of spin-polarized surface states of topological insulator Bi$_{1.5}$Sb$_{0.5}$Te$_{1.8}$Se$_{1.2}$, leading to an 11-fold increase of the circular photogalvanic effect and an unprecedented photocurrent dichroism ($\rho_{\rm circ}=0.87$) at room temperature. The control of spin-transport in topological materials by structural design is a previously unrecognised ability of metamaterials that bridges the gap between nanophotonics and spin-electronics, providing new opportunities for developing polarization sensitive photodetectors. | physics |
In this review we concentrate on nonperturbative effects in the properties of hadrons made of light-heavy and heavy-heavy quarks in the framework of the Instanton Liquid Model (ILM) of the QCD vacuum. We briefly discuss the main features of the ILM and its applicability in the heavy quark sector. The properties of gluonic systems and light and heavy quark correlators in the instanton background are also analyzed. Consideration of both perturbative and nonperturbative gluon effects in the instanton background for a single heavy quark leads to a mass shift due to the direct-instanton nonperturbative and ILM-modified perturbative contributions, respectively. For interacting heavy quark-antiquark pairs, the potential consists of a direct instanton-induced part and a one-gluon exchange (OGE) perturbative part. OGE interactions are screened at large distances due to the nonperturbative dynamics. We discuss estimates of instanton contributions in the phenomenological Cornell-type potential model. In relation to experimental data, we discuss charmonium properties and the role of instanton effects in their spectra and transitions. We also discuss the main features of heavy-light quark systems in the ILM. As an example, we consider the process of pion emission by excited heavy quarkonium states. | high energy physics phenomenology |
We consider a long topological Josephson junction formed on a conducting 2D surface of a 3D topological insulator (TI). The superconducting correlations are proximity-induced by s-wave superconductors covering the surface. The 1D spacing between the coverings is either unfilled or filled by a magnetic insulator. Generally, the Josephson current mediated by the TI surface is determined by scattering modes as well as by the states localized around the junction (Andreev bound states or Andreev bands). We find that it is crucial to take into account both contributions to determine the current--phase relation of the topological Josephson junction. We analyze the dependence of the Josephson current on the thickness of the junction as well as the deviations from the sinusoidal shape of the current--phase relation. | condensed matter |
Optimally encoding classical information in a quantum system is one of the oldest and most fundamental challenges of quantum information theory. Holevo's bound places a hard upper limit on such encodings, while the Holevo-Schumacher-Westmoreland (HSW) theorem addresses the question of how many classical messages can be "packed" into a given quantum system. In this article, we use Sen's recent quantum joint typicality results to prove a one-shot multiparty quantum packing lemma generalizing the HSW theorem. The lemma is designed to be easily applicable in many network communication scenarios. As an illustration, we use it to straightforwardly obtain quantum generalizations of well-known classical coding schemes for the relay channel: multihop, coherent multihop, decode-forward, and partial decode-forward. We provide both finite blocklength and asymptotic results, the latter matching existing classical formulas. Given the key role of the classical packing lemma in network information theory, our packing lemma should help open the field to direct quantum generalization. | quantum physics |
In order to understand the resourcefulness of a natural quantum system in quantum communication tasks, we study the dense coding capacity (DCC) and teleportation fidelity (TF) of Haar uniformly generated random multipartite states of various ranks. We prove that when a rank-2 two-qubit state, a Werner state, and a pure state possess the same amount of entanglement, the DCC of a rank-2 state belongs to the envelope made by pure and Werner states. In a similar way, we obtain an upper bound via the generalized Greenberger-Horne-Zeilinger state for rank-2 three-qubit states when dense coding with two senders and a single receiver is performed and entanglement is measured in the senders:receiver bipartition. The normalized frequency distribution of DCC for randomly generated two-, three- and four-qubit density matrices with global as well as local decodings at the receiver's end are reported. The estimation of the mean DCC for two-qubit states is found to be in good agreement with the numerical simulations. Universally, we observe that the performance of random states for dense coding as well as teleportation decreases with increasing rank of the states, which we show can be overcome by local pre-processing operations performed on the shared states before starting the protocols, irrespective of the rank of the states. The local pre-processing employed here is based on positive operator-valued measurements along with classical communication, and we show that, unlike dense coding with two-qubit random states, the senders' operations are always helpful to probabilistically enhance the capabilities of implementing dense coding as well as teleportation. | quantum physics |
We present a study of the relative orientation between the magnetic field projected onto the plane of sky ($B_{\perp}$) on scales down to 0.4 pc, inferred from the polarized thermal emission of Galactic dust observed by Planck at 353 GHz, and the distribution of gas column density ($N_{\rm H}$) structures on scales down to 0.026 pc, derived from the observations by Herschel in submillimeter wavelengths, toward ten nearby ($d<450$ pc) molecular clouds. Using the histogram of relative orientation technique in combination with tools from circular statistics, we found that the mean relative orientation between $N_{\rm H}$ and $B_{\perp}$ toward these regions increases progressively from $0^\circ$, where the $N_{\rm H}$ structures lie mostly parallel to $B_{\perp}$, with increasing $N_{\rm H}$, in many cases reaching $90^\circ$, where the $N_{\rm H}$ structures lie mostly perpendicular to $B_{\perp}$. We also compared the relative orientation between $N_{\rm H}$ and $B_{\perp}$ and the distribution of $N_{\rm H}$, which is characterized by the slope of the tail of the $N_{\rm H}$ probability density functions (PDFs). We found that the slopes of the $N_{\rm H}$ PDF tail are steepest in regions where $N_{\rm H}$ and $B_{\perp}$ are close to perpendicular. This coupling between the $N_{\rm H}$ distribution and the magnetic field suggests that the magnetic fields play a significant role in structuring the interstellar medium in and around molecular clouds. However, we found no evident correlation between the star formation rates, estimated from the counts of young stellar objects, and the relative orientation between $N_{\rm H}$ and $B_{\perp}$ in these regions. | astrophysics |
Galaxy clusters at high redshift are key targets for understanding matter assembly in the early Universe, yet they are challenging to locate. A sample of >2000 high-z candidate structures has been found using Planck's all-sky submm maps, and a subset of 234 has been followed up with Herschel-SPIRE, which showed that the emission can be attributed to large overdensities of dusty star-forming galaxies. In order to resolve and characterise the individual galaxies we targeted the eight brightest SPIRE sources in the centre of the Planck peak PLCK G073.4-57.5 using ALMA at 1.3 mm, and complemented these observations with data from IRAC, WIRCam J,K, and SCUBA-2. We detected a total of 18 millimetre galaxies brighter than 0.3 mJy in 2.4 arcmin^2. The ALMA source density is 8-30 times higher than average background estimates and larger than seen in typical 'proto-cluster' fields. We were able to match all but one of the ALMA sources to a NIR counterpart. The most significant (four) SCUBA-2 sources are not included in the ALMA pointings, but we find an $8\sigma$ stacking detection of the ALMA sources in the SCUBA-2 map at 850 $\mu$m. We derive photo-z, L_IR, SFR, stellar mass, T_dust, M_dust for all of the ALMA galaxies; the photo-zs identify two groups each of five sources, at z~1.5 and 2.4. The two groups show two 'red sequences' (i.e. similar NIR [3.6]-[4.5] colours and different J-K colours). The majority of the ALMA-detected galaxies are on the SFR versus stellar mass main sequence, and half of the sample is more massive than the characteristic stellar mass at the corresponding redshift. Serendipitous CO line detections in two of the galaxies appear to match their photometric redshifts at z~1.54. We performed an analysis of star-formation efficiencies and CO- and mm-continuum-derived gas fractions of our ALMA sources, combined with a sample of 1<z<3 cluster and proto-cluster members. | astrophysics |
In this paper we propose a robust approach to model photoplethysmography (PPG) signals. After decomposing the signal into two components, we focus the analysis on the pulsatile part, related to cardiac information. The goal is to enable a deeper understanding of the information contained in the pulse shape, together with that derived from the rhythm. Our approach combines functional data analysis with a state space representation and guarantees fitting robustness and flexibility on stationary signals, without imposing a priori information on the waveform and heart rhythm. With a Bayesian approach, we learn the distribution of the parameters, which we use for understanding and monitoring PPG signals. The model can be used for data compression, for inferring medical parameters and to understand condition-related waveform characteristics. In particular, we detail a procedure for the detection of premature contractions based on the residuals of the fit. This method can handle both atrial and ventricular premature contractions, and classify the type by only using information from the model fit. | statistics |
We consider a nematic liquid crystal occupying the three-dimensional domain in the exterior of a spherical colloid particle. The nematic is subject to Dirichlet boundary conditions that enforce orthogonal attachment of nematic molecules to the surface of the particle. Our main interest is to understand the behavior of energy-critical configurations of the Landau-de Gennes $Q$-tensor model in the limit of vanishing correlation length. We demonstrate existence of configurations with a single Saturn-ring defect approaching the equator of the particle and no other line or point defects. We show this by analyzing asymptotics of energy minimizers under two symmetry constraints: rotational equivariance around the vertical axis and reflection across the horizontal plane. Energy blow-up at the ring defect is a significant obstacle to constructing well-behaved comparison maps needed to eliminate the possibility of point defects. The boundary estimates we develop to address this issue are new and should be applicable to a wider class of problems. | mathematics |
We present a Kernel Ridge Regression (KRR) based supervised learning method combined with Genetic Algorithms (GAs) for the calculation of quasiparticle energies within Many-Body Green's Functions Theory. These energies, representing electronic excitations of a material, are solutions to a set of non-linear equations, containing the electron self-energy (SE) in the $GW$ approximation. Due to the frequency-dependence of this SE, standard approaches are computationally expensive and may yield non-physical solutions, in particular for larger systems. In our proposed model, we use KRR as a self-adaptive surrogate model which reduces the number of explicit calculations of the SE. Transforming the standard fixed-point problem of finding quasiparticle energies into a global optimization problem with a suitably defined fitness function, application of the GA uniquely yields the physically relevant solution. We demonstrate the applicability of our method for a set of molecules from the $GW$100 dataset, which are known to exhibit a particularly problematic structure of the SE. Results of the KRR-GA model agree within less than 0.01 eV with the reference standard implementation, while reducing the number of required SE evaluations roughly by a factor of ten. | physics |
In this paper we construct full support character sheaves for stably graded Lie algebras. Conjecturally these are precisely the cuspidal character sheaves. Irreducible representations of Hecke algebras associated to complex reflection groups at roots of unity enter the description. We do so by analysing the Fourier transform of the nearby cycle sheaves constructed in [GVX2]. | mathematics |
We present a novel Bayesian approach to semiotic dynamics, which is a cognitive analogue of the naming game model restricted to two conventions. The one-shot learning that characterizes the agent dynamics in the basic naming game is replaced by a word-learning process, in which agents learn a new word by generalizing from the evidence garnered through pairwise interactions with other agents. The principle underlying the model is that agents, like humans, can learn from a few positive examples and that such a process is modeled in a Bayesian probabilistic framework. We show that the model presents some analogies but also crucial differences with respect to the dynamics of the basic two-convention naming game model. The model introduced aims at providing a starting point for the construction of a general framework for studying the combined effects of cognitive and social dynamics. | physics |
A recent idea, put forward by Mund, Rehren and Schroer, is discussed; it suggests that in gauge quantum field theory one can replace the point-localized gauge fields by string-localized vector potentials built from gauge invariant observables and a principle of string-independence. Based on a kinematical model, describing unmovable (static) fields carrying opposite charges, it is shown that these string-localized potentials cannot be used for the description of the gauge bridges between electrically charged fields. These bridges are needed in order to ensure the validity of Gauss' law. This observation does not preclude the existence of Poincar\'e invariant theories, describing the coupling of string-localized gauge invariant potentials to matter fields. But these potentials are not a full-fledged substitute for the gauge fields in ``usual'' quantum electrodynamics. | high energy physics theory |
We investigate the interplay between early universe cosmology and dark matter direct detection, considering axion models with naturally suppressed couplings to photons. In the context of the cosmological relaxation of the electroweak scale, we focus on a scenario of \emph{Relaxion Dark Matter}, in which the relaxion field constitutes all the observed dark matter relic density and its allowed mass range is fixed to a few $\mathrm{keV}$ by construction. In particular, we show that a relaxion particle with mass $m_\phi= 3.0 \,\mathrm{keV}$ which couples to electrons with $g_{\phi, e}= 6.8 \times 10^{-14}$ is consistent with the XENON1T excess, while accounting for the observed dark matter and satisfying astro/cosmo probes. This scenario uses the electroweak scale as the link connecting the relaxion production at early times with the dark matter absorption rate in direct detection. | high energy physics phenomenology |
We study monodromy defects in $O(N)$ symmetric scalar field theories in $d$ dimensions. After a Weyl transformation, a monodromy defect may be described by placing the theory on $S^1\times H^{d-1}$, where $H^{d-1}$ is the hyperbolic space, and imposing on the fundamental fields a twisted periodicity condition along $S^1$. In this description, the codimension two defect lies at the boundary of $H^{d-1}$. We first study the general monodromy defect in the free field theory, and then develop the large $N$ expansion of the defect in the interacting theory, focusing for simplicity on the case of $N$ complex fields with a one-parameter monodromy condition. We also use the $\epsilon$-expansion in $d=4-\epsilon$, providing a check on the large $N$ approach. When the defect has spherical geometry, its expectation value is a meaningful quantity, and it may be obtained by computing the free energy of the twisted theory on $S^1\times H^{d-1}$. It was conjectured that the logarithm of the defect expectation value, suitably multiplied by a dimension dependent sine factor, should decrease under a defect RG flow. We check this conjecture in our examples, both in the free and interacting case, by considering a defect RG flow that corresponds to imposing alternate boundary conditions on one of the low-lying Kaluza-Klein modes on $H^{d-1}$. We also show that, adapting standard techniques from the AdS/CFT literature, the $S^1\times H^{d-1}$ setup is well suited to the calculation of the defect CFT data, and we discuss various examples, including one-point functions of bulk operators, scaling dimensions of defect operators, and four-point functions of operator insertions on the defect. | high energy physics theory |
Automatic segmentation of brain abnormalities is challenging, as they vary considerably from one pathology to another. Current methods are supervised and require numerous annotated images for each pathology, a strenuous task. To tackle anatomical variability, Unsupervised Anomaly Detection (UAD) methods are proposed, detecting anomalies as outliers of a healthy model learned using a Variational Autoencoder (VAE). Previous work on UAD adopted a 2D approach, meaning that MRIs are processed as a collection of independent slices. Yet, this does not fully exploit the spatial information contained in MRI. Here, we propose to perform UAD in a 3D fashion and compare 2D and 3D VAEs. As a side contribution, we present a new loss function guaranteeing robust training. Learning is performed using a multicentric dataset of healthy brain MRIs, and segmentation performance is estimated on White-Matter Hyperintensities and tumor lesions. Experiments demonstrate the interest of 3D methods, which outperform their 2D counterparts. | electrical engineering and systems science |
The superpotential in four-dimensional heterotic effective theories contains terms arising from holomorphic Chern-Simons invariants associated to the gauge and tangent bundles of the compactification geometry. These effects are crucial for a number of key features of the theory, including vacuum stability and moduli stabilization. Despite their importance, few tools exist in the literature to compute such effects in a given heterotic vacuum. In this work we present new techniques to explicitly determine holomorphic Chern-Simons invariants in heterotic string compactifications. The key technical ingredient in our computations are real bundle morphisms between the gauge and tangent bundles. We find that there are large classes of examples, beyond the standard embedding, where the Chern-Simons superpotential vanishes. We also provide explicit examples for non-flat bundles where it is non-vanishing and fractionally quantized, generalizing previous results for Wilson lines. | high energy physics theory |
The architectures of deep neural networks (DNN) rely heavily on the underlying grid structure of variables, for instance, the lattice of pixels in an image. For general high-dimensional data with variables not associated with a grid, the multi-layer perceptron and deep belief network are often used. However, it is frequently observed that those networks do not perform competitively and they are not helpful for identifying important variables. In this paper, we propose a framework that imposes on blocks of variables a chain structure obtained by step-wise greedy search so that the DNN architecture can leverage the constructed grid. We call this new neural network Deep Variable-Block Chain (DVC). Because the variable blocks are used for classification in a sequential manner, we further develop the capacity of selecting variables adaptively according to a number of regions trained by a decision tree. Our experiments show that DVC outperforms other generic DNNs and other strong classifiers. Moreover, DVC can achieve high accuracy at much reduced dimensionality and sometimes reveals drastically different sets of relevant variables for different regions. | statistics |
When a channel model is not available, the end-to-end training of encoder and decoder on a noisy fading channel generally requires the repeated use of the channel and of a feedback link. An important limitation of this approach is that training should generally be carried out from scratch for each new channel. To cope with this problem, prior works considered joint training over multiple channels with the aim of finding a single pair of encoder and decoder that works well on a class of channels. In this paper, we propose to obviate the limitations of joint training via meta-learning. The proposed approach is based on a meta-training phase in which the online gradient-based meta-learning of the decoder is coupled with the joint training of the encoder via the transmission of pilots and the use of a feedback link. Accounting for channel variations during the meta-training phase, this work demonstrates the advantages of meta-learning in terms of the number of pilots as compared to conventional methods when the feedback link is only available for meta-training and not at run time. | electrical engineering and systems science |
The application of machine learning methods to particle physics often does not provide enough understanding of the underlying physics. An interpretable model which provides a way to improve our knowledge of the mechanism governing a physical system directly from the data can be very useful. In this paper, we introduce a simple artificial physical generator based on the quantum chromodynamic (QCD) fragmentation process. The data simulated from the generator are then passed to a neural network model which we base only on partial knowledge of the generator. We aim to see if the interpretation of the generated data can provide the probability distributions of basic processes of such a physical system. This way, some of the information we omitted from the network model on purpose is recovered. We believe this approach can be beneficial in the analysis of real QCD processes. | physics |
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating the convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration. To address this issue, we study variance reduction for noisy energy estimators, which promotes much more effective swaps. Theoretically, we provide a non-asymptotic analysis of the exponential acceleration for the underlying continuous-time Markov jump process; moreover, we consider a generalized Girsanov theorem which includes the change of Poisson measure to overcome the crude discretization based on Gr\"{o}nwall's inequality and yields a much tighter error in the 2-Wasserstein ($\mathcal{W}_2$) distance. Numerically, we conduct extensive experiments and obtain state-of-the-art results in optimization and uncertainty estimates for synthetic experiments and image data. | statistics
Magnetism of Cu$_2$(OH)$_3$Br single crystals based on a triangular lattice is studied by means of magnetic susceptibility, pulsed-field magnetization, and specific heat measurements. There are two inequivalent Cu$^{2+}$ sites in an asymmetric unit. Both Cu$^{2+}$ sublattices undergo a long-range antiferromagnetic (AFM) order at $T\rm_N$ = 9.3 K. Upon cooling, an anisotropy crossover from Heisenberg to $XY$ behavior is observed below 7.5 K from the anisotropic magnetic susceptibility. A magnetic field applied within the $XY$ plane induces a spin-flop transition of Cu$^{2+}$ ions between 4.9 T and 5.3 T. With further increasing field, the magnetic moment gradually increases but reaches only about half of the saturation value of a Cu$^{2+}$ ion even at 30 T. The individual reorientation of the inequivalent Cu$^{2+}$ spins under field is proposed to account for the magnetization behavior. The observed spin-flop transition is likely related to one Cu site, and the AFM coupling among the remaining Cu spins is so strong that the 30-T field cannot overcome the anisotropy. The temperature dependence of the magnetic specific heat, which is well described by a sum of two gapped AFM contributions, provides further support for the proposed scenario. | condensed matter
There has long been a discrepancy between the size distributions of Ar$_n^+$ clusters measured by different groups regarding whether or not magic numbers appear at sizes corresponding to the closure of icosahedral (sub-)shells. We show that the previously observed magic cluster size distributions are likely the result of an unresolved Ar$_n$H$^+$ component, that is, of protonated argon clusters. We find that the proton impurity gives cluster geometries that are much closer to those of neutral rare gas clusters, which are known to form icosahedral structures, than to those of the pure cationic clusters, explaining why the mass spectra from protonated argon clusters better match these structural models. Our results thus show that even small impurities, e.g.\ a single proton, can significantly influence the properties of clusters. | physics
Optomechanical systems are suitable for elucidating quantum phenomena at the macroscopic scale, in the sense of the mass scale. Such systems should be well isolated from the environment to avoid classical noise, which conceals quantum signals. Optical levitation is a promising way to isolate optomechanical systems from the environment. To realize optical levitation, all degrees of freedom need to be trapped. Until now, longitudinal trapping and rotational trapping of a mirror with optical radiation pressure have been studied in detail and validated with various experiments. However, less attention has been paid to the transversal trapping of a mirror. Herein, we report a pioneering result in which we experimentally confirmed transversal trapping of a mirror of a Fabry-P\'erot cavity using a torsional pendulum. Through this demonstration, we experimentally proved that optical levitation is realizable with only two Fabry-P\'erot cavities that are aligned vertically. This work paves the way toward optical levitation and the realization of a macroscopic quantum system. | quantum physics
We argue that the quantum-theoretical structures studied in several recent lines of research cannot be adequately described within the standard framework of quantum circuits. This is in particular the case whenever the combination of subsystems is described by a nontrivial blend of direct sums and tensor products of Hilbert spaces. We therefore propose an extension to the framework of quantum circuits, given by \textit{routed linear maps} and \textit{routed quantum circuits}. We prove that this new framework allows for a consistent and intuitive diagrammatic representation in terms of circuit diagrams, applicable to both pure and mixed quantum theory, and exemplify its use in several situations, including the superposition of quantum channels and the causal decompositions of unitaries. We show that our framework encompasses the `extended circuit diagrams' of Lorenz and Barrett [arXiv:2001.07774 (2020)], which we derive as a special case, endowing them with a sound semantics. | quantum physics |
We formulate a six dimensional $U(1)$ gauge theory compactified on a (two dimensional) sphere $S^2$ with flux and localized brane sources. Profiles of the lowest Kaluza-Klein (KK) wavefunctions and their masses are derived analytically. In contrast to ordinary sphere compactifications, the above setup can lead to degeneracy and sharp localization of the linearly independent lowest KK modes, depending on the number of branes and their tensions. Moreover, it can naturally accommodate CP violation in Yukawa interactions. | high energy physics theory
This paper presents stellar mass functions and i-band luminosity functions for Sloan Digital Sky Survey (SDSS) galaxies at $i < 21$ using clustering redshifts, from which we also compute targeting completeness measurements for the Baryon Oscillation Spectroscopic Survey (BOSS). Clustering redshifts is a method of obtaining the redshift distribution of a sample of galaxies with only photometric information by measuring the angular cross-correlation with a spectroscopic sample in different redshift bins. We construct a spectroscopic sample containing data from the BOSS + eBOSS surveys, allowing us to recover redshift distributions from photometric data out to $z\simeq 2.5$. We produce k-corrected i-band luminosity functions and stellar mass functions by applying clustering redshifts to SDSS DR8 galaxies in small bins of colour and magnitude. There is little evolution in the mass function between $0.2 < z < 0.8$, implying the most massive galaxies form most of their mass before $z = 0.8$. These mass functions are used to produce stellar mass completeness estimates for the Baryon Oscillation Spectroscopic Survey (BOSS), giving a stellar mass completeness of $80\%$ above $M_{\star} > 10^{11.4}$ between $0.2 < z < 0.7$, with completeness falling significantly at redshifts higher than 0.7 and at lower masses. Large photometric datasets will be available in the near future (DECaLS, DES, Euclid), so this and similar techniques will become increasingly useful in order to fully utilise these data. | astrophysics
Lorentz invariance (LI) has a central role in science, and its violation (LIV) at some high-energy scale has been related to possible solutions for several of the most intriguing puzzles in nature, such as dark matter, dark energy, cosmic-ray generation in extreme astrophysical objects, and quantum gravity. We report on a search for a LIV signal based on the propagation of gamma rays from astrophysical sources to Earth. An innovative data analysis is presented which allowed us to extract unprecedented information from the most updated data set, composed of 111 energy spectra of 38 different sources measured by current gamma-ray observatories. No LIV signal was found, and we show that the data are best described by the LI assumption. We derive limits on the LIV energy scale at least 3 times better than the ones currently available in the literature for subluminal signatures of LIV in high-energy gamma rays. | astrophysics
We provide a simple and general construction of infinite families of consistent, modular-covariant pairs of characters satisfying the basic requirements to describe two-character RCFT. These correspond to solutions of generic second-order modular linear differential equations. To find these solutions, we first construct "quasi-characters" from the Kaneko-Zagier equation and subsequent works by Kaneko and collaborators, together with coset dual generalisations that we provide in this paper. We relate our construction to the Hecke images recently discussed by Harvey and Wu. | high energy physics theory |
This article introduces a new class of models for multiple networks. The core idea is to parametrize a distribution on labelled graphs in terms of a Fr\'{e}chet mean graph (which depends on a user-specified choice of metric or graph distance) and a parameter that controls the concentration of this distribution about its mean. Entropy is the natural parameter for such control, varying from a point mass concentrated on the Fr\'{e}chet mean itself to a uniform distribution over all graphs on a given vertex set. We provide a hierarchical Bayesian approach for exploiting this construction, along with straightforward strategies for sampling from the resultant posterior distribution. We conclude by demonstrating the efficacy of our approach via simulation studies and two multiple-network data analysis examples: one drawn from systems biology and the other from neuroscience. | statistics |
New challenges in submillimeter wave astronomy require instruments with a combination of high sensitivity and angular resolution, wide field of view, and multiwave (multicolor) spectral range. New large single-dish mm/submm telescopes are in high demand, as is their inclusion in the global Event Horizon Telescope (EHT) VLBI network. At the same time, there are no large mm/submm telescopes in Asia at all, even though appropriate sites exist, and their appearance in Asia or Eurasia is long overdue. Kinetic inductance detectors (KIDs) are ideal for implementing the large-format arrays that will be necessary for future telescope development. The concept of the multicolor subTHz KID-array demo camera MUSICAM and its instrumental testing are presented. This allows us to take some necessary steps toward the creation of the Eurasian SubMillimeter Telescopes (ESMT), whose concept and scientific tasks are presented as well. | astrophysics
We compare two deletion-based methods for dealing with the problem of missing observations in linear regression analysis. One is the complete-case analysis (CC, or listwise deletion) that discards all incomplete observations and only uses common samples for ordinary least-squares estimation. The other is the available-case analysis (AC, or pairwise deletion) that utilizes all available data to estimate the covariance matrices and applies these matrices to construct the normal equation. We show that the estimates from both methods are asymptotically unbiased and further compare their asymptotic variances in some typical situations. Surprisingly, using more data (i.e., AC) does not necessarily lead to better asymptotic efficiency in many scenarios. Missing patterns, covariance structure and true regression coefficient values all play a role in determining which is better. We further conduct simulation studies to corroborate the findings and demystify what has been missed or misinterpreted in the literature. Some detailed proofs and simulation results are available in the online supplemental materials. | statistics |
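The contrast between the two deletion strategies is easy to reproduce in a few lines. Below is a minimal simulation sketch (the MCAR missingness rate, sample size, and coefficients are illustrative assumptions, not the paper's design), with AC built from pairwise-available moment estimates:

```python
# Complete-case (CC) vs. available-case (AC) OLS under MCAR missingness.
# Illustrative sketch; zero-mean covariates, so no intercept is needed.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5000, np.array([1.0, -2.0])
X = rng.normal(size=(n, 2))
y = X @ beta + rng.normal(size=n)

X_obs = X.copy()
X_obs[rng.random((n, 2)) < 0.3] = np.nan   # independent 30% missingness

# CC (listwise deletion): keep only rows with every covariate observed.
cc = ~np.isnan(X_obs).any(axis=1)
beta_cc = np.linalg.solve(X_obs[cc].T @ X_obs[cc], X_obs[cc].T @ y[cc])

# AC (pairwise deletion): each moment uses all samples available for that pair.
def pairwise_moment(a, b):
    ok = ~np.isnan(a) & ~np.isnan(b)
    return (a[ok] * b[ok]).mean()

G = np.array([[pairwise_moment(X_obs[:, i], X_obs[:, j]) for j in range(2)]
              for i in range(2)])
g = np.array([pairwise_moment(X_obs[:, i], y) for i in range(2)])
beta_ac = np.linalg.solve(G, g)
print("CC:", beta_cc, "AC:", beta_ac)      # both close to (1, -2)
```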
In Phys. Rev. Lett. 120, 200603 (2018), a segmented XXZ spin chain with zero anisotropy in one half and a large anisotropy in the other half gave rise to a spin current rectification which is perfect in the thermodynamic limit. Here we extend the previous study to segmented chains with interacting integrable as well as non-integrable halves, even considering cases in which no ballistic transport can emerge in either half. We demonstrate that, also in this more general case, it is possible to obtain giant rectification when the two interacting half chains are sufficiently different. We also show that the mechanism causing this effect is the emergence of an energy gap in the excitation spectrum of the out-of-equilibrium insulating steady state under one of the two biases. Finally we demonstrate that in the thermodynamic limit there is no perfect rectification when each of the two half chains is interacting. | condensed matter
In this paper, we study pattern formation in an aggregation and diffusion cell migration model with Dirichlet boundary conditions. The formal continuum limit of the model is a nonlinear parabolic equation with a diffusivity which can become negative if the cell density is small, and spatial oscillations and aggregation occur in the numerical simulations. In the classical diffusion migration model with positive diffusivity and no birth term, the species eventually vanishes under Dirichlet boundary conditions. However, because of the aggregation mechanism at small cell density, the total species density is conserved in the discrete aggregation-diffusion model. Moreover, the discrete system converges to a unique positive steady state when the initial density lies in the diffusion domain. Furthermore, the aggregation mechanism in the model induces rich asymptotic dynamical behaviors or patterns even with 5 discrete space points, which gives a theoretical explanation of how the interaction between aggregation and diffusion induces patterns in biology. For the corresponding continuous backward-forward parabolic equation, the existence of solutions, the maximum principle, and the asymptotic behavior of solutions are also investigated. | mathematics
Low-order perturbation corrections to the electronic grand potential, internal energy, chemical potential, and entropy of a gas of noninteracting, identical molecules at a nonzero temperature are determined numerically as the $\lambda$-derivatives of the respective quantity calculated exactly (by thermal full configuration interaction) with a perturbation-scaled Hamiltonian, $\hat{H}_0 + \lambda\hat{V}$. The data thus obtained from the core definition of any perturbation theory serve as a benchmark against which analytical formulas can be validated. The first- and second-order corrections from finite-temperature many-body perturbation theory disagree with these benchmark data. This is because the theory neglects the variation of chemical potential with $\lambda$, thereby failing to converge at the exact, full-interaction ($\lambda=1$) limit, unless the exact chemical potential is known in advance. The renormalized finite-temperature perturbation theory [S. Hirata and X. He, J. Chem. Phys., 138, 204112 (2013)] is also found to be incorrect. | physics |
The quantum analogue of ptychography, a powerful coherent diffractive imaging technique, is a simple method for reconstructing $d$-dimensional pure states. It relies on measuring partially overlapping parts of the input state in a single orthonormal basis and feeding the outcomes to an iterative phase-retrieval algorithm for postprocessing. We provide a proof-of-concept demonstration of this method by determining pure states given by superpositions of $d$ transverse spatial modes of an optical field. A set of $n$ rank-$r$ projectors, diagonal in the spatial mode basis, is used to generate $n$ partially overlapping parts of the input, and each part is projectively measured in the Fourier transformed basis. For $d$ up to 32, we successfully reconstructed hundreds of random states using $n=5$ and $n=d$ rank-$\lceil d/2\rceil$ projectors. The extension of quantum ptychography to other types of photonic spatial modes is outlined. | quantum physics
Precise scientific analysis in collider-based particle physics is possible because of complex simulations that connect fundamental theories to observable quantities. The significant computational cost of these programs limits the scope, precision, and accuracy of Standard Model measurements and searches for new phenomena. We therefore introduce Deep neural networks using Classification for Tuning and Reweighting (DCTR), a neural network-based approach to reweight and fit simulations using all kinematic and flavor information -- the full phase space. DCTR can perform tasks that are currently not possible with existing methods, such as estimating non-perturbative fragmentation uncertainties. The core idea behind the new approach is to exploit powerful high-dimensional classifiers to reweight phase space as well as to identify the best parameters for describing data. Numerical examples from $e^+e^-\rightarrow\text{jets}$ demonstrate the fidelity of these methods for simulation parameters that have a big and broad impact on phase space as well as those that have a minimal and/or localized impact. The high fidelity of the full phase-space reweighting enables a new paradigm for simulations, parameter tuning, and model systematic uncertainties across particle physics and possibly beyond. | high energy physics phenomenology |
AdS$_7$ supersymmetric solutions in type IIA have been classified, and they are infinitely many. Moreover, every such solution has a non-supersymmetric sister. In this paper, we study the perturbative and non-perturbative stability of these non-supersymmetric solutions, focusing on cases without orientifolds. Perturbatively, we first look at the KK spectrum of spin-2 excitations. This does not exhibit instabilities, but it does show that there is no separation of scales for either the BPS or the non-BPS case, thus proving for supersymmetric AdS$_7$ a well-known recent conjecture. We then use 7d gauged supergravity and a brane polarization computation to access part of the spectrum of KK scalars. The result signals an instability for all non-supersymmetric solutions except those that have a single D8 on each side. We finally look at non-perturbative instabilities, and find that NS5 bubbles make these remaining solutions decay. | high energy physics theory
Deep Neural Networks (DNNs) have shown a significant impact on medical imaging. One significant problem with adopting DNNs for skin cancer classification is that the class frequencies in the existing datasets are imbalanced. This problem hinders the training of robust and well-generalizing models. Data augmentation addresses this by using existing data more effectively. However, standard data augmentation implementations are manually designed and produce only a limited amount of plausible alternative data. Instead, Generative Adversarial Networks (GANs) are utilized to generate a much broader set of augmentations. This paper proposes a novel enhancement of progressive generative adversarial networks (PGAN) using a self-attention mechanism. The self-attention mechanism is used to directly model the long-range dependencies in the feature maps. Accordingly, self-attention complements PGAN in generating fine-grained samples that comprise clinically meaningful information. Moreover, a stabilization technique was applied to the enhanced generative model. To train the generative models, the ISIC 2018 skin lesion challenge dataset was used to synthesize highly realistic skin lesion samples for further boosting the classification results. We achieve an accuracy of 70.1%, which is 2.8% better than the non-augmented baseline of 67.3%. | electrical engineering and systems science
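For readers unfamiliar with the attention ingredient, the sketch below shows a SAGAN-style self-attention layer in plain NumPy (shapes, projection matrices, and the residual scale are hypothetical; the paper's PGAN integration and training procedure are not reproduced):

```python
# SAGAN-style self-attention over flattened spatial positions (sketch).
import numpy as np

def self_attention(x, Wf, Wg, Wh, gamma=1.0):
    """x: (C, N) feature map with C channels and N spatial positions."""
    f, g, h = Wf @ x, Wg @ x, Wh @ x             # queries, keys, values
    logits = f.T @ g                             # (N, N) pairwise affinities
    logits -= logits.max(axis=0, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=0, keepdims=True)      # each column sums to 1
    return x + gamma * (h @ attn)                # residual long-range mixing

rng = np.random.default_rng(5)
C, N = 8, 16
x = rng.normal(size=(C, N))
Wf, Wg, Wh = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
print(self_attention(x, Wf, Wg, Wh).shape)       # (8, 16)
```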
Previous work has provided methods for decomposing unitary matrices into series of quantum multiplexers, but the multiplexers created in this way are highly non-minimal. This paper presents a new approach for optimizing quantum multiplexers with arbitrary single-qubit quantum target functions. For quantum multiplexers, we define standard forms and two types of new forms: fixed polarity quantum forms (FPQF) and Kronecker quantum forms (KQF), which are analogous to Minterm Sum of Products forms, Fixed Polarity Reed-Muller (FPRM) forms, and Kronecker Reed-Muller (KRM) forms, respectively, for classical logic functions. Drawing inspiration from the usage of butterfly diagrams for FPRM and KRM forms, we devise a method to exhaustively construct all FPQF and KQF forms. Thus, the new forms can be used to optimize quantum circuits with arbitrary target unitary matrices, rather than only multi-controlled NOT gates such as CNOT, CCNOT, and their extensions. Experimental results on FPQF and KQF forms, as well as FPRM and KRM classical forms, applied to various target gates such as NOT, V, V+, Hadamard, and Pauli rotations, demonstrate that FPQF and KQF forms greatly reduce the gate cost of quantum multiplexers in both randomly generated data and FPRM benchmarks. | quantum physics
In this work, we study the phenomenon of quantum entanglement by computing the de Sitter entanglement entropy from the von Neumann measure. For this purpose we consider a bipartite quantum field theoretic setup in the presence of an axion originating from ${\bf Type~ II~B}$ string theory. We take the initial vacuum to be a CPT invariant non-adiabatic $\alpha$ vacuum state under the ${\bf SO(1,4)}$ isometry, which is characterized by a real one-parameter family. To implement this technique we use an ${\bf S^2}$ which divides de Sitter space into exterior and interior sub-regions. First, we derive the wave function of the axion in an open chart for $\alpha$ vacua by applying a Bogoliubov transformation to the solution for the Bunch-Davies vacuum state. Further, we quantify the density matrix by tracing over the contribution from the exterior region. Using this result we derive the entanglement entropy and R$\acute{e}$nyi entropy and explain the long-range quantum effects in primordial cosmological correlations. We also provide a comparison between the results obtained from the Bunch-Davies vacuum and the generalized $\alpha$ vacua, which implies that the amount of quantum entanglement and the long-range effects are larger for non-zero values of the parameter $\alpha$. Most significantly, our derived results for $\alpha$ vacua provide the necessary condition for generating non-zero entanglement entropy in primordial cosmology. | high energy physics theory
An FI- or an OI-module $\mathbf{M}$ over a corresponding noetherian polynomial algebra $\mathbf{P}$ may be thought of as a sequence of compatible modules $\mathbf{M}_n$ over a polynomial ring $\mathbf{P}_n$ whose number of variables depends linearly on $n$. In order to study invariants of the modules $\mathbf{M}_n$ in dependence of $n$, an equivariant Hilbert series is introduced if $\mathbf{M}$ is graded. If $\mathbf{M}$ is also finitely generated, it is shown that this series is a rational function. Moreover, if this function is written in reduced form rather precise information about the irreducible factors of the denominator is obtained. This is key for applications. It follows that the Krull dimension of the modules $\mathbf{M}_n$ grows eventually linearly in $n$, whereas the multiplicity of $\mathbf{M}_n$ grows eventually exponentially in $n$. Moreover, for any fixed degree $j$, the vector space dimensions of the degree $j$ components of $\mathbf{M}_n$ grow eventually polynomially in $n$. As a consequence, any graded Betti number of $\mathbf{M}_n$ in a fixed homological degree and a fixed internal degree grows eventually polynomially in $n$. Furthermore, evidence is obtained to support a conjecture that the Castelnuovo-Mumford regularity and the projective dimension of $\mathbf{M}_n$ both grow eventually linearly in $n$. It is also shown that modules $\mathbf{M}$ whose width $n$ components $\mathbf{M}_n$ are eventually Artinian can be characterized by their equivariant Hilbert series. Using regular languages and finite automata, an algorithm for computing equivariant Hilbert series is presented. | mathematics |
The emergence of data-driven demand analysis has led to the increased use of generative modelling to learn the probabilistic dependencies between random variables. Although their use has mostly been limited to image recognition and classification in recent years, generative machine learning algorithms can be a powerful tool for travel behaviour research by replicating travel behaviour from the underlying properties of the data. In this paper, we examine the use of a generative machine learning approach for analyzing multiple discrete-continuous (MDC) travel behaviour data. We provide a plausible perspective on how machine learning techniques can be exploited to interpret the underlying heterogeneities in the data. We show that generative models are conceptually similar to the choice selection behaviour process through information entropy and variational Bayesian inference. Without loss of generality, we consider a restricted Boltzmann machine (RBM) based algorithm with multiple discrete-continuous layers, formulated as a variational Bayesian inference optimization problem. We systematically describe the proposed machine learning algorithm and develop a process for analyzing travel behaviour data from a generative learning perspective. We show parameter stability from model analysis and simulation tests on an open dataset with multiple discrete-continuous dimensions comprising 293,330 observations. For interpretability, we derive the conditional probabilities and elasticities and perform statistical analysis on the latent variables. We show that our model can generate statistically similar data distributions for travel forecasting and prediction and performs better than purely discriminative methods in validation. Our results indicate that latent constructs in generative models can accurately and consistently represent the joint distribution of MDC data. | statistics
Given a closed subgroup $G\subset U_N^+$ which is homogeneous, in the sense that we have $S_N\subset G\subset U_N^+$, the corresponding Tannakian category $C$ must satisfy $span(\mathcal{NC}_2)\subset C\subset span(P)$. Based on this observation, we construct a certain integer $p\in\mathbb N\cup\{\infty\}$, that we call "easiness level" of $G$. The value $p=1$ corresponds to the case where $G$ is easy, and we explore here, with some theory and examples, the case $p>1$. As a main application, we show that $S_N\subset S_N^+$ and other liberation inclusions, known to be maximal in the easy setting, remain maximal at the easiness level $p=2$ as well. | mathematics |
We report the Transiting Exoplanet Survey Satellite ($TESS$) detection of a multi-planet system orbiting the $V=10.9$ K0 dwarf TOI 125. We find evidence for up to five planets, with varying confidence. Three high signal-to-noise transit signals correspond to sub-Neptune-sized planets ($2.76$, $2.79$, and $2.94\ R_{\oplus}$), and we statistically validate the planetary nature of the two inner planets ($P_b = 4.65$ days, $P_c = 9.15$ days). With only two transits observed, we report the outer object ($P_{.03} = 19.98$ days) as a high signal-to-noise ratio planet candidate. We also detect a candidate transiting super-Earth ($1.4\ R_{\oplus}$) with an orbital period of only $12.7$ hours and a candidate Neptune-sized planet ($4.2\ R_{\oplus}$) with a period of $13.28$ days, both at low signal-to-noise. This system is amenable to mass determination via radial velocities and transit timing variations, and provides an opportunity to study planets of similar size while controlling for age and environment. The ratio of orbital periods between TOI 125 b and c ($P_c/P_b = 1.97$) is slightly smaller than an exact 2:1 commensurability and is atypical of multiple planet systems from $Kepler$, which show a preference for period ratios just $wide$ of first-order period ratios. A dynamical analysis refines the allowed parameter space through stability arguments and suggests that, despite the nearly commensurate periods, the system is unlikely to be in resonance. | astrophysics |
Hybrid analog and digital beamforming (HBF) has been recognized as an attractive technique offering a tradeoff between hardware implementation limitations and system performance for future broadband millimeter wave (mmWave) communications. In contrast to most current works focusing on the HBF design for orthogonal frequency division multiplexing based mmWave systems, this paper investigates the HBF design for single carrier (SC) systems due to their advantage of a low peak-to-average power ratio in transmission. By applying the alternating minimization method, we propose an efficient HBF scheme based on the minimum mean square error criterion. Simulation results show that the proposed scheme outperforms the conventional HBF scheme for SC systems. | electrical engineering and systems science
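A toy version of the alternating idea is sketched below: it alternates a least-squares digital update with a phase-only projection of the analog matrix to approximate a target fully-digital beamformer (a common heuristic used here for illustration only; it is not the paper's MMSE formulation, and the matrix sizes are invented):

```python
# Alternating-minimization sketch: F_opt ~ F_rf @ F_bb with |F_rf[i, j]| = 1.
import numpy as np

rng = np.random.default_rng(4)
Nt, Nrf, Ns = 32, 4, 2                           # antennas, RF chains, streams
F_opt = rng.normal(size=(Nt, Ns)) + 1j * rng.normal(size=(Nt, Ns))

F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(Nt, Nrf)))
for _ in range(50):
    F_bb = np.linalg.lstsq(F_rf, F_opt, rcond=None)[0]    # digital update
    F_rf = np.exp(1j * np.angle(F_opt @ F_bb.conj().T))   # phase-only heuristic
err = np.linalg.norm(F_opt - F_rf @ F_bb) / np.linalg.norm(F_opt)
print(f"relative approximation error: {err:.3f}")
```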
We forecast the reionization history constraints, inferred from Lyman-alpha damping wing absorption features, for a future sample of $\sim 20$ $z \geq 6$ gamma-ray burst (GRB) afterglows. We describe each afterglow spectrum by a three-parameter model. First, $L$ characterizes the size of the ionized region (the "bubble size") around a GRB host halo. Second, $\langle x_{\rm HI}\rangle$ is the volume-averaged neutral fraction outside of the ionized bubble around the GRB, which is approximated as spatially uniform. Finally, $N_{\mathrm{HI}}$ denotes the column-density of a local damped Lyman-alpha absorber (DLA) associated with the GRB host galaxy. The size distribution of ionized regions is extracted from a numerical simulation of reionization, and evolves strongly across the Epoch of Reionization (EoR). The model DLA column densities follow the empirical distribution determined from current GRB afterglow spectra. We use a Fisher matrix formalism to forecast the $\langle x_{\rm HI}(z)\rangle$ constraints that can be obtained from follow-up spectroscopy of afterglows with SNR = 20 per R=3,000 resolution element at the continuum. We find that the neutral fraction may be determined to better than 10-15\% (1-$\sigma$) accuracy from this data across multiple independent redshift bins at $z \sim 6-10$, spanning much of the EoR, although the precision degrades somewhat near the end of reionization. A more futuristic survey with $80$ GRB afterglows at $z \geq 6$ can improve the precision here by a factor of $2$ and extend measurements out to $z \sim 14$. We further discuss how these constraints may be combined with estimates of the escape fraction of ionizing photons, derived from the DLA column density distribution towards GRBs extracted at slightly lower redshift. This combination will help in testing whether we have an accurate census of the sources that reionized the universe. | astrophysics
One of the fundamental tasks in graph data mining is to find a planted community (dense subgraph), which has wide applications in biology, finance, spam detection, and so on. For real network data, the existence of a dense subgraph is generally unknown. Statistical tests have been devised to test the existence of a dense subgraph in a homogeneous random graph. However, many networks present extreme heterogeneity, that is, the degrees of nodes or vertices do not concentrate on a typical value. The existing tests designed for homogeneous random graphs are not straightforwardly applicable to the heterogeneous case. Recently, a scan test was proposed for detecting a dense subgraph in a heterogeneous (inhomogeneous) graph (\cite{BCHV19}). However, the computational complexity of the scan test is generally not polynomial in the graph size, which makes the test impractical for large or moderate networks. In this paper, we propose a polynomial-time test that has the standard normal distribution as the null limiting distribution. The power of the test is theoretically investigated, and we evaluate the performance of the test by simulation and a real data example. | statistics
We investigate the phase structure of the compactified $2$-dimensional nonlinear $SU(3)/U(1)^2$ flag sigma model with respect to two $\theta$-terms. Based on the circle compactification with the ${\mathbb Z}_{3}$-twisted boundary condition, which preserves an 't Hooft anomaly of the original uncompactified theory, we perform a semiclassical analysis based on the dilute instanton gas approximation (DIGA). We clarify the classical vacua of the theory and derive fractional instanton solutions connecting these vacua. The resulting phase structure based on DIGA exhibits quantum phase transitions and a triple degeneracy at special points in the $(\theta_1,\theta_2)$-plane, which is consistent with the phase diagram obtained from the anomaly matching and global inconsistency conditions. This result indicates the adiabatic continuity between the flag sigma models on ${\mathbb R}^{2}$ and ${\mathbb R}\times S^{1}$ with small compactification radius. We further estimate contributions from instanton--anti-instanton configurations (bions) and show the existence of an imaginary ambiguity, which is expected to be cancelled by that of the perturbative Borel resummation. | high energy physics theory
Since gravitational waves (GWs) propagate freely through a perfect fluid, coalescing compact binary systems, as standard sirens, allow us to measure the luminosity distance directly and provide distance measurements unaffected by cosmic opacity. The DECi-hertz Interferometer Gravitational-wave Observatory (DECIGO) is a future Japanese space gravitational-wave antenna sensitive to the frequency range between the target frequencies of LISA and ground-based detectors. Combining the predicted future GW observations from DECIGO and three current popular astrophysical probes (HII regions, the SNe Ia Pantheon sample, and a quasar sample) in the electromagnetic (EM) domain, one would be able to probe the opacity of the Universe at different redshifts. In this paper, we show that the cosmic opacity parameter can be constrained to a high precision ($\Delta \epsilon\sim 10^{-2}$) out to high redshifts ($z\sim$5). In order to reconstruct the evolution of cosmic opacity without assuming any particular functional form for it, the cosmic opacity tests should be applied to individual redshift bins independently. Therefore, we also calculate the optical depth at individual redshifts and the averaged $\tau(z)$ within redshift bins. Our findings indicate that, compared with the results obtained from the HII galaxies and Pantheon SNe Ia, there is an improvement in precision when the quasar sample is considered. A non-zero optical depth is statistically significant only for the redshift ranges $0<z<0.5$, $1<z<2$, and $2.5<z<3.5$, a tendency different from that obtained in the framework of a parametrized form. Therefore, the importance of cosmic-opacity tests without a prescribed phenomenological function should be emphasized. | astrophysics
We report the first GeV $\gamma$-ray emission from SNR G15.9+0.2 in this work. We find that its power-law spectral index is 2.13$\pm$0.05 at a 13.29$\sigma$ significance level, and that the $\gamma$-ray emission can be characterized by a 2D Gaussian spatial distribution, which provides a significant improvement over a point-source model. Moreover, we find that its likely counterparts in the radio, X-ray, and TeV energy bands are well coincident with its spatial location. Analyzing the variability of the 12.4-year light curve (LC), we find that this LC exhibits weak variability at a 3.30$\sigma$ variability significance level. Investigating the 2$\sigma$ error region of its best-fit position, we do not find confirmed active galactic nuclei (AGNs) or AGN candidates in the region of this SNR, so we suggest that the new $\gamma$-ray emission likely originates from SNR G15.9+0.2. On this basis, we discuss the likely origins of its $\gamma$-ray radiation combined with the distribution of the surrounding molecular clouds. | astrophysics
In this paper, we construct a new generalization of a class of discrete bidimensional models, the so-called Quantum Double Models, by introducing matter qunits on the faces of the lattice that supports these models. This new generalization can be interpreted as the algebraic dual of a first one, in which matter qunits are introduced at the vertices of this same lattice. By evaluating the algebraic and topological orders of these new models, we prove that, as in the first generalization, a new phenomenon of quasiparticle confinement may appear again: this happens when the co-action homomorphism between the matter and gauge groups is non-trivial. Consequently, this homomorphism not only classifies the different models that belong to this new class, but also suggests that they can be interpreted as a 2-dimensional restriction of the 2-lattice gauge theories. | quantum physics
In this paper, we take up an old thread of development concerning the characterization of supersymmetric theories without any use of anticommuting variables that goes back to one of the authors' very early work [1]. Our special focus here will be on the formulation of supersymmetric Yang-Mills theories, extending previous results beyond $D=4$ dimensions. This perspective is likely to provide new insights into these theories, and in particular the maximally extended $N=4$ theory. As a new result we present a novel derivation of the admissible dimensions for interacting (pure) super-Yang-Mills theories to exist. This article is dedicated to the memory of Peter Freund, amongst many other things an early contributor to supersymmetry, and an author of one of the very first papers on superconformal gauge theories [2]. The final section contains some personal reminiscences of H.N.'s encounters with Peter Freund. | high energy physics theory |
This paper introduces a measure of diffusion of binary outcomes over a large, sparse network. The measure captures the aggregated spillover effect of the outcomes in the first period on their neighboring outcomes in the second period. We associate the network with a set of conditional independence restrictions, and show that when there is an observed proxy network that satisfies these conditional independence restrictions, the measure of diffusion is identified as a spatio-temporal dependence measure of observed outcomes. When the proxy network does not satisfy the restrictions but the spillover effect is nonnegative, the spatio-temporal dependence measure serves as a lower bound for the diffusion. Using this, we propose a confidence lower bound for diffusion and establish its asymptotic validity. Our Monte Carlo simulation studies demonstrate the finite sample stability of the inference across a range of network configurations. We apply the method to Indian village data to measure the diffusion of microfinancing decisions over social networks of households and find that the diffusion parameter is significantly different from zero at 1% level. | statistics |
We propose a new model for estimating the free energy of forming a molecular cavity in a solvent, by assuming this energy is dominated by the electrostatic energy associated with creating the static (interface) potential inside the cavity. The new model approximates the cavity-formation energy as that of a shell capacitor: the inner, solute-shaped conductor is held at the static potential, and the outer conductor (at the first solvation shell) is held at zero potential. Compared to cavity energies computed using free-energy perturbation with explicit-solvent molecular dynamics, the new model exhibits surprising accuracy (Mobley test set, RMSE 0.45 kcal/mol). Combined with a modified continuum model for solute-solvent van der Waals interactions, the total nonpolar model has RMSE of 0.55 kcal/mol on this test set, which is remarkable because the two terms largely cancel. The overall nonpolar model has a small number of physically meaningful parameters and compares favorably to other published models of nonpolar solvation. Finally, when the proposed nonpolar model is combined with our solvation-layer interface condition (SLIC) continuum electrostatic model, which includes asymmetric solvation-shell response, we predict solvation free energies with an RMS error of 1.35 kcal/mol relative to experiment, comparable to the RMS error of explicit-solvent FEP (1.26 kcal/mol). Moreover, all parameters in our model have a clear physical meaning, and employing reasonable temperature dependencies yields remarkable correlation with solvation entropies. | physics
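For a spherical solute the shell-capacitor energy reduces to the textbook concentric-capacitor formula; the sketch below plugs in hypothetical numbers (radii, static potential, and the use of the vacuum permittivity in the shell region are all illustrative inputs, chosen only to show the resulting magnitude):

```python
# Shell-capacitor estimate of the cavity-formation energy (hypothetical numbers).
import numpy as np

eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23          # Avogadro's number

phi_s = 0.5                  # V, assumed static (interface) potential
a = 2.0e-10                  # m, cavity radius (hypothetical)
b = a + 3.0e-10              # m, outer conductor at the first solvation shell

C = 4 * np.pi * eps0 / (1 / a - 1 / b)     # concentric-sphere capacitance
E = 0.5 * C * phi_s**2                     # stored electrostatic energy, J
print(f"{E * N_A / 4184.0:.2f} kcal/mol")  # ~0.7 kcal/mol for these inputs
```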
We present a sensitive, tunable radio-frequency resonator designed to detect reactive changes in nanoelectronic devices down to dilution refrigerator temperatures. The resonator incorporates GaAs varicap diodes to allow electrical tuning of the resonant frequency and the coupling to the input line. We find a resonant frequency tuning range of 8.4 MHz at 55 mK that increases to 29 MHz at 1.5 K. To assess the impact on performance of different tuning conditions, we connect a quantum dot in a silicon nanowire field-effect transistor to the resonator, and measure changes in the device capacitance caused by cyclic electron tunneling. At 250 mK, we obtain an equivalent charge sensitivity of $43~\mu e / \sqrt{\text{Hz}}$ when the resonator and the line are impedance-matched and show that this sensitivity can be further improved to $31~\mu e / \sqrt{\text{Hz}}$ by re-tuning the resonator. We understand this improvement by using an equivalent circuit model and demonstrate that for maximum sensitivity to capacitance changes, in addition to impedance matching, a high-quality resonator with low parasitic capacitance is desired. | condensed matter |
Target imbalance affects the performance of recent deep learning methods in many medical image segmentation tasks. It is a twofold problem: class imbalance - the size of the positive class (lesion) compared to the negative class (non-lesion); lesion size imbalance - large lesions overshadow small ones (in the case of multiple lesions per image). While the former has been addressed in multiple works, the latter lacks investigation. We propose a loss reweighting approach to increase the ability of the network to detect small lesions. During the learning process, we assign a weight to every image voxel. The assigned weights are inversely proportional to the lesion volume, so smaller lesions get larger weights. We report the benefit of our method for well-known loss functions, including Dice Loss, Focal Loss, and Asymmetric Similarity Loss. Additionally, we compare our results with other reweighting techniques: Weighted Cross-Entropy and Generalized Dice Loss. Our experiments show that inverse weighting considerably increases the detection quality, while preserving the delineation quality at a state-of-the-art level. We publish a complete experimental pipeline for two publicly available datasets of CT images: LiTS and LUNA16 (https://github.com/neuro-ml/inverse_weighting). We also show results on a private database of MR images for the task of multiple brain metastases delineation. | electrical engineering and systems science
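A minimal sketch of such a weight map is given below (connected-component labelling plus inverse-volume weights; the normalization and the weighted cross-entropy are illustrative choices, not necessarily the paper's exact loss):

```python
# Per-voxel weights inversely proportional to the volume of each lesion (sketch).
import numpy as np
from scipy import ndimage

def inverse_volume_weights(mask):
    """mask: binary lesion mask of any dimensionality."""
    labels, n_lesions = ndimage.label(mask)
    weights = np.zeros(mask.shape, dtype=float)
    for k in range(1, n_lesions + 1):
        voxels = labels == k
        weights[voxels] = 1.0 / voxels.sum()    # small lesion -> large weight
    if n_lesions:
        weights *= mask.sum() / weights.sum()   # keep the total weight comparable
    return weights

def weighted_bce(p, mask, eps=1e-7):
    """Illustrative weighted cross-entropy; background voxels keep weight 1."""
    w = np.where(mask > 0, inverse_volume_weights(mask), 1.0)
    ce = -(mask * np.log(p + eps) + (1 - mask) * np.log(1 - p + eps))
    return (w * ce).mean()

mask = np.zeros((32, 32), dtype=int)
mask[2:4, 2:4] = 1          # small lesion (4 voxels)
mask[10:20, 10:20] = 1      # large lesion (100 voxels)
print(weighted_bce(np.full(mask.shape, 0.5), mask))
```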
We study the order statistics of a random walk (RW) of $n$ steps whose jumps are distributed according to symmetric Erlang densities $f_p(\eta)\sim |\eta|^p \,e^{-|\eta|}$, parametrized by a non-negative integer $p$. Our main focus is on the statistics of the gaps $d_{k,n}$ between two successive maxima $d_{k,n}=M_{k,n}-M_{k+1,n}$ where $M_{k,n}$ is the $k$-th maximum of the RW between step 1 and step $n$. In the limit of large $n$, we show that the probability density function of the gaps $P_{k,n}(\Delta) = \Pr(d_{k,n} = \Delta)$ reaches a stationary density $P_{k,n}(\Delta) \to p_k(\Delta)$. For large $k$, we demonstrate that the typical fluctuations of the gap, for $d_{k,n}= O(1/\sqrt{k})$ (and $n \to \infty$), are described by a non-trivial scaling function that is independent of $k$ and of the jump probability density function $f_p(\eta)$, thus corroborating our conjecture about the universality of the regime of typical fluctuations (see G. Schehr, S. N. Majumdar, Phys. Rev. Lett. 108, 040601 (2012)). We also investigate the large fluctuations of the gap, for $d_{k,n} = O(1)$ (and $n \to \infty$), and show that these two regimes of typical and large fluctuations of the gaps match smoothly. | condensed matter |
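The stationary gap statistics are straightforward to probe numerically. The sketch below (conventions assumed: jumps drawn as signed Gamma($p+1$) variables, which realizes the Erlang density up to normalization) estimates the mean gap for several $k$:

```python
# Monte Carlo estimate of the mean gap between the k-th and (k+1)-th maxima.
import numpy as np

rng = np.random.default_rng(2)

def erlang_walk(n, p):
    # |eta| ~ Gamma(p + 1, 1) realizes the density ~ |eta|^p e^{-|eta|}.
    jumps = rng.gamma(p + 1, 1.0, size=n) * rng.choice([-1.0, 1.0], size=n)
    return np.cumsum(jumps)

def mean_gap(k, n, p, trials=5000):
    gaps = np.empty(trials)
    for t in range(trials):
        top = np.sort(erlang_walk(n, p))[::-1]   # walk positions, descending
        gaps[t] = top[k - 1] - top[k]            # d_{k,n} = M_{k,n} - M_{k+1,n}
    return gaps.mean()

for k in (1, 2, 4, 8):
    print(k, round(mean_gap(k, n=200, p=1), 4))  # shrinks roughly like 1/sqrt(k)
```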
Video style transfer is getting more attention in the AI community for its numerous applications, such as augmented reality and animation production. Compared with traditional image style transfer, performing this task on video presents new challenges: how to effectively generate satisfactory stylized results for any specified style while maintaining temporal coherence across frames. Towards this end, we propose the Multi-Channel Correlation network (MCCNet), which can be trained to fuse exemplar style features and input content features for efficient style transfer while naturally maintaining the coherence of input videos. Specifically, MCCNet works directly in the feature space of the style and content domains, where it learns to rearrange and fuse style features based on their similarity with content features. The outputs generated by MCCNet are features containing the desired style patterns, which can further be decoded into images with vivid style textures. Moreover, MCCNet is also designed to explicitly align the features to the input, which ensures that the output maintains the content structures as well as temporal continuity. To further improve the performance of MCCNet under complex light conditions, we also introduce an illumination loss during training. Qualitative and quantitative evaluations demonstrate that MCCNet performs well in both arbitrary video and image style transfer tasks. | computer science
We approach the ortho-positronium (o-Ps) as a relativistic two-body problem in $2+1$ dimensions, in which o-Ps is composed of two oppositely charged particles interacting via an attractive Coulomb force. In addition to the separation of the center-of-mass and relative coordinates, mapping the background into the polar space-time makes it possible to construct the spin eigenstates of o-Ps. This approach makes the energy spectrum complex in order to describe o-Ps, which can decay. From the complex energy expression, we find the annihilation energy, binding energy, and lifetime of o-Ps in the S state. | high energy physics phenomenology
We propose a generalization of the Wasserstein distance of order 1 to the quantum states of $n$ qudits. The proposal recovers the Hamming distance for the vectors of the canonical basis, and more generally the classical Wasserstein distance for quantum states diagonal in the canonical basis. The proposed distance is invariant with respect to permutations of the qudits and unitary operations acting on one qudit and is additive with respect to the tensor product. Our main result is a continuity bound for the von Neumann entropy with respect to the proposed distance, which significantly strengthens the best continuity bound with respect to the trace distance. We also propose a generalization of the Lipschitz constant to quantum observables. The notion of quantum Lipschitz constant allows us to compute the proposed distance with a semidefinite program. We prove a quantum version of Marton's transportation inequality and a quantum Gaussian concentration inequality for the spectrum of quantum Lipschitz observables. Moreover, we derive bounds on the contraction coefficients of shallow quantum circuits and of the tensor product of one-qudit quantum channels with respect to the proposed distance. We discuss other possible applications in quantum machine learning, quantum Shannon theory, and quantum many-body systems. | quantum physics |
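For states diagonal in the canonical basis, the proposed distance coincides with the classical order-1 Wasserstein distance with Hamming cost, which a small transport linear program can compute; the sketch below (random diagonal states on three qubits, an illustrative choice) makes this concrete:

```python
# W1 with Hamming cost between two diagonal 3-qubit states, as a transport LP.
import itertools
import numpy as np
from scipy.optimize import linprog

n, d = 3, 2
basis = list(itertools.product(range(d), repeat=n))
N = len(basis)
cost = np.array([[sum(a != b for a, b in zip(x, y)) for y in basis]
                 for x in basis], dtype=float)       # Hamming distances

rng = np.random.default_rng(3)
p = rng.dirichlet(np.ones(N))      # diagonal entries of the first state
q = rng.dirichlet(np.ones(N))      # diagonal entries of the second state

A_eq = np.vstack([np.kron(np.eye(N), np.ones(N)),    # row sums of the plan = p
                  np.kron(np.ones(N), np.eye(N))])   # column sums of the plan = q
res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
              bounds=(0, None), method="highs")
print("W1 with Hamming cost:", res.fun)
```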
Nuclear $\beta$ decays as well as the decay of the neutron are well-established low-energy probes of physics beyond the Standard Model (SM). In particular, with the axial-vector coupling of the nucleon $g_A$ determined from lattice QCD, the comparison between experiment and SM prediction is commonly used to derive constraints on right-handed currents. Further, in addition to the CKM element $V_{us}$ from kaon decays, $V_{ud}$ from $\beta$ decays is a critical input for the test of CKM unitarity. Here, we point out that the available information on $\beta$ decays can be re-interpreted as a stringent test of lepton flavor universality (LFU). In fact, we find that the ratio of $V_{us}$ from kaon decays over $V_{us}$ from $\beta$ decays (assuming CKM unitarity) is extremely sensitive to LFU violation (LFUV) in $W$-$\mu$-$\nu$ couplings thanks to a CKM enhancement by $(V_{ud}/V_{us})^2\sim 20$. From this perspective, recent hints for the violation of CKM unitarity can be viewed as further evidence for LFUV, fitting into the existing picture exhibited by semi-leptonic $B$ decays and the anomalous magnetic moments of muon and electron. Finally, we comment on the future sensitivity that can be reached with this LFU violating observable and discuss complementary probes of LFU that may reach a similar level of precision, such as $\Gamma(\pi\to\mu\nu)/\Gamma(\pi\to e\nu)$ at the PEN and PiENu experiments or even direct measurements of $W\to\mu\nu$ at an FCC-ee. | high energy physics phenomenology |
Bayesian methods, which utilize Bayes' theorem to update the knowledge of desired parameters after each measurement, are used in a wide range of quantum science. For various applications in quantum science, efficiently and accurately determining a quantum transition frequency is essential. However, the exact relation between a desired transition frequency and the controllable experimental parameters is usually absent. Here, we propose an efficient scheme to search for the suitable conditions for a desired quantum transition via an adaptive Bayesian algorithm, and experimentally demonstrate it by using coherent population trapping in an ensemble of laser-cooled $^{87}$Rb atoms. The transition frequency is controlled by an external magnetic field, which can be tuned in real time by applying a d.c. voltage. Through the adaptive Bayesian algorithm, the voltage can automatically converge to the desired one from a random initial value after only a few iterations. In particular, when the relation between the target frequency and the applied voltage is nonlinear, our algorithm shows significant advantages over traditional methods. This work provides a simple and efficient way to determine a transition frequency, which can be widely applied in the fields of precision spectroscopy, such as atomic clocks, magnetometers, and nuclear magnetic resonance. | quantum physics
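The flavor of the adaptive loop can be conveyed with a toy model (a linear voltage-to-frequency response with unknown sensitivity and Gaussian measurement noise; all numbers are hypothetical and the actual experimental scheme is far richer): maintain a grid posterior, update it with Bayes' rule after each probe, and set the next voltage from the posterior mean.

```python
# Grid-posterior Bayesian search for the control voltage (toy linear model).
import numpy as np

rng = np.random.default_rng(1)
k_true, f0, f_target, sigma = 0.7, 10.0, 12.0, 0.05  # hypothetical units

k_grid = np.linspace(0.1, 2.0, 400)
post = np.full(k_grid.size, 1.0 / k_grid.size)       # flat prior on sensitivity

V = rng.uniform(0.5, 3.0)                            # random initial voltage
for step in range(8):
    # Simulated experiment: noisy detuning of the transition from the target.
    detuning = f0 + k_true * V - f_target + sigma * rng.normal()
    like = np.exp(-0.5 * ((detuning - (f0 + k_grid * V - f_target)) / sigma) ** 2)
    post *= like
    post /= post.sum()                               # Bayes update
    k_hat = np.sum(k_grid * post)                    # posterior mean
    V = (f_target - f0) / k_hat                      # adaptive next voltage
    print(f"step {step}: V = {V:.4f}")               # -> (12 - 10)/0.7 ~ 2.857
```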
Low-frequency (80-240 MHz) radio observations of the solar corona are presented using the Murchison Widefield Array (MWA), and several discoveries are reported. The corona is reviewed, followed by chapters on Type III bursts and circularly-polarized quiescent emission. The second chapter details new Type III burst dynamics. One source component at higher frequencies splits into two at lower frequencies, where the two components rapidly diverge. This is attributed to electron beams traversing a divergent magnetic field configuration, which is supported by extreme ultraviolet jet observations outlining a coronal null point. The third chapter uses Type III burst heights as density probes. Harmonic plasma emission implies ~4x enhancements over background models. This can be explained by electron beams traveling along dense fibers or by propagation effects that elevate apparent source heights. The quiescent corona is compared to model predictions to conclude that propagation effects can largely but not entirely explain the apparent density enhancements. The fourth chapter surveys over 100 spectropolarimetric observing runs. Around 700 compact sources are detected with polarization fractions from less than 0.5% to nearly 100%. They are interpreted as plasma emission noise storm sources down to levels not previously observable. A "bullseye" structure is reported for coronal holes, where an outer ring surrounds an oppositely-polarized central component that does not match the sign expected of thermal bremsstrahlung. The large-scale polarization structure is shown to be well-correlated with that of a global magnetic field model. The last chapter summarizes results and outlines future work. A preliminary comparison of polarization images to model predictions is shared, along with coronal mass ejection observations revealing a radio arc that is morphologically similar to the white-light structure. | astrophysics |
Mobile cloud computing is an emerging field that is gaining popularity across borders at a rapid pace. Similarly, the field of health informatics is also considered an extremely important field. This work explores the collaboration between these two fields to solve the traditional problem of extracting Electrocardiogram signals from trace reports and then performing analysis. The developed system has two front ends, the first dedicated to the user for photographing the trace report. Once the photographing is complete, mobile computing is used to extract the signal. Once the signal is extracted, it is uploaded to the server and further analysis is performed on the signal in the cloud. Once this is done, the second interface, intended for the physician, can be used to download and view the trace from the cloud. The data is securely held using a password-based authentication method. The system presented here is one of the first attempts at delivering the total solution, and after further upgrades, it will be possible to deploy the system in a commercial setting. | electrical engineering and systems science
The non-contact infrared thermometer (NCIT) is an important basic tool for fever screening and self-health monitoring. However, it is susceptible to thermal shock when working in a low-temperature environment, which causes time-consuming and inaccurate human body temperature measurements. To overcome the effects of thermal shock, a hybrid temperature compensation method combining hardware and an algorithm is proposed. Firstly, the principle of infrared temperature measurement is described and the influence of thermal shock on the infrared thermometer is analyzed. Then, the hybrid temperature compensation scheme is constructed by mounting a heating ring on the infrared sensor shell and using a proportional-integral-derivative (PID) algorithm with pulse width modulation (PWM) to control its heating. In this way, the internal ambient temperature of the infrared sensor can be raised rapidly to approach the external ambient temperature, and stable outputs of the infrared sensor are also reached more quickly. Finally, experiments are carried out in a laboratory. The results show that the proposed method can quickly and accurately measure the temperature of a standard black body at ambient temperatures of 5, 15, and 25 Celsius; the measurement error is only 0.2 Celsius, and the measurement time is less than 2 seconds. This study would be beneficial for improving the performance of NCITs, especially the infrared ear thermometer. | physics
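A toy discrete-time version of the control pattern is sketched below (first-order thermal plant, PID gains, and time constants are all invented for illustration): the PID output is clipped to a PWM duty cycle in [0, 1] that drives the heating ring.

```python
# Discrete PID driving a PWM duty cycle for the heating ring (toy plant).
import numpy as np

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    state["i"] = float(np.clip(state["i"] + error * dt, -2.0, 2.0))  # anti-windup
    d = (error - state["e_prev"]) / dt
    state["e_prev"] = error
    duty = kp * error + ki * state["i"] + kd * d
    return float(np.clip(duty, 0.0, 1.0))            # PWM duty cycle in [0, 1]

T_env, T = 25.0, 5.0                                 # ambient vs. cold sensor interior
tau, gain, dt = 30.0, 40.0, 0.1                      # invented thermal constants
state = {"i": 0.0, "e_prev": 0.0}
for _ in range(600):                                 # 60 s of simulated control
    duty = pid_step(T_env - T, state, dt=dt)
    T += dt * ((T_env - T) / tau + gain * duty / tau)  # first-order plant
print(f"internal temperature after 60 s: {T:.2f} C")   # approaches 25 C
```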
For nearly 40 years, dark matter has been widely assumed to be cold and collisionless. Cold dark matter models make fundamental predictions for the behavior of dark matter on small (<10 kpc) scales. These predictions include cuspy density profiles at the centers of dark matter halos and a halo mass function that increases as dN/dM ~ M^-1.9 down to very small masses. We suggest two observational programs relying on extremely large telescopes to critically test these predictions, and thus shed new light on the nature of dark matter. (1) Combining adaptive optics-enabled imaging with deep spectroscopy to measure the three-dimensional motions of stars within a sample of Local Group dwarf galaxies that are the cleanest dark matter laboratories known in the nearby universe. From these observations the inner slope of the dark matter density profile can be determined with an accuracy of 0.20 dex, enabling a central cusp to be distinguished from a core at 5 sigma significance. (2) Diffraction-limited AO imaging and integral field spectroscopy of gravitationally lensed galaxies and quasars to quantify the abundance of dark substructures in the halos of the lens galaxies and along the line of sight. Observations of 50 lensed arcs and 50 multiply-imaged quasars will be sufficient to measure the halo mass function over the range 10^7 < M < 10^10 Msun at cosmological scales, independent of the baryonic and stellar composition of those structures. These two observational probes provide complementary information about the small scale structure, with a joint self-consistent analysis mitigating limitations of either probe. This program will produce the strongest existing constraints on the properties of dark matter on small scales, allowing conclusive tests of alternative warm, fuzzy, and self-interacting dark matter models. | astrophysics |
Using axisymmetric general relativistic magnetohydrodynamics simulations, we study the evolution of an accretion torus around a black hole endowed with different initial magnetic field configurations. Due to the accretion of material onto the black hole, a parabolic magnetic field develops in the accretion torus funnel around the vertical axis, for any initial magnetic field configuration. | astrophysics
General flavor-changing interactions of a Goldstone boson (GB) with fermions due to spontaneous global $U(1)_G$ symmetry breaking are discussed. This GB may be the Axion, solving the strong QCD CP problem, if there is a QCD anomaly for the $U(1)_G$ charge assignments of the quarks. Or it may be the Majoron in models where lepton number violation produces seesaw Majorana neutrino masses, if the symmetry breaking scale is much higher than the electroweak scale. It may also, in principle, play the roles of Axion and Majoron simultaneously as far as providing a solution to the strong CP problem and generating small Majorana neutrino masses are concerned. Great attention has been focused on flavor-conserving GB interactions. Recently, flavor-changing Axion and Majoron models have been studied in the hope of finding new physics from rare decays at the intensity frontier. In this work, we provide a systematic model-building study of a GB having flavor changing neutral current (FCNC) interactions in both the quark and lepton sectors, or separately in the quark, charged lepton and neutrino sectors, with the sources of FCNC interactions identified in detail. We provide a general proof of the equivalence of using physical GB components and GB broken generators for calculating GB couplings to two gluons and two photons, and some issues related to models having spontaneous CP violation are discussed. We also provide some details for obtaining FCNC GB interactions in several popular models, such as the Type-I, -II, -III seesaw and Left-Right symmetric models, and point out some special features of these models. | high energy physics phenomenology
We give a characterization of flat affine connections on manifolds by means of a natural affine representation of the universal covering of the Lie group of diffeomorphisms preserving the connection. From the infinitesimal point of view, this representation is determined by the 1-connection form and the fundamental form of the bundle of linear frames of the manifold. We show that the group of affine transformations of a real flat affine $n$-dimensional manifold acts on $\mathbb{R}^n$ leaving an open orbit when its dimension is greater than $n$. Moreover, when the dimension of the group of affine transformations is $n$, this orbit has discrete isotropy. For any given Lie subgroup $H$ of affine transformations of the manifold, we show the existence of an associative envelope of the Lie algebra of $H$, relative to the connection. The case when the manifold is a Lie group $G$ and $H$ acts on $G$ by left translations is particularly interesting. We also exhibit some results about flat affine manifolds whose group of affine transformations admits a flat affine bi-invariant structure. The paper is illustrated with several examples. | mathematics