text | label |
---|---|
We investigate the role of non-local correlations in LiFeAs by exploring an ab-initio-derived multi-orbital Hubbard model for LiFeAs via the Two-Particle Self-Consistent (TPSC) approach. The multi-orbital formulation of TPSC approximates the irreducible interaction vertex by an orbital-dependent constant, which is self-consistently determined from local spin and charge sum rules. Within this approach, we disentangle the contributions of local and non-local correlations in LiFeAs and show that in the local approximation one recovers the dynamical mean-field theory (DMFT) result. The comparison of our theoretical results to the most recent angle-resolved photoemission spectroscopy (ARPES) and de Haas-van Alphen (dHvA) data shows that non-local correlations in LiFeAs are decisive for describing the measured spectral function $A(\vec k,\omega)$, Fermi surface and scattering rates. These findings underline the importance of non-local correlations and benchmark different theoretical approaches for iron-based superconductors. | condensed matter |
We study temporal variations of the emission lines of Halpha, Hepsilon, H and K Ca II, D1 and D2 Na I, and 4026 and 5876 A He I in the HARPS spectra of Proxima Centauri over an extended period of 13.2 years, from May 27, 2004, to September 30, 2017. Aims. We analyse the common behaviour and differences in the intensities and profiles of different emission lines in the flare and quiet modes of Proxima activity. Methods. We compare the pseudo-equivalent widths (pEW) and profiles of the emission lines in the HARPS high-resolution (R ~ 115,000) spectra observed at the same epochs. Results. All emission lines show variability on a timescale of at least 10 min. The strength of all lines except He I 4026 A correlates with Halpha. During strong flares a `red asymmetry' appears in the Halpha emission line, indicating the infall of hot condensed matter into the chromosphere with velocities greater than 100 km/s, disturbing the chromospheric layers. As a result, the strength of the Ca II lines anti-correlates with Halpha during strong flares. The He I lines at 4026 and 5876 A appear in the strong flares. The cores of the D1 and D2 Na I lines are also seen in emission. During the minimum activity of Proxima Centauri, the Ca II lines and Hepsilon almost disappear, while the blue part of the Na I emission lines is affected by absorption in the extending and condensing flows. Conclusions. We see different behaviour of the emission lines formed in the flare regions and in the chromosphere. The chromospheric layers of Proxima Cen are likely heated by the flare events; these layers are cooled in the `non-flare' mode. The self-absorption structures in the cores of our emission lines vary with time due to the presence of a complicated system of inward and outward matter flows in the absorbing layers. | astrophysics |
We analyse optical photometric data of short term variability (flickering) of the accreting white dwarf in the jet-ejecting symbiotic star MWC560. The observations are obtained in 17 nights during the period November 2011 - October 2019. The colour-magnitude diagram shows that the hot component of the system becomes redder as it gets brighter. For the flickering source we find that it has colour 0.14 < B-V < 0.40, temperature in the range 6300 < T_fl < 11000 K, and radius 1.2 < R_fl < 18 Rsun. We find a strong correlation (correlation coefficient 0.76, significance < 0.001) between B band magnitude and the average radius of the flickering source - as the brightness of the system increases the size of the flickering source also increases. The estimated temperature is similar to that of the bright spot of cataclysmic variables. In 2019 the flickering is missing, and the B-V colour of the hot component becomes bluer. | astrophysics |
We find the leading electroweak corrections to the HQET/NRQCD Lagrangian. These corrections appear in the Wilson coefficients of the two- and four-quark operators and are considered here up to $\mathcal{O}(1/m^3)$ at one-loop order. The two-quark operators up to this order include new CP-violating terms, which we derive analogously to the CP-preserving QCD result at one-loop order. | high energy physics phenomenology |
Ultrafast laser excitation can trigger multiplex coherent dynamics in molecules. Here, we report attosecond transient absorption experiments addressing simultaneous probing of electronic and vibrational dynamics in a prototype molecule, deuterium bromide (DBr), following its strong-field ionization. Electronic and vibrational coherences in the ionic X$^2\Pi_{3/2}$ and X$^2\Pi_{1/2}$ states are characterized in the Br-$3d$ core-level absorption spectra via quantum beats with 12.6-fs and 19.9-fs periodicities, respectively. Polarization scans reveal that the phase of the electronic quantum beats depends on the probe direction, experimentally showing that the coherent electronic motion corresponds to the oscillation of the hole density along the ionization-field direction. The vibrational quantum beats are found to maintain a relatively constant amplitude, whereas the electronic quantum beats exhibit a partial decrease in time. Quantum wave-packet simulations show that the decoherence effect from the vibrational motion is insignificant because of the parallel relation between the X$^2\Pi_{3/2}$ and X$^2\Pi_{1/2}$ potentials. A comparison between the DBr and HBr results suggests that rotational motion is responsible for the decoherence, since it dephases the initial alignment prepared by the strong-field ionization. | physics |
Phase curves and secondary eclipses of gaseous exoplanets are diagnostic of atmospheric composition and meteorology, and the long observational baseline and high photometric precision from the Kepler Mission make its dataset well-suited for exploring phase curve variability, which provides additional insights into atmospheric dynamics. Observations of the hot Jupiter Kepler-76b span more than 1,000 days, providing an ideal dataset to search for atmospheric variability. In this study, we find that Kepler-76b's secondary eclipse, with a depth of $87 \pm 6$ parts-per-million (ppm), corresponds to an effective temperature of 2,830$^{+50}_{-30}$ K. Our results also show clear indications of variability in Kepler-76b's atmospheric emission and reflectivity, with the phase curve amplitude typically $50.5 \pm 1.3$ ppm but varying between 35 and 70 ppm over tens of days. As is common for hot Jupiters, Kepler-76b's phase curve shows a discernible offset of $\left( 9 \pm 1.3 \right)^\circ$ eastward of the sub-stellar point and varying in concert with the amplitude. These variations may arise from the advance and retreat of thermal structures and cloud formations in Kepler-76b's atmosphere; the resulting thermal perturbations may couple with the super-rotation expected to transport aerosols, giving rise to a feedback loop. Looking forward, the TESS Mission can provide new insight into planetary atmospheres, with good prospects to observe both secondary eclipses and phase curves among targets from the mission. TESS's increased sensitivity in red wavelengths as compared to Kepler means that it will probably probe different aspects of planetary atmospheres. | astrophysics |
A critical question in astrobiology is whether exoEarth candidates (EECs) are Earth-like, in that they originate life that progressively oxygenates their atmospheres similarly to Earth. We propose answering this question statistically by searching for O2 and O3 on EECs with missions such as HabEx or LUVOIR. We explore the ability of these missions to constrain the fraction, fE, of EECs that are Earth-like in the event of a null detection of O2 or O3 on all observed EECs. We use the Planetary Spectrum Generator to simulate observations of EECs with O2 and O3 levels based on Earth's history. We consider four instrument designs: LUVOIR-A (15m), LUVOIR-B (8m), HabEx with a starshade (4m, "HabEx/SS"), and HabEx without a starshade (4m, "HabEx/no-SS"); as well as three estimates of the occurrence rate of EECs (eta_earth): 24%, 5%, and 0.5%. In the case of a null detection, we find that for eta_earth = 24%, LUVOIR-A, LUVOIR-B, and HabEx/SS would constrain fE to <= 0.094, <= 0.18, and <= 0.56, respectively. This also indicates that if fE is greater than these upper limits, we are likely to detect O3 on at least 1 EEC. Conversely, we find that HabEx/no-SS cannot constrain fE, due to the lack of an ultraviolet coronagraph channel. For eta_earth = 5%, only LUVOIR-A and LUVOIR-B would be able to constrain fE, to <= 0.45 and <= 0.85, respectively. For eta_earth = 0.5%, none of the missions would allow us to constrain fE, due to the low number of detectable EECs. We conclude that the ability to constrain fE is more robust to uncertainties in eta_earth for missions with larger aperture mirrors. However, all missions are susceptible to an inconclusive null detection if eta_earth is sufficiently low. | astrophysics |
Although Bayesian Optimization (BO) has been employed for accelerating materials design in computational materials engineering, existing works are restricted to problems with quantitative variables. However, real designs of materials systems involve both qualitative and quantitative design variables representing material compositions, microstructure morphology, and processing conditions. For mixed-variable problems, existing BO approaches first represent qualitative factors by dummy variables and then fit a standard Gaussian process (GP) model with numerical variables as the surrogate model. This approach is theoretically restrictive and fails to capture complex correlations between qualitative levels. We present in this paper the integration of a novel latent-variable (LV) approach for mixed-variable GP modeling with the BO framework for materials design. LVGP is a fundamentally different approach that maps qualitative design variables to underlying numerical LVs in the GP, which has strong physical justification. It provides a flexible parameterization and representation of qualitative factors and shows superior modeling accuracy compared to existing methods. We demonstrate our approach through testing with numerical examples and materials design examples. It is found that in all test examples the mapped LVs provide intuitive visualization and substantial insight into the nature and effects of the qualitative factors. Though materials designs are used as examples, the method presented is generic and can be utilized for other mixed-variable design optimization problems that involve expensive physics-based simulations. | statistics |
We use the linear sigma model with quarks to locate the critical end point (CEP) in the effective QCD phase diagram, accounting for fluctuations in temperature and quark chemical potential. For this purpose, we use the non-equilibrium formalism provided by the superstatistics framework. We compute the effective potential in the high- and low-temperature approximations up to sixth order and include the contribution of ring diagrams to account for plasma screening effects. We fix the model parameters from relations between the thermal sigma and pion masses, imposing a first-order phase transition at zero temperature and a finite critical value for the baryon chemical potential that we take to be of the order of the nucleon mass. We find that the CEP displacement due to fluctuations in temperature and/or quark chemical potential is almost negligible. | high energy physics phenomenology |
In this paper, within the background field method, the renormalization and the gauge dependence of an SU(2) Yang-Mills theory with multiplets of spinor and scalar fields are studied. By extending the quantum action of the BV formalism with an extra fermion vector field and a constant fermion parameter, the multiplicative character of the renormalizability is proven. The renormalization of all the physical parameters of the theory under consideration is shown to be gauge-independent. | high energy physics theory |
We analyze stationary BPS black hole solutions to 4d $\mathcal{N}=2$ abelian gauged supergravity. Using an appropriate near horizon ansatz, we construct rotating attractors with magnetic flux realizing a topological twist along the horizon surface, for any theory with a symmetric scalar manifold. An analytic flow to asymptotically locally $AdS_4$ is presented for a subclass of these near horizon geometries, and an explicit new example of supersymmetric $AdS_4$ rotating black hole is discussed in detail. We further note that, upon tuning the gauging to special values, one can obtain solutions with different asymptotics, and in particular reductions of doubly spinning asymptotically $AdS_5$ black holes. Finally we present a proposal for the form of the BPS entropy function with rotation, which we expect to be holographically related to the refined twisted index of the dual theory. | high energy physics theory |
Non-minimal Higgs sectors are strongly constrained by the agreement of the measured couplings of the 125 GeV Higgs with Standard Model predictions. This agreement can be explained by an approximate $\mathbb{Z}_2$ symmetry under which the additional Higgs bosons are odd. This allows the additional Higgs bosons to be approximately inert, meaning that they have suppressed VEVs and suppressed mixing with the Standard Model Higgs. In this case, single production of the new Higgs bosons is suppressed, but electroweak pair production is unsuppressed. We study the phenomenology of a minimal 2 Higgs doublet model that realizes this scenario. In a wide range of parameters, the phenomenology of the model is essentially fixed by the masses of the exotic Higgs bosons, and can therefore be explored systematically. We study a number of different plausible signals in this model, and show that several LHC searches can constrain or discover additional Higgs bosons in this parameter space. We find that the reach is significantly extended at the high luminosity LHC. | high energy physics phenomenology |
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to downstream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and that fine-tuning pre-trained compact models can be competitive with more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available. | computer science |
We present a partonic picture for diffractive onium-nucleus scattering in the large-number-of-color limit from which the distribution of rapidity gaps in a certain kinematic region can be deduced. This picture allows us to draw a parallel between diffractive dissociation and the genealogy of partonic evolution, the latter being essentially similar to a branching-diffusion process in which the branching is the parton splitting, and the diffusion occurs in the transverse momenta of the partons. In particular, we show that the rapidity gap distribution corresponds to the distribution of the splitting rapidity of the last common ancestor of the partons whose transverse momenta are smaller than the nuclear saturation scale, when the scattering process is viewed in the restframe of the nucleus. Numerical calculations are also implemented to support the analytical predictions. | high energy physics phenomenology |
This paper revisits a special type of neural network known under two names. In the statistics and machine learning community it is known as a multi-class logistic regression neural network. In the neural network community, it is simply the soft-max layer. Its importance is underscored by its role in deep learning: as the last layer, whose output is actually the classification of the input patterns, such as images. Our exposition focuses on a mathematically rigorous derivation of the key equation expressing the gradient. The fringe benefit of our approach is a fully vectorized expression, which is the basis of an efficient implementation. The second result of this paper is the positivity of the second derivative of the cross-entropy loss function as a function of the weights. This result proves that optimization methods based on convexity may be used to train this network. As a corollary, we demonstrate that no $L^2$-regularizer is needed to guarantee convergence of gradient descent. | statistics |
In this paper we discuss correlation function computations in massive topological Landau-Ginzburg orbifolds, extending old results of Vafa. We then apply these computations to provide further tests of the nonabelian mirrors proposal and two-dimensional Hori-Seiberg dualities with $(S)O_{\pm}$ gauge groups and their mirrors. | high energy physics theory |
We look at BPS systems involving two interacting Sine-Gordon-like fields, in which one of them has a kink solution and the second one has either a kink or an antikink solution. The interaction between the two fields is controlled by a parameter $\lambda$ which has to satisfy $|\lambda| < 2$. We then take these solitonic static solutions (with solitons well localised), construct from them systems involving two solitons in each field (kinks and antikinks), and then use them as initial conditions for their evolution in Lorentz covariant versions of such models. This way we study their interactions and compare them with similar interactions involving only one Sine-Gordon field. In particular, we look at the behaviour of two static kinks in each field (which for one field repel each other) and of a system involving kinks and antikinks (which for one field attract each other), and examine how their behaviour depends on the strength of the interaction $\lambda$ between the two fields. Our simulations have led us to look again at the static BPS solutions of systems involving more fields. We have found that such ostensibly 'static' BPS solutions can exhibit small motions due to the excitation of their zero modes. These excitations arise from small unavoidable numerical errors (the overall translation is cancelled by the conservation of momentum), but as systems of two or more fields have more than one zero mode, such motions can be generated and are extremely small. The energy of our systems has been conserved to within $10^{-5}\%$. | high energy physics theory |
Many approaches to astronomical data reduction and analysis cannot tolerate missing data: corrupted pixels must first have their values imputed. This paper presents astrofix, a robust and flexible image imputation algorithm based on Gaussian Process Regression (GPR). Through an optimization process, astrofix chooses and applies a different interpolation kernel to each image, using a training set extracted automatically from that image. It naturally handles clusters of bad pixels and image edges and adapts to various instruments and image types. The mean absolute error of astrofix is several times smaller than that of median replacement and interpolation by a Gaussian kernel. We demonstrate good performance on both imaging and spectroscopic data, including the SBIG 6303 0.4m telescope and the FLOYDS spectrograph of Las Cumbres Observatory and the CHARIS integral-field spectrograph on the Subaru Telescope. | astrophysics |
Recently, a new type of constant Fayet-Iliopoulos (FI) terms was introduced in $\mathcal{N}=1$ supergravity, which do not require the gauging of the $R$-symmetry. We revisit and generalise these constructions, building a new class of K\"ahler-invariant FI terms parametrised by a function of the gravitino mass as a functional of the chiral superfields, which is then used to describe new models of inflation. They are based on a no-scale supergravity model of the inflaton chiral multiplet, supplemented by an abelian vector multiplet with the new FI term. We show that the inflaton potential is compatible with the CMB observational data, with a vacuum energy at the minimum that can be tuned to a tiny positive value. Finally, the axionic shift symmetry can be gauged by the $U(1)$, which becomes massive. These models offer a mechanism for fixing the gravitino mass in no-scale supergravities, which corresponds to a flat direction of the scalar potential in the absence of the new FI term; its origin in string theory is an interesting open problem. | high energy physics theory |
K2 observations of the weak-lined T Tauri binary V928 Tau A+B show the detection of a single, asymmetric eclipse which may be due to a previously unknown substellar companion eclipsing one component of the binary with an orbital period $>$ 66 days. Over an interval of about 9 hours, one component of the binary dims by around 60%, returning to its normal brightness about 5 hours later. From modeling of the eclipse shape we find evidence that the eclipsing companion may be surrounded by a disk or a vast ring system. The modeled disk has a radius of $0.9923\,\pm\,0.0005\,R_*$, with an inclination of $56.78\,\pm\, 0.03^\circ$, a tilt of $41.22\,\pm\,0.05^\circ$, an impact parameter of $-0.2506\,\pm\,0.0002\,R_*$ and an opacity of 1.00. The occulting disk must also move at a transverse velocity of $6.637\,\pm\,0.002\,R_*\,\mathrm{day}^{-1}$, which depending on whether it orbits V928 Tau A or B, corresponds to approximately 73.53 or 69.26 $\mathrm{km s}^{-1}$. A search in ground based archival data reveals additional dimming events, some of which suggest periodicity, but no unambiguous period associated with the eclipse observed by K2. We present a new epoch of astrometry which is used to further refine the orbit of the binary, presenting a new lower bound of 67 years, and constraints on the possible orbital periods of the eclipsing companion. The binary is also separated by 18" ($\sim$2250 au) from the lower mass CFHT-BD-Tau 7, which is likely associated with V928 Tau A+B. We also present new high dispersion optical spectroscopy that we use to characterize the unresolved stellar binary. | astrophysics |
In recent years, surveillance cameras have been widely deployed in public places, and the general crime rate has been reduced significantly due to these ubiquitous devices. Usually, these cameras provide cues and evidence after crimes are committed, while they are rarely used to prevent or stop criminal activities in time. It is both time- and labor-consuming to manually monitor the large amount of video data from surveillance cameras. Therefore, automatically recognizing violent behaviors from video signals becomes essential. This paper summarizes several existing video datasets for violence detection and proposes the RWF-2000 database with 2,000 videos captured by surveillance cameras in real-world scenes. Also, we present a new method that utilizes the merits of both 3D-CNNs and optical flow, namely the Flow Gated Network. The proposed approach obtains an accuracy of 87.25% on the test set of our proposed database. The database and source codes are currently open to access. | computer science |
We introduce a new graphical framework for designing quantum error correction codes based on classical principles. A key feature of this graphical language, over previous approaches, is that it is closely related to that of factor graphs or graphical models in classical information theory and machine learning. It enables us to formulate the description of the recently-introduced `coherent parity check' quantum error correction codes entirely within the language of classical information theory. This makes our construction accessible without requiring background in quantum error correction or even quantum mechanics in general. More importantly, this allows for a collaborative interplay where one can design new quantum error correction codes derived from classical codes. | quantum physics |
Sparse array arrangements have been widely used in vector-sensor arrays because of the increased degrees of freedom for identifying more sources than sensors. For large-size sparse vector-sensor arrays, one-bit measurements can further reduce the receiver system complexity by using low-resolution ADCs. In this paper, we present a sparse cross-dipole array with one-bit measurements to estimate the directions of arrival (DOAs) of electromagnetic sources. Based on the independence assumption of sources, we establish the relation between the covariance matrix of one-bit measurements and that of unquantized measurements by the Bussgang Theorem. Then we develop a Spatial-Smoothing MUSIC (SS-MUSIC) based method, One-Bit MUSIC (OB-MUSIC), to estimate the DOAs. By jointly utilizing the covariance matrices of the two dipole arrays, we find that OB-MUSIC is robust against polarization states. We also derive the Cramer-Rao bound (CRB) of DOA estimation for the proposed scheme. Furthermore, we theoretically analyze the applicability of the independence assumption of sources, which is fundamental to the proposed and other typical methods, and verify the assumption in typical communication applications. Numerical results show that, with the same number of sensors, one-bit sparse cross-dipole arrays have comparable performance with unquantized uniform linear arrays and thus provide a compromise between the DOA estimation performance and the system complexity. | electrical engineering and systems science |
WASP-18b is an ultra-hot Jupiter with a temperature difference of up to 2500 K between day and night. Such giant planets begin to emerge as planetary laboratories for understanding cloud formation and gas chemistry in well-tested parameter regimes, in order to better understand planetary mass loss and to link observed element ratios to planet formation and evolution. We aim to understand where clouds form, their interaction with the gas-phase chemistry through depletion and enrichment, the ionisation of the atmospheric gas, and the possible emergence of an ionosphere on ultra-hot Jupiters. We utilize 1D profiles from a 3D atmosphere simulation of WASP-18b as input for kinetic cloud formation and gas-phase chemical equilibrium calculations. We solve our kinetic cloud formation model for these 1D profiles, which sample the atmosphere of WASP-18b at 16 different locations along the equator and in the mid-latitudes, and derive the gas-phase composition consistently. The dayside of WASP-18b emerges as completely cloud-free due to the very high atmospheric temperatures. In contrast, the nightside is covered in geometrically extended and chemically heterogeneous clouds with disperse particle size distributions. The atmospheric C/O increases to $>0.7$ and the enrichment of the atmospheric gas with cloud particles is $\rho_{\rm d}/\rho_{\rm gas}>10^{-3}$. The clouds that form at the limbs appear located farther inside the atmosphere and are the least extended. Not all day-night terminator regions form clouds. The gas phase is dominated by H$_2$, CO, SiO, H$_2$O, H$_2$S, CH$_4$, and SiS. In addition, the dayside has a substantial degree of ionisation due to ions like Na$^+$, K$^+$, Ca$^+$, and Fe$^+$. Al$^+$ and Ti$^+$ are the most abundant of their element classes. We find that WASP-18b, as one example of ultra-hot Jupiters, develops an ionosphere on the dayside. | astrophysics |
In various topological phases, nontrivial states appear at the boundaries of the system. In this paper, we investigate anomalous dielectric response caused by such boundary states associated with the pi Zak phase. First, by using the one-dimensional Su-Schrieffer-Heeger model, we show that, when the system is insulating and the Zak phase is pi, the polarization suddenly rises to a large value close to e/2 upon application of an external electric field. The pi Zak phase indicates the existence of half-filled edge states, and we attribute this phenomenon to charge transfer between the edge states at the two ends of the system. We extend this idea to two- and three-dimensional insulators with the pi Zak phase over the Brillouin zone, and find a similar anomalous dielectric response. We also show by ab initio calculations that diamond and silicon slabs with (111) surfaces have the pi Zak phase, and show that this anomalous response survives even under surface reconstructions involving an odd number of original surface unit cells. Another material example with an anomalous dielectric response is polytetrafluoroethylene (PTFE), showing plateaus of polarization at +e in ab initio calculations, in agreement with our theory. | condensed matter |
Atmospheric neutrino experiments can show the "oscillation dip" feature in data, due to their sensitivity over a large $L/E$ range. In experiments that can distinguish between neutrinos and antineutrinos, like INO, oscillation dips can be observed in both these channels separately. We present the dip-identification algorithm employing a data-driven approach -- one that uses the asymmetry in the upward-going and downward-going events, binned in the reconstructed $L/E$ of muons -- to demonstrate the dip, which would confirm the oscillation hypothesis. We further propose, for the first time, the identification of an "oscillation valley" in the reconstructed ($E_\mu$,$\,\cos\theta_\mu$) plane, feasible for detectors like ICAL having excellent muon energy and direction resolutions. We illustrate how this two-dimensional valley would offer a clear visual representation and test of the $L/E$ dependence, the alignment of the valley quantifying the atmospheric mass-squared difference. Owing to the charge identification capability of the ICAL detector at INO, we always present our results using $\mu^{-}$ and $\mu^{+}$ events separately. Taking into account the statistical fluctuations and systematic errors, and varying oscillation parameters over their currently allowed ranges, we estimate the precision to which atmospheric neutrino oscillation parameters would be determined with the 10-year simulated data at ICAL using our procedure. | high energy physics phenomenology |
One-dimensional quantum systems admit duality relations that put hard core spinless bosons and fermions in one-to-one correspondence via Girardeau's mapping theorem. The simplest models of soft bosons interacting via zero-range potentials can also be mapped onto dual interacting fermions. However, a systematic approach to one-dimensional statistical transmutation for arbitrary low-energy interactions in the spinless and spinful or multicomponent cases has remained elusive. I develop a general theory of local unitary transformations between one-dimensional quantum systems of bosons and fermions with arbitrary spin or internal structure, single-particle dispersion -- including non-relativistic, relativistic or otherwise -- and low-energy interactions in the universal regime. These transformations generate families of new duality relations and models that relate the strong and weak coupling limits of the respective dual theories. | condensed matter |
We illustrate the extraordinary discovery potential for extragalactic astrophysics of a far-IR/submm all-sky spectroscopic survey with a 3m-class space telescope. Spectroscopy provides both a 3D view of the Universe and allows us to take full advantage of the sensitivity of present-day instrumentation, overcoming the spatial confusion that affects broadband far-IR/submm surveys. Emission lines powered by star formation will be detected in galaxies out to $z \simeq 8$. It will provide measurements of spectroscopic redshifts, SFRs, dust masses, and metal content for millions of galaxies at the peak epoch of cosmic star formation and of hundreds of them at the epoch of reionization. Many of these galaxies will be strongly lensed; the brightness amplification and stretching of their sizes will make it possible to investigate (by means of follow-up with high-resolution instruments) their internal structure and dynamics on the scales of giant molecular clouds. This will provide direct information on the physics driving the evolution. Furthermore, the arc-min resolution of the telescope at submm wavelengths is ideal for detecting the cores of galaxy proto-clusters, out to the epoch of reionization. Tens of millions of these galaxy-clusters-in-formation will be detected at $z \simeq 2$-3, with a tail out to $z \simeq 7$, and thousands of detections at 6 < z < 7. Their study will allow us to track the growth of the most massive halos well beyond what is possible with classical cluster surveys (mostly limited to $z < 1.5$-2), tracing the history of star formation in dense environments and teaching us how star formation and galaxy-cluster formation are related across all epochs. Such a survey will overcome the current lack of spectroscopic redshifts of dusty star-forming galaxies and galaxy proto-clusters, representing a quantum leap in far-IR/submm extragalactic astrophysics. | astrophysics |
We study weakly interacting mixtures of ultracold atoms composed of bosonic and fermionic species in 2D and 1D. When interactions between particles are appropriately tuned, self-bound quantum liquids can be formed. We show that while formation of these droplets in 2D is due to the higher order correction terms contributing to the total energy and originating in quantum fluctuations, in 1D geometry the quantum fluctuations have a negligible role on formation of the self-bound systems. The leading mean-field interactions are then sufficient for droplet formation in 1D. We analyse stability conditions for 2D and 1D systems and predict values of equilibrium densities of droplets. | condensed matter |
We obtain an explicit H\"older regularity result for viscosity solutions of a class of second order fully nonlinear equations governed by operators that are neither convex/concave nor uniformly elliptic. | mathematics
Automated medical image segmentation is an important step in many medical procedures. Recently, deep learning networks have been widely used for various medical image segmentation tasks, with U-Net and generative adversarial nets (GANs) being some of the commonly used ones. Foreground-background class imbalance is a common occurrence in medical images, and U-Net has difficulty in handling class imbalance because of its cross entropy (CE) objective function. Similarly, GAN also suffers from class imbalance because the discriminator looks at the entire image to classify it as real or fake. Since the discriminator is essentially a deep learning classifier, it is incapable of correctly identifying minor changes in small structures. To address these issues, we propose a novel context-based CE loss function for U-Net, and a novel architecture Seg-GLGAN. The context-based CE is a linear combination of CE obtained over the entire image and its region of interest (ROI). In Seg-GLGAN, we introduce a novel context discriminator to which the entire image and its ROI are fed as input, thus enforcing local context. We conduct extensive experiments using two challenging unbalanced datasets: PROMISE12 and ACDC. We observe that segmentation results obtained from our methods yield better segmentation metrics than various baseline methods. | electrical engineering and systems science
In the present work we use modified versions of the MIT bag model, in which both a vector field and a self-interacting term are introduced, to obtain hot quark matter and to investigate the QCD phase diagram. We first analyze two-flavored quark matter constrained to both the freeze-out and the liquid-gas phase transition at the hadronic phase. Later, three-flavored quark matter subject to $\beta$ equilibrium and charge neutrality is used to compute quark star macroscopic properties, which are confronted with recent observational massive and canonical star radius results. Finally, a comparison with QCD phase diagrams obtained from the Nambu-Jona-Lasinio model is performed. | high energy physics phenomenology
We prove joint universality theorems on the half plane of absolute convergence for general classes of Dirichlet series with an Euler-product, where in addition to vertical shifts we also allow scaling. This generalizes our recent joint universality results for Dirichlet $L$-functions. In contrast to classical universality, we do not need that the Dirichlet series in question have an analytic continuation beyond their region of absolute convergence. Also we may allow weaker orthogonality conditions for pairs of Dirichlet series than in the previous joint universality results of Lee-Nakamura-Pa\'{n}kowski. We take care to avoid using the Ramanujan conjecture in our proof and hence as a consequence of our universality theorem, we obtain stronger results on zeros of linear combinations of $L$-functions in the half plane of absolute convergence than previous results of Booker-Thorne and Righetti. For example as a consequence of our main universality result we have that certain linear combinations of Hecke $L$-series coming from Maass wave forms have infinitely many zeros in any strip $1<$Re$(s)<1+\delta$. | mathematics |
A high-fidelity multi-physics Eulerian computational framework is presented for the simulation of supersonic parachute inflation during Mars landing. Unlike previous investigations in this area, the framework takes into account an initial folding pattern of the parachute, the flow compressibility effect on the fabric material porosity, and the interactions between supersonic fluid flows and the suspension lines. Several adaptive mesh refinement (AMR)-enabled, large eddy simulation (LES)-based simulations of a full-size disk-gap-band (DGB) parachute inflating in the low-density, low-pressure, carbon dioxide (CO2) Martian atmosphere are reported. The comparison of the drag histories and the first peak forces between the simulation results and experimental data collected during the NASA Curiosity Rover's Mars atmospheric entry shows reasonable agreement. Furthermore, a rudimentary material failure analysis is performed to provide an estimate of the safety factor for the parachute decelerator system. The proposed framework demonstrates the potential of using Computational Fluid Dynamics (CFD) and Fluid-Structure Interaction (FSI)-based simulation tools for future supersonic parachute design. | computer science
Metal pollution in white dwarf photospheres originates from the accretion of some combination of planets, moons, asteroids, comets, boulders, pebbles and dust. When large bodies reside in dynamically stagnant locations -- unable either to pollute or even to closely approach the white dwarf -- then smaller reservoirs of impact debris may become a complementary or the primary source of metal pollutants. Here, we take a first step towards exploring this possibility by computing limits on the recoil mass that escapes the gravitational pull of the target object following a single impact onto an atmosphere-less surface. By considering vertical impacts only with the full-chain analytical prescription from Kurosawa & Takada (2019), we provide lower bounds for the ejected mass for basalt, granite, iron and water-rich target objects across the radius range $10^{0}$-$10^{3}$ km. Our use of the full-chain prescription as opposed to physical experiments or hydrocode simulations allows us to quickly sample a wide range of parameter space appropriate to white dwarf planetary systems. Our numerical results could be used in future studies to constrain freshly-generated small debris reservoirs around white dwarfs given a particular planetary system architecture, bombardment history, and impact geometries. | astrophysics
It is widely believed and in part established that exact global symmetries are inconsistent with quantum gravity. One then expects that approximate global symmetries can be quantitatively constrained by quantum gravity or swampland arguments. We provide such a bound for an important class of global symmetries: Those arising from a gauged $U(1)$ with the vector made massive via Higgsing with an axion. The latter necessarily couples to instantons, and their action can be constrained, using both the electric and magnetic version of the axionic weak gravity conjecture, in terms of the cutoff of the theory. As a result, instanton-induced symmetry breaking operators with a suppression factor not smaller than $\exp(-M_{\rm P}^2/\Lambda^2)$ are present, where $\Lambda$ is a cutoff of the 4d effective theory. We provide a general argument and clarify the meaning of $\Lambda$. Simple 4d and 5d models are presented to illustrate this, and we recall that this is the standard way in which things work out in string compactifications with brane instantons. The relation of our constraint to bounds that can be derived from wormholes or gravitational instantons and to those motivated by black-hole effects at finite temperature are discussed, and we present a generalization of the Giddings-Strominger wormhole solution to the case of a gauge-derived $U(1)$ global symmetry. Finally, we discuss potential loopholes to our arguments. | high energy physics theory |
Global scale quantum communication links will form the backbone of the quantum internet. However, exponential loss in optical fibres precludes any realistic application beyond a few hundred kilometres. Quantum repeaters and space-based systems offer a means to overcome this limitation. Here, we analyse the use of quantum memory (QM)-equipped satellites for quantum communication focussing on global range repeaters and Measurement-Device-Independent (MDI) QKD. We demonstrate that satellites equipped with QMs provide three orders of magnitude faster entanglement distribution rates than existing protocols based on fibre-based repeaters or space systems without QMs. We analyse how entanglement distribution performance depends on memory characteristics, determine benchmarks to assess performance of different tasks, and propose various architectures for light-matter interfaces. Our work provides a practical roadmap to realise unconditionally secure quantum communications over global distances with current technologies. | quantum physics
Some spectral data analysis methods that are useful for the two-dimensional imaging diagnostics data are introduced. It is shown that the frequency spectrum, the local dispersion relation, the flow shear, and the nonlinear energy transfer rates can be estimated using the proper analysis methods. | physics |
The tree reconstruction problem is to find an embedded straight-line tree that approximates a given cloud of unorganized points in $\mathbb{R}^m$ up to a certain error. A practical solution to this problem would accelerate the discovery of new colloidal products with desired physical properties such as viscosity. We define the Approximate Skeleton of any finite point cloud $C$ in a Euclidean space with theoretical guarantees. The Approximate Skeleton ASk$(C)$ always belongs to a given offset of $C$, i.e. the maximum distance from $C$ to ASk$(C)$ is bounded by a given maximum error. The number of vertices in the Approximate Skeleton is within a factor of 2 of the minimum number in an optimal tree. The new Approximate Skeleton of any unorganized point cloud $C$ is computed in near-linear time in the number of points in $C$. Finally, the Approximate Skeleton outperforms past skeletonization algorithms on the size and accuracy of reconstruction for a large dataset of real micelles and random clouds. | computer science
The growing sample of LIGO-Virgo black holes (BHs) opens new perspectives for the study of massive binary evolution. Here, we study the impact of mass accretion efficiency on the properties of binary BH (BBH) mergers, by means of population synthesis simulations. We model mass accretion efficiency with the parameter $f_{\rm MT}\in[0.05,1]$, which represents the fraction of mass lost from the donor which is effectively accreted by the companion. Lower values of $f_{\rm MT}$ result in lower BBH merger rate densities and produce mass spectra skewed towards lower BH masses. Our hierarchical Bayesian analysis, applied to BBH mergers in the first and second observing run of LIGO-Virgo, yields almost zero support for values of $f_{\rm MT}\le{}0.3$. This result holds for all the values of the common-envelope efficiency parameter we considered in this study ($\alpha_{\rm CE}=1,$ 5 and 10). The lower boundaries of the 95% credible intervals are equal to $f_{\rm MT}= 0.40,0.45$ and 0.48 for $\alpha_{\rm CE}= 1,$ 5 and 10, respectively. This confirms that future gravitational-wave data can be used to put constraints on several uncertain binary evolution processes. | astrophysics |
Assume the existence of a Fukaya category $\mathrm{Fuk}(X)$ of a compact symplectic manifold $X$ with some expected properties. In this paper, we show $\mathscr{A} \subset \mathrm{Fuk}(X)$ split generates a summand $\mathrm{Fuk}(X)_e \subset \mathrm{Fuk}(X)$ corresponding to an idempotent $e \in QH^\bullet(X)$ if the Mukai pairing of $\mathscr{A}$ is perfect. Moreover we show $HH^\bullet(\mathscr{A}) \cong QH^\bullet(X) e$. As an application we compute the quantum cohomology and the Fukaya category of a blow-up of $\mathbb{C} P^2$ at four points with a monotone symplectic structure. | mathematics |
Aims. We analyze the autocorrelation function of a large contiguous sample of galaxy clusters, the Constrain Dark Energy with X-ray (CODEX) sample, in which we take particular care of cluster definition. These clusters were X-ray selected using the RASS survey and then identified as galaxy clusters using the code redMaPPer run on the photometry of the SDSS. We develop methods for precisely accounting for the sample selection effects on the clustering and demonstrate their robustness using numerical simulations. Methods. Using the clean CODEX sample, which was obtained by applying a redshift-dependent richness selection, we computed the two-point autocorrelation function of galaxy clusters in the $0.1<z<0.3$ and $0.3<z<0.5$ redshift bins. We compared the bias in the measured correlation function with values obtained in numerical simulations using a similar cluster mass range. Results. By fitting a power law, we measured a correlation length $r_0=18.7 \pm 1.1$ and slope $\gamma=1.98 \pm 0.14$ for the correlation function in the full redshift range. By fixing the other cosmological parameters to their WMAP9 values, we reproduced the observed shape of the correlation function under the following cosmological conditions: $\Omega_{m_0}=0.22^{+0.04}_{-0.03}$ and $S_8=\sigma_8 (\Omega_{m_0} /0.3)^{0.5}=0.85^{+0.10}_{-0.08}$ with estimated additional systematic errors of $\sigma_{\Omega_{m_0}} = 0.02$ and $\sigma_{S_8} = 0.20$. We illustrate the complementarity of clustering constraints by combining them with CODEX cosmological constraints based on the X-ray luminosity function, deriving $\Omega_{m_0} = 0.25 \pm 0.01$ and $\sigma_8 = 0.81^{+0.01}_{-0.02}$ with an estimated additional systematic error of $\sigma_{\Omega_{m_0}} = 0.07$ and $\sigma_{\sigma_8} = 0.04$. The mass calibration and statistical quality of the mass tracers are the dominant source of uncertainty. | astrophysics |
In this work, we study the chemical compositions and kinematic properties of six metal-poor stars with [Fe/H] $< -2.5$ in the Galactic halo. From high-resolution (R $\sim$ 110,000) spectroscopic observations obtained with the Lick/APF, we determined individual abundances for up to 23 elements, to quantitatively evaluate our sample. We identify two carbon-enhanced metal-poor stars (J1630+0953 and J2216+0246) without enhancement in neutron-capture elements (CEMP-no stars), while the rest of our sample stars are carbon-intermediate. By comparing the light-element abundances of the CEMP stars with predicted yields from non-rotating zero-metallicity massive-star models, we find that the possible progenitors of J1630+0953 and J2216+0246 could be in the 13-25 M$_{\odot}$ mass range, with explosion energies of 0.3-1.8$ \times 10^{51}$ erg. In addition, the detectable abundance ratios of light and heavy elements suggest that our sample stars likely formed from a well-mixed gas cloud, which is consistent with previous studies. We also present a kinematic analysis, which suggests that most of our program stars likely belong to the inner-halo population, with orbits passing as close as $\sim$ 2.9 kpc from the Galactic center. We discuss the implications of these results for constraints on the origin and evolution of CEMP stars, as well as the nature of the Population III progenitors of the lowest metallicity stars in our Galaxy. | astrophysics
This paper is concerned with the long time behavior of Langevin dynamics of {\em Coulomb gases} in $\mathbf{R}^d$ with $d\geq 2$, that is a second order system of Brownian particles driven by an external force and a pairwise repulsive Coulomb force. We prove that the system converges exponentially to the unique Boltzmann-Gibbs invariant measure under a weighted total variation distance. The proof relies on a novel construction of Lyapunov function for the Coulomb system. | mathematics |
We explore the OPE of certain twist operators in symmetric product ($S_N$) orbifold CFTs, extending our previous work arXiv:1804.01562 to the case of $\mathcal{N}=(4,4)$ supersymmetry. We consider a class of twist operators related to the chiral primaries by spectral flow parallel to the twist. We conjecture that at large $N$, the OPE of two such operators contains only fields in this class, along with excitations by fractional modes of the superconformal currents. We provide evidence for this by studying the coincidence limits of two 4-point functions to several non-trivial orders. We show how the fractional excitations of the twist operators in our restricted class fully reproduce the crossing channels appearing in the coincidence limits of the 4-point functions. | high energy physics theory |
This work deals with the problem of simultaneous regulation and model parameter estimation in adaptive model predictive control. We propose an adaptive model predictive control scheme and conditions which guarantee a persistently exciting closed-loop sequence by only looking forward in time into the receding prediction horizon. Earlier works needed to look backwards and preserve prior regressor data. Instead, we present a procedure for the offline generation of a persistently exciting reference trajectory perturbing the equilibrium. With the new approach we demonstrate exponential convergence of nonlinear systems under the influence of the adaptive model predictive control combined with a recursive least squares identifier with forgetting factor, despite bounded noise. The results are, at this stage, local in state and parameter-estimate space. | electrical engineering and systems science
Let $\Lambda,\Gamma$ be rings and $R=\left(\begin{array}{cc}\Lambda & 0 \\ M & \Gamma\end{array}\right)$ the triangular matrix ring with $M$ a $(\Gamma,\Lambda)$-bimodule. Let $X$ be a right $\Lambda$-module and $Y$ a right $\Gamma$-module. We prove that $(X, 0)\oplus(Y\otimes_\Gamma M, Y)$ is a silting right $R$-module if and only if both $X_{\Lambda}$ and $Y_{\Gamma}$ are silting modules and $Y\otimes_\Gamma M$ is generated by $X$. Furthermore, we prove that if $\Lambda$ and $\Gamma$ are finite dimensional algebras over an algebraically closed field and $X_{\Lambda}$ and $Y_{\Gamma}$ are finitely generated, then $(X, 0)\oplus(Y\otimes_\Gamma M, Y)$ is a support $\tau$-tilting $R$-module if and only if both $X_{\Lambda}$ and $Y_{\Gamma}$ are support $\tau$-tilting modules, $\mathrm{Hom}_\Lambda(Y\otimes_\Gamma M,\tau X)=0$ and $\mathrm{Hom}_\Lambda(e\Lambda, Y\otimes_\Gamma M)=0$ with $e$ the maximal idempotent such that $\mathrm{Hom}_\Lambda(e\Lambda, X)=0$. | mathematics
Gravitational interactions between a protoplanetary disk and its embedded planet are one of the formation mechanisms of gaps and rings found in recent ALMA observations. To quantify the gap properties measured in not only surface density but also rotational velocity profiles, we run two-dimensional hydrodynamic simulations of protoplanetary disks by varying three parameters: the mass ratio $q$ of a planet to a central star, the ratio of the disk scale height $h_p$ to the orbital radius $r_p$ of the planet, and the viscosity parameter $\alpha$. We find the gap depth $\delta_\Sigma$ in the gas surface density depends on a single dimensionless parameter $K\equiv q^2(h_p/r_p)^{-5}\alpha^{-1}$ as $\delta_\Sigma = (1 + 0.046K)^{-1}$, consistent with the previous results of Kanagawa et al. (2015a). The gap depth $\delta_V$ in the rotational velocity is given by $\delta_V= 0.007 (h_p/r_p) K^{1.38}/(1 +0.06K^{1.03})$. The gap width, in both surface density and rotational velocity, has a minimum of about $4.7 h_p$ when the planet mass $M_p$ is around the disk thermal mass $M_{\text{th}}$, while it increases in a power-law fashion as $M_p/M_{\text{th}}$ increases or decreases from unity. Such a minimum in the gap width arises because spirals from sub-thermal planets have to propagate before they shock the disk gas and open a gap. We compare our relations for the gap depth and width with the previous results, and discuss their applicability to observations. | astrophysics
A starshade suppresses starlight by a factor of $10^{11}$ in the image plane of a telescope, which is crucial for directly imaging Earth-like exoplanets. State-of-the-art high-contrast post-processing and signal detection methods were developed specifically for images taken with an internal coronagraph system and focus on the removal of quasi-static speckles. These methods are less useful for starshade images where such speckles are not present. This paper is dedicated to investigating signal processing methods tailored to work efficiently on starshade images. We describe a signal detection method, the generalized likelihood ratio test (GLRT), for starshade missions and look into three important problems. First, even with the light suppression provided by the starshade, rocky exoplanets are still difficult to detect in reflected light due to their absolute faintness. GLRT can successfully flag these dim planets. Moreover, GLRT provides estimates of the planets' positions and intensities and the theoretical false alarm rate of the detection. Second, small starshade shape errors, such as a truncated petal tip, can cause artifacts that are hard to distinguish from real planet signals; the detection method can help distinguish planet signals from such artifacts. The third direct imaging problem is that exozodiacal dust degrades detection performance. We develop an iterative generalized likelihood ratio test to mitigate the effect of dust on the image. In addition, we provide guidance on how to choose the number of photon counting images to combine into one co-added image before doing detection, which will help utilize the observation time efficiently. All the methods are demonstrated on realistic simulated images. | astrophysics
In this paper, we compute the (total) Milnor-Witt motivic cohomology of the complement of a hyperplane arrangement in an affine space. | mathematics |
The underlying hypothesis of this work is that active galactic nuclei (AGNs) are wormhole mouths rather than supermassive black holes (SMBHs). Under some quite general assumptions, such wormholes may emit gamma radiation as a result of a collision of accreting flows inside the wormholes. This radiation has a distinctive spectrum much different from those of jets or accretion disks of AGNs. An observation of such radiation would serve as evidence of the existence of wormholes. | astrophysics
In this paper we extend the setting of online prediction with expert advice to function-valued forecasts. At each step of the online game several experts predict a function, and the learner has to efficiently aggregate these functional forecasts into a single forecast. We adapt basic mixable (and exponentially concave) loss functions to compare functional predictions and prove that these adaptations are also mixable (exp-concave). We call this phenomenon integral mixability (exp-concavity). As an application of our main result, we prove that various loss functions used for probabilistic forecasting are mixable (exp-concave). The considered losses include Sliced Continuous Ranking Probability Score, Energy-Based Distance, Optimal Transport Costs & Sliced Wasserstein-2 distance, Beta-2 & Kullback-Leibler divergences, Characteristic function and Maximum Mean Discrepancies. | computer science
The open string field theory of Witten (SFT) has a close formal similarity with Chern-Simons theory in three dimensions. This similarity is due to the fact that the former theory has concepts corresponding to forms, exterior derivative, wedge product and integration over the manifold. In this paper, we introduce the interior product and the Lie derivative in the $KBc$ subsector of SFT. The interior product in SFT is specified by a two-component "tangent vector" and lowers the ghost number by one (like the ordinary interior product maps a $p$-form to $(p-1)$-form). The Lie derivative in SFT is defined as the anti-commutator of the interior product and the BRST operator. The important property of these two operations is that they respect the $KBc$ algebra. Deforming the original $(K,B,c)$ by using the Lie derivative, we can consider an infinite copies of the $KBc$ algebra, which we call the $KBc$ manifold. As an application, we construct the Wilson line on the manifold, which could play a role in reproducing degenerate fluctuation modes around a multi-brane solution. | high energy physics theory |
We investigate the regularity of the free boundary for the Signorini problem in $\mathbb{R}^{n+1}$. It is known that regular points are $(n-1)$-dimensional and $C^\infty$. However, even for $C^\infty$ obstacles $\varphi$, the set of non-regular (or degenerate) points could be very large, e.g. with infinite $\mathcal{H}^{n-1}$ measure. The only two assumptions under which a nice structure result for degenerate points has been established are: when $\varphi$ is analytic, and when $\Delta\varphi < 0$. However, even in these cases, the set of degenerate points is in general $(n-1)$-dimensional (as large as the set of regular points). In this work, we show for the first time that, "usually", the set of degenerate points is small. Namely, we prove that, given any $C^\infty$ obstacle, for "almost every" solution the non-regular part of the free boundary is at most $(n-2)$-dimensional. This is the first result in this direction for the Signorini problem. Furthermore, we prove analogous results for the obstacle problem for the fractional Laplacian $(-\Delta)^s$, and for the parabolic Signorini problem. In the parabolic Signorini problem, our main result establishes that the non-regular part of the free boundary is $(n-1-\alpha_\circ)$-dimensional for almost all times $t$, for some $\alpha_\circ > 0$. Finally, we construct some new examples of free boundaries with degenerate points. | mathematics |
The excess in electron recoil events reported recently by the XENON1T experiment may be interpreted as evidence for a sizable transition magnetic moment $\mu_{\nu_e\nu_\mu}$ of Majorana neutrinos. We show the consistency of this scenario when a single component transition magnetic moment takes values $\mu_{\nu_e\nu_\mu} \in(1.65 - 3.42) \times 10^{-11} \mu_B$. Such a large value typically leads to unacceptably large neutrino masses. In this paper we show that new leptonic symmetries can solve this problem and demonstrate this with several examples. We first revive and then propose a simplified model based on $SU(2)_H$ horizontal symmetry. Owing to the difference in their Lorentz structures, in the $SU(2)_H$ symmetric limit, $m_\nu$ vanishes while $\mu_{\nu_e\nu_\mu}$ is nonzero. Our simplified model is based on an approximate $SU(2)_H$, which we also generalize to a three family $SU(3)_H$-symmetry. Collider and low energy tests of these models are analyzed. We have also analyzed implications of the XENON1T data for the Zee model and its extensions which naturally generate a large $\mu_{\nu_e\nu_\mu}$ with suppressed $m_\nu$ via a spin symmetry mechanism, but found that the induced $\mu_{\nu_e\nu_\mu}$ is not large enough to explain recent data. Finally, we suggest a mechanism to evade stringent astrophysical limits on neutrino magnetic moments arising from stellar evolution by inducing a medium-dependent mass for the neutrino. | high energy physics phenomenology |
Spectral functions at finite temperature and two-loop order are investigated for a medium consisting of massless particles. We consider them in the timelike and spacelike domains, allowing the propagating particles to be any valid combination of bosons and fermions. Divergences (if present) are analytically derived and set aside for the remaining finite part to be calculated numerically. To illustrate the utility of these 'master' functions, we consider transverse and longitudinal parts of the QCD vector channel spectral function. | high energy physics phenomenology
Multisite trials, in which treatment is randomized separately in multiple sites, offer a unique opportunity to disentangle treatment effect variation due to "compositional" differences in the distributions of unit-level features from variation due to "contextual" differences in site-level features. In particular, if we can re-weight (or "transport") each site to have a common distribution of unit-level covariates, the remaining effect variation captures contextual differences across sites. In this paper, we develop a framework for transporting effects in multisite trials using approximate balancing weights, where the weights are chosen to directly optimize unit-level covariate balance between each site and the target distribution. We first develop our approach for the general setting of transporting the effect of a single-site trial. We then extend our method to multisite trials, assess its performance via simulation, and use it to analyze a series of multisite trials of welfare-to-work programs. Our method is available in the balancer R package. | statistics |
Motivated by the possibility of an intermediate U(1) quantum spin liquid phase in out-of-plane magnetic fields and enhanced magnetic fluctuations in exfoliated $\alpha$-RuCl$_3$ flakes, we study magneto-Raman spectra of exfoliated multilayer $\alpha$-RuCl$_3$ in out-of-plane magnetic fields of -6 T to 6 T at temperatures of 670 mK - 4 K. While the literature currently suggests that bulk $\alpha$-RuCl$_3$ is in an antiferromagnetic zigzag phase with $R\bar{3}$ symmetry at low temperature, we do not observe $R\bar{3}$ symmetry in exfoliated $\alpha$-RuCl$_3$ at low temperatures. While we saw no magnetic-field-driven transitions, the Raman modes exhibit unexpected stochastic shifts in response to applied magnetic field that are above the uncertainties inferred from Bayesian analysis. These stochastic shifts are consistent with the emergence of magnetostrictive interactions in exfoliated $\alpha$-RuCl$_3$. | condensed matter
High energy cosmic rays reach the surface of the Sun and start showers with thousands of secondary particles. Most of them will be absorbed by the Sun, but a fraction of the neutral ones will escape and reach the Earth. Here we incorporate a new ingredient that is essential to understand the flux of these solar particles: the cosmic ray shadow of the Sun. We use Liouville's theorem to argue that the only effect of the solar magnetic field on the isotropic cosmic ray flux is to interrupt some of the trajectories that were aiming to the Earth and create a shadow. This shadow reveals the average solar depth crossed by cosmic rays of a given rigidity. The absorbed cosmic ray flux is then processed in the thin Solar surface and, assuming that the emission of neutral particles by low-energy charged particles is isotropic, we obtain (i) a flux of gammas that is consistent with Fermi-LAT observations, (ii) a flux of 100-300 neutrons/(year m^2) produced basically in the spallation of primary He nuclei, and (iii) a neutrino flux that is above the atmospheric neutrino background at energies above 0.1-0.6 TeV (depending on the solar phase and the zenith inclination). More precise measurements of the cosmic ray shadow and of the solar gamma flux, together with the possible discovery of the neutron and neutrino signals, would provide valuable information about the magnetic field, the cycle, and the interior of the Sun. | high energy physics phenomenology |
The Hilbert space $\mathcal H$ of backward renormalisation of an anyonic quantum spin chain affords a unitary representation of Thompson's group $F$ via local scale transformations. Given a vector in the canonical dense subspace of $\mathcal H$ we show how to calculate the corresponding spectral measure for any element of $F$ and illustrate with some examples. Introducing the "essential part" of an element we show that the spectral measure of any vector in $\mathcal H$ is, apart from possibly finitely many eigenvalues, absolutely continuous with respect to Lebesgue measure. The same considerations and results hold for the Brown-Thompson groups $F_n$ (for which $F=F_2$). | mathematics |
For every constant c > 0, we show that there is a family {P_{N, c}} of polynomials whose degree and algebraic circuit complexity are polynomially bounded in the number of variables, that satisfies the following properties: * For every family {f_n} of polynomials in VP, where f_n is an n variate polynomial of degree at most n^c with bounded integer coefficients and for N = \binom{n^c + n}{n}, P_{N,c} \emph{vanishes} on the coefficient vector of f_n. * There exists a family {h_n} of polynomials where h_n is an n variate polynomial of degree at most n^c with bounded integer coefficients such that for N = \binom{n^c + n}{n}, P_{N,c} \emph{does not vanish} on the coefficient vector of h_n. In other words, there are efficiently computable equations for polynomials in VP that have small integer coefficients. In fact, we also prove an analogous statement for the seemingly larger class VNP. Thus, in this setting of polynomials with small integer coefficients, this provides evidence \emph{against} a natural proof like barrier for proving algebraic circuit lower bounds, a framework for which was proposed in the works of Forbes, Shpilka and Volk (2018), and Grochow, Kumar, Saks and Saraf (2017). Our proofs are elementary and rely on the existence of (non-explicit) hitting sets for VP (and VNP) to show that there are efficiently constructible, low degree equations for these classes. Our proofs also extend to finite fields of small size. | computer science |
Deep hashing has recently received attention in cross-modal retrieval for its impressive advantages. However, existing hashing methods for cross-modal retrieval cannot fully capture the heterogeneous multi-modal correlation and exploit the semantic information. In this paper, we propose a novel \emph{Fusion-supervised Deep Cross-modal Hashing} (FDCH) approach. Firstly, FDCH learns unified binary codes through a fusion hash network with paired samples as input, which effectively enhances the modeling of the correlation of heterogeneous multi-modal data. Then, these high-quality unified hash codes further supervise the training of the modality-specific hash networks for encoding out-of-sample queries. Meanwhile, both pair-wise similarity information and classification information are embedded in the hash networks under one stream framework, which simultaneously preserves cross-modal similarity and keeps semantic consistency. Experimental results on two benchmark datasets demonstrate the state-of-the-art performance of FDCH. | computer science |
We present the first catalog of gamma-ray sources emitting above 56 and 100 TeV with data from the High Altitude Water Cherenkov (HAWC) Observatory, a wide field-of-view observatory capable of detecting gamma rays up to a few hundred TeV. Nine sources are observed above 56 TeV, all of which are likely Galactic in origin. Three sources continue emitting past 100 TeV, making this the highest-energy gamma-ray source catalog to date. We report the integral flux of each of these objects. We also report spectra for the three highest-energy sources and discuss the possibility that they are PeVatrons. | astrophysics |
Data assimilation combines forecasts from a numerical model with observations. Most of the current data assimilation algorithms consider the model and observation error terms as additive Gaussian noise, specified by their covariance matrices Q and R, respectively. These error covariances, and specifically their respective amplitudes, determine the weights given to the background (i.e., the model forecasts) and to the observations in the solution of data assimilation algorithms (i.e., the analysis). Consequently, Q and R matrices significantly impact the accuracy of the analysis. This review aims to present and to discuss, with a unified framework, different methods to jointly estimate the Q and R matrices using ensemble-based data assimilation techniques. Most of the methodologies developed to date use the innovations, defined as differences between the observations and the projection of the forecasts onto the observation space. These methodologies are based on two main statistical criteria: (i) the method of moments, in which the theoretical and empirical moments of the innovations are assumed to be equal, and (ii) methods that use the likelihood of the observations, themselves contained in the innovations. The reviewed methods assume that innovations are Gaussian random variables, although extension to other distributions is possible for likelihood-based methods. The methods also show some differences in terms of levels of complexity and applicability to high-dimensional systems. The conclusion of the review discusses the key challenges to further develop estimation methods for Q and R. These challenges include taking into account time-varying error covariances, using limited observational coverage, estimating additional deterministic error terms, or accounting for correlated noises. | statistics |
We describe the physics of fermionic Lifschitz theories once the anisotropic scaling exponent is made arbitrarily small. In this limit the system acquires an enhanced (Carrollian) boost symmetry. We show, both through the explicit computation of the path integral Jacobian and through the solution of the Wess-Zumino consistency conditions, that the translation symmetry in the anisotropic direction becomes anomalous. This turns out to be a mixed anomaly between boosts and translations. In a Newton-Cartan formulation of the space-time geometry such anomaly is sourced by torsion. We use these results to give an effective field theory description of the anomalous transport coefficients, which were originally computed through Kubo formulas in [1]. Along the way we provide a link with warped CFTs. | high energy physics theory |
We study the phenomenology of light scalars of masses $m_1$ and $m_2$ coupling to heavy flavour-violating vector bosons of mass $m_V$. For $m_{1,2}\lesssim $ few GeV, this scenario triggers the rare $B$ meson decays $B_s^0\to 3\mu^+ 3\mu^-$, $B^0\to 3\mu^+ 3\mu^-$, $B^+\to K^+ 3\mu^+ 3\mu^-$ and $B_s^0\to K^{0*} 3\mu^+ 3\mu^-$; the last two being the most important ones for $m_1\sim m_2$. None of these signals has been studied experimentally; therefore we propose analyses to test these channels at the LHCb. We demonstrate that the reach of this facility extends to branching ratios as small as $6.0\times 10^{-9}$, $1.6\times 10^{-9}$, $5.9\times 10^{-9}$ and $1.8\times 10^{-8}$ for the aforementioned channels, respectively. For $m_{1,2}\gg \mathcal{O}(1)$ GeV, we show that slightly modified versions of current multilepton and multitau searches at the LHC can probe wide regions of the parameter space of this scenario. Altogether, the potential of the searches we propose outperforms other constraints such as those from meson mixing. | high energy physics phenomenology |
This paper is concerned with an inverse random source problem for the one-dimensional stochastic Helmholtz equation with attenuation. The source is assumed to be a microlocally isotropic Gaussian random field with its covariance operator being a classical pseudo-differential operator. The random sources under consideration are equivalent to the generalized fractional Gaussian random fields which include rough fields and can be even rougher than the white noise, and hence should be interpreted as distributions. The well-posedness of the direct source scattering problem is established in the distribution sense. The micro-correlation strength of the random source, which appears to be the strength in the principal symbol of the covariance operator, is proved to be uniquely determined by the wave field in an open measurement set. Numerical experiments are presented for the white noise model to demonstrate the validity and effectiveness of the proposed method. | mathematics |
This paper presents an efficient multi-fidelity Bayesian optimization approach for analog circuit synthesis. The proposed method can significantly reduce the overall computational cost by fusing the simple but potentially inaccurate low-fidelity model and a few accurate but expensive high-fidelity data. Gaussian Process (GP) models are employed to model the low- and high-fidelity black-box functions separately. The nonlinear map between the low-fidelity model and high-fidelity model is also modelled as a Gaussian process. A fusing GP model which combines the low- and high-fidelity models can thus be built. An acquisition function based on the fusing GP model is used to balance the exploitation and exploration. The fusing GP model is evolved gradually as new data points are selected sequentially by maximizing the acquisition function. Experimental results show that our proposed method reduces up to 65.5\% of the simulation time compared with the state-of-the-art single-fidelity Bayesian optimization method, while exhibiting more stable performance and a more promising practical prospect. | electrical engineering and systems science |
Here we have studied the notion of rough $I$-convergence as an extension of the idea of rough convergence in a cone metric space using ideals. We have further introduced the notion of rough $I^*$-convergence of sequences in a cone metric space to find the relationship between rough $I$ and $I^*$-convergence of sequences. | mathematics |
The Kepler, K2 and TESS transit surveys are revolutionizing our understanding of planets orbiting close to their host stars and our understanding of exoplanet systems in general, but there remains a gap in our understanding of wide-orbit planets. This gap in our understanding must be filled if we are to understand planet formation and how it affects exoplanet habitability. We summarize current and planned exoplanet detection programs using a variety of methods: microlensing (including WFIRST), radial velocities, Gaia astrometry, and direct imaging. Finally, we discuss the prospects for joint analyses using results from multiple methods and obstacles that could hinder such analyses. We endorse the findings and recommendations published in the 2018 National Academy report on Exoplanet Science Strategy. This white paper extends and complements the material presented therein. | astrophysics |
Polar codes have attracted much attention in the past decade due to their capacity-achieving performance. Higher decoding capacity is required for 5G and beyond 5G (B5G). Although cyclic redundancy check (CRC)-assisted successive cancellation list bit-flipping (CA-SCLF) decoders have been developed to obtain better performance, the solution to the error bit correction (bit-flipping) problem is still imperfect and hard to design. In this work, we leverage expert knowledge in communication systems and adopt deep learning (DL) techniques to obtain a better solution. A low-complexity long short-term memory network (LSTM)-assisted CA-SCLF decoder is proposed to further improve the performance of the conventional CA-SCLF decoder and avoid complexity and memory overhead. Our test results show that we can effectively improve the BLER performance by 0.11 dB compared to prior work and reduce the complexity and memory overhead of the network by over 30%. | electrical engineering and systems science |
We highlight the importance of eclipsing double-line binaries in our understanding of star formation and evolution. We review the recent discoveries of low-mass and sub-stellar eclipsing binaries belonging to star-forming regions, open clusters, and globular clusters identified by ground-based surveys and space missions with high-resolution spectroscopic follow-up. These discoveries provide benchmark systems with known distances, metallicities, and ages to calibrate masses and radii predicted by state-of-the-art evolutionary models to a few percent. We report their density and discuss current limitations on the accuracy of the physical parameters. We discuss future opportunities and highlight future guidelines to fill gaps in age and metallicity to improve further our knowledge of low-mass stars and brown dwarfs. | astrophysics |
Mysteries about the origin of high-energy cosmic neutrinos have deepened by the recent IceCube measurement of a large diffuse flux in the 10-100 TeV range. Based on the standard disk-corona picture of active galactic nuclei (AGN), we present a phenomenological model enabling us to systematically calculate the spectral sequence of multimessenger emission from the AGN coronae. We show that protons in the coronal plasma can be stochastically accelerated up to PeV energies by plasma turbulence, and find that the model explains the large diffuse flux of medium-energy neutrinos if the cosmic rays carry only a few percent of the thermal energy. We find that the Bethe-Heitler process plays a crucial role in connecting these neutrinos and cascaded MeV gamma rays, and point out that the gamma-ray flux can even be enhanced by the reacceleration of secondary pairs. Critical tests of the model are given by its prediction that a significant fraction of the MeV gamma-ray background correlates with about 10 TeV neutrinos, and nearby Seyfert galaxies including NGC 1068 are promising targets for IceCube, KM3Net, IceCube-Gen2, and future MeV gamma-ray telescopes. | astrophysics |
Optical cameras are gaining popularity as the suitable sensor for relative navigation in space due to their attractive sizing, power and cost properties when compared to conventional flight hardware or costly laser-based systems. However, a camera cannot infer depth information on its own, which is often solved by introducing complementary sensors or a second camera. In this paper, an innovative model-based approach is instead demonstrated to estimate the six-dimensional pose of a target object relative to the chaser spacecraft using solely a monocular setup. The observed facet of the target is tackled as a classification problem, where the three-dimensional shape is learned offline using Gaussian mixture modeling. The estimate is refined by minimizing two different robust loss functions based on local feature correspondences. The resulting pseudo-measurements are then processed and fused with an extended Kalman filter. The entire optimization framework is designed to operate directly on the $SE\text{(3)}$ manifold, uncoupling the process and measurement models from the global attitude state representation. It is validated on realistic synthetic and laboratory datasets of a rendezvous trajectory with the complex spacecraft Envisat. It is demonstrated how it achieves an estimate of the relative pose with high accuracy over its full tumbling motion. | computer science |
Quantum enhanced measurement, termed quantum metrology, is important for quantum information science and technology. A crucial measure for the minimum achievable statistical uncertainty in the estimation of parameters is given by the quantum Fisher information (QFI). Although the QFI of closed systems has been investigated, how nonequilibrium environments influence the QFI of open systems remains elusive. In this study, we consider a two-fermionic open system immersed in two fermionic reservoirs, investigate how nonequilibrium environments influence its QFI, and explore the relationship between the QFI and the entanglement strength of this open system. The parameters to be estimated are the inter-site tunneling rate $\Delta$ and the coupling strength $\Gamma_1$ between one fermion site and the environment it is in contact with. We find that when the tunneling rate is small, its QFI $\mathcal{F}_{\Delta}$ predominantly increases with the bias or nonequilibriumness between the two reservoirs, while for a large tunneling rate $\mathcal{F}_{\Delta}$ mostly decreases with the degree of nonequilibriumness. This feature is in agreement with the trend of the entanglement or coherence of the open system. For the local coupling strength $\Gamma_1$, its QFI $\mathcal{F}_{\Gamma_1}$ increases monotonically in all cases considered. This universal increasing trend is in apparent discrepancy with the behavior of the entanglement or coherence under the same nonequilibrium conditions. Our results suggest that in an open system a large QFI for a local parameter does not necessarily indicate a strong quantum correlation in the system, but may instead reflect a strongly nonequilibrium condition in the system. | quantum physics |
In this paper we study the system of a scalar quantum field confined between two plane, isotropic, and homogeneous parallel plates at thermal equilibrium. We represent the plates by the most general lossless and frequency-independent boundary conditions that satisfy the conditions of isotropy and homogeneity and are compatible with the unitarity of the quantum field theory. Under these conditions we compute the thermal correction to the quantum vacuum energy as a function of the temperature and the parameters encoding the boundary condition. The latter enables us to obtain similar results for the pressure between plates and the quantum thermal correction to the entropy. We find out that our system is thermodynamically stable for any boundary conditions, and we identify a critical temperature below which certain boundary conditions yield attractive, repulsive, and null Casimir forces. | high energy physics theory |
In observational clinic registries, time to treatment is often of interest, but treatment can be given at any time during follow-up and there is no structure or intervention to ensure regular clinic visits for data collection. To address these challenges, we introduce the time-dependent propensity process as a generalization of the propensity score. We show that the propensity process balances the entire time-varying covariate history which cannot be achieved by existing propensity score methods and that treatment assignment is strongly ignorable conditional on the propensity process. We develop methods for estimating the propensity process using observed data and for matching based on the propensity process. We illustrate the propensity process method using the Emory Amyotrophic Lateral Sclerosis (ALS) Registry data. | statistics |
Major mergers of gas-rich galaxies provide promising conditions for the formation of supermassive black holes (SMBHs; $\gtrsim10^5$ M$_\odot$) by direct collapse because they can trigger mass inflows as high as $10^4-10^5$ M$_\odot$ yr$^{-1}$ on sub-parsec scales. However, the channel of SMBH formation in this case, either dark collapse (direct collapse without prior stellar phase) or supermassive star (SMS; $\gtrsim10^4$ M$_\odot$), remains unknown. Here, we investigate the limit in accretion rate up to which stars can maintain hydrostatic equilibrium. We compute hydrostatic models of SMSs accreting at $1-1000$ M$_\odot$ yr$^{-1}$, and estimate the departures from equilibrium a posteriori by taking into account the finite speed of sound. We find that stars accreting above the atomic cooling limit ($\gtrsim10$ M$_\odot$ yr$^{-1}$) can only maintain hydrostatic equilibrium once they are supermassive. In this case, they evolve adiabatically with a hylotropic structure, that is, entropy is locally conserved and scales with the square root of the mass coordinate. Our results imply that stars can only become supermassive by accretion at the rates of atomically cooled haloes ($\sim0.1-10$ M$_\odot$ yr$^{-1}$). Once they are supermassive, larger rates are possible. | astrophysics |
We present a provably optimal differentially private algorithm for the stochastic multi-arm bandit problem, as opposed to the private analogue of the UCB-algorithm [Mishra and Thakurta, 2015; Tossou and Dimitrakakis, 2016] which doesn't meet the recently discovered lower-bound of $\Omega \left(\frac{K\log(T)}{\epsilon} \right)$ [Shariff and Sheffet, 2018]. Our construction is based on a different algorithm, Successive Elimination [Even-Dar et al. 2002], that repeatedly pulls all remaining arms until an arm is found to be suboptimal and is then eliminated. In order to devise a private analogue of Successive Elimination we visit the problem of private stopping rule, that takes as input a stream of i.i.d samples from an unknown distribution and returns a multiplicative $(1 \pm \alpha)$-approximation of the distribution's mean, and prove the optimality of our private stopping rule. We then present the private Successive Elimination algorithm which meets both the non-private lower bound [Lai and Robbins, 1985] and the above-mentioned private lower bound. We also compare empirically the performance of our algorithm with the private UCB algorithm. | statistics |
In any consistent massive quantum field theory there are well known bounds on scattering amplitudes at high energies. In conformal field theory there is no scattering amplitude, but the Mellin amplitude is a well defined object analogous to the scattering amplitude. We prove bounds at high energies on Mellin amplitudes in conformal field theories, valid under certain technical assumptions. Such bounds are derived by demanding the absence of spurious singularities in position space correlators. We also conjecture a stronger bound, based on evidence from several explicit examples. | high energy physics theory |
The windowed offset linear canonical transform (WOLCT) can be identified as a generalization of the windowed linear canonical transform (WLCT). In this paper, we generalize several different uncertainty principles for the WOLCT, including Heisenberg uncertainty principle, Hardy's uncertainty principle, Beurling's uncertainty principle, Lieb's uncertainty principle, Donoho-Stark's uncertainty principle, Amrein-Berthier-Benedicks's uncertainty principle, Nazarov's uncertainty principle and Logarithmic uncertainty principle. | mathematics |
Knowledge of thermal properties is essential to design and evaluate thermal systems and processes using nanofluids. This paper presents different analytical models to predict thermal conductivity and viscosity. Efforts have been made to develop machine learning models to predict these properties. An extensive literature survey was carried out to collect thermal property data of different nanofluids to train and test these machine learning models. The most influential properties, such as the thermal conductivity, diameter, and volume concentration of the nanoparticles, the base fluid thermal conductivity, and the nanofluid temperature, are used as input variables to the thermal conductivity models, and the molecular weight, diameter, and volume fraction of the nanoparticles, the base fluid viscosity, and the nanofluid temperature are taken as input variables to the viscosity model. The data are divided into two parts: one part is used to train the models and the remaining part is used to test them. Results show that the linear regression and ANN models predict thermal conductivity more closely, and the ANN predicts viscosity more accurately, compared to the analytical models. | physics |
Gender and racial diversity in the mediated images from the media shape our perception of different demographic groups. In this work, we investigate gender and racial diversity of 85,957 advertising images shared by the 73 top international brands on Instagram and Facebook. We hope that our analyses give guidelines on how to build a fully automated watchdog for gender and racial diversity in online advertisements. | computer science |
The backup control barrier function (CBF) was recently proposed as a tractable formulation that guarantees the feasibility of the CBF quadratic programming (QP) via an implicitly defined control invariant set. The control invariant set is based on a fixed backup policy and evaluated online by forward integrating the dynamics under the backup policy. This paper is intended as a tutorial of the backup CBF approach and a comparative study to some benchmarks. First, the backup CBF approach is presented step by step with the underlying math explained in detail. Second, we prove that the backup CBF always has a relative degree 1 under mild assumptions. Third, the backup CBF approach is compared with benchmarks such as Hamilton Jacobi PDE and Sum-of-Squares on the computation of control invariant sets, which shows that one can obtain a control invariant set close to the maximum control invariant set under a good backup policy for many practical problems. | electrical engineering and systems science |
Blind delegation protocols allow a client to delegate a computation to a server so that the server learns nothing about the input to the computation apart from its size. For the specific case of quantum computation we know that blind delegation protocols can achieve information-theoretic security. In this paper we prove, provided certain complexity-theoretic conjectures are true, that the power of information-theoretically secure blind delegation protocols for quantum computation (ITS-BQC protocols) is in a number of ways constrained. In the first part of our paper we provide some indication that ITS-BQC protocols for delegating $\sf BQP$ computations in which the client and the server interact only classically are unlikely to exist. We first show that having such a protocol with $O(n^d)$ bits of classical communication implies that $\mathsf{BQP} \subset \mathsf{MA/O(n^d)}$. We conjecture that this containment is unlikely by providing an oracle relative to which $\mathsf{BQP} \not\subset \mathsf{MA/O(n^d)}$. We then show that if an ITS-BQC protocol exists with polynomial classical communication and which allows the client to delegate quantum sampling problems, then there exist non-uniform circuits of size $2^{n - \mathsf{\Omega}(n/log(n))}$, making polynomially-sized queries to an $\sf NP^{NP}$ oracle, for computing the permanent of an $n \times n$ matrix. The second part of our paper concerns ITS-BQC protocols in which the client and the server engage in one round of quantum communication and then exchange polynomially many classical messages. First, we provide a complexity-theoretic upper bound on the types of functions that could be delegated in such a protocol, namely $\mathsf{QCMA/qpoly \cap coQCMA/qpoly}$. Then, we show that having such a protocol for delegating $\mathsf{NP}$-hard functions implies $\mathsf{coNP^{NP^{NP}}} \subseteq \mathsf{NP^{NP^{PromiseQMA}}}$. | quantum physics |
Future upgrades to the LHC will pose considerable challenges for traditional particle track reconstruction methods. We investigate how artificial Neural Networks and Deep Learning could be used to complement existing algorithms to increase performance. Generating seeds of detector hits is an important phase during the beginning of track reconstruction and improving the current heuristics of seed generation seems like a feasible task. We find that given sufficient training data, a comparatively compact, standard feed-forward neural network can be trained to classify seeds with great accuracy and at high speeds. Thanks to immense parallelization benefits, it might even be worthwhile to completely replace the seed generation process with the Neural Network instead of just improving the seed quality of existing generators. | physics |
The Euphrosyne asteroid family occupies a unique zone in orbital element space around 3.15 au and may be an important source of the low-albedo near-Earth objects. The parent body of this family may have been one of the planetesimals that delivered water and organic materials onto the growing terrestrial planets. We aim to characterize the compositional properties as well as the dynamical properties of the family. We performed a systematic study to characterize the physical properties of the Euphrosyne family members via low-resolution spectroscopy using the IRTF telescope. In addition, we performed smoothed-particle hydrodynamics (SPH) simulations and N-body simulations to investigate the collisional origin, determine a realistic velocity field, study the orbital evolution, and constrain the age of the Euphrosyne family. Our spectroscopy survey shows that the family members exhibit a tight taxonomic distribution, suggesting a homogeneous composition of the parent body. Our SPH simulations are consistent with the Euphrosyne family having formed via a reaccumulation process instead of a cratering event. Finally, our N-body simulations indicate that the age of the family is 280 Myr +180/-80 Myr, which is younger than a previous estimate. | astrophysics |
We characterize bounded simply-connected planar $W^{1,p}$-extension domains for $1 < p <2$ as those bounded domains $\Omega \subset \mathbb R^2$ for which any two points $z_1,z_2 \in \mathbb R^2 \setminus \Omega$ can be connected with a curve $\gamma\subset \mathbb R^2 \setminus \Omega$ satisfying $$\int_{\gamma} dist(z,\partial \Omega)^{1-p}\, dz \lesssim |z_1-z_2|^{2-p}.$$ Combined with known results, we obtain the following duality result: a Jordan domain $\Omega \subset \mathbb R^2$ is a $W^{1,p}$-extension domain, $1 < p < \infty$, if and only if the complementary domain $\mathbb R^2 \setminus \bar\Omega$ is a $W^{1,p/(p-1)}$-extension domain. | mathematics |
The production of the particles $D$ and $\bar D^{*}$ is simulated in pp collisions at $\sqrt{s}=7$ TeV at mid-rapidity using the PACIAE model. The simulation results for $D^{0}$, $D^{+}$, $D^{*+}$, $\pi^{+}$ and $K^{+}$ from PACIAE agree well with the experimental results. The $X(3872)$ could be a tetraquark state, a nucleus-like state, or a molecular state. We then use a dynamically constrained phase-space coalescence model plus the PACIAE model to produce $D\bar D^{*}$ bound states and study the exotic state $X(3872)$ with these different structures in $pp$ collisions at $\sqrt{s}=7$ and 13 TeV. The yield, transverse momentum distribution, and rapidity distribution of the exotic state $X(3872)$ as a tetraquark state, a nucleus-like state, and a molecular state are predicted. | high energy physics phenomenology |
In this paper, we bridge two disciplines: systems & control and environmental psychology. We develop second order Behavior and Personal norm (BP) based models (which are consistent with some studies on opinion dynamics) for describing and predicting human activities related to the final use of energy, where psychological variables, financial incentives and social interactions are considered. Based on these models, we develop a human-physical system (HPS) framework consisting of three layers: (i) human behavior, (ii) personal norms and (iii) the physical system (i.e., an AC power grid). Then, we formulate a social-physical welfare optimization problem and solve it by designing a primal-dual controller, which generates the optimal incentives to humans and the control inputs to the power grid. Finally, we assess in simulation the proposed models and approaches. | electrical engineering and systems science |
The Gravitational-wave Optical Transient Observer (GOTO) is a wide-field telescope project focused on detecting optical counterparts to gravitational-wave sources. The GOTO Telescope Control System (G-TeCS) is a custom robotic control system which autonomously manages the GOTO telescope hardware and nightly operations. Since the commissioning of the GOTO prototype on La Palma in 2017, development of the control system has focused on the alert handling and scheduling systems. These allow GOTO to receive and process transient alerts and then schedule and carry out observations, all without the need for human involvement. GOTO is ultimately anticipated to include multiple telescope arrays on independent mounts, both on La Palma and at a southern site in Australia. When complete these mounts will be linked to form a single multi-site observatory, requiring more advanced scheduling systems to best optimise survey and follow-up observations. | astrophysics |
Thermal states of light are widely used in quantum optics for testing various quantum phenomena. In particular, they can be utilized for the characterization of photon creation and photon annihilation operations. During the last decade the problem of photon subtraction from multimode quantum states has become of much significance. Therefore, in this work we present a technique for statistical parameter estimation of multimode multiphoton-subtracted thermal states of light, which can be used for a multimode photon annihilation test. | quantum physics |
It is shown that many results on the statistical robustness of kernel-based pairwise learning can be derived under essentially no assumptions on the input and output spaces. In particular, neither moment conditions on the conditional distribution of Y given X = x nor boundedness of the output space are needed. We obtain results on the existence and boundedness of the influence function and show qualitative robustness of the kernel-based estimator. The present paper generalizes results by Christmann and Zhou (2016) by allowing the prediction function to take two arguments and can thus be applied in a variety of situations such as ranking. | statistics
In this paper we introduce the notion of horizontally affine, $h$-affine in short, maps on step-two Carnot groups. When the group is a free step-two Carnot group, we show that this class of maps has a rich structure related to the exterior algebra of the first layer of the group. Using the known fact that arbitrary step-two Carnot groups can be written as quotients of a free step-two Carnot group, we deduce from the free step-two case a description of $h$-affine maps in this more general setting, together with several characterizations of step-two Carnot groups where $h$-affine maps are affine in the usual sense, when identifying the group with a real vector space. Our interest in $h$-affine maps stems from their relationship with a class of sets called precisely monotone, recently introduced in the literature, as well as from their relationship with minimal hypersurfaces. | mathematics
This paper presents a factor graph formulation and particle-based sum-product algorithm (SPA) for scalable detection and tracking of extended objects. The proposed method efficiently performs probabilistic multiple-measurement-to-object association, represents object extents by random matrices, and introduces the states of newly detected objects dynamically. Scalable detection and tracking of objects is enabled by modeling association uncertainty by measurement-oriented association variables and newly detected objects by a Poisson birth process. Contrary to conventional extended object tracking (EOT) methods with random-matrix models, a fully particle-based approach makes it possible to represent the object extent by different geometric shapes. The proposed method can reliably determine the existence of, and track, a large number of closely spaced extended objects without gating and clustering of measurements. We numerically demonstrate significant performance advantages of our method compared to the recently proposed Poisson multi-Bernoulli mixture filter. In particular, we consider a simulated scenario with ten closely spaced extended objects as well as a real urban autonomous driving scenario where measurements of the environment are captured by a LIDAR sensor. | electrical engineering and systems science
We present a detailed analysis of three XMM-Newton observations of the black hole low-mass X-ray binary IGR J17091-3624 taken during its 2016 outburst. Radio observations obtained with the Australia Telescope Compact Array (ATCA) indicate the presence of a compact jet during all observations. From the best X-ray data fit results we concluded that the observations were taken during a transition from a hard accretion state to a hard-intermediate accretion state. For Observations 1 and 2 a local absorber can be identified in the EPIC-pn spectra but not in the RGS spectra, preventing us from distinguishing between absorption local to the source and that from the hot ISM component. For Observation 3, on the other hand, we have identified an intrinsic ionized static absorber in both EPIC-pn and RGS spectra. The absorber, observed simultaneously with a compact jet emission, is characterized by an ionization parameter of $1.96 < \log(\xi) < 2.05$ and traced mainly by Ne X, Mg XII, Si XIII and Fe XVIII. | astrophysics
We develop a theory of Čech-Bott-Chern cohomology and in this context we naturally come up with the relative Bott-Chern cohomology. In fact Bott-Chern cohomology has two relatives and they all arise from a single complex. Thus we study these three cohomologies in a unified way and obtain a long exact sequence involving the three. We then study the localization problem of characteristic classes in the relative Bott-Chern cohomology. For this we define the cup product and integration in our framework and we discuss local and global duality homomorphisms. After reviewing some materials on connections, we give a vanishing theorem relevant to our localization. With these, we prove a residue theorem for a vector bundle admitting a Hermitian connection compatible with an action of the non-singular part of a singular distribution. As a typical case, we discuss the action of a distribution on the normal bundle of an invariant submanifold (the so-called Camacho-Sad action) and give a specific example. | mathematics
We have developed a novel differential equation solver software called PND, based on the physics-informed neural network, for molecular dynamics simulators. Built on the automatic differentiation technique provided by PyTorch, our software allows users to flexibly implement the equations of atomic motion, initial and boundary conditions, and conservation laws as the loss function to train the network. PND comes with a parallel molecular dynamics (MD) engine so that users can examine and optimize the loss function design, different conservation laws and boundary conditions, and hyperparameters, thereby accelerating PINN-based development for molecular applications. | physics
Existence of a spectral singularity (SS) in the spectrum of a Schr\"{o}dinger operator with a non-Hermitian potential requires exact matching of parameters of the potential. We provide a necessary and sufficient condition for a potential to have a SS at a given wavelength. It is shown that potentials with SSs at prescribed wavelengths can be obtained by a simple and effective procedure. In particular, the developed approach allows one to obtain potentials with several SSs and with SSs of the second order, as well as potentials obeying a given symmetry, say, $\PT$-symmetric potentials. We illustrate all these opportunities with examples. We also describe splitting of second-order SSs under change of the potential parameters, and discuss possibilities of experimental observation of SSs of different orders. | physics
We consider quantum models corresponding to supersymmetrizations of the two-dimensional harmonic oscillator based on worldline $d=1$ realizations of the supergroup SU$(\,{\cal N}/2\,|1)$, where the number of supersymmetries ${\cal N}$ is an arbitrary even number. The constructed models possess the hidden supersymmetry SU$(\,{\cal N}/2\,|2)$. Degeneracies of the energy levels are spanned by representations of the hidden supersymmetry group. | high energy physics theory