text | label |
---|---|
We formulate the notion of an isomorphism of GKM graphs. We then show that two GKM graphs have isomorphic graph equivariant cohomology algebras if and only if the graphs are isomorphic. | mathematics |
We consider applications of a finitary version of the Affine Representability theorem, which follows from recent work of Belov-Kanel, Rowen, and Vishne. Using this result, we show that, given a finite set of polynomial identities, there is an algorithm that terminates after finitely many steps and decides whether these identities force a ring to be commutative. We then revisit old commutativity theorems of Jacobson and Herstein in light of this algorithm and obtain general results in this vein. In addition, we completely characterize the homogeneous multilinear identities that imply the commutativity of a ring. | mathematics |
We examine parallaxes and distances for Galactic luminous blue variables (LBVs) in Gaia DR2. The sample includes 11 LBVs and 14 LBV candidates. For about half of the sample, DR2 distances are either similar to commonly adopted literature values, or the DR2 values have large uncertainties. For the rest, reliable DR2 distances differ significantly from values in the literature, and in most cases the Gaia DR2 distance is smaller. Two key results are that the S Doradus instability strip may not be as clearly defined as previously thought, and that there exists a population of LBVs at relatively low luminosities. LBVs seem to occupy a wide swath from the end of the main sequence at the blue edge to 8000 K at the red side, with a spread in luminosity reaching as low as log(L/Lsun)=4.5. The lower-luminosity group corresponds to effective single-star initial masses of 10-25 Msun, and includes objects that have been considered as confirmed LBVs. We discuss implications for LBVs including (1) their instability and origin in binary evolution, (2) connections to some supernova (SN) impostors such as the class of SN 2008S-like objects, and (3) LBVs that may be progenitors of SNe with dense circumstellar material across a wide initial mass range. Although some of the Gaia DR2 distances for LBVs have large uncertainty, this represents the most direct and consistent set of Galactic LBV distance estimates available in the literature. | astrophysics |
We aim to design a fairness-aware allocation approach to maximize geographical diversity and avoid unfairness in the sense of demographic disparity. During the development of this work, the COVID-19 pandemic is still spreading in the U.S. and other parts of the world on a large scale. Many poor communities and minority groups are much more vulnerable than the rest. To provide sufficient vaccine and medical resources to all residents and effectively stop the further spreading of the pandemic, the average medical resources per capita of a community should be independent of the community's demographic features and conditional only on its exposure rate to the disease. In this article, we integrate different aspects of resource allocation and seek a synergistic intervention strategy that gives vulnerable populations higher priority when distributing medical resources. This prevention-centered strategy is a trade-off between geographical coverage and social group fairness. The proposed principle can be applied to the allocation of other scarce resources and social benefits. | mathematics |
If you want accurate predictions for the motion of water-and-air-propelled D.I.Y. rockets, neglecting air resistance is not an option. But the theoretical analysis including air drag leads to a system of differential equations which can only be solved numerically. We propose an approximation which works simply by estimating a definite integral and which is feasible even for undergraduate physics courses. The results deviate only slightly from the reference data (obtained by the Runge-Kutta method). The motion is divided into several flight phases that are discussed separately, and the resulting equations are solved by analytic and numerical methods. The results from the different flight phases are collected and compared to data obtained from well-explained and documented experiments. Furthermore, we theoretically estimate the rocket's drag coefficient. The result is confirmed by a wind tunnel experiment. | physics |
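The coasting (post-thrust) phase described above reduces to a one-dimensional ODE with quadratic drag. Below is a minimal sketch, not the authors' code, of integrating that phase with SciPy's adaptive Runge-Kutta solver; the mass, drag coefficient, and cross-section values are illustrative assumptions.

```python
# Coasting phase of a rocket with quadratic air drag (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

m, g = 0.10, 9.81                 # empty-rocket mass [kg], gravity [m/s^2]
rho, cd, A = 1.2, 0.35, 3.0e-3    # air density, drag coefficient, cross section
k = 0.5 * rho * cd * A            # lumped drag constant [kg/m]

def rhs(t, y):
    h, v = y
    return [v, -g - (k / m) * v * abs(v)]   # drag always opposes the motion

sol = solve_ivp(rhs, (0.0, 6.0), [0.0, 30.0], max_step=0.01)  # launch at 30 m/s
print(f"apex height ~ {sol.y[0].max():.1f} m")
```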
Developed as a NASA Astrophysics Probe-class mission, the Probe Of Extreme Multi-Messenger Astrophysics (POEMMA) is designed to identify the sources of ultra-high energy cosmic rays (UHECRs) and to observe cosmic neutrinos. POEMMA consists of two spacecraft flying in a loose formation at 525 km altitude in 28.5$^\circ$ inclination orbits. Each spacecraft hosts a Schmidt telescope with a large collecting area and wide Field-of-View (FoV). A novel focal plane is employed that is optimized to observe both the UV fluorescence signal and the optical Cherenkov signals from extensive air showers (EASs). In UHECR stereo fluorescence mode, POEMMA will measure the spectrum, composition, and full-sky distribution of the UHECRs above 20 EeV with high statistics, along with remarkable sensitivity to UHE neutrinos. The POEMMA spacecraft are designed to quickly re-orient to a Target-of-Opportunity (ToO) neutrino mode to observe transient astrophysical sources with unique sensitivity. In this mode, POEMMA will be able to detect cosmic tau neutrino events above 20 PeV by measuring the upward-moving EASs from $\tau$-lepton decays induced by tau neutrino interactions in the Earth. In this paper, POEMMA's science goals and instrument design are summarized with a focus on the SiPM implementation in POEMMA, along with a detailed discussion of the properties of the Cherenkov EAS signal in the context of the wide wavelength sensitivity offered by SiPMs. A comparison of the fluorescence response between SiPMs and the MAPMTs currently planned for use in POEMMA will also be discussed, assessing the potential for SiPMs to perform EAS fluorescence measurements. | astrophysics |
Using a home-built Ku-band ESR spectrometer equipped with an arbitrary waveform generator and a stripline resonator, we implement two types of pulses that would benefit quantum computers: the BB1 composite pulse and a microwave frequency comb. The broadband type 1 (BB1) composite pulse is commonly used to combat systematic errors, but previous experiments were carried out only on samples with extremely narrow linewidths. Using a sample with a linewidth of 9.35 MHz, we demonstrate that the BB1 composite pulse is still effective against pulse length errors at a Rabi frequency of 38.46 MHz. The fast control is realized with the low microwave power required for initialization of electron spin qubits at 0.6 T. We also digitally design and implement a microwave frequency comb to excite multiple spin packets of a different sample. Using this pulse, we demonstrate coherent and well resolved excitations spanning the entire spectrum of the sample (ranging from -20 to 20 MHz). In anticipation of scaling up to a system with a large number of qubits, this approach provides an efficient technique to selectively and simultaneously control multiple qubits defined in the frequency domain. | quantum physics |
While our visible Universe could be a 3-brane, some cosmological scenarios consider that other 3-branes could be hidden in the extra-dimensional bulk. Matter disappearance toward a hidden brane has mainly been discussed for the neutron - both theoretically and experimentally - but other particles are poorly studied. Recent experimental results offer new constraints on positronium or quarkonium invisible decays. In the present work, we show how a two-brane Universe allows for such invisible decays. We put this result in the context of the recent experimental data to constrain the brane energy scale $M_B$ (or effective brane thickness $M_B^{-1}$) and the interbrane distance $d$ for a relevant two-brane Universe in a $SO(3,1)$-broken 5D bulk. Quarkonia yield weak bounds compared to results deduced from previous passing-through-walls-neutron experiments, for which scenarios with $M_B < 2.5 \times 10^{17}$ GeV and $d > 0.5$ fm are excluded. By contrast, positronium experiments can compete with neutron experiments, depending on the matter content of each brane. To constrain scenarios up to the Planck scale, positronium experiments in a vacuum cavity should be able to reach $\text{Br}(\text{o-Ps} \rightarrow \text{invisible}) \approx 10^{-6}$. | high energy physics phenomenology |
An opinion piece by Abigail Thompson in the Notices of the American Mathematical Society has engendered a lot of discussion, including three open letters with over 1400 signatures. We analyze the professional profiles of the signatories of these three letters and, in particular, their citation records. We find that, when restricting to R1 math professors, the means of their citations and citations per year are ordered $\mu(A) < \mu(B) < \mu(C)$. The significance of these findings is validated using a one-sided permutation test. | statistics |
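For readers who want to reproduce the style of test named above, here is a minimal sketch of a one-sided permutation test for a difference in group means; the citation counts are made-up toy numbers, not the paper's data.

```python
# One-sided permutation test for the difference in mean citations of two groups.
import numpy as np

rng = np.random.default_rng(0)
group_a = np.array([12, 40, 7, 55, 23, 9], dtype=float)     # toy citation counts
group_b = np.array([60, 85, 33, 102, 48, 71], dtype=float)

observed = group_b.mean() - group_a.mean()
pooled = np.concatenate([group_a, group_b])
n_a, n_perm, count = len(group_a), 10_000, 0

for _ in range(n_perm):
    rng.shuffle(pooled)                      # random relabeling of the groups
    diff = pooled[n_a:].mean() - pooled[:n_a].mean()
    if diff >= observed:                     # one-sided alternative: mu(B) > mu(A)
        count += 1

p_value = (count + 1) / (n_perm + 1)         # add-one correction avoids p = 0
print(f"one-sided permutation p-value: {p_value:.4f}")
```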
Recently, the possibility that several starless telluric planets may form around supermassive black holes (SMBHs) and receive an energy input from the hole's accretion disk, which, under certain plausible circumstances, may make them habitable in a terrestrial sense, has gained increasing attention. In particular, an observer on a planet orbiting at distance $r=100$ Schwarzschild radii from a maximally rotating Kerr SMBH with mass $M_\bullet = 1\times 10^8\,M_\odot$, in a plane slightly outside the equator of the latter, would see the gravitationally lensed accretion disk with the same angular size as the Sun seen from the Earth. Moreover, the accretion rate might be imagined to be set in such a way that the apparent disk's temperature would be identical to that of the solar surface. We demonstrate that the post-Newtonian (pN) de Sitter and Lense-Thirring precessions of the spin axis of such a world would rapidly change, among other things, its tilt, $\varepsilon$, to its orbital plane by tens to hundreds of degrees over a time span of, say, just $\Delta t =400\,\mathrm{yr}$, strongly depending on the obliquity $\eta_\bullet$ of the SMBH's spin to the orbital plane. Thus, such relativistic features would per se have a relevant impact on the long-term habitability of the considered planet. Other scenarios are examined as well. | astrophysics |
We consider the sharp interface limit of a convective Allen-Cahn equation, which can be part of a Navier-Stokes/Allen-Cahn system, for different scalings of the mobility $m_\varepsilon=m_0\varepsilon^\theta$ as $\varepsilon\to 0$. In the case $\theta>2$ we show a (non-)convergence result in the sense that the concentrations converge to the solution of a transport equation, but they do not behave like a rescaled optimal profile in the normal direction to the interface as in the case $\theta=0$. Moreover, we show that an associated mean curvature functional does not converge to the corresponding functional for the sharp interface. Finally, we discuss the convergence in the cases $\theta=0,1$ by the method of formally matched asymptotics. | mathematics |
We use a specialized boundary-value problem solver for mixed-type functional differential equations to numerically examine the landscape of traveling wave solutions to the diatomic Fermi-Pasta-Ulam-Tsingou (FPUT) problem. By using a continuation approach, we are able to uncover the relationship between the branches of micropterons and nanopterons that have been rigorously constructed recently in various limiting regimes. We show that the associated surfaces are connected together in a nontrivial fashion and illustrate the key role that solitary waves play in the branch points. Finally, we numerically show that the diatomic solitary waves are stable under the full dynamics of the FPUT system. | mathematics |
Let $q$ be a prime power of the form $q=12c^2+4c+3$ with $c$ an arbitrary integer. In this paper we construct a difference family with parameters $(2q^2;q^2,q^2,q^2,q^2-1;2q^2-2)$ in ${\mathbb Z}_2\times ({\mathbb F}_{q^2},+)$. As a consequence, by applying the Wallis-Whiteman array, we obtain Hadamard matrices of order $4(2q^2+1)$ for the aforementioned $q$'s. | mathematics |
Reproducing kernel Hilbert spaces (RKHSs) play an important role in many statistics and machine learning applications ranging from support vector machines to Gaussian processes and kernel embeddings of distributions. Operators acting on such spaces are, for instance, required to embed conditional probability distributions in order to implement the kernel Bayes rule and build sequential data models. It was recently shown that transfer operators such as the Perron-Frobenius or Koopman operator can also be approximated in a similar fashion using covariance and cross-covariance operators and that eigenfunctions of these operators can be obtained by solving associated matrix eigenvalue problems. The goal of this paper is to provide a solid functional analytic foundation for the eigenvalue decomposition of RKHS operators and to extend the approach to the singular value decomposition. The results are illustrated with simple guiding examples. | mathematics |
If dark matter was produced in the early Universe by the decoupling of its annihilations into known particles, there is a sharp experimental target for the size of its coupling. We show that if dark matter was produced by inelastic scattering against a lighter particle from the thermal bath, then its coupling can be exponentially smaller than the coupling required for its production from annihilations. As an application, we demonstrate that dark matter produced by inelastic scattering against electrons provides new thermal relic targets for direct detection and fixed target experiments. | high energy physics phenomenology |
The effect of rare-earth ions (R = Er$^{3+}$, Tb$^{3+}$, and Ce$^{3.75+}$) on the dielectric properties and the electric polarization induced by local polar phase separation domains in solid solutions of R$_{0.8}$Ce$_{0.2}$Mn$_2$O$_5$ (R = Er, Tb) multiferroics has been studied. These parameters are found to qualitatively differ from those of the initial RMn$_2$O$_5$ (R = Er, Tb) crystals studied before. It is shown that the properties of the polar phase separation domains that form in a subsystem of Mn$^{3+}$ and Mn$^{4+}$ ions, due to a finite probability of electron tunneling between these ions with different valences, are substantially dependent on the values of the crystal fields in which these domains exist. A combined influence of Er$^{3+}$, Tb$^{3+}$, and Ce$^{3.75+}$ ions is found to substantially change the crystal field in R$_{0.8}$Ce$_{0.2}$Mn$_2$O$_5$ (R = Er, Tb) as compared to RMn$_2$O$_5$ (R = Er, Tb). | condensed matter |
We investigate $\phi$-meson electroproduction off the proton target, i.e., $\gamma^* p \to \phi p$, by employing a tree-level effective Lagrangian approach in the kinematical ranges of $Q^2$ = (0$-$4) $\mathrm{GeV}^2$, $W$ = (2$-$5) GeV, and $|t| \leq 2\,\mathrm{GeV}^2$. In addition to the universally accepted Pomeron exchange, we consider various meson exchanges in the $t$ channel with the Regge method. Direct $\phi$-meson radiations in the $s$- and $u$-channels are also taken into account. We find that the $Q^2$ dependences of the transverse ($\sigma_{\mathrm{T}}$) and longitudinal ($\sigma_{\mathrm{L}}$) cross sections are governed by Pomeron and $(a_0,f_0)$ scalar-meson exchanges, respectively. Meanwhile, the contributions of $(\pi,\eta)$ pseudoscalar- and $f_1(1285)$ axial-vector-meson exchanges are much more suppressed. The results for the interference cross sections ($\sigma_{\mathrm{LT}}, \sigma_{\mathrm{TT}}$) and the spin-density matrix elements indicate that $s$-channel helicity conservation holds at $Q^2$ = (1$-$4) $\mathrm{GeV}^2$. The parity asymmetry yields $P \simeq 0.95$ at $W$ = 2.5 GeV, meaning that natural-parity exchange dominates the reaction process. Our numerical results are in fair agreement with the experimental data, and thus the use of our effective Reggeized model is justified over the considered kinematical ranges of $Q^2$, $W$, and $t$. | high energy physics phenomenology |
The composition as well as the very existence of the interior of a Schwarzschild black hole (BH) remains at the forefront of open problems in fundamental physics. To address this issue, we turn to Hawking's "principle of ignorance", which says that, for an observer with limited information, all descriptions that are consistent with known physics are equally valid. We compare three different observers who view the BH from the outside and agree on the external Schwarzschild geometry. First, the modernist, who accepts the classical BH as the final state of gravitational collapse, the singularity theorems that underlie this premise and the central singularity that the theorems predict. The modernist is willing to describe matter in terms of quantum fields in curved space but insists on (semi)classical gravity. Second is the skeptic, who wishes to evade any singular behavior by finding a loophole to the singularity theorems within the realm of classical general relativity (GR). The third is a postmodernist who similarly wants to circumvent the singularity theorems but is willing to invoke exotic quantum physics in the gravitational and/or matter sector to do so. The postmodern view suggests that the uncertainty principle can stabilize a classically singular BH in a similar manner to the stabilization of the classically unstable hydrogen atom: Strong quantum effects in the matter and gravitational sectors resolve the would-be singularity over horizon-sized length scales. The postmodern picture then requires a significant departure from (semi)classical gravity, as well as some exotic matter beyond the standard model of particle physics (SM). We find that only the postmodern framework is consistent with what is known so far about BH physics and conclude that a valid description of the BH interior needs matter beyond the SM and gravitational physics beyond (semi)classical GR. | high energy physics theory |
The velocity field provides a complementary avenue to constrain cosmological information, either through peculiar velocity surveys or the kinetic Sunyaev-Zel'dovich effect. One of the commonly used statistics is the mean radial pairwise velocity. Here, we consider the three-point mean relative velocity, i.e. the mean relative velocities between pairs in a triplet. Using halo catalogs from the Quijote suite of N-body simulations, we first showcase how the analytical prediction for the mean relative velocities between pairs in a triplet achieves better than 4-5% accuracy using standard perturbation theory at leading order for triangular configurations with a minimum separation of $r \geq 50\ h^{-1}$Mpc. Furthermore, we present the three-point relative velocity as a novel probe of neutrino mass estimation. We explore the full cosmological information content of the halo mean pairwise velocities, and the mean relative velocities between halo pairs in a triplet. We undertake this through the Fisher-matrix formalism using 22,000 simulations from the Quijote suite, and considering all triangular configurations with a minimum and a maximum separation of $20\ h^{-1}$Mpc and $120\ h^{-1}$Mpc, respectively. We find that the mean relative velocities in a triplet allow a 1$\sigma$ neutrino mass ($M_\nu$) constraint of 0.065 eV, which is roughly 13 times better than the mean pairwise velocity constraint (0.877 eV). This information gain is not limited only to neutrino mass, but extends to other cosmological parameters: $\Omega_{\mathrm{m}}$, $\Omega_{\mathrm{b}}$, $h$, $n_{\mathrm{s}}$ and $\sigma_{8}$, achieving gains of 8.9, 11.8, 15.5, 20.9 and 10.9 times, respectively. These results illustrate the possibility of exploiting the mean three-point relative velocities for constraining the cosmological parameters accurately from future cosmic microwave background experiments and peculiar velocity surveys. | astrophysics |
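The Fisher-matrix forecast described above follows a standard recipe: estimate the covariance of the statistic from simulations and combine it with finite-difference derivatives with respect to the parameters. A generic sketch with mock arrays (not the Quijote pipeline) is:

```python
# Generic simulation-based Fisher forecast; mock data stands in for the
# measured statistic and its parameter derivatives.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_params, n_sims = 50, 6, 2000
sims = rng.normal(size=(n_sims, n_bins))     # mock realizations of the statistic
cov = np.cov(sims, rowvar=False)

# Hartlap factor corrects the bias of inverting a sample covariance matrix
hartlap = (n_sims - n_bins - 2) / (n_sims - 1)
inv_cov = hartlap * np.linalg.inv(cov)

derivs = rng.normal(size=(n_params, n_bins)) # mock d(statistic)/d(parameter)
F = derivs @ inv_cov @ derivs.T              # Fisher matrix
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma forecasts
print(sigmas)
```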
The search for spectroscopic biosignatures with the next-generation of space telescopes could provide observational constraints on the abundance of exoplanets with signs of life. An extension of this spectroscopic characterization of exoplanets is the search for observational evidence of technology, known as technosignatures. Current mission concepts that would observe biosignatures from ultraviolet to near-infrared wavelengths could place upper limits on the fraction of planets in the galaxy that host life, although such missions tend to have relatively limited capabilities of constraining the prevalence of technosignatures at mid-infrared wavelengths. Yet searching for technosignatures alongside biosignatures would provide important knowledge about the future of our civilization. If planets with technosignatures are abundant, then we can increase our confidence that the hardest step in planetary evolution--the Great Filter--is probably in our past. But if we find that life is commonplace while technosignatures are absent, then this would increase the likelihood that the Great Filter awaits to challenge us in the future. | astrophysics |
The application of imaging techniques based on ensembles of nitrogen-vacancy (NV) sensors in diamond to characterise electrical devices has been proposed, but the compatibility of NV sensing with operational gated devices remains largely unexplored. Here we fabricate graphene field-effect transistors (GFETs) directly on the diamond surface and characterise them via NV microscopy. The current density within the gated graphene is reconstructed from NV magnetometry under both mostly p- and n-type doping, but the exact doping level is found to be affected by the measurements. Additionally, we observe a surprisingly large modulation of the electric field at the diamond surface under an applied gate potential, seen in NV photoluminescence and NV electrometry measurements, suggesting a complex electrostatic response of the oxide-graphene-diamond structure. Possible solutions to mitigate these effects are discussed. | condensed matter |
The Hodgkin-Huxley model describes the behavior of the cell membrane in neurons, treating each part of it as an electric circuit element, namely capacitors, memristors, and voltage sources. We focus on the activation channel of potassium ions, due to its simplicity, while keeping most of the features displayed by the original model. This reduced version is essentially a classical memristor - a resistor whose resistance depends on the history of electric signals that have crossed it - coupled to a voltage source and a capacitor. Here, we consider a quantized Hodgkin-Huxley model based on a quantum memristor formalism. We compare the behavior of the membrane voltage and the potassium channel conductance, when the circuit is subjected to AC sources, in both the classical and quantum realms. Numerical simulations show the expected adaptation of the considered channel conductance depending on the signal history in all regimes. Remarkably, the computation of higher moments of the voltage manifests purely quantum features related to the circuit's zero-point energy. Finally, we study the implementation of the Hodgkin-Huxley quantum memristor as an asymmetric rf SQUID in superconducting circuits. This study may enable the construction of quantum neuron networks inspired by brain function, as well as the design of neuromorphic quantum architectures for quantum machine learning. | quantum physics |
This paper describes implementation details for a 3-level cognitive model, described in the paper series. The whole architecture is now modular, with different levels using different types of information. The ensemble-hierarchy relationship is maintained and placed in the bottom optimising and middle aggregating levels, to store memory objects and their relations. The top-level cognitive layer has been re-designed to model the Cognitive Process Language (CPL) of an earlier paper, by refactoring it into a network structure with a light scheduler. The cortex is thought to be hierarchical, clustering from simple to more complex features. The refactored network might therefore challenge conventional thinking on that brain region. It is also argued that the function, and the structure in particular, of the new top level are similar to the psychological theory of chunking. The model is still only a framework and does not have enough information for real intelligence. But a framework is now implemented over the whole design, and so it can give a more complete picture of the potential results. | computer science |
Launched in April 2018, NASA's Transiting Exoplanet Survey Satellite (TESS) has been performing a wide-field survey for exoplanets orbiting stars, with the goal of producing a rich database for follow-on studies. Here we present estimates of the detected exoplanet orbital periods in the 2-minute cadence mode of the TESS mission. For a two-transit detection criterion, the expected mean value of the most frequently detected orbital period is 5.01 days, with the most frequently detected range being 2.12 to 11.82 days, in the region with 27 days of observation. Near the poles, where the observational duration is 351 days, the expected mean orbital period is 10.93 days, with the most frequently detected range being from 3.35 to 35.65 days. For one transit, the most frequently detected orbital period is 8.17 days in the region with 27 days of observation and 11.25 days near the poles. For the entire TESS mission containing several sectors, we estimate that the mean value of the orbital period is 8.47 days for two transits and 10.09 days for one transit, respectively. If TESS yields a planet population substantially different from what is predicted here, the underlying planet occurrence rates are likely different between the stellar sample probed by TESS and that probed by Kepler. | astrophysics |
Quantum computing has great potential for advancing machine learning algorithms beyond classical reach. Even though full-fledged universal quantum computers do not exist yet, their expected benefits for machine learning can already be shown using simulators and already available quantum hardware. In this work, we focus on distance-based classification using actual early-stage quantum hardware. We extend earlier work and present a distance-based classification algorithm using only two qubits. We show that the results are similar to the theoretically expected results. | quantum physics |
In this paper, we propose a sampling mechanism for adaptive diffusion networks that adaptively changes the number of sampled nodes based on the mean-squared error in the neighborhood of each node. It presents fast convergence during the transient and a significant reduction in the number of sampled nodes in steady state. Besides reducing the computational cost, the proposed mechanism can also be used as a censoring technique, thus saving energy by reducing the amount of communication between nodes. We also present a theoretical analysis to obtain lower and upper bounds on the number of network nodes sampled in steady state. | electrical engineering and systems science |
We investigate the complexity and performance of recurrent neural network (RNN) models as post-processing units for the compensation of fibre nonlinearities in digital coherent systems carrying polarization multiplexed 16-QAM and 32-QAM signals. We evaluate three bi-directional RNN models, namely the bi-LSTM, bi-GRU and bi-Vanilla-RNN and show that all of them are promising nonlinearity compensators especially in dispersion unmanaged systems. Our simulations show that during inference the three models provide similar compensation performance, therefore in real-life systems the simplest scheme based on Vanilla-RNN units should be preferred. We compare bi-Vanilla-RNN with Volterra nonlinear equalizers and exhibit its superiority both in terms of performance and complexity, thus highlighting that RNN processing is a very promising pathway for the upgrade of long-haul optical communication systems utilizing coherent detection. | electrical engineering and systems science |
The radar uncertainty principle indicates that there is an inherent invariance in the product of the time-delay and Doppler-shift measurement accuracy and resolution, which can be tuned by the waveform at the transmitter. In this paper, based on the radar uncertainty principle, a conceptual waveform design is proposed for a distributed multiple-input multiple-output (MIMO) radar system in order to improve the Cramer-Rao lower bound (CRLB) of the target position and velocity. To this end, a non-convex bound-constrained optimization problem is formulated, and a local and the global solution to the problem are obtained by sequential quadratic programming (SQP) and particle swarm algorithms, respectively. Numerical results are included to illustrate the effectiveness of the proposed mechanism on the CRLB of the target position and velocity. From the numerical results, it is also concluded that the global solution to the optimization problem is attained at a vertex of the bounding box. | electrical engineering and systems science |
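As a toy illustration of the two solver families named above: SciPy ships an SQP method (SLSQP) for the local solve, but no particle swarm optimizer, so differential evolution stands in for the global search in this sketch; the objective is an arbitrary multimodal stand-in, not the paper's CRLB-based cost.

```python
# Local SQP vs. global search on a box-constrained nonconvex toy problem.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def f(x):  # toy multimodal objective standing in for the CRLB-based cost
    return np.sin(3.0 * x[0]) * np.cos(2.0 * x[1]) + 0.1 * (x[0]**2 + x[1]**2)

bounds = [(-2.0, 2.0), (-2.0, 2.0)]          # the "bounding box"

local = minimize(f, x0=[0.5, 0.5], method="SLSQP", bounds=bounds)
globl = differential_evolution(f, bounds, seed=1)

print("SQP (local) :", local.x, local.fun)   # may stop at a nearby local minimum
print("DE  (global):", globl.x, globl.fun)   # searches the whole box
```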
By exploiting helical gratings (HGs), we propose and simulate flexible generation, conversion, and exchange of fiber-guided orbital angular momentum (OAM) modes. HGs can enable the generation of OAM modes and the OAM conversion between two arbitrary modes guided in fibers. A specific HG can exchange the OAM states of a pair of OAM modes, i.e., OAM exchange. In addition, a Fabry-Perot cavity cascaded with two identical reflective HGs can reflect converted OAM modes with a comb spectrum. The HG-based generation, conversion, and exchange of OAM modes depend on the helix period, the orientation, and the fold number of the helical fringes. The proposed method of generation, conversion, and exchange of fiber-guided OAM modes using HGs is flexible and well compatible with OAM fibers, featuring a high conversion efficiency close to 100% and a conversion bandwidth of about 10 nm in transmission spectra, but less than 1 nm in reflection spectra. | physics |
The last few decades have seen significant breakthroughs in the fields of deep learning and quantum computing. Research at the junction of the two fields has garnered an increasing amount of interest, which has led to the development of quantum deep learning and quantum-inspired deep learning techniques in recent times. In this work, we present an overview of advances in the intersection of quantum computing and deep learning by discussing the technical contributions, strengths and similarities of various research works in this domain. To this end, we review and summarise the different schemes proposed to model quantum neural networks (QNNs) and other variants like quantum convolutional networks (QCNNs). We also briefly describe the recent progress in quantum-inspired classical deep learning algorithms and their applications to natural language processing. | quantum physics |
One of the greatest challenges in solar physics is understanding the heating of the Sun's corona. Most theories for coronal heating postulate that free energy in the form of magnetic twist/stress is injected by the photosphere into the corona, where the free energy is converted into heat either through reconnection or wave dissipation. The magnetic helicity associated with the twist/stress, however, is expected to be conserved and appear in the corona. In previous work we showed that helicity associated with the small-scale twists undergoes an inverse cascade via stochastic reconnection in the corona, and ends up as the observed large-scale shear of filament channels. Our "helicity condensation" model accounts for both the formation of filament channels and the observed smooth, laminar structure of coronal loops. In this paper, we demonstrate, using helicity- and energy-conserving numerical simulations of a coronal system driven by photospheric motions, that the model also provides a natural mechanism for heating the corona. We show that the heat generated by the reconnection responsible for the helicity condensation process is sufficient to account for the observed coronal heating. We study the role that helicity injection plays in determining coronal heating and find that, crucially, the heating rate is only weakly dependent on the net helicity preference of the photospheric driving. Our calculations demonstrate that motions with 100% helicity preference are least efficient at heating the corona; those with 0% preference are most efficient. We discuss the physical origins of this result and its implications for the observed corona. | astrophysics |
In the era of digitization, different actors in agriculture produce numerous data. Such data already contains latent historical knowledge in the domain. This knowledge enables us to precisely study natural hazards within global or local aspects, and then improve risk prevention tasks and augment the yield, which helps to tackle the challenge of a growing population and changing alimentary habits. In particular, French Plant Health Bulletins (BSV, from the French name Bulletin de Santé du Végétal) give information about the development stages of phytosanitary risks in agricultural production. However, they are written in natural language, and thus machines and humans cannot exploit them as efficiently as they could. Natural language processing (NLP) technologies aim to automatically process and analyze large amounts of natural language data. Since the 2010s, with the increases in computational power and parallelization, representation learning and deep learning methods have become widespread in NLP. Recent advancements such as Bidirectional Encoder Representations from Transformers (BERT) inspire us to rethink knowledge representation and natural language understanding in the plant health management domain. The goal of this work is to propose a BERT-based approach to automatically classify the BSV to make their data easily indexable. We sampled 200 BSV to finetune the pretrained BERT language models and classify them as pest and/or disease, and we show preliminary results. | computer science |
Single molecules in solid-state matrices were proposed as sources of single-photon Fock states 20 years ago. Their success in quantum optics and in many other research fields stems from the simple recipes used in the preparation of samples, with hundreds of nominally identical and isolated molecules. The main challenges today for their application in photonic quantum technologies are the optimization of light extraction and the on-demand emission of indistinguishable photons. We here present Hong-Ou-Mandel experiments with photons emitted by a single molecule of dibenzoterrylene in an anthracene nanocrystal at 3 K, under continuous-wave and also pulsed excitation. A detailed theoretical model is applied, which relies on independent measurements for most experimental parameters, hence allowing for an analysis of the different contributions to the two-photon interference visibility, from residual dephasing to spectral filtering. | quantum physics |
In this paper, under the assumption that the diagonal coset vertex operator algebra $C(L_{\mathfrak g}(k+l,0),L_{\mathfrak g}(k,0)\otimes L_{\mathfrak g}(l,0))$ is rational and $C_2$-cofinite, we obtain the global dimension of $C(L_{\mathfrak g}(k+l,0),L_{\mathfrak g}(k,0)\otimes L_{\mathfrak g}(l,0))$ as well as the quantum dimensions of the multiplicity spaces viewed as $C(L_{\mathfrak g}(k+l,0),L_{\mathfrak g}(k,0)\otimes L_{\mathfrak g}(l,0))$-modules. As an application, a method to classify the irreducible modules of $C(L_{\mathfrak g}(k+l,0),L_{\mathfrak g}(k,0)\otimes L_{\mathfrak g}(l,0))$ is provided. As an example, we prove that the diagonal coset vertex operator algebra $C(L_{E_8}(k+2,0),L_{E_8}(k,0)\otimes L_{E_8}(2,0))$ is rational and $C_2$-cofinite, and we classify the irreducible modules of $C(L_{E_8}(k+2,0),L_{E_8}(k,0)\otimes L_{E_8}(2,0))$. | mathematics |
Networks are fundamental building blocks for representing data and computations. Remarkable progress in learning in structurally defined (shallow or deep) networks has recently been achieved. Here we introduce an evolutionary exploratory search and learning method for topologically flexible networks under the constraint of producing elementary computational steady-state input-output operations. Our results include: (1) the identification of networks, over four orders of magnitude, implementing computation of steady-state input-output functions, such as a band-pass filter, a threshold function, and an inverse band-pass function. Next, (2) the learned networks are technically controllable, as only a small number of driver nodes are required to move the system to a new state. Furthermore, we find that the fraction of required driver nodes is constant during evolutionary learning, suggesting a stable system design. (3) Our framework allows multiplexing of different computations using the same network. For example, using a binary representation of the inputs, the network can readily compute three different input-output functions. Finally, (4) the proposed evolutionary learning demonstrates transfer learning: if the system learns one function A, then learning B requires on average fewer steps than learning B from a tabula rasa. We conclude that the constrained evolutionary learning produces large, robust, controllable circuits capable of multiplexing and transfer learning. Our study suggests that network-based computation of steady-state functions, representing either cellular modules of cell-to-cell communication networks or internal molecular circuits communicating within a cell, could be a powerful model for biologically inspired computing. This complements conceptualizations such as attractor-based models or reservoir computing. | computer science |
We report on a new calculation of the next-to-next-to-leading order (NNLO) QCD radiative corrections to the inclusive production of top-quark pairs at hadron colliders. The calculation is performed by using the $q_T$ subtraction formalism to handle and cancel infrared singular contributions at intermediate stages of the computation. We present numerical results for the total cross section in $pp$ collisions at $\sqrt{s}=8$ TeV and $13$ TeV, and we compare them with those obtained by using the publicly available numerical program Top++. Our computation represents the first complete application of the $q_T$ subtraction formalism to the hadroproduction of a colourful high-mass system at NNLO. | high energy physics phenomenology |
Fast radio bursts (FRBs) are short (millisecond) radio pulses originating from enigmatic sources at extragalactic distances that so far lack a detection in other energy bands. Magnetized neutron stars (magnetars) have been considered as the sources powering the FRBs, but the connection is controversial because of differing energetics and the lack of radio and X-ray detections with similar characteristics in the two classes. We report here the detection by the AGILE satellite on April 28, 2020 of an X-ray burst in coincidence with the very bright radio burst from the Galactic magnetar SGR 1935+2154. The burst detected by AGILE in the hard X-ray band (18-60 keV) lasts about 0.5 seconds, is spectrally cut off above 80 keV, and implies an isotropically emitted energy of ~ $10^{40}$ erg. This event is remarkable in many ways: it shows for the first time that a magnetar can produce X-ray bursts in coincidence with FRB-like radio bursts; it also suggests that FRBs associated with magnetars may emit X-ray bursts of both magnetospheric and radio-pulse types that may be discovered in nearby sources. Guided by this detection, we discuss SGR 1935+2154 in the context of FRBs, and especially focus on the class of repeating FRBs. Based on energetics, magnetars with fields B ~ $10^{15}$ G may power the majority of repeating FRBs. Nearby repeating FRBs offer a unique occasion to consolidate the FRB-magnetar connection, and we present new data on the X-ray monitoring of nearby FRBs. Our detection illuminates and constrains the physical process leading to FRBs: contrary to previous expectations, high-brightness-temperature radio emission coexists with spectrally cut-off X-ray radiation. | astrophysics |
A common concern in observational studies is properly evaluating the causal effect, which usually refers to the average treatment effect or the average treatment effect on the treated. In this paper, we propose a data preprocessing method, kernel-distance-based covariate balancing, for observational studies with binary treatments. The proposed method yields a set of unit weights for the treatment and control groups, respectively, such that the reweighted covariate distributions satisfy a set of pre-specified balance conditions. This preprocessing can effectively reduce confounding bias in the subsequent estimation of causal effects. We demonstrate the implementation and performance of kernel-distance-based covariate balancing with Monte Carlo simulation experiments and a real data analysis. | statistics |
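One way to make the balancing idea above concrete is kernel mean matching: choose nonnegative control-group weights so that the weighted control covariates match the treated group in an RBF-kernel feature space. This is a sketch of an assumed formulation, not necessarily the authors' exact objective, with toy data.

```python
# Kernel mean matching as a sketch of kernel-distance covariate balancing.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X_t = rng.normal(1.0, 1.0, size=(30, 2))   # toy treated covariates
X_c = rng.normal(0.0, 1.0, size=(60, 2))   # toy control covariates

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K_cc, K_ct = rbf(X_c, X_c), rbf(X_c, X_t)
n_c = len(X_c)

def kernel_distance(w):  # squared RKHS distance between the weighted means,
    return w @ K_cc @ w - 2.0 * w @ K_ct.mean(axis=1)  # constant term dropped

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]    # weights sum to one
res = minimize(kernel_distance, np.full(n_c, 1.0 / n_c), method="SLSQP",
               bounds=[(0.0, None)] * n_c, constraints=cons)
weights = res.x                            # balancing weights for control units
print(weights.round(3))
```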
Remote state preparation (RSP) provides a useful way of transferring quantum information between two distant nodes based on the previously shared entanglement. In this paper, we study RSP of an arbitrary single-photon state in two degrees of freedom (DoFs). Using hyper-entanglement as a shared resource, our first goal is to remotely prepare the single-photon state in polarization and frequency DoFs and the second one is to reconstruct the single-photon state in polarization and time-bin DoFs. In the RSP process, the sender will rotate the quantum state in each DoF of the photon according to the knowledge of the state to be communicated. By performing a projective measurement on the polarization of the sender's photon, the original single-photon state in two DoFs can be remotely reconstructed at the receiver's quantum systems. This work demonstrates a novel capability for long-distance quantum communication. | quantum physics |
We present a family of novel methods for embedding knowledge graphs into real-valued tensors. These tensor-based embeddings capture the ordered relations that are typical in the knowledge graphs represented by semantic web languages like RDF. Unlike many previous models, our methods can easily use prior background knowledge provided by users or extracted automatically from existing knowledge graphs. In addition to providing more robust methods for knowledge graph embedding, we provide a provably-convergent, linear tensor factorization algorithm. We demonstrate the efficacy of our models for the task of predicting new facts across eight different knowledge graphs, achieving between 5% and 50% relative improvement over existing state-of-the-art knowledge graph embedding techniques. Our empirical evaluation shows that all of the tensor decomposition models perform well when the average degree of an entity in a graph is high, with constraint-based models doing better on graphs with a small number of highly similar relations and regularization-based models dominating for graphs with relations of varying degrees of similarity. | computer science |
A Halin graph is a graph obtained by embedding a tree having no nodes of degree two in the plane, and then adding a cycle to join the leaves of the tree in such a way that the resulting graph is planar. According to the four color theorem, Halin graphs are 4-vertex-colorable. On the other hand, they are not 2-vertex-colorable because they have triangles. We show that all Halin graphs are 3-vertex-colorable except even wheels. We also show how to find the perfect elimination ordering of a chordal completion for a given Halin graph. The algorithms are implemented in Python using the graphtheory package. Generators of random Halin graphs (general or cubic) are included. The source code is available from the public GitHub repository. | computer science |
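The claim about wheels is easy to check by brute force. The sketch below (plain Python, not the graphtheory package mentioned above) tests 3-colorability of small wheel graphs; wheels whose outer cycle is odd need a fourth color, matching the stated exception up to the authors' parity convention for "even wheels".

```python
# Brute-force 3-colorability check for wheel graphs (hub joined to a cycle).
from itertools import product

def wheel(n_cycle):
    hub = n_cycle                                    # hub is the last vertex
    rim = [(i, (i + 1) % n_cycle) for i in range(n_cycle)]
    spokes = [(i, hub) for i in range(n_cycle)]
    return rim + spokes, n_cycle + 1

def is_k_colorable(edges, n, k=3):
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

for n_cycle in (4, 5, 6, 7):
    edges, n = wheel(n_cycle)
    print(n_cycle, is_k_colorable(edges, n))         # odd rim -> False (needs 4)
```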
For $p\geq 2$, we prove local wellposedness for the nonlinear Schr\"odinger equation $(i\partial_t + \Delta)u = \pm|u|^pu$ on $\mathbb{T}^3$ with initial data in $H^{s_c}(\mathbb{T}^3)$, where $\mathbb{T}^3$ is a rectangular irrational $3$-torus and $s_c = \frac{3}{2} - \frac{2}{p}$ is the scaling-critical regularity. This extends work of earlier authors on the local Cauchy theory for NLS on $\mathbb{T}^3$ with power nonlinearities where $p$ is an even integer. | mathematics |
The recent discovery of TeV emission from gamma-ray bursts (GRBs) by the MAGIC and H.E.S.S. Cherenkov telescopes confirmed that emission from these transients can extend to very high energies. The TeV energy domain reaches the most sensitive band of the Cherenkov Telescope Array (CTA). This newly anticipated, improved sensitivity will enhance the prospects of gravitational-wave follow-up observations by CTA to probe particle acceleration and high-energy emission from binary black hole and neutron star mergers, and stellar core-collapse events. Here we discuss the implications of TeV emission for the most promising strategies of choice for the gravitational-wave follow-up effort for CTA and Cherenkov telescopes more broadly. We find that TeV emission (i) may allow more than an hour of delay between the gravitational-wave event and the start of CTA observations; (ii) enables the use of CTA's small-size telescopes, which have the largest fields of view. We characterize the number of pointings needed to find a counterpart. (iii) We compute the annual follow-up time requirements and find that prioritization will be needed. (iv) Even a few telescopes could detect sufficiently nearby counterparts, raising the possibility of adding a handful of small-size or medium-size telescopes to the network at diverse geographic locations, taking into account the positions of CTA and the LIGO-Virgo-KAGRA network. (v) The continued operation of VERITAS/H.E.S.S./MAGIC would be a useful complement to CTA's follow-up capabilities by increasing the sky area that can be rapidly covered, especially for directions above and 'below' the United States, in which the present network of gravitational-wave detectors is more sensitive. | astrophysics |
Mixture models provide a flexible representation of heterogeneity in a finite number of latent classes. From the Bayesian point of view, Markov Chain Monte Carlo methods provide a way to draw inferences from these models. In particular, when the number of subpopulations is considered unknown, more sophisticated methods are required to perform Bayesian analysis. The Reversible Jump Markov Chain Monte Carlo is an alternative method for computing the posterior distribution by simulation in this case. Some problems associated with the Bayesian analysis of this class of models are frequent, such as the so-called "label-switching" problem. However, as the level of heterogeneity in the population increases, these problems are expected to become less frequent and the model's performance to improve. Thus, the aim of this work is to evaluate the normal mixture model fit using simulated data under different settings of heterogeneity and prior information about the mixture proportions. A simulation study is also presented to evaluate the model's performance when the number of components is known and when it is estimated. Finally, the model is applied to a censored real dataset containing antibody levels of Cytomegalovirus in individuals. | statistics |
The on-top pair density [$\Pi(\mathrm{\mathbf{r}})$] is a local quantum-chemical property that reflects the probability of two electrons of any spin occupying the same position in space. Being the simplest quantity related to the two-particle density matrix, the on-top pair density is a powerful indicator of electron correlation effects, and as such it has been extensively used to combine density functional theory and multireference wavefunction theory. The widespread application of $\Pi(\mathrm{\mathbf{r}})$ is currently hindered by the need for post-Hartree-Fock or multireference computations for its accurate evaluation. In this work, we propose the construction of a machine learning model capable of predicting the CASSCF-quality on-top pair density of a molecule only from its structure and composition. Our model, trained on the GDB11-AD-3165 database, is able to predict the on-top pair density of organic molecules with minimal error, completely bypassing the need for ab initio computations. The accuracy of the regression is demonstrated using the on-top ratio as a visual metric of electron correlation effects and bond-breaking in real space. In addition, we report the construction of a specialized basis set, built to fit the on-top pair density in a single atom-centered expansion. This basis, the cornerstone of the regression, could potentially also be used in the same spirit as the resolution-of-the-identity approximation for the electron density. | physics |
Using porous electrodes containing nickel hexacyanoferrate (NiHCF) nanoparticles, we construct and test a two-flow-channel capacitive deionization device in which the intercalation electrodes are in direct contact with an anion-exchange membrane. Upon negatively charging the NiHCF, cations intercalate into it and the water in its vicinity is desalinated; at the same time, the water in the opposing electrode becomes more saline upon positively charging the NiHCF in that electrode. In a cyclic process of charge and discharge, fresh water is continuously produced, alternating between the two channels in sync with the direction of the applied current. We present proof-of-principle experiments of this technology for single salt solutions, in which we analyze various levels of current and cycle durations. We analyze salt removal rate and energy consumption. In desalination experiments with salt mixtures we find a threefold enhancement of K$^+$ over Na$^+$ adsorption, which shows the potential of NiHCF intercalation electrodes for selective ion separation from mixed ionic solutions. | physics |
In this paper, we develop two new algorithms, called, \textbf{FedDR} and \textbf{asyncFedDR}, for solving a fundamental nonconvex optimization problem in federated learning. Our algorithms rely on a novel combination between a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous mode, making them more practical. These new algorithms also achieve communication efficiency and more importantly can handle statistical and system heterogeneity, which are the two main challenges in federated learning. Our convergence analysis shows that the new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of the proposed methods compared to existing ones using both synthetic and real datasets. | statistics |
Joint time-vertex graph signals are pervasive in the real world. This paper focuses on the fundamental problem of sampling and reconstruction of joint time-vertex graph signals. We prove the existence of, and a necessary condition for, a critical sampling set that uses the minimum number of samples in the time and graph domains, respectively. The theory proposed in this paper suggests assigning a heterogeneous sampling pattern to each node in a network under the constraint of minimum resources. An efficient algorithm is also provided to construct a critical sampling set. | electrical engineering and systems science |
We present results of an investigation of the poorly studied X-ray pulsar Swift J1816.7-1613 during its transition from a type I outburst to the quiescent state. Our studies are based on data obtained from the X-ray observatories Swift, NuSTAR and Chandra, along with the latest IR data from the UKIDSS/GPS and Spitzer/GLIMPSE surveys. The aim of the work is to determine the parameters of the system: the strength of the neutron star magnetic field and the distance to the source, which are required to interpret the source's behaviour in the framework of physically motivated models. No cyclotron absorption line was detected in the broad-band energy spectrum. However, the timing analysis hints at a magnetic field typical for X-ray pulsars, from a few $\times 10^{11}$ to a few $\times 10^{12}$ G. We also estimate the IR companion to be a B0-2e star located at a distance of 7-13 kpc. | astrophysics |
Some of the most promising candidates for next-generation thermoelectrics are nanocomposites, due to their low thermal conductivities that result from phonon scattering on the boundaries of the various material phases. However, in order to maximize the figure of merit ZT, it is important to understand the impact of such features on the thermoelectric power factor. In this work we consider the effect that nanoinclusions and voids have on the electronic and thermoelectric coefficients of two-dimensional geometries using the fully quantum mechanical Non-Equilibrium Green's Function method. This method combines in a unified approach the details of geometry, electron-phonon interactions, quantisation, tunnelling, and the ballistic-to-diffusive nature of transport. We show that, as long as the barrier height is low, nanoinclusions can have a positive impact on the Seebeck coefficient, and the power factor is not severely impacted by a reduction in conductance. The power factor is also shown to be approximately independent of nanoinclusion and void density in the ballistic case. On the other hand, in the presence of phonon scattering, voids degrade the power factor and their influence increases with density. | condensed matter |
The outbreak of the coronavirus disease (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, its correct diagnosis, in order to quarantine patients, is of the utmost importance in fighting this pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented, which, by taking advantage of a cyclic generative adversarial net (CycleGAN) model for data augmentation, has reached state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. Also, in order to evaluate the method, a dataset containing 3163 images from 189 patients has been collected and labeled by physicians. Unlike prior datasets, the normal data have been collected from people suspected of having COVID-19 rather than from patients with other diseases, and this database is made publicly available. | electrical engineering and systems science |
Quantum matter, the research field studying phases of matter whose properties are intrinsically quantum mechanical, draws from areas as diverse as hard condensed matter physics, materials science, statistical mechanics, quantum information, quantum gravity, and large-scale numerical simulations. Recently, researchers interested in quantum matter and strongly correlated quantum systems have turned their attention to the algorithms underlying modern machine learning, with an eye on making progress in their fields. Here we provide a short review of the recent development and adaptation of machine learning ideas for the purpose of advancing research in quantum matter, including ideas ranging from algorithms that recognize conventional and topological states of matter in synthetic and experimental data, to representations of quantum states in terms of neural networks and their applications to the simulation and control of quantum systems. We discuss the outlook for future developments in areas at the intersection of machine learning and quantum many-body physics. | physics |
This paper describes a method for overlap-aware speaker diarization. Given an overlap detector and a speaker embedding extractor, our method performs spectral clustering of segments informed by the output of the overlap detector. This is achieved by transforming the discrete clustering problem into a convex optimization problem which is solved by eigen-decomposition. Thereafter, we discretize the solution by alternatively using singular value decomposition and a modified version of non-maximal suppression which is constrained by the output of the overlap detector. Furthermore, we detail an HMM-DNN based overlap detector which performs frame-level classification and enforces duration constraints through HMM state transitions. Our method achieves a test diarization error rate (DER) of 24.0% on the mixed-headset setting of the AMI meeting corpus, which is a relative improvement of 15.2% over a strong agglomerative hierarchical clustering baseline, and compares favorably with other overlap-aware diarization methods. Further analysis on the LibriCSS data demonstrates the effectiveness of the proposed method in high overlap conditions. | electrical engineering and systems science |
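For context, the relaxed clustering step the method builds on is ordinary spectral clustering of segment embeddings. A generic sketch (without the overlap-aware discretization described above) looks like this, with random toy embeddings standing in for real speaker embeddings.

```python
# Generic spectral clustering of speaker-segment embeddings via the normalized
# graph Laplacian; the paper's overlap-aware constraints are not included.
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_cluster(embeddings, n_speakers):
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    A = np.clip(X @ X.T, 0.0, None)          # cosine affinity, negatives zeroed
    d = np.sqrt(A.sum(axis=1))
    L = np.eye(len(A)) - A / d[:, None] / d[None, :]   # normalized Laplacian
    _, vecs = np.linalg.eigh(L)              # eigenvectors, ascending eigenvalues
    Y = vecs[:, :n_speakers]                 # spectral embedding of the segments
    _, labels = kmeans2(Y, n_speakers, minit="++", seed=0)
    return labels

toy = np.random.default_rng(0).normal(size=(20, 8))
print(spectral_cluster(toy, 2))
```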
There is currently a dearth of appropriate methods to estimate the causal effects of multiple treatments when the outcome is binary. For such settings, we propose the use of nonparametric Bayesian modeling, Bayesian Additive Regression Trees (BART). We conduct an extensive simulation study to compare BART to several existing, propensity score-based methods and to identify its operating characteristics when estimating average treatment effects on the treated. BART consistently demonstrates low bias and mean-squared errors. We illustrate the use of BART through a comparative effectiveness analysis of a large dataset, drawn from the latest SEER-Medicare linkage, on patients who were operated via robotic-assisted surgery, video-assisted thoracic surgery or open thoracotomy. | statistics
Distributed stochastic gradient descent (DSGD) has been widely used for optimizing large-scale machine learning models, including both convex and non-convex models. With the rapid growth of model size, the huge communication cost has become the bottleneck of traditional DSGD. Recently, many communication compression methods have been proposed. Memory-based distributed stochastic gradient descent (M-DSGD) is one of the efficient methods, since each worker communicates a sparse vector in each iteration so that the communication cost is small. Recent works establish the convergence rate of M-DSGD when it adopts vanilla SGD. However, there is still a lack of convergence theory for M-DSGD when it adopts momentum SGD. In this paper, we propose a universal convergence analysis for M-DSGD by introducing a transformation equation. The transformation equation describes the relation between traditional DSGD and M-DSGD so that we can transform M-DSGD to its corresponding DSGD. Hence we obtain the convergence rate of M-DSGD with momentum for both convex and non-convex problems. Furthermore, we combine M-DSGD with stagewise learning, in which the learning rate of M-DSGD in each stage is a constant and is decreased by stage, instead of by iteration. Using the transformation equation, we establish the convergence rate of stagewise M-DSGD, which bridges the gap between theory and practice. | statistics
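The memory mechanism referred to above can be sketched as error-feedback compression; the following is a minimal sketch under the assumption of top-k sparsification with a local momentum buffer (variable names and the exact placement of the learning rate are illustrative, not the paper's notation):

```python
# Minimal sketch of a memory-based (error-feedback) sparsified SGD step with
# momentum; an illustration of the idea, not the paper's exact algorithm.
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def mdsgd_step(w, grads, memories, momenta, lr, beta, k):
    """One M-DSGD step: each worker sends a sparse vector; the residual
    (memory) that was dropped is added back at the next iteration."""
    agg = np.zeros_like(w)
    for i, g in enumerate(grads):                  # one entry per worker
        momenta[i] = beta * momenta[i] + g         # local momentum buffer
        corrected = momenta[i] + memories[i]       # add stored residual
        sparse = topk_compress(corrected, k)       # communicated message
        memories[i] = corrected - sparse           # remember what was dropped
        agg += sparse
    return w - lr * agg / len(grads)
```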
We construct two rational approximate solutions to the Thomas-Fermi (TF) nonlinear differential equation. These expressions follow from an application of the principle of dynamic consistency. In addition to examining differences in the predicted numerical values of the two approximate solutions, we compare these values with an accurate numerical solution obtained using a fourth-order Runge-Kutta method. We also present several new integral relations satisfied by the bounded solutions of the TF equation. | mathematics
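For readers who want to reproduce such reference data, a minimal sketch of a classical RK4 integration of the Thomas-Fermi equation $y'' = y^{3/2}/\sqrt{x}$ follows; the initial slope $\approx -1.588071$ is the commonly quoted shooting value for the bounded solution, an assumption not taken from this paper:

```python
# Minimal sketch: integrate y'' = y^{3/2}/sqrt(x) with classical RK4,
# with y(0) = 1 and the standard shooting slope for the bounded solution.
import numpy as np

def rhs(x, state):
    y, yp = state
    y = max(y, 0.0)                      # guard the fractional power
    return np.array([yp, y**1.5 / np.sqrt(x)])

def rk4_solve(x0=1e-4, x1=30.0, h=1e-4, slope=-1.588071):
    # Start slightly off the origin since the RHS ~ 1/sqrt(x) there.
    x, state = x0, np.array([1.0 + slope * x0, slope])
    while x < x1:
        k1 = rhs(x, state)
        k2 = rhs(x + h / 2, state + h / 2 * k1)
        k3 = rhs(x + h / 2, state + h / 2 * k2)
        k4 = rhs(x + h, state + h * k3)
        state = state + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return state                         # [y(x1), y'(x1)]
```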
Motivated by renewed evidence for New Physics in $b \to s\ell\ell$ transitions in the form of LHCb's new measurements of theoretically clean lepton-universality ratios and the purely leptonic $B_s\to\mu^+\mu^-$ decay, we quantify the combined level of discrepancy with the Standard Model and fit values of short-distance Wilson coefficients. A combination of the clean observables $R_K$, $R_{K^*}$, and $B_s\to \mu\mu$ alone results in a discrepancy with the Standard Model at $4.0\sigma$, up from $3.5\sigma$ in 2017. One-parameter scenarios with purely left-handed or with purely axial coupling to muons fit the data well and exclude the Standard Model at the $\sim 5 \sigma$ level. In a two-parameter fit to new-physics contributions with both vector and axial-vector couplings to muons, the allowed region is much better defined than in 2017, principally due to the much more precise result on $B_s \to \mu^+ \mu^-$, which probes the axial coupling to muons. Including angular observables data narrows the allowed region further. A by-product of our analysis is an updated average of $\text{BR}(B_s \to \mu^+ \mu^-) = (2.8\pm 0.3) \times 10^{-9}$. | high energy physics phenomenology
The coronavirus (COVID-19) has emerged as the greatest challenge due to its continuous structural evolution as well as the absence of proper antidotes for this particular virus. The virus mainly spreads and replicates itself among large numbers of people through close contact, which unfortunately can happen in many unpredictable ways. Therefore, to slow down the spread of this novel virus, the only relevant initiatives are to maintain social distance, perform contact tracing, use proper safety gear, and impose quarantine measures. But despite being theoretically possible, these approaches are very difficult to uphold in densely populated countries and areas. Therefore, to control the virus spread, researchers and authorities are considering the use of smartphone-based mobile applications (apps) to identify the likely infected persons as well as the highly risky zones to maintain isolation and lockdown measures. However, these methods heavily depend on advanced technological features and expose significant privacy loopholes. In this paper, we propose a new method for COVID-19 contact tracing based on mobile phone users' geolocation data. The proposed method will help the authorities to identify the number of probable infected persons without using smartphone-based mobile applications. In addition, the proposed method can help people take the vital decision of when to seek medical assistance by letting them know whether they are already on the list of exposed persons. Numerical examples demonstrate that the proposed method can significantly outperform the smartphone app-based solutions. | computer science
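A minimal sketch of the kind of geolocation-based exposure check described above; the 2 m / 15 min thresholds and the record layout are illustrative assumptions, not the paper's parameters:

```python
# Minimal sketch of exposure detection from geolocation fixes
# (timestamp, latitude, longitude); thresholds are placeholders.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def exposed(user_track, case_track, dist_m=2.0, dt_s=900):
    """True if any pair of fixes is within dist_m and dt_s of each other."""
    return any(
        abs(t1 - t2) <= dt_s and haversine_m(la1, lo1, la2, lo2) <= dist_m
        for (t1, la1, lo1) in user_track
        for (t2, la2, lo2) in case_track
    )
```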
The increased penetration of Electric Vehicles (EVs) in the transportation sector has increased the need for Fast Charging Direct Current (FCDC) stations to meet customers' rapid charging requirements. However, the connection of both charging stations and EVs to the communication infrastructure as well as the power grid makes them vulnerable to cyber attacks. In this paper, the vulnerability of the EV charging process is initially studied. We then show how a botnet of compromised EVs and FCDC stations can be utilized to launch cyber attacks on the power grid, resulting in an increase in the load at a specific time. The effect of such attacks on the distribution network in terms of line congestion and voltage limit violations is investigated. Moreover, the effect of the botnet on the transmission network is also studied. | electrical engineering and systems science
Real-time cine magnetic resonance imaging (MRI) plays an increasingly important role in various cardiac interventions. In order to enable fast and accurate visual assistance, the temporal frames need to be segmented on-the-fly. However, state-of-the-art MRI segmentation methods are used either offline because of their high computation complexity, or in real-time but with significant accuracy loss and latency increase (causing visually noticeable lag). As such, they can hardly be adopted to assist visual guidance. In this work, inspired by a new interpretation of Independent Component Analysis (ICA) for learning, we propose a novel ICA-UNet for real-time 3D cardiac cine MRI segmentation. Experiments using the MICCAI ACDC 2017 dataset show that, compared with the state-of-the-arts, ICA-UNet not only achieves higher Dice scores, but also meets the real-time requirements for both throughput and latency (up to 12.6X reduction), enabling real-time guidance for cardiac interventions without visual lag. | electrical engineering and systems science |
We present Atacama Large Millimeter/submillimeter Array (ALMA) Band 6 observations of dust continuum emission of the disk around WW Cha. The dust continuum image shows a smooth disk structure with a faint (low-contrast) dust ring, extending from $\sim 40$ au to $\sim 70$ au, not accompanied by any gap. We constructed a simple model to fit the visibility of the observed data using an MCMC method and found that the bump (we call the ring without the gap the bump) has two peaks at $40$ au and $70$ au. The residual map between the model and observation indicates asymmetric structures at the center and the outer region of the disk. These asymmetric structures are also confirmed by model-independent analysis of the imaginary part of the visibility. The asymmetric structure at the outer region is consistent with a spiral observed by SPHERE. To constrain physical quantities of the disk (dust density and temperature), we carried out radiative transfer simulations. We found that the midplane temperature around the outer peak is close to the freezeout temperature of CO on water ice ($\sim 30$ K). The temperature around the inner peak is about $50$ K, which is close to the freezeout temperature of H$_2$S and also close to the sintering temperature of several species. We also discuss the size distribution of the dust grains using the spectral index map obtained within the Band 6 data. | astrophysics
The results of an investigation of the penetration of an extremely compressed wave packet, a light bullet (LB), through an air gap under single-pulse femtosecond mid-IR filamentation in LiF are presented. It is revealed by the laser coloration method and from numerical simulations that a single-cycle LB, formed before an air gap of up to 0.5 mm width, completely recovered after passing some distance in LiF beyond the gap. This distance increases nonlinearly with the gap width and the LB pathway before the gap. The LB in the air gap has a strongly convergent wave front with a focusing radius of 20-100 $\mu$m, and its divergence after the waist is considerably less than that of a Gaussian beam. | physics
We examine, in correlated mixed states of qudit-qubit systems, the set of all conditional qubit states that can be reached after local measurements at the qudit based on rank-1 projectors. While for a similar measurement at the qubit, the conditional post-measurement qudit states lie on the surface of an ellipsoid, for a measurement at the qudit we show that the set of post-measurement qubit states can form more complex solid regions. In particular, we show the emergence, for some classes of mixed states, of sets which are the convex hull of solid ellipsoids and which may lead to cone-like and triangle-like shapes in limit cases. We also analyze the associated measurement dependent conditional entropy, providing a full analytic determination of its minimum and of the minimizing local measurement at the qudit for the previous states. Separable rank-2 mixtures are also discussed. | quantum physics |
In typical embedded applications, the precise execution time of the program does not matter, and it is sufficient to meet a real-time deadline. However, modern applications in information security have become much more time-sensitive, due to the risk of timing side-channel leakage. The timing of such programs needs to be data-independent and precise. We describe a parallel synchronous software model, which executes as N parallel threads on a processor with word-length N. Each thread is a single-bit synchronous machine with precise, contention-free timing, while each of the N threads still executes as an independent machine. The resulting software supports fine-grained parallel execution. In contrast to earlier work to obtain precise and repeatable timing in software, our solution does not require modifications to the processor architecture nor specialized instruction scheduling techniques. In addition, all threads run in parallel and without contention, which eliminates the problem of thread scheduling. We use hardware (HDL) semantics to describe a thread as a single-bit synchronous machine. Using logic synthesis and code generation, we derive a parallel synchronous implementation of this design. We illustrate the synchronous parallel programming model with practical examples from cryptography and other applications with precise timing requirements. | computer science |
For an exponentially decaying potential, the analytic structure of the $s$-wave S-matrix can be determined down to the slightest detail, including the positions of all its poles and their residues. Beautiful hidden structures can be revealed by its domain coloring. A fundamental property of the S-matrix is that any bound state corresponds to a pole of the S-matrix on the physical sheet of the complex energy plane. For a repulsive exponentially decaying potential, none of the infinite number of poles of the $s$-wave S-matrix on the physical sheet corresponds to any physical state. On the second sheet of the complex energy plane, the S-matrix has an infinite number of poles corresponding to virtual states and a finite number of poles corresponding to complementary pairs of resonances and anti-resonances. The origin of redundant poles and zeros is confirmed to be related to peculiarities of the analytic continuation of a parameter of two linearly independent analytic functions. The overall contribution of redundant poles to the asymptotic completeness relation, provided that the residue theorem can be applied, is determined to be an oscillating function. | quantum physics
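Domain coloring of the kind mentioned above is straightforward to reproduce; here is a minimal sketch using a toy meromorphic function in place of the actual S-matrix (the phase-to-hue and modulus-to-brightness mapping is one common convention, assumed rather than taken from the paper):

```python
# Minimal sketch of domain coloring: hue encodes arg f(z), brightness |f(z)|.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 800), np.linspace(-3, 3, 800))
z = x + 1j * y
f = (z - 0.5) / (z**2 + 1)                 # toy function: one zero, two poles
hue = (np.angle(f) + np.pi) / (2 * np.pi)  # phase -> color wheel
val = 1 - 1 / (1 + np.abs(f) ** 0.3)       # brightness grows with modulus
img = plt.cm.hsv(hue)[..., :3] * val[..., None]
plt.imshow(img, extent=(-3, 3, -3, 3), origin="lower")
plt.xlabel("Re z"); plt.ylabel("Im z")
plt.show()
```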
Necessary and sufficient conditions for the exponentiation of finite-dimensional real Lie algebras of linear operators on complete Hausdorff locally convex spaces are obtained, focused on the equicontinuous case - in particular, necessary conditions for exponentiation to compact Lie groups are established. Applications to complete locally convex algebras, with special attention to locally C$^*$-algebras, are given. The definition of a projective analytic vector is introduced, playing an important role in some of the exponentiation theorems. | mathematics |
Close to half of the world's population has smartphones, while a typical flagship smartphone today has been integrated with more than 20 smart components and sensors, making a smartphone a highly integrated platform that can potentially mimic the five senses of humans. Recent advancements in achieving high compactness, high-performance computing, high flexibility, and multiplexed functionality in smartphones have enabled them for many cutting-edge healthcare applications, such as single-molecule imaging, medical diagnosis, and biosensing, which were conventionally done with bulky and sophisticated devices. Most of the current healthcare applications are developed based on using the photon-sensitive components, such as CMOS sensors, flash & fill lights, lens modules, and LED lights in the screen, leaving the rest of the smart and high-performance sensors rarely explored. In this Perspective, we review recent progress in advanced sensors in modern smartphones and discuss how those sensors have great, as yet unmet, promise to offer widespread and easy-to-implement solutions to many emerging healthcare applications, including nanoscale sensing, point-of-care testing, pollution monitoring, etc. | electrical engineering and systems science
We study the gravitational wave phenomenology in models of solitosynthesis. In such models, a first order phase transition is precipitated by a period in which non-topological solitons with a conserved global charge (Q-balls) accumulate charge. As such, the nucleation rate of critical bubbles differs significantly from thermal phase transitions. In general we find that the peak amplitude of the gravitational wave spectrum resulting from solitosynthesis is stronger than that of a thermal phase transition and the timescale of the onset of nonlinear plasma dynamics is comparable to Hubble. We demonstrate this explicitly in an asymmetric dark matter model, and discuss current and future constraints in this scenario. | high energy physics phenomenology |
Fisher [Fis75] and Baur [Bau75] showed independently in the seventies that if $T$ is a complete first-order theory extending the theory of modules, then the class of models of $T$ with pure embeddings is stable. In [Maz4, 2.12], it is asked if the same is true for any abstract elementary class $(K, \leq_p)$ such that $K$ is a class of modules and $\leq_p$ is the pure submodule relation. In this paper we give some instances where this is true: $\textbf{Theorem.}$ Assume $R$ is an associative ring with unity. Let $(K, \leq_p)$ be an AEC such that $K \subseteq R\text{-Mod}$ and $K$ is closed under finite direct sums, then: - If $K$ is closed under direct summands and pure-injective envelopes, then $(K, \leq_p)$ is $\lambda$-stable for every $\lambda \geq LS(K)$ such that $\lambda^{|R| + \aleph_0}= \lambda$. - If $K$ is closed under pure submodules and pure epimorphic images, then $(K, \leq_p)$ is $\lambda$-stable for every $\lambda$ such that $\lambda^{|R| + \aleph_0}= \lambda$. - Assume $R$ is von Neumann regular. If $K$ is closed under submodules and has arbitrarily large models, then $(K, \leq_p)$ is $\lambda$-stable for every $\lambda$ such that $\lambda^{|R| + \aleph_0}= \lambda$. As an application of these results we give new characterizations of Noetherian rings, pure-semisimple rings, Dedekind domains and fields via superstability. Moreover, we show how these results can be used to show a link between being good in the stability hierarchy and being good in the axiomatizability hierarchy. Another application is the existence of universal models with respect to pure embeddings in several classes of modules. Among them, the class of flat modules and the class of injective torsion modules. | mathematics
In this paper we study ways to establish when a Banach space can be identified as the dual or the double dual of another Banach space. To obtain these results, we relate these spaces with other, concrete Banach spaces - typically $\ell^1$ and $\ell^\infty$ - and show that under suitable assumptions we can transfer properties of these spaces to the space we consider. In particular, we show how these results can be used to obtain in a simple way interesting results about spaces such as $BMO$ and $BV$. | mathematics
We offer a geometric interpretation of attractor theories with singular kinetic terms as a union of multiple canonical models. We demonstrate that different domains (separated by poles) can drastically differ in their phenomenology. We illustrate this with the help of a "master model" that leads to distinct predictions depending on which side of the pole the field evolves before examining the more realistic example of $\alpha$-attractor models. Such models lead to quintessential inflation within the poles when featuring an exponential potential. However, beyond the poles, we discover a novel behaviour: the scalar field responsible for the early-time acceleration of the Universe may reach the boundary of the field-space manifold, indicating that the theory is incomplete and that a boundary condition must be imposed in order to determine its late-time behaviour. If the evolution of the field is arrested before this happens, however, we discover that quintessence can be achieved without a potential offset. Turning to multifield models with singular kinetic terms, we see that poles generalise straightforwardly to singular curves, which act as "model walls" between distinct pole-free inflationary models. As an example, we study a simple two-field $\alpha$-attractor-inspired model, whose evolution of isocurvature perturbations is sensitive to where the non-canonical field begins its trajectory. We finally discuss initial conditions in attractor theories, where the existence of multiple disconnected canonical models implies that we must make a fundamental choice: in which domain we impose a distribution for the inflaton in order to then determine the likelihood of inflation. | high energy physics theory |
This paper proposes an accurate fault location algorithm based on hybrid synchronized sparse voltage and sparse current phasor measurements. The proposed algorithm addresses the performance limitations of fault location algorithms based only on synchronized sparse voltage measurements (SSVM) or only on synchronized sparse current measurements (SSCM). In the proposed method, the bus voltage phasor of the faulty line, or of a bus close to the faulty line, and the branch current phasor of the adjacent line are utilized. The paper contributes to improving the accuracy of fault location and mitigating the effect of CT saturation by using hybrid voltage and current measurements. The proposed algorithm has been tested on a four-bus, two-area power system and the IEEE 14-bus system with the typical features of an actual distribution system. The robustness of the algorithm has been tested by varying the fault location, fault resistance, and load switching. The simulation results demonstrate the accuracy of the proposed algorithm and ensure a reliable fault detection and location method. | electrical engineering and systems science
Clustering of inertial particles is important for many types of astrophysical and geophysical turbulence, but it has been studied predominately for incompressible flows. Here we study compressible flows and compare clustering in both compressively (irrotationally) and vortically (solenoidally) forced turbulence. Vortically and compressively forced flows are driven stochastically either by solenoidal waves or by circular expansion waves, respectively. For compressively forced flows, the power spectra of the density of inertial particles are a particularly sensitive tool for displaying particle clustering relative to the density enhancement. We use both Lagrangian and Eulerian descriptions for the particles. Particle clustering through shock interaction is found to be particularly prominent in turbulence driven by spherical expansion waves. It manifests itself through a double-peaked distribution of spectral power as a function of Stokes number. The two peaks are associated with two distinct clustering mechanisms; shock interaction for smaller Stokes numbers and the centrifugal sling effect for larger values. The clustering of inertial particles is associated with the formation of caustics. Such caustics can only be captured in the Lagrangian description, which allows us to assess the relative importance of caustics in vortically and irrotationally forced turbulence. We show that the statistical noise resulting from the limited number of particles in the Lagrangian description can be removed from the particle power spectra, allowing us a more detailed comparison of the residual spectra. We focus on the Epstein drag law relevant for rarefied gases, but show that our findings apply also to the usual Stokes drag. | physics |
We present the calculation of next-to-next-to-leading order (NNLO) corrections in perturbative QCD for the production of a Higgs boson decaying into a pair of bottom quarks in association with a leptonically decaying weak vector boson: $\mathrm{pp} \to V \mathrm{H} + X \to \ell\bar{\ell}\;\mathrm{b\bar{b}} + X$. We consider the corrections to both the production and decay sub-processes, retaining a fully differential description of the final state including off-shell propagators of the Higgs and vector boson. The calculation is carried out using the antenna subtraction formalism and is implemented in the NNLOJET framework. Clustering and identification of $\mathrm{b}$-jets is performed with the flavour-$k_t$ algorithm and results for fiducial cross sections and distributions are presented for the LHC at $\sqrt{s}=13\;\text{TeV}$. We assess the residual theory uncertainty by varying the production and decay scales independently and provide scale uncertainty bands in our results, yielding percent-level accurate predictions for observables in this Higgs production mode computed at NNLO. Confronting a na\"ive perturbative expansion of the cross section against the customary re-scaling procedure to a fixed branching ratio reveals that starting from NNLO, the latter could be inadequate in estimating missing higher-order effects through scale variations. | high energy physics phenomenology |
We reemphasize that the ratio $R_{s\mu} \equiv \overline{\mathcal{B}}(B_s\to\mu\bar\mu)/\Delta M_s$ is a measure of the tension of the Standard Model (SM) with the latest measurements of $\overline{\mathcal{B}}(B_s\to\mu\bar\mu)$ that does not suffer from the persistent puzzle of the $|V_{cb}|$ determinations from inclusive versus exclusive $b\to c\ell\bar\nu$ decays, which affects the value of the CKM element $|V_{ts}|$ that is crucial for the SM predictions of both $\overline{\mathcal{B}}(B_s\to\mu\bar\mu)$ and $\Delta M_s$, but cancels out in the ratio $R_{s\mu}$. In our analysis we include higher-order electroweak and QED corrections and adapt the latest hadronic input to find a tension of about $2\sigma$ between the $R_{s\mu}$ measurements and the SM, independently of $|V_{ts}|$. We also discuss the ratio $R_{d\mu}$, which could turn out, in particular in correlation with $R_{s\mu}$, to be useful for the search for New Physics, when the data on both ratios improve. Also $R_{d\mu}$ is independent of $|V_{cb}|$, or more precisely $|V_{td}|$. | high energy physics phenomenology
I provide an expository account of the interplay between Minkowski and Euclidean signature gamma matrices, Majorana fermions, and discrete symmetries. | high energy physics theory |
We studied Dark Matter (DM) phenomenology with multiple DM species consisting of both scalar and vector DM particles in the Hidden Gauged SU(3) model of Arcadi et al. Because of the large parameter space in the Hidden Gauged SU(3) model we restrict ourselves to three representative benchmark points, each with multiple DM species. The relic densities for the benchmark points were found using a program developed to solve the coupled Boltzmann equations for an arbitrary number of interacting DM species with two particles in the final state. For each case, we varied the mass of the DM particles and then found the value of the dark SU(3) gauge coupling that gave the correct relic density. We found that in some regions of parameter space, DM would be difficult to observe in direct detection experiments while it would be easier to observe in indirect detection experiments while for other regions of parameter space the situation was reversed. Thus, measurements from both types of experiments complement each other and could help pinpoint the details of the hidden SU(3) model. | high energy physics phenomenology |
In classical supergiant X-ray binaries (SgXBs), the Bondi-Hoyle-Lyttleton wind accretion was usually assumed, and the angular momentum transport to the accretors is inefficient. The observed spin-up/spin-down behavior of the neutron star in SgXBs is not well understood. In this paper, we report an extended low state of Vela X-1 (at orbital phases 0.16-0.2), lasting for at least 30 ks, observed with Chandra during the onset of an unusual spin-up period. During this low state, the continuum fluxes dropped by a factor of 10 compared to the preceding flare period, and the continuum pulsation almost disappeared. Meanwhile, the Fe K$\alpha$ fluxes of the low state were similar to the preceding flare period, leading to an Fe K$\alpha$ equivalent width (EW) of 0.6 keV, as high as the Fe K$\alpha$ EW during the eclipse phase of Vela X-1. Both the pulsation cessation and the high Fe K$\alpha$ EW indicate an axisymmetric structure with a column density larger than $10^{24}\rm cm^{-2}$ on a spatial scale of the accretion radius of Vela X-1. These phenomena are consistent with the existence of an accretion disk that leads to the following spin-up of Vela X-1. It indicates that disk accretion, although not always, does occur in classical wind-fed SgXBs. | astrophysics |
Cultures across the world are distinguished by the idiosyncratic patterns in their cuisines. These cuisines are characterized in terms of their substructures such as ingredients, cooking processes and utensils. A complex fusion of these substructures intrinsic to a region defines the identity of a cuisine. Accurate classification of cuisines based on their culinary features is an outstanding problem, and has hitherto been approached by treating the ingredients of a recipe as features. Previous studies have attempted cuisine classification using unstructured recipes, without accounting for details of cooking techniques. In reality, the cooking processes/techniques and their order are highly significant for the recipe's structure and hence for its classification. In this article, we have implemented a range of classification techniques accounting for this information on the RecipeDB dataset containing sequential data on recipes. The state-of-the-art RoBERTa model achieved the highest accuracy of 73.30% among a range of classification models, from Logistic Regression and Naive Bayes to LSTMs and Transformers. | computer science
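A minimal sketch of fine-tuning a RoBERTa classifier on serialized recipes; the "roberta-base" checkpoint, the recipe-to-string serialization, and the class count are illustrative assumptions rather than the paper's exact setup:

```python
# Minimal sketch of a RoBERTa sequence classifier over recipe text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

n_cuisines = 8   # placeholder: set to the number of cuisine classes
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=n_cuisines)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(recipes, labels):
    """recipes: list of strings serializing ingredients + ordered steps."""
    batch = tok(recipes, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(labels))
    out.loss.backward()
    opt.step(); opt.zero_grad()
    return out.loss.item()
```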
Let $X$ be a smooth projective variety with a simple normal crossing divisor $D:=D_1+D_2+...+D_n$, where the $D_i\subset X$ are smooth, irreducible and nef. We prove a mirror theorem for multi-root stacks $X_{D,\vec r}$ by constructing an $I$-function, a slice of Givental's Lagrangian cone for the Gromov--Witten theory of multi-root stacks. We provide three applications: (1) We show that some genus zero invariants of $X_{D,\vec r}$ stabilize for sufficiently large $\vec r$. (2) We state a generalized local-log-orbifold principle conjecture and prove a version of it. (3) We show that regularized quantum periods of Fano varieties coincide with classical periods of the mirror Landau--Ginzburg potentials using orbifold invariants of $X_{D,\vec r}$. | mathematics
Molecular lines observed towards protoplanetary disks carry information about physical and chemical processes associated with planet formation. We present ALMA Band 6 observations of C2H, HCN, and C18O in a sample of 14 disks spanning a range of ages, stellar luminosities, and stellar masses. Using C2H and HCN hyperfine structure fitting and HCN/H13CN isotopologue analysis, we extract optical depth, excitation temperature, and column density radial profiles for a subset of disks. C2H is marginally optically thick ($\tau \sim 1$-$5$) and HCN is quite optically thick ($\tau \sim 5$-$10$) in the inner 200 AU. The extracted temperatures of both molecules are low (10-30 K), indicative of either sub-thermal emission from the warm disk atmosphere or substantial beam dilution due to chemical substructure. We explore the origins of C2H morphological diversity in our sample using a series of toy disk models, and find that disk-dependent overlap between regions with high UV fluxes and high atomic carbon abundances can explain a wide range of C2H emission features (e.g. compact vs. extended and ringed vs. ringless emission). We explore the chemical relationship between C2H, HCN, and C18O and find a positive correlation between C2H and HCN fluxes, but no relationship between C2H or HCN with C18O fluxes. We also see no evidence that C2H and HCN are enhanced with disk age. C2H and HCN seem to share a common driver, however more work remains to elucidate the chemical relationship between these molecules and the underlying evolution of C, N, and O chemistries in disks. | astrophysics
We present a new, fully generative model for constructing astronomical catalogs from optical telescope image sets. Each pixel intensity is treated as a random variable with parameters that depend on the latent properties of stars and galaxies. These latent properties are themselves modeled as random. We compare two procedures for posterior inference. One procedure is based on Markov chain Monte Carlo (MCMC) while the other is based on variational inference (VI). The MCMC procedure excels at quantifying uncertainty, while the VI procedure is 1000 times faster. On a supercomputer, the VI procedure efficiently uses 665,000 CPU cores to construct an astronomical catalog from 50 terabytes of images in 14.6 minutes, demonstrating the scaling characteristics necessary to construct catalogs for upcoming astronomical surveys. | statistics |
Analytical phase demodulation algorithms in optical interferometry typically fail to reach the theoretical sensitivity limit set by the Cram\'er-Rao bound (CRB). We show that deep neural networks (DNNs) can perform efficient phase demodulation by achieving or exceeding the CRB by significant margins when trained with new information that is not utilized by conventional algorithms, such as noise statistics and parameter constraints. As an example, we developed and applied DNNs to wavelength shifting interferometry. When trained with noise statistics, the DNNs outperform the conventional algorithm in terms of phase sensitivity and achieve the traditional three parameter CRB. Further, by incorporating parameter constraints into the training sets, they can exceed the traditional CRB. For well confined parameters, the phase sensitivity of the DNNs can even approach a fundamental limit we refer to as the single parameter CRB. Such sensitivity improvement can translate into significant increase in signal-to-noise ratio without hardware modification, or be used to relax hardware requirements. | electrical engineering and systems science
The control of environmental conditions is crucial in much experimental work across scientific domains. In this technical note, we present how to realize a cheap humidity regulator based on a PID controller driven by an Arduino microcontroller. We justify our choices of components, and we show that the presented designs can serve as a basis for the reader for the realization of humidity regulators with specific requirements and experimental constraints. | physics
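A minimal sketch of the discrete PID update law such a regulator would run (shown in Python for clarity; the gains, sample time, and anti-windup rule are placeholder assumptions to be tuned, and the same logic would be ported to Arduino C++):

```python
# Minimal sketch of a discrete PID loop driving a humidifier duty cycle.
class PID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Clamp, with simple anti-windup: roll back the integral if saturated.
        if u > self.out_max or u < self.out_min:
            self.integral -= err * self.dt
            u = min(max(u, self.out_min), self.out_max)
        return u   # duty cycle for the humidifier actuator

# pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
# duty = pid.step(setpoint=45.0, measured=sensor_rh)  # relative humidity in %
```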
Let $f:\, X\to Y$ be a semistable non-isotrivial family of $n$-folds over a smooth projective curve with discriminant locus $S \subseteq Y$ and with general fibre $F$ of general type. We show the strict Arakelov inequality \[\frac{\mathrm{deg}\, f_*\omega_{X/Y}^\nu}{\mathrm{rank}\, f_*\omega_{X/Y}^\nu} < {n\nu\over 2}\cdot\mathrm{deg}\,\Omega^1_Y(\log S),\] for all $\nu\in \mathbb N$ such that the $\nu$-th pluricanonical linear system $|\omega^\nu_F|$ is birational. This answers a question asked by M\"oller, Viehweg and the third named author. | mathematics |
The aim of this study is to investigate the observability of the pseudoscalar Higgs boson $A$ and the neutral heavy CP-even Higgs boson $H$ in the framework of the 2HDM type-I at a lepton collider operating at a center-of-mass energy $\sqrt{s} = 1000$ GeV. The processes under investigation are $e^{-}e^{+} \rightarrow AH \rightarrow ZHH \rightarrow jjb\bar{b}b\bar{b}$ and $e^{-}e^{+} \rightarrow AH \rightarrow b\bar{b}b\bar{b}$. The neutral Higgs boson pair is produced at the electron-positron collider and its decay is fully hadronic. The pseudoscalar Higgs $A$ decays to a $Z$ boson along with the neutral heavy CP-even Higgs boson $H$, and the CP-even Higgs boson $H$ decays to a pair of bottom quarks; the heavy Higgs is a very unstable particle which decays promptly to a bottom-quark pair. Different benchmark points are assumed, in which various Higgs mass hypotheses are considered. It is demonstrated that the heavy neutral Higgs and pseudoscalar Higgs signals are observable within the parameter space $(\tan \beta, m_{A})$ with respect to all experimental and theoretical constraints. The CP-odd and CP-even Higgs bosons in all scenarios are observable when the signal significance, the final value extracted over the whole mass range, exceeds $5\sigma$. To be more specific, the region of parameter space with $200\leq m_{A}\leq 250$ GeV and $m_{H}=150$ GeV at a center-of-mass energy $\sqrt{s} = 500$ GeV is accessible at an integrated luminosity of about $100\ fb^{-1}$. Furthermore, the region with $200\leq m_{A}\leq 330$ GeV and $150\leq m_{H}\leq 250$ GeV, with a mass splitting of about 50-100 GeV between the $A$ and $H$ Higgs bosons, is accessible at $\sqrt{s} = 1000$ GeV at the same integrated luminosity. | high energy physics phenomenology
The upcoming $10-100$ petawatt laser facilities may deliver laser pulses with unprecedented intensity of $10^{22}-10^{25}\rm~W cm^{-2}$, which can trigger various nonlinear quantum electrodynamic processes in plasma. For effective laser plasma interactions at such high intensity levels, guided laser propagation is critical. However, this becomes impossible via usual plasma electron response to laser fields due to electron cavitation by the laser ponderomotive force. Here, we find that ion response to the laser fields may effectively guide laser propagation at such high intensity levels. The corresponding conditions of the required ion density distribution and laser power are presented and verified by three-dimensional particle-in-cell simulations. Our theory shall serve as a guide for future experimental design involving ultrahigh intensity lasers. | physics |
If two identical copies of a completely depolarizing channel are put into a superposition of their possible causal orders, they can transmit non-zero classical information. Here, we study how well we can transmit classical information with $N$ depolarizing channels put in a superposition of $M$ causal orders via the quantum SWITCH. We calculate the Holevo quantity when the superposition uses only cyclic permutations of the channels and find that it increases with $M$ and is independent of $N$. For a qubit, it never reaches $1$ as $M$ increases. On the other hand, the classical capacity decreases with the dimension $d$ of the message system. Further, for $N=3$ and $N=4$ we study superpositions of all causal orders and of uniformly superposed causal orders belonging to different cosets created by the cyclic permutation subgroup. | quantum physics
Theoretical models show that the power of relativistic jets of active galactic nuclei depends on the spin and mass of the central supermassive black holes, as well as the accretion. Here we report an analysis of archival observations of a sample of blazars. We find a significant correlation between jet kinetic power and the spin of supermassive black holes. At the same time, we use multiple linear regression to analyze the relationship between jet kinetic power and accretion, spin and black hole mass. We find that the spin of supermassive black holes and accretion are the most important contribution to the jet kinetic power. The contribution rates of both the spin of supermassive black holes and accretion are more than 95\%. These results suggest that the spin energy of supermassive black holes powers the relativistic jets. The jet production efficiency of almost all Fermi blazars can be explained by moderately thin magnetically arrested accretion disks around rapidly spinning black holes. | astrophysics |
We study random walks with stochastic resetting to the initial position on arbitrary networks. We obtain the stationary probability distribution as well as the mean and global first passage times, which allow us to characterize the effect of resetting on the capacity of a random walker to reach a particular target or to explore a finite network. We apply the results to rings, Cayley trees, random and complex networks. Our formalism holds for undirected networks and can be implemented from the spectral properties of the random walk without resetting, providing a tool to analyze the search efficiency in different structures with the small-world property or communities. In this way, we extend the study of resetting processes to the domain of networks. | condensed matter |
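The stationary distribution and mean first-passage time under resetting can be computed directly from the transition matrix; here is a minimal sketch for a discrete-time walk with resetting probability r on an undirected graph (an assumption: the paper works from the spectral properties of the reset-free walk, and this sketch assumes the reset node differs from the target):

```python
# Minimal sketch: discrete-time random walk with resetting on a network.
import numpy as np
import networkx as nx

def reset_transition(G, reset, r):
    A = nx.to_numpy_array(G)
    P = A / A.sum(axis=1, keepdims=True)   # reset-free transition matrix
    Pr = (1 - r) * P
    Pr[:, reset] += r                      # with prob r, jump to reset node
    return Pr

def stationary(G, reset, r):
    Pr = reset_transition(G, reset, r)
    vals, vecs = np.linalg.eig(Pr.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvalue-1 eigenvector
    return v / v.sum()

def mfpt(G, reset, target, r):
    """Mean first-passage times to `target` (made absorbing); reset != target."""
    Pr = reset_transition(G, reset, r)
    Q = np.delete(np.delete(Pr, target, 0), target, 1)   # transient block
    t = np.linalg.solve(np.eye(len(Q)) - Q, np.ones(len(Q)))
    keep = [i for i in range(len(G)) if i != target]
    return dict(zip(keep, t))
```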
The COVID-19 crisis has shown that we can only prevent the risk of mass contagion through timely, large-scale, coordinated, and decisive actions. However, frequently the models used by experts [from whom decision-makers get their main advice] focus on a single perspective [for example, the epidemiological one] and do not consider many of the multiple forces that affect the COVID-19 outbreak patterns. The epidemiological, socioeconomic, and human mobility context of COVID-19 can be considered as a complex adaptive system. So, these interventions (for example, lock-downs) could have many and/or unexpected ramifications. This situation makes it difficult to understand the overall effect produced by any public policy measure and, therefore, to assess its real effectiveness and convenience. By using mobile phone data, socioeconomic data, and COVID-19 cases data recorded throughout the pandemic development, we aim to understand and explain [make sense of] the observed heterogeneous regional patterns of contagion across time and space. We will also consider the causal effects produced by confinement policies by developing data-based models to explore, simulate, and estimate these policies' effectiveness. We intend to develop a methodology to assess and improve public policies' effectiveness associated with the fight against the pandemic, emphasizing its convenience, the precise time of its application, and extension. The contributions of this work can be used regardless of the region. The only likely impediment is the availability of the appropriate data. | statistics |
This paper describes the system proposed for the SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection. We focused our approach on the detection problem. Given the semantics of words captured by temporal word embeddings in different time periods, we investigate the use of unsupervised methods to detect when the target word has gained or lost senses. To this end, we defined a new algorithm based on Gaussian Mixture Models to cluster the target similarities computed over the two periods. We compared the proposed approach with a number of similarity-based thresholds. We found that, although the performance of the detection methods varies across the word embedding algorithms, the combination of Gaussian Mixture with Temporal Referencing resulted in our best system. | computer science
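A minimal sketch of the GMM-based detection step, under the assumption that a word is flagged as changed when the pooled similarities from the two periods split into well-separated mixture components; the gap threshold and input convention are illustrative:

```python
# Minimal sketch: cluster target similarities from two periods with a GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_change(sims_t1, sims_t2, gap=0.1):
    """sims_t1/sims_t2: similarities involving the target word, computed in
    temporally aligned embedding spaces (e.g. via Temporal Referencing)."""
    sims = np.concatenate([sims_t1, sims_t2]).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, n_init=5).fit(sims)
    m1, m2 = np.sort(gmm.means_.ravel())
    return (m2 - m1) > gap   # well-separated components => sense change
```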
A semi-device-independent framework for prepare-and-measure experiments is introduced in which an experimenter can tune the degree of distrust in the performance of the quantum devices. In this framework, a receiver operates an uncharacterised measurement device and a sender operates a preparation device that emits states with a bounded fidelity with respect to a set of target states. No assumption on Hilbert space dimension is required. The set of quantum correlations is investigated and bounded from both the interior and the exterior. Furthermore, the optimal performance of quantum state discrimination with bounded distrust is derived and applied to certification of detection efficiency. Quantum-over-classical advantages are demonstrated and the magnitude of distrust compatible with such advantages is explored. Finally, efficient schemes for semi-device-independent random number generation are presented. | quantum physics |
For a class of parametric modal regression models with measurement error, a simulation extrapolation estimation procedure is proposed in this paper for estimating the modal regression coefficients. Large sample properties of the proposed estimation procedure, including the consistency and asymptotic normality, are thoroughly investigated. Simulation studies are conducted to evaluate its robustness to potential outliers and the effectiveness in reducing the bias caused by the measurement error. | statistics |
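The SIMEX idea itself is easy to sketch: re-fit the estimator on data with extra simulated measurement error at several levels, then extrapolate back to the error-free level. A minimal sketch follows, assuming quadratic extrapolation and a generic fit routine standing in for the modal regression estimator:

```python
# Minimal sketch of SIMEX: simulate extra error at levels lambda,
# then extrapolate the coefficient path back to lambda = -1.
import numpy as np

def simex(x_obs, y, fit, sigma_u, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0), B=50, seed=0):
    """fit(x, y) -> 1-D array of regression coefficients."""
    rng = np.random.default_rng(seed)
    means = []
    for lam in lambdas:
        reps = [fit(x_obs + np.sqrt(lam) * sigma_u * rng.standard_normal(len(x_obs)), y)
                for _ in range(B if lam > 0 else 1)]   # add extra error
        means.append(np.mean(reps, axis=0))
    means, lams = np.asarray(means), np.asarray(lambdas)
    # Quadratic extrapolation of each coefficient to lambda = -1 (no error).
    return np.array([np.polyval(np.polyfit(lams, means[:, j], 2), -1.0)
                     for j in range(means.shape[1])])
```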
We present the first statistical analysis of exoplanet direct imaging surveys combining adaptive optics imaging at small separations with deep seeing-limited observations at large separations allowing us to study the entire orbital separation domain from 5 to 5000 au simultaneously. Our sample of 344 stars includes only confirmed members of nearby young associations and is based on all AO direct-imaging detection limits readily available online, with addition of our own previous seeing limited surveys. Assuming that the companion distribution in mass and semi-major axis follows a power law distribution and adding a dependence on the mass of the host star, such as $\mathrm{d}^2 n \propto f M^{\alpha} a^{\beta} (M_\star/M_{\odot})^{\gamma}\,\mathrm{d}M\,\mathrm{d}a$, we constrain the parameters to obtain $\alpha=-0.18^{+0.77}_{-0.65}$, $\beta=-1.43^{+0.23}_{-0.24}$, and $\gamma=0.62^{+0.56}_{-0.50}$, at a 68% confidence level, and we obtain $f=0.11^{+0.11}_{-0.05}$ for the overall planet occurrence rate for companions with masses between 1 and 20 $M_{\mathrm{J}}$ in the range 5-5000 au. Thus, we find that occurrence of companions is negatively correlated with semi-major axis and companion mass (marginally) but is positively correlated with the stellar host mass. Our inferred mass distribution is in good agreement with other distributions found previously from direct imaging surveys for planets and brown dwarfs, but is shallower as a function of mass than the distributions inferred by radial velocity surveys of gas giants in the 1-3 au range. This may suggest that planets at these wide and very-wide separations represent the low-mass tail of the brown dwarfs and stellar companion distribution rather than an extension of the distribution of the inner planets. | astrophysics
Instrumental variable (IV) analyses are becoming common in health services research and epidemiology. Most IV analyses use naturally occurring instruments, such as distance to a hospital. In these analyses, investigators must assume the instrument is as-if randomly assigned. This assumption cannot be tested directly, but it can be falsified. Most falsification tests in the literature compare relative prevalence or bias in observed covariates between the instrument and the exposure. These tests require investigators to make a covariate-by-covariate judgment about the validity of the IV design. Often, only some of the covariates are well-balanced, making it unclear if as-if randomization can be assumed for the instrument across all covariates. We propose an alternative falsification test that compares IV balance or bias to the balance or bias that would have been produced under randomization. A key advantage of our test is that it allows for global balance measures as well as easily interpretable graphical comparisons. Furthermore, our test does not rely on any parametric assumptions and can be used to validly assess if the instrument is significantly closer to being as-if randomized than the exposure. We demonstrate our approach on a recent IV application that uses bed availability in the intensive care unit (ICU) as an instrument for admission to the ICU. | statistics |
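A minimal sketch of a randomization-based balance comparison in the spirit described above, using the maximum absolute standardized mean difference as one concrete statistic (an assumption, not the authors' exact test) for a binary instrument:

```python
# Minimal sketch: compare observed covariate balance for instrument z with
# the balance distribution produced by random assignment.
import numpy as np

def smd(x, z):
    """Standardized mean difference of covariate x across groups z in {0,1}."""
    a, b = x[z == 1], x[z == 0]
    s = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / s

def balance_vs_randomization(X, z, n_perm=2000, seed=0):
    """Small p-value => instrument is far from as-if random."""
    rng = np.random.default_rng(seed)
    obs = max(abs(smd(X[:, j], z)) for j in range(X.shape[1]))
    null = np.empty(n_perm)
    for b in range(n_perm):
        zp = rng.permutation(z)
        null[b] = max(abs(smd(X[:, j], zp)) for j in range(X.shape[1]))
    return obs, (null >= obs).mean()
```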
The existence and uniqueness of formal Puiseux series solutions of non-autonomous algebraic differential equations of the first order at a nonsingular point of the equation is proven. The convergence of those Puiseux series is established. Several new examples are provided. Relationships to the celebrated Painlevé theorem and the lesser-known results of Petrović are discussed in detail. | mathematics
Human-in-the-loop Reinforcement Learning (HRL) aims to integrate human guidance with Reinforcement Learning (RL) algorithms to improve sample efficiency and performance. A common type of human guidance in HRL is binary evaluative "good" or "bad" feedback for queried states and actions. However, this type of learning scheme suffers from the problems of weak supervision and poor efficiency in leveraging human feedback. To address this, we present EXPAND (EXPlanation AugmeNted feeDback) which provides a visual explanation in the form of saliency maps from humans in addition to the binary feedback. EXPAND employs a state perturbation approach based on salient information in the state to augment the binary feedback. We choose five tasks, namely Pixel-Taxi and four Atari games, to evaluate this approach. We demonstrate the effectiveness of our method using two metrics: environment sample efficiency and human feedback sample efficiency. We show that our method significantly outperforms previous methods. We also analyze the results qualitatively by visualizing the agent's attention. Finally, we present an ablation study to confirm our hypothesis that augmenting binary feedback with state salient information results in a boost in performance. | computer science |
Given recent advances in learned video prediction, we investigate whether a simple video codec using a pre-trained deep model for next frame prediction based on previously encoded/decoded frames without sending any motion side information can compete with standard video codecs based on block-motion compensation. Frame differences given learned frame predictions are encoded by a standard still-image (intra) codec. Experimental results show that the rate-distortion performance of the simple codec with symmetric complexity is on average better than that of x264 codec on 10 MPEG test videos, but does not yet reach the level of x265 codec. This result demonstrates the power of learned frame prediction (LFP), since unlike motion compensation, LFP does not use information from the current picture. The implications of training with L1, L2, or combined L2 and adversarial loss on prediction performance and compression efficiency are analyzed. | electrical engineering and systems science |
Experimental probes of the recently discovered Higgs boson show that its behavior is close to that of the Standard Model (SM) Higgs particle. Extensions of the SM which include extra Higgs bosons are constrained by these observations, implying either the decoupling of the heavy non-standard Higgs particles or the realization of alignment, associated with vanishing mixing of the SM-like Higgs boson with the non-standard ones. Quite generally, alignment is not enforced by symmetry considerations and hence it is interesting to look for dynamical ways in which this condition can be realized. We show that this is possible in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), in which alignment is achieved for values of the coupling of the Higgs fields to the singlet field that become large close to the Grand Unification (GUT) scale. This, in turn, can be explained by the composite nature of the Higgs fields, with a compositeness scale close to the GUT scale. In this article we present this dynamical scenario and discuss its phenomenological properties. | high energy physics phenomenology |