We study the problem of detecting whether an inhomogeneous random graph contains a planted community. Specifically, we observe a single realization of a graph. Under the null hypothesis, this graph is a sample from an inhomogeneous random graph, whereas under the alternative, there exists a small subgraph where the edge probabilities are increased by a multiplicative scaling factor. We present a scan test that is able to detect the presence of such a planted community, even when this community is very small and the underlying graph is inhomogeneous. We also derive an information-theoretic lower bound for this problem which shows that in some regimes the scan test is almost asymptotically optimal. We illustrate our results through examples and numerical experiments.
mathematics
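The scan-test idea in the abstract above can be illustrated with a brute-force toy sketch (a hypothetical illustration, not the paper's actual statistic): scan all $k$-node subsets and record the maximum number of internal edges, which tends to be larger when a denser community is planted.

```python
import itertools
import random

def scan_statistic(n, edges, k):
    """Maximum number of edges inside any k-node subset (brute force)."""
    edge_set = set(frozenset(e) for e in edges)
    best = 0
    for subset in itertools.combinations(range(n), k):
        count = sum(1 for u, v in itertools.combinations(subset, 2)
                    if frozenset((u, v)) in edge_set)
        best = max(best, count)
    return best

# Null: every edge appears with probability p.  Alternative: edge
# probabilities inside a planted k-subset are scaled up.  A test would
# reject the null when the scan statistic exceeds a threshold.
random.seed(0)
n, p, k = 12, 0.2, 4
null_edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
              if random.random() < p]
planted = set(range(k))
alt_edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
             if random.random() < (0.9 if u in planted and v in planted else p)]
print(scan_statistic(n, null_edges, k), scan_statistic(n, alt_edges, k))
```

Brute force over all subsets is exponential in $k$; it is only meant to make the definition of the statistic concrete.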
The original definition of amenability given by von Neumann in the highly non-constructive terms of means was later recast by Day using approximately invariant probability measures. Moreover, as it was conjectured by Furstenberg and proved by Kaimanovich-Vershik and Rosenblatt, the amenability of a locally compact group is actually equivalent to the existence of a single probability measure on the group with the property that the sequence of its convolution powers is asymptotically invariant. In the present article we extend this characterization of amenability to measured groupoids. It implies, in particular, that the amenability of a measure class preserving group action is equivalent to the existence of a random environment on the group parameterized by the action space, and such that the tail of the random walk in almost every environment is trivial.
mathematics
Despite a long history of using citation count as a measure to assess the impact or influence of a scientific paper, the evolution of follow-up work inspired by the paper and their interactions through citation links have rarely been explored to quantify how the paper enriches the depth and breadth of a research field. We propose a novel data structure, called the Influence Dispersion Tree (IDT), to model the organization of follow-up papers and their dependencies through citations. We also propose the notion of an ideal IDT for every paper and show that an ideal (highly influential) paper should increase the knowledge of a field both vertically and horizontally. By exploring the structural properties of the IDT, we derive a suite of metrics, namely the Influence Dispersion Index (IDI) and the Normalized Influence Divergence (NID), to quantify the influence of a paper. Our theoretical analysis shows that an ideal IDT configuration should have equal depth and breadth (and thus minimize the NID value). We establish the superiority of NID as an influence measure in two experimental settings. First, on a large real-world bibliographic dataset, we show that NID outperforms raw citation count as an early predictor of the number of new citations a paper will receive within a certain period after publication. Second, we show that NID is superior to the raw citation count at identifying the papers recognized as highly influential through the Test of Time Award among all their contemporary papers (published in the same venue). We conclude that, to quantify the influence of a paper on its research field, one should consider not only the total citation count but also how the citing papers are organized among themselves. For reproducibility, the code and datasets used in this study are made available to the community.
computer science
We develop a universal approximation for the Renyi entropies of a pure state at late times in a non-integrable many-body system, which macroscopically resembles an equilibrium density matrix. The resulting expressions are fully determined by properties of the associated equilibrium density matrix, and are hence independent of the details of the initial state, while also being manifestly consistent with unitary time-evolution. For equilibrated pure states in gravity systems, such as those involving black holes, this approximation gives a prescription for calculating entanglement entropies using Euclidean path integrals which is consistent with unitarity and hence can be used to address the information loss paradox of Hawking. Applied to recent models of evaporating black holes and eternal black holes coupled to baths, it provides a derivation of replica wormholes, and elucidates their mathematical and physical origins. In particular, it shows that replica wormholes can arise in a system with a fixed Hamiltonian, without the need for ensemble averages.
high energy physics theory
Ensuring the correct functioning of quantum error correction (QEC) circuits is crucial to achieve fault tolerance in realistic quantum processors subjected to noise. The first checkpoint for a fully operational QEC circuit is to create genuine multipartite entanglement across all subsystems of physical qubits. We introduce a conditional witnessing technique to certify genuine multipartite entanglement (GME) that is efficient in the number of subsystems and, importantly, robust against experimental noise and imperfections. Specifically, we prove that the detection of entanglement in a linear number of bipartitions by a number of measurements that also scales linearly, suffices to certify GME. Moreover, our method goes beyond the standard procedure of separating the state from the convex hull of biseparable states, yielding an improved finesse and robustness compared to previous techniques. We apply our method to the noisy readout of stabilizer operators of the distance-three topological color code and its flag-based fault-tolerant version. In particular, we subject the circuits to combinations of three types of noise, namely, uniform depolarizing noise, two-qubit gate depolarizing noise, and bit-flip measurement noise. We numerically compare our method with the standard, yet generally inefficient, fidelity test and to a pair of efficient witnesses, verifying the increased robustness of our method. Last but not least, we provide the full translation of our analysis to a trapped-ion native gate set that makes it suitable for experimental applications.
quantum physics
The computation of multiphase flows presents a subtle energetic equilibrium between potential (i.e., surface) and kinetic energies. The use of traditional interface-capturing schemes provides no control over such a dynamic balance. In the spirit of the well-known symmetry-preserving and mimetic schemes, whose physics-compatible discretizations rely upon preserving the underlying mathematical structures of the space, we identify the corresponding structure and propose a new discretization strategy for curvature. The new scheme ensures conservation of mechanical energy (i.e., surface plus kinetic) up to temporal integration. Inviscid numerical simulations are performed to show the robustness of the method.
physics
We demonstrate trapping of electrons in a millimeter-sized quadrupole Paul trap driven at 1.6~GHz in a room-temperature ultra-high vacuum setup. Cold electrons are introduced into the trap by ionization of atomic calcium via Rydberg states and stay confined by microwave and static electric fields for several tens of milliseconds. A fraction of these electrons remain trapped longer and show no measurable loss for measurement times up to a second. Electronic excitation of the motion reveals secular frequencies which can be tuned over a range of several tens to hundreds of MHz. Operating a similar electron Paul trap in a cryogenic environment may provide a platform for all-electric quantum computing with trapped electron spin qubits.
quantum physics
By employing the time-dependent exact diagonalization method, we investigate the photoexcited states of the excitonic insulator in the extended Falicov-Kimball model (EFKM). We show that pulse irradiation can induce the interband electron-electron pair correlation in the photoexcited states, while the excitonic electron-hole pair correlation in the initial ground state is strongly suppressed. We also show that the photoexcited states contain the eigenstates of the EFKM with a finite number of interband electron-electron pairs, which are responsible for the enhancement of the electron-electron pair correlation. The mechanism found here is due to the presence of the internal SU(2) pairing structure in the EFKM and is thus essentially the same as that for the photoinduced $\eta$-pairing in the repulsive Hubbard model reported recently [T. Kaneko et al., Phys. Rev. Lett. ${\bf 122}$, 077002 (2019)]. This also explains why the nonlinear optical response is effective in inducing the electron-electron pairs in the photoexcited states of the EFKM. Furthermore, we show that, unlike the $\eta$-pairing in the Hubbard model, the internal SU(2) structure is preserved even for a nonbipartite lattice when the EFKM has a direct-type band structure, in which case the pulse irradiation can induce the electron-electron pair correlation with momentum $\mathbf{q} = \mathbf{0}$ in the photoexcited states. We also briefly discuss the effect of a perturbation that breaks the internal SU(2) structure.
condensed matter
We explore the large spin spectrum in two-dimensional conformal field theories with a finite twist gap, using the modular bootstrap in the lightcone limit. By recursively solving the modular crossing equations associated to different $PSL(2,\mathbb{Z})$ elements, we identify the universal contribution to the density of large spin states from the vacuum in the dual channel. Our result takes the form of a sum over $PSL(2,\mathbb{Z})$ elements, whose leading term generalizes the usual Cardy formula to a wider regime. Rather curiously, the contribution to the density of states from the vacuum becomes negative in a specific limit, which can be canceled by that from a non-vacuum Virasoro primary whose twist is no bigger than $\frac{c-1}{16}$. This suggests a new upper bound of $\frac{c-1}{16}$ on the twist gap in any $c>1$ compact, unitary conformal field theory with a vacuum, which would in particular imply that pure AdS$_3$ gravity does not exist. We confirm this negative density of states in the pure gravity partition function by Maloney, Witten, and Keller. We generalize our discussion to theories with $\mathcal{N}=(1,1)$ supersymmetry, and find similar results.
high energy physics theory
We apply the Lifshitz theory of dispersion forces to find a contribution to the free energy of peptide films which is caused by the zero-point and thermal fluctuations of the electromagnetic field. For this purpose, using available information about the imaginary parts of the dielectric permittivity of peptides, an analytic representation for the permittivity of a typical peptide along the imaginary frequency axis is devised. Numerical computations of the fluctuation-induced free energy are performed at room temperature for freestanding peptide films containing different fractions of water, and for similar films deposited on dielectric (SiO$_2$) and metal (Au) substrates. It is shown that the free energy of a freestanding peptide film is negative and thus contributes to its stability. The magnitude of the free energy increases with increasing fraction of water and decreases with increasing film thickness. For peptide films deposited on a dielectric substrate the free energy is nonmonotonic: it is negative for films thicker than 100 nm, reaches a maximum at some film thickness, and vanishes and changes sign for films thinner than 100 nm. The fluctuation-induced free energy of peptide films deposited on a metallic substrate is found to be positive, which makes the films less stable. In all three cases, simple analytic expressions for the free energy of sufficiently thick films are found. The obtained results may be useful for attaining film stability in the next generation of organic microdevices with further reduced dimensions.
physics
In this paper, we present an experiment, designed to investigate and evaluate the scalability and the robustness aspects of mobile manipulation. The experiment involves performing variations of mobile pick and place actions and opening/closing environment containers in a human household. The robot is expected to act completely autonomously for extended periods of time. We discuss the scientific challenges raised by the experiment as well as present our robotic system that can address these challenges and successfully perform all the tasks of the experiment. We present empirical results and the lessons learned as well as discuss where we hit limitations.
computer science
Negative differential mobility is the phenomenon in which the velocity of a particle decreases when the force driving it increases. We study this phenomenon in Markov jump models where a particle moves in the presence of walls that act as traps. We consider transition rates that obey local detailed balance but differ in normalisation, the inclusion of a rate to cross a wall and a load factor. We illustrate the full counting statistics for different choices of the jumping rates. We also show examples of thermodynamic uncertainty relations. The variety of behaviours we encounter highlights that negative differential mobility depends crucially on the chosen rates and points out the necessity that such choices should be based on proper coarse-graining studies of a more microscopic description.
condensed matter
We calculate the homotopy type of $L_1L_{K(2)}S^0$ and $L_{K(1)}L_{K(2)}S^0$ at the prime 2, where $L_{K(n)}$ is localization with respect to Morava $K$-theory and $L_1$ localization with respect to $2$-local $K$ theory. In $L_1L_{K(2)}S^0$ we find all the summands predicted by the Chromatic Splitting Conjecture, but we find some extra summands as well. An essential ingredient in our approach is the analysis of the continuous group cohomology $H^\ast_c(\mathbb{G}_2,E_0)$ where $\mathbb{G}_2$ is the Morava stabilizer group and $E_0 = \mathbb{W}[[u_1]]$ is the ring of functions on the height $2$ Lubin-Tate space. We show that the inclusion of the constants $\mathbb{W} \to E_0$ induces an isomorphism on group cohomology, a radical simplification.
mathematics
While 5G mobile communication systems are currently in deployment, researchers around the world have already started to discuss 6G technology, and funding agencies have started their first programs with a 6G label. Although it may seem like a good idea from a historical point of view, with new generations returning every decade, this contribution will show that there is a great risk in introducing 6G labels at this time. While the reasons not to talk about 6G yet are manifold, some of the more dominant ones are: (i) there is a lack of real technology advancements introduced by a potential 6G system; (ii) the flexibility of the 5G communication system introduced by softwarization concepts, such as in the Internet community, allows for daily updates; and (iii) widespread 6G discussions can have a negative impact on the deployment and evolution of 5G, which has completely new business cases and customer ecosystems compared to its predecessors. Finally, as we do not believe that 5G is the end of our journey, we provide an outlook on the future of mobile communication systems, independent of the current mainstream discussion.
computer science
As important devices for detecting rotation, high-sensitivity gyroscopes are required for practical applications. In recent years, the exceptional point (EP) has shown potential for enhancing the sensitivity of sensing in optical cavities. Here we propose an EP-enhanced optical gyroscope based on a mechanical PT-symmetric system in a microcavity. By pumping the two optical modes with different colors, i.e., blue and red detuning, an effective mechanical PT-symmetric system can be obtained, and the system can be prepared at the EP with appropriate parameters. Compared with the situation at a diabolic point, the EP can enhance the sensitivity of the gyroscope by more than one order of magnitude in the weak-perturbation regime. The results show that the gyroscope can be enhanced effectively by monitoring mechanical modes rather than optical modes. Our work provides a promising approach to designing gyroscopes with higher sensitivity in optical microcavities and has potential value in fields including fundamental physics and precision measurement.
physics
The exact number of CNOT and single-qubit gates needed to implement a quantum algorithm in a given architecture is one of the central problems of quantum computation. In this work we study the importance of concise realizations of partially defined unitary transformations for better circuit construction, using the case study of Dicke state preparation. The Dicke states $(\left|D^n_k \right>)$ are an important class of entangled states with uses in many branches of quantum information. In this regard we provide the most efficient deterministic Dicke state preparation circuit in terms of CNOT and single-qubit gate counts in comparison to the existing literature. We further observe that our improvements also reduce the architectural constraints of the circuits. We implement the circuit for preparing $\left| D^4_2 \right>$ on the "ibmqx2" machine of the IBM QX service and observe that the error induced due to noise in the system is smaller than for the existing circuit descriptions. We conclude by describing the CNOT map of the generic $\left| D^n_k \right>$ preparation circuit and analyzing different ways of distributing the CNOT gates in the circuit and their effect on the induced error.
quantum physics
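As a reference point for circuit constructions like the one above, the target state $\left|D^n_k\right>$ itself is easy to write down: it is the equal superposition of all $n$-qubit computational basis states of Hamming weight $k$. A direct state-vector construction (a plain numerical sketch, unrelated to the paper's circuit) is:

```python
import math
from itertools import combinations

def dicke_state(n, k):
    """State vector of |D^n_k>: equal superposition of all
    n-qubit basis states with Hamming weight k."""
    dim = 2 ** n
    amp = 1.0 / math.sqrt(math.comb(n, k))
    state = [0.0] * dim
    for ones in combinations(range(n), k):
        # basis index whose binary expansion has 1s exactly at `ones`
        index = sum(1 << q for q in ones)
        state[index] = amp
    return state

# |D^4_2>: six basis states, each with amplitude 1/sqrt(6)
psi = dicke_state(4, 2)
support = [i for i, a in enumerate(psi) if a > 0]
print(len(support))  # 6 = C(4, 2)
```

The state vector has $\binom{n}{k}$ nonzero amplitudes, which is what makes an efficient preparation circuit nontrivial.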
The spherical $p$-spin is a fundamental model for glassy physics, thanks to its analytic solution achievable via the replica method. Unfortunately the replica method has some drawbacks: it is very hard to apply to diluted models, and the assumptions behind it are not immediately clear. Both drawbacks can be overcome by the use of the cavity method, which, however, needs to be applied with care to spherical models. Here we show how to write the cavity equations for spherical $p$-spin models on complete graphs, both in the Replica Symmetric (RS) ansatz (corresponding to Belief Propagation) and in the 1-step Replica Symmetry Breaking (1RSB) ansatz (corresponding to Survey Propagation). The cavity equations can be solved by a Gaussian (RS) and a multivariate Gaussian (1RSB) ansatz for the distribution of the cavity fields. We compute the free energy in both ansatzes and check that the results are identical to the replica computation, predicting a phase transition to a 1RSB phase at low temperatures. The advantages of solving the model with the cavity method are many. The physical meaning of any ansatz for the cavity marginals is very clear. The cavity method works directly with the distribution of local quantities, which allows the method to be generalized to diluted graphs. What we present here is the first step towards the solution of the diluted version of the spherical $p$-spin model, which is a fundamental model in the theory of random lasers and interesting per se as an easier-to-simulate version of the classical fully-connected $p$-spin model.
condensed matter
The most metal-deficient stars hold important clues about the early build-up and chemical evolution of the Milky Way, and carbon-enhanced metal-poor (CEMP) stars are of special interest. However, little is known about CEMP stars in the Galactic bulge. In this paper, we use the large spectroscopic sample of metal-poor stars from the Pristine Inner Galaxy Survey (PIGS) to identify CEMP stars ([C/Fe] > +0.7) in the bulge region and to derive a CEMP fraction. We identify 96 new CEMP stars in the inner Galaxy, of which 62 are very metal-poor ([Fe/H] < -2.0); this is more than a ten-fold increase compared to the seven previously known bulge CEMP stars. The cumulative fraction of CEMP stars in PIGS is $42^{\,+14\,}_{\,-13} \%$ for stars with [Fe/H] < -3.0, and decreases to $16^{\,+3\,}_{\,-3} \%$ for [Fe/H] < -2.5 and $5.7^{\,+0.6\,}_{\,-0.5} \%$ for [Fe/H] < -2.0. The PIGS inner Galaxy CEMP fraction for [Fe/H] < -3.0 is consistent with the halo fraction found in the literature, but at higher metallicities the PIGS fraction is substantially lower. While this can partly be attributed to a photometric selection bias, such bias is unlikely to fully explain the low CEMP fraction at higher metallicities. Considering the typical carbon excesses and metallicity ranges for halo CEMP-s and CEMP-no stars, our results point to a possible deficiency of both CEMP-s and CEMP-no stars (especially the more metal-rich) in the inner Galaxy. The former is potentially related to a difference in the binary fraction, whereas the latter may be the result of a fast chemical enrichment in the early building blocks of the inner Galaxy.
astrophysics
Semiconductor quantum dots have recently emerged as a leading platform to efficiently generate highly indistinguishable photons, and this work addresses the timely question of how good these solid-state sources can ultimately be. We establish the crucial role of lattice relaxation in these systems in giving rise to trade-offs between indistinguishability and efficiency. We analyse the two source architectures most commonly employed: a quantum dot embedded in a waveguide and a quantum dot coupled to an optical cavity. For waveguides, we demonstrate that the broadband Purcell effect results in a simple inverse relationship, where indistinguishability and efficiency cannot be simultaneously increased. For cavities, the frequency selectivity of the Purcell enhancement results in a more subtle trade-off, where indistinguishability and efficiency can be simultaneously increased, though by the same mechanism not arbitrarily, limiting a source with near-unity indistinguishability ($>99$\%) to an efficiency of approximately 96\% for realistic parameters.
quantum physics
We study the coupled system consisting of a complex matter scalar field, a U(1) gauge field, and a complex Higgs scalar field that causes spontaneous symmetry breaking. We show by numerical calculations that there are spherically symmetric nontopological soliton solutions. Homogeneous ball solutions, in which all fields take constant values inside the ball and vacuum values outside, appear in this system. It is shown that the homogeneous balls have the following properties: the charge density of the matter scalar field is screened everywhere by a counter-charge cloud of the Higgs and gauge fields; an arbitrarily large size is allowed; the energy density and pressure of the ball behave like those of a homogeneous nonrelativistic gas; and a large ball is stable against dispersion into free particles and against decay into two smaller balls.
high energy physics theory
Dense, stabilized, frictional particulate suspensions in a viscous liquid undergo increasingly strong continuous shear thickening (CST) as the solid packing fraction, $\phi$, increases above a critical volume fraction, and discontinuous shear thickening (DST) is observed for even higher packing fractions. Recent studies have related shear thickening to a transition from mostly lubricated to predominantly frictional contacts with the increase in stress. The rheology and networks of frictional forces from two and three-dimensional simulations of shear-thickening suspensions are studied. These are analyzed using measures of the topology of the network, including tools of persistent homology. We observe that at low stress the frictional interaction networks are predominantly quasi-linear along the compression axis. With an increase in stress, the force networks become more isotropic, forming loops in addition to chain-like structures. The topological measures of Betti numbers and total persistence provide a compact means of describing the mean properties of the frictional force networks and provide a key link between macroscopic rheology and the microscopic interactions. A total persistence measure describing the significance of loops in the force network structure, as a function of stress and packing fraction, shows behavior similar to that of relative viscosity and displays a scaling law near the jamming fraction for both dimensionalities simulated.
condensed matter
We present a first calculation of the rate for plasmon production in semiconductors from nuclei recoiling against dark matter. The process is analogous to bremsstrahlung of transverse photon modes, but with a longitudinal plasmon mode emitted instead. For dark matter in the 10 MeV - 1 GeV mass range, we find that the plasmon bremsstrahlung rate is 4-5 orders of magnitude smaller than that for elastic scattering, but 4-5 orders of magnitude larger than the transverse bremsstrahlung rate. Because the plasmon can decay into electronic excitations and has characteristic energy given by the plasma frequency $\omega_p$, with $\omega_p \approx 16$ eV in Si crystals, plasmon production provides a distinctive signature and new method to detect nuclear recoils from sub-GeV dark matter.
high energy physics phenomenology
Universal Extra Dimension (UED) is a well-motivated and well-studied scenario. One of its main motivations is the presence of a dark matter (DM) candidate, namely the lightest level-1 Kaluza-Klein (KK) particle (LKP), in the particle spectrum of UED. The minimal version of the UED (mUED) scenario is highly predictive, with only two parameters, namely the radius of compactification and the cut-off scale, determining the phenomenology. Therefore, a stringent constraint results from the WMAP/PLANCK measurement of the DM relic density (RD) of the universe. The production and decays of level-1 quarks and gluons in UED scenarios give rise to multijet final states at the Large Hadron Collider (LHC) experiment. We study the ATLAS search for multijet plus missing transverse energy signatures at the LHC with 13 TeV center-of-mass energy and 139 inverse femtobarn integrated luminosity. In view of the fact that the part of the mUED parameter space allowed by the DM RD has already been ruled out by the ATLAS multijet search, we move on to a less restricted version of UED, namely the non-minimal UED (nmUED), with non-vanishing boundary-localized terms (BLTs). The presence of BLTs significantly alters the dark matter as well as the collider phenomenology of nmUED. We obtain stringent bounds on the BLT parameters from the ATLAS multijet plus missing transverse energy search.
high energy physics phenomenology
We address the propagation and hadronization of a struck quark by studying the gauge invariance of the color-averaged cut quark propagator, and by relating this to the single inclusive quark fragmentation correlator by means of new sum rules. Using suitable Wilson lines, we provide a gauge-invariant definition for the mass of the color-averaged dressed quark and decompose this into the sum of a current and an interaction-dependent component. The latter, which we argue is an order parameter for dynamical chiral symmetry breaking, also appears in the sum rule for the twist-3 $\tilde{E}$ fragmentation function, providing a specific experimental way to probe the dynamical generation of mass in Quantum Chromo Dynamics.
high energy physics phenomenology
Radiation pressure forces in cavity optomechanics allow for efficient cooling of vibrational modes of macroscopic mechanical resonators, the manipulation of their quantum states, as well as generation of optomechanical entanglement. The standard mechanism relies on the cavity photons directly modifying the state of the mechanical resonator. Hybrid cavity optomechanics provides an alternative approach by coupling mechanical objects to quantum emitters, either directly or indirectly via the common interaction with a cavity field mode. While many approaches exist, they typically share a simple effective description in terms of a single force acting on the mechanical resonator. More generally, one can study the interplay between various forces acting on the mechanical resonator in such hybrid mechanical devices. This interplay can lead to interference effects that may, for instance, improve cooling of the mechanical motion or lead to generation of entanglement between various parts of the hybrid device. Here, we provide such an example of a hybrid optomechanical system where an ensemble of quantum emitters is embedded into the mechanical resonator formed by a vibrating membrane. The interference between the radiation pressure force and the mechanically modulated Tavis--Cummings interaction leads to enhanced cooling dynamics in regimes in which neither force is efficient by itself. Our results pave the way towards engineering novel optomechanical interactions in hybrid optomechanical systems.
quantum physics
We investigate the Beyond Standard Model discovery potential in the framework of the Effective Field Theory (EFT) for the same-sign $WW$ scattering process in purely leptonic $W$ decay modes at the High-Luminosity and High-Energy phases of the Large Hadron Collider (LHC). The goal of this paper is to examine the applicability of the EFT approach, with one dimension-8 operator varied at a time, to describe a hypothetical new physics signal in the $WWWW$ quartic coupling. In the considered process there is no experimental handle on the $WW$ invariant mass, and it has previously been shown that the discovery potential at 14 TeV is rather slim. In this paper we report the results calculated for a 27 TeV machine and compare them with the discovery potential obtained at 14 TeV. We find that while the respective discovery regions shift to lower values of the Wilson coefficients, the overall discovery potential of this procedure does not get significantly larger with a higher beam energy.
high energy physics phenomenology
Coronavirus disease (COVID-19) emerged towards the end of 2019, and the World Health Organization (WHO) identified it as a global pandemic. A consensus emerged that using Computerized Tomography (CT) techniques for early diagnosis of the pandemic disease gives both fast and accurate results, and expert radiologists have stated that COVID-19 displays different behaviours in CT images. In this study, a novel method is proposed that fuses and ranks deep features to detect COVID-19 in its early phase. 16x16 (Subset-1) and 32x32 (Subset-2) patches were obtained from 150 CT images to generate sub-datasets. Within the scope of the proposed method, 3000 patch images were labelled as COVID-19 or "no finding" for use in the training and testing phases. Feature fusion and ranking were applied to increase the performance of the proposed method, and the processed data were then classified with a Support Vector Machine (SVM). Compared with other pre-trained Convolutional Neural Network (CNN) models used in transfer learning, the proposed method shows high performance on Subset-2, with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score, and a 96.54% Matthews Correlation Coefficient (MCC).
electrical engineering and systems science
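The fuse-rank-classify pipeline described above can be sketched end-to-end on synthetic data. All shapes, feature counts, and the ranking score below are illustrative assumptions, not the study's actual deep features or parameters:

```python
# Hypothetical sketch: fuse features from two extractors, rank them,
# keep the top-ranked ones, and classify with an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patches = 200
feats_a = rng.normal(size=(n_patches, 64))   # features from extractor A
feats_b = rng.normal(size=(n_patches, 64))   # features from extractor B
y = rng.integers(0, 2, size=n_patches)       # COVID-19 vs. "no finding"
feats_a[y == 1, :8] += 1.5                   # make a few features informative

fused = np.concatenate([feats_a, feats_b], axis=1)             # feature fusion
ranked = SelectKBest(f_classif, k=32).fit_transform(fused, y)  # feature ranking

X_tr, X_te, y_tr, y_te = train_test_split(ranked, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

The ranking step (here a univariate F-score) stands in for whatever ranking criterion the study actually used; the point is only that fusion and ranking reduce the fused feature vector before the SVM sees it.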
Voice enhancement and voice coding are essential functions in a voice-communication system. However, the two functions are commonly treated independently, even though they utilize similar features of the underlying signals. We propose to leverage information from one function for the benefit of the other. Specifically, our proposed changes focus on the voice enhancement at the downlink side, utilizing information from the voice decoding. Preliminary results show that such an approach improves quality. Additionally, suggestions are provided for future extensions of the proposed concept.
electrical engineering and systems science
A Ducci sequence is a sequence of integer $n$-tuples obtained by iterating the map \[ D : (a_1, a_2, \ldots, a_n) \mapsto \big(|a_1-a_2|,|a_2-a_3|,\ldots,|a_n-a_1|\big). \] Such a sequence is eventually periodic and we denote by $P(n)$ the maximal period of such sequences for given $n$. We prove lower bounds for $P(n)$ by counting certain partitions.
mathematics
In the current unmanned aircraft systems (UASs) for sensing services, unmanned aerial vehicles (UAVs) transmit their sensory data to terrestrial mobile devices over the unlicensed spectrum. However, the interference from surrounding terminals is uncontrollable due to the opportunistic channel access. In this paper, we consider a cellular Internet of UAVs to guarantee the Quality-of-Service (QoS), where the sensory data can be transmitted to the mobile devices either by UAV-to-Device (U2D) communications over cellular networks, or directly through the base station (BS). Since UAVs' sensing and transmission may influence their trajectories, we study the trajectory design problem for UAVs in consideration of their sensing and transmission. This is a Markov decision problem (MDP) with a large state-action space, and thus, we utilize multi-agent deep reinforcement learning (DRL) to approximate the state-action space, and then propose a multi-UAV trajectory design algorithm to solve this problem. Simulation results show that our proposed algorithm can achieve a higher total utility than policy gradient algorithm and single-agent algorithm.
electrical engineering and systems science
In quantum logic, i.e., within the structure of the Hilbert lattice imposed on all closed linear subspaces of a Hilbert space, the assignment of truth values to quantum propositions (i.e., experimentally verifiable propositions relating to a quantum system) is unambiguously determined by the state of the system. So, if only pure states of the system are considered, can a probability measure mapping the probability space for truth values to the unit interval be assigned to quantum propositions? In other words, is a probability concept contingent or emergent in the logic of quantum propositions? Until this question is answered, the cause of probabilities in quantum theory cannot be completely understood. In the present paper it is shown that the interaction of the quantum system with its environment causes the irreducible randomness in the relation between quantum propositions and truth values.
quantum physics
A method to unitarize the scattering amplitude produced by infinite-range forces is developed and applied to Born terms. In order to apply $S$-matrix techniques, based on unitarity and analyticity, we first derive an $S$-matrix free of infrared divergences. This is achieved by removing a divergent phase factor due to the interactions mediated by the massless particles in the crossed channels, a procedure that is related to previous formalisms to treat infrared divergences. We apply this method in detail by unitarizing the Born terms for graviton-graviton scattering in pure gravity and we find a scalar graviton-graviton resonance with vacuum quantum numbers ($J^{PC}=0^{++}$) that we call the \textit{graviball}. Remarkably, this resonance is located below the Planck mass but deep in the complex $s$-plane (with $s$ the usual Mandelstam variable), so that its effects along the physical real $s$ axis peak for values significantly lower than this scale. We argue that the position and width of the graviball are reduced when including extra light fields in the theory. This could lead to phenomenological consequences in scenarios of quantum gravity with a large number of such fields or, in general, with a low-energy ultraviolet completion. We also apply this formalism to two non-relativistic potentials with exact known solutions for the scattering amplitudes: Coulomb scattering and an energy-dependent potential obtained from the Coulomb one with a zero at threshold. This latter case shares the same $J=0$ partial-wave projected Born term as the graviton-graviton case, except for a global factor. We find that the relevant resonance structure of these examples is reproduced by our methods, which represents a strong indication of their robustness.
high energy physics theory
Recently, Transformer- and Convolutional Neural Network (CNN)-based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent Neural Networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolutional neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this end, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer- and CNN-based models, achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves a WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.
electrical engineering and systems science
We present an analysis of two-current correlations for the pion in the Nambu--Jona-Lasinio model, with Pauli--Villars regularization. We provide explicit expressions in momentum space for two-current correlations corresponding to the zeroth component of the vector Dirac bilinear in the quark vertices, which has been evaluated on the lattice. The numerical results show a remarkable qualitative agreement with recent lattice data. The factorization approximation into one-body currents is discussed.
high energy physics phenomenology
Entanglement-assisted quantum error correcting codes (EAQECCs) constructed from Reed-Solomon codes and BCH codes are considered in this work. We provide a complete and explicit formula for the parameters of EAQECCs coming from any Reed-Solomon code, for the Hermitian metric, and from any BCH code with extension degree $2$ and consecutive cyclotomic cosets, for both the Euclidean and the Hermitian metrics. The main task in this work is the computation of a completely general formula for $c$, the minimum number of maximally entangled quantum states required.
computer science
Gauge theories possess nonlocal features that, in the presence of boundaries, inevitably lead to subtleties. We employ geometric methods rooted in the functional geometry of the phase space of Yang-Mills theories to: (1) characterize a basis for quasilocal degrees of freedom (dof) that is manifestly gauge-covariant also at the boundary; (2) tame the non-additivity of the regional symplectic forms upon the gluing of regions; and to (3) discuss gauge and global charges in both Abelian and non-Abelian theories from a geometric perspective. Naturally, our analysis leads to splitting the Yang-Mills dof into Coulombic and radiative. Coulombic dof enter the Gauss constraint and are dependent on extra boundary data (the electric flux); radiative dof are unconstrained and independent. The inevitable non-locality of this split is identified as the source of the symplectic non-additivity, i.e. of the appearance of new dof upon the gluing of regions. Remarkably, these new dof are fully determined by the regional radiative dof only. Finally, a direct link is drawn between this split and Dirac's dressed electron.
high energy physics theory
We demonstrate one-sided device-independent self-testing of any pure entangled two-qubit state based on a fine-grained steering inequality. The maximum violation of a fine-grained steering inequality can be used to witness certain steerable correlations, which certify all pure two-qubit entangled states. Our experimental results identify which particular pure two-qubit entangled state has been self-tested and which measurement operators are used on the untrusted side. Furthermore, we analytically derive the robustness bound of our protocol, enabling our subsequent experimental verification of robustness through state tomography. Finally, we ensure that the requisite no-signalling constraints are maintained in the experiment.
quantum physics
The Virtual Telescope for X-ray Observations (VTXO) will use lightweight Phase Fresnel Lenses (PFLs) in a virtual X-ray telescope with $\sim$1 km focal length and $\sim$50 milli-arcsecond angular resolution. VTXO is formed by precision formation flying of two SmallSats: a smaller OpticsSat houses the PFLs and navigation beacons, while a larger DetectorSat contains an X-ray camera, a precision star tracker, and the propulsion for the formation flying. The baseline flight dynamics uses a highly elliptical supersynchronous orbit that allows the formation to hold in an inertial frame around the 90,000 km apogee for 10 hours of the 32.5-hour orbit, with a mission lifetime of nearly a year. VTXO's fine angular resolution enables measuring the environments close to the central engines of bright compact X-ray sources. This X-ray imaging capability allows for the study of the effects of dust scattering near the central objects of sources such as Cyg X-3 and GX 5-1, the search for jet structure near the compact object in X-ray novae such as Cyg X-1 and GRS 1915+105, and the search for structure in the termination shock of the Crab pulsar wind nebula. The VTXO SmallSat and instrument designs, mission parameters, and science performance are described. VTXO development was supported as one of the selected 2018 NASA Astrophysics SmallSat Study (AS$^3$) missions.
astrophysics
In this paper, we theoretically prove that gradient descent can find a global minimum of non-convex optimization of all layers for nonlinear deep neural networks of sizes commonly encountered in practice. The theory developed in this paper only requires the practical degrees of over-parameterization unlike previous theories. Our theory only requires the number of trainable parameters to increase linearly as the number of training samples increases. This allows the size of the deep neural networks to be consistent with practice and to be several orders of magnitude smaller than that required by the previous theories. Moreover, we prove that the linear increase of the size of the network is the optimal rate and that it cannot be improved, except by a logarithmic factor. Furthermore, deep neural networks with the trainability guarantee are shown to generalize well to unseen test samples with a natural dataset but not a random dataset.
statistics
We investigate the tunneling magnetoresistance in magnetic tunnel junctions (MTJs) comprised of Weyl semimetal contacts. We show that chirality-magnetization locking leads to a gigantic tunneling magnetoresistance (TMR) ratio, an effect that does not rely on spin filtering by the tunnel barrier. Our results indicate that the conductance in the anti-parallel configuration is more sensitive to magnetization fluctuations than in MTJs with normal ferromagnets, and predict a TMR as large as 10^4 % when realistic magnetization fluctuations are accounted for. In addition, we show that the Fermi arc states give rise to a non-monotonic dependence of the conductance on the misalignment angle between the magnetizations of the two contacts.
condensed matter
In the twisted M-theory setting, various types of fusion of M2 and M5 branes induce coproducts between the algebra of operators on M2 branes and the algebra of operators on M5 branes. By doing a perturbative computation in the gravity side, which is captured by the 5d topological holomorphic $U(1)$ Chern-Simons theory, we reproduce the non-perturbative coproducts.
high energy physics theory
When specific events seem to spur others in their wake, marked Hawkes processes enable us to reckon with their statistics. The underdetermined empirical nature of these event-triggering mechanisms hinders estimation in the multivariate setting. Spatiotemporal applications alleviate this obstacle by allowing relationships to depend only on relative distances in real Euclidean space; we employ the framework as a vessel for embedding arbitrary event types in a new latent space. By performing synthetic experiments on short records as well as an investigation into options markets and pathogens, we demonstrate that learning the embedding alongside a point process model uncovers the coherent, rather than spurious, interactions.
statistics
Quantum reflection of thermal He atoms from various surfaces (glass slide, GaAs wafer, flat and structured Cr) at grazing conditions is studied within the elastic close-coupling formalism. Comparison with the experimental results of B. S. Zhao et al., Phys. Rev. Lett. {\bf 105}, 133203 (2010) is quite reasonable, but the conclusions of the present theoretical analysis differ from those discussed in the experimental work. The universal linear behavior observed in the dependence of the reflection probability on the incident wave-vector component perpendicular to the surface is only valid at small values of this component; at larger values, deviation from linearity is evident, approaching a quadratic dependence at the highest values. The surface roughness seems to play no important role in this scattering. Moreover, the claim that one observes a transition from quantum to classical reflection seems to be imprecise.
quantum physics
In this paper, we present a low-power anomaly detection integrated circuit (ADIC) based on a one-class classifier (OCC) neural network. The ADIC achieves low-power operation through a combination of (a) careful choice of algorithm for online learning and (b) approximate computing techniques to lower average energy. In particular, the online pseudoinverse update method (OPIUM) is used to train a randomized neural network for quick and resource-efficient learning. An additional 42% energy saving can be achieved when a lighter version of OPIUM is used for training with the same number of data samples, with no significant compromise in the quality of inference. Instead of a single classifier with a large number of neurons, an ensemble of K base learners is chosen to reduce the learning memory by a factor of K. This also enables approximate computing by dynamically varying the neural network size based on the anomaly detection result. Fabricated in 65nm CMOS, the ADIC has K = 7 Base Learners (BL) with 32 neurons in each BL and dissipates 11.87pJ/OP and 3.35pJ/OP during learning and inference, respectively, at Vdd = 0.75V when all 7 BLs are enabled. Further, evaluated on the NASA bearing dataset, approximately 80% of the chip can be shut down for 99% of the lifetime, leading to an energy efficiency of 0.48pJ/OP, an 18.5 times reduction over full-precision computing running at Vdd = 1.2V throughout the lifetime.
electrical engineering and systems science
The classical Jordan curve theorem for digital curves asserts that the Jordan curve theorem remains valid in the Khalimsky plane. Since the Khalimsky plane is a quotient space of $\mathbb R^2$ induced by a tiling of squares, it is natural to ask for which other tilings of the plane it is possible to obtain a similar result. In this paper we prove a Jordan curve theorem which is valid for every locally finite tiling of $\mathbb R^2$. As a corollary of our result, we generalize some classical Jordan curve theorems for grids of points, including Rosenfeld's theorem.
mathematics
In this manuscript, which is to appear in the proceedings of the conference "MathemAmplitude 2019" in Padova, Italy, we provide an overview of the module intersection method for the integration-by-parts (IBP) reduction of multi-loop Feynman integrals. The module intersection method, based on computational algebraic geometry, is a highly efficient way of obtaining IBP relations without doubled propagators or with a bound on the highest propagator degree. In this manner, trimmed IBP systems which are much shorter than the traditional ones can be obtained. We apply the modern, Petri-net based workflow management system GPI-Space in combination with the computer algebra system Singular to solve the trimmed IBP system via interpolation and efficient parallelization. We show, in particular, how to use the new plugin feature of GPI-Space to manage a global state of the computation and to efficiently handle mutable data. Moreover, a Mathematica interface to generate IBPs with restricted propagator degree, which is based on module intersection, is presented in this review.
high energy physics theory
There is an apparent discrepancy between the results for the charged kaon multiplicities off the deuteron target from the HERMES and COMPASS experiments. In this article we point out that this discrepancy cannot be explained by different $Q^2$ values. Furthermore, we carefully examine the empirical parametrization of the fragmentation functions, DSS2017, and find that the agreement between the theoretical estimate and the HERMES data is less satisfactory than claimed.
high energy physics phenomenology
Quantum phenomena have remained largely inaccessible to the general public. This can be attributed to the fact that we do not experience quantum mechanics on a tangible level in our daily lives. Games can provide an environment in which people can experience the strange behavior of the quantum world in a fun and mentally engaging way. Games could also offer an interesting test bed for near term quantum devices because they can be tailored to support varying amounts of quantum behavior through simple rule changes, which can be useful when dealing with limited resources. This paper explores the design of a variant of Chess which is built on top of unitary dynamics and includes non-trivial quantum effects such as superposition, entanglement, and interference. A movement ruleset is introduced which allows players to create superposition, entanglement, and interference effects. A measurement rule is also introduced which helps limit the size of the superposition, so the game remains tractable for a classical computer, and also helps make the game more comprehensible for a player.
quantum physics
We comment on the origin of a diffraction cone shrinkage which may not be related to the contribution of the linear Regge trajectories, but can result from the scattering matrix unitarity in the framework of the geometrical models.
high energy physics phenomenology
We complete the program of 2012.15792 about perturbative approaches for $\mathcal{N}=2$ superconformal quiver theories in four dimensions. We consider several classes of observables in presence of Wilson loops, and we evaluate them with the help of supersymmetric localization. We compute Wilson loop vacuum expectation values, correlators of multiple coincident Wilson loops and one-point functions of chiral operators in presence of them acting as superconformal defects. We extend this analysis to the most general case considering chiral operators and multiple Wilson loops scattered in all the possible ways among the vector multiplets of the quiver. Finally, we identify twisted and untwisted observables which probe the orbifold of $AdS_5\times S^5$ with the aim of testing possible holographic perspectives of quiver theories in $\mathcal{N}=2$.
high energy physics theory
Exceptional points (EPs) are exotic degeneracies of non-Hermitian systems, where the eigenvalues and the corresponding eigenvectors simultaneously coalesce in parameter space, and these degeneracies are sensitive to tiny perturbations on the system. Here we report an experimental observation of the EP in a hybrid quantum system consisting of dense nitrogen (P1) centers in diamond coupled to a coplanar-waveguide resonator. These P1 centers can be divided into three subensembles of spins, and cross relaxation occurs among them. As a new method to demonstrate this EP, we pump a given spin subensemble with a drive field to tune the magnon-photon coupling in a wide range. We observe the EP in the middle spin subensemble coupled to the resonator mode, irrespective of which spin subensemble is actually driven. This robustness of the EP against pumping reveals the key role of the cross relaxation in P1 centers. It offers a novel way to convincingly prove the existence of the cross-relaxation effect via the EP.
quantum physics
Aaronson and Ambainis (2009) and Chailloux (2018) showed that fully symmetric (partial) functions do not admit exponential quantum query speedups. This raises a natural question: how symmetric must a function be before it cannot exhibit a large quantum speedup? In this work, we prove that hypergraph symmetries in the adjacency matrix model allow at most a polynomial separation between randomized and quantum query complexities. We also show that, remarkably, permutation groups constructed out of these symmetries are essentially the only permutation groups that prevent super-polynomial quantum speedups. We prove this by fully characterizing the primitive permutation groups that allow super-polynomial quantum speedups. In contrast, in the adjacency list model for bounded-degree graphs (where graph symmetry is manifested differently), we exhibit a property testing problem that shows an exponential quantum speedup. These results resolve open questions posed by Ambainis, Childs, and Liu (2010) and Montanaro and de Wolf (2013).
quantum physics
Twin-field (TF) quantum key distribution (QKD) promises high key rates at long distances, beating the rate-distance limit. Here, applying the sending-or-not-sending TF QKD protocol, we experimentally demonstrate secure key distribution breaking the absolute key-rate limit of repeaterless QKD over 509 km and 408 km of ultra-low-loss optical fibre and 350 km of standard optical fibre. Two independent lasers are used as the source, with a remote frequency-locking technique over 500 km of fibre; practical optical fibres are used as the optical path with appropriate noise filtering; and finite-key effects are considered in the key-rate analysis. The secure key rates obtained at different distances are more than 5 times higher than the conditional limit of repeaterless QKD, a bound assuming the same detection loss in the comparison. The achieved secure key rate is also higher than what a traditional QKD protocol could achieve with a perfect repeaterless QKD device, even with an infinite number of sent pulses. Our results show that the protocol and technologies applied in this experiment enable TF QKD to achieve high secure key rates at long distribution distances, and hence it is practically useful for field implementation of intercity QKD.
quantum physics
In this paper, we generalize proximal methods that were originally designed for convex optimization on normed vector spaces to non-convex pose graph optimization (PGO) on special Euclidean groups, and show that our proposed generalized proximal methods for PGO converge to first-order critical points. Furthermore, we propose methods that significantly accelerate the rates of convergence with almost no loss of theoretical guarantees. In addition, our proposed methods can be easily distributed and parallelized with no compromise of efficiency. The efficacy of this work is validated through implementation on simultaneous localization and mapping (SLAM) and distributed 3D sensor network localization, which indicates that our proposed methods converge to sufficient accuracy for practical use considerably faster than existing techniques.
mathematics
In this paper, we focus on the localization of a passive source from time difference of arrival (TDOA) measurements. TDOA values are computed with respect to pairs of fixed sensors that are required to be accurately time-synchronized. This constitutes a weakness as all synchronization techniques are vulnerable to delay injections. Attackers are able either to spoof the signal or to inject asymmetric delays in the communication channel. By nature, TDOA measurements are highly sensitive to time-synchronization offsets between sensors. Our first contribution is to show that timing attacks can severely affect the localization process. With a delay of a few microseconds injected on one sensor, the resulting estimate might be several kilometers away from the true location of the unknown source. We also show that residual analysis does not enable the detection and identification of timing attacks. Our second contribution is to propose a two-step TDOA-localization technique that is robust against timing attacks. It uses a known source to define a weight for each pair of sensors, reflecting the confidence in their time synchronization. Our solution then uses the weighted least-squares estimator with the newly created weights and the TDOA measurements received from the unknown source. As a result, our method either identifies the network as being too corrupt to localize, or gives a corrected estimate of the unknown position along with a confidence metric. Numerical results illustrate the performance of our technique.
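The second step described above, a weighted least-squares fit to the TDOA measurements, can be sketched as follows. The Gauss-Newton solver, sensor geometry, and weight values here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def tdoa_wls(sensors, tdoa, weights, x0, iters=50):
    """Locate a source from TDOA-derived range differences by weighted
    least squares (Gauss-Newton iteration). tdoa[i-1] is the range
    difference between sensor i and reference sensor 0."""
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)  # distance to each sensor
        r = (d[1:] - d[0]) - tdoa                # residuals vs. measurements
        u = (x - sensors) / d[:, None]           # unit vectors sensor -> source
        J = u[1:] - u[0]                         # Jacobian of the residuals
        x = x - np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

# Synthetic check: four sensors, noise-free measurements, unequal weights
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
d_true = np.linalg.norm(sensors - source, axis=1)
tdoa = d_true[1:] - d_true[0]                    # exact range differences
est = tdoa_wls(sensors, tdoa, np.array([1.0, 0.5, 2.0]), x0=[5.0, 5.0])
```

With noise-free measurements the weights do not change the minimizer; in the attack setting of the paper, down-weighting a poorly synchronized sensor pair is what keeps the estimate close to the true position.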
electrical engineering and systems science
It is well known that every stable matching instance $I$ has a rotation poset $R(I)$ that can be computed efficiently and the downsets of $R(I)$ are in one-to-one correspondence with the stable matchings of $I$. Furthermore, for every poset $P$, an instance $I(P)$ can be constructed efficiently so that the rotation poset of $I(P)$ is isomorphic to $P$. In this case, we say that $I(P)$ realizes $P$. Many researchers exploit the rotation poset of an instance to develop fast algorithms or to establish the hardness of stable matching problems. In order to gain a parameterized understanding of the complexity of sampling stable matchings, Bhatnagar et al. [SODA 2008] introduced stable matching instances whose preference lists are restricted but nevertheless model situations that arise in practice. In this paper, we study four such parameterized restrictions; our goal is to characterize the rotation posets that arise from these models: $k$-bounded, $k$-attribute, $(k_1, k_2)$-list, $k$-range. We prove that there is a fixed constant $k$ such that every rotation poset is realized by some instance in each of the first three models. We describe efficient algorithms for constructing such instances given the Hasse diagram of a poset. As a consequence, the fundamental problem of counting stable matchings remains $\#$BIS-complete even for these restricted instances. For $k$-range preferences, we show that a poset $P$ is realizable if and only if the Hasse diagram of $P$ has pathwidth bounded by functions of $k$. Using this characterization, we show that the following problems are fixed parameter tractable when parameterized by the range of the instance: exactly counting and uniformly sampling stable matchings, finding median, sex-equal, and balanced stable matchings.
computer science
Proof-of-work blockchains must implement a difficulty adjustment algorithm (DAA) in order to maintain a consistent inter-arrival time between blocks. Conventional DAAs are essentially feedback controllers, and as such, they are inherently reactive. This approach leaves them susceptible to manipulation and often causes them to either under- or over-correct. We present Bonded Mining, a proactive DAA that works by collecting hash rate commitments secured by bond from miners. The difficulty is set directly from the commitments and the bond is used to penalize miners who deviate from their commitment. We devise a statistical test that is capable of detecting hash rate deviations by utilizing only on-blockchain data. The test is sensitive enough to detect a variety of deviations from commitments, while almost never misclassifying honest miners. We demonstrate in simulation that, under reasonable assumptions, Bonded Mining is more effective at maintaining a target block time than the Bitcoin Cash DAA, one of the newest and most dynamic DAAs currently deployed. In this preliminary work, the lowest hash rate miner our approach supports is 1% of the total and we directly consider only two types of fundamental attacks. Future work will address these limitations.
computer science
We study the analytic structure for the eigenvalues of the one-dimensional Dirac oscillator, by analytically continuing its frequency on the complex plane. A twofold Riemann surface is found, connecting the two states of a pair of particle and antiparticle. One can, at least in principle, accomplish the transition from a positive energy state to its antiparticle state by moving the frequency continuously on the complex plane, without changing the Hamiltonian after transition. This result provides a visual explanation for the absence of a negative energy state with the quantum number n=0.
quantum physics
This article presents a filter for state-space models based on Bellman's dynamic programming principle applied to the mode estimator. The proposed Bellman filter (BF) generalises the Kalman filter (KF) including its extended and iterated versions, while remaining equally inexpensive computationally. The BF is also (unlike the KF) robust under heavy-tailed observation noise and applicable to a wider range of (nonlinear and non-Gaussian) models, involving e.g. count, intensity, duration, volatility and dependence. (Hyper)parameters are estimated by numerically maximising a BF-implied log-likelihood decomposition, which is an alternative to the classic prediction-error decomposition for linear Gaussian models. Simulation studies reveal that the BF performs on par with (or even outperforms) state-of-the-art importance-sampling techniques, while requiring a fraction of the computational cost, being straightforward to implement and offering full scalability to higher dimensional state spaces.
statistics
Subtracting event samples is a common task in LHC simulation and analysis, and standard solutions tend to be inefficient. We employ generative adversarial networks to produce new event samples with a phase space distribution corresponding to added or subtracted input samples. We first illustrate for a toy example how such a network beats the statistical limitations of the training data. We then show how such a network can be used to subtract background events or to include non-local collinear subtraction events at the level of unweighted 4-vector events.
high energy physics phenomenology
We analyze the spectral properties and peculiar behavior of solutions of a damped wave equation on a finite interval with a singular damping of the form $\alpha/x$, $\alpha>0$. We establish the exponential stability of the semigroup for all positive $\alpha$, and determine conditions for the spectrum to consist of a finite number of eigenvalues. As a consequence, we fully characterize the set of initial conditions for which there is extinction of solutions in finite time. Finally, we propose two open problems related to extremal decay rates of solutions.
mathematics
Spectrum sensing is a key technology for cognitive radios. We present spectrum sensing as a classification problem and propose a sensing method based on deep learning classification. We normalize the received signal power to overcome the effects of noise power uncertainty. We train the model with as many types of signals as possible, as well as noise data, to enable the trained network model to adapt to untrained new signals. We also use transfer learning strategies to improve the performance on real-world signals. Extensive experiments are conducted to evaluate the performance of this method. The simulation results show that the proposed method performs better than two traditional spectrum sensing methods, i.e., the maximum-minimum eigenvalue ratio-based method and the frequency-domain entropy-based method. In addition, the experimental results for new untrained signal types show that our method can adapt to the detection of these new signals. Furthermore, the real-world signal detection experiments show that the detection performance can be further improved by transfer learning. Finally, experiments under colored noise show that our proposed method has superior detection performance under colored noise, while the traditional methods suffer a significant performance degradation, which further validates the superiority of our method.
electrical engineering and systems science
Three types of effective Hamiltonians (three-band, two-band, and one-band) for HgBa$_2$CuO$_4$, as well as a three-band effective Hamiltonian for La$_2$CuO$_4$, are derived beyond the level of the constrained-GW approximation combined with the self-interaction correction (cGW-SIC) introduced in Hirayama et al., Phys. Rev. B 98, 134501 (2018), by improving the treatment of the interband Hartree energy. The charge gap and antiferromagnetic ordered moment obtained by solving the present effective Hamiltonians show good agreement with the experimental results, indicating the importance of the present refinement. The obtained Hamiltonians will serve to clarify the electronic structures of these copper oxide superconductors and to elucidate the superconducting mechanism.
condensed matter
We present new ALMA observations of the molecular gas and far-infrared continuum around the brightest cluster galaxy (BCG) in the cool-core cluster MACS 1931.8-2635. Our observations reveal $1.9 \pm 0.3 \times 10^{10}$ M$_{\odot}$ of molecular gas, on par with the largest known reservoirs of cold gas in a cluster core. We detect CO(1-0), CO(3-2), and CO(4-3) emission from both diffuse and compact molecular gas components that extend from the BCG center out to $\sim30$ kpc to the northwest, tracing the UV knots and H$\alpha$ filaments observed by HST. Due to the lack of morphological symmetry, we hypothesize that the $\sim300$ km s$^{-1}$ velocity of the CO in the tail is not due to concurrent uplift by AGN jets; rather, we may be observing the aftermath of a recent AGN outburst. The CO spectral line energy distribution suggests that molecular gas excitation is influenced by processes related to both star formation and recent AGN feedback. Continuum emission in Bands 6 and 7 arises from dust and is spatially coincident with young stars and nebular emission observed in the UV and optical. We constrain the temperature of several dust clumps to be $\lesssim 10$ K, which is too cold for the dust to be directly interacting with the surrounding $\sim 4.8$ keV intracluster medium (ICM). The cold dust population extends beyond the observed CO emission and must either be protected from interacting with the ICM or be surrounded by local volumes of ICM that are several keV colder than observed by Chandra.
astrophysics
We prove that optimal control of light energy storage in disordered media can be reached by wavefront shaping. For this purpose, we build an operator for dwell-times from the scattering matrix, and characterize its full eigenvalue distribution both numerically and analytically in the diffusive regime, where the thickness $L$ of the medium is much larger than the mean free path $\ell$. We show that the distribution has a finite support with a maximal dwell-time larger than the most likely value by a factor $(L/\ell)^2\gg 1 $. This reveals that the highest dwell-time eigenstates deposit more energy than the open channels of the medium. Finally, we show that the dwell-time operator can be used to store energy in resonant targets buried in complex media.
physics
The presence of a confining boundary can modify the local structure of a liquid markedly. In addition, small samples of finite size are known to exhibit systematic deviations of thermodynamic quantities relative to their bulk values. Here, we consider the static structure factor of a liquid sample in slab geometry with open boundaries at the surfaces, which can be thought of as virtually cutting out the sample from a macroscopically large, homogeneous fluid. This situation is a relevant limit for the interpretation of grazing-incidence diffraction experiments at liquid interfaces and films. We derive an exact, closed expression for the slab structure factor, with the bulk structure factor as the only input. This shows that such free boundary conditions cause significant differences between the two structure factors, in particular at small wavenumbers. An asymptotic analysis of this result yields the scaling exponent and an accurate, useful approximation of these finite-size corrections. Furthermore, the open boundaries permit the interpretation of the slab as an open system, supporting particle exchange with a reservoir. We relate the slab structure factor to the particle number fluctuations and discuss conditions under which the subvolume of the slab represents a grand canonical ensemble with chemical potential $\mu$ and temperature $T$. Thus, the open slab serves as a test-bed for the small-system thermodynamics in a $\mu T$ reservoir. We provide a microscopically justified and exact result for the size dependence of the isothermal compressibility. Our findings are corroborated by simulation data for Lennard-Jones liquids at two representative temperatures.
condensed matter
The symmetry operators generating the hidden $\mathbb{Z}_2$ symmetry of the asymmetric quantum Rabi model (AQRM) at bias $\epsilon \in \frac{1}{2}\mathbb{Z}$ have recently been constructed by V. V. Mangazeev et al. [J. Phys. A: Math. Theor. 54 12LT01 (2021)]. We start with this result to determine symmetry operators for the $N$-qubit generalisation of the AQRM, also known as the biased Dicke model, at special biases. We also prove for general $N$ that the symmetry operators, which commute with the Hamiltonian of the biased Dicke model, generate a $\mathbb{Z}_2$ symmetry.
quantum physics
In this paper, we give a recursive algorithm to compute the multivariable Zassenhaus formula $$e^{X_1+X_2+\cdots +X_n}=e^{X_1}e^{X_2}\cdots e^{X_n}\prod_{k=2}^{\infty}e^{W_k}$$ and derive an effective recursion formula for $W_k$.
mathematics
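The expansion in the abstract above can be sanity-checked numerically in the two-variable case. Below is a minimal pure-Python sketch: it uses the standard closed forms $W_2=-\tfrac12[X_1,X_2]$ and $W_3=\tfrac13[X_2,[X_1,X_2]]+\tfrac16[X_1,[X_1,X_2]]$ (textbook results, not the paper's recursion), with illustrative 2x2 matrices and a small scale so that truncating after $W_3$ leaves only a tiny error.

```python
# Numerical check of the two-variable Zassenhaus expansion
#   e^{X1+X2} = e^{X1} e^{X2} e^{W2} e^{W3} ...
# truncated after W3. The matrices, the scale t, and the tolerance are
# illustrative choices; W2 and W3 are the standard closed forms.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, s=1.0):          # A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def scale(A, s):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):                # commutator [A, B] = AB - BA
    return add(matmul(A, B), matmul(B, A), -1.0)

def expm(A, terms=25):         # matrix exponential via Taylor series
    R = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        T = scale(matmul(T, A), 1.0 / n)
        R = add(R, T)
    return R

t = 0.01                       # small, so the truncation error is O(t^4)
X1 = scale([[0.0, 1.0], [0.0, 0.0]], t)
X2 = scale([[0.0, 0.0], [1.0, 0.0]], t)

W2 = scale(comm(X1, X2), -0.5)
W3 = add(scale(comm(X2, comm(X1, X2)), 1.0 / 3),
         scale(comm(X1, comm(X1, X2)), 1.0 / 6))

lhs = expm(add(X1, X2))
rhs = matmul(matmul(matmul(expm(X1), expm(X2)), expm(W2)), expm(W3))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

With these choices the mismatch `err` is of order $t^4 \approx 10^{-8}$, consistent with the first neglected term $W_4$.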
The primary X-ray emission in active galactic nuclei (AGNs) is widely believed to be due to Comptonisation of the thermal radiation from the accretion disc in a corona of hot electrons. The resulting spectra can, to first approximation, be modelled with a cut-off power law, the photon index and the high-energy roll-over encoding information on the physical properties of the X-ray-emitting region. The photon index and high-energy curvature of AGNs ($\Gamma$, E$_c$) have been extensively studied since the launch of X-ray satellites operating above 10 keV. However, high-precision measurements of these two observables have only been obtained in recent years thanks to the unprecedented sensitivity of NuSTAR up to 79 keV. We aim to derive relations between the phenomenological parameters ($\Gamma$ and E$_c$) and the intrinsic properties of the X-ray-emitting region (the hot corona), namely its optical depth and temperature. We use MoCA (Monte Carlo code for Comptonisation in Astrophysics) to produce synthetic spectra for the case of an AGN with M$_{BH}$=1.5$\times$10$^8$ M$_{sun}$ and an accretion rate of 10%, and then compare them with the widely used power-law model with an exponential high-energy cutoff. We provide phenomenological relations between $\Gamma$, E$_c$ and the opacity and temperature of the coronal electrons for the cases of spherical and slab-like coronae. These relations give rise to a well-defined parameter space which fully contains the observed values. Exploiting the increasing number of high-energy cut-offs quoted in the literature, we report on the comparison of physical quantities obtained using MoCA with those estimated using commonly adopted spectral Comptonisation models. Finally, we discuss the negligible impact of different black hole masses and accretion rates on the inferred relations.
astrophysics
QUBIC is a novel kind of polarimeter optimized for the measurement of the B-mode polarization of the Cosmic Microwave Background, one of the major challenges of observational cosmology. The signal is expected to be of the order of a few tens of nK, prone to instrumental systematic effects and polluted by various astrophysical foregrounds which can only be controlled through multichroic observations. QUBIC is designed to address these observational issues with its unique capability to combine the advantages of interferometry in terms of control of instrumental systematic effects with those of bolometric detectors in terms of wide-band, background-limited sensitivity. The QUBIC synthesized beam has a frequency-dependent shape that allows producing maps of the CMB polarization in multiple sub-bands within the two physical bands of the instrument (150 and 220 GHz). This unique capability distinguishes QUBIC from other instruments and makes it particularly well suited to characterize and remove Galactic foreground contamination. In this article, the first of a series of eight, we give an overview of the QUBIC instrument design and the main results of the calibration campaign, and present the scientific program of QUBIC, including the measurement of primordial B-modes and Galactic foregrounds. We give forecasts for typical observations and measurements: with three years of integration, assuming perfect foreground removal and stable atmospheric conditions from our site in Argentina, our simulations show that we can achieve a statistical sensitivity to the effective tensor-to-scalar ratio (including primordial and foreground B-modes) $\sigma(r)=0.015$. Assuming the 220 GHz channel is used to subtract foreground contamination together with data from other surveys such as the Planck 353 GHz channel, our sensitivity to primordial tensors is given by that of the 150 GHz channel alone and is $\sigma(r)=0.021$.
astrophysics
We propose a symmetry of $T\bar T$ deformed 2D CFT which preserves the trace relation. The deformed conformal Killing equation is obtained. Once we let the background metric run with the deformation parameter $\mu$, the deformation contributes an additional term to the conformal Killing equation, which plays the role of the renormalization group flow of the metric. The conformal symmetry coincides with the fixed point. On the gravity side, this deformed conformal Killing equation can be described by a new boundary condition of AdS$_3$. In addition, based on the deformed conformal Killing equation, we derive that the stress tensor of the deformed CFT equals the Brown-York quasilocal stress tensor on a finite boundary with a counterterm. For a specific example, the BTZ black hole, we obtain the $T\bar T$ deformed conformal Killing vectors, and the associated conserved charges are also studied.
high energy physics theory
Einstein-Podolsky-Rosen (EPR) steering, which is regarded as a category of quantum nonlocal correlations, possesses an asymmetry, in contrast with entanglement and Bell nonlocality. For multipartite EPR steering, monogamy, which forbids two observers from steering a third one simultaneously, emerges as an essential property. However, more configurations of shareability relations in the reduced subsystem, beyond monogamy, can be observed by increasing the number of measurement settings, for which experimental verification is still absent. Here, in an optical experiment, we provide a proof-of-principle demonstration of shareability of EPR steering without the constraint of monogamy in a three-qubit system, in which Alice can be steered by Bob and Charlie simultaneously. Moreover, based on reduced bipartite EPR steering detection, we verify genuine three-qubit entanglement. This work provides a basis for an improved understanding of multipartite EPR steering and has potential applications in many quantum information protocols, such as multipartite entanglement detection and quantum cryptography.
quantum physics
An open problem in machine learning is whether flat minima generalize better and how to compute such minima efficiently. This is a very challenging problem. As a first step towards understanding this question we formalize it as an optimization problem with weakly interacting agents. We review appropriate background material from the theory of stochastic processes and provide insights that are relevant to practitioners. We propose an algorithmic framework for an extended stochastic gradient Langevin dynamics and illustrate its potential. The paper is written as a tutorial, and presents an alternative use of multi-agent learning. Our primary focus is on the design of algorithms for machine learning applications; however the underlying mathematical framework is suitable for the understanding of large scale systems of agent based models that are popular in the social sciences, economics and finance.
statistics
We consider a bilayer system of two-dimensional Bose-Einstein-condensed dipolar dark excitons (upper layer) and bright ones (bottom layer). We demonstrate that the interlayer interaction leads to a mixing between excitations from different layers. This mixing leads to the appearance of a second spectral branch in the spectrum of bright condensate. The excitation spectrum of the condensate of dark dipolar excitons then becomes optically accessible during luminescence spectra measurements of the bright condensate, which allows one to probe its kinetic properties. This approach is relevant for experimental setups, where detection via conventional techniques remains challenging; in particular, the suggested method is useful for studying dark dipolar excitons in transition metal dichalcogenide monolayers.
condensed matter
Complex systems are a proving ground for fundamental interactions between components and their collective emergent phenomena. Through intricate design, integrated photonics offers intriguing nonlinear interactions that create new patterns of light. In particular, the canonical Kerr-nonlinear resonator becomes unstable with a sufficiently intense traveling-wave excitation, yielding instead a Turing pattern composed of a few interfering waves. These resonators also support the localized soliton pulse as a separate nonlinear stationary state. Kerr solitons are remarkably versatile for applications, but they cannot emerge from constant excitation. Here, we explore an edge-less photonic-crystal resonator (PhCR) that enables spontaneous formation of a soliton pulse in place of the Turing pattern. We design a PhCR in the regime of single-azimuthal-mode engineering to re-balance Kerr-nonlinear frequency shifts in favor of the soliton state, commensurate with how group-velocity dispersion balances nonlinearity. Our experiments establish PhCR solitons as mode-locked pulses by way of ultraprecise optical-frequency measurements, and we characterize their fundamental properties. Our work shows that sub-wavelength nanophotonic design expands the palette for nonlinear engineering of light.
physics
This chapter looks at the spatial distribution and mobility patterns of essential and non-essential workers before and during the COVID-19 pandemic in London and compares them to the rest of the UK. In the 3-month lockdown that started on 23 March 2020, 20% of the workforce was deemed to be pursuing essential jobs. The other 80% were either furloughed, which meant being supported by the government to not work, or working from home. Based on travel journey data between zones, trips were decomposed into essential and non-essential trips. Despite some big regional differences within the UK, we find that essential workers have much the same spatial patterning as non-essential workers for all occupational groups containing essential and non-essential workers. Also, the amount of travel time saved by working from home during the pandemic is roughly the same proportion (80%) as the separation between essential and non-essential workers. Further, the loss of travel, the reduction in workers, the reduction in retail spending, as well as the increase in the use of parks are examined in different London boroughs using Google Mobility Reports, which give us a clear picture of what has happened over the last 6 months since the first lockdown. These reports also now imply that a second wave of infection is beginning.
physics
The vector-matrix Riemann boundary value problem for the unit disk with piecewise constant matrix is constructively solved by a method of functional equations. By functional equations we mean iterative functional equations with shifts involving compositions of unknown functions analytic in mutually disjoint disks. The functional equations are written as an infinite linear algebraic system on the coefficients of the corresponding Taylor series. The compactness of the shift operators implies justification of the truncation method for this infinite system. The unknown functions and partial indices can be calculated by truncated systems.
mathematics
Beyond-intercalation batteries promise a step-change in energy storage compared to intercalation based lithium- and sodium-ion batteries. However, only performance metrics that include all cell components and operation parameters can tell whether a true advance over intercalation batteries has been achieved.
physics
Accurate positioning and fast traversal times determine the productivity in machining applications. This paper demonstrates a hierarchical contour control implementation for the increase of productivity in positioning systems. The high-level controller pre-optimizes the input to a low-level cascade controller, using a contouring predictive control approach. This control structure requires tuning of multiple parameters. We propose a sample-efficient joint tuning algorithm, where the performance metrics associated with the full geometry traversal are modelled as Gaussian processes and used to form the global cost and the constraints in a constrained Bayesian optimization algorithm. This approach enables the trade-off between fast traversal, high tracking accuracy, and suppression of vibrations in the system. The performance improvement is evaluated numerically when tuning different combinations of parameters. We demonstrate that jointly tuning the parameters of the contour- and the low-level controller achieves the best performance in terms of time, tracking accuracy, and minimization of the vibrations in the system.
electrical engineering and systems science
This work presents a novel policy iteration algorithm to tackle nonzero-sum stochastic impulse games arising naturally in many applications. Despite the obvious impact of solving such problems, there are no suitable numerical methods available, to the best of our knowledge. Our method relies on the recently introduced characterization of the value functions and Nash equilibrium via a system of quasi-variational inequalities. While our algorithm is heuristic and we do not provide a convergence analysis, numerical tests show that it performs convincingly in a wide range of situations, including the only analytically solvable example available in the literature at the time of writing.
mathematics
We discuss twisted bilayer graphene (TBG) based on a theorem of flat band ferromagnetism put forward by Mielke and Tasaki. According to this theorem, ferromagnetism occurs if the single particle density matrix of the flat band states is irreducible and we argue that this result can be applied to the quasi-flat bands of TBG that emerge around the charge-neutrality point for twist angles around the magic angle $\theta\sim1.05^\circ$. We show that the density matrix is irreducible in this case, thus predicting a ferromagnetic ground state for neutral TBG ($n=0$). We then show that the theorem can also be applied only to the flat conduction or valence bands, if the substrate induces a single-particle gap at charge neutrality. Also in this case, the corresponding density matrix turns out to be irreducible, leading to ferromagnetism at half filling ($n=\pm2$).
condensed matter
We study the applicability of quantum algorithms in computational game theory and generalize some results related to Subtraction games, which are sometimes referred to as one-heap Nim games. In quantum game theory, a subset of Subtraction games became the first explicitly defined class of zero-sum combinatorial games with provable separation between quantum and classical complexity of solving them. For a narrower subset of Subtraction games, an exact quantum sublinear algorithm is known that surpasses all deterministic algorithms for finding solutions with probability $1$. Typically, both Nim and Subtraction games are defined for only two players. We extend some known results to games for three or more players, while maintaining the same classical and quantum complexities: $\Theta\left(n^2\right)$ and $\tilde{O}\left(n^{1.5}\right)$ respectively.
quantum physics
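As classical background for the abstract above: for a standard two-player subtraction game with a fixed move set, the winning and losing positions can be computed by a simple dynamic program over heap sizes. The move set and the bound below are illustrative choices (the games studied in the paper have a more general structure and more players); this is the kind of classical baseline the quantum algorithms are compared against.

```python
# Dynamic program for a two-player subtraction game with move set S:
# the player to move at heap size i wins iff some allowed move leads
# to a losing position for the opponent. Runs in O(n * |S|) time.

def winning_positions(n, S):
    """win[i] is True iff the player to move at heap size i can force a win."""
    win = [False] * (n + 1)
    for i in range(1, n + 1):
        win[i] = any(s <= i and not win[i - s] for s in S)
    return win

win = winning_positions(12, {1, 2, 3})
# for S = {1, 2, 3} the losing positions are exactly the multiples of 4
```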
Innovation is the driving force of human progress. Recent urn models reproduce well the dynamics through which the discovery of a novelty may trigger further ones, in an expanding space of opportunities, but neglect the effects of social interactions. Here we focus on the mechanisms of collective exploration and we propose a model in which many urns, representing different explorers, are coupled through the links of a social network and exploit opportunities coming from their contacts. We study different network structures showing, both analytically and numerically, that the pace of discovery of an explorer depends on its centrality in the social network. Our model sheds light on the role that social structures play in discovery processes.
physics
Markov decision models (MDMs) used in practical applications are most often less complex than the underlying 'true' MDM. The reduction of model complexity is performed for several reasons. However, it is obviously of interest to know what kind of model reduction is reasonable (with regard to the optimal value) and what kind is not. In this article we propose a way to address this question. We introduce a sort of derivative of the optimal value as a function of the transition probabilities, which can be used to measure the (first-order) sensitivity of the optimal value w.r.t. changes in the transition probabilities. 'Differentiability' is obtained for a fairly broad class of MDMs, and the 'derivative' is specified explicitly. Our theoretical findings are illustrated by means of optimization problems in inventory control and mathematical finance.
mathematics
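The kind of sensitivity discussed above can be illustrated numerically on a toy model. The sketch below is not the paper's derivative construction: it simply estimates, by central finite differences, how the optimal value of an invented 2-state, 2-action MDM responds to a perturbation of one transition probability.

```python
# Toy numerical analogue of first-order sensitivity of the optimal value
# w.r.t. a transition probability p, on a made-up 2-state, 2-action MDM.

def optimal_value(p, gamma=0.9, iters=500):
    # P[a][s] = distribution over next states after action a in state s;
    # action 0 in state 0 reaches state 1 with probability p (the parameter
    # we perturb). R[a][s] are invented rewards.
    P = [[[1.0 - p, p], [0.3, 0.7]],
         [[0.8, 0.2], [0.5, 0.5]]]
    R = [[1.0, 0.0], [0.5, 2.0]]
    V = [0.0, 0.0]
    for _ in range(iters):          # value iteration to numerical fixpoint
        V = [max(R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(2))
                 for a in range(2)) for s in range(2)]
    return V[0]

p0, h = 0.4, 1e-5
sensitivity = (optimal_value(p0 + h) - optimal_value(p0 - h)) / (2.0 * h)
# positive here: raising p makes the high-reward state 1 easier to reach
```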
Data science and informatics tools have been proliferating recently within the computational materials science and catalysis fields. This proliferation has spurred the creation of various frameworks for automated materials screening, discovery, and design. Underpinning these frameworks are surrogate models with uncertainty estimates on their predictions. These uncertainty estimates are instrumental for determining which materials to screen next, but the computational catalysis field does not yet have a standard procedure for judging the quality of such uncertainty estimates. Here we present a suite of figures and performance metrics derived from the machine learning community that can be used to judge the quality of such uncertainty estimates. This suite probes the accuracy, calibration, and sharpness of a model quantitatively. We then show a case study where we judge various methods for predicting density-functional-theory-calculated adsorption energies. Of the methods studied here, we find that the best performer is a model where a convolutional neural network is used to supply features to a Gaussian process regressor, which then makes predictions of adsorption energies along with corresponding uncertainty estimates.
condensed matter
Various model-based diagnosis scenarios require the computation of the most preferred fault explanations. Existing algorithms that are sound (i.e., output only actual fault explanations) and complete (i.e., can return all explanations), however, require exponential space to achieve this task. As a remedy, to enable successful diagnosis on memory-restricted devices and for memory-intensive problem cases, we propose RBF-HS, a diagnostic search method based on Korf's well-known RBFS algorithm. RBF-HS can enumerate an arbitrary fixed number of fault explanations in best-first order within linear space bounds, without sacrificing the desirable soundness or completeness properties. Evaluations using real-world diagnosis cases show that RBF-HS, when used to compute minimum-cardinality fault explanations, in most cases saves substantial space (up to 98%) while requiring only reasonably more or even less time than Reiter's HS-Tree, a commonly used diagnosis search that is equally generally applicable, sound, complete, and best-first.
computer science
Given (small amounts of) time-series data from a high-dimensional, fine-grained, multiscale dynamical system, we propose a generative framework for learning an effective, lower-dimensional, coarse-grained dynamical model that is predictive of the fine-grained system's long-term evolution but also of its behavior under different initial conditions. We target fine-grained models as they arise in physical applications (e.g. molecular dynamics, agent-based models), the dynamics of which are strongly non-stationary but whose transition to equilibrium is governed by unknown slow processes that are largely inaccessible by brute-force simulations. Approaches based on domain knowledge heavily rely on physical insight in identifying temporally slow features and fail to enforce the long-term stability of the learned dynamics. On the other hand, purely statistical frameworks lack interpretability and rely on large amounts of expensive simulation data (long and multiple trajectories), as they cannot infuse domain knowledge. The proposed generative framework achieves the aforementioned desiderata by employing a flexible prior on the complex plane for the latent, slow processes, and an intermediate layer of physics-motivated latent variables that reduces reliance on data and imparts an inductive bias. In contrast to existing schemes, it does not require the a priori definition of projection operators from the fine-grained description and addresses simultaneously the tasks of dimensionality reduction and model estimation. We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics where probabilistic, long-term predictions of phenomena not contained in the training data are produced.
statistics
The present paper describes singing voice synthesis based on convolutional neural networks (CNNs). Singing voice synthesis systems based on deep neural networks (DNNs) are currently being proposed and are improving the naturalness of synthesized singing voices. As singing voices represent a rich form of expression, a powerful technique to model them accurately is required. In the proposed technique, long-term dependencies of singing voices are modeled by CNNs. An acoustic feature sequence is generated for each segment that consists of long-term frames, and a natural trajectory is obtained without the parameter generation algorithm. Furthermore, a computational complexity reduction technique, which drives the DNNs in different time units depending on the type of musical score feature, is proposed. Experimental results show that the proposed method can synthesize natural-sounding singing voices much faster than the conventional method.
electrical engineering and systems science
Purpose: Probe-based Confocal Laser Endomicroscopy (pCLE) enables performing an optical biopsy, providing real-time microscopic images via a probe. pCLE probes consist of multiple optical fibres arranged in a bundle, which taken together generate signals in an irregularly sampled pattern. Current pCLE reconstruction is based on interpolating irregular signals onto an over-sampled Cartesian grid, using a naive linear interpolation. It was shown that Convolutional Neural Networks (CNNs) could improve pCLE image quality. Although classical CNNs have been applied to pCLE, their input data were limited to reconstructed images, in contrast to the irregular data produced by pCLE. Methods: We compare pCLE reconstruction and super-resolution (SR) methods taking irregularly sampled or reconstructed pCLE images as input. We also propose to embed Nadaraya-Watson (NW) kernel regression into the CNN framework as a novel trainable CNN layer. Using the NW layer and exemplar-based super-resolution, we design an NWNetSR architecture that allows for reconstructing high-quality pCLE images directly from the irregularly sampled input data. We created synthetic sparse pCLE images to evaluate our methodology. Results: The results were validated through an image quality assessment based on a combination of the following metrics: peak signal-to-noise ratio and the structural similarity index. Conclusion: Both dense and sparse CNNs outperform the reconstruction method currently used in the clinic. The main contributions of our study are a comparison of sparse and dense approaches to pCLE image reconstruction, the implementation of trainable generalised NW kernel regression, and the adaptation of synthetic data for training pCLE SR.
electrical engineering and systems science
Many tasks in graphics and vision demand machinery for converting shapes into consistent representations with sparse sets of parameters; these representations facilitate rendering, editing, and storage. When the source data is noisy or ambiguous, however, artists and engineers often manually construct such representations, a tedious and potentially time-consuming process. While advances in deep learning have been successfully applied to noisy geometric data, the task of generating parametric shapes has so far been difficult for these methods. Hence, we propose a new framework for predicting parametric shape primitives using deep learning. We use distance fields to transition between shape parameters like control points and input data on a pixel grid. We demonstrate efficacy on 2D and 3D tasks, including font vectorization and surface abstraction.
computer science
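To illustrate the distance-field idea mentioned above in its simplest form: a parametric primitive induces a signed distance value at every cell of a pixel grid, which gives a smooth bridge between shape parameters and pixel data. The circle, its centre, radius, and grid size below are hypothetical choices for illustration; the paper's primitives and learning pipeline are far more general.

```python
import math

# Signed distance field of one hypothetical parametric primitive: a circle
# with centre (cx, cy) and radius r, sampled on a 32x32 pixel grid.
cx, cy, r = 16.0, 16.0, 8.0
field = [[math.hypot(x - cx, y - cy) - r for x in range(32)] for y in range(32)]
# field[y][x] < 0 inside the circle, ~0 on its boundary, > 0 outside
```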
Non-classical photon sources are a crucial resource for distributed quantum networks. Photons generated from matter systems with memory capability are particularly promising, as they can be integrated into a network where each source is used on-demand. Among all kinds of solid state and atomic quantum memories, room-temperature atomic vapours are especially attractive due to their robustness and potential scalability. To date, room-temperature photon sources have been limited either in their memory time or the purity of the photonic state. Here we demonstrate a single-photon source based on room-temperature memory. Following heralded loading of the memory, a single photon is retrieved from it after a variable storage time. The single-photon character of the retrieved field is validated by the strong suppression of the two-photon component with antibunching as low as $g^{(2)}_{\text{RR|W=1}} = 0.20 \pm 0.07$. Non-classical correlations between the heralding and the retrieved photons are maintained for up to $\tau_{\text{NC}}^{\mathcal R} = (0.68\pm 0.08)$ ms, more than two orders of magnitude longer than previously demonstrated with other room-temperature systems. Correlations sufficient for violating Bell inequalities exist for up to $\tau_{\text{BI}} = (0.15 \pm 0.03)$ ms.
quantum physics
This paper produces an efficient Semidefinite Programming (SDP) solution for community detection that incorporates non-graph data, which in this context is known as side information. SDP is an efficient solution for standard community detection on graphs. We formulate a semi-definite relaxation for the maximum likelihood estimation of node labels, subject to observing both graph and non-graph data. This formulation is distinct from the SDP solution of standard community detection, but maintains its desirable properties. We calculate the exact recovery threshold for three types of non-graph information, which in this paper are called side information: partially revealed labels, noisy labels, as well as multiple observations (features) per node with arbitrary but finite cardinality. We find that SDP has the same exact recovery threshold in the presence of side information as maximum likelihood with side information. Thus, the methods developed herein are computationally efficient as well as asymptotically accurate for the solution of community detection in the presence of side information. Simulations show that the asymptotic results of this paper can also shed light on the performance of SDP for graphs of modest size.
statistics
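The semidefinite relaxation described above (maximizing a linear function of a positive-semidefinite matrix with unit diagonal) can be illustrated, with side information omitted, by a toy projected-gradient solver for the two-community case. The formulation and constants below are simplifications for illustration, not the paper's algorithm.

```python
import numpy as np

def sdp_community_detection(A, lam=None, steps=300, lr=0.1):
    """Toy solver for the two-community SDP relaxation
        max <A - lam*J, X>  s.t.  X PSD, diag(X) = 1,
    via gradient steps followed by projection onto the PSD cone.
    A simplified sketch without side information."""
    n = A.shape[0]
    if lam is None:
        lam = A.mean()              # crude edge-density estimate
    G = A - lam * np.ones((n, n))   # gradient of the linear objective
    X = np.eye(n)
    for _ in range(steps):
        X = X + lr * G
        w, V = np.linalg.eigh((X + X.T) / 2)
        X = (V * np.clip(w, 0, None)) @ V.T  # clip negative eigenvalues
        np.fill_diagonal(X, 1.0)             # re-impose unit diagonal
    w, V = np.linalg.eigh(X)
    return np.sign(V[:, -1])        # labels from the top eigenvector
```

On an adjacency matrix with two dense blocks and no cross edges, this sketch recovers the planted labels up to a global sign flip.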
We consider cubic interactions of the form $s-Y-Y$ between a massless integer superspin $s$ supermultiplet and two massless arbitrary integer or half-integer superspin $Y$ supermultiplets. We focus on non-minimal interactions generated by gauge-invariant supercurrent multiplets which are bilinear in the superfield strength of the superspin $Y$ supermultiplet. We find two types of consistent supercurrents. The first corresponds to conformal integer superspin $s$ supermultiplets, exists only for even values of $s$, $s=2\ell+2$, for arbitrary values of $Y$, and is unique. The second corresponds to Poincar\'e integer superspin $s$ supermultiplets and exists for arbitrary values of $s$ and $Y$.
high energy physics theory
We detail techniques to optimise high-level classical simulations of Shor's quantum factoring algorithm. Chief among these is to examine the entangling properties of the circuit and to effectively map it across the one-dimensional structure of a matrix product state. Compared to previous approaches whose space requirements depend on $r$, the solution to the underlying order-finding problem of Shor's algorithm, our approach depends on its factors. We performed a matrix product state simulation of a 60-qubit instance of Shor's algorithm that would otherwise be infeasible to complete without an optimised entanglement mapping.
quantum physics
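The order-finding problem referenced above, computing the order $r$ with $a^r \equiv 1 \pmod N$, is the core of Shor's algorithm; the classical reduction from order finding to factoring can be sketched directly. Here a brute-force order computation stands in for the quantum (or MPS-simulated) part, so this is feasible only for small $N$.

```python
from math import gcd

def multiplicative_order(a, N):
    """Classically compute the order r of a modulo N (the quantity the
    quantum part of Shor's algorithm finds); feasible only for small N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Recover a nontrivial factor of N from the order of a, following the
    classical post-processing of Shor's algorithm."""
    if gcd(a, N) != 1:
        return gcd(a, N)       # lucky guess: a shares a factor with N
    r = multiplicative_order(a, N)
    if r % 2 == 1:
        return None            # odd order: retry with a different base
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None            # trivial square root: retry
    return gcd(y + 1, N)
```

For instance, `shor_factor(15, 7)` finds $r = 4$ and returns the factor 5.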
Infinite projected entangled pair states (iPEPS) provide a convenient variational description of infinite, translationally-invariant two-dimensional quantum states. However, the simulation of local excitations is not directly possible due to the translationally-invariant ansatz. Furthermore, as iPEPS are either identical or orthogonal, expectation values between different states as required during the evaluation of non-equal-time correlators are ill-defined. Here, we show that by introducing auxiliary states on each site, it becomes possible to simulate both local excitations and evaluate non-equal-time correlators in an iPEPS setting under real-time evolution. We showcase the method by simulating the t-J model after a single hole has been placed in the half-filled antiferromagnetic background and evaluating both return probabilities and spin correlation functions, as accessible in quantum gas microscopes.
condensed matter
In this paper, an easy-to-implement method of stochastically weighing short- and long-memory linear processes is introduced. The method yields confidence intervals for the population mean with asymptotically exact size that are significantly more accurate than their classical counterparts for each fixed sample size $n$. It is illustrated both theoretically and numerically that the randomization framework of this paper produces randomized (asymptotic) pivotal quantities for the mean which admit central limit theorems with smaller magnitudes of error than those of their leading classical counterparts. An Edgeworth expansion result for randomly weighted linear processes whose innovations do not necessarily satisfy the Cram\'er condition is also established.
statistics
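The idea of attaching random weights to observations to form a pivot for the mean can be illustrated with a multiplier-bootstrap-style sketch. The multinomial weights below are a generic stand-in, not the paper's specific weighting scheme for short- and long-memory processes.

```python
import numpy as np

def randomized_ci(x, level=0.95, B=2000, seed=0):
    """Confidence interval for the mean from randomly weighted averages.
    Each replicate reweights the sample with multinomial weights of mean 1
    and records the randomized pivot (weighted mean minus sample mean)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar = x.mean()
    # (B, n) matrix of nonnegative random weights, each row summing to 1
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n
    pivots = W @ x - xbar
    alpha = 1 - level
    lo_q, hi_q = np.quantile(pivots, [alpha / 2, 1 - alpha / 2])
    return xbar - hi_q, xbar - lo_q
```

The quantiles of the randomized pivots play the role that the normal quantiles play in a classical interval; for dependent (short- or long-memory) data the weighting scheme itself must be adapted, as the paper develops.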
The properties of semiconductors can be crucially impacted by midgap states induced by dopants, which can be native or intentionally incorporated in the crystal lattice. For Bernal-stacked bilayer graphene (BLG), which has a tunable bandgap, the existence of midgap states induced by dopants has been conjectured, but never confirmed experimentally. Here, we report scanning tunneling microscopy and spectroscopy results, supported by tight-binding calculations, that demonstrate the existence of midgap states in BLG. We show that the midgap state in BLG -- for which we demonstrate gate-tunability -- appears when the dopant is hosted on the non-dimer sublattice sites. We further evidence the presence of narrow resonances at the onset of the high energy bands (valence or conduction, depending on the dopant type) when the dopants lie on the dimer sublattice sites. These results suggest that dopants/defects can play an important role in the transport and optical properties of multilayer graphene samples, especially at energies close to the band extrema.
condensed matter
The Principle of Equivalence, stating that all laws of physics take their special-relativistic form in any local inertial frame, lies at the core of General Relativity. Because of its fundamental status, this principle could be a very powerful guide in formulating physical laws at regimes where both gravitational and quantum effects are relevant. However, its formulation implicitly presupposes that reference frames are abstracted from classical systems (rods and clocks) and that the spacetime background is well defined. It is unclear if it continues to hold when quantum systems, which can be in a quantum relationship with other physical systems, are taken as reference frames, and in a superposition of classical spacetime structures. Here, we tackle both questions by introducing a relational formalism to describe quantum systems in a superposition of curved spacetimes. We build a unitary transformation to the quantum reference frame (QRF) of a quantum system in curved spacetime, and in a superposition thereof. In both cases, a QRF can be found such that the metric looks locally Minkowskian. Hence, one cannot distinguish, with a local measurement, if the spacetime is flat or curved, or in a superposition of such spacetimes. This transformation identifies a Quantum Local Inertial Frame. We also find a spacetime path integral encoding the dynamics of a quantum particle in spacetime and show that the state of a freely falling particle can be expressed as an infinite sum of all possible classical geodesics. We then build the QRF transformation to the Fermi normal coordinates of such a freely falling quantum particle and show that the metric is locally Minkowskian. These results extend the Principle of Equivalence to QRFs in a superposition of gravitational fields. Verifying this principle may pave a fruitful path to establishing solid conceptual grounds for a future theory of quantum gravity.
quantum physics
We examine a recursive sequence in which $s_n$ is a literal description of what the binary expansion of the previous term $s_{n-1}$ is not. By adapting a technique of Conway, we determine limiting behaviour of $\{s_n\}$ and dynamics of a related self-map of $2^{\mathbb{N}}$. Our main result is the existence and uniqueness of a pair of binary sequences, each the complement-description of the other. We also take every opportunity to make puns.
mathematics
Reduction of flow compressibility with the corresponding ideally invariant helicities, universally for various fluid models of neutral and ionized gases, can be argued statistically and associated with the geometrical scenario in the Taylor-Proudman theorem and its analogues. A `chiral base flow/field', rooted in the generic intrinsic local structure, as well as an `equivalence principle' is explained and used to bridge the single-structure mechanics and the helical statistics. The electric field fluctuations may similarly be depressed by the (self-)helicities of the two-fluid plasma model, with the geometry lying in the relation between the electric and density fields in a Maxwell equation.
physics