In the literature, the notion of discrepancy is used in several contexts, even in the theory of graphs. Here, for a graph $G$, $\{-1, 1\}$ labels are assigned to the edges, and we consider a family $\mathcal{S}_G$ of (spanning) subgraphs of certain types, such as spanning trees or Hamiltonian cycles. As usual, we seek bounds on the sum of the labels that hold for all elements of $\mathcal{S}_G$, for every labeling.
mathematics
We develop an adversarial-reinforcement learning scheme for microswimmers in statistically homogeneous and isotropic turbulent fluid flows, in both two (2D) and three dimensions (3D). We show that this scheme allows microswimmers to find non-trivial paths, which enable them to reach a target on average in less time than a naive microswimmer, which tries, at any instant of time and at a given position in space, to swim in the direction of the target. We use pseudospectral direct numerical simulations (DNSs) of the 2D and 3D (incompressible) Navier-Stokes equations to obtain the turbulent flows. We then introduce passive microswimmers that try to swim along a given direction in these flows; the microswimmers do not affect the flow, but they are advected by it.
physics
Hidden Markov jump processes are an attractive approach for modeling clinical disease progression data because they are explainable and capable of handling both irregularly sampled and noisy data. Most applications in this context consider time-homogeneous models due to their relative computational simplicity. However, the time-homogeneity assumption is too strong to accurately model the natural history of many diseases. Moreover, the population at risk is not homogeneous either, since disease exposure and susceptibility can vary considerably. In this paper, we propose a piecewise-stationary transition matrix to explain the heterogeneity in time. We propose a hierarchical structure for the heterogeneity in population, where prior information is considered to deal with unbalanced data. Moreover, an efficient, scalable EM algorithm is proposed for inference. We demonstrate the feasibility and superiority of our model on a cervical cancer screening dataset from the Cancer Registry of Norway. Experiments show that our model outperforms state-of-the-art recurrent neural network models in terms of prediction accuracy and significantly outperforms a standard hidden Markov jump process in generating Kaplan-Meier estimators.
statistics
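As an illustration of the key modeling idea in the abstract above, here is a minimal Python sketch of a forward-pass likelihood for a discrete-time hidden Markov chain whose transition matrix switches between two regimes. The three states, the emission matrix, and the switch point are invented for illustration; this does not reproduce the paper's continuous-time jump process or its EM algorithm.

```python
import numpy as np

# Hypothetical 3-state progression model whose transition matrix switches
# regime at t_switch, i.e. a piecewise-stationary chain (illustration only).
A1 = np.array([[0.9, 0.1, 0.0],
               [0.0, 0.8, 0.2],
               [0.0, 0.0, 1.0]])
A2 = np.array([[0.7, 0.3, 0.0],
               [0.0, 0.6, 0.4],
               [0.0, 0.0, 1.0]])
B = np.array([[0.8, 0.2],        # P(observation | state), 2 symbols
              [0.4, 0.6],
              [0.1, 0.9]])
pi = np.array([1.0, 0.0, 0.0])
t_switch = 5

def forward_loglik(obs):
    """Scaled forward algorithm under the piecewise-stationary model."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(obs)):
        A = A1 if t < t_switch else A2    # regime switch in time
        alpha = (alpha @ A) * B[:, obs[t]]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

print(forward_loglik([0, 0, 0, 1, 0, 1, 1, 1]))
```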
We propose methods to perform intensity interferometry of photons having two different wavelengths. Distinguishable particles typically cannot interfere with each other, but we overcome that obstacle by processing the particles via entanglement and projection so that they lead to the same final state at the detection apparatus. Specifically, we discuss how quasi-phase-matched nonlinear crystals can be used to convert a quantum superposition of light of different wavelengths onto a common wavelength, while preserving the phase information essential for their meaningful interference. We thereby gain access to a host of new observables, which can probe subtle frequency correlations and entanglement. Further, we generalize the van Cittert-Zernike formula for the intensity interferometry of extended sources, demonstrate how our proposal supports enhanced resolution of sources with different spectral character, and suggest potential applications.
quantum physics
Multilingual ASR technology simplifies model training and deployment, but its accuracy is known to depend on the availability of language information at runtime. Since language identity is seldom known beforehand in real-world scenarios, it must be inferred on-the-fly with minimum latency. Furthermore, in voice-activated smart assistant systems, language identity is also required for downstream processing of ASR output. In this paper, we introduce streaming, end-to-end, bilingual systems that perform both ASR and language identification (LID) using the recurrent neural network transducer (RNN-T) architecture. On the input side, embeddings from pretrained acoustic-only LID classifiers are used to guide RNN-T training and inference, while on the output side, language targets are jointly modeled with ASR targets. The proposed method is applied to two language pairs: English-Spanish as spoken in the United States, and English-Hindi as spoken in India. Experiments show that for English-Spanish, the bilingual joint ASR-LID architecture matches monolingual ASR and acoustic-only LID accuracies. For the more challenging (owing to within-utterance code switching) case of English-Hindi, English ASR and LID metrics show degradation. Overall, in scenarios where users switch dynamically between languages, the proposed architecture offers a promising simplification over running multiple monolingual ASR models and an LID classifier in parallel.
electrical engineering and systems science
A Neptune-sized exomoon candidate was recently announced by Teachey & Kipping, orbiting a 287 day gas giant in the Kepler-1625 system. However, the system is poorly characterized and needs more observations to be confirmed, with the next potential transit in 2019 May. In this Letter, we aid observational follow-up by analyzing the transit signature of exomoons. We derive a simple analytic equation for the transit probability and use it to demonstrate how exomoons may frequently avoid transit if their orbit is larger than the stellar radius and sufficiently misaligned. The nominal orbit for the moon in Kepler-1625 has both of these characteristics, and we calculate that it may only transit roughly 40% of the time. This means that approximately six non-transits would be required to rule out the moon's existence at 95% confidence. When an exomoon's impact parameter is displaced off the star, the planet's impact parameter is displaced the other way, so larger planet transit durations are typically positively correlated with missed exomoon transits. On the other hand, strong correlations do not exist between missed exomoon transits and transit timing variations of the planet. We also show that nodal precession does not change an exomoon's transit probability and that it can break a prograde-retrograde degeneracy.
astrophysics
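A quick check of the arithmetic behind the 95% confidence figure in the abstract above (a sketch; the 40% per-epoch transit probability is the value quoted there):

```python
# If the moon transits with per-epoch probability p ~ 0.4, the chance of
# N consecutive non-transits while the moon nonetheless exists is (1-p)**N.
p_transit = 0.4
for n in range(1, 8):
    print(n, "non-transits:", round((1 - p_transit) ** n, 4))
# (1 - 0.4)**6 = 0.0467 < 0.05, so ~six non-transits exclude the moon
# at roughly 95% confidence, matching the abstract.
```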
We carried out a statistical study of the `quiet' solar corona during the descending phase of the sunspot cycle 24 (i.e. 2015 January - 2019 May) using data obtained with the Gauribidanur RAdioheliograPH (GRAPH) at 53 MHz and 80 MHz simultaneously. Our results show that the equatorial (east-west) diameters of the solar corona at the above two frequencies shrank steadily. The decrease was found to be due to a gradual reduction in the coronal electron density ($N_{e}$). Independent estimates of $N_{e}$ in the equatorial region of the `background' corona using white-light coronagraph observations indicate a decline consistent with our findings.
astrophysics
In [4], we proved that every noncompact ancient $\kappa$-solution to the Ricci flow in dimension $3$ is either locally isometric to a family of shrinking cylinders, or isometric to the Bryant soliton. In the same paper, we announced that the same method implies that compact ancient $\kappa$-solutions are rotationally symmetric. In this note we provide the details of this argument.
mathematics
A leading choice of error correction for scalable quantum computing is the surface code with lattice surgery. The basic lattice surgery operations, the merging and splitting of logical qubits, act non-unitarily on the logical states and are not easily captured by standard circuit notation. This raises the question of how best to design, verify, and optimise protocols that use lattice surgery, in particular in architectures with complex resource management issues. In this paper we demonstrate that the operations of the ZX calculus -- a form of quantum diagrammatic reasoning based on bialgebras -- match exactly the operations of lattice surgery. Red and green "spider" nodes match rough and smooth merges and splits, and follow the axioms of a dagger special associative Frobenius algebra. Some lattice surgery operations require non-trivial correction operations, which are captured natively in the use of the ZX calculus in the form of ensembles of diagrams. We give a first taste of the power of the calculus as a language for lattice surgery by considering two operations (T gates and producing a CNOT) and show how ZX diagram re-write rules give lattice surgery procedures for these operations that are novel, efficient, and highly configurable.
quantum physics
Managing large-scale systems often involves simultaneously solving thousands of unrelated stochastic optimization problems, each with limited data. Intuition suggests one can decouple these unrelated problems and solve them separately without loss of generality. We propose a novel data-pooling algorithm called Shrunken-SAA that disproves this intuition. In particular, we prove that combining data across problems can outperform decoupling, even when there is no a priori structure linking the problems and data are drawn independently. Our approach does not require strong distributional assumptions and applies to constrained, possibly non-convex, non-smooth optimization problems such as vehicle-routing, economic lot-sizing or facility location. We compare and contrast our results to a similar phenomenon in statistics (Stein's Phenomenon), highlighting unique features that arise in the optimization setting that are not present in estimation. We further prove that as the number of problems grows large, Shrunken-SAA learns if pooling can improve upon decoupling and the optimal amount to pool, even if the average amount of data per problem is fixed and bounded. Importantly, we highlight a simple intuition based on stability that highlights when and why data-pooling offers a benefit, elucidating this perhaps surprising phenomenon. This intuition further suggests that data-pooling offers the most benefits when there are many problems, each of which has a small amount of relevant data. Finally, we demonstrate the practical benefits of data-pooling using real data from a chain of retail drug stores in the context of inventory management.
mathematics
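A minimal sketch of the data-pooling idea from the abstract above on a toy newsvendor family: each problem's order quantity comes from an alpha-shrunken empirical distribution mixing its own samples with the grand pooled sample. The problem instance, cost scaling, and shrinkage form are illustrative assumptions, not the paper's Shrunken-SAA specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# K unrelated newsvendor problems, each with only n = 5 demand samples.
K, n = 200, 5
true_means = rng.uniform(5, 15, size=K)
data = rng.poisson(true_means[:, None], size=(K, n))
crit = 0.8                      # critical fractile of the demand distribution

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / w.sum()
    return v[np.searchsorted(cum, q)]

def orders(alpha):
    """Per-problem order: crit-quantile of the alpha-shrunken empirical
    distribution (own samples mixed with the grand pooled sample)."""
    pooled = data.ravel()
    out = np.empty(K)
    for k in range(K):
        vals = np.concatenate([data[k], pooled])
        wts = np.concatenate([np.full(n, (1 - alpha) / n),
                              np.full(pooled.size, alpha / pooled.size)])
        out[k] = weighted_quantile(vals, wts, crit)
    return out

def avg_cost(q, demands):
    under = np.maximum(demands - q[:, None], 0)   # lost sales
    over = np.maximum(q[:, None] - demands, 0)    # leftover stock
    return (crit * under + (1 - crit) * over).mean()

test = rng.poisson(true_means[:, None], size=(K, 2000))
for alpha in [0.0, 0.1, 0.3, 0.6]:
    print(f"alpha={alpha}: out-of-sample cost={avg_cost(orders(alpha), test):.3f}")
```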
All positive helicity four-point gluon-graviton amplitudes in Einstein-Yang-Mills theory coupled to a dilaton and axion field are computed at the leading one-loop order using colour-kinematics duality. In particular, all relevant contributions in the gravitational and gauge coupling are established. This extends a previous generalized unitarity based computation beyond the leading terms in the gravitational coupling $\kappa$. The resulting purely rational expressions take very compact forms. The previously seen vanishing of the single-graviton-three-gluon amplitude at leading order in $\kappa$ is seen to be lifted at order $\kappa^{3}$.
high energy physics theory
We present Atacama Large Millimeter/submillimeter Array (ALMA) high sensitivity ($\sigma_P \simeq 0.4\,$mJy) polarimetric observations at $97.5\,$GHz (Band 3) of a complete sample of $32$ extragalactic radio sources drawn from the faint Planck-ATCA Co-eval Observations (PACO) sample ($b<-75^\circ$, compact sources brighter than $200\,$mJy at $20\,$GHz). We achieved a detection rate of $\sim 97\%$ at $3\,\sigma$ (only $1$ non-detection). We complement these observations with new Australia Telescope Compact Array (ATCA) data between $2.1$ and $35\,$GHz obtained within a few months and with data published in earlier papers from our collaboration. Adding the co-eval GaLactic and Extragalactic All-sky Murchison widefield array (GLEAM) survey detections between $70\,$ and $230\,$MHz for our sources, we present spectra over more than $3$ decades in frequency in total intensity and over about $1.7$ decades in polarization. The spectra of our sources are smooth over the whole frequency range, with no sign of dust emission from the host galaxy at mm wavelengths nor of a sharp high frequency decline due, for example, to electron ageing. We do however find indications of multiple emitting components and present a classification based on the number of detected components. We analyze the polarization fraction behaviour and distributions up to $97\,$GHz for different source classes. Source counts in polarization are presented at $95\,$GHz.
astrophysics
This note describes two versatile accelerator complexes that could be built at a Future Circular Collider (FCC) in order to produce $e^{+}e^{-}$, $\gamma\gamma$ and $ep$ collisions. The first facility is an SLC-type machine comprising a superconducting L-band linear accelerator (linac) and two arcs of bending magnets inside the FCC tunnel. Accelerated by the linac, electron and positron beams would traverse the arcs in opposite directions and collide at centre-of-mass energies considerably exceeding those attainable at circular $e^{+}e^{-}$ colliders. The proposed SLC-type facility would have the same luminosity as a conventional two-linac $e^{+}e^{-}$ collider. The L-band linac may form a part of the injector chain for a 100-TeV proton collider inside the FCC tunnel (FCC-pp), and could deliver electron or positron beams for an $ep$ collider (FCC-ep). The second facility is an ILC-based $e^{+}e^{-}$ collider placed tangentially to the circular FCC tunnel. If the collider is positioned asymmetrically with respect to the FCC tunnel, electron (or positron) bunches could be accelerated by both linacs before they are brought into collision with the 50-TeV beams from the FCC-pp proton storage ring. The two linacs may also form a part of the injector chain for FCC-pp. Each facility could be converted into a $\gamma\gamma$ collider or a source of multi-MW beams for fixed-target experiments.
physics
Global optimization finds applications in a wide range of real world problems. The multi-start methods are a popular class of global optimization techniques, which are based on the idea of conducting local searches at multiple starting points. In this work we propose a new multi-start algorithm where the starting points are determined in a Bayesian optimization framework. Specifically, the method can be understood as constructing a new function by conducting local searches of the original objective function, where the new function attains the same global optima as the original one. Bayesian optimization is then applied to find the global optima of the new local search defined function.
statistics
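A minimal sketch of the idea above, assuming scipy and scikit-learn: local searches define the new function (the value a local search converges to from a given start), and a simple Gaussian-process lower-confidence-bound acquisition picks the next starting point. The toy objective, bounds, and acquisition rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def f(x):  # toy multimodal objective
    return np.sin(3 * x[0]) + 0.3 * x[0] ** 2

def local_min_value(x0):
    """The 'new function': value reached by a local search started at x0."""
    return minimize(f, x0, bounds=[(-3, 3)]).fun

# Seed with random starts, then choose new starts by minimizing a GP
# lower-confidence-bound surrogate of the local-search-defined function.
X = rng.uniform(-3, 3, size=(4, 1))
y = np.array([local_min_value(x) for x in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    cand = rng.uniform(-3, 3, size=(256, 1))
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 2.0 * sd)]   # LCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, local_min_value(x_next))

print("best local-search value found:", y.min())
```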
The population of exoplanetary systems detected by Kepler provides opportunities to refine our understanding of planet formation. Unraveling the conditions needed to produce the observed exoplanets will allow us to make informed predictions as to where habitable worlds exist within the galaxy. In this paper, we examine, using N-body simulations, how the properties of planetary systems are determined during the final stages of assembly. While accretion is a chaotic process, trends in the ensemble properties of planetary systems provide a memory of the initial distribution of solid mass around a star prior to accretion. We also use EPOS, the Exoplanet Population Observation Simulator, to account for detection biases and show that different accretion scenarios can be distinguished from observations of the Kepler systems. We show that the period of the innermost planet, the ratio of orbital periods of adjacent planets, and masses of the planets are determined by the total mass and radial distribution of embryos and planetesimals at the beginning of accretion. In general, some amount of orbital damping, either via planetesimals or gas, during accretion is needed to match the whole population of exoplanets. Surprisingly, all simulated planetary systems have planets that are similar in size, showing that the "peas in a pod" pattern can be consistent with both a giant impact scenario and a planet migration scenario. The inclusion of material at distances larger than what Kepler observes has a profound impact on the observed planetary architectures, and thus on the formation and delivery of volatiles to possible habitable worlds.
astrophysics
We describe a novel framework for estimating subsurface properties, such as rock permeability and porosity, from time-lapse observed seismic data by coupling full-waveform inversion, subsurface flow processes, and rock physics models. For the inverse modeling, we handle the back-propagation of gradients by an intrusive automatic differentiation strategy that offers three levels of user control: (1) at the wave physics level, we adopted the discrete adjoint method in order to use our existing high-performance FWI code; (2) at the rock physics level, we used built-in operators from the $\texttt{TensorFlow}$ backend; (3) at the flow physics level, we implemented customized PDE operators for the potential and nonlinear saturation equations. These three levels of gradient computation strike a good balance between computational efficiency and programming efficiency, and when chained together, constitute a coupled inverse system. We use numerical experiments to demonstrate that (1) the three-level coupled inverse problem is superior in terms of accuracy to a traditional decoupled inversion strategy; (2) it is able to simultaneously invert for parameters in empirical relationships such as the rock physics models; and (3) the inverted model can be used for reservoir performance prediction and reservoir management/optimization purposes.
physics
Uncertain dynamic obstacles, such as pedestrians or vehicles, pose a major challenge for optimal robot navigation with safety guarantees. Previous work on motion planning has followed two main strategies to provide a safe bound on an obstacle's space: a polyhedron, such as a cuboid, or a nonlinear differentiable surface, such as an ellipsoid. The former approach relies on disjunctive programming, which has a relatively high computational cost that grows exponentially with the number of obstacles. The latter approach needs to be linearized locally to find a tractable evaluation of the chance constraints, which dramatically reduces the remaining free space and leads to over-conservative trajectories or even infeasibility. In this work, we present a hybrid approach that avoids the pitfalls of both strategies while maintaining the original safety guarantees. The key idea consists in obtaining a safe differentiable approximation for the disjunctive chance constraints bounding the obstacles. The resulting nonlinear optimization problem is free of chance constraint linearization and disjunctive programming, and therefore, it can be efficiently solved to meet fast real-time requirements with multiple obstacles. We validate our approach through mathematical proof, simulation and real experiments with an aerial robot using nonlinear model predictive control to avoid pedestrians.
computer science
We present calculations for attosecond atomic delays in photoionization of noble gas atoms based on full two-color two-photon Random-Phase Approximation with Exchange in both length and velocity gauge. Gauge invariant atomic delays are demonstrated for the complete set of diagrams. The results are used to investigate the validity of the common assumption that the measured atomic delays can be interpreted as a one-photon Wigner delay and a universal continuum--continuum contribution that depends only on the kinetic energy of the photoelectron, the laser frequency and the charge of the remaining ion, but not on the specific atom or the orbital from which the electron is ionized. Here we find that although effects beyond the universal IR--photoelectron continuum--continuum transitions are rare, they do occur in special cases such as around the $3s$ Cooper minimum in argon. We conclude also that in general the convergence in terms of many-body diagrams is considerably faster in length gauge than in velocity gauge.
physics
In 2004 Ambainis and Regev formulated a certain form of the quantum adiabatic theorem and provided an elementary proof which is especially accessible to computer scientists. Their result is achieved by discretizing the total adiabatic evolution into a sequence of unitary transformations acting on the quantum system. Here we continue this line of study by providing another elementary and shorter proof with improved bounds. Our key finding is a succinct integral representation of the difference between the target and the actual states, which yields an accurate estimation of the approximation error. Our proof can be regarded as a "continuous" version of the work by Ambainis and Regev. As applications, we show how to adiabatically prepare an arbitrary qubit state from an initial state.
quantum physics
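A minimal sketch of the application mentioned above (adiabatic preparation of an arbitrary qubit state), using a discretized sequence of short unitary steps in the spirit of the Ambainis-Regev discretization. The interpolating Hamiltonians, total time, and step count are illustrative choices, not the paper's construction.

```python
import numpy as np
from scipy.linalg import expm

# Target qubit state: an arbitrary point on the Bloch sphere (assumption).
theta, phi = 1.1, 0.7
target = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def h_with_ground_state(v):
    """Hamiltonian whose ground state is v: H = I - |v><v|."""
    return np.eye(2) - np.outer(v, v.conj())

H0 = h_with_ground_state(np.array([1.0, 0.0]))   # ground state |0>
H1 = h_with_ground_state(target)

# Discretized adiabatic evolution along H(s) = (1-s) H0 + s H1.
T, steps = 200.0, 4000
dt = T / steps
psi = np.array([1.0, 0.0], dtype=complex)
for j in range(steps):
    s = (j + 0.5) / steps
    psi = expm(-1j * ((1 - s) * H0 + s * H1) * dt) @ psi

print("fidelity with target:", abs(target.conj() @ psi) ** 2)  # close to 1
```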
Motivated by the fermion bag approach we construct a new class of Hamiltonian lattice field theories that can help us to study fermionic quantum critical points, particularly those with four-fermion interactions. Although these theories are constructed in discrete-time with a finite temporal lattice spacing $\varepsilon$, when $\varepsilon\rightarrow 0$, conventional continuous-time Hamiltonian lattice field theories are recovered. The fermion bag algorithms run relatively faster when $\varepsilon=1$ as compared to $\varepsilon \rightarrow 0$, but still allow us to compute universal quantities near the quantum critical point even at such a large value of $\varepsilon$. As an example of this new approach, here we study the $N_f=1$ Gross-Neveu chiral Ising universality class in $2+1$ dimensions by calculating the critical scaling of the staggered mass order parameter. We show that we are able to study lattice sizes up to $100^2$ sites when $\varepsilon=1$, while with comparable resources we can only reach lattice sizes of up to $64^2$ when $\varepsilon \rightarrow 0$. The critical exponents obtained in both these studies match within errors.
condensed matter
We deal with planar vortex structures in Maxwell-Higgs models in the presence of a generalized magnetic permeability. The model under investigation engenders a real parameter that controls the behavior of the tail of the solutions and of the quantities associated to them. As the parameter gets larger, the solutions attain their boundary values faster, unveiling a peculiar feature: the presence of double-exponential tails. However, the solutions are not compact, so we call them quasi-compact vortices.
high energy physics theory
The LASSO is an attractive regularisation method for linear regression that combines variable selection with an efficient computation procedure. This paper is concerned with enhancing the performance of LASSO for square-free hierarchical polynomial models when combining validation error with a measure of model complexity. The measure of complexity is the sum of the Betti numbers of the model, which is seen as a simplicial complex, and we describe the model in terms of components and cycles, borrowing from recent developments in computational topology. We study and propose an algorithm which combines statistical and topological criteria. This compound criterion would allow us to deal with model selection problems in polynomial regression models containing higher-order interactions. Simulation results demonstrate that the compound criterion produces sparser models with lower prediction errors than the estimators of several other statistical methods for higher order interaction models.
statistics
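A rough sketch of a compound statistical-topological criterion of the kind described above, restricted to pairwise interactions so the simplicial complex is a graph and only the Betti numbers b0 (components) and b1 (independent cycles) appear. The penalty weight and this 1-skeleton restriction are assumptions, not the paper's algorithm.

```python
import itertools
import networkx as nx
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

# Square-free terms up to 2nd order on p = 4 variables.
p = 4
pairs = list(itertools.combinations(range(p), 2))
X = rng.normal(size=(120, p))
Z = np.hstack([X] + [X[:, [i]] * X[:, [j]] for i, j in pairs])
y = 2 * X[:, 0] - X[:, 1] + 3 * X[:, 0] * X[:, 1] + rng.normal(scale=0.5, size=120)
Ztr, Zva, ytr, yva = Z[:80], Z[80:], y[:80], y[80:]

def betti_complexity(coef, tol=1e-8):
    """b0 + b1 of the graph whose edges are the active pairwise terms."""
    G = nx.Graph()
    G.add_nodes_from(i for i in range(p) if abs(coef[i]) > tol)
    for k, (i, j) in enumerate(pairs):
        if abs(coef[p + k]) > tol:
            G.add_edge(i, j)
    b0 = nx.number_connected_components(G)
    b1 = G.number_of_edges() - G.number_of_nodes() + b0  # cycles of a graph
    return b0 + b1

best = None
for lam in np.logspace(-3, 0, 20):
    m = Lasso(alpha=lam, max_iter=50_000).fit(Ztr, ytr)
    score = mean_squared_error(yva, m.predict(Zva)) + 0.1 * betti_complexity(m.coef_)
    if best is None or score < best[0]:
        best = (score, lam)
print("selected lambda:", best[1])
```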
The Florence branch of an Italian supermarket chain recently implemented a strategy that permanently lowered the price of numerous store brands in several product categories. To quantify the impact of such a policy change, researchers often use synthetic control methods for estimating causal effects when a subset of units receive a single persistent treatment, and the rest are unaffected by the change. In our applications, however, competitor brands not assigned to treatment are likely impacted by the intervention because of substitution effects; more broadly, this type of interference occurs whenever the treatment assignment of one unit affects the outcome of another. This paper extends the synthetic control methods to accommodate partial interference, allowing interference within predefined groups but not between them. Focusing on a class of causal estimands that capture the effect both on the treated and control units, we develop a multivariate Bayesian structural time series model for generating synthetic controls that would have occurred in the absence of an intervention, enabling us to estimate our novel effects. In a simulation study, we explore our Bayesian procedure's empirical properties and show that it achieves good frequentist coverage even when the model is misspecified. We use our new methodology to make causal statements about the impact on sales of the affected store brands and their direct competitors. Our proposed approach is implemented in the CausalMBSTS R package.
statistics
This paper addresses the problem of comparing minimal free resolutions of symbolic powers of an ideal. Our investigation is focused on the behavior of the function $\operatorname{depth} R/I^{(t)} = \dim R - \operatorname{pd} I^{(t)} - 1$, where $I^{(t)}$ denotes the $t$-th symbolic power of a homogeneous ideal $I$ in a noetherian polynomial ring $R$ and $\operatorname{pd}$ denotes the projective dimension. It has been an open question whether the function $\operatorname{depth} R/I^{(t)}$ is non-increasing if $I$ is a squarefree monomial ideal. We show that $\operatorname{depth} R/I^{(t)}$ is almost non-increasing in the sense that $\operatorname{depth} R/I^{(s)} \ge \operatorname{depth} R/I^{(t)}$ for all $s \ge 1$ and $t \in E(s)$, where $E(s) = \bigcup_{i \ge 1} \{t \in \mathbb{N} \mid i(s-1)+1 \le t \le is\}$ (which contains all integers $t \ge (s-1)^2+1$). The range $E(s)$ is the best possible since we can find squarefree monomial ideals $I$ such that $\operatorname{depth} R/I^{(s)} < \operatorname{depth} R/I^{(t)}$ for $t \not\in E(s)$, which gives a negative answer to the above question. Another open question asks whether the function $\operatorname{depth} R/I^{(t)}$ is always constant for $t \gg 0$. We are able to construct counter-examples to this question by monomial ideals. On the other hand, we show that if $I$ is a monomial ideal such that $I^{(t)}$ is integrally closed for $t \gg 0$ (e.g. if $I$ is a squarefree monomial ideal), then $\operatorname{depth} R/I^{(t)}$ is constant for $t \gg 0$ with $\lim_{t \to \infty} \operatorname{depth} R/I^{(t)} = \dim R - \dim \bigoplus_{t \ge 0} I^{(t)}/\mathfrak{m} I^{(t)}$. Our last result (which is the main contribution of this paper) shows that for any positive numerical function $\phi(t)$ which is periodic for $t \gg 0$, there exist a polynomial ring $R$ and a homogeneous ideal $I$ such that $\operatorname{depth} R/I^{(t)} = \phi(t)$ for all $t \ge 1$. As a consequence, for any non-negative numerical function $\psi(t)$ which is periodic for $t \gg 0$, there is a homogeneous ideal $I$ and a number $c$ such that $\operatorname{pd} I^{(t)} = \psi(t) + c$ for all $t \ge 1$.
mathematics
We study the Post-Minkowskian (PM) and Post-Newtonian (PN) expansions of the gravitational three-body effective potential. At order 2PM a formal result is given in terms of a differential operator acting on the maximal generalized cut of the one-loop triangle integral. We compute the integral in all kinematic regions and show that the leading terms in the PN expansion are reproduced. We then perform the PN expansion unambiguously at the level of the integrand. Finding agreement with the 2PN three-body potential after integration, we explicitly present new $G^2v^4$-contributions at order 3PN and outline the generalization to $G^2v^{2n}$. The integrals that represent the essential input for these results are obtained by applying the recent Yangian bootstrap directly to their $\epsilon$-expansion around three dimensions. The coordinate space Yangian generator that we employ to obtain these integrals can be understood as a special conformal symmetry in a dual momentum space.
high energy physics theory
Rare-earth ion ensembles doped in single crystals are a promising materials system with widespread applications in optical signal processing, lasing, and quantum information processing. Incorporating rare-earth ions into integrated photonic devices could enable compact lasers and modulators, as well as on-chip optical quantum memories for classical and quantum optical applications. To this end, a thin film single crystalline wafer structure that is compatible with planar fabrication of integrated photonic devices would be highly desirable. However, incorporating rare-earth ions into a thin film form-factor while preserving their optical properties has proven challenging. We demonstrate an integrated photonic platform for rare-earth ions doped in a single crystalline thin film on insulator. The thin film is composed of lithium niobate doped with Tm$^{3+}$. The ions in the thin film exhibit optical lifetimes identical to those measured in bulk crystals. We show narrow spectral holes in a thin film waveguide that require up to 2 orders of magnitude lower power to generate than previously reported bulk waveguides. Our results pave the way for scalable on-chip lasers, optical signal processing devices, and integrated optical quantum memories.
physics
We prove that the extremal length systole of genus two surfaces attains a strict local maximum at the Bolza surface, where it takes the value $\sqrt{2}$.
mathematics
This paper studies the distributed average tracking problem pertaining to a discrete-time linear time-invariant multi-agent network, which is subject to, concurrently, input delays, random packet-drops, and reference noise. The problem amounts to an integrated design of delay and packet-drop tolerant algorithm and determining the ultimate upper bound of the tracking error between agents' states and the average of the reference signals. The investigation is driven by the goal of devising a practically more attainable average tracking algorithm, thereby extending the existing work in the literature which largely ignored the aforementioned uncertainties. For this purpose, a blend of techniques from Kalman filtering, multi-stage consensus filtering, and predictive control is employed, which gives rise to a simple yet compelling distributed average tracking algorithm that is robust to initialization error and allows the trade-off between communication/computation cost and stationary-state tracking error. Due to the inherent coupling among different control components, convergence analysis is significantly challenging. Nevertheless, it is revealed that the allowable values of the algorithm parameters rely upon the maximal degree of an expected network, while the convergence speed depends upon the second smallest eigenvalue of the same network's topology. The effectiveness of the theoretical results is verified by a numerical example.
electrical engineering and systems science
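For orientation, a sketch of the ideal-network version of distributed average tracking (classic dynamic average consensus); the ring topology and reference signals are invented, and the paper's actual contribution, robustness to input delays, packet drops, and reference noise, is not modeled here.

```python
import numpy as np

# 5 agents on a ring; W is a doubly stochastic consensus weight matrix.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def refs(k):  # each agent's local reference signal (invented for the demo)
    return np.sin(0.02 * k + np.arange(n))

# Dynamic average consensus: x_{k+1} = W x_k + r_{k+1} - r_k.
# With x_0 = r_0 and W doubly stochastic, sum(x_k) = sum(r_k) for all k,
# and each agent's state tracks the network-wide average of the references.
x = refs(0).copy()
for k in range(2000):
    x = W @ x + refs(k + 1) - refs(k)

print("tracking errors:", x - refs(2000).mean())  # small for slow signals
```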
Recall the classical hypothesis testing setting with two convex sets of probability distributions $P$ and $Q$. One receives either $n$ i.i.d. samples from a distribution $p \in P$ or from a distribution $q \in Q$ and wants to decide from which set the points were sampled. It is known that the optimal exponential rate at which errors decrease can be achieved by a simple maximum-likelihood ratio test which does not depend on $p$ or $q$, but only on the sets $P$ and $Q$. We consider an adaptive generalization of this model where the choice of $p \in P$ and $q \in Q$ can change in each sample in some way that depends arbitrarily on the previous samples. In other words, in the $k$-th round, an adversary, having observed all the previous samples in rounds $1, \ldots, k-1$, chooses $p_k \in P$ and $q_k \in Q$, with the goal of confusing the hypothesis test. We prove that even in this case, the optimal exponential error rate can be achieved by a simple maximum-likelihood test that depends only on $P$ and $Q$. We then show that the adversarial model has applications in hypothesis testing for quantum states using restricted measurements. For example, it can be used to study the problem of distinguishing entangled states from the set of all separable states using only measurements that can be implemented with local operations and classical communication (LOCC). The basic idea is that in our setup, the deleterious effects of entanglement can be simulated by an adaptive classical adversary. We prove a quantum Stein's Lemma in this setting: In many circumstances, the optimal hypothesis testing rate is equal to an appropriate notion of quantum relative entropy between two states. In particular, our arguments yield an alternate proof of Li and Winter's recent strengthening of strong subadditivity for quantum relative entropy.
computer science
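A small numerical illustration of the test described above, with finite stand-ins for the convex sets $P$ and $Q$ and a toy adaptive adversary (the adversary's rule is an arbitrary invented example); note the test statistic depends only on the two families, never on the adversary's choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite stand-ins for two convex families of distributions on {0, 1, 2}.
P = np.array([[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]])
Q = np.array([[0.2, 0.3, 0.5], [0.1, 0.4, 0.5]])

def gl_ratio_test(samples):
    """Maximum-likelihood ratio test: depends only on the sets P and Q."""
    logP = np.log(P)[:, samples].sum(axis=1).max()
    logQ = np.log(Q)[:, samples].sum(axis=1).max()
    return "P" if logP >= logQ else "Q"

def adversarial_samples(n, family):
    """In round k the adversary picks a member of the family based on the
    history (a toy adaptive rule, invented for illustration)."""
    out = np.empty(n, dtype=int)
    for k in range(n):
        row = int(out[:k].sum() % len(family))
        out[k] = rng.choice(3, p=family[row])
    return out

errors = sum(gl_ratio_test(adversarial_samples(100, P)) != "P"
             for _ in range(200))
print("error rate under adaptive P-adversary:", errors / 200)
```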
IGR J17062-6143 is an ultra-compact X-ray binary (UCXB) with an orbital period of 37.96 min. It harbours a millisecond X-ray pulsar that is spinning at 163 Hz and has continuously been accreting from its companion star since 2006. Determining the composition of the accreted matter in UCXBs is of high interest for studies of binary evolution and thermonuclear burning on the surface of neutron stars. Here, we present a multi-wavelength study of IGR J17062-6143 aimed at determining the detailed properties of its accretion disc and companion star. The multi-epoch photometric UV to near-infrared spectral energy distribution (SED) is consistent with an accretion disc $F_{\nu}\propto\nu^{1/3}$. The SED modelling of the accretion disc allowed us to estimate an outer disc radius of $R_{out}=2.2^{+0.9}_{-0.4} \times 10^{10}$ cm and a mass-transfer rate of $\dot{m}=1.8^{+1.8}_{-0.5}\times10^{-10}$ M$_{\odot}$ yr$^{-1}$. Comparing this with the estimated mass-accretion rate inferred from its X-ray emission suggests that $\gtrsim$90% of the transferred mass is lost from the system. Moreover, our SED modelling shows that the thermal emission component seen in the X-ray spectrum is highly unlikely from the accretion disc and must therefore represent emission from the surface of the neutron star. Our low-resolution optical spectrum revealed a blue continuum and no emission lines, i.e. lacking H and He features. Based on the current data we cannot conclusively identify the nature of the companion star, but we make recommendations for future study that can distinguish between the different possible evolution histories of this X-ray binary. Finally, we demonstrate how multiwavelength observations can be effectively used to find more UCXBs among the LMXBs.
astrophysics
Fault diagnostics and prognostics are important topics both in practice and research. There is intense pressure on industrial plants to keep reducing unscheduled downtime, performance degradation, and safety hazards, which requires detecting potential faults and recovering from them in their early stages. Intelligent fault diagnosis is a promising tool due to its ability to rapidly and efficiently process collected signals and provide accurate diagnosis results. Although many studies have developed machine learning (ML) and deep learning (DL) algorithms for detecting bearing faults, the results have generally been limited to relatively small train and test datasets, and the input data have been manipulated (selective features used) to reach high accuracy. In this work, the raw data, collected from accelerometers (time-domain features), are taken as the input of a novel temporal sequence prediction algorithm to present an end-to-end method for fault detection. We use equivalent temporal sequences as the input of a novel Convolutional Long Short-Term Memory Recurrent Neural Network (CRNN) to detect the bearing fault with the highest accuracy in the shortest possible time. To the best knowledge of the authors of the present paper, the method reaches the highest accuracy in the literature while avoiding any sort of pre-processing or manipulation of the input data. The effectiveness and feasibility of the fault diagnosis method are validated by applying it to two commonly used benchmark real vibration datasets and comparing the results with other intelligent fault diagnosis methods.
electrical engineering and systems science
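A minimal Keras sketch of a convolutional-recurrent classifier on raw vibration windows, in the spirit of the CRNN above; the layer sizes, window length, and number of fault classes are assumptions, and random data stands in for the benchmark datasets.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

win, n_classes = 2048, 4   # raw accelerometer window length, fault classes

model = keras.Sequential([
    keras.Input(shape=(win, 1)),
    layers.Conv1D(32, 64, strides=8, activation="relu"),  # local features
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                     # temporal sequence modelling
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smoke test on random data standing in for vibration signals.
Xd = np.random.randn(32, win, 1).astype("float32")
yd = np.random.randint(0, n_classes, size=32)
model.fit(Xd, yd, epochs=1, verbose=0)
print(model.predict(Xd[:2], verbose=0).shape)   # (2, n_classes)
```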
It has been generally accepted that the spin-orbit coupling effect in noncentrosymmetric materials leads to band splitting and non-trivial spin polarization in momentum space. However, in some cases zero net spin polarization may occur in the split bands, dubbed the band splitting with vanishing spin polarization (BSVSP) effect, protected by the non-pseudo-polar point group symmetry of the wave vector in the first Brillouin zone [Liu et al., Nat. Commun. \textbf{10}, 5144 (2019)]. In this paper, by using first-principles calculations, we show that the BSVSP effect emerges in the two-dimensional (2D) nonsymmorphic Ga$XY$ ($X$= Se, Te; $Y$= Cl, Br, I) family, a new class of 2D materials having in-plane ferroelectricity. Taking the GaTeCl monolayer as a representative example, we observe the BSVSP effect in the split bands along the $X-M$ line located in the proximity of the conduction band minimum. By using a $\vec{k}\cdot\vec{p}$ Hamiltonian derived from symmetry analysis, we clarify that this effect originates from the cancellation of the local spin polarization, enforced by the non-pseudo-polar $C_{2v}$ point group symmetry of the wave vector along the $X-M$ line. Importantly, we find that the spin polarization can be effectively induced by applying an external out-of-plane electric field, indicating that an electrically tunable spin polarization for spintronic applications is plausible.
condensed matter
Primordial perturbations in our universe are believed to have a quantum origin, and can be described by the wavefunction of the universe (or equivalently, cosmological correlators). It follows that these observables must carry the imprint of the founding principle of quantum mechanics: unitary time evolution. Indeed, it was recently discovered that unitarity implies an infinite set of relations among tree-level wavefunction coefficients, dubbed the Cosmological Optical Theorem. Here, we show that unitarity leads to a systematic set of "Cosmological Cutting Rules" which constrain wavefunction coefficients for any number of fields and to any loop order. These rules fix the discontinuity of an n-loop diagram in terms of lower-loop diagrams and the discontinuity of tree-level diagrams in terms of tree-level diagrams with fewer external fields. Our results apply with remarkable generality, namely for arbitrary interactions of fields of any mass and any spin with a Bunch-Davies vacuum around a very general class of FLRW spacetimes. As an application, we show how one-loop corrections in the Effective Field Theory of inflation are fixed by tree-level calculations and discuss related perturbative unitarity bounds. These findings greatly extend the potential of using unitarity to bootstrap cosmological observables and to restrict the space of consistent effective field theories on curved spacetimes.
high energy physics theory
This paper provides a precise error analysis for the maximum likelihood estimate $\hat{a}_{\text{ML}}(u_1^n)$ of the parameter $a$ given samples $u_1^n = (u_1, \ldots, u_n)'$ drawn from a nonstationary Gauss-Markov process $U_i = a U_{i-1} + Z_i,~i\geq 1$, where $U_0 = 0$, $a> 1$, and $Z_i$'s are independent Gaussian random variables with zero mean and variance $\sigma^2$. We show a tight nonasymptotic exponentially decaying bound on the tail probability of the estimation error. Unlike previous works, our bound is tight already for a sample size of the order of hundreds. We apply the new estimation bound to find the dispersion for lossy compression of nonstationary Gauss-Markov sources. We show that the dispersion is given by the same integral formula that we derived previously for the asymptotically stationary Gauss-Markov sources, i.e., $|a| < 1$. New ideas in the nonstationary case include separately bounding the maximum eigenvalue (which scales exponentially) and the other eigenvalues (which are bounded by constants that depend only on $a$) of the covariance matrix of the source sequence, and new techniques in the derivation of our estimation error bound.
computer science
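For this model the maximum likelihood estimate coincides with least squares on the recursion, $\hat{a}_{\text{ML}} = \sum_{i=1}^{n} u_i u_{i-1} / \sum_{i=1}^{n} u_{i-1}^2$; a quick simulation sketch (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

a_true, sigma, n = 1.2, 1.0, 200   # nonstationary regime: a > 1
u = np.zeros(n + 1)                # U_0 = 0
for i in range(1, n + 1):
    u[i] = a_true * u[i - 1] + rng.normal(scale=sigma)

# ML estimate for U_i = a U_{i-1} + Z_i:
a_hat = (u[1:] * u[:-1]).sum() / (u[:-1] ** 2).sum()
print(a_hat)  # the paper shows the error tail decays exponentially
```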
Bright Be star beta CMi has been identified as a non-radial pulsator on the basis of space photometry with the MOST satellite and also as a single-line spectroscopic binary with a period of 170.4 d. The purpose of this study is to re-examine both these findings, using numerous electronic spectra from the Dominion Astrophysical Observatory, Ond\v{r}ejov Observatory, Universit\"atssternwarte Bochum, archival electronic spectra from several observatories, and also the original MOST satellite photometry. We measured the radial velocity of the outer wings of the double H$\alpha$ emission in all spectra at our disposal and were not able to confirm significant radial-velocity changes. We also discuss the problems related to the detection of very small radial-velocity changes and conclude that while it is still possible that the star is a spectroscopic binary, there is currently no convincing proof of it from the radial-velocity measurements. Wavelet analysis of the MOST photometry shows that there is only one persistent (and perhaps slightly variable) periodicity of 0.617 d of the light variations, with a double-wave light curve, all other short periods having only transient character. Our suggestion that this dominant period is the star's rotational period agrees with the estimated stellar radius, projected rotational velocity and with the orbital inclination derived by two teams of investigators. New spectral observations obtained in whole-night series would be needed to find out whether some possibly real, very small radial-velocity changes cannot in fact be due to rapid line-profile changes.
astrophysics
Behavioral science researchers have recently shown strong interest in disaggregating within- and between-person effects (stable traits) from longitudinal data. In this paper, we propose a method of within-person variability score-based causal inference for estimating joint effects of time-varying continuous treatments by effectively controlling for stable traits as time-invariant unobserved confounders. After conceptualizing stable trait factors and within-person variability scores, we introduce the proposed method, which consists of a two-step analysis. Within-person variability scores for each person, which are disaggregated from stable traits of that person, are first calculated using weights based on a best linear correlation preserving predictor through structural equation modeling. Causal parameters are then estimated via a potential outcome approach, either marginal structural models (MSMs) or structural nested mean models (SNMMs), using calculated within-person variability scores. We emphasize the use of SNMMs with G-estimation because of its doubly robust property to model errors. Through simulation and empirical application to data regarding sleep habits and mental health status from the Tokyo Teen Cohort study, we show that the proposed method can recover causal parameters well and that causal estimates might be severely biased if one does not properly account for stable traits.
statistics
In a classically chaotic system that is ergodic, any trajectory will be arbitrarily close to any point of the available phase space after a long time, filling it uniformly. Using Born's rules to connect quantum states with probabilities, one might then expect that all quantum states in the chaotic regime should be uniformly distributed in phase space. This simplified picture was shaken by the discovery of quantum scarring, where some eigenstates are concentrated along unstable periodic orbits. Nevertheless, it is widely accepted that most eigenstates of chaotic models are indeed ergodic. Our results show instead that all eigenstates of the chaotic Dicke model are actually scarred. They also show that even the most random states of this interacting atom-photon system never occupy more than half of the available phase space. Quantum ergodicity is achievable only as an ensemble property, after temporal averages are performed.
condensed matter
Influenza-like illness (ILI) places a heavy social and economic burden on our society. Traditionally, ILI surveillance data is updated weekly and provided at a spatially coarse resolution. Producing timely and reliable high-resolution spatiotemporal forecasts for ILI is crucial for local preparedness and optimal interventions. We present TDEFSI (Theory Guided Deep Learning Based Epidemic Forecasting with Synthetic Information), an epidemic forecasting framework that integrates the strengths of deep neural networks and high-resolution simulations of epidemic processes over networks. TDEFSI yields accurate high-resolution spatiotemporal forecasts using low-resolution time series data. During the training phase, TDEFSI uses high-resolution simulations of epidemics that explicitly model spatial and social heterogeneity inherent in urban regions as one component of training data. We train a two-branch recurrent neural network model to take both within-season and between-season low-resolution observations as features, and output high-resolution detailed forecasts. The resulting forecasts are not just driven by observed data but also capture the intricate social, demographic and geographic attributes of specific urban regions and mathematical theories of disease propagation over networks. We focus on forecasting the incidence of ILI and evaluate TDEFSI's performance using synthetic and real-world testing datasets at the state and county levels in the USA. The results show that, at the state level, our method achieves comparable/better performance than several state-of-the-art methods. At the county level, TDEFSI outperforms the other methods. The proposed method can be applied to other infectious diseases as well.
statistics
A wide class of machine learning algorithms can be reduced to variable elimination on factor graphs. While factor graphs provide a unifying notation for these algorithms, they do not provide a compact way to express repeated structure when compared to plate diagrams for directed graphical models. To exploit efficient tensor algebra in graphs with plates of variables, we generalize undirected factor graphs to plated factor graphs and variable elimination to a tensor variable elimination algorithm that operates directly on plated factor graphs. Moreover, we generalize complexity bounds based on treewidth and characterize the class of plated factor graphs for which inference is tractable. As an application, we integrate tensor variable elimination into the Pyro probabilistic programming language to enable exact inference in discrete latent variable models with repeated structure. We validate our methods with experiments on both directed and undirected graphical models, including applications to polyphonic music modeling, animal movement modeling, and latent sentiment analysis.
statistics
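The core mechanics of the abstract above in a few lines of numpy: variable elimination as tensor contraction, and a plate handled by eliminating inside the plate first and then taking a product over the plate dimension. This is a toy illustration of the idea, not Pyro's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sum-product variable elimination on a tiny chain x1 - x2 - x3 as a
# single tensor contraction; Z is the partition function.
f12 = rng.random((2, 3))            # factor over (x1, x2)
f23 = rng.random((3, 4))            # factor over (x2, x3)
Z = np.einsum("ab,bc->", f12, f23)

# A plate of N i.i.d. replicates of x3, all attached to the shared x2:
#   Z_plated = sum_{x1,x2} f12(x1,x2) * prod_n sum_{x3} f23_n(x2,x3)
# Tensor variable elimination eliminates inside the plate first, then
# takes the product over the plate dimension, rather than unrolling.
N = 5
f23_plated = rng.random((N, 3, 4))
msg = f23_plated.sum(axis=2)        # (N, 3): eliminate each replicate's x3
Z_plated = np.einsum("ab,b->", f12, msg.prod(axis=0))
print(Z, Z_plated)
```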
The quantitative analyses of karst spring discharge typically rely on physical-based models, which are inherently uncertain. To improve the understanding of the mechanism of spring discharge fluctuation and the relationship between precipitation and spring discharge, three machine learning methods were developed to reduce the predictive errors of physical-based groundwater models, simulate the discharge of Longzici Spring's karst area, and predict changes in the spring on the basis of long time series precipitation monitoring and spring water flow data from 1987 to 2018. The three machine learning methods included two artificial neural networks (ANNs), namely, multilayer perceptron (MLP) and long short-term memory-recurrent neural network (LSTM-RNN), and support vector regression (SVR). A normalization method was introduced for data preprocessing to make the three methods robust and computationally efficient. To compare and evaluate the capability of the three machine learning methods, the mean squared error (MSE), mean absolute error (MAE), and root-mean-square error (RMSE) were selected as the performance metrics for these methods. Simulations showed that MLP reduced MSE, MAE, and RMSE to 0.0010, 0.0254, and 0.0318, respectively. Meanwhile, LSTM-RNN reduced MSE to 0.0010, MAE to 0.0272, and RMSE to 0.0329. Moreover, the decrease in MSE, MAE, and RMSE were 0.0910, 0.1852, and 0.3017, respectively, for SVR. Results indicated that MLP performed slightly better than LSTM-RNN, and MLP and LSTM-RNN performed considerably better than SVR. Furthermore, ANNs were demonstrated to be superior machine learning methods for simulating and predicting karst spring discharge.
electrical engineering and systems science
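A compact sketch of the pipeline described above (normalization, an MLP regressor, and the MSE/MAE/RMSE metrics), using synthetic stand-in data rather than the Longzici Spring records; the lag-feature construction is an assumption for illustration.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(7)

# Stand-in data: monthly precipitation -> spring discharge with lag + noise.
precip = rng.gamma(2.0, 30.0, size=400)
discharge = 0.4 * precip + 0.3 * np.roll(precip, 1) + rng.normal(0, 5, 400)

# Features: current + previous 3 months of precipitation.
X = np.column_stack([np.roll(precip, k) for k in range(4)])[4:]
y = discharge[4:]
Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]

scaler = MinMaxScaler().fit(Xtr)           # the normalization step
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(scaler.transform(Xtr), ytr)
pred = model.predict(scaler.transform(Xte))

mse = mean_squared_error(yte, pred)
print("MSE:", mse, "MAE:", mean_absolute_error(yte, pred),
      "RMSE:", np.sqrt(mse))
```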
We study the effects of the $\Delta$(1232) resonance as an effective degree of freedom for charged and neutral pion photo-production on nucleons. Different observables have been calculated for these processes by using relativistic chiral perturbation theory up to $\mathcal{O}(p^3)$ in the $\delta$ counting, thus including pion loops. We compare our model with a large database containing the available experimental data and constrain some unknown low energy constants.
high energy physics phenomenology
Recent NuSTAR and XMM-Newton observations of the molecular cloud around the Arches stellar cluster demonstrate a dramatic change both in morphology and intensity of its non-thermal X-ray emission, similar to that observed in many molecular clouds of the Central Molecular Zone at the Galactic Center. These variations trace the propagation of illuminating fronts, presumably induced by past flaring activities of Sgr A$^{\star}$. In this paper we present results of a long NuSTAR observation of the Arches complex in 2016, taken a year after the previous XMM+NuSTAR observations which revealed a strong decline in the cloud emission. The 2016 NuSTAR observation shows that both the non-thermal continuum emission and the Fe K$_{\alpha}$ 6.4~keV line flux are consistent with the level measured in 2015. No significant variation has been detected in both spectral shape and Fe K$_{\alpha}$ equivalent width EW$_{\rm 6.4\ keV}$, which may be interpreted as the intensity of the Arches non-thermal emission reaching its stationary level. At the same time, the measured 2016 non-thermal flux is not formally in disagreement with the declining trend observed in 2007-2015. Thus, we cannot assess whether the non-thermal emission has reached a stationary level in 2016, and new observations, separated by a longer time period, are needed to draw stringent conclusions. Detailed spectral analysis of three bright clumps of the Arches molecular cloud performed for the first time showed different EW$_{\rm 6.4\ keV}$ and absorption. This is a strong hint that the X-ray emission from the molecular cloud is a mix of two components with different origins.
astrophysics
Path integral quantum Monte Carlo (PIMC) is a method for estimating thermal equilibrium properties of stoquastic quantum spin systems by sampling from a classical Gibbs distribution using Markov chain Monte Carlo. The PIMC method has been widely used to study the physics of materials and for simulated quantum annealing, but these successful applications are rarely accompanied by formal proofs that the Markov chains underlying PIMC rapidly converge to the desired equilibrium distribution. In this work we analyze the mixing time of PIMC for 1D stoquastic Hamiltonians, including disordered transverse Ising models (TIM) with long-range algebraically decaying interactions as well as disordered XY spin chains with nearest-neighbor interactions. By bounding the convergence time to the equilibrium distribution we rigorously justify the use of PIMC to approximate partition functions and expectations of observables for these models at inverse temperatures that scale at most logarithmically with the number of qubits. The mixing time analysis is based on the canonical paths method applied to the single-site Metropolis Markov chain for the Gibbs distribution of 2D classical spin models with couplings related to the interactions in the quantum Hamiltonian. Since the system has strongly nonisotropic couplings that grow with system size, it does not fall into the known cases where 2D classical spin models are known to mix rapidly.
quantum physics
We present FLaREON (Fast Lyman-Alpha Radiative Escape from Outflowing Neutral gas), a public python package that delivers fast and accurate Lyman alpha escape fractions and line profiles over a wide range of outflow geometries and properties. The code incorporates different algorithms, such as interpolation and machine learning to predict Lyman alpha line properties from a pre-computed grid of outflow configurations based on the outputs of a Monte Carlo radiative transfer code. Here we describe the algorithm, discuss its performance and illustrate some of its many applications. Most notably, FLaREON can be used to infer the physical properties of the outflowing medium from an observed Lyman alpha line profile, including the escape fraction, or it can be run over millions of objects in a galaxy formation model to simulate the escape of Lyman alpha photons in a cosmological volume.
astrophysics
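The grid-interpolation idea behind such emulators, sketched generically in Python; this is not FLaREON's actual API or physics, and the grid values here are random stand-ins for pre-computed Monte Carlo radiative transfer outputs.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical outflow-parameter grid (axes and ranges invented).
logNH = np.linspace(17.0, 21.5, 10)      # HI column density
v_exp = np.linspace(0.0, 1000.0, 12)     # expansion velocity [km/s]
logtau = np.linspace(-3.0, 0.0, 8)       # dust optical depth

# Stand-in for a table of escape fractions from Monte Carlo runs.
grid = np.random.default_rng(8).random((10, 12, 8))

f_esc = RegularGridInterpolator((logNH, v_exp, logtau), grid)
print(f_esc([[19.2, 250.0, -1.5]]))      # fast lookup instead of a new MC run
```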
The Kaplan-Meier estimate, commonly known as the product limit method (PLM), and maximum likelihood estimate (MLE) methods in general are often cited as means of stochastic highway capacity estimation. This article discusses their unsuitability for such an application, as the properties of traffic flow do not meet the assumptions for use of these methods. They assume that each observed subject has a history through which it passed without failing. However, due to its nature, each traffic flow measurement behaves as a separate subject which did not go through all the lower levels of intensity (did not "age"). An alternative method is proposed. It fits the resulting cumulative frequency of breakdowns with respect to the traffic flow intensity leading to the breakdown, instead of directly estimating the underlying probability distribution of capacity. Analyses of the accuracy and of the sensitivity to data quantity and censoring rate of the new method are provided, along with a comparison to the PLM. The results prove the unsuitability of the PLM and of MLE methods in general. The new method is then used in a case study which compares the capacity of a work zone with and without a traffic flow speed harmonisation system installed. The results confirm the positive effect of harmonisation on capacity.
statistics
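A sketch of the proposed fit, assuming a Weibull form for the capacity distribution (the distribution family and all parameter values are illustrative): the empirical cumulative frequency of breakdowns versus the flow intensity preceding each breakdown is fitted directly, rather than estimating the capacity distribution via PLM/MLE.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import weibull_min

# Synthetic stand-in: traffic intensities (veh/h) observed immediately
# before breakdowns.
breakdown_flows = weibull_min.rvs(8.0, scale=2000.0, size=300, random_state=9)

# Empirical cumulative frequency of breakdowns vs flow intensity ...
flows = np.sort(breakdown_flows)
ecdf = np.arange(1, flows.size + 1) / flows.size

# ... fitted directly with a Weibull CDF.
def wcdf(q, shape, scale):
    return weibull_min.cdf(q, shape, scale=scale)

(shape, scale), _ = curve_fit(wcdf, flows, ecdf, p0=(5.0, 1800.0))
print("fitted capacity distribution: shape=%.2f scale=%.0f" % (shape, scale))
```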
We describe the ongoing `Survey for Magnetars, Intermittent pulsars, Rotating radio transients and Fast radio bursts' (SMIRF), performed using the newly refurbished UTMOST telescope. SMIRF repeatedly sweeps the southern Galactic plane performing real-time periodicity and single-pulse searches, and is the first survey of its kind carried out with an interferometer. SMIRF is facilitated by a robotic scheduler which is capable of fully autonomous commensal operations. We report on the SMIRF observational parameters, the data analysis methods, the survey's sensitivities to pulsars, techniques to mitigate radio frequency interference and present some early survey results. UTMOST's wide field of view permits a full sweep of the Galactic plane to be performed every fortnight, two orders of magnitude faster than previous surveys. In the six months of operations from January to June 2018, we have performed $\sim 10$ sweeps of the Galactic plane with SMIRF. Notable blind re-detections include the magnetar PSR J1622$-$4950, the RRAT PSR J0941$-$3942 and the eclipsing pulsar PSR J1748$-$2446A. We also report the discovery of a new pulsar, PSR J1705$-$54. Our follow-up of this pulsar with the UTMOST and Parkes telescopes, at average flux limits of $\leq 20$ mJy and $\leq 0.16$ mJy respectively, categorizes this as an intermittent pulsar with a nulling fraction of $< 0.002$.
astrophysics
We propose a defect-to-edge topological quantum quench protocol that can efficiently inject electric charge from defect-core states into a chiral edge current of an induced Chern insulator. The initial state of the system is assumed to be a Mott insulator, with electrons bound to topological defects that are pinned by disorder. We show that a "critical quench" to a Chern insulator mass of order the Mott gap shunts charge from defects to the edge, while a second stronger quench can trap it there and boost the edge velocity, creating a controllable current. We apply this idea to a skyrmion charge in the $\nu = 0$ quantum Hall antiferromagnet in graphene, where the quench into the Chern insulator could be accomplished via Floquet driving with circularly polarized light.
condensed matter
We investigate the effect of an applied constant and uniform magnetic field in the fine-structure constant of massive and massless QED. In massive QED, it is shown that a strong magnetic field removes the so-called Landau pole and that the fine-structure constant becomes anisotropic, having different values along and transverse to the field direction. Contrary to other results in the literature, we find that the anisotropic fine-structure constant always decreases with the field. We also study the effect of the running of the coupling constant with the magnetic field on the electron mass. We find that in both cases of massive and massless QED, the electron dynamical mass always decreases with the magnetic field, which can be interpreted as an inverse magnetic catalysis effect.
high energy physics phenomenology
Approximate Bayesian computation (ABC) and other likelihood-free inference methods have gained popularity in the last decade, as they allow rigorous statistical inference for complex models without analytically tractable likelihood functions. A key component for accurate inference with ABC is the choice of summary statistics, which summarize the information in the data, but at the same time should be low-dimensional for efficiency. Several dimension reduction techniques have been introduced to automatically construct informative and low-dimensional summaries from a possibly large pool of candidate summaries. Projection-based methods, which are based on learning simple functional relationships from the summaries to parameters, are widely used and usually perform well, but might fail when the assumptions behind the transformation are not satisfied. We introduce a localization strategy for any projection-based dimension reduction method, in which the transformation is estimated in the neighborhood of the observed data instead of the whole space. Localization strategies have been suggested before, but the performance of the transformed summaries outside the local neighborhood has not been guaranteed. In our localization approach the transformation is validated and optimized over validation datasets, ensuring reliable performance. We demonstrate the improvement in the estimation accuracy for localized versions of linear regression and partial least squares, for three different models of varying complexity.
statistics
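A minimal sketch of the localization idea for a projection-based summary construction, using plain linear regression as the underlying transformation; all names and the fixed-fraction neighborhood rule are illustrative, and the paper additionally validates and optimizes the neighborhood over validation datasets.

```python
import numpy as np

def localized_projection(s_sim, theta_sim, s_obs, frac=0.1):
    """Fit the summary-to-parameter regression only in a neighborhood of
    the observed summaries; the fitted coefficients define a
    low-dimensional projection of the candidate summaries.

    s_sim     : (n, d) simulated candidate summaries
    theta_sim : (n, p) parameters that generated them
    s_obs     : (d,) observed summaries
    """
    s_sim, theta_sim = np.asarray(s_sim), np.asarray(theta_sim)
    dist = np.linalg.norm(s_sim - s_obs, axis=1)      # distance to observation
    keep = dist <= np.quantile(dist, frac)            # local neighborhood
    X = np.column_stack([np.ones(keep.sum()), s_sim[keep]])
    beta, *_ = np.linalg.lstsq(X, theta_sim[keep], rcond=None)
    B = beta[1:]                                      # drop the intercept rows
    return s_sim @ B, s_obs @ B                       # projected summaries
```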
It is proved that a module $M$ over a Noetherian local ring $R$ of prime characteristic and positive dimension has finite flat dimension if Tor$_i^R({}^e R, M)=0$ for dim $R$ consecutive positive values of $i$ and infinitely many $e$. Here ${}^e R$ denotes the ring $R$ viewed as an $R$-module via the $e$th iteration of the Frobenius endomorphism. In the case $R$ is Cohen-Macaulay, it suffices that the Tor vanishing above holds for a single $e\geq \log_p e(R)$, where $e(R)$ is the multiplicity of the ring. This improves a result of D. Dailey, S. Iyengar, and the second author, as well as generalizing a theorem due to C. Miller from finitely generated modules to arbitrary modules. We also show that if $R$ is a complete intersection ring then the vanishing of Tor$_i^R({}^e R, M)$ for single positive values of $i$ and $e$ is sufficient to imply $M$ has finite flat dimension. This extends a result of L. Avramov and C. Miller.
mathematics
We present the discovery of a new $z \sim 6.8$ quasar identified with the near-IR VISTA Hemisphere Survey (VHS) and spectroscopically confirmed by the ESO New Technology Telescope (NTT) and the Magellan telescope. This quasar has been selected by spectral energy distribution (SED) classification using near infrared data from VISTA, optical data from Pan-STARRS, and mid-IR data from WISE. The SED classification algorithm is used to statistically rank two classes, foreground Galactic low-mass stars and high redshift quasars, prior to spectroscopic observation. Forced photometry on Pan-STARRS pixels for VHS J0411-0907 allows us to improve the SED classification reduced-$\chi^2$ and photometric redshift. VHS J0411-0907 ($z=6.82$, $y_{AB} = 20.1$ mag, $J_{AB} = 20.0$ mag) has the brightest J-band continuum magnitude of the nine known quasars at $z > 6.7$ and is currently the highest redshift quasar detected in the Pan-STARRS survey. This quasar has one of the lowest black hole masses ($M_{\rm{BH}}= (6.13 \pm 0.51)\times 10^8\:\mathrm{M_{\odot}}$) and the highest Eddington ratio ($2.37\pm0.22$) of the known quasars at $z>6.5$. The high Eddington ratio indicates that some very high-$z$ quasars are undergoing super Eddington accretion. We also present coefficients of the best polynomial fits for colours vs spectral type on the Pan-STARRS, VISTA and WISE systems for MLT dwarfs and present a forecast for the expected numbers of quasars at $z>6.5$.
astrophysics
In this paper we consider a distributed online learning setting for joint regret with communication constraints. This is a multi-agent setting in which in each round $t$ an adversary activates an agent, which has to issue a prediction. A subset of all the agents may then communicate a $b$-bit message to their neighbors in a graph. All agents cooperate to control the joint regret, which is the sum of the losses of the agents minus the losses evaluated at the best fixed common comparator parameters $\pmb{u}$. We provide a comparator-adaptive algorithm for this setting, which means that the joint regret scales with the norm of the comparator $\|\pmb{u}\|$. To address communication constraints we provide deterministic and stochastic gradient compression schemes and show that with these compression schemes our algorithm has worst-case optimal regret for the case that all agents communicate in every round. Additionally, we exploit the comparator-adaptive property of our algorithm to learn the best partition from a set of candidate partitions, which allows different subsets of agents to learn a different comparator.
computer science
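To make the compression step concrete, here is a standard unbiased stochastic b-bit quantizer of the kind such schemes build on; this is a generic sketch, not necessarily the exact compressor analyzed in the paper, and in practice one would transmit the grid indices plus the range (lo, hi) rather than the decoded values.

```python
import numpy as np

def stochastic_quantize(g, b=4, rng=np.random.default_rng(0)):
    """Stochastic b-bit quantization of a gradient-like vector:
    each coordinate is rounded to a uniform grid at random so the
    compressed message is unbiased, E[Q(g)] = g."""
    levels = 2 ** b - 1
    lo, hi = g.min(), g.max()
    if hi == lo:
        return g.copy()
    unit = (g - lo) / (hi - lo) * levels       # map onto [0, levels]
    floor = np.floor(unit)
    prob = unit - floor                        # round up w.p. fractional part
    q = floor + (rng.random(g.shape) < prob)
    return lo + q / levels * (hi - lo)         # what the receiver decodes
```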
Estimating the number of communities is one of the fundamental problems in community detection. We re-examine the Bayesian paradigm for stochastic block models and propose a "corrected Bayesian information criterion" to determine the number of communities, and we show that the proposed estimator is consistent under mild conditions. The proposed criterion improves those used in Wang and Bickel (2016) and Saldana et al. (2017), which tend to underestimate and overestimate the number of communities, respectively. Along the way, we establish the Wilks theorem for stochastic block models. Moreover, we show that, to obtain consistency of model selection for stochastic block models, we need a so-called "consistency condition". We also provide sufficient conditions for both homogeneous and non-homogeneous networks. The results are further extended to degree-corrected stochastic block models. Numerical studies demonstrate our theoretical results.
statistics
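To make the model-selection setting concrete, here is a generic BIC-style selection loop for the number of SBM communities; the penalty below is a deliberate placeholder, and the paper's "corrected" criterion refines exactly this term, so treat everything here as an assumption-laden sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def sbm_loglik(A, z, K):
    """Bernoulli SBM log-likelihood with block probabilities profiled out."""
    ll = 0.0
    for a in range(K):
        for b in range(a, K):
            ia, ib = np.where(z == a)[0], np.where(z == b)[0]
            if a == b:
                pairs = len(ia) * (len(ia) - 1) / 2
                edges = np.triu(A[np.ix_(ia, ia)], 1).sum()
            else:
                pairs = len(ia) * len(ib)
                edges = A[np.ix_(ia, ib)].sum()
            if pairs == 0:
                continue
            p = np.clip(edges / pairs, 1e-10, 1 - 1e-10)
            ll += edges * np.log(p) + (pairs - edges) * np.log(1 - p)
    return ll

def choose_K(A, K_max=8):
    n = A.shape[0]
    _, vecs = np.linalg.eigh(A)                     # spectral embedding
    best = None
    for K in range(1, K_max + 1):
        z = KMeans(n_clusters=K, n_init=10).fit_predict(vecs[:, -K:])
        pen = K * (K + 1) / 2 * np.log(n) + n * np.log(K)  # placeholder penalty
        score = sbm_loglik(A, z, K) - pen
        if best is None or score > best[1]:
            best = (K, score)
    return best[0]
```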
Quantum optimal control has numerous important applications ranging from pulse shaping in magnetic-resonance imaging to laser control of chemical reactions and quantum computing. Our objective is to address two major challenges that have limited the success of applications of quantum optimal control so far: non-commutativity inherent in quantum systems and non-convexity of quantum optimal control problems involving more than three quantum levels. Methodologically, we address the non-commutativity of the control Hamiltonian at different times by the use of Magnus expansion. To tackle the non-convexity, we employ non-commutative polynomial optimisation and non-commutative geometry. As a result, we present the first globally convergent methods for quantum optimal control.
quantum physics
We consider the problem of providing valid inference for a selected parameter in a sparse regression setting. It is well known that classical regression tools can be unreliable in this context due to the bias generated in the selection step. Many approaches have been proposed in recent years to ensure inferential validity. Here, we consider a simple alternative to data splitting based on randomising the response vector, which allows for higher selection and inferential power than the former and is applicable with an arbitrary selection rule. We provide a theoretical and empirical comparison of both methods and extend the randomisation approach to non-normal settings. Our investigations show that the gain in power can be substantial.
statistics
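One common way to randomize a Gaussian response so that selection and inference use independent pieces of information is sketched below, assuming a known noise level sigma; whether this matches the paper's exact construction is not guaranteed, so read it as an illustration of the general idea.

```python
import numpy as np

def randomized_split(y, sigma, gamma=1.0, rng=np.random.default_rng(0)):
    """Split a Gaussian response into two independent parts:
    u = y + gamma*w is used for model selection,
    v = y - w/gamma is used for inference.
    With w ~ N(0, sigma^2 I) independent of y ~ N(mu, sigma^2 I),
    cov(u, v) = sigma^2 - sigma^2 = 0, so u and v are independent."""
    w = rng.normal(0.0, sigma, size=len(y))
    return y + gamma * w, y - w / gamma
```

One would then run an arbitrary selection rule (e.g. the lasso) on u and apply classical inference for the selected coefficients using v; gamma trades off selection power against inferential power.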
We study the relationship between TsT transformations, marginal deformations of string theory on AdS$_3$ backgrounds, and irrelevant deformations of 2d CFTs. We show that TsT transformations of NS-NS backgrounds correspond to instantaneous deformations of the worldsheet action by the antisymmetric product of two Noether currents, holographically mirroring the definition of the $T\bar{T}$, $J\bar{T}$, $T\bar{J}$, and $J\bar{J}$ deformations of 2d CFTs. Applying a TsT transformation to string theory on BTZ $\times S^3\times M^4$ we obtain a general class of rotating black string solutions, including the Horne-Horowitz and the Giveon-Itzhaki-Kutasov ones as special cases, which we show are holographically dual to thermal states in single-trace $T\bar{T}$-deformed CFTs. We also find a smooth solution interpolating between global AdS$_3$ in the IR and a linear dilaton background in the UV that is interpreted as the NS-NS ground state in the dual $T\bar{T}$-deformed CFT. This background suggests the existence of an upper bound on the deformation parameter above which the solution becomes complex. We find that the worldsheet spectrum, the thermodynamics of the black strings (in particular their Bekenstein-Hawking entropy), and the critical value of the deformation parameter match the corresponding quantities obtained from single-trace $T\bar{T}$ deformations.
high energy physics theory
We work out the consistent AdS$_3\times S^3$ truncations of the bosonic sectors of both the six-dimensional ${\cal N}=(1,1)$ and ${\cal N}=(2,0)$ supergravity theories. They result in inequivalent three-dimensional half-maximal ${\rm SO}(4)$ gauged supergravities describing 32 propagating bosonic degrees of freedom apart from the non-propagating supergravity multiplet. We present the full non-linear Kaluza-Klein reduction formulas and illustrate them by explicitly uplifting a number of AdS$_3$ vacua.
high energy physics theory
We investigate the consequences of $\mu-\tau$ reflection symmetry in the presence of a light sterile neutrino for the $3+1$ neutrino mixing scheme. We discuss the implications of total $\mu-\tau$ reflection symmetry as well as partial $\mu-\tau$ reflection symmetry. For the total $\mu-\tau$ reflection symmetry we find that the values of $\theta_{23}$ and $\delta$ remain confined near $\pi/4$ and $\pm \pi/2$, respectively. The current allowed region for $\theta_{23}$ and $\delta$ in the case of inverted hierarchy lies outside the area preferred by the total $\mu-\tau$ reflection symmetry. However, interesting predictions on the neutrino mixing angles and Dirac CP violating phases are obtained considering partial $\mu-\tau$ reflection symmetry. We obtain predictive correlations between the neutrino mixing angle $\theta_{23}$ and the Dirac CP phase $\delta$ and study the testability of these correlations at the future long baseline experiment DUNE. We find that while the imposition of $\mu-\tau$ reflection symmetry in the first column admits both normal and inverted neutrino mass hierarchies, demanding $\mu-\tau$ reflection symmetry for the second column excludes the inverted hierarchy. Interestingly, the sterile mixing angle $\theta_{34}$ gets tightly constrained by the $\mu-\tau$ reflection symmetry in the fourth column. We also study the consequences of $\mu-\tau$ reflection symmetry for the Majorana phases and neutrinoless double beta decay.
high energy physics phenomenology
An alternative approach to the calculation of tunneling actions, that control the exponential suppression of the decay of metastable phases, is presented. The new method circumvents the use of bounces in Euclidean space by introducing an auxiliary function, a tunneling potential $V_t$ that connects smoothly the metastable and stable phases of the field potential $V$. The tunneling action is obtained as the integral in field space of an action density that is a simple function of $V_t$ and $V$. This compact expression can be considered as a generalization of the thin-wall action to arbitrary potentials and allows a fast numerical evaluation with a precision below the percent level for typical potentials. The method can also be used to generate potentials with analytic tunneling solutions.
high energy physics theory
We analyze the 2019 Chilean social unrest episode, consisting of a sequence of events, through the lens of an epidemic-like model that considers global contagious dynamics. We adjust the parameters to the Chilean social unrest aggregated public data available from the Undersecretary of Human Rights, and observe that the number of violent events follows a well-defined pattern already observed in various public disorder episodes in other countries since the sixties. Although the epidemic-like models display a single event that reaches a peak followed by an exponential decay, we add standard perturbation schemes that may produce a rich temporal behavior as observed in the 2019 Chilean social turmoil. Although we only have access to aggregated data, we are still able to fit it to our model quite well, providing interesting insights on social unrest dynamics.
physics
We propose a novel adversarial learning strategy for mixture models of Hawkes processes, leveraging data augmentation techniques for Hawkes processes in the framework of self-paced learning. Instead of learning a mixture model directly from a set of event sequences drawn from different Hawkes processes, the proposed method learns the target model iteratively: it generates "easy" sequences and uses them in an adversarial and self-paced manner. In each iteration, we first generate a set of augmented sequences from the original observed sequences. Based on the fact that an easy sample for the target model can be an adversarial sample for a misspecified model, we apply maximum likelihood estimation with an adversarial self-paced mechanism. In this manner, the target model is updated, and the augmented sequences that obey it are employed for the next learning iteration. Experimental results show that the proposed method consistently outperforms traditional methods.
statistics
Complementary metal-oxide semiconductor (CMOS) technology has radically reshaped the world by taking humanity to the digital age. Cramming more transistors into the same physical space has enabled an exponential increase in computational performance, a strategy that has been recently hampered by the increasing complexity and cost of miniaturization. To continue achieving significant gains in computing performance, new computing paradigms, such as quantum computing, must be developed. However, finding the optimal physical system to process quantum information, and scale it up to the large number of qubits necessary to build a general-purpose quantum computer, remains a significant challenge. Recent breakthroughs in nanodevice engineering have shown that qubits can now be manufactured in a similar fashion to silicon field-effect transistors, opening an opportunity to leverage the know-how of the CMOS industry to address the scaling challenge. In this article, we focus on the analysis of the scaling prospects of quantum computing systems based on CMOS technology.
quantum physics
We consider a Markov chain of point processes such that each state is a superposition of an independent cluster process, with the previous state as its centre process, together with some independent noise process. The model extends earlier work by Felsenstein and Shimatani describing a reproducing population. We discuss when closed-form expressions of the first and second order moments are available for a given state. In a special case it is known that the pair correlation function for this type of point process converges as the Markov chain progresses, but it has not been shown whether the Markov chain has an equilibrium distribution with this particular pair correlation function, and how it may be constructed. Assuming the same reproducing system, we construct an equilibrium distribution by a coupling argument.
mathematics
We present a derivation of the Feynman-Vernon approach to open quantum systems in the language of super-operators. We show that this gives a new and more direct derivation of the generating function of energy changes in a bath, or baths. This generating function is given by a Feynman-Vernon-like influence functional, with only time shifts in some of the kernels. We further show that the approach can be extended to anharmonic baths by an expansion in cumulants. Every non-zero cumulant of certain environment correlation functions thus gives a kernel in a higher-order term in the Feynman-Vernon action.
quantum physics
A great deal of effort has recently been devoted to the joint device activity detection and channel estimation problem in massive machine-type communications. This paper targets two practical issues along this line that have not been addressed before: asynchronous transmission from uncoordinated users and efficient algorithms for real-time implementation in systems with a massive number of devices. Specifically, this paper considers a practical system where the preamble sent by each active device is delayed by some unknown number of symbols due to the lack of coordination. We manage to cast the problem of detecting the active devices and estimating their delays and channels into a group LASSO problem. Then, a block coordinate descent algorithm is proposed to solve this problem globally, where a closed-form solution is available when updating each block of variables with the other blocks of variables being fixed, thanks to the special structure of the problem at hand. Our analysis shows that the overall complexity of the proposed algorithm is low, making it suitable for real-time applications.
electrical engineering and systems science
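A minimal block coordinate descent for a generic group LASSO, shown to illustrate the closed-form block updates mentioned above; it assumes each block's design matrix has orthonormal columns so every update is an exact group soft-threshold, whereas the paper's design encodes preambles and delays and has its own structure.

```python
import numpy as np

def group_lasso_bcd(y, X_blocks, lam, iters=100):
    """Block coordinate descent for
        min_b 0.5*||y - sum_g X_g b_g||^2 + lam * sum_g ||b_g||_2,
    assuming each X_g has orthonormal columns (X_g^T X_g = I), so each
    block update is the closed-form group soft-threshold."""
    beta = [np.zeros(X.shape[1]) for X in X_blocks]
    for _ in range(iters):
        for g, Xg in enumerate(X_blocks):
            # residual with block g removed
            resid = y - sum(X @ b for h, (X, b) in
                            enumerate(zip(X_blocks, beta)) if h != g)
            c = Xg.T @ resid
            nc = np.linalg.norm(c)
            # group soft-threshold: shrink the whole block toward zero
            beta[g] = np.maximum(0.0, 1.0 - lam / nc) * c if nc > 0 else 0.0 * c
    return beta
```

A zeroed-out block corresponds to an inactive device, which is how the sparsity of the estimate encodes activity detection.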
Carbon-enhanced metal-poor stars, CH stars and barium stars, among other classes of chemically peculiar stars, are thought to be products of the interaction of low- and intermediate-mass binaries which occurred when the most evolved star was in the asymptotic giant branch (AGB) phase. Binary evolution models predict that if the initial orbital periods of such systems are shorter than a few thousand days, their orbits should have circularised due to tidal effects. However, observations of the progeny of AGB binaries show that many of these objects have substantial eccentricities, up to about 0.9. In this work we explore the impact of wind mass transfer on the orbits of AGB binaries by performing numerical simulations in which the AGB wind is modelled using a hydrodynamical code and the stellar dynamics is evolved using an N-body code. We find that in most models wind mass transfer contributes to the circularisation of the orbit, but on longer timescales than tidal circularisation if the eccentricity is less than about 0.4. For low initial wind velocities and pseudo-synchronisation of the donor star, we find a structure resembling wind Roche-lobe overflow near periastron. In this case, the interaction between the gas and the stars is stronger than for high initial wind velocities and the orbit shrinks while the eccentricity decreases. In one of our models wind interaction is found to pump the eccentricity on a similar timescale as tidal circularisation. Although our study is based on a small sample of models, it offers some insight into the orbital evolution of eccentric binaries interacting via winds. A larger grid of numerical models for different binary parameters is needed to test if a regime exists where hydrodynamical eccentricity pumping can effectively counteract tidal circularisation, and if this can explain the puzzling eccentricities of the descendants of AGB binaries.
astrophysics
Superconducting Weyl semimetals present a novel and promising system to harbor new forms of unconventional topological superconductivity. Within the context of time-reversal symmetric Weyl semimetals with $d$-wave superconductivity, we demonstrate that the number of Majorana cones equates to the number of intersections between the $d$-wave nodal lines and the Fermi arcs. We illustrate the importance of nodal line-arc intersections by demonstrating the existence of locally stable surface Majorana cones that the winding number does not predict. The discrepancy between Majorana cones and the winding number necessitates an augmentation of the winding number formulation to account for each intersection. In addition, we show that imposing additional mirror symmetries globally protects the nodal line-arc intersections and the corresponding Majorana cones.
condensed matter
Acceptance sampling plans offered by ISO 2859-1 are far from optimal under the conditions for statistical verification in modules F and F1 as prescribed by Annex II of the Measuring Instruments Directive (MID) 2014/32/EU, resulting in sample sizes that are larger than necessary. An optimised single-sampling scheme is derived, both for large lots using the binomial distribution and for finite-sized lots using the exact hypergeometric distribution, resulting in smaller sample sizes that are economically more efficient while offering the full statistical protection required by the MID.
statistics
The Landau-Yang theorem is sometimes formulated as a selection rule forbidding two real (that is, non-virtual) photons with zero total momentum to be in the state of the total angular momentum J=1. In this paper we discuss whether the theorem itself and this particular formulation can be extended to a pair of two {\em twisted} photons, which carry orbital angular momentum with respect to their propagation direction. We point out possible sources of confusion, which may arise both from the unusual features of twisted photons and from the fact that usual proofs of the Landau-Yang theorem operate in the center of motion reference frame, which, strictly speaking, exists only for plane waves. We show with an explicit calculation that a pair of twisted photons does have a non-zero overlap with the J=1 state. What is actually forbidden is the production of a spin-1 particle by such a photon pair, and in this formulation the Landau-Yang theorem is rock-solid. Although both the twisted photon pair and the spin-1 particle can exist in the J=1 state, these two systems just cannot be coupled in a gauge-invariant and Lorentz invariant manner respecting Bose symmetry.
high energy physics phenomenology
In a microgrid, real-time state estimation has always been a challenge due to several factors such as the complexity of computations, constraints of the communication network, and low inertia. In this paper, a real-time event-based optimal linear state estimator is introduced, which uses the send-on-delta data collection approach over wireless sensor networks and exhibits low computation and communication resource costs. By employing the send-on-delta event-based measurement strategy, the burden on the wireless sensor network is reduced, since transmissions occur only when there is a significant variation in the signals. The state estimator structure is developed based on the linear Kalman filter, with additional steps for the centralized fusion of event data and the optimal reconstruction of signals by projection onto convex sets. For the practical feasibility analysis, this paper also develops an Internet of Things prototype platform based on the LoRaWAN protocol that satisfies the requirements of the proposed state estimator in a microgrid.
electrical engineering and systems science
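A sketch of the two ingredients named above: a send-on-delta trigger and a Kalman step that corrects only when an event arrives. The centralized multi-sensor fusion and the projection-onto-convex-sets reconstruction from the paper are omitted, and all names are illustrative.

```python
import numpy as np

class SendOnDeltaSensor:
    """Transmit a measurement only when it has moved by more than
    `delta` since the last transmission (the send-on-delta rule)."""
    def __init__(self, delta):
        self.delta, self.last = delta, None

    def sample(self, y):
        if self.last is None or abs(y - self.last) > self.delta:
            self.last = y
            return y          # event: send the new value
        return None           # no event: stay silent

def kalman_step(x, P, A, C, Q, R, y=None):
    """One predict step, plus a correct step only if an event arrived."""
    x, P = A @ x, A @ P @ A.T + Q                 # predict
    if y is not None:                             # correct on an event
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
    return x, P
```

Between events the filter simply propagates the prediction, which is what keeps the communication load low.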
We outline a model of the Crab Pulsar Wind Nebula with two different populations of synchrotron emitting particles, arising from two different acceleration mechanisms: (i) Component-I due to Fermi-I acceleration at the equatorial portion of the termination shock, with particle spectral index $p_I \approx 2.2$ above the injection break corresponding to $\gamma_{wind} \sigma_{wind} \sim 10^5$, peaking in the UV ($\gamma_{wind} \sim 10^2$ is the bulk Lorentz factor of the wind, $\sigma _{wind} \sim 10^3$ is wind magnetization); (ii) Component-II due to acceleration at reconnection layers in the bulk of the turbulent Nebula, with particle index $p_{II} \approx 1.6$. The model requires relatively slow but highly magnetized wind. For both components the overall cooling break is in the infra-red at $\sim 0.01$ eV, so that the Component-I is in the fast cooling regime (cooling frequency below the peak frequency). In the optical band Component-I produces emission with the cooling spectral index of $\alpha_o \approx 0.5$, softening towards the edges due to radiative losses. Above the cooling break, in the optical, UV and X-rays, Component-I mostly overwhelms Component-II. We hypothesize that acceleration at large-scale current sheets in the turbulent nebula (Component-II) extends to the synchrotron burn-off limit of $\epsilon_s \approx 100$ MeV. Thus in our model acceleration in turbulent reconnection (Component-II) can produce both hard radio spectra and occasional gamma-ray flares. This model may be applicable to a broader class of high energy astrophysical objects, like AGNe and GRB jets, where often radio electrons form a different population from the high energy electrons.
astrophysics
Anisotropic superexchange interaction is one of the most important interactions for realizing exotic quantum magnetism; it is traditionally regarded as originating from the magnetic ions, with no relation to the nonmagnetic ions. In our work, by studying a multi-orbital Hubbard model with spin-orbit coupling on both the magnetic cations and the nonmagnetic anions, we analytically demonstrate that the spin-orbit coupling on the nonmagnetic anions alone can induce an antisymmetric Dzyaloshinskii-Moriya interaction, symmetric anisotropic exchange, and single-ion anisotropy on the magnetic ions, and thus it actually contributes to anisotropic superexchange on an equal footing with that of the magnetic ions. Our results promise one more route to realizing versatile exotic phases in condensed matter systems, long-range orders in low-dimensional materials, and switchable single-molecule magnetic devices for recording and manipulating quantum information through nonmagnetic anions.
condensed matter
We address the problem of estimating direction-of-arrivals (DOAs) for multiple acoustic sources in a reverberant environment using a spherical microphone array. It is well-known that multi-source DOA estimation is challenging in the presence of room reverberation, environmental noise and overlapping sources. In this work, we introduce multiple schemes to improve the robustness of estimation consistency (EC) approach in reverberant and noisy conditions through redefined and modified parametric weights. Simulation results show that our proposed methods achieve superior performance compared to the existing EC approach, especially when the sources are spatially close in a reverberant environment.
electrical engineering and systems science
We prove general equidistribution statements (both conditional and unconditional) relating to the Fourier coefficients of arithmetically normalized holomorphic Hecke cusp forms $f_1,\ldots,f_k$ without complex multiplication, of equal weight, (possibly different) squarefree level and trivial nebentypus. As a first application, we show that for the Ramanujan $\tau$ function and any admissible $k$-tuple of distinct non-negative integers $a_1,\ldots,a_k$ the set $$ \{n \in \mathbb{N} : |\tau(n+a_1)| < \cdots < |\tau(n+a_k)|\} $$ has positive natural density. This result improves upon recent work of Bilu, Deshouillers, Gun and Luca [Compos. Math. (2018), no. 11, 2441-2461]. Secondly, we make progress towards understanding the signed version by showing that $$ \{n \in \mathbb{N} : \tau(n+a_1) < \tau(n+a_2) < \tau(n+a_3)\} $$ has positive relative upper density at least $1/6$ for any admissible triple of distinct non-negative integers $(a_1,a_2,a_3).$ More generally, for such chains of inequalities of length $k > 3$ we show that under the assumption of Elliott's conjecture on correlations of multiplicative functions, the relative natural density of this set is $1/k!.$ Previously results of such type were known for $k\le 2$ as consequences of works by Serre and by Matom\"{a}ki and Radziwill. Our results rely crucially on several key ingredients: i) a multivariate Erd\H{o}s-Kac type theorem for the function $n \mapsto \log|\tau(n)|$, conditioned on $n$ belonging to the set of non-vanishing of $\tau$, generalizing work of Luca, Radziwill and Shparlinski; ii) the recent breakthrough of Newton and Thorne on the functoriality of symmetric power $L$-functions for $\text{GL}(n)$ for all $n \geq 2$ and its application to quantitative forms of the Sato-Tate conjecture; and iii) the work of Tao and Ter\"{a}v\"{a}inen on the logarithmic Elliott conjecture.
mathematics
Machine learning has recently been used to identify the governing equations for dynamics in physical systems. The promising results from applications on systems such as fluid dynamics and chemical kinetics inspire further investigation of these methods on complex engineered systems. Dynamics of these systems play a crucial role in design and operations. Hence, it would be advantageous to learn about the mechanisms that may be driving the complex dynamics of systems. In this work, our research question addresses the applicability and usefulness of a novel machine learning approach for identifying the governing dynamical equations of engineered systems. We focused on the distillation column, a ubiquitous unit operation in chemical engineering that demonstrates complex dynamics, i.e., its dynamics are a combination of heuristics and fundamental physical laws. We tested the method of Sparse Identification of Non-Linear Dynamics (SINDy) because of its ability to produce white-box models with terms that can be used for physical interpretation of the dynamics. Time series data for the dynamics were generated from a simulation of the distillation column using ASPEN Dynamics. One promising result was the reduction of the number of equations for dynamic simulation from thousands in ASPEN to only 13 - one for each state variable. Prediction accuracy was high on test data from the system within the perturbation range; however, outside the perturbation range the equations did not perform well. In terms of physical law extraction, some terms were interpretable as related to Fick's law of diffusion (with concentration terms) and Henry's law (with ratio of concentration and pressure terms). While some terms were interpretable, we conclude that more research is needed on combining engineered systems with machine learning approaches to improve understanding of the governing laws for unknown dynamics.
electrical engineering and systems science
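The core of SINDy is sequentially thresholded least squares over a library of candidate terms. A minimal version with a quadratic polynomial library is sketched below; the distillation study would build its library from the column's state variables, and the threshold value here is purely illustrative.

```python
import numpy as np

def sindy(X, dXdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: regress each state
    derivative onto a library of candidate terms, repeatedly zeroing
    coefficients below `threshold` and refitting the survivors.

    X    : (n_samples, n_states) state snapshots
    dXdt : (n_samples, n_states) estimated time derivatives
    """
    n = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    Theta = np.column_stack(cols)                       # candidate library
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):                  # refit surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                             rcond=None)[0]
    return Xi   # each column: sparse coefficients of one state equation
```

The nonzero rows of Xi are the identified terms, which is what makes the resulting model "white-box" and open to physical interpretation.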
We develop a special phase field/diffusive interface method to model the nuclear architecture reorganization process. In particular, we use a Lagrange multiplier approach in the phase field model to preserve the specific physical and geometrical constraints for the biological events. We develop several efficient and robust linear and weakly nonlinear schemes for this new model. To validate the model and numerical methods, we present ample numerical simulations which in particular reproduce several processes of nuclear architecture reorganization from the experimental literature.
mathematics
Thorough modeling of the physics involved in liquid argon calorimetry is essential for accurately predicting the performance of DUNE and optimizing its design and analysis pipeline. At the fundamental level, it is essential to quantify the detector response to individual hadrons---protons, charged pions, and neutrons---at different injection energies. We report such a simulation, analyzed under different assumptions about event reconstruction, such as particle identification and neutron detection. The role of event containment is also quantified. The results of this simulation can help inform the ProtoDUNE test-beam data analysis, while also providing a framework for assessing the impact of various cross section uncertainties.
high energy physics phenomenology
We propose a method for detecting bipartite entanglement in a many-body mixed state based on estimating moments of the partially transposed density matrix. The estimates are obtained by performing local random measurements on the state, followed by post-processing using the classical shadows framework. Our method can be applied to any quantum system with single-qubit control. We provide a detailed analysis of the required number of experimental runs, and demonstrate the protocol using existing experimental data [Brydges et al, Science 364, 260 (2019)].
quantum physics
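The quantities being estimated are the moments of the partially transposed density matrix. For a state known exactly they can be computed directly, which is useful for checking randomized-measurement estimates; the classical-shadows estimator itself is not reproduced in this sketch.

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Partial transpose over subsystem A of a (dA*dB)-dim density matrix."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

def pt_moments(rho, dA, dB, n_max=3):
    """Moments p_n = Tr[(rho^{T_A})^n] of the partially transposed state."""
    rT = partial_transpose(rho, dA, dB)
    return [np.trace(np.linalg.matrix_power(rT, n)).real
            for n in range(1, n_max + 1)]

# example: two-qubit Bell state |phi+>
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
p1, p2, p3 = pt_moments(rho, 2, 2)
assert p3 < p2 ** 2   # p3-PPT condition: witnesses bipartite entanglement
```

The comparison p3 < p2^2 is the p3-PPT condition: any PPT state satisfies p3 >= p2^2, so violating it certifies entanglement.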
The uncertainty principle plays a fundamental role in quantum theory. This principle states that it is not possible to measure two incompatible observables simultaneously. The uncertainty principle is expressed logically in terms of the standard deviations of the measured observables. In quantum information it has been shown that the uncertainty principle can be expressed by means of Shannon's entropy. The entropic uncertainty relation can be improved by considering an additional particle as a quantum memory. In the presence of quantum memory, the entropic uncertainty relation is called the quantum-memory-assisted entropic uncertainty relation. In this work we consider the case in which the quantum memory moves inside a leaky cavity. We show that by increasing the velocity of the quantum memory, the entropic uncertainty lower bound decreases.
quantum physics
We propose a framework for solving high-dimensional Bayesian inference problems using \emph{structure-exploiting} low-dimensional transport maps or flows. These maps are confined to a low-dimensional subspace (hence, lazy), and the subspace is identified by minimizing an upper bound on the Kullback--Leibler divergence (hence, structured). Our framework provides a principled way of identifying and exploiting low-dimensional structure in an inference problem. It focuses the expressiveness of a transport map along the directions of most significant discrepancy from the posterior, and can be used to build deep compositions of lazy maps, where low-dimensional projections of the parameters are iteratively transformed to match the posterior. We prove weak convergence of the generated sequence of distributions to the posterior, and we demonstrate the benefits of the framework on challenging inference problems in machine learning and differential equations, using inverse autoregressive flows and polynomial maps as examples of the underlying density estimators.
statistics
We present a methodology for ensuring the robustness of our analysis pipeline in separating the global 21-cm hydrogen cosmology signal from large systematics based on singular value decomposition (SVD) of training sets. We show how traditional goodness-of-fit metrics such as the $\chi^2$ statistic that assess the fit to the full data may not be able to detect a suboptimal extraction of the 21-cm signal when it is fit alongside one or more additional components due to significant covariance between them. However, we find that comparing the number of SVD eigenmodes for each component chosen by the pipeline for a given fit to the distribution of eigenmodes chosen for synthetic data realizations created from training set curves can detect when one or more of the training sets is insufficient to optimally extract the signal. Furthermore, this test can distinguish which training set (e.g. foreground, 21-cm signal) needs to be modified in order to better describe the data and improve the quality of the 21-cm signal extraction. We also extend this goodness-of-fit testing to cases where a prior distribution derived from the training sets is applied and find that, in this case, the $\chi^2$ statistic as well as the recently introduced $\psi^2$ statistic are able to detect inadequacies in the training sets due to the increased restrictions imposed by the prior. Crucially, the tests described in this paper can be performed when analyzing any type of observations with our pipeline.
astrophysics
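A sketch of the mode-count comparison described above: choose the number of SVD eigenmodes for a curve by an information criterion, then build the reference distribution from synthetic realizations drawn from the training set. The pipeline's actual selection statistic differs; BIC is used here only as a stand-in, and all shapes and names are illustrative.

```python
import numpy as np

def chosen_mode_count(y, modes, max_modes=30):
    """Number of leading SVD eigenmodes minimizing a BIC-like score
    for a single curve y of length n_channels."""
    best_k, best_score = 1, np.inf
    for k in range(1, max_modes + 1):
        M = modes[:k].T                          # (n_channels, k) design
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        rss = np.sum((y - M @ coef) ** 2)
        score = len(y) * np.log(rss / len(y)) + k * np.log(len(y))
        if score < best_score:
            best_k, best_score = k, score
    return best_k

def mode_count_distribution(training_set, realizations, max_modes=30):
    """training_set: (n_curves, n_channels). Returns the counts chosen
    for each synthetic realization; the count chosen for the real data
    is then compared against this distribution."""
    _, _, Vt = np.linalg.svd(training_set, full_matrices=False)
    return [chosen_mode_count(y, Vt, max_modes) for y in realizations]
```

An observed count far in the tail of the synthetic distribution flags the corresponding training set (foreground or signal) as insufficient.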
We investigate the impact of a highly eccentric 10 $M_{\rm \oplus}$ (where $M_{\rm \oplus}$ is the Earth mass) planet embedded in a dusty protoplanetary disk on the dust dynamics and its observational implications. By carrying out high-resolution 2D gas and dust two-fluid hydrodynamical simulations, we find that the planet's orbit can be circularized at large radii. After the planet's orbit is circularized, partial gap opening and dust ring formation happen close to the planet's circularization radius, which can explain the observed gaps/rings at the outer region of disks. When the disk mass and viscosity become low, we find that an eccentric planet can even open gaps and produce dust rings close to the pericenter and apocenter radii before its circularization. This offers alternative scenarios for explaining the observed dust rings and gaps in protoplanetary disks. A lower disk viscosity is favored to produce brighter rings in observations. An eccentric planet can also potentially slow down the dust radial drift in the outer region of the disk when the disk viscosity is low ($\alpha \lesssim2\times10^{-4}$) and the circularization is faster than the dust radial drift.
astrophysics
In many contexts, missing data and disclosure control are ubiquitous and challenging issues. In particular at statistical agencies, the respondent-level data they collect from surveys and censuses can suffer from high rates of missingness. Furthermore, agencies are obliged to protect respondents' privacy when publishing the collected data for public use. The NPBayesImputeCat R package, introduced in this paper, provides routines to i) create multiple imputations for missing data, and ii) create synthetic data for statistical disclosure control, for multivariate categorical data, with or without structural zeros. We describe the Dirichlet process mixture of products of multinomial distributions model used in the package, and illustrate various uses of the package using data samples from the American Community Survey (ACS). We also compare results of the missing data imputation to the mice R package and those of the synthetic data generation to the synthpop R package.
statistics
Continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) can offer a high secure key rate at metropolitan distances and remove all detection side-channel loopholes as well. However, there has been no complete experimental demonstration of CV-MDI-QKD due to the challenge of phase-locking techniques over remote distances. Here, we present a new optical scheme that overcomes this difficulty and also removes the requirement of two identical independent lasers. We anticipate that our new scheme can be used to demonstrate an in-field CV-MDI-QKD experiment and to build a CV-MDI-QKD network with untrusted sources.
quantum physics
This paper studies the resilient control of networked systems in the presence of cyber attacks. In particular, we consider the state feedback stabilization problem for nonlinear systems when the state measurement is sent to the controller via a communication channel that only has a finite transmitting rate and is moreover subject to cyber attacks in the form of Denial-of-Service (DoS). We use a dynamic quantization method to update the quantization range of the encoder/decoder and characterize the number of bits for quantization needed to stabilize the system under a given level of DoS attacks in terms of duration and frequency. Our theoretical result shows that under DoS attacks, the required data bits to stabilize nonlinear systems by state feedback control are larger than those without DoS since the communication interruption induced by DoS makes the quantization uncertainty expand more between two successful transmissions. Even so, in the simulation, we show that the actual quantization bits can be much smaller than the theoretical value.
electrical engineering and systems science
In this paper, we address a sub-topic of the broad domain of audio enhancement, namely musical audio bandwidth extension. We formulate the bandwidth extension problem using deep neural networks, where a band-limited signal is provided as input to the network, with the goal of reconstructing a full-bandwidth output. Our main contribution centers on the impact of the choice of low pass filter when training and subsequently testing the network. For two different state of the art deep architectures, ResNet and U-Net, we demonstrate that when the training and testing filters are matched, improvements in signal-to-noise ratio (SNR) of up to 7dB can be obtained. However, when these filters differ, the improvement falls considerably and under some training conditions results in a lower SNR than the band-limited input. To circumvent this apparent overfitting to filter shape, we propose a data augmentation strategy which utilizes multiple low pass filters during training and leads to improved generalization to unseen filtering conditions at test time.
electrical engineering and systems science
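The proposed augmentation amounts to drawing a fresh low pass filter for each training example, so the network never sees a single fixed filter shape. A sketch with SciPy Butterworth filters follows; the cutoff range and order range are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def random_lowpass(x, sr, rng=np.random.default_rng(0)):
    """Band-limit a training example with a randomly drawn low pass
    filter (random cutoff and order), so the model cannot overfit
    one particular filter shape.

    x  : 1-D audio signal
    sr : sample rate in Hz
    """
    cutoff = rng.uniform(2000.0, 8000.0)            # Hz, assumed range
    order = int(rng.integers(2, 10))                # assumed order range
    b, a = butter(order, cutoff / (sr / 2), btype="low")
    return filtfilt(b, a, x)                        # zero-phase filtering
```

At test time the band-limited input can then come from an unseen filter without the SNR collapse the abstract describes for matched-filter training.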
In this paper, we work on the notion of k-synchronizability: a system is k-synchronizable if any of its executions, up to reordering causally independent actions, can be divided into a succession of k-bounded interaction phases. We show two results (both for mailbox and peer-to-peer automata): first, the reachability problem is decidable for k-synchronizable systems; second, the membership problem (whether a given system is k-synchronizable) is decidable as well. Our proofs fix several important issues in previous attempts to prove these two results for mailbox automata.
computer science
Imitation Learning is a promising area of active research. Over the last 30 years, Imitation Learning has advanced significantly and been used to solve difficult tasks ranging from Autonomous Driving to playing Atari games. In the course of this development, different methods for performing Imitation Learning have fallen into and out of favor. In this paper, I explore the development of these different methods and attempt to examine how the field has progressed. I focus my analysis on surveying 4 landmark papers that sequentially build upon each other to develop increasingly impressive Imitation Learning methods.
computer science
The framework of quantum invariants is an elegant generalization of adiabatic quantum control to control fields that do not need to change slowly. Due to the unavailability of invariants for systems with more than one spatial dimension, the benefits of this framework have not yet been exploited in multi-dimensional systems. We construct a multi-dimensional Gaussian quantum invariant that permits the design of time-dependent potentials that let the ground state of an initial potential evolve towards the ground state of a final potential. The scope of this framework is demonstrated with the task of shuttling an ion around a corner which is a paradigmatic control problem in achieving scalability of trapped ion quantum information technology.
quantum physics
Combining the electroweak phase transition and the Higgs searches at the LHC, we study Higgs inflation in the type-I and type-II two-Higgs-doublet models with non-minimal couplings to gravity. Considering the relevant theoretical and experimental constraints, we find that Higgs inflation imposes stringent constraints on the mass splittings between $A$, $H^\pm$ and $H$, and they tend to become nearly degenerate in mass as their masses increase. The direct searches for Higgs bosons at the LHC can exclude many points achieving Higgs inflation in the region $m_H~(m_A)<$ 450 GeV in the type-I model, and impose a lower bound on $\tan\beta$ for the type-II model. Higgs inflation disfavors the wrong-sign Yukawa coupling region of the type-II model. In the parameter space achieving Higgs inflation, the type-I and type-II models can produce a first order electroweak phase transition, but $v_c/T_c$ is much smaller than 1.0.
high energy physics phenomenology
The problem of allocating scarce items to individuals is an important practical question in market design. An increasingly popular set of mechanisms for this task uses the concept of market equilibrium: individuals report their preferences, have a budget of real or fake currency, and a set of prices for items and allocations is computed that sets demand equal to supply. An important real world issue with such mechanisms is that individual valuations are often only imperfectly known. In this paper, we show how concepts from classical market equilibrium can be extended to reflect such uncertainty. We show that in linear, divisible Fisher markets a robust market equilibrium (RME) always exists; this also holds in settings where buyers may retain unspent money. We provide theoretical analysis of the allocative properties of RME in terms of envy and regret. Though RME are hard to compute for general uncertainty sets, we consider some natural and tractable uncertainty sets which lead to well behaved formulations of the problem that can be solved via modern convex programming methods. Finally, we show that very mild uncertainty about valuations can cause RME allocations to outperform those which take estimates as having no underlying uncertainty.
computer science
Coherence arises from the superposition principle, where it plays a central role in quantum mechanics. In [Phys.Rev.Lett.114,210401(2015)], it has been shown that the freezing phenomenon of quantum correlations beyond entanglement is intimately related to the freezing of quantum coherence (QC). In this paper, we compare the behaviour of entanglement and quantum discord with quantum coherence in two different subsystems (optical and mechanical). We use respectively the entanglement of formation (EoF) and the Gaussian quantum discord (GQD) to quantify entanglement and quantum discord. Under thermal noise and optomechanical coupling effects, we show that EoF, GQD and QC behave in the same way. Remarkably, when entanglement vanishes, GQD and QC remain almost unaffected by thermal noise, keeping nonzero values even at high temperature, in concordance with [Phys.Rev.Lett.114,210401(2015)]. Also, we find that the coherences associated with the optical subsystem are more robust against thermal noise than those of the mechanical subsystem. Our results confirm that optomechanical cavities constitute a powerful resource of QC.
quantum physics
This paper presents necessary and sufficient conditions for the existence of a real root of maximal multiplicity in the spectrum of a linear time-invariant single-delay equation of retarded type. We also prove that this root is always strictly dominant, and hence determines the asymptotic behavior of the system. These results are based on improved a priori bounds on the imaginary part of roots on the complex right half-plane.
mathematics
In this paper, we present a reciprocity on finite abelian groups involving zero-sum sequences. Let $G$ and $H$ be finite abelian groups with $(|G|,|H|)=1$. For any positive integer $m$, let $\mathsf M(G,m)$ denote the set of all zero-sum sequences over $G$ of length $m$. We have the following reciprocity $$|\mathsf M(G,|H|)|=|\mathsf M(H,|G|)|.$$ Moreover, we provide a combinatorial interpretation of the above reciprocity using ideas from rational Catalan combinatorics. We also present and explain some other symmetric relationships on finite abelian groups with methods from invariant theory. Among others, we partially answer a question proposed by Panyushev in a generalized version.
mathematics
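The reciprocity is easy to check by brute force for small groups; in zero-sum theory a sequence over $G$ is unordered, so the enumeration below runs over multisets. The helper name is illustrative, and the asserted count of 5 for $G = \mathbb{Z}_3$, $H = \mathbb{Z}_4$ was verified by hand.

```python
from itertools import combinations_with_replacement, product

def zero_sum_count(moduli, length):
    """|M(G, length)| for G = Z_{m1} x ... x Z_{mk}: the number of
    zero-sum multisets of the given length over G."""
    elements = list(product(*[range(m) for m in moduli]))
    count = 0
    for seq in combinations_with_replacement(elements, length):
        # a multiset is zero-sum iff it sums to the identity in each factor
        if all(sum(x[i] for x in seq) % m == 0
               for i, m in enumerate(moduli)):
            count += 1
    return count

# G = Z_3, H = Z_4 have coprime orders, so the counts must agree:
assert zero_sum_count([3], 4) == zero_sum_count([4], 3) == 5
```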
The parton distribution functions (PDFs) which characterize the structure of the proton are currently one of the dominant sources of uncertainty in the predictions for most processes measured at the Large Hadron Collider (LHC). Here we present the first extraction of the proton PDFs that accounts for the missing higher order uncertainty (MHOU) in the fixed-order QCD calculations used in PDF determinations. We demonstrate that the MHOU can be included as a contribution to the covariance matrix used for the PDF fit, and then introduce prescriptions for the computation of this covariance matrix using scale variations. We validate our results at next-to-leading order (NLO) by comparison to the known next order (NNLO) corrections. We then construct variants of the NNPDF3.1 NLO PDF set that include the effect of the MHOU, and assess their impact on the central values and uncertainties of the resulting PDFs.
high energy physics phenomenology
We study the ionization dynamics of oriented HeH$^+$ in strong linearly-polarized laser fields by numerically solving the time-dependent Schr\"{o}dinger equation. The calculated photoelectron momentum distributions for parallel orientation show a striking asymmetric structure. With a developed model pertinent to polar molecules, we trace the electron motion in real time. We show that this asymmetric structure arises from the interplay of the Coulomb effect and the permanent dipole in strong laser fields. This structure can be used to probe the degree of orientation, which is important in ultrafast experiments for polar molecules. We also check our results for other polar molecules such as CO and BF.
physics
We present a new algorithm to compute minimal telescopers for rational functions in two discrete variables. As with the recent reduction-based approach, our algorithm has the nice feature that the computation of a telescoper is independent of its certificate. Moreover, our algorithm uses a sparse representation of the certificate, which allows it to be easily manipulated and analyzed without knowing the precise expanded form. This representation hides potential expression swell until the final (and optional) expansion, which can be accomplished in time polynomial in the size of the expanded certificate. A complexity analysis, along with a Maple implementation, suggests that our algorithm has better theoretical and practical performance than the reduction-based approach in the rational case.
computer science
This paper extends our previous work in [1],[2], on optimal scheduling of autonomous vehicle arrivals at intersections, from one to a grid of intersections. A scalable distributed Mixed Integer Linear Program (MILP) is devised that solves the scheduling problem for a grid of intersections. A computational control node is allocated to each intersection and regularly receives position and velocity information from subscribed vehicles. Each node assigns an intersection access time to every subscribed vehicle by solving a local MILP. Neighboring intersections will coordinate with each other in real-time by sharing their solutions for vehicles' access times with each other. Our proposed approach is applied to a grid of nine intersections and its positive impact on traffic flow and vehicles' fuel economy is demonstrated in comparison to conventional intersection control scenarios.
electrical engineering and systems science
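A toy version of the per-intersection MILP, written with PuLP: each vehicle receives an access time no earlier than its earliest possible arrival, and every pair is separated by a safety headway via big-M ordering binaries. The distributed coordination across neighboring intersections from the paper is not modeled, and all parameter values are illustrative.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

def schedule(arrival_earliest, headway=2.0, big_m=1e4):
    """Assign intersection access times t_i >= earliest arrival, with
    any two vehicles separated by `headway`; z_{ij} = 1 encodes that
    vehicle i crosses before vehicle j."""
    n = len(arrival_earliest)
    prob = LpProblem("intersection", LpMinimize)
    t = [LpVariable(f"t{i}", lowBound=arrival_earliest[i]) for i in range(n)]
    prob += lpSum(t)                                    # minimize total delay
    for i in range(n):
        for j in range(i + 1, n):
            z = LpVariable(f"z{i}_{j}", cat="Binary")   # 1 if i crosses first
            prob += t[j] >= t[i] + headway - big_m * (1 - z)
            prob += t[i] >= t[j] + headway - big_m * z
    prob.solve()
    return [v.value() for v in t]

# example: three vehicles that could all arrive within one headway
print(schedule([0.0, 0.5, 1.0]))
```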
Recently, the standard model predictions for the $B$-meson hadronic decays, $\bar{B}^0 \to D^{(\ast)+}K^-$ and $\bar{B}^0_s \to D^{(\ast)+}_s \pi^-$, have been updated based on the QCD factorization approach. This improvement sheds light on a novel puzzle in the $B$-meson hadronic decays: there are mild but universal tensions between data and the predicted branching ratios. Assuming the higher-order QCD corrections are not huge enough to solve the tension, we examine several new physics interpretations of this puzzle. We find that the tension can be partially explained by a left-handed $W^\prime$ model, which can be compatible with other flavor observables and collider bounds.
high energy physics phenomenology
We consider the common setting where one observes probability estimates for a large number of events, such as default risks for numerous bonds. Unfortunately, even with unbiased estimates, selecting events corresponding to the most extreme probabilities can result in systematically underestimating the true level of uncertainty. We develop an empirical Bayes approach "Excess Certainty Adjusted Probabilities" (ECAP), using a variant of Tweedie's formula, which updates probability estimates to correct for selection bias. ECAP is a flexible non-parametric method, which directly estimates the score function associated with the probability estimates, so it does not need to make any restrictive assumptions about the prior on the true probabilities. ECAP also works well in settings where the probability estimates are biased. We demonstrate through theoretical results, simulations, and an analysis of two real world data sets, that ECAP can provide significant improvements over the original probability estimates.
statistics
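ECAP builds on Tweedie's formula, which in its generic Gaussian form shifts each estimate by the local score of the marginal density. A sketch using a kernel density estimate is given below; the paper's variant works directly on the probability scale with its own nonparametric score estimator, so the transformation and the sigma here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tweedie_adjust(z, sigma):
    """Generic Tweedie correction
        E[theta | z] = z + sigma^2 * d/dz log f(z),
    with the marginal density f estimated by a Gaussian KDE and its
    score obtained by a central difference. Here z could be, e.g.,
    logit-transformed probability estimates."""
    z = np.asarray(z, dtype=float)
    kde = gaussian_kde(z)
    eps = 1e-4 * (z.max() - z.min())
    log_f = lambda t: np.log(kde(t) + 1e-300)
    score = (log_f(z + eps) - log_f(z - eps)) / (2 * eps)
    return z + sigma ** 2 * score
```

Because the score of the marginal is negative in the extreme tails, the correction pulls the most extreme estimates back toward the bulk, which is exactly the selection-bias effect the abstract describes.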