text (string, lengths 11 to 9.77k); label (string, lengths 2 to 104)
Multiparticle entanglement is of great significance for quantum metrology and quantum information processing. We here present an efficient scheme to generate stable multiparticle entanglement in a solid state setup, where an array of silicon-vacancy (SiV) centers is embedded in a quasi-one-dimensional acoustic diamond waveguide. In this scheme, the continuum of phonon modes induces a controllable dissipative coupling among the SiV centers. We show that, by an appropriate choice of the distance between the SiV centers, the dipole-dipole interactions can be switched off due to destructive interference, thus realizing a Dicke superradiance model. This gives rise to an entangled steady state of the SiV centers with high fidelity. The protocol provides a feasible setup for the generation of multiparticle entanglement in a solid state system.
quantum physics
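For context, the Dicke superradiance model mentioned in the abstract above is commonly written as a collective Lindblad master equation; the schematic form below is the standard textbook expression for $N$ identical two-level emitters with collective decay rate $\Gamma$, not necessarily the exact equation derived in the paper:

$$\frac{d\rho}{dt} = \Gamma\left(J_-\,\rho\,J_+ - \tfrac{1}{2}\left\{J_+ J_-,\rho\right\}\right), \qquad J_\pm = \sum_{i=1}^{N}\sigma_\pm^{(i)},$$

which is reached here because the coherent dipole-dipole terms are cancelled by destructive interference at the chosen emitter spacing.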
Assuming that the ultrahigh energy (UHE) neutrino events reported by IceCube in the PeV regime originated from the decay of superheavy dark matter, we analyse the IceCube UHE neutrino events and obtain the best-fit values of two parameters, namely the mass of the superheavy dark matter and its decay lifetime. The theoretical astrophysical flux is also included in the analysis. We find that while the neutrino events in the energy range $\sim$ 60 TeV-$\sim$ 120 TeV appear to have an astrophysical origin, the events in the energy range $\sim 1.2 \times 10^5$ GeV - $\sim 5 \times 10^7$ GeV can be well described by the superheavy dark matter decay hypothesis. We also find that although the hadronic decay channel of the superheavy dark matter can well explain the events in the energy range $\sim 1.2 \times 10^5$ GeV - $\sim 5 \times 10^6$ GeV, the regime above this range can be addressed only when the leptonic decay channel is considered.
high energy physics phenomenology
The author considers a hypothesis of neutron lifetime splitting in beta-decay and shows that the beta-decay of neutrons can be described by the triad of lifetimes $\tau_{Left}$, $\tau_{Mean}$, $\tau_{Right}$. The lifetime $\tau_{Left}$ is the lifetime of L-neutrons emitting electrons against the neutron spin direction (L-type neutron decay). The lifetime $\tau_{Right}$ is the lifetime of R-neutrons emitting electrons in the direction of the neutron spin (R-type neutron decay). The lifetime $\tau_{Mean}$ is the arithmetic average of $\tau_{Left}$ and $\tau_{Right}$, i.e. the mean neutron lifetime. Using the electron-spin asymmetry parameters of neutron decay and existing determinations of the mean neutron lifetime, numerical estimates give the values of the triad $\tau_{Left}$, $\tau_{Mean}$, $\tau_{Right}$ as 813 s, 900 s and 987 s, respectively. In addition to these estimates, the lifetimes of the triad are determined from experimental data by applying the decay scale tuning method proposed by the author. The experimental values of the triad coincide with the estimates to high accuracy. The weighted average neutron lifetime $\tau_{W}$ determined from the experimental neutron lifetimes $\tau_{Left}$ and $\tau_{Right}$ is $\tau_{W}=833.33 \pm 0.02$ s, in good agreement with the values of the neutron lifetime obtained by other methods.
physics
The IKKT model is proposed as a non-perturbative formulation of superstring theory. We propose a Dirac operator on the noncommutative torus, which is consistent with the IKKT model, based on noncommutative geometry. Next, we consider zero-mode equations of the Dirac operator with magnetic fluxes. We find that the zero-mode solutions have chirality and generation structures similar to those in the commutative case. Moreover, we compute Yukawa couplings of the chiral matter fields.
high energy physics theory
In autonomous systems, a motion planner generates reference trajectories which are tracked by a low-level controller. For safe operation, the motion planner should account for the inevitable controller tracking error when generating avoidance trajectories. In this article we present a method for generating provably safe tracking error bounds while reducing the over-conservatism of existing methods. We achieve this goal by restricting the possible behaviors of the motion planner. We provide an algebraic method based on sum-of-squares programming to define restrictions on the motion planner and find small bounds on the tracking error. We demonstrate our method on two case studies and show how it can be integrated into already developed motion planning techniques. Results suggest that our method can provide acceptable tracking error in cases where previous methods were not applicable.
computer science
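The abstract above relies on sum-of-squares (SOS) programming. As a minimal sketch of the underlying machinery -- certifying polynomial nonnegativity via a positive semidefinite Gram matrix -- the toy example below checks that $p(x) = x^4 + 2x^2 + 1$ is a sum of squares; it assumes cvxpy with an SDP solver and does not reproduce the paper's actual tracking-error programs:

```python
# Toy sum-of-squares certificate via semidefinite programming.
# We certify p(x) = x^4 + 2x^2 + 1 is SOS by finding a PSD Gram matrix Q
# with p(x) = z(x)^T Q z(x) for the monomial basis z = [1, x, x^2].
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)  # Gram matrix for z = [1, x, x^2]
constraints = [
    Q[0, 0] == 1,                  # coefficient of 1
    2 * Q[0, 1] == 0,              # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 2,    # coefficient of x^2
    2 * Q[1, 2] == 0,              # coefficient of x^3
    Q[2, 2] == 1,                  # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)  # 'optimal' => p is SOS, hence globally nonnegative
```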
The entanglement contour function quantifies the contribution from each degree of freedom in a region $\mathcal{A}$ to the entanglement entropy $S_{\mathcal{A}}$. Recently in \cite{Wen:2018whg} the author gave two proposals for the entanglement contour in two-dimensional theories. The first proposal is a fine structure analysis of the entanglement wedge, which applies to holographic theories. The second proposal is the claim that for general two-dimensional theories the partial entanglement entropy is given by a linear combination of entanglement entropies of relevant subsets inside $\mathcal{A}$. In this paper we further study the partial entanglement entropy proposal by showing that it satisfies all the rational requirements proposed previously. We also extend the fine structure analysis from vacuum AdS space to BTZ black holes. Furthermore, we give a simple prescription to generate the local modular flows for two-dimensional theories from the entanglement entropies alone, without referring to explicit Rindler transformations.
high energy physics theory
During tokamak disruptions the profile of the net parallel current is observed to flatten on a time scale that is so fast that it must be due to a fast magnetic reconnection. After a fast magnetic reconnection has broken magnetic surfaces, a single magnetic field line covers an entire volume and not just a magnetic surface. The current profile, given by $K\equiv\mu_0j_{||}/B$, relaxes to a constant within that volume by Alfv\'en waves propagating along the chaotic magnetic field lines. The time scale for this relaxation determines the commonly observed disruption phenomena of a current spike and a sudden drop in the plasma internal inductance. An efficient method for studying this relaxation is derived, which allows a better understanding of the information encoded in the current spike and the associated sudden drop in the plasma internal inductance. Implications for coronal heating are also discussed.
physics
The theory of elliptic equations involving singular nonlinearities is a well-studied topic, but the interaction of singular-type nonlinearities with nonlocal nonlinearities in elliptic problems has not been investigated so far. In this article, we study the very singular and doubly nonlocal singular problem $(P_\lambda)$ (see below). Firstly, we establish a very weak comparison principle and the optimal Sobolev regularity. Next, using the critical point theory of non-smooth analysis and the geometry of the energy functional, we establish the global multiplicity of positive weak solutions.
mathematics
Correlations between a system and its environment lead to errors in an open quantum system. Detecting those correlations would be valuable for avoiding and/or correcting those errors. Here we show that we can detect correlations by only measuring the system itself if we know the cause of the interaction between the two, for example in the case of a dipole-dipole interaction. We investigate the unitary $U$ which is associated with the exchange Hamiltonian and examine the ability to detect initial correlations between a system and its environment for various types of initial states. The states we select are motivated by realistic experimental conditions and we provide bounds for when we can state with certainty that there are initial system-environment correlations given experimental data.
quantum physics
We discuss the minimal theory for quark-lepton unification at the low scale. In this context, the quarks and leptons are unified in the same representations and neutrino masses are generated through the inverse seesaw mechanism. The properties of the leptoquarks predicted in this theory are discussed in detail and we investigate the predictions for the leptonic and semi-leptonic decays of mesons. We study the possibility to explain the current value of $\mathcal{R}_K$ reported by the LHCb collaboration and the value of the muon anomalous magnetic moment reported by the Muon $g-2$ experiment at Fermilab.
high energy physics phenomenology
A generic theory of skyrmion crystal (SkX) formation in chiral magnetic films is presented. We numerically demonstrate that a chiral film can have many metastable states with an arbitrary number of skyrmions up to a maximal value. A perpendicular magnetic field plays a crucial role in SkX formation. The energy of a film increases monotonically with skyrmion number at zero field, while the film with $Q_m$ skyrmions has the lowest energy in a magnetic field. $Q_m$ first increases with the magnetic field up to an optimal value and then decreases with the field. Outside of a field window, helical states of low skyrmion number density are the thermal equilibrium phases while an SkX is metastable. Within the field window, SkXs are the thermal equilibrium states below the Curie temperature. However, the time to reach the thermal equilibrium SkX states from a helical state would be too long at a low temperature. This has caused the widespread false belief that SkXs are metastable and helical states are the thermal equilibrium phase at low temperature and at the optimal field. Our findings explain well the critical role of a field in SkX formation and the fascinating thermodynamic behaviours of helical states and SkXs. Our theory opens a new avenue for SkX manipulation and skyrmion-based applications.
condensed matter
Scaling the number of qubits while maintaining high-fidelity quantum gates remains a key challenge for quantum computing. Superconducting quantum processors with more than 50 qubits are now available. For such systems, fixed-frequency transmons are attractive due to their long coherence times and noise immunity. However, scaling fixed-frequency architectures proves challenging due to precise relative frequency requirements. Here we employ laser annealing to selectively tune transmon qubits into desired frequency patterns. Statistics over hundreds of annealed qubits demonstrate an empirical tuning precision of 18.5 MHz, with no measurable impact on qubit coherence. We quantify gate error statistics on a tuned 65-qubit processor, with a median two-qubit gate fidelity of 98.7%. Baseline tuning statistics yield a frequency-equivalent resistance precision of 4.7 MHz, sufficient for high-yield scaling beyond the 1000-qubit level. Moving forward, we anticipate selective laser annealing to play a central role in scaling fixed-frequency architectures.
quantum physics
Koopman mode decomposition (KMD) is a technique of nonlinear time-series analysis capable of decomposing data on complex spatio-temporal dynamics into multiple modes oscillating with single frequencies, called the Koopman modes (KMs). We apply KMD to measurement data on the oscillatory dynamics of a temperature field inside a room, a complex phenomenon that is ubiquitous in our daily lives and has a clear technological motivation in energy-efficient air conditioning. To characterize not only the oscillatory field (scalar field) but also the associated heat flux (vector field), we introduce the notion of a temperature gradient defined via the spatial gradient of a KM. By estimating the temperature gradient directly from data, we show that KMD is capable of extracting a distinct structure of the heat flux embedded in the oscillatory temperature field, which is relevant for air conditioning.
electrical engineering and systems science
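Koopman modes as described in the abstract above are, in practice, often approximated from snapshot data via dynamic mode decomposition (DMD). The numpy sketch below is a generic exact-DMD routine under that assumption, not the authors' pipeline; the input array `T_field` is hypothetical:

```python
# Minimal exact-DMD sketch: a common way to approximate Koopman modes
# and frequencies from snapshot data X (space x time).
import numpy as np

def dmd_modes(X, r=10, dt=1.0):
    """Rank-r exact DMD of a snapshot matrix X (space x time)."""
    X1, X2 = X[:, :-1], X[:, 1:]                   # snapshot pairs x_k -> x_{k+1}
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]          # rank-r truncation
    A_tilde = (U.conj().T @ X2 @ Vh.conj().T) / s  # U* X2 V Sigma^{-1}
    eigvals, W = np.linalg.eig(A_tilde)            # discrete-time spectrum
    modes = (X2 @ Vh.conj().T / s) @ W             # exact DMD modes
    freqs = np.log(eigvals.astype(complex)) / dt   # continuous-time frequencies
    return modes, freqs

# Hypothetical usage on an (n_sensors x n_times) temperature record:
# modes, freqs = dmd_modes(T_field, r=8, dt=10.0)
```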
We review a series of unitarization techniques that have been used during the last decades, many of them in connection with the advent and development of current algebra and, later, of Chiral Perturbation Theory. Several methods are discussed, such as the generalized effective-range expansion, the K-matrix approach, the Inverse Amplitude Method, Pad\'e approximants and the N/D method, with more details given for the latter. We also consider how to implement them in order to include corrections from final-state interactions. In this connection, some other methods are also introduced, such as the expansion of the inverse of the form factor, the Omn\'es solution, the generalization to coupled channels and the Khuri-Treiman formalism, among others.
high energy physics phenomenology
We prove that, for $g\geq19$, the mapping class group of a nonorientable surface of genus $g$, $\textrm{Mod}(N_g)$, can be generated by two elements, one of which is of order $g$. We also prove that, for $g\geq26$, $\textrm{Mod}(N_g)$ can be generated by three involutions.
mathematics
Classification is a vital tool for modelling many complex numerical models. A model or system may be such that, for certain areas of input space, the output either does not exist or is not in a quantifiable form. Here, we present a new method for classification where the model outputs are given distinct classifying labels, which we model using a latent Gaussian process (GP). The latent variable is estimated using MCMC sampling with a unique likelihood and distinct prior specifications. Our classifier is then verified by calculating a misclassification rate across the input space. Comparisons are made with other existing classification methods, including logistic regression, which models the probability of being classified into one of two regions; its classification predictions are drawn from independent Bernoulli distributions, so correlation across the input space is lost and many misclassifications can result. By modelling the labels using a latent GP, this problem does not occur in our method. We apply our novel method to a range of examples, including a motivating example which models the hormones associated with the reproductive system in mammals, where the two labelled outputs are high and low rates of reproduction.
statistics
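As a schematic illustration of the latent-GP classifier described above, the sketch below draws a latent function from a GP prior, thresholds it to produce binary labels, and predicts labels from the GP conditional mean of the latent. The paper's actual inference marginalises the latent variable by MCMC with bespoke priors; the kernel, lengthscale, and sizes here are illustrative:

```python
# Schematic generative view of a latent-GP classifier: a latent
# function f ~ GP(0, k) is thresholded to produce binary class labels.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, lengthscale=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

X = rng.uniform(0, 1, size=(50, 1))                      # training inputs
K = rbf_kernel(X, X) + 1e-6 * np.eye(len(X))             # jittered Gram matrix
f = np.linalg.cholesky(K) @ rng.standard_normal(len(X))  # latent GP draw
y = (f > 0).astype(int)                                  # class labels

# Predict at new inputs via the GP conditional mean of the latent f
# (treating f as known for illustration; MCMC would marginalise it):
Xs = np.linspace(0, 1, 200)[:, None]
f_mean = rbf_kernel(Xs, X) @ np.linalg.solve(K, f)
y_pred = (f_mean > 0).astype(int)
```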
In this work, the transition form factors are calculated for the semileptonic decays $D_{(s)} \to A \ell^+ \nu$ where $A=a_{1}, b_{1}, K_{1}(1270,1400)$, i.e., $D^{+} \to a^{0}_{1} (b^{0}_{1}, K^{0}_{1}) \ell^+ \nu$, $D^{0} \to a^{-}_{1} (b^{-}_{1}) \ell^+ \nu$, and $D^{+}_{s}\to K^{0}_{1} \ell^+ \nu$, in the framework of the light-cone QCD sum rules (LCSR) approach up to twist-3 distribution amplitudes (DAs). Since the masses of these axial vector mesons are comparable to the charm quark mass, we keep all terms involving ${m_{A}}/{m_c}$ in the expansion of the two-parton DAs. Branching ratio values are estimated for the semileptonic $D_{(s)} \to A \ell \nu$ and nonleptonic $D\to K_1 (1270,1400) \pi$ decays. A comparison is also made between our results and the predictions of other methods and the existing experimental values.
high energy physics phenomenology
Communication networks have multiple users, each sending and receiving messages. A multiple access channel (MAC) models multiple senders transmitting to a single receiver, such as the uplink from many mobile phones to a single base station. The optimal performance of a MAC is quantified by a capacity region of simultaneously achievable communication rates. We study the two-sender classical MAC, the simplest and best-understood network, and find a surprising richness in both a classical and quantum context. First, we find that quantum entanglement shared between senders can substantially boost the capacity of a classical MAC. Second, we find that optimal performance of a MAC with bounded-size inputs may require unbounded amounts of entanglement. Third, determining whether a perfect communication rate is achievable using finite-dimensional entanglement is undecidable. Finally, we show that evaluating the capacity region of a two-sender classical MAC is in fact NP-hard.
quantum physics
Measurement error arises commonly in clinical research settings that rely on data from electronic health records or large observational cohorts. In particular, self-reported outcomes are typical in cohort studies for chronic diseases such as diabetes in order to avoid the burden of expensive diagnostic tests. Dietary intake, which is also commonly collected by self-report and subject to measurement error, is a major factor linked to diabetes and other chronic diseases. These errors can bias exposure-disease associations and ultimately mislead clinical decision-making. We have extended an existing semiparametric likelihood-based method for handling error-prone, discrete failure time outcomes to also address covariate error. We conduct an extensive numerical study to compare the proposed method to the naive approach that ignores measurement error in terms of bias and efficiency in the estimation of the regression parameter of interest. In all settings considered, the proposed method showed minimal bias and maintained coverage probability, thus outperforming the naive analysis, which showed extreme bias and low coverage. This method is applied to data from the Women's Health Initiative to assess the association between energy and protein intake and the risk of incident diabetes mellitus. Our results show that correcting for errors in both the self-reported outcome and dietary exposures leads to considerably different hazard ratio estimates than those from analyses that ignore measurement error, which demonstrates the importance of correcting for both outcome and covariate error. Computational details and R code for implementing the proposed method are presented in Section S1 of the Supplementary Materials.
statistics
Equilibrium properties and localised magnon excitations are investigated in topologically distinct skyrmionic textures. The observed shapes of the structures and their orientation on the lattice are explained based on their vorticities and the symmetry of the crystal. The transformation between different textures and their annihilation as a function of magnetic field are understood based on the energy differences between them. The angular momentum spin-wave eigenmodes characteristic of cylindrically symmetric structures are combined in the distorted spin configurations, leading to avoided crossings in the magnon spectrum. The susceptibility of the skyrmionic textures to homogeneous external fields is calculated, revealing that a high number of modes become detectable due to the hybridization between the angular momentum eigenmodes. These findings should contribute to the observation of spin waves in distorted skyrmionic structures via experiments and numerical simulations, widening the range of their possible applications in magnonic devices.
condensed matter
This paper addresses variational data assimilation from a learning point of view. Data assimilation aims to reconstruct the time evolution of some state given a series of observations, possibly noisy and irregularly sampled. Using automatic differentiation tools embedded in deep learning frameworks, we introduce end-to-end neural network architectures for data assimilation. Each comprises two key components: a variational model and a gradient-based solver, both implemented as neural networks. A key feature of the proposed end-to-end learning architecture is that we may train the NN models using both supervised and unsupervised strategies. Our numerical experiments on the Lorenz-63 and Lorenz-96 systems report significant gains with respect to a classic gradient-based minimization of the variational cost, both in terms of reconstruction performance and optimization complexity. Intriguingly, we also show that the variational models derived from the true Lorenz-63 and Lorenz-96 ODE representations may not lead to the best reconstruction performance. We believe these results may open new research avenues for the specification of assimilation models in geoscience.
physics
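The classic baseline the abstract above compares against -- gradient-based minimization of a variational cost -- can be sketched with automatic differentiation as below for Lorenz-63. This is the generic optimiser, not the learned NN solver proposed in the paper; the step sizes, cost weights, and forward-Euler integrator are illustrative assumptions:

```python
# Minimal autodiff variational-assimilation sketch on Lorenz-63:
# minimise an observation + dynamics cost over the state trajectory.
import torch

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = torch.stack([sigma * (x[1] - x[0]),
                      x[0] * (rho - x[2]) - x[1],
                      x[0] * x[1] - beta * x[2]])
    return x + dt * dx                          # forward-Euler model step

T = 200
true = [torch.tensor([1.0, 1.0, 1.0])]
for _ in range(T - 1):
    true.append(lorenz63_step(true[-1]))
true = torch.stack(true)
obs = true + 0.5 * torch.randn_like(true)       # noisy observations

x = obs.clone().requires_grad_(True)            # trajectory to optimise
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    dyn = x[1:] - torch.stack([lorenz63_step(xi) for xi in x[:-1]])
    cost = ((x - obs) ** 2).mean() + 10.0 * (dyn ** 2).mean()
    cost.backward()
    opt.step()
```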
We consider the most general fractional background fluxes in the color, flavor, and baryon number directions, compatible with the faithful action of the global symmetry of a given theory. We call the obstruction to gauging symmetries revealed by such backgrounds the baryon-color-flavor (BCF) 't Hooft anomaly. We apply the BCF anomaly to vector-like theories, with fermions in higher-dimensional representations of arbitrary N-ality, and derive non-trivial constraints on their IR dynamics. In particular, this class of theories enjoys an independent discrete chiral symmetry and one may ask about the fate of this symmetry in the background of BCF fluxes. We show that, under certain conditions, an anomaly between the chiral symmetry and the BCF background rules out massless composite fermions as the sole player in the IR: either the composites do not form or additional contributions to the matching of the BCF anomaly are required. We can also give a flavor-symmetric mass to the fermions, smaller than or of order the strong scale of the theory, and examine the $\theta$-angle periodicity of the theory in the BCF background. Interestingly, we find that the conditions that rule out the composites are the exact same conditions that lead to an anomaly of the $\theta$ periodicity: the massive theory will experience a phase transition as we vary $\theta$ from $0$ to $2\pi$.
high energy physics theory
Stanene has been proposed to be a quantum spin Hall insulator containing topological edge states and a time-reversal-invariant topological superconductor hosting helical Majorana edge modes. Recently, experimental evidence for topological edge states has been found in monolayer stanene films, and superconductivity has been observed in few-layer stanene films, excluding the single layer. An integrated system with both topological edge states and superconductivity is highly pursued as a possible platform to realize topological superconductivity. Few-layer stanene shows great potential to meet this requirement and is highly desired in experiment. Here we successfully grow few-layer stanene on a bismuth (111) substrate. Both topological edge states and superconducting gaps are observed by in-situ scanning tunneling microscopy/spectroscopy (STM/STS). Our results take a further step towards topological superconductivity in stanene films.
condensed matter
We investigate the three-term Leggett-Garg inequality (LGI) for a two-level quantum system undergoing parity-time (PT) symmetric dynamics governed by a non-Hermitian Hamiltonian, when a sequence of dichotomic projective measurements is carried out at different time intervals. In contrast to the case of coherent unitary dynamics, the violation of the LGI is shown to increase beyond the temporal Tsirelson bound, approaching the algebraic maximum in the limit of spontaneous PT symmetry breaking.
quantum physics
In this study, an investigation of the two double-lined binary stars KIC 5113146 and KIC 5111815 in NGC 6819 is presented based on both photometric and spectroscopic data. A simultaneous analysis of the light and radial velocity curves was made and the absolute parameters of the systems' components were determined for the first time. We find that both systems have F-type main-sequence components. The masses and radii were found to be $M_1=1.29\pm0.02 M_{\odot}$, $R_{1}=1.47\pm0.03 R_{\odot}$ and $M_{2}=1.19\pm0.02 M_{\odot}$, $R_{2}=1.13\pm0.02 R_{\odot}$ for the primary and secondary components of KIC 5113146, and $M_{1}=1.51\pm0.08 M_{\odot}$, $R_{1}=2.02\pm0.05 R_{\odot}$ and $M_{2}=1.19\pm0.07 M_{\odot}$, $R_{2}=1.32\pm0.04 R_{\odot}$ for the components of KIC 5111815, respectively. The evolutionary status of the components was evaluated based on MESA evolutionary tracks and isochrones. The ages of KIC 5111815 and KIC 5113146 were derived to be about $2.50\pm0.35$ Gyr and $1.95\pm0.40$ Gyr, respectively. Photometric distances were calculated to be $2850\pm 185$ pc for KIC 5113146 and $3120\pm 260$ pc for KIC 5111815. The results obtained in this study, together with astrometric data and previous studies in the literature, reveal that both KIC 5113146 and KIC 5111815 are most likely members of NGC 6819.
astrophysics
The precise and automated calibration of quantum gates is a key requirement for building a reliable quantum computer. Unlike errors from decoherence, systematic errors can in principle be completely removed by tuning experimental parameters. Here, we present an iterative calibration routine which can remove systematic gate errors on several qubits. A central ingredient is the construction of pulse sequences that extract independent indicators for every linearly independent error generator. We show that decoherence errors only moderately degrade the achievable infidelity due to systematic errors. Furthermore, we investigate the convergence properties of our approach by performing simulations for a specific qubit encoded in a pair of spins. Our results indicate that a gate set with 230 gate parameters can be calibrated in about ten iterations, after which incoherent errors limit the gate fidelity.
quantum physics
We strengthen the maximal ergodic theorem for actions of groups of polynomial growth to a form involving the jump quantity, which is the sharpest result among the family of variational and maximal ergodic theorems. As a consequence, we deduce in this setting the quantitative ergodic theorem, in particular the upcrossing inequalities with exponential decay. The ideas and techniques involve probability theory, non-doubling Calder\'on-Zygmund theory, almost-orthogonality arguments and some delicate geometric arguments involving the balls and the cubes on a group equipped with a not necessarily doubling measure.
mathematics
Joule heating is a non-equilibrium dissipative process that occurs in a normal metal when an electric current flows, in an amount proportional to the metal's resistance. When it is induced by eddy currents resulting from a change in magnetic flux, it is also proportional to the rate at which the magnetic flux changes. Here we show that in the phase transformation between normal and superconducting states of a metal in a magnetic field, the total amount of Joule heating is determined by the thermodynamic properties of the system and is independent of the resistivity of the normal metal. We also show that Joule heating only occurs in the normal region of the material. The conventional theory of superconductivity however predicts that Joule heating occurs also in the superconducting region within a London penetration depth of the phase boundary. This implies that there is a problem with the conventional theory of superconductivity.
condensed matter
The notorious Wigner's friend thought experiment (and modifications thereof) has in recent years received renewed interest especially due to new arguments that force us to question some of the fundamental assumptions of quantum theory. In this paper, we formulate a no-go theorem for the persistent reality of Wigner's friend's perception, which allows us to conclude that the perceptions that the friend has of her own measurement outcomes at different times cannot "share the same reality", if seemingly natural quantum mechanical assumptions are met. More formally, this means that, in a Wigner's friend scenario, there is no joint probability distribution for the friend's perceived measurement outcomes at two different times, that depends linearly on the initial state of the measured system and whose marginals reproduce the predictions of unitary quantum theory. This theorem entails that one must either (1) propose a nonlinear modification of the Born rule for two-time predictions, (2) sometimes prohibit the use of present information to predict the future --thereby reducing the predictive power of quantum theory-- or (3) deny that unitary quantum mechanics makes valid single-time predictions for all observers. We briefly discuss which of the theorem's assumptions are more likely to be dropped within various popular interpretations of quantum mechanics.
quantum physics
The physical meaning of the decomposition of the Rosenbluth formula into two terms containing only squares of Sachs form factors has been established. A new method has been proposed for their independent measurement in the $e \vec{p} \to e \vec{p}$ elastic process when the initial proton at rest is fully polarized along the direction of motion of the final proton.
high energy physics phenomenology
We identify a means to explicitly construct primary operators of free conformal field theories (CFTs) in spacetime dimensions $d=2,~3$, and $4$. Working in momentum space with spinors, we find that the $N$-distinguishable-particle Hilbert space $\mathcal{H}_N$ exhibits a $U(N)$ action in $d=4$ ($O(N)$ in $d=2,3$) which dually describes the decomposition of $\mathcal{H}_N$ into irreducible representations of the conformal group. This $U(N)$ is a natural $N$-particle generalization of the single-particle $U(1)$ little group. The spectrum of primary operators is identified with the harmonics of $N$-particle phase space which, specifically, is shown to be the Stiefel manifold $V_2(\mathbb{C}^N) = U(N)/U(N-2)$ (respectively, $V_2(\mathbb{R}^N)$, $V_1(\mathbb{R}^N)$ in $d=3,2$). Lorentz scalar primaries are harmonics on the Grassmannian $G_2(\mathbb{C}^N) \subset V_2(\mathbb{C}^N)$. We provide a recipe to construct these harmonic polynomials using standard $U(N)$ ($O(N)$) representation theory. We touch upon applications to effective field theory and numerical methods in quantum field theory.
high energy physics theory
This paper deals with non-observed dyads during the sampling of a network and the consequent issues in the inference of the Stochastic Block Model (SBM). We review sampling designs and recover Missing At Random (MAR) and Not Missing At Random (NMAR) conditions for the SBM. We introduce variants of the variational EM algorithm for inferring the SBM under various sampling designs (MAR and NMAR), all available as an R package. Model selection criteria based on the Integrated Classification Likelihood are derived for selecting both the number of blocks and the sampling design. We investigate the accuracy and the range of applicability of these algorithms with simulations. We explore two real-world networks from ethnology (a seed circulation network) and biology (a protein-protein interaction network), where the interpretation depends considerably on the sampling design considered.
statistics
We introduce Gaussian orthogonal latent factor processes for modeling and predicting large correlated data. To handle the computational challenge, we first decompose the likelihood function of the Gaussian random field with multi-dimensional input domain into a product of densities at the orthogonal components with lower-dimensional inputs. The continuous-time Kalman filter is implemented to efficiently compute the likelihood function without approximation. We also show that the posterior distributions of the factor processes are independent, as a consequence of the prior independence of the factor processes and the orthogonality of the factor loading matrix. For studies with a large sample size, we propose a flexible way to model the mean and derive the closed-form marginal posterior distribution. Both simulated and real data applications confirm the outstanding performance of this method.
statistics
This work reviews the origin, development, completion, and outcome of a trans-elastic ultracentrifuge project of Mexico's Nuclear Center from 1971 to 1986. The project had its origin in the search for an effect that supposedly would validate Birkhoff's gravity theory over Einstein's General Relativity. For this purpose an extraordinary ultracentrifuge was built, which earned the 1973 National Award for Instruments (Mexico). The ultracentrifuge was also used to investigate the feasibility of uranium enrichment by solid state centrifugation. Highly enriched uranium was obtained, but in small quantities.
physics
Due to a recent more precise evaluation of $V_{ud}$ and $V_{us}$, the first-row unitarity test of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, $|V_{ud}|^2 + |V_{us}|^2 + |V_{ub}|^2 = 0.99798 \pm 0.00038$, now deviates from unity by more than $4\sigma$. Furthermore, a mild excess in the overall Higgs signal strength appears at about $2\sigma$ above the standard model (SM) prediction, as well as the long-lasting discrepancy in the forward-backward asymmetry ${\cal A}_{\rm FB}^b$ in $Z\to b\bar b$ at LEP. Motivated by the above three anomalies, we investigate an extension of the SM with vector-like quarks (VLQs) associated with the down-quark sector, with the goal of alleviating the tension among these datasets. We perform global fits of the model under the constraints coming from the unitarity condition of the first row of the CKM matrix; the $Z$-pole observables ${\cal A}_{\rm FB}^b$, $R_b$ and $\Gamma_{\rm had}$; the electroweak precision observables $\Delta S$ and $\Delta T$; the $B$-meson observables $B_d^0$-$\overline{B}_d^0$ mixing, $B^+ \to \pi^+ \ell^+ \ell^-$ and $B^0 \to \mu^+ \mu^-$; and direct searches for VLQs at the Large Hadron Collider (LHC). Our results suggest that adding VLQs to the SM provides better agreement than the SM.
high energy physics phenomenology
Based on the application of the Sum-Product algorithm (SPA) over factor graphs, this paper presents a graphical representation of generalized frequency division multiplexing (GFDM) and filter bank multicarrier with offset QAM (FBMC-OQAM). FBMC-OQAM was chosen because it has the advantage of reducing the algorithm's complexity, which is directly related to the number of possible values assumed by the transmitted data symbols. The receiver algorithm's performance is evaluated through bit error ratio (BER) estimation considering two channel models, additive white Gaussian noise (AWGN) and flat-fading time-variant (Rayleigh). Likewise, a computational complexity analysis is presented. Numerical results show that the BER curves of the proposed scheme match well with the theoretical bit error probability curves.
electrical engineering and systems science
Precision flavor experiments can look for the QCD axion complementarily to the usual searches with axion helio- and haloscopes, allowing one to test PQ breaking scales as high as $10^{12} \, {\rm GeV}$. Such searches are sensitive to flavor-violating axion couplings, which are generic and potentially sizable whenever SM fermions carry flavor non-universal PQ charges. A particularly predictive scenario is obtained when PQ is identified with the simplest FN flavor symmetry, so that all flavor-violating axion couplings are related to Yukawa hierarchies, up to ${\cal O}(1)$ coefficients.
high energy physics phenomenology
Recently, learning-based algorithms for image inpainting have achieved remarkable progress in dealing with squared or irregular holes. However, they fail to generate plausible textures inside the damaged area because surrounding information is lacking there. A progressive inpainting approach, i.e., restoring part of the hole well and then updating the mask, would be advantageous for eliminating central blurriness. In this paper, we propose a full-resolution residual network (FRRN) to fill irregular holes, which is proved to be effective for progressive image inpainting. We show that a well-designed residual architecture facilitates feature integration and texture prediction. Additionally, to guarantee completion quality during progressive inpainting, we adopt the N Blocks, One Dilation strategy, which assigns several residual blocks to one dilation step. Correspondingly, a step loss function is applied to improve the performance of the intermediate restorations. The experimental results demonstrate that the proposed FRRN framework for image inpainting outperforms previous methods both quantitatively and qualitatively. Our codes are released at: \url{https://github.com/ZongyuGuo/Inpainting_FRRN}.
electrical engineering and systems science
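As a schematic of the full-resolution residual processing named in the abstract above, the sketch below shows a generic residual block operating at full resolution. It is illustrative only; the authors' actual architecture, including mask handling and the N Blocks, One Dilation grouping, is in their released code:

```python
# Generic full-resolution residual block: features keep their spatial
# size throughout, and the block output is input + learned residual.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # full-resolution residual

x = torch.rand(1, 32, 64, 64)
y = ResidualBlock()(x)  # same spatial size as the input
```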
The 3d-metal chromites ACr$_2$O$_4$, where A is a magnetic ion, show a paramagnetic to ferrimagnetic phase transition at T$_C$, while for a non-magnetic A-site ion, ACr$_2$O$_4$ shows a paramagnetic to antiferromagnetic phase transition at T$_N$. In this report, we present a detailed study of the magnetic properties and the magnetocaloric effect (MCE) of the 3d-metal chromites ACr$_{2}$O$_{4}$ (where A = Mn, Fe, Co, Ni, Cu, and Zn) near T$_C$ and T$_N$. We find that the magnitude of the MCE (-$\Delta$S$_M$) decreases on decreasing the magnetic moment of the A-site ion, with the exception of CuCr$_{2}$O$_{4}$. Additionally, to learn more about the order and nature of the phase transition, we have made a scaling analysis of (-$\Delta$S$_{M}$) for all the chromites across the phase transition temperatures T$_C$ and T$_N$.
condensed matter
Learning robust speaker embeddings is a crucial step in speaker diarization. Deep neural networks can accurately capture speaker discriminative characteristics, and popular deep embeddings such as x-vectors are nowadays a fundamental component of modern diarization systems. Recently, some improvements over the standard TDNN architecture used for x-vectors have been proposed. The ECAPA-TDNN model, for instance, has shown impressive performance in the speaker verification domain, thanks to a carefully designed neural model. In this work, we extend, for the first time, the use of the ECAPA-TDNN model to speaker diarization. Moreover, we improve its robustness with a powerful augmentation scheme that concatenates several contaminated versions of the same signal within the same training batch. The ECAPA-TDNN model turned out to provide robust speaker embeddings under both close-talking and distant-talking conditions. Our results on the popular AMI meeting corpus show that our system significantly outperforms recently proposed approaches.
electrical engineering and systems science
Squeezed states of light reduce the signal-normalized photon counting noise of measurements without increasing the light power and enable fundamental research on quantum entanglement in hybrid systems of light and matter. Furthermore, combining squeezed states with cryo-cooling has high potential. First, measurement sensitivities are usually limited by quantum noise and thermal noise. Second, squeezed states allow for reducing the heat load on cooled devices without losing measurement precision. Here, we demonstrate squeezed-light position sensing of a cryo-cooled micro-mechanical membrane. The sensing precision is improved by up to 4.8 dB below photon counting noise, limited by optical loss in two Faraday rotators, at a membrane temperature of about 20 K, limited by our cryo-cooler. We prove that realising a high interference contrast in a cryogenic Michelson interferometer is feasible. Our setup is the first conceptual demonstration towards the envisioned European gravitational-wave detector, the 'Einstein Telescope', which is planned to use squeezed states of light together with cryo-cooling of its mirror test masses.
quantum physics
Speech translation has recently become an increasingly popular topic of research, partly due to the development of benchmark datasets. Nevertheless, current datasets cover a limited number of languages. With the aim of fostering research in massive multilingual speech translation and speech translation for low-resource language pairs, we release CoVoST 2, a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. This represents the largest open dataset available to date in terms of total volume and language coverage. Data sanity checks provide evidence about the quality of the data, which is released under a CC0 license. We also provide extensive speech recognition, bilingual and multilingual machine translation and speech translation baselines with open-source implementation.
computer science
In this paper, we tentatively assign $Z_{c}(3900)$ to be an axial-vector molecular state, and calculate its magnetic moment using the QCD sum rule method in an external weak electromagnetic field. Starting with the two-point correlation function in the external electromagnetic field and expanding it in powers of the electromagnetic interaction Hamiltonian, we extract the mass and pole residue of the $Z_{c}(3900)$ state from the leading term in the expansion, and the magnetic moment from the linear response to the external electromagnetic field. The numerical values are $m_{Z_{c}}=3.97\pm0.12\ \mbox{GeV}$, in agreement with the experimental value $m^{exp}_{Z_{c}}=3899.0\pm3.6\pm4.9\ \mbox{MeV}$, $\lambda_{Z_{c}}=(2.1\pm0.4)\times10^{-2}\ \mbox{GeV}^{5}$ and $\mu_{Z_{c}}=0.19^{+0.04}_{-0.01}\mu_{N}$.
high energy physics phenomenology
We study the data deletion problem for convex models. By leveraging techniques from convex optimization and reservoir sampling, we give the first data deletion algorithms that are able to handle an arbitrarily long sequence of adversarial updates while promising both per-deletion run-time and steady-state error that do not grow with the length of the update sequence. We also introduce several new conceptual distinctions: for example, we can ask that after a deletion, the entire state maintained by the optimization algorithm is statistically indistinguishable from the state that would have resulted had we retrained, or we can ask for the weaker condition that only the observable output is statistically indistinguishable from the observable output that would have resulted from retraining. We are able to give more efficient deletion algorithms under this weaker deletion criterion.
statistics
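One ingredient named in the abstract above is reservoir sampling. The sketch below is the classic Algorithm R for maintaining a uniform random sample of k items from a stream of unknown length; the deletion guarantees themselves require the paper's full construction, which is not reproduced here:

```python
# Classic reservoir sampling (Algorithm R): after processing i+1 items,
# each item is in the reservoir with probability k/(i+1).
import random

def reservoir_sample(stream, k, seed=None):
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)         # fill the reservoir first
        else:
            j = rng.randint(0, i)          # uniform over 0..i inclusive
            if j < k:
                reservoir[j] = item        # keep item with prob k/(i+1)
    return reservoir

print(reservoir_sample(range(10_000), k=5, seed=0))
```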
We characterize the optical coherence and energy-level properties of the 795 nm $^3$H$_6$ to $^3$H$_4$ transition of Tm$^{3+}$ in a Ti$^{4+}$:LiNbO$_{3}$ waveguide at temperatures as low as 0.65 K. Coherence properties are measured with varied temperature, magnetic field, optical excitation power and wavelength, and measurement time-scale. We also investigate nuclear spin-induced hyperfine structure and population dynamics with varying magnetic field and laser excitation power. Except for accountable differences due to different Ti$^{4+}$ and Tm$^{3+}$ doping concentrations, we find that the properties of Tm$^{3+}$:Ti$^{4+}$:LiNbO$_{3}$ produced by indiffusion doping are consistent with those of a bulk-doped Tm$^{3+}$:LiNbO$_{3}$ crystal measured under similar conditions. Our results, which complement previous work in a narrower parameter space, support using rare-earth ions for integrated optical and quantum signal processing.
physics
The advent of automated vehicles operating at SAE levels 4 and 5 poses high fault tolerance demands for all functions contributing to the driving task. At the actuator level, fault-tolerant vehicle motion control, which exploits functional redundancies among the actuators, is one means to achieve the required degree of fault tolerance. We therefore give a comprehensive overview of the state of the art in actuator fault-tolerant vehicle motion control, with a focus on drive, brake, and steering degradations, as well as tire blowouts. This review shows that actuator fault-tolerant vehicle motion control is a widely studied field; yet, the presented approaches differ with respect to many aspects. To provide a starting point for future research, we survey the employed actuator topologies, the tolerated degradations, the presented control approaches, as well as the experiments conducted for validation. Overall, and despite the large number of different approaches, the covered literature reveals the potential of increasing fault tolerance by fault-tolerant vehicle motion control. Thus, besides developing novel approaches or demonstrating real-time applicability, future research should aim at investigating limitations and enabling comparison of fault-tolerant motion control approaches in order to allow for a thorough safety argumentation.
electrical engineering and systems science
We investigate the cosmological consequences of a two-brane system embedded in a higher-dimensional background spacetime with a compactified extra dimension, described by an action that includes the intrinsic curvature of each brane. We find that the dynamics of each brane is related to the other by means of cosmological restrictions that involve the scale factors, the equation of state parameters and the Hubble parameters of the two branes. We analyze the evolution of the scale factor and the equation of state parameter of the hidden brane when a cosmological constant, radiation or non-relativistic matter is present in the visible brane. The topological restrictions give rise to cosmological scenarios without an initial singularity, as well as recollapsing and oscillating universes.
high energy physics theory
In this paper, we propose a semiparametric, tree-based joint latent class modeling approach (JLCT) to model the joint behavior of longitudinal and time-to-event data. Existing joint latent class modeling approaches are parametric and can suffer from high computational cost. The most common parametric approach, the joint latent class model (JLCM), further restricts analysis to using time-invariant covariates in modeling survival risks and latent class memberships. Instead, the proposed JLCT is fast to fit and can use time-varying covariates in all of its modeling components. We demonstrate the prognostic value of using time-varying covariates, and therefore the advantage of JLCT over JLCM, on simulated data. We further apply JLCT to the PAQUID data set and confirm its superior prediction performance and orders-of-magnitude speedup over JLCM.
statistics
Color and structure are the two pillars that construct an image. Usually, the structure is well expressed through a rich spectrum of colors, allowing objects in an image to be recognized by neural networks. However, under extreme limitations of color space, the structure tends to vanish, and thus a neural network might fail to understand the image. Interested in exploring this interplay between color and structure, we study the scientific problem of identifying and preserving the most informative image structures while constraining the color space to just a few bits, such that the resulting image can be recognized with accuracy as high as possible. To this end, we propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner. Given a color space size, ColorCNN quantizes colors in the original image by generating a color index map and an RGB color palette. Then, this color-quantized image is fed to a pre-trained task network to evaluate its performance. In our experiment, with only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset, outperforming traditional color quantization methods by a large margin. For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime. The code is available at: https://github.com/hou-yz/color_distillation.
computer science
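The quantization step described in the abstract above -- a color index map combined with an RGB palette -- can be sketched in a differentiable form as below. This soft assignment with a learnable palette illustrates the general idea only; the actual ColorCNN predicts the index map with a CNN (see the linked repository), and the temperature `tau` is an assumed knob:

```python
# Minimal differentiable color quantization: softly assign each pixel
# to a learnable RGB palette so a classification loss can flow back
# into the palette. Illustrative only; not the ColorCNN architecture.
import torch

def soft_quantize(img, palette, tau=0.03):
    """img: (B,3,H,W) in [0,1]; palette: (K,3) learnable colors."""
    B, _, H, W = img.shape
    pix = img.permute(0, 2, 3, 1).reshape(-1, 3)                 # (BHW, 3)
    d2 = ((pix[:, None, :] - palette[None, :, :]) ** 2).sum(-1)  # (BHW, K)
    w = torch.softmax(-d2 / tau, dim=-1)                         # soft index map
    out = w @ palette                                            # (BHW, 3)
    return out.reshape(B, H, W, 3).permute(0, 3, 1, 2)

palette = torch.nn.Parameter(torch.rand(2, 3))   # 1-bit palette (two colors)
img = torch.rand(4, 3, 32, 32)
q = soft_quantize(img, palette)  # feed q to a frozen task network
```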
Only 20\% of old field stars have detectable debris discs, leaving open the question of what disc, if any, is present around the remaining 80\%. Young moving groups allow us to probe this population, since discs are expected to have been brighter early on. This paper considers the population of F~stars in the 23~Myr-old BPMG, where we find that 9/12 targets possess discs. We also analyse archival ALMA data to derive radii for 4 of the discs, presenting the first image of the 63au radius disc of HD~164249. Comparing the BPMG results to disc samples from $\sim45$~Myr and $\sim150$~Myr-old moving~groups, and to discs found around field stars, we find the disc incidence rate in young moving~groups is comparable to that of the BPMG and significantly higher than that of field~stars. The BPMG discs tend to be smaller than those around field~stars. However, this difference is not statistically significant due to the small number of targets. Yet, by analysing the fractional luminosity vs disc radius parameter space we find that the fractional luminosities in the populations considered drop by two orders of magnitude within the first 100~Myr. This is much faster than expected from collisional evolution, implying a decay equivalent to $1/\text{age}^2$. We attribute this depletion to embedded planets, which would need to be around 170~$M_\text{earth}$ to cause a depletion on the appropriate timescale. However, we cannot rule out that different birth environments of nearby young clusters result in brighter debris discs than those of the progenitors of field~stars, which likely formed in a denser environment.
astrophysics
We consider several mathematical issues regarding models that simulate forces exerted by cells. Since the size of cells is much smaller than the size of the domain of computation, one often considers point forces, modelled by Dirac delta distributions on boundary segments of cells. In the current paper, we treat forces that are directed normal to the cell boundary and toward the cell centre. Since it can be shown that the governing momentum balance equation admits no smooth solution, at least not in $H^1$, we analyse the convergence and quality of the approximation. Furthermore, the resulting finite element problems necessitate scrutinizing alternative model formulations, such as the use of smoothed Dirac distributions, the so-called smoothed particle approach, and the so-called 'hole' approach, where cellular forces are modelled through the use of (natural) boundary conditions. In this paper, we investigate and attempt to quantify the conditions for consistency between the various approaches. This has resulted in error analyses in the $H^1$-norm of the numerical solution based on Galerkin principles that entail Lagrangian basis functions. The paper also addresses well-posedness in terms of existence and uniqueness. The current analysis has been performed for the linear steady-state (hence neglecting inertia and damping) momentum equations under the assumption of Hooke's law.
mathematics
This paper proposes a robust dual-quaternion based H-infinity task-space kinematic controller for robot manipulators. To address the manipulator's liability to modeling errors, uncertainties, exogenous disturbances, and kinematic singularities, and their influence upon the kinematics of the end-effector pose, we adapt H-infinity techniques---suitable only for additive noises---to unit dual quaternions. The noise-to-error attenuation within the H-infinity framework has the additional advantage of casting aside requirements concerning noise distributions, which are significantly hard to characterize within the group of rigid-body transformations. Using dual quaternion algebra, we provide a connection between performance effects over the end-effector trajectory and different sources of uncertainties and disturbances, while satisfying attenuation requirements with minimum instantaneous control effort. The result is an easy-to-implement closed-form H-infinity control design criterion. The effectiveness and performance of the proposed strategies are evaluated within different realistic simulated scenarios.
computer science
We address the problem of model-free distributed stabilization of heterogeneous multi-agent systems using reinforcement learning (RL). Two algorithms are developed. The first algorithm solves a centralized linear quadratic regulator (LQR) problem without knowing any initial stabilizing gain in advance. The second algorithm builds upon the results of the first algorithm, and extends it to distributed stabilization of multi-agent systems with predefined interaction graphs. Rigorous proofs are provided to show that the proposed algorithms achieve guaranteed convergence if specific conditions hold. A simulation example is presented to demonstrate the theoretical results.
electrical engineering and systems science
Organizing a chemical space so that elements with similar properties take neighboring places in a sequence can help to predict new materials. In this paper, we propose a universal method of generating such a one-dimensional sequence of elements, valid at arbitrary pressure, which could be used to create a well-structured chemical space of materials and facilitate the prediction of new materials. This work clarifies the physical meaning of Mendeleev numbers (MNs), which were earlier tabulated by Pettifor. We compare our proposed sequence of elements with alternative sequences formed by different MNs using data for hardness, magnetization, enthalpy of formation, and atomization energy. For an unbiased evaluation of the MNs, we compare the clustering rates obtained with each system of MNs.
condensed matter
Dense suspensions of particles in a liquid exhibit rich, non-Newtonian behaviors such as shear thickening and shear jamming. Shear thickening is known to be enhanced by increasing the particles' frictional interactions and also by making their shape more anisotropic. For shear jamming, however, only the role of interparticle friction has been investigated, while the effect of changing particle shape has so far not been studied systematically. To address this, we synthesize smooth silica particles and design the particle surface chemistry to generate strong frictional interactions, such that dense, aqueous suspensions of spheres exhibit pronounced shear jamming. We then vary the particle aspect ratio from $\Gamma$=1 (spheres) to $\Gamma$=11 (slender rods), and perform rheological measurements to determine the effect of particle anisotropy on the onset of shear jamming and its precursor, discontinuous shear thickening. Keeping the frictional interactions fixed, we find that increasing the aspect ratio significantly reduces $\phi_m$, the minimum particle packing fraction at which shear jamming can be observed, to values as low as $\phi_m=33\%$ for $\Gamma$=11. The ability to independently control particle interactions due to friction and shape anisotropy yields fundamental insights about the thickening and jamming capabilities of suspensions and provides a framework to rationally design shear jamming characteristics.
condensed matter
The purpose of this report is to look at measures of the importance of components in systems in terms of reliability. Since the first work on this subject by Birnbaum (1968), many interesting studies have appeared and important indicators have been constructed that allow the components of complex systems to be ranked. They are helpful in analyzing the reliability of designed systems and in establishing principles of operation and maintenance. The importance measures presented here are collected and discussed with regard to the motivation behind their creation. They concern an approach in which both components and systems are binary, and the possibility of generalization to multistate systems is only mentioned. Among those discussed is one new proposal using the methods of game theory, combining sensitivity to the structure of the system with the operational effects on system performance. The presented importance measures use knowledge of the system structure, component reliability and wear, and whether the components can be repaired and maintained.
electrical engineering and systems science
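The starting point of the report above, Birnbaum's (1968) importance measure, is $I_B(i) = h(p \mid p_i{=}1) - h(p \mid p_i{=}0)$, the sensitivity of the system reliability $h$ to component $i$. The sketch below evaluates it for a small series-parallel structure chosen purely for illustration:

```python
# Birnbaum importance for a binary coherent system:
#   I_B(i) = h(p | p_i = 1) - h(p | p_i = 0).
# Example structure: components 1 and 2 in series, in parallel with 3.

def h(p):
    """Reliability of (1 AND 2) OR 3."""
    s = p[0] * p[1]                   # series subsystem
    return 1 - (1 - s) * (1 - p[2])   # parallel with component 3

def birnbaum(h_fn, p, i):
    hi, lo = list(p), list(p)
    hi[i], lo[i] = 1.0, 0.0
    return h_fn(hi) - h_fn(lo)

p = [0.9, 0.8, 0.7]                   # component reliabilities
for i in range(3):
    print(f"I_B({i + 1}) = {birnbaum(h, p, i):.3f}")
```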
Automated machine learning makes it easier for data scientists to develop pipelines by searching over possible choices for hyperparameters, algorithms, and even pipeline topologies. Unfortunately, the syntaxes of automated machine learning tools are inconsistent with manual machine learning, with each other, and with error checks. Furthermore, few tools support advanced features such as topology search or higher-order operators. This paper introduces Lale, a library of high-level Python interfaces that simplifies and unifies automated machine learning in a consistent way.
computer science
Maintaining local interactions in the quantum simulation of gauge field theories relegates most states in the Hilbert space to being unphysical -- theoretically benign, but experimentally difficult to avoid. Reformulations of the gauge fields can modify the ratio of physical to gauge-variant states, often by classically preprocessing the Hilbert space and modifying the representation of the field on qubit degrees of freedom. This paper considers the implications of representing SU(3) Yang-Mills gauge theory on a lattice of irreducible representations in both a global basis of projected global quantum numbers and a local basis in which controlled-plaquette operators support efficient time evolution. Classically integrating over the internal gauge space at each vertex (e.g., color isospin and color hypercharge) significantly reduces both the qubit requirements and the dimensionality of the unphysical Hilbert space. Initiating tuning procedures that may inform future calculations at scale, the time evolution of one- and two-plaquette systems is implemented on one of IBM's superconducting quantum devices, and early benchmark quantities are identified. The potential advantages of qudit environments, with either constrained 2D hexagonal or 1D nearest-neighbor internal state connectivity, are discussed for future large-scale calculations.
quantum physics
Passivation mechanisms were investigated on (100)-oriented Fe-18Cr-13Ni surfaces, with direct transfer between surface preparation and analysis by X-ray photoelectron spectroscopy, scanning tunneling microscopy, and electrochemical characterization. Starting from oxide-free surfaces, pre-oxidation at saturation under ultra-low pressure (ULP) oxygen markedly promotes the Cr(III) enrichment of the oxide film and hinders/delays subsequent iron oxidation in a water-containing environment. Exposure to sulfuric acid at open circuit potential causes preferential dissolution of oxidized iron species. Anodic passivation forces oxide film re-growth, Cr(III) dehydroxylation and further enrichment. ULP pre-oxidation promotes Cr(III) hydroxide formation at open circuit potential, compactness of the nanogranular oxide film and corrosion protection.
condensed matter
The spectroscopic parameters and partial decay widths of the light mesons $ a_0(980) $ and $K_{0}^{\ast}(800)$ are calculated by treating them as scalar diquark-antidiquark states. The masses and couplings of the mesons are found in the framework of the QCD two-point sum rule approach. The widths of the decay channels $a_0(980) \to \eta \pi$ and $a_0(980) \to K \bar{K}$, and $ K_{0}^{\ast}(800) \to K^{+} \pi^{-}$ and $K_{0}^{\ast}(800) \to K^{0} \pi^{0} $ are evaluated using QCD sum rules on the light-cone and technical tools of the soft meson approximation. Our results for the masses of the mesons, $m_{a_0}=991^{+29}_{-27} \ \mathrm{MeV}$ and $m_{K^{ \ast}}=767^{+38}_{-29} \ \mathrm{MeV}$, as well as their total widths, $\Gamma _{\mathrm{a_0}}=62.01\pm 14.37\ \mathrm{MeV}$ and $\Gamma _{\mathrm{ K_0^{\ast}}}=401.1\pm 87.1\ \mathrm{MeV}$, are compared with the latest experimental data.
high energy physics phenomenology
Since the result of the 2016 referendum, Brexit has been an unpredictable democratic adventure, the finale of which remains unclear. This year, in the final days of March, parliamentarians seized control of the order paper from the Government and held their own indicative votes in an attempt to break the deadlock. In this paper we analyse the results of this unusual cardinal ballot. We express the various motions in terms of `how much Brexit' they deliver, and employ Monte Carlo in an attempt to determine this from the data. We find solutions which reproduce our intuitive understanding of the debate. Finally, we construct hypothetical ordinal ballots for the various Brexit scenarios, using three different processes. The results suggest that the Government would be more successful taking a softer position, and we quantify this. Additionally, there is some discussion of how tactical voting might manifest itself in the event of such an exercise.
physics
The interlayer coupling mediated by fermions in ferromagnets brings about parallel and antiparallel magnetization orientations of two magnetic layers, resulting in the giant magnetoresistance, which forms the foundation of spintronics and has accelerated the development of information technology. However, interlayer coupling mediated by another kind of quasi-particle, the boson, has so far been lacking. Here we demonstrate such a static interlayer coupling at room temperature in an antiferromagnetic junction Fe2O3/Cr2O3/Fe2O3, where the two antiferromagnetic Fe2O3 layers are functional materials and the antiferromagnetic Cr2O3 layer serves as a spacer. The N\'eel vectors in the top and bottom Fe2O3 are strongly orthogonally coupled, bridged by a typical bosonic excitation (magnon) in the Cr2O3 spacer. Such an orthogonal coupling goes beyond the category of traditional collinear interlayer coupling via fermions in the ground state, reflecting the fluctuating nature of the magnons, as supported by our magnon quantum well model. Besides its fundamental significance for quasi-particle-mediated interactions, the strong coupling in an antiferromagnetic magnon junction makes it a realistic candidate for practical antiferromagnetic spintronics and magnonics with ultrahigh-density integration.
condensed matter
Recent reports of current-induced switching of ferrimagnetic oxides coupled to a heavy metal layer have opened realistic prospects for implementing magnetic insulators into electrically addressable spintronic devices. However, key aspects such as the configuration and dynamics of magnetic domain walls driven by electrical currents in insulating oxides remain unexplored. Here, we investigate the internal structure of the domain walls in Tm3Fe5O12 (TmIG) and TmIG/Pt bilayers and demonstrate their efficient manipulation by spin-orbit torques with velocities of up to 400 m s$^{-1}$ and a minimal current threshold for domain wall flow of 5 x 10$^{6}$ A cm$^{-2}$. Domain wall racetracks embedded in TmIG are defined by the deposition of Pt current lines, which allow us to control the domain propagation and magnetization switching in selected regions of an extended magnetic layer. Scanning nitrogen-vacancy magnetometry reveals that the domain walls of thin TmIG films are N\'eel walls with left-handed chirality, with the domain wall magnetization rotating towards an intermediate N\'eel-Bloch configuration upon deposition of Pt. These results indicate the presence of a sizable interfacial Dzyaloshinskii-Moriya interaction in TmIG, which leads to novel possibilities to control the formation of chiral spin textures in magnetic insulators. Ultimately, domain wall racetracks provide an efficient scheme to pattern the magnetic landscape of TmIG in a fast and reversible way.
physics
We describe an algorithm for computing, for all primes $p \leq X$, the mod-$p$ reduction of the trace of Frobenius at $p$ of a fixed hypergeometric motive in time quasilinear in $X$. This combines the Beukers--Cohen--Mellit trace formula with average polynomial time techniques of Harvey et al.
mathematics
Effective resource management plays a pivotal role in wireless networks, which, unfortunately, results in challenging mixed-integer nonlinear programming (MINLP) problems in most cases. Machine learning-based methods have recently emerged as a disruptive way to obtain near-optimal performance for MINLPs with affordable computational complexity. There have been some attempts at applying such methods to resource management in wireless networks, but these attempts require huge numbers of training samples and lack the capability to handle constrained problems. Furthermore, they suffer from severe performance deterioration when the network parameters change, which commonly happens and is referred to as the task mismatch problem. In this paper, to reduce the sample complexity and address the feasibility issue, we propose a framework of Learning to Optimize for Resource Management (LORM). Instead of the end-to-end learning approach adopted in previous studies, LORM learns the optimal pruning policy in the branch-and-bound algorithm for MINLPs via a sample-efficient method, namely, imitation learning. To further address the task mismatch problem, we develop a transfer learning method via self-imitation in LORM, named LORM-TL, which can quickly adapt a pre-trained machine learning model to the new task with only a few additional unlabeled training samples. Numerical simulations demonstrate that LORM outperforms specialized state-of-the-art algorithms and achieves near-optimal performance, while achieving significant speedup compared with the branch-and-bound algorithm. Moreover, LORM-TL, by relying on a few unlabeled samples, achieves comparable performance with the model trained from scratch with sufficient labeled samples.
electrical engineering and systems science
While the utilisation of different methods of outlier correction has been shown to counteract the inferential error produced by the presence of contaminating data not belonging to the studied population, the effects produced by their utilisation when samples do not contain contaminating outliers are less clear. Here a simulation approach shows that the most popular methods of outlier correction (2 Sigma, 3 Sigma, MAD, IQR, Grubbs and winsorizing) worsen the inferential evaluation of the studied population in this condition, in particular producing an inflation of Type I error and increasing the error committed in estimating the population mean and STD. We show that those methods that have the highest efficacy in counteracting the inflation of Type I and Type II errors in the presence of contaminating outliers also produce the strongest increase of false positive results in their absence, suggesting that the systematic utilisation of methods for outlier correction risks producing more harmful than beneficial effects on statistical inference. We finally propose that the safest way to deal with the presence of outliers for statistical comparisons is the utilisation of non-parametric tests.
statistics
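For concreteness, here is a minimal sketch of three of the correction rules discussed above (3 Sigma, IQR and MAD) applied to a clean Gaussian sample with no contaminating outliers; the thresholds follow the conventional textbook definitions, and any point flagged here is by construction a false positive.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)  # clean sample: no contaminating outliers

# 3 Sigma rule: flag points more than 3 standard deviations from the mean.
sigma_mask = np.abs(x - x.mean()) > 3 * x.std(ddof=1)

# IQR (Tukey) rule: flag points beyond 1.5*IQR outside the quartiles.
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
iqr_mask = (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)

# MAD rule: flag points whose robust z-score exceeds 3
# (1.4826 rescales the MAD to be consistent with the Gaussian SD).
mad = np.median(np.abs(x - np.median(x)))
mad_mask = np.abs(x - np.median(x)) / (1.4826 * mad) > 3

print(sigma_mask.sum(), iqr_mask.sum(), mad_mask.sum())  # false positives
```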
Gaussian distributions are plentiful in applications dealing in uncertainty quantification and diffusivity. They furthermore stand as important special cases for frameworks providing geometries for probability measures, as the resulting geometry on Gaussians is often expressible in closed-form under the frameworks. In this work, we study the Gaussian geometry under the entropy-regularized 2-Wasserstein distance, by providing closed-form solutions for the distance and interpolations between elements. Furthermore, we provide a fixed-point characterization of a population barycenter when restricted to the manifold of Gaussians, which allows computations through the fixed-point iteration algorithm. As a consequence, the results yield closed-form expressions for the 2-Sinkhorn divergence. As the geometries change by varying the regularization magnitude, we study the limiting cases of vanishing and infinite magnitudes, reconfirming well-known results on the limits of the Sinkhorn divergence. Finally, we illustrate the resulting geometries with a numerical study.
statistics
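For reference, in the unregularized limit the 2-Wasserstein distance between Gaussians $\mathcal{N}(m_1,\Sigma_1)$ and $\mathcal{N}(m_2,\Sigma_2)$ has the well-known closed form, which the entropy-regularized expressions derived in the paper recover as the regularization magnitude vanishes:

$$ W_2^2\big(\mathcal{N}(m_1,\Sigma_1),\mathcal{N}(m_2,\Sigma_2)\big) = \|m_1-m_2\|^2 + \mathrm{tr}\Big(\Sigma_1+\Sigma_2-2\big(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}\big)^{1/2}\Big). $$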
We use Very Long Baseline Interferometry to measure the proper motions of three black hole X-ray binaries (BHXBs). Using these results together with data from the literature and Gaia-DR2 to collate the best available constraints on proper motion, parallax, distance and systemic radial velocity of 16 BHXBs, we determined their three dimensional Galactocentric orbits. We extended this analysis to estimate the probability distribution for the potential kick velocity (PKV) a BHXB system could have received on formation. Constraining the kicks imparted to BHXBs provides insight into the birth mechanism of black holes (BHs). Kicks also have a significant effect on BH-BH merger rates, merger sites, and binary evolution, and can be responsible for spin-orbit misalignment in BH binary systems. $75\%$ of our systems have potential kicks $>70\,\rm{km~s^{-1}}$. This suggests that strong kicks and hence spin-orbit misalignment might be common among BHXBs, in agreement with the observed quasi-periodic X-ray variability in their power density spectra. We used a Bayesian hierarchical methodology to analyse the PKV distribution of the BHXB population, and suggest that a unimodal Gaussian model with a mean of $107\pm16\,\rm{km~s^{-1}}$ is a statistically favourable fit. Such relatively high PKVs would also reduce the number of BHs likely to be retained in globular clusters. We found no significant correlation between the BH mass and PKV, suggesting a lack of correlation between BH mass and the BH birth mechanism. Our Python code allows the estimation of the PKV for any system with sufficient observational constraints.
astrophysics
With the rise of data-centric process management paradigms, interdependent processes, such as artifacts or object lifecycles, form a business process through their interactions. Coordination processes may be used to coordinate these interactions, guiding the overall business process towards a meaningful goal. A coordination process model specifies coordination constraints between the interdependent processes in terms of semantic relationships. At run-time, these coordination constraints must be enforced by a coordination process instance. As the coordination of multiple interdependent processes is a complex endeavor, several challenges need to be fulfilled to achieve optimal process coordination. For example, processes must be allowed to run asynchronously and concurrently, taking their complex relations into account. This paper contributes the operational semantics of coordination processes, which enforces the coordination constraints at run-time. Coordination processes form complex structures to adequately represent processes and their relations, specifically supporting many-to-many relationships. Based on these complex structures, markings and process rules allow for the flexible enactment of the interdependent processes while fulfilling all challenges. Coordination processes represent a sophisticated solution to the complex problem of coordinating interdependent, concurrently running processes.
computer science
One of the advantages of spectral computed tomography (CT) is that it can recover accurate material components using material decomposition methods. Image-based material decomposition is a common method to obtain specific material components, and it can be divided into two steps: image reconstruction and post material decomposition. To obtain accurate material maps, image reconstruction methods mainly focus on improving image quality by incorporating regularization priors. Very recently, regularization priors have also been introduced into the post material decomposition procedure in iterative image-based methods. Since regularization priors can be incorporated into both image reconstruction and post image-domain material decomposition, the performance obtained by combining regularization in these two steps remains an open problem. To address this, the material accuracy from these two steps is first analyzed and compared. Then, to further improve the accuracy of the decomposed materials, a two-step regularization based method is developed by incorporating priors into both image reconstruction and post material decomposition. Both numerical simulations and preclinical mouse experiments are performed to demonstrate the advantages of the two-step regularization based method in improving material accuracy.
electrical engineering and systems science
We calculate exclusive production of a longitudinally polarized heavy vector meson at next-to-leading order in the dipole picture. The large quark mass allows us to separately include both the first QCD correction proportional to the coupling constant $\alpha_s$, and the first relativistic correction suppressed by the quark velocity $v^2$. Both of these corrections are found to be numerically important in $\mathrm{J}/\psi$ production. The results obtained are directly suitable for phenomenological calculations. We also demonstrate how vector meson production provides complementary information to structure function analyses when one extracts the initial condition for the energy evolution of the proton small-$x$ structure.
high energy physics phenomenology
The existence of light sterile neutrinos is a long-standing question for particle physics. Several experimental ``anomalies'' could be explained by introducing light sterile neutrinos at the ~eV mass scale. Many experiments are actively hunting for such light sterile neutrinos through neutrino oscillation. For long baseline experiments, the matter effect needs to be treated carefully for a precise calculation of neutrino oscillation probabilities. However, this is usually time-consuming or analytically complex. In this manuscript we adopt the Jacobi-like method to diagonalize the Hermitian Hamiltonian matrix and derive analytically simplified neutrino oscillation probabilities for 3 (active) + 1 (sterile)-neutrino mixing at a constant matter density. These approximations can reach quite high numerical accuracy while keeping their analytical simplicity and fast computing speed, which would be useful for current and future long baseline neutrino oscillation experiments.
high energy physics phenomenology
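As a point of reference for the matter-effect treatment discussed above, the standard two-flavor result in constant-density matter (a special case of the 3+1 framework considered in the paper) reads

$$ P_{\nu_\alpha\to\nu_\beta} = \sin^2 2\theta_m\,\sin^2\!\frac{\Delta m_m^2 L}{4E}, \qquad \sin^2 2\theta_m = \frac{\sin^2 2\theta}{\big(\cos 2\theta - a/\Delta m^2\big)^2 + \sin^2 2\theta}, $$

with $a = 2\sqrt{2}\,G_F N_e E$ and $\Delta m_m^2 = \Delta m^2\sqrt{(\cos 2\theta - a/\Delta m^2)^2 + \sin^2 2\theta}$; the Jacobi-like diagonalization generalizes this kind of closed form to the full 3+1 mixing.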
Patch-based attacks introduce a perceptible but localized change to the input that induces misclassification. A limitation of current patch-based black-box attacks is that they perform poorly for targeted attacks, and even for the less challenging non-targeted scenarios, they require a large number of queries. Our proposed PatchAttack is query efficient and can break models for both targeted and non-targeted attacks. PatchAttack induces misclassifications by superimposing small textured patches on the input image. We parametrize the appearance of these patches by a dictionary of class-specific textures. This texture dictionary is learned by clustering Gram matrices of feature activations from a VGG backbone. PatchAttack optimizes the position and texture parameters of each patch using reinforcement learning. Our experiments show that PatchAttack achieves > 99% success rate on ImageNet for a wide range of architectures, while only manipulating 3% of the image for non-targeted attacks and 10% on average for targeted attacks. Furthermore, we show that PatchAttack circumvents state-of-the-art adversarial defense methods successfully.
computer science
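The texture dictionary described above is built by clustering Gram matrices of VGG feature activations; the sketch below shows that Gram-matrix computation in PyTorch. The layer cut-off and the normalization are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchvision.models as models

# Pretrained VGG backbone truncated at an intermediate convolutional block.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:21].eval()

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Channel-channel correlations of feature activations (texture descriptor)."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # shape (b, c, c)

with torch.no_grad():
    img = torch.rand(1, 3, 224, 224)  # placeholder input image
    g = gram_matrix(vgg(img))         # descriptor to be clustered per class
print(g.shape)
```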
We develop a framework for joint constraints on the CO luminosity function based on power spectra (PS) and voxel intensity distributions (VID), and apply this to simulations of COMAP, a CO intensity mapping experiment. This Bayesian framework is based on a Markov chain Monte Carlo (MCMC) sampler coupled to a Gaussian likelihood with a joint PS + VID covariance matrix computed from a large number of fiducial simulations, and re-calibrated with a small number of simulations per MCMC step. The simulations are based on dark matter halos from fast peak patch simulations combined with the $L_\text{CO}(M_\text{halo})$ model of Li et al. (2016). We find that the relative power to constrain the CO luminosity function depends on the luminosity range of interest. In particular, the VID is more sensitive at both small and large luminosities, while the PS is more sensitive at intermediate luminosities. The joint analysis is superior to using either observable separately. When averaging over CO luminosities ranging between $L_\text{CO} = 10^4-10^7L_\odot$, and over 10 cosmological realizations of COMAP Phase 2, the uncertainties (in dex) are larger by 58 % and 30 % for the PS and VID, respectively, when compared to the joint analysis (PS + VID). This method is generally applicable to any other random field with a complicated likelihood, as long as a fast simulation procedure is available.
astrophysics
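For reference, the Gaussian likelihood with the joint covariance described above takes the standard form; schematically, with $d$ the concatenated PS and VID data vector, $\mu(\theta)$ the model prediction, and $C$ the joint covariance estimated from simulations,

$$ -2\ln\mathcal{L}(\theta) = \big(d-\mu(\theta)\big)^{T} C^{-1} \big(d-\mu(\theta)\big) + \ln\det C + \mathrm{const}. $$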
This paper provides a general framework to study the effect of sampling properties of training data on the generalization error of the learned machine learning (ML) models. Specifically, we propose a new spectral analysis of the generalization error, expressed in terms of the power spectra of the sampling pattern and the function involved. The framework is built in Euclidean space using Fourier analysis and establishes a connection between certain high-dimensional geometric objects and the optimal spectral form of different state-of-the-art sampling patterns. Subsequently, we estimate the expected error bounds and convergence rates of different state-of-the-art sampling patterns, as the number of samples and dimensions increase. We make several observations about the generalization error which are valid irrespective of the approximation scheme (or learning architecture) and training (or optimization) algorithms. Our results also shed light on ways to formulate design principles for constructing optimal sampling methods for particular problems.
computer science
One of the major challenges for long range, high speed Free-Space Optical (FSO) communication is turbulence induced beam wander. Beam wander causes fluctuations in the received intensity as well as crosstalk in mode division multiplexed systems. Existing models for beam wander make use of probability distributions and long term averages and are not able to accurately model time-dependent intensity fluctuations such as deep fading, where the received intensity is too low to maintain reliable communication for an extended period of time. In this work we present an elegant new memory model which models the behaviour of beam wander induced intensity fluctuations with the unique capability to accurately simulate deep fading. This is invaluable for the development of optimised error correction coding and digital signal processing in order to improve the throughput and reliability of FSO systems.
electrical engineering and systems science
Number theoretic public-key solutions, currently used in many applications worldwide, will be subject to various quantum attacks, making them less attractive for longer-term use. Certain group theoretic constructs are now showing promise in providing quantum-resistant cryptographic primitives, and may provide suitable alternatives for those looking to address known quantum attacks. In this paper, we introduce a new protocol called a Meta Key Agreement and Authentication Protocol (MKAAP) that has some characteristics of a public-key solution and some of a shared-key solution. Specifically, it has the deployment benefits of a public-key system, allowing two entities that have never met before to authenticate without requiring real-time access to a third party, but does require secure provisioning of key material from a trusted key distribution system (similar to a symmetric system) prior to deployment. We then describe a specific MKAAP instance, the Ironwood MKAAP, discuss its security, and show how it resists certain quantum attacks such as Shor's algorithm or Grover's quantum search algorithm. We also show Ironwood implemented on several ``internet of things'' (IoT) devices, measure its performance, and show how it performs significantly better than ECC using fewer device resources.
computer science
We prove a non-asymptotic distribution-independent lower bound for the expected mean squared generalization error caused by label noise in ridgeless linear regression. Our lower bound generalizes a similar known result to the overparameterized (interpolating) regime. In contrast to most previous works, our analysis applies to a broad class of input distributions with almost surely full-rank feature matrices, which allows us to cover various types of deterministic or random feature maps. Our lower bound is asymptotically sharp and implies that in the presence of label noise, ridgeless linear regression does not perform well around the interpolation threshold for any of these feature maps. We analyze the imposed assumptions in detail and provide a theory for analytic (random) feature maps. Using this theory, we can show that our assumptions are satisfied for input distributions with a (Lebesgue) density and feature maps given by random deep neural networks with analytic activation functions like sigmoid, tanh, softplus or GELU. As further examples, we show that feature maps from random Fourier features and polynomial kernels also satisfy our assumptions. We complement our theory with further experimental and analytic results.
statistics
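For context, the ridgeless estimator studied above is the minimum-norm interpolator obtained as the ridge penalty vanishes; with feature matrix $X \in \mathbb{R}^{n\times p}$ and labels $y$,

$$ \hat{w} = \lim_{\lambda\to 0^{+}} \big(X^{T}X + \lambda I\big)^{-1} X^{T} y = X^{+} y, $$

which in the overparameterized regime ($p>n$ with $XX^{T}$ invertible, as guaranteed by the almost surely full-rank feature matrices assumed above) equals $X^{T}\big(XX^{T}\big)^{-1} y$ and interpolates the training data exactly.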
We propose an all-optical experiment to quantify non-Markovianity in an open quantum system through quantum coherence of a single quantum bit. We use an amplitude damping channel implemented by an optical setup with an intense laser beam simulating a single-photon polarization. The optimization over initial states required to quantify non-Markovianity is analytically evaluated. The experimental results are in a very good agreement with the theoretical predictions.
quantum physics
We demonstrate a new practical approach for generating multicolour spiral-shaped beams. It makes use of a standard silica optical fibre, combined with a tilted input laser beam. The resulting breaking of the fibre axial symmetry leads to the propagation of a helical beam. The associated output far-field has a spiral shape, independently of the input laser power value. With a high-power near-infrared femtosecond laser, in contrast, a visible supercontinuum spiral emission is generated. With appropriate control of the input laser coupling conditions, the colours of the spiral spatially self-organize into a rainbow distribution. Our method is independent of the laser source wavelength and polarization. Therefore, standard optical fibres may be used for generating spiral beams in many applications, ranging from communications to optical tweezers and quantum optics.
physics
A coupling of a scalar, charged under an unbroken global U(1) symmetry, to the Standard Model via the Higgs portal is one of the simplest gateways to a dark sector. Yet, for masses $m_{S}\geq m_{H}/2$ there are few probes of such an interaction. In this note we evaluate the sensitivity to the Higgs portal coupling of di-Higgs boson production at the LHC as well as at a future high energy hadron collider, FCC-hh, taking into account the full momentum dependence of the process. This significantly impacts the sensitivity compared to estimates of changes in the Higgs-coupling based on the effective potential. We also compare our findings to precision single Higgs boson probes such as the cross section for vector boson associated Higgs production at a future lepton collider, e.g. FCC-ee, as well as searches for missing energy based signatures.
high energy physics phenomenology
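For definiteness, the portal interaction in question is conventionally written as the renormalizable coupling between the new scalar and the Higgs doublet (normalization conventions for $\lambda_{HS}$ vary between references):

$$ \mathcal{L} \supset -\lambda_{HS}\,\big(S^{*}S\big)\big(H^{\dagger}H\big). $$

For $m_S \geq m_H/2$ the invisible decay $h \to SS$ is kinematically closed, which is why di-Higgs production and precision single-Higgs measurements become the leading probes.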
For arbitrary Ising-like models of any dimension and Hamiltonians with a finite support with all possible multispin interactions and boundary conditions with a shift, the exact value of the free energy in the thermodynamic limit is obtained at some parametrically specified set of multispin interaction coefficients. In this case, half of the multispin interaction coefficients and the coordinates of the special eigenvector corresponding to the largest eigenvalue of the elementary transfer matrix are parameters, and the second half of the multispin coefficients is calculated using simple explicit formulas. For models with Hamiltonians invariant under the reversal of signs of all spins, the formulas are simplified. As examples of independent interest, solutions are written for the cases when the support of the Hamiltonian is a simplex, a cube, the support of the ANNNI model in spaces of 2, 3 and arbitrary dimensions.
condensed matter
The photon Green function (GF) is the vital and most decisive factor in the field of quantum light-matter interaction. It is divergent when its two spatial arguments are equal in an arbitrarily shaped lossy structure and should be regularized. We introduce a finite element method for calculating the regularized GF. It is expressed by the radiation electric field averaged over the finite size of the photon emitter. For an emitter located in a homogeneous lossy material, excellent agreement with the analytical results is found for both the real cavity model and the virtual cavity model. For an emitter located in a metal nano-sphere, the regularized scattered GF, which is the difference between the regularized GF and the analytical regularized one in homogeneous space, agrees well with the analytical scattered GF.
physics
We obtain constraints on the Yukawa-type corrections to Newton's gravitational law and on the coupling constant of axionlike particles to nucleons following from the experiment on measuring the Casimir force between an Au-coated microsphere and a silicon carbide plate. For this purpose, both the Yukawa-type force and the force due to two-axion exchange between nucleons are calculated in the experimental configuration. In the interaction range of Yukawa force exceeding 1 nm and for axion masses above 17.8 eV, the obtained constraints are much stronger than those found previously from measuring the lateral Casimir force between sinusoidally corrugated surfaces. These results are compared with the results of other laboratory experiments on constraining non-Newtonian gravity and axionlike particles in the relevant interaction ranges.
high energy physics phenomenology
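The Yukawa-type correction constrained above is conventionally parametrized by a strength $\alpha$ and an interaction range $\lambda$ added to the Newtonian potential:

$$ V(r) = -\frac{G\,m_1 m_2}{r}\Big(1 + \alpha\, e^{-r/\lambda}\Big). $$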
In this paper, we show the spooky effect at a distance that arises in optimal estimation of multiple targets with the optimal sub-pattern assignment (OSPA) metric. This effect refers to the fact that if we have several independent potential targets at distant locations, a change in the probability of existence of one of them can completely change the optimal estimation of the rest of the potential targets. As opposed to OSPA, the generalised OSPA (GOSPA) metric ($\alpha=2$) penalises localisation errors for properly detected targets, false targets and missed targets. As a consequence, optimal GOSPA estimation aims to lower the number of false and missed targets, as well as the localisation error for properly detected targets, and avoids the spooky effect.
electrical engineering and systems science
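For reference, in its $\alpha=2$ form the GOSPA metric mentioned above decomposes into exactly the three error sources it is designed to penalize; schematically, for target set $X$, estimate set $Y$, cut-off $c$, order $p$ and assignment set $\gamma$,

$$ d_p^{(c,2)}(X,Y) = \min_{\gamma}\left(\sum_{(i,j)\in\gamma} d(x_i,y_j)^p + \frac{c^p}{2}\big(|X|+|Y|-2|\gamma|\big)\right)^{1/p}, $$

where the first term is the localisation error for properly detected targets and the second charges $c^p/2$ for each missed and each false target.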
We present a deep residual network-based generative model for single image super-resolution (SISR) of underwater imagery for use by autonomous underwater robots. We also provide an adversarial training pipeline for learning SISR from paired data. In order to supervise the training, we formulate an objective function that evaluates the \textit{perceptual quality} of an image based on its global content, color, and local style information. Additionally, we present USR-248, a large-scale dataset of three sets of underwater images of 'high' (640x480) and 'low' (80x60, 160x120, and 320x240) spatial resolution. USR-248 contains paired instances for supervised training of 2x, 4x, or 8x SISR models. Furthermore, we validate the effectiveness of our proposed model through qualitative and quantitative experiments and compare the results with several state-of-the-art models' performances. We also analyze its practical feasibility for applications such as scene understanding and attention modeling in noisy visual conditions.
electrical engineering and systems science
Four-dimensional (4D) printing of shape memory polymer (SMP) imparts time responsive properties to 3D structures. Here, we explore 4D printing of a SMP in the submicron length scale, extending its applications to nanophononics. We report a new SMP photoresist based on Vero Clear achieving print features at a resolution of ~300 nm half pitch using two-photon polymerization lithography (TPL). Prints consisting of grids with size-tunable multi-colours enabled the study of shape memory effects to achieve large visual shifts through nanoscale structure deformation. As the nanostructures are flattened, the colours and printed information become invisible. Remarkably, the shape memory effect recovers the original surface morphology of the nanostructures along with its structural colour within seconds of heating above its glass transition temperature. The high-resolution printing and excellent reversibility in both microtopography and optical properties promises a platform for temperature-sensitive labels, information hiding for anti-counterfeiting, and tunable photonic devices.
physics
Brain-computer interfaces (BCIs) allow users to control computer applications by modulating their brain activity. Since BCIs rely solely on brain activity, they have a lot of potential as an alternative access method for engaging children with severe disabilities and/or medical complexities in Therapeutic Recreation and leisure. In particular, one commercially available BCI platform is the Emotiv EPOC headset, which is a portable and affordable electroencephalography (EEG) device. Combined with the EmotivBCI software, the Emotiv system can generate a model to discern between different mental tasks based on the user's EEG signals in real-time. While the Emotiv system shows promise for use by the pediatric population in the setting of a BCI clinic, it lacks integrated support that allows users to directly control computer applications using the generated classification output. To achieve this, users would have to create their own program, which can be challenging for those who may not be technologically inclined. To address this gap, we present a free and user-friendly BCI software application called EmoconLite. Using the classification output from EmotivBCI, EmoconLite allows users to play YouTube video clips and a variety of video games from multiple platforms, ultimately creating an end-to-end solution for users. With its use in the Holland Bloorview Kids Rehabilitation Hospital's BCI clinic, EmoconLite will bridge the gap between research and clinical practice, providing children with access to BCI technology and supporting BCI-enabled play.
computer science
The volume of biomedical images is increasing drastically. Along the way, many machine learning algorithms have been proposed to predict and identify various kinds of diseases. One such disease is pneumonia, an infection caused by both bacteria and viruses through the inflammation of a person's lung air sacs. In this paper, we propose an algorithm that receives chest x-ray images as input and determines whether the patient is infected with pneumonia, as well as the specific region of the lungs where the inflammation has occurred. The algorithm is based on the transfer learning mechanism, where a pre-trained ResNet-50 (convolutional neural network) is used, followed by custom layers for making the prediction. The model achieves an accuracy of 90.6 percent, which indicates that it is effective and can be implemented for the detection of pneumonia in patients. Furthermore, a class activation map is used for the detection of the infected region in the lungs. In addition, PneuNet was developed so that users can more easily access and use the services.
electrical engineering and systems science
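A minimal sketch of the transfer-learning setup described above: an ImageNet-pretrained ResNet-50 is frozen as a feature extractor and a small custom head makes the binary prediction. The head architecture and hyperparameters here are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

# ImageNet-pretrained ResNet-50 backbone without its classification head.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional features

# Custom layers on top for the binary pneumonia / normal decision.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with x-ray data
```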
We study the functioning of associative memory on three-level quantum elements, qutrits represented by spins with S = 1. The recording of patterns into the superposition of quantum states and their recall are carried out by adiabatic variation of the Hamiltonian with time. To equalize the probabilities of finding the system in different states of the superposition, an auxiliary Hamiltonian is proposed, which is turned off at the end of the evolution. Simulations were performed on two and three qutrits, and an increase in the memory capacity after replacing qubits with qutrits is shown.
quantum physics
We study the dynamic behavior of a Bose-Einstein condensate (BEC) containing a dark soliton separately reflected from potential drops and potential barriers. It is shown that for a rapidly varying potential and in a certain regime of incident velocity, the quantum reflection probability displays a cosine dependence on the deflection angle between the incident and reflected solitons, i.e., $R(\theta) \sim \cos 2\theta$. For a potential drop, $R(\theta)$ is sensitive to widths of the potential drop up to the length of the dark soliton, and the difference of the reflection rates between the soliton orientation angles $\theta=0$ and $\theta=\pi/2$, $\delta R_s$, displays an oscillating exponential decay with increasing potential width. However, for a barrier potential, $R(\theta)$ is insensitive for potential widths less than the decay length of the matter wave, and $\delta R_s$ presents an exponential trend. This discrepancy of the reflectances in the two systems arises from the different behaviors of matter waves in the region of potential variation.
condensed matter
Most discussions of propagators in Lee-Wick theories focus on the presence of two massive complex conjugate poles in the propagator. We show that there is in fact only one pole near the physical region, or in another representation three pole-like structures with compensating extra poles. The latter modified Lehmann representation is useful calculationally and conceptually only if one includes the resonance structure in the spectral integral.
high energy physics theory
We present the results of the spectroscopic follow-up of the QUBRICS survey. The selection method is based on a machine learning approach applied to photometric catalogs, covering an area of $\sim$ 12,400 deg$^2$ in the Southern Hemisphere. The spectroscopic observations started in 2018 and identified 55 new, high-redshift (z>=2.5), bright (i<=18) QSOs, with the catalog published in late 2019. Here we report the current status of the survey, bringing the total number of bright QSOs at z>=2.5 identified by QUBRICS to 224. The success rate of the QUBRICS selection method, in its most recent training, is estimated to be 68%. The predominant contaminant turns out to be lower-z QSOs at z<2.5. This survey provides a unique sample of bright QSOs at high-z available for a number of cosmological investigations. In particular, carrying out the redshift drift measurements (Sandage Test) in the Southern Hemisphere, using the HIRES spectrograph at the 39m ELT, appears to be possible with less than 2500 hours of observations spread over 30 targets in 25 years.
astrophysics
We give a brief overview how to couple general relativity to the Standard Model of elementary particles, within the higher gauge theory framework, suitable for the spinfoam quantization procedure. We begin by providing a short review of all relevant mathematical concepts, most notably the idea of a categorical ladder, 3-groups and generalized parallel transport. Then, we give an explicit construction of the algebraic structure which describes the full Standard Model coupled to Einstein-Cartan gravity, along with the classical action, written in the form suitable for the spinfoam quantization procedure. We emphasize the usefulness of the 3-group concept as a superior tool to describe gauge symmetry, compared to an ordinary Lie group, as well as the possibility to employ this new structure to classify matter fields and study their spectrum, including the origin of fermion families.
high energy physics theory
Recent studies have demonstrated that analysis of laboratory-quality voice recordings can be used to accurately differentiate people diagnosed with Parkinson's disease (PD) from healthy controls (HC). These findings could help facilitate the development of remote screening and monitoring tools for PD. In this study, we analyzed 2759 telephone-quality voice recordings from 1483 PD and 15321 recordings from 8300 HC participants. To account for variations in phonetic backgrounds, we acquired data from seven countries. We developed a statistical framework for analyzing voice, whereby we computed 307 dysphonia measures that quantify different properties of voice impairment, such as breathiness, roughness, monopitch, hoarse voice quality, and exaggerated vocal tremor. We used feature selection algorithms to identify robust parsimonious feature subsets, which were used in combination with a Random Forests (RF) classifier to accurately distinguish PD from HC. The best 10-fold cross-validation performance was obtained using Gram-Schmidt Orthogonalization (GSO) and RF, leading to a mean sensitivity of 64.90% (standard deviation, SD 2.90%) and a mean specificity of 67.96% (SD 2.90%). This large-scale study is a step forward towards assessing the development of a reliable, cost-effective and practical clinical decision support tool for screening the population at large for PD using telephone-quality voice.
statistics
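A minimal sketch of the classification stage described above: a Random Forest evaluated with 10-fold cross-validation on a matrix of dysphonia measures. The data here are random placeholders and the Gram-Schmidt-based feature selection is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 307))   # placeholder: 307 dysphonia measures
y = rng.integers(0, 2, size=500)  # placeholder: PD (1) vs HC (0) labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Sensitivity is recall on the PD class, averaged over the 10 folds.
scores = cross_val_score(clf, X, y, cv=cv, scoring="recall")
print(f"mean sensitivity: {scores.mean():.3f} (SD {scores.std():.3f})")
```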
Semi-quantum inspired lightweight protocols are an important research issue in the realization of quantum protocols. However, the previous semi-quantum inspired lightweight mediated quantum key distribution (SQIL-MQKD) protocols need to generate or measure Bell states. The generation and measurement of Bell states are more difficult and expensive than those of single photons. To solve this problem, a semi-quantum inspired lightweight mediated quantum key distribution protocol with limited resources is proposed. In this protocol, an untrusted third party (TP) only needs to perform quantum operations related to single photons, and the participants only have to perform two quantum operations: (1) reflecting qubits without disturbance, and (2) performing unitary operations on single photons. In addition, this protocol is shown to be robust under the collective attack.
quantum physics
We report on a method for measuring ac Stark shifts observed in stored light experiments while simultaneously determining the energetic splitting between the electronic ground states involved in the two-photon transition. To this end we make use of the frequency matching effect in light storage spectroscopy. We find a linear dependence on the intensity of the control field applied during the retrieval phase of the experiment. At the same time, we observe that the light shift is insensitive to the intensity of the signal field which is in contrast to continuously operated schemes using electromagnetically induced transparency (EIT) or coherent population trapping (CPT). Our results may be of importance for future light storage-based precision measurements with EIT and CPT-type devices where, in contrast to schemes using continuous exposure to optical fields, the impact of intensity fluctuations from the signal field can be suppressed.
quantum physics
We examine crime patterns in Santa Monica, California before and after passage of Proposition 47, a 2014 initiative that reclassified some non-violent felonies to misdemeanors. We also study how the 2016 opening of four new light rail stations, and how more community-based policing starting in late 2018, impacted crime. A series of statistical analyses are performed on reclassified (larceny, fraud, possession of narcotics, forgery, receiving/possessing stolen property) and non-reclassified crimes by probing publicly available databases from 2006 to 2019. We compare data before and after passage of Proposition 47, city-wide and within eight neighborhoods. Similar analyses are conducted within a 450 meter radius of the new transit stations. Reports of monthly reclassified crimes increased city-wide by approximately 15% after enactment of Proposition 47, with a significant drop observed in late 2018. Downtown exhibited the largest overall surge. The reported incidence of larceny intensified throughout the city. Two new train stations, including Downtown, reported significant crime increases in their vicinity after service began. While the number of reported reclassified crimes increased after passage of Proposition 47, those not affected by the new law decreased or stayed constant, suggesting that Proposition 47 strongly impacted crime in Santa Monica. Reported crimes decreased in late 2018 concurrent with the adoption of new policing measures that enhanced outreach and patrolling. These findings may be relevant to law enforcement and policy-makers. Follow-up studies needed to confirm long-term trends may be affected by the COVID-19 pandemic that drastically changed societal conditions.
statistics
In this paper, we examine the multi-task training of lightweight convolutional neural networks for face identification and classification of facial attributes (age, gender, ethnicity) trained on cropped faces without margins. It is shown that it is still necessary to fine-tune these networks in order to predict facial expressions. Several models are presented based on MobileNet, EfficientNet and RexNet architectures. It was experimentally demonstrated that our models achieve state-of-the-art emotion classification accuracy on the AffectNet dataset and near state-of-the-art results in age, gender and race recognition on the UTKFace dataset. Moreover, it is shown that using our neural network as a feature extractor of facial regions in video frames, together with the concatenation of several statistical functions (mean, max, etc.), leads to 4.5\% higher accuracy than the previously known state-of-the-art single models for the AFEW and VGAF datasets from the EmotiW challenges. The models and source code are publicly available at https://github.com/HSE-asavchenko/face-emotion-recognition.
computer science
The fireball concept of Rolf Hagedorn, developed in the 1960s, is an alternative description of hadronic matter. Using a recently derived mass spectrum, we use the transport model GiBUU to calculate the shear viscosity of a gas of such Hagedorn states, applying the Green-Kubo method to Monte-Carlo calculations. Since the entropy density rises ad infinitum near $T_H$, this leads to a very low shear viscosity to entropy density ratio near $T_H$. Further, by comparing our results with analytic expressions, we find a nice extrapolation behavior, indicating that a gas of Hagedorn states comes close to or even below the boundary $1/4\pi$ from AdS-CFT.
high energy physics phenomenology
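The Green-Kubo relation used above expresses the shear viscosity as a time integral of the equilibrium autocorrelation of the off-diagonal stress tensor; in natural units ($k_B=1$), for volume $V$ and temperature $T$,

$$ \eta = \frac{V}{T}\int_0^{\infty}\mathrm{d}t\,\big\langle \pi^{xy}(t)\,\pi^{xy}(0)\big\rangle. $$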