In this letter, we aim to design a robust hybrid precoder and combiner against beam misalignment in millimeter-wave (mmWave) communication systems. We consider the inclusion of the `error statistics' into the precoder and combiner design, where the array response that incorporates the distribution of the misalignment error is first derived. An iterative algorithm is then proposed to design the robust hybrid precoder and combiner to maximize the array gain in the presence of beam misalignment. To further enhance the spectral efficiency, a second-stage digital precoder and combiner are included to mitigate the inter-stream interference. Numerical results show that the proposed robust hybrid precoder and combiner design can effectively alleviate the performance degradation incurred by beam misalignment.
electrical engineering and systems science
As handwriting input becomes more prevalent, the large symbol inventory required to support Chinese handwriting recognition poses unique challenges. This paper describes how the Apple deep learning recognition system can accurately handle up to 30,000 Chinese characters while running in real-time across a range of mobile devices. To achieve acceptable accuracy, we paid particular attention to data collection conditions, representativeness of writing styles, and training regimen. We found that, with proper care, even larger inventories are within reach. Our experiments show that accuracy only degrades slowly as the inventory increases, as long as we use training data of sufficient quality and in sufficient quantity.
computer science
Humans are extremely swift learners. We are able to grasp highly abstract notions, whether they come from art perception or pure mathematics. Current machine learning techniques demonstrate astonishing results in extracting patterns in information. Yet the abstract notions we possess are more than just statistical patterns in the incoming information. Sensorimotor theory suggests that they represent functions, laws, describing how the information can be transformed, or, in other words, they represent the statistics of sensorimotor changes rather than sensory inputs themselves. The aim of our work is to suggest a way for machine learning and sensorimotor theory to benefit from each other so as to pave the way toward new horizons in learning. We show in this study that a highly abstract notion, that of space, can be seen as a collection of laws of transformations of sensory information and that these laws could in theory be learned by a naive agent. As an illustration we do a one-dimensional simulation in which an agent extracts spatial knowledge in the form of internalized ("sensible") rigid displacements. The agent uses them to encode its own displacements in a way which is isometrically related to external space. Though the algorithm allowing acquisition of rigid displacements is designed \emph{ad hoc}, we believe it can stimulate the development of unsupervised learning techniques leading to similar results.
computer science
We study quadratic Lie conformal superalgebras associated with Novikov superalgebras. For every Novikov superalgebra $(V,\circ)$, we construct an enveloping differential Poisson superalgebra $U(V)$ with a derivation $d$ such that $u\circ v = ud(v)$ and $\{u,v\} = u\circ v - (-1)^{|u||v|} v\circ u$ for $u,v\in V$. The latter means that the commutator Gelfand--Dorfman superalgebra of $V$ is special. Next, we prove that every quadratic Lie conformal superalgebra constructed on a finite-dimensional special Gelfand--Dorfman superalgebra has a finite faithful conformal representation. This statement is a step toward a solution of the following open problem: whether a finite Lie conformal (super)algebra has a finite faithful conformal representation.
mathematics
We solve a problem mentioned in an article of Berger and Bourn: we prove that in the context of an algebraically coherent semi-abelian category, two natural definitions of the lower central series coincide. In a first, "standard" approach, nilpotency is defined as in group theory via nested binary commutators of the form $[[X,X],X]$. In a second approach, higher Higgins commutators of the form $[X,X,X]$ are used to define nilpotent objects. The two are known to be different in general; for instance, in the context of loops, the definition of Bruck is of the former kind, while the commutator-associator filtration of Mostovoy and his co-authors is of the latter type. Another example, in the context of Moufang loops, is given in Berger and Bourn's paper. In this article, we show that the two streams of development agree in any algebraically coherent semi-abelian category. Such are, for instance, all Orzech categories of interest. Our proof of this result is based on a higher-order version of the Three Subobjects Lemma of Cigoli-Gray-Van der Linden, which extends the classical Three Subgroups Lemma from group theory to categorical algebra. It says that any $n$-fold Higgins commutator $[K_1, \dots,K_n]$ of normal subobjects $K_i$ of an object $X$ may be decomposed into a join of nested binary commutators.
mathematics
We investigate the R\'enyi entropy and entanglement entropy of an interval with an arbitrary length in the canonical ensemble, microcanonical ensemble and primary excited states at large energy density in the thermodynamic limit of a two-dimensional large central charge $c$ conformal field theory. As a generalization of the recent work [Phys. Rev. Lett. 122 (2019) 041602], the main purpose of the paper is to see whether one can distinguish these various large energy density states by the R\'enyi entropies of an interval at different size scales, namely short, medium and long. Collecting earlier results and performing new calculations in order to compare with and fill gaps in the literature, we give a more complete and detailed analysis of the problem. In particular, we find corrections to the recent results for the holographic R\'enyi entropy of a medium size interval, which enlarge the validity region of those results. Based on the R\'enyi entropies at the three interval scales, we find that the R\'enyi entropy cannot distinguish the canonical and microcanonical ensemble states for a short interval, but can do so for both medium and long intervals. At the leading order in large $c$ the entanglement entropy cannot distinguish the canonical and microcanonical ensemble states for any interval length, but for a long interval the difference of entanglement entropy between the two states appears at order $1/c$. We also discuss the R\'enyi entropy and entanglement entropy differences between the thermal states and the primary excited state. Overall, our work provides an up-to-date picture of distinguishing different thermal or primary states at various length scales of the subsystem.
high energy physics theory
Constraining the delay-time distribution (DTD) of different supernova (SN) types can shed light on the timescales of galaxy chemical enrichment and feedback processes affecting galaxy dynamics, and SN progenitor properties. Here, we present an approach to recover SN DTDs based on integral field spectroscopy (IFS) of their host galaxies. Using a statistical analysis of a sample of 116 supernovae in 102 galaxies, we evaluate different DTD models for SN types Ia (73), II (28) and Ib/c (15). We find the best SN Ia DTD fit to be a power law with an exponent $\alpha = -1.1\pm 0.3$ (50\% confidence interval), and a time delay (between star formation and the first SNe) $\Delta = 50^{+100}_{-35}~Myr$ (50\% C.I.). For core collapse (CC) SNe, both of the Zapartas et al. (2017) DTD models for single and binary stellar evolution are consistent with our results. For SNe II and Ib/c, we find a correlation with a Gaussian DTD model with $\sigma = 82^{+129}_{-23}~Myr$ and $\sigma = 56^{+141}_{-9}~Myr$ (50\% C.I.) respectively. This analysis demonstrates that integral field spectroscopy opens a new way of studying SN DTD models in the local universe.
astrophysics
Two renormalization group invariant quantities in quantum chromodynamics (QCD), defined in Euclidean space, namely the Adler D-function of electron-positron annihilation into hadrons and the Bjorken polarized deep-inelastic scattering sum rule, are considered. It is shown that the fifth-order corrections to them in $\overline{MS}$-like renormalization prescriptions, proportional to the Riemann $\zeta$-function $\zeta\left(4\right)$, can be restored by the transition to the C-scheme, with a $\beta$-function analogous to the exact Novikov-Shifman-Vainshtein-Zakharov $\beta$-function in $\mathcal{N}=1$ supersymmetric gauge theories. The general analytical expression for these corrections in $SU\left(N_c\right)$ QCD is deduced and their scale invariance is shown. The $\beta$-expansion procedure for these contributions is performed, and their mutual cancellation in the fifth order of the generalized Crewther identity is discussed.
high energy physics phenomenology
We show that a surface acoustic wave (SAW) applied across the terminals of a magnetic tunnel junction (MTJ) decreases both the (time-averaged) parallel and antiparallel resistances of the MTJ, with the latter decreasing much more than the former. This results in a decrease of the tunneling magnetoresistance (TMR) ratio. The coercivities of the free and fixed layer of the MTJ, however, are not affected significantly, suggesting that the SAW does not cause large-angle magnetization rotation in the magnetic layers through the inverse magnetostriction (Villari) effect at the power levels used. This study sheds light on the dynamical behavior of an MTJ under periodic compressive and tensile strain.
condensed matter
Linear Programming (LP) is an important decoding technique for binary linear codes. However, the advantages of LP decoding, such as a low error floor and strong theoretical guarantees, come at the cost of high computational complexity and poor performance in the low signal-to-noise ratio (SNR) region. In this letter, we adopt the penalty dual decomposition (PDD) framework and propose a PDD algorithm to address the fundamental-polytope-based maximum likelihood (ML) decoding problem. Furthermore, we propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm, i.e., the check polytope projection (CPP). Inspired by the fact that a multi-layer perceptron (MLP) can theoretically approximate any nonlinear mapping function, we present a specially designed neural CPP (NCPP) algorithm to decrease the decoding latency. Simulation results demonstrate the effectiveness of the proposed algorithms.
electrical engineering and systems science
In this paper we study the analytic properties of a multiple Dirichlet series associated to the prehomogeneous vector space of binary cubic forms.
mathematics
Recent measurements suggest that the Large Magellanic Cloud (LMC) may weigh as much as 25\% of the Milky Way. In this work we explore how such a large satellite affects mass estimates of the Milky Way based on equilibrium modelling of the stellar halo or other tracers. In particular, we show that if the LMC is ignored, the Milky Way mass is overestimated by as much as 50\%. This bias is due to the bulk motion in the outskirts of the Galaxy's halo and can be, at least in part, accounted for with a simple modification to the equilibrium modelling. Finally, we show that the LMC has a substantial effect on the orbit of Leo I, which acts to increase its present-day speed relative to the Milky Way. We estimate that accounting for a $1.5\times10^{11} M_\odot$ LMC would lower the inferred Milky Way mass to $\sim10^{12} M_\odot$.
astrophysics
This paper describes the motion of a classical Nambu-Goto string in three-dimensional anti-de Sitter spacetime in terms of two fields on the worldsheet. The fields correspond to retarded and advanced boundary times at which null rays emanating from the string reach the boundary. The formalism allows for a simple derivation of the Schwarzian action for near-AdS_2 embeddings.
high energy physics theory
We study the recently introduced 'critically coupled' model of magnetic Skyrmions, generalising it to thin films with curved geometry. The model feels keenly the extrinsic geometry of the film in three-dimensional space. We find exact Skyrmion solutions on spherical, conical and cylindrical thin films. Axially symmetric solutions on cylindrical films are described by kinks tunnelling between 'vacua'. For the model defined on general compact thin films, we prove the existence of energy minimising multi-Skyrmion solutions and construct the (resolved) moduli space of these solutions.
condensed matter
The multiplicative depth of a logic network over the gate basis $\{\land, \oplus, \neg\}$ is the largest number of $\land$ gates on any path from a primary input to a primary output in the network. We describe a dynamic programming based logic synthesis algorithm to reduce the multiplicative depth in logic networks. It makes use of cut enumeration, tree balancing, and exclusive sum-of-products (ESOP) representations. Our algorithm has applications to cryptography and quantum computing, as a reduction in the multiplicative depth directly translates to a lower $T$-depth of the corresponding quantum circuit. Our experimental results show improvements in $T$-depth over state-of-the-art methods and over several hand-optimized quantum circuits for instances of AES, SHA, and floating-point arithmetic.
quantum physics
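A minimal sketch of the cost metric defined in the abstract above (the multiplicative depth itself, not the paper's cut-enumeration synthesis algorithm): depth increases only at $\land$ gates. The dict-based network encoding is a hypothetical one chosen for illustration.

```python
# Multiplicative depth of a logic network over {AND, XOR, NOT}.
# The network is a DAG encoded as {gate: (op, [fanins])}; primary
# inputs have op == "PI". The encoding is illustrative, not the paper's.

def multiplicative_depth(network, outputs):
    depth = {}

    def d(node):
        if node not in depth:
            op, fanins = network[node]
            if op == "PI":
                depth[node] = 0
            else:
                # Only AND gates increase the multiplicative depth.
                depth[node] = max(d(f) for f in fanins) + (1 if op == "AND" else 0)
        return depth[node]

    return max(d(o) for o in outputs)

# Example: y = (a AND b) XOR (NOT c) has multiplicative depth 1.
net = {
    "a": ("PI", []), "b": ("PI", []), "c": ("PI", []),
    "t1": ("AND", ["a", "b"]),
    "t2": ("NOT", ["c"]),
    "y": ("XOR", ["t1", "t2"]),
}
assert multiplicative_depth(net, ["y"]) == 1
```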
The task of single image super-resolution (SISR) aims at reconstructing a high-resolution (HR) image from a low-resolution (LR) image. Although significant progress has been made by deep learning models, they are trained on synthetic paired data in a supervised way and do not perform well on real data. Several attempts have been made to directly apply unsupervised image translation models to address this problem. However, unsupervised low-level vision problems pose greater challenges to translation accuracy. In this work, we propose a novel framework composed of two stages: 1) unsupervised image translation between real LR images and synthetic LR images; 2) supervised super-resolution from approximated real LR images to HR images. It takes the synthetic LR images as a bridge and creates an indirect supervised path from real LR images to HR images. Any existing deep-learning-based image super-resolution model can be integrated into the second stage of the proposed framework for further improvement. In addition, the framework shows great flexibility in balancing between distortion and perceptual quality under the unsupervised setting. The proposed method is evaluated on both the NTIRE 2017 and 2018 challenge datasets and achieves favorable performance against supervised methods.
electrical engineering and systems science
We perform a phenomenological analysis of the observable consequences of the extended scalar sector of the SMASH (Standard Model - Axion - Seesaw - Higgs portal inflation) framework. We solve the vacuum metastability problem in a suitable region of the SMASH scalar parameter space and discuss the one-loop correction to the triple Higgs coupling $\lambda_{HHH}$. We also find that the correct neutrino masses, mass-squared differences and baryon asymmetry of the universe can arise from this model, and we consider the running of the Yukawa couplings of the model. In fact, we perform a full two-loop renormalization group analysis of the SMASH model.
high energy physics phenomenology
The static patch of de Sitter spacetime and the Rindler wedge of Minkowski spacetime are causal diamonds admitting a true Killing field, and they behave as thermodynamic equilibrium states under gravitational perturbations. We explore the extension of this gravitational thermodynamics to all causal diamonds in maximally symmetric spacetimes. Although such diamonds generally admit only a conformal Killing vector, that seems in all respects to be sufficient. We establish a Smarr formula for such diamonds and a "first law" for variations to nearby solutions. The latter relates the variations of the bounding area, spatial volume of the maximal slice, cosmological constant, and matter Hamiltonian. The total Hamiltonian is the generator of evolution along the conformal Killing vector that preserves the diamond. To interpret the first law as a thermodynamic relation, it appears necessary to attribute a negative temperature to the diamond, as has been previously suggested for the special case of the static patch of de Sitter spacetime. With quantum corrections included, for small diamonds we recover the "entanglement equilibrium" result that the generalized entropy is stationary at the maximally symmetric vacuum at fixed volume, and we reformulate this as the stationarity of free conformal energy with the volume not fixed.
high energy physics theory
We compute the power spectrum of curvature perturbations in stochastic inflation. This combines the distribution of first crossing times through the end-of-inflation surface, which has been previously studied, with the distribution of field values at the time when a given scale exits the Hubble radius during inflation, which we show how to compute. This allows the stochastic-$\delta N$ formalism to make concrete contact with observations. As an application, we study how quantum diffusion at small scales (arising e.g. in models leading to primordial black holes) affects the large-scale perturbations observed in the cosmic microwave background. We find that even if those sets of scales are well separated, large effects can arise from the distortion of the classical relationship between field values and wavenumbers brought about by quantum diffusion near the end of inflation. This shows that cosmic microwave background measurements can set explicit constraints on the entire inflationary potential down to the end of inflation.
astrophysics
We establish the status of the Weyl double copy relation for radiative solutions of the vacuum Einstein equations. We show that all type N vacuum solutions, which describe the radiation region of isolated gravitational systems with appropriate fall-off for the matter fields, admit a degenerate Maxwell field that squares to give the Weyl tensor. The converse statement also holds, i.e. if there exists a degenerate Maxwell field on a curved background, then the background is type N. This relation defines a scalar that satisfies the wave equation on the background. We show that for non-twisting radiative solutions, the Maxwell field and the scalar also satisfy the Maxwell equation and the wave equation on Minkowski spacetime. Hence, non-twisting solutions have a straightforward double copy interpretation.
high energy physics theory
Using copulas to model dependency between variables extends the multivariate Gaussian assumption. In this paper we first empirically study the copula regression model with continuous responses; both a simulation study and a real-data study are given. Second, we propose a novel copula regression model with binary outcomes, together with a score-gradient estimation algorithm to fit it; again, both a simulation study and a real-data study are given for our model and fitting algorithm.
statistics
This research focuses on the possibility of a surjective relation between a symmetric potential function and its scattering matrix in one dimension. The theory is based on the symmetry properties of the wave function and on boundary conditions. This research establishes the surjective relation in some particular cases, the delta function potential and the finite square wall potential, and disproves an injective relation between an arbitrary potential function and its S-matrix.
quantum physics
We propose a paradigm for realizing the SYK model within string theory. Using the large $N$ matrix description of $c<1$ string theory, we show that the effective theory on a large number $Q$ of FZZT D-branes in $(p,1)$ minimal string theory takes the form of the disorder averaged SYK model with $J \psi^{p}$ interaction. The SYK fermions represent open strings between the FZZT branes and the ZZ branes that underlie the matrix model. The continuum SYK dynamics arises upon taking the large $Q$ limit. We observe several qualitative and quantitative links between the SYK model and $(p,q)$ minimal string theory and propose that the two describe different phases of a single system. We comment on the dual string interpretation of double scaled SYK and on the relevance of our results to the recent discussion of the role of ensemble averaging in holography.
high energy physics theory
Forecasting the number of trips in bike-sharing systems and its volatility over time is crucial for planning and optimizing such systems. This paper develops time series models to forecast hourly count time series data and estimate their volatility. Such models need to take into account the complex patterns over various temporal scales, including hourly, daily, weekly and annual, as well as the temporal correlation. To capture this complex structure, a large number of parameters is needed. Here a structural model selection approach is utilized to choose the parameters. This method explores the parameter space for a group of covariates at each step. These groups of covariates are constructed to represent a particular structure in the model. The statistical models utilized are extensions of Generalized Linear Models to time series data. One challenge in using such models is the explosive behavior of the simulated values. To address this issue, we develop a technique which relies on damping the simulated value if it falls outside of an admissible interval. The admissible interval is defined using measures of variability of the left and right tails. A new definition of outliers is proposed based on these variability measures. This new definition is shown to be useful in the context of asymmetric distributions.
statistics
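The damping step described in the abstract above can be sketched as follows; the abstract does not specify the exact damping rule, so the saturating pull-back used here, and the interval bounds, are assumptions for illustration only.

```python
import numpy as np

def damp(value, lo, hi, strength=0.5):
    """Pull a simulated value that escapes the admissible interval [lo, hi]
    back toward the nearest endpoint (saturating rule; illustrative only)."""
    if value > hi:
        excess = value - hi
        return hi + strength * excess / (1.0 + excess)
    if value < lo:
        excess = lo - value
        return lo - strength * excess / (1.0 + excess)
    return value

# Simulated hourly counts, with one explosive value tamed by the damping.
sims = np.array([5.0, 12.0, 400.0])
print([round(damp(v, lo=0.0, hi=15.0), 2) for v in sims])  # [5.0, 12.0, ~15.5]
```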
Over the last decade it has been shown that magnetic non-collinearity at an s-wave superconductor/ferromagnet interface is a key ingredient for spin-singlet to spin-triplet pair conversion. This has been verified in several synthetic non-collinear magnetic structures. A magnetically soft and hard ferromagnetic layer combination in a bilayer structure can function as a field-tunable non-collinear magnetic structure, which may offer magnetic-field tuneability of singlet-to-triplet pair conversion. From magnetization measurements of Nb/Co/Py/Nb multilayers we demonstrate a reversible enhancement of the superconducting critical temperature (Tc) of 400 mK by measuring Tc with and without a non-collinear magnetic structure between Co and Py. The sensitivity of Tc in these structures offers the potential for realizing magnetic field tunable Josephson junctions in which pair conversion and Josephson critical currents are controllable using modest magnetic fields.
condensed matter
We identify a parametrically light dilaton by studying the perturbations of metastable vacua along a branch of regular supergravity backgrounds that are dual to four-dimensional confining field theories. The branch includes also stable and unstable solutions. The former encompass, as a special case, the geometry proposed by Witten as a holographic model of confinement. The latter approach a supersymmetric solution, by enhancing a condensate in the dual field theory. A phase transition separates the space of stable backgrounds from the metastable ones. In the proximity of the phase transition, one of the lightest scalar states inherits some of the properties of the dilaton, despite not being particularly light.
high energy physics theory
*Abbreviated abstract* Context: Brown dwarfs in the spectral range L9-T3.5, within the so called L/T transition, have been shown to be variable at higher amplitudes and with greater frequency than other field dwarfs. [...] Now, more variables such as these need to be discovered and studied to better constrain atmospheric models. This is also critical to better understand giant exoplanets and to shed light on a number of possible correlations between brown dwarf characteristics and variability. Aims: [...] In this work, we aim to discover new strong variables in this spectral range by targeting ten previously unsurveyed brown dwarfs. Methods: We used the NOTCam at the Nordic Optical Telescope to observe 11 targets, with spectral types ranging from L9.5 to T3.5, in the J-band [...] Results: We report first discoveries of strong and significant variability in four out of the ten targets [...] measuring peak-to-peak amplitudes up to 10.7 +- 0.4% in J for the T1 dwarf 2MASS J22153705+2110554, for which we observe significant light curve evolution between the 2017 and 2018 epochs. We also report a marginally significant detection of strong variability, and confirm that the well known 2MASS J01365662+0933473 is still strongly variable three years after the last reported epoch. Finally, we present an extensive multi-epoch catalogue of strong variables reported in the literature and discuss possible correlations that are identifiable from the catalogue. Conclusions: We significantly add to the number of known strong variables, and through Poisson statistics infer an occurrence rate for strong variability among L9-T3.5 brown dwarfs of 40[+32-19]%, which is in agreement with previous estimates. The new variables identified in this work are also excellently suited for extensive multi-wavelength observations dedicated to probing the 3D structure of brown dwarf atmospheres.
astrophysics
Symmetries play important roles in physical systems. We study the symmetries of a Hamiltonian system by investigating the asymmetry of the Hamiltonian with respect to certain algebras. We define the asymmetry of an operator with respect to an algebraic basis in terms of their commutators. A detailed analysis is given for the Lie algebra $\mathfrak{su}(2)$ and its $q$-deformation. The asymmetry of the $q$-deformed integrable spin chain models is calculated. The corresponding geometric picture associated with such asymmetry is presented.
quantum physics
We report a study of quantum oscillations (QO) in the magnetic torque of the nodal-line Dirac semimetal ZrSiS in magnetic fields up to 35 T and the temperature range from 40 K down to 2 K, enabling high resolution mapping of the Fermi surface (FS) topology in the $k_z=\pi$ (Z-R-A) plane of the first Brillouin zone (FBZ). It is found that the oscillatory part of the measured magnetic torque signal consists of low frequency (LF) contributions (frequencies up to 1000 T) and high frequency (HF) contributions (several clusters of frequencies from 7-22 kT). Increased resolution and angle-resolved measurements allow us to show that the high oscillation frequencies originate from magnetic breakdown (MB) orbits involving clusters of individual $\alpha$ hole and $\beta$ electron pockets from the diamond shaped FS in the Z-R-A plane. Analyzing the HF oscillations, we unequivocally show that the QO frequency from the dog-bone shaped Fermi pocket ($\beta$ pocket) amounts to $\beta=591(15)$ T. Our findings suggest that most of the frequencies in the LF part of the QO spectrum can also be explained by MB orbits when intraband tunneling in the dog-bone shaped $\beta$ electron pocket is taken into account. Our results give a new understanding of the novel properties of the FS of the nodal-line Dirac semimetal ZrSiS and sister compounds.
condensed matter
One of the problems in the current context of asymptotic symmetry is to extend the black holes considered to rotating ones. To this end, we obtain in this paper a four-dimensional asymptotically flat rotating black hole solution that includes the displacement of supertranslation. Since it is obtained not by solving the Einstein equations directly but by applying coordinate transformations to the previously obtained supertranslated Schwarzschild black hole solution, it is not the most general solution. It is nevertheless certainly `a' supertranslated rotating black hole solution, as it satisfies the Einstein equations. Then, as one of the interesting issues concerning the supertranslated rotating black hole, we analyze the classical gravitational perturbation including the effect of the rotation, expanding in $r$ from infinity to the order at which the effect of the black hole's rotation remains in the result.
high energy physics theory
We consider simulating an $n$-qubit Hamiltonian with nearest-neighbor interactions evolving for time $t$ on a quantum computer. We show that this simulation has gate complexity $(nt)^{1+o(1)}$ using product formulas, a straightforward approach that has been demonstrated by several experimental groups. While it is reasonable to expect this complexity---in particular, this was claimed without rigorous justification by Jordan, Lee, and Preskill---we are not aware of a straightforward proof. Our approach is based on an analysis of the local error structure of product formulas, as introduced by Descombes and Thalhammer and further simplified here. We prove error bounds for canonical product formulas, which include well-known constructions such as the Lie-Trotter-Suzuki formulas. We also develop a local error representation for time-dependent Hamiltonian simulation, and we discuss generalizations to periodic boundary conditions, constant-range interactions, and higher dimensions. Combined with a previous lower bound, our result implies that product formulas can simulate lattice Hamiltonians with nearly optimal gate complexity.
quantum physics
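For context, the simplest canonical product formula covered by the analysis above is the first-order Lie-Trotter formula: for $H = \sum_j H_j$,
\[
\Bigl\| e^{-iHt} - \Bigl( \prod_j e^{-iH_j t/r} \Bigr)^{r} \Bigr\| = O\Bigl( \frac{t^2}{r} \sum_{j<k} \bigl\| [H_j, H_k] \bigr\| \Bigr).
\]
For a nearest-neighbor lattice Hamiltonian only $O(n)$ of the commutators $[H_j,H_k]$ are nonzero, and it is this locality of the error structure that the analysis exploits to reach $(nt)^{1+o(1)}$ gate complexity.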
The article discusses distributed gradient-descent algorithms for computing local and global minima in nonconvex optimization. For local optimization, we focus on distributed stochastic gradient descent (D-SGD)--a simple network-based variant of classical SGD. We discuss local minima convergence guarantees and explore the simple but critical role of the stable-manifold theorem in analyzing saddle-point avoidance. For global optimization, we discuss annealing-based methods in which slowly decaying noise is added to D-SGD. Conditions are discussed under which convergence to global minima is guaranteed. Numerical examples illustrate the key concepts in the paper.
mathematics
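A minimal sketch of the D-SGD update discussed above, with annealing noise added for the global-optimization variant: each node averages its iterate with its neighbors' (through a doubly stochastic mixing matrix, here a complete graph) and takes a local stochastic gradient step. The quadratic objective and the gain/noise schedules are stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 4, 2, 2000           # nodes, dimension, iterations
W = np.full((n, n), 1.0 / n)   # doubly stochastic mixing matrix (complete graph)
X = rng.normal(size=(n, d))    # one local iterate per node

def grad(x):                   # noisy local gradient of f(x) = ||x||^2 / 2
    return x + 0.1 * rng.normal(size=x.shape)

for t in range(1, T + 1):
    alpha = 1.0 / t                            # diminishing step size
    sigma = 0.5 / np.sqrt(t * np.log(t + 1))   # slowly decaying annealing noise
    X = W @ X - alpha * np.array([grad(x) for x in X]) \
        + sigma * rng.normal(size=X.shape)

print("consensus iterate:", X.mean(axis=0))    # drifts toward the minimizer 0
```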
We elaborate on s-confinement phases in three-dimensional $\mathcal{N}=2$ supersymmetric gauge theory, especially focusing on the $SU(N)$ and $USp(2N)$ gauge theories with anti-symmetric tensors and (anti-)fundamental matters. This will elucidate a quantum structure of the Coulomb moduli space of vacua. We stress the importance of so-called dressed Coulomb branch operators for describing these s-confinement phases. The 3d s-confinement phases are highly richer than the 4d ones since there is no chiral anomaly constraint on the matter contents.
high energy physics theory
Edge TPUs are a domain of accelerators for low-power, edge devices and are widely used in various Google products such as Coral and Pixel devices. In this paper, we first discuss the major microarchitectural details of Edge TPUs. Then, we extensively evaluate three classes of Edge TPUs, covering different computing ecosystems, that are either currently deployed in Google products or are in the product pipeline, across 423K unique convolutional neural networks. Building upon this extensive study, we discuss critical and interpretable microarchitectural insights about the studied classes of Edge TPUs. Mainly, we discuss how Edge TPU accelerators perform across convolutional neural networks with different structures. Finally, we present our ongoing efforts in developing high-accuracy learned machine learning models to estimate the major performance metrics of accelerators, such as latency and energy consumption. These learned models enable significantly faster (on the order of milliseconds) evaluations of accelerators as an alternative to time-consuming cycle-accurate simulators and establish an exciting opportunity for rapid hardware/software co-design.
computer science
Candidate counterterms break the E7-type U-duality symmetry of $N \geq 5$ supergravity theories in four dimensions \cite{Kallosh:2011dp}. A proposal was made in \cite{Bossard:2011ij} to restore it, starting with a double set of vector fields, and it was argued that a supersymmetric extension of this proposal should exist. We show that the extra vectors needed for the deformation cannot be auxiliary fields in an eventual off-shell formulation of $N \geq 5$ supergravity, assuming that such a formulation exists. Furthermore, we show that these extra vector fields cannot be dynamical either, since that changes the unitary supermultiplets underlying these theories and requires one to go beyond the standard framework of extended simple supergravities. To show this we list all relevant unitary conformal supermultiplets of $SU(2,2|N+n)$. We find that a doubling of vectors consistent with linearized supersymmetry requires changing the number of scalars, violating the coset structure of the theory, and also adding a finite number of higher spin fields, which do not admit consistent couplings to theories with spins $\leq 2$. Thus, the proposed duality-restoring deformation along the lines of \cite{Bossard:2011ij} cannot be implemented within the standard framework of extended supergravity theories. We argue therefore that, in the absence of anomalies, E7-type duality together with supersymmetry might protect $N \geq 5$ supergravity from UV divergences, in particular $N=5$ supergravity at 4 loops in d=4.
high energy physics theory
Peer prediction mechanisms incentivize agents to truthfully report their signals even in the absence of verification by comparing agents' reports with those of their peers. In the detail-free multi-task setting, agents respond to multiple independent and identically distributed tasks, and the mechanism does not know the prior distribution of agents' signals. The goal is to provide an $\epsilon$-strongly truthful mechanism where truth-telling rewards agents "strictly" more than any other strategy profile (with $\epsilon$ additive error), and to do so while requiring as few tasks as possible. We design a family of mechanisms with a scoring function that maps a pair of reports to a score. The mechanism is strongly truthful if the scoring function is "prior ideal," and $\epsilon$-strongly truthful as long as the scoring function is sufficiently close to the ideal one. This reduces the above mechanism design problem to a learning problem -- specifically learning an ideal scoring function. We leverage this reduction to obtain the following three results. 1) We show how to derive good bounds on the number of tasks required for different types of priors. Our reduction applies to myriad continuous signal space settings. This is the first peer-prediction mechanism on continuous signals designed for the multi-task setting. 2) We show how to turn a soft-predictor of an agent's signals (given the other agents' signals) into a mechanism. This allows the practical use of machine learning algorithms that give good results even when many agents provide noisy information. 3) For finite signal spaces, we obtain $\epsilon$-strongly truthful mechanisms on any stochastically relevant prior, which is the maximal possible prior. In contrast, prior work only achieves a weaker notion of truthfulness (informed truthfulness) or requires stronger assumptions on the prior.
computer science
We find the low-temperature behavior of the Casimir-Polder free energy for a polarizable and magnetizable atom interacting with a plate made of ferromagnetic dielectric material. It is shown that the corresponding Casimir-Polder entropy goes to zero with vanishing temperature, i.e., the Nernst heat theorem is satisfied, if the dc conductivity of the plate material is disregarded in calculations. If the dc conductivity is taken into account, the Nernst theorem is violated. These results are discussed in light of recent experiments.
quantum physics
In this note we briefly present the results of our computation of the special K\"ahler geometry for polynomial deformations of Berglund-H\"ubsch type Calabi-Yau manifolds. We also build the mirror-symmetric Gauged Linear Sigma Model and check that its partition function, computed by supersymmetric localization, coincides with the exponent of the K\"ahler potential of the special metric.
high energy physics theory
We show that a well-known asymptotic series for the logarithm of the central binomial coefficient is strictly enveloping in the sense of P\'olya and Szeg\"o, so the error incurred in truncating the series is of the same sign as the next term, and is bounded in magnitude by that term. We consider closely related asymptotic series for Binet's function, for $\ln\Gamma(z+1/2)$, and for the Riemann-Siegel theta function, and make some historical remarks.
mathematics
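Presumably the series in question is the one obtained from the Stirling series for $\ln\Gamma$,
\[
\ln\binom{2n}{n} \sim 2n\ln 2 - \frac{1}{2}\ln(\pi n) - \frac{1}{8n} + \frac{1}{192n^3} - \frac{1}{640n^5} + \cdots.
\]
Strict enveloping then says each truncation error has the sign of, and is bounded in magnitude by, the first omitted term; truncating after the logarithmic terms, for example, gives
\[
\frac{4^n}{\sqrt{\pi n}}\, e^{-1/(8n)} \;\le\; \binom{2n}{n} \;\le\; \frac{4^n}{\sqrt{\pi n}}.
\]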
Information Retrieval using dense low-dimensional representations has recently become popular and has shown better performance than traditional sparse representations like BM25. However, no previous work has investigated how dense representations perform with large index sizes. We show theoretically and empirically that the performance of dense representations decreases more quickly than that of sparse representations as index sizes increase. In extreme cases, this can even lead to a tipping point where, at a certain index size, sparse representations outperform dense representations. We show that this behavior is tightly connected to the number of dimensions of the representations: the lower the dimension, the higher the chance of false positives, i.e., returning irrelevant documents.
computer science
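The dimension effect described above is easy to see in a toy Monte Carlo (illustrative, not from the paper): with random unit-norm embeddings, the chance that some distractor document beats a relevant document held at a fixed similarity grows with index size, and grows much faster in low dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

def false_positive_rate(dim, index_size, s_rel=0.6, trials=100):
    """Fraction of queries for which some random distractor exceeds s_rel,
    the (assumed) cosine similarity of the one relevant document."""
    hits = 0
    for _ in range(trials):
        q = rng.normal(size=dim); q /= np.linalg.norm(q)
        docs = rng.normal(size=(index_size, dim))
        docs /= np.linalg.norm(docs, axis=1, keepdims=True)
        hits += (docs @ q).max() > s_rel
    return hits / trials

for dim in (32, 128, 768):
    print(dim, [false_positive_rate(dim, n) for n in (1_000, 10_000)])
```

The maximum cosine similarity of $N$ random vectors scales roughly like $\sqrt{2\ln N / d}$ in dimension $d$, which is the intuition behind the tipping point.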
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together. In contrast, in single-task learning (STL) each individual task is learned independently. MTL often leads to better trained models because they can leverage the commonalities among related tasks. However, because MTL algorithms can ``leak'' information from different models across different tasks, MTL poses a potential security risk. Specifically, an adversary may participate in the MTL process through one task and thereby acquire the model information for another task. The previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform in comparison with STL methods. In this paper, we propose a privacy-preserving MTL framework to prevent information from each model leaking to other models, based on a perturbation of the covariance matrix of the model matrix. We study two popular MTL approaches for instantiation, namely, learning the low-rank and group-sparse patterns of the model matrix. Our algorithms can be guaranteed not to underperform compared with STL methods. We build our methods upon tools for differential privacy; privacy guarantees and utility bounds are provided, and heterogeneous privacy budgets are considered. The experiments demonstrate that our algorithms outperform the baseline methods constructed from existing privacy-preserving MTL methods on the proposed model-protection problem.
statistics
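A hedged sketch of the core primitive named above, perturbing the covariance of the model matrix before any cross-task sharing; the sensitivity value, the noise calibration, and the omitted low-rank/group-sparse learning steps are assumptions here, not the paper's exact construction.

```python
import numpy as np

def perturbed_model_covariance(W, eps, delta, sens, seed=0):
    """Release W @ W.T (W: d features x m task models) with symmetric
    Gaussian noise, Gaussian-mechanism style, for a hypothetical
    L2-sensitivity `sens` of the covariance."""
    d = W.shape[0]
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    E = np.random.default_rng(seed).normal(scale=sigma, size=(d, d))
    E = (E + E.T) / np.sqrt(2.0)      # symmetrize so the output stays symmetric
    return W @ W.T + E

W = np.random.default_rng(1).normal(size=(10, 4))
Sigma_priv = perturbed_model_covariance(W, eps=1.0, delta=1e-5, sens=1.0)
```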
Nowadays, modern applications are developed using components written in different programming languages. These systems offer several advantages. However, as the number of languages increases, so do the challenges related to the development and maintenance of these systems. In such situations, developers may introduce design smells (i.e., anti-patterns and code smells), which are symptoms of poor design and implementation choices. Design smells are defined as poor design and coding choices that can negatively impact the quality of a software program despite satisfying functional requirements. Studies on mono-language systems suggest that the presence of design smells affects code comprehension, thus making systems harder to maintain. However, these studies target only mono-language systems and do not consider the interaction between different programming languages. In this paper, we present an approach to detect multi-language design smells in the context of JNI systems. We then investigate the prevalence of those design smells. Specifically, we detect 15 design smells in 98 releases of nine open-source JNI projects. Our results show that the design smells are prevalent in the selected projects and persist throughout the releases of the systems. We observe that in the analyzed systems, 33.95% of the files involving communication between Java and C/C++ contain occurrences of multi-language design smells. Some kinds of smells are more prevalent than others, e.g., Unused Parameters, Too Much Scattering, Unused Method Declaration. Our results suggest that files with multi-language design smells are often more associated with bugs than files without these smells, and that specific smells are more correlated with fault-proneness than others.
computer science
The quantum anomalous Hall effect (QAHE) and magnetic Weyl semimetals (WSMs) are topological states induced by intrinsic magnetic moments and spin-orbit coupling. Their similarity suggests the possibility of achieving the QAHE by dimensional confinement of a magnetic WSM along one direction. In this study, we investigate the emergence of the QAHE in the two-dimensional (2D) limit of magnetic WSMs due to finite size effects in thin films and step-edges. We demonstrate the feasibility of this approach with effective models and real materials. To this end, we have chosen the layered magnetic WSM Co$_3$Sn$_2$S$_2$, which features a large anomalous Hall conductivity and anomalous Hall angle in its 3D bulk, as our material candidate. In the 2D limit of Co$_3$Sn$_2$S$_2$ two QAHE states exist depending on the stoichiometry of the 2D layer. One is a semimetal with a Chern number of 6, and the other is an insulator with a Chern number of 3. The latter has a band gap of 0.05 eV, which is much larger than that in magnetically doped topological insulators. Our findings naturally explain the existence of chiral states in step edges of bulk Co$_3$Sn$_2$S$_2$, which have been reported in a recent experiment at $T = 4$ K, and present a realistic avenue to realize QAH states in thin films of magnetic WSMs.
condensed matter
Flagella of eukaryotic cells are transient long cylindrical protrusions. The proteins needed to form and maintain flagella are synthesized in the cell body and transported to the distal tips. What `rulers' or `timers' a specific type of cell uses to strike a balance between the outward and inward transport of materials, so as to maintain a particular length of its flagella in the steady state, is one of the open questions in cellular self-organization. Even more curious is how the two flagella of biflagellates, like Chlamydomonas reinhardtii, communicate through their base to coordinate their lengths. In this paper we develop a stochastic model for flagellar length control based on a time-of-flight (ToF) mechanism. This ToF mechanism decides whether or not structural proteins are to be loaded onto an intraflagellar transport (IFT) train just before it begins its motorized journey from the base to the tip of the flagellum. Because of the ongoing turnover, the structural proteins released from the flagellar tip are transported back to the cell body, also by IFT trains. We represent the traffic of IFT trains as a totally asymmetric simple exclusion process (TASEP). The ToF mechanism for each flagellum, together with the TASEP-based description of the IFT trains, combined with a scenario of sharing of a common pool of flagellar structural proteins in biflagellates, can account for all key features of the experimentally known phenomena. These include ciliogenesis, resorption, deflagellation as well as regeneration after selective amputation of one of the two flagella. We also show that the experimental observations of Ishikawa and Marshall are consistent with the ToF mechanism of length control if the effects of the mutual exclusion of the IFT trains, captured by the TASEP, are taken into account. Moreover, we make new predictions on the flagellar length fluctuations and the role of the common pool.
physics
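The IFT-train traffic in the abstract above is modeled as a TASEP, which is straightforward to simulate; here is a minimal open-boundary version with random sequential updates (entry rate alpha at the base, exit rate beta at the tip; parameters illustrative, not fitted to flagella).

```python
import numpy as np

rng = np.random.default_rng(2)
L, alpha, beta, steps = 100, 0.3, 0.8, 200_000
site = np.zeros(L, dtype=bool)      # True = site occupied by an IFT train

for _ in range(steps):
    i = rng.integers(-1, L)         # -1: entry move; L-1: exit; else bond i -> i+1
    if i == -1:
        if not site[0] and rng.random() < alpha:
            site[0] = True                       # injection at the flagellar base
    elif i == L - 1:
        if site[-1] and rng.random() < beta:
            site[-1] = False                     # exit at the flagellar tip
    elif site[i] and not site[i + 1]:
        site[i], site[i + 1] = False, True       # hop right under mutual exclusion

print("bulk density:", site.mean())  # low-density phase: approx. alpha = 0.3
```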
For an abelian group $A$, we give a precise homological description of the kernel of the natural map $\Gamma(A) \to A\otimes_\mathbb{Z} A$, $\gamma(a)\mapsto a\otimes a$, where $\Gamma$ is Whitehead's quadratic functor from the category of abelian groups to itself.
mathematics
We present a diagrammatic Monte Carlo method for quantum impurity problems with general interactions and general hybridization functions. Our method uses a recursive determinant scheme to sample diagrams for the scattering amplitude. Unlike in other methods for general impurity problems, an approximation of the continuous hybridization function by a finite number of bath states is not needed, and accessing low temperature does not incur an exponential cost. We test the method for the example of molecular systems, where we systematically vary temperature, interatomic distance, and basis set size. We further apply the method to an impurity problem generated by a self-energy embedding calculation of correlated antiferromagnetic NiO. We find that the method is ideal for quantum impurity problems with a large number of orbitals but only moderate correlations.
condensed matter
We prove limitations on LOCC and separable measurements in bipartite state discrimination problems using techniques from convex optimization. Specific results that we prove include: an exact formula for the optimal probability of correctly discriminating any set of either three or four Bell states via LOCC or separable measurements when the parties are given an ancillary partially entangled pair of qubits; an easily checkable characterization of when an unextendable product set is perfectly discriminated by separable measurements, along with the first known example of an unextendable product set that cannot be perfectly discriminated by separable measurements; and an optimal bound on the success probability for any LOCC or separable measurement for the recently proposed state discrimination problem of Yu, Duan, and Ying.
quantum physics
Infarcted brain tissue resulting from acute stroke readily shows up as hyperintense regions within diffusion-weighted magnetic resonance imaging (DWI). It has also been proposed that computed tomography perfusion (CTP) could alternatively be used to triage stroke patients, given improvements in speed and availability, as well as reduced cost. However, CTP has a lower signal to noise ratio compared to MR. In this work, we investigate whether a conditional mapping can be learned by a generative adversarial network to map CTP inputs to generated MR DWI that more clearly delineates hyperintense regions due to ischemic stroke. We detail the architectures of the generator and discriminator and describe the training process used to perform image-to-image translation from multi-modal CT perfusion maps to diffusion weighted MR outputs. We evaluate the results both qualitatively by visual comparison of generated MR to ground truth, as well as quantitatively by training fully convolutional neural networks that make use of generated MR data inputs to perform ischemic stroke lesion segmentation. Segmentation networks trained using generated CT-to-MR inputs result in at least some improvement on all metrics used for evaluation, compared with networks that only use CT perfusion input.
electrical engineering and systems science
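The abstract does not spell out the training losses; a conditional GAN of this kind is typically trained with the standard pix2pix-style objective (assumed here), a GAN term plus an $L_1$ reconstruction term,
\[
\min_G \max_D \; \mathbb{E}_{x,y}\bigl[\log D(x,y)\bigr] + \mathbb{E}_{x}\bigl[\log\bigl(1 - D(x, G(x))\bigr)\bigr] + \lambda\, \mathbb{E}_{x,y}\bigl[\lVert y - G(x)\rVert_1\bigr],
\]
where $x$ is the stack of CT perfusion maps and $y$ the corresponding DWI.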
In a recent work we introduced a novel method to compute the effective reproduction number $R_t$ and applied it to describe the development of the COVID-19 outbreak in Italy. The study is based on the number of daily positive swabs as reported by the Italian Dipartimento di Protezione Civile. Recently, the Italian Istituto Superiore di Sanit\`a made available the data for symptomatic cases, in which the reporting date is the date of onset of symptoms instead of the date of the positive swab. In this paper we discuss the merits and drawbacks of these data, quantitatively comparing the quality of the pandemic indicators computed with the two samples.
physics
In this study, we consider a quantum version of multicast network coding as a multicast protocol for sending universal quantum clones (UQCs) from a source node to target nodes on a quantum network. By extending Owari et al.'s previous results for symmetric UQCs, we derive a protocol for multicasting $1\rightarrow 2$ ($1\rightarrow 3$) {\it asymmetric} UQCs of a $q^r$-dimensional state to two (three) target nodes. Our protocol works under the condition that each edge of the quantum network, represented by an undirected graph $G$, transmits a $q$-dimensional state, and that there exists a solvable classical linear multicast network code with a source rate of $r$ on a classical network $G'$, where $G$ is the undirected underlying graph of the acyclic directed graph $G'$. We also assume free classical communication over the quantum network.
quantum physics
The simulation of real-time dynamics in lattice gauge theories is particularly hard for classical computing due to the exponential scaling of the required resources. On the other hand, quantum algorithms can potentially perform the same calculation with a polynomial dependence on the number of degrees of freedom. A precise estimation is however particularly challenging for the simulation of lattice gauge theories in arbitrary dimensions, where gauge fields are dynamical variables in addition to the particle fields. Moreover, there exist several choices for discretizing particles and gauge fields on a lattice, each of them coming at a different price in terms of qubit register size and circuit depth. Here we provide a resource counting for the real-time evolution of $U(1)$ gauge theories, such as Quantum Electrodynamics, in arbitrary dimensions, using the Wilson fermion representation for the particles and the Quantum Link Model approach for the gauge fields. We study the phenomenon of flux-string breaking up to a genuinely two-dimensional model using classical simulations of the quantum circuits, and discuss the advantages of our discretization choice in the simulation of more challenging $SU(N)$ gauge theories such as Quantum Chromodynamics.
quantum physics
This paper reports on the semi-supervised development of acoustic and language models for under-resourced, code-switched speech in five South African languages. Two approaches are considered. The first constructs four separate bilingual automatic speech recognisers (ASRs) corresponding to four different language pairs between which speakers switch frequently. The second uses a single, unified, five-lingual ASR system that represents all the languages (English, isiZulu, isiXhosa, Setswana and Sesotho). We evaluate the effectiveness of these two approaches when used to add additional data to our extremely sparse training sets. Results indicate that batch-wise semi-supervised training yields better results than a non-batch-wise approach. Furthermore, while the separate bilingual systems achieved better recognition performance than the unified system, they benefited more from pseudo-labels generated by the five-lingual system than from those generated by the bilingual systems.
electrical engineering and systems science
Heterogeneity in dynamics in the form of non-Gaussian molecular displacement distributions appears ubiquitously in soft matter. We address the quantification of such heterogeneity using an information-theoretic measure of the distance between the actual displacement distribution and its nearest Gaussian estimation. We explore the usefulness of this measure in two generic scenarios of random walkers in heterogeneous media. We show that our proposed measure leads to a better quantification of non-Gaussianity than the conventional ones based on moment ratios.
condensed matter
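One natural instance of such a measure (assumed here for illustration; the paper may define its estimator differently) is the KL divergence from the empirical displacement distribution to the moment-matched Gaussian, i.e., the negentropy, estimated from a histogram:

```python
import numpy as np

def non_gaussianity(x, bins=60):
    """KL divergence from a histogram estimate of the displacement density
    to the Gaussian with the same mean and variance (in nats)."""
    p, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    width = edges[1] - edges[0]
    mu, sigma = x.mean(), x.std()
    q = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m])) * width

rng = np.random.default_rng(3)
print(non_gaussianity(rng.normal(size=100_000)))   # ~ 0 for Gaussian steps
print(non_gaussianity(rng.laplace(size=100_000)))  # clearly > 0 (heavy tails)
```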
In this report we describe a tool for comparing the performance of graphical causal structure learning algorithms implemented in the TETRAD freeware suite of causal analysis methods. Currently the tool is available as a package in the TETRAD source code (written in Java). Simulations can be done varying the number of runs, sample sizes, and data modalities. Performance on this simulated data can then be compared for a number of algorithms, with parameters varied and with performance statistics as selected, producing a publishable report. The package presented here may also be used to compare structure learning methods across platforms and programming languages, i.e., to compare algorithms implemented in TETRAD with those implemented in MATLAB, Python, or R.
statistics
Stochastic variational inference is an established way to carry out approximate Bayesian inference for deep models. While there have been effective proposals for good initializations for loss minimization in deep learning, far less attention has been devoted to the issue of initialization of stochastic variational inference. We address this by proposing a novel layer-wise initialization strategy based on Bayesian linear models. The proposed method is extensively validated on regression and classification tasks, including Bayesian DeepNets and ConvNets, showing faster and better convergence compared to alternatives inspired by the literature on initializations for loss minimization.
statistics
We provide an algorithm that uses Bayesian randomized benchmarking in concert with a local optimizer, such as SPSA, to find a set of controls that optimizes the average gate fidelity. We call this method Bayesian ACRONYM tuning as a reference to the analogous ACRONYM tuning algorithm. Bayesian ACRONYM distinguishes itself in its ability to retain prior information from experiments that use nearby control parameters, whereas traditional ACRONYM tuning does not use such information and can require many more measurements as a result. We prove that such information reuse is possible under the relatively weak assumption that the true model parameters are Lipschitz-continuous functions of the control parameters. We also perform numerical experiments that demonstrate that over-rotation errors in single qubit gates can be automatically tuned from 88% to 99.95% average gate fidelity using less than 1 kB of data and fewer than 20 steps of the optimizer.
quantum physics
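The local optimizer mentioned above, SPSA, needs only two noisy objective evaluations per step, which is what makes it attractive when each evaluation is a benchmarking experiment; a generic sketch follows (the quadratic stand-in objective and the gain schedules are illustrative, not the paper's infidelity estimator):

```python
import numpy as np

rng = np.random.default_rng(4)

def infidelity(theta):                     # stand-in for a benchmarking estimate
    return np.sum((theta - 0.7) ** 2) + 1e-3 * rng.normal()

theta = np.zeros(3)                        # control parameters to tune
for k in range(1, 501):
    a_k = 0.1 / k ** 0.602                 # standard SPSA gain schedules
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
    g_hat = (infidelity(theta + c_k * delta)
             - infidelity(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat                   # descend the estimated gradient

print(theta)                               # converges near the optimum at 0.7
```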
In this manuscript we examine an accelerated charged particle moving through an optical medium, and explore the emission of accelerated-Cherenkov radiation. The particle's reaction to acceleration creates a low-frequency spectral cutoff in the Cherenkov emission that has a sharp resonance at the superluminal threshold. Moreover, the effect of recoil on the radiation is incorporated kinematically through the use of an Unruh-DeWitt detector by setting an energy gap, i.e., the change in electron energy, to the recoil energy of the emitted photon. The simultaneous presence of recoil and acceleration conspire to produce a localized resonance peak in the emission. These theoretical considerations could be used to construct high precision tests of radiation reaction using Cherenkov emission under acceleration.
high energy physics phenomenology
In the framework of the modular symmetry approach to lepton flavour, we consider a class of theories where matter superfields transform in representations of the finite modular group $\Gamma_5 \simeq A_5$. We explicitly construct a basis for the 11 modular forms of weight 2 and level 5. We show how these forms arrange themselves into two triplets and a quintet of $A_5$. We also present multiplets of modular forms of higher weight. Finally, we provide an example of application of our results, constructing two models of neutrino masses and mixing based on the supersymmetric Weinberg operator.
high energy physics phenomenology
Ensembles of dopants have widespread applications in quantum information processing. However, miniaturization of corresponding devices is hampered by spin-spin interactions that reduce the coherence with increasing dopant density. Here, we investigate this limitation in erbium-doped crystals, in which these interactions are anisotropic and particularly strong. After implementing efficient spin initialization, microwave control, and readout of the spin ensemble, we demonstrate that the coherence limitation can be alleviated by dynamical decoupling. Our findings can be generalized to other dopants and hosts used for quantum sensors, microwave-to-optical conversion, and quantum memories.
quantum physics
This paper contributes towards the development of motion tracking algorithms for time-critical applications, proposing an infrastructure for solving dynamically the inverse kinematics of highly articulated systems such as humans. We present a method based on the integration of differential kinematics using distance measurement on SO(3), for which the convergence is proved using Lyapunov analysis. An experimental scenario, where the motion of a human subject is tracked in static and dynamic configurations, is used to validate the inverse kinematics method performance on human and humanoid models. Moreover, the method is tested on a human-humanoid retargeting scenario, verifying the usability of the computed solution for real-time robotics applications. Our approach is evaluated both in terms of accuracy and computational load, and compared to iterative optimization algorithms.
electrical engineering and systems science
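The distance measure on SO(3) that drives such differential-IK schemes is a compact, standard construction. The Python sketch below computes the geodesic distance between two rotation matrices via the logarithm map; it is a generic building block under standard conventions, not the paper's full integration scheme, and it ignores the edge case of rotations by exactly 180 degrees.

```python
import numpy as np

def so3_log(R):
    """Logarithm map of a rotation matrix: the rotation vector (axis * angle)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))

def so3_distance(R1, R2):
    """Geodesic distance on SO(3): the rotation angle of R1^T R2."""
    return np.linalg.norm(so3_log(R1.T @ R2))
```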
Classical $(1+1)D$ cellular automata, such as the Domany-Kinzel cellular automaton, are paradigmatic systems for the study of non-equilibrium phenomena. Such systems evolve in discrete time-steps, and are thus free of time-discretisation errors. Moreover, information about critical phenomena can be obtained by simulating the evolution of an initial seed that, at any finite time, has support only on a finite light-cone. This allows for essentially numerically exact simulations, free of finite-size errors or boundary effects. Here, we show how similar advantages can be gained in the quantum regime: The many-body critical dynamics occurring in $(1+1)D$ quantum cellular automata with an absorbing state can be studied directly on an infinite lattice when starting from seed initial conditions. This can be achieved efficiently by simulating the dynamics of an associated one-dimensional, non-unitary quantum cellular automaton using tensor networks. We apply our method to a model introduced recently and find accurate values for universal exponents, suggesting that this approach can be a powerful tool for precisely classifying non-equilibrium universal physics in quantum systems.
quantum physics
Magnetic fields play a fundamental role for interior and atmospheric properties of M dwarfs and greatly influence terrestrial planets orbiting in the habitable zones of these low-mass stars. Determination of the strength and topology of magnetic fields, both on stellar surfaces and throughout the extended stellar magnetospheres, is a key ingredient for advancing stellar and planetary science. Here modern methods of magnetic field measurements applied to M-dwarf stars are reviewed, with an emphasis on direct diagnostics based on interpretation of the Zeeman effect signatures in high-resolution intensity and polarisation spectra. Results of the mean field strength measurements derived from Zeeman broadening analyses as well as information on the global magnetic geometries inferred by applying tomographic mapping methods to spectropolarimetric observations are summarised and critically evaluated. The emerging understanding of the complex, multi-scale nature of M-dwarf magnetic fields is discussed in the context of theoretical models of hydromagnetic dynamos and stellar interior structure altered by magnetic fields.
astrophysics
We study black-body radiation in cavities of different geometries with different boundary conditions; formulas for the energy spectral densities in films and rods are obtained, and approximate energy spectral densities in cubic boxes are also calculated. We find that the energy spectral densities deviate greatly from Planck's formula when the length(s) along some directions of the cavity are comparable to the typical wavelengths of black-body radiation in an infinite volume, and that the boundary conditions also affect the results. We obtain an asymptotic formula for the energy spectral density in a closed cavity with Dirichlet boundary conditions and find that it still fails when the size of the cavity is too small.
quantum physics
We discuss the role of higher-dimensional operators in the spontaneous breaking of internal symmetry and scale invariance, in the context of Lorentz-invariant scalar field theory. Using the $\varepsilon$-expansion we determine phase diagrams and demonstrate that (un)stable RG flows computed with a certain basis of dimension-6 operators in the Lagrangian map to (un)stable RG flows of another basis related to the first by field redefinitions. Crucial here is the presence of reparametrization ghosts if Ostrogradsky ghosts appear.
high energy physics theory
The electron spin qubit in a quantum dot has been studied extensively for scalable quantum information processing over the past two decades. Recently, high-fidelity, fast single-spin control and strong spin-photon coupling have been demonstrated using a synthetic spin-orbit coupling created by a micromagnet. Such strong electrical driving suggests a strongly coupled spin-photon system. Here we study the relaxation, pure dephasing, and electrical manipulation of a dressed-spin qubit based on a driven single electron in a quantum dot. We find that the pure dephasing of a dressed qubit due to charge noise can be suppressed substantially at a sweet spot of the dressed qubit. The relaxation at low magnetic fields exhibits non-monotonic behavior due to the energy compensation from the driving field. Moreover, the longitudinal component of the synthetic spin-orbit field can provide fast electric-dipole-induced dressed-spin resonance (EDDSR). Based on the slow dephasing and fast EDDSR, we further propose a scheme for dressed-spin-based semiconductor quantum computing.
condensed matter
The parallel operation of multiple undulator lines with a wide spectral range is an important way to increase the efficiency of x-ray free-electron laser (XFEL) facilities, especially for machines with high repetition rates. In this paper, a delay system based on four double-bend achromats is proposed to delay electron beams, thereby changing the arrival time of the delayed beams in the accelerating structure behind the system. Combined with kickers, the delay system can be used to vary the electron beam energy from bunch to bunch in a continuous-wave XFEL facility. Start-to-end simulations based on the Shanghai high-repetition-rate XFEL and extreme light facility parameters are performed to demonstrate that the delay system can flexibly control the electron beam energy from 1.48 to 8.74 GeV at the end of the linac.
physics
The opening angle method is the most popular choice in biomechanics for estimating residual stresses in arteries. Experimentally, it means that an artery is cut into rings; the rings are then cut axially and open up into sectors; and the corresponding opening angles are measured to give residual stress levels by solving an inverse problem. However, in the lab, for many tissues--more commonly in pathological tissues--the ring does not open according to the theory, into a neat single circular sector, but rather creates an asymmetric geometry, often with abruptly changing curvature(s). This phenomenon might be due to a number of reasons, including variations in thickness, microstructure, mechanical properties, etc. As a result, these samples have to be eliminated from studies relying on the opening angle method, which limits progress in understanding and evaluating residual stresses in all real arteries. With this work we propose an effective approach to deal with these non-trivial openings of rings. First, we digitise pictures of opened rings to split them into multiple, connected circular sectors. Then we measure the corresponding opening angles for each sub-sector. Finally, we determine the non-homogeneous distribution of residual stresses for individual sectors in a closed-ring configuration.
condensed matter
Recently two of the authors presented a spinorial extension of the scattering equations, the `polarized scattering equations', that incorporates spinor polarization data. These led to new worldsheet amplitude formulae for a variety of gauge, gravity and brane theories in six dimensions that naturally incorporate fermions and directly extend to maximal supersymmetry. This paper provides a number of improvements to the original formulae, together with extended details of the construction, examples, and full proofs of some of the formulae by BCFW recursion and factorization. We show how our formulae reduce to corresponding formulae for maximally supersymmetric gauge, gravity and brane theories in five and four dimensions. In four dimensions our framework naturally gives the twistorial version of the 4d ambitwistor string, giving new insights into the nature of the refined and polarized scattering equations it gives rise to, and into the relations between its measure and the CHY measure. Our formulae exhibit a natural double-copy structure, being built from `half-integrands'. We give further discussion of the matrix of theories and formulae to which our half-integrands give rise, including controversial formulae for amplitudes involving Gerbes.
high energy physics theory
The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay -- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. DUNE is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. Central to achieving DUNE's physics program is a far detector that combines the many tens-of-kiloton fiducial mass necessary for rare event searches with sub-centimeter spatial resolution in its ability to image those events, allowing identification of the physics signatures among the numerous backgrounds. In the single-phase liquid argon time-projection chamber (LArTPC) technology, ionization charges drift horizontally in the liquid argon under the influence of an electric field towards a vertical anode, where they are read out with fine granularity. A photon detection system supplements the TPC, directly enhancing physics capabilities for all three DUNE physics drivers and opening up prospects for further physics explorations. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. Volume IV presents an overview of the basic operating principles of a single-phase LArTPC, followed by a description of the DUNE implementation. Each of the subsystems is described in detail, connecting the high-level design requirements and decisions to the overriding physics goals of DUNE.
physics
Machine learning models have been widely used in fraud detection systems. Most research and development efforts have concentrated on improving the performance of the fraud scoring models, yet the downstream fraud alert systems still see limited to no model adoption and rely on manual steps. Alert systems are pervasively used across all payment channels in retail banking and play an important role in the overall fraud detection process. Current fraud detection systems end up with large numbers of dropped alerts due to their inability to account for the alert processing capacity. Ideally, alert threshold selection enables the system to maximize fraud detection while balancing the upstream fraud scores and the available bandwidth of the alert processing teams. In practice, however, the fixed thresholds that are used for their simplicity lack this ability. In this paper, we propose an enhanced threshold selection policy for fraud alert systems. The proposed approach formulates threshold selection as a sequential decision-making problem and uses Deep Q-Network-based reinforcement learning. Experimental results show that this adaptive approach outperforms current static solutions by reducing fraud losses as well as improving the operational efficiency of the alert system.
computer science
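To make the sequential-decision framing of alert thresholds concrete, here is a deliberately simplified, tabular Q-learning stand-in for the paper's Deep Q-Network, written in Python. The environment dynamics, capacity, and reward trade-off are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels, n_actions = 10, 3   # discretised threshold levels; lower/keep/raise
Q = np.zeros((n_levels, n_actions))
capacity = 40                 # alerts the review team can handle per period

def step(level):
    """Invented dynamics: lower thresholds raise more alerts, catching more
    fraud but overflowing the queue; overflowed alerts are dropped unreviewed."""
    alerts = rng.poisson(100 - 8 * level)
    reviewed = min(alerts, capacity)
    fraud_caught = 0.02 * reviewed * (1 + level / n_levels)
    dropped = alerts - reviewed
    return fraud_caught - 0.05 * dropped

alpha, gamma, eps = 0.1, 0.9, 0.1
level = n_levels // 2
for t in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[level].argmax())
    new_level = int(np.clip(level + a - 1, 0, n_levels - 1))
    r = step(new_level)
    Q[level, a] += alpha * (r + gamma * Q[new_level].max() - Q[level, a])
    level = new_level

print("preferred threshold level:", int(Q.max(axis=1).argmax()))
```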
A detailed program is proposed in the Lagrangian formalism to investigate the dynamical behavior of a theory with a singular Lagrangian. This program proceeds, at different levels, in parallel with the Hamiltonian analysis. In particular, we introduce the notions of first-class and second-class Lagrangian constraints. We show that each sequence of first-class constraints leads to a Noether identity and consequently to a gauge transformation. We give a general formula for counting the dynamical variables in the Lagrangian formalism. As the main advantage of the Lagrangian approach, we show that the whole procedure can also be performed covariantly. Several examples are given to make our Lagrangian approach clear.
physics
The article discusses carbocatalysis provided by amorphous carbons. The discussion is conducted from the standpoint of the spin chemistry of graphene molecules, in the framework of which the amorphous carbocatalysts are a conglomerate of graphene-oxynitrothiohydride stable radicals presenting the basic structural units (BSUs) of the species. The chemical activity of the BSU atoms is reliably determined computationally, which allows mapping the distribution of active sites in these molecular catalysts. The presented maps reliably evidence that the BSUs' radicalization is provided by carbon atoms only, the non-terminated edge part of which presents a set of active sites. Spin mapping of carbocatalysts' active sites is suggested as the first step towards the spin carbocatalysis of the species.
condensed matter
Missing data remains a very common problem in large datasets, including survey and census data containing many ordinal responses, such as political polls and opinion surveys. Multiple imputation (MI) is usually the go-to approach for analyzing such incomplete datasets, and there are indeed several implementations of MI, including methods using generalized linear models, tree-based models, and Bayesian non-parametric models. However, there is limited research on the statistical performance of these methods for multivariate ordinal data. In this article, we perform an empirical evaluation of several MI methods, including MI by chained equations (MICE) using multinomial logistic regression models, MICE using proportional odds logistic regression models, MICE using classification and regression trees, MICE using random forest, MI using Dirichlet process (DP) mixtures of products of multinomial distributions, and MI using DP mixtures of multivariate normal distributions. We evaluate the methods using simulation studies based on ordinal variables selected from the 2018 American Community Survey (ACS). Under our simulation settings, the results suggest that MI using proportional odds logistic regression models, classification and regression trees and DP mixtures of multinomial distributions generally outperform the other methods. In certain settings, MI using multinomial logistic regression models is able to achieve comparable performance, depending on the missing data mechanism and amount of missing data.
statistics
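As a concrete, hedged illustration of the MI workflow being benchmarked, the Python sketch below draws several completed datasets with scikit-learn's chained-equations imputer. Note that its default conditional model is a Bayesian ridge regression, not the multinomial or proportional-odds logistic models evaluated in the article, and the toy data stand in for the ACS variables.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy ordinal data (survey responses coded 1..5) with 20% of entries set
# missing completely at random; the article uses ACS variables and several
# missingness mechanisms.
rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(500, 4)).astype(float)
X[rng.random(X.shape) < 0.2] = np.nan

# Multiple imputation: m chained-equation passes with posterior sampling,
# each yielding one completed dataset to analyse and then pool.
imputations = []
for m in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(X)
    imputations.append(np.clip(np.round(completed), 1, 5))  # back to ordinal codes

print("between-imputation variance:",
      np.mean(np.var(np.stack(imputations), axis=0)))
```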
In this paper we discuss, within the Gross--Pitaevskii framework, superfluidity, soliton nucleation, and instabilities in a non-equilibrium polariton fluid injected by a spatially localized, continuous-wave coherent pump and flowing against a defect located outside the pump spot. In contrast to equilibrium condensates, the steady-state solutions of the driven-dissipative equations in this specific geometry hardly show a clean superfluid flow around the defect and rather feature a crossover from shallow to deep soliton-like perturbations. This is explained in terms of the properties of one-dimensional flows, in particular their weak dependence on the pump parameters and their rapid transition to a supersonic regime under the effect of the quantum pressure; such highly nonlinear behaviour calls for quantitative experimental tests of the underlying Gross--Pitaevskii equation. The role of disorder and of an incoherent reservoir in inducing non-stationary behaviours with moving vortices is also highlighted.
condensed matter
According to the World Bank, more than half of the world's population now lives in cities, placing a burden on degraded city infrastructures and driving up the demand for new ones. Construction sites are abundant in already dense cities and have unavoidable impacts on the surrounding environment and residents. However, such impacts have rarely been quantified and made available to construction teams and local agencies to inform their planning decisions. A challenge in achieving this has been the lack of data that can provide insights into how urban residents respond to changes in their environment due to construction projects. The wider availability of data from city agencies nowadays provides opportunities for making such analysis possible. This paper details a generic data-driven approach that enables analysis of the impact of construction projects on quality of life in urban settings through the quantification of changes in widely accepted quality-of-life indicators in cities. The paper also evaluates the approach using publicly available construction project information and open city data portals from New York City. Historical 311 Service Requests, along with 27 road reconstruction projects, were used as testbeds. The results showed that 61% of the projects analyzed in this testbed experienced more 311 requests after the commencement of construction, with the main complaints concerning 'noise', 'air quality', and 'sewer' at the beginning of construction, and 'sanitation' and 'waste' towards the end. Prediction models, built using regression machine learning algorithms, achieved an R-squared value of 0.67. The approach is capable of providing insights for government agencies and construction companies to take proactive actions based on expected complaint types through different phases of construction.
computer science
In the framework of a mesoscopic model for dielectric media we provide an analytical description of the electromagnetic field confined in a cylindrical cavity containing a finite dielectric sample. This system is suited to simulating the electromagnetic field in an optical fiber, in which two different regions, a vacuum region and a dielectric one, appear. A complete description of the scattering basis is introduced, together with field quantization and the two-point function. Furthermore, we also determine soliton-like solutions propagating in the sample of nonlinear dielectric medium.
high energy physics theory
Joint optimization of a multi-channel front-end and automatic speech recognition (ASR) has attracted much interest. While promising results have been reported for various tasks, past studies of its application to meeting transcription were limited to small-scale experiments. It is still unclear whether such a joint framework can be beneficial for a more practical setup where a massive amount of single-channel training data can be leveraged for building a strong ASR back-end. In this work, we present our investigation of the joint modeling of a mask-based beamformer and Attention-Encoder-Decoder-based ASR in a setting where we have 75k hours of single-channel data and a relatively small amount of real multi-channel data for model training. We explore effective training procedures, including a comparison of simulated and real multi-channel training data. To guide recognition towards a target speaker and deal with overlapped speech, we also explore various combinations of bias information, such as directions of arrival and speaker profiles. We propose an effective location-bias integration method called deep concatenation for the beamformer network. In our evaluation on various meeting recordings, we show that the proposed framework achieves a substantial word error rate reduction.
electrical engineering and systems science
The estimation of more than one parameter in quantum mechanics is a fundamental problem with relevant practical applications. In fact, the ultimate limits on the achievable estimation precision are linked with the non-commutativity of different observables, a peculiar property of quantum mechanics. We here consider several estimation problems for qubit systems and evaluate the corresponding quantumness R, a measure that has been recently introduced to quantify how incompatible the parameters to be estimated are. In particular, R is an upper bound for the renormalized difference between the (asymptotically achievable) Holevo bound and the SLD Cram\'er-Rao bound (i.e., the matrix generalization of the single-parameter quantum Cram\'er-Rao bound). For all the estimation problems considered, we evaluate the quantumness R and, in order to better understand its usefulness in characterizing a multiparameter quantum statistical model, we compare it with the renormalized difference between the Holevo and the SLD bound. Our results give evidence that R is a useful quantity to characterize multiparameter estimation problems, as for several quantum statistical models it is equal to the difference between the bounds and, in general, their behaviours qualitatively coincide. On the other hand, we also find evidence that for certain quantum statistical models the bound is not tight, and thus R may overestimate the degree of quantum incompatibility between parameters.
quantum physics
Introduced by Breiman, Random Forests are widely used classification and regression algorithms. While initially designed as batch algorithms, several variants have been proposed to handle online learning. One particular instance of such forests is the \emph{Mondrian Forest}, whose trees are built using the so-called Mondrian process, which allows their construction to be easily updated in a streaming fashion. In this paper, we provide a thorough theoretical study of Mondrian Forests in a batch learning setting, based on new results about Mondrian partitions. Our results include consistency and convergence rates for Mondrian Trees and Forests, which turn out to be minimax optimal on the set of $s$-H\"older functions with $s \in (0,1]$ (for trees and forests) and $s \in (1,2]$ (for forests only), assuming a proper tuning of their complexity parameter in both cases. Furthermore, we prove that an adaptive procedure (to the unknown $s \in (0, 2]$) can be constructed by combining Mondrian Forests with a standard model aggregation algorithm. These results are the first to demonstrate that some particular random forests achieve minimax rates \textit{in arbitrary dimension}. Owing to their remarkably simple distributional properties, which lead to minimax rates, Mondrian trees are a promising basis for more sophisticated yet theoretically sound random forest variants.
statistics
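The Mondrian process underlying these forests is simple enough to sample directly. The following Python sketch generates one Mondrian partition of a box with a given lifetime (complexity) parameter, the quantity whose tuning drives the minimax rates discussed above; it is a generic textbook construction, not the authors' code.

```python
import numpy as np

def sample_mondrian(box, budget, rng, t=0.0):
    """Recursively sample a Mondrian partition of an axis-aligned box.

    `box` is a list of (low, high) pairs; `budget` is the lifetime parameter
    controlling tree complexity. Returns the list of leaf cells.
    """
    lengths = np.array([hi - lo for lo, hi in box])
    t_split = t + rng.exponential(1.0 / lengths.sum())  # split-time proposal
    if t_split > budget:
        return [box]                                    # cell survives: a leaf
    dim = rng.choice(len(box), p=lengths / lengths.sum())  # side-length weighted
    lo, hi = box[dim]
    cut = rng.uniform(lo, hi)                           # uniform cut location
    left = box.copy(); left[dim] = (lo, cut)
    right = box.copy(); right[dim] = (cut, hi)
    return (sample_mondrian(left, budget, rng, t_split)
            + sample_mondrian(right, budget, rng, t_split))

rng = np.random.default_rng(0)
leaves = sample_mondrian([(0.0, 1.0), (0.0, 1.0)], budget=3.0, rng=rng)
print(len(leaves), "cells")
```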
In complex simulation environments, certain parameter-space regions may result in non-convergent or unphysical outcomes. All parameters can therefore be labeled with a binary class describing whether or not they lead to valid results. In general, it can be very difficult to determine feasible parameter regions, especially without prior knowledge. We propose a novel algorithm to explore such an unknown parameter space and improve its feasibility classification in an iterative way. Moreover, we include an additional optimization target in the algorithm to guide the exploration towards regions of interest and to improve the classification therein. In our method we make use of well-established concepts from the field of machine learning, such as kernel support vector machines and kernel ridge regression. A comparison with a Kriging-based exploration approach, based on recently published results, shows the advantages of our algorithm in a binary feasibility classification scenario with a discrete feasibility constraint violation. In this context, we also propose an improvement of the Kriging-based exploration approach. We apply our novel method to a fully realistic, industrially relevant chemical process simulation to demonstrate its practical usability, and find a comparably good approximation of the data-space topology from relatively few data points.
statistics
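A stripped-down version of such an iterative feasibility exploration can be assembled from standard scikit-learn pieces. In the Python sketch below, a kernel SVM is refit after each new sample and the next point is chosen where the classifier is least certain; the black-box feasibility function is invented, and the paper's additional kernel-ridge optimization target is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def is_feasible(x):
    """Invented black-box simulation: feasible where a constraint holds."""
    return float(np.sin(3 * x[0]) + x[1] > 0.5)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))        # initial space-filling sample
y = np.array([is_feasible(x) for x in X])   # assumed to contain both classes

for it in range(10):
    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    cand = rng.uniform(-1, 1, size=(500, 2))
    # Acquisition: sample where the classifier is least certain, i.e. near
    # the current estimate of the feasibility boundary.
    p = clf.predict_proba(cand)[:, 1]
    x_new = cand[np.argmin(np.abs(p - 0.5))]
    X = np.vstack([X, x_new])
    y = np.append(y, is_feasible(x_new))

print("estimated feasible fraction:", clf.predict(cand).mean())
```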
Using recently developed Seifert fibering operators for 3D $\mathcal{N} = 2$ gauge theories, we formulate the necessary ingredients for a state-integral model of the topological quantum field theory dual to a given Seifert manifold under the 3D-3D correspondence, focusing on the case of Seifert homology spheres with positive orbifold Euler characteristic. We further exhibit a set of difference operators that annihilate the wavefunctions of this TQFT on hyperbolic three-manifolds, generalizing similar constructions for lens space partition functions and holomorphic blocks. These properties offer intriguing clues as to the structure of the underlying TQFT.
high energy physics theory
$b\to s\tau^+\tau^-$ measurements are highly motivated for addressing lepton-flavor-universality (LFU)-violating puzzles such as the $R_{K^{(\ast)}}$ anomalies. The anomalies in $R_{D^{(*)}}$ and $R_{J/\psi}$ further strengthen their necessity and importance, given that the LFU-violating hints from both involve the third-generation leptons directly. $Z$ factories at future $e^-e^+$ colliders are well positioned to conduct such measurements because of the relatively high production rates and reconstruction efficiencies for $B$ mesons at the $Z$ pole. To fully explore this potential, we pursue a dedicated sensitivity study in four $b\to s\tau^+\tau^-$ benchmark channels, namely $B^0\to K^{\ast 0} \tau^+ \tau^-$, $B_s\to\phi \tau^+ \tau^-$, $B^+ \to K^+ \tau^+ \tau^- $ and $B_s \to \tau^+ \tau^-$, at future $Z$ factories. We develop a fully tracker-based scheme for reconstructing the signal $B$ mesons and introduce a semi-quantitative method for estimating their major backgrounds. The simulations indicate that the branching ratios of the first three channels can be measured with a precision $\sim \mathcal O(10^{-7} - 10^{-6})$, and that of $B_s \to \tau^+ \tau^-$ with a precision $\sim \mathcal O(10^{-5})$, at Tera-$Z$. The impacts of luminosity and tracker resolution on the expected sensitivities are explored. Interpretations of these results in effective field theory are also presented.
high energy physics phenomenology
We show that the spin-orbit coupling (SOC) in $\alpha$-MnTe impacts the transport behavior by generating an anisotropic valence-band splitting, resulting in four spin-polarized pockets near $\Gamma$. A minimal $k\cdot p$ model is constructed to capture this splitting using group-theory analysis, a tight-binding model, and ab initio calculations. The model is shown to describe the rotational symmetry of the zero-field planar Hall effect (PHE). The upper limit of the PHE percentage is shown to be fundamentally determined by the band shape, and is quantitatively estimated from first principles to be roughly 31%.
condensed matter
This study models cross-national attitudes towards immigrants in East and Southeast Asia as a signed and weighted bipartite network of countries and evaluative reactions to a variety of political issues, or determinants. This network is then projected into two one-mode networks, one of countries and one of determinants, and community detection methods are applied. The paper aims to fill two deficiencies in the current research on attitudes towards immigrants: 1) the lack of cross-national studies in Asia, a region where migration is growing, and 2) the tendency of researchers to treat determinants as uncorrelated, despite the interdependent nature of evaluative reactions. The results show that the nine countries in the sample form a cohesive clique, showing greater similarities than differences in the determinants of their attitudes. A blockmodeling approach was employed to identify eight determinants of attitudes towards immigrants, namely views on independence and social dependencies, group identities, absolute or relative moral orientation, attitudes towards democracy, science and technology, prejudice and stigma, and two determinants related to religion. However, the findings of this survey yielded some surprising results when compared with the literature. First, education was not found to be a significant determinant of attitudes towards immigrants, despite its strong and consistent predictive power in European models. Second, prejudice appears to be mediated in part by religion, especially religious identification and belief in God. Group identity and prejudice also appear to be related, though only weakly. Finally, anxiety appears in clusters related to social norms, suggesting that fears regarding immigrants relate closely to expectations of others' behavior.
physics
6D object pose estimation is an important task that determines the 3D position and 3D rotation of an object in camera-centred coordinates. Solutions to this task enable promising approaches to various problems in scene understanding, augmented reality, and the control and navigation of robots. Recent developments in visual depth sensors and the low-cost availability of depth data have significantly facilitated object pose estimation. Using depth information from RGB-D sensors, substantial progress has been made in the last decade by methods addressing challenges such as viewpoint variability, occlusion and clutter, and similar-looking distractors. Particularly, with the recent advent of convolutional neural networks, RGB-only solutions have been presented. However, improved results have only been reported for recovering the pose of known instances, i.e., for instance-level object pose estimation tasks. More recently, state-of-the-art approaches target the object pose estimation problem at the level of categories, recovering the 6D pose of unknown instances. To this end, they address the challenges of category-level tasks such as distribution shift between source and target domains, high intra-class variations, and shape discrepancies between objects.
computer science
We propose a framework to construct a "Domain-Wall Standard Model" in a non-compact 5-dimensional space-time, where all the Standard Model (SM) fields are localized in certain domains of the 5th dimension and the SM is realized as a 4-dimensional effective theory without any compactification of the 5th dimension. In this context, we investigate the collider phenomenology of the Kaluza-Klein (KK) modes of the SM gauge bosons and the current constraints from the search for a new gauge-boson resonance at LHC Run 2. The couplings of the SM fermions with the KK-mode gauge bosons depend on the configuration of the SM fermions in the 5-dimensional bulk. This "geometry" of the model can be tested at future Large Hadron Collider experiments, once a KK mode of an SM gauge boson is discovered.
high energy physics phenomenology
We examine a model of two interacting populations of phase oscillators labelled `Blue' and `Red'. To this we apply tempered stable L\'{e}vy noise, a generalisation of Gaussian noise where the heaviness of the tails, parametrised by a power-law exponent $\alpha$, can be controlled by a tempering parameter $\lambda$. This system models competitive dynamics, where each population seeks both internal phase synchronisation and a phase advantage with respect to the other population, subject to exogenous stochastic shocks. We study the system from an analytic and numerical point of view to understand how the phase-lag values and the shape of the noise distribution can lead to steady or noisy behaviour. Comparing the analytic and numerical studies shows that the bulk behaviour of the system can be effectively described by dynamics in the presence of tilted ratchet potentials. Generally, changes in $\alpha$ away from the Gaussian noise limit, $1< \alpha < 2$, disrupt the locking between Blue and Red, while increasing $\lambda$ acts to restore it. However, we observe that with further decreases of $\alpha$ to small values, $\alpha\ll 1$, with $\lambda\neq 0$, locking between Blue and Red may be restored. This is seen analytically in a restoration of metastability through the ratchet mechanism, and numerically in transitions between periodic and noisy regions in a fitness landscape using a measure of noise. This non-monotonic transition back to an ordered regime is surprising for a linear variation of a parameter such as the power-law exponent and provides a novel mechanism for guiding the collective behaviour of such a complex competitive dynamical system.
condensed matter
A new dark sector consisting of a pure non-abelian gauge theory has no renormalizable interaction with SM particles, and can thereby realise gravitational Dark Matter (DM). Gauge interactions confine at a scale $\Lambda_{\rm DM}$ giving bound states with typical lifetimes $\tau \sim M_{\rm Pl}^4/\Lambda^5_{\rm DM}$ that can be DM candidates if $\Lambda_{\rm DM} $ is below 100 TeV. Furthermore, accidental symmetries of group-theoretical nature produce special gravitationally stable bound states. In the presence of generic Planck-suppressed operators such states become long-lived: SU$(N)$ gauge theories contain bound states with $\tau \sim M_{\rm Pl}^8/\Lambda^9_{\rm DM}$; even longer lifetimes $\tau= (M_{\rm Pl}/\Lambda_{\rm DM})^{2N-4}/\Lambda_{\rm DM}$ arise from SO$(N)$ theories with $N \ge 8$, and possibly from $F_4$ or $E_8$. We compute their relic abundance generated by gravitational freeze-in and by inflationary fluctuations, finding that they can be viable DM candidates for $\Lambda_{\rm DM} \gtrsim 10^{10}$ GeV.
high energy physics phenomenology
In many real-world tasks, multiple agents must learn to coordinate with each other given their private observations and limited communication ability. Deep multiagent reinforcement learning (Deep-MARL) algorithms have shown superior performance in such challenging settings. One representative class of work is multiagent value decomposition, which decomposes the global shared multiagent Q-value $Q_{tot}$ into individual Q-values $Q^{i}$ to guide individuals' behaviors: VDN imposes an additive form, while QMIX adopts a monotonicity assumption using an implicit mixing method. However, most previous efforts impose certain assumptions between $Q_{tot}$ and $Q^{i}$ and lack theoretical grounding. Besides, they do not explicitly consider the agent-level impact of individuals on the whole system when transforming individual $Q^{i}$s into $Q_{tot}$. In this paper, we theoretically derive a general formula for $Q_{tot}$ in terms of $Q^{i}$, based on which we can naturally implement a multi-head attention formulation to approximate $Q_{tot}$, resulting not only in a refined representation of $Q_{tot}$ with an agent-level attention mechanism, but also in a tractable maximization algorithm for decentralized policies. Extensive experiments demonstrate that our method outperforms state-of-the-art MARL methods on the widely adopted StarCraft benchmark across different scenarios, and attention analysis is further conducted, yielding valuable insights.
computer science
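A minimal PyTorch sketch of an attention-based mixing network may clarify the construction: the global state produces per-head attention weights over agents, and $Q_{tot}$ is an attention-weighted combination of the individual $Q^{i}$s. Since softmax weights are nonnegative, this $Q_{tot}$ is monotone in each $Q^{i}$. All layer sizes and the head count are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionMixer(nn.Module):
    """Sketch of a mixing network: Q_tot as an attention-weighted combination
    of individual Q-values, conditioned on the global state."""

    def __init__(self, n_agents, state_dim, n_heads=4, key_dim=32):
        super().__init__()
        self.query = nn.Linear(state_dim, n_heads * key_dim)
        self.keys = nn.Linear(state_dim, n_agents * n_heads * key_dim)
        self.n_agents, self.n_heads, self.key_dim = n_agents, n_heads, key_dim

    def forward(self, q_i, state):
        # q_i: (B, n_agents) individual Q-values; state: (B, state_dim).
        B = state.shape[0]
        q = self.query(state).view(B, self.n_heads, 1, self.key_dim)
        k = self.keys(state).view(B, self.n_heads, self.n_agents, self.key_dim)
        # Per-head attention over agents: how much each Q^i matters to Q_tot.
        attn = torch.softmax((q * k).sum(-1) / self.key_dim ** 0.5, dim=-1)
        return (attn * q_i.unsqueeze(1)).sum(-1).mean(-1)   # (B,) Q_tot

mixer = AttentionMixer(n_agents=3, state_dim=16)
q_tot = mixer(torch.randn(8, 3), torch.randn(8, 16))        # toy forward pass
```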
Predictive hydrological uncertainty can be quantified by using ensemble methods. If properly formulated, these methods can offer improved predictive performance by combining multiple predictions. In this work, we use 50-year-long monthly time series observed in 270 catchments in the United States to explore the performance of an ensemble-learning post-processing methodology for issuing probabilistic hydrological predictions. This methodology allows the utilization of flexible quantile regression models for exploiting information about the hydrological model's error. Its key differences with respect to basic two-stage hydrological post-processing methodologies using the same type of regression models are that (a) instead of a single point hydrological prediction it generates a large number of "sister predictions" (yet using a single hydrological model), and that (b) it relies on the concept of combining probabilistic predictions via simple quantile averaging. A major hydrological modelling challenge is obtaining probabilistic predictions that are simultaneously reliable and associated with prediction bands that are as narrow as possible; therefore, we assess both of these desired properties by computing the coverage probabilities, average widths, and average interval scores of the predictions. The results confirm the usefulness of the proposed methodology and its greater robustness with respect to basic two-stage post-processing methodologies. Finally, this methodology is empirically shown to harness the "wisdom of the crowd" in terms of average interval score: the average of the individual predictions combined by this methodology scores no worse -- and usually better -- than the average of the scores of the individual predictions.
statistics
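The combination step of the methodology, simple quantile averaging over "sister predictions", is easy to sketch. In the Python toy below, bootstrap-refit gradient-boosting quantile regressors stand in for the sister predictions (which in the paper come from a single hydrological model), and the predictive quantiles are averaged across them.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 400).reshape(-1, 1)
y = np.sin(x).ravel() + 0.3 * rng.standard_normal(400)  # synthetic "observations"

quantiles = [0.05, 0.5, 0.95]
n_sisters = 7
preds = np.zeros((n_sisters, len(quantiles), len(x)))
for s in range(n_sisters):
    idx = rng.integers(0, len(x), len(x))   # bootstrap stand-in for a sister
    for j, q in enumerate(quantiles):
        model = GradientBoostingRegressor(loss="quantile", alpha=q,
                                          n_estimators=100, random_state=s)
        preds[s, j] = model.fit(x[idx], y[idx]).predict(x)

# Simple quantile averaging across the sister predictions.
combined = preds.mean(axis=0)   # rows: 5%, 50%, 95% predictive quantiles
```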
We consider a long diffusive Josephson junction where the weak link is a thin normal metal (N) - ferromagnetic (F) bilayer (N and F form parallel links between the superconductors (S)). We show that superconductivity in the weak link can be described by an effective one-dimensional Usadel equation containing a "diluted" exchange field as well as a weak depairing term that is caused by the inherent inhomogeneity of the bilayer. The depairing mechanism distinguishes the S(N/F)S system from an SFS junction and affects the density of states of the S(N/F)S junction. It results in the suppression of the minigap in the spin-resolved density of states. The depairing rate and the minigap are expressed in terms of geometrical parameters, the Thouless energy and the effective exchange field. The effective one-dimensional theory can be applied to various structures with thin inhomogeneous links and shows good agreement with numerical solutions of the original two-dimensional equations. We also discuss ways to reveal the predicted effect experimentally.
condensed matter
Disc-driven planet migration is integral to the formation of planetary systems. In standard, gas-dominated protoplanetary discs, low-mass planets or planetary cores undergo rapid inwards migration and are lost to the central star. However, several recent studies indicate that the solid component in protoplanetary discs can have a significant dynamical effect on disc-planet interaction, especially when the solid-to-gas mass ratio approaches unity or larger and the dust-on-gas drag forces become significant. As there are several ways to raise the solid abundance in protoplanetary discs, for example through disc winds and dust-trapping in pressure bumps, it is important to understand how planets migrate through a dusty environment. To this end, we study planet migration in dust-rich discs via a systematic set of high-resolution, two-dimensional numerical simulations. We show that the inwards migration of low-mass planets can be slowed down by dusty dynamical corotation torques. We also identify a new regime of stochastic migration applicable to discs with dust-to-gas mass ratios $\gtrsim 0.3$ and particle Stokes numbers $\gtrsim 0.03$. In these cases, disc-planet interaction leads to the continuous development of small-scale, intense dust vortices that scatter the planet, which can potentially halt or even reverse the inwards planet migration. We briefly discuss the observational implications of our results and highlight directions for future work.
astrophysics
A new algorithm for 3D localization in multiplatform radar networks, comprising one transmitter and multiple receivers, is proposed. To take advantage of the monostatic sensor radiation pattern features, ad-hoc constraints are imposed in the target localization process. The localization problem is therefore formulated as a non-convex constrained Least Squares (LS) optimization problem which is globally solved in a quasi-closed form leveraging Karush-Kuhn-Tucker (KKT) conditions. The performance of the new algorithm is assessed in terms of Root Mean Square Error (RMSE) in comparison with the benchmark Cram\'er-Rao Lower Bound (CRLB) and some competitors from the open literature. The results corroborate the effectiveness of the new strategy, which is capable of ensuring a lower RMSE than the competing methodologies, especially in the low Signal-to-Noise Ratio (SNR) regime.
electrical engineering and systems science
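For orientation, the unconstrained core of this localization problem is an ordinary nonlinear least-squares fit of bistatic ranges, sketched below in Python with an invented geometry. The paper's contribution lies in adding the radiation-pattern constraints and solving the resulting constrained problem in quasi-closed form via the KKT conditions, which this sketch does not do.

```python
import numpy as np
from scipy.optimize import least_squares

# Invented geometry (metres): one transmitter, four receivers.
tx = np.array([0.0, 0.0, 0.0])
rx = np.array([[100, 0, 0], [0, 100, 0], [0, 0, 100], [80, 80, 10]], float)
target = np.array([40.0, 55.0, 20.0])

def bistatic_ranges(p):
    """Transmitter-target-receiver path length for each receiver."""
    return np.linalg.norm(p - tx) + np.linalg.norm(p - rx, axis=1)

rng = np.random.default_rng(0)
meas = bistatic_ranges(target) + 0.5 * rng.standard_normal(len(rx))

# Unconstrained nonlinear LS estimate of the target position.
sol = least_squares(lambda p: bistatic_ranges(p) - meas,
                    x0=np.array([50.0, 50.0, 50.0]))
print("estimated position:", sol.x)
```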
We propose a flavour theory of leptons implementing an $A_4$ family symmetry. Our scheme provides a simple way to derive trimaximal neutrino mixing from first principles, leading to simple and testable predictions for neutrino mixing and CP violation. Dark matter mediates neutrino mass generation, as in the simplest scotogenic model.
high energy physics phenomenology
Motivation: Elastic net regression is a form of penalized regression that lies between ridge and least absolute shrinkage and selection operator (LASSO) regression. The elastic net penalty is a powerful tool for controlling the impact of correlated predictors and the overall complexity of generalized linear regression models. The elastic net penalty has two tuning parameters: ${\lambda}$ for the complexity and ${\alpha}$ for the compromise between LASSO and ridge. The R package glmnet provides efficient tools for fitting elastic net models and selecting ${\lambda}$ for a given ${\alpha}$. However, glmnet does not simultaneously search the ${\lambda} - {\alpha}$ space for the optimal elastic net model. Results: We built the R package ensr, elastic net searcher. ensr extends the functionality of glmnet to search the ${\lambda} - {\alpha}$ space and identify an optimal ${\lambda} - {\alpha}$ pair. Availability: ensr is available from the Comprehensive R Archive Network at https://cran.r-project.org/package=ensr
statistics
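For readers outside R, the joint ${\lambda}-{\alpha}$ search that ensr automates can be approximated with scikit-learn, bearing in mind the naming swap: sklearn's `alpha` is glmnet's ${\lambda}$ (overall penalty strength) and its `l1_ratio` is glmnet's ${\alpha}$ (the LASSO/ridge compromise). The Python sketch below cross-validates over a grid of both; the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Synthetic sparse regression problem: 5 informative predictors out of 50.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = np.zeros(50); beta[:5] = [3, -2, 1.5, 1, -1]
y = X @ beta + rng.standard_normal(200)

# Cross-validate jointly over l1_ratio (glmnet alpha) and alpha (glmnet lambda).
model = ElasticNetCV(l1_ratio=np.linspace(0.05, 1.0, 20), cv=5).fit(X, y)
print("best l1_ratio (glmnet alpha):", model.l1_ratio_)
print("best alpha (glmnet lambda):", model.alpha_)
```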
Early detection of breast cancer through screening mammography yields a 20-35% increase in survival rate; however, there are not enough radiologists to serve the growing population of women seeking screening mammography. Although commercial computer-aided detection (CADe) software has been available to radiologists for decades, it has failed to improve the interpretation of full-field digital mammography (FFDM) images due to its low sensitivity over the spectrum of findings. In this work, we leverage a large set of FFDM images with loose bounding boxes of mammographically significant findings to train a deep learning detector with extreme sensitivity. Building upon the Hourglass architecture, we train a model that produces segmentation-like images with high spatial resolution, with the aim of producing 2D Gaussian blobs centered on ground-truth boxes. We replace the pixel-wise $L_2$ norm with a weak-supervision loss designed to achieve high sensitivity, asymmetrically penalizing false positives and false negatives while softening the noise of the loose bounding boxes by permitting a tolerance in misaligned predictions. The resulting system achieves a sensitivity for malignant findings of 0.99 with only 4.8 false-positive markers per image. When utilized in a CADe system, this model could enable a novel workflow in which radiologists can confidently focus their attention on only the locations proposed by the model, expediting the interpretation process and bringing attention to potential findings that could otherwise be missed. Due to its nearly perfect sensitivity, the proposed detector can also be used as a high-performance proposal generator in two-stage detection systems.
computer science
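The loss described above can be sketched as follows; the weights, the tolerance mechanism (a max-pool dilation), and the exact form are illustrative guesses at the ingredients named in the abstract, not the authors' implementation.

```python
import torch

def asymmetric_blob_loss(pred, target, w_fn=5.0, w_fp=1.0, tol=2):
    """Illustrative weak-supervision loss for (B, 1, H, W) heatmaps.

    False negatives (target blob present, no nearby prediction) are weighted
    more heavily than false positives, and a max-pool dilation of the
    prediction forgives blobs within `tol` pixels of the target centre.
    All weights and the dilation trick are assumptions.
    """
    dilated = torch.nn.functional.max_pool2d(
        pred, kernel_size=2 * tol + 1, stride=1, padding=tol)
    fn = (target - dilated).clamp(min=0) ** 2 * w_fn  # missed findings
    fp = (pred - target).clamp(min=0) ** 2 * w_fp     # spurious responses
    return (fn + fp).mean()

# Toy usage with random tensors standing in for model output and blob targets.
loss = asymmetric_blob_loss(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```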
Recent developments in matrix-product-state (MPS) investigations of many-body localization (MBL) are reviewed, with a discussion of benefits and limitations of the method. This approach allows one to explore the physics around the MBL transition in systems much larger than those accessible to exact diagonalization. System sizes and length scales that can be controllably accessed by the MPS approach are comparable to those studied in state-of-the-art experiments. Results for 1D, quasi-1D, and 2D random systems, as well as 1D quasi-periodic systems are presented. On time scales explored (up to $t \approx 300$ in units set by the hopping amplitude), a slow, subdiffusive transport in a rather broad disorder range on the ergodic side of the MBL transition is found. For 1D random spin chains, which serve as a "standard model" of the MBL transition, the MPS study demonstrates a substantial drift of the critical point $W_c(L)$ with the system size $L$: while for $L \approx 20$ we find $W_c \approx 4$, as also given by exact diagonalization, the MPS results for $L = 50$--100 provide evidence that the critical disorder saturates, in the large-$L$ limit, at $W_c \approx 5.5$. For quasi-periodic systems, these finite-size effects are much weaker, which suggests that they can be largely attributed to rare events. For quasi-1D ($d\times L$, with $d \ll L$) and 2D ($L\times L$) random systems, the MPS data demonstrate an unbounded growth of $W_c$ in the limit of large $d$ and $L$, in agreement with analytical predictions based on the rare-event avalanche theory.
condensed matter
We present a stack model for breaking down the complexity of entanglement-based quantum networks. More specifically, we focus on the structures and architectures of quantum networks, not on concrete physical implementations of network elements. We construct the quantum network stack in a hierarchical manner comprising several layers, similar to the classical network stack, and identify quantum networking devices operating on each of these layers. The layers' responsibilities range from establishing point-to-point connectivity, over intra-network graph-state generation, to inter-network routing of entanglement. In addition, we propose several protocols operating on these layers. In particular, we extend the existing intra-network protocols for generating arbitrary graph states to ensure reliability inside a quantum network, where reliability refers to the capability to compensate for device failures. Furthermore, we propose a routing protocol for quantum routers which enables the generation of arbitrary graph states across network boundaries. This protocol, in correspondence with classical routing protocols, can dynamically compensate for failures of routers, or even complete networks, by simply re-routing the given entanglement over alternative paths. We also consider how to connect quantum routers in a hierarchical manner to reduce complexity, as well as the reliability issues arising in connecting these quantum networking devices.
quantum physics
This is a white paper in response to the National Academy of Sciences "Exoplanet Science Strategy" call. We summarize recent advances in theoretical habitability studies and argue that such studies will remain important for guiding and interpreting observations. Interactions between 1-D and 3-D climate modelers will be necessary to resolve recent discrepancies in model results and improve habitability studies. Observational capabilities will also need improvement. Although basic observations can be performed with present capabilities, technological advances will be necessary to improve climate models to the level needed for planetary habitability studies.
astrophysics
We derive for the first time an effective neutrino evolution Hamiltonian accounting for neutrino interactions with an external magnetic field due to the neutrino charge radii and anapole moments. The results are interesting for possible applications in astrophysics.
high energy physics phenomenology