text | label
---|---|
We show that the dynamics of the kinematic space of a 2-dimensional CFT is gravitational and described by Jackiw-Teitelboim theory. We discuss the first law of this 2-dimensional dilaton gravity theory to support the relation between modular Hamiltonian and dilaton that underlies the kinematic space construction. It is further argued that Jackiw-Teitelboim gravity can be derived from a 2-dimensional version of Jacobson's maximal vacuum entanglement hypothesis. Applied to the kinematic space context, this leads us to the statement that the kinematic space of a 2-dimensional boundary CFT can be obtained from coupling the boundary CFT to JT gravity through a maximal vacuum entanglement principle.
|
high energy physics theory
|
Federated Learning (FL) is a distributed machine learning paradigm where data is decentralized among clients who collaboratively train a model in a computation process coordinated by a central server. By assigning a weight to each client based on the proportion of data instances it possesses, the rate of convergence to an accurate joint model can be greatly accelerated. Some previous works studied FL in a Byzantine setting, where a fraction of the clients may send the server arbitrary or even malicious information regarding their model. However, these works either ignore the issue of data unbalancedness altogether or assume that client weights are known to the server a priori, whereas, in practice, it is likely that weights will be reported to the server by the clients themselves and therefore cannot be relied upon. We address this issue for the first time by proposing a practical weight-truncation-based preprocessing method and demonstrating empirically that it is able to strike a good balance between model quality and Byzantine-robustness. We also establish analytically that our method can be applied to a randomly-selected sample of client weights.
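To make the preprocessing concrete, here is a minimal numpy sketch of a weight-truncation step of the kind described above; the median-multiple cap rule and the toy numbers are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def truncate_weights(reported, cap_factor=2.0):
    """Clip self-reported client weights at cap_factor times the median,
    then renormalize. A Byzantine client inflating its weight could
    otherwise dominate the weighted average; truncation bounds the
    influence of any single report. The cap rule here is illustrative."""
    w = np.asarray(reported, dtype=float)
    w = np.minimum(w, cap_factor * np.median(w))
    return w / w.sum()

def aggregate(client_updates, weights):
    """Weighted average of client model updates (rows = clients)."""
    return np.average(client_updates, axis=0, weights=weights)

# toy example: one malicious client reports a hugely inflated weight
reported = [100, 120, 90, 110, 10_000]
updates = np.random.randn(5, 4)   # stand-in for model deltas
w = truncate_weights(reported)
print(w)                          # the malicious weight is capped
print(aggregate(updates, w))
```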
|
computer science
|
Until recently little was known about the high-dimensional operators of the standard model effective field theory (SMEFT). However, in the past few years the number of these operators has been counted up to mass dimension 15 using techniques involving the Hilbert series. In this work I will show how to perform the same counting with a different method. This alternative approach makes it possible to cross-check results (it confirms the SMEFT numbers), but it also provides some more information on the operators beyond just counting their number. The considerations made here apply equally well to any other model besides SMEFT and, with this purpose in mind, they were implemented in a computer code.
|
high energy physics phenomenology
|
Gaussian processes (GPs) serve as flexible surrogates for complex surfaces, but buckle under the cubic cost of matrix decompositions with big training data sizes. Geospatial and machine learning communities suggest pseudo-inputs, or inducing points, as one strategy to obtain an approximation easing that computational burden. However, we show how placement of inducing points and their multitude can be thwarted by pathologies, especially in large-scale dynamic response surface modeling tasks. As remedy, we suggest porting the inducing point idea, which is usually applied globally, over to a more local context where selection is both easier and faster. In this way, our proposed methodology hybridizes global inducing point and data subset-based local GP approximation. A cascade of strategies for planning the selection of local inducing points is provided, and comparisons are drawn to related methodology with emphasis on computer surrogate modeling applications. We show that local inducing points extend their global and data-subset component parts on the accuracy--computational efficiency frontier. Illustrative examples are provided on benchmark data and a large-scale real-simulation satellite drag interpolation problem.
|
statistics
|
We provide conceptual proofs of the two most fundamental theorems concerning topological games and open covers: Hurewicz's Theorem concerning the Menger game, and Pawlikowski's Theorem concerning the Rothberger game.
|
mathematics
|
We explore different design choices for injecting noise into generative adversarial networks (GANs) with the goal of disentangling the latent space. Instead of traditional approaches, we propose feeding multiple noise codes through separate fully-connected layers. The aim is to restrict the influence of each noise code to specific parts of the generated image. We show that disentanglement in the first layer of the generator network leads to disentanglement in the generated image. Through a grid-based structure, we achieve several aspects of disentanglement without complicating the network architecture and without requiring labels. We achieve spatial disentanglement, scale-space disentanglement, and disentanglement of the foreground object from the background style, allowing fine-grained control over the generated images. Examples include changing facial expressions in face images, changing beak length in bird images, and changing car dimensions in car images. This empirically leads to better disentanglement scores than state-of-the-art methods on the FFHQ dataset.
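As a sketch of the input structure described above (separate fully-connected layers per noise code, each code confined to its own region of the first feature map), here is a minimal PyTorch stand-in with two codes and a two-cell split; the sizes and the half-split are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class TwoCodeInput(nn.Module):
    """First generator layer fed by two noise codes through separate
    fully-connected layers, each code writing to its own spatial half
    of the initial feature map (a simple stand-in for the grid-based
    structure described above)."""
    def __init__(self, z_dim=64, ch=128, h=4, w=4):
        super().__init__()
        self.h, self.w, self.ch = h, w, ch
        # one FC layer per noise code; each produces half of the 4x4 grid
        self.fc_left = nn.Linear(z_dim, ch * h * (w // 2))
        self.fc_right = nn.Linear(z_dim, ch * h * (w // 2))

    def forward(self, z1, z2):
        left = self.fc_left(z1).view(-1, self.ch, self.h, self.w // 2)
        right = self.fc_right(z2).view(-1, self.ch, self.h, self.w // 2)
        return torch.cat([left, right], dim=3)  # (B, ch, 4, 4) feature map

z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
feat = TwoCodeInput()(z1, z2)  # fed into the rest of the generator
print(feat.shape)              # torch.Size([8, 128, 4, 4])
```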
|
computer science
|
We study the global influence of curvature on the free energy landscape of two-dimensional binary mixtures confined on closed surfaces. Starting from a generic effective free energy, constructed on the basis of symmetry considerations and conservation laws, we identify several model-independent phenomena, such as a curvature-dependent line tension and local shifts in the binodal concentrations. To shed light on the origin of the phenomenological parameters appearing in the effective free energy, we further construct a lattice-gas model of binary mixtures on non-trivial substrates, based on the curved-space generalization of the two-dimensional Ising model. This allows us to decompose the interaction between the local concentration of the mixture and the substrate curvature into four distinct contributions, as a result of which the phase diagram splits into critical sub-diagrams. The resulting free energy landscape can admit, as stable equilibria, strongly inhomogeneous mixed phases, which we refer to as antimixed states below the critical temperature. We corroborate our semi-analytical findings with phase-field numerical simulations on realistic curved lattices. Despite this work being primarily motivated by recent experimental observations of multi-component lipid vesicles supported by colloidal scaffolds, our results are applicable to any binary mixture confined on closed surfaces of arbitrary geometry.
|
condensed matter
|
Quantum Key Distribution (QKD) via satellite offers the possibility of unconditionally secure communications on a global scale. Increasing the secret key rate in such systems, via photonic engineering at the source, is a topic of much ongoing research. In this work we investigate the use of photon-added states and photon-subtracted states, derived from two-mode squeezed vacuum states, as examples of such photonic engineering. Specifically, we determine which engineered photonic state provides better QKD performance when implemented over channels connecting terrestrial receivers with Low-Earth-Orbit satellites. We quantify the impact of the number of photons added or subtracted, and highlight the role played by the adopted model for atmospheric turbulence and loss on the predicted key rates. Our results are presented in terms of the complexity of deployment used, with the simplest deployments ignoring any estimate of the channel, and the more sophisticated deployments involving a feedback loop that is used to optimize the key rate for each channel estimation. The optimal quantum state is identified for each deployment scenario investigated.
|
quantum physics
|
Deep learning classifiers for characterization of whole slide tissue morphology require large volumes of annotated data to learn variations across different tissue and cancer types. As is well known, manual generation of digital pathology training data is time consuming and expensive. In this paper, we propose a semi-automated method for annotating a group of similar instances at once, instead of collecting only per-instance manual annotations. This allows for a much larger training set, that reflects visual variability across multiple cancer types and thus training of a single network which can be automatically applied to each cancer type without human adjustment. We apply our method to the important task of classifying Tumor Infiltrating Lymphocytes (TILs) in H&E images. Prior approaches were trained for individual cancer types, with smaller training sets and human-in-the-loop threshold adjustment. We utilize these thresholded results as large scale "semi-automatic" annotations. Combined with existing manual annotations, our trained deep networks are able to automatically produce better TIL prediction results in 12 cancer types, compared to the human-in-the-loop approach.
|
electrical engineering and systems science
|
We consider a multi-layer network with two layers, $\mathcal{L}_{1}$, $\mathcal{L}_{2}$. Their intra-layer topology shows a scale-free degree distribution and a core-periphery structure. A nested structure describes the inter-layer topology, i.e., some nodes from $\mathcal{L}_{1}$, the generalists, have many links to nodes in $\mathcal{L}_{2}$, while specialists have only a few. This structure is verified by analyzing two empirical networks from ecology and economics. To probe the robustness of the multi-layer network, we remove nodes from $\mathcal{L}_{1}$ together with their inter- and intra-layer links and measure the impact on the size of the largest connected component, $F_{2}$, in $\mathcal{L}_{2}$, which we take as a robustness measure. We test different attack scenarios by preferentially removing peripheral or core nodes. We also vary the intra-layer coupling between generalists and specialists to study their impact on the robustness of the multi-layer network. We find that some combinations of attack scenario and intra-layer coupling lead to very low robustness values, whereas others demonstrate high robustness of the multi-layer network because of the intra-layer links. Our results shed new light on the robustness of bipartite networks, which consider only inter-layer, but no intra-layer, links.
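A minimal networkx sketch of this attack experiment, under simplifying assumptions (Barabási-Albert intra-layer topologies, a crude nested inter-layer coupling, and attacks that remove the highest- or lowest-degree $\mathcal{L}_{1}$ nodes):

```python
import networkx as nx

def f2_after_attack(G, layer2, removed):
    """Largest number of layer-2 nodes in any connected component after
    removing the given layer-1 nodes with all their links."""
    H = G.copy()
    H.remove_nodes_from(removed)
    l2 = set(layer2)
    return max((len(c & l2) for c in nx.connected_components(H)), default=0)

# toy two-layer network: scale-free intra-layer topologies, plus a crude
# nested inter-layer coupling (high-degree L1 "generalists" get many links)
L1 = nx.barabasi_albert_graph(200, 2, seed=1)
L2 = nx.barabasi_albert_graph(200, 2, seed=2)
G = nx.union(nx.relabel_nodes(L1, lambda n: ("L1", n)),
             nx.relabel_nodes(L2, lambda n: ("L2", n)))
ranked = [n for n, _ in sorted(L1.degree, key=lambda x: -x[1])]
for i, n in enumerate(ranked):
    for m in range(max(1, 20 - i // 10)):  # generalists link to many L2 nodes
        G.add_edge(("L1", n), ("L2", m))

layer2 = [v for v in G if v[0] == "L2"]
core_attack = [("L1", n) for n in ranked[:50]]     # remove core first
periph_attack = [("L1", n) for n in ranked[-50:]]  # remove periphery first
print("core attack:", f2_after_attack(G, layer2, core_attack))
print("peripheral attack:", f2_after_attack(G, layer2, periph_attack))
```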
|
physics
|
We study a surprising phenomenon in which Feynman integrals in $D=4-2\varepsilon$ space-time dimensions as $\varepsilon \to 0$ can be fully characterized by their behavior in the opposite limit, $\varepsilon \to \infty$. More concretely, we consider vector bundles of Feynman integrals over kinematic spaces, whose connections have a polynomial dependence on $\varepsilon$ and are known to be governed by intersection numbers of twisted forms. They give rise to differential equations that can be obtained exactly as a truncating expansion in either $\varepsilon$ or $1/\varepsilon$. We use the latter for explicit computations, which are performed by expanding intersection numbers in terms of Saito's higher residue pairings (previously used in the context of topological Landau-Ginzburg models and mirror symmetry). These pairings localize on critical points of a certain Morse function, which correspond to regions in the loop-momentum space that were previously thought to govern only the large-$D$ physics. The results of this work leverage recent understanding of an analogous situation for moduli spaces of curves, where the $\alpha' \to 0$ and $\alpha' \to \infty$ limits of intersection numbers coincide for scattering amplitudes of massless quantum field theories.
|
high energy physics theory
|
We discuss the Hard Dense Loop resummation at finite quark mass and evaluate the equation of state (EoS) of cold and dense QCD matter in $\beta$ equilibrium. The resummation in the quark sector has the effect of lowering the baryon number density, and the EoS turns out to have a much smaller uncertainty than the perturbative QCD estimate. Our numerical results favor smooth matching between the EoS from the resummed QCD calculation at high density and the extrapolated EoS from the nuclear matter density region. We also point out that the speed of sound in our EoS slightly exceeds the conformal limit.
|
high energy physics phenomenology
|
We derive an exact matrix product state representation of the Haldane-Rezayi state on both the cylinder and torus geometry. Our derivation is based on the description of the Haldane-Rezayi state as a correlator in a non-unitary logarithmic conformal field theory. This construction faithfully captures the ten degenerate ground states of this model state on the torus. Using the cylinder geometry, we probe the gapless nature of the phase by extracting the correlation length, which diverges in the thermodynamic limit. The numerically extracted topological entanglement entropies seem to only probe the Abelian part of the theory, which is reminiscent of the Gaffnian state, another model state deriving from a non-unitary conformal field theory.
|
condensed matter
|
This paper summarises the theory and functionality behind Questaal, an open-source suite of codes for calculating the electronic structure and related properties of materials from first principles. The formalism of the linearised muffin-tin orbital (LMTO) method is revisited in detail and developed further by the introduction of short-ranged tight-binding basis functions for full-potential calculations. The LMTO method is presented in both Green's function and wave function formulations for bulk and layered systems. The suite's full-potential LMTO code uses a sophisticated basis and augmentation method that allows an efficient and precise solution to the band problem at different levels of theory, most importantly density functional theory, LDA+U, quasi-particle self-consistent GW and combinations of these with dynamical mean field theory. This paper details the technical and theoretical bases of these methods, their implementation in Questaal, and provides an overview of the code's design and capabilities.
|
condensed matter
|
We leverage neural networks as universal approximators of monotonic functions to build a parameterization of conditional cumulative distribution functions (CDFs). By the application of automatic differentiation with respect to response variables and then to parameters of this CDF representation, we are able to build black box CDF and density estimators. A suite of families is introduced as alternative constructions for the multivariate case. At one extreme, the simplest construction is a competitive density estimator against state-of-the-art deep learning methods, although it does not provide an easily computable representation of multivariate CDFs. At the other extreme, we have a flexible construction from which multivariate CDF evaluations and marginalizations can be obtained by a simple forward pass in a deep neural net, but where the computation of the likelihood scales exponentially with dimensionality. Alternatives in between the extremes are discussed. We evaluate the different representations empirically on a variety of tasks involving tail area probabilities, tail dependence and (partial) density estimation.
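A minimal PyTorch sketch of the core idea, assuming monotonicity in the response is enforced by softplus-positive weights and the density is recovered by automatic differentiation; the architecture is illustrative, not the paper's exact construction:

```python
import torch
import torch.nn as nn

class MonotoneCDF(nn.Module):
    """Conditional CDF F(y|x) built from a network that is monotone in y.

    Monotonicity is enforced by passing y through layers whose weights are
    made positive with softplus; a sigmoid maps the output to (0, 1).
    This is a minimal sketch of the idea, not the paper's architecture."""
    def __init__(self, x_dim=3, hidden=32):
        super().__init__()
        self.embed_x = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
        # raw parameters; softplus makes the y-path weights positive
        self.w1 = nn.Parameter(torch.randn(hidden, 1) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, x, y):
        h = torch.tanh(y @ nn.functional.softplus(self.w1).T
                       + self.b1 + self.embed_x(x))
        return torch.sigmoid(h @ nn.functional.softplus(self.w2).T)

model = MonotoneCDF()
x = torch.randn(16, 3)
y = torch.randn(16, 1, requires_grad=True)
F = model(x, y)                        # CDF values in (0, 1)
# density = dF/dy via automatic differentiation, as described above
dens, = torch.autograd.grad(F.sum(), y, create_graph=True)
print(F.shape, bool((dens >= 0).all()))
```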
|
statistics
|
In this paper, we study a class of heterotic Landau-Ginzburg models. We show that the action can be written as a sum of BRST-exact and non-exact terms. The non-exact terms involve the pullback of the complexified Kahler form to the worldsheet and terms arising from the superpotential, which is a Grassmann-odd holomorphic function of the superfields. We then demonstrate that the action is invariant on-shell under supersymmetry transformations up to a total derivative. Finally, we extend the analysis to the case in which the superpotential is not holomorphic. In this case, we find that supersymmetry imposes a constraint which relates the nonholomorphic parameters of the superpotential to the Hermitian curvature. Various special cases of this constraint have previously been used to establish properties of Mathai-Quillen form analogues which arise in the corresponding heterotic Landau-Ginzburg models. There, it was claimed that supersymmetry imposes those constraints. Our goal in this paper is to support that claim. The analysis for the nonholomorphic case also reveals a constraint imposed by supersymmetry that we did not anticipate from studies of Mathai-Quillen form analogues.
|
high energy physics theory
|
The inflationary model proposed by Starobinsky in 1979 predicts an amplitude of the spectrum of primordial gravitational waves, parametrized by the tensor-to-scalar ratio, of $r=0.0037$ for a scalar spectral index of $n_S=0.965$. This amplitude is currently used as a target value in the design of future CMB experiments, with the ultimate goal of measuring it at more than five standard deviations. Here we evaluate how stable the predictions of the Starobinsky model for $r$ are, considering the experimental uncertainties on $n_S$ and the assumption of $\Lambda$CDM. We also consider inflationary models where the $R^2$ term in the Starobinsky action is generalized to an $R^{2p}$ term with index $p$ close to unity. We find that current data place a lower limit of $r>0.0013$ at $95\%$ C.L. for the classic Starobinsky model, and also predict a running of the scalar index different from zero at more than three standard deviations, in the range $dn/d\ln k=-0.0006_{-0.0001}^{+0.0002}$. A level of gravitational waves of $r\sim0.001$ is therefore possible in the Starobinsky scenario, and it will not be clearly detectable by future CMB missions such as LiteBIRD and CMB-S4. When assuming a more general $R^{2p}$ inflation, we find no expected lower limit on $r$ and a running consistent with zero. We also find that current data place a tight constraint on the index of $R^{2p}$ models at $95\%$ C.L., i.e. $p= 0.99^{+0.02}_{-0.03}$.
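For reference, the quoted target value follows from the standard leading-order slow-roll relations for $R^2$ inflation in terms of the number of e-folds $N$:

```latex
n_S \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2}
\quad\Longrightarrow\quad
N = \frac{2}{1 - n_S} = \frac{2}{0.035} \approx 57, \qquad
r \approx \frac{12}{57^2} \approx 0.0037
```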
|
astrophysics
|
Calculations in field theory are usually accomplished by employing some variants of perturbation theory, for instance using loop expansions. These calculations result in asymptotic series in powers of small coupling parameters, which as a rule are divergent for finite values of the parameters. In this paper, a method is described allowing for the extrapolation of such asymptotic series to finite values of the coupling parameters, and even to their infinite limits. The method is based on self-similar approximation theory. This theory approximates well a large class of functions, rational, irrational, and transcendental. A method is presented, resulting in self-similar factor approximants allowing for the extrapolation of functions to arbitrary values of coupling parameters from only the knowledge of expansions in powers of small coupling parameters. The efficiency of the method is illustrated by several problems of quantum field theory.
|
high energy physics phenomenology
|
For many phenomena, data are collected on a large scale, resulting in high-dimensional and high-frequency data. In this context, functional data analysis (FDA) is attracting interest. FDA deals with data that are defined on an intrinsically infinite-dimensional space; these data are called functional data. However, the infinite-dimensional data might be driven by a small number of latent variables. Hence, factor models are relevant for functional data. In this paper, we study functional factor models for time-dependent functional data. We propose nonparametric estimators under stationary and nonstationary processes. We obtain estimators that account for the time-dependence property; specifically, we use the information contained in the covariances at different lags. We show that the proposed estimators are consistent. Through Monte Carlo simulations, we find that our methodology outperforms the common estimators based on functional principal components. We also apply our methodology to monthly yield curves. In general, the suitable integration of time-dependent information improves the estimation of the latent factors.
|
statistics
|
Context. The Vista Variables in the Via Lactea (VVV) near-infrared variability survey explores some of the most complex regions of the Milky Way bulge and disk in terms of high extinction and high crowding. Aims. We add a new wavelength dimension to the optical information available at the American Association of Variable Star Observers International Variable Star Index (VSX-AAVSO) catalogue to test the VVV survey near-infrared photometry to better characterise these objects. Methods. We cross-matched the VVV and the VSX-AAVSO catalogues along with Gaia Data Release 2 photometry and parallaxes. Results. We present a catalogue that includes accurate individual coordinates, near-infrared magnitudes ($ZYJHK_s$), extinctions $A_{K_s}$, and distances based on Gaia parallaxes. We also show the near-infrared CMDs and spatial distributions for the different VSX types of variable stars, including important distance indicators such as RR Lyrae, Cepheids, and Miras. By analyzing the photometric flags in our catalogue, we found that about 20% of the stars with measured and verified variability are flagged as non-stellar sources, even when they are outside of the saturation and/or noise regimes. Additionally, we pair-matched our sample with the VIVA catalogue and found that more than half of our sources are missing from the VVV variability list, mostly due to observations with low signal-to-noise ratio or photometric problems, with a low percentage due to failures in the selection process. Conclusions. Our results suggest that the current knowledge of the variability in the Galaxy is biased to nearby stars with low extinction. The present catalogue also provides the groundwork for characterising the results of future large variability surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time in the highly crowded and reddened regions of the Galactic plane, as well as follow-up campaigns for
|
astrophysics
|
Let $g \geq 2$ and let the Torelli map denote the map sending a genus $g$ curve to its principally polarized Jacobian. We show that the restriction of the Torelli map to the hyperelliptic locus is an immersion in characteristic not $2$. In characteristic $2$, we show the Torelli map restricted to the hyperelliptic locus fails to be an immersion because it is generically inseparable; moreover, the induced map on tangent spaces has kernel of dimension $g-2$ at every point.
|
mathematics
|
We use the Dirac operator technique to establish sharp distance estimates for compact spin manifolds under lower bounds on the scalar curvature in the interior and on the mean curvature of the boundary. In the situations we consider, we thereby give refined answers to questions on metric inequalities recently proposed by Gromov. This includes optimal estimates for Riemannian bands and for the long neck problem. In the case of bands over manifolds of non-vanishing $\widehat{\mathrm{A}}$-genus, we establish a rigidity result stating that any band attaining the predicted upper bound is isometric to a particular warped product over some spin manifold admitting a parallel spinor. Furthermore, we establish scalar- and mean curvature extremality results for certain log-concave warped products. The latter includes annuli in all simply-connected space forms. On a technical level, our proofs are based on new spectral estimates for the Dirac operator augmented by a Lipschitz potential together with local boundary conditions.
|
mathematics
|
Winds from young massive stars contribute a large amount of energy to their host molecular clouds. This has consequences for the dynamics and observable structure of star-forming clouds. In this paper, we present radiative magnetohydrodynamic simulations of turbulent molecular clouds that form individual stars of 30, 60 and 120 solar masses emitting winds and ultraviolet radiation following realistic stellar evolution tracks. We find that winds contribute to the total radial momentum carried by the expanding nebula around the star at 10 % of the level of photoionisation feedback, and have only a small effect on the radial expansion of the nebula. Radiation pressure is largely negligible in the systems studied here. The 3D geometry and evolution of wind bubbles is highly aspherical and chaotic, characterised by fast-moving "chimneys" and thermally-driven "plumes". These plumes can sometimes become disconnected from the stellar source due to dense gas flows in the cloud. Our results compare favourably with the findings of relevant simulations, analytic models and observations in the literature while demonstrating the need for full 3D simulations including stellar winds. However, more targeted simulations are needed to better understand results from observational studies.
|
astrophysics
|
We present a renormalization group analysis for the hyperbolic sine-Gordon (sinh-Gordon) model in two dimensions. We derive the renormalization group equations based on the dimensional regularization method and the Wilson method. The same equations are obtained using both these methods. We have two parameters $\alpha$ and $\beta\equiv \sqrt{t}$, where $\alpha$ indicates the strength of interaction of a real scalar field and $t=\beta^2$ is related to the normalization of the action. We show that $\alpha$ is renormalized to zero in the high-energy region, that is, the sinh-Gordon theory is an asymptotically free theory. We also show a non-renormalization property: the beta function of $t$ vanishes in two dimensions.
|
high energy physics theory
|
The paper presents a system of nonlinear Lorentz-invariant equations. The behavior of their localized solutions is analogous to the processes of formation of azimuthally anisotropic elliptic flows of quark-gluon plasma and hadron jets. In contrast to hydrodynamic models of an ideal fluid, the elliptic flow in this model has a significant negative acceleration. The presence of the negative acceleration allows us to propose a hypothesis about the direct elliptic flow of photons as braking radiation (bremsstrahlung) from the QGP.
|
high energy physics phenomenology
|
The ability to follow the dynamics of a quantum system in a quantitative manner is of key importance for quantum technology. Despite its central role, justifiable deduction of the quantum dynamics of a single quantum system in terms of a macroscopic observable remains a challenge. Here we show that the relation between the readout signal of a single electron spin and the quantum dynamics of a single nuclear spin is given by a parameter related to the measurement strength. We determine this measurement strength in independent experiments and use this value to compare our analysis of the quantum dynamics with experimental results. We prove the validity of our approach by measuring violations of the Leggett-Garg inequality.
|
quantum physics
|
We obtain parameters for non-orthogonal and orthogonal tight-binding (TB) models from diatomic molecules for all combinations of elements of periods 1 to 6 and groups 3 to 18 of the periodic table. The TB bond parameters for 1711 homoatomic and heteroatomic dimers show clear chemical trends. In particular, using our parameters we compare to the rectangular d-band model, the reduced sp TB model, as well as canonical TB models for sp- and d-valent systems, which have long been used to gain qualitative insight into the interatomic bond. The transferability of our dimer-based TB bond parameters to bulk systems is discussed exemplarily for the bulk ground-state structures of Mo and Si. Our dimer-based TB bond parameters provide a well-defined and promising starting point for developing refined TB parameterizations and for making the insight of TB available for guiding materials design across the periodic table.
|
condensed matter
|
The dissolution of rocks by rainfall commonly generates streamwise parallel channels, yet the occurrence of these natural patterns remains to be understood. Here, we report the emergence in the laboratory of a streamwise dissolution pattern at the surface of an initially flat soluble material, inclined and subjected to a thin runoff water flow. Nearly parallel grooves about 1 mm wide and directed along the main slope spontaneously form. Their width and depth increase continuously with time until their crests emerge and channelize the flow. Our observations may constitute the early stage of the patterns observed in the field.
|
physics
|
Classes with bounded rankwidth are MSO-transductions of trees and classes with bounded linear rankwidth are MSO-transductions of paths. These results show a strong link between the properties of these graph classes considered from the point of view of structural graph theory and from the point of view of finite model theory. We take both views on classes with bounded linear rankwidth and prove structural and model theoretic properties of these classes: 1) Graphs with linear rankwidth at most $r$ are linearly $\chi$-bounded. Actually, they have bounded $c$-chromatic number, meaning that they can be colored with $f(r)$ colors, each color inducing a cograph. 2) Based on a Ramsey-like argument, we prove for every proper hereditary family $\mathcal F$ of graphs (like cographs) that there is a class with bounded rankwidth that does not have the property that graphs in it can be colored by a bounded number of colors, each inducing a subgraph in $\mathcal F$. 3) For a class $\mathcal C$ with bounded linear rankwidth the following conditions are equivalent: a) $\mathcal C$ is stable, b) $\mathcal C$ excludes some half-graph as a semi-induced subgraph, c) $\mathcal C$ is a first-order transduction of a class with bounded pathwidth. These results open the perspective to study classes admitting low linear rankwidth covers.
|
computer science
|
In this study we establish connections between asymptotic functions and properties of solutions to important problems in wireless networks. We start by introducing a class of self-mappings (called asymptotic mappings) constructed with asymptotic functions, and we show that spectral properties of these mappings explain the behavior of solutions to some max-min utility optimization problems. For example, in a common family of max-min utility power control problems, we prove that the optimal utility as a function of the power available to transmitters is approximately linear in the low power regime. However, as we move away from this regime, there exists a transition point, easily computed from the spectral radius of an asymptotic mapping, from which gains in utility become increasingly marginal. From these results we derive analogous properties of the transmit energy efficiency. In this study we also generalize and unify existing approaches for feasibility analysis in wireless networks. Feasibility problems often reduce to determining the existence of the fixed point of a standard interference mapping, and we show that the spectral radius of an asymptotic mapping provides a necessary and sufficient condition for the existence of such a fixed point. We further present a result that determines whether the fixed point satisfies a constraint given in terms of a monotone norm.
|
electrical engineering and systems science
|
The BFV formulation of a given gauge theory is usually significantly easier to obtain than its BV formulation. Grigoriev and Damgaard introduced simple formulas for obtaining the latter from the former. Since BFV relies on the Hamiltonian version of the gauge theory, however, it does not come as a surprise that in general the resulting BV theory does not exhibit space-time covariance. We provide an explicit example of this phenomenon in two spacetime dimensions and show how to restore covariance of the BV data by improving the Grigoriev--Damgaard procedure with appropriate adaptations of its original formulas.
|
high energy physics theory
|
We present an analysis of two isovector scalar resonant contributions to the $B$ decays into charmonia plus a $K\bar K$ or $\pi\eta$ pair in the perturbative QCD approach. The Flatté model for the $a_0(980)$ resonance and the Breit-Wigner formula for the $a_0(1450)$ resonance are adopted to parametrize the timelike form factors in the dimeson distribution amplitudes, which capture the important final state interactions in these processes. The predicted distribution in the $K^+K^-$ invariant mass, as well as its integrated branching ratio for the $a_0(980)$ resonance in the $B^0\rightarrow J/\psi K^+K^-$ mode, agrees well with the currently available experimental data. The obtained branching ratio of the quasi-two-body decay $B^0\rightarrow J/\psi a_0(980)(\rightarrow \pi^0\eta)$ can reach the order of $10^{-6}$, making the corresponding measurement appear feasible. For the $a_0(1450)$ component, our results can be tested by further experiments at LHCb and Belle II. We also discuss some theoretical uncertainties of our calculation in detail.
|
high energy physics phenomenology
|
Out of the participants in a randomized experiment with anticipated heterogeneous treatment effects, is it possible to identify which ones have a positive treatment effect, even though each has only taken either treatment or control but not both? While subgroup analysis has received attention, claims about individual participants are more challenging. We frame the problem in terms of multiple hypothesis testing: we think of each individual as a null hypothesis (the potential outcomes are equal, for example) and aim to identify individuals for whom the null is false (the treatment potential outcome stochastically dominates the control, for example). We develop a novel algorithm that identifies such a subset, with nonasymptotic control of the false discovery rate (FDR). Our algorithm allows for interaction -- a human data scientist (or a computer program acting on the human's behalf) may adaptively guide the algorithm in a data-dependent manner to gain high identification power. We also propose several extensions: (a) relaxing the null to nonpositive effects, (b) moving from unpaired to paired samples, and (c) subgroup identification. We demonstrate via numerical experiments and theoretical analysis that the proposed method has valid FDR control in finite samples and reasonably high identification power.
|
statistics
|
An important component of every country's COVID-19 response is fast and efficient testing - to identify and isolate cases, as well as for early detection of local hotspots. For many countries, producing a sufficient number of tests has been a serious limiting factor in their efforts to control COVID-19 infections. Group testing is a well-established mathematical tool, which can provide a substantial and inexpensive expansion of testing capacity. In this note, we compare several popular group testing schemes in the context of qPCR testing for COVID-19. We find that in practical settings, for identification of individuals with COVID-19, Dorfman testing is the best choice at prevalences up to 30%, while for estimation of COVID-19 prevalence rates in the total population, Gibbs-Gower testing is the best choice, given a fixed and relatively small number of tests, again at prevalences up to 30%. For instance, at a prevalence of up to 2%, Dorfman testing gives an efficiency gain of 3.5--8; at 1% prevalence, Gibbs-Gower testing gives an efficiency gain of 18, even when capping the pool size at a feasible number. This note is intended as a helpful handbook for labs implementing group testing methods.
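As a rough illustration of where such efficiency gains come from, the sketch below computes the expected number of tests per person under textbook two-stage Dorfman pooling (not the note's full analysis) and picks the best pool size at a few prevalences:

```python
def dorfman_tests_per_person(p, n):
    """Expected tests per individual under two-stage Dorfman pooling:
    one pooled test shared by n people, plus n individual retests
    whenever the pool is positive, which happens with prob. 1-(1-p)^n."""
    return 1.0 / n + 1.0 - (1.0 - p) ** n

def best_pool_size(p, max_n=50):
    return min(range(2, max_n + 1), key=lambda n: dorfman_tests_per_person(p, n))

for p in (0.005, 0.01, 0.02, 0.05):
    n = best_pool_size(p)
    gain = 1.0 / dorfman_tests_per_person(p, n)
    print(f"prevalence {p:.1%}: optimal pool size {n}, efficiency gain {gain:.1f}x")
```

At 2% prevalence this gives a gain of about 3.7x, consistent with the 3.5--8 range quoted above.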
|
statistics
|
A path integral Monte Carlo (PIMC) method based on the Feynman-Kac formula for mixed boundary conditions of elliptic equations is proposed to solve the forward problem of electrical impedance tomography (EIT) on the boundary to obtain electrical potentials. The forward problem is an important part of iterative algorithms for the inverse problem of EIT, which has attracted continual interest due to its applications in medical imaging and material testing. By simulating reflecting Brownian motion with walk-on-spheres techniques and calculating its corresponding local time, we are able to obtain an accurate voltage-to-current map for the conductivity equation with mixed boundary conditions for a 3-D spherical object with eight electrodes. Due to the local property of the PIMC method, the solution of the map can be computed locally for each electrode in a parallel manner.
|
mathematics
|
We present and discuss some basic elements of the hypercolor extension of the Standard Model. The appearance of a set of hyperquark bound states results from the use of a $\sigma$-model; due to specific symmetries of this minimal extension, there arise stable hypermesons and hyperbaryons, which are interpreted as Dark Matter candidates. Using estimates of their masses from an analysis of Dark Matter annihilation kinetics, some processes of high-energy cosmic rays scattering off these particles are analyzed in a search for Dark Matter manifestations.
|
high energy physics phenomenology
|
The global existence of classical solutions to reaction-diffusion systems in arbitrary space dimensions is studied. The nonlinearities are assumed to be quasi-positive, to have (slightly super-) quadratic growth, and to possess a mass control, which includes the important cases as mass conservation and mass dissipation. Under these assumptions, the local classical solution is shown to be global and, in case of mass conservation or mass dissipation, to have $L^{\infty}$-norm growing at most polynomially in time. Applications include skew-symmetric Lotka-Volterra systems and quadratic reversible chemical reactions.
|
mathematics
|
Quantum computing is an appealing field concerned with the behaviour and nature of energy at the quantum level to improve the efficiency of computations. In recent years, quantum computation has received much attention for its capability to solve difficult problems efficiently in contrast to classical computers. Specifically, some well-known public-key cryptosystems depend on the difficulty of factoring large numbers, which takes a very long time. It is expected that the emergence of a quantum computer has the potential to break such cryptosystems by 2020 due to the discovery of powerful quantum algorithms (Shor's factoring, Grover's search algorithm, and many more). In this paper, we have designed a quantum variant of the second-fastest classical factorization algorithm, the "Quadratic Sieve". We have constructed a simulation framework for the quantized quadratic sieve algorithm using the high-level programming language Mathematica. Further, the simulations are performed on a classical computer to get a feel of the quantum system, and we prove that it is more efficient than its classical variants from a computational complexity point of view.
|
quantum physics
|
We use Constraint Satisfaction methods to enumerate and construct set-theoretic solutions to the Yang-Baxter equation of small size. We show that there are 321931 involutive solutions of size nine, 4895272 involutive solutions of size ten, and 422449480 non-involutive solutions of size eight. Our method is then used to enumerate non-involutive biquandles.
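For very small sizes the enumeration can even be done by brute force rather than constraint satisfaction; the sketch below checks the set-theoretic Yang-Baxter equation on $X^3$ and counts the non-degenerate involutive and non-involutive solutions for $|X| = 2$ (the conventions, e.g. non-degeneracy, are standard assumptions and may differ in detail from the paper's):

```python
from itertools import product

def is_ybe(r, X):
    """Set-theoretic Yang-Baxter equation on X^3:
    (r x id)(id x r)(r x id) = (id x r)(r x id)(id x r)."""
    def r12(t):
        a, b = r[(t[0], t[1])]
        return (a, b, t[2])
    def r23(t):
        a, b = r[(t[1], t[2])]
        return (t[0], a, b)
    return all(r12(r23(r12(t))) == r23(r12(r23(t)))
               for t in product(X, repeat=3))

def is_nondegenerate(r, X):
    """r(x, y) = (sigma_x(y), tau_y(x)) with every sigma_x, tau_y bijective."""
    return (all(len({r[(a, b)][0] for b in X}) == len(X) for a in X) and
            all(len({r[(a, b)][1] for a in X}) == len(X) for b in X))

def is_involutive(r, X):
    return all(r[r[p]] == p for p in product(X, repeat=2))

X = (0, 1)
pairs = list(product(X, repeat=2))
inv = noninv = 0
# 4^4 = 256 candidate maps r : X^2 -> X^2 for |X| = 2
for images in product(pairs, repeat=len(pairs)):
    r = dict(zip(pairs, images))
    if is_ybe(r, X) and is_nondegenerate(r, X):
        if is_involutive(r, X):
            inv += 1
        else:
            noninv += 1
print(inv, noninv)
```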
|
mathematics
|
We present an approach for adapting convolutional neural networks for object recognition and classification to scientific literature layout detection (SLLD), a shared subtask of several information extraction problems. Scientific publications contain multiple types of information sought by researchers in various disciplines, organized into an abstract, bibliography, and sections documenting related work, experimental methods, and results; however, there is no effective way to extract this information due to their diverse layout. In this paper, we present a novel approach to developing an end-to-end learning framework to segment and classify major regions of a scientific document. We consider scientific document layout analysis as an object detection task over digital images, without any additional text features that need to be added into the network during the training process. Our technical objective is to implement transfer learning via fine-tuning of pre-trained networks and thereby demonstrate that this deep learning architecture is suitable for tasks that lack very large document corpora for training ab initio. As part of the experimental test bed for empirical evaluation of this approach, we created a merged multi-corpus data set for scientific publication layout detection tasks. Our results show good improvement with fine-tuning of a pre-trained base network using this merged data set, compared to the baseline convolutional neural network architecture.
|
computer science
|
The accuracy of smartphone-based positioning methods using WiFi usually suffers from ranging errors caused by non-line-of-sight (NLOS) conditions. Previous research usually exploits several statistical features from a long time series (hundreds of samples) of WiFi received signal strength (RSS) or WiFi round-trip time (RTT) to achieve a high identification accuracy. However, the long time series or large sample size leads to high power and time consumption in data collection for both training and testing. This will also undoubtedly be detrimental to the user experience, as the waiting time to collect enough samples is quite long. Therefore, this paper proposes a new real-time NLOS/LOS identification method for smartphone-based indoor positioning systems using WiFi RTT and RSS. Based on our extensive analysis of RSS and RTT features, a machine learning method using a random forest was chosen and developed to separate the samples for NLOS/LOS conditions. Experiments in different environments show that our method achieves a discrimination accuracy of about 94% with a sample size of 10. Considering the theoretically shortest WiFi ranging interval of 100 ms on RTT-enabled smartphones, our algorithm is able to provide the shortest latency of 1 s to obtain a testing result among all state-of-the-art methods.
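A minimal scikit-learn sketch of the classification step, with synthetic stand-in RSS/RTT window features; the real features and their extraction follow the paper, not this toy:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# synthetic stand-in features per 10-sample window: mean RSS (dBm),
# RSS spread, and an RTT-derived range error (m); real features would
# come from WiFi FTM logs on the phone
los = np.column_stack([rng.normal(-45, 3, n), rng.gamma(2.0, 0.5, n),
                       rng.normal(0.5, 0.8, n)])
nlos = np.column_stack([rng.normal(-60, 6, n), rng.gamma(2.0, 1.5, n),
                        rng.normal(4.0, 2.5, n)])
X = np.vstack([los, nlos])
y = np.array([0] * n + [1] * n)   # 0 = LOS, 1 = NLOS

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.3f}")
```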
|
electrical engineering and systems science
|
Gamma-ray burst (GRB) data suggest that the jets from GRBs in the high redshift universe are more narrowly collimated than those at lower redshifts. This implies that we detect relatively fewer long GRB progenitor systems (i.e. massive stars) at high redshifts, because a greater fraction of GRBs have their jets pointed away from us. As a result, estimates of the star formation rate (from the GRB rate) at high redshifts may be diminished if this effect is not taken into account. In this paper, we estimate the star formation rate (SFR) using the observed GRB rate, accounting for an evolving jet opening angle. We find that the SFR in the early universe (z > 3) can be up to an order of magnitude higher than the canonical estimates, depending on the severity of beaming angle evolution and the fraction of stars that make long gamma-ray bursts. Additionally, we find an excess in the SFR at low redshifts, although this lessens when accounting for evolution of the beaming angle. Finally, under the assumption that GRBs do in fact trace canonical forms of the cosmic SFR, we constrain the resulting fraction of stars that must produce GRBs, again accounting for jet beaming-angle evolution. We find this assumption suggests a high fraction of stars in the early universe producing GRBs - a result that may, in fact, support our initial assertion that GRBs do not trace canonical estimates of the SFR.
|
astrophysics
|
Following a series of similar calculations in simpler non-conformal holographic setups, we determine the quasinormal mode spectrum for an operator dual to a gauge-invariant scalar field within the Improved Holographic QCD framework. At temperatures somewhat above the critical temperature of the deconfinement transition, we find a small number of clearly separated modes followed by a branch-cut-like structure parallel to the real axis, the presence of which is linked to the form of the IHQCD potential employed. The temperature dependence of the lowest nonzero mode is furthermore used to study the thermalization time of the corresponding correlator, which is found to be of the order of the inverse critical temperature near the phase transition and decrease slightly faster than $1/T$ at higher temperatures.
|
high energy physics phenomenology
|
Low background searches for astrophysical neutrino sources anywhere in the sky can be performed using cascade events induced by neutrinos of all flavors interacting in IceCube with energies as low as ~1 TeV. Previously, we showed that even with just two years of data, the resulting sensitivity to sources in the southern sky is competitive with IceCube and ANTARES analyses using muon tracks induced by charge current muon neutrino interactions - especially if the neutrino emission follows a soft energy spectrum or originates from an extended angular region. Here, we extend that work by adding five more years of data, significantly improving the cascade angular resolution, and including tests for point-like or diffuse Galactic emission to which this dataset is particularly well-suited. For many of the signal candidates considered, this analysis is the most sensitive of any experiment. No significant clustering was observed, and thus many of the resulting constraints are the most stringent to date. In this paper we will describe the improvements introduced in this analysis and discuss our results in the context of other recent work in neutrino astronomy.
|
astrophysics
|
The $\alpha$-attractor inflationary models are nowadays favored by CMB Planck observations. Their similarity with canonical quintessence models motivates the exploration of a common framework that explains both inflation and dark energy. We study the expected constraints that next-generation cosmological experiments will be able to impose on the dark energy $\alpha$-attractor model. We systematically account for the constraining power of SNIa from WFIRST, BAO from DESI and WFIRST, galaxy clustering and shear from LSST, and Stage-4 CMB experiments. We assume a tensor-to-scalar ratio $10^{-3} < r < 10^{-2}$, which permits us to explore the wide regime sufficiently close to, but distinct from, a cosmological constant, without the need to fine-tune the initial value of the field. We find that the combination S4CMB + LSST + SNIa will achieve the best results, improving the FoM by almost an order of magnitude with respect to the S4CMB + BAO + SNIa case. We find this is also true for the FoM of the $w_0 - w_a$ parameters. Therefore, future surveys will be uniquely able to probe models connecting early and late cosmic acceleration.
|
astrophysics
|
Artificial neural networks are used to fit a potential energy surface. We demonstrate the benefits of using not only energies, but also their first and second derivatives as training data for the neural network. This ensures smooth and accurate Hessian surfaces, which are required for rate constant calculations using instanton theory. Our aim was a local, accurate fit rather than a global PES, because instanton theory requires information on the potential only in the close vicinity of the main tunneling path. Elongations along vibrational normal modes at the transition state are used as coordinates for the neural network. The method is applied to the hydrogen abstraction reaction from methanol, calculated on a coupled-cluster level of theory. The reaction is essential in astrochemistry to explain the deuteration of methanol in the interstellar medium.
|
physics
|
Background: Studies have shown that human mobility is an important factor in dengue epidemiology. Changes in mobility resulting from the COVID-19 pandemic set up a real-life situation to test this hypothesis. Our objective was to evaluate the effect of reduced mobility due to this pandemic on the occurrence of dengue in the state of São Paulo, Brazil. Method: It is an ecological study of time series, developed between January and August 2020. We use the number of confirmed dengue cases and residential mobility, on a daily basis, from secondary information sources. Mobility was represented by the daily percentage variation of residential population isolation, obtained from the Google database. We modeled the relationship between dengue occurrence and social distancing by negative binomial regression, adjusted for seasonality. We represent the social distancing dichotomously (isolation versus no isolation) and consider a lag for isolation from the dates of occurrence of dengue. Results: The risk of dengue decreased by around 9.1% (95% CI: 14.2 to 3.7) in the presence of isolation, considering a delay of 20 days between the degree of isolation and the first dengue symptoms. Conclusions: We have shown that mobility can play an important role in the epidemiology of dengue and should be considered in surveillance and control activities.
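A minimal statsmodels sketch of the regression described above, on synthetic counts; the seasonality control, dispersion parameter, and the simulated 9% effect at a 20-day lag are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = 240
t = np.arange(days)
isolation = (t >= 80).astype(float)           # dichotomous distancing indicator
iso_lag20 = np.roll(isolation, 20)
iso_lag20[:20] = 0.0                          # 20-day lag to first symptoms
season = np.sin(2 * np.pi * t / 365)          # crude seasonality control
mu = np.exp(4.0 + 0.6 * season - 0.095 * iso_lag20)  # ~9% risk reduction
cases = rng.poisson(mu)                       # stand-in daily case counts

X = sm.add_constant(pd.DataFrame({"season": season, "iso_lag20": iso_lag20}))
fit = sm.GLM(cases, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()
rr = np.exp(fit.params["iso_lag20"])          # rate ratio for isolation
print(f"estimated risk reduction: {1 - rr:.1%}")
```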
|
statistics
|
It has previously been shown that any measurement-system-specific relationship (SSR), i.e. a mathematical model $Y_d = f_d(\{X_m\})$, is bracketed by certain parameters that should prefix the achievable accuracy/uncertainty $e_d^Y$ of a desired result $y_d$. Here we clarify how the element-specific expressions for isotopic abundances and/or atomic weights can be parametrically distinguished from one another, and how the achievable accuracy can even be predicted a priori. It is thus shown that, irrespective of whether the measurement uncertainty $u_m$ is purely random in origin or not, $e_d^Y$ should be a systematic parameter. Further, by its property-governing factors, any SSR should belong to either the variable-independent (F.1) or the variable-dependent (F.2) family of SSRs/models. The SSRs considered here are shown to be members of the F.2 family. That is, it is pointed out, and explained why, that the uncertainty $e$ of determining either an isotopic abundance or an atomic weight should vary, even for given measurement accuracies $u_m$, as a function of the measurable variables $X_m$. However, the required computational step is shown to behave as an error-sink in the overall process of the indirect measurement in question.
|
physics
|
This paper reviews methods that are used for adequacy risk assessment considering solar power and for assessment of the capacity value of solar power. The properties of solar power are described as seen from the perspective of the power-system operator, comparing differences in energy availability and capacity factors with those of wind power. Methodologies for risk calculations considering variable generation are surveyed, including the probability background, statistical-estimation approaches, and capacity-value metrics. Issues in incorporating variable generation in capacity markets are described, followed by a review of applied studies considering solar power. Finally, recommendations for further research are presented.
|
statistics
|
Rather general considerations from the string theory landscape suggest a statistical preference within the multiverse for soft SUSY breaking terms as large as possible, subject to a pocket-universe value for the weak scale not greater than a factor of 2-5 from our measured value. Within the gravity/moduli-mediated SUSY breaking framework, the Higgs mass is pulled to $m_h \sim 125$ GeV while first/second generation scalars are pulled to the tens-of-TeV scale, and gauginos and third generation scalars remain in the few-TeV range. In this case, one then expects comparable moduli- and anomaly-mediated contributions to soft terms, leading to mirage mediation. For an assumed stringy natural value of the SUSY $\mu$ parameter, we evaluate predicted sparticle mass spectra for mirage mediation from a statistical scan of the string landscape. We then expect a compressed spectrum of gauginos along with a higgsino-like LSP. For a linear (quadratic) statistical draw with gravitino mass $m_{3/2} \sim 20$ TeV, the most probable mirage scale is predicted to be around $\mu_{mir} \sim 10^{13}$ ($10^{14}$) GeV. SUSY should appear at the high-luminosity LHC via higgsino pair production into soft dilepton pairs. Distinguishing mirage mediation from models with unified gaugino masses may have to await construction of an ILC with $\sqrt{s} > 2m(\text{higgsino})$.
|
high energy physics phenomenology
|
Cryptanalysis on standard quantum cryptographic systems generally involves finding optimal adversarial attack strategies on the underlying protocols. The core principle of modelling quantum attacks in many cases reduces to the adversary's ability to clone unknown quantum states which facilitates the extraction of some meaningful secret information. Explicit optimal attack strategies typically require high computational resources due to large circuit depths or, in many cases, are unknown. In this work, we propose variational quantum cloning (VQC), a quantum machine learning based cryptanalysis algorithm which allows an adversary to obtain optimal (approximate) cloning strategies with short depth quantum circuits, trained using hybrid classical-quantum techniques. The algorithm contains operationally meaningful cost functions with theoretical guarantees, quantum circuit structure learning and gradient descent based optimisation. Our approach enables the end-to-end discovery of hardware efficient quantum circuits to clone specific families of quantum states, which in turn leads to an improvement in cloning fidelities when implemented on quantum hardware: the Rigetti Aspen chip. Finally, we connect these results to quantum cryptographic primitives, in particular quantum coin flipping. We derive attacks on two protocols as examples, based on quantum cloning and facilitated by VQC. As a result, our algorithm can improve near term attacks on these protocols, using approximate quantum cloning as a resource.
|
quantum physics
|
We investigate the phase diagram of a general class of $4$-dimensional exact regular hairy planar black holes. For some particular values of the parameters in the moduli potential, these solutions can be embedded in $\omega$-deformed $\mathcal{N}=8$ gauged supergravity. We construct the hairy soliton that is the ground state of the theory and show that there exist first order phase transitions.
|
high energy physics theory
|
In the study of reaction networks and the polynomial dynamical systems that they generate, special classes of networks with important properties have been identified. These include reversible, weakly reversible, and, more recently, endotactic networks. While some inclusions between these network types are clear, such as the fact that all reversible networks are weakly reversible, other relationships are more complicated. Adding to this complexity is the possibility that inclusions be at the level of the dynamical systems generated by the networks rather than at the level of the networks themselves. We completely characterize the inclusions between reversible, weakly reversible, endotactic, and strongly endotactic networks, as well as other less well studied network types. In particular, we show that every strongly endotactic network in two dimensions can be generated by an extremally weakly reversible network. We also introduce a new class of source-only networks, which is a computationally convenient property for networks to have, and show how this class relates to the above mentioned network types.
|
mathematics
|
Noble-metal nano-particles have been the industry standard for plasmonic applications due to their highly populated plasmon generations. Despite their remarkable plasmonic performance, their widespread use in plasmonic applications is commonly hindered by limitations on the available laser sources and the relatively low operating temperatures needed to retain mechanical strength in these materials. Motivated by recent experimental works, in which exotic hexagonal-close-packed (HCP) phases have been identified in gold (Au) and silver (Ag), we present the plasmonic performance of two HCP polytypes in these materials using high-accuracy first-principles simulations. HCP phases commonly reach thermal and mechanical stability at high temperatures due to monotonically decreasing Gibbs free energy differences with respect to the corresponding face-centered-cubic (FCC) phases. We find that several of these polytypes are harder than their conventional FCC counterparts and produce bulk plasmons at lower energies with comparable lifetimes. We also show that surface plasmons are generated at substantially lower energies for perfectly spherical nano-grains embedded in face-centered-cubic matrices compared to their bulk counterparts. Furthermore, increasing the grain size slightly shifts the surface-plasmon peaks to higher energies and increases the plasmon intensity. Our work suggests that noble-metal nano-particles can be tailored to develop exotic HCP phases to obtain novel plasmonic properties.
|
condensed matter
|
We propose an algorithm for electrocardiogram (ECG) segmentation using a UNet-like fully convolutional neural network. The algorithm receives an ECG signal of arbitrary sampling rate as input and gives a list of onsets and offsets of P and T waves and QRS complexes as output. Our segmentation method differs from others in its speed, small number of parameters, and good generalization: it is adaptive to different sampling rates and generalizes to various types of ECG monitors. The proposed approach is superior to other state-of-the-art segmentation methods in terms of quality. In particular, F1-measures for detection of onsets and offsets of P and T waves and of QRS complexes are at least 97.8%, 99.5%, and 99.9%, respectively.
|
electrical engineering and systems science
|
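As a rough illustration of the kind of architecture the abstract above describes, here is a minimal UNet-like 1D fully convolutional network in PyTorch that labels every ECG sample as background, P, QRS, or T. The layer counts, channel widths, kernel sizes, and four-class setup are our own illustrative assumptions, not the authors' exact design.

```python
# Illustrative sketch, not the paper's architecture: a small 1D UNet
# that outputs a class logit for every input sample of the ECG signal.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=9, padding=4), nn.ReLU(),
        nn.Conv1d(c_out, c_out, kernel_size=9, padding=4), nn.ReLU(),
    )

class UNet1D(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose1d(64, 32, kernel_size=2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose1d(32, 16, kernel_size=2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv1d(16, n_classes, kernel_size=1)

    def forward(self, x):                    # x: (batch, 1, length)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # per-sample class logits

logits = UNet1D()(torch.randn(2, 1, 4096))   # length divisible by 4
labels = logits.argmax(dim=1)                # wave onsets/offsets are the
print(labels.shape)                          # points where the label changes
```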
Recently, unsupervised Bilingual Lexicon Induction (BLI) without any parallel corpus has attracted much research interest. One of the crucial parts of methods for the BLI task is the matching procedure. Previous works impose too strong a constraint on the matching, leading to many counterintuitive translation pairings. Thus, we propose a relaxed matching procedure to find a more precise matching between two languages. We also find that aligning the source and target language embedding spaces bidirectionally brings significant improvement. We follow the previous iterative framework to conduct experiments. Results on standard benchmarks demonstrate the effectiveness of our proposed method, which substantially outperforms previous unsupervised methods.
|
computer science
|
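The abstract above does not spell out its relaxation, so the sketch below shows one standard way to relax a hard one-to-one matching between embedding spaces: entropic regularization solved by Sinkhorn scaling, which yields a soft doubly-stochastic matching plan. The kernel, regularization value, and random data are illustrative assumptions, not the paper's procedure.

```python
# Illustrative relaxed matching via Sinkhorn iterations (one common
# relaxation; the paper's exact procedure may differ).
import numpy as np

def sinkhorn_match(X, Y, reg=0.05, n_iter=100):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    K = np.exp(X @ Y.T / reg)          # similarity kernel
    u = np.ones(len(X))
    for _ in range(n_iter):            # alternate row/column scaling
        v = 1.0 / (K.T @ u)
        u = 1.0 / (K @ v)
    P = u[:, None] * K * v[None, :]    # soft doubly-stochastic plan
    return P.argmax(axis=1)            # most likely translation per word

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(50, 8)), rng.normal(size=(50, 8))
print(sinkhorn_match(X, Y)[:10])
```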
This paper shows that the generalized logistic distribution model is derived from the well-known compartment model, consisting of susceptible, infected and recovered compartments, abbreviated as the SIR model, under certain conditions. In the SIR model, there are uncertainties in predicting the final size of the infected population and the value of the infectious parameter. However, by utilizing the information obtained from the generalized logistic distribution model, we can perform the SIR numerical computation more stably and more accurately. Applications of this combined method to severe acute respiratory syndrome (SARS) and Coronavirus disease 2019 (COVID-19) are also introduced.
|
statistics
|
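A minimal sketch of how a logistic-type fit can stabilize the SIR computation described above: a growth rate and final size, assumed here to come from a generalized logistic fit to case counts, pin down plausible SIR parameters before the ODEs are integrated. All numbers are illustrative.

```python
# Sketch: logistic-informed SIR integration (illustrative values only).
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000              # population size
K, r = 80_000, 0.2         # final size and growth rate, e.g. from a
                           # generalized logistic fit to case counts
gamma = 0.1                # recovery rate (1 / infectious period)
beta = r + gamma           # early exponential growth: dI/dt ~ (beta - gamma) I

def sir(t, y):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 365), [N - 10, 10, 0], max_step=1.0)
S, I, R = sol.y
print("final infected fraction:", (I[-1] + R[-1]) / N)
```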
Time series forecasting plays an increasingly important role in modern business decisions. In today's data-rich environment, people often aim to choose the optimal forecasting model for their data. However, identifying the optimal model often requires professional knowledge and experience, making accurate forecasting a challenging task. To mitigate the importance of model selection, we propose a simple and reliable algorithm and successfully improve the forecasting performance. Specifically, we construct multiple time series with different sub-seasons from the original time series. These derived series highlight different sub-seasonal patterns of the original series, making it possible for the forecasting methods to capture diverse patterns and components of the data. Subsequently, we make forecasts for these multiple series separately with classical statistical models (ETS or ARIMA). Finally, the forecasts of these multiple series are combined with equal weights. We evaluate our approach on the widely-used forecasting competition datasets (M1, M3, and M4), in terms of both point forecasts and prediction intervals. We observe improvements in performance compared with the benchmarks. Our approach is particularly suitable and robust for the datasets with higher frequencies. To demonstrate the practical value of our proposition, we showcase the performance improvements from our approach on hourly load data.
|
statistics
|
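To make the construction above concrete, the sketch below derives forecasts from several sub-seasonal views of a series and averages them with equal weights. A seasonal-naive forecaster stands in for the paper's ETS/ARIMA models purely to keep the example dependency-free, and the particular sub-season construction (one derived forecast per divisor of the seasonal period) is our own illustrative choice.

```python
# Illustrative equal-weight combination over sub-seasonal views.
import numpy as np

def seasonal_naive(y, m, h):
    """Forecast h steps ahead by repeating the last full season of length m."""
    last = y[-m:]
    return np.array([last[i % m] for i in range(h)])

def subseasonal_combine(y, m, h):
    divisors = [k for k in range(1, m + 1) if m % k == 0]
    forecasts = [seasonal_naive(y, k, h) for k in divisors]
    return np.mean(forecasts, axis=0)       # equal-weight combination

rng = np.random.default_rng(1)
t = np.arange(240)
y = 10 + np.sin(2 * np.pi * t / 12) + 0.1 * rng.normal(size=t.size)
print(subseasonal_combine(y, m=12, h=6))
```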
We show that for a quasicompact quasiseparated scheme $X$, the following assertions are equivalent: (1) the category $\operatorname{QCoh}(X)$ of all quasicoherent sheaves on $X$ has a flat generator; (2) for every injective object $\mathcal E$ of $\operatorname{QCoh}(X)$, the internal hom functor into $\mathcal E$ is exact; (3) the scheme $X$ is semiseparated.
|
mathematics
|
The aim of this note is to describe the computation of post-Minkowskian Hamiltonians in modified theories of gravity. Exploiting a recent relation between amplitudes of massive scalars and Hamiltonians for relativistic point-particles, we define a post-Minkowskian potential at second order in Newton's constant arising from $\mathcal{R}^3$ modifications of General Relativity. Using this result we calculate the associated contribution to the scattering angle for binary black holes at second post-Minkowskian order, showing agreement in the non-relativistic limit with previous results for the bending angle of a massless particle around a static massive source in $\mathcal{R}^3$ theories.
|
high energy physics theory
|
Every day around the world, interminable terabytes of data are being captured for surveillance purposes. A typical 1-2MP CCTV camera generates around 7-12GB of data per day. Frame-by-frame processing of such an enormous amount of data requires hefty computational resources. In recent years, compressive sensing approaches have shown impressive results in signal processing by reducing the sampling bandwidth. Different sampling mechanisms were developed to incorporate compressive sensing in image and video acquisition. Pixel-wise coded exposure is one of the promising sensing paradigms for capturing videos in the compressed domain, and it has been realized in an all-CMOS sensor \cite{Xiong2017}. Though cameras that perform compressive sensing save a lot of bandwidth at the time of sampling and minimize the memory required to store videos, little can be done in terms of processing until the videos are reconstructed to the original frames. However, the reconstruction of compressive-sensed (CS) videos still takes a lot of time and is computationally expensive. In this work, we show that object detection and localization are possible directly on CS frames (easily up to 20x compression). To our knowledge, this is the first time that the problem of object detection and localization on CS frames has been attempted; hence, we also created a dataset for training in the CS domain. We achieve a good accuracy of 46.27\% mAP (mean average precision) with the proposed model, with an inference time of 23 ms directly on the compressed frames (approx. 20 original-domain frames), which facilitates real-time inference, as verified on an NVIDIA TX2 embedded board. Our framework will significantly reduce the communication bandwidth, and thus the power consumption, as video compression is done at the image-sensor processing core.
|
computer science
|
In this paper, we consider the problem of minimizing the uplink delays of users in a 5G cellular network. Such a cellular network is based on a Cloud Radio Access Network (CRAN) architecture with limited fronthaul capacity, where our goal is to minimize the delays of all users through optimal resource allocation. Earlier works minimize the average delay of each user assuming the same transmit power for all users. Combining Pareto optimization and Markov Decision Processes (MDPs), we show that every desired balance in the trade-off among infinite-horizon average-reward delays is achievable by minimizing a properly weighted sum of delays. In addition, we solve the problem in two realistic scenarios, considering both power control and different (random) service times for the users. In the latter scenario, we are able to define and minimize the more preferred criterion of total delay instead of average delay for each user. We show that the resulting problem is equivalent to a discounted-reward infinite-horizon MDP. Simulations show significant improvement in terms of a wider stability region for arrival rates in the power-controlled scenario and a considerably reduced sum of users' total delays in the case of random service times.
|
electrical engineering and systems science
|
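The scalarization idea above, minimizing a properly weighted sum to trace the Pareto trade-off, can be illustrated on a toy single-queue power-control MDP solved by discounted value iteration. All rates, weights, and the state truncation are made-up values, not the paper's model.

```python
# Toy illustration of weighted-sum scalarization (not the paper's model).
import numpy as np

Q, arr = 50, 0.4                 # queue truncation, arrival probability
mu = {0: 0.3, 1: 0.7}            # service probability per power level
w_d, w_p, beta = 1.0, 0.5, 0.99  # trade-off weights, discount factor

def q_value(V, q, a):
    m, val = mu[a], w_d * q + w_p * a
    for dep in (0, 1):                        # departure w.p. m (if busy)
        for ar in (0, 1):                     # arrival w.p. arr
            q2 = min(max(q - (dep if q > 0 else 0) + ar, 0), Q)
            p = (m if dep else 1 - m) * (arr if ar else 1 - arr)
            val += beta * p * V[q2]
    return val

V = np.zeros(Q + 1)
for _ in range(1000):                         # value iteration
    V = np.array([min(q_value(V, q, a) for a in mu) for q in range(Q + 1)])
policy = [min(mu, key=lambda a: q_value(V, q, a)) for q in range(Q + 1)]
print(policy)   # typically threshold-type: high power once the queue builds
```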
Pricing actuaries typically operate within the framework of generalized linear models (GLMs). With the upswing of data analytics, our study focuses on machine learning methods to develop full tariff plans built from both the frequency and severity of claims. We adapt the loss functions used in the algorithms such that the specific characteristics of insurance data are carefully incorporated: highly unbalanced count data with excess zeros and varying exposure on the frequency side, combined with scarce but potentially long-tailed data on the severity side. A key requirement is the need for transparent and interpretable pricing models which are easily explainable to all stakeholders. We therefore focus on machine learning with decision trees: starting from simple regression trees, we work towards more advanced ensembles such as random forests and boosted trees. We show how to choose the optimal tuning parameters for these models in an elaborate cross-validation scheme, we present visualization tools to obtain insights from the resulting models, and we evaluate the economic value of these new modeling approaches. Boosted trees outperform the classical GLMs, allowing the insurer to form profitable portfolios and to guard against potential adverse risk selection.
|
statistics
|
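A hedged sketch of a claim-frequency model in the spirit described above: a boosted-tree regressor with Poisson loss, with exposure entering as a case weight so the model targets frequency (claims per unit exposure). It uses scikit-learn (version 1.0 or later) on synthetic data; the rating factors and the weighting trick are illustrative, not the paper's implementation.

```python
# Illustrative Poisson-loss boosted trees for claim frequency.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 5))                    # synthetic rating factors
exposure = rng.uniform(0.1, 1.0, size=n)       # policy years
lam = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])    # true frequency
claims = rng.poisson(lam * exposure)           # observed claim counts

model = HistGradientBoostingRegressor(loss="poisson", max_depth=3)
# target = frequency, weight = exposure: handles varying exposure and
# the many zero counts through the Poisson deviance
model.fit(X, claims / exposure, sample_weight=exposure)
print(model.predict(X[:5]))                    # expected claims per year
```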
Donsker's theorem is perhaps the most famous invariance-principle result for Markov processes. It states that, when properly normalized, a random walk behaves asymptotically like a Brownian motion. This approach can be extended to general Markov processes whose driving parameters are taken to a limit, which can lead to insightful results in contexts like large distributed systems or queueing networks. The purpose of this paper is to assess the rate of convergence in these so-called diffusion approximations, in a queueing context. To this end, we extend the functional Stein method introduced for the Brownian approximation of Poisson processes to two simple examples: the single-server queue and the infinite-server queue. By doing so, we complete the recent applications of Stein's method to queueing systems, with results concerning the whole trajectory of the considered process, rather than its stationary distribution.
|
mathematics
|
The need to monitor industrial processes, detecting changes in process parameters in order to promptly correct problems that may arise, generates a particular area of interest. Monitoring is particularly critical and complex when the measured value falls below the sensitivity or detection limits of the measuring system, causing many of the observations to be incomplete. Such observations are called incomplete observations or left-censored data. With a high level of censoring, for example greater than 70%, the application of traditional methods for monitoring processes is not appropriate. Appropriate statistical techniques are required to assess the actual state of the process at any time. This paper proposes a way to estimate the process parameters in such cases and presents the corresponding control chart, based on an algorithm that is also presented.
|
statistics
|
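One concrete way to estimate process parameters under heavy left censoring, as discussed above, is maximum likelihood: censored points contribute the probability mass below the detection limit, observed points the usual density. This is a generic sketch, not necessarily the paper's algorithm; the detection limit below is chosen so that roughly 70% of points are censored.

```python
# MLE for a normal process under left censoring (illustrative).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=200)
lod = 11.0                                  # detection limit (~70% censored)
censored = x < lod
obs = x[~censored]

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)               # keeps sigma positive
    ll = norm.logpdf(obs, mu, sigma).sum()            # observed points
    ll += censored.sum() * norm.logcdf(lod, mu, sigma)  # censored mass
    return -ll

res = minimize(neg_loglik, x0=[np.mean(obs), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)    # estimates to feed into control-chart limits
```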
Recently, it has been found that JT gravity, which is a two-dimensional theory with bulk action $ -\frac{1}{2}\int {\mathrm d}^2x \sqrt g\phi(R+2)$, is dual to a matrix model, that is, a random ensemble of quantum systems rather than a specific quantum mechanical system. In this article, we argue that a deformation of JT gravity with bulk action $ -\frac{1}{2}\int {\mathrm d}^2x \sqrt g(\phi R+W(\phi))$ is likewise dual to a matrix model. With a specific procedure for defining the path integral of the theory, we determine the density of eigenvalues of the dual matrix model. There is a simple answer if $W(0)=0$, and otherwise a rather complicated answer.
|
high energy physics theory
|
We show that polarization singularities, generic for any complex vector fields but so far mostly studied for electromagnetic fields, appear naturally in inhomogeneous (yet monochromatic) sound and water-surface (e.g., gravity or capillary) wave fields in fluids or gases. The vector properties of these waves are described by the velocity or displacement fields characterizing the local oscillatory motion of the medium particles. We consider a number of examples revealing C-points of purely circular polarization and polarization M\"{o}bius strips (formed by major axes of polarization ellipses) around the C-points in sound and gravity wave fields. Our results (i) offer a new readily accessible platform for studies of polarization singularities and topological features of complex vector wavefields and (ii) can play an important role in characterizing vector (e.g., dipole) wave-matter interactions in acoustics and fluid mechanics.
|
physics
|
Optical imaging of genetically encoded calcium indicators is a powerful tool to record the activity of a large number of neurons simultaneously over a long period of time from freely behaving animals. However, determining the exact time at which a neuron spikes and estimating the underlying firing rate from calcium fluorescence data remains challenging, especially for calcium imaging data obtained from a longitudinal study. We propose a multi-trial time-varying $\ell_0$ penalized method to jointly detect spikes and estimate firing rates by robustly integrating evolving neural dynamics across trials. Our simulation study shows that the proposed method performs well in both spike detection and firing rate estimation. We demonstrate the usefulness of our method on calcium fluorescence trace data from two studies, with the first study showing differential firing rate functions between two behaviors and the second study showing an evolving firing rate function across trials due to learning.
|
statistics
|
Machine translation is the task of translating texts from one language to another using computers. It has been one of the major tasks in natural language processing and computational linguistics, motivated by the goal of facilitating human communication. Kurdish, an Indo-European language, has received little attention in this realm due to the language being less-resourced. Therefore, in this paper, we address the main issues in creating a machine translation system for the Kurdish language, with a focus on the Sorani dialect. We describe the available scarce parallel data suitable for training a neural machine translation model for Sorani Kurdish-English translation. We also discuss some of the major challenges in Kurdish language translation and demonstrate how fundamental text processing tasks, such as tokenization, can improve translation performance.
|
computer science
|
The Dark Side of the Universe, which includes the cosmological inflation in the early Universe, the current dark energy and dark matter, can be theoretically described by supergravity, though it is non-trivial. We recall the arguments pro and contra supersymmetry and supergravity, and define the viable supergravity models describing the Dark Side of the Universe in agreement with all current observations. Our approach to inflation is based on the Starobinsky model, the dark energy is identified with the positive cosmological constant (de Sitter vacuum), and the dark matter particle is given by the lightest supersymmetric particle identified with the supermassive gravitino. The key role is played by spontaneous supersymmetry breaking.
|
high energy physics theory
|
In this paper we analyze a recently proposed approach for the construction of antisymmetric functions for atomic and molecular systems. It is based on the assumption that the main problems with Hartree-Fock wavefunctions stem from their lack of proper permutation symmetry. This alternative building approach is based on products of a spatial function and a spin function with opposite permutation symmetries. The main argument for devising such factors is that the eigenfunctions of the non-relativistic Hamiltonian are either symmetric or antisymmetric with respect to the transposition of the variables of a pair of electrons. However, since the eigenfunctions of the non-relativistic Hamiltonian are bases for the irreducible representations of the symmetric group, they are not necessarily symmetric or antisymmetric, except in the trivial case of two electrons. We carry out a simple and straightforward general analysis of the symmetry of the eigenfunctions of the non-relativistic Hamiltonian and illustrate our conclusions by means of two exactly-solvable models of $N=2$ and $N=3$ identical interacting particles.
|
quantum physics
|
Photocatalytic reactions on TiO2 have recently gained an enormous resurgence because of various new strategies and findings that promise to drastically increase efficiency and specificity of such reactions by modifications of the titania scaffold and chemistry. In view of geometry, in particular, anodic TiO2 nanotubes have attracted wide interest, as they allow a high degree of control over the separation of photogenerated charge carriers not only in photocatalytic reactions but also in photoelectrochemical reactions. A key advantage of ordered nanotube arrays is that nanotube modifications can be embedded site specifically into the tube wall; that is, cocatalysts, doping species, or junctions can be placed at highly defined desired locations (or with a desired regular geometry or pattern) along the tube wall. This allows an unprecedented level of engineering of energetics of reaction sites for catalytic and photocatalytic reactions, which target not only higher efficiencies but also the selectivity of reactions. Many recent tube alterations are of a morphologic nature (mesoporous structures, designed faceted crystallites, hybrids, or 1D structures), but a number of color-coded (namely, black, blue, red, green, gray) modifications have attracted wide interest because of the extension of the light absorption spectrum of titania in the visible range and because unique catalytic activity can be induced. The present Perspective gives an overview of TiO2 nanotubes in photocatalysis with an emphasis on the most recent advances in the use of nanotube arrays and discusses the underlying concepts in tailoring their photocatalytic reactivity.
|
physics
|
Federated learning is an appealing framework for analyzing sensitive data from distributed health data networks. Under this framework, data partners at local sites collaboratively build an analytical model under the orchestration of a coordinating site, while keeping the data decentralized. While integrating information from multiple sources may boost statistical efficiency, existing federated learning methods mainly assume data across sites are homogeneous samples of the global population, failing to properly account for the extra variability across sites in estimation and inference. Drawing on a multi-hospital electronic health records network, we develop an efficient and interpretable tree-based ensemble of personalized treatment effect estimators to join results across hospital sites, while actively modeling for the heterogeneity in data sources through site partitioning. The efficiency of this approach is demonstrated by a study of causal effects of oxygen saturation on hospital mortality and backed up by comprehensive numerical results.
|
statistics
|
Instruments achieve sharper and finer observations of micron-in-size dust grains in the top layers of young stellar discs. To provide accurate models, we revisit the theory of dust settling for small grains, when gas stratification, dust inertia and finite correlation times for the turbulence should be handled simultaneously. We start from a balance of forces and derive distributions at steady-state. Asymptotic expansions require caution since limits do not commute. In particular, non-physical bumpy distributions appear when turbulence is purely diffusive. This excludes very short correlation times for real discs, as predicted by numerical simulations.
|
astrophysics
|
This paper introduces a new optimization model that integrates the multi-port stowage planning problem with the container relocation problem. The problem is formulated as a binary mathematical programming model that must find the containers' move sequence so that the number of relocations during the whole journey of a ship, as well as in the associated port yards, is minimized. Modeling the cargo status in a ship and in the yards with binary variables makes the problem very complex to solve by exact methods. To the best of our knowledge, this integrated model has not been developed before, as such problems are typically addressed in a partitioned or hierarchical way. A demonstration of the benefits of an integrated approach is given. The model is solved in two different commercial solvers, and the results for randomly generated instances are presented and compared to the hierarchical approach. Two heuristic approaches are proposed to quickly generate feasible solutions for warm-starting the model. Extensive computational tests are performed, and the results indicate that the solution approaches can reach optimal solutions for small instances and good-quality solutions on real-scale instances within reasonable computation time. This is a promising model to support decisions on these problems in an integrated way.
|
mathematics
|
Using topological string techniques, we compute BPS counting functions of 5d gauge theories which descend from 6d superconformal field theories upon circle compactification. Such theories are naturally organized in terms of nodes of Higgsing trees. We demonstrate that the specialization of the partition function as we move from the crown to the root of a tree is determined by homomorphisms between rings of Weyl invariant Jacobi forms. Our computations are made feasible by the fact that symmetry enhancements of the gauge theory which are manifest on the massless spectrum are inherited by the entire tower of BPS particles. In some cases, these symmetry enhancements have a nice relation to the 1-form symmetry of the associated gauge theory.
|
high energy physics theory
|
The population-attributable fraction (PAF) expresses the percentage of events that could have been prevented by eradicating a certain exposure in a certain population. It can be strongly time-dependent because either exposure incidence or excess risk may change over time. Competing events may moreover hinder the outcome of interest from being observed. Occurrence of either of these events may, in turn, prevent the exposure of interest. Estimation approaches thus need to carefully account for the timing of potential events in such highly dynamic settings. The use of multistate models (MSMs) has been widely encouraged to meet this need so as to eliminate preventable yet common types of bias, such as immortal time bias. However, certain MSM-based proposals for PAF estimation fail to fully eliminate such biases. In addition, assessing whether patients die from rather than with a certain exposure requires adequate modeling not only of the timing of events, but also of the confounding factors affecting these events and their timing. While proposed MSM approaches for confounding adjustment may be sufficient to accommodate imbalances between infected and uninfected patients present at baseline, these proposals generally fail to adequately tackle time-dependent confounding. For this, a class of generalized methods (g-methods), which includes inverse probability (IP) weighting, can be used. Because the connection between MSMs and g-methods is not readily apparent, we here provide a detailed mapping between MSM and IP of censoring weighting approaches for estimating PAFs. In particular, we illustrate that the connection between these two approaches can be made more apparent by means of a weighting-based characterization of MSM approaches, which helps both to pinpoint current shortcomings of MSM-based proposals and to enhance intuition into simple modifications that overcome these limitations.
|
statistics
|
Keyword spotting, and in particular Wake-Up-Word (WUW) detection, is a very important task for voice assistants. A very common issue with voice assistants is that they are easily activated by background noise, such as music, TV, or background speech, that accidentally triggers the device. In this paper, we propose a Speech Enhancement (SE) model adapted to the task of WUW detection that aims at increasing the recognition rate and reducing the false alarms in the presence of these types of noise. The SE model is a fully convolutional denoising auto-encoder at the waveform level and is trained using log-Mel spectrogram and waveform reconstruction losses together with the BCE loss of a simple WUW classification network. A new database has been purposely prepared for the task of recognizing the WUW in challenging conditions, containing negative samples that are phonetically very similar to the keyword. The database is extended with public databases and an exhaustive data augmentation to simulate different noises and environments. The results obtained by concatenating the SE with simple and state-of-the-art WUW detectors show that the SE does not have a negative impact on the recognition rate in quiet environments while increasing the performance in the presence of noise, especially when the SE and WUW detector are trained jointly end-to-end.
|
electrical engineering and systems science
|
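A sketch of the joint objective described above: waveform and spectral reconstruction losses for the denoising auto-encoder plus the classifier's BCE loss. A plain log-STFT magnitude stands in for the paper's log-Mel features to keep the example torch-only, and the loss weights are illustrative.

```python
# Illustrative joint SE + WUW loss (log-STFT in place of log-Mel).
import torch
import torch.nn.functional as F

def log_spec(x, n_fft=512, hop=128):
    w = torch.hann_window(n_fft)
    s = torch.stft(x, n_fft, hop_length=hop, window=w, return_complex=True)
    return torch.log(s.abs() + 1e-6)

def joint_loss(enhanced, clean, wuw_logits, wuw_labels,
               w_wave=1.0, w_spec=1.0, w_bce=0.5):    # weights are made up
    l_wave = F.l1_loss(enhanced, clean)               # waveform loss
    l_spec = F.l1_loss(log_spec(enhanced), log_spec(clean))
    l_bce = F.binary_cross_entropy_with_logits(wuw_logits, wuw_labels)
    return w_wave * l_wave + w_spec * l_spec + w_bce * l_bce

enhanced, clean = torch.randn(4, 16000), torch.randn(4, 16000)
logits, labels = torch.randn(4), torch.randint(0, 2, (4,)).float()
print(joint_loss(enhanced, clean, logits, labels))
```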
Sparsity-promoting regularizers are widely used to impose low-complexity structure (e.g., the l1-norm for sparsity) on the regression coefficients of supervised learning. In the realm of deterministic optimization, the sequence generated by iterative algorithms (such as proximal gradient descent) exhibits "finite activity identification", namely, it can identify the low-complexity structure in a finite number of iterations. However, most online algorithms (such as proximal stochastic gradient descent) do not have this property, owing to the vanishing step-size and non-vanishing variance. In this paper, by combining online algorithms with a screening rule, we show how to eliminate useless features of the iterates generated by online algorithms, and thereby enforce finite activity identification. One consequence is that when combined with any convergent online algorithm, sparsity properties imposed by the regularizer can be exploited for computational gains. Numerically, significant acceleration can be obtained.
|
computer science
|
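To make the idea above concrete, the sketch below runs proximal SGD on a lasso problem and periodically applies a screening pass that zeroes and freezes coordinates whose correlation with the residual lies well below the regularization level. The specific test used here is a simplified heuristic stand-in, not the paper's certified rule.

```python
# Proximal SGD for the lasso with a heuristic screening pass.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, step0 = 5000, 100, 0.1, 0.5
w_true = np.zeros(d); w_true[:5] = 1.0
A = rng.normal(size=(n, d))
b = A @ w_true + 0.1 * rng.normal(size=n)

def soft(v, t):                          # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(d)
active = np.ones(d, dtype=bool)
for k in range(1, 20001):
    i = rng.integers(n)
    g = (A[i] @ w - b[i]) * A[i]         # stochastic gradient
    step = step0 / np.sqrt(k)
    w[active] = soft(w[active] - step * g[active], step * lam)
    if k % 2000 == 0:                    # occasional screening pass
        corr = np.abs(A.T @ (A @ w - b)) / n
        active &= (w != 0) | (corr > 0.9 * lam)   # freeze the rest
        w[~active] = 0.0
print("identified support:", np.flatnonzero(w))
```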
Currently, we have only limited means to probe the presence of planets at large orbital separations. Foreman-Mackey et al. (FM16) searched for long-period transiting planets in the Kepler light curves using an automated pipeline. Here, we apply their pipeline, with minor modifications, to a larger sample and use updated stellar parameters from Gaia DR2. The latter boosts the stellar radii for most of the planet candidates found by FM16, revealing a number of them to be false positives. We identify 15 candidates, including two new ones. All have sizes from 0.3 to 1 $R_{\rm J}$, and all but two have periods from 2 to 10 yr. We report two main findings based on this sample. First, the planet occurrence rate for the above size and period ranges is $0.70^{+0.40}_{-0.20}$ planets per Sun-like star, with the frequency of cold Jupiters agreeing with that from radial velocity surveys. Planet occurrence rises with decreasing planet size, roughly describable as $dN/d\log R \propto R^{\alpha}$ with $\alpha = -1.6^{+1.0}_{-0.9}$, i.e., Neptune-sized planets are some four times more common than Jupiter-sized ones. Second, five out of our 15 candidates orbit stars with known transiting planets at shorter periods, including one with five inner planets. We interpret this high incidence rate to mean: (1) almost all our candidates should be genuine; (2) across a large orbital range (from $\sim 0.05$ to a few astronomical units), mutual inclinations in these systems are at most a few degrees; and (3) large outer planets exist almost exclusively in systems with small inner planets.
|
astrophysics
|
This paper considers a new bootstrap procedure to estimate the distribution of high-dimensional $\ell_p$-statistics, i.e. the $\ell_p$-norms of the sum of $n$ independent $d$-dimensional random vectors with $d \gg n$ and $p \in [1, \infty]$. We provide a non-asymptotic characterization of the sampling distribution of $\ell_p$-statistics based on Gaussian approximation and show that the bootstrap procedure is consistent in the Kolmogorov-Smirnov distance under mild conditions on the covariance structure of the data. As an application of the general theory we propose a bootstrap hypothesis test for simultaneous inference on high-dimensional mean vectors. We establish its asymptotic correctness and consistency under high-dimensional alternatives, and discuss the power of the test as well as the size of associated confidence sets. We illustrate the bootstrap and testing procedure numerically on simulated data.
|
mathematics
|
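A minimal sketch of the procedure above for $p=\infty$: a Gaussian-multiplier bootstrap of the max statistic used for simultaneous mean inference. Re-weighting the centered data by standard-normal multipliers yields a bootstrap distribution whose $1-\alpha$ quantile calibrates the test; sample sizes and the nominal level are illustrative.

```python
# Gaussian-multiplier bootstrap for the l_inf statistic with d >> n.
import numpy as np

rng = np.random.default_rng(0)
n, d, B, alpha = 50, 1000, 2000, 0.05
X = rng.standard_normal((n, d))            # data under H0: mean zero

T = np.abs(X.sum(axis=0)).max() / np.sqrt(n)   # observed max statistic
Xc = X - X.mean(axis=0)                    # center for the bootstrap
g = rng.standard_normal((B, n))            # N(0,1) multipliers
T_boot = np.abs(g @ Xc).max(axis=1) / np.sqrt(n)
crit = np.quantile(T_boot, 1 - alpha)
print("T =", T, "critical value =", crit, "reject:", T > crit)
```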
We revisit certain natural algebraic transformations on the space of 3D topological quantum field theories (TQFTs) called "Galois conjugations." Using a notion of multiboundary entanglement entropy (MEE) defined for TQFTs on compact 3-manifolds with disjoint boundaries, we give these abstract transformations additional physical meaning. In the process, we prove a theorem on the invariance of MEE along orbits of the Galois action in the case of arbitrary Abelian theories defined on any link complement in $S^3$. We then give a generalization to non-Abelian TQFTs living on certain infinite classes of torus link complements. Along the way, we find an interplay between the modular data of non-Abelian TQFTs, the topology of the ambient spacetime, and the Galois action. These results are suggestive of a deeper connection between entanglement and fusion.
|
high energy physics theory
|
An Ohm-Rush algebra $R \rightarrow S$ is called \emph{McCoy} if for any zero-divisor $f$ in $S$, its content $c(f)$ has nonzero annihilator in $R$; the name refers to McCoy's classical result for $S=R[x]$. We answer a question of Nasehpour by giving an example of a faithfully flat Ohm-Rush algebra with the McCoy property that is not a weak content algebra. However, we show that a faithfully flat Ohm-Rush algebra is a weak content algebra iff $R/I \rightarrow S/IS$ is McCoy for all radical (resp. prime) ideals $I$ of $R$. When $R$ is Noetherian (or has the more general \emph{fidel (A)} property), we show that it is equivalent that $R/I \rightarrow S/IS$ is McCoy for all ideals $I$ of $R$.
|
mathematics
|
Chiral dynamics is a pretty mature field. Nonetheless, there are many exciting new developments. In this opening talk, I consider S-wave, isospin-zero pion-pion scattering and the calculation of the width of the lightest baryon resonances at two loops. New insights into the chiral dynamics of charm-strange mesons are discussed as well as recent results on the flavor decomposition of the pion-nucleon $\sigma$-term. I end with a short wish-list of lattice QCD tests pertinent to chiral dynamics.
|
high energy physics phenomenology
|
We present an algorithm that is well suited to find the linear layout of the multiple flow-direction network (a directed acyclic graph) for an efficient implicit computation of the erosion term in landscape evolution models. The time complexity of the algorithm varies linearly with the number of nodes in the domain, making it very efficient. The resulting numerical scheme allows us to achieve accurate steady-state solutions in conditions of high erosion rates leading to heavily dissected landscapes. We also establish that, contrary to single flow-direction methods such as D8, the D$\infty$ multiple flow-direction method follows the theoretical prediction of the linear stability analysis and correctly captures the transition from the smooth to the channelized regime. We finally show that the obtained numerical solutions follow the theoretical temporal variation of the mean elevation.
|
physics
|
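A topological ordering is one linear-time way to obtain such a linear layout of the flow DAG, so that each node is visited only after all of its upslope donors, which is exactly what an implicit sweep of the erosion term needs. The sketch below uses Kahn's algorithm on a toy drainage graph; the paper's specific bookkeeping may differ.

```python
# Linear layout of a flow DAG via Kahn's algorithm, O(V + E).
from collections import deque

def linear_layout(receivers):
    """receivers: dict node -> list of downslope receiver nodes."""
    indeg = {u: 0 for u in receivers}
    for u in receivers:
        for v in receivers[u]:
            indeg[v] += 1
    queue = deque(u for u in receivers if indeg[u] == 0)  # local maxima
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in receivers[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order               # upstream-to-downstream sweep order

# toy example: cells a and b both drain into c, which drains to the outlet
print(linear_layout({"a": ["c"], "b": ["c"], "c": ["out"], "out": []}))
```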
We reexamine the electronic structure of graphene on a SiC substrate by angle-resolved photoemission spectroscopy. We directly observed multiple clones of the Dirac cone, in addition to the ones previously attributed to reconstruction. The locations, relative distances, and anisotropy of the Dirac-cone replicas fully agree with the moir\'e pattern of the graphene-SiC heterostructure. Our results provide a straightforward example of moir\'e potential modulation in engineering electronic structures with Dirac fermions.
|
condensed matter
|
The bosonic representation of the half-string ghost in the full-string basis is examined in full. The proof that the comma 3-vertex (matter and ghost) in the bosonic representation satisfies the Ward-like identities is established, thus completing the proof of the Bose-Fermi equivalence in half-string theory.
|
high energy physics theory
|
We consider latent Gaussian fields for modelling spatial dependence in the context of both spatial point patterns and areal data, providing two different applications. The inhomogeneous Log-Gaussian Cox Process model is specified to describe a seismic sequence that occurred in Greece, resorting to the Stochastic Partial Differential Equations approach. The Besag-York-Mollie model is fitted for disease mapping of the Covid-19 infection in the north of Italy. These models both belong to the class of Bayesian hierarchical models with latent Gaussian fields whose posterior is not available in closed form. Therefore, the inference is performed with the Integrated Nested Laplace Approximation, which provides accurate and relatively fast analytical approximations to the posterior quantities of interest.
|
statistics
|
The problem of the position and spin in relativistic quantum mechanics is analyzed in detail. It is definitively shown that the position and spin operators in the Foldy-Wouthuysen representation (but not in the Dirac one) are quantum-mechanical counterparts of the classical position and spin variables. The probabilistic interpretation is valid only for Foldy-Wouthuysen wave functions. The relativistic spin operators are discussed. The spin-orbit interaction does not exist for a free particle if the conventional operators of the orbital angular momentum and the rest-frame spin are used. Alternative definitions of the orbital angular momentum and the spin are based on noncommutative geometry, do not satisfy standard commutation relations, and can allow the spin-orbit interaction.
|
quantum physics
|
We report the recovery of the returning Halley-type comet 12P/Pons-Brooks using the 4.3 m Lowell Discovery Telescope, at a heliocentric distance of 11.89 au. Comparative analysis with a dust model suggests that the comet may have been active since $\sim30$ au from the Sun. We derive a nucleus radius of $17\pm6$ km from the nucleus photometry, though this number is likely an overestimate due to contamination from dust and gas. Continued monitoring is encouraged in anticipation of the comet's forthcoming perihelion in 2024 April.
|
astrophysics
|
Charge and spin transport in spintronics devices can be described by a spin diffusion equation suitable for modelling scales much larger than the scattering and atomic scales. This work concerns the coarse-graining procedure used to compute the coefficients of the diffusion equation, which are sensitive to the details of individual atoms and impurities. We show with two simple examples that in spintronics devices which have both a spin-orbit interaction and magnetization, standard coarse graining can easily produce diffusion equations which fail to conserve electronic charge. The same failure can occur in systems with both a spin-orbit interaction and orbital polarization. We show that linear response theory, coupled with the self-consistent Born approximation and ladder diagrams, offers an improved way of calculating diffusion equations. We show that the resulting equations satisfy a Ward-Takahashi identity that guarantees charge conservation.
|
condensed matter
|
Topological insulators are materials that have a gapped bulk energy spectrum, but contain protected in-gap states appearing at their surface. These states exhibit remarkable properties such as unidirectional propagation and robustness to noise that offer an opportunity to improve the performance and scalability of quantum technologies. For quantum applications, it is essential that the topological states are indistinguishable. Here we report high-visibility quantum interference of single photon topological states in an integrated photonic circuit. Two topological boundary-states, initially at opposite edges of a coupled waveguide array, are brought into proximity, where they interfere and undergo a beamsplitter operation. We observe $93.1\pm2.8\%$ visibility Hong-Ou-Mandel (HOM) interference, a hallmark non-classical effect that is at the heart of linear optics-based quantum computation. Our work shows that it is feasible to generate and control highly indistinguishable single photon topological states, opening pathways to enhanced photonic quantum technology with topological properties, and to study quantum effects in topological materials.
|
quantum physics
|
The ability to provide very high data rates is a significant benefit of optical wireless communication (OWC) systems. In this paper, an optical wireless downlink that uses wavelength division multiple access (WDMA) is designed for a data centre. Red, yellow, green and blue (RYGB) laser diodes (LDs) are used as transmitters to provide a high modulation bandwidth. A WDMA scheme based on RYGB LDs is used to provide communication to multiple racks at the same time from the same light unit. Two types of optical receivers are examined in this study: an angle diversity receiver (ADR) with three branches and a 10-pixel imaging receiver (ImR). The proposed data centre achieves high data rates with a higher signal-to-interference-plus-noise ratio (SINR) for each rack while using simple on-off keying (OOK) modulation.
|
electrical engineering and systems science
|
An efficient simulation-based methodology is proposed for the rolling-window estimation of state space models, called particle rolling Markov chain Monte Carlo (MCMC) with double block sampling. In our method, which is based on Sequential Monte Carlo (SMC), particles are sequentially updated to approximate the posterior distribution for each window by learning new information and discarding old information from observations. The particles are refreshed with an MCMC algorithm when the importance weights degenerate. To avoid degeneracy, which is crucial for reducing the computation time, we introduce a block sampling scheme and generate multiple candidates by an algorithm based on the conditional SMC. The theoretical discussion shows that the proposed methodology with a nested structure can be expressed as SMC sampling for an augmented space, which provides its justification. The computational performance is evaluated in illustrative examples, showing that the posterior distributions of the model parameters are accurately estimated. The proofs and additional discussions (algorithms and experimental results) are provided in the Supplementary Material.
|
statistics
|
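For background, the sketch below shows the basic SMC building block that methods like the one above refine: a bootstrap particle filter with resampling triggered by a low effective sample size (ESS), run on a toy linear-Gaussian state-space model. The rolling-window, block-sampling, and MCMC-refresh layers of the paper sit on top of updates of this kind.

```python
# Bootstrap particle filter with ESS-triggered resampling (toy model).
import numpy as np

rng = np.random.default_rng(0)
T, N = 100, 500
phi, q, r = 0.9, 0.5, 1.0                      # AR(1) state, Gaussian noise
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):                          # simulate data
    x[t] = phi * x[t - 1] + np.sqrt(q) * rng.normal()
    y[t] = x[t] + np.sqrt(r) * rng.normal()

p = rng.normal(size=N)                         # particles
w = np.full(N, 1.0 / N)
for t in range(1, T):
    p = phi * p + np.sqrt(q) * rng.normal(size=N)      # propagate
    w *= np.exp(-0.5 * (y[t] - p) ** 2 / r)            # reweight
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:                   # ESS check
        p = p[rng.choice(N, size=N, p=w)]              # resample
        w = np.full(N, 1.0 / N)
print("filtered mean at T:", np.sum(w * p), "truth:", x[-1])
```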
We focus on two types of coherent states, the coherent states of multi-graviton states and the coherent states of giant graviton states, in the context of gauge/gravity correspondence. We conveniently use a phase shift operator and its actions on the superpositions of these coherent states. We find $N$-state Schr\"odinger cat states which approach the one-row Young tableau states, with the fidelity between them asymptotically reaching 1 at large $N$. The quantum Fisher information of these states is proportional to the variance of the excitation energy of the underlying states, and characterizes the localizability of the states in the angular direction in the phase space. We analyze the correlation and entanglement between gravitational degrees of freedom using different regions of the phase space plane in bubbling AdS. The correlation between two entangled rings in the phase space plane is related to the area of the annulus between the two rings. We also analyze two types of noisy coherent states, which can be viewed as interpolated states that interpolate between a pure coherent state in the noiseless limit and a maximally mixed state in the large noise limit.
|
high energy physics theory
|
A number of current approaches to quantum and neuromorphic computing use superconductors as the basis of their platform or as a measurement component, and will need to operate at cryogenic temperatures. Semiconductor systems are typically proposed as a top-level control in these architectures, with low-temperature passive components and intermediary superconducting electronics acting as the direct interface to the lowest-temperature stages. The architectures, therefore, require a low-power superconductor-semiconductor interface, which is not currently available. Here we report a superconducting switch that is capable of translating low-voltage superconducting inputs directly into semiconductor-compatible (above 1,000 mV) outputs at kelvin-scale temperatures (1 K or 4 K). To illustrate the capabilities in interfacing superconductors and semiconductors, we use it to drive a light-emitting diode (LED) in a photonic integrated circuit, generating photons at 1 K from a low-voltage input and detecting them with an on-chip superconducting single-photon detector. We also characterize our device's timing response (less than 300 ps turn-on, 15 ns turn-off), output impedance (greater than 1 M$\Omega$), and energy requirements (0.18 fJ/$\mu$m$^2$, 3.24 mV/nW).
|
physics
|
Sparse representation using over-complete dictionaries has been shown to produce good-quality results in various image processing tasks. Dictionary learning algorithms have made it possible to engineer data-adaptive dictionaries, which have promising applications in image compression and image enhancement. The most common sparse dictionary learning algorithms iteratively use the techniques of matching pursuit and K-SVD for sparse coding and dictionary learning, respectively. While this technique produces good results, it requires a large number of iterations to converge to an optimal solution. In this article, we use a closed-form stabilized convex optimization technique for both sparse coding and dictionary learning. The approach provides the best possible dictionary and the sparsest representation, resulting in minimum reconstruction error. Once the image is reconstructed from the compressively sensed samples, we use adaptive frequency- and spatial-filtering techniques to move towards exact image recovery. The results clearly show that the proposed algorithm provides much better reconstruction than conventional sparse dictionary techniques for a fixed number of iterations. The proposed algorithm reaches the optimal solution in a number of iterations that depends inversely on the amount of detail present in the image. Consequently, high PSNR and low MSE are obtained using the proposed algorithm for our compressive sensing framework.
|
electrical engineering and systems science
|
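For contrast with the closed-form approach above, here is a compact version of the conventional alternating scheme it improves upon: orthogonal matching pursuit (OMP) for sparse coding alternated with a least-squares dictionary update (MOD-style, rather than full K-SVD, to keep the sketch short). Sizes, sparsity level, and iteration count are illustrative.

```python
# Baseline alternating dictionary learning: OMP coding + MOD update.
import numpy as np

rng = np.random.default_rng(0)
n, K, m, s = 16, 32, 500, 3        # signal dim, atoms, signals, sparsity
Y = rng.normal(size=(n, m))        # training patches as columns
D = rng.normal(size=(n, K))
D /= np.linalg.norm(D, axis=0)

def omp(D, y, s):
    resid, idx = y.copy(), []
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1]); x[idx] = coef
    return x

for _ in range(10):                # alternate sparse coding / update
    X = np.stack([omp(D, Y[:, j], s) for j in range(m)], axis=1)
    D = Y @ np.linalg.pinv(X)      # MOD dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12
X = np.stack([omp(D, Y[:, j], s) for j in range(m)], axis=1)
print("reconstruction MSE:", np.mean((Y - D @ X) ** 2))
```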
We study holomorphic blocks in the three dimensional ${\mathcal N}=2$ gauge theory that describes the $\mathbb{CP}^1$ model. We apply exact WKB methods to analyze the line operator identities associated to the holomorphic blocks and derive the analytic continuation formulae of the blocks as the twisted mass and FI parameter are varied. The main technical result we utilize is the connection formula for the ${}_1\phi_1$ $q$-hypergeometric function. We show in detail how the $q$-Borel resummation methods reproduce the results obtained previously by using block-integral methods.
|
high energy physics theory
|
The layered MnBi$_{2n}$Te$_{3n+1}$ family represents the first intrinsic antiferromagnetic topological insulator (AFM TI, protected by a combined time-reversal and half-lattice-translation symmetry $S$) ever discovered, providing an ideal platform to explore novel physics such as the quantum anomalous Hall effect at elevated temperature and axion electrodynamics. Recent angle-resolved photoemission spectroscopy (ARPES) experiments on this family have revealed that all terminations exhibit (nearly) gapless topological surface states (TSSs) within the AFM state, violating the definition of the AFM TI, as the surfaces being studied should be $S$-breaking and should open a gap. Here we explain this curious paradox using a surface-bulk band hybridization picture. Combining ARPES and first-principles calculations, we prove that only an apparent gap is opened by hybridization between TSSs and bulk bands. The observed (nearly) gapless features are consistently reproduced by tight-binding simulations where TSSs are coupled to a Rashba-split bulk band. The Dirac-cone-like spectral features are actually of bulk origin, and thus not sensitive to the $S$-breaking at the AFM surfaces. This picture explains the (nearly) gapless behaviour found on both Bi$_2$Te$_3$- and MnBi$_2$Te$_4$-terminated surfaces and is applicable to all terminations of the MnBi$_{2n}$Te$_{3n+1}$ family. Our findings highlight, for the first time, the role of band hybridization, superior to magnetism in this case, in shaping the general surface band structure in magnetic topological materials.
|
condensed matter
|
We consider the problem of guaranteeing that the transient voltages and currents stay within prescribed bounds in Direct Current (DC) microgrids, when the controller does not have access to accurate system dynamics due to the load being unknown and/or time-varying. To achieve this, we propose an optimization based controller design using control barrier functions. We show that the proposed controller has a decentralized structure and is robust with respect to the uncertainty in the precise values of the system parameters, such as the load.
|
electrical engineering and systems science
|
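A minimal sketch of the control-barrier-function mechanism on a toy DC node: the nominal input is modified as little as possible so that a barrier certifying the voltage bounds stays nonnegative. With a single affine constraint the usual QP reduces to a closed-form clamp, so no solver is needed; all parameters are illustrative, not a real microgrid.

```python
# Toy CBF controller keeping a capacitor voltage inside [v_min, v_max].
import numpy as np

C, G = 1e-3, 0.5                    # capacitance, load conductance
v_min, v_max, alpha = 45.0, 55.0, 50.0

def safe_input(v, u_nom):
    # dynamics: C * dv/dt = -G*v + u, barrier h(v) = (v-v_min)*(v_max-v)
    # CBF condition: h'(v) * (-G*v + u) / C >= -alpha * h(v)
    h = (v - v_min) * (v_max - v)
    dh = (v_min + v_max) - 2.0 * v           # h'(v)
    if abs(dh) < 1e-9:
        return u_nom                         # gradient vanishes: no constraint
    bound = (-alpha * h * C + dh * G * v) / dh
    # dh > 0 gives a lower bound on u, dh < 0 an upper bound
    return max(u_nom, bound) if dh > 0 else min(u_nom, bound)

v, dt = 46.0, 1e-6
for _ in range(5000):
    u = safe_input(v, u_nom=100.0)           # deliberately aggressive command
    v += dt * (-G * v + u) / C               # forward-Euler plant step
print("voltage in bounds:", v_min <= v <= v_max, round(v, 2))
```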