Columns: text (string, length 11 to 9.77k characters); label (string, length 2 to 104 characters)
Multi-fidelity Gaussian process modeling is a common approach for reducing the extensive computational demands of algorithms such as optimization, calibration and uncertainty quantification. Adaptive sampling for multi-fidelity Gaussian processes is a challenging task because we seek to estimate not only the next sampling location of the design variable but also the fidelity level of the simulator. This issue is often addressed by including the cost of the simulator as another factor in the search criterion, in conjunction with the uncertainty reduction metric. In this work, we extend the traditional design-of-experiments framework for the multi-fidelity Gaussian process by partitioning the prediction uncertainty based on the fidelity level and the associated cost of execution. In addition, we utilize the concept of the Believer, which quantifies the effect of adding an exploratory design point on the Gaussian process uncertainty prediction. We demonstrate our framework using academic examples as well as an industrial application involving the steady-state thermodynamic operating point of a fluidized bed process.
statistics
Harnessing the unique features of topological materials for the development of a new generation of topology-based devices is a challenge of paramount importance. Using Floquet scattering theory combined with atomistic models, we study the interplay between laser illumination, spin and topology in a two-dimensional material with spin-orbit coupling. Starting from a topological phase, we show how laser illumination can selectively disrupt the topological edge states depending on their spin. This is manifested by the generation of pure spin currents and spin-polarized charge photocurrents under linearly and circularly polarized laser illumination, respectively. Our results open a path for the generation and control of spin-polarized photocurrents.
condensed matter
Tetraphenyl-butadiene (TPB) is an organic fluorescent chemical compound generally used as a wavelength shifter thanks to its extremely high efficiency in converting ultra-violet photons into visible light. A common method to use TPB with detectors sensitive to visible light, such as photomultiplier tubes (PMTs), is to deposit thin layers on the device window. To obtain effective TPB layers, different procedures can be used. In this work a specific evaporation technique adopted to coat the 8 in. convex windows of photomultiplier tubes is presented. It consists of evaporating TPB by means of a Knudsen cell, which allows strict control of the process, and of a rotating sample support, which guarantees the uniformity of the deposition. Simulation results and experimental tests demonstrate the effectiveness of this evaporation technique in terms of deposition uniformity and light conversion efficiency.
physics
Exchange $\mu-\tau$ symmetry in the effective Majorana neutrino mass matrix predicts maximal mixing for atmospheric neutrino oscillations, together with a null mixing that cannot be straightforwardly identified with the reactor neutrino oscillation mixing angle, $\theta_{13}$, unless a specific hierarchy is assumed for the mass eigenstates. Otherwise, a nonzero value for $\theta_{13}$ is predicted already at the level of an exact $\mu-\tau$ symmetry. In this case, the solar neutrino mixing and mass scale, as well as a corrected atmospheric mixing, arise from the breaking of the symmetry. I present a mass matrix texture proposal for normal hierarchy that realizes this scenario, where the smallness of $\tan\theta_{13}$ is naturally given by the parameter $\epsilon\approx\sqrt{\Delta m^2_{sol}/\Delta m^2_{ATM}}$. The texture also allows for the introduction of CP violation within the expected region without further constraints.
high energy physics phenomenology
In this work, we focus on variational Bayesian inference for the sparse Deep Neural Network (DNN) modeled under a class of spike-and-slab priors. Given a pre-specified sparse DNN structure, the corresponding variational posterior contraction rate is characterized, revealing a trade-off between the variational error and the approximation error, both of which are determined by the network structural complexity (i.e., depth, width and sparsity). However, the optimal network structure, which strikes the balance in the aforementioned trade-off and yields the best rate, is generally unknown in reality. Therefore, our work further develops an {\em adaptive} variational inference procedure that can automatically select a reasonably good (data-dependent) network structure achieving the best contraction rate, without knowing the optimal network structure. In particular, when the true function is H{\"o}lder smooth, the adaptive variational inference is capable of attaining the (near-)optimal rate without knowledge of the smoothness level. The above rate still suffers from the curse of dimensionality, which motivates the teacher-student setup, i.e., the true function is itself a sparse DNN model, under which the rate depends only logarithmically on the input dimension.
mathematics
The mixture cure model for analyzing survival data is characterized by the assumption that the population under study is divided into a group of subjects who will experience the event of interest over some finite time horizon and another group of cured subjects who will never experience the event irrespective of the duration of follow-up. When using the Bayesian paradigm for inference in survival models with a cure fraction, it is common practice to rely on Markov chain Monte Carlo (MCMC) methods to sample from posterior distributions. Although computationally feasible, the iterative nature of MCMC often implies long sampling times to explore the target space with chains that may suffer from slow convergence and poor mixing. Furthermore, extra efforts have to be invested in diagnostic checks to monitor the reliability of the generated posterior samples. An alternative strategy for fast and flexible sampling-free Bayesian inference in the mixture cure model, termed LPSMC, is suggested in this paper by combining Laplace approximations and penalized B-splines. A logistic regression model is assumed for the cure proportion and a Cox proportional hazards model with a P-spline approximated baseline hazard is used to specify the conditional survival function of susceptible subjects. Laplace approximations to the conditional latent vector are based on analytical formulas for the gradient and Hessian of the log-likelihood, resulting in a substantial speed-up in approximating posterior distributions. Results show that LPSMC is an appealing alternative to classic MCMC for approximate Bayesian inference in standard mixture cure models.
statistics
Age-related Macular Degeneration (AMD) is a progressive visual impairment affecting millions of individuals. Since there is currently no treatment for the disease, the only means of improving the lives of affected individuals is via assistive technologies. In this paper we propose a novel and effective methodology to accurately generate a parametric model of the perceptual deficit caused by the physiological deterioration of a patient's retina due to AMD. Based on the parameters of the model, a mechanism is developed to simulate the patient's perception as a result of the disease. This simulation can effectively convey the perceptual impact and its progression to the patient's eye doctor. In addition, we propose a mixed-reality apparatus and interface to allow the patient to recover functional vision and to compensate for the perceptual loss caused by the physiological damage. The results obtained by the proposed approach show the superiority of our framework over state-of-the-art low-vision systems.
electrical engineering and systems science
One of the primary challenges of system identification is determining how much data is necessary to adequately fit a model. Non-asymptotic characterizations of the performance of system identification methods provide this knowledge. Such characterizations are available for several algorithms performing open-loop identification. Oftentimes, however, data is collected in closed-loop. Application of open-loop identification methods to closed-loop data can result in biased estimates. One method used by subspace identification techniques to eliminate these biases involves first fitting a long-horizon autoregressive model and then performing model reduction. The asymptotic behavior of such algorithms is well characterized, but the non-asymptotic behavior is not. This work provides a non-asymptotic characterization of one particular variant of these algorithms. More specifically, we provide non-asymptotic upper bounds on the generalization error of the produced model, as well as high-probability bounds on the difference between the produced model and the finite-horizon Kalman filter.
electrical engineering and systems science
Currently, the most accurate and stable clocks use optical interrogation of either a single ion or an ensemble of neutral atoms confined in an optical lattice. Here, we demonstrate a new optical clock system based on an array of individually trapped neutral atoms with single-atom readout, merging many of the benefits of ion and lattice clocks as well as creating a bridge to recently developed techniques in quantum simulation and computing with neutral atoms. We evaluate single-site resolved frequency shifts and short-term stability via self-comparison. Atom-by-atom feedback control enables direct experimental estimation of laser noise contributions. Results agree well with an ab initio Monte Carlo simulation that incorporates finite temperature, projective read-out, laser noise, and feedback dynamics. Our approach, based on a tweezer array, also suppresses interaction shifts while retaining a short dead time, all in a comparatively simple experimental setup suited for transportable operation. These results establish the foundations for a third optical clock platform and provide a novel starting point for entanglement-enhanced metrology, quantum clock networks, and applications in quantum computing and communication with individual neutral atoms that require optical clock state control.
physics
Alzheimer's disease (AD) is a progressive and incurable neurodegenerative disease that destroys brain cells and causes patients to lose their memory. Early detection can protect patients from further damage to brain cells and hence avoid permanent memory loss. In the past few years, various automatic tools and techniques have been proposed for the diagnosis of AD. Several methods focus on fast, accurate and early detection of the disease to minimize the damage to patients' mental health. Although machine learning and deep learning techniques have significantly improved medical imaging systems for AD, providing diagnostic performance close to the human level, the main problem faced during multi-class classification is the presence of highly correlated features in the brain structure. In this paper, we propose a smart and accurate way of diagnosing AD based on a two-dimensional deep convolutional neural network (2D-DCNN) using an imbalanced three-dimensional MRI dataset. Experimental results on the Alzheimer's Disease Neuroimaging Initiative magnetic resonance imaging (MRI) dataset confirm that the proposed 2D-DCNN model is superior in terms of accuracy, efficiency, and robustness. The model classifies MRI into three categories: AD, mild cognitive impairment, and normal control, and achieves 99.89% classification accuracy with imbalanced classes. The proposed model exhibits a noticeable improvement in accuracy compared to state-of-the-art methods.
electrical engineering and systems science
High-dimensional streaming data are becoming increasingly ubiquitous in many fields. They often lie in multiple low-dimensional subspaces, and the manifold structures may change abruptly over time due to pattern shifts or the occurrence of anomalies. However, the problem of detecting such structural changes in a real-time manner has not been well studied. To fill this gap, we propose a dynamic sparse subspace learning (DSSL) approach for online structural change-point detection of high-dimensional streaming data. A novel multiple structural change-point model is proposed and shown to be equivalent to maximizing a posterior under certain conditions. The asymptotic properties of the estimators are investigated. The penalty coefficients in our model can be selected by the AMDL criterion based on historical data. An efficient Pruned Exact Linear Time (PELT) based method is proposed for online optimization and change-point detection. The effectiveness of the proposed method is demonstrated through a simulation study and a real case study using gesture data for motion tracking.
statistics
As the size of quantum devices continues to grow, the development of scalable methods to characterise and diagnose noise is becoming an increasingly important problem. Recent methods have shown how to efficiently estimate Hamiltonians in principle, but they are poorly conditioned and can only characterize the system up to a scalar factor, making them difficult to use in practice. In this work we present a Bayesian methodology, called Bayesian Hamiltonian Learning (BHL), that addresses both of these issues by making use of any or all of the following: well-characterised experimental control of Hamiltonian couplings, the preparation of multiple states, and the availability of any prior information for the Hamiltonian. Importantly, BHL can be used online as an adaptive measurement protocol, updating estimates and their corresponding uncertainties as experimental data become available. In addition, we show that multiple input states and control fields enable BHL to reconstruct Hamiltonians that are neither generic nor spatially local. We demonstrate the scalability and accuracy of our method with numerical simulations on up to 100 qubits. These practical results are complemented by several theoretical contributions. We prove that a $k$-body Hamiltonian $H$ whose correlation matrix has a spectral gap $\Delta$ can be estimated to precision $\varepsilon$ with only $\tilde{O}\bigl(n^{3k}/(\varepsilon \Delta)^{3/2}\bigr)$ measurements. We use two subroutines that may be of independent interest: first, an algorithm to approximate a steady state of $H$ starting from an arbitrary input that converges factorially in the number of samples; and second, an algorithm to estimate the expectation values of $m$ Pauli operators with weight $\le k$ to precision $\varepsilon$ using only $O(\varepsilon^{-2} 3^k \log m)$ measurements, which quadratically improves a recent result by Cotler and Wilczek.
quantum physics
We present a critical assessment of the SN1987A supernova cooling bound on axions and other light particles. Core-collapse simulations used in the literature to substantiate the bound omitted from the calculation the envelope exterior to the proto-neutron star (PNS). As a result, the only source of neutrinos in these simulations was, by construction, a cooling PNS. We show that if the canonical delayed neutrino mechanism failed to explode SN1987A, and if the pre-collapse star was rotating, then an accretion disk would form that could explain the late-time ($t\gtrsim5$ sec) neutrino events. Such an accretion disk would be a natural feature if SN1987A was a collapse-induced thermonuclear explosion. Axions do not cool the disk and do not affect its neutrino output, provided the disk is optically-thin to neutrinos, as it naturally is. These considerations cast doubt on the supernova cooling bound.
high energy physics phenomenology
Pauli spin blockade (PSB) has long been an important tool for spin read-out in double quantum dot (DQD) systems with interdot tunneling $t$. In this paper we show that the blockade is lifted if the two dots experience distinct effective magnetic fields caused by site-dependent g-tensors $g_L$ and $g_R$ for the left and right dot, and that this effect can be more pronounced than the leakage current due to the spin-orbit interaction (SOI) via spin-flip tunneling and the hyperfine interaction (HFI) of the electron spin with the host nuclear spins. Using analytical results obtained in special parameter regimes, we show that information about both the out-of-plane and in-plane g-factors of the dots can be inferred from characteristic features of the magneto-transport curve. For a symmetric DQD, we predict a pronounced maximum in the leakage current at the characteristic out-of-plane magnetic field $B^* = t/(\mu_B \sqrt{g_z^L g_z^R})$, which we term the g-tensor resonance of the system. Moreover, we extend the results to contain the effects of strong SOI and argue that in this more general case the leakage current carries information about the g-tensor components and SOI of the system.
condensed matter
Natural-language prompts have recently been used to coax pretrained language models into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al., 2020). For example, language models retain factual knowledge from their training corpora that can be extracted by asking them to "fill in the blank" in a sentential prompt. However, where does this prompt come from? We explore the idea of learning prompts by gradient descent -- either fine-tuning prompts taken from previous work, or starting from random initialization. Our prompts consist of "soft words," i.e., continuous vectors that are not necessarily word type embeddings from the language model. Furthermore, for each task, we optimize a mixture of prompts, learning which prompts are most effective and how to ensemble them. Across multiple English LMs and tasks, our approach hugely outperforms previous methods, showing that the implicit factual knowledge in language models was previously underestimated. Moreover, this knowledge is cheap to elicit: random initialization is nearly as good as informed initialization.
computer science
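To make the idea of "soft words" in the preceding abstract concrete, here is a minimal PyTorch sketch of prompt tuning by gradient descent: only the continuous prompt vectors are optimized while the language model stays frozen. The tiny embedding-plus-linear "LM", the prompt length, and the random data are illustrative placeholders, not the authors' models or tasks.

```python
import torch
import torch.nn as nn

# Toy stand-in for a frozen pretrained LM: embeddings plus a linear "vocabulary head".
# In practice this would be a real pretrained model whose weights stay frozen.
vocab_size, dim, prompt_len = 1000, 64, 5
embed = nn.Embedding(vocab_size, dim)
lm_head = nn.Linear(dim, vocab_size)
for p in list(embed.parameters()) + list(lm_head.parameters()):
    p.requires_grad_(False)                      # the LM itself is never updated

# The "soft words": free continuous vectors, not tied to any vocabulary entry.
soft_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)   # only the prompt is trained

def loss_for(batch_tokens, target_ids):
    # Prepend the soft prompt to the (frozen) token embeddings of each input.
    tok_emb = embed(batch_tokens)                             # (B, T, dim)
    prompt = soft_prompt.unsqueeze(0).expand(tok_emb.size(0), -1, -1)
    seq = torch.cat([prompt, tok_emb], dim=1)                 # (B, prompt_len+T, dim)
    # Score the "blank": here we simply pool the sequence and predict the missing word.
    logits = lm_head(seq.mean(dim=1))                         # (B, vocab_size)
    return nn.functional.cross_entropy(logits, target_ids)

# One illustrative gradient step on random data.
tokens = torch.randint(0, vocab_size, (8, 12))
targets = torch.randint(0, vocab_size, (8,))
opt.zero_grad()
loss_for(tokens, targets).backward()
opt.step()
```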
We construct a general class of (small) $\mathcal{N}=(0,4)$ superconformal solutions in M-theory of the form AdS$_3\times S^3/\mathbb{Z}_k\times \text{CY}_2$, foliated over an interval. These solutions describe M-strings in M5-brane intersections. The M-strings support $(0,4)$ quiver CFTs that are in correspondence with our backgrounds. We compute the central charge and show that it scales linearly with the total number of M-strings. We introduce momentum charge, thus allowing for a description in terms of M(atrix) theory. Upon reduction to Type IIA, we find a new class of solutions with four Poincar\'e supercharges of the form AdS$_2\times S^3\times \text{CY}_2\times \mathcal{I}$, that we extend to the massive IIA case. We generalise our constructions to provide a complete class of AdS$_3$ solutions to M-theory with (0,4) supersymmetry and SU(2) structure. We also construct new AdS$_2\times S^3\times \text{M}_4\times \mathcal{I}$ solutions to massive IIA, with M$_4$ a 4d K\"ahler manifold and four Poincar\'e supercharges.
high energy physics theory
We give two constructions of hyperbolic metrics on Heegaard splittings satisfying certain conditions that only use tools from the deformation theory of Kleinian groups. In particular, we do not rely on the solution of the Geometrization Conjecture by Perelman. Both constructions apply to random Heegaard splittings with asymptotic probability 1. The first construction provides explicit uniform bilipschitz models for the hyperbolic metric. The second one gives a general criterion for a curve on a Heegaard surface to be a short geodesic for the hyperbolic structure; such curves are abundant in a random setting. As an application of the model metrics, we discuss the coarse growth rate of geometric invariants, such as diameter and injectivity radius, and questions about arithmeticity and commensurability in families of random 3-manifolds.
mathematics
In this paper we study (-1) classes for the blow-up of n-dimensional projective space at several points. We generalize Noether's inequality, and we prove that all (-1) classes are in bijective correspondence with the orbit of the Weyl group action on one exceptional divisor, following Nagata's original approach. This correspondence was first noticed by Laface and Ugaglia. Moreover, we prove that the irreducibility condition in the definition of (-1) classes can be replaced by the numerical condition of having positive intersection with all (-1) classes of smaller degree via the Mukai pairing.
mathematics
Two novel robust nonlinear stochastic full pose (i.e., attitude and position) estimators on the Special Euclidean Group SE(3) are proposed using the available uncertain measurements. The resulting estimators utilize the basic structure of deterministic pose estimators, adapting it to the stochastic sense. The proposed estimators for six-degrees-of-freedom (DOF) pose estimation consider the group velocity vectors to be contaminated with constant bias and Gaussian random noise, unlike nonlinear deterministic pose estimators, which disregard the noise component in the estimator derivations. The proposed estimators ensure that the closed-loop error signals are semi-globally uniformly ultimately bounded in mean square. The efficiency and robustness of the proposed estimators are demonstrated by numerical results which test the estimators against high levels of noise and bias associated with the group velocity and body-frame measurements as well as large initialization error. Keywords: Nonlinear stochastic pose filter, pose observer, position, attitude, Ito, stochastic differential equations, Brownian motion process, adaptive estimate, feature, inertial measurement unit, inertial vision system, 6 DOF, IMU, SE(3), SO(3), orientation, landmark, Gaussian, noise.
electrical engineering and systems science
Moderating content in social media platforms is a formidable challenge due to the unprecedented scale of such systems, which typically handle billions of posts per day. Some of the largest platforms such as Facebook blend machine learning with manual review of platform content by thousands of reviewers. Operating a large-scale human review system poses interesting and challenging methodological questions that can be addressed with operations research techniques. We investigate the problem of optimally operating such a review system at scale using ideas from queueing theory and simulation.
computer science
Starting from a spin-fermion model for the cuprate superconductors, we obtain an effective interaction for the charge carriers by integrating out the spin degrees of freedom. Our model predicts a quantum critical point for the superconducting interaction coupling, which sets up a threshold for the onset of superconductivity in the system. We show that the physical value of this coupling is below this threshold, thus explaining why there is no superconducting phase for the undoped system. Then, by including doping, we find a dome-shaped dependence of the critical temperature as charge carriers are added to the system, in agreement with the experimental phase diagram. The superconducting critical temperature is calculated without adjusting any free parameters and yields, at optimal doping, $T_c \sim 45$ K, which is comparable to the experimental data.
condensed matter
Model-Based Image Reconstruction (MBIR) methods significantly enhance the quality of computed tomographic (CT) reconstructions relative to analytical techniques, but are limited by high computational cost. In this paper, we propose a multi-agent consensus equilibrium (MACE) algorithm for distributing both the computation and memory of MBIR reconstruction across a large number of parallel nodes. In MACE, each node stores only a sparse subset of views and a small portion of the system matrix, and each parallel node performs a local sparse-view reconstruction which, based on repeated feedback from other nodes, converges to the global optimum. Our distributed approach can also incorporate advanced denoisers as priors to enhance reconstruction quality. In this case, we obtain a parallel solution to the serial framework of plug-and-play (PnP) priors, which we call MACE-PnP. In order to make MACE practical, we introduce a partial update method that eliminates nested iterations and prove that it converges to the same global solution. Finally, we validate our approach on a distributed memory system with real CT data. We also demonstrate an implementation of our approach on a massive supercomputer that can perform large-scale reconstruction in real time.
electrical engineering and systems science
In this article, we investigate the historical series of the total number of deaths per month in Brazil since 2015 using time series analysis techniques, in order to assess whether the COVID-19 pandemic caused any change in the series' generating mechanism. The results obtained so far indicate that there was no statistically significant impact.
statistics
Particle filters are a class of algorithms that are used for "tracking" or "filtering" in real-time for a wide array of time series models. Despite their comprehensive applicability, particle filters are not always the tool of choice for many practitioners, due to how difficult they are to implement. This short article presents PF, a C++ header-only template library that provides fast implementations of many different particle filters. A tutorial along with an extensive fully-worked example is provided.
statistics
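The PF library described in the preceding abstract is a C++ template library; as a language-agnostic illustration of the algorithm it implements, below is a hedged NumPy sketch of a plain bootstrap particle filter on a toy one-dimensional linear-Gaussian state-space model. The model parameters and particle count are arbitrary choices for the example, not part of the library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model: x_t = 0.9 x_{t-1} + process noise, y_t = x_t + observation noise.
phi, q, r, T, N = 0.9, 1.0, 0.5, 100, 500

# Simulate some data to filter.
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t-1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

# Bootstrap particle filter: propagate with the transition, weight by the likelihood, resample.
particles = rng.normal(0, 1, N)
estimates = []
for t in range(T):
    particles = phi * particles + rng.normal(0, np.sqrt(q), N)   # propagate
    logw = -0.5 * (y[t] - particles) ** 2 / r                    # Gaussian log-likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()                  # normalize stably
    estimates.append(np.sum(w * particles))                      # filtered mean
    idx = rng.choice(N, size=N, p=w)                             # multinomial resampling
    particles = particles[idx]

print("final filtered mean:", estimates[-1])
```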
The advances in the fields of scanning probe microscopy, scanning tunneling spectroscopy, point contact spectroscopy and point contact Andreev reflection spectroscopy to study the properties of conventional and quantum materials at cryogenic conditions have prompted the development of nanopositioners and nanoscanners with enhanced spatial resolution. Piezoelectric-actuator stacks as nanopositioners with working strokes $>100~\mu\mathrm{m}$ and positioning resolution $\sim$(1-10) nm are desirable for both basic research and industrial applications. However, information on the performance of most commercial piezoelectric actuators in a cryogenic environment and in the presence of magnetic fields in excess of 5\,T is generally not available. In particular, the magnitude, rate and the associated hysteresis of the piezo-displacement at cryogenic temperatures are the most relevant parameters that determine whether a particular piezoelectric actuator can be used as a nanopositioner. Here, the design and realization of an experimental set-up based on interferometric techniques to characterize a commercial piezoelectric actuator over a temperature range of $2~\mathrm{K}\leq{T}\leq260~\mathrm{K}$ and magnetic fields up to 6\,T is presented. The studied piezoelectric actuator has a maximum displacement of $30~\mu\mathrm{m}$ at room temperature for a maximum driving voltage of 75\,V, which reduces to $1.2~\mu\mathrm{m}$ with an absolute hysteresis of $\left(9.1\pm3.3\right)~\mathrm{nm}$ at $T=2\,\mathrm{K}$. The magnetic field is shown to have no substantial effect on the piezo properties of the studied piezoelectric-actuator stack.
condensed matter
The thermoelectric behaviour of quark-gluon plasma has been studied within the framework of an effective kinetic theory by adopting a quasiparticle model to incorporate the thermal medium effects. The thermoelectric response of the medium has been quantified in terms of the Seebeck coefficient. The dependence of the collisional aspects of the QCD medium on the Seebeck coefficient has been estimated by utilizing relaxation time approximation and Bhatnagar-Gross-Krook collision kernels in the effective Boltzmann equation. The thermoelectric coefficient is seen to depend on the quark chemical potential and collision aspects of the medium. Besides, the thermoelectric effect has been explored in a magnetized medium and the respective transport coefficients, such as magnetic field-dependent Seebeck coefficient and Nernst coefficient, have been estimated. The impacts of hot QCD medium interactions incorporated through the effective model and the magnetic field on the thermoelectric responses of the medium have been observed to be more prominent in the temperature regimes not very far from the transition temperature.
high energy physics phenomenology
Present-day and next-generation accelerators, particularly for applications in driving wakefield-based schemes, require longitudinal beam shaping and attendant longitudinal characterization for experimental optimization. Here we present a diagnostic method which reconstructs the longitudinal beam profile at the location of a wakefield-generating source. The methods presented derive the longitudinal profile of a charged particle beam solely from measurement of the time-resolved centroid energy change due to wakefield effects. The reconstruction technique is based on a deconvolution algorithm that is fully generalizable to any analytically or numerically calculable Green's function for the wakefield excitation mechanism. This method is shown to yield precise features in the longitudinal current distribution reconstruction. We demonstrate the accuracy and efficacy of this technique using simulations and experimental examples, in both plasmas and dielectric structures, and compare to the experimentally measured longitudinal beam parameters as available. The limits of resolution and applicability to relevant scenarios are also examined.
physics
The mode I crack tip asymptotic response of a solid characterised by strain gradient plasticity is investigated. It is found that elastic strains dominate plastic strains near the crack tip, and thus the Cauchy stress and the strain state are given asymptotically by the elastic K-field. This crack tip elastic zone is embedded within an annular elasto-plastic zone. This feature is predicted by both a crack tip asymptotic analysis and a finite element computation. When small scale yielding applies, three distinct regimes exist: an outer elastic K field, an intermediate elasto-plastic field, and an inner elastic K field. The inner elastic core significantly influences the crack opening profile. Crack tip plasticity is suppressed when the material length scale $\ell$ of the gradient theory is on the order of the plastic zone size estimation, as dictated by the remote stress intensity factor. A generalized J-integral for strain gradient plasticity is stated and used to characterise the asymptotic response ahead of a short crack. Finite element analysis of a cracked three point bend specimen reveals that the crack tip elastic zone persists in the presence of bulk plasticity and an outer J-field.
condensed matter
Quantum scale symmetry is the realization of scale invariance in a quantum field theory. No parameters with dimension of length or mass are present in the quantum effective action. Quantum scale symmetry is generated by quantum fluctuations via the presence of fixed points for running couplings. As for any global symmetry, the ground state or cosmological state may be scale invariant or not. Spontaneous breaking of scale symmetry leads to massive particles and predicts a massless Goldstone boson. A massless particle spectrum follows from scale symmetry of the effective action only if the ground state is scale symmetric. Approximate scale symmetry close to a fixed point leads to important predictions for observations in various areas of fundamental physics. We review consequences of scale symmetry for particle physics, quantum gravity and cosmology. For particle physics, scale symmetry is closely linked to the tiny ratio between the Fermi scale of weak interactions and the Planck scale for gravity. For quantum gravity, scale symmetry is associated to the ultraviolet fixed point which allows for a non-perturbatively renormalizable quantum field theory for all known interactions. The interplay between gravity and particle physics at this fixed point permits the prediction of couplings of the standard model or other "effective low energy models" for momenta below the Planck mass. In particular, quantum gravity determines the ratio of Higgs boson mass and top quark mass. In cosmology, approximate scale symmetry explains the almost scale-invariant primordial fluctuation spectrum which is at the origin of all structures in the universe. The pseudo-Goldstone boson of spontaneously broken approximate scale symmetry may be responsible for dynamical dark energy and a solution of the cosmological constant problem.
high energy physics theory
Quantum Key Distribution (QKD) allows distant parties to exchange cryptographic keys with unconditional security by encoding information on the degrees of freedom of photons. Polarization encoding has been extensively used in QKD implementations along free-space, optical fiber and satellite-based links. However, the polarization encoders used in such implementations are unstable, expensive, complex and can even exhibit side-channels that undermine the security of the implemented protocol. Here we propose a self-compensating polarization encoder based on a Lithium Niobate phase modulator inside a Sagnac interferometer and implement it using only standard telecommunication commercial off-the-shelf (COTS) components. Our polarization encoder combines a simple design and high stability, reaching an intrinsic quantum bit error rate as low as 0.2%. Since realization is possible from the 800 nm to the 1550 nm band using COTS components, our polarization modulator is a promising solution for free-space, fiber and satellite-based QKD.
quantum physics
We develop a method based on Markov chains to model fluorescence, absorption, and scattering in nanophotonic systems. We show that the method reproduces Beer-Lambert's Law and Kirchhoff's Law, but also can be used to analyze deviations from these laws when some of their assumptions are violated. We show how to use the method to analyze a luminescent solar concentrator (LSC) based on semiconductor nanocrystals.
physics
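As a toy illustration of the kind of result quoted in the preceding abstract (not the authors' implementation), the sketch below models a photon passing through thin slabs as a Markov chain with an absorbing "absorbed" state and recovers the exponential attenuation of the Beer-Lambert law; the absorption coefficient and discretization are arbitrary choices for the example.

```python
import numpy as np

# Discretize a slab into n thin layers; in each layer a photon is absorbed with
# probability p = alpha * dx and otherwise moves on to the next layer.
alpha, L, n = 2.0, 1.0, 1000          # absorption coefficient, thickness, number of layers
dx = L / n
p_abs = alpha * dx

# States: 0..n-1 = "inside layer i", n = "transmitted", n+1 = "absorbed" (both absorbing).
P = np.zeros((n + 2, n + 2))
for i in range(n):
    P[i, i + 1 if i + 1 < n else n] = 1 - p_abs   # survive the layer
    P[i, n + 1] = p_abs                            # absorbed in this layer
P[n, n] = 1.0
P[n + 1, n + 1] = 1.0

# Start in layer 0 and iterate the chain until all probability mass has settled.
state = np.zeros(n + 2); state[0] = 1.0
for _ in range(n + 10):
    state = state @ P

transmitted = state[n]
print("Markov chain transmission :", transmitted)
print("Beer-Lambert exp(-alpha L):", np.exp(-alpha * L))   # the two agree as n grows
```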
We study non-equilibrium order parameter dynamics of the non-linear sigma model in the large $N$ limit, using Keldysh formalism. We provide a scheme for obtaining stable numerical solutions of the Keldysh saddle point equations, and use them to study the order parameter dynamics of the model either following a ramp, or in the presence of a periodic drive. We find that the transient dynamics of the order parameter in the presence of a periodic drive is controlled by the drive frequency displaying the phenomenon of synchronization. We also study the approach of the order parameter to its steady state value following a ramp and find out the effective temperature of the steady state. We chart out the steady state temperature of the ordered phase as a function of ramp time and amplitude, and discuss the relation of our results to experimentally realizable spin models.
condensed matter
Assessing the economic impact of the COVID-19 pandemic and public health policies is essential for a rapid recovery. In this paper, we analyze the impact of mobility contraction on furloughed workers and excess deaths in Italy. We provide a link between the reduction of mobility and excess deaths, confirming that the first countrywide lockdown was effective in curtailing the COVID-19 epidemic. Our analysis points out that a mobility contraction of 10% leads to a mortality reduction of 5%, whereas it leads to an increase of 50% in full-time-equivalent furloughed workers. Based on our results, we propose a prioritization policy for the most advanced stage of the COVID-19 vaccination campaign, considering the unemployment risk of the healthy active population. Keywords: COVID-19 mortality; Furlough schemes; Economic impact of lockdowns; Vaccination rollout; Unemployment risk
physics
Inspired by previous work of Kusner and Bauer-Kuwert, we prove a strict inequality between the Willmore energies of two surfaces and their connected sum in the context of isoperimetric constraints. Building on previous work by Keller-Mondino-Rivi\`ere, our strict inequality leads to existence of minimisers for the isoperimetric constrained Willmore problem in every genus, provided the minimal energy lies strictly below $8\pi$. Besides the geometric interest, such a minimisation problem has been studied in the literature as a simplified model in the theory of lipid bilayer cell membranes.
mathematics
Turbulent flows within and over sparse canopies are investigated using direct numerical simulations. We focus on the effect of the canopy on the background turbulence, the part of the flow that remains once the element-induced flow is filtered out. In channel flows, the distribution of the total stress is linear with height. Over smooth walls, the total stress is only the `fluid stress' $\tau_f$, the sum of the viscous and the Reynolds shear stresses. In canopies, in turn, there is an additional contribution from the canopy drag, which can dominate within. We find that, for sparse canopies, the ratio of the viscous and the Reynolds shear stresses in $\tau_f$ at each height is similar to that over smooth-walls, even within the canopy. From this, a height-dependent scaling based on $\tau_f$ is proposed. Using this scaling, the background turbulence within the canopy shows similarities with turbulence over smooth walls. This suggests that the background turbulence scales with $\tau_f$, rather than with the conventional scaling based on the total stress. This effect is essentially captured when the canopy is substituted by a drag force that acts on the mean velocity profile alone, aiming to produce the correct $\tau_f$, without the discrete presence of the canopy elements acting directly on the fluctuations. The proposed mean-only forcing is shown to produce better estimates for the turbulent fluctuations compared to a conventional, homogeneous-drag model. The present results thus suggest that a sparse canopy acts on the background turbulence primarily through the change it induces on the mean velocity profile, which in turn sets the scale for turbulence, rather than through a direct interaction of the canopy elements with the fluctuations. The effect of the element-induced flow, however, requires the representation of the individual canopy elements.
physics
In this work we extend the notion of universal quantum Hamiltonians to the setting of translationally-invariant systems. We present a construction that allows a two-dimensional spin lattice with nearest-neighbour interactions, open boundaries, and translational symmetry to simulate any local target Hamiltonian---i.e. to reproduce the whole of the target system within its low-energy subspace to arbitrarily-high precision. Since this implies the capability to simulate non-translationally-invariant many-body systems with translationally-invariant couplings, any effect characteristic of systems with external disorder, e.g. many-body localization, can also occur within the low-energy Hilbert space sector of translationally-invariant systems. We then sketch a variant of the universal lattice construction optimized for simulating translationally-invariant target Hamiltonians. Finally we prove that qubit Hamiltonians consisting of Heisenberg or XY interactions of varying interaction strengths restricted to the edges of a connected translationally-invariant graph embedded in $\mathbb{R}^D$ are universal, and can efficiently simulate any geometrically local Hamiltonian in $\mathbb{R}^D$.
quantum physics
We show the existence of H\"older continuous periodic solutions with compact support in time of the Boussinesq equations with partial viscosity. The H\"older regularity of the solutions we construct is anisotropic, which is compatible with the partial viscosity of the equations.
mathematics
The Asynchronous Time-based Image Sensor (ATIS) and the Spiking Neural Network Architecture (SpiNNaker) are both neuromorphic technologies that "unconventionally" use binary spikes to represent information. The ATIS produces spikes to represent the change in light falling on the sensor, and SpiNNaker is a massively parallel computing platform that asynchronously sends spikes between cores for processing. In this demonstration we show these two hardware platforms used together to perform a visual tracking task. We aim to show the hardware and software architecture that integrates the ATIS and SpiNNaker in a robot middleware that makes processing agnostic to the platform (CPU or SpiNNaker). We also aim to describe the algorithm and why it is suitable for the "unconventional" sensor and processing platform, including the advantages as well as the challenges faced.
computer science
This is the first installment of a series of three papers in which we describe a method to determine higher-point correlation functions in one-loop open-superstring amplitudes from first principles. In this first part, we exploit the synergy between the cohomological features of pure-spinor superspace and the pure-spinor zero-mode integration rules of the one-loop amplitude prescription. This leads to the study of a rich variety of multiparticle superfields which are local, have covariant BRST variations, and are compatible with the particularities of the pure-spinor amplitude prescription. Several objects related to these superfields, such as their non-local counterparts and the so-called BRST pseudo-invariants, are thoroughly reviewed and put into new light. Their properties will turn out to be mysteriously connected to products of one-loop worldsheet functions in packages dubbed "generalized elliptic integrands", whose prominence will be seen in the later parts of this series of papers.
high energy physics theory
We demonstrate several techniques to encourage practical uses of neural networks for fluid flow estimation. In the present paper, three perspectives which remain challenges for applications of machine learning to fluid dynamics are considered: 1. interpretability of machine-learned results, 2. bulking out of training data, and 3. generalizability of neural networks. For interpretability, we first demonstrate two methods to observe the internal procedure of neural networks, i.e., visualization of hidden layers and application of gradient-weighted class activation mapping (Grad-CAM), applied to canonical fluid flow estimation problems -- $(1)$ drag coefficient estimation of a cylinder wake and $(2)$ velocity estimation from particle images. Both approaches are shown to provide clear evidence of the great capability of machine learning-based estimation. We then utilize several techniques to bulk out training data for super-resolution analysis and temporal prediction for cylinder wake and NOAA sea surface temperature data, demonstrating that sufficient training of neural networks with a limited amount of training data can be achieved for fluid flow problems. The generalizability of machine learning models is also discussed by accounting for the perspectives of inter/extrapolation of training data, considering super-resolution of wakes behind two parallel cylinders. We find that various flow patterns generated by complex interaction between two cylinders can be reconstructed well, even for test configurations regarding the distance factor. The present paper can be a significant step toward practical uses of neural networks for both laminar and turbulent flow problems.
physics
In this paper, we introduce equivalence testing procedures for standardized effect sizes in a linear regression. We show how to define valid hypotheses and calculate p-values for these tests. Such tests are necessary to confirm the lack of a meaningful association between an outcome and predictors. A simulation study is conducted to examine type I error rates and statistical power. We also compare using equivalence testing as part of a frequentist testing scheme with an alternative Bayesian testing approach. The results indicate that the proposed equivalence test is a potentially useful tool for "testing the null."
statistics
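The preceding abstract concerns equivalence tests for standardized regression effects. As a hedged illustration of the general two-one-sided-tests (TOST) logic, and not necessarily the exact procedure proposed in the paper, the sketch below tests whether a standardized coefficient lies within a user-chosen equivalence margin delta; the simulated data and the margin are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data: y depends on x1 but (essentially) not on x2.
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.5 * x1 + 0.02 * x2 + rng.normal(size=n)

# Standardize predictors and outcome so the coefficient of x2 is a standardized effect size.
X = np.column_stack([np.ones(n), x1, x2])
Xs = X.copy(); Xs[:, 1:] = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)
ys = (y - y.mean()) / y.std()

beta, res, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
dof = n - Xs.shape[1]
sigma2 = res[0] / dof
se = np.sqrt(sigma2 * np.linalg.inv(Xs.T @ Xs)[2, 2])   # standard error of the x2 coefficient

# TOST: the null is |beta_2| >= delta; rejecting both one-sided nulls establishes equivalence.
delta = 0.1                                   # smallest effect size of interest (analyst's choice)
t_lower = (beta[2] + delta) / se              # tests beta_2 > -delta
t_upper = (beta[2] - delta) / se              # tests beta_2 <  delta
p_lower = 1 - stats.t.cdf(t_lower, dof)
p_upper = stats.t.cdf(t_upper, dof)
p_tost = max(p_lower, p_upper)                # the TOST p-value
print(f"beta_2 = {beta[2]:.3f}, TOST p-value = {p_tost:.4f}")
```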
We propose a novel technique for the combination of multi-jet merged simulations in the five-flavor scheme with calculations for the production of b-quark associated final states in the four-flavor scheme. We show the equivalence of our algorithm to the FONLL method at the fixed-order and logarithmic accuracy inherent to the matrix-element and parton-shower simulation employed in the multi-jet merging. As a first application we discuss $Zb\bar{b}$ production at the Large Hadron Collider.
high energy physics phenomenology
We investigate the computational complexity of separating two distinct vertices s and z by vertex deletion in a temporal graph. In a temporal graph, the vertex set is fixed but the edges have (discrete) time labels. Since the corresponding Temporal (s, z)-Separation problem is NP-hard, it is natural to investigate whether relevant special cases exist that are computationally tractable. To this end, we study restrictions of the underlying (static) graph---there we observe polynomial-time solvability in the case of bounded treewidth---as well as restrictions concerning the "temporal evolution" along the time steps. Systematically studying partially novel concepts in this direction, we identify sharp borders between tractable and intractable cases.
computer science
The deconvolution, or cleaning, of radio interferometric images often involves computing model visibilities from a list of clean components, in order that the contribution from the model can be subtracted from the observed visibilities. This step is normally performed using a forward fast Fourier transform (FFT), followed by a 'degridding' step that interpolates over the uv plane to construct the model visibilities. An alternative approach is to calculate the model visibilities directly by summing over all the members of the clean component list, which is a more accurate method that can also be much slower. However, if the clean components are used to construct a model image on the surface of the celestial sphere then the model visibilities can be generated directly from the wavelet coefficients, and the sparsity of the model means that most of these coefficients are zero, and can be ignored. We have constructed a prototype imager that uses a spherical-wavelet representation of the model image to generate model visibilities during each major cycle, and find empirically that the execution time scales with the wavelet resolution level, J, as O(1.07^J), and with the number of distinct clean components, N_C, as O(N_C). The prototype organises the wavelet coefficients into a tree structure, and does not store or process the zero wavelet coefficients.
astrophysics
Joint ptycho-tomography is a powerful computational imaging framework to recover the refractive properties of a 3D object while relaxing the requirements for probe overlap that is common in conventional phase retrieval. We use an augmented Lagrangian scheme for formulating the constrained optimization problem and employ an alternating direction method of multipliers (ADMM) for the joint solution. ADMM allows the problem to be split into smaller and computationally more manageable subproblems: ptychographic phase retrieval, tomographic reconstruction, and regularization of the solution. We extend our ADMM framework with plug-and-play (PnP) denoisers by replacing the regularization subproblem with a general denoising operator based on machine learning. While the PnP framework enables integrating such learned priors as denoising operators, tuning of the denoiser prior remains challenging. To overcome this challenge, we propose a tuning parameter to control the effect of the denoiser and to accelerate the solution. In our simulations, we demonstrate that our proposed framework with parameter tuning and learned priors generates high-quality reconstructions under limited and noisy measurement data.
electrical engineering and systems science
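To illustrate the plug-and-play ADMM structure mentioned in the preceding abstract, here is a minimal sketch on a toy linear inverse problem: the regularization subproblem is replaced by a generic denoiser, with Gaussian smoothing standing in for a learned denoiser. The subsampling operator, the penalty rho, and the denoiser strength are illustrative assumptions, not the paper's ptycho-tomographic operators or tuning.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy inverse problem: recover x from noisy subsampled measurements y = A x + noise.
n = 64
x_true = np.zeros((n, n)); x_true[20:44, 20:44] = 1.0
mask = rng.random((n, n)) < 0.4                      # random subsampling operator

def A(x):  return mask * x
def At(y): return mask * y                           # A is a diagonal 0/1 operator, so A^T = A

y = A(x_true) + 0.05 * rng.normal(size=(n, n)) * mask

# Plug-and-play ADMM: data-fit x-update, "denoiser" z-update, dual update.
rho, sigma_d = 1.0, 1.5                              # ADMM penalty and denoiser strength (tuning knobs)
x = np.zeros((n, n)); z = np.zeros((n, n)); u = np.zeros((n, n))
for it in range(50):
    # x-update: solve (A^T A + rho I) x = A^T y + rho (z - u); A^T A is diagonal here.
    x = (At(y) + rho * (z - u)) / (mask.astype(float) + rho)
    # z-update: the regularization subproblem replaced by a generic denoiser (the PnP step).
    z = gaussian_filter(x + u, sigma=sigma_d)
    # dual update.
    u = u + x - z

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```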
We report a photoacoustic spectroscopy setup with a high-power mid-infrared frequency comb as the light source. The setup is used in broadband spectroscopy of radiocarbon methane. Due to the high sensitivity of a cantilever-enhanced photoacoustic cell and the high power light source, we can reach a detection limit below 100 ppb in a broadband measurement with a sample volume of only a few milliliters. The first infrared spectrum of $^{14}\text{CH}_4$ is reported and given a preliminary assignment. The results lay a foundation for the development of optical detection systems for radiocarbon methane.
physics
Based on an equivalent model for quantizers with noisy inputs recently presented in [35], we propose a method of digital dithering at the transmitter that may significantly reduce the resolution requirements of MIMO downlink digital-to-analog converters (DACs). We use this equivalent model to analyze the effect of the dither probability density function (PDF), and show that the uniform PDF produces an optimal (linear) result. Relative to other methods of DAC quantization error reduction, our approach has the benefits of low computational complexity, compatibility with all existing standards, and blindness (no need for channel state information).
electrical engineering and systems science
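A small numerical sketch of the classical fact underlying the preceding abstract, namely that a uniform dither PDF linearizes a quantizer: with subtractive dither drawn uniformly over one quantization step, the quantization error becomes approximately uniform and uncorrelated with the input. The step size and test signal are arbitrary; this is not the paper's transmitter design.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.25                                   # quantization step of the (coarse) DAC

def quantize(v, step):
    return step * np.round(v / step)

x = 0.4 * np.sin(2 * np.pi * 0.01 * np.arange(20000))   # a deterministic test input

# Undithered quantizer: the error is a deterministic function of the input (distortion).
e_plain = quantize(x, delta) - x

# Subtractive dithering: add uniform dither over one step before quantizing, subtract it after.
d = rng.uniform(-delta / 2, delta / 2, size=x.size)
e_dith = (quantize(x + d, delta) - d) - x

for name, e in [("undithered", e_plain), ("uniform dither", e_dith)]:
    corr = np.corrcoef(x, e)[0, 1]
    print(f"{name:15s} error std = {e.std():.4f}, corr(input, error) = {corr:+.3f}")
# With uniform dither the error is ~uniform on [-delta/2, delta/2] and nearly uncorrelated with x.
```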
We investigate certain word-construction games with variable turn orders. In these games, Alice and Bob take turns on choosing consecutive letters of a word of fixed length, with Alice winning if the result lies in a predetermined target language. The turn orders that result in a win for Alice form a binary language that is regular whenever the target language is, and we prove some upper and lower bounds for its state complexity based on that of the target language.
computer science
Estimating the proportion of signals hidden among a large number of noise variables is of interest in many scientific inquiries. In this paper, we consider realistic but theoretically challenging settings with arbitrary covariance dependence between variables. We define the mean absolute correlation (MAC) to measure the overall dependence level and investigate a family of estimators across the full range of MAC. We make explicit the joint effect of MAC dependence and signal sparsity on the performance of this family of estimators and discover that no single estimator in the family is most powerful under all MAC dependence levels. Informed by this theoretical insight, we propose a new estimator that better adapts to arbitrary covariance dependence. The proposed method compares favorably to several existing methods in extensive finite-sample settings with strong to weak covariance dependence and real dependence structures from genetic association studies.
statistics
Hypervolume is widely used in the evolutionary multi-objective optimization (EMO) field to evaluate the quality of a solution set. For a solution set with $\mu$ solutions on a Pareto front, a larger hypervolume means a better solution set. Investigating the distribution of the solution set with the largest hypervolume is an important topic in EMO, which is the so-called hypervolume optimal $\mu$-distribution. Theoretical results have shown that the $\mu$ solutions are uniformly distributed on a linear Pareto front in two dimensions. However, the $\mu$ solutions are not always uniformly distributed on a single-line Pareto front in three dimensions. They are only uniform when the single-line Pareto front has one constant objective. In this paper, we further investigate the hypervolume optimal $\mu$-distribution in three dimensions. We consider the line- and plane-based Pareto fronts. For the line-based Pareto fronts, we extend the single-line Pareto front to two-line and three-line Pareto fronts, where each line has one constant objective. For the plane-based Pareto fronts, the linear triangular and inverted triangular Pareto fronts are considered. First, we show that the $\mu$ solutions are not always uniformly distributed on the line-based Pareto fronts. The uniformity depends on how the lines are combined. Then, we show that a uniform solution set on the plane-based Pareto front is not always optimal for hypervolume maximization. It is locally optimal with respect to a $(\mu+1)$ selection scheme. Our results can help researchers in the community to better understand and utilize the hypervolume indicator.
computer science
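For readers unfamiliar with the hypervolume indicator discussed in the preceding abstract, here is a hedged sketch of its computation in the simplest two-objective (minimization) case, by sorting the non-dominated points and summing rectangles against a reference point; the linear front and reference point are illustrative only and do not reproduce the paper's three-dimensional analysis.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume (to be maximized) dominated by a set of 2-D points
    under minimization of both objectives, relative to reference point `ref`."""
    pts = np.asarray(points, dtype=float)
    # Keep only points that dominate the reference point.
    pts = pts[(pts[:, 0] < ref[0]) & (pts[:, 1] < ref[1])]
    # Sort by the first objective; on a non-dominated set the second objective then decreases.
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangle added by this point
        prev_f2 = f2
    return hv

# mu solutions spread uniformly on the linear front f1 + f2 = 1 (both objectives minimized).
mu = 5
front = np.stack([np.linspace(0, 1, mu), 1 - np.linspace(0, 1, mu)], axis=1)
print(hypervolume_2d(front, ref=(1.1, 1.1)))
```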
Neural end-to-end TTS can generate very high-quality synthesized speech, even close to human recordings for in-domain text. However, it performs unsatisfactorily when scaled to challenging test sets. One concern is that the encoder-decoder network with attention adopts an autoregressive generative sequence model that suffers from "exposure bias". To address this issue, we propose two novel methods, which learn to predict the future by improving the agreement between forward and backward decoding sequences. The first is achieved by introducing divergence regularization terms into the model training objective to reduce the mismatch between the two directional models, namely L2R and R2L (which generate targets from left-to-right and right-to-left, respectively). The second operates at the decoder level and exploits future information during decoding. In addition, we employ a joint training strategy to allow forward and backward decoding to improve each other in an interactive process. Experimental results show that our proposed methods, especially the second one (bidirectional decoder regularization), lead to a significant improvement in both robustness and overall naturalness, outperforming the baseline (a revised version of Tacotron2) with a MOS gap of 0.14 on a challenging test set, and achieving close to human quality (4.42 vs. 4.49 in MOS) on a general test set.
electrical engineering and systems science
Port-based teleportation (PBT) is a teleportation protocol that employs a number of Bell pairs and a joint measurement to enact an approximate input-output identity channel. Replacing the Bell pairs with a different multi-qubit resource state changes the enacted channel and allows the PBT protocol to simulate qubit channels beyond the identity. The channel resulting from PBT using a general resource state is consequently of interest. In this work, we fully characterise the Choi matrix of the qubit channel simulated by the PBT protocol in terms of its resource state. We also characterise the PBT protocol itself, by finding a description of the map from the resource state to the Choi matrix of the channel that is simulated by using that resource state. Finally, we exploit our expressions to show improved simulations of the amplitude damping channel by means of PBT with a finite number of ports.
quantum physics
The proper orthogonal decomposition (POD) is a powerful classical tool in fluid mechanics used, for instance, for model reduction and extraction of coherent flow features. However, its applicability to high-resolution data, as produced by three-dimensional direct numerical simulations, is limited owing to its computational complexity. Here, we propose a wavelet-based adaptive version of the POD (the wPOD), in order to overcome this limitation. The amount of data to be analyzed is reduced by compressing them using biorthogonal wavelets, yielding a sparse representation while conveniently providing control of the compression error. Numerical analysis shows how the distinct error contributions of wavelet compression and POD truncation can be balanced under certain assumptions, allowing us to efficiently process high-resolution data from three-dimensional simulations of flow problems. Using a synthetic academic test case, we compare our algorithm with the randomized singular value decomposition. Furthermore, we demonstrate the ability of our method analyzing data of a 2D wake flow and a 3D flow generated by a flapping insect computed with direct numerical simulation.
physics
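As a toy sketch of the two ingredients combined in the preceding abstract, the code below wavelet-compresses each snapshot of a synthetic data matrix with PyWavelets and then performs a plain SVD-based POD on the compressed data. The hard thresholding, wavelet choice, and synthetic snapshots are illustrative stand-ins for the paper's adaptive biorthogonal compression and simulation data.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: a few travelling waves plus noise, one snapshot per column.
nx, nt = 256, 80
x = np.linspace(0, 2 * np.pi, nx)
t = np.linspace(0, 1, nt)
snapshots = (np.sin(3 * x[:, None] - 5 * t[None, :])
             + 0.5 * np.sin(7 * x[:, None] + 2 * t[None, :])
             + 0.01 * rng.normal(size=(nx, nt)))

def wavelet_compress(u, wavelet="bior4.4", keep=0.1):
    """Keep only the largest `keep` fraction of wavelet coefficients of a 1-D snapshot."""
    coeffs = pywt.wavedec(u, wavelet)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    thresh = np.quantile(flat, 1 - keep)
    coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
    return pywt.waverec(coeffs, wavelet)[: u.size]

# Compress every snapshot, then perform the POD (here: a plain SVD) on the compressed data.
compressed = np.column_stack([wavelet_compress(snapshots[:, k]) for k in range(nt)])
U, s, Vt = np.linalg.svd(compressed, full_matrices=False)
print("leading POD energy fractions:", (s[:5] ** 2 / np.sum(s ** 2)).round(3))
```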
Recent experiments reported an unusual nematic behavior of heavily hole-doped pnictides $A$Fe$_{2}$As$_{2}$, with alkali $A$ = Rb, Cs. In contrast to the $B_{2g}$ nematic order of the parent $Ae$Fe$_{2}$As$_{2}$ compounds (with alkaline earth $Ae$ = Sr, Ba), characterized by unequal nearest-neighbor Fe-Fe bonds, in the hole-doped systems nematic order is observed in the $B_{1g}$ channel, characterized by unequal next-nearest-neighbor Fe-Fe (diagonal Fe-As-Fe) bonds. In this work, using density functional theory, we attribute this behavior to the evolution of the magnetic ground state along the series $Ae_{1-x}A_{x}$Fe$_{2}$As$_{2}$, from single stripes for small $x$ to double stripes for large $x$. Our simulations using the reduced Stoner theory show that fluctuations of Fe moments are essential for the stability of the double-stripe configuration. We propose that the change in the nature of the magnetic ground state is responsible for the change in the symmetry of the vestigial nematic order that it supports.
condensed matter
We present recent developments of the local analytic sector subtraction of infrared singularities for final state real radiation at NNLO in QCD.
high energy physics phenomenology
This paper studies fast adaptive beamforming optimization for the signal-to-interference-plus-noise ratio balancing problem in a multiuser multiple-input single-output downlink system. Existing deep learning based approaches to predicting beamforming rely on the assumption that the training and testing channels follow the same distribution, which may not hold in practice. As a result, a trained model may lead to performance deterioration when the testing network environment changes. To deal with this task mismatch issue, we propose two offline adaptive algorithms based on deep transfer learning and meta-learning, which are able to achieve fast adaptation with limited new labelled data when the testing wireless environment changes. Furthermore, we propose an online algorithm to enhance the adaptation capability of the offline meta algorithm in realistic non-stationary environments. Simulation results demonstrate that the proposed adaptive algorithms achieve much better performance than the direct deep learning algorithm without adaptation in new environments. The meta-learning algorithm outperforms the deep transfer learning algorithm and achieves near-optimal performance. In addition, compared to the offline meta-learning algorithm, the proposed online meta-learning algorithm shows superior adaptation performance in changing environments.
computer science
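The adaptation step shared by the transfer- and meta-learning ideas, taking a few gradient steps on the small amount of labelled data from the new environment, can be sketched as follows. The network shape, the MSE surrogate loss and the data are placeholders, not the paper's SINR-balancing setup.

```python
# Hedged sketch of fine-tuning a pretrained beamforming predictor on a handful
# of labelled samples from a new environment. All shapes and the loss are
# illustrative placeholders.
import torch
import torch.nn as nn

def adapt(model, new_H, new_W, steps=20, lr=1e-3):
    """new_H: channel features, new_W: labelled beamforming targets (few samples)."""
    model.train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(new_H), new_W)
        loss.backward()
        opt.step()
    return model

# Toy usage: a generic pretrained network adapted with 16 labelled samples.
pretrained = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32))
adapted = adapt(pretrained, torch.randn(16, 64), torch.randn(16, 32))
```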
One of the established multi-lingual methods for testing speech intelligibility is the matrix sentence test (MST). Most versions of this test are designed with audio-only stimuli. Nevertheless, visual cues play an important role in speech intelligibility, mostly making it easier to understand speech through speechreading. In this work we present the creation and evaluation of dubbed videos for the Oldenburger female MST (OLSA). 28 normal-hearing participants completed test and retest sessions with conditions including audio and visual modalities, speech in quiet and in noise, and open- and closed-set response formats. The levels required to reach 80% sentence intelligibility were measured adaptively for the different conditions. In quiet, the audiovisual benefit compared to audio-only was 7 dB in sound pressure level (SPL). In noise, the audiovisual benefit was 5 dB in signal-to-noise ratio (SNR). Speechreading scores ranged from 0% to 84% speech reception in visual-only sentences, with an average of 50% across participants. This large variability in speechreading abilities was reflected in the audiovisual speech reception thresholds (SRTs), which had a larger standard deviation than the audio-only SRTs. Training and learning effects in audiovisual sentences were found: participants improved their SRTs by approximately 3 dB SNR after 5 trials. Participants retained their best scores in a separate retest session and further lowered their SRTs by approximately 1.5 dB.
electrical engineering and systems science
In real-world decision-making problems, for instance in the fields of finance, robotics or autonomous driving, keeping uncertainty under control is as important as maximizing expected returns. Risk aversion has been addressed in the reinforcement learning literature through risk measures related to the variance of returns. However, in many cases, the risk is measured not only on a long-term perspective, but also on the step-wise rewards (e.g., in trading, to ensure the stability of the investment bank, it is essential to monitor the risk of portfolio positions on a daily basis). In this paper, we define a novel measure of risk, which we call reward volatility, consisting of the variance of the rewards under the state-occupancy measure. We show that the reward volatility bounds the return variance so that reducing the former also constrains the latter. We derive a policy gradient theorem with a new objective function that exploits the mean-volatility relationship, and develop an actor-only algorithm. Furthermore, thanks to the linearity of the Bellman equations defined under the new objective function, it is possible to adapt the well-known policy gradient algorithms with monotonic improvement guarantees such as TRPO in a risk-averse manner. Finally, we test the proposed approach in two simulated financial environments.
computer science
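A hedged reading of the reward-volatility definition can be turned into a simple Monte Carlo estimate; the discounted occupancy weighting below is an assumption about the definition, not the authors' estimator.

```python
# Illustrative Monte Carlo estimate of "reward volatility" (variance of the
# per-step reward under the discounted state-occupancy measure) versus the
# variance of discounted returns. The weighting scheme is a sketch of the
# definition as we read it from the abstract.
import numpy as np

def reward_volatility_and_return_variance(trajectories, gamma=0.99):
    """trajectories: list of 1D arrays of rewards, one array per episode."""
    weights, rewards, returns = [], [], []
    for r in trajectories:
        r = np.asarray(r, dtype=float)
        t = np.arange(len(r))
        weights.append((1 - gamma) * gamma ** t)   # discounted occupancy weights
        rewards.append(r)
        returns.append(np.sum(gamma ** t * r))
    w, x = np.concatenate(weights), np.concatenate(rewards)
    mean_r = np.sum(w * x) / np.sum(w)
    reward_volatility = np.sum(w * (x - mean_r) ** 2) / np.sum(w)
    return reward_volatility, np.var(returns)

vol, ret_var = reward_volatility_and_return_variance(
    [np.random.randn(100) for _ in range(50)])
```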
Point forecasts can be interpreted as functionals (i.e., point summaries) of predictive distributions. We consider the situation where forecasters' directives are hidden and develop methodology for the identification of the unknown functional based on time series data of point forecasts and associated realizations. Focusing on the natural cases of state-dependent quantiles and expectiles, we provide a generalized method of moments estimator for the functional, along with tests of optimality relative to information sets that are specified by instrumental variables. Using simulation, we demonstrate that our optimality test is better calibrated and more powerful than existing solutions. In empirical examples, Greenbook gross domestic product (GDP) forecasts of the US Federal Reserve and model output for precipitation from the European Centre for Medium-Range Weather Forecasts (ECMWF) are indicative of overstatement in anticipation of extreme events.
statistics
We explore what may be learned by close encounters between extrasolar minor bodies like `Oumuamua and the Sun. These encounters may yield strong constraints on the bulk composition and possible origin of `Oumuamua-like objects. We find that such objects collide with the Sun once every 30 years, while about 2 pass within the orbit of Mercury each year. We identify preferred orientations for the orbits of extrasolar objects and point out known Solar System bodies with these orientations. We conclude using a simple Bayesian analysis that about one of these objects is extrasolar in origin, even if we cannot tell which.
astrophysics
The reliable detection of environmental molecules in the presence of noise is an important cellular function, yet the underlying computational mechanisms are not well understood. We introduce a model of two interacting sensors which allows for the principled exploration of signal statistics, cooperation strategies and the role of energy consumption in optimal sensing, quantified through the mutual information between the signal and the sensors. Here we report that in general the optimal sensing strategy depends both on the noise level and the statistics of the signals. For joint, correlated signals, energy consuming (nonequilibrium), asymmetric couplings result in maximum information gain in the low-noise, high-signal-correlation limit. Surprisingly we also find that energy consumption is not always required for optimal sensing. We generalise our model to incorporate time integration of the sensor state by a population of readout molecules, and demonstrate that sensor interaction and energy consumption remain important for optimal sensing.
physics
Identifying direct links between genomic pathways and clinical endpoints for highly fatal diseases such as cancer is a formidable task. By selecting statistically relevant associations between a wealth of intermediary variables, such as imaging and genomic measurements, integrative analyses can potentially result in sharper clinical models with parameters that are interpretable in terms of their mechanisms. Estimates of uncertainty in the resulting models are, however, unreliable unless inference accounts for the preceding steps of selection. In this article, we develop selection-aware Bayesian methods which: (i) are amenable to a flexible class of integrative Bayesian models following a selection of promising variables via $\ell_1$-regularized algorithms; (ii) enjoy computational efficiency due to a focus on sharp models with interpretable parameters; (iii) strike a crucial tradeoff between the quality of model selection and inferential power. Central to our selection-aware workflow, a conditional likelihood constructed with a reparameterization map is deployed for obtaining uncertainty estimates in integrative models. Investigating the potential of our methods in a radiogenomic analysis, we successfully recover several important gene pathways and calibrate uncertainties for their associations with patient survival times.
statistics
We study the optical properties of glass exposed to ionizing radiation, as occurs in the space environment. 24 glass types have been considered, both space qualified and not space qualified. 72 samples (3 for each glass type) have been irradiated to simulate total doses of 10 krad and 30 krad, imposed by a proton beam at the KVI-Centre of Advanced Radiation Technology (Groningen). Combining the information about stopping power and proton fluence, the time required to reproduce any given total dose in the real environment can be easily obtained. The optical properties, such as spectral transmission and light scattering, have been measured before and after irradiation for each sample. Transmission has been characterized within the wavelength range 200 nm-1100 nm. Indications that systematic effects depend on the dopant or composition are found and described. This work aims at extending the existing list of space-compliant glasses in terms of radiation damage.
physics
We introduce a new stochastic multi-armed bandit setting where arms are grouped inside ``ordered'' categories. The motivating example comes from e-commerce, where a customer typically has a greater appetence for items of a specific well-identified but unknown category than any other one. We introduce three concepts of ordering between categories, inspired by stochastic dominance between random variables, which are gradually weaker so that more and more bandit scenarios satisfy at least one of them. We first prove instance-dependent lower bounds on the cumulative regret for each of these models, indicating how the complexity of the bandit problems increases with the generality of the ordering concept considered. We also provide algorithms that fully leverage the structure of the model with their associated theoretical guarantees. Finally, we have conducted an analysis on real data to highlight that those ordered categories actually exist in practice.
computer science
We study Riesz distributions in the framework of rational Dunkl theory associated with root systems of type A. As an important tool, we employ a Laplace transform involving the associated Dunkl kernel, which essentially goes back to Macdonald, but was so far only established at a formal level. We give a rigorous treatment of this transform based on suitable estimates of the type A Dunkl kernel. Our main result is a precise analogue in the Dunkl setting of a well-known result by Gindikin, stating that a Riesz distribution on a symmetric cone is a positive measure if and only if its exponent is contained in the Wallach set. For Riesz distributions in the Dunkl setting, we obtain an analogous characterization in terms of a generalized Wallach set which depends on the multiplicity parameter on the root system.
mathematics
We calculate the first three Seeley-DeWitt coefficients for fluctuation of the massless fields of an $\mathcal{N}=2$ Einstein-Maxwell supergravity theory (EMSGT) distributed into different multiplets in $d=4$ space-time dimensions. By utilizing the Seeley-DeWitt data in the quantum entropy function formalism, we then obtain the logarithmic correction contribution of individual multiplets to the entropy of extremal Kerr-Newman family of black holes. Our results allow us to find the logarithmic entropy corrections for the extremal black holes in a fully matter coupled $\mathcal{N}=2,d=4$ EMSGT, in a particular class of $\mathcal{N}=1,d=4$ EMSGT as consistent decomposition of $\mathcal{N}=2$ multiplets ($\mathcal{N}=2 \to \mathcal{N}=1$) and in $\mathcal{N} \geq 3,d=4$ EMSGTs by decomposing them into $\mathcal{N}=2$ multiplets ($\mathcal{N} \geq 3 \to \mathcal{N}=2$). For completeness, we also obtain logarithmic entropy correction results for the non-extremal Kerr-Newman black holes in the matter coupled $\mathcal{N} \geq 1,d=4$ EMSGTs by employing the same Seeley-DeWitt data into a different Euclidean gravity approach developed in arXiv:1205.0971.
high energy physics theory
Recently, C. Bardos et al. presented in their fine paper \cite{Bardos} a proof of an Onsager-type conjecture on the renormalization property and the entropy conservation laws for the relativistic Vlasov-Maxwell system. In particular, the authors proved that if the distribution function $u \in L^{\infty}(0,T;W^{\alpha,p}(\mathbb{R}^6))$ and the electromagnetic field $E,B \in L^{\infty}(0,T;W^{\beta,q}(\mathbb{R}^3))$, with $\alpha, \beta \in (0,1)$ such that $\alpha\beta + \beta + 3\alpha - 1>0$ and $1/p+1/q\le 1$, then the renormalization property and the entropy conservation laws hold. Building on this work, in the present paper we improve their results under weaker regularity assumptions on weak solutions to the relativistic Vlasov-Maxwell equations. More precisely, we show that under similar hypotheses, the renormalization property and the entropy conservation laws for weak solutions to the relativistic Vlasov-Maxwell system hold even in the endpoint case $\alpha\beta + \beta + 3\alpha - 1 = 0$. Our proof is based on improved estimates for the regularization operators.
mathematics
We propose "breathing $k$-means", a novel approximation algorithm for the $k$-means problem. After seeding the centroid set with the well-known $k$-means++ algorithm, the new method cyclically increases and decreases the number of centroids in order to find an improved solution for the given problem. The $k$-means++ solutions used for seeding are typically improved significantly while the extra computational cost is moderate. The effectiveness of our method is demonstrated on a variety of $k$-means problems including all those used in the original $k$-means++ publication. The Python implementation of the new algorithm consists of 78 lines of code.
computer science
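The breathe-in/breathe-out cycle described above can be caricatured with scikit-learn primitives. The selection heuristics in this sketch (adding centroids near high-error points, dropping the lowest-error clusters) are illustrative assumptions and not the authors' 78-line reference implementation.

```python
# Very rough sketch of the "breathing" idea as described in the abstract:
# after k-means++ seeding, alternately add ("breathe in") and remove
# ("breathe out") a few centroids, keeping the best solution found so far.
import numpy as np
from sklearn.cluster import KMeans

def breathing_kmeans_sketch(X, k, m=3, cycles=5, seed=0):
    best = KMeans(n_clusters=k, init="k-means++", n_init=1, random_state=seed).fit(X)
    for _ in range(cycles):
        # Breathe in: add m centroids near the points with the largest error.
        d = np.min(((X[:, None, :] - best.cluster_centers_[None]) ** 2).sum(-1), axis=1)
        extra = X[np.argsort(d)[-m:]]
        centers = np.vstack([best.cluster_centers_, extra])
        grown = KMeans(n_clusters=k + m, init=centers, n_init=1).fit(X)
        # Breathe out: drop the m clusters with the smallest within-cluster error
        # (an illustrative proxy for the "least useful" centroids).
        errs = np.array([((X[grown.labels_ == j] - c) ** 2).sum()
                         for j, c in enumerate(grown.cluster_centers_)])
        keep = grown.cluster_centers_[np.argsort(errs)[m:]]
        cand = KMeans(n_clusters=k, init=keep, n_init=1).fit(X)
        if cand.inertia_ < best.inertia_:
            best = cand
    return best

model = breathing_kmeans_sketch(np.random.rand(500, 2), k=10)
```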
Selective inference is considered for testing trees and edges in phylogenetic tree selection from molecular sequences. This improves the previously proposed approximately unbiased test by adjusting the selection bias when testing many trees and edges at the same time. The newly proposed selective inference $p$-value is useful for testing selected edges to claim that they are significantly supported if $p>1-\alpha$, whereas the non-selective $p$-value is still useful for testing candidate trees to claim that they are rejected if $p<\alpha$. The selective $p$-value controls the type-I error conditioned on the selection event, whereas the non-selective $p$-value controls it unconditionally. The selective and non-selective approximately unbiased $p$-values are computed from two geometric quantities called signed distance and mean curvature of the region representing tree or edge of interest in the space of probability distributions. These two geometric quantities are estimated by fitting a model of scaling-law to the non-parametric multiscale bootstrap probabilities. Our general method is applicable to a wider class of problems; phylogenetic tree selection is an example of model selection, and it is interpreted as the variable selection of multiple regression, where each edge corresponds to each predictor. Our method is illustrated in a previously controversial phylogenetic analysis of human, rabbit and mouse.
statistics
The goal of this paper is to adapt speaker embeddings for solving the problem of speaker diarisation. The quality of speaker embeddings is paramount to the performance of speaker diarisation systems. Despite this, prior works in the field have directly used embeddings designed only to be effective on the speaker verification task. In this paper, we propose three techniques that can be used to better adapt the speaker embeddings for diarisation: dimensionality reduction, attention-based embedding aggregation, and non-speech clustering. A wide range of experiments is performed on various challenging datasets. The results demonstrate that all three techniques contribute positively to the performance of the diarisation system achieving an average relative improvement of 25.07% in terms of diarisation error rate over the baseline.
electrical engineering and systems science
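Of the three techniques, the dimensionality-reduction step is the easiest to sketch. The snippet below is a generic embedding-clustering pipeline with assumed components (PCA, agglomerative clustering); it does not reproduce the paper's attention-based aggregation or non-speech handling.

```python
# Sketch of the dimensionality-reduction step: project per-segment speaker
# embeddings to a lower-dimensional space before clustering them into speakers.
# The choice of PCA and agglomerative clustering here is illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize
from sklearn.cluster import AgglomerativeClustering

def cluster_embeddings(embeddings, n_speakers, n_components=32):
    """embeddings: (n_segments, embed_dim) array of per-segment speaker embeddings."""
    reduced = PCA(n_components=n_components).fit_transform(embeddings)
    reduced = normalize(reduced)                  # cosine-style geometry
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(reduced)
    return labels                                 # speaker index per segment

labels = cluster_embeddings(np.random.randn(200, 256), n_speakers=4)
```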
We consider a class of sums over products of Z-sums whose arguments differ by a symbolic integer. Such sums appear, for instance, in the expansion of Gauss hypergeometric functions around integer indices that depend on a symbolic parameter. We present a telescopic algorithm for efficiently converting these sums into generalized polylogarithms, Z-sums, and cyclotomic harmonic sums for generic values of this parameter. This algorithm is illustrated by computing the double pentaladder integrals through ten loops, and a family of massive self-energy diagrams through $O(\epsilon^6)$ in dimensional regularization. We also outline the general telescopic strategy of this algorithm, which we anticipate can be applied to other classes of sums.
high energy physics theory
The use of photoferroic materials that combine ferroelectric and light harvesting properties in a photovoltaic device is a promising route to significantly improve the efficiency of solar cells. These materials do not require the formation of a p-n junction and can produce photovoltages well above the value of the band gap, because of the spontaneous intrinsic polarization and the formation of domain walls. In this perspective, we discuss the recent experimental progresses and challenges for the synthesis of these materials and the theoretical discovery of novel photoferroic materials using a high-throughput approach.
condensed matter
The origin of thermal optical and UV emission from stellar tidal disruption flares (TDFs) remains an open question. We present Hubble Space Telescope far-UV (FUV) observations of eight optical/UV selected TDFs 5-10 years post-peak. Six sources are cleanly detected, showing point-like FUV emission from the centers of their host galaxies. We discover that the light curves of TDFs from low-mass black holes ($<10^{6.5} M_\odot$) show significant late-time flattening. Conversely, FUV light curves from high-mass black hole TDFs are generally consistent with an extrapolation from the early-time light curve. The observed late-time emission cannot be explained by existing models for early-time TDF light curves (i.e. reprocessing or circularization shocks), but is instead consistent with a viscously spreading, unobscured accretion disk. These disk models can only reproduce the observed FUV luminosities, however, if they are assumed to be thermally and viscously stable, in contrast to the simplest predictions of alpha-disk theory. For one TDF in our sample, we measure an upper limit to the UV luminosity that is significantly lower than expectations from theoretical modeling and an extrapolation of the early-time light curve. This dearth of late-time emission could be due to a disk instability/state change absent in the rest of the sample. The disk models that explain the late-time UV detections solve the TDF "missing energy problem" by radiating a rest-mass energy of ~0.1 solar mass over a period of decades, primarily in extreme UV wavelengths.
astrophysics
Speech communication systems are prone to performance degradation in reverberant and noisy acoustic environments. Dereverberation and noise reduction algorithms typically require several model parameters, e.g. the speech, reverberation and noise power spectral densities (PSDs). A commonly used assumption is that the noise PSD matrix is known. However, in practical acoustic scenarios, the noise PSD matrix is unknown and should be estimated along with the speech and reverberation PSDs. In this paper, we consider the case of rank-deficient noise PSD matrix, which arises when the noise signal consists of multiple directional interference sources, whose number is less than the number of microphones. We derive two closed-form maximum likelihood estimators (MLEs). The first is a non-blocking-based estimator which jointly estimates the speech, reverberation and noise PSDs, and the second is a blocking-based estimator, which first blocks the speech signal and then jointly estimates the reverberation and noise PSDs. Both estimators are analytically compared and analyzed, and mean square errors (MSEs) expressions are derived. Furthermore, Cramer-Rao Bounds (CRBs) on the estimated PSDs are derived. The proposed estimators are examined using both simulation and real reverberant and noisy signals, demonstrating the advantage of the proposed method compared to competing estimators.
electrical engineering and systems science
Optical quantum states defined in temporal modes, especially non-Gaussian states like photon-number states, play an important role in quantum computing schemes. In general, the temporal-mode structures of these states are characterized by one or more complex functions called temporal-mode functions (TMFs). Although we can calculate TMFs theoretically in some cases, experimental estimation of TMFs is more advantageous for utilizing states with high purity. In this paper, we propose a method to estimate complex TMFs. This method can be applied not only to arbitrary single-temporal-mode non-Gaussian states but also to two-temporal-mode states containing two photons. The method is implemented by continuous-wave (CW) dual homodyne measurement and requires neither prior information about the target states nor a state-reconstruction procedure. We demonstrate this method by analyzing several experimentally created non-Gaussian states.
quantum physics
Quantum coherence, i.e., the coherent superposition of eigenstates of Hermitian operators with a discrete spectrum, marks a radical departure from classical physics. In resource theory, quantum coherence is a resource for quantum operations. Typically, stochastic phenomena induce decoherence effects. However, in the present work, we prove that nonunitary evolution leads to the generation of quantum coherence in some cases. Specifically, we consider neutrino propagation in a dissipative environment, namely in a magnetic field with a stochastic component, and focus on neutrino flavor, spin and spin-flavor oscillations. We present exact analytical results for quantum coherence in neutrino oscillations, quantified in terms of the relative entropy. Starting from an initial zero-coherence state, we observe persistent oscillations of coherence during the dissipative evolution. We find that after the dissipative evolution the initial spin-polarized state thermalizes entirely, and in the final steady state the spin-up and spin-down states have equal probabilities. The neutrino flavor states, on the other hand, also thermalize, but the populations of the two flavor states do not become equal: the initial flavor still dominates in the final steady state.
high energy physics phenomenology
A magnetoactive elastomer (MAE) consisting of single-domain La0.8Ag0.2Mn1.2O3 nanoparticles with a Curie temperature close to room temperature (TC = 308 K) in a silicone matrix has been prepared and comprehensively studied. It has been found that at room temperature and above, MAE particles are magnetized superparamagnetically with a low coercivity below 10 Oe, and the influence of magnetic anisotropy on the appearance of a torque is justified. A coupling between magnetization and magnetoelasticity has also been established. The mechanisms of the appearance of magnetoelasticity, including the effect of MAE rearrangement and MAE compression by magnetized particles, have been revealed. It has been found that the magnetoelastic properties of the MAE have critical features near TC: they disappear at T > TC and are restored at T < TC. This makes it possible to use the MAE at room temperature as a smart material for devices with self-regulating magnetoelastic properties.
condensed matter
B-meson decays play an important role in flavour physics. The $B \rightarrow \pi K$ decays are dominated by QCD loop diagrams (penguins) but also electroweak penguins, where New Physics may enter, have a significant impact on the decay amplitude. Since measurements from B-factories indicate deviations from the Standard Model picture, we perform a state-of-the-art analysis to explore the correlation of the CP asymmetries and to get an updated picture. We propose a strategy for the optimal determination of the parameters which describe electroweak penguin effects and apply it to current data, utilising both neutral and charged $B \rightarrow \pi K$ decays. This new method can be fully exploited at the future Belle-II experiment, which will hopefully answer the question: Do these decays imply New Physics?
high energy physics phenomenology
The Brier score is commonly used for evaluating probability predictions. In survival analysis, with right-censored observations of the event times, this score can be weighted by the inverse probability of censoring (IPCW) to retain its original interpretation. It is common practice to estimate the censoring distribution with the Kaplan-Meier estimator, even though this assumes that the censoring distribution is independent of the covariates. This paper discusses the general impact of the censoring estimates on the Brier score and shows that the estimation of the censoring distribution can be problematic. In particular, when the censoring times can be identified from the covariates, the IPCW score is no longer valid. For administratively censored data, where the potential censoring times are known for all individuals, we propose an alternative version of the Brier score. This administrative Brier score does not require estimation of the censoring distribution and is valid even if the censoring times can be identified from the covariates.
statistics
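The two scores discussed above can be written out explicitly. The snippet below gives the standard IPCW-weighted Brier score at a horizon t and one plausible reading of the administrative variant; the exact form of the latter is an assumption based on the abstract, not the paper's definition.

```python
# Hedged sketch of the two scores. `ipcw_brier` is the usual inverse-probability-
# of-censoring-weighted Brier score at horizon t; `admin_brier` is one plausible
# reading of the administrative variant: with the potential censoring times known
# for everyone, restrict to subjects still observable at t and use the plain
# Brier score. Details may differ from the paper.
import numpy as np

def ipcw_brier(t, surv_prob, time, event, G):
    """surv_prob: predicted P(T > t); G(u): censoring survival estimate, e.g. Kaplan-Meier."""
    time, event, surv_prob = map(np.asarray, (time, event, surv_prob))
    died = (time <= t) & (event == 1)
    alive = time > t
    w = died / np.maximum(G(time), 1e-12) + alive / np.maximum(G(t), 1e-12)
    sq = died * (0.0 - surv_prob) ** 2 + alive * (1.0 - surv_prob) ** 2
    return np.mean(w * sq)

def admin_brier(t, surv_prob, event_time, admin_cens_time):
    """Potential (administrative) censoring times are known for all subjects."""
    surv_prob, event_time, admin_cens_time = map(
        np.asarray, (surv_prob, event_time, admin_cens_time))
    observable = admin_cens_time > t          # status at t is fully known for these subjects
    y = (event_time[observable] > t).astype(float)
    return np.mean((y - surv_prob[observable]) ** 2)
```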
We construct and analyze a phase diagram of a self-interacting matrix field coupled to the curvature of the non-commutative truncated Heisenberg space. In the infinite-size limit, the model reduces to the renormalizable Grosse-Wulkenhaar model. The curvature term is crucial to renormalization. When it is turned off, the triple point collapses into the origin as the matrices grow larger. When it is turned on, the triple point shifts away proportionally to the coupling strength and matrix size. The coupling attenuation that renormalizes the Grosse-Wulkenhaar model cannot contain this shifting, and the translational-symmetry-breaking stripe phase escapes to infinity, taking away the problematic UV/IR mixing.
high energy physics theory
Stars born at the same time in the same place should have formed from gas of the same element composition. But most stars subsequently disperse from their birth siblings, in orbit and orbital phase, becoming 'field stars'. Here we explore and provide direct observational evidence for this process in the Milky Way disc, by quantifying the probability that orbit-similarity among stars implies indistinguishable metallicity. We define the orbit similarity among stars through their distance in action-angle space, $\Delta (J,\theta)$, and their abundance similarity simply by $\Delta$[Fe/H]. Analyzing a sample of main sequence stars from Gaia DR2 and LAMOST, we find an excess of pairs with the same metallicity ($\Delta\mathrm{[Fe/H]}<0.1$) that extends to remarkably large separations in $\Delta (J,\theta)$ that correspond to nearly 1 kpc distances. We assess the significance of this effect through a mock sample, drawn from a smooth and phase-mixed orbit distribution. Through grouping such star pairs into associations with a friend-of-friends algorithm linked by $\Delta (J,\theta)$, we find 100s of mono-abundance groups with $\ge 3$ (to $\gtrsim 20$) members; these groups -- some clusters, some spread across the sky -- are over an order-of-magnitude more abundant than expected for a smooth phase-space distribution, suggesting that we are witnessing the 'dissolution' of stellar birth associations into the field.
astrophysics
The potential of digital twin technology is immense, specifically in the infrastructure, aerospace, and automotive sectors. However, practical implementation of this technology is not proceeding at the expected pace, largely because of a lack of application-specific details. In this paper, we propose a novel digital twin framework for stochastic nonlinear multi-degree-of-freedom (MDOF) dynamical systems. The approach proposed in this paper strategically decouples the problem into two time-scales: (a) a fast time-scale governing the system dynamics and (b) a slow time-scale governing the degradation of the system. The proposed digital twin has four components: (a) a physics-based nominal model (low-fidelity), (b) a Bayesian filtering algorithm, (c) a supervised machine learning algorithm and (d) a high-fidelity model for predicting future responses. The physics-based nominal model combined with Bayesian filtering is used for combined parameter-state estimation, and the supervised machine learning algorithm is used for learning the temporal evolution of the parameters. While the proposed framework can be used with any choice of Bayesian filter and machine learning algorithm, we propose to use the unscented Kalman filter and Gaussian processes. The performance of the proposed approach is illustrated using two examples. The results obtained indicate the applicability and excellent performance of the proposed digital twin framework.
statistics
Classifier chains have recently been proposed as an appealing method for tackling the multi-label classification task. In addition to several empirical studies showing its state-of-the-art performance, especially when being used in its ensemble variant, there are also some first results on theoretical properties of classifier chains. Continuing along this line, we analyze the influence of a potential pitfall of the learning process, namely the discrepancy between the feature spaces used in training and testing: While true class labels are used as supplementary attributes for training the binary models along the chain, the same models need to rely on estimations of these labels at prediction time. We elucidate under which circumstances the attribute noise thus created can affect the overall prediction performance. As a result of our findings, we propose two modifications of classifier chains that are meant to overcome this problem. Experimentally, we show that our variants are indeed able to produce better results in cases where the original chaining process is likely to fail.
computer science
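The train/test discrepancy at the heart of this analysis is easy to reproduce in code: each link of the chain is trained with the true previous labels but must predict from estimated ones. The scikit-learn sketch below is a generic illustration of this phenomenon, not the modified chains proposed in the paper.

```python
# Minimal classifier chain illustrating the discrepancy discussed above:
# during training each link sees the *true* previous labels as extra features,
# while at prediction time it must rely on the *estimated* ones.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_chain(X, Y):
    """X: (n, d) features; Y: (n, L) binary label matrix. Returns one model per label."""
    models = []
    for j in range(Y.shape[1]):
        Xj = np.hstack([X, Y[:, :j]])          # true labels as supplementary attributes
        models.append(LogisticRegression(max_iter=1000).fit(Xj, Y[:, j]))
    return models

def predict_chain(models, X):
    preds = np.zeros((X.shape[0], len(models)))
    for j, m in enumerate(models):
        Xj = np.hstack([X, preds[:, :j]])      # estimated labels only -> attribute noise
        preds[:, j] = m.predict(Xj)
    return preds

X = np.random.randn(300, 10)
Y = (np.random.rand(300, 4) > 0.5).astype(int)
Y_hat = predict_chain(fit_chain(X, Y), X)
```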
We study 3-jet event topologies in proton-proton collisions at a centre-of-mass energy of $\sqrt{s} = 13 {\rm\ TeV}$ in a configuration where one jet is present in the central pseudorapidity region ($|\eta| < 2.0$) while the two other jets are in a more forward (same hemisphere) area ($|\eta| > 2.0$). We compare various parton-level predictions using: collinear factorisation, $k_T$-factorisation with fully off-shell matrix elements, and the hybrid framework. We study the influence of different parton distribution functions, initial state radiation, final state radiation, and hadronisation. We focus on differential cross sections as a function of the azimuthal angle difference between the leading dijet system and the third jet, which is found to have excellent sensitivity to the physical effects under study.
high energy physics phenomenology
Free energy calculations based on atomistic Hamiltonians and sampling are key to a first principles understanding of biomolecular processes, material properties, and macromolecular chemistry. Here, we generalize the Free Energy Perturbation method and derive non-linear Hamiltonian transformation sequences for optimal sampling accuracy that differ markedly from established linear transformations. We show that our sequences are also optimal for the Bennett Acceptance Ratio (BAR) method, and our unifying framework generalizes BAR to small sampling sizes and non-Gaussian error distributions. Simulations on a Lennard-Jones gas show that an order of magnitude less sampling is required compared to established methods.
physics
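For context, the snippet below implements the standard exponential-averaging (Zwanzig) free energy perturbation estimators in both directions, which the generalized transformation sequences and the BAR extension discussed above build upon; the Gaussian toy data serve only to check that both estimates recover the imposed free energy difference.

```python
# Context sketch: the standard exponential-averaging (Zwanzig) FEP estimators.
# Units: energies in kT; `dU_forward` are U_B - U_A evaluated on samples from A,
# `dU_reverse` are U_A - U_B evaluated on samples from B. Toy data only.
import numpy as np

def fep_forward(dU_forward):
    """Zwanzig estimator, A -> B direction:  dF = -ln < exp(-dU) >_A  (in kT)."""
    return -np.log(np.mean(np.exp(-np.asarray(dU_forward))))

def fep_reverse(dU_reverse):
    """Zwanzig estimator, B -> A direction, reported as F_B - F_A (in kT)."""
    return np.log(np.mean(np.exp(-np.asarray(dU_reverse))))

# Toy example: Gaussian work distributions with dF = 1 kT and width 2 kT, for
# which the mean work exceeds dF by sigma^2/2 = 2 kT (dissipation).
rng = np.random.default_rng(0)
dU_f = rng.normal(1.0 + 2.0, 2.0, 5000)   # forward: mean = dF + sigma^2/2
dU_r = rng.normal(-1.0 + 2.0, 2.0, 5000)  # reverse: mean = -dF + sigma^2/2
print(fep_forward(dU_f), fep_reverse(dU_r))  # both should be close to 1 kT
```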
In this paper, we present two constructions of forward self-similar solutions to the $3$D incompressible Navier-Stokes system, as the singular limit of forward self-similar solutions to certain parabolic systems.
mathematics
The behavior of flagellated bacteria swimming in non-Newtonian media remains an area with contradictory and conflicting results. We report on the behavior of wild-type and smooth-swimming E. coli in Newtonian, shear thinning and viscoelastic media, measuring their trajectories and swimming speed using a three dimensional real-time tracking microscope. We conclude that the speed enhancement in Methocel solution at higher concentration is due to shear-thinning and an analytical model is used to support our experimental result. We argue that shear-induced normal stresses reduce the wobbling behavior during cell swimming but do not significantly affect swimming speed. However, the normal stresses play an important role in decreasing the flagellar bundling time which changes the swimming speed distribution. A dimensionless number, the "Strangulation number" (Str) is proposed and used to characterize this effect.
physics
We explore systematically perturbations of self-similar solutions to the Einstein-axion-dilaton system, whose dynamics are invariant under spacetime dilations combined with internal $SL(2,R)$ transformations. The self-similar solutions capture the enticing behavior of critical systems on the verge of gravitational collapse, in arbitrary spacetime dimensions. Our methods rest on a combination of analytical and numerical tools, apply to all three conjugacy classes of $SL(2,R)$ transformations and allow accurate estimates of the corresponding Choptuik exponents. It is well known that these exponents depend on the spacetime dimension and on the matter content. Our main result is that they also attain different values, even within a given conjugacy class, for the distinct types of critical solutions that we recently identified in the Einstein-axion-dilaton system.
high energy physics theory
The chemical evolution of fluorine is investigated in a sample of Milky Way red giant stars that span a significant range in metallicity from [Fe/H] $\sim$ -1.3 to 0.0 dex. Fluorine abundances are derived from vibration-rotation lines of HF in high-resolution infrared spectra near $\lambda$ 2.335 $\mu$m. The red giants are members of the thin and thick disk / halo, with two stars being likely members of the outer disk Monoceros overdensity. At lower metallicities, with [Fe/H] < -0.4 to -0.5, the abundance of F varies as a primary element with respect to the Fe abundance, with a constant subsolar value of [F/Fe] $\sim$ -0.3 to -0.4 dex. At larger metallicities, however, [F/Fe] increases rapidly with [Fe/H] and displays a near-secondary behavior with respect to Fe. Comparisons with various models of chemical evolution suggest that in the low-metallicity regime (dominated here by thick disk stars), a primary evolution of $^{19}$F with Fe, with a subsolar [F/Fe] value that roughly matches the observed plateau, can be reproduced by a model incorporating neutrino nucleosynthesis in the aftermath of core collapse in supernovae of type II (SN II). A primary behavior of [F/Fe] at low metallicity is also observed for a model including rapidly rotating low-metallicity massive stars, but this overproduces [F/Fe] at low metallicity. The thick disk red giants in our sample span a large range of galactocentric distance (Rg $\sim$ 6--13.7 kpc), yet display a $\sim$ constant value of [F/Fe], indicating a very flat gradient (with a slope of 0.02 $\pm$ 0.03 dex/kpc) of this elemental ratio over a significant portion of the Galaxy having |Z| > 300 pc away from the Galaxy mid-plane.
astrophysics
Motivated by engineering vector-like (Higgs) pairs in the spectrum of 4d F-theory compactifications, we combine machine learning and algebraic geometry techniques to analyze line bundle cohomologies on families of holomorphic curves. To quantify jumps of these cohomologies, we first generate 1.8 million pairs of line bundles and curves embedded in $dP_3$, for which we compute the cohomologies. A white-box machine learning approach trained on this data provides intuition for jumps due to curve splittings, which we use to construct additional vector-like Higgs-pairs in an F-Theory toy model. We also find that, in order to explain quantitatively the full dataset, further tools from algebraic geometry, in particular Brill--Noether theory, are required. Using these ingredients, we introduce a diagrammatic way to express cohomology jumps across the parameter space of each family of matter curves, which reflects a stratification of the F-theory complex structure moduli space in terms of the vector-like spectrum. Furthermore, these insights provide an algorithmically efficient way to estimate the possible cohomology dimensions across the entire parameter space.
high energy physics theory
We present a method to considerably improve the numerical performance for solving Eliashberg-type coupled equations on the imaginary axis. Instead of the standard practice of introducing a hard numerical cutoff for treating the infinite summations involved, our scheme allows for the efficient calculation of such sums extended formally up to infinity. The method is first benchmarked with isotropic Migdal-Eliashberg theory calculations and subsequently applied to the solution of the full-bandwidth, multiband and anisotropic equations focusing on the FeSe/SrTiO$_3$ interface as a case study. Compared to the standard procedure, we reach similarly well converged results with less than one fifth of the number of frequencies for the anisotropic case, while for the isotropic set of equations we spare approximately ninety percent of the complexity. Since our proposed approximations are very general, our numerical scheme opens the possibility of studying the superconducting properties of a wide range of materials at ultra-low temperatures.
condensed matter
The possibility of determining the value of the Hubble constant using observations of galaxy clusters at X-ray and microwave wavelengths through the Sunyaev-Zel'dovich (SZ) effect has long been known. Previous measurements have been plagued by relatively large errors in the observational data and severe biases induced, for example, by cluster triaxiality and clumpiness. The advent of \textit{Planck} allows us to map the Compton parameter $y$, that is, the amplitude of the SZ effect, with unprecedented accuracy at intermediate cluster-centric radii, which in turn allows a detailed spatially resolved comparison with X-ray measurements. Given such higher-quality observational data, we developed a Bayesian approach that combines informed priors on the physics of the intracluster medium, obtained from hydrodynamical simulations of massive clusters, with measurement uncertainties. We apply our method to a sample of 61 galaxy clusters with redshifts up to $z < 0.5$ observed with Planck and XMM-Newton and find $H_0 = 67 \pm 3$ km s$^{-1}$ Mpc$^{-1}$.
astrophysics
The paper introduces a non-linear version of the process convolution formalism for building covariance functions for multi-output Gaussian processes. The non-linearity is introduced via Volterra series, one series per output. We provide closed-form expressions for the mean function and the covariance function of the approximated Gaussian process at the output of the Volterra series. The mean function and covariance function for the joint Gaussian process are derived using formulae for the product moments of Gaussian variables. We compare the performance of the non-linear model against the classical process convolution approach on one synthetic dataset and two real datasets.
statistics
We provide a scenario for singularity-mediated turbulence based on the self-focusing non-linear Schr\"odinger equation, for which sufficiently smooth initial states lead to blow-up in finite time. Here, by adding dissipation, these singularities are regularized, and the inclusion of an external forcing results in a chaotic fluctuating state. The strong events appear randomly in space and time, making the dissipation rate highly fluctuating. The model shows that: i) dissipation takes place near the singularities only, ii) such intense events are random in space and time, iii) the mean dissipation rate is almost constant as the viscosity varies, and iv) an Obukhov-Kolmogorov spectrum with a power-law dependence is observed, together with intermittent behavior in the structure-function correlations, in close correspondence with fluid turbulence.
condensed matter
Recently proposed normalizing flow models such as Glow have been shown to be able to generate high quality, high dimensional images with relatively fast sampling speed. Due to their inherently restrictive architecture, however, it is necessary that they are excessively deep in order to train effectively. In this paper we propose to combine Glow with an underlying variational autoencoder in order to counteract this issue. We demonstrate that our proposed model is competitive with Glow in terms of image quality and test likelihood while requiring far less time for training.
computer science
We propose a method of determining the shape of a two-dimensional magnetic skyrmion, which can be parameterized as the position dependence of the orientation of the local magnetic moment, by using the expansion in terms of the eigenfunctions of the Schr\"{o}dinger equation of a harmonic oscillator. A variational calculation is done, up to the next-to-next-to-leading order. This result is verified by a lattice simulation based on Landau-Lifshitz-Gilbert equation. Our method is also applied to the dissipative matrix in the Thiele equation as well as two interacting skyrmions in a bilayer system.
condensed matter
The three-dimensional femtoscopic correlations of pions and kaons are presented for Pb$-$Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV within the framework of (3+1)D viscous hydrodynamics combined with the THERMINATOR 2 code for statistical hadronization. The femtoscopic radii for pions and kaons are obtained as a function of pair transverse momentum and centrality in all three pair directions. The radii showed a decreasing trend with an increase of pair transverse momentum and transverse mass for all centralities. These observations indicate the presence of strong collectivity. A simple effective scaling of the radii with pair transverse mass was observed for both pions and kaons.
high energy physics phenomenology
The use of deep learning models within scientific experimental facilities frequently requires low-latency inference, so that, for example, quality control operations can be performed while data are being collected. Edge computing devices can be useful in this context, as their low cost and compact form factor permit them to be co-located with the experimental apparatus. Can such devices, with their limited resources, perform neural network feed-forward computations efficiently and effectively? We explore this question by evaluating the performance and accuracy of a scientific image restoration model, for which both model input and output are images, on edge computing devices. Specifically, we evaluate deployments of TomoGAN, an image-denoising model based on generative adversarial networks developed for low-dose x-ray imaging, on the Google Edge TPU and NVIDIA Jetson. We adapt TomoGAN for edge execution, evaluate model inference performance, and propose methods to address the accuracy drop caused by model quantization. We show that these edge computing devices can deliver accuracy comparable to that of a full-fledged CPU or GPU model, at speeds that are more than adequate for use in the intended deployments, denoising a 1024 x 1024 image in less than a second. Our experiments also show that the Edge TPU models can provide 3x faster inference response than a CPU-based model and 1.5x faster than an edge GPU-based model. This combination of high speed and low cost permits image restoration anywhere.
electrical engineering and systems science
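The quantization step that causes the accuracy drop discussed above can be sketched with TensorFlow Lite's post-training tooling. The model below is a tiny stand-in denoiser, not TomoGAN, and the calibration data are random; the snippet only illustrates the full-integer conversion and inference path.

```python
# Hedged sketch of post-training full-integer quantization with TensorFlow Lite,
# the kind of step needed before compiling a model for the Edge TPU. A small
# stand-in convolutional denoiser is built on the fly so the snippet is
# self-contained; it is NOT the TomoGAN deployment pipeline.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 1)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 3, padding="same"),
])

calibration_images = np.random.rand(8, 256, 256, 1).astype(np.float32)

def representative_dataset():
    # A few typical inputs let the converter calibrate activation ranges, which
    # is where most of the quantization-induced accuracy loss is decided.
    for img in calibration_images:
        yield [img[None, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# Run the quantized model; on an Edge TPU the model would first be processed by
# the edgetpu_compiler and executed through the TPU delegate instead.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], calibration_images[:1])
interpreter.invoke()
denoised = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```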
The European Spallation Source (ESS), currently finishing its construction, will soon provide the most intense neutron beams for multi-disciplinary science. At the same time, it will also produce a high-intensity neutrino flux with an energy suitable for precision measurements of Coherent Elastic Neutrino-Nucleus Scattering. We describe some physics prospects, within and beyond the Standard Model, of employing innovative detector technologies to take the most out of this large flux. We show that, compared to current measurements, the ESS will provide a much more precise understanding of neutrino and nuclear properties.
high energy physics phenomenology
Computation holds great potential for introducing new opportunities for creativity and exploration into the physics curriculum. At the University of Oslo we have begun development of a new class of assignment called computational essays to help facilitate creative, open-ended computational physics projects. Computational essays are a type of essay or narrative that combines text and code to express an idea or make an argument, usually written in computational notebooks. During a pilot implementation of computational essays in an introductory electricity and magnetism course, students reported that computational essays facilitated creative investigation at a variety of levels within their physics course. They also reported finding this creativity both challenging and motivating. Based on these reflections, we argue that computational essays are a useful tool for leveraging the creative affordances of programming in physics education.
physics