text | label |
---|---|
Context. The diffusion of volatile species on amorphous solid water ice affects the chemistry on dust grains in the interstellar medium as well as the trapping of gases enriching planetary atmospheres or present in cometary material. Aims. The aim of this work is to provide diffusion coefficients of CH$_4$ on amorphous solid water (ASW), and to understand how they are affected by the ASW structure. Methods. Ice mixtures of H$_2$O and CH$_4$ were grown in different conditions and the sublimation of CH$_4$ was monitored via infrared spectroscopy or via the mass loss of a cryogenic quartz crystal microbalance. Diffusion coefficients were obtained from the experimental data assuming the systems obey Fick's law of diffusion. Monte Carlo simulations modeled the different amorphous solid water ice structures investigated and were used to reproduce and interpret the experimental results. Results. Diffusion coefficients of methane on amorphous solid water have been measured to be between 10$^{-12}$ and 10$^{-13}$ cm$^2$ s$^{-1}$ for temperatures ranging between 42 K and 60 K. We showed that diffusion can differ by one order of magnitude depending on the morphology of amorphous solid water. The porosity within water ice, and the network created by pore coalescence, enhance the diffusion of species within the pores. The diffusion rates derived experimentally cannot be used in our Monte Carlo simulations to reproduce the measurements. Conclusions. We conclude that Fick's law can be used to describe diffusion at the macroscopic scale, while Monte Carlo simulations describe the microscopic scale where trapping of species in the ices (and their movement) is considered. | astrophysics |
Microfading Spectrometry (MFS) is a method for assessing the light sensitivity of cultural heritage objects through color (spectral) variations. The MFS technique provides measurements of the surface under study, where each point of the surface gives rise to a time-series that represents potential spectral (color) changes due to sunlight exposure over time. Color fading is expected to be non-decreasing as a function of time and to stabilize eventually. These properties can be expressed in terms of the partial derivatives of the functions. We propose a spatio-temporal model that takes this information into account by jointly modeling the spatio-temporal process and its derivative process using Gaussian processes (GPs). We fitted the proposed model to MFS data collected from the surface of prehistoric rock art paintings. A multivariate covariance function in a GP allows modeling trichromatic image color variables jointly, with spatial distances and time points as inputs to evaluate the covariance structure of the data. We demonstrated that the colorimetric variables are useful for predicting the color fading time-series for new unobserved spatial locations. Furthermore, constraining the model using derivative sign observations for monotonicity was shown to be beneficial in terms of both predictive performance and application-specific interpretability. | statistics |
We discuss some physical consequences of the resurgent structure of Painleve equations and their related conformal block expansions. The resurgent structure of Painleve equations is particularly transparent when expressed in terms of physical conformal block expansions of the associated tau functions. Resurgence produces an intricate network of inter-relations; some between expansions around different critical points, others between expansions around different instanton sectors of the expansions about the same critical point, and others between different non-perturbative sectors of associated spectral problems, via the Bethe-gauge and Painleve-gauge correspondences. Resurgence relations exist both for convergent and divergent expansions, and can be interpreted in terms of the physics of phase transitions. These general features are illustrated with three physical examples: correlators of the 2d Ising model, the partition function of the Gross-Witten-Wadia matrix model, and the full counting statistics of one-dimensional fermions, associated with Painleve VI, Painleve III and Painleve V, respectively. | high energy physics theory |
This paper considers a wireless communication system with low-resolution quantizers, in which transmitted signals are corrupted by fading and additive noise. For such wireless systems, a universal lower bound on the average symbol error probability (SEP), valid for all M-ary modulation schemes, is obtained when the number of quantization bits is not enough to resolve M signal points. In the special case of M-ary phase shift keying (M-PSK), the optimum maximum likelihood detector for equi-probable signal points is derived. Utilizing the structure of the derived optimum receiver, a general average SEP expression for M-PSK modulation with n-bit quantization is obtained when the wireless channel is subject to fading with a circularly-symmetric distribution. Adopting this result for Nakagami-m fading channels, easy-to-evaluate expressions for the average SEP for M-PSK modulation are further derived. It is shown that a transceiver architecture with n-bit quantization is asymptotically optimum in terms of communication reliability if $n \geq \log_2(M+1)$. That is, the decay exponent for the average SEP is the same and equal to $m$ with infinite-bit and n-bit quantizers for $n \geq \log_2(M+1)$. On the other hand, it is only equal to $m/2$ and $0$ for $n = \log_2(M)$ and $n < \log_2(M)$, respectively. An extensive simulation study is performed to illustrate the derived results and the energy efficiency gains obtained by means of low-resolution quantizers. | electrical engineering and systems science |
Second sound is known as the thermal transport regime where heat is carried by temperature waves. Its experimental observation was previously restricted to a small number of materials, usually in rather narrow temperature windows. We show that it is possible to overcome these limitations by driving the system with a rapidly varying temperature field. This effect is demonstrated in bulk Ge between 7 kelvin and room temperature, studying the phase lag of the thermal response under a harmonic high frequency external thermal excitation, addressing the relaxation time and the propagation velocity of the heat waves. These results provide a new route to investigate the potential of wave-like heat transport in almost any material, opening opportunities to control heat through its oscillatory nature. | condensed matter |
Quantum algorithms can deliver asymptotic speedups over their classical counterparts. However, there are few cases where a substantial quantum speedup has been worked out in detail for reasonably-sized problems, when compared with the best classical algorithms and taking into account realistic hardware parameters and overheads for fault-tolerance. All known examples of such speedups correspond to problems related to simulation of quantum systems and cryptography. Here we apply general-purpose quantum algorithms for solving constraint satisfaction problems to two families of prototypical NP-complete problems: boolean satisfiability and graph colouring. We consider two quantum approaches: Grover's algorithm and a quantum algorithm for accelerating backtracking algorithms. We compare the performance of optimised versions of these algorithms, when applied to random problem instances, against leading classical algorithms. Even when considering only problem instances that can be solved within one day, we find that there are potentially large quantum speedups available. In the most optimistic parameter regime we consider, this could be a factor of over $10^5$ relative to a classical desktop computer; in the least optimistic regime, the speedup is reduced to a factor of over $10^3$. However, the number of physical qubits used is extremely large, and improved fault-tolerance methods will likely be needed to make these results practical. In particular, the quantum advantage disappears if one includes the cost of the classical processing power required to perform decoding of the surface code using current techniques. | quantum physics |
Recently, it has been found that the kinematic viscosity of liquids at the minimum, $\nu_m$, can be expressed in terms of fundamental physical constants, giving $\nu_m$ on the order of $10^{-7}~{\rm m^2/s}$. Here, we show that the kinematic viscosity of quark-gluon plasma (QGP) has a similar value and support this finding by experimental data and theoretical estimations. The similarity is striking, given that the dynamic viscosity and the density of QGP are about 16 orders of magnitude larger than in liquids and that the two systems have disparate interactions and fundamental theories. We discuss the implications of this result for understanding the QGP including the similarity of flow and particle dynamics at the viscosity minimum, the associated dynamical crossover and universality of shear diffusivity. | high energy physics theory |
This is a set of lecture notes for an introductory course (advanced undergraduates or the 1st graduate course) on foundations of supervised machine learning (in Portuguese). The topics include: the geometry of the Hamming cube, concentration of measure, shattering and VC dimension, Glivenko-Cantelli classes, PAC learnability, universal consistency and the k-NN classifier in metric spaces, dimensionality reduction, universal approximation, sample compression. There are appendices on metric and normed spaces, measure theory, etc., making the notes self-contained. | computer science |
We study the parametric instability of compact axion dark matter structures decaying into radio photons. The corresponding objects - Bose (axion) stars, their clusters, and clouds of diffuse axions - form abundantly in the postinflationary Peccei-Quinn scenario. We develop a general description of parametric resonance incorporating finite-volume effects, backreaction, axion velocities, and their (in)coherence. With additional coarse-graining, our formalism reproduces the kinetic equation for virialized axions interacting with photons. We derive conditions for the parametric instability in each of the above objects, as well as in collapsing axion stars, and evaluate the photon resonance modes and their growth exponents. As a by-product, we calculate the stimulated emission of Bose stars and diffuse axions, arguing that the former can give a larger contribution to the radio background. In the case of QCD axions, the Bose stars glow and collapsing stars produce radio bursts if the axion-photon coupling exceeds the original KSVZ value by two orders of magnitude. The latter constraint is alleviated for several nearby axion stars in resonance and absent for axion-like particles. Our results show that the parametric effect may reveal itself in observations, from fast radio bursts to the excess radio background. | astrophysics |
MinHash and HyperLogLog are sketching algorithms that have become indispensable for set summaries in big data applications. While HyperLogLog allows counting distinct elements with very little space, MinHash is suitable for the fast comparison of sets as it allows estimating the Jaccard similarity and other joint quantities (a minimal MinHash illustration appears after this table). This work presents a new data structure called SetSketch that is able to continuously fill the gap between both use cases. Its commutative and idempotent insert operation and its mergeable state make it suitable for distributed environments. Robust and easy-to-implement estimators for cardinality and joint quantities, as well as the ability to use SetSketch for similarity search, enable versatile applications. The developed methods can also be used for HyperLogLog sketches and allow estimation of joint quantities such as the intersection size with a smaller error compared to the common estimation approach based on the inclusion-exclusion principle. | computer science |
Objective: Recognizing retinal vessel abnormality is vital to early diagnosis of ophthalmological diseases and cardiovascular events. However, segmentation results are highly influenced by elusive vessels, especially in low-contrast background and lesion regions. In this work, we present an end-to-end synthetic neural network, containing a symmetric equilibrium generative adversarial network (SEGAN), multi-scale features refine blocks (MSFRB), and an attention mechanism (AM), to enhance the performance of vessel segmentation. Method: The proposed network is granted powerful multi-scale representation capability to extract detail information. First, SEGAN constructs a symmetric adversarial architecture, which forces the generator to produce more realistic images with local details. Second, MSFRB are devised to prevent high-resolution features from being obscured, thereby merging multi-scale features better. Finally, the AM is employed to encourage the network to concentrate on discriminative features. Results: On the public datasets DRIVE, STARE, CHASEDB1, and HRF, we evaluate our network quantitatively and compare it with state-of-the-art works. The ablation experiment shows that SEGAN, MSFRB, and AM all contribute to the desirable performance. Conclusion: The proposed network outperforms mature methods and functions effectively in elusive vessel segmentation, achieving the highest scores in Sensitivity, G-Mean, Precision, and F1-Score while maintaining the top level in other metrics. Significance: The appreciable performance and computational efficiency offer great potential in clinical retinal vessel segmentation applications. Meanwhile, the network could be utilized to extract detail information in other biomedical problems. | electrical engineering and systems science |
The chiral source and its mechanism in molecular systems are of great significance in many fields. In this work, we propose visualized methods to investigate the physical mechanism of chiral molecules, where the electric and magnetic interactions are visualized with the transition electric dipole moment, the transition magnetic dipole moment, the transition electric quadrupole moment, and their tensor products. The relationship between the molecular ROA response and the molecular structure was analyzed in an intuitive way. The relationship between chromophore chirality and molecular vibration modes is revealed via the interaction between the transition electric dipole moment and the transition magnetic dipole moment. The molecular chirality is derived from the anisotropy of the molecular transition electric dipole moment and the transition magnetic dipole moment. The anisotropic dipole moments localized on the molecular chromophore are the source of the vibration modes in which the ROA response reverses. | physics |
Gaussian processes (GPs) are nonparametric Bayesian models that have been applied to regression and classification problems. One of the approaches to alleviate their cubic training cost is the use of local GP experts trained on subsets of the data. In particular, product-of-expert models combine the predictive distributions of local experts through a tractable product operation. While these expert models allow for massively distributed computation, their predictions typically suffer from erratic behaviour of the mean or uncalibrated uncertainty quantification. By calibrating predictions via a tempered softmax weighting (a minimal sketch of such a combination appears after this table), we provide a solution to these problems for multiple product-of-expert models, including the generalised product of experts and the robust Bayesian committee machine. Furthermore, we leverage the optimal transport literature and propose a new product-of-expert model that combines predictions of local experts by computing their Wasserstein barycenter, which can be applied to both regression and classification. | statistics |
Negative atomic hydrogen ion (H$^{-}$) densities were measured in a pulsed low-pressure E-mode inductively-coupled radio-frequency (rf) driven plasma in hydrogen by means of laser photodetachment and a Langmuir probe. This investigation focuses on the influence of different metallic surface materials on the volume production of H$^{-}$ ions. The H$^{-}$ density was measured above a thin disc of either tungsten, stainless steel, copper, aluminium, or molybdenum placed onto the lower grounded electrode of the plasma device as a function of gas pressure and applied rf power. For copper, aluminium, and molybdenum the H$^{-}$ density was found to be quite insensitive to pressure and rf power, with values ranging between 3.6$\times$10$^{14}$ and 5.8$\times$10$^{14}$ m$^{-3}$. For stainless steel and tungsten, the H$^{-}$ dependency was found to be complex, apart from the case of a similar linear increase from 2.9$\times$10$^{14}$ to 1.1$\times$10$^{15}$ m$^{-3}$ with rf power at a pressure of 25 Pa. Two-photon absorption laser induced fluorescence was used to measure the atomic hydrogen densities and phase resolved optical emission spectroscopy was used to investigate whether the plasma dynamics were surface dependent. An explanation for the observed differences between the two sets of investigated materials is given in terms of surface reaction mechanisms for the creation of vibrationally excited hydrogen molecules. | physics |
The paper addresses the problem of estimation of the model parameters of the logistic exponential distribution based on progressive type-I hybrid censored samples. The maximum likelihood estimates are obtained and computed numerically using the Newton-Raphson method. Further, the Bayes estimates are derived under squared error, LINEX, and generalized entropy loss functions. Two types (independent and bivariate) of prior distributions are considered for the purpose of Bayesian estimation. It is seen that the Bayes estimates are not of explicit forms. Thus, Lindley's approximation technique is employed to obtain approximate Bayes estimates. Interval estimates of the parameters based on normal approximation of the maximum likelihood estimates and normal approximation of the log-transformed maximum likelihood estimates are constructed. The highest posterior density credible intervals are obtained by using the importance sampling method. Furthermore, numerical computations are reported to review some of the results obtained in the paper. A real life dataset is considered for the purpose of illustration. | statistics |
We investigate free non-local massless and massive scalar fields on de Sitter (dS) space-time. We compute the propagator for the non-local scalar field for the corresponding theories on flat and de Sitter space-times. It is seen that for the non-local theory, the massless limit of the massive propagator is smooth for both flat and de Sitter space-times. Moreover, this limit matches exactly with the massless propagator of the non-local scalar field for both flat and de Sitter space-times. The propagator is seen to respect dS invariance. Furthermore, investigation of the non-local Green's function on de Sitter for large time-like separation shows that the propagator has no infrared divergences. The dangerous infrared $\log$-divergent contributions which arise in local massless theories are absent in the corresponding non-local version. The lack of infrared divergences in the propagator hints at the strong role non-localities may play in dS infrared physics. This study suggests that non-locality can cure IR issues in de Sitter space. | high energy physics theory |
In this paper, we consider the problem of assessing the adversarial robustness of deep neural network models under both Markov chain Monte Carlo (MCMC) and Bayesian Dark Knowledge (BDK) inference approximations. We characterize the robustness of each method to two types of adversarial attacks: the fast gradient sign method (FGSM) and projected gradient descent (PGD). We show that full MCMC-based inference has excellent robustness, significantly outperforming standard point estimation-based learning. On the other hand, BDK provides marginal improvements. As an additional contribution, we present a storage-efficient approach to computing adversarial examples for large Monte Carlo ensembles using both the FGSM and PGD attacks (a compact FGSM sketch appears after this table). | computer science |
We present a pipeline to estimate baryonic properties of a galaxy inside a dark matter (DM) halo in DM-only simulations using a machine trained on high-resolution hydrodynamic simulations. As an example, we use the IllustrisTNG hydrodynamic simulation of a $(75 \,\,h^{-1}{\rm Mpc})^3$ volume to train our machine to predict e.g., stellar mass and star formation rate in a galaxy-sized halo based purely on its DM content. An extremely randomized tree (ERT) algorithm is used together with multiple novel improvements we introduce here such as a refined error function in machine training and two-stage learning. Aided by these improvements, our model demonstrates a significantly increased accuracy in predicting baryonic properties compared to prior attempts --- in other words, the machine better mimics IllustrisTNG's galaxy-halo correlation. By applying our machine to the MultiDark-Planck DM-only simulation of a large $(1 \,\,h^{-1}{\rm Gpc})^3$ volume, we then validate the pipeline that rapidly generates a galaxy catalogue from a DM halo catalogue using the correlations the machine found in IllustrisTNG. We also compare our galaxy catalogue with the ones produced by popular semi-analytic models (SAMs). Our so-called machine-assisted semi-simulation model (MSSM) is shown to be largely compatible with SAMs, and may become a promising method to transplant the baryon physics of galaxy-scale hydrodynamic calculations onto a larger-volume DM-only run. We discuss the benefits that machine-based approaches like this entail, as well as suggestions to raise the scientific potential of such approaches. | astrophysics |
Many eigenvalue matrix models possess a peculiar basis of observables which have explicitly calculable averages. This explicit calculability is a stronger feature than ordinary integrability, just like the cases of quadratic and Coulomb potentials are distinguished among other central potentials, and we call it superintegrability. As a peculiarity of matrix models, the relevant basis is formed by the Schur polynomials (characters) and their generalizations, and superintegrability looks like a property $\langle character \rangle \sim character$. This is already known to happen in the most important cases of Hermitian, unitary, and complex matrix models. Here we add two more examples of principal importance, where the model depends on external fields: a special version of the complex model and the cubic Kontsevich model. In the former case, a generalization to the complex tensor model is straightforward. In the latter case, the relevant characters are the celebrated $Q$ Schur functions appearing in the description of spin Hurwitz numbers and other related contexts. | high energy physics theory |
The spontaneous formation of micelles in aqueous solutions is governed by the amphipathic nature of surfactants and is practically interesting due to the regular use of micelles as membrane mimics, for the characterization of protein structure, and for drug design and delivery. We performed a systematic characterization of the finite-size effect observed in single-component dodecylphosphocholine (DPC) micelles with the coarse-grained MARTINI model. Of multiple coarse-grained solvent models investigated using large system sizes, the non-polarizable solvent model was found to most accurately reproduce SANS spectra of 100 mM DPC in aqueous solution. We systematically investigated the finite-size effect at constant 100 mM concentration in 23 systems of sizes 40 to 150 DPC, confirming that the finite-size effect manifests as an oscillation in the mean micelle aggregation number about the thermodynamic aggregation number as the system size increases, mostly diminishing once the system supports the formation of three micelles. The accuracy of employing a multiscale simulation approach to avoid finite-size effects in the micelle size distribution and SANS spectra using MARTINI and CHARMM36 was explored using multiple long-timescale 500-DPC coarse-grained simulations which were back-mapped to CHARMM36 all-atom systems. It was found that the MARTINI model generally occupies more volume than the all-atom model, leading to the formation of micelles that are of a reasonable radius of gyration, but are smaller in aggregation number. The systematic characterization of the finite-size effect and exploration of multiscale modeling presented in this work provides guidance for the accurate modeling of micelles in simulations. | physics |
In the late 1980s, Ou and Mandel experimentally observed signal beatings by performing a non-time-resolved coincidence detection of two photons having interfered in a balanced beam splitter [Phys. Rev. Lett. 61, 54 (1988)]. In this work, we provide a new interpretation of the fringe pattern observed in this experiment as the direct measurement of the chronocyclic Wigner distribution of a frequency Schr\"odinger cat-like state produced by local spectral filtering. Based on this analysis, we also study a time-resolved Hong-Ou-Mandel (HOM) experiment to measure such a frequency state. | quantum physics |
Machine learning has achieved great success in many applications, including electroencephalogram (EEG) based brain-computer interfaces (BCIs). Unfortunately, many machine learning models are vulnerable to adversarial examples, which are crafted by adding deliberately designed perturbations to the original inputs. Many adversarial attack approaches for classification problems have been proposed, but few have considered targeted adversarial attacks for regression problems. This paper proposes two such approaches. More specifically, we consider white-box targeted attacks for regression problems, where we know all information about the regression model to be attacked, and want to design small perturbations to change the regression output by a pre-determined amount. Experiments on two BCI regression problems verified that both approaches are effective. Moreover, adversarial examples generated from both approaches are also transferable, which means that we can use adversarial examples generated from one known regression model to attack an unknown regression model, i.e., to perform black-box attacks. To our knowledge, this is the first study on adversarial attacks for EEG-based BCI regression problems, which calls for more attention to the security of BCI systems. | computer science |
In this study, electric power is processed using logic operations and error correction algorithms to meet load demand. Electric power is treated as a physical flow through the distribution network, governed by circuit configuration and efficiency. The hardware required to digitize or packetize electric power, called a power packet router, was developed in this research work. It provides the opportunity for functional electric power dispatching regardless of the power flow in the circuit. This study proposes a new design for the network which makes logic operations on electric power possible and provides an algorithm to correct the inaccuracies caused by dissipation and noise. A phase shift of the power supply network results from implementing the introduced design. | electrical engineering and systems science |
Nested sampling (NS) is an invaluable tool in data analysis in modern astrophysics, cosmology, gravitational wave astronomy, and particle physics. We identify a previously unused property of NS related to order statistics: the insertion indexes of new live points into the existing live points should be uniformly distributed. This observation enabled us to create a novel cross-check of single NS runs (a minimal sketch of this cross-check appears after this table). The tests can detect when an NS run has failed to sample new live points from the constrained prior, as well as plateaus in the likelihood function, both of which break an assumption of NS and thus lead to unreliable results. We applied our cross-check to NS runs on toy functions with known analytic results in 2 - 50 dimensions, showing that our approach can detect problematic runs on a variety of likelihoods, settings, and dimensions. As an example of a realistic application, we cross-checked NS runs performed in the context of cosmological model selection. Since the cross-check is simple, we recommend that it become a mandatory test for every applicable NS run. | statistics |
Magnetic fields are crucial in shaping the non-thermal emission of the TeV-PeV neutrinos of astrophysical origin seen by the IceCube neutrino telescope. The sources of these neutrinos are unknown, but if they harbor a strong magnetic field, then the synchrotron energy losses of the neutrino parent particles---protons, pions, and muons---leave characteristic imprints on the neutrino energy distribution and its flavor composition. We use high-energy neutrinos as "cosmic magnetometers" to constrain the identity of their sources by placing limits on the strength of the magnetic field in them. We look for evidence of synchrotron losses in public IceCube data: 6 years of High Energy Starting Events (HESE) and 2 years of Medium Energy Starting Events (MESE). In the absence of evidence, we place an upper limit of 10 kG-10 MG (95% C.L.) on the average magnetic field strength of the sources. | astrophysics |
In this paper we investigate a local to global principle for the Mordell-Weil group, defined over a ring of integers ${\cal O}_K$, of $t$-modules that are products of Drinfeld modules ${\widehat\varphi}={\phi}_{1}^{e_1}\times \dots \times {\phi}_{t}^{e_{t}}.$ Here $K$ is a finite extension of the field of fractions of $A={\mathbb F}_{q}[t].$ We assume that ${\mathrm{rank}}({\phi}_{i})=d_{i}$ and that the endomorphism rings of the involved Drinfeld modules of generic characteristic are the simplest possible, i.e. ${\mathrm{End}}({\phi}_{i})=A$ for $i=1,\dots,t.$ Our main result is the following numeric criterion. Let ${N}={N}_{1}^{e_1}\times\dots\times {N}_{t}^{e_t}$ be a finitely generated $A$-submodule of the Mordell-Weil group ${\widehat\varphi}({\cal O}_{K})={\phi}_{1}({\cal O}_{K})^{e_{1}}\times\dots\times {\phi}_{t}({\cal O}_{K})^{{e}_{t}},$ and let ${\Lambda}\subset N$ be an $A$-submodule. If we assume $d_{i}\geq e_{i}$ and $P\in N$ is such that $r_{\cal W}(P)\in r_{\cal W}({\Lambda})$ for almost all primes ${\cal W}$ of ${\cal O}_{K},$ then $P\in {\Lambda}+N_{tor}.$ We also build on the recent results of S. Bara{\'n}czuk \cite{b17} concerning the dynamical local to global principle in Mordell-Weil type groups and the solvability of certain dynamical equations, extending them to the aforementioned $t$-modules. | mathematics |
We investigate the relation between transport properties and entanglement between the internal (spin) and external (position) degrees of freedom in one-dimensional discrete time quantum walks. We obtain closed-form expressions for the long-time position variance and asymptotic entanglement of quantum walks whose time evolution is given by any balanced quantum coin, starting from any initial qubit and position states following $\delta$-like (local) and Gaussian distributions. We find out that the knowledge of the limit velocity of the walker together with the polar angle of the initial qubit provide the asymptotic entanglement for local states, while this velocity with the quantum coin phases give it for highly delocalized states. | quantum physics |
W-algebras are constructed via quantum Hamiltonian reduction associated with a Lie algebra $\mathfrak{g}$ and an $\mathfrak{sl}(2)$-embedding into $\mathfrak{g}$. We derive correspondences among correlation functions of theories having different W-algebras as symmetry algebras. These W-algebras are associated to the same $\mathfrak{g}$ but distinct $\mathfrak{sl}(2)$-embeddings. For this purpose, we first explore different free field realizations of W-algebras and then generalize previous works on the path integral derivation of correspondences of correlation functions. For $\mathfrak{g}=\mathfrak{sl}(3)$, there is only one non-standard (non-regular) W-algebra known as the Bershadsky-Polyakov algebra. We examine its free field realizations and derive correlator correspondences involving the WZNW theory of $\mathfrak{sl}(3)$, the Bershadsky-Polyakov algebra and the principal $W_3$-algebra. There are three non-regular W-algebras associated to $\mathfrak{g}=\mathfrak{sl}(4)$. We show that the methods developed for $\mathfrak{g}=\mathfrak{sl}(3)$ can be applied straightforwardly. We briefly comment on extensions of our techniques to general $\mathfrak{g}$. | high energy physics theory |
In this article we consider Wigner matrices $X_N$ with variance profiles (also called Wigner-type matrices) which are of the form $X_N(i,j) = \sigma(i/N,j/N) a_{i,j} / \sqrt{N}$, where $\sigma$ is a symmetric real positive function on $[0,1]^2$, taken either continuous or piecewise constant. We prove a large deviation principle for the largest eigenvalue of those matrices under the same sharp sub-Gaussian bound condition and some other assumptions on $\sigma$. These sub-Gaussian bounds are verified, for example, for Gaussian variables, Rademacher variables, or uniform variables on $[-\sqrt{3}, \sqrt{3}]$. | mathematics |
Long-range and fast transport of coherent excitons is important for the development of high-speed excitonic circuits and quantum computing applications. However, most of these coherent excitons have only been observed in some low-dimensional semiconductors when coupled with cavities, as there are large inhomogeneous broadening and dephasing effects on the exciton transport in the native states of the materials. Here, by confining coherent excitons at the 2D quantum limit, we first observed molecular-aggregation-enabled super-transport of excitons between coherent states in atomically thin two-dimensional (2D) organic semiconductors, with a measured effective exciton diffusion coefficient as high as 346.9 cm$^2$/s at room temperature. This value is one to several orders of magnitude higher than the reported values from other organic molecular aggregates and low-dimensional inorganic materials. Without coupling to any optical cavities, the monolayer pentacene sample, a very clean 2D quantum system (1.2 nm thick) with high crystallinity (J-type aggregation) and minimal interfacial states, showed superradiant emission from the Frenkel excitons, which was experimentally confirmed by the temperature-dependent photoluminescence (PL) emission, highly enhanced radiative decay rate, significantly narrowed PL peak width, and strongly directional in-plane emission. The coherence in the monolayer pentacene samples was observed to be delocalized over 135 molecules, which is significantly larger than the values (a few molecules) observed for other organic thin films. In addition, the super-transport of excitons in the monolayer pentacene samples showed highly anisotropic behaviour. Our results pave the way for the development of future high-speed excitonic circuits, fast OLEDs, and other opto-electronic devices. | condensed matter |
We employ the functional renormalization group approach formulated on the Schwinger-Keldysh contour to calculate real-time correlation functions in scalar field theories. We provide a detailed description of the formalism, discuss suitable truncation schemes for real-time calculations as well as the numerical procedure to self-consistently solve the flow equations for the spectral function. Subsequently, we discuss the relations to other perturbative and non-perturbative approaches to calculate spectral functions, and present a detailed comparison and benchmark in $d=0+1$ dimensions. | high energy physics phenomenology |
We generate high-fidelity massively entangled states in an antiferromagnetic spin-1 Bose-Einstein condensate (BEC) by utilizing multilevel oscillations. Combining the multilevel oscillations with additional adiabatic drives, we greatly shorten the necessary evolution time and relax the requirement on the control accuracy of quadratic Zeeman splitting, from micro-Gauss to milli-Gauss, for a $^{23}$Na spinor BEC. The achieved high fidelities over $96\%$ show that two kinds of massively entangled states, the many-body singlet state and the twin-Fock state, are almost perfectly generated. The generalized spin squeezing parameter drops to a value far below the standard quantum limit even with the presence of atom number fluctuations and stray magnetic fields, illustrating the robustness of our protocol under real experimental conditions. The generated many-body entangled states can be employed to achieve the Heisenberg-limit quantum precision measurement and to attack nonclassical problems in quantum information science. | quantum physics |
We show that one-body entanglement, which is a measure of the deviation of a pure fermionic state from a Slater determinant (SD) and is determined by the mixedness of the single-particle density matrix (SPDM), can be considered as a quantum resource. The associated theory has SDs and their convex hull as free states, and number conserving fermion linear optics operations (FLO), which include one-body unitary transformations and measurements of the occupancy of single-particle modes, as the basic free operations. We first provide a bipartitelike formulation of one-body entanglement, based on a Schmidt-like decomposition of a pure $N$-fermion state, from which the SPDM [together with the $(N-1)$-body density matrix] can be derived. It is then proved that under FLO operations, the initial and postmeasurement SPDMs always satisfy a majorization relation, which ensures that these operations cannot increase, on average, the one-body entanglement. It is finally shown that this resource is consistent with a model of fermionic quantum computation which requires correlations beyond antisymmetrization. More general free measurements and the relation with mode entanglement are also discussed. | quantum physics |
The Galactic low-mass X-ray binary AT2019wey (ATLAS19bcxp, SRGA J043520.9+552226, SRGE J043523.3+552234, ZTF19acwrvzk) was discovered as a new optical transient in Dec 2019, and independently as an X-ray transient in Mar 2020. In this paper, we present comprehensive NICER, NuSTAR, Chandra, Swift, and MAXI observations of AT2019wey from ~1 year prior to the discovery to the end of September 2020. AT2019wey appeared as a ~1 mCrab source and stayed at this flux density for several months, displaying a hard X-ray spectrum that can be modeled as a power-law with photon index Gamma~1.8. In June 2020 it started to brighten, and reached ~20 mCrab in ~2 months. The inclination of this system can be constrained to i<~30 deg by modeling the reflection spectrum. Starting from late August (~59082 MJD), AT2019wey entered the hard-intermediate state (HIMS), and underwent a few outbursts on week-long timescales, where the brightening in soft X-rays is correlated with the enhancement of a thermal component. A low-frequency quasi-periodic oscillation (QPO) was observed in the HIMS. We detect no pulsations in timing analyses of the NICER and NuSTAR data. The X-ray states and power spectra of AT2019wey are discussed against the landscape of low-mass X-ray binaries. | astrophysics |
We provide strong evidence that all tree-level 4-point holographic correlators in AdS$_3 \times S^3$ are constrained by a hidden 6D conformal symmetry. This property has been discovered in the AdS$_5 \times S^5$ context and noticed in the tensor multiplet subsector of the AdS$_3 \times S^3$ theory. Here we extend it to general AdS$_3 \times S^3$ correlators which contain also the chiral primary operators of spin zero and one that sit in the gravity multiplet. The key observation is that the 6D conformal primary field associated with these operators is not a scalar but a self-dual $3$-form primary. As an example, we focus on the correlators involving two fields in the tensor multiplets and two in the gravity multiplet and show that all such correlators are encoded in a conformal 6D correlator between two scalars and two self-dual $3$-forms, which is determined by three functions of the cross ratios. We fix these three functions by comparing with the results of the simplest correlators derived from an explicit supergravity calculation. | high energy physics theory |
We study two dimensional $\mathcal{N} = (2, 2)$ Landau-Ginzburg models with tensor valued superfields with the aim of constructing large central charge superconformal field theories which are solvable using large $N$ techniques. We demonstrate the viability of such constructions and motivate the study of anisotropic tensor models. Such theories are a novel deformation of tensor models where we break the continuous symmetries while preserving the large $N$ solvability. Specifically, we examine theories with superpotentials involving tensor contractions chosen to pick out melonic diagrams. The anisotropy is introduced by further biasing individual terms by different coefficients, all of the same order, to retain large $N$ scaling. We carry out a detailed analysis of the resulting low energy fixed point and comment on potential applications to holography. Along the way we also examine gauged versions of the models (with partial anisotropy) and find generically that such theories have a non-compact Higgs branch of vacua. | high energy physics theory |
We study the long-time existence and convergence of general parabolic complex Monge-Ampere type equations whose second order operator is not necessarily convex or concave in the Hessian matrix of the unknown solution. | mathematics |
We describe the vote package in R, which implements the plurality (or first-past-the-post), two-round runoff, score, approval, and single transferable vote (STV) electoral systems, as well as methods for selecting the Condorcet winner and loser. We emphasize the STV system, which we have found to work well in practice for multi-winner elections with small electorates, such as committee and council elections, and the selection of multiple job candidates. For single-winner elections, STV is also called instant runoff voting (IRV), ranked choice voting (RCV), or the alternative vote (AV) system (a minimal IRV counting sketch appears after this table). The package also implements the STV system with equal preferences, for the first time in a software package, to our knowledge. It also implements a new variant of STV, in which a minimum number of candidates from a specified group are required to be elected. We illustrate the package with several real examples. | statistics |
We present the first sublinear memory sketch that can be queried to find the nearest neighbors in a dataset. Our online sketching algorithm compresses an N element dataset to a sketch of size $O(N^b \log^3 N)$ in $O(N^{(b+1)} \log^3 N)$ time, where $b < 1$. This sketch can correctly report the nearest neighbors of any query that satisfies a stability condition parameterized by $b$. We achieve sublinear memory performance on stable queries by combining recent advances in locality sensitive hash (LSH)-based estimators, online kernel density estimation, and compressed sensing. Our theoretical results shed new light on the memory-accuracy tradeoff for nearest neighbor search, and our sketch, which consists entirely of short integer arrays, has a variety of attractive features in practice. We evaluate the memory-recall tradeoff of our method on a friend recommendation task in the Google Plus social media network. We obtain orders of magnitude better compression than the random projection based alternative while retaining the ability to report the nearest neighbors of practical queries. | computer science |
The discoveries made over the past 20 years by Chandra and XMM-Newton surveys in conjunction with multiwavelength imaging and spectroscopic data available in the same fields have significantly changed the view of the supermassive black hole (SMBH) and galaxy connection. These discoveries have opened up several exciting questions that are beyond the capabilities of current X-ray telescopes and will need to be addressed by observatories in the next two decades. As new observatories peer into the early Universe, we will begin to understand the physics and demographics of SMBH infancy (at $z>6$) and investigate the influence of their accretion on the formation of the first galaxies ($\S$ 2.1). We will also be able to understand the accretion and evolution over the cosmic history (at $z\sim$1-6) of the full population of black holes in galaxies, including low accretion rate, heavily obscured AGNs at luminosities beyond the reach of current X-ray surveys ($\S$2.2 and $\S$2.3), enabling us to resolve the connection between SMBH growth and their environment. | astrophysics |
Recently the YbOH molecule has been suggested as a candidate to search for the electron electric dipole moment (eEDM), which violates spatial parity (P) and time-reversal (T) symmetries [I. Kozyryev and N. R. Hutzler, Phys. Rev. Lett. 119, 133002 (2017)]. In the present paper we show that the same system can be used to measure coupling constants of the interaction of electrons and nuclei with axionlike particles. The electron-nucleus interaction produced by the axion exchange induces a T,P-violating EDM of the whole molecular system. We express the corresponding T,P-violating energy shift produced by this effect in terms of the axion mass and the product of the axion-nucleus and axion-electron coupling constants. | physics |
The attention-based encoder-decoder modeling paradigm has achieved promising results on a variety of speech processing tasks like automatic speech recognition (ASR) and text-to-speech (TTS), among others. This paradigm takes advantage of the generalization ability of neural networks to learn a direct mapping from an input sequence to an output sequence, without recourse to prior knowledge such as audio-text alignments or pronunciation lexicons. However, ASR models stemming from this paradigm are prone to overfitting, especially when the training data is limited. Inspired by SpecAugment and BERT-like masked language modeling, we propose in this paper a decoder-masking-based training approach for end-to-end (E2E) ASR models. During the training phase we randomly replace some portions of the decoder's historical text input with the symbol [mask], in order to encourage the decoder to robustly output a correct token even when parts of its decoding history are masked or corrupted. The proposed approach is instantiated with a top-of-the-line transformer-based E2E ASR model. Extensive experiments on the Librispeech960h and TedLium2 benchmark datasets demonstrate the superior performance of our approach in comparison to some existing strong E2E ASR systems. | electrical engineering and systems science |
We measure the magnetic field dependence of the Hall angle in a metallic ferromagnetic nanomagnet with stable local magnetic moments, where the adopted mechanisms of the Hall effect predict a linear-plus-constant dependence on the external field, originating from the ordinary and anomalous Hall effects, respectively. We suggest that the experimentally observed deviations from this dependence are caused by the inverse spin Hall effect (ISHE) and develop a phenomenological theory, which predicts a unique nonlinear dependence of the ISHE contribution on the external magnetic field. Perfect agreement between theory and experiment supports the considerable role of the ISHE in Hall transport in ferromagnetic metals. | condensed matter |
Random allocation models used in clinical trials aid researchers in determining which treatment provides the best results by reducing bias between groups. Often, however, this determination leaves researchers battling the ethical issue of providing patients with unfavorable treatments. Many methods such as Play the Winner and the Randomized Play the Winner Rule have historically been utilized to determine patient allocation; however, these methods are prone to the increased assignment of unfavorable treatments. Recently a new Bayesian method using decreasingly informative priors has been proposed by \citep{sabo2014adaptive}, and later \citep{donahue2020allocation}. Yet this method can be time consuming if MCMC methods are required. We propose the use of a new method which uses a Dynamic Linear Model (DLM) \citep{harrison1999bayesian} to increase allocation speed while also decreasing the number of patient allocations necessary to identify the more favorable treatment. Furthermore, a sensitivity analysis is conducted on multiple parameters. Finally, a Bayes factor is calculated to determine the proportion of unused patient budget remaining at a specified cutoff, and this is used to determine decisive evidence in favor of the better treatment. | statistics |
This paper proposes a projection algorithm which can be employed to bound actuator signals, in terms of both magnitude and rate, for uncertain systems with redundant actuators. The investigated closed loop control system is assumed to contain an adaptive control allocator to distribute the total control input among actuators. Although conventional control allocation methods can handle actuator rate and magnitude constraints, they cannot consider actuator uncertainty. On the other hand, adaptive allocators manage uncertainty and actuator magnitude limits. The proposed projection algorithm enables adaptive control allocators to handle both magnitude and rate saturation constraints. A mathematically rigorous analysis is provided to show that with the help of the proposed projection algorithm, the performance of the adaptive control allocator can be guaranteed, in terms of error bounds. Simulation results are presented, where the Aero-Data Model In Research Environment (ADMIRE) is used as an over-actuated system, to demonstrate the effectiveness of the proposed method. | electrical engineering and systems science |
Over the years, there has been growing interest in using machine learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depends on a variety of characteristics, such as demographic aspects (age, gender, etc) or the acquisition technology, which might be unrelated to the target of the analysis. In supervised tasks, failing to match the ground truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. Many strategies have been proposed to handle confounders, ranging from data selection, to normalization techniques, up to the use of training algorithms for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. To overcome this limitation, we introduce a novel index that is able to measure the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data. | computer science |
Nowadays, long distance optical fibre transmission systems use polarization diversity multiplexed signals to enhance transmission performance. Distributed acoustic sensors (DAS) use the same propagation medium, i.e. single-mode optical fibre, and aim at comparable targets such as covering the highest distance with the best signal quality. In the case of sensors, a noiseless transmission enables monitoring a large quantity of mechanical events along the fibre. This paper aims at extending the perspectives of DAS systems with regard to technology breakthroughs introduced in long haul transmission systems over the last decade. We recently developed a sensor interrogation method based on coherent phase-sensitive optical time domain reflectometry ($\phi$-OTDR), with dual polarization multiplexing at the transmitter and polarization diversity at the receiver. We name this technique Coherent-MIMO sensing. A study is performed with a dual-polarization numerical model to compare several sensor interrogation techniques, including Coherent-MIMO. We demonstrate that dual-polarization probing of a fibre sensor makes it insensitive to polarization effects, decreases the risks of false alarms, and thus strongly enhances its sensitivity. The simulation results are validated with an experiment, and finally quantitative data are given on the performance increase enabled by Coherent-MIMO sensing. | electrical engineering and systems science |
Excellent variational approximations to Gaussian process posteriors have been developed which avoid the $\mathcal{O}\left(N^3\right)$ scaling with dataset size $N$. They reduce the computational cost to $\mathcal{O}\left(NM^2\right)$, with $M\ll N$ being the number of inducing variables, which summarise the process. While the computational cost seems to be linear in $N$, the true complexity of the algorithm depends on how $M$ must increase to ensure a certain quality of approximation. We address this by characterising the behavior of an upper bound on the KL divergence to the posterior. We show that with high probability the KL divergence can be made arbitrarily small by growing $M$ more slowly than $N$. A particular case of interest is that for regression with normally distributed inputs in $D$ dimensions with the popular Squared Exponential kernel, $M=\mathcal{O}(\log^D N)$ is sufficient (a short worked example of this growth rule appears after this table). Our results show that as datasets grow, Gaussian process posteriors can truly be approximated cheaply, and provide a concrete rule for how to increase $M$ in continual learning scenarios. | statistics |
In this paper, we study the light scalar and pseudoscalar invisible particles in the flavor changing neutral current processes of the $B_c$ meson. Effective operators are introduced to describe the couplings between quarks and light invisible particles. The Wilson coefficients are extracted from the experimental results for the $B$ and $D$ mesons, and are used to predict the upper limits of the branching fractions of the analogous decay processes of the $B_c$ meson. The hadronic transition matrix element is calculated with the Bethe-Salpeter method in the instantaneous approximation. The upper limits of the branching fractions for different values of $m_\chi$ are presented. It is found that in some region of $m_\chi$, the channel $B_c\to D_s^{(\ast)}\chi\chi$ has the largest upper limit, which is of the order of $10^{-6}$, and for $B_c\to D_s^\ast\chi\chi^\dagger$, the largest value of the upper limit can reach the order of $10^{-5}$. Other decay modes, such as $B_c\to D^{(*)}\chi\chi^{(\dagger)}$ and $B_c\to B^{(*)}\chi\chi^{(\dagger)}$, are also considered. | high energy physics phenomenology |
Gaussian processes offer an attractive framework for predictive modeling from longitudinal data, i.e., irregularly sampled, sparse observations from a set of individuals over time. However, such methods have two key shortcomings: (i) they rely on ad hoc heuristics or expensive trial and error to choose the effective kernels, and (ii) they fail to handle multilevel correlation structure in the data. We introduce Longitudinal deep kernel Gaussian process regression (L-DKGPR) which, to the best of our knowledge, is the only method to overcome these limitations by fully automating the discovery of complex multilevel correlation structure from longitudinal data. Specifically, L-DKGPR eliminates the need for ad hoc heuristics or trial and error using a novel adaptation of deep kernel learning that combines the expressive power of deep neural networks with the flexibility of non-parametric kernel methods. L-DKGPR effectively learns the multilevel correlation with a novel additive kernel that simultaneously accommodates both time-varying and time-invariant effects. We derive an efficient algorithm to train L-DKGPR using latent space inducing points and variational inference. Results of extensive experiments on several benchmark data sets demonstrate that L-DKGPR significantly outperforms state-of-the-art longitudinal data analysis (LDA) methods. | statistics |
Analysis of $d\sigma/dt$ of the TOTEM Collaboration data, carried out without model assumptions, showed the existence of a new effect in the behavior of the hadron scattering amplitude at small momentum transfer at a high confidence level. The quantitative description of the data in the framework of the HEGS model supports this phenomenon, which can be connected with quark potentials at large distances. | high energy physics phenomenology |
Oxygen-rich young supernova remnants are valuable objects for probing the outcome of nucleosynthetic processes in massive stars, as well as the physics of supernova explosions. Observed within a few thousand years after the supernova explosion, these systems contain fast-moving oxygen-rich and hydrogen-poor filaments visible at optical wavelengths: fragments of the progenitor's interior expelled at a few thousand km/s during the supernova explosion. Here we report the first identification of the compact object in 1E0102.2-7219 in reprocessed Chandra X-ray Observatory data, enabled via the discovery of a ring-shaped structure visible primarily in optical recombination lines of Ne I and O I. The optical ring, discovered in integral field spectroscopy observations from the Multi Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope, has a radius of $(2.10\pm0.35)$ arcsec = $(0.63\pm0.11)$ pc, and is expanding at a velocity of $90.5_{-30}^{+40}$ km/s. It surrounds an X-ray point source with an intrinsic X-ray luminosity $L_{i}$ (1.2--2.0 keV)= $(1.4\pm0.2)\times10^{33}$ erg/s. The energy distribution of the source indicates that this object is an isolated neutron star: a Central Compact Object akin to those present in the Cas A and Puppis A supernova remnants, and the first of its kind to be identified outside of our Galaxy. | astrophysics |
The 2-neutrino exchange potential is a Standard Model weak potential arising from the exchange of virtual neutrino-antineutrino pairs which must include all neutrino properties, including the number of flavors, their masses, fermionic nature (Dirac or Majorana), and CP violation. We describe a new approach for calculating the spin-independent 2-neutrino exchange potential, including the mixing of three neutrino mass states and CP violation. | high energy physics phenomenology |
As recently advocated in \cite{Fischer:2018niu}, there is a fundamentally new mechanism for axion production in the Sun and Earth. However, the role of very slow axions was neglected in previous studies because of their negligible contribution to the total axion production by this new mechanism. In the present work we specifically focus on the analysis of the non-relativistic axions which will be trapped by the Sun and Earth due to gravitational forces. The corresponding emission rate of these low energy axions (below the escape velocity) is very tiny. However, these axions will be accumulated by the Sun and Earth during their life-times, i.e. 4.5 billion years, which greatly enhances the discovery potential. The computations are based on the so-called Axion Quark Nugget (AQN) Dark Matter Model. This model was originally invented as a natural explanation of the observed ratio $\Omega_{\rm dark} \sim \Omega_{\rm visible}$ when the DM and visible matter densities assume the same order of magnitude values, irrespective of the axion mass $m_a$ or the initial misalignment angle $\theta_0$. This model, without adjustment of any parameters, gives a very reasonable intensity of the extreme UV (EUV) radiation from the solar corona as a result of the AQN annihilation events with the solar material. This extra energy released in the corona represents a resolution, within the AQN framework, of a long-standing puzzle known in the literature as the "solar corona heating mystery". The same annihilation events also produce axions. The flux of these axions is unambiguously fixed in this model and expressed in terms of the EUV luminosity from the solar corona. We make a few comments on the potential discovery of these gravitationally bound axions. | high energy physics phenomenology |
Product risk assessment is the overall process of determining whether a product, which could be anything from a type of washing machine to a type of teddy bear, is judged safe for consumers to use. There are several methods used for product risk assessment, including RAPEX, which is the primary method used by regulators in the UK and EU. However, despite its widespread use, we identify several limitations of RAPEX including a limited approach to handling uncertainty and the inability to incorporate causal explanations for using and interpreting test data. In contrast, Bayesian Networks (BNs) are a rigorous, normative method for modelling uncertainty and causality which are already used for risk assessment in domains such as medicine and finance, as well as critical systems generally. This article proposes a BN model that provides an improved systematic method for product risk assessment that resolves the identified limitations with RAPEX. We use our proposed method to demonstrate risk assessments for a teddy bear and a new uncertified kettle for which there is no testing data and the number of product instances is unknown. We show that, while we can replicate the results of the RAPEX method, the BN approach is more powerful and flexible. | statistics |
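As a flavour of the approach, here is a minimal sketch using pgmpy with a hypothetical two-node model; the variables, probabilities, and structure are invented for illustration and are far simpler than the article's proposed BN.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Defect", "Harm")])
cpd_defect = TabularCPD("Defect", 2, [[0.99], [0.01]])      # P(defect present)
cpd_harm = TabularCPD("Harm", 2,
                      [[0.999, 0.7],   # P(no harm | no defect), P(no harm | defect)
                       [0.001, 0.3]],
                      evidence=["Defect"], evidence_card=[2])
model.add_cpds(cpd_defect, cpd_harm)
assert model.check_model()

# Posterior probability of harm after a failed product test (Defect = 1)
print(VariableElimination(model).query(["Harm"], evidence={"Defect": 1}))
```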
Our knowledge of the variety of galaxy clusters has been increasing in the last few years thanks to our progress in understanding the severity of selection effects on samples. To understand the reason for the observed variety, we study CL2015, a cluster easily missed in X-ray selected observational samples. Its core-excised X-ray luminosity is low for its mass M500, well below the mean relation for an X-ray selected sample, but only ~1.5 sigma below that derived for an X-ray unbiased sample. We derived thermodynamic profiles and hydrostatic masses with the acquired deep Swift X-ray data, and we used archival Einstein, Planck, and SDSS data to derive additional measurements, such as the integrated Compton parameter, total mass, and stellar mass. The pressure and electron density profiles of CL2015 are systematically outside the +/- 2 sigma range of the universal profiles; in particular, the electron density profile is even lower than the one derived from Planck-selected clusters. CL2015 also turns out to be fairly different in the X-ray luminosity versus integrated pressure scaling compared to an X-ray selected sample, but it is a normal object in terms of stellar mass fraction. CL2015's hydrostatic mass profile, by itself or when considered together with dynamical masses, shows that the cluster has an unusually low concentration and an unusual sparsity compared to clusters in X-ray selected samples. The different behavior of CL2015 is caused by its low concentration. When concentration differences are accounted for, the properties of CL2015 become consistent with comparison samples. CL2015 is perhaps the first known cluster with a remarkably low mass concentration for which high quality X-ray data exist. Objects similar to CL2015 fail to enter observational X-ray selected samples because of their low X-ray luminosity relative to their mass. | astrophysics |
This article is based on lecture notes for the Marie Curie Training school "Initial Training on Numerical Methods for Active Matter". It provides an introductory overview of modeling approaches for active matter and is primarily targeted at PhD students (or other readers) who encounter some of these approaches for the first time. The aim of the article is to help put the described modeling approaches into perspective. | condensed matter |
Channel estimation and hybrid precoding are considered for a multi-user millimeter wave massive multi-input multi-output system. A deep learning compressed sensing (DLCS) channel estimation scheme is proposed. The channel estimation neural network for the DLCS scheme is trained offline using simulated environments to predict the beamspace channel amplitude. Then the channel is reconstructed based on the obtained indices of the dominant beamspace channel entries. A deep learning quantized phase (DLQP) hybrid precoder design method is developed after channel estimation. The training hybrid precoding neural network for the DLQP method is obtained offline considering the approximate phase quantization. Then the deployment hybrid precoding neural network (DHPNN) is obtained by replacing the approximate phase quantization with ideal phase quantization, and the output of the DHPNN is the analog precoding vector. Finally, the analog precoding matrix is obtained by stacking the analog precoding vectors and the digital precoding matrix is calculated by zero-forcing. Simulation results demonstrate that the DLCS channel estimation scheme outperforms the existing schemes in terms of the normalized mean-squared error and the spectral efficiency, while the DLQP hybrid precoder design method has better spectral efficiency performance than other methods with low phase shifter resolution. | electrical engineering and systems science |
An accurate knowledge of nuclear parton distribution functions (nPDFs) is an essential ingredient of high energy physics calculations when the processes involve nuclei in the initial state. It is now well known that prompt photon production, both in hadronic and nuclear collisions, is a powerful tool for exploring the parton densities of the nucleon and nuclei, especially of the gluon. In this work, we perform a comprehensive study of isolated prompt photon production in $ p $-Pb collisions at backward rapidities to find the best kinematic regions in which the experimental measurements are most sensitive to the nuclear modifications of parton densities. Most emphasis will be placed on the antishadowing nuclear modification. To this aim, we calculate and compare various quantities at different values of the center-of-mass energy covered by the LHC and also different rapidity regions to determine which one is most useful. | high energy physics phenomenology |
Messier 8 (M8), one of the brightest HII regions in our Galaxy, is associated with two prominent massive star-forming regions: M8-Main, the particularly bright part of the large scale HII region (mainly) ionised by the stellar system Herschel 36 (Her 36) and M8 East (M8 E), which is mainly powered by a deeply embedded young stellar object (YSO), a bright infrared (IR) source, M8E-IR. We aim to study the interaction of the massive star-forming region M8 E with its surroundings and to compare the star-forming environments of M8-Main and M8 E. We used the IRAM 30 m telescope to perform an imaging spectroscopy survey of the molecular environment of M8E-IR. We imaged and analysed data for the $J$ = 1 $\to$ 0 rotational transitions of $^{12}$CO, $^{13}$CO, N$_2$H$^+$, HCN, H$^{13}$CN, HCO$^+$, H$^{13}$CO$^+$, HNC and HN$^{13}$C observed for the first time toward M8 E. We used LTE and non-LTE techniques to determine column densities of the observed species and to constrain the physical conditions of the gas responsible for their emission. Examining the YSO population in M8 E allows us to explore the observed ionization front (IF) as seen in the GLIMPSE 8 $\mu$m emission image. We find that $^{12}$CO probes the warm diffuse gas also traced by the GLIMPSE 8 $\mu$m emission, while N$_2$H$^+$ and HN$^{13}$C trace the cool and dense gas. We find that the star-formation in M8 E appears to be triggered by the earlier formed stellar cluster NGC 6530, which powers an HII region giving rise to an IF that is moving at a speed $\geq$ 0.26 km s$^{-1}$ across M8 E. We derive temperatures of 80 K and 30 K for the warm and cool gas components, respectively, and constrain H$_2$ volume densities to be in the range of 10$^4$--10$^6$ cm$^{-3}$. Comparison of the observed abundances of various species reflects the fact that M8 E is at an earlier stage of massive star formation than M8-Main. | astrophysics |
We propose a novel direction to improve the denoising quality of filtering-based denoising algorithms in real time by predicting the best filter parameter value using a Convolutional Neural Network (CNN). We take the use case of BM3D, the state-of-the-art filtering-based denoising algorithm, to demonstrate and validate our approach. We propose and train a simple, shallow CNN to predict in real time, the optimum filter parameter value, given the input noisy image. Each training example consists of a noisy input image (training data) and the filter parameter value that produces the best output (training label). Both qualitative and quantitative results using the widely used PSNR and SSIM metrics on the popular BSD68 dataset show that the CNN-guided BM3D outperforms the original, unguided BM3D across different noise levels. Thus, our proposed method is a CNN-based improvement on the original BM3D which uses a fixed, default parameter value for all images. | electrical engineering and systems science |
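A minimal PyTorch sketch of the idea: a shallow CNN regressing the optimum filter parameter from a noisy input image. The architecture, sizes, and training labels here are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class ParamNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> size-agnostic
        )
        self.head = nn.Linear(16, 1)          # scalar filter parameter

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = ParamNet()
noisy = torch.randn(8, 1, 64, 64)             # batch of noisy grayscale patches
best_param = torch.rand(8, 1)                 # labels: per-image optimal parameter
loss = nn.functional.mse_loss(net(noisy), best_param)
loss.backward()                               # at test time: bm3d(img, net(img))
```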
In wireless communication systems, Orthogonal Frequency-Division Multiplexing (OFDM) includes variants using either a cyclic prefix (CP) or zero padding (ZP) as the guard interval to avoid inter-symbol interference. OFDM is ideally suited to deal with frequency-selective channels and additive white Gaussian noise (AWGN); however, its performance may be dramatically degraded in the presence of impulse noise. While the ZP variants of OFDM exhibit lower bit error rate (BER) and higher energy efficiency compared to their CP counterparts, they demand strict time synchronization, which is challenging in the absence of pilots and a CP. Moreover, contrary to AWGN, impulse noise severely corrupts data. In this paper, a new low-complexity timing offset (TO) estimator for ZP-OFDM in practical impulsive-noise environments is proposed, which relies on the second-order statistics of the multipath fading channel and noise. Performance comparison with existing TO estimators demonstrates either a superior performance in terms of lock-in probability or a significantly lower complexity over a wide range of signal-to-noise ratio (SNR) for various practical scenarios. | electrical engineering and systems science |
Lung nodule malignancy prediction is an essential step in the early diagnosis of lung cancer. Besides the difficulties commonly discussed, the challenges of this task also come from the ambiguous labels provided by annotators, since deep learning models may learn, even amplify, the bias embedded in them. In this paper, we propose a multi-view "divide-and-rule" (MV-DAR) model to learn from both reliable and ambiguous annotations for lung nodule malignancy prediction. According to the consistency and reliability of their annotations, we divide nodules into three sets: a consistent and reliable set (CR-Set), an inconsistent set (IC-Set), and a low reliable set (LR-Set). The nodule in IC-Set is annotated by multiple radiologists inconsistently, and the nodule in LR-Set is annotated by only one radiologist. The proposed MV-DAR contains three DAR submodels to characterize a lung nodule from three orthographic views. Each DAR consists of a prediction network (Prd-Net), a counterfactual network (CF-Net), and a low reliable network (LR-Net), learning on CR-Set, IC-Set, and LR-Set, respectively. The image representation ability learned by CF-Net and LR-Net is then transferred to Prd-Net by negative-attention module (NA-Module) and consistent-attention module (CA-Module), aiming to boost the prediction ability of Prd-Net. The MV-DAR model has been evaluated on the LIDC-IDRI dataset and LUNGx dataset. Our results indicate not only the effectiveness of the proposed MV-DAR model in learning from ambiguous labels but also its superiority over present noisy label-learning models in lung nodule malignancy prediction. | electrical engineering and systems science |
Given a hypergraph $H$, the size-Ramsey number $\hat{r}_2(H)$ is the smallest integer $m$ such that there exists a graph $G$ with $m$ edges with the property that in any colouring of the edges of $G$ with two colours there is a monochromatic copy of $H$. We prove that the size-Ramsey number of the $3$-uniform tight path on $n$ vertices $P^{(3)}_n$ is linear in $n$, i.e., $\hat{r}_2(P^{(3)}_n) = O(n)$. This answers a question by Dudek, Fleur, Mubayi, and R\"odl for $3$-uniform hypergraphs [On the size-Ramsey number of hypergraphs, J. Graph Theory 86 (2016), 417-434], who proved $\hat{r}_2(P^{(3)}_n) = O(n^{3/2} \log^{3/2} n)$. | mathematics |
Data matrix centering is an ever-present yet under-examined aspect of data analysis. Functional data analysis (FDA) often operates with a default of centering such that the vectors in one dimension have mean zero. We find that centering along the other dimension identifies a novel useful mode of variation beyond those familiar in FDA. We explore ambiguities in both matrix orientation and nomenclature. Differences between centerings and their potential interaction can be easily misunderstood. We propose a unified framework and new terminology for centering operations. We clearly demonstrate the intuition behind and consequences of each centering choice with informative graphics. We also propose a new direction energy hypothesis test as part of a series of diagnostics for determining which choice of centering is best for a data set. We explore the application of these diagnostics in several FDA settings. | statistics |
Under any Multiclass Classification (MCC) setting defined by a collection of labeled point-clouds specified by a feature-set, we extract only stochastic partial orderings from all possible triplets of point-clouds without explicitly measuring the three cloud-to-cloud distances. We demonstrate that such a collection of partial orderings can efficiently compute a label embedding tree geometry on the Label-space. This tree in turn gives rise to a predictive graph, or a network with precisely weighted linkages. These two multiscale geometries are taken as the coarse scale information content of MCC. They indeed jointly shed light on explainable knowledge of why and how labeling comes about, and facilitate error-free prediction with potential multiple candidate labels supported by data. For revealing within-label heterogeneity, we further label naturally found clusters within each point-cloud, and likewise derive multiscale geometry as the fine-scale information content contained in data. This fine-scale endeavor shows that our computational proposal is indeed scalable to MCC settings having a large label-space. Overall the computed multiscale collective of data-driven patterns and knowledge will serve as a basis for constructing visible and explainable subject matter intelligence regarding the system of interest. | statistics |
Arrangement of interacting particles on a sphere is historically a well known problem; however, the ordering of particles with anisotropic interactions, such as the dipole-dipole interaction, has remained unexplored. We solve the orientational ordering of point dipoles on a sphere with fixed positional order via numerical minimization of the interaction energy and analyze stable configurations depending on their symmetry and degree of ordering. We find that a macrovortex is a generic ground state, with various discrete rotational symmetries for different system sizes, while higher energy metastable states are similar, but less ordered. We observe orientational phase transitions and hysteresis in response to a changing external field, both for a fixed sphere orientation with respect to the field and for a freely-rotating sphere. For the case of a freely rotating sphere, we also observe changes of the symmetry axis with increasing field strength. | condensed matter |
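A minimal SciPy sketch of the numerical minimization described above, with fixed random positions and free dipole orientations parametrised by two angles each; all choices (system size, minimiser, parametrisation) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 32
r = rng.normal(size=(N, 3))
r /= np.linalg.norm(r, axis=1, keepdims=True)   # fixed positions on the sphere

def energy(angles):
    th, ph = angles[:N], angles[N:]             # spherical angles per dipole
    m = np.stack([np.sin(th) * np.cos(ph),
                  np.sin(th) * np.sin(ph),
                  np.cos(th)], axis=1)
    E = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            d = r[i] - r[j]
            dist = np.linalg.norm(d)
            n = d / dist
            # Dipole-dipole pair energy
            E += (m[i] @ m[j] - 3 * (m[i] @ n) * (m[j] @ n)) / dist**3
    return E

res = minimize(energy, rng.uniform(0, np.pi, 2 * N), method="L-BFGS-B")
print("candidate ground-state energy:", res.fun)
```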
Purpose: The purpose of our study was to use a Dual-TR STE-MR protocol as a clinical tool for cortical bone free water quantification at 1.5T and validate it by comparing the obtained results (MR-derived results) with dehydration results. Methods: Human studies were compliant with HIPAA and were approved by the institutional review board. Short Echo Time (STE) MR imaging with different Repetition Times (TRs) was used for quantification of cortical bone free water T1 (T1free) and concentration ($\rho_{\rm free}$). The proposed strategy was compared with the dehydration technique in seven bovine cortical bone samples. The agreement between the two methods was quantified by using Bland and Altman analysis. Then we applied the technique to a cross-sectional population of thirty healthy volunteers (18F/12M) and examined the association of the biomarkers with age. Results: The mean values of $\rho_{\rm free}$ for bovine cortical bone specimens were quantified as 4.37% and 5.34% by using the STE-MR and dehydration techniques, respectively. The Bland and Altman analysis showed good agreement between the two methods along with the suggestion of a 0.99% bias between them. Strong correlations with age were also reported for $\rho_{\rm free}$ (r2 = 0.62) and T1free (r2 = 0.8). The reproducibility of the method, evaluated in eight subjects, yielded an intra-class correlation of 0.95. Conclusion: STE-MR imaging with the dual-TR strategy is a clinical solution for quantifying cortical bone $\rho_{\rm free}$ and T1free. | physics |
Real-space picture of electron recollision with the parent ion guides our understanding of the highly nonlinear response of atoms and molecules to intense low-frequency laser fields. It is also among several leading contestants for the dominant mechanism of high harmonic generation (HHG) in solids, where it is typically viewed in the momentum space, as the recombination of the conduction band electron with the valence band hole, competing with another HHG mechanism, the strong-field driven Bloch oscillations. In this work, we use numerical simulations to directly test and confirm the real-space recollision picture as the key mechanism of HHG in solids. Our tests take advantage of the well-known characteristic features in the molecular harmonic spectra, associated with the real-space structure of the molecular ion. We show the emergence of analogous spectral features when similar real-space structures are present in the periodic potential of the solid-state lattice. This work demonstrates the capability of HHG imaging of spatial structures of a unit cell in solids. | physics |
L1$_0$ ordered magnetic alloys such as FePt, FePd, CoPt and FeNi are well known for their large magnetocrystalline anisotropy. Among these, the L1$_0$-FeNi alloy is an economically viable material for magnetic recording media because it does not contain rare earth or noble elements. In this work, L1$_0$-FeNi films with three different strengths of anisotropy were fabricated by varying the deposition process in a molecular beam epitaxy system. We have investigated the magnetization reversal along with domain imaging via a magneto-optic Kerr effect based microscope. It is found that in all three samples, the magnetization reversal happens via domain wall motion. Further, ferromagnetic resonance (FMR) spectroscopy was performed to evaluate the damping constant and magnetic anisotropy. It was observed that the FeNi sample with moderate strength of anisotropy exhibits a low value of damping constant $\sim 4.9\times10^{-3}$. In addition, it was found that the films possess a mixture of cubic and uniaxial anisotropies. | physics |
In this paper we consider the rigidity and flexibility of $C^{1, \theta}$ isometric extensions and we show that the H\"older exponent $\theta_0=\frac12$ is critical in the following sense: if $u\in C^{1,\theta}$ is an isometric extension of a smooth isometric embedding of a codimension one submanifold $\Sigma$ and $\theta> \frac12$, then the tangential connection agrees with the Levi-Civita connection along $\Sigma$. On the other hand, for any $\theta<\frac12$ we can construct $C^{1,\theta}$ isometric extensions via convex integration which violate such property. As a byproduct we get moreover an existence theorem for $C^{1, \theta}$ isometric embeddings, $\theta<\frac12$, of compact Riemannian manifolds with $C^1$ metrics and sharper amount of codimension. | mathematics |
Molecular dynamics simulations are used to show that strong magnetization significantly increases the space and time scales associated with interparticle correlations. The physical mechanism responsible is a channeling effect whereby particles are confined to move along narrow cylinders with a width characterized by the gyroradius and a length characterized by the collision mean free path. The predominant interaction is $180^\circ$ collisions at the ends of the collision cylinders, resulting in a long-range correlation parallel to the magnetic field. Its influence is demonstrated via the dependence of the velocity autocorrelation functions and self-diffusion coefficients on the domain size and run time in simulations of the one-component plasma. A very large number of particles, and therefore domain size, must be used to resolve the long-range correlations, suggesting that the number of charged particles in the collection must increase in order to constitute a plasma. Correspondingly, this effect significantly delays the time it takes to reach a diffusive regime, in which the mean square displacement of particles increases linearly in time. This result presents challenges for connecting measurements in non-neutral and ultracold neutral plasma experiments, as well as molecular dynamics simulations, with fluid transport properties due to their finite size. | physics |
The root of most of the technicolor (TC) problems lies in the way the ordinary fermions acquire their masses, where an ordinary fermion (f) couples to a technifermion (F) mediated by an Extended Technicolor (ETC) boson, leading to fermion masses that vary with the ETC mass scale ($M_E$) as $1/M_E^2$. Recently, we discussed a new approach consisting of models where TC and QCD are coupled through a larger theory; in this case the solutions of these equations are modified compared to those of the isolated equations, and the TC and QCD self-energies are of the Irregular form, which allows us to build models where ETC boson masses can be pushed to very high energies. In this work we extend these results to 331-TC models; in particular, considering a coupled system of Schwinger-Dyson equations, we show that all technifermions of the model exhibit the same asymptotic behavior for TC self-energies. As an application we discuss how a mass splitting of order $O(100)$ GeV could be generated between the second and third generations of fermions. | high energy physics phenomenology |
We present online boosting algorithms for multilabel ranking with top-k feedback, where the learner only receives information about the top k items from the ranking it provides. We propose a novel surrogate loss function and unbiased estimator, allowing weak learners to update themselves with limited information. Using these techniques we adapt full information multilabel ranking algorithms (Jung and Tewari, 2018) to the top-k feedback setting and provide theoretical performance bounds which closely match the bounds of their full information counterparts, with the cost of increased sample complexity. These theoretical results are further substantiated by our experiments, which show a small gap in performance between the algorithms for the top-k feedback setting and that for the full information setting across various datasets. | statistics |
Using a resonance nonlinear Schrödinger equation as a bridge, we explore a direct connection of cold plasma physics to two-dimensional black holes. Namely, we compute and diagonalize a metric attached to the propagation of magneto-acoustic waves in a cold plasma subject to a transverse magnetic field, and we construct an explicit change of variables by which this metric is transformed exactly to a Jackiw-Teitelboim black hole metric. | high energy physics theory |
We investigate the charge-states coherence underlying the nonequilibrium transport through a spinless double-dot Aharonov-Bohm (AB) interferometer. Both the current noise spectrum and real-time dynamics are evaluated with the well-established dissipaton-equation-of-motion method. The resulting spectra show characteristic peaks and dips, arising from coherent Rabi oscillation dynamics, with the environment-assisted indirect inter-dot tunnel coupling mechanism. The observed spectroscopic features are in quantitative agreement with the real-time dynamics of the reduced density matrix off-diagonal element between two charge states. Owing to the aforementioned mechanism, these characteristics of coherence are very sensitive to the AB phase. While this is generally true for the cross-correlation spectrum, the total circuit noise spectrum, which is experimentally more accessible, shows a remarkably rich interplay between various mechanisms. The most important finding of this work is the existence of Coulomb-blockade-assisted Rabi interference, with very distinct signatures arising from the interplay between the AB interferometer and the interdot Coulomb interaction induced Fano resonance. | condensed matter |
We present the easy-to-use, publicly available, Python package petitRADTRANS, built for the spectral characterization of exoplanet atmospheres. The code is fast, accurate, and versatile; it can calculate both transmission and emission spectra within a few seconds at low resolution ($\lambda/\Delta\lambda$ = 1000; correlated-k method) and high resolution ($\lambda/\Delta\lambda = 10^6$; line-by-line method), using only a few lines of input instruction. The somewhat slower correlated-k method is used at low resolution because it is more accurate than methods such as opacity sampling. Clouds can be included and treated using wavelength-dependent power law opacities, or by using optical constants of real condensates, specifying either the cloud particle size, or the atmospheric mixing and particle settling strength. Opacities of amorphous or crystalline, spherical or irregularly-shaped cloud particles are available. The line opacity database spans temperatures between 80 and 3000 K, allowing one to model fluxes of objects such as terrestrial planets, super-Earths, Neptunes, or hot Jupiters, if their atmospheres are hydrogen-dominated. Higher temperature points and species will be added in the future, also allowing one to model the class of ultra-hot Jupiters, with equilibrium temperatures $T_{\rm eq} \gtrsim 2000$ K. Radiative transfer results were tested by cross-verifying the low- and high-resolution implementations of petitRADTRANS, and benchmarked with the petitCODE, which itself is also benchmarked to the ATMO and Exo-REM codes. We successfully carried out test retrievals of synthetic JWST emission and transmission spectra (for the hot Jupiter TrES-4b, which has a $T_{\rm eq}$ of $\sim$ 1800 K). The code is publicly available at http://gitlab.com/mauricemolli/petitRADTRANS, and its documentation can be found at https://petitradtrans.readthedocs.io. | astrophysics |
Amateur astronomers can make useful contributions to the study of comets. They add temporal coverage and multi-scale observations which can aid the study of fast-changing, and large-scale comet features. We document and review the amateur observing campaign set up to complement the Rosetta space mission, including the data submitted to date, and consider the campaign's effectiveness in the light of experience from previous comet amateur campaigns. We report the results of surveys of campaign participants, the amateur astronomy community, and schools who participated in a comet 46P observing campaign. We draw lessons for future campaigns which include the need for: clarity of objectives; recognising the wider impact campaigns can have on increasing science capital; clear, consistent, timely and tailored guidance; easy upload procedures with in-built quality control; and, regular communication, feedback and recognition. | astrophysics |
Low temperature optical spectroscopy in applied magnetic fields provides clear evidence of magnetoelastic coupling in the spin ice material Ho$_2$Ti$_2$O$_7$. In IR measurements, we observe field dependent features around 61, 72 and 78 meV, energies corresponding to crystal electric field doublets. Calculating the electronic band structure based on the crystal field Hamiltonian allows determination of crystal field energies, values for the crystal field parameters, and confirmation that the observed features in IR are consistent with magnetic-dipole-allowed transitions between $^5I_8$ CEF levels. Additionally, we identify a weak field-dependent feature near one of the CEF doublets, which we associate with a vibronic bound state that was previously observed by others in inelastic neutron measurements. | condensed matter |
In the present article, we develop a general framework for the description of an $N$-sequential state discrimination, where each of $N$ receivers always obtains a conclusive result. For this new state discrimination scenario, we derive two mutually equivalent general representations of the success probability and prove that if one of two states, pure or mixed, is prepared by a sender, then the optimal success probability is given by the Helstrom bound for any number $N$ of sequential receivers. Furthermore, we specify receivers' indirect measurements resulting in the optimal $N$-sequential conclusive state discrimination protocol. The developed framework holds for any number $N$ of sequential receivers, any number of arbitrary quantum states, pure or mixed, to be discriminated, and all types of receivers' quantum measurements. The new general results derived within the developed framework are important both from the theoretical point of view and for successful multipartite quantum communication even in the presence of quantum noise. | quantum physics |
We present here Nested_fit, a Bayesian data analysis code developed for investigations of atomic spectra and other physical data. It is based on the nested sampling algorithm with the implementation of an upgraded lawn mower robot method for finding new live points. For a given data set and a chosen model, the program provides the Bayesian evidence, for the comparison of different hypotheses/models, and the different parameter probability distributions. A large database of spectral profiles is already available (Gaussian, Lorentz, Voigt, Log-normal, etc.) and additional ones can easily be added. It is written in Fortran, for optimized parallel computation, and it is accompanied by a Python library for the visualization of the results. | physics |
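For orientation, a minimal sketch of the nested sampling loop itself (not Nested_fit's Fortran internals or its lawn mower robot sampler; real implementations replace the rejection step below with smarter constrained moves):

```python
import numpy as np

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=2000):
    live = [prior_draw() for _ in range(n_live)]
    logL = np.array([loglike(p) for p in live])
    logZ, logX = -np.inf, 0.0                    # evidence and prior volume
    for i in range(n_iter):
        worst = int(np.argmin(logL))
        logX_new = -(i + 1) / n_live             # expected log-volume shrinkage
        logw = np.log(np.exp(logX) - np.exp(logX_new))
        logZ = np.logaddexp(logZ, logw + logL[worst])
        # Replace the worst live point under the rising likelihood constraint
        while True:
            p = prior_draw()
            if loglike(p) > logL[worst]:
                live[worst], logL[worst] = p, loglike(p)
                break
        logX = logX_new
    return logZ                                  # log Bayesian evidence
```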
Entropic dynamics is a framework in which the laws of dynamics are derived as an application of entropic methods of inference. Its successes include the derivation of quantum mechanics and quantum field theory from probabilistic principles. Here we develop the entropic dynamics of a system whose state is described by a probability distribution. Thus, the dynamics unfolds on a statistical manifold which is automatically endowed with a metric structure provided by information geometry. The curvature of the manifold has a significant influence. We focus our dynamics on the statistical manifold of Gibbs distributions (also known as canonical distributions or the exponential family). The model includes an "entropic" notion of time that is tailored to the system under study; the system is its own clock. As one might expect, entropic time is intrinsically directional; there is a natural arrow of time which is led by entropic considerations. As illustrative examples we discuss dynamics on a space of Gaussians and the discrete 3-state system. | condensed matter |
We discuss a phase transition in spin glass models which have been rarely considered in the past, namely the phase transition that may take place when two real replicas are forced to be at a larger distance (i.e. at a smaller overlap) than the typical one. In the first part of the work, by solving analytically the Sherrington-Kirkpatrick model in a field close to its critical point, we show that even in a paramagnetic phase the forcing of two real replicas to an overlap small enough leads the model to a phase transition where the symmetry between replicas is spontaneously broken. More importantly, this phase transition is related to the de Almeida-Thouless (dAT) critical line. In the second part of the work, we exploit the phase transition in the overlap between two real replicas to identify the critical line in a field in finite-dimensional spin glasses. This is a notoriously difficult computational problem, because of huge finite-size corrections. We introduce a new method of analysis of Monte Carlo data for disordered systems, where the overlap between two real replicas is used as a conditioning variate. We apply this analysis to equilibrium measurements collected in the paramagnetic phase in a field, $h>0$ and $T_c(h)<T<T_c(h=0)$, of the $d=1$ spin glass model with long-range interactions decaying fast enough to be outside the regime of validity of the mean-field theory. We thus provide very reliable estimates for the thermodynamic critical temperature in a field. | condensed matter |
Being able to describe accurately the dynamics and steady-states of driven and/or dissipative but quantum correlated lattice models is of fundamental importance in many areas of science: from quantum information to biology. An efficient numerical simulation of large open systems in two spatial dimensions is a challenge. In this work, we develop a tensor network method, based on an infinite Projected Entangled Pair Operator (iPEPO) ansatz, applicable directly in the thermodynamic limit. We incorporate techniques of finding optimal truncations of enlarged network bonds by optimising an objective function appropriate for open systems. Comparisons with numerically exact calculations, both for the dynamics and the steady-state, demonstrate the power of the method. In particular, we consider dissipative transverse quantum Ising and driven-dissipative hard core boson models in non-mean field limits, proving able to capture substantial entanglement in the presence of dissipation. Our method enables to study regimes which are accessible to current experiments but lie well beyond the applicability of existing techniques. | quantum physics |
We map the quantum problem of a free bosonic field in a space-time dependent background into a classical problem. $N$ degrees of freedom of a real field in the quantum theory are mapped into $2N^2$ classical simple harmonic oscillators with specific initial conditions. We discuss how this classical-quantum correspondence (CQC) may be used to evaluate quantum radiation and fully treat the backreaction of quantum fields on classical backgrounds. The technique has widespread application, including to the quantum evaporation of classical breathers ("oscillons"). | high energy physics theory |
The main objective of this paper is to improve the communication costs in distributed quantum circuits. To this end, we present a method for generating distributed quantum circuits from monolithic quantum circuits in such a way that communication between partitions of a distributed quantum circuit is minimized. Thus, the communication between distributed components is performed at a lower cost. Compared to existing works, our approach can effectively map a quantum circuit into an appropriate number of distributed components. Since teleportation is usually the protocol used to connect components in a distributed quantum circuit, our approach ultimately reduces the number of teleportations. The results of applying our approach to the benchmark quantum circuits determine its effectiveness and show that partitioning is a necessary step in constructing distributed quantum circuit. | quantum physics |
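One way to picture the partitioning step is the following sketch (an illustrative heuristic, not the paper's method): build a qubit interaction graph weighted by two-qubit gate counts, bisect it, and read the cut weight as a proxy for the teleportations needed.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Hypothetical circuit: list of two-qubit gates between qubit indices
two_qubit_gates = [(0, 1), (1, 2), (2, 3), (0, 1), (3, 4), (4, 5), (1, 4)]
G = nx.Graph()
for q1, q2 in two_qubit_gates:
    w = G[q1][q2]["weight"] + 1 if G.has_edge(q1, q2) else 1
    G.add_edge(q1, q2, weight=w)

part_a, part_b = kernighan_lin_bisection(G, weight="weight")
cut = sum(d["weight"] for u, v, d in G.edges(data=True)
          if (u in part_a) != (v in part_a))
print(part_a, part_b, "cross-partition interactions:", cut)
```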
We survey several notions of Mackey functors and biset functors found in the literature and prove some old and new theorems comparing them. While little here will surprise the experts, we draw a conceptual and unified picture by making systematic use of finite groupoids. This provides a road map for the various approaches to the axiomatic representation theory of finite groups, as well as some details which are hard to find in writing. | mathematics |
We demonstrate that a many-body nonlocality is a resource for ultra-precise metrology. This result is achieved by linking the sensitivity of a quantum sensor with a combination of many-body correlation functions that witness the nonlocality. We illustrate our findings with some prominent examples---a collection of spins forming an Ising chain and a gas of ultra-cold atoms in any two-mode configuration. | quantum physics |
Difference-in-differences is a widely-used evaluation strategy that draws causal inference from observational panel data. Its causal identification relies on the assumption of parallel trends, which is scale dependent and may be questionable in some applications. A common alternative is a regression model that adjusts for the lagged dependent variable, which rests on the assumption of ignorability conditional on past outcomes. In the context of linear models, \citet{APbook} show that the difference-in-differences and lagged-dependent-variable regression estimates have a bracketing relationship. Namely, for a true positive effect, if ignorability is correct, then mistakenly assuming parallel trends will overestimate the effect; in contrast, if the parallel trends assumption is correct, then mistakenly assuming ignorability will underestimate the effect. We show that the same bracketing relationship holds in general nonparametric (model-free) settings. We also extend the result to semiparametric estimation based on inverse probability weighting. We provide three examples to illustrate the theoretical results with replication files in \citet{ding2019bracketingData}. | statistics |
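The bracketing relationship can be seen in a small NumPy simulation (an illustrative data-generating process, not the paper's notation): when ignorability given the lagged outcome truly holds and selection into treatment is negative, DiD overestimates a positive effect while the lagged-dependent-variable regression recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 100_000, 1.0                          # true positive effect
y0 = rng.normal(0, 1, n)                       # pre-period outcome
p = 1 / (1 + np.exp(y0))                       # treatment likelier if y0 is low
d = rng.binomial(1, p)
y1 = 0.5 * y0 + tau * d + rng.normal(0, 1, n)  # ignorable given y0

did = (y1 - y0)[d == 1].mean() - (y1 - y0)[d == 0].mean()
X = np.column_stack([np.ones(n), d, y0])       # lagged-DV regression design
ldv = np.linalg.lstsq(X, y1, rcond=None)[0][1]
print(f"DiD: {did:.3f} (biased upward), lagged-DV: {ldv:.3f} (close to {tau})")
```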
In epidemiology, identifying the effect of exposure variables in relation to a time-to-event outcome is a classical research area of practical importance. Incorporating propensity score in the Cox regression model, as a measure to control for confounding, has certain advantages when outcome is rare. However, in situations involving exposure measured with moderate to substantial error, identifying the exposure effect using propensity score in Cox models remains a challenging yet unresolved problem. In this paper, we propose an estimating equation method to correct for the exposure misclassification-caused bias in the estimation of exposure-outcome associations. We also discuss the asymptotic properties and derive the asymptotic variances of the proposed estimators. We conduct a simulation study to evaluate the performance of the proposed estimators in various settings. As an illustration, we apply our method to correct for the misclassification-caused bias in estimating the association of PM2.5 level with lung cancer mortality using a nationwide prospective cohort, the Nurses' Health Study (NHS). The proposed methodology can be applied using our user-friendly R function published online. | statistics |
In recent times, several discrepancies at the level of $(2-3)\sigma$ have been observed in the decay processes mediated by flavour changing neutral current (FCNC) transitions $b \to s \ell^+ \ell^-$, which may be considered as the smoking-gun signal of New Physics (NP). These intriguing hints of NP have attracted a lot of attention and many attempts are made to look for the possible NP signature in other related processes, which are mediated through the same quark-level transitions. In this work, we perform a comprehensive analysis of the FCNC decays of $B$ meson to axial vector mesons $ K_1(1270)$ and $ K_1(1400)$, which are admixture of the $1^3P_1$ and $1^1P_1$ states $K_{1A}$ and $K_{1B}$, in a model independent framework. Using the $ B \to K_1$ form factors evaluated in the light cone sum rule approach, we investigate the rare exclusive semileptonic decays $ B \to K_1(1270) \mu^+ \mu^-$ and $ B \to K_1(1400) \mu^+ \mu^-$. Considering all the possible relevant operators for $b \to s \ell^+ \ell^-$ transitions, we study their effects on various observables such as branching fractions, lepton flavor universality violating ratio ($R_{K_1})$, forward-backward asymmetries, and lepton polarization asymmetries of these processes. These results will not only enhance the theoretical understanding of the mixing angle but also serve as a good tool for probing New Physics. | high energy physics phenomenology |
The problem of bound entanglement detection is a challenging aspect of quantum information theory for higher dimensional systems. Here, we propose an indecomposable positive map for two-qutrit systems, which is shown to generate a class of positive partial transposed (PPT) states. A corresponding witness operator is constructed and shown to be weakly optimal and locally implementable. Further, we perform a structural physical approximation of the indecomposable map to make it a completely positive one, and find a new PPT entangled state which is not detectable by certain other well-known entanglement detection criteria. | quantum physics |
In this paper, we present a new nonintrusive reduced basis method for when a cheap low-fidelity model and an expensive high-fidelity model are available. The method relies on proper orthogonal decomposition (POD) to generate the high-fidelity reduced basis and a shallow multilayer perceptron to learn the high-fidelity reduced coefficients. In contrast to other methods, one distinct feature of the proposed method is to incorporate the features extracted from the low-fidelity data as an input feature; this approach not only improves the predictive capability of the neural network but also enables the decoupling of the high-fidelity simulation from the online stage. Due to its nonintrusive nature, it is applicable to general parameterized problems. We also provide several numerical examples to illustrate the effectiveness and performance of the proposed method. | statistics |
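A minimal sketch of the offline/online split (shapes, names, and the plain sklearn MLP are illustrative assumptions, not the paper's architecture): POD of high-fidelity snapshots, then a network mapping parameters plus low-fidelity features to reduced coefficients.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_train, n_dof, r = 200, 5000, 10
S_hf = np.random.rand(n_dof, n_train)        # high-fidelity snapshot matrix
mu = np.random.rand(n_train, 3)              # PDE parameters
f_lf = np.random.rand(n_train, 8)            # features from the cheap LF model

U, s, _ = np.linalg.svd(S_hf, full_matrices=False)
V = U[:, :r]                                 # POD basis (first r modes)
coeffs = V.T @ S_hf                          # reduced coefficients, (r, n_train)

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
net.fit(np.hstack([mu, f_lf]), coeffs.T)     # offline training stage

# Online stage: no high-fidelity solve is needed for a new parameter
u_hat = V @ net.predict(np.hstack([mu[:1], f_lf[:1]]))[0]
```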
Knowledge distillation (KD) is one of the most useful techniques for light-weight neural networks. Although neural networks have a clear purpose of embedding datasets into a low-dimensional space, the existing knowledge was quite far from this purpose and provided only limited information. We argue that good knowledge should be able to interpret the embedding procedure. This paper proposes a method of generating interpretable embedding procedure (IEP) knowledge based on principal component analysis, and distilling it based on a message passing neural network. Experimental results show that the student network trained by the proposed KD method improves by 2.28% on the CIFAR100 dataset, which is higher performance than the state-of-the-art (SOTA) method. We also demonstrate that the embedding procedure knowledge is interpretable via visualization of the proposed KD process. The implemented code is available at https://github.com/sseung0703/IEPKT. | computer science |
Quantum metrology deals with improving the resolution of instruments that are otherwise limited by shot noise and it is therefore a promising avenue for enabling scientific breakthroughs. The advantage can be even more striking when quantum enhancement is combined with correlation techniques among several devices. Here, we present and realize a correlation interferometry scheme exploiting bipartite quantum correlated states injected in two independent interferometers. The scheme outperforms classical analogues in detecting a faint signal that may be correlated/uncorrelated between the two devices. We also compare its sensitivity with that obtained for a pair of two independent squeezed modes, each addressed to one interferometer, for detecting a correlated stochastic signal in the MHz frequency band. Being the simpler solution, it may eventually find application to fundamental physics tests, e.g., searching for the effects predicted by some Planck scale theories. | quantum physics |
The purpose of this article is to explore the properties of integrable, purely transmitting, defects placed at the junctions of several one-dimensional domains within a network. The defect sewing conditions turn out to be quite restrictive - for example, requiring the number of domains meeting at a junction to be even - and there is a clear distinction between the behaviour of conformal and massive integrable models. The ideas are mainly developed within classical field theory and illustrated using a variety of field theory models defined on the branches of the network, including both linear and nonlinear examples. | high energy physics theory |
We study the evolution of the graph distance and weighted distance between two fixed vertices in dynamically growing random graph models. More precisely, we consider preferential attachment models with power-law exponent $\tau\in(2,3)$, sample two vertices $u_t,v_t$ uniformly at random when the graph has $t$ vertices, and study the evolution of the graph distance between these two fixed vertices as the surrounding graph grows. This yields a discrete-time stochastic process in $t'\geq t$, called the distance evolution. We show that there is a tight strip around the function $4\frac{\log\log(t)-\log(1\vee\log(t'/t))}{|\log(\tau-2)|}\vee 4$ that the distance evolution never leaves with high probability as $t$ tends to infinity. We extend our results to weighted distances, where every edge is equipped with an i.i.d. copy of a non-negative random variable $L$. | mathematics |
Time- and angle-resolved photoemission spectroscopy (trARPES) is a powerful spectroscopic method to measure ultrafast electron dynamics directly in momentum space. However, band gap materials with exceptionally strong Coulomb interaction, such as monolayer transition metal dichalcogenides (TMDCs), exhibit tightly bound excitons, which dominate their optical properties. This raises the question whether excitons, in particular their formation and relaxation dynamics, can be detected in photoemission. Here, we develop a fully microscopic theory of the temporal dynamics of excitonic time- and angle-resolved photoemission with particular focus on the phonon-mediated thermalization of optically excited excitons to momentum-forbidden dark exciton states. We find that trARPES is able to probe the ultrafast exciton formation and relaxation throughout the Brillouin zone. | condensed matter |
We establish the existence, uniqueness and exponential attraction properties of an invariant measure for the MHD equations with degenerate stochastic forcing acting only in the magnetic equation. The central challenge is to establish time asymptotic smoothing properties of the associated Markovian semigroup corresponding to this system. Towards this aim we take full advantage of the characteristics of the advective structure to discover a novel H\"ormander-type condition which only allows for several noises in the magnetic direction. | mathematics |
Although Generative Adversarial Networks (GANs) have been widely used in various image-to-image translation tasks, they can hardly be applied on mobile devices due to their heavy computation and storage cost. Traditional network compression methods focus on visual recognition tasks and have never dealt with generation tasks. Inspired by knowledge distillation, a student generator with fewer parameters is trained by inheriting the low-level and high-level information from the original heavy teacher generator. To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and images generated by the student and teacher generators. An adversarial learning process is therefore established to optimize the student generator and student discriminator. Qualitative and quantitative analysis by conducting experiments on benchmark datasets demonstrate that the proposed method can learn portable generative models with strong performance. | computer science |
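A minimal PyTorch sketch of the training signal described above (a hypothetical simplified loss, not the paper's exact formulation): the student generator imitates the teacher while a student discriminator compares real, student-generated, and teacher-generated images.

```python
import torch
import torch.nn.functional as F

def student_step(G_s, G_t, D_s, x, real, lam=0.1):
    with torch.no_grad():
        y_t = G_t(x)                     # heavy teacher translation
    y_s = G_s(x)                         # light student translation
    distill = F.l1_loss(y_s, y_t)        # inherit the teacher's outputs
    adv = -D_s(y_s).mean()               # fool the student discriminator
    g_loss = distill + lam * adv
    # Discriminator: push real images up, both generated distributions down
    d_loss = (-D_s(real).mean()
              + 0.5 * D_s(y_s.detach()).mean()
              + 0.5 * D_s(y_t).mean())
    return g_loss, d_loss
```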