In elevated-temperature environments, elastic structures experience a change of the stress-free state of the body that can strongly influence the optimal topology of the structure. This work presents level-set-based topology optimization of structures undergoing large deformations due to thermal and mechanical loads. The nonlinear analysis model is constructed by multiplicatively decomposing thermal and mechanical effects and introducing an intermediate stress-free state between the undeformed and deformed coordinates. By incorporating the thermoelastic nonlinearity into the level-set topology optimization scheme, wider design spaces can be explored with the consideration of both mechanical and thermal loads. Four numerical examples are presented that demonstrate how temperature changes affect the optimal design of large-deforming structures. In particular, we show how optimization can manipulate the material layout in order to create a counteracting effect between thermal and mechanical loads, even up to a degree that buckling and snap-through are suppressed. Hence the consideration of large deformations in conjunction with thermoelasticity opens many new possibilities for controlling and manipulating the thermo-mechanical response via topology optimization.
computer science
Quantum simulation can be implemented in pure digital or analog ways, each with their pros and cons. By taking advantage of the universality of a digital route and the efficiency of analog simulation, hybrid digital-analog approaches can enrich the possibilities for quantum simulation. We use a unique hybrid approach to experimentally perform a quantum simulation of phase-controlled dynamics resulting from a closed-contour interaction (CCI) within certain multi-level systems in superconducting quantum circuits. Due to symmetry constraints, such systems cannot host an inherent CCI. Nevertheless, by assembling analog modules corresponding to their natural evolutions and specially designed digital modules constructed from standard quantum logic gates, we can bypass such constraints and realize an effective CCI in these systems. Based on this realization, we demonstrate a variety of related and interesting phenomena, including phase-controlled chiral dynamics, separation of chiral enantiomers, and a new mechanism to generate entangled states based on CCI.
quantum physics
In several recent communications (Khrennikov 2019a, b, c, 2020a, b), A. Khrennikov argued for "eliminating the issue of quantum nonlocality" from the analysis of quantum entanglement and quantum phenomena in general. He proposed to differentiate quantum and classical phenomena and entanglement not by their respective nonlocality and locality, as is common, but by the discreteness of quantum phenomena vs. the continuity of classical phenomena, supplemented by Bohr's complementarity in the case of quantum phenomena. As I argue here, however, the question may not be that of "eliminating the issue of quantum nonlocality" but instead of illuminating this issue, a task that can, I also argue, be pursued by relating quantum nonlocality to other key features of quantum phenomena. I suggest that the following features of quantum phenomena and quantum mechanics, distinguishing them from classical phenomena and classical physics--(1) the irreducible role of measuring instruments in defining quantum phenomena; (2) discreteness; (3) complementarity; (4) entanglement; (5) quantum nonlocality; and (6) the irreducibly probabilistic nature of quantum predictions--are all interconnected in defining quantum phenomena and distinguishing them from classical ones, so that it is difficult to give an unconditional priority to any one of them. To argue this case, I consider quantum phenomena and quantum mechanics from a nonrealist or, in terms adopted here, "reality-without-realism" (RWR) perspective. This perspective extends and gives new dimensions to Bohr's view, grounded in his analysis of the irreducible role of measuring instruments in the constitution of quantum phenomena, with quantum measurement defined by the entanglements between the quantum object under investigation and the instrument used.
quantum physics
Forgetfulness is a common feature of nature. Moreover, without forgetfulness, repeatability would be impossible. Despite this, small systems constantly leak information about their state to their surroundings, and quantum mechanics tells us that this information cannot be deleted, invariably returning to influence their future behaviour. How can physical nature be forgetful if it is forbidden to forget? The results within this thesis bridge this gap between what we see in the real world and what idealised physical theories say, explaining the emergence of forgetful processes from closed quantum dynamics and allowing us to quantify the rate at which memory effects become relevant.
quantum physics
We explore the impact of pulsar electromagnetic dipole and fallback accretion emission on the luminosity of a suite of kilonova models. The pulsar models are varied over pulsar magnetic field strength, pulsar lifetime, ejecta mass, and elemental abundances; the fallback models are varied over fallback accretion rate and ejecta mass. For the abundances, we use Fe and Nd as representatives of the wind and dynamical ejecta, respectively. We simulate radiative transfer in the ejecta in either 1D spherical or 2D cylindrical spatial geometry. For the grid of 1D simulations, the mass fraction of Nd is 0, $10^{-4}$, or $10^{-3}$ and the rest is Fe. Our models that fit the bolometric luminosity of AT 2017gfo (the kilonova associated with the first neutron star merger discovered in gravitational waves, GW170817) do not simultaneously fit the B, V, and I time evolution. However, we find that the trends of the evolution in B and V magnitudes are better matched by the fallback model relative to the pulsar model, implying the time dependence of the remnant source influences the color evolution. Further exploration of the parameter space and model deficiencies is needed before we can describe AT 2017gfo with a remnant source.
astrophysics
Weak-value amplification employs postselection to enhance the measurement of small parameters of interest. The amplification comes at the expense of reduced success probability, hindering the utility of this technique as a tool for practical metrology. Following other quantum technologies that display a quantum advantage, we formalize a quantum advantage in the success probability and present a scheme based on non-linear collective Hamiltonians that shows a super-extensive growth in success probability while simultaneously displaying an extensive growth in the weak value. We propose an experimental implementation of our scheme.
quantum physics
A path integral in Jackiw-Teitelboim (JT) gravity is given by integrating over the volume of the moduli of Riemann surfaces with boundaries, known as the "Weil-Petersson volume," together with integrals over wiggles along the boundaries. The exact computation of the Weil-Petersson volume $V_{g,n}(b_1, \ldots, b_n)$ is difficult when the genus $g$ becomes large. We utilize two partial differential equations known to hold on the Weil-Petersson volumes to estimate asymptotic behaviors of the volume with two boundaries $V_{g,2}(b_1, b_2)$ and the volume with three boundaries $V_{g,3}(b_1, b_2, b_3)$ when the genus $g$ is large. Furthermore, we present a conjecture on the asymptotic expression for general $V_{g,n}(b_1, \ldots, b_n)$ with $n$ boundaries when $g$ is large.
high energy physics theory
We consider the classical machine scheduling problem, where $n$ jobs need to be scheduled on $m$ machines, and where job $j$ scheduled on machine $i$ contributes $p_{i,j}\in \mathbb{R}$ to the load of machine $i$, with the goal of minimizing the makespan, i.e., the maximum load of any machine in the schedule. We study the inefficiency of schedules that are obtained when jobs arrive sequentially one by one, and the jobs themselves choose the machine on which they will be scheduled, aiming to be scheduled on a machine with small load. We measure the inefficiency of a schedule as the ratio of the makespan obtained in the worst-case equilibrium schedule to the optimum makespan. This ratio is known as the \emph{sequential price of anarchy}. We also introduce two alternative inefficiency measures, which allow for a favorable choice of the order in which the jobs make their decisions. As our first result, we disprove the conjecture of Hassin and Yovel claiming that the sequential price of anarchy for $m=2$ machines is at most 3. We show that the sequential price of anarchy grows at least linearly with the number $n$ of players, i.e., we show that $SPoA = \Omega(n)$. Furthermore, we show that there exists an order of the jobs resulting in a makespan that is at most linearly larger than the optimum makespan. Finally, we show that if an authority can change the order of the jobs adaptively to the decisions made by the jobs so far (but cannot influence the decisions of the jobs), then there exists an adaptive ordering in which the jobs end up in an optimum schedule.
computer science
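For intuition, here is a minimal sketch (ours, not the paper's construction) of the sequential scheduling game in the abstract above: jobs arrive in a fixed order and make myopic best responses, each picking the machine that minimizes its own load, which is a simplification of the farsighted subgame-perfect play underlying the sequential price of anarchy. The resulting makespan is compared against a brute-force optimum; all names are illustrative.

```python
# Illustrative sketch only: myopic sequential scheduling vs. exhaustive optimum.
from itertools import product

def sequential_schedule(p, order):
    """p[i][j]: load that job j adds to machine i; jobs arrive in `order`."""
    m = len(p)
    loads = [0.0] * m
    for j in order:
        i_star = min(range(m), key=lambda i: loads[i] + p[i][j])  # greedy choice
        loads[i_star] += p[i_star][j]
    return max(loads)  # makespan of the resulting schedule

def optimal_makespan(p):
    m, n = len(p), len(p[0])
    best = float("inf")
    for assign in product(range(m), repeat=n):  # exhaustive search, small n only
        loads = [0.0] * m
        for j, i in enumerate(assign):
            loads[i] += p[i][j]
        best = min(best, max(loads))
    return best

p = [[2.0, 1.0, 3.0], [1.0, 2.0, 1.0]]  # 2 machines, 3 jobs
print(sequential_schedule(p, order=[0, 1, 2]) / optimal_makespan(p))
```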
The transport of a particle in the presence of a potential that changes periodically in space and in time can be characterized by the amount of work needed to shift a particle by a single spatial period of the potential. In general, this amount of work, when averaged over a single temporal period of the potential, can take any value in a continuous fashion. Here we present a topological effect inducing the quantization of the average work. We find that this work is equal to the first Chern number calculated in a unit cell of a space-time lattice. Hence, this quantization of the average work is topologically protected. We illustrate this phenomenon with the example of an atom whose center of mass motion is coupled to its internal degrees of freedom by electromagnetic waves.
quantum physics
For a better understanding of the magnetic field in the solar corona and dynamic activities such as flares and coronal mass ejections, it is crucial to measure the time-evolving coronal field and accurately estimate the magnetic energy. Recently, a new modeling technique called the data-driven coronal field model, in which the time evolution of magnetic field is driven by a sequence of photospheric magnetic and velocity field maps, has been developed and has revealed the dynamics of flare-productive active regions. Here we report on the first qualitative and quantitative assessment of different data-driven models using a magnetic flux emergence simulation as a ground-truth (GT) data set. We compare the GT field with those reconstructed from the GT photospheric field by four data-driven algorithms. It is found that, at least, the flux rope structure is reproduced in all coronal field models. Quantitatively, however, the results show a certain degree of model dependence. In most cases, the magnetic energies and relative magnetic helicity are comparable to, or at most twice, the GT values. The reproduced flux ropes have a sigmoidal shape (consistent with GT) of various sizes, a vertically-standing magnetic torus, or a packed structure. The observed discrepancies can be attributed to the highly non-force-free input photospheric field, from which the coronal field is reconstructed, and to the modeling constraints such as the treatment of background atmosphere, the bottom boundary setting, and the spatial resolution.
astrophysics
In a one-dimensional (1D) system with degenerate ground states, their domain boundaries, dubbed solitons, emerge as topological excitations often carrying unconventional charges and spins; however, the soliton excitations are only vital in the non-ordered 1D regime. A question then arises: how do the solitons conform to a 3D ordered state? Here, using a quasi-1D organic ferroelectric, TTF-CA, with degenerate polar dimers, we pursue the fate of a spin-soliton charge-soliton composite matter in a 1D polar-dimer liquid upon its transition to a 3D ferroelectric order by resistivity, NMR and NQR measurements. We demonstrate that the soliton matter undergoes neutral spin-spin soliton pairing and spin-charge soliton pairing to form polarons, coping with the 3D order. The former contributes to the magnetism through triplet excitations whereas the latter carries electrical current. Our results reveal the whole picture of a soliton matter that condenses into the 3D ordered state.
condensed matter
We propose a projection pursuit (PP) algorithm based on Gaussian mixture models (GMMs). The negentropy obtained from a multivariate density estimated by GMMs is adopted as the PP index to be maximised. For a fixed dimension of the projection subspace, the GMM-based density estimate is projected onto that subspace, where an approximation of the negentropy for Gaussian mixtures is computed. Then, Genetic Algorithms (GAs) are used to find the optimal orthogonal projection basis by maximising this approximation. We show that this semi-parametric approach to PP is flexible and allows highly informative structures to be detected, by projecting multivariate datasets onto a subspace where the data can be feasibly visualised. The performance of the proposed approach is shown on both artificial and real datasets.
statistics
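A hedged sketch of the PP index described above (our illustration, not the authors' code): fit a Gaussian mixture to the projected data, estimate the mixture entropy by Monte Carlo, and subtract it from the entropy of a Gaussian with the same covariance. The paper maximises such an index over orthonormal bases with a genetic algorithm; here we only evaluate it for one fixed basis A.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def negentropy_index(X, A, n_components=3, n_mc=20000, seed=0):
    """X: (n, d) data; A: (d, q) orthonormal projection basis."""
    Z = X @ A                                        # projected data
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(Z)
    samples, _ = gmm.sample(n_mc)
    h_gmm = -gmm.score_samples(samples).mean()       # Monte Carlo mixture entropy
    q = Z.shape[1]
    cov = np.atleast_2d(np.cov(Z, rowvar=False))
    h_gauss = 0.5 * (q * (1 + np.log(2 * np.pi)) + np.linalg.slogdet(cov)[1])
    return h_gauss - h_gmm                           # approximate negentropy

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (500, 5)), rng.normal(2, 1, (500, 5))])
A, _ = np.linalg.qr(rng.standard_normal((5, 2)))     # a random 2D basis
print(negentropy_index(X, A))
```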
We study charged black hole solutions in 4-dimensional (4D) Einstein-Gauss-Bonnet-Maxwell theory at the linearized perturbation level. We first compute the shear viscosity to entropy density ratio. We then demonstrate how bulk causal structure analysis imposes an upper bound on the Gauss-Bonnet coupling constant in the AdS space. Causality constrains the value of the Gauss-Bonnet coupling constant $\alpha_{GB}$ to be bounded by $\alpha_{GB}\leq 0$ as $D\rightarrow 4$.
high energy physics theory
LTE Narrowband Internet of Things (NB-IoT) is a 3GPP-defined cellular technology that is designed to enable connectivity for many low-cost and low power/throughput IoT devices running delay-tolerant applications. NB-IoT can coexist within the LTE spectrum either in a standalone mode, in-band with LTE, or in the guard-band of LTE. With NB-IoT designed to operate at very low signal-to-noise power ratios, the uplink receiver design presents several challenges. In this paper the design and performance of an NB-IoT uplink receiver are studied in detail. First, receiver design for NB-IoT uplink channels, with corresponding mathematical analysis, is presented. Specifically, it is shown how the time/frequency structure of signals can be exploited to enhance the receiver performance. Second, the performance of each channel is characterized with both link-level simulations and implementation on a commercially deployed Qualcomm® FSM™ platform [1]. Comparisons against the 3GPP-defined Radio Performance and Protocol aspect requirements are also provided. Finally, implementation details are addressed and discussions on proposed enhancements to NB-IoT in 3GPP Release 15 are provided. It is shown how the proposed receiver algorithms can be adapted to Release 15 enhancements with minor or no modifications. The work in this paper is of significance to system designers looking to implement an efficient NB-IoT uplink receiver to coexist with legacy LTE systems.
computer science
It is by now well known that, at subleading power in scale ratios, factorization theorems for high-energy cross sections and decay amplitudes contain endpoint-divergent convolution integrals. The presence of these divergences hints at a violation of simple scale separation, as a result of the so-called collinear anomaly. At the technical level, endpoint divergences indicate an unexpected failure of dimensional regularization and the $\bar{\rm MS}$ subtraction scheme. In this paper we start a comprehensive discussion of factorization at subleading power within the framework of soft-collinear effective theory. As a concrete example, we factorize the decay amplitude for the radiative Higgs-boson decay $h\to \gamma\gamma$ mediated by a $b$-quark loop, for which endpoint-divergent convolution integrals require both dimensional and rapidity regulators. We derive a factorization theorem for the decay amplitude in terms of bare Wilson coefficients and operator matrix elements. We show that endpoint divergences caused by rapidity divergences cancel to all orders in perturbation theory, while endpoint divergences that are regularized dimensionally can be removed by rearranging the terms in the factorization theorem. We use our result to resum the leading double-logarithmic corrections of order $\alpha_s^n\ln^{2n+2}(-M_h^2/m_b^2)$ to all orders of perturbation theory.
high energy physics phenomenology
In a recent Letter we presented a systematic way of testing the seesaw origin of neutrino mass in the context of the Minimal Left-Right Symmetric Model. The essence of the program is to exploit lepton number violating decays of doubly charged scalars, particles which lie at the heart of the Higgs-mechanism-based seesaw, to probe the Dirac neutrino mass term which in turn enters directly into a number of physical processes including the decays of right-handed neutrinos into the $W$ boson and left-handed charged leptons. In this longer version we discuss at length these and related processes, and offer some missing technical details. We also carefully analyze the physically appealing possibility of a parity conserving Yukawa sector, showing that the neutrino Dirac mass matrix can be analytically expressed as a function of light and heavy neutrino masses and mixing, without resorting to any additional discrete symmetries, a context in which the seesaw mechanism can be disentangled completely. When parity does get broken, we show that, in the general case, only the Hermitian part of the Dirac mass term is independent, which substantially simplifies the task of testing experimentally the origin of neutrino mass. We illustrate this program through some physical examples that allow simple analytical expressions. Our work shows that the Minimal Left-Right Symmetric Model is a self-contained theory of neutrino mass which can in principle be tested at the LHC or the next hadron collider.
high energy physics phenomenology
We demonstrate that single crystals of methylammonium lead bromide (MAPbBr3) could be grown directly on vertically aligned carbon nanotube (VACNT) forests. The fast-growing MAPbBr3 single crystals engulfed the protogenetic inclusions in the form of individual CNTs, thus resulting in a three-dimensionally enlarged photosensitive interface. Photodetector devices were obtained, detecting low light intensities (~20 nW) from the UV range to 550 nm. Moreover, a photocurrent was recorded at zero external bias voltage, which points to the plausible formation of a p-n junction resulting from the interpenetration of MAPbBr3 single crystals into the VACNT forest. This reveals that vertically aligned CNTs can be used as electrodes in operationally stable perovskite-based optoelectronic devices and can serve as a versatile platform for future selective electrode development.
condensed matter
An important experimental design problem in early-stage drug discovery is how to prioritize available compounds for testing when very little is known about the target protein. Informer-based ranking (IBR) methods address the prioritization problem when bioactivity data on other potentially relevant targets are available for the compounds. An IBR method selects an informer set of compounds, and then prioritizes the remaining compounds on the basis of new bioactivity experiments performed with the informer set on the target. We formalize the problem as a two-stage decision problem and introduce the Bayes Optimal Informer SEt (BOISE) method for its solution. BOISE leverages a flexible model of the initial bioactivity data, a relevant loss function, and effective computational schemes to resolve the two-stage design problem. We evaluate BOISE and compare it to other IBR strategies in two retrospective studies, one on protein-kinase inhibition and the other on anti-cancer drug sensitivity. In both empirical settings BOISE exhibits better predictive performance than available methods. It also behaves well with missing data, where methods that use matrix completion show worse predictive performance. We provide an R implementation of BOISE at https://github.com/wiscstatman/esdd/BOISE
statistics
Langevin algorithms are gradient descent methods with additive noise. They have been used for decades in Markov chain Monte Carlo (MCMC) sampling, optimization, and learning. Their convergence properties for unconstrained non-convex optimization and learning problems have been studied widely in the last few years. Other work has examined projected Langevin algorithms for sampling from log-concave distributions restricted to convex compact sets. For learning and optimization, log-concave distributions correspond to convex losses. In this paper, we analyze the case of non-convex losses with compact convex constraint sets and IID external data variables. We term the resulting method the projected stochastic gradient Langevin algorithm (PSGLA). We show the algorithm achieves a deviation of $O(T^{-1/4}(\log T)^{1/2})$ from its target distribution in 1-Wasserstein distance. For optimization and learning, we show that the algorithm achieves $\epsilon$-suboptimal solutions, on average, provided that it is run for a time that is polynomial in $\epsilon^{-1}$ and slightly super-exponential in the problem dimension.
computer science
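The core update is compact enough to sketch. Below is a minimal, hedged illustration of a projected stochastic gradient Langevin step as we read the setup: a noisy gradient step plus Gaussian noise, followed by Euclidean projection onto a compact convex set (here a ball); the loss, step size and inverse temperature are toy choices of ours.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def psgla(grad_fn, x0, steps=10000, eta=1e-3, beta=50.0, seed=0):
    """grad_fn(x, rng) returns a stochastic gradient estimate at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_fn(x, rng) + np.sqrt(2 * eta / beta) * noise
        x = project_ball(x)            # keep the iterate in the constraint set
    return x

# toy non-convex loss f(x) = (x.x)^2 - x.x, with IID noise in the gradient
grad = lambda x, rng: (4 * np.dot(x, x) - 2) * x + 0.1 * rng.standard_normal(x.shape)
print(psgla(grad, np.array([0.9, -0.9])))
```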
A radiative transfer model was used to reproduce several millions of OMEGA (Observatoire pour la Min\'eralogie, l'Eau, les Glaces et l'Activit\'e) spectra representative of igneous terrains of Mars. This task provided the modal composition and grain sizes at a planetary scale. The lithology can be summarized in five mineral maps at km-scale. We found that the low albedo equatorial regions of the Martian surface (from 60{\deg}S to 30{\deg}N) are globally dominated by plagioclase with average abundance ~50 vol% and pyroxenes with total averaged abundance close to 40 vol%. An evolution of the LCP/(LCP+HCP) ratio is observed with time at the global scale, suggesting an evolution of the degree of partial melting throughout the Martian eras. Olivine and Martian dust are minor components of the modelled terrains. The olivine distribution is quite different from the other minerals because it is found on localized areas with abundance reaching 20 vol%. A statistical approach, to classify the pixels of the abundances maps, using k-means clustering, highlighted seven distinct mineral assemblages on the surface. This classification illustrates that diverse mineralogical units are found in the Noachian and Hesperian terrains, which suggests the presence of various and complex magmatic processes at a global scale during the two oldest eras. The chemical composition was derived from the modal composition maps. The OMEGA-derived chemical composition is quite consistent with several distinctive geochemical characteristics previously considered as fingerprints of the Martian surface. A major discrepancy is in regards to the Fe content that is significantly smaller than soil and rock analyses from GRS and in situ measurements. The discrepancy could be partly explained by the assumptions used for the spectral modelling or could also indicate surface alteration rinds.
astrophysics
We analyze the divisibility properties of generalized Pauli dynamical maps. Using the results for image non-increasing dynamical maps, we formulate the conditions for the underlying evolution to satisfy specific non-Markovianity criteria. For the qubit evolution, we find the necessary and sufficient conditions for P- and CP-divisibility of the associated, in general noninvertible, Pauli dynamical maps. Finally, we analyze the divisibility degree for mixtures of noninvertible phase-damping channels. For P-divisible maps, we propose a legitimate time-local generator with all temporarily infinite decoherence rates.
quantum physics
Let $L$ be a fixed branch -- that is, an irreducible germ of curve -- on a normal surface singularity $X$. If $A,B$ are two other branches, define $u_L(A,B) := \dfrac{(L \cdot A) \: (L \cdot B)}{A \cdot B}$, where $A \cdot B$ denotes the intersection number of $A$ and $B$. Call $X$ arborescent if all the dual graphs of its resolutions are trees. In a previous paper, the first three authors extended a 1985 theorem of Płoski by proving that whenever $X$ is arborescent, the function $u_L$ is an ultrametric on the set of branches on $X$ different from $L$. In the present paper we prove that, conversely, if $u_L$ is an ultrametric, then $X$ is arborescent. We also show that for any normal surface singularity, one may find arbitrarily large sets of branches on $X$, characterized uniquely in terms of the topology of the resolutions of their sum, in restriction to which $u_L$ is still an ultrametric. Moreover, we describe the associated tree in terms of the dual graphs of such resolutions. Then we extend our setting by allowing $L$ to be an arbitrary semivaluation on $X$ and by defining $u_L$ on a suitable space of semivaluations. We prove that any such function is again an ultrametric if and only if $X$ is arborescent, and without any restriction on $X$ we exhibit special subspaces of the space of semivaluations in restriction to which $u_L$ is still an ultrametric.
mathematics
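For concreteness, the ultrametric property at stake above is the standard strong triangle inequality, stated in the paper's notation:
\[
u_L(A,B) \;\leq\; \max\{\, u_L(A,C),\; u_L(C,B) \,\} \qquad \text{for all branches } A, B, C \text{ different from } L .
\]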
Data augmentation is an effective regularization strategy to alleviate overfitting, which is an inherent drawback of deep neural networks. However, data augmentation is rarely considered for point cloud processing, despite many studies proposing various augmentation methods for image data. In fact, regularization is essential for point clouds, since a lack of generality is more likely to occur with point clouds due to small datasets. This paper proposes Rigid Subset Mix (RSMix), a novel data augmentation method for point clouds that generates a virtual mixed sample by replacing part of a sample with shape-preserved subsets from another sample. RSMix preserves the structural information of the point cloud sample by extracting subsets from each sample without deformation, using a neighboring function. The neighboring function was carefully designed considering the unique properties of point clouds: unordered structure and non-grid layout. Experiments verified that RSMix successfully regularized deep neural networks with remarkable improvement for shape classification. We also analyzed various combinations of data augmentations including RSMix with single- and multi-view evaluations, based on abundant ablation studies.
computer science
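A hypothetical sketch of the rigid-subset mixing idea (our reading, not the authors' implementation, which additionally handles queries in both samples and label mixing): take the k nearest neighbours of a random query point in sample B and substitute them, undeformed, for the k points of sample A closest to the same query.

```python
import numpy as np

def rsmix_like(A, B, k=256, seed=0):
    """A, B: (N, 3) point clouds; returns a mixed cloud with N points."""
    rng = np.random.default_rng(seed)
    q = B[rng.integers(len(B))]                 # random query point
    d_b = np.linalg.norm(B - q, axis=1)
    subset_b = B[np.argsort(d_b)[:k]]           # rigid (undeformed) subset of B
    d_a = np.linalg.norm(A - q, axis=1)
    keep_a = A[np.argsort(d_a)[k:]]             # drop A's k points nearest to q
    return np.concatenate([keep_a, subset_b], axis=0)

A = np.random.default_rng(1).standard_normal((1024, 3))
B = np.random.default_rng(2).standard_normal((1024, 3))
mixed = rsmix_like(A, B)    # a mixed label weight could then be k / len(A)
```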
Currently, self-report pain ratings are the gold standard in clinical pain assessment. However, the development of objective automatic measures of pain could substantially aid pain diagnosis and therapy. Recent neuroimaging studies have shown the potential of functional near-infrared spectroscopy (fNIRS) for pain detection. This is a brain-imaging technique that provides non-invasive, long-term measurements of cortical hemoglobin concentration changes. In this study, we focused on fNIRS signals acquired exclusively from the prefrontal cortex, which can be accessed unobtrusively, and derived an algorithm for the detection of the presence of pain using Bayesian hierarchical modelling with wavelet features. This approach allows personalization of the inference process by accounting for inter-participant variability in pain responses. Our work highlights the importance of adopting a personalized approach and supports the use of fNIRS for pain assessment.
electrical engineering and systems science
The Schwinger-Dyson equations for Green functions and 3-vertices in the planar approximation, used 50 years ago in the early theory of conformal bootstrap, are written here in the framework of the AdS/CFT correspondence. The technique of "Wightman-Witten" diagrams is used in calculations of the bubble bulk diagrams, which permits obtaining the values of the conformal dimensions of primary operators in two models of interacting bulk scalar fields whose conformal dimensions obey an "extremal" relation. Simple expressions for the four-point tree "Wightman-Witten" correlators of these primary operators are obtained, and the spectra of the corresponding Bethe-Salpeter equations are calculated.
high energy physics theory
HIP 41378 f is a temperate $9.2\pm0.1 R_{\oplus}$ planet with period of 542.08 days and an extremely low density of $0.09\pm0.02$ g cm$^{-3}$. It transits the bright star HIP 41378 (V=8.93), making it an exciting target for atmospheric characterization including transmission spectroscopy. HIP 41378 was monitored photometrically between the dates of 2019 November 19 and November 28. We detected a transit of HIP 41378 f with NGTS, just the third transit ever detected for this planet, which confirms the orbital period. This is also the first ground-based detection of a transit of HIP 41378 f. Additional ground-based photometry was also obtained and used to constrain the time of the transit. The transit was measured to occur 1.50 hours earlier than predicted. We use an analytic transit timing variation (TTV) model to show the observed TTV can be explained by interactions between HIP 41378 e and HIP 41378 f. Using our TTV model, we predict the epochs of future transits of HIP 41378 f, with derived transit centres of T$_{C,4} = 2459355.087^{+0.031}_{-0.022}$ (May 2021) and T$_{C,5} = 2459897.078^{+0.114}_{-0.060}$ (Nov 2022).
astrophysics
We consider a class of spin networks where each spin in a certain set interacts, via Ising coupling, with a set of central spins, and the control acts simultaneously on all the spins. This is a common situation, for instance, in NV centers in diamond, and we focus on the physical case of up to two central spins. Due to the permutation symmetries of the network, the system is not globally controllable, but it displays invariant subspaces of the underlying Hilbert space. The system is said to be subspace controllable if it is controllable on each of these subspaces. We characterize the given invariant subspaces and the dynamical Lie algebra of this class of systems and prove subspace controllability in every case.
quantum physics
We study the Bloch oscillation dynamics of a spin-orbit-coupled cold atomic gas trapped inside a one-dimensional optical lattice. The eigenspectrum of the system is identified as two interpenetrating Wannier-Stark ladders. Based on that, we carefully analyze the Bloch oscillation dynamics and find that intraladder coupling between neighboring rungs of the Wannier-Stark ladder gives rise to ordinary Bloch oscillation, while interladder coupling leads to a small-amplitude, high-frequency oscillation superimposed on it. Specifically, the spin-orbit interaction breaks Galilean invariance, which is reflected by the out-of-phase oscillation of the two spin components in the accelerated frame. The possibility of generating spin current in this system is also explored.
condensed matter
Accretion discs around black holes power some of the most luminous objects in the Universe. Discs that are misaligned to the black hole spin can become warped over time by Lense-Thirring precession. Recent work has shown that strongly warped discs can become unstable, causing the disc to break into discrete rings producing a more dynamic and variable accretion flow. In a companion paper, we present numerical simulations of this instability and the resulting dynamics. In this paper, we discuss the implications of this dynamics for accreting black hole systems, with particular focus on the variability of Active Galactic Nuclei (AGN). We discuss the timescales on which variability might manifest, and the impact of the observer orientation with respect to the black hole spin axis. When the disc warp is unstable near the inner edge of the disc, we find quasi periodic behaviour of the inner disc which may explain the recent quasi periodic eruptions observed in, for example, the Seyfert 2 galaxy GSN 069 and in the galactic nucleus of RX J1301.9+2747. These eruptions are thought to be similar to the `heartbeat' modes observed in some X-ray binaries (e.g. GRS 1915+105 and IGR J17091-3624). When the instability manifests at larger radii in the disc, we find that the central accretion rate can vary on timescales that may be commensurate with, e.g., changing-look AGN. We therefore suggest that some of the variability properties of accreting black hole systems may be explained by the disc being significantly warped, leading to disc tearing.
astrophysics
It is widely reported, based on clustering measurements of observed active galactic nuclei (AGN) samples, that AGN reside in similar mass host dark matter halos across the bulk of cosmic time, with log $M/M_\odot$~12.5-13.0 to z~2.5. We show that this is due in part to the AGN fraction in galaxies rising with increasing stellar mass, combined with AGN observational selection effects that exacerbate this trend. Here, we use AGN specific accretion rate distribution functions determined as a function of stellar mass and redshift for star-forming and quiescent galaxies separately, combined with the latest galaxy-halo connection models, to determine the parent and sub-halo mass distribution function of AGN to various observational limits. We find that while the median (sub-)halo mass of AGN, $\approx10^{12}M_\odot$, is fairly constant with luminosity, specific accretion rate, and redshift, the full halo mass distribution function is broad, spanning several orders of magnitude. We show that widely used methods to infer a typical dark matter halo mass based on an observed AGN clustering amplitude can result in biased, systematically high host halo masses. While the AGN satellite fraction rises with increasing parent halo mass, we find that the central galaxy is often not an AGN. Our results elucidate the physical causes for the apparent uniformity of AGN host halos across cosmic time and underscore the importance of accounting for AGN selection biases when interpreting observational AGN clustering results. We further show that AGN clustering is most easily interpreted in terms of the relative bias to galaxy samples, not from absolute bias measurements alone.
astrophysics
Detections of stellar coronal mass ejections (CMEs) are still rare. Observations of strong Balmer line asymmetries during flare events have been interpreted as being caused by CMEs. Here, we aim to estimate the maximum possible Balmer line fluxes expected from CMEs to infer their detectability in spectroscopic observations. Moreover, we use these results together with a model of intrinsic CME rates to infer the potentially observable CME rates for stars of different spectral types under various observing conditions, as well as the minimum required observing time to detect stellar CMEs in Balmer lines. We find that generally CME detection is favoured for mid- to late-type M dwarfs, as they require the lowest signal-to-noise ratio for CME detection, and the fraction of observable-to-intrinsic CMEs is largest. They may require, however, longer observing times than stars of earlier spectral types at the same activity level, as their predicted intrinsic CME rates are lower. CME detections are generally favoured for stars close to the saturation regime, because they are expected to have the highest intrinsic rates; the predicted minimum observing time to detect CMEs on just moderately active stars is already >100 h. By comparison with spectroscopic data sets including detections as well as non-detections of CMEs, we find that our modelled maximum observable CME rates are generally consistent with these observations on adopting parameters within the ranges determined by observations of solar and stellar prominences.
astrophysics
Given a tournament $T$, a module of $T$ is a subset $M$ of $V(T)$ such that for $x, y\in M$ and $v\in V(T)\setminus M$, $(v,x)\in A(T)$ if and only if $(v,y)\in A(T)$. The trivial modules of $T$ are $\emptyset$, $\{u\}$ $(u\in V(T))$ and $V(T)$. The tournament $T$ is indecomposable if all its modules are trivial; otherwise it is decomposable. Let $T$ be a tournament with at least five vertices. In a previous paper, the authors proved that the smallest number $\delta(T)$ of arcs that must be reversed to make $T$ indecomposable satisfies $\delta(T) \leq \left\lceil \frac{v(T)+1}{4} \right\rceil$, and this bound is sharp, where $v(T) = |V(T)|$ is the order of $T$. In this paper, we prove that if the tournament $T$ is not transitive of even order, then $T$ can be made indecomposable by reversing the arcs of a subtournament of $T$. We denote by $\delta'(T)$ the smallest size of such a subtournament. We also prove that $\delta(T) = \left\lceil \frac{\delta'(T)}{2} \right\rceil$.
mathematics
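The module condition is easy to state in code. Below is a small helper (ours, purely to illustrate the definition): M is a module of a tournament T precisely when every vertex outside M relates to all of M uniformly; T is given as a set of arcs (x, y) meaning x beats y.

```python
def is_module(vertices, arcs, M):
    """Check whether M is a module of the tournament (vertices, arcs)."""
    M = set(M)
    for v in set(vertices) - M:
        beats = {x for x in M if (v, x) in arcs}
        if beats and beats != M:     # v beats some, but not all, of M
            return False
    return True

# 3-cycle on {0,1,2}: no nontrivial proper subset is a module,
# so this tournament is indecomposable.
V = [0, 1, 2]
A = {(0, 1), (1, 2), (2, 0)}
print([M for M in ([0, 1], [1, 2], [0, 2]) if is_module(V, A, M)])  # -> []
```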
With the help of a functional renormalization group, we study the dynamical breakdown of scale invariance in quantum Weyl gravity by starting from the UV fixed point that we assume to be Gaussian. To this end, we resort to two classes of Bach-flat backgrounds, namely maximally symmetric spacetimes and Ricci-flat backgrounds in the improved one-loop scheme. We show that apart from a genuine IR fixed point that is reached at a zero value of the running scale, the renormalization group flow also exhibits bouncing behavior. We demonstrate that the IR fixed point found is IR-stable in the space of the considered couplings. As a next step, we analyze physics in the broken phase. In particular, we show that in the low-energy sector of the broken phase, the theory looks like Starobinsky $f(R)$ gravity with a gravi-cosmological constant that has a negative sign in comparison to the usual matter-induced cosmological constant. We discuss implications for cosmic inflation and highlight a non-trivial relation between Starobinsky's parameter and the gravi-cosmological constant. Salient issues, including the scheme independence of the IR fixed point and the role of trace anomaly, are also discussed.
high energy physics theory
We use the recently developed code 2HDME to perform a 2-loop renormalization group analysis of the CP-violating Two-Higgs-doublet model (2HDM). Using parameter scans of several scenarios of Z2 symmetry breaking, we investigate the properties of 2HDMs under renormalization group evolution. Collider data constraints are implemented with HiggsBounds and HiggsSignals, and we include all the important Barr-Zee diagram contributions to the electron's electric dipole moment to put limits on the CP violation that is allowed both in the scalar and the Yukawa sector. As a result, we see that CP violation spreads easily across the sectors during renormalization group evolution when one breaks the Z2 symmetry in either sector, putting additional constraints on the CP-violating parameters.
high energy physics phenomenology
Nanoparticle (NP)-protein complexes constitute the true identity of NPs in biological media. Therefore, protein-NP interactions should be closely explored in order to understand and modulate the nature of NPs in medical applications. This review focuses mainly on physicochemical parameters such as the dimension, surface chemistry and morphology of NPs, and the influence of medium pH on the formation of the protein corona and conformational changes of adsorbed proteins, as probed by different kinds of methods. The impact of the protein corona on the colloidal stability of NPs is also discussed. Uncontrolled protein attachment on NPs may bring unwanted impacts such as protein denaturation and aggregation. In contrast, controlled protein adsorption by optimal concentration, size, pH and surface modification of NPs may enable potential applications of NPs as therapeutic agents, especially for the disaggregation of amyloid fibrils. The effect of the NP-protein corona on reducing cytotoxicity and clinical implications such as drug delivery, cancer therapy, imaging and diagnosis will also be discussed. Validated correlative physicochemical parameters for NP-protein corona formation are frequently derived from protein corona fingerprints of NPs, which are more valid than parameters obtained only on the basis of NP features. This review may provide useful information regarding the potency as well as the adverse effects of NPs to predict their behavior in in vivo experiments.
physics
We report on the spectral behavior of the first Galactic ultraluminous X-ray pulsar Swift J0243.6+6124 with NuSTAR observations during its 2017-2018 outburst. At sub-Eddington levels, the source spectrum is characterized by three emission components, respectively from the accretion column, the hot spot, and a broad iron line emission region. When the source is above the Eddington limit, the hot spot temperature increases and the spectrum features two more blackbody components. One blackbody component has a radius of 10-20 km and likely originates from the top of the accretion column. The other one saturates at a blackbody luminosity of (1 - 2)*10^38 erg/s, coincident with the Eddington limit of a neutron star. This is well consistent with the scenario that super-Eddington accretion onto compact objects will power optically-thick outflows, and indicates an accretion rate 60-80 times the critical value. This suggests that super-Eddington accretion onto magnetized systems can also power massive winds. At super-Eddington levels, the iron line becomes more significant and blueshifted, and is argued to be associated with the ultrafast wind in the central funnel or with jets. This source, if located in an external galaxy, would appear like other ultraluminous pulsars.
astrophysics
In the framework of the dense-dilute CGC approach, we study fluctuations in the multiplicity of produced particles in p-A collisions. We show that the leading effect that drives the fluctuations is the Bose enhancement of gluons in the proton wave function. We explicitly calculate the moment generating function that resums the effects of Bose enhancement. We show that it can be understood in terms of the Liouville effective action for the composite field which is identified with the fluctuating density, or saturation momentum, of the proton. The resulting probability distribution turns out to be very close to the gamma distribution. We also calculate the first correction to this distribution, which is due to pairwise Hanbury Brown-Twiss correlations of produced gluons.
high energy physics phenomenology
Performance assessment is a key issue in the process of proposing new machine learning/statistical estimators. A possible method to complete such a task is by using simulation studies, which can be defined as the procedure of estimating and comparing properties (such as predictive power) of estimators (and other statistics) by averaging over many replications given a true distribution; i.e., generating a dataset, fitting the estimator, calculating and storing the predictive power, then repeating the procedure many times and finally averaging over the stored predictive powers. Given that, in this paper, we present sstudy: a Python package designed to simplify the preparation of simulation studies using SQL database engines as the storage system; more specifically, we present its basic features, usage examples and references to its documentation. We also present a short statistical description of the simulation study procedure, with a simplified explanation of what is being estimated by it, as well as some examples of applications.
statistics
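The simulation-study procedure described above, in schematic form (a generic sketch, not the sstudy API; sstudy would replace the in-memory list with SQL-backed storage):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def one_replication(rng, n=200):
    """Generate data from a known truth, fit an estimator, score it."""
    X = rng.standard_normal((n, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)  # true model
    fit = LinearRegression().fit(X[:n // 2], y[:n // 2])
    resid = y[n // 2:] - fit.predict(X[n // 2:])
    return np.mean(resid ** 2)                   # predictive power (here, MSE)

rng = np.random.default_rng(0)
scores = [one_replication(rng) for _ in range(500)]   # many replications
print(f"estimated predictive MSE: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```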
Nematic order is an exotic property observed in several strongly correlated systems, such as the iron-based superconductors. Using large-scale density matrix renormalization group (DMRG) techniques, we study at zero temperature the nematic spin liquid that competes with spin dipolar and quadrupolar orders. We use these nematic orders to characterize different quantum phases and quantum phase transitions. More specifically, we study a spin-$1$ bilinear-biquadratic Heisenberg model on the square lattice with couplings beyond nearest neighbors. We focus on parameter regions around the highly symmetric $SU(3)$ point where the bilinear and biquadratic interactions are equal. With growing further-neighbor biquadratic interactions, we identify different spin dipolar and quadrupolar orders. We find that the DMRG results on cylindrical geometries correctly detect nematicity in different quantum states and accurately characterize the quantum phase transitions among them. Therefore, spin-driven nematicity -- here defined as the spontaneous breaking of the lattice invariance under a 90$^\circ$ rotation -- is an order parameter which can be studied directly in DMRG calculations in two dimensions in different quantum states.
condensed matter
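For reference, the nearest-neighbour part of a spin-1 bilinear-biquadratic Heisenberg model can be written (in our notation; the further-neighbour terms studied above take the same form) as
\[
H \;=\; \sum_{\langle i j \rangle} \Big[\, J \,\mathbf{S}_i \cdot \mathbf{S}_j \;+\; K \left( \mathbf{S}_i \cdot \mathbf{S}_j \right)^2 \Big] ,
\]
with the highly symmetric $SU(3)$ point reached when the bilinear and biquadratic couplings are equal, $J = K$.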
Neutron scattering techniques offer a unique combination of structural and dynamic information on atomic and molecular systems over a wide range of distances and times. The increasing complexity of science investigations driven by technological advances is reflected in neutron scattering science, which enforces a diversification and improvement of experimental tools, from instrument design to detector performance, and calls as well for more advanced data analysis and modelling. The improvements in resolution, count rate and signal-to-background ratio achievable with the new instrumentation also drive the search for alternative technologies to replace the 3He-based detector technology, which is unable to fulfil the requirements of increasing performance. Two solutions have been studied: a boron-10-based gaseous detector, the Multi-Blade, and a solid-state Si-Gd detector. Both solutions are suitable alternatives for neutron detection, able to meet the demands of high performance. We show not only the technical characteristics of the devices, but also how the science can profit from the better performance of these new detector technologies under real experimental conditions.
physics
Awareness of cybersecurity topics facilitates software developers to produce secure code. This awareness is especially important in industrial environments for the products and services in critical infrastructures. In this work, we address how to raise awareness of software developers on the topic of secure coding. We propose the "CyberSecurity Challenges", a serious game designed to be used in an industrial environment and address software developers' needs. Our work distils the experience gained in conducting these CyberSecurity Challenges in an industrial setting. The main contributions are the design of the CyberSecurity Challenges events, the analysis of the perceived benefits, and practical advice for practitioners who wish to design or refine these games.
computer science
Technologically useful and robust graphene-based interfaces for devices require the introduction of highly selective, stable, and covalently bonded functionalities on the graphene surface, whilst essentially retaining the electronic properties of the pristine layer. This work demonstrates that highly controlled, ultrahigh vacuum covalent chemical functionalization of graphene sheets with a thiol-terminated molecule provides a robust and tunable platform for the development of hybrid nanostructures in different environments. We employ this facile strategy to covalently couple two representative systems of broad interest: metal nanoparticles, via S-metal bonds, and thiol-modified DNA aptamers, via disulfide bridges. Both systems, which have been characterized by a multi-technique approach, remain firmly anchored to the graphene surface even after several washing cycles. Atomic force microscopy images demonstrate that the conjugated aptamer retains the functionality required to recognize a target protein. This methodology opens a new route to the integration of high-quality graphene layers into diverse technological platforms, including plasmonics, optoelectronics, or biosensing. With respect to the latter, the viability of a thiol-functionalized chemical vapor deposition graphene-based solution-gated field-effect transistor array was assessed.
physics
Out-of-time-order correlators (OTOC), vigorously being explored as a measure of quantum chaos and information scrambling, are studied here in the natural and simplest multi-particle context of bipartite systems. We show that two strongly chaotic and weakly interacting subsystems display two distinct phases in the growth of the OTOC. The first is dominated by intra-subsystem scrambling, when an exponential growth with a positive Lyapunov exponent is observed until the Ehrenfest time. This phase is essentially independent of the interaction, while the second phase is an interaction-dominated exponential approach to saturation that is universal and described by a random matrix model. This simple random matrix model of weakly interacting strongly chaotic bipartite systems, previously employed for studying entanglement and spectral transitions, is approximately analytically solvable for its OTOC. The example of two coupled kicked rotors is used to demonstrate the two phases, and the extent to which the random matrix model is applicable. That the two phases correspond to delocalization in the subsystems followed by inter-subsystem mixing is seen via the participation ratio in phase space. We also point out that the second, universal, phase alone exists when the observables are in a sense already scrambled. Thus, while the post-Ehrenfest time OTOC growth is in general not well understood, the case of strongly chaotic and weakly coupled systems presents a perhaps important exception.
quantum physics
In this paper we construct examples of irrational behavior of multiplicities and mixed multiplicities of divisorial filtrations. The construction makes essential use of anti-positive intersection products.
mathematics
In this work, we develop a distributed least squares approximation (DLSA) method that is able to solve a large family of regression problems (e.g., linear regression, logistic regression, and Cox's model) on a distributed system. By approximating the local objective function using a local quadratic form, we are able to obtain a combined estimator by taking a weighted average of local estimators. The resulting estimator is proved to be statistically as efficient as the global estimator. Moreover, it requires only one round of communication. We further conduct a shrinkage estimation based on the DLSA estimation using an adaptive Lasso approach. The solution can be easily obtained by using the LARS algorithm on the master node. It is theoretically shown that the resulting estimator possesses the oracle property and is selection consistent by using a newly designed distributed Bayesian information criterion (DBIC). The finite sample performance and computational efficiency are further illustrated by an extensive numerical study and an airline dataset. The airline dataset is 52 GB in size. The entire methodology has been implemented in Python for a {\it de-facto} standard Spark system. The proposed DLSA algorithm on the Spark system takes 26 minutes to obtain a logistic regression estimator, which is more efficient and memory friendly than conventional methods.
statistics
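The one-shot combination step at the heart of such a scheme can be sketched as follows (our reading, with illustrative names, not the authors' code): each worker returns a local estimate together with the Hessian of its local quadratic approximation, and the master takes a Hessian-weighted average. For ordinary least squares this recovers the full-data solution exactly, which makes a convenient sanity check.

```python
import numpy as np

def dlsa_combine(local_estimates, local_hessians):
    """One round of communication: Hessian-weighted average of K estimators."""
    H_sum = sum(local_hessians)
    weighted = sum(H @ t for H, t in zip(local_hessians, local_estimates))
    return np.linalg.solve(H_sum, weighted)

# sanity check with OLS on 3 shards: the combination equals the global OLS fit
rng = np.random.default_rng(0)
X, beta = rng.standard_normal((3000, 4)), np.array([1.0, -1.0, 2.0, 0.0])
y = X @ beta + rng.standard_normal(3000)
shards = np.array_split(np.arange(3000), 3)
ests = [np.linalg.lstsq(X[s], y[s], rcond=None)[0] for s in shards]
hess = [X[s].T @ X[s] for s in shards]
print(dlsa_combine(ests, hess))    # close to beta, equal to full-data OLS
```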
We study the retarded field sourced by a uniformly accelerated particle in a non-local scalar field theory. While the presence of non-locality regularizes the field at the location of the source, we also show that Lorentz-invariant non-local field theories are particularly sensitive to the somewhat unphysical assumption of uniform acceleration, leading to logarithmic divergences on the acceleration horizon. Analytic properties of the non-local retarded Green function indicate that the divergences can be removed by placing appropriate sources on the acceleration horizon in the asymptotic past.
high energy physics theory
Always-on spoken language interfaces, e.g. personal digital assistants, rely on a wake word to start processing spoken input. We present novel methods to train a hybrid DNN/HMM wake word detection system from partially labeled training data, and to use it in on-line applications: (i) we remove the prerequisite of frame-level alignments in the LF-MMI training algorithm, permitting the use of un-transcribed training examples that are annotated only for the presence/absence of the wake word; (ii) we show that the classical keyword/filler model must be supplemented with an explicit non-speech (silence) model for good performance; (iii) we present an FST-based decoder to perform online detection. We evaluate our methods on two real data sets, showing 50%--90% reduction in false rejection rates at pre-specified false alarm rates over the best previously published figures, and re-validate them on a third (large) data set.
electrical engineering and systems science
How magnetism emerges in low-dimensional materials such as transition metal dichalcogenides at the monolayer limit is still an open question. Herein, we present a comprehensive study of the magnetic properties of single crystal and monolayer VSe$_{2}$, both experimentally and \emph{ab initio}. Magnetometry, X-ray magnetic circular dichroism (XMCD) and \emph{ab initio} calculations demonstrate that the charge density wave in bulk stoichiometric VSe$_{2.0}$ causes a structural distortion with a strong reduction in the density of states at the Fermi level, prompting the system towards a non-magnetic state but on the verge of a ferromagnetic instability. In the monolayer limit, the structural rearrangement induces a Peierls distortion with the opening of an energy gap at the Fermi level and the absence of magnetic order. Control experiments on defect-induced VSe$_{2-\delta}$ single crystals show a breakdown of magnetism, discarding vacancies as a possible origin of magnetic order in VSe$_{2}$.
condensed matter
We study the finite distance boundary symmetry current algebra of the most general first order theory of 3d gravity. We show that the space of quadratic generators contains diffeomorphisms but also a notion of dual diffeomorphisms, which together form either a double Witt or centreless BMS$_3$ algebra. The relationship with the usual asymptotic symmetry algebra relies on a duality between the null and angular directions, which is possible thanks to the existence of the dual diffeomorphisms.
high energy physics theory
In this paper, the dynamics of the fluorescent light emitted by a two-level atom interacting with a squeezed vacuum reservoir is studied using two-time correlation functions. The mathematical analysis shows that the fluorescent spectrum of the light emitted by the atom turns out to be a single Lorentzian peak. On the other hand, the squeezed vacuum reservoir input is responsible for the stimulated emission of photons from the atom. Moreover, it is found that a thermal reservoir is more efficient than a squeezed vacuum reservoir for obtaining a valuable power spectrum.
quantum physics
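The standard route from two-time correlation functions to the emission spectrum (the textbook Wiener-Khintchine form, in our notation, not a formula quoted from the paper) is
\[
S(\omega) \;\propto\; \mathrm{Re} \int_0^{\infty} d\tau \, e^{i\omega\tau}\, \langle \hat{\sigma}_+(t+\tau)\, \hat{\sigma}_-(t) \rangle_{ss} ,
\]
where the expectation is taken in the steady state of the atom-reservoir dynamics.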
Characterizing quasibound states from coupled-channel scattering calculations can be a laborious task, involving extensive manual iteration and fitting. We present an automated procedure, based on the phase shift or S-matrix eigenphase sum, that reliably converges on a quasibound state (or scattering resonance) from some distance away. It may be used for both single-channel and multichannel scattering. It produces the energy and width of the state and the phase shift of the background scattering, and hence the lifetime of the state. It also allows extraction of partial widths for decay to individual open channels. We demonstrate the method on a very narrow state in the Van der Waals complex Ar--H$_2$, which decays only by vibrational predissociation, and on near-threshold states of $^{85}$Rb$_2$, whose lifetime varies over 4 orders of magnitude as a function of magnetic field.
physics
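The resonant behaviour such a procedure converges on is the textbook Breit-Wigner form of the eigenphase sum (our statement of the standard result, not a formula quoted from the paper): across a quasibound state the eigenphase sum rises by $\pi$ as
\[
\delta(E) \;=\; \delta_{\mathrm{bg}}(E) \;+\; \arctan\!\left[ \frac{\Gamma/2}{E_{\mathrm{res}} - E} \right] ,
\]
with resonance position $E_{\mathrm{res}}$, width $\Gamma$, background phase $\delta_{\mathrm{bg}}$, and lifetime $\tau = \hbar/\Gamma$.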
We show that the Ruelle dynamical zeta function on a closed odd-dimensional locally symmetric space twisted by an arbitrary flat vector bundle has a meromorphic extension to the whole complex plane and that its leading term in the Laurent series at the zero point is related to the regularised determinant of the flat Laplacian of Cappell-Miller. When the flat vector bundle is close to an acyclic and unitary one, we show that the dynamical zeta function is regular at the zero point and that its value is equal to the complex-valued analytic torsion of Cappell-Miller. This generalises the author's previous results for unitarily flat vector bundles as well as M\"uller and Spilioti's results on hyperbolic manifolds.
mathematics
End-to-end spoken language understanding (SLU) systems have many advantages over conventional pipeline systems, but collecting in-domain speech data to train an end-to-end system is costly and time consuming. One question arises from this: how to train an end-to-end SLU with limited amounts of data? Many researchers have explored approaches that make use of other related data resources, typically by pre-training parts of the model on high-resource speech recognition. In this paper, we suggest improving the generalization performance of SLU models with a non-standard learning algorithm, Reptile. Though Reptile was originally proposed for model-agnostic meta-learning, we argue that it can also be used to directly learn a target task, resulting in better generalization than conventional gradient descent. In this work, we apply Reptile to the task of end-to-end spoken intent classification. Experiments on four datasets of different languages and domains show improvement of intent prediction accuracy, both when Reptile is used alone and when it is used in addition to pre-training.
computer science
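For orientation, the Reptile update referred to above is itself only a few lines. The following is a minimal sketch, assuming plain SGD as the inner loop and a toy least-squares task; the model, data and hyperparameters are placeholders, not the authors' setup.

```python
import numpy as np

def sgd_steps(w, data, grad_fn, lr=0.01, k=5):
    for _ in range(k):
        w = w - lr * grad_fn(w, data)
    return w

def reptile_step(w, data, grad_fn, eps=0.1, k=5):
    # Run k inner SGD steps, then move a fraction eps toward the adapted weights.
    w_inner = sgd_steps(w.copy(), data, grad_fn, k=k)
    return w + eps * (w_inner - w)

# Toy target task: least squares on random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
grad = lambda w, d: 2.0 * d[0].T @ (d[0] @ w - d[1]) / len(d[1])
w = np.zeros(3)
for _ in range(200):
    w = reptile_step(w, (X, y), grad)
```

Used on a single task, as the abstract proposes, each outer step interpolates toward the result of k inner steps rather than taking the last SGD iterate directly.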
Cyber-physical attacks impose a significant threat to the smart grid, as the cyber attack makes it difficult to identify the actual damage caused by the physical attack. To defend against such attacks, various inference-based solutions have been proposed to estimate the states of grid elements (e.g., transmission lines) from measurements outside the attacked area, out of which a few have provided theoretical conditions for guaranteed accuracy. However, these conditions are usually based on the ground truth states and thus not verifiable in practice. To solve this problem, we develop (i) verifiable conditions that can be tested based on only observable information, and (ii) efficient algorithms for verifying the states of links (i.e., transmission lines) within the attacked area based on these conditions. Our numerical evaluations based on the Polish power grid and IEEE 300-bus system demonstrate that the proposed algorithms are highly successful in verifying the states of truly failed links, and can thus greatly help in prioritizing repairs during the recovery process.
computer science
Bundle adjustment (BA) is a fundamental optimization technique used in many crucial applications, including 3D scene reconstruction, robotic localization, camera calibration, autonomous driving, space exploration, street view map generation, etc. Essentially, BA is a joint non-linear optimization problem, and one which can consume a significant amount of time and power, especially for large optimization problems. Previous approaches to optimizing BA performance heavily rely on parallel processing or distributed computing, which trade higher power consumption for higher performance. In this paper we propose $\pi$-BA, the first hardware-software co-designed BA engine on an embedded FPGA-SoC that exploits custom hardware for higher performance and power efficiency. Specifically, based on our key observation that not all points appear on all images in a BA problem, we designed and implemented a Co-Observation Optimization technique to accelerate BA operations with optimized usage of memory and computation resources. Experimental results confirm that $\pi$-BA outperforms the existing software implementations in terms of performance and power consumption.
electrical engineering and systems science
We establish an important duality correspondence between topological order in quantum many-body systems and criticality in ferromagnetic classical spin systems. We show how such a correspondence leads to a classical and simple procedure for characterization of topological order in an important set of quantum entangled states, namely the Calderbank-Shor-Steane (CSS) states. To this end, we introduce a particular quantum Hamiltonian which allows us to consider the existence of a topological phase transition from quantum CSS states to a magnetized state. We study the ground state fidelity in order to find non-analyticity in the wave function as a signature of a topological phase transition. Since hypergraphs can be used to map any arbitrary CSS state to a classical spin model, we show that the fidelity of the quantum model defined on a hypergraph $H$ is mapped to the heat capacity of the classical spin model defined on the dual hypergraph $\tilde{H}$. Consequently, we show that a ferromagnetic-paramagnetic phase transition in a classical model is mapped to a topological phase transition in the corresponding quantum model. We also show that magnetization does not behave as a local order parameter at the transition point while the classical order parameter is mapped to a non-local measure on the quantum side, further indicating the non-local nature of the transition. Our procedure not only opens the door for identification of topological phases via the existence of a local and classical quantity, i.e., the critical point, but also offers the potential to classify various topological phases through the concept of universality in phase transitions.
quantum physics
This is a tutorial on duality properties of special functions, mainly of orthogonal polynomials in the ($q$-)Askey scheme. It is based on the first part of the 2017 R.P. Agarwal Memorial Lecture delivered by the author.
mathematics
The pseudo-marginal algorithm is a variant of the Metropolis--Hastings algorithm which samples asymptotically from a probability distribution when it is only possible to estimate unbiasedly an unnormalized version of its density. Practically, one has to trade-off the computational resources used to obtain this estimator against the asymptotic variances of the ergodic averages obtained by the pseudo-marginal algorithm. Recent works optimizing this trade-off rely on some strong assumptions which can cast doubts over their practical relevance. In particular, they all assume that the distribution of the difference between the log-density and its estimate is independent of the parameter value at which it is evaluated. Under regularity conditions we show here that, as the number of data points tends to infinity, a space-rescaled version of the pseudo-marginal chain converges weakly towards another pseudo-marginal chain for which this assumption indeed holds. A study of this limiting chain allows us to provide parameter dimension-dependent guidelines on how to optimally scale a normal random walk proposal and the number of Monte Carlo samples for the pseudo-marginal method in the large-sample regime. This complements and validates currently available results.
statistics
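A minimal sketch of the pseudo-marginal mechanism discussed above, assuming a toy unbiased likelihood estimator; the estimator, target and parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglik_hat(theta, n_mc=100):
    # Log of an unbiased, non-negative likelihood estimate; here a toy
    # importance-sampling stand-in, purely for illustration.
    z = rng.normal(size=n_mc)
    return np.log(np.mean(np.exp(-0.5 * (theta - z) ** 2)))

def pm_mh(theta0, n_iter=5000, step=0.5):
    theta, ll = theta0, loglik_hat(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()       # random-walk proposal
        ll_prop = loglik_hat(prop)               # fresh estimate at the proposal
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop            # store the estimate with the state
        chain.append(theta)
    return np.array(chain)

chain = pm_mh(0.0)
```

The trade-off analyzed in the abstract appears here as the choice of n_mc: more Monte Carlo samples reduce the noise in loglik_hat but raise the cost per iteration.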
We establish the sharp rate of continuity of extensions of $\mathbb{R}^m$-valued $1$-Lipschitz maps from a subset $A$ of $\mathbb{R}^n$ to $1$-Lipschitz maps on $\mathbb{R}^n$. We consider several cases in which there exists an extension with preserved Lipschitz constant and preserved uniform distance to a given $1$-Lipschitz map. We prove that if $m>1$, then a given map is $1$-Lipschitz and affine if and only if such an extension exists for any $1$-Lipschitz map defined on any subset of $\mathbb{R}^n$. This shows a striking difference from the case $m=1$, where any $1$-Lipschitz function has this property. Another case where we prove it is possible to find an extension with the same Lipschitz constant and the same uniform distance to another Lipschitz map $v$ is when the difference between the two maps belongs to a fixed one-dimensional subspace of $\mathbb{R}^m$ and the set $A$ is geodesically convex with respect to a Riemannian pseudo-metric associated with $v$.
mathematics
It is usual to identify initial conditions of classical dynamical systems with mathematical real numbers. However, almost all real numbers contain an infinite amount of information. I argue that a finite volume of space can't contain more than a finite amount of information, hence that the mathematical real numbers are not physically relevant. Moreover, a better terminology for the so-called real numbers is ``random numbers'', as their series of bits are truly random. I propose an alternative classical mechanics, which is empirically equivalent to classical mechanics, but uses only finite-information numbers. This alternative classical mechanics is non-deterministic, despite the use of deterministic equations, in a way similar to quantum theory. Interestingly, both alternative classical mechanics and quantum theories can be supplemented by additional variables in such a way that the supplemented theory is deterministic. Most physicists straightforwardly supplement classical theory with real numbers to which they attribute physical existence, while most physicists reject Bohmian mechanics as supplemented quantum theory, arguing that Bohmian positions have no physical reality.
quantum physics
We study the interplay between the following types of special non-K\"ahler Hermitian metrics on compact complex manifolds: locally conformally K\"ahler, $k$-Gauduchon, balanced and locally conformally balanced, and prove that a locally conformally K\"ahler compact nilmanifold carrying a balanced or a $k$-Gauduchon metric is necessarily a torus. Combined with a result of Fino and Vezzoni from 2016, this leads to the fact that a compact complex 2-step nilmanifold endowed with any two of the following types of metrics: balanced, pluriclosed and locally conformally K\"ahler, is a torus. Moreover, we construct a family of compact nilmanifolds in any dimension carrying both balanced and locally conformally balanced metrics, and finally we show that a compact complex nilmanifold does not support a left-invariant locally conformally hyperK\"ahler structure.
mathematics
In this work we introduce the development of a three-phase incompressible Navier-Stokes/Cahn-Hilliard numerical method to simulate three-phase flows, present in many industrial operations. The numerical method is then applied to successfully solve oil transport problems, such as those found in the oil and gas industry. The three-phase model adopted in this work is a Cahn-Hilliard diffuse interface model, derived by Boyer and Lapuerta (2006). The Cahn-Hilliard model is coupled to the entropy-stable incompressible Navier-Stokes equations model derived by Manzanero et al. (2019). The spatial discretization uses a high-order discontinuous Galerkin spectral element method which yields highly accurate results in arbitrary geometries, while an implicit-explicit (IMEX) method is adopted as the temporal discretization. The developed numerical tool is tested on two- and three-dimensional problems, including a convergence study, a two-dimensional jet, a three-dimensional annular flow, and realistic geometries like T-shaped pipe intersections.
mathematics
This paper introduces the chain lattice, a hierarchical truss structure comprising two interpenetrating lattices. One lattice toughens the material and prevents catastrophic localized failure while the other lattice serves as a porous matrix that densifies to absorb energy during tensile loading. Chain lattices are amenable to additive manufacturing and can transform 3D-printable materials that are normally brittle and flaw-sensitive into damage-tolerant materials. Calculations predict ceramic chain lattices can have a specific energy absorption several orders of magnitude greater than that of their fully dense counterparts.
physics
Calude, Jain, Khoussainov, Li, and Stephan (2017) proposed a quasi-polynomial-time algorithm solving parity games. After this breakthrough result, a few other quasi-polynomial-time algorithms were introduced; none of them is easy to understand. Moreover, it turns out that in practice they operate very slowly. On the other side there is Zielonka's recursive algorithm, which is very simple, exponential in the worst case, and the fastest in practice. We combine these two approaches: we propose a small modification of Zielonka's algorithm, which ensures that the running time is at most quasi-polynomial. In effect, we obtain a simple algorithm that solves parity games in quasi-polynomial time. We also hope that our algorithm, after further optimizations, can lead to an algorithm that shares the good performance of Zielonka's algorithm on typical inputs, while reducing the worst-case complexity on difficult inputs.
computer science
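For context, the classical Zielonka recursion being modified can be sketched compactly. A minimal, illustrative implementation, assuming the usual convention that player $p \bmod 2$ wins iff the highest priority $p$ seen infinitely often has that parity, and that every vertex keeps a successor in each subgame (true when attractors are removed):

```python
def attractor(V, E, owner, target, j):
    # Vertices in V from which player j can force the play into `target`.
    A = set(target)
    changed = True
    while changed:
        changed = False
        for v in V - A:
            succ = E[v] & V
            if (owner[v] == j and succ & A) or (owner[v] != j and succ <= A):
                A.add(v)
                changed = True
    return A

def zielonka(V, E, owner, prio):
    # Returns (W0, W1): the winning regions of players 0 and 1 inside V.
    if not V:
        return set(), set()
    p = max(prio[v] for v in V)
    j = p % 2                                      # player favoured by priority p
    A = attractor(V, E, owner, {v for v in V if prio[v] == p}, j)
    W = list(zielonka(V - A, E, owner, prio))
    if not W[1 - j]:                               # opponent wins nothing below
        W[j] |= A
    else:                                          # re-solve without opponent's region
        B = attractor(V, E, owner, W[1 - j], 1 - j)
        W = list(zielonka(V - B, E, owner, prio))
        W[1 - j] |= B
    return W[0], W[1]

# Tiny example: player 0 wins vertex 0 (even self-loop), player 1 wins vertex 1.
V, E = {0, 1}, {0: {0, 1}, 1: {1}}
print(zielonka(V, E, {0: 0, 1: 1}, {0: 2, 1: 1}))  # ({0}, {1})
```

The quasi-polynomial modification proposed in the paper alters how this recursion descends; the sketch above is only the exponential baseline.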
Aluminum scandium nitride alloy (Al$_{1-x}$Sc$_x$N) is regarded as a promising material for high-performance acoustic devices used in wireless communication systems. Phonon scattering and heat conduction processes govern the energy dissipation in acoustic resonators, ultimately determining their performance quality. This work reports, for the first time, on phonon scattering processes and thermal conductivity in Al$_{1-x}$Sc$_x$N alloys with the Sc content ($x$) up to 0.26. The measured thermal conductivity presents a descending trend with increasing $x$. Temperature-dependent measurements show an increase in thermal conductivity as the temperature increases at temperatures below 200 K, followed by a plateau at higher temperatures ($T > 200$ K). Application of a virtual crystal phonon conduction model allows us to elucidate the effects of boundary and alloy scattering on the observed thermal conductivity behaviors. We further demonstrate that the alloy scattering is caused mainly by the strain-field difference, and less by the atomic mass difference, between ScN and AlN, which is in contrast to the well-studied Al$_{1-x}$Ga$_x$N and Si$_x$Ge$_{1-x}$ alloy systems where the atomic mass difference dominates the alloy scattering. This work provides quantitative knowledge of phonon scattering and thermal conductivity in Al$_{1-x}$Sc$_x$N, paving the way for future investigation of materials and design of acoustic devices.
condensed matter
The issue of rogue wave lifetimes is addressed in this study, which helps to detail the general picture of this dangerous oceanic phenomenon. Direct numerical simulations of irregular wave ensembles are performed to obtain complete, accurate data on rogue wave occurrence and evolution. The simulations are conducted by means of the HOS scheme for the potential Euler equations; purely collinear wave systems, moderately crested, and short-crested sea states have been simulated. We join instant abnormally high waves in close locations and at close time moments into new objects, rogue events, which helps to retrieve the abnormal occurrences more stably and more consistently from the physical point of view. The rogue wave event probability distributions are built based on the simulated wave data. They show a distinctive difference between rough sea states with small directional bandwidth on the one hand, and small-amplitude states and short-crested states on the other. The former support long-living rogue wave patterns (the corresponding probability distributions have heavy tails), while the latter possess exponential probability distributions of rogue event lifetimes and produce much shorter rogue wave events.
physics
We report on the well-posedness of the Feynman problem for the Klein-Gordon equation on asymptotically Minkowski spacetimes. The main result is the invertibility of the Klein-Gordon operator with Feynman conditions at infinite times. Furthermore, the inverse is shown to coincide with the Duistermaat-H\"ormander Feynman parametrix modulo smoothing terms.
mathematics
Security vulnerability in third-party dependencies is a growing concern not only for developers of the affected software, but for the risks it poses to an entire software ecosystem, e.g., the Heartbleed vulnerability. Recent studies show that developers are slow to respond to the threat of vulnerability, sometimes taking four to eleven months to act. To ensure quick adoption and propagation of a release that contains the fix (fixing release), we conduct an empirical investigation to identify lags that may occur between the vulnerable release and its fixing release (package-side fixing release). Through a preliminary study of 231 package-side fixing releases of npm projects on GitHub, we observe that a fixing release is rarely released on its own, with up to 85.72% of the bundled commits being unrelated to a fix. We then compare the package-side fixing release with changes on the client side (client-side fixing release). Through an empirical study of the adoption and propagation tendencies of 1,290 package-side fixing releases that impact a network of 1,553,325 releases of npm packages, we find that stale clients require additional migration effort, even if the package-side fixing release was quick (i.e., package patch landing). Furthermore, we show the influence of factors such as the branch that the package-side fixing release lands on and the severity of the vulnerability on its propagation. In addition to identifying and characterizing these lags, this paper lays the groundwork for future research on how to mitigate lags in an ecosystem.
computer science
The movable temperature profiler is a 7 m vertical array of 24 sensors that measures cryogenic temperatures with a precision of a few mK. This precision is necessary to monitor the efficiency of re-circulation and purification of liquid argon inside large liquid-argon based neutrino detectors. Liquid argon temperature impacts electron (signal) drift velocity, flow, purity distribution and thus the overall energy calibration. The temperature profiler is motorized and moves vertically, while in the detector, and cross-calibrates neighboring sensors. The measured temperature offsets between neighboring sensors cancel the effects of electromagnetic noise. This poster reports on the temperature measurements and on such in-situ cross-calibrations at ProtoDUNE (single phase) at CERN.
physics
Topological edge modes, which are robust against disorders, have been used to enhance the spatial stability of lasers. Recently, it was revealed that topological lasers can be further stabilized using a novel topological phase in non-Hermitian photonic topological insulators. Here we propose a procedure to realize topologically protected modes extended over a $d$-dimensional bulk by introducing an imaginary gauge field. This generalizes the idea of zero-energy extended modes in the one-dimensional Su-Schrieffer-Heeger lattice to higher-dimensional lattices, allowing a $d$-dimensional bulk mode that is topologically protected. Furthermore, we numerically demonstrate that the topological bulk lasing mode can achieve high temporal stability superior to topological edge mode lasers. In the exemplified topological extended mode in the kagome lattice, we show that large regions of stability exist in its parameter space.
physics
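A toy illustration of the mechanism in one dimension: an SSH chain whose intracell hoppings are made asymmetric by an imaginary gauge field $h$, which can compensate the exponential decay of the zero mode and spread it over the bulk. The sketch below is our own simplification with illustrative parameters, not the paper's model.

```python
import numpy as np

def ssh_imaginary_gauge(n_cells=20, v=0.5, w=1.0, h=0.0):
    # Open SSH chain; the imaginary gauge field h makes the intracell
    # hopping asymmetric (v*e^{+h} one way, v*e^{-h} the other).
    N = 2 * n_cells
    H = np.zeros((N, N), dtype=complex)
    for i in range(n_cells):
        a, b = 2 * i, 2 * i + 1
        H[a, b] = v * np.exp(-h)
        H[b, a] = v * np.exp(+h)
        if i < n_cells - 1:
            H[b, a + 2] = H[a + 2, b] = w
    return H

for h in (0.0, np.log(1.0 / 0.5)):      # h = ln(w/v) roughly flattens the mode
    evals, evecs = np.linalg.eig(ssh_imaginary_gauge(h=h))
    k = np.argmin(np.abs(evals))        # right eigenvector closest to E = 0
    p = np.abs(evecs[:, k]) ** 2
    p /= p.sum()
    print(f"h = {h:.2f}: weight on first/last cell = {p[:2].sum():.3f}/{p[-2:].sum():.3f}")
```

At $h=0$ the near-zero mode is pinned to one edge; at $h \approx \ln(w/v)$ its amplitude is spread across the chain, which is the one-dimensional seed of the $d$-dimensional bulk modes discussed above.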
Biomolecular light-harvesting antennas operate as nanoscale devices in a regime where the coherent interactions of individual light, matter and vibrational quanta are non-perturbatively strong. The complex behaviour arising from this could, if fully understood, be exploited for myriad energy applications. However, non-perturbative dynamics are computationally challenging to simulate, and experiments on biomaterials explore very limited regions of the non-perturbative parameter space. So-called `quantum simulators' of light-harvesting models could provide a solution to this problem, and here we employ the hierarchical equations of motion technique to investigate the recent superconducting experiments of Poto{\v{c}}nik et al. (Nat. Commun. 9, 904 (2018)) used to explore excitonic energy capture. By explicitly including the role of optical driving fields, non-perturbative dephasing noise and the full multi-excitation Hilbert space of a three-qubit quantum circuit, we predict the measurable impact of these factors on transfer efficiency. By analysis of the eigenspectrum of the network, we uncover a structure of energy levels that allows the network to exploit optical `dark' states and excited state absorption for energy transfer. We also confirm that time-resolvable coherent oscillations could be experimentally observed, even under strong, non-additive action of the driving and optical fields.
quantum physics
Gamma-ray bursts (GRBs) can be divided into three subclasses: X-ray flash (XRF), X-ray rich (XRR), and classical GRB (C-GRB). An X-ray flare is the rebrightening emission shown in the early X-ray afterglow of some GRBs. In this paper, we comprehensively examine the X-ray flare properties among XRF, XRR, and C-GRB subclasses. We utilize the XRF, XRR, and C-GRB subclass samples obtained from the Swift-BAT3 catalog, and the X-ray flare observational properties are collected from Falcone et al., Chincarini et al., and Yi et al. We find that XRFs and XRRs have more bright X-ray flares than C-GRBs. The ratio of the X-ray flare fluence to the prompt emission fluence has different distributions between XRF and C-GRB subclasses. The linear correlation between the duration and the peak time of the X-ray flares is also different between XRF and C-GRB subclasses. We are inclined to identify the GRBs with the bright X-ray flares as XRFs or XRRs. We discuss some issues that are related to the XRF/XRR/C-GRB classification. We also caution about the selection effects and the instrument bias in our investigation. Large samples are required in the future to further confirm our results.
astrophysics
We investigate the rising flux tube and the formation of sunspots in an unprecedentedly deep computational domain that covers the whole convection zone with a radiative magnetohydrodynamics simulation. Previous calculations used shallow computational boxes (< 30 Mm), whereas the convection zone extends to a depth of 200 Mm. By using our new numerical code R2D2, we succeed in covering the whole convection zone and reproduce the formation of the sunspot from a simple horizontal flux tube as a result of the turbulent thermal convection. The main findings are: (1) The rising speed of the flux tube is larger than the upward convection velocity because of the low density caused by the magnetic pressure and the suppression of the mixing. (2) The rising speed of the flux tube exceeds 250 m/s at a depth of 18 Mm, while we do not see any clear evidence of the divergent flow 3 hr before the emergence at the solar surface. (3) Initially, the root of the flux tube is filled with downflows, and then upflow fills the center of the flux tube during the formation of the sunspot. (4) The essential mechanisms for the formation of the sunspot are the coherent inflow and the turbulent transport. (5) The low-temperature region extends to a depth of at least 40 Mm in the matured sunspot, with the high-temperature region in the center of the flux tube. Some of the findings indicate the importance of the deep computational domain for flux emergence simulations.
astrophysics
Fast and accurate MRI image reconstruction from undersampled data is critically important in clinical practice. Compressed sensing based methods are widely used in image reconstruction but the speed is slow due to the iterative algorithms. Deep learning based methods have shown promising advances in recent years. However, recovering the fine details from highly undersampled data is still challenging. In this paper, we introduce a novel deep learning-based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct the image from multiple scales. We evaluated our model on the fastMRI dataset and the results show that the proposed model achieves significant improvements over other methods and can recover more fine details.
electrical engineering and systems science
Understanding causal relationships is one of the most important goals of modern science. So far, the causal inference literature has focused almost exclusively on outcomes coming from a linear space, most commonly the Euclidean space. However, it is increasingly common that complex datasets collected through electronic sources, such as wearable devices and medical imaging, cannot be represented as data points from linear spaces. In this paper, we present a formal definition of causal effects for outcomes from non-linear spaces, with a focus on the Wasserstein space of cumulative distribution functions. We develop doubly robust estimators and associated asymptotic theory for these causal effects. Our framework extends to outcomes from certain Riemannian manifolds. As an illustration, we use our framework to quantify the causal effect of marriage on physical activity patterns using wearable device data collected through the National Health and Nutrition Examination Survey.
statistics
We present ALMA observations of the CO(2-1) and CO(3-2) molecular gas transitions and associated (sub)-mm continua of the nearby Seyfert 1.5 galaxy NGC3227 with angular resolutions 0.085-0.21" (7-15pc). On large scales the cold molecular gas shows circular motions as well as streaming motions on scales of a few hundred parsecs associated with a large scale bar. We fitted the nuclear ALMA 1.3mm emission with an unresolved component and an extended component. The 850$\mu$m emission shows at least two extended components, one along the major axis of the nuclear disk and the other along the axis of the ionization cone. The molecular gas in the central region (1" ~73pc) shows several CO clumps with complex kinematics which appears to be dominated by non-circular motions. While we cannot demonstrate conclusively the presence of a warped nuclear disk, we also detected non-circular motions along the kinematic minor axis. They reach line-of-sight velocities of v-vsys =150-200km/s. Assuming that the radial motions are in the plane of the galaxy, then we interpret them as a nuclear molecular outflow due to molecular gas in the host galaxy being entrained by the AGN wind. We derive molecular outflow rates of $5\,M_\odot\,{\rm yr}^{-1}$ and $0.6\,M_\odot\,{\rm yr}^{-1}$ at projected distances of up to 30pc to the northeast and southwest of the AGN, respectively. At the AGN location we estimate a mass in molecular gas of $5\times 10^{5}\,M_\odot$ and an average column density $N({\rm H}_2) = 2-3\times 10^{23}\,{\rm cm}^{-2}$ in the inner 15pc. The nuclear molecular gas and sub-mm continuum emission of NGC3227 do not resemble the classical compact torus. Rather, these emissions extend for several tens of parsecs and appear connected with the circumnuclear ring in the host galaxy disk, as found in other local AGN. (Abridged)
astrophysics
We present a calculation of the NLO QCD corrections to the loop-induced production of a photon pair through gluon fusion, including massive top quarks at two loops, where the two-loop integrals are calculated numerically. Matching the fixed-order NLO results to a threshold expansion, we obtain accurate results around the top quark pair production threshold. We analyse how the top quark threshold corrections affect distributions of the photon pair invariant mass and comment on the possibility of determining the top quark mass from precision measurements of the diphoton invariant mass spectrum.
high energy physics phenomenology
We consider the Willmore functional on graphs, with an additional penalization of the area where the curvature is non-zero. Interpreting the penalization parameter as a Lagrange multiplier, this corresponds to the Willmore functional with a constraint on the area where the graph is flat. Sending the penalization parameter to $\infty$ and rescaling suitably, we derive the limit functional in the sense of $\Gamma$-convergence.
mathematics
Airborne radar carried on board unmanned aerial vehicles (UAVs) is serving as the harbinger of new remote sensing applications for security and rescue in inclement environments. The mobility and agility of UAVs along with intelligent on-board sensors (cameras, acoustics, and radar) are more effective during the early stages of disaster response. The ability of radars to penetrate through objects and operate during low visibility conditions enables detection of occluded human subjects on and under debris when other sensing modalities fail. Recently, radars have been deployed on UAVs to measure minute human physiological parameters such as respiratory and heart rates while sensing through clothing and building materials. Aggregating radar measurements with the information from other sensors is broadening the applications of drones in life-critical situations. Signal processing techniques are critical in enabling UAV-borne radars for human vital sign detection (VSD) in multiple operation modes. Novel radar configurations such as in a UAV swarm and tethered UAVs are required to facilitate multi-tasking and high endurance, respectively. This paper provides an overview of recent advances in UAV-borne VSD with a focus on deployment modes and processing methods.
electrical engineering and systems science
Attention is an operation that selects a largest element from a set, where the notion of "largest" is defined elsewhere. Applying this operation to sequence-to-sequence mapping results in significant improvements to the task at hand. In this paper we provide the mathematical definition of attention and examine its application to sequence-to-sequence models. We highlight the exact correspondences between machine learning implementations of attention and our mathematical definition. We provide clear evidence of the effectiveness of attention mechanisms by evaluating models with varying degrees of attention on a very simple task: copying a sentence. We find that models that make greater use of attention perform much better on sequence-to-sequence mapping tasks, converge faster and are more stable.
computer science
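A minimal numpy sketch of single-query dot-product attention, matching the "soft selection of a largest element" reading above; the dimensions and data are arbitrary.

```python
import numpy as np

def attention(q, K, V):
    scores = K @ q / np.sqrt(q.size)     # how "large" each element is
    w = np.exp(scores - scores.max())
    w = w / w.sum()                      # softmax: a soft arg-max selection
    return w @ V                         # weighted read-out of the set

rng = np.random.default_rng(0)
K = V = rng.normal(size=(5, 8))          # a "set" of 5 elements
q = K[2] + 0.01 * rng.normal(size=8)     # query close to element 2
out = attention(q, K, V)                 # approximately returns element 2
```

As the score scale grows, the softmax weights concentrate on the top-scoring element, recovering a hard arg-max selection.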
The issue of synchronization in the power grid is receiving renewed attention, as new energy sources with different dynamics enter the picture. Global metrics have been proposed to evaluate performance and analyzed under highly simplified assumptions. In this paper, we extend this approach to more realistic network scenarios and more closely connect it with metrics used in power engineering practice. In particular, our analysis covers networks with generators of heterogeneous ratings and richer dynamic models of machines. Under a suitable proportionality assumption in the parameters, we show that the step response of bus frequencies can be decomposed in two components. The first component is a {system-wide frequency} that captures the aggregate grid behavior, and the residual component represents the individual bus frequency deviations from the aggregate. Using this decomposition, we define --and compute in closed form-- several metrics that capture dynamic behaviors that are of relevance for power engineers. In particular, using the \emph{system frequency}, we define industry-style metrics (Nadir, RoCoF) that are evaluated through a representative machine. We further use the norm of the residual component to define a \emph{synchronization cost} that can appropriately quantify inter-area oscillations. Finally, we employ robustness analysis tools to evaluate deviations from our proportionality assumption. We show that the system frequency still captures the grid steady-state deviation, and becomes an accurate reduced-order model of the grid as the network connectivity grows. Simulation studies with practically relevant data are included to validate the theory and further illustrate the impact of network structure and parameters on synchronization. Our analysis gives conclusions of practical interest, sometimes challenging the conventional wisdom in the field.
computer science
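To make the industry-style metrics concrete, here is a small sketch computing Nadir, RoCoF and the steady-state deviation from a toy second-order system-frequency trace; the trace is illustrative and not the paper's model.

```python
import numpy as np

# Toy underdamped second-order step response of the system frequency (Hz deviation).
t = np.linspace(0.0, 20.0, 2001)
sig, wd, drop = 0.4, 1.2, 0.5
f = -drop * (1.0 - np.exp(-sig * t) * (np.cos(wd * t) + (sig / wd) * np.sin(wd * t)))

nadir = f.min()                          # worst instantaneous deviation
rocof = np.gradient(f, t)[:50].min()     # steepest early rate of change of frequency
steady = f[-1]                           # steady-state deviation
print(f"Nadir = {nadir:.3f} Hz, RoCoF = {rocof:.3f} Hz/s, steady = {steady:.3f} Hz")
```

In the paper's decomposition these metrics are evaluated on the aggregate system frequency, while the synchronization cost is a norm of the residual bus-wise deviations from that aggregate.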
Historically, non-disabled individuals have viewed disability as a personal deficit requiring change to the disabled individual. However, models have emerged from disability activists and disabled intellectuals that emphasize the role of disabling social structures in preventing or hindering equal access across the ability continuum. We used the social relational proposition, which situates disability within the interaction of impairments and particular social structures, to identify disabling structures in introductory STEM courses. We conducted interviews with nine students who identified with a range of impairments about their experiences in introductory STEM courses. We assembled a diverse research team and analyzed the interviews through phenomenological analysis. Participants reported course barriers that prevented effective engagement with course content. These barriers resulted in challenges with time management as well as feelings of stress and anxiety. We discuss recommendations for supporting students to more effectively engage with introductory STEM courses.
physics
Nanoscale topologically non-trivial spin textures, such as magnetic skyrmions, have been identified as promising candidates for the transport and storage of information for spintronic applications, notably magnetic racetrack memory devices. The design and realization of a single skyrmion chain at room temperature (RT) and above in low-dimensional nanostructures is of great importance for future practical applications. Here, we report the creation of a single skyrmion bubble chain in a geometrically confined Fe3Sn2 nanostripe with a width comparable to the featured size of a skyrmion bubble. Systematic investigations of the thermal stability reveal that the single chain of skyrmion bubbles can remain stable at temperatures varying from RT up to a record-high temperature of 630 K. This extreme stability can be ascribed to the weakly temperature-dependent magnetic anisotropy and the formation of edge states at the boundaries of the nanostripes. The realization of a highly stable skyrmion bubble chain in a geometrically confined nanostructure is a very important step towards the application of skyrmion-based spintronic devices.
condensed matter
A fraction of dwarf galaxies in the Virgo cluster contain disk features like bars and spiral arms. Using $N$-body simulations, we investigate the effects of tidal forces on the formation of such disk features in disk dwarf galaxies resembling VCC856. We consider 8 Cluster-Galaxy models in which disk dwarf galaxies with differing pericenter distance and spin orientation experience the tidal gravitational force of a Virgo-like NFW halo, and 8 additional Galaxy-Galaxy models in which two dwarf galaxies undergo tidal interactions of different strength. We find that the cluster tidal effect is moderate due to the small galaxy size, making the bars form earlier by $\sim1$--$1.5\Gyr$ compared to the cases in isolation. While the galactic halos significantly lose their mass within the virial radius due to the cluster tidal force, the mass of the stellar disks is nearly unchanged, suggesting that the inner regions of a disk-halo system are shielded from the tidal force. The tidal forcing from either the cluster potential or a companion galaxy triggers the formation of two-armed spirals at early times, before a bar develops. The tidally-driven arms decay and wind with time, suggesting that they are kinematic density waves. In terms of strength and pitch angle, the faint arms in VCC856 are best matched with the arms in a marginally unstable galaxy produced by a distant tidal encounter with its neighbor $\sim0.85\Gyr$ ago.
astrophysics
To address COVID-19 healthcare challenges, we need frequent sharing of health data, knowledge and resources at a global scale. However, in this digital age, data privacy is a big concern that requires the secure embedding of privacy assurance into the design of all technological solutions that use health data. In this paper, we introduce the differential privacy by design (dPbD) framework and discuss its embedding into federated machine learning systems. To limit the scope of our paper, we focus on the problem scenario of COVID-19 imaging data privacy for disease diagnosis by computer vision and deep learning approaches. We discuss the evaluation of the proposed design of federated machine learning systems and how the dPbD framework can enhance data privacy in federated learning systems with scalability and robustness. We argue that scalable differentially private federated learning design is a promising solution for building a secure, private and collaborative machine learning model, such as is required to combat the COVID-19 challenge.
computer science
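A minimal sketch of one differentially private federated-averaging round in the spirit of the framework described above: clip each client update to bound per-client sensitivity, then add Gaussian noise to the aggregate. The clip norm and noise scale are illustrative; calibrating sigma to a target (epsilon, delta) budget is omitted.

```python
import numpy as np

def dp_fedavg_round(global_w, client_updates, clip=1.0, sigma=0.5, rng=None):
    # Clip each client update, average, then add noise scaled to the
    # sensitivity of the clipped average (clip / number of clients).
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(scale=sigma * clip / len(clipped), size=avg.shape)
    return global_w + avg + noise

rng = np.random.default_rng(0)
w = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(8)]   # one update per client
w = dp_fedavg_round(w, updates, rng=rng)
```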
Hypothesis: Evaporation of surfactant droplets on leaves is complicated due to the complex physical and chemical properties of the leaf surfaces. However, for certain leaf surfaces for which the evaporation process appears to follow the standard constant-contact-radius or constant-contact-angle modes, it should be possible to mimic the droplet evaporation with both a well-chosen synthetic surface and a relatively simple mathematical model. Experiments: Surfactant droplet evaporation experiments were performed on two commercial crop species, wheat and capsicum, along with two synthetic surfaces, up to a $90\,^{\circ}$ incline. The time-dependence of the droplets' contact angles, height, volume and contact radius was measured throughout the evaporation experiments. Mathematical models were developed to simulate the experiments. Findings: With one clear exception, for all combinations of surfaces, surfactant concentrations and angles, the experiments appear to follow the standard evaporation modes and are well described by the mathematical models (modified Popov and Young-Laplace-Popov). The exception is wheat with a high surfactant concentration, for which droplet evaporation appears nonstandard and deviates from the diffusion limited models, perhaps due to additional mechanisms such as the adsorption of surfactant, stomatal density or an elongated shape in the direction of the grooves in the wheat surface.
physics
The study of quantum resonances in the chaotic atom-optics kicked rotor system is of interest from two different perspectives. In quantum chaos, it marks out the regime of resonant quantum dynamics in which the atomic cloud displays ballistic mean-energy growth due to coherent momentum transfer. Secondly, the sharp quantum resonance peaks are useful in the context of measuring the Talbot time, one of the parameters that enables a precise measurement of the fine structure constant. Most of the earlier works rely on a fidelity-based approach and have proposed measuring the Talbot time through experimental determination of the momentum-space probability density of the periodically kicked atomic cloud. The fidelity approach has the disadvantage that phase-reversed kicks need to be imparted as well, which potentially leads to dephasing. In contrast, in this work it is theoretically shown that, without manipulating the kick sequences, the quantum resonances can be measured more accurately through the position-space density, in a manner that is experimentally feasible as well.
quantum physics
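For illustration, the resonant ballistic growth mentioned above can be reproduced with a few lines of split-step evolution of the quantum kicked rotor; the parameters below are illustrative, not the proposal's experimental values.

```python
import numpy as np

M = 512                                    # momentum basis m = -M/2 .. M/2-1
m = np.arange(-M // 2, M // 2)
theta = 2 * np.pi * np.arange(M) / M
K, kbar = 1.5, 4 * np.pi                   # kick strength; kbar at principal resonance

psi = np.zeros(M, dtype=complex)
psi[M // 2] = 1.0                          # initial state: m = 0

free = np.exp(-1j * kbar * m**2 / 2)       # free evolution (trivial at resonance)
kick = np.exp(-1j * (K / kbar) * np.cos(theta))

energies = []
for _ in range(50):
    psi = free * psi                                   # momentum space
    amp = np.fft.ifft(np.fft.ifftshift(psi)) * kick    # apply kick in angle space
    psi = np.fft.fftshift(np.fft.fft(amp))             # back to momentum space
    energies.append(float(np.sum(np.abs(psi)**2 * (kbar * m)**2 / 2)))
# At resonance `energies` grows ~ t**2 (ballistic); detuning kbar destroys this.
```

At $\bar{k} = 4\pi$ the free-evolution phases are trivial, so the kicks add coherently; off resonance the same loop shows the suppression of the energy growth.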
We analyze high-resolution spectropolarimetric observations of a flux emerging region (FER) in order to understand its magnetic and kinematic structure. Our spectropolarimetric observations in the He I 1083.0 nm spectral region of a FER were recorded with GRIS at the 1.5 m aperture GREGOR telescope. A Milne-Eddington based inversion code was employed to extract the photospheric information of the Si I spectral line, whereas the He I triplet line was analyzed with the Hazel inversion code, which takes into account the joint action of the Hanle and the Zeeman effects. The spectropolarimetric analysis of the Si I line displays a complex magnetic structure in the vicinity of the FER. Moreover, we find that supersonic downflows of 40 km/s appear near the footpoints of loops connecting two pores of opposite polarity, whereas strong upflows of 22 km/s appear near the apex of the loops. Furthermore, non-force-free field extrapolations were performed separately at two layers in order to understand the magnetic field topology of the FER. We determine, using extrapolations from the photosphere and the observed chromospheric magnetic field, that the average formation height of the He triplet line is 2 Mm from the solar surface. The reconstructed loops using photospheric extrapolations along an arch filament system have a maximum height of 10.5 Mm from the solar surface with a footpoint separation of 19 Mm, whereas the loops reconstructed using chromospheric extrapolations are around 8.4 Mm high from the solar surface with a footpoint separation of 16 Mm at the chromospheric height. The magnetic topology in the FER suggests the presence of small-scale loops beneath the large loops. Under suitable conditions, due to magnetic reconnection, these loops can trigger various heating events in the vicinity of the FER.
astrophysics
In this work, a pattern recognition algorithm was developed to process and analyze electron emission micrographs. Various examples of dc and rf emission are given that demonstrate this algorithm's applicability for determining emitters' spatial locations and distributions and for calculating the apparent emission area. The algorithm is fast and takes only $\sim$10 seconds to process and analyze one micrograph using the resources of an Intel Core i5.
physics
We calculate all planar contributions to the two-loop massless helicity amplitudes for the process $q\bar q\to \gamma\gamma\gamma$. The results are presented in fully analytic form in terms of the functional basis proposed recently by Chicherin and Sotnikov. With this publication we provide the two-loop contributions already used by us in the NNLO QCD calculation of the LHC process $pp\to \gamma\gamma\gamma$ [Chawdhry et al. (2019)]. Our results agree with a recent calculation of the same amplitude [Abreu et al. (2020)] which was performed using different techniques. We combine several modern computational techniques, notably, analytic solutions for the IBP identities, finite-field reconstruction techniques as well as the recent approach [Chen (2019)] for efficiently projecting helicity amplitudes. Our framework appears well-suited for the calculation of two-loop multileg amplitudes for which complete sets of master integrals exist.
high energy physics phenomenology
When absorbing boundary conditions are used to evaporate a black hole in AdS/CFT, we show that there is a phase transition in the location of the quantum Ryu-Takayanagi surface, at precisely the Page time. The new RT surface lies slightly inside the event horizon, at an infalling time approximately the scrambling time $\beta/2\pi \log S_{BH}$ into the past. We can immediately derive the Page curve, using the Ryu-Takayanagi formula, and the Hayden-Preskill decoding criterion, using entanglement wedge reconstruction. Because part of the interior is now encoded in the early Hawking radiation, the decreasing entanglement entropy of the black hole is exactly consistent with the semiclassical bulk entanglement of the late-time Hawking modes, despite the absence of a firewall. By studying the entanglement wedge of highly mixed states, we can understand the state dependence of the interior reconstructions. A crucial role is played by the existence of tiny, non-perturbative errors in entanglement wedge reconstruction. Directly after the Page time, interior operators can only be reconstructed from the Hawking radiation if the initial state of the black hole is known. As the black hole continues to evaporate, reconstructions become possible that simultaneously work for a large class of initial states. Using similar techniques, we generalise Hayden-Preskill to show how the amount of Hawking radiation required to reconstruct a large diary, thrown into the black hole, depends on both the energy and the entropy of the diary. Finally we argue that, before the evaporation begins, a single, state-independent interior reconstruction exists for any code space of microstates with entropy strictly less than the Bekenstein-Hawking entropy, and show that this is sufficient state dependence to avoid the AMPSS typical-state firewall paradox.
high energy physics theory
We discuss a few tests of the ER=EPR proposal. We consider certain conceptual issues as well as explicit physical examples that could be experimentally realized. In particular, we discuss the role of the Bell bounds, the large N limit, as well as the consistency of certain theoretical assumptions underlying the ER=EPR proposal. As explicit tests of the ER=EPR proposal we consider limits coming from the entropy-energy relation and certain limits coming from measurements of the speed of light as well as measurements of effective weights of entangled states. We also discuss various caveats of such experimental tests of the ER=EPR proposal.
high energy physics theory
We study convergence and convergence rates for resampling schemes. Our first main result is a general consistency theorem based on the notion of negative association, which is applied to establish the almost-sure weak convergence of measures output from Kitagawa's (1996) stratified resampling method. Carpenter et al.'s (1999) systematic resampling method is similar in structure but can fail to converge depending on the order of the input samples. We introduce a new resampling algorithm based on a stochastic rounding technique of Srinivasan (2001), which shares some attractive properties of systematic resampling, but which exhibits negative association and therefore converges irrespective of the order of the input samples. We confirm a conjecture made by Kitagawa (1996) that ordering input samples by their states in $\mathbb{R}$ yields a faster rate of convergence; we establish that when particles are ordered using the Hilbert curve in $\mathbb{R}^d$, the variance of the resampling error is ${\scriptscriptstyle\mathcal{O}}(N^{-(1+1/d)})$ under mild conditions, where $N$ is the number of particles. We use these results to establish asymptotic properties of particle algorithms based on resampling schemes that differ from multinomial resampling.
statistics
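For reference, the two classical schemes compared above differ only in how the uniforms are drawn; a minimal sketch (not the paper's new stochastic-rounding algorithm):

```python
import numpy as np

def stratified_resample(weights, rng):
    # One uniform per stratum [i/N, (i+1)/N): negatively associated draws.
    N = len(weights)
    u = (np.arange(N) + rng.uniform(size=N)) / N
    return np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)

def systematic_resample(weights, rng):
    # A single shared uniform: lower variance but order-dependent.
    N = len(weights)
    u = (np.arange(N) + rng.uniform()) / N
    return np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)

rng = np.random.default_rng(0)
w = rng.uniform(size=100)
w /= w.sum()
ancestors = stratified_resample(w, rng)    # indices of surviving particles
```

The Hilbert-curve result above suggests additionally sorting the particles by state before resampling to obtain the faster $N^{-(1+1/d)}$ variance rate.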
We present a brief overview of Sommerfeld's forerunner signal, which occurs when a monochromatic plane-wave (frequency $\omega=\omega_s$) suddenly arrives, at time $t=0$ and at normal incidence, at the surface of a dispersive dielectric medium of refractive index $n(\omega)$. Deep inside the dielectric host at a distance $z_0$ from the surface, no signal arrives until $t=z_0/c$, where $c$ is the speed of light in vacuum. Immediately after this point, a weak but extremely high frequency signal is observed at $z=z_0$. This so-called Sommerfeld forerunner (or precursor) is highly chirped, meaning that its frequency, which is much greater than $\omega_s$ immediately after $t=z_0/c$, declines rapidly with the passage of time. The incident light with its characteristic frequency $\omega_s$ eventually arrives at $t\sim z_0/v_g$, where $v_g$ is the group velocity of the incident light inside the host medium. Brillouin has identified a second forerunner that occupies the interval between the end of the Sommerfeld forerunner at $t\sim n(0)z_0/c$ and the beginning of the steady signal. This second forerunner is commonly referred to as the Brillouin forerunner (or precursor). Given that the incident wave has a sudden start at $t=0$, its frequency spectrum spans the entire range of frequencies from minus infinity to infinity. Consequently, the high-frequency first forerunner cannot be considered a superoscillation, nor can the low-frequency second forerunner be regarded as a suboscillation. The goal of the present paper is to extend the Sommerfeld-Brillouin theory of precursors to bandlimited incident signals, in an effort to determine the conditions under which these precursors would continue to exist, and to answer the question as to whether or not such precursors, upon arising from a bandlimited incident signal, constitute super- or sub-oscillations.
physics
Squeezed light finds many important applications in quantum information science and quantum metrology, and has been produced in a variety of physical systems involving optical nonlinear processes. Here, we show how a nonlinear magnetostrictive interaction in a ferrimagnet in cavity magnomechanics can be used to reduce quantum noise of the electromagnetic field. We show optimal parameter regimes where a substantial and stationary squeezing of the microwave output field can be achieved. The scheme can be realized within the reach of current technology in cavity electromagnonics and magnomechanics. Our work provides a new and practicable approach for producing squeezed vacuum states of electromagnetic fields, and may find promising applications in quantum information processing and quantum metrology.
quantum physics
The spectral density operator $\hat{\rho}(\omega)=\delta(\omega-\hat{H})$ plays a central role in linear response theory as its expectation value, the dynamical response function, can be used to compute scattering cross-sections. In this work, we describe a near optimal quantum algorithm providing an approximation to the spectral density with energy resolution $\Delta$ and error $\epsilon$ using $\mathcal{O}\left(\sqrt{\log\left(1/\epsilon\right)\left(\log\left(1/\Delta\right)+\log\left(1/\epsilon\right)\right)}/\Delta\right)$ operations. This is achieved without using expensive approximations to the time-evolution operator but exploiting instead qubitization to implement an approximate Gaussian Integral Transform (GIT) of the spectral density. We also describe appropriate error metrics to assess the quality of spectral function approximations more generally.
quantum physics
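For orientation, a Gaussian Integral Transform in this sense replaces the spectral delta function by a Gaussian of width $\Delta$; in our notation (which may differ from the paper's),

```latex
\hat{\rho}_\Delta(\omega) = \frac{1}{\sqrt{2\pi}\,\Delta}\,
  \exp\!\left(-\frac{(\omega-\hat{H})^2}{2\Delta^2}\right)
  \xrightarrow{\ \Delta\to 0\ } \delta(\omega-\hat{H}),
```

so that expectation values of $\hat{\rho}_\Delta$ approximate the dynamical response function with energy resolution $\Delta$.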
The increased availability and usage of modern medical imaging induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionalities for straightforward setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software, optimized on a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library with state-of-the-art deep learning models and model utilization like training, prediction, as well as fully automatic evaluation (e.g. cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization. Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. With this experiment, we could show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline by using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
electrical engineering and systems science
We propose a hybrid quantum-classical approach to model continuous classical probability distributions using a variational quantum circuit. The architecture of the variational circuit consists of two parts: a quantum circuit employed to encode a classical random variable into a quantum state, called the quantum encoder, and a variational circuit whose parameters are optimized to mimic a target probability distribution. Samples are generated by measuring the expectation values of a set of operators chosen at the beginning of the calculation. Our quantum generator can be complemented with a classical function, such as a neural network, as part of the classical post-processing. We demonstrate the application of the quantum variational generator using a generative adversarial learning approach, where the quantum generator is trained via its interaction with a discriminator model that compares the generated samples with those coming from the real data distribution. We show that our quantum generator is able to learn target probability distributions using either a classical neural network or a variational quantum circuit as the discriminator. Our implementation takes advantage of automatic differentiation tools to perform the optimization of the variational circuits employed. The framework presented here for the design and implementation of variational quantum generators can serve as a blueprint for designing hybrid quantum-classical architectures for other machine learning tasks on near-term quantum devices.
quantum physics
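A toy single-qubit rendition of the architecture described above, with a quantum encoder followed by a variational gate; the gate choices and parameters are our own illustrative assumptions, and the adversarial training loop is omitted.

```python
import numpy as np

def ry(angle):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])

def generator(z, params):
    a, b, phi = params
    state = ry(phi) @ ry(a * z + b) @ np.array([1.0, 0.0])  # encoder, then ansatz
    return state @ Z @ state                                # sample = <Z>

rng = np.random.default_rng(0)
zs = rng.uniform(-1.0, 1.0, size=1000)                      # latent noise
samples = np.array([generator(z, (1.3, 0.2, 0.7)) for z in zs])
# In the full scheme, (a, b, phi) would be trained adversarially against a
# discriminator comparing `samples` with draws from the real data distribution.
```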
We consider quantum many-body dynamics under quantum measurements, where the so-called measurement-induced phase transitions (MIPs) occur, which depend on the frequency of the measurement. In this work, we consider the robustness of the MIP for long-range interaction that decays as $r^{-\alpha}$ with distance $r$. The effects of long-range interactions are classified into two regimes: i) for $\alpha > \alpha_c$, the MIP is observed and the universality of the exponent depends on $\alpha$; ii) for $\alpha<\alpha_c$, the MIP is absent, even for arbitrarily strong measurements. Using fermion models, we demonstrate both regimes in integrable and non-integrable cases. In addition, we identify the underlying mechanism and propose sufficient conditions to observe the MIP, that is, $\alpha > d+1/2$ for general bilinear systems and $\alpha > d+1$ for nonintegrable general systems ($d$: spatial dimension).
quantum physics
Multiply-interacting massive particles (MIMPs) are heavy ($>10^{10}$ GeV/$c^2$) dark matter particles that interact strongly with regular matter, but may have evaded detection due to the low number density required to make up the local dark matter halo. These particles could leave track-like signatures in current experiments, similar to lightly-ionizing particles. We show that previously calculated limits from the MAJORANA Demonstrator on the flux of lightly-ionizing particles can be used to exclude MIMP dark matter parameter space up to a mass of $10^{15}$ GeV/$c^2$. We also calculate limits from the standard XENON1T analysis in this high-mass regime, properly taking into account flux limitations and multi-scatter effects. Finally, we show that a dedicated MIMP analysis using the XENON1T dark matter search could probe unexplored parameter space up to masses of $10^{18}$ GeV/$c^2$.
high energy physics phenomenology