text | label
---|---|
Motivated by the desire to understand chaos in the $S$-matrix of string theory, we study tree-level scattering amplitudes involving highly excited strings. While the amplitudes for scattering of light strings have been a hallmark of string theory since its early days, scattering of excited strings has been far less studied. Recent results on black hole chaos, combined with the correspondence principle between black holes and strings, suggest that the amplitudes have a rich structure. We review the procedure by which an excited string is formed by repeatedly scattering photons off of an initial tachyon (the DDF formalism). We compute the scattering amplitude of one arbitrary excited string and any number of tachyons in bosonic string theory. At high energies and for high-mass excited states, these amplitudes are determined by a saddle point in the integration over the positions of the string vertex operators on the sphere (or the upper half plane), thus yielding a generalization of the "scattering equations". We find a compact expression for the amplitude of an excited string decaying into two tachyons, and study its properties for a generic excited string. We find the amplitude is highly erratic as a function of both the precise excited string state and of the tachyon scattering angle relative to its polarization, a sign of chaos.
|
high energy physics theory
|
There is accumulating evidence in the literature that stability of learning algorithms is a key characteristic that permits a learning algorithm to generalize. Despite various insightful results in this direction, there seems to be an overlooked dichotomy in the type of stability-based generalization bounds we have in the literature. On one hand, the literature seems to suggest that exponential generalization bounds for the estimated risk, which are optimal, can only be obtained through stringent, distribution-independent, and computationally intractable notions of stability such as uniform stability. On the other hand, it seems that weaker notions of stability such as hypothesis stability, although distribution dependent and more amenable to computation, can only yield polynomial generalization bounds for the estimated risk, which are suboptimal. In this paper, we address the gap between these two regimes of results. In particular, the main question we address here is \emph{whether it is possible to derive exponential generalization bounds for the estimated risk using a notion of stability that is computationally tractable and distribution dependent, but weaker than uniform stability}. Using recent advances in concentration inequalities, and using a notion of stability that is weaker than uniform stability but distribution dependent and amenable to computation, we derive an exponential tail bound for the concentration of the estimated risk of a hypothesis returned by a general learning rule, where the estimated risk is expressed in terms of either the resubstitution estimate (empirical error) or the deleted (or leave-one-out) estimate. As an illustration, we derive exponential tail bounds for ridge regression with unbounded responses, where we show how stability changes with the tail behavior of the response variables.
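For orientation, the two notions contrasted above can be written side by side (notation ours, with $S^{i}$ denoting the sample $S$ with its $i$-th point replaced), together with the generic shapes of the two kinds of tail bounds:

```latex
% Uniform stability (worst case, distribution independent):
\sup_{S,\,z,\,i}\ \bigl|\ell(A_S,z)-\ell(A_{S^{i}},z)\bigr|\ \le\ \beta_n
\qquad\text{vs.}\qquad
% Hypothesis stability (on average, distribution dependent):
\mathbb{E}_{S,\,z}\ \bigl|\ell(A_S,z)-\ell(A_{S^{i}},z)\bigr|\ \le\ \beta_n

% Generic shapes of the two kinds of tail bounds for the estimated risk:
\Pr\!\left[\bigl|R(A_S)-\widehat{R}_n(A_S)\bigr|>\varepsilon\right]\ \le\
\begin{cases}
c_1\, e^{-c_2\, n\varepsilon^{2}} & \text{exponential (optimal)}\\
c_3\,\beta_n/\varepsilon^{2} & \text{polynomial (suboptimal, Chebyshev-type)}
\end{cases}
```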
|
statistics
|
Quantitative characterization of the spatial structure of single photons is essential for free-space quantum communication and quantum imaging. We introduce an interferometric technique that enables the complete characterization of the two-dimensional probability amplitude of a single photon. Importantly, in contrast to methods that use a reference photon for the phase measurement, our technique relies on a single photon interfering with itself. Our setup comprises a heralded single-photon source with an unknown spatial phase and a modified Mach-Zehnder interferometer with a spatial filter in one of its arms. The spatial filter removes the unknown spatial phase, and the filtered beam interferes with the unaltered beam passing through the other arm of the interferometer. We experimentally confirm the feasibility of our technique by reconstructing the spatial phase of heralded single photons using the lowest-order interference fringes. This technique can be applied to the characterization of arbitrary pure spatial states of single photons.
|
quantum physics
|
Macroscopic dark matter -- "macros" -- refers to a broad class of alternative candidates to particle dark matter with still unprobed regions of parameter space. These candidates would transfer energy primarily through elastic scattering with approximately their geometric cross-section. For sufficiently large cross-sections, the linear energy deposition could produce observable signals if a macro were to pass through compact objects such as white dwarfs or neutron stars, in the form of thermonuclear runaway leading to a type Ia supernova or a superburst, respectively. We update the constraints from white dwarfs. These are weaker than previously inferred in important respects, because of a more careful treatment of the passage of a macro through the white dwarf and greater conservatism regarding the size of the region that must be heated to initiate runaway. On the other hand, we place more stringent constraints on macros at low cross-sections, using new data from the Montreal White Dwarf Database. New constraints are inferred from the low-mass X-ray binary 4U 1820-30, in which more than a decade passed between successive superbursts. Updated microlensing constraints are also reported.
|
astrophysics
|
We derive the photometric, kinematic, and abundance characteristics of 18 star-forming MaNGA galaxies with fairly regular velocity fields and surface brightness distributions and with a large offset between the measured position angles of the major kinematic and photometric axes, dPA > 20 degrees. The aim is to examine whether there is any other distinctive characteristic common to these galaxies. We found morphological signs of interaction in some (11 out of 18) but not in all galaxies. The observed velocity fields show a large variety; the maps of the isovelocities vary from an hourglass-like appearance to a set of straight lines. The position angles of the major kinematic axes of the stellar and gas rotations are close to each other. The values of the central oxygen abundance, radial abundance gradient, and star formation rate are distributed within the intervals defined by galaxies of similar mass with small (or no) dPA. Thus, we do not find any specific characteristic common to all galaxies with large dPA. Instead, the properties of these galaxies are similar to those of galaxies with small (or no) dPA. This suggests that either the reason responsible for the large dPA does not influence other characteristics, or the galaxies with large dPA do not share a common origin; they can, instead, originate through different channels.
|
astrophysics
|
We emphasize the importance of asking the right question when interpreting the decisions of a learning model. We discuss a natural extension of the theoretical machinery from Janzing et al. (2020), which answers the question "Why did my model predict a person has cancer?", for answering a more involved question, "What caused my model to predict a person has cancer?" While the former quantifies the direct effects of variables on the model, the latter also accounts for indirect effects, thereby providing meaningful insights wherever human beings can reason in terms of cause and effect. We propose three broad categories of interpretations: observational, model-specific and causal, each of which is significant in its own right. Furthermore, this paper quantifies feature relevance by weaving different natures of interpretations together with different measures as characteristic functions for Shapley symmetrization. Besides the widely used expected value of the model, we also discuss measures of statistical uncertainty and dispersion as informative candidates, and their merits in generating explanations for each data point, some of which are used in this context for the first time. These measures are not only useful for studying the influence of variables on the model output, but also on the predictive performance of the model, and for that we propose relevant characteristic functions that are also used for the first time.
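As a minimal, hypothetical illustration of swapping the characteristic function in a Shapley computation, the sketch below computes exact Shapley values for a toy model, with the coalition value given either by the usual conditional mean of the model output or by its variance as a dispersion measure; all names and the toy model are ours, not the paper's.

```python
import itertools
import math
import numpy as np

def shapley(features, value):
    """Exact Shapley values for a characteristic function `value`,
    which maps a coalition (tuple of feature indices) to a real number."""
    n = len(features)
    phi = np.zeros(n)
    for i in range(n):
        others = [f for f in features if f != features[i]]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (value(S + (features[i],)) - value(S))
    return phi

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
model = lambda X: X[:, 0] + 2.0 * X[:, 1] ** 2          # toy model

def make_value(stat):
    """Characteristic function: fix coalition features at a reference point x*,
    keep the rest at their data values, and summarize model outputs with `stat`
    (np.mean for the usual expected value, np.var for a dispersion measure)."""
    x_star = X[0]
    def value(S):
        Xs = X.copy()
        for j in S:
            Xs[:, j] = x_star[j]
        return stat(model(Xs))
    return value

print("mean-based:    ", shapley((0, 1, 2), make_value(np.mean)))
print("variance-based:", shapley((0, 1, 2), make_value(np.var)))
```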
|
statistics
|
We study the r-mode instability of rotating compact objects composed of asymmetric, self-interacting fermionic dark matter (dark stars). It is argued that the instability limits the angular frequency of the stars to less than half of the Keplerian frequency. This may constrain such stars as alternatives to fast-spinning pulsars or rapidly rotating Kerr black holes.
|
astrophysics
|
Absorption covers the physical processes which convert intense photon flux into energetic particles when a high-power laser illuminates optically thick matter. It underpins important petawatt-scale applications today, e.g., medical-quality proton beam production. However, the development of ultra-high-field applications has been hindered, since no study so far has described absorption throughout the entire transition from the classical to the quantum electrodynamical (QED) regime of plasma physics. Here we present a model of absorption that holds over an unprecedented six orders of magnitude in optical intensity and lays the groundwork for QED applications of laser-driven particle beams. We demonstrate 58% efficient $\gamma$-ray production at $1.8\times 10^{25}~\mathrm{W~cm^{-2}}$ and the creation of an anti-matter source achieving $4\times 10^{24}\ \mathrm{positrons}\ \mathrm{cm^{-3}}$, $10^{6}\times$ denser than that of any known photonic scheme. These results will find applications in scaled laboratory probes of black hole and pulsar winds, $\gamma$-ray radiography for materials science and homeland security, and fundamental nuclear physics.
|
physics
|
According to the black hole membrane paradigm, the black hole event horizon behaves like a 2+1 dimensional fluid. The fluid has nonzero momentum density but zero velocity. As a result, it does not respond to tidal forces in the usual way. In this note, we point out that this unusual behavior can be traced back to an emergent, near-horizon Carroll symmetry (the Carroll group is the $c\rightarrow 0$ limit of the Poincar\'e group). For Schwarzschild black holes in $d=4$ general relativity, we relate the vanishing of the black hole fluid's velocity to the vanishing of the black hole's Love numbers. This suggests that near-horizon Carroll symmetry may have a role to play in explaining black hole Love numbers.
|
high energy physics theory
|
We demonstrate the fabrication of diffraction-limited dielectric metasurface lenses for the near-infrared (NIR) using standard industrial high-throughput silicon processing techniques: UV Nano Imprint Lithography (UV-NIL) combined with continuous Reactive Ion Etching (RIE) and pulsed Bosch Deep Reactive Ion Etching (DRIE). As the research field of metasurfaces moves towards applications, these techniques are relevant as potential replacements for commonly used, cost-intensive fabrication methods utilizing Electron Beam Lithography. We show that the washboard-type sidewall surface roughness arising from the Bosch DRIE process can be compensated for in the design of the metasurface, without deteriorating lens quality. Particular attention is given to fabrication challenges that must be overcome towards high-throughput production of relevance to commercial applications. Lens efficiencies are measured to be 30% and 17% at wavelengths $\lambda = 1.55\,\mu$m and $\lambda = 1.31\,\mu$m, respectively. A number of routes towards process optimization are proposed in relation to the encountered challenges.
|
physics
|
In this paper, we performed a detailed theoretical study of the structural, elastic and electronic properties of the two germanides LuAuGe and ScAuGe by means of first-principles calculations using the pseudopotential plane-wave method within the generalized gradient approximation. The crystal lattice parameters and the internal coordinates are in good agreement with the existing experimental and theoretical reports, which supports the reliability of the applied theoretical method. The effect of hydrostatic pressure on the structural parameters is shown. The monocrystalline elastic constants were calculated using the stress-strain technique. The calculated elastic constants of the MAuGe (M = Lu, Sc) compounds meet the mechanical stability criteria for hexagonal crystals, and these constants were used to analyze the elastic anisotropy of the MAuGe compounds through three different indices. Polycrystalline isotropic elastic moduli, namely the bulk modulus, shear modulus, Young's modulus, Poisson's ratio, and related properties, are also estimated using the Voigt-Reuss-Hill approximations. Finally, we studied the electronic properties of the considered compounds by calculating their band structures, densities of states and electron density distributions.
|
condensed matter
|
The rate of stimulated inverse bremsstrahlung is calculated for low-electron-density stellar plasmas, and the condition under which the plasma becomes transparent is presented. The stability of low-density stellar plasma is analyzed for a star with spherical symmetry in equilibrium between the attractive gravitational forces and the repulsive pressure forces of an ideal electron gas, where the analysis is developed using Boltzmann statistics. Fundamental and surprising results are obtained, by which the radius and the total mass of the star are inversely proportional to the square root of the electron density at the star's center. The total gravitational forces of a star with very low electron and mass densities are nevertheless very large, owing to its extremely large volume. The absorption and emission of radiation for extremely low-density star plasmas vanish over the entire electromagnetic spectrum. The present results are supported by numerical calculations. Similar effects are predicted for low-density stellar plasmas with different structures, and the properties of such plasmas might show certain similarities with those of dark matter.
|
astrophysics
|
The observation of a long-range ridge-like structure in the near-side region of the two-particle $\Delta\eta-\Delta\phi$ correlations measured by LHC experiments in high-multiplicity p$-$p collisions pointed to the presence of collective effects similar to those observed in p$-$A (proton-nucleus) and A$-$A (nucleus-nucleus) collisions. The two-particle correlation between charged particles in $\Delta\eta-\Delta\phi$ for p$-$p collisions at $\sqrt{s}$ = 7 TeV and 13 TeV is studied using the Pythia 8 event generator within the framework of final-state partonic color reconnection effects as well as the microscopic rope hadronization model. Rope hadronization relies on the formation of ropes due to the overlapping of strings in high-multiplicity events, followed by string shoving. A near-side ridge-like structure qualitatively similar to the ridge observed in data appears in high-multiplicity events when the mechanism of rope hadronization (with shoving) is enabled.
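Independent of any generator settings, the observable itself can be sketched: same-event pairs are histogrammed in $(\Delta\eta, \Delta\phi)$ and divided by mixed-event pairs to remove trivial acceptance effects. The toy input below is random (so no ridge appears); it only illustrates the bookkeeping, not the Pythia analysis.

```python
import numpy as np

def delta_phi(p1, p2):
    """Wrap azimuthal differences into [-pi/2, 3pi/2), the usual ridge convention."""
    d = (p1 - p2) % (2 * np.pi)
    return np.where(d >= 1.5 * np.pi, d - 2 * np.pi, d)

def same_event_hist(eta, phi, bins):
    """Histogram all distinct same-event pairs in (deta, dphi)."""
    i, j = np.triu_indices(len(eta), k=1)
    h, _, _ = np.histogram2d(eta[i] - eta[j], delta_phi(phi[i], phi[j]), bins=bins)
    return h

def mixed_event_hist(eta1, phi1, eta2, phi2, bins):
    """Histogram pairs built across two different events (acceptance reference)."""
    deta = (eta1[:, None] - eta2[None, :]).ravel()
    dphi = delta_phi(phi1[:, None], phi2[None, :]).ravel()
    h, _, _ = np.histogram2d(deta, dphi, bins=bins)
    return h

bins = (np.linspace(-4, 4, 33), np.linspace(-0.5 * np.pi, 1.5 * np.pi, 33))
rng = np.random.default_rng(1)
events = [(rng.normal(0, 1.5, 80), rng.uniform(0, 2 * np.pi, 80)) for _ in range(100)]

S = sum(same_event_hist(e, p, bins) for e, p in events)
B = sum(mixed_event_hist(*events[k], *events[k + 1], bins)
        for k in range(len(events) - 1))
C = S / np.maximum(B, 1.0)   # two-particle correlation, up to normalization
```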
|
high energy physics phenomenology
|
We put forth a unifying formalism for the description of the thermodynamics of continuously monitored systems, where measurements are only performed on the environment connected to a system. We show, in particular, that the conditional and unconditional entropy production, which quantify the degree of irreversibility of the open system's dynamics, are related to each other by the Holevo quantity. This, in turn, can be further split into an information gain rate and loss rate, which provide conditions for the existence of informational steady-states (ISSs), i.e. stationary states of a conditional dynamics that are maintained owing to the unbroken acquisition of information. We illustrate the applicability of our framework through several examples.
|
quantum physics
|
We present a detailed study of a clamped ribbon-like filament under a compressive active force using Brownian dynamics simulations. We show that a clamped ribbon-like filament can display beating as well as rotational motion under the compressive force. The nature of the oscillation is governed by the torsional rigidity of the filament, while the frequency of oscillation is almost independent of it. Beating of the filament gives a butterfly-shaped trajectory of the free-end monomer, whereas rotational motion yields a circular trajectory on a plane. The binormal correlation and a principal component analysis reveal the butterfly, elliptical, and circular trajectories of the free-end monomer. We present a phase diagram for the different kinds of motion in the parameter regime of compressive force and torsional rigidity.
|
condensed matter
|
One of the most significant revelations from Kepler is that roughly one-third of Sun-like stars host planets which orbit their stars within 100 days and are between the size of Earth and Neptune. How do these super-Earth and sub-Neptune planets form, what are they made of, and do they represent a continuous population or naturally divide into separate groups? Measuring their masses and thus bulk densities can help address these questions of their origin and composition. To that end, we began the Magellan-TESS Survey (MTS), which uses Magellan II/PFS to obtain radial velocity (RV) masses of 30 transiting exoplanets discovered by TESS and develops an analysis framework that connects observed planet distributions to underlying populations. In the past, RV measurements of small planets have been challenging to obtain due to the faintness and low RV semi-amplitudes of most Kepler systems, and challenging to interpret due to the potential biases in the existing ensemble of small planet masses from non-algorithmic decisions for target selection and observation plans. The MTS attempts to minimize these biases by focusing on bright TESS targets and employing a quantitative selection function and multi-year observing strategy. In this paper, we (1) describe the motivation and survey strategy behind the MTS, (2) present our first catalog of planet mass and density constraints for 25 TESS Objects of Interest (TOIs; 20 in our population analysis sample, five that are members of the same systems), and (3) employ a hierarchical Bayesian model to produce preliminary constraints on the mass-radius (M-R) relation. We find qualitative agreement with prior mass-radius relations but some quantitative differences (abridged). The results of this work can inform more detailed studies of individual systems and offer a framework that can be applied to future RV surveys with the goal of population inferences.
|
astrophysics
|
In this work we consider two complex scalar fields distinguished by their masses coupled to constant background electric and magnetic fields in the $(3+1)$-dimensional Minkowski spacetime and subsequently investigate a few measures quantifying the quantum correlations between the created particle-antiparticle Schwinger pairs. Since the background magnetic field itself cannot cause the decay of the Minkowski vacuum, our chief motivation here is to investigate the interplay between the effects due to the electric and magnetic fields. We start by computing the entanglement entropy for the vacuum state of a single scalar field. Second, we consider some maximally entangled states for the two-scalar field system and compute the logarithmic negativity and the mutual information. Qualitative differences of these results pertaining to the charge content of the states are emphasised. Based upon these results, we suggest some possible effects of a background magnetic field on the degradation of entanglement between states in an accelerated frame, for charged quantum fields.
|
high energy physics theory
|
In this article, a new method is discussed for the calibration and monitoring of photomultiplier tubes (PMTs). This method is based on a Discrete Fourier Transform (DFT); it is fast and general, so it can be used in cases where an analytical model of the PMT response is not available. The DFT approach is employed for the absolute calibration of the Hamamatsu R1408 photomultiplier tube. It should be noted that the R1408 PMTs do not show a sharp peak in the single-photoelectron distribution, and gain determination via conventional methods is often unattainable. Here, we show that the DFT technique, coupled with a gamma-function model for the single-photoelectron response, produces rigorous calibration results and can be used for gain determination with good accuracy.
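The core idea behind a DFT-based calibration can be sketched as follows: for a Poisson-distributed number of photoelectrons, the Fourier transform of the measured charge spectrum factorizes into the pedestal transform times $\exp(\mu(\hat{s}_1 - 1))$, so the single-photoelectron response can be recovered with a logarithm in frequency space. This is a minimal sketch of that inversion, assuming the mean illumination `mu` and normalized spectra are given; a real analysis must treat noise and the complex-log branch with care.

```python
import numpy as np

def deconvolve_spe(spectrum, pedestal, mu):
    """Recover the single-PE charge response s1 from a measured spectrum.
    Compound-Poisson model in Fourier space:
        S(w) = P(w) * exp(mu * (s1_hat(w) - 1)),
    so  s1_hat(w) = 1 + log(S(w) / P(w)) / mu.
    `spectrum` and `pedestal` are histograms normalized to unit area."""
    S = np.fft.rfft(spectrum)
    P = np.fft.rfft(pedestal)
    s1_hat = 1.0 + np.log(S / P) / mu    # complex log: branch handling matters
    s1 = np.fft.irfft(s1_hat, n=len(spectrum))
    return np.clip(s1, 0.0, None)        # truncate small negative ringing
```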
|
physics
|
The time evolution of an oscillator coupled to an infinite string with a discontinuous mass density is investigated. It is shown that the equation of motion of the oscillator leads to a nonlinear characteristic equation due to the frequency-dependent nature of the point impedance of the string. Perturbation theory is then applied to solve the characteristic equation to first order in the perturbation parameter. Finally, the response of the oscillator to an impulse is obtained by means of the Green's function method.
|
physics
|
The X-ray pulsar GRO J1744-28 is a unique source which shows both pulsations and type-II X-ray bursts, allowing studies of the interaction of the accretion disk with the magnetosphere at huge mass accretion rates exceeding $10^{19}$ g s$^{-1}$ during its super-Eddington outbursts. The magnetic field strength in the source, $B\approx 5\times 10^{11}$ G, is known from the cyclotron absorption feature discovered in the energy spectrum around 4.5 keV. Here, we explore the flux variability of the source in the context of the interaction of its magnetosphere with the radiation-pressure-dominated accretion disk. In particular, we present the results of an analysis of noise power density spectra (PDS) using the observations of the source in 1996-1997 by RXTE. Accreting compact objects commonly exhibit a broken power-law shape of the PDS, with a break corresponding to the Keplerian orbital frequency of matter at the innermost disk radius. The observed frequency of the break can thus be used to estimate the size of the magnetosphere. We found, however, that the observed PDS of GRO J1744-28 differs dramatically from the canonical shape. Furthermore, the observed break frequency appears to be significantly higher than what is expected based on the magnetic field estimated from the cyclotron line energy. We argue that these observational facts can be attributed to the existence of a radiation-pressure-dominated region in the accretion disk at luminosities above $\sim$2$\times 10^{37}$ erg s$^{-1}$. We discuss a qualitative model for the PDS formation in such disks, and show that its predictions are consistent with our observational findings. The presence of the radiation-pressure-dominated region can also explain the observed weak luminosity dependence of the inner radius, and we argue that the small inner radius can be explained by a quadrupole component dominating the magnetic field of the neutron star.
|
astrophysics
|
We study the decay processes $\bar{B}^0 \to J/\psi \bar{K}^{*0} K^0$ and $\bar{B}^0 \to J/\psi f_1(1285)$ to analyse the $f_1(1285)$ resonance. Within the chiral unitary approach, where the $f_1(1285)$ resonance is dynamically generated from the $K^*\bar{K}-c.c.$ interaction, we find that the $\bar{K}^{*0} K^0$ invariant mass distribution has a clear broad peak, which can be understood as the signal of the $f_1(1285)$. Finally, we obtain a theoretical result for the ratio $R_t=\Gamma_{\bar{B}^0 \to J/\psi \bar{K}^{*0} K^0}/\Gamma_{\bar{B}^0 \to J/\psi f_1(1285)}$, which we expect to be compared with experimental data.
|
high energy physics phenomenology
|
In this review we discuss intriguing properties of apparently classical optical fields, which go beyond the purely classical context and allow us to speak about quantum characteristics of such fields and about their applications in quantum technologies. We briefly define the genuinely quantum concepts of entanglement and steering. We then move to the borderline between the classical and quantum worlds, introducing quantum discord, the more general concept of quantum coherence, and finally the controversial notion of classical entanglement. To unveil the quantum aspects of often classically perceived systems, we focus in more detail on quantum discordant correlations between light modes and on the nonseparability properties of optical vector fields, which lead to entanglement between different degrees of freedom of a single beam. To illustrate the aptitude of different types of correlated systems to act as quantum or quantum-like resources, we discuss entanglement activation from discord, high-precision measurements with classical entanglement, and quantum information tasks using intra-system correlations. The common themes behind the versatile quantum properties of seemingly classical light are coherence, polarization, and inter- and intra-mode quantum correlations.
|
quantum physics
|
A mathematical analysis of local and nonlocal phase-field models of tumor growth is presented that includes time-dependent Darcy-Forchheimer-Brinkman models of convective velocity fields and models of long-range cell interactions. A complete existence analysis is provided. In addition, a parameter-sensitivity analysis is described that quantifies the sensitivity of key quantities of interest to changes in parameter values. Two sensitivity analyses are examined; one employing statistical variances of model outputs and another employing the notion of active subspaces based on existing observational data. Remarkably, the two approaches yield very similar conclusions on sensitivity for certain quantities of interest. The work concludes with the presentation of numerical approximations of solutions of the governing equations and results of numerical experiments on tumor growth produced using finite element discretizations of the full tumor model for representative cases.
|
mathematics
|
We present a method for the in-flight relative flux self-calibration of a spectro-photometer instrument, general enough to be applied to any upcoming galaxy survey on a satellite. The instrument response function, which accounts for a smooth continuous variation due to the telescope optics on top of a discontinuous effect due to the segmentation of the detector, is inferred with a $\chi^2$ statistic. The method provides unbiased inference of the source count rates and of the reconstructed relative response function in the limit of high count rates. We simulate a simplified sequence of observations with realistic distributions of sources and count rates, with the purpose of quantifying the relative importance of the number of sources and exposures for the correct reconstruction of the instrument response. We present a validation of the method, with the definition of figures of merit to quantify the expected performance in plausible scenarios.
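Schematically, and in our own notation rather than the paper's, the inference problem can be written as a joint minimization over the unknown source count rates $s_i$ and the response $r(x)$:

```latex
\chi^2\bigl(\{s_i\},\,r\bigr)\;=\;\sum_{i,\,e}
\frac{\bigl[\,c_{ie}\;-\;r(x_{ie})\,s_i\,t_e\,\bigr]^2}{\sigma_{ie}^{2}}
```

where $c_{ie}$ is the measured count of source $i$ in exposure $e$, $x_{ie}$ its position on the detector, $t_e$ the exposure time, and $\sigma_{ie}$ the count uncertainty; since only the relative calibration is determined, the overall normalization of $r$ must be fixed by convention.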
|
astrophysics
|
The stellar initial mass function (IMF) is fundamental for many areas of astrophysics, but its origin remains poorly understood. It may be inherited from the core mass function (CMF) or arise as a result of more chaotic, competitive accretion. Dense, gravitationally bound cores are seen in molecular clouds and some observations have suggested that the CMF is similar in shape to the IMF, though translated to higher masses by a factor of $\sim3$. Here we measure the CMF in 28 dense clumps within 3.5 kpc that are likely to be central regions of massive protoclusters, observed via $1.3\:{\rm{mm}}$ dust continuum emission by the ALMAGAL project. We identify 222 cores using the dendrogram algorithm with masses ranging from 0.04 to $252\:M_{\odot}$. We apply completeness corrections for flux and number recovery, estimated from core insertion and recovery experiments. At higher masses, the final derived CMF is well described by a single power law of the form $dN/d\:{\textrm{log}}\:M\propto\:M^{-\alpha}$ with $\alpha\simeq0.94\pm0.08$. However, we find evidence of a break in this power-law behavior between $\sim5$ and $15\:M_{\odot}$, which is, to our knowledge, the first time such a break has been found in distant ($\gtrsim 1$~kpc) regions by ALMA. We compare this massive protocluster CMF with those derived using the same methods in the G286 protocluster and a sample of Infrared Dark Clouds. The massive protocluster CMF is significantly different, i.e., containing more massive cores, which is a potential indication of the role of environment on the CMF and IMF.
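For intuition on the quoted slope convention: for $dN/d\log M\propto M^{-\alpha}$ above a completeness limit, the maximum-likelihood estimate of $\alpha$ has a closed form (a Hill-type estimator). The toy sketch below, which ignores the paper's completeness corrections, draws from a pure power law and recovers the input slope.

```python
import numpy as np

def fit_cmf_slope(masses, m_min):
    """Hill-type MLE for dN/dlogM ~ M^-alpha above m_min
    (equivalently dN/dM ~ M^-(alpha + 1))."""
    m = np.asarray(masses)
    m = m[m >= m_min]
    alpha = len(m) / np.sum(np.log(m / m_min))
    err = alpha / np.sqrt(len(m))      # asymptotic MLE uncertainty
    return alpha, err

# Toy check: sample a pure power law with alpha = 0.94 via the inverse CDF,
# m = m_min * (1 - U)^(-1/alpha), and recover the slope.
rng = np.random.default_rng(2)
alpha_true, m_min = 0.94, 5.0
m = m_min * (1 - rng.uniform(size=4000)) ** (-1 / alpha_true)
print(fit_cmf_slope(m, m_min))         # ~ (0.94, 0.015)
```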
|
astrophysics
|
The inclusive cross sections of W$^+$, W$^-$, and Z boson production from 34 different measurements performed in proton-(anti)proton collisions at center-of-mass energies $\sqrt{s}$ = 1.8--13 TeV are compared to perturbative QCD calculations at next-to-next-to-leading-order (NNLO) accuracy with four sets of parton distribution functions (CT14, HERAPDF2.0, MMHT14, and NNPDF3.0 PDFs) and varying values of the strong coupling constant at the Z mass pole, $\alpha_s(m_Z)$. The data-theory agreement is good within the experimental and theoretical uncertainties, with the CT14 and MMHT14 parton densities providing the most consistent overall description of all cross section data. A value of $\alpha_s(m_Z) = 0.1188^{+0.0019}_{-0.0013}$ is extracted from a combined fit of the 28 experimental LHC measurements to the corresponding NNLO theoretical predictions obtained with the MMHT14 PDF set, which provides the most robust and stable QCD coupling extraction of this analysis.
|
high energy physics phenomenology
|
Realizing solution-processed quantum dot (QD) lasers is one of the holy grails of nanoscience. The reason that QD lasers are not yet commercialized is that the lasing threshold is too high: one needs more than one exciton per QD, which is hard to achieve due to fast non-radiative Auger recombination. The optical gain threshold can be reduced by electronic doping of the QDs, which lowers the absorption near the band edge, such that stimulated emission (SE) can easily outcompete absorption. Here, we show that by electrochemically doping films of CdSe/CdS/ZnS QDs we achieve quantitative control over the gain threshold. We obtain stable and reversible doping with up to two electrons per QD. We quantify the gain threshold and the charge carrier dynamics using ultrafast spectroelectrochemistry and achieve quantitative agreement between experiments and theory. Over a range of wavelengths with appreciable gain coefficients, the gain thresholds reach record-low values of ~10^-5 excitons per QD. These results demonstrate an unprecedented level of control over the gain threshold in doped QD solids, paving the way for the creation of cheap, solution-processable, low-threshold QD lasers.
|
condensed matter
|
Model averaging is an alternative to model selection for dealing with model uncertainty, and it is widely used and very valuable. However, most existing model averaging methods are based on the least squares loss function, which can be very sensitive to the presence of outliers in the data. In this paper, we propose an outlier-robust model averaging approach based on a Mallows-type criterion. The key idea is to develop weight choice criteria by minimising an estimator of the expected prediction error for loss functions that are convex with a unique minimum and twice differentiable in expectation, rather than the expected squared error. Robust loss functions, such as the least absolute deviation and Huber's function, reduce the effects of large residuals and poor samples. A simulation study and real data analysis are conducted to demonstrate the finite-sample performance of our estimators and compare them with other model selection and averaging methods.
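A minimal sketch of the weight-choice step, as a toy construction of our own rather than the paper's exact criterion: candidate-model predictions are combined with weights on the probability simplex, chosen to minimize a Huber loss of the averaged residuals.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    """Huber's function: quadratic for small residuals, linear for large ones."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def average_weights(preds, y, delta=1.0):
    """preds: (n_models, n_samples) fitted values from the candidate models.
    Returns weights on the simplex minimizing the Huber loss of the average."""
    M = preds.shape[0]
    obj = lambda w: huber(y - w @ preds, delta).sum()
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(M, 1.0 / M), bounds=[(0, 1)] * M,
                   constraints=cons, method='SLSQP')
    return res.x
```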
|
statistics
|
AI-based data synthesis has seen rapid progress over the last several years and is increasingly recognized for its promise to enable privacy-respecting, high-fidelity data sharing. However, adequately evaluating the quality of generated synthetic datasets is still an open challenge. We introduce and demonstrate a holdout-based empirical assessment framework for quantifying the fidelity as well as the privacy risk of synthetic data solutions for mixed-type tabular data. Measuring fidelity is based on statistical distances of lower-dimensional marginal distributions, which provide a model-free and easy-to-communicate empirical metric for the representativeness of a synthetic dataset. Privacy risk is assessed by calculating the individual-level distances to the closest record with respect to the training data. By showing that the synthetic samples are just as close to the training as to the holdout data, we yield strong evidence that the synthesizer indeed learned to generalize patterns and is independent of individual training records. We demonstrate the presented framework for seven distinct synthetic data solutions across four mixed-type datasets and compare these to more traditional statistical disclosure techniques. The results highlight the need to systematically assess the fidelity just as well as the privacy of this emerging class of synthetic data generators.
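The privacy side of such an assessment can be sketched directly: compute each synthetic record's distance to its closest training record and compare against distances to a holdout set of equal size. Function names and the summary statistic below are our illustrative choices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dcr(synthetic, reference):
    """Distance of each synthetic record to its closest reference record."""
    nn = NearestNeighbors(n_neighbors=1).fit(reference)
    d, _ = nn.kneighbors(synthetic)
    return d.ravel()

def privacy_check(synthetic, train, holdout):
    """If synthetic records are systematically closer to the training data
    than to the holdout, the synthesizer may be leaking individual records."""
    d_train = dcr(synthetic, train)
    d_hold = dcr(synthetic, holdout)
    return np.mean(d_train < d_hold)   # ~0.5 is the safe, chance level
```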
|
statistics
|
Low-temperature specific heat (SH) is measured on 1111-type CaFe$_{0.88}$Co$_{0.12}$AsF single crystals under different magnetic fields. A clear SH jump with height $\Delta C/T|_{T_c}$ = 10.4 mJ/mol K$^2$ was observed at the superconducting transition temperature $T_c$. The electronic SH coefficient $\Delta\gamma(B)$ increases linearly with the field below 5 T, and a kink is observed around 5 T, indicating a multi-gap feature in the present system. Such a sign is also reflected in the $T_c$-$B$ data. A detailed analysis shows that this behavior can be interpreted in terms of a two-gap scenario with the gap ratio $\Delta_L/\Delta_S = 2.8$-$4.5$.
|
condensed matter
|
This paper describes the integration of weighted delay-and-sum beamforming with speech source localization using image processing and robot head visual servoing for source tracking. We take into consideration the fact that the directivity gain provided by the beamforming depends on the angular distance between its main lobe and the main response axis of the microphone array. A visual servoing scheme is used to reduce the angular distance between the center of the video frame of a robot camera and a target object. Additionally, the beamforming strategy presented combines two information sources: the direction of the target object obtained with image processing and the audio signals provided by a microphone array. These sources of information were integrated by making use of a weighted delay-and-sum beamforming method. Experiments were carried out with a real mobile robotic testbed built with a PR2 robot. Static and dynamic robot-head configurations, as well as the use of one and two external noise sources, were considered. The results presented here show that the appropriate integration of visual source tracking with visual servoing and a beamforming method can lead to a reduction in word error rate (WER) as high as 34% compared to beamforming alone.
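The delay-and-sum core can be sketched for a far-field source and a linear microphone array as follows; the geometry, sample-rounded delays, and weights are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def delay_and_sum(signals, mic_x, theta, fs, weights=None, c=343.0):
    """signals: (n_mics, n_samples); mic_x: mic positions on a line [m];
    theta: steering angle [rad] from broadside; fs: sample rate [Hz]."""
    n_mics, n_samp = signals.shape
    weights = np.ones(n_mics) / n_mics if weights is None else weights
    # Far-field plane wave: per-mic delay is x*sin(theta)/c, rounded to samples.
    delays = np.round(mic_x * np.sin(theta) / c * fs).astype(int)
    delays -= delays.min()                     # keep all shifts non-negative
    out = np.zeros(n_samp)
    for k in range(n_mics):
        # Advance each channel to align the wavefronts, then weight and sum.
        out += weights[k] * np.roll(signals[k], -delays[k])
    return out
```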
|
electrical engineering and systems science
|
This article is a survey of our recent work on the connections between Koba-Nielsen amplitudes and local zeta functions (in the sense of Gel'fand, Weil, Igusa, Sato, Bernstein, Denef, Loeser, etc.). Our research program is motivated by the fact that p-adic strings seem to be related in some interesting ways to ordinary strings, for instance through the adelic relations and through the limit when p tends to 1. Gerasimov and Shatashvili studied the limit p tends to 1 of the p-adic effective action introduced by Brekke, Freund, Olson and Witten. They showed that this limit gives rise to a boundary string field theory, which was previously proposed by Witten in the context of background independent string theory. Explicit computations in the cases of 4 and 5 points show that the Feynman amplitudes at tree level of the Gerasimov-Shatashvili Lagrangian are related to the limit p tends to 1 of the p-adic Koba-Nielsen amplitudes. At a mathematical level, this phenomenon is deeply connected with the topological zeta functions introduced by Denef and Loeser. A Koba-Nielsen amplitude is just a new type of local zeta function, which can be studied by using embedded resolution of singularities. In this way, one shows the existence of meromorphic continuations for the Koba-Nielsen amplitudes as functions of the kinematic parameters. The Koba-Nielsen local zeta functions are algebraic-geometric integrals that can be defined over arbitrary local fields (for instance R, C, Q_{p}, F_{p}((T))), and it is completely natural to expect connections between these objects. The limit p tends to one of the Koba-Nielsen amplitudes gives rise to new amplitudes, which we have called Denef-Loeser amplitudes. Throughout the article, we have emphasized the explicit calculations in the cases of 4 and 5 points.
|
high energy physics theory
|
Searching for topological similarity between a pair of shapes or datasets is an important problem in data analysis and visualization. The problem of computing similarity measures using scalar topology has been studied extensively and proven useful in shape and data matching. Even though multi-field (or multivariate) topology-based techniques reveal richer topological features, research on computing similarity measures using multi-field topology is still in its infancy. In the current paper, we propose a novel similarity measure between two piecewise-linear multi-fields based on their multi-resolution Reeb spaces -- a newly developed data structure that captures the topology of a multi-field. Overall, our method consists of two steps: (i) building a multi-resolution Reeb space corresponding to each of the multi-fields and (ii) proposing a similarity measure for a list of matching pairs (of nodes), obtained by comparing the multi-resolution Reeb spaces. We demonstrate an application of the proposed similarity measure by detecting the nuclear scission point in time-varying multi-field data from computational physics.
|
computer science
|
Machine Learning (ML) has become a ubiquitous tool for predicting and classifying data and has found application in several problem domains, including Software Development (SD). This paper reviews the literature between 2000 and 2019 on the use of learning models for programming effort estimation, risk prediction, and defect identification and detection. This work is meant to serve as a starting point for practitioners willing to add ML to their software development toolbox. It categorises recent literature and identifies trends and limitations. The survey shows that some authors agree that industrial applications of ML for SD have not been as popular as the reported results would suggest. The conducted investigation shows that, despite promising findings for a variety of SD tasks, most of the studies yield vague results, in part due to the lack of comprehensive datasets in this problem domain. The paper ends with concluding remarks and suggestions for future research.
|
computer science
|
The behavior of the Wigner function for accelerated and non-accelerated Greenberger-Horne-Zeilinger (GHZ) states is discussed. For the non-accelerated GHZ state, the minimum/maximum peaks of the Wigner function depend on the distribution's angles, where they are displayed regularly at fixed values of those angles. We show that, for the accelerated GHZ state, the minimum bounds increase as the acceleration increases, with a rate that depends on the number of accelerated qubits. Due to the positivity/negativity behavior of the Wigner function, one can use it as an indicator of the presence of classical/quantum correlations, respectively. The maximum bounds of the quantum and classical correlations depend on the purity of the initial GHZ state. The classical correlation depicted by the behavior of the Wigner function is independent of the acceleration but depends on the degree of purity.
|
quantum physics
|
Facial Expression Recognition (FER) in the wild is extremely challenging due to occlusions, variant head poses, face deformation and motion blur under unconstrained conditions. Although substantial progress has been made in automatic FER in the past few decades, previous studies were mainly designed for lab-controlled FER. Real-world occlusions, variant head poses and other issues definitely increase the difficulty of FER on account of these information-deficient regions and complex backgrounds. Different from previous pure CNN-based methods, we argue that it is feasible and practical to translate facial images into sequences of visual words and perform expression recognition from a global perspective. Therefore, we propose Convolutional Visual Transformers to tackle FER in the wild in two main steps. First, we propose an attentional selective fusion (ASF) for leveraging the feature maps generated by two-branch CNNs. The ASF captures discriminative information by fusing multiple features with global-local attention. The fused feature maps are then flattened and projected into sequences of visual words. Second, inspired by the success of Transformers in natural language processing, we propose to model relationships between these visual words with global self-attention. The proposed method is evaluated on three public in-the-wild facial expression datasets (RAF-DB, FERPlus and AffectNet). Under the same settings, extensive experiments demonstrate that our method shows superior performance over other methods, setting new state of the art on RAF-DB with 88.14%, FERPlus with 88.81% and AffectNet with 61.85%. We also conduct a cross-dataset evaluation on CK+ to show the generalization capability of the proposed method.
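The second step, treating CNN feature maps as a sequence of visual words and applying global self-attention, follows a common pattern that can be sketched in PyTorch; this is a generic sketch with illustrative sizes, not the authors' exact ASF/CVT architecture.

```python
import torch
import torch.nn as nn

class VisualWordsTransformer(nn.Module):
    def __init__(self, channels=256, n_heads=8, n_layers=2, n_classes=7):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(channels, n_classes)

    def forward(self, fmap):                      # fmap: (B, C, H, W) from a CNN
        words = fmap.flatten(2).transpose(1, 2)   # (B, H*W, C) "visual words"
        words = self.encoder(words)               # global self-attention
        return self.head(words.mean(dim=1))       # pool words -> expression logits

logits = VisualWordsTransformer()(torch.randn(2, 256, 14, 14))
```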
|
computer science
|
Constraints make hard optimization problems even harder to solve on quantum devices because they are implemented with large energy penalties and additional qubit overhead. The parity mapping, which has been introduced as an alternative to the spin encoding, translates the problem to a representation using only parity variables that encode products of spin variables. By combining exchange interactions and single spin-flip terms in the parity representation, constraints on sums and products of arbitrary k-body terms can be implemented without additional overhead in two-dimensional quantum systems.
|
quantum physics
|
The prompt emission of GRBs has been investigated for more than 50 years but remains poorly understood. Commonly, spectral and temporal profiles of the $\gamma$-ray emission are analysed. However, they are insufficient for a complete picture of GRB-related physics. The addition of polarization measurements provides invaluable information towards the understanding of these astrophysical sources. In recent years, dedicated polarimeters, such as POLAR and GAP, were built. The former observed low levels of polarization as well as a temporal evolution of the polarization angle. It was understood that a larger sample of GRB polarization measurements and time-resolved studies are necessary to constrain theoretical models. The POLAR-2 mission aims to address this by increasing the effective area by an order of magnitude compared to POLAR. POLAR-2 is manifested for launch on board the China Space Station in 2024 and will operate for at least 2 years. Insight from POLAR will aid in the improvement of the overall POLAR-2 design. Major improvements (compared to POLAR) will include the replacement of multi-anode PMTs (MAPMTs) with SiPMs, an increase in sensitive volume and further technological upgrades. POLAR-2 is projected to measure about 50 GRBs per year with equal or better quality compared to the best seen by POLAR. The instrument design, preliminary results and anticipated scientific potential of this mission will be discussed.
|
astrophysics
|
Empirical risk minimization is the main tool for prediction problems, but its extension to relational data remains unsolved. We solve this problem using recent ideas from graph sampling theory to (i) define an empirical risk for relational data and (ii) obtain stochastic gradients for this empirical risk that are automatically unbiased. This is achieved by considering the method by which data is sampled from a graph as an explicit component of model design. By integrating fast implementations of graph sampling schemes with standard automatic differentiation tools, we provide an efficient turnkey solver for the risk minimization problem. We establish basic theoretical properties of the procedure. Finally, we demonstrate relational ERM with application to two non-standard problems: one-stage training for semi-supervised node classification, and learning embedding vectors for vertex attributes. Experiments confirm that the turnkey inference procedure is effective in practice, and that the sampling scheme used for model specification has a strong effect on model performance. Code is available at https://github.com/wooden-spoon/relational-ERM.
|
statistics
|
With the recent trend towards ultra-high-definition displays, the demand for high-quality and efficient video super-resolution (VSR) has become more important than ever. Previous methods adopt complex motion compensation strategies to exploit temporal information when estimating the missing high-frequency details. However, as motion estimation is a highly challenging problem, inaccurate motion compensation may affect the performance of VSR algorithms. Furthermore, the complex motion compensation module may also introduce a heavy computational burden, which limits the application of these methods in real systems. In this paper, we propose an efficient recurrent latent space propagation (RLSP) algorithm for fast VSR. RLSP introduces high-dimensional latent states to propagate temporal information between frames in an implicit manner. Our experimental results show that RLSP is a highly efficient and effective method to deal with the VSR problem. We outperform the current state-of-the-art method DUF with an over 70x speed-up.
|
electrical engineering and systems science
|
Sound Event Localization and Detection (SELD) is a problem related to the field of machine listening whose objective is to recognize individual sound events, detect their temporal activity, and estimate their spatial location. Thanks to the emergence of more hard-labeled audio datasets, Deep Learning techniques have become state-of-the-art solutions. The most common ones are those that implement a convolutional recurrent network (CRNN) having previously transformed the audio signal into multichannel 2D representation. The squeeze-excitation technique can be considered as a convolution enhancement that aims to learn spatial and channel feature maps independently rather than together as standard convolutions do. This is usually achieved by combining some global clustering operators, linear operators and a final calibration between the block input and its learned relationships. This work aims to improve the accuracy results of the baseline CRNN presented in DCASE 2020 Task 3 by adding residual squeeze-excitation (SE) blocks in the convolutional part of the CRNN. The followed procedure involves a grid search of the parameter ratio (used in the linear relationships) of the residual SE block, whereas the hyperparameters of the network remain the same as in the baseline. Experiments show that by simply introducing the residual SE blocks, the results obtained clearly exceed the baseline.
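A residual squeeze-excitation block of the kind described can be sketched in PyTorch as below, following the standard SE recipe (global average pooling, a two-layer bottleneck with reduction `ratio`, sigmoid gating) plus an identity shortcut; where exactly it sits inside the DCASE baseline CRNN is the paper's design choice and is not reproduced here.

```python
import torch
import torch.nn as nn

class ResidualSE(nn.Module):
    def __init__(self, channels, ratio=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // ratio), nn.ReLU(inplace=True),
            nn.Linear(channels // ratio, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, C, T, F) spectrogram features
        s = x.mean(dim=(2, 3))                # squeeze: global average pool -> (B, C)
        w = self.fc(s)[:, :, None, None]      # excitation: per-channel weights
        return x + x * w                      # recalibrate, then residual shortcut

y = ResidualSE(64)(torch.randn(4, 64, 100, 64))
```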
|
computer science
|
Under the term global blockage, the cumulative induction of wind turbines in a wind farm has recently been suspected to be responsible for observed overestimations of the energy yield of large-size wind farms. In this paper, the practice of modeling this effect by linear superposition of single-turbine inductions, calculated with three of the most recent analytical models, is compared to Large-Eddy Simulations (LES) of wind farms. We compare the models across two different farms, composed of 9 and 49 turbines, with two different heights of the atmospheric boundary layer (ABL), 300 and 500 m. The results show that the differences between the analytical models are negligible, while they substantially differ from the LES results. The linear superposition of inductions consistently underestimates the velocity deficit in front of the farm, with an error that increases as the wind farm size grows and the ABL height decreases. Also, when calculating the power output at the turbines of the farm, none of the analytical models considered agrees with the LES. These comparisons reveal that the farm's interactions with the atmospheric boundary layer may greatly outclass the turbine induction in determining the extent of the global blockage effect. Therefore, we present a first dimensional approach to the problem based on LES, aimed at simplifying its characterization.
|
physics
|
Compressed sensing (CS) MRI relies on adequate undersampling of the k-space to accelerate the acquisition without compromising image quality. Consequently, the design of optimal sampling patterns for these k-space coefficients has received significant attention, with many CS MRI methods exploiting variable-density probability distributions. Realizing that an optimal sampling pattern may depend on the downstream task (e.g. image reconstruction, segmentation, or classification), we here propose joint learning of both task-adaptive k-space sampling and a subsequent model-based proximal-gradient recovery network. The former is enabled through a probabilistic generative model that leverages the Gumbel-softmax relaxation to sample across trainable beliefs while maintaining differentiability. The proposed combination of a highly flexible sampling model and a model-based (sampling-adaptive) image reconstruction network facilitates exploration and efficient training, yielding improved MR image quality compared to other sampling baselines.
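The differentiable sampling step can be sketched with PyTorch's built-in Gumbel-softmax: trainable per-coefficient logits are relaxed into near-binary k-space masks while retaining gradients. This is a generic sketch of the mechanism only; the paper's generative model and acceleration-budget handling are more involved.

```python
import torch
import torch.nn.functional as F

class LearnedMask(torch.nn.Module):
    """Trainable sampling pattern: one (keep, drop) belief per k-space coefficient."""
    def __init__(self, n_coeffs):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(n_coeffs, 2))

    def forward(self, tau=1.0):
        # hard=True: discrete 0/1 mask in the forward pass, straight-through
        # (Gumbel-softmax) gradients to the logits in the backward pass.
        sample = F.gumbel_softmax(self.logits, tau=tau, hard=True)
        return sample[:, 0]                  # the "keep" channel

mask = LearnedMask(256)()                    # (256,) near-binary, differentiable
```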
|
electrical engineering and systems science
|
The LHCb Collaboration announced the observation of a doubly charmed baryon through $\Xi_{c c}^{++} \rightarrow \Lambda_{c}^{+} K^{-} \pi^{+} \pi^{+}$ in 2017. Since then, a series of studies of doubly heavy baryons has been presented. $\Xi_{cc}^{++}$ was discovered through a nonleptonic four-body decay mode, and experimental data have indicated that the decay modes of $\Xi_{c c}^{++}$ are not saturated by two- and three-body intermediate states. In this work, we analyze the four-body weak decays of the doubly heavy baryons $\Xi_{cc}^{++}, \Xi_{cc}^+$, and $\Omega_{cc}^+$. Decay amplitudes for various channels are parametrized in terms of SU(3) irreducible amplitudes. We point out that the branching fractions for the Cabibbo-allowed processes $\Xi_{cc}^{+}\to\Lambda_c^+\pi^+ \pi^0 K^-$ and $\Omega_{cc}^{+}\to\Lambda_c^+\pi^+ \overline K^0 K^-$ would be helpful in searching for $\Xi_{cc}^+$ and $\Omega_{cc}^+$ in future measurements at experimental facilities like the LHC, Belle II, and CEPC.
|
high energy physics phenomenology
|
Glasses based on the SiO2-PbO-CdO-Ga2O3 system have been studied for the first time for the fabrication of mid-infrared optical elements. The gallium oxide concentration was gradually increased, replacing silicon dioxide, for different cadmium and lead oxide contents. The thermal and optical properties were investigated for the different compositions. It was observed that the thermal stability, refractive index, and transmission in the infrared range increased with increasing gallium and lead concentrations. The most thermally stable glass composition was selected for the fabrication of optical elements such as optical fibers. We also successfully fabricated mid-infrared lenses by hot embossing for potential application in compact gas detectors.
|
physics
|
Recent X-ray observations have revealed the complexity and diversity of high-mass X-ray binaries (HMXBs). This diversity challenges the classical understanding of the accretion process onto compact objects. In this study, we reinforce the conventional picture of wind-fed accretion onto a neutron star by considering the geometrical effect of the radiatively accelerated wind, and re-evaluate the transported angular momentum using a simple wind model. Our results suggest that even in an OB-type HMXB fed by a stellar wind, a large amount of angular momentum could be transported to form an accretion disk due to wind inhomogeneity, if the binary separation is tight enough and/or the stellar wind is slow. We apply our model to actual systems such as LMC X-4 and OAO 1657-415, and discuss the possibility of disk formation in these systems.
|
astrophysics
|
In a recent paper published in this Journal [Phys. Rev. B 97, 075135 (2018)], Menezes et al. analyze the topological behavior of an effective bosonic model defined on the Lieb lattice in the presence of an electromagnetic field. In this context, the authors claim to have found an atypical quantum Hall effect for the quasiparticles. However, some inconsistencies related to the treatment of the propagator jeopardize the main result for this system.
|
high energy physics theory
|
We consider the matrix regularization of fields on a Riemann surface which couple to gauge fields with a nonvanishing magnetic flux. We show that such fields are described as rectangular matrices in the matrix regularization. We construct the matrix regularization explicitly for the case of the sphere and torus based on the Berezin-Toeplitz quantization, and also discuss a possible generalization to cases with higher genera. We also discuss the matrix version of the Laplacian acting on the rectangular matrices.
|
high energy physics theory
|
Free radicals play a key role in the ageing process. The strongly debated free radical theory of ageing even states that damage caused by free radicals is the main cause of ageing on a cellular level. However, free radicals are small, reactive, and short-lived, and thus challenging to measure. We utilize a new technique called diamond magnetometry for this purpose, making use of nitrogen-vacancy centers in nanodiamonds. Via a quantum effect, these defects convert a magnetic resonance signal into an optical signal. While this method is increasingly popular for its unprecedented sensitivity in physics, we use this technique here for the first time to measure free radicals in living cells. Our signals are equivalent to T1 signals in conventional MRI, but from nanoscale voxels in single cells with sub-cellular resolution. With this powerful tool we are able to follow free radical generation after chemically inducing stress. In addition, we can observe free radical reduction in the presence of an antioxidant. We were able to clearly differentiate between mutant strains with altered metabolism. Finally, the excellent stability of our diamond particles allowed us to follow the ageing process and differentiate between young and old cells. We could confirm the expected increase of free radical load in old wild-type and sod1$\Delta$ mutant cells. We further applied this new technique to investigate tor1$\Delta$ and pex19$\Delta$ cells. For these mutants an increased lifespan has been reported, but the exact mechanism is unclear. We find a decreased free radical load in these cells, which might offer an explanation for their increased lifespan.
|
physics
|
It has recently been appreciated that the conifold modulus plays an important role in string-phenomenological set-ups involving warped throats, both by imposing constraints on model building and for obtaining a 10-dimensional picture of SUSY breaking. In this note, we point out that the stability of the conifold modulus furthermore prevents large super-Planckian axion monodromy field ranges caused by brane-flux decay processes down warped throats. Our findings imply a significant challenge for concrete string theory embeddings of the inflationary flux-unwinding scenario.
|
high energy physics theory
|
Dynamic and kinematic models are of great significance in the decision-making and control of intelligent vehicles. However, due to the singularity of dynamic models at low speed, kinematic models have been the only choice under many driving scenarios. This paper presents a discrete dynamic bicycle model feasible at any low speed, utilizing the concept of the backward Euler method. We further give a sufficient condition under which numerical stability is proved. Simulations verify that (1) the proposed model is numerically stable while the forward-Euler-discretized dynamic model diverges, and (2) the model reduces forecast error by up to 49% compared to the kinematic model. As far as we know, this is the first time that a dynamic bicycle model has been qualified for urban driving scenarios involving stop-and-go tasks.
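The backward-Euler idea can be sketched concretely for the linear-tire dynamic bicycle model: the lateral velocity and yaw rate enter the dynamics linearly, so the implicit update reduces to a 2x2 linear solve whose system matrix stays invertible as the speed goes to zero. The sketch below uses hypothetical vehicle parameters and a standard linear tire model; it illustrates the discretization idea, not the paper's exact equations.

```python
import numpy as np

# Hypothetical vehicle parameters (illustrative, not from the paper).
m, Iz = 1500.0, 2500.0      # mass [kg], yaw inertia [kg m^2]
lf, lr = 1.2, 1.4           # CoG-to-axle distances [m]
Cf, Cr = 8.0e4, 8.0e4       # cornering stiffnesses [N/rad], F_y = -C * slip

def step(state, a, delta, T=0.02):
    """One backward-Euler step of the linear dynamic bicycle model.
    state = (x, y, psi, vx, vy, omega); a = acceleration, delta = steering.
    (vy, omega) appear linearly in the lateral dynamics, so the implicit
    update is a 2x2 linear solve; both rows are pre-multiplied by vx so no
    1/vx appears and the system stays well-posed as vx -> 0."""
    x, y, psi, vx, vy, om = state
    M = np.array([
        [m * vx + T * (Cf + Cr), T * (Cf * lf - Cr * lr) + T * m * vx**2],
        [T * (Cf * lf - Cr * lr), Iz * vx + T * (Cf * lf**2 + Cr * lr**2)],
    ])
    rhs = np.array([m * vx * vy + T * Cf * delta * vx,
                    Iz * vx * om + T * Cf * lf * delta * vx])
    vy_n, om_n = np.linalg.solve(M, rhs)   # at vx = 0 this yields (0, 0)
    x_n = x + T * (vx * np.cos(psi) - vy * np.sin(psi))
    y_n = y + T * (vx * np.sin(psi) + vy * np.cos(psi))
    return (x_n, y_n, psi + T * om, max(vx + T * a, 0.0), vy_n, om_n)

state = (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)     # starts exactly at standstill
for _ in range(500):                        # stop-and-go: accelerate from rest
    state = step(state, a=1.0, delta=0.05)
```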
|
electrical engineering and systems science
|
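A minimal sketch of the backward-Euler idea summarized in the abstract above: for a linear-tire dynamic bicycle model, the lateral states (lateral velocity and yaw rate) enter linearly once the longitudinal speed is fixed, so the implicit update reduces to a 2x2 linear solve that stays well-defined at low speed. All vehicle parameters, the state layout, and the small-speed guard are illustrative assumptions, not the paper's values or exact closed-form model.

```python
import numpy as np

def dynamic_bicycle_step(x, delta, a, dt, m=1500.0, Iz=2250.0,
                         lf=1.2, lr=1.6, Cf=8e4, Cr=8e4):
    """One backward-Euler step of a linear-tire dynamic bicycle model.

    State x = [X, Y, psi, vx, vy, r]; delta = steering angle, a = accel.
    For fixed vx the lateral dynamics are linear, so the implicit update
    (I - dt*A) z_{k+1} = z_k + dt*B*delta is a 2x2 solve.
    """
    X, Y, psi, vx, vy, r = x
    vx_next = vx + a * dt                          # longitudinal: explicit
    v = max(vx_next, 1e-3)                         # guard the 1/vx terms
    A = np.array([[-(Cf + Cr) / (m * v), -v - (Cf * lf - Cr * lr) / (m * v)],
                  [-(Cf * lf - Cr * lr) / (Iz * v),
                   -(Cf * lf**2 + Cr * lr**2) / (Iz * v)]])
    B = np.array([Cf / m, Cf * lf / Iz])
    vy_next, r_next = np.linalg.solve(np.eye(2) - dt * A,
                                      np.array([vy, r]) + dt * B * delta)
    psi_next = psi + r_next * dt
    X_next = X + (vx_next * np.cos(psi_next) - vy_next * np.sin(psi_next)) * dt
    Y_next = Y + (vx_next * np.sin(psi_next) + vy_next * np.cos(psi_next)) * dt
    return np.array([X_next, Y_next, psi_next, vx_next, vy_next, r_next])
```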
Kaluza-Klein Theory states that a metric on the total space of a principal bundle $P\rightarrow M$, if it is invariant under the principal action of $P$, naturally reduces to a metric together with a gauge field on the base manifold $M$. We propose a generalization of this Kaluza-Klein principle to higher principal bundles and higher gauge fields. For the particular case of the abelian gerbe of Kalb-Ramond field, this Higher Kaluza-Klein geometry provides a natural global formulation for Double Field Theory (DFT). In this framework the doubled space is the total space of a higher principal bundle and the invariance under its higher principal action is exactly a global formulation of the familiar strong constraint. The patching problem of DFT is naturally solved by gluing the doubled space with a higher group of symmetries in a higher category. Locally we recover the familiar picture of an ordinary para-Hermitian manifold equipped with Born geometry. Infinitesimally we recover the familiar picture of a higher Courant algebroid twisted by a gerbe (also known as Extended Riemannian Geometry). As first application we show that on a torus-compactified spacetime the Higher Kaluza-Klein reduction gives automatically rise to abelian T-duality, while on a general principal bundle it gives rise to non-abelian T-duality. As final application we define a natural notion of Higher Kaluza-Klein monopole by directly generalizing the ordinary Gross-Perry one. Then we show that under Higher Kaluza-Klein reduction, this monopole is exactly the NS5-brane on a $10d$ spacetime. If, instead, we smear it along a compactified direction we recover the usual DFT monopole on a $9d$ spacetime.
|
high energy physics theory
|
The rich molecular structures of polycyclic aromatic hydrocarbons -- essentially planar flakes of fused benzene rings -- and their fullerene cousins are revealed through their vibrational and electronic spectra.
|
astrophysics
|
We present a characterization of finite permutation groups which contain a transitive dihedral subgroup.
|
mathematics
|
We study the \'etale sheafification of algebraic K-theory, called \'etale K-theory. Our main results show that \'etale K-theory is very close to a noncommutative invariant called Selmer K-theory, which is defined at the level of categories. Consequently, we show that \'etale K-theory has surprisingly well-behaved properties, integrally and without finiteness assumptions. A key theoretical ingredient is the distinction, which we investigate in detail, between sheaves and hypersheaves of spectra on \'etale sites.
|
mathematics
|
Fragmentation functions are one of the non-perturbative components of the QCD factorization theorem. They represent the probability that a parton carrying a momentum fraction z fragments into a particular kind of hadron. In this work, we study jet fragmentation functions in collisions between electrons and positrons. The jets were identified with FastJet in different $p_{\rm T}^{\rm ch\,jet}$ intervals. The intervals and the final jets were reconstructed by means of an event-shape separation using the transverse spherocity variable, and the study is performed within the Pythia Monte Carlo event generator framework.
|
high energy physics phenomenology
|
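As an illustration of the spherocity-based event-shape separation mentioned in the abstract above, the sketch below computes the transverse spherocity $S_0$ of an event from particle transverse momenta. The coarse scan over axis directions stands in for the exact minimization, and the scan granularity is an arbitrary choice.

```python
import numpy as np

def spherocity(px, py, n_steps=720):
    """Transverse spherocity S0 = (pi/2)^2 * min_n (sum|pT x n| / sum pT)^2,
    minimized over unit vectors n in the transverse plane by a coarse scan.
    S0 -> 0 for pencil-like (jetty) events and -> 1 for isotropic ones."""
    px, py = np.asarray(px), np.asarray(py)
    pt_sum = np.hypot(px, py).sum()
    phis = np.linspace(0.0, np.pi, n_steps, endpoint=False)
    # |pT x n| for n = (cos phi, sin phi) is |px*sin(phi) - py*cos(phi)|
    sums = [np.abs(px * np.sin(p) - py * np.cos(p)).sum() for p in phis]
    return (np.pi / 2 * min(sums) / pt_sum) ** 2

# A back-to-back two-prong toy event should give S0 close to zero:
print(spherocity([10.0, -10.0, 0.3], [0.0, 0.1, -0.2]))
```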
Many networks are embedded in latent geometries. If two nodes are close in the latent geometry, there is a disproportionately high probability that they are connected by a link. Latent geometry has a wide range of practical applications, and some epidemic processes spread through it. Although extensive studies have been devoted to embedding networks in latent hyperbolic space, little research has been conducted on methods to estimate the general, unknown latent geometry of complex networks. Here, we develop methods to estimate the unknown latent geometry of a given network by removing links with high loads according to certain criteria. Our methods estimate the homology of the latent geometry and provide a simplified map of it.
|
physics
|
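The abstract above describes pruning high-load links to expose a simplified skeleton of the latent geometry. The sketch below is a loose interpretation under stated assumptions: edge betweenness as the "load", a fixed removal budget as the stopping rule, and the first Betti number of the surviving graph as a crude proxy for the homology. The paper's actual criteria may differ.

```python
import networkx as nx

def latent_skeleton(G, keep_fraction=0.6):
    """Iteratively remove the link with the highest load (here: edge
    betweenness) to obtain a simplified map of the latent geometry.
    Removals that would strand a node are skipped to keep the map usable."""
    H = G.copy()
    for _ in range(int((1 - keep_fraction) * H.number_of_edges())):
        load = nx.edge_betweenness_centrality(H)
        u, v = max(load, key=load.get)
        if H.degree(u) > 1 and H.degree(v) > 1:
            H.remove_edge(u, v)
    # Rank of the first homology group (independent cycles) of the skeleton
    b1 = (H.number_of_edges() - H.number_of_nodes()
          + nx.number_connected_components(H))
    return H, b1
```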
Belief propagation is a widely used message passing method for the solution of probabilistic models on networks such as epidemic models, spin models, and Bayesian graphical models, but it suffers from the serious shortcoming that it works poorly in the common case of networks that contain short loops. Here we provide a solution to this long-standing problem, deriving a belief propagation method that allows for fast calculation of probability distributions in systems with short loops, potentially with high density, as well as giving expressions for the entropy and partition function, which are notoriously difficult quantities to compute. Using the Ising model as an example, we show that our approach gives excellent results on both real and synthetic networks, improving significantly on standard message passing methods. We also discuss potential applications of our method to a variety of other problems.
|
condensed matter
|
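For context on what the loop-corrected method above improves, here is the standard belief-propagation baseline for the Ising model on an arbitrary graph: exact on trees, increasingly wrong as short loops proliferate. The cavity-field form and the sequential, damping-free update are conventional choices, not the paper's algorithm.

```python
import numpy as np
import networkx as nx

def ising_bp(G, beta, h=0.0, n_iter=500, tol=1e-10):
    """Plain BP for the Ising model on graph G via cavity fields:
    u[i->j] = h + sum over neighbors k of i (k != j) of
    atanh(tanh(beta) * tanh(u[k->i])). Returns node magnetizations."""
    u = {}
    for i, j in G.edges:
        u[(i, j)] = u[(j, i)] = 0.1       # small bias lets symmetry break
    t = np.tanh(beta)
    for _ in range(n_iter):
        diff = 0.0
        for edge in list(u):
            i, j = edge
            new = h + sum(np.arctanh(t * np.tanh(u[(k, i)]))
                          for k in G.neighbors(i) if k != j)
            diff = max(diff, abs(new - u[edge]))
            u[edge] = new
        if diff < tol:
            break
    return {i: float(np.tanh(h + sum(np.arctanh(t * np.tanh(u[(k, i)]))
                                     for k in G.neighbors(i))))
            for i in G.nodes}

mags = ising_bp(nx.random_regular_graph(3, 50, seed=1), beta=1.0)
```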
The Project PAI Data Protocol ("PAI Data") is a specification that extends the Project PAI Blockchain Protocol to include a method of securing and provisioning access to arbitrary data. In the context of PAI Coin Development Proposal (PDP) 2, this paper defines two important transaction types that PAI Data supports: Storage Transactions, which facilitate storage of data and proof of ownership, and Sharing Transactions, designed to enable granting and revocation of data access to designated recipients. A comparative analysis of PAI Data against similar blockchain-based file storage systems is also presented.
|
computer science
|
The numerical emulation of quantum physics and quantum chemistry often involves an intractable number of degrees of freedom and admits no known approximation in general form. In practice, representing quantum-mechanical states using available numerical methods becomes exponentially more challenging with increasing system size. Recently, quantum algorithms implemented as variational models have been proposed to accelerate such simulations. Here we study the effect of noise on the quantum phase transition in the Schwinger model, within a variational framework. The experiments are built using a free-space optical scheme to realize a pair of polarization qubits and enable any two-qubit state to be experimentally prepared up to machine tolerance. We specifically exploit the possibility of engineering noise and decoherence for polarization qubits to explore the limits of variational algorithms on NISQ architectures in identifying and quantifying quantum phase transitions with noisy qubits. We find that despite the presence of noise one can detect the phase transition of the Schwinger Hamiltonian even for a two-qubit system using variational quantum algorithms.
|
quantum physics
|
In this work, the antibacterial activity of the polymeric precursor dicarbonyldichlororuthenium has been studied against Escherichia coli and Staphylococcus aureus. This Ru carbonyl precursor shows a minimum inhibitory concentration at nanograms per millilitre, which renders it a novel antimicrobial polymer without any organic ligands. Besides, the antimicrobial activity of dicarbonyldichlororuthenium is markedly boosted under photoirradiation, which can be ascribed to the enhanced generation of reactive oxygen species under UV irradiation. This compound is able to inhibit bacterial growth via disruption of bacterial membranes and the triggering of upregulated stress responses, as shown by microscopic measurements. The activity of polymeric ruthenium as an antibacterial material is significant even at very low concentrations, while it remains biocompatible to mammalian cells at much higher concentrations. This study proves that this simple Ru carbonyl precursor can be used as an antimicrobial compound with high activity and a low toxicity profile, in the context of the need for new antimicrobial agents to fight bacterial infections.
|
condensed matter
|
We propose a protocol for second-order nonlinear phase estimation with a coherent state as input and balanced homodyne detection as the measurement strategy. The sensitivity is below the Heisenberg limit, scaling as $N^{-3/2}$ for $N$ photons on average. By ruling out hidden resources in the quantum Fisher information, the fundamental sensitivity limit is recalculated and compared to the optimal sensitivity of our protocol. In addition, we investigate the effect of photon loss on the sensitivity, and discuss the robustness of the measurement strategy. The results indicate that our protocol is nearly optimal and robust.
|
quantum physics
|
Magnetic Feshbach resonances are a key tool in the field of ultracold quantum gases, but their full exploitation requires the generation of large, stable magnetic fields up to 1000 G with fractional stabilities of better than $10^{-4}$. Design considerations for electromagnets producing these fields, such as optical access and fast dynamical response, mean that electric currents in excess of 100 A are often needed to obtain the requisite field strengths. We describe a simple digital proportional-integral-derivative current controller constructed using a field-programmable gate array and off-the-shelf evaluation boards which allows for gain scheduling, enabling optimal control of current sources with non-linear actuators. Our controller can stabilize an electric current of 337.5 A to the level of $7.5\times 10^{-7}$ in an averaging time of 10 minutes and with a control bandwidth of 2 kHz.
|
physics
|
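A bare-bones sketch of the gain-scheduling idea from the abstract above: the PID gains are looked up from the current operating point so that a non-linear actuator sees a roughly constant loop gain. The schedule, gain values, and threshold are invented for illustration; the actual controller is a digital implementation on an FPGA.

```python
class ScheduledPID:
    """Discrete PID controller with gain scheduling: gains are a function
    of the setpoint, compensating a non-linear current actuator."""
    def __init__(self, schedule, dt):
        self.schedule = schedule          # setpoint -> (kp, ki, kd)
        self.dt, self.integral, self.prev_err = dt, 0.0, 0.0

    def update(self, setpoint, measured):
        kp, ki, kd = self.schedule(setpoint)
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return kp * err + ki * self.integral + kd * deriv

# Hypothetical schedule: lower proportional gain at high current, where
# the actuator transfer characteristic is assumed to be steeper.
pid = ScheduledPID(lambda sp: (0.5 if sp > 200 else 1.0, 50.0, 0.0), dt=1e-4)
drive = pid.update(setpoint=337.5, measured=337.2)
```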
In this paper, we present a theoretical analysis of different integrating front-ends employed in broad-band communications through \textit{lossy} channels. Time-domain receivers for broad-band communication typically deal with large integrated noise due to their high bandwidth of operation. However, unlike traditional wireline systems, which are typically not noise-limited, channels with high loss render the input signal swing very small, imposing several challenges in RX design as the circuits operate in the noise-limited regime. This combination of high integrated noise and low signal swing limits the maximum achievable data-rate for a target bit-error-rate (BER) and deteriorates the energy efficiency of the RX. In this work, the transient, noise and gain performance of different standard signaling blocks is obtained in closed-form expressions and validated through SPICE simulations. A multi-integrator cascade is proposed which provides significant gain with relatively lower power consumption than standard gain elements. Also, the maximum achievable data-rate and optimum energy efficiency for different channel losses are obtained theoretically for different architectures, revealing their advantages and limitations. All the pertaining circuits have been designed in a 65 nm CMOS process with a 1 V supply voltage.
|
electrical engineering and systems science
|
We introduce in this work an efficient numerical method for the simulation of the quantum Liouville-BGK equation, which models the diffusive transport of quantum particles. The cornerstone of the model is the BGK collision operator, obtained by minimizing the quantum free energy under the constraint that the local density of particles is conserved during collisions. This leads to a large system of coupled nonlinear nonlocal PDEs whose resolution is challenging. We then define a splitting scheme that separates the transport and the collision parts, which, exploiting the local conservation of particles, leads to a fully linear collision step. The latter involves the resolution of a constrained optimization problem that is handled with the nonlinear conjugate gradient algorithm. We prove that the time semi-discrete scheme is convergent, and as an application of our numerical scheme, we validate the quantum drift-diffusion model that is obtained as the diffusive limit of the quantum Liouville-BGK equation.
|
mathematics
|
Lead halide hybrid perovskites consist of an inorganic framework hosting a molecular cation located in the interstitial space. These compounds have been extensively studied as they have been identified as promising materials for photovoltaic applications, with the interaction between the molecular cation and the inorganic framework implicated as influential for the electronic properties. CH$_3$NH$_3$PbCl$_3$ undergoes two structural transitions from a high temperature cubic unit cell, to a tetragonal phase at 177 K and an orthorhombic phase at 170 K. We have measured the low-frequency lattice dynamics using neutron spectroscopy and observe an energy broadening in the acoustic phonon linewidth towards the symmetry point $Q_X = (2,1/2,0)$ when approaching the transitions. Concomitant with these zone boundary anomalies is a hardening of the entire acoustic phonon branch measured near the (2, 0, 0) Bragg position with decreasing temperature. Measurements of the elastic scattering at the Brillouin zone edges $Q_X = (2,1/2,0)$, $Q_M = (3/2,1/2,0)$, and $Q_R = (3/2,3/2,5/2)$ show Bragg peaks appearing below these structural transitions. Based on the selection rules of neutron scattering, we suggest that the higher 177 K transition is displacive with a distortion of the local octahedral environment and the lower transition is a rigid tilt transition of the octahedra. We do not observe any critical broadening in energy or momentum, beyond resolution, of these peaks near the transitions. We compare these results to the critical properties reported near the structural transitions in other perovskites. We suggest that the simultaneous onset of static resolution-limited Bragg peaks at the zone boundaries and the changes in acoustic phonon energies near the zone center is evidence of a coupling between the inorganic framework and the molecular cation.
|
condensed matter
|
A liquid meniscus, a bending rod (also called elastica) and a simple pendulum are all described by the same non-dimensional equation. The oscillatory regime of the pendulum corresponds to buckling rods and pendant drops, and the high-velocity regime corresponds to spherical drops, puddles and multiple rod loopings. We study this analogy in a didactic way and discuss how, despite this common governing equation, the three systems are not completely equivalent. We also consider the cylindrical deformations of an inextensible, flexible membrane containing a liquid, which in some sense interpolates between the meniscus and rod conformations.
|
condensed matter
|
Power systems are undergoing a transformation toward a low-carbon non-synchronous generation portfolio. A major concern for system planners and operators is the system dynamics in the high renewable penetration future. Because of the scale of the system and numerous components involved, it is extremely difficult to develop high PV dynamic models based upon actual power system models. The main contribution of this paper is providing an example of developing high PV penetration models based on the validated dynamic model of an actual large-scale power grid - the U.S. Eastern Interconnection system. The displacement of conventional generators by PV is realized by optimization. Combining the PV distribution optimization and the validated dynamic model information, this approach avoids the uncertainties brought about by transmission planning. As the existing dynamic models can be validated by measurements, this approach improves the credibility of the high PV models in representing future power grids. This generic approach can be applied to develop high PV dynamic models for other actual large-scale systems.
|
electrical engineering and systems science
|
We investigate relations among Schur multiple zeta functions and zeta-functions of root systems attached to semisimple Lie algebras. Schur multiple zeta functions are defined as sums over semi-standard Young tableaux. Then, assuming the Young tableau is of anti-hook shape, we show that they can be written in terms of modified zeta-functions of root systems of type $A$. Our proof is quite computational, but we also give a pictorial interpretation of our argument in terms of Young tableaux. One of our theorems can also be understood as expressing Schur multiple zeta functions through an analogue of Weyl group multiple Dirichlet series in the sense of Bump et al. By combining with a result of Nakasuji, Phuksuwan and Yamasaki, our theorems yield a new method of finding functional relations among zeta-functions of root systems.
|
mathematics
|
Recent works on observation of discrete time-crystalline signatures throw up major puzzles on the necessity of localization for stabilizing such out-of-equilibrium phases. Motivated by these studies, we delve into a clean interacting Floquet system, whose quasi-spectrum conforms to the ergodic Wigner-Dyson distribution, yet with an unexpectedly robust, long-lived time-crystalline dynamics in the absence of disorder or fine-tuning. We relate such behavior to a measure zero set of nonthermal Floquet eigenstates with long-range spatial correlations, which coexist with otherwise thermal states at near-infinite temperature and develop a high overlap with a family of translationally invariant, symmetry-broken initial conditions. This resembles the notion of "dynamical scars" that remain robustly localized throughout a thermalizing Floquet spectrum with fractured structure. We dub such a long-lived discrete time crystal formed in partially nonergodic systems, "scarred discrete time crystal" which is distinct by nature from those stabilized by either many-body localization or prethermalization mechanism.
|
condensed matter
|
Newton's method for polynomial root finding is one of mathematics' most well-known algorithms. The method also has its shortcomings: it is undefined at critical points, it can exhibit chaotic behavior, and it is only guaranteed to converge locally. Based on the {\it Geometric Modulus Principle} for a complex polynomial $p(z)$, together with a {\it Modulus Reduction Theorem} proved here, we develop the {\it Robust Newton's method} (RNM), defined everywhere with a step-size that guarantees an {\it a priori} reduction in polynomial modulus in each iteration. Furthermore, we prove RNM iterates converge globally, either to a root or a critical point. Specifically, given $\varepsilon $ and any seed $z_0$, in $t=O(1/\varepsilon^{2})$ iterations of RNM, independent of degree of $p(z)$, either $|p(z_t)| \leq \varepsilon$ or $|p(z_t) p'(z_t)| \leq \varepsilon$. By adjusting the iterates at {\it near-critical points}, we describe a {\it modified} RNM that necessarily converges to a root. In combination with Smale's point estimation, RNM results in a globally convergent Newton's method having a locally quadratic rate. We present sample polynomiographs that demonstrate how, in contrast with Newton's method, RNM smooths out the fractal boundaries of basins of attraction of roots. RNM also shows potential for computing all roots of polynomials of arbitrary degree. A particular consequence of RNM is a simple algorithm for solving cubic equations.
|
mathematics
|
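A schematic stand-in for the RNM iteration described above, assuming a simple halving backtrack on $|p(z)|$ in place of the paper's a priori step size from the Geometric Modulus Principle; the stopping test mirrors the abstract's criteria ($|p| \leq \varepsilon$ or $|p\,p'| \leq \varepsilon$).

```python
import numpy as np
from numpy.polynomial import polynomial as P

def robust_newton(coeffs, z0, eps=1e-10, max_iter=100000):
    """Newton iteration on a complex polynomial that only accepts steps
    reducing |p(z)|. Terminates when either |p(z)| <= eps (near a root)
    or |p(z) p'(z)| <= eps (near a critical point)."""
    dcoeffs = P.polyder(coeffs)
    z = complex(z0)
    for _ in range(max_iter):
        pz, dpz = P.polyval(z, coeffs), P.polyval(z, dcoeffs)
        if abs(pz) <= eps or abs(pz * dpz) <= eps:
            return z
        step, t = pz / dpz, 1.0
        while abs(P.polyval(z - t * step, coeffs)) >= abs(pz):
            t /= 2                         # backtrack until |p| decreases
            if t < 1e-16:
                return z
        z -= t * step
    return z

# p(z) = z^3 - 8 (coefficients low to high); converges to a cube root of 8:
print(robust_newton([-8, 0, 0, 1], z0=1 + 1j))
```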
The measurement precision of modern quantum simulators is intrinsically constrained by the limited set of measurements that can be efficiently implemented on hardware. This fundamental limitation is particularly severe for quantum algorithms where complex quantum observables are to be precisely evaluated. To achieve precise estimates with current methods, prohibitively large amounts of sample statistics are required in experiments. Here, we propose to reduce the measurement overhead by integrating artificial neural networks with quantum simulation platforms. We show that unsupervised learning of single-qubit data allows the trained networks to accommodate measurements of complex observables, otherwise costly using traditional post-processing techniques. The effectiveness of this hybrid measurement protocol is demonstrated for quantum chemistry Hamiltonians using both synthetic and experimental data. Neural-network estimators attain high-precision measurements with a drastic reduction in the amount of sample statistics, without requiring additional quantum resources.
|
quantum physics
|
We consider a swimmer consisting of a collinear assembly of three spheres connected by two slender rods. This swimmer can propel itself forward by varying the lengths of the rods in a way that is not invariant under time reversal. Although any non-reciprocal strokes of the arms can lead to a net displacement, the energetic efficiency of the swimmer is strongly dependent on the details and sequences of these strokes, and also the sizes of the spheres. We define the efficiency of the swimmer using Lighthill's criterion, i.e., the power that is needed to pull the swimmer by an external force at a certain speed, divided by the power needed for active swimming with the same average speed. Here, we determine numerically the optimal stroke sequences and the optimal size ratio of the spheres, while limiting the maximum extension of the rods. Our calculation takes into account both far-field and near-field hydrodynamic interactions. We show that, surprisingly, the three-sphere swimmer with unequal spheres can be more efficient than the equally-sized case. We also show that the variations of efficiency with size ratio is not monotonic and there exists a specific size ratio at which the swimmer has the highest efficiency. We find that the swimming efficiency initially rises by increasing the maximum allowable extension of the rods, and then converges to a maximum value. We calculate this upper limit analytically and report the highest value of efficiency that the three-sphere swimmer can reach.
|
physics
|
A two-coloring of the vertices $V$ of the hypergraph $H=(V, E)$ by red and blue has discrepancy $d$ if $d$ is the largest difference between the number of red and blue points in any edge. Let $f(n)$ be the fewest number of edges in an $n$-uniform hypergraph without a coloring with discrepancy $0$. Erd\H{o}s and S\'os asked: is $f(n)$ unbounded? N. Alon, D. J. Kleitman, C. Pomerance, M. Saks and P. Seymour proved upper and lower bounds in terms of the smallest non-divisor ($\mbox{snd}$) of $n$. We refine the upper bound as follows: $$f(n) \leq c \log \mbox{snd}\, n.$$
|
mathematics
|
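Since the bound above is phrased in terms of the smallest non-divisor, a two-line helper makes the quantity concrete:

```python
def snd(n):
    """Smallest integer >= 2 that does not divide n."""
    d = 2
    while n % d == 0:
        d += 1
    return d

# snd(12) == 5 since 2, 3 and 4 all divide 12, so f(12) <= c * log 5
print(snd(12))
```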
Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables. For statistical inference in high dimension, sparsity inducing regularizations have proven to be useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, or by solving a growing sequence of smaller problems. For both techniques, significant variables are identified thanks to convex duality arguments. In this paper, we show that the dual iterates of a GLM exhibit a Vector AutoRegressive (VAR) behavior after sign identification, when the primal problem is solved with proximal gradient descent or cyclic coordinate descent. Exploiting this regularity, one can construct dual points that offer tighter certificates of optimality, enhancing the performance of screening rules and helping to design competitive working set algorithms.
|
statistics
|
We investigate various phenomenological schemes for the rapid generation of 3D mock galaxy catalogues with a given power spectrum and bispectrum. We apply the fast bispectrum estimator MODAL-LSS to these mock galaxy catalogues and compare to $N$-body simulation data analysed with the halo-finder \texttt{ROCKSTAR} (our benchmark data). We propose an assembly bias model for populating parent halos with subhalos by using a joint lognormal-Gaussian probability distribution for the subhalo occupation number and the halo concentration. This prescription enabled us to recover the benchmark power spectrum from $N$-body simulations to within 1\% and the bispectrum to within 4\% across the entire range of scales of the simulation. A small further boost adding an extra galaxy to all parent halos above the mass threshold $M>2\times10^{14}\,h^{-1} M_\odot$ obtained a better than 1\% fit to both power spectrum and bispectrum in the range $K/3<1.1\,h\,\text{Mpc}^{-1}$, where $K=k_1+k_2+k_3$. This statistical model should be applicable to fast dark matter codes, allowing rapid generation of mock catalogues which simultaneously reproduce the halo power spectrum and bispectrum obtained from $N$-body simulations. We also investigate alternative schemes using the Halo Occupation Distribution (HOD) which depend only on halo mass, but these yield results deficient in both the power spectrum (2\%) and the bispectrum (>4\%) at $k,K/3 \approx 0.2\,h\,\text{Mpc}^{-1}$, with poor scaling for the latter. Efforts to match the power spectrum by modifying the standard four-parameter HOD model result in overboosting the bispectrum (with a 10\% excess). We also characterise the effect of changing the halo profile on the power spectrum and bispectrum.
|
astrophysics
|
Intelligent reflecting surfaces (IRSs) constitute a disruptive wireless communication technique capable of creating a controllable propagation environment. In this paper, we propose to invoke an IRS at the cell boundary of multiple cells to assist the downlink transmission to cell-edge users, whilst mitigating the inter-cell interference, which is a crucial issue in multicell communication systems. We aim for maximizing the weighted sum rate (WSR) of all users through jointly optimizing the active precoding matrices at the base stations (BSs) and the phase shifts at the IRS subject to each BS's power constraint and unit modulus constraint. Both the BSs and the users are equipped with multiple antennas, which enhances the spectral efficiency by exploiting the spatial multiplexing gain. Due to the non-convexity of the problem, we first reformulate it into an equivalent one, which is solved by using the block coordinate descent (BCD) algorithm, where the precoding matrices and phase shifts are alternately optimized. The optimal precoding matrices can be obtained in closed form, when fixing the phase shifts. A pair of efficient algorithms are proposed for solving the phase shift optimization problem, namely the Majorization-Minimization (MM) Algorithm and the Complex Circle Manifold (CCM) Method. Both algorithms are guaranteed to converge to at least locally optimal solutions. We also extend the proposed algorithms to the more general multiple-IRS and network MIMO scenarios. Finally, our simulation results confirm the advantages of introducing IRSs in enhancing the cell-edge user performance.
|
electrical engineering and systems science
|
A method for unsupervised contextual anomaly detection is proposed using a cross-linked pair of Variational Auto-Encoders for assigning a normality score to an observation. The method enables a distinct separation of contextual from behavioral attributes and is robust to the presence of anomalous or novel contextual attributes. The method can be trained with data sets that contain anomalies without any special pre-processing.
|
statistics
|
We have previously shown for powders that Mo substitution into the CuO chains of YBa$_2$Cu$_3$O$_7$ can create effective pinning centres which significantly increase the critical current density ($j_c$) in 7 T field by a factor of 4 and 10 at 50 and 60 K, respectively. The present work reports on the influence of the Mo substitution and high-pressure oxygen annealing on the pinning properties and critical currents of YBa$_2$Cu$_{3-x}$Mo$_x$O$_{7-d}$ by comparing pure (x = 0, d > 0) and substituted (x = 0.03, d < 0) single crystals. Pinning properties have been investigated by measurements of magnetization loops and calculations of $j_c$ in the ab-plane, in the temperature range from 2 to 90 K and in fields up to 14 T. Depending on the Mo substitution and the oxygen treatment, several types of pinning centres increasing $j_c$ have been revealed and analysed in the frame of Dew-Hughes' and Kramer's models.
|
condensed matter
|
The electronic transport behaviour of materials determines their suitability for technological applications. We develop an efficient method for calculating carrier scattering rates of solid-state semiconductors and insulators from first principles inputs. The present method extends existing polar and non-polar electron-phonon coupling, ionized impurity, and piezoelectric scattering mechanisms formulated for isotropic band structures to support highly anisotropic materials. We test the formalism by calculating the electronic transport properties of 16 semiconductors and comparing the results against experimental measurements. The present work is amenable for use in high-throughput computational workflows and enables accurate screening of carrier mobilities, lifetimes, and thermoelectric power.
|
condensed matter
|
Photonics might play a key role in future wireless communication systems that operate at THz carrier frequencies. A prime example is the generation of THz data streams by mixing optical signals in high-speed photodetectors. Over the previous years, this concept has enabled a series of wireless transmission experiments at record-high data rates. Reception of THz signals in these experiments, however, still relied on electronic circuits. In this paper, we show that wireless THz receivers can also greatly benefit from optoelectronic signal processing techniques, in particular when carrier frequencies beyond 0.1 THz and wideband tunability over more than an octave is required. Our approach relies on a high-speed photoconductor and a photonic local oscillator for optoelectronic down-conversion of THz data signals to an intermediate frequency band that is easily accessible by conventional microelectronics. By tuning the frequency of the photonic local oscillator, we can cover a wide range of carrier frequencies between 0.03 THz and 0.34 THz. We demonstrate line rates of up to 10 Gbit/s on a single channel and up to 30 Gbit/s on multiple channels over a distance of 58 m. To the best of our knowledge, our experiments represent the first demonstration of a THz transmission link that exploits optoelectronic signal processing techniques both at the transmitter and the receiver.
|
physics
|
We formulate the Schwinger-Keldysh effective field theory of hydrodynamics without boost symmetry. This includes a spacetime covariant formulation of classical hydrodynamics without boosts with an additional conserved particle/charge current coupled to Aristotelian background sources. We find that, up to first order in derivatives, the theory is characterised by the thermodynamic equation of state and a total of 29 independent transport coefficients, in particular, 3 hydrostatic, 9 non-hydrostatic non-dissipative, and 17 dissipative. Furthermore, we study the spectrum of linearised fluctuations around anisotropic equilibrium states with non-vanishing fluid velocity. This analysis reveals a pair of sound modes that propagate at different speeds along and opposite to the fluid flow, one charge diffusion mode, and two distinct shear modes along and perpendicular to the fluid velocity. We present these results in a new hydrodynamic frame that is linearly stable irrespective of the boost symmetry in place. This provides a unified covariant stable approach for simultaneously treating Lorentzian, Galilean, and Lifshitz fluids within an effective field theory framework and sets the stage for future studies of non-relativistic intertwined patterns of symmetry breaking.
|
high energy physics theory
|
Information-theoretical approaches can pick up important properties of phase-transition phenomena based on the underlying probability distribution. In this paper, we show information-theoretical aspects of the 3-dimensional 3-state Potts model with an external field, which corresponds to a QCD effective model with heavy quarks. The transfer mutual information, which represents the information flow between two spin variables, is numerically estimated based on the Markov-chain Monte-Carlo method. The transfer mutual information has a peak near the confinement-deconfinement transition, and it may be used to detect precursors of the transition. Since the transfer mutual information still has a peak even if the Polyakov loop changes continuously and smoothly, we may pick up some aspects of the confinement-deconfinement nature from the information flow properties. In particular, the transfer mutual information shows significantly different behavior below and above the Roberge-Weiss endpoint located in the pure imaginary chemical potential region, which may indicate a change of the system at the confinement-deconfinement transition.
|
high energy physics phenomenology
|
In this paper, we present a novel approach for the prediction of rogue waves in oceans using statistical machine learning methods. Since the ocean is composed of many wave systems, the change from a bimodal or multimodal directional distribution to a unimodal one is taken as the warning criterion. Likewise, we explore various features that help in predicting rogue waves. The analysis of the results shows that spectral features are significant in predicting rogue waves. We find that nonlinear classifiers have better prediction accuracy than linear ones. Finally, we propose a Random Forest classifier-based algorithm to predict rogue waves in oceanic conditions. The proposed algorithm has an overall accuracy of 89.57% to 91.81%, and its balanced accuracy varies between 79.41% and 89.03% depending on the forecast time window. Moreover, due to the model-free nature of the evaluation criteria and the interdisciplinary character of the approach, similar studies may be motivated in other nonlinear dispersive media, such as nonlinear optics, plasma, and solids, governed by similar equations, which will allow for the early detection of extreme waves.
|
physics
|
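A minimal sketch of the Random-Forest pipeline described above, run on synthetic stand-in data: in the real setting the columns would be the spectral and directional-distribution features, and the label would mark whether a rogue wave follows within the forecast window; make_classification merely mimics the class imbalance of rare rogue events.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: rows = sea states, ~10% positive (rogue) class
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```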
The field of attosecond science was first enabled by nonlinear compression of intense laser pulses to a duration below two optical cycles. Twenty years later, creating such short pulses still requires state-of-the-art few-cycle laser amplifiers to most efficiently exploit 'instantaneous' optical nonlinearities in noble gases for spectral broadening and parametric frequency conversion. Here, we show that nonlinear compression can in fact be much more efficient when driven in molecular gases by pulses substantially longer than a few cycles, due to enhanced optical nonlinearity associated with rotational alignment. We use 80-cycle pulses from an industrial-grade laser amplifier to simultaneously drive molecular alignment and supercontinuum generation in a gas-filled capillary, producing more than two octaves of coherent bandwidth and achieving >45-fold compression to a duration of 1.7 cycles. As the enhanced nonlinearity is linked to rotational motion, the dynamics can be exploited for long-wavelength frequency conversion and compressing picosecond lasers.
|
physics
|
Despite the enormous progress achieved during the past decade, nanoelectronic devices based on two-dimensional (2D) semiconductors still suffer from a limited electrical stability. This limited stability has been shown to result from the interaction of charge carriers originating from the 2D semiconductors with defects in the surrounding insulating materials. The resulting dynamically trapped charges are particularly relevant in field effect transistors (FETs) and can lead to a large hysteresis, which endangers stable circuit operation. Based on the notion that charge trapping is highly sensitive to the energetic alignment of the channel Fermi-level with the defect band in the insulator, we propose to optimize device stability by deliberately tuning the channel Fermi-level. Our approach aims to minimize the amount of electrically active border traps without modifying the total number of traps in the insulator. We demonstrate the applicability of this idea by using two differently doped graphene layers in otherwise identical FETs with Al$_2$O$_3$ as a gate oxide mounted on a flexible substrate. Our results clearly show that by increasing the distance of the Fermi-level to the defect band, the hysteresis is significantly reduced. Furthermore, since long-term reliability is also very sensitive to trapped charges, a corresponding improvement in reliability is both expected theoretically and demonstrated experimentally. Our study paves the way for the construction of more stable and reliable 2D FETs in which the channel material is carefully chosen and tuned to maximize the energetic distance between charge carriers in the channel and the defect bands in the insulator employed.
|
physics
|
We consider alignment-dependent spin and heat transport across a magnon spin valve in the tunneling regime, i.e., a junction consisting of two weakly coupled ferromagnetic insulators. We determine the difference in spin and heat conductance between the parallel and antiparallel configuration of the magnetization direction. The dependence of these conductances on both the Gilbert damping and ellipticity is studied. We find that both magnon ellipticity and dissipation open channels for magnons to tunnel through in the antiparallel configuration. Our results highlight an important difference between electronic and magnon spin transport in spin-valve structures and may be important for the development of devices based on magnetic insulators.
|
condensed matter
|
This paper deals with non-parametric density estimation on $\mathbb{R}^2$ from i.i.d. observations. It is assumed that after unknown rotation of the coordinate system the coordinates of the observations are independent random variables whose densities belong to a H\"older class with unknown parameters. The minimax and adaptive minimax theories for this structural statistical model are developed.
|
mathematics
|
Geometry and topology are fundamental concepts, which underlie a wide range of fascinating physical phenomena such as topological states of matter and topological defects. In quantum mechanics, the geometry of quantum states is fully captured by the quantum geometric tensor. Using a qubit formed by an NV center in diamond, we perform the first experimental measurement of the complete quantum geometric tensor. Our approach builds on a strong connection between coherent Rabi oscillations upon parametric modulations and the quantum geometry of the underlying states. We then apply our method to a system of two interacting qubits, by exploiting the coupling between the NV center spin and a neighboring $^{13}$C nuclear spin. Our results establish coherent dynamical responses as a versatile probe for quantum geometry, and they pave the way for the detection of novel topological phenomena in solid state.
|
quantum physics
|
We present a novel strategy to automatically reconstruct 3D faces from monocular images with explicitly disentangled facial geometry (pose, identity and expression), reflectance (diffuse and specular albedo), and self-shadows. The scene lights are modeled as a virtual light stage with pre-oriented area lights used in conjunction with differentiable Monte-Carlo ray tracing to optimize the scene and face parameters. With correctly disentangled self-shadows and specular reflection parameters, we can not only obtain robust facial geometry reconstruction, but also gain explicit control over these parameters, with several practical applications. We can change facial expressions with accurate resultant self-shadows or relight the scene and obtain accurate specular reflection and several other parameter combinations.
|
computer science
|
To investigate the origin of elevated globular cluster abundances observed around Ultra-Diffuse Galaxies (UDGs), we simulate globular cluster populations hosted by UDGs formed through tidal heating. Specifically, globular cluster (GC) formation is modeled as occurring in regions of dense star formation. Because star-formation-rate-densities are higher at high redshift, dwarf galaxies in massive galaxy clusters, which formed most of their stars at high redshift, form a large fraction of their stars in globular clusters. Given that UDGs formed through environmental processes are more likely to be accreted at high redshift, these systems have more GCs than non-UDGs. In particular, our model predicts that massive UDGs have twice the GC mass of non-UDGs of similar stellar mass, in rough agreement with observations. Although this effect is somewhat diminished by GC disruption, we find that the relationship between GC mass fraction and cluster-centric distance, and the relationship between GC mass fraction and galaxy half-light radius are remarkably similar to observations. Among our model objects, both UDGs and non-UDGs present a correlation between halo mass and GC mass, although UDGs have lower dynamical masses at a given GC mass. Furthermore, because of the effectiveness of GC disruption, we predict that GCs around UDGs should have a more top heavy mass function than GCs around non-UDGs. This analysis suggests that dwarfs with older stellar populations, such as UDGs, should have higher globular cluster mass fractions than objects with young stellar populations, such as isolated dwarfs.
|
astrophysics
|
The Holographic Naturalness (HN) is a new paradigm towards an explanation of the Cosmological Constant (CC) and the Higgs Hierarchy (HH) in the Universe. Motivated by the Holographic Principle, and inspired by the (A)dS/CFT correspondence, we elaborate on the possibility and the cosmological consequences of a fundamental intrinsic disorder and temperature in "vacuo". We postulate that the zero vacuum entropy is provided by a large number of quantum hair fields, the "hairons". The quantum hairon gas in space-time induces an effective decoherence effect on the Standard Model (SM) particle sector. This leads to an entropic reinterpretation of UV divergent contributions to the CC and HH: we will show that, in both cases, the large number of re-scatterings on the hairon ensemble suppresses any radiative instabilities. The CC and HH problems are illusions envisaged by a conscious observer having access only to the limited amount of information from SM tests: both issues originate from our ignorance of the hidden entropy intrinsically stored in space-time. The HN suggests searching for effective decoherence effects in particle physics observables such as effective CPT, unitarity and energy violations. Regarding the HH, the HN does not introduce any new particles or interactions around the TeV scale: we do not expect any signatures, at the LHC or any future high-energy colliders, related to a Higgs UV completion in the Wilsonian sense.
|
high energy physics theory
|
Our understanding of the structure of the brain and its relationships with human traits is largely determined by how we represent the structural connectome. Standard practice divides the brain into regions of interest (ROIs) and represents the connectome as an adjacency matrix having cells measuring connectivity between pairs of ROIs. Statistical analyses are then heavily driven by the (largely arbitrary) choice of ROIs. In this article, we propose a novel tractography-based representation of brain connectomes, which clusters fiber endpoints to define a data adaptive parcellation targeted to explain variation among individuals and predict human traits. This representation leads to Principal Parcellation Analysis (PPA), representing individual brain connectomes by compositional vectors building on a basis system of fiber bundles that captures the connectivity at the population level. PPA reduces subjectivity and facilitates statistical analyses. We illustrate the proposed approach through applications to data from the Human Connectome Project (HCP) and show that PPA connectomes improve power in predicting human traits over state-of-the-art methods based on classical connectomes, while dramatically improving parsimony and maintaining interpretability. Our PPA package is publicly available on GitHub, and can be implemented routinely for diffusion tensor image data.
|
statistics
|
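A simplified sketch of the PPA pipeline described above, assuming k-means on pooled 3D fiber endpoints as the data-adaptive parcellation (the paper's clustering and basis construction are more refined); each subject's connectome is then encoded as a compositional vector over the learned parcels.

```python
import numpy as np
from sklearn.cluster import KMeans

def ppa_features(endpoints_per_subject, n_parcels=50):
    """Pool 3D fiber endpoints across subjects, cluster them into a
    data-adaptive parcellation, and encode each subject as the relative
    frequency of its endpoints over the learned parcels."""
    pooled = np.vstack(endpoints_per_subject)            # (total_pts, 3)
    km = KMeans(n_clusters=n_parcels, n_init=10, random_state=0).fit(pooled)
    feats = []
    for pts in endpoints_per_subject:
        counts = np.bincount(km.predict(pts), minlength=n_parcels)
        feats.append(counts / counts.sum())              # compositional vector
    return np.array(feats), km

# Toy usage: 5 'subjects', each with 400 random endpoints in a unit cube
rng = np.random.default_rng(0)
subjects = [rng.random((400, 3)) for _ in range(5)]
F, model = ppa_features(subjects, n_parcels=10)
```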
In this paper we present a deep X-ray observation of the nearby M dwarf GJ 357 and use it to put constraints on the atmospheric evolution of its planet, GJ 357 b. We also analyse the systematic errors in the stellar parameters of GJ 357 in order to see how they affect the perceived planetary properties. We estimate the age of GJ 357 b by comparing the observed X-ray luminosity of its host star, derived from a recent {\em XMM-Newton} observation {($\log{L_{\rm x}}\,{\rm [erg/s]} = 25.73$), with $L_{\rm x} -$ age relations for M dwarfs. We find that GJ 357 presents one of the lowest X-ray activity levels ever measured for an M dwarf, and we put a lower limit on its age of $5$\,Gyr.} Using this age limit, we perform a backwards reconstruction of the original primordial atmospheric reservoir. Furthermore, by considering the systematic errors in the stellar parameters, we find a range of possible planetary masses, radii, and densities. From the backwards reconstruction of GJ 357 b's irradiation history we find that the upper limit of its initial primordial atmospheric mass is $\sim \rm 38M_{\oplus}$. An initial atmospheric reservoir significantly larger than this may have survived through the X-ray and ultraviolet irradiation history, hence being inconsistent with current observations that suggest a telluric composition. In spite of the unlikelihood of a currently existing primordial envelope, volcanism and outgassing may have contributed to a secondary atmosphere. Under this assumption, we present three different synthetic infrared spectra for GJ 357 b that one might expect, consisting of $100\%~\rm CO_{2}$, $100\%~\rm SO_{2}$, and $75\%~ \rm N_{2}$, $24\%~\rm CO_{2}$ and $1\%~\rm H_{2}O$.
|
astrophysics
|
We develop a novel dynamical method to examine spatial interaction models (SIMs). For each SIM, we use our dynamical framework to model emigration patterns. We look at the resulting population distributions to see whether they are realistic. We use the US census data from 2010 and various spatial statistics to assess the success or failure of each model. While we looked at over eighty different SIMs, we focus here on two examples: the production-constrained gravity model and the Radiation model. The results suggest that all these models fail to produce realistic population distributions, and we identify the flaws within existing models. This leads us to suggest defining site attractiveness in terms of a second, short-range SIM, yielding a new spatial interaction model, the Two-Trip model, which offers significant improvements when examined via our method. We also note that our Two-Trip adaptation can be used in any spatial modelling context, not just emigration.
|
physics
|
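For reference, a sketch of the production-constrained gravity model that the abstract above takes as one of its two examples, with an exponential cost function as an illustrative choice; the Two-Trip idea would replace the static attractiveness vector with one generated by a second, short-range SIM.

```python
import numpy as np

def production_constrained_gravity(O, attract, cost, beta=1.0):
    """Flows T[i, j] from origin i (total outflow O[i]) to destination j
    with attractiveness attract[j], discounted by exp(-beta * cost[i, j]).
    Row normalization enforces the production constraint sum_j T[i,j] = O[i]."""
    W = attract[None, :] * np.exp(-beta * cost)
    return O[:, None] * W / W.sum(axis=1, keepdims=True)

O = np.array([100.0, 50.0])                       # emigrants per origin
attract = np.array([1.0, 2.0, 0.5])               # static site attractiveness
cost = np.array([[1.0, 2.0, 3.0], [2.0, 1.0, 1.5]])
print(production_constrained_gravity(O, attract, cost))
```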
Automatic 3D reconstruction of glia morphology is a powerful tool necessary for investigating the role of microglia in neurological disorders in the central nervous system. Current glia skeleton reconstruction techniques fail to capture an accurate tracing of the processes over time, useful for the study of the microglia motility and morphology in the brain during healthy and diseased states. We propose Hieroglyph, a fully automatic temporal 3D skeleton reconstruction algorithm for glia imaged via 3D multiphoton microscopy. Hieroglyph yielded a 21% performance increase compared to state of the art automatic skeleton reconstruction methods and outperforms the state of the art in different measures of consistency on datasets of 3D images of microglia. The results from this method provide a 3D graph and digital reconstruction of glia useful for a myriad of morphological analyses that could impact studies in brain immunology and disease.
|
electrical engineering and systems science
|
We introduce and study a class of entanglement criteria based on the idea of applying local contractions to an input multipartite state, and then computing the projective tensor norm of the output. More precisely, we apply to a mixed quantum state a tensor product of contractions from the Schatten class $S_1$ to the Euclidean space $\ell_2$, which we call entanglement testers. We analyze the performance of this type of criteria on bipartite and multipartite systems, for general pure and mixed quantum states, as well as on some important classes of symmetric quantum states. We also show that previously studied entanglement criteria, such as the realignment and the SIC POVM criteria, can be viewed inside this framework. This allows us to answer in the positive two conjectures of Shang, Asadian, Zhu, and G\"uhne by deriving systematic relations between the performance of these two criteria.
|
quantum physics
|
Quantum time dilation occurs when a clock moves in a superposition of relativistic momentum wave packets. We utilize the lifetime of an excited hydrogen-like atom as a clock to demonstrate how quantum time dilation manifests in a spontaneous emission process. The resulting emission rate differs when compared to the emission rate of an atom prepared in a mixture of momentum wave packets at order $v^2/c^2$. This effect is accompanied by a quantum correction to the Doppler shift due to the coherence between momentum wave packets. This quantum Doppler shift affects the spectral line shape at order $v/c$. However, its effect on the decay rate is suppressed when compared to the effect of quantum time dilation. We argue that spectroscopic experiments offer a technologically feasible platform to explore the effects of quantum time dilation.
|
quantum physics
|
Cyber-physical systems are often safety-critical in that violations of safety properties may lead to catastrophes. We propose a method to enforce the safety of systems with real-valued signals by synthesizing a runtime enforcer called the shield. Whenever the system violates a property, the shield, composed with the system, makes correction instantaneously to ensure that no erroneous output is generated by the combined system. While techniques for synthesizing Boolean shields are well understood, they do not handle real-valued signals ubiquitous in cyber-physical systems, meaning corrections may be either unrealizable or inefficient to compute in the real domain. We solve the realizability and efficiency problems by statically analyzing the compatibility of predicates defined over real-valued signals, and using the analysis result to constrain a two-player safety game used to synthesize the shield. We have implemented the method and demonstrated its effectiveness and efficiency on a variety of applications, including an automotive powertrain control system.
|
computer science
|
Tail averaging consists in averaging the last examples in a stream. Common techniques either have a memory requirement which grows with the number of samples to average, are not available at every time step, or do not accommodate growing windows. We propose two techniques with a low constant memory cost that perform tail averaging with access to the average at every time step. We also show how one can improve the accuracy of that average at the cost of increased memory consumption.
|
computer science
|
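A sketch of one way to meet the requirements stated above (constant memory, average available at every time step), using a generic two-buffer trick that reports a mean over the last n to 2n items; this illustrates the problem setting and is not necessarily either of the paper's two techniques.

```python
class ApproxTailAverager:
    """Reports, at every time step and in O(1) memory, the mean of the
    last n to 2n stream values via two rotating block sums."""
    def __init__(self, n):
        self.n = n
        self.have_prev, self.prev_sum = False, 0.0
        self.cur_sum, self.cur_cnt = 0.0, 0

    def update(self, x):
        self.cur_sum += x
        self.cur_cnt += 1
        if self.cur_cnt == self.n:        # block full: rotate buffers
            self.prev_sum, self.have_prev = self.cur_sum, True
            self.cur_sum, self.cur_cnt = 0.0, 0

    def mean(self):
        count = (self.n if self.have_prev else 0) + self.cur_cnt
        return (self.prev_sum + self.cur_sum) / max(count, 1)

avg = ApproxTailAverager(n=100)
for t in range(1000):
    avg.update(float(t))
print(avg.mean())   # approximates the mean of the most recent values
```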