Measurements of momentum-space correlations in heavy-ion reactions are unique tools to investigate the properties of the created medium. However, these analyses require the careful handling of final state interactions, such as the Coulomb repulsion of the involved particles. In small systems such as $e^+ + e^-$ or $p+p$, the well-known Gamow factor gives an acceptable description, but in the case of extended sources like those created in heavy-ion collisions, a more sophisticated approach has to be developed. In this paper we expand our previous work on the investigation of the Coulomb final state interaction in the presence of a L\'evy source. Such sources were shown to be a statistically acceptable assumption for describing the quantum-statistical correlation functions in high-energy heavy-ion reactions.
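For orientation, here is a minimal sketch of the point-source Gamow factor that the abstract contrasts with the Lévy-source treatment; the like-sign pion pairing, mass value, and momentum units are illustrative assumptions, not the paper's calculation:

```python
import numpy as np

ALPHA = 1 / 137.036   # fine-structure constant
M_PI = 139.57         # charged-pion mass in MeV (assumed particle species)

def gamow_like_sign(q):
    """Point-source Coulomb (Gamow) factor for a like-sign pion pair with
    relative momentum q in MeV; Sommerfeld parameter eta = alpha*m_pi/(2q)."""
    eta = ALPHA * M_PI / (2.0 * np.asarray(q, dtype=float))
    return 2 * np.pi * eta / np.expm1(2 * np.pi * eta)

print(gamow_like_sign(np.array([5.0, 20.0, 100.0])))  # suppression -> 1 as q grows
```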
high energy physics phenomenology
Sequences of correlated binary patterns can represent many types of time-series data, including text, movies, and biological signals. These patterns may be described by weighted combinations of a few dominant structures that underpin specific interactions among the binary elements. To extract the dominant correlation structures and their contributions to generating data in a time-dependent manner, we model the dynamics of binary patterns using the state-space model of an Ising-type network that is composed of multiple undirected graphs. We provide a sequential Bayes algorithm to estimate the dynamics of weights on the graphs while learning the graph structures online. This model can uncover overlapping graphs underlying the data better than a traditional orthogonal decomposition method, and outperforms an original time-dependent Ising model. We assess the performance of the method using simulated data, and demonstrate that spontaneous activity of cultured hippocampal neurons is represented by the dynamics of multiple graphs.
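As a toy illustration of the model class described here (an Ising-type network composed of multiple weighted undirected graphs), the following sketch evaluates the unnormalized log-probability of a binary pattern; the edge lists and weights are made-up examples, not estimates produced by the paper's sequential Bayes algorithm:

```python
import numpy as np

def log_unnorm_prob(x, graphs, weights):
    """Unnormalized log-probability of a binary pattern x in {-1,+1}^n under
    an Ising-type model built from weighted undirected graphs:
    log p(x) = sum_m w_m * sum_{(i,j) in G_m} x_i * x_j + const."""
    return sum(w * sum(x[i] * x[j] for i, j in edges)
               for w, edges in zip(weights, graphs))

# Example: two overlapping graphs on four binary units (assumed structure)
x = np.array([1, -1, 1, 1])
graphs = [[(0, 1), (1, 2)], [(2, 3), (0, 3)]]
print(log_unnorm_prob(x, graphs, [0.8, -0.3]))
```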
statistics
The paper presents the results of a study of the possibility of using the WKB approach to describe Inhomogeneous Travelling-Wave Accelerating Sections (ITWAS). This possibility not only simplifies the calculation, but also allows the use of simpler physical models of transient processes. Using the traveling wave concept simplifies the understanding of pulsed-excited ITWAS transients and the development of methods to mitigate their effect on beam parameters.
physics
There are diverse interdisciplinary applications for nanoscale resolution electrometry of elementary charges under ambient conditions. These include characterization of 2D electronics, charge transfer in biological systems, and measurement of fundamental physical phenomena. The nitrogen-vacancy center in diamond is uniquely capable of such measurements; however, electrometry has thus far been limited to charges within the same diamond lattice. It has been hypothesized that the failure to detect charges external to diamond is due to quenching and surface screening, but no proof, model, or design to overcome this has yet been proposed. In this work we affirm this hypothesis through a comprehensive theoretical model of screening and quenching within a diamond electrometer and propose a solution using controlled nitrogen doping and a fluorine-terminated surface. We conclude that successful implementation requires further work to engineer diamond surfaces with lower surface defect concentrations.
physics
The new vector resonance $X_1(2900)$ observed recently by LHCb in the $D^{-}K^{+}$ invariant mass distribution in the decay $B^{+} \to D^{+}D^{-}K^{+}$ is studied to uncover the internal structure of this state and calculate its physical parameters. In the present paper, the resonance $X_1(2900)$ is modeled as an exotic vector state, $J^P=1^-$, built of the light diquark $u^{T}C\gamma_5d$ and heavy antidiquark $\overline{c}\gamma_{\mu}\gamma_5C\overline{s}^{T}$. The mass and current coupling of $X_1(2900)$ are computed using the QCD two-point sum rule approach by taking into account various vacuum condensates up to dimension $10$. The width of the resonance $X_1(2900)$ is saturated by the two decay channels $X_1 \to D^{-}K^{+}$ and $X_1 \to \overline{D}^{0}K^{0}$. The strong couplings $g_1$ and $g_2$ corresponding to the vertices $X_1D^{-}K^{+}$ and $X_1\overline{D}^{0}K^{0}$ are evaluated in the context of the QCD light-cone sum rule method and technical tools of the soft-meson approximation. The results for the mass of the resonance $X_1(2900)$, $m=(2890~\pm 122)~\mathrm{MeV}$, and for its full width, $\Gamma_{\mathrm{full}}=(93\pm 13)~\mathrm{MeV}$, are smaller than the experimental values reported by the LHCb collaboration. Nevertheless, taking into account the theoretical and experimental uncertainties, the interpretation of the state $X_1(2900)$ as a vector tetraquark does not contradict the LHCb data. We also point out that analysis of the $D^{+}K^{+}$ invariant mass distribution in the same decay $B^{+} \to D^{+}D^{-}K^{+}$ may reveal doubly charged four-quark structures $[uc][\overline{s}\overline{d}]$.
high energy physics phenomenology
Even a small amplitude charge-density-wave (CDW) can reconstruct a Fermi surface, giving rise to new quantum oscillation frequencies. Here, we investigate quantum oscillations when the CDW has a finite correlation length $\xi$ -- a case relevant to the hole-doped cuprates. By considering the Berry phase induced by a spatially varying CDW phase, we derive an effective Dingle factor that depends exponentially on the ratio of the cyclotron orbit radius, $R_c$, to $\xi$. In the context of YBCO, we conclude that the values of $\xi$ reported to date for bidirectional CDW order are, prima facie, too short to account for the observed Fermi surface reconstruction; on the other hand, the values of $\xi$ for the unidirectional CDW are just long enough.
condensed matter
Statistical analysis of a graph often starts with embedding, the process of representing its nodes as points in space. How to choose the embedding dimension is a nuanced decision in practice, but in theory a notion of true dimension is often available. In spectral embedding, this dimension may be very high. However, this paper shows that existing random graph models, including graphon and other latent position models, predict the data should live near a much lower-dimensional set. One may therefore circumvent the curse of dimensionality by employing methods which exploit hidden manifold structure.
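A minimal sketch of adjacency spectral embedding as described here, placing each node at a row of $U_d |\Lambda_d|^{1/2}$; the eigendecomposition and scaling convention below is one common choice, not necessarily the paper's exact estimator:

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed nodes of a graph with (symmetric) adjacency matrix A into R^d
    using the d eigenpairs of largest absolute eigenvalue; rows are nodes."""
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))
```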
statistics
We present results from deep, wideband, high spatial and spectral resolution observations of the nearby luminous radio galaxy Cygnus A with the Jansky Very Large Array. The high surface brightness of this source enables detailed polarimetric imaging, providing images at $0.75\arcsec$ (spanning 2 - 18 GHz) and at $0.30\arcsec$ (6 - 18 GHz). The fractional polarization from 2000 independent lines of sight across the lobes decreases strongly with decreasing frequency, with the eastern lobe depolarizing at higher frequencies than the western lobe. The depolarization shows considerable structure, varying from monotonic to strongly oscillatory. The fractional polarization in general increases with increasing resolution at a given frequency, as expected. However, there are numerous lines of sight with more complicated behavior. We have fitted the $0.3\arcsec$ images with a simple model incorporating random, unresolved fluctuations in the cluster magnetic field to determine the high resolution, high-frequency properties of the source and the cluster. From these derived properties, we generate predicted polarization images of the source at lower frequencies, convolved to $0.75\arcsec$. These predictions are remarkably consistent with the observed emission. The observations are consistent with the lower-frequency depolarization being due to unresolved fluctuations on scales $\gtrsim$ 300 - 700 pc in the magnetic field and/or electron density superposed on a partially ordered field component. There is no indication in our data of the location of the depolarizing screen or the large-scale field; either or both could be located throughout the cluster or in a boundary region between the lobes and the cluster.
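A standard ingredient of depolarization models with unresolved Faraday-rotation fluctuations is the Burn (1966) law; the sketch below illustrates that generic behavior only, and is not the paper's fitted model (parameter names are placeholders):

```python
import numpy as np

def burn_depolarization(nu_ghz, p0, sigma_rm):
    """Fractional polarization vs frequency for external Faraday dispersion:
    p(lambda) = p0 * exp(-2 * sigma_RM^2 * lambda^4), lambda in metres,
    sigma_RM in rad m^-2."""
    lam = 0.29979 / np.asarray(nu_ghz, dtype=float)  # c / nu, in metres
    return p0 * np.exp(-2.0 * sigma_rm**2 * lam**4)

print(burn_depolarization(np.array([2.0, 6.0, 18.0]), p0=0.3, sigma_rm=20.0))
```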
astrophysics
Recently, a partitioned-block-based frequency-domain Kalman filter (PFKF) has been proposed for acoustic echo cancellation. Compared with the normal frequency-domain Kalman filter, the PFKF utilizes a partitioned-block structure, resulting in both fast convergence and low time latency. We present an analysis of the steady-state behavior of the PFKF and find that it suffers from a biased steady-state solution when the filter is of deficient length. Accordingly, we propose an effective modification that guarantees optimal steady-state behavior. Simulations are conducted to validate the improved performance of the proposed method.
electrical engineering and systems science
In this paper, we consider channel estimation for intelligent reflecting surface (IRS)-assisted millimeter wave (mmWave) systems, where an IRS is deployed to assist the data transmission from the base station (BS) to a user. It is shown that for the purpose of joint active and passive beamforming, the knowledge of a large-size cascade channel matrix needs to be acquired. To reduce the training overhead, the inherent sparsity in mmWave channels is exploited. By utilizing properties of Khatri-Rao and Kronecker products, we find a sparse representation of the cascade channel and convert cascade channel estimation into a sparse signal recovery problem. Simulation results show that our proposed method can provide an accurate channel estimate and achieve a substantial training overhead reduction.
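The Khatri-Rao step mentioned here rests on the linear-algebra identity $\mathrm{vec}(A\,\mathrm{diag}(x)\,B^T) = (B \odot A)\,x$, which turns a diagonally parameterized cascade into a linear model suitable for sparse recovery; a quick numerical check with random matrices of illustrative sizes:

```python
import numpy as np
from scipy.linalg import khatri_rao

# Identity underlying the cascade-channel sparse representation:
# vec(A @ diag(x) @ B.T) == khatri_rao(B, A) @ x   (column-major vec)
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(5, 3))
x = rng.normal(size=3)
lhs = (A @ np.diag(x) @ B.T).flatten(order="F")
rhs = khatri_rao(B, A) @ x
print(np.allclose(lhs, rhs))   # True
```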
electrical engineering and systems science
We present a user-friendly image editing system that supports drag-and-drop object insertion (the user merely drags objects into the image, and the system automatically places them in 3D and relights them appropriately), post-process illumination editing, and depth-of-field manipulation. Underlying our system is a fully automatic technique for recovering a comprehensive 3D scene model (geometry, illumination, diffuse albedo and camera parameters) from a single, low dynamic range photograph. This is made possible by two novel contributions: an illumination inference algorithm that recovers a full lighting model of the scene (including light sources that are not directly visible in the photograph), and a depth estimation algorithm that combines data-driven depth transfer with geometric reasoning about the scene layout. A user study shows that our system produces perceptually convincing results, and achieves the same level of realism as techniques that require significant user interaction.
computer science
We constrain the spectral index of polarized synchrotron emission, $\beta_s$, by correlating the recently released 2.3 GHz S-Band Polarization All Sky Survey (S-PASS) data with the 23 GHz 9-year Wilkinson Microwave Anisotropy Probe (WMAP) sky maps. We subdivide the S-PASS field, which covers the southern ecliptic hemisphere, into 95 $15^{\circ}\times15^{\circ}$ regions and estimate the spectral index of polarized synchrotron emission within each region using a simple but robust T-T plot technique. Three different versions of the S-PASS data are considered, corresponding to: no correction for Faraday rotation; Faraday correction based on the rotation measure model presented by the S-PASS team; or Faraday correction based on a rotation measure model presented by Hutschenreuter and En{\ss}lin. We find that the correlation between S-PASS and WMAP is strongest when applying the S-PASS model. Adopting this correction model, we find that the mean spectral index of polarized synchrotron emission gradually steepens from $\beta_s\approx-2.8$ at low Galactic latitudes to $\beta_s\approx-3.3$ at high Galactic latitudes, in good agreement with previously published results. Finally, we consider two special cases defined by the BICEP2 and SPIDER fields and obtain mean estimates of $\beta_{BICEP2}=-3.22\pm0.06$ and $\beta_{SPIDER}=-3.21\pm0.03$, respectively. A comparison with a similar analysis performed in the 23-33 GHz range suggests a flattening of about $\Delta\beta_s \sim 0.1 \pm 0.2$ from low to higher frequencies, but with no statistical significance due to high uncertainties.
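A minimal sketch of the T-T plot idea used here: with $T \propto \nu^{\beta_s}$, the slope of a straight-line fit of one frequency map against the other fixes the spectral index. This shows only the generic technique; the paper's masking and error treatment are not reproduced:

```python
import numpy as np

def tt_spectral_index(t1, t2, nu1, nu2):
    """T-T plot estimate of the spectral index beta (T ∝ nu^beta):
    fit t2 = m*t1 + c over matched pixels, then beta = ln(m)/ln(nu2/nu1).
    Requires a positive fitted slope m."""
    m, _ = np.polyfit(t1, t2, 1)
    return np.log(m) / np.log(nu2 / nu1)

# Example: beta = -3 synthetic maps at 2.3 and 23 GHz recover the input index
rng = np.random.default_rng(0)
t1 = rng.normal(size=1000)
t2 = (23 / 2.3) ** (-3.0) * t1 + 1e-4 * rng.normal(size=1000)
print(tt_spectral_index(t1, t2, 2.3, 23.0))   # ~ -3.0
```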
astrophysics
Gender differences in human behaviour have attracted generations of social scientists, who have explored whether males and females act differently in domains involving competition, risk taking, cooperation, altruism, honesty, as well as many others. Yet, little is known about gender differences in the equity-efficiency trade-off. It has been suggested that females are more equitable than males, but the empirical evidence is weak and inconclusive. This gap is particularly important, because people with the power to redistribute resources often face a conflict between equity and efficiency, to the point that this trade-off has been called "the central problem in distributive justice". The recently introduced Trade-Off Game (TOG) - in which a decision-maker has to unilaterally choose between being equitable or being efficient - offers a unique opportunity to fill this gap. To this end, I analyse gender differences on a large dataset including all N=5,056 TOG decisions collected by my research group since we introduced this game. The results show that females prefer equity over efficiency to a greater extent than males do. These findings suggest that males and females have different preferences for resource distribution, and point to new avenues for future research.
physics
We investigate diffusive nanowire-based structures with two normal terminals on the sides and a central superconducting island in the middle, which is either grounded or floating. Using a semiclassical calculation we demonstrate that both device layouts permit a quantitative measurement of the energy-dependent sub-gap thermal conductance $G_\mathrm{th}$ from the spectral density of the current noise. In the floating case this goal is achieved without the need to contact the superconductor provided the device is asymmetric, which may be attractive from an experimental point of view. Our calculations are directly applicable to the multi-mode case and can serve as a starting point for understanding the shot noise response in open one-dimensional Majorana devices.
condensed matter
This paper describes preliminary work in the recent promising approach of generating synthetic training data for facilitating the learning procedure of deep learning (DL) models, with a focus on aerial photos produced by unmanned aerial vehicles (UAV). The general concept and methodology are described, and preliminary results are presented, based on a classification problem of fire identification in forests as well as a counting problem of estimating the number of houses in urban areas. The proposed technique constitutes a new possibility for the DL community, especially related to UAV-based imagery analysis, with much potential, promising results, and unexplored ground for further research.
computer science
Enzymes speed up biochemical reactions at the core of life by as much as 15 orders of magnitude. Yet, despite considerable advances, the fine dynamical determinants at the microscopic level of their catalytic proficiency are still elusive. In this work, we use a powerful mathematical approach to show that rate-promoting vibrations in the picosecond range, specifically encoded in the 3D protein structure, are localized vibrations optimally coupled to the chemical reaction coordinates at the active site. The universality of these features is demonstrated on a pool of more than 900 enzyme structures, comprising a total of more than 10,000 experimentally annotated catalytic sites. Our theory provides a natural microscopic rationale for the known subtle structural compactness of active sites in enzymes.
physics
Via symbolic computation we deduce 97 new type series for powers of $\pi$ related to Ramanujan-type series. Here are three typical examples: $$\sum_{k=0}^\infty \frac{P(k) \binom{2k}k\binom{3k}k \binom{6k}{3k}}{(k+1)(2k-1)(6k-1)(-640320)^{3k}} =\frac{18\times557403^3\sqrt{10005}}{5\pi}$$ with \begin{align*}P(k) = &637379600041024803108 k^2 + 657229991696087780968 k \\&+ 19850391655004126179, \end{align*} $$\sum_{k=1}^\infty \frac{(3k+1)16^k}{(2k+1)^2k^3\binom{2k}k^3} = \frac{\pi^2-8}2,$$ and $$\sum_{n=0}^\infty\frac{3n+1}{(-100)^n} \sum_{k=0}^n{n\choose k}^2T_k(1,25)T_{n-k}(1,25) = \frac{25}{8\pi},$$ where the generalized central trinomial coefficient $T_k(b,c)$ denotes the coefficient of $x^k$ in the expansion of $(x^2+bx+c)^k$. We also formulate a general characterization of rational Ramanujan-type series for $1/\pi$ via congruences, and pose 117 new conjectural series for powers of $\pi$ via looking for corresponding congruences. For example, we conjecture that $$\sum_{k=0}^\infty\frac{39480k+7321}{(-29700)^k}T_k(14,1)T_k(11,-11)^2=\frac{6795\sqrt5}{\pi}.$$ Eighteen of the new series in this paper involve some imaginary quadratic fields with class number $8$.
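The second displayed series is easy to check numerically; a minimal sketch of its partial sums:

```python
from math import comb, pi

# Partial sums of sum_{k>=1} (3k+1)*16^k / ((2k+1)^2 * k^3 * C(2k,k)^3),
# which should converge to (pi^2 - 8)/2 ~ 0.9348022
s = 0.0
for k in range(1, 60):
    s += (3 * k + 1) * 16.0**k / ((2 * k + 1) ** 2 * k**3 * comb(2 * k, k) ** 3)
print(s, (pi**2 - 8) / 2)   # both ~0.9348022
```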
mathematics
In this paper, we introduce a new extension of Singular Spectrum Analysis (SSA), called functional SSA, to analyze functional time series. The new methodology is developed by integrating ideas from functional data analysis and univariate SSA. We explore the advantages of functional SSA in terms of simulation results and two real data applications. We compare the proposed approach with Multivariate SSA (MSSA) and dynamic Functional Principal Component Analysis (dFPCA). The results suggest that further improvement over MSSA is possible, and the new method provides an attractive alternative to the dFPCA approach for analyzing correlated functions. We apply the proposed technique to a remote sensing application and a call center dataset. We have also developed an efficient and user-friendly R package and a Shiny web application to allow interactive exploration of the results.
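For readers unfamiliar with the univariate building block, here is a minimal sketch of basic SSA (embedding, SVD, grouping, diagonal averaging); the functional extension introduced in the paper is not reproduced:

```python
import numpy as np

def ssa_reconstruct(x, L, components):
    """Basic univariate SSA: embed x into an L x K trajectory matrix, SVD it,
    keep the selected components, then diagonally average (Hankelize)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])     # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    rec, cnt = np.zeros(N), np.zeros(N)
    for i in range(L):                                      # anti-diagonal average
        for j in range(K):
            rec[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return rec / cnt
```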
statistics
We construct confluent conformal blocks of the second kind of the Virasoro algebra. We also construct the Stokes transformations which map such blocks in one Stokes sector to another. In the BPZ limit, we verify explicitly that the constructed blocks and the associated Stokes transformations reduce to solutions of the confluent BPZ equation and its Stokes matrices, respectively. Both the confluent conformal blocks and the Stokes transformations are constructed by taking suitable confluent limits of the crossing transformations of the four-point Virasoro conformal blocks.
high energy physics theory
A class of models is considered in which the masses only of the third generation of quarks and leptons arise in the tree approximation, while masses for the second and first generations are produced respectively by one-loop and two-loop radiative corrections. So far, for various reasons, these models are not realistic.
high energy physics theory
In this thesis, we aim to understand the microscopic details and origin of the Cosmological Horizon, produced by a static observer in four-dimensional de Sitter (dS$_4$) spacetime. We consider a deformed extension of dS spacetime by means of a single $\mathbb Z_q$ quotient, which resembles an Orbifold geometry. The Orbifold parameter induces a pair of codimension-two minimal surfaces given by 2-spheres in the Euclidean geometry. Using dimensional reduction on the two-dimensional plane where the minimal surfaces have support, we use Liouville field theory and the Kerr/CFT mechanism in order to describe the underlying degrees of freedom of the Cosmological Horizon. We then show that, in the large-$q$ limit, this pair of codimension-two surfaces can be realized as the conformal boundaries of dS$_3$. We notice that the central charge obtained using Liouville theory in the latter limit corresponds to the Strominger central charge obtained in the context of the dS/CFT correspondence. In addition, a formulation of entanglement entropy for de Sitter spacetimes is given in terms of dS holography, along with a different approach in which the entanglement between two disconnected bulk observers is described in terms of the topology of the spacetime. A quarter-of-the-area formula is therefore proposed, in which the area corresponds to the area of the set of fixed points of an $S^2/\mathbb Z_q$ orbifold.
high energy physics theory
Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2D and 3D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2D and 3D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few million pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion) as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2D steady-state flow and 3D transient hydraulic tomography with and without direct conditioning data are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2D case, the inversion rapidly explores the posterior model distribution. For the 3D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.
statistics
We consider the Fano scheme $F_k(X)$ of $k$--dimensional linear subspaces contained in a complete intersection $X \subset \mathbb{P}^n$ of multi--degree $\underline{d} = (d_1, \ldots, d_s)$. Our main result is an extension of a result of Riedl and Yang concerning Fano schemes of lines on very general hypersurfaces: we consider the case when $X$ is a very general complete intersection and $\Pi_{i=1}^s d_i > 2$ and we find conditions on $n$, $\underline{d}$ and $k$ under which $F_k(X)$ does not contain either rational or elliptic curves. At the end of the paper, we study the case $\Pi_{i=1}^s d_i = 2$.
mathematics
In recent years, generative adversarial network (GAN) based models have been successfully applied to unsupervised speech-to-speech conversion. The rich, compact harmonic view of the magnitude spectrogram is considered a suitable choice for training these models with audio data. To reconstruct the speech signal, a magnitude spectrogram is first generated by the neural network, which is then utilized by methods like the Griffin-Lim algorithm to reconstruct a phase spectrogram. This procedure bears the problem that the generated magnitude spectrogram may not be consistent, which is required for finding a phase such that the full spectrogram has a natural-sounding speech waveform. In this work, we approach this problem by proposing a condition encouraging spectrogram consistency during the adversarial training procedure. We demonstrate our approach on the task of translating the voice of a male speaker to that of a female speaker, and vice versa. Our experimental results on the Librispeech corpus show that the model trained with the TF-consistency condition provides perceptually better speech-to-speech conversion quality.
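The consistency notion referred to here can be probed by one iSTFT/STFT round trip: a magnitude spectrogram is consistent exactly when it survives that projection unchanged. A minimal sketch of such a check (window and frame parameters are illustrative, and this is not the paper's training condition itself):

```python
import numpy as np
from scipy.signal import stft, istft

def consistency_gap(mag, nperseg=512):
    """Relative distance between a magnitude spectrogram and the magnitude of
    the STFT of its own inverse STFT (with random phase); ~0 means consistent.
    mag must have nperseg//2 + 1 frequency rows (one-sided STFT)."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    _, x = istft(mag * phase, nperseg=nperseg)
    _, _, S = stft(x, nperseg=nperseg)
    T = min(S.shape[1], mag.shape[1])
    return np.linalg.norm(np.abs(S[:, :T]) - mag[:, :T]) / np.linalg.norm(mag[:, :T])
```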
electrical engineering and systems science
In this work we introduce the Dual Boson Diagrammatic Monte Carlo technique for strongly interacting electronic systems. This method combines the strength of dynamical mean-field theory for the non-perturbative description of local correlations with the systematic account of non-local corrections in the Dual Boson theory by the diagrammatic Monte Carlo approach. It allows us to get a numerically exact solution of the dual boson theory at the two-particle local vertex level for the extended Hubbard model. We show that it can be efficiently applied to the description of single-particle observables in a wide range of interaction strengths. We compare our exact results for the self-energy with the ladder Dual Boson approach and determine a physical regime where the description of collective electronic effects requires more accurate consideration beyond the ladder approximation. Additionally, we find that the order-by-order analysis of the perturbative diagrammatic series for the single-particle Green's function allows us to estimate the transition point to the charge density wave phase.
condensed matter
It is argued that the results of recent precision measurements by LHCb of the lifetimes of charmed and bottom hyperons are fully consistent with the description within the heavy quark expansion, allow one to accurately determine matrix elements of light-flavor nonsinglet four-quark operators over the hyperons, and accurately reproduce the difference of lifetimes of $\Lambda_b$ and $\Xi_b^-$. When combined with the recent LHCb results on the decay $\Xi_b^- \to \Lambda_b \pi^-$, this leads to the prediction of a lower bound on the rate of the decays $\Xi_c \to \Lambda_c \pi$.
high energy physics phenomenology
Inspired by the recent development on calculating the free energy change via a relaxation process [Nat. Phys. 14, 842 (2018)], we investigate the role of heat released in an irreversible relaxation following a large perturbation. Utilizing a derivation without microscopic reversibility, we arrive at a new free energy estimator that employs a volume term to account for missing important rare events. Applications to harmonic oscillators and particle insertion in Lennard-Jones fluid agree well with the (numerical) exact solutions. Our study hence suggests an alternative interpretation to the insufficient sampling problem in free energy calculations.
physics
Rotating radio transients (RRATs) are peculiar astronomical objects whose emission mechanism remains under investigation. In this paper, we present observations of three RRATs, J1538+2345, J1854+0306 and J1913+1330, observed with the Five-hundred-meter Aperture Spherical radio Telescope (FAST). Specifically, we analyze the mean pulse profiles and temporal flux density evolutions of the RRATs. Owing to the high sensitivity of FAST, the derived burst rates of the three RRATs are higher than those in previous reports. RRAT J1854+0306 exhibited a time-dynamic mean pulse profile, whereas RRAT J1913+1330 showed distinct radiation and nulling segments on its pulse intensity trains. The mean pulse profile variation with frequency is also studied for RRAT J1538+2345 and RRAT J1913+1330, and the profiles at different frequencies could be well fitted with a cone-core model and a conal-beam model, respectively.
astrophysics
We demonstrate what are, to our knowledge, the shortest pulses directly generated to date from a solid-state laser mode locked with a graphene saturable absorber (GSA). In the experiments, a low-threshold diode-pumped Cr3+:LiSAF laser was used near 850 nm. At a pump power of 275 mW provided by two pump diodes, the Cr3+:LiSAF laser produced nearly transform-limited, 19-fs pulses with an average output power of 8.5 mW. The repetition rate was around 107 MHz, corresponding to a pulse energy and peak power of 79 pJ and 4.2 kW, respectively. Once mode locking was initiated with the GSA, stable, uninterrupted femtosecond pulse generation could be obtained. In addition, the femtosecond output of the laser could be tuned from 836 nm to 897 nm with pulse durations in the range of 80-190 fs. We further performed detailed mode locking initiation tests across the full cavity stability range of the laser to verify that pulse generation was indeed started by the GSA and not by Kerr lens mode locking.
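The quoted pulse energy and peak power follow directly from the average power, repetition rate, and pulse duration; a quick check of the arithmetic:

```python
# Pulse energy and peak power from the reported laser parameters
P_avg = 8.5e-3      # W, average output power
f_rep = 107e6       # Hz, repetition rate
tau = 19e-15        # s, pulse duration
E = P_avg / f_rep   # ~7.9e-11 J  = 79 pJ
P_peak = E / tau    # ~4.2e3 W = 4.2 kW (pulse-shape factor neglected)
print(E, P_peak)
```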
physics
We propose ZeroSARAH -- a novel variant of the variance-reduced method SARAH (Nguyen et al., 2017) -- for minimizing the average of a large number of nonconvex functions $\frac{1}{n}\sum_{i=1}^{n}f_i(x)$. To the best of our knowledge, in this nonconvex finite-sum regime, all existing variance-reduced methods, including SARAH, SVRG, SAGA and their variants, need to compute the full gradient over all $n$ data samples at the initial point $x^0$, and then periodically compute the full gradient once every few iterations (for SVRG, SARAH and their variants). Moreover, SVRG, SAGA and their variants typically achieve weaker convergence results than variants of SARAH: $n^{2/3}/\epsilon^2$ vs. $n^{1/2}/\epsilon^2$. ZeroSARAH is the first variance-reduced method which does not require any full gradient computations, not even for the initial point. Moreover, ZeroSARAH obtains new state-of-the-art convergence results, which can improve the previous best-known result (given by e.g., SPIDER, SpiderBoost, SARAH, SSRGD and PAGE) in certain regimes. Avoiding any full gradient computations (which is a time-consuming step) is important in many applications as the number of data samples $n$ usually is very large. Especially in the distributed setting, periodic computation of full gradient over all data samples needs to periodically synchronize all machines/devices, which may be impossible or very hard to achieve. Thus, we expect that ZeroSARAH will have a practical impact in distributed and federated learning where full device participation is impractical.
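For context, here is a minimal sketch of the classical SARAH estimator that ZeroSARAH builds on, including the periodic full-gradient computations that ZeroSARAH is designed to remove (the zero-full-gradient estimator itself is not reproduced; `grad_i` is an assumed per-sample gradient oracle):

```python
import numpy as np

def sarah(grad_i, x0, n, lr=0.1, epochs=10, m=None, seed=0):
    """Classical SARAH (Nguyen et al., 2017): a full gradient at each epoch
    start, then the recursion v_t = grad_i(x_t) - grad_i(x_{t-1}) + v_{t-1}.
    ZeroSARAH removes the full-gradient steps; that variant is not shown."""
    rng = np.random.default_rng(seed)
    m = m or n
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(epochs):
        v = np.mean([grad_i(x, i) for i in range(n)], axis=0)  # full gradient
        x_prev, x = x, x - lr * v
        for _ in range(m):
            i = rng.integers(n)
            v = grad_i(x, i) - grad_i(x_prev, i) + v           # SARAH recursion
            x_prev, x = x, x - lr * v
    return x
```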
computer science
We have performed ab-initio lattice dynamics and molecular dynamics studies of Li2X (X=O, S and Se) to understand the ionic conduction in these compounds. The inelastic neutron scattering measurements on Li2O have been performed across its superionic transition temperature of about 1200 K. The experimental spectra show significant changes around the superionic transition temperature, which is attributed to large diffusion of lithium as well as its large vibrational amplitude. We have identified a correlation between the chemical pressure (ionic radius of X atom) and the superionic transition temperature. The simulations are able to provide the ionic diffusion pathways in Li2X.
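Lithium diffusion in such molecular dynamics studies is conventionally quantified through the Einstein relation; a minimal sketch (single time origin, illustrative only, not the paper's analysis pipeline):

```python
import numpy as np

def diffusion_coefficient(pos, dt):
    """Einstein relation in 3D: MSD(t) ≈ 6*D*t at long times.
    pos: unwrapped trajectories of shape (n_frames, n_atoms, 3); dt: time step.
    Returns D from the slope of the second half of the MSD curve."""
    disp = pos - pos[0]
    msd = (disp**2).sum(axis=2).mean(axis=1)
    t = np.arange(len(msd)) * dt
    half = len(t) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / 6.0
```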
condensed matter
The minimal gauged U(1)$_{L_\mu-L_\tau}$ model is a simple extension of the Standard Model and has a strong predictive power for the neutrino sector. In particular, the mass spectrum and couplings of heavy right-handed neutrinos are determined as functions of three neutrino Dirac Yukawa couplings, with which we can evaluate the baryon asymmetry of the Universe generated through their decay, i.e., leptogenesis. In this letter, we study leptogenesis in the minimal gauged U(1)$_{L_\mu-L_\tau}$ model. It turns out that the sign of the resultant baryon asymmetry for the case with the Dirac CP phase, $\delta$, larger than $\pi$ is predicted to be opposite to that for $\delta < \pi$. In addition, if lepton asymmetry is dominantly produced by the decay of the lightest right-handed neutrino, then the correct sign of baryon asymmetry is obtained for $\delta > \pi$, which is favored by the current neutrino-oscillation experiments, whilst the wrong sign is obtained for $\delta < \pi$. We further investigate a non-thermal leptogenesis scenario where the U(1)$_{L_\mu-L_\tau}$ breaking field plays the role of the inflaton and decays into right-handed neutrinos, as a concrete example. It is found that this simple framework offers successful inflation consistent with the CMB observation. We then show that the observed amount of baryon asymmetry can be reproduced in this scenario, with its sign predicted to be positive in most of the parameter space.
high energy physics phenomenology
The achievable instrument sensitivity is a critical parameter for the continued development of THz applications. Cavity-enhanced techniques such as cavity ring-down spectroscopy have not yet been employed at THz frequencies due to the difficulty of constructing a high-finesse Fabry-P\'erot cavity. Here, we describe such a THz resonator based on a low-loss oversized corrugated waveguide with highly reflective photonic mirrors, obtaining a finesse above 3000 around 620 GHz. These components enable a Fabry-P\'erot THz absorption spectrometer with an equivalent interaction length of one kilometer, giving access to line intensities as low as $10^{-27}$ cm$^{-1}$/(molecule/cm$^2$) with an S/N ratio of 3. In addition, the intracavity optical power has allowed the Lamb-dip effect to be studied with a low-power emitter; an absolute frequency accuracy better than 5 kHz can easily be obtained, providing an additional solution for rotational spectroscopy.
physics
We present a hardware agnostic error mitigation algorithm for near term quantum processors inspired by the classical Lanczos method. This technique can reduce the impact of different sources of noise at the sole cost of an increase in the number of measurements to be performed on the target quantum circuit, without additional experimental overhead. We demonstrate through numerical simulations and experiments on IBM Quantum hardware that the proposed scheme significantly increases the accuracy of cost functions evaluations within the framework of variational quantum algorithms, thus leading to improved ground-state calculations for quantum chemistry and physics problems beyond state-of-the-art results.
quantum physics
Automatic speech recognition (ASR) systems play a key role in many commercial products, including voice assistants. Typically, they require large amounts of clean speech data for training, which gives an undue advantage to large organizations that hold vast amounts of private data. In this paper, we first curate a fairly large dataset using publicly available data sources. Thereafter, we investigate whether publicly available noisy data can be used to train robust ASR systems. We use speech enhancement to clean the noisy data first and then use it together with its cleaned version to train ASR systems. We find that using speech enhancement gives a 9.5\% better word error rate than training on just noisy data and 9\% better than training on just clean data. Its performance is also comparable to the ideal-case scenario of training on noisy data together with its clean version.
electrical engineering and systems science
Atomic synapses represent a special class of memristors whose operation relies on the formation of metallic nanofilaments bridging two electrodes across an insulator. Due to the magnifying effect of this narrowest cross-section on the device conductance, a nanometer scale displacement of a few atoms grants access to various resistive states at ultimately low energy costs, satisfying the fundamental requirements of neuromorphic computing hardware. Yet, device engineering lacks the complete quantum characterization of such filamentary conductance. Here we analyze multiple Andreev reflection processes emerging at the filament terminals when superconducting electrodes are utilized. Thereby the quantum PIN code, i.e. the transmission probabilities of each individual conduction channel contributing to the conductance of the nanojunctions is revealed. Our measurements on Nb$_2$O$_5$ resistive switching junctions provide a profound experimental evidence that the onset of the high conductance ON state is manifested via the formation of truly atomic-sized metallic filaments.
condensed matter
We propose a new deformed Rieffel product for functions in de Sitter spacetime, and study the resulting deformation of quantum field theory in de Sitter using warped convolutions. This deformation is obtained by embedding de Sitter in a higher-dimensional Minkowski spacetime, deforming there using the action of translations and subsequently projecting back to de Sitter. We determine the two-point function of a deformed free scalar quantum field, which differs from the undeformed one, in contrast to the results in deformed Minkowski spacetime where they coincide. Nevertheless, we show that in the limit where de Sitter spacetime becomes flat, we recover the well-known non-commutative Minkowski spacetime.
high energy physics theory
The statistics of the supersymmetry breaking scale in the string landscape has been extensively studied in the past finding either a power-law behaviour induced by uniform distributions of F-terms or a logarithmic distribution motivated by dynamical supersymmetry breaking. These studies focused mainly on type IIB flux compactifications but did not systematically incorporate the K\"ahler moduli. In this paper we point out that the inclusion of the K\"ahler moduli is crucial to understand the distribution of the supersymmetry breaking scale in the landscape since in general one obtains unstable vacua when the F-terms of the dilaton and the complex structure moduli are larger than the F-terms of the K\"ahler moduli. After taking K\"ahler moduli stabilisation into account, we find that the distribution of the gravitino mass and the soft terms is power-law only in KKLT and perturbatively stabilised vacua which therefore favour high scale supersymmetry. On the other hand, LVS vacua feature a logarithmic distribution of soft terms and thus a preference for lower scales of supersymmetry breaking. Whether the landscape of type IIB flux vacua predicts a logarithmic or power-law distribution of the supersymmetry breaking scale thus depends on the relative preponderance of LVS and KKLT vacua.
high energy physics theory
We study the regularization dependence of the Nambu--Jona-Lasinio (NJL) model predictions for some properties of magnetized quark matter at zero temperature (and baryonic density) in the mean field approximation. The model-parameter dependence for each regularization procedure is also analyzed in detail. We calculate the average and difference of the quark condensates using different regularization methods and compare with recent lattice results. In this context, the reliability of the different regularization procedures is discussed.
high energy physics phenomenology
An application of the quantum wave impedance method to the study of quantum-mechanical systems which contain singular zero-range potentials is considered. It is shown how to reformulate the problem of investigating such systems in terms of a quantum wave impedance. As a result, both the scattering and bound-state problems are solved for systems of single $\delta$, double $\delta$ and single $\delta-\delta'$ potentials. The formalization of solving systems with an arbitrary combination of a piecewise-constant potential and $\delta$-potentials with the help of the quantum wave impedance approach is described.
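Any such impedance-based calculation can be benchmarked against the textbook transmission coefficient for a single $\delta$ potential; a minimal sketch of that reference result:

```python
import numpy as np

def delta_transmission(E, alpha, m=1.0, hbar=1.0):
    """Transmission probability through V(x) = alpha * delta(x):
    T(E) = 1 / (1 + m*alpha^2 / (2*hbar^2*E)), the standard textbook result
    that an impedance-based scattering calculation should reproduce."""
    E = np.asarray(E, dtype=float)
    return 1.0 / (1.0 + m * alpha**2 / (2.0 * hbar**2 * E))

print(delta_transmission(np.array([0.1, 1.0, 10.0]), alpha=1.0))  # -> 1 as E grows
```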
quantum physics
Crystallization represents the prime example of a disorder-order transition. In realistic situations, however, container walls and impurities are frequently present and hence crystallization is heterogeneously seeded. Rarely are the seeds perfectly compatible with the thermodynamically favoured crystal structure, and thus they induce elastic distortions, which impede further crystal growth. Here we use a colloidal model system, which not only allows us to quantitatively control the induced distortions but also to visualize and follow heterogeneous crystallization with single-particle resolution. We determine the sequence of intermediate structures by confocal microscopy and computer simulations, and develop a theoretical model that describes our findings. The crystallite first grows on the seed but then, on reaching a critical size, detaches from the seed. The detached and relaxed crystallite continues to grow, except close to the seed, which now prevents crystallization. Hence, crystallization seeds facilitate crystallization only during initial growth and then act as impurities.
condensed matter
We demonstrate up to 12 km, 56 Gb/s DMT transmission using high-speed VCSELs in the 1.5 µm wavelength range for future 400 Gb/s intra-data center interconnects, enabled by vestigial sideband filtering of the transmit signal.
electrical engineering and systems science
NIST SP800-22 is one of the most widely used statistical testing tools for pseudorandom number generators (PRNGs). This tool consists of 15 tests (one-level tests) and two additional tests (two-level tests). Each one-level test provides one or more $p$-values. The two-level tests measure the uniformity of the obtained $p$-values for a fixed one-level test. One of the two-level tests categorizes the $p$-values into ten intervals of equal length and applies a chi-squared goodness-of-fit test. This two-level test is often more powerful than one-level tests, but sometimes it rejects even good PRNGs when the sample size at the second level is too large, since it detects approximation errors in the computation of $p$-values. In this paper, we propose a practical upper limit on the sample size in this two-level test, for each of six tests appearing in SP800-22. These upper limits are derived from the chi-squared discrepancy between the distribution of the approximated $p$-values and the uniform distribution $U(0, 1)$. We also computed a "risky" sample size at the second level for each one-level test. Our experiments show that the two-level test with the proposed upper limit gives appropriate results, while using the risky size often rejects even good PRNGs. We also propose another improvement: to use the exact probability of each of the ten categories in the goodness-of-fit computation at the second level. This allows us to increase the sample size at the second level, and would make the test more sensitive than NIST's recommended usage.
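A minimal sketch of the second-level test described here (ten equal-width bins, chi-squared goodness of fit against $U(0,1)$); the paper's refinement would replace the uniform expectation with the exact bin probabilities via the `f_exp` argument:

```python
import numpy as np
from scipy.stats import chisquare

def second_level_test(pvalues, bins=10):
    """NIST SP800-22 second-level test: chi-squared goodness-of-fit of
    first-level p-values against U(0,1) over `bins` equal-width intervals.
    With f_obs only, scipy assumes equal expected counts len(pvalues)/bins;
    exact bin probabilities would be passed through f_exp instead."""
    observed, _ = np.histogram(pvalues, bins=bins, range=(0.0, 1.0))
    return chisquare(observed)

rng = np.random.default_rng(0)
print(second_level_test(rng.random(1000)))   # uniform input: large p expected
```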
statistics
We demonstrate an unsuspected freedom in physics, by showing an essential unpredictability in the relation between the behavior of clocks on the workbench and explanations of that behavior written in symbols on the blackboard. In theory, time and space are defined by clocks synchronized as specified by relations among clock readings at the transmission and reception of light signals; however spacetime curvature implies obstacles to this synchronization. Recognizing the need to handle bits and other symbols in both theory and experiment, we offer a novel theory of symbol handling, centered on a kind of "logical synchronization," distinct from the synchronization defined by Einstein in special relativity. We present three things: (1) We show a need in physics, stemming from general relativity, for physicists to make choices about what clocks to synchronize with what other clocks. (2) To exploit the capacity to make choices of synchronization, we provide a theory in which to express timing relations between transmitted symbols and the clock readings of the agent that receives them, without relying on any global concept of "time". Dispensing with a global time variable is a marked departure from current practice. (3) The recognition of unpredictability calls for more attention to behavior on the workbench of experiment relative to what can be predicted on the blackboard. As a prime example, we report on the "horse race" situation of an agent measuring the order of arrival of two symbols, to show how order determinations depart from any possible assignment of values of a time variable.
physics
We present our results of the spectroscopic study of the lenticular galaxy NGC 4143 - an outskirt member of the Ursa Major cluster. Using observations at the 6-m SAO RAS telescope with the SCORPIO-2 spectrograph, together with archival panoramic spectroscopy obtained with the SAURON IFU at the WHT, we have detected an extended inclined gaseous disk which is traced up to a distance of about 3.5 kpc from the center, with a spin approximately opposite to the spin of the stellar disk. The galaxy images in the H-alpha and [NII]6583 emission lines obtained at the 2.5-m CMO SAI MSU telescope with the MaNGaL instrument have shown that the emission lines are excited by a shock wave. A spiral structure that is absent in the stellar disk of the galaxy is clearly seen in the brightness distribution of the ionized-gas lines (H-alpha and [NII] from the MaNGaL data and [OIII] from the SAURON data). A complex analysis of both the Lick index distribution along the radius and the integrated colors, including ultraviolet measurements with the GALEX space telescope and near-infrared measurements with the WISE space telescope, has shown that there has been no star formation in the galaxy for, perhaps, the last 10 Gyr. Thus, the recent external-gas accretion detected in NGC 4143 from its kinematics was not accompanied by star formation, probably due to an inclined direction of the gas inflow onto the disk.
astrophysics
Wireless dynamic charging technologies are becoming a promising alternative to plug-in ones as they allow on-the-move charging for electric vehicles. From a power network point of view, this type of charging makes electric vehicles a new type of load--the moving load. Moving loads differ from traditional loads in that they may constantly change their location in the grid. To study the effect of these moving loads on power distribution grids, this work focuses on the steady-state analysis of electrified roads equipped with wireless dynamic charging. In particular, the voltage profile and the long-term voltage stability of the electrified roads are considered. Unusual shapes of the voltage profile are observed, such as the half-leaf veins for a one-way road and the harp-like shape for a two-way road. Voltage swings are also detected while the vehicles move in the two-way road configuration. As for the long-term voltage stability, continuation power flow is used to characterize the maximum length of a road as well as the maximum number of vehicles that the road can accommodate.
electrical engineering and systems science
We propose a new systematic method of studying correlations between parameters that describe an astronomical (or any) physical system. We recall that behind Dimensionless scaling laws in complex, self-interacting physical objects lies a rigorous theorem of Dimensional analysis, known widely as the Buckingham theorem. Once a {\it catalogue} of properties and forces that define an object or physical system is established, the theorem allows one to select a complete set of Dimensionless quantities or {\it Numbers} on which structure must depend. The internal structure takes the form of a functionally defined manifold in the space of these Numbers. Simple and familiar examples are discussed by way of introduction. Correlations in properties of astronomical objects can be sought either through the constancy of these Numbers or between pairs of the Numbers. In either case, within errors, the functional dependences take on an absolute numerical character. As our principal application, we study a well-defined sample of galaxies in order to reveal the implied Tully-Fisher and Baryonic Tully-Fisher relations. We find that $L\,\propto\,v_{rot}^4$ for the former and $M_b\,\propto\,v_{rot}^3$ for the latter, suggesting that these relations may have different causal origins.
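The quoted relations $L \propto v_{rot}^4$ and $M_b \propto v_{rot}^3$ are power-law exponents recoverable from a log-log fit; a minimal sketch of that fit (the paper's sample selection and error treatment are not reproduced):

```python
import numpy as np

def power_law_exponent(v_rot, y):
    """Least-squares exponent a in y ∝ v_rot^a from a log-log fit,
    e.g. a ≈ 4 for the Tully-Fisher relation L ∝ v^4."""
    a, _ = np.polyfit(np.log10(v_rot), np.log10(y), 1)
    return a

# Synthetic example: data drawn around L ∝ v^4 recovers an exponent near 4
rng = np.random.default_rng(0)
v = rng.uniform(50, 300, size=200)
L = v**4 * 10 ** rng.normal(0, 0.05, size=200)
print(power_law_exponent(v, L))
```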
astrophysics
Decay of honeycomb-generated turbulence in a duct with a static transverse magnetic field is studied via direct numerical simulations. The simulations follow the revealing experimental study of Sukoriansky et al. (1986), in particular the paradoxical observation of high-amplitude velocity fluctuations, which exist in the downstream portion of the flow when the strong transverse magnetic field is imposed in the entire duct including the honeycomb exit, but not in other configurations. It is shown that the fluctuations are caused by the large-scale quasi-two-dimensional structures forming in the flow at the initial stages of the decay and surviving the magnetic suppression. Statistical turbulence properties, such as the energy decay curves, two-point correlations and typical length scales are computed. The study demonstrates that turbulence decay in the presence of a magnetic field is a complex phenomenon critically depending on the state of the flow at the moment the field is introduced.
physics
Being a wide-band-gap system, GaMnN attracted considerable interest after the discovery of the highest reported ferromagnetic transition temperature $T_C$ $\sim$ 940 K among all diluted magnetic semiconductors. It later became a matter of debate owing to the observation of either a ferromagnetic state with very low $T_C$ $\sim$ 8 K or sometimes no ferromagnetic state at all. We address these issues by calculating the ferromagnetic window, $T_C$ vs. $p$, within a $t-t'$ Kondo lattice model using a spin-fermion Monte-Carlo method on a simple cubic lattice. We exploit the next-nearest-neighbor hopping $t'$ to tune the degree of delocalization of the free carriers and show that carrier localization (delocalization) significantly widens (shrinks) the ferromagnetic window with a reduction (enhancement) of the optimum $T_C$. We connect our results with the experimental findings and try to understand the ambiguities of ferromagnetism in GaMnN.
condensed matter
As has been shown elsewhere, a reasonable model of the loss of entanglement or correlation that occurs in quantum computations is one that assumes these losses can effectively be predicted by a framework that presupposes the presence of irreversibilities internal to the system. It is based on the steepest-entropy-ascent principle and is used here to reproduce the behavior of a controlled-PHASE gate in good agreement with experimental data. The results show that the predicted loss of entanglement is related to the irreversibilities in a nontrivial way, providing a possible alternative, worth exploring, to the approach conventionally used to predict the loss of entanglement. The results provide a means for understanding this loss in quantum protocols from a nonequilibrium thermodynamic standpoint. This framework permits the development of strategies for extending either the maximum fidelity of the computation or the entanglement time.
quantum physics
Neural networks have greatly boosted performance in computer vision by learning powerful representations of input data. The drawback of end-to-end training for maximal overall performance is black-box models whose hidden representations lack interpretability: since distributed coding is optimal for latent layers to improve their robustness, attributing meaning to parts of a hidden feature vector or to individual neurons is hindered. We formulate interpretation as a translation of hidden representations onto semantic concepts that are comprehensible to the user. The mapping between both domains has to be bijective so that semantic modifications in the target domain correctly alter the original representation. The proposed invertible interpretation network can be transparently applied on top of existing architectures with no need to modify or retrain them. Consequently, we translate an original representation to an equivalent yet interpretable one and backwards without affecting the expressiveness and performance of the original. The invertible interpretation network disentangles the hidden representation into separate, semantically meaningful concepts. Moreover, we present an efficient approach to define semantic concepts by only sketching two images, and also an unsupervised strategy. Experimental evaluation demonstrates the wide applicability to interpretation of existing classification and image generation networks as well as to semantically guided image manipulation.
computer science
Innovation ecosystems can be naturally described as a collection of networked entities, such as experts, institutions, projects, technologies and products. Representing these entities and their relations in a machine-readable form is not entirely attainable, due to the existence of abstract concepts such as knowledge and due to the confidential, non-public nature of this information, but even a partial depiction is of strong interest. Representing innovation ecosystems as knowledge graphs would enable the generation of reports with new insights and the execution of advanced data analysis tasks. An ontology to capture the essential entities and relations is presented, as well as a description of data sources which can be used to populate innovation knowledge graphs. Finally, the application case of the Universidad Politecnica de Madrid is presented, together with an outlook on future applications.
computer science
We present a deep study of the Neutrino-4 data aimed at finding the statistical significance of the large-mixing short-baseline neutrino oscillation signal claimed by the Neutrino-4 collaboration at more than $3\sigma$. We found that the results of the Neutrino-4 collaboration can be reproduced approximately only by neglecting the effects of the energy resolution of the detector. Including these effects, we found that the best fit is obtained for a mixing that is even larger, close to maximal, but the statistical significance of the short-baseline neutrino oscillation signal is only about $2.7\sigma$ if evaluated with the usual method based on Wilks' theorem. We show that the large Neutrino-4 mixing is in strong tension with the KATRIN, PROSPECT, STEREO, and solar $\nu_{e}$ bounds. Using a more reliable Monte Carlo simulation of a large set of Neutrino-4-like data, we found that the statistical significance of the Neutrino-4 short-baseline neutrino oscillation signal decreases to about $2.2\sigma$. We also show that it is not unlikely to find a best-fit point that has a large mixing, even maximal, in the absence of oscillations. Therefore, we conclude that the claimed Neutrino-4 indication in favor of short-baseline neutrino oscillations with very large mixing is rather doubtful.
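An idealized illustration of why a Monte Carlo calibration can weaken a Wilks-based significance: when the oscillation parameters are scanned, the null distribution of the best-fit $\Delta\chi^2$ behaves like a maximum over many fits rather than a $\chi^2$ with two degrees of freedom. The grid size and independence assumption below are illustrative only, not the paper's simulation:

```python
import numpy as np
from scipy.stats import chi2

# Toy comparison: Wilks p-value vs a Monte Carlo p-value when the best fit
# is effectively a maximum over a grid of (dm^2, sin^2 2theta) hypotheses.
rng = np.random.default_rng(0)
n_grid = 50           # hypothetical number of (nearly) independent grid points
toys = chi2.rvs(df=1, size=(100000, n_grid), random_state=rng).max(axis=1)
dchi2_obs = 9.0       # illustrative observed delta-chi^2
print("Wilks p-value:      ", chi2.sf(dchi2_obs, df=2))   # ~0.011
print("Monte Carlo p-value:", (toys >= dchi2_obs).mean()) # larger => weaker signal
```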
high energy physics phenomenology
We report the discovery of 25 new open clusters resulting from a search in dense low galactic latitude fields. We also provide, for the first time, structural and astrophysical parameters for the new findings and 34 other recently discovered open clusters using Gaia DR2 data. The candidates were confirmed by jointly inspecting the vector-point diagrams and spatial distributions. The discoveries were validated by cross-matching with nearby known objects and comparing their mean astrometric parameters with the available literature. A decontamination algorithm was applied to the three-dimensional astrometric space to derive membership likelihoods for cluster stars. By rejecting stars with low membership likelihoods, we built decontaminated colour-magnitude diagrams and derived the clusters' astrophysical parameters by isochrone fitting. The structural parameters were derived from King-profile fits to the stellar distributions. The investigated clusters are mainly located within 3 kpc from the Sun, with ages ranging from 30 Myr to 3.2 Gyr and reddening limited to E(B-V)=2.5. On average, our cluster sample presents less concentrated structures than Gaia DR2 confirmed clusters, since the derived core radii are larger while the tidal radii are not significantly different. Most of them are located in the IV quadrant of the Galactic disc at low latitudes, therefore they are immersed in dense fields characteristic of the inner Milky Way.
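A minimal sketch of the King-profile fitting step mentioned here (King 1962 surface-density law); the arrays `r` and `density` are assumed inputs, not data from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, k, r_c, r_t):
    """King (1962) surface-density profile with core radius r_c
    and tidal radius r_t."""
    term = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2) - 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2)
    return k * term**2

# Example fit to a radial density profile (r, density assumed available):
# popt, pcov = curve_fit(king_profile, r, density, p0=(density[0], 1.0, 10.0))
```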
astrophysics
We present an overview of, and first science results from, the Magellanic Edges Survey (MagES), an ongoing spectroscopic survey mapping the kinematics of red clump and red giant branch stars in the highly substructured periphery of the Magellanic Clouds. In conjunction with Gaia astrometry, MagES yields a sample of ~7000 stars with individual 3D velocities that probes larger galactocentric radii than most previous studies. We outline our target selection, observation strategy, data reduction and analysis procedures, and present results for two fields in the northern outskirts ($>10^{\circ}$ on-sky from the centre) of the Large Magellanic Cloud (LMC). One field, located in the vicinity of an arm-like overdensity, displays apparent signatures of perturbation away from an equilibrium disk model. This includes a large radial velocity dispersion in the LMC disk plane, and an asymmetric line-of-sight velocity distribution indicative of motions vertically out of the disk plane for some stars. The second field reveals 3D kinematics consistent with an equilibrium disk, and yields $V_{\text{circ}}=87.7\pm8.0$km s$^{-1}$ at a radial distance of ~10.5kpc from the LMC centre. This leads to an enclosed mass estimate for the LMC at this radius of $(1.8\pm0.3)\times10^{10}\text{M}_{\odot}$.
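The quoted enclosed mass follows from the circular velocity via $M(<R)=V_{\text{circ}}^2 R/G$; a quick check of the arithmetic:

```python
# Enclosed mass from the circular velocity: M(<R) = V_circ^2 * R / G
G = 6.674e-11                 # m^3 kg^-1 s^-2
V = 87.7e3                    # m/s
R = 10.5 * 3.0857e19          # 10.5 kpc in metres
M_SUN = 1.989e30              # kg
print(V**2 * R / G / M_SUN)   # ~1.9e10, matching the quoted (1.8±0.3)e10 M_sun
```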
astrophysics
We derive a Lorentzian OPE inversion formula for the principal series of $sl(2,\mathbb{R})$. Unlike the standard Lorentzian inversion formula in higher dimensions, the formula described here only applies to fully crossing-symmetric four-point functions and makes crossing symmetry manifest. In particular, inverting a single conformal block in the crossed channel returns the coefficient function of the crossing-symmetric sum of Witten exchange diagrams in AdS, including the direct-channel exchange. The inversion kernel exhibits poles at the double-trace scaling dimensions, whose contributions must cancel out in a generic solution to crossing. In this way the inversion formula leads to a derivation of the Polyakov bootstrap for $sl(2,\mathbb{R})$. The residues of the inversion kernel at the double-trace dimensions give rise to analytic bootstrap functionals discussed in recent literature, thus providing an alternative explanation for their existence. We also use the formula to give a general proof that the coefficient function of the principal series is meromorphic in the entire complex plane with poles only at the expected locations.
high energy physics theory
We propose a new method for unsupervised clustering in particle physics, named UCluster, where information in the embedding space created by a neural network is used to categorise collision events into different clusters that share similar properties. We show how this method can be applied to unsupervised multiclass classification as well as to anomaly detection, which can be used for new physics searches.
physics
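As a rough illustration of the UCluster idea of clustering events in an embedding space, the sketch below maps toy per-event features through a small MLP and clusters the embeddings with k-means. The untrained network, the feature dimensions, and the cluster count are placeholders; in the actual method the embedding is trained so that events with similar properties land close together.

import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
events = rng.normal(size=(500, 30)).astype("float32")  # toy per-event features

# Stand-in embedding network (untrained here; trained jointly in UCluster).
embed = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 8))
with torch.no_grad():
    z = embed(torch.from_numpy(events)).numpy()        # embedding space

labels = KMeans(n_clusters=3, n_init=10).fit_predict(z)
print(np.bincount(labels))
# For anomaly detection, one could instead flag events that sit far from
# every cluster centroid in the embedding space.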
The two-time model (2T model) has six dimensions, including two dimensions of time, and contains a dilaton particle that realizes symmetry breaking differently from the Standard Model. Assuming a soft breaking of the $SP(2,R)$ symmetry, the 2T extension can give a suitable picture of the matter-antimatter asymmetry via the baryogenesis scenario. By reducing the 2T metric to the Minkowski metric (1T metric), we consider the electroweak phase transition picture in the 2T model with the dilaton as the trigger. Our analysis shows that the electroweak phase transition (EWPT) is a first-order phase transition at the $200$ GeV scale, its strength is in the range $1-3.08$, and the mass of the dilaton lies in the interval $[345,625]$ GeV. Therefore, the 2T model indirectly suggests that extra dimensions can also be a source of the EWPT.
high energy physics phenomenology
Well-motivated electroweak dark matter is often hosted by an extended electroweak sector which also contains new lepton pairs with masses near the weak scale. In this paper, we explore such electroweak dark matter by combining dark matter direct detection with high-luminosity LHC probes of the new lepton pairs. Using $Z$- and $W$-associated electroweak processes with two or three lepton final states, we show that, depending on the overall coupling constant, dark matter masses up to $170-210$ GeV can be excluded at the $2\sigma$ level and up to $175-205$ GeV can be discovered at the $5\sigma$ level at the 14 TeV LHC with integrated luminosities of 300 fb$^{-1}$ and 3000 fb$^{-1}$, respectively.
high energy physics phenomenology
We present an investigation into the interdisciplinary role of physics in a physics-for-non-physicists course at Pomona College. This work is guided by prior research into introductory physics for life-science (IPLS) courses, but attends to significant differences in the scope and context of this course. We interviewed enrolled students, physics professors, and professors from non-physics disciplines to explore the function of this course and the role of physics in the education of non-physics-science students. Interviews were audio recorded and transcribed, then analyzed to identify emergent themes. These themes outline the authentic physics, including content knowledge and other, broader learning objectives, that play an important and distinct role in the science education of enrolled students. Stakeholders generally align in their emphasis of interdisciplinary relevance with some divergence in the specific articulation of that idea. The differences can be understood through the stakeholders' distinct areas of expertise, with non-physics professors expressing value through relevance to their discipline and physics professors focusing on essential aspects of physics.
physics
We consider black holes in 2d de Sitter JT gravity coupled to a CFT, and entangled with matter in a disjoint non-gravitating universe. Tracing out the entangling matter leaves the CFT in a density matrix whose stress tensor backreacts on the de Sitter geometry, lengthening the wormhole behind the black hole horizon. Naively, the entropy of the entangling matter increases without bound as the strength of the entanglement increases, but the monogamy property predicts that this growth must level off. We compute the entropy via the replica trick, including wormholes between the replica copies of the de Sitter geometry, and find a competition between conventional field theory entanglement entropy and the surface area of extremal "islands" in the de Sitter geometry. The black hole and cosmological horizons both play a role in generating such islands in the back-reacted geometry, and have the effect of stabilizing the entropy growth as required by monogamy. We first show this in a scenario in which the de Sitter spatial section has been decompactified to an interval. Then we consider the compact geometry, and argue for a novel interpretation of the island formula in the context of closed universes that recovers the Page curve. Finally, we comment on the application of our construction to the cosmological horizon in empty de Sitter space.
high energy physics theory
The emerging paradigm of interconnected microgrids advocates energy trading or sharing among multiple microgrids. It helps make full use of the temporal availability of energy and the diversity in operational costs when meeting various energy loads. However, energy trading might not completely absorb excess renewable energy. A multi-energy management framework including fuel cell vehicles, energy storage, a combined heat and power system, and renewable energy is proposed, and the characteristics and scheduling arrangements of fuel cell vehicles are considered to further improve the local absorption of renewable energy and enhance the economic benefits of microgrids. While intensive research has been conducted on the energy scheduling and trading problem, a fundamental question on microgrid economics still remains unanswered: given multi-energy coupling and stochastic renewable generation and demands, when and how should a microgrid schedule and trade energy with others so as to maximize its long-term benefit? This paper designs a joint energy scheduling and trading algorithm based on Lyapunov optimization and a double-auction mechanism. Its purpose is to determine the valuations of energy in the auction, optimally schedule energy distribution, and strategically purchase and sell energy given the current electricity prices. Simulations based on real data show that each individual microgrid, under the management of the proposed algorithm, can achieve a time-averaged profit that is arbitrarily close to an optimal value, while avoiding compromising its own comfort.
electrical engineering and systems science
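The Lyapunov-optimization part of the algorithm above can be illustrated with a drift-plus-penalty toy: at every slot a virtual queue (here, a battery-level offset) is traded off against the instantaneous purchase cost through a parameter V, which is what yields a time-averaged cost arbitrarily close to the optimum. The single-battery setting, the prices, and the action grid are all illustrative assumptions, not the paper's model.

import numpy as np

rng = np.random.default_rng(2)
T, V = 1000, 20.0
b_max, rate_max = 50.0, 5.0   # battery capacity and per-slot (dis)charge limit
Q = b_max / 2                 # battery level; (Q - b_max/2) is the virtual queue

total_cost = 0.0
for t in range(T):
    demand = rng.uniform(2.0, 8.0)
    price = rng.uniform(0.5, 2.0)   # stand-in for the auction-clearing price
    # Candidate battery actions: u > 0 charge, u < 0 discharge.
    candidates = np.linspace(-rate_max, rate_max, 21)
    feasible = [u for u in candidates
                if 0.0 <= Q + u <= b_max and demand + u >= 0.0]
    # Drift-plus-penalty rule: minimize V * (slot cost) + (queue) * (queue drift).
    u = min(feasible, key=lambda u: V * price * (demand + u) + (Q - b_max / 2) * u)
    Q += u
    total_cost += price * (demand + u)   # energy bought from the grid this slot

print(f"time-averaged cost: {total_cost / T:.3f}")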
Insufficient flexibility in system operation caused by traditional "heat-set" operating modes of combined heat and power (CHP) units in winter heating periods is a key issue that limits renewable energy consumption. In order to reduce the curtailment of renewable energy resources by improving operational flexibility, a novel optimal scheduling model based on chance-constrained programming (CCP), aiming at minimizing the generation cost, is proposed for a small-scale integrated energy system (IES) with CHP units, thermal power units, renewable generation, and representative auxiliary equipment. In this model, owing to the uncertainties of renewable generation from wind turbines and photovoltaic units, probabilistic spinning reserves are supplied in the form of chance constraints; from the perspective of user experience, a heating load model is built with consideration of heat comfort and thermal inertia in buildings. To solve the model, a solution approach based on sequence operation theory (SOT) is developed, in which the original CCP-based scheduling model is transformed into a solvable mixed-integer linear programming (MILP) formulation by converting the chance constraint into its deterministic equivalence class, and is then solved via the CPLEX solver. The simulation results on the modified IEEE 30-bus system demonstrate that the presented method manages to improve the operational flexibility of the IES under uncertain renewable generation by comprehensively leveraging the thermal inertia of buildings and different kinds of auxiliary equipment, which provides a fundamental way to promote renewable energy consumption.
electrical engineering and systems science
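The key step above, turning a chance constraint into its deterministic equivalence class, can be sketched in a few lines: a requirement Pr[reserve >= forecast error] >= 1 - epsilon collapses to a single quantile of the error distribution, after which the scheduling problem is an ordinary MILP. The Gaussian error scenarios below are illustrative assumptions, not the paper's SOT-based construction.

import numpy as np

rng = np.random.default_rng(3)
epsilon = 0.05   # allowed violation probability of the reserve constraint

# Empirical wind + PV forecast-error scenarios (illustrative numbers, MW).
wind_err = rng.normal(0.0, 3.0, 10_000)
pv_err = rng.normal(0.0, 1.5, 10_000)
net_err = wind_err + pv_err

# Deterministic equivalent: reserve >= (1 - epsilon)-quantile of the error.
reserve_req = np.quantile(net_err, 1.0 - epsilon)
print(f"spinning reserve requirement: {reserve_req:.2f} MW "
      f"(covers {100 * (1 - epsilon):.0f}% of scenarios)")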
We report an object tracking algorithm that combines geometrical constraints, thresholding, and motion detection for tracking the descending aorta and the network of major arteries that branch from the aorta, including the iliac and femoral arteries. Using our automated identification and analysis, the arterial system was identified with more than 85% success when compared to human annotation. Furthermore, the reported automated system is capable of producing a stenosis profile and a calcification score similar to the Agatston score. The use of stenosis and calcification profiles will lead to the development of better-informed diagnostic and prognostic tools.
electrical engineering and systems science
The LIGO observatories can potentially detect stochastic gravitational waves arising from phase transitions which happened in the early universe at temperatures around $T\sim 10^{8}$ GeV. This provides an extraordinary opportunity for discovering the phase transition associated with the breaking of the Peccei-Quinn symmetry, required in QCD axion models. Here we consider the simplest Peccei-Quinn models and study under which conditions a strong first-order phase transition can occur, analyzing its associated gravitational wave signal. To be detectable at LIGO, we show that some supercooling is needed, which can arise either in Coleman-Weinberg-type symmetry breaking or in strongly-coupled models. We also investigate phase transitions that interestingly proceed by first breaking the electroweak symmetry at large scales before tunneling to the Peccei-Quinn breaking vacuum. In this case, the associated gravitational wave signal is more likely to be probed at the proposed Einstein Telescope.
high energy physics phenomenology
In this letter, we report the discovery of 24 new super Li-rich (A(Li) $\ge$ 3.2) giants in the He-core burning phase at the red clump region. Results are based on a systematic search of a large sample of about 12,500 giants common to the LAMOST spectroscopic and Kepler time-resolved photometric surveys. The two key parameters derived from the Kepler data, the average period spacing ($\Delta p$) between $l=1$ mixed gravity-dominated g-modes and the average large frequency separation ($\Delta \nu$) of $l=0$ acoustic p-modes, suggest that all the Li-rich giants are in the He-core burning phase. This is the first unbiased survey subjected to a robust technique of asteroseismic analysis to unambiguously determine the evolutionary phase of Li-rich giants. The results provide strong evidence that the Li enhancement phenomenon is associated with giants in the He-core burning phase, post He-flash, rather than any other phase on the RGB with an inert He-core surrounded by a H-burning shell.
astrophysics
The use of modern effective field theory techniques has sparked significant developments in many areas of physics, including the study of gravity. Case in point, such techniques have recently been used to show that binary black holes can amplify incident, low-frequency radiation due to an interplay between absorption at the horizons and momentum transfer in the bulk of the spacetime. In this paper, we further examine the consequences of this superradiant mechanism on the dynamics of an ambient scalar field by taking the binary's long-range gravitational potential into account at the nonperturbative level. Doing so allows us to capture the formation of scalar clouds that are gravitationally bound to the binary. If the scalar is light enough, the cloud can be sufficiently diffuse (i.e., dilute while having considerable spatial extent) that it engulfs the binary as a whole. Its subsequent evolution exhibits an immensely rich phenomenology, which includes exponential growth, beating patterns, and the upscattering of bound states into scalar waves. While we find that these effects have negligible influence on the binary's inspiral in the regime wherein our approximations are valid, they offer new, analytic insight into how binary black holes interact with external perturbations. They may also provide useful, qualitative intuition for interpreting the results from future numerical simulations of these complex systems.
high energy physics theory
Quantum information science strives to leverage the quantum-mechanical nature of our universe in order to achieve large improvements in certain information processing tasks. In deep-space optical communications, current receivers for the pure-state classical-quantum channel first measure each qubit channel output and then classically post-process the measurements. This approach is sub-optimal. In this dissertation we investigate a recently proposed quantum algorithm for this task, which is inspired by classical belief-propagation algorithms, and analyze its performance on a simple $5$-bit code. We show that the algorithm is optimal for each bit and it appears to achieve optimal performance when deciding the full transmitted message. We also provide explicit circuits for the algorithm in terms of standard gates. This suggests a near-term quantum communication advantage over the aforementioned sub-optimal scheme. Quantum error correction is vital to building a universal fault-tolerant quantum computer. We propose an efficient algorithm that can translate a given logical Clifford operation on a stabilizer code into all (equivalence classes of) physical Clifford circuits that realize that operation. In order to achieve universality, one also needs to implement at least one non-Clifford logical operation. So, we develop a mathematical framework for a large subset of diagonal operations in the Clifford hierarchy, which we call Quadratic Form Diagonal (QFD) gates. Then we use the QFD formalism to characterize all stabilizer codes whose code spaces are preserved under the transversal action of the non-Clifford $T$ gates on the physical qubits. We also discuss a few purely-classical coding problems motivated by transversal $T$ gates. A conscious effort has been made to keep this dissertation self-contained, by including necessary background material on quantum information and computation.
quantum physics
The combination of gain and loss in optical systems that respect parity-time (PT)-symmetry has pointed recently to a variety of novel optical phenomena and possibilities. Many of them can be realized by combining the PT-symmetry concepts with metamaterials. Here we investigate the case of chiral metamaterials, showing that combination of chiral metamaterials with PT-symmetric gain-loss enables a very rich variety of phenomena and functionalities. Examining a simple one-dimensional chiral PT-symmetric system, we show that with normally incident waves the PT-symmetric and the chirality-related characteristics can be tuned independently and superimposed almost at will. On the other hand, under oblique incidence, chirality affects all the PT-related characteristics, leading also to novel and uncommon wave propagation features, such as asymmetric transmission and asymmetric optical activity and ellipticity. All these features are highly controllable both by chirality and by the angle of incidence, making PT-symmetric chiral metamaterials valuable in a large range of polarization-control-seeking applications.
physics
There have been various claims that the Equivalence Principle, as originally formulated by Einstein, presents several difficulties when extended to the quantum domain, even in the regime of weak gravity. Here we point out that by following the same approach as used for other classical principles, e.g. the principle of conservation of energy, one can, for weak fields, obtain a straightforward quantum formulation of the principle. We draw attention to a recently performed test that confirms the Equivalence Principle in this form and discuss its implications.
quantum physics
In this paper, we analyze some properties regarding singular spinors and how they are connected. The method employed here consists of mapping the spinorial structure and also the adjoint structure. Such a mathematical device is useful to determine propagators without invoking the vacuum expectation value of the quantum field time-ordered product.
physics
The quantum computation of electronic energies can break the curse of dimensionality that plagues many-particle quantum mechanics. It is for this reason that a universal quantum computer has the potential to fundamentally change computational chemistry and materials science, areas in which strong electron correlations present severe hurdles for traditional electronic structure methods. Here, we present a state-of-the-art analysis of accurate energy measurements on a quantum computer for computational catalysis, using improved quantum algorithms with more than an order of magnitude improvement over the best previous algorithms. As a prototypical example of local catalytic chemical reactivity we consider the case of a ruthenium catalyst that can bind, activate, and transform carbon dioxide to the high-value chemical methanol. We aim at accurate resource estimates for the quantum computing steps required for assessing the electronic energy of key intermediates and transition states of its catalytic cycle. In particular, we present new quantum algorithms for double-factorized representations of the four-index integrals that can significantly reduce the computational cost over previous algorithms, and we discuss the challenges of increasing active space sizes to accurately deal with dynamical correlations. We address the requirements for future quantum hardware in order to make a universal quantum computer a successful and reliable tool for quantum computing enhanced computational materials science and chemistry, and identify open questions for further research.
quantum physics
We present a new and independent determination of the local value of the Hubble constant based on a calibration of the Tip of the Red Giant Branch (TRGB) applied to Type Ia supernovae (SNeIa). We find a value of Ho = 69.8 +/- 0.8 (+/-1.1% stat) +/- 1.7 (+/-2.4% sys) km/sec/Mpc. The TRGB method is both precise and accurate, and is parallel to, but independent of, the Cepheid distance scale. Our value sits midway in the range defined by the current Hubble tension. It agrees at the 1.2-sigma level with the Planck 2018 estimate, and at the 1.7-sigma level with the SHoES measurement of Ho based on the Cepheid distance scale. The TRGB distances have been measured using deep Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) imaging of galaxy halos. The zero point of the TRGB calibration is set with a distance modulus to the Large Magellanic Cloud of 18.477 +/- 0.004 (stat) +/- 0.020 (sys) mag, based on measurement of 20 late-type detached eclipsing binary (DEB) stars, combined with an HST parallax calibration of a 3.6 micron Cepheid Leavitt law based on Spitzer observations. We anchor the TRGB distances to galaxies that extend our measurement into the Hubble flow using the recently completed Carnegie Supernova Project I sample containing about 100 well-observed SNeIa. There are several advantages of halo TRGB distance measurements relative to Cepheid variables: these include low halo reddening, minimal effects of crowding or blending of the photometry, only a shallow (calibrated) sensitivity to metallicity in the I-band, and no need for multiple epochs of observations or concerns of different slopes with period. In addition, the host masses of our TRGB host-galaxy sample are higher on average than those of the Cepheid sample, better matching the range of host-galaxy masses in the CSP distant sample and reducing potential systematic effects in the SNeIa measurements.
astrophysics
We present a novel Convolutional Neural Network (CNN) based approach for one-class classification. The idea is to use zero-centered Gaussian noise in the latent space as the pseudo-negative class and train the network using the cross-entropy loss to learn a good representation as well as the decision boundary for the given class. A key feature of the proposed approach is that any pre-trained CNN can be used as the base network for one-class classification. The proposed One Class CNN (OC-CNN) is evaluated on the UMDAA-02 Face, Abnormality-1001, and FounderType-200 datasets. These datasets are related to a variety of one-class application problems such as user authentication, abnormality detection, and novelty detection. Extensive experiments demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods. The source code is available at: github.com/otkupjnoz/oc-cnn.
computer science
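A minimal PyTorch sketch of the training idea described above: features of the one class and a zero-centered Gaussian pseudo-negative share a classifier head trained with cross-entropy. Random tensors stand in for the features a frozen pre-trained CNN would supply, and the layer sizes and noise scale are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

feat_dim, batch, sigma = 128, 32, 0.1
classifier = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    features = torch.randn(batch, feat_dim)            # stand-in for CNN features
    pseudo_neg = sigma * torch.randn(batch, feat_dim)  # zero-centered Gaussian
    x = torch.cat([features, pseudo_neg], dim=0)
    y = torch.cat([torch.ones(batch, dtype=torch.long),    # 1: target class
                   torch.zeros(batch, dtype=torch.long)])  # 0: pseudo-negative
    opt.zero_grad()
    loss = loss_fn(classifier(x), y)
    loss.backward()
    opt.step()
# At test time the class-1 softmax score serves as the one-class score.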
Voice Type Discrimination (VTD) refers to discrimination between regions in a recording where speech was produced by speakers that are physically within proximity of the recording device ("Live Speech") from speech and other types of audio that were played back such as traffic noise and television broadcasts ("Distractor Audio"). In this work, we propose a deep-learning-based VTD system that features an initial layer of learnable spectro-temporal receptive fields (STRFs). Our approach is also shown to provide very strong performance on a similar spoofing detection task in the ASVspoof 2019 challenge. We evaluate our approach on a new standardized VTD database that was collected to support research in this area. In particular, we study the effect of using learnable STRFs compared to static STRFs or unconstrained kernels. We also show that our system consistently improves a competitive baseline system across a wide range of signal-to-noise ratios on spoofing detection in the presence of VTD distractor noise.
electrical engineering and systems science
By utilizing the AdS/CFT correspondence, we explore the dynamics of strongly coupled superfluid vortices in a disk rotating with constant angular velocity. Direct inspection of their phases shows that each vortex in the vortex lattice is quantized with vorticity one. As the angular velocity of the disk exceeds a critical value, the first vortex is excited, as expected from theoretical predictions. Two and more vortices are subsequently generated as the rotation of the disk is increased, resulting in remarkable step transitions in the angular velocity required to excite each individual vortex. When the vortex number is large enough, the density of vortices is found to be linearly proportional to the angular velocity, which matches the Feynman relation very well.
high energy physics theory
Planetesimals are compact astrophysical objects roughly 1-1000 km in size, massive enough to be held together by gravity. They can grow by accreting material to become full-size planets. Planetesimals themselves are thought to form by complex physical processes from small grains in protoplanetary disks. The streaming instability (SI) model states that mm/cm-size particles (pebbles) are aerodynamically collected into self-gravitating clouds which then directly collapse into planetesimals. Here we analyze ATHENA simulations of the SI to characterize the initial properties (e.g., rotation) of pebble clouds. Their gravitational collapse is followed with the PKDGRAV N-body code, which has been modified to realistically account for pebble collisions. We find that pebble clouds rapidly collapse into short-lived disk structures from which planetesimals form. The planetesimal properties depend on the cloud's scaled angular momentum, l = L/(M R_H^2 Omega), where L and M are the angular momentum and mass, R_H is the Hill radius, and Omega is the orbital frequency. Low-l pebble clouds produce tight (or contact) binaries and single planetesimals. Compact high-l clouds give birth to binary planetesimals with attributes that closely resemble the equal-size binaries found in the Kuiper belt. Significantly, the SI-triggered gravitational collapse can explain the angular momentum distribution of known equal-size binaries -- a result pending verification from studies with improved resolution. About 10% of collapse simulations produce hierarchical systems with two or more large moons. These systems should be found in the Kuiper belt when observations reach the threshold sensitivity.
astrophysics
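A worked example of the scaled angular momentum l = L/(M R_H^2 Omega) defined in the abstract above, with all input numbers (cloud mass, orbital radius, spin rate) chosen purely for illustration.

import numpy as np

G, M_sun = 6.674e-11, 1.989e30         # SI units
a = 45 * 1.496e11                      # orbital radius [m], ~45 au
Omega = np.sqrt(G * M_sun / a**3)      # orbital frequency [1/s]
M = 1.0e18                             # pebble-cloud mass [kg] (illustrative)
R_H = a * (M / (3 * M_sun)) ** (1/3)   # Hill radius of the cloud

omega_spin = 0.5 * Omega               # cloud rotation rate (illustrative)
L = 0.4 * M * R_H**2 * omega_spin      # uniform-sphere inertia, I = (2/5) M R_H^2
l = L / (M * R_H**2 * Omega)           # scaled angular momentum
print(f"R_H = {R_H / 1e3:.3g} km, l = {l:.2f}")
# Low-l clouds tend to yield single planetesimals or tight binaries;
# compact high-l clouds tend to yield equal-size binaries.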
The ordinary Structure Identity Principle states that any property of set-level structures (e.g., posets, groups, rings, fields) definable in Univalent Foundations is invariant under isomorphism: more specifically, identifications of structures coincide with isomorphisms. We prove a version of this principle for a wide range of higher-categorical structures, adapting FOLDS-signatures to specify a general class of structures, and using two-level type theory to treat all categorical dimensions uniformly. As in the previously known case of 1-categories (which is an instance of our theory), the structures themselves must satisfy a local univalence principle, stating that identifications coincide with "isomorphisms" between elements of the structure. Our main technical achievement is a definition of such isomorphisms, which we call "indiscernibilities", using only the dependency structure rather than any notion of composition.
mathematics
Hawking's black hole evaporation process suggests that we may need to choose between quantum unitarity and other basic physical principles such as no-signalling, entanglement monogamy, and the equivalence principle. We here provide a quantum model for Hawking-pair black hole evaporation within which these principles are all respected. The model does not involve exotic new physics, but rather uses quantum theory and general relativity. The black hole and radiation are in a joint superposition of different energy states at any stage of the evaporation process. In the particular branch where the black hole mass is 0, the radiation state is pure and in one-to-one correspondence with the initial state forming the black hole. Thus there is no information loss upon full evaporation. The original Hawking-pair entanglement between infalling and outgoing particles gets transferred to outgoing particles via entanglement swapping, without violating no-signalling or entanglement monogamy. The final state after the full black hole evaporation is pure, without loss of information, violation of monogamy, or violation of the equivalence principle.
high energy physics theory
Complex hierarchical shapes are widely known in biogenic single crystals, but growing intricate synthetic metal single crystals is still a challenge. Here we report on a simple method for growing intricately shaped single crystals of gold, each consisting of a micron-sized crystal surrounded by a nanoporous structure, with the two parts comprising a single crystal. This is achieved by annealing thin films of gold and germanium to solidify a eutectic-composition melt at a hypoeutectic concentration (Au-enriched composition). Transmission electron microscopy and synchrotron submicron scanning diffractometry and imaging confirm that the whole structure is indeed a single crystal. A kinetic model showing how this intricate single-crystal structure can be grown is presented.
condensed matter
We investigate the problem of learning description logic ontologies from entailments via queries, using epistemic reasoning. We introduce a new learning model consisting of epistemic membership and example queries and show that polynomial learnability in this model coincides with polynomial learnability in Angluin's exact learning model with membership and equivalence queries. We then instantiate our learning framework to EL and show some complexity results for an epistemic extension of EL where epistemic operators can be applied over the axioms. Finally, we transfer known results for EL ontologies and its fragments to our learning model based on epistemic reasoning.
computer science
Observations of the synchrotron and inverse Compton emissions from ultrarelativistic electrons in astrophysical sources can reveal a great deal about the energy-momentum relations of those electrons. They can thus be used to place bounds on the possibility of Lorentz violation in the electron sector. Recent $\gamma$-ray telescope data allow the Lorentz-violating electron $c^{\nu\mu}$ parameters to be constrained extremely well, so that all bounds are at the level of $7\times 10^{-16}$ or better.
high energy physics phenomenology
Generative adversarial networks (GANs) have been shown to be useful in various applications, such as image recognition, text processing and scientific computing, due to their strong ability to learn complex data distributions. In this study, a theory-guided generative adversarial network (TgGAN) is proposed to solve dynamic partial differential equations (PDEs). Different from standard GANs, the training terms are no longer the true data and the generated data, but rather their residuals. In addition, theories such as governing equations, other physical constraints, and engineering controls are encoded into the loss function of the generator to ensure that the prediction not only honors the training data, but also obeys these theories. TgGAN is proposed for dynamic subsurface flow with heterogeneous model parameters, and the data at each time step are treated as a two-dimensional image. In this study, several numerical cases are introduced to test the performance of TgGAN. Predicting the future response, label-free learning, and learning from noisy data can be realized easily by the TgGAN model. The effects of the number of training data and of the collocation points are also discussed. In order to improve the efficiency of TgGAN, a transfer learning algorithm is also employed. Numerical results demonstrate that the TgGAN model is robust and reliable for deep learning of dynamic PDEs.
physics
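A sketch of the theory-guided loss at the heart of the approach above: the generator's adversarial term is augmented with the residual of a governing PDE, here a 1D diffusion equation u_t = D u_xx evaluated by finite differences on the generated space-time field. Networks, sizes, and the weighting factor are placeholders, not the paper's architecture.

import torch
import torch.nn as nn

nx, nt, D, dx, dt = 32, 8, 0.1, 1.0, 0.1
G = nn.Sequential(nn.Linear(16, 128), nn.Tanh(), nn.Linear(128, nt * nx))
Dnet = nn.Sequential(nn.Linear(nt * nx, 64), nn.ReLU(), nn.Linear(64, 1))

z = torch.randn(4, 16)
u = G(z).view(-1, nt, nx)        # generated space-time field, one 2D "image"

# PDE residual via finite differences on interior points.
u_t = (u[:, 1:, 1:-1] - u[:, :-1, 1:-1]) / dt
u_xx = (u[:, :-1, 2:] - 2 * u[:, :-1, 1:-1] + u[:, :-1, :-2]) / dx**2
pde_residual = (u_t - D * u_xx).pow(2).mean()

# Non-saturating adversarial term plus the theory (residual) term.
adv = -torch.log(torch.sigmoid(Dnet(u.view(4, -1))) + 1e-8).mean()
loss_G = adv + 10.0 * pde_residual
loss_G.backward()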
This work describes the speaker verification system developed by the Human Language Technology Laboratory, National University of Singapore (HLT-NUS) for the 2019 NIST Multimedia Speaker Recognition Evaluation (SRE). Multimedia research has gained attention across a wide range of applications, and speaker recognition is no exception. In contrast to previous NIST SREs, the latest edition focuses on a multimedia track to recognize speakers with both audio and visual information. We developed separate systems for audio and visual inputs, followed by a score-level fusion of the systems from the two modalities to collectively use their information. The audio systems are based on x-vector speaker embeddings, whereas the face recognition systems are based on ResNet and InsightFace face embeddings. With post-evaluation studies and refinements, we obtain an equal error rate (EER) of 0.88% and an actual detection cost function (actDCF) of 0.026 on the evaluation set of the 2019 NIST multimedia SRE corpus.
electrical engineering and systems science
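The score-level fusion of the two modalities can be sketched as a weighted sum of per-system scores, followed by the standard EER computation used to report results of this kind. The weights and toy score distributions are assumptions for illustration, not the submitted system's calibration.

import numpy as np

rng = np.random.default_rng(4)
n = 2000
labels = rng.integers(0, 2, n)           # 1 = target (same-speaker) trial
audio = rng.normal(labels * 2.0, 1.0)    # toy x-vector system scores
visual = rng.normal(labels * 1.5, 1.0)   # toy face-embedding system scores

fused = 0.6 * audio + 0.4 * visual       # linear score-level fusion

def eer(scores, labels):
    order = np.argsort(scores)[::-1]     # accept the top-k scoring trials
    labs = labels[order]
    fa = np.cumsum(1 - labs) / max(1, (1 - labs).sum())   # false-accept rate
    fr = 1.0 - np.cumsum(labs) / max(1, labs.sum())       # false-reject rate
    i = np.argmin(np.abs(fa - fr))
    return 0.5 * (fa[i] + fr[i])

print(f"EER audio {eer(audio, labels):.3f}, fused {eer(fused, labels):.3f}")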
We study fundamental performance limitations of distributed feedback control in large-scale networked dynamical systems. Specifically, we address the question of whether dynamic feedback controllers perform better than static (memoryless) ones when subject to locality constraints. We consider distributed linear consensus and vehicular formation control problems modeled over toric lattice networks. For the resulting spatially invariant systems we study the large-scale asymptotics (in network size) of global performance metrics that quantify the level of network coherence. With static feedback from relative state measurements, such metrics are known to scale unfavorably in lattices of low spatial dimensions, preventing, for example, a 1-dimensional string of vehicles to move like a rigid object. We show that the same limitations in general apply also to dynamic feedback control that is locally of first order. This means that the addition of one local state to the controller gives a similar asymptotic performance to the memoryless case. This holds unless the controller can access noiseless measurements of its local state with respect to an absolute reference frame, in which case the addition of controller memory may fundamentally improve performance. In simulations of platoons with 20-200 vehicles we show that the performance limitations we derive manifest as unwanted accordion-like motions. Similar behaviors are to be expected in any network that is embeddable in a low-dimensional toric lattice, and the same fundamental limitations would apply. To derive our results, we present a general technical framework for the analysis of stability and performance of spatially invariant systems in the limit of large networks.
mathematics
Compactifying M-theory on a manifold of $G_2$ holonomy gives a UV complete 4D theory. It is supersymmetric, with soft supersymmetry breaking via gaugino condensation that simultaneously stabilizes all moduli and generates a hierarchy between the Planck and the Fermi scale. It generically has gauge matter, chiral fermions, and several other important features of our world. Here we show that the theory also contains a successful inflaton, which is a linear combination of moduli closely aligned with the overall volume modulus of the compactified $G_2$ manifold. The scheme does not rely on ad hoc assumptions, but derives from an effective quantum theory of gravity. Inflation arises near an inflection point in the potential which can be deformed into a local minimum. This implies that a de Sitter vacuum can occur in the moduli potential even without uplifting. Generically present charged hidden sector matter generates a de Sitter vacuum as well.
high energy physics theory
We examine how non-destructive measurements generate spin squeezing in an atomic Bose-Einstein condensate confined in a double-well trap. The condensate in each well is monitored using coherent light beams in a Mach-Zehnder configuration that interacts with the atoms through a quantum nondemolition Hamiltonian. We solve the dynamics of the light-atom system using an exact wavefunction approach, in the presence of dephasing noise, which allows us to examine arbitrary interaction times and a general initial state. We find that monitoring the condensate at zero detection current and with identical coherent light beams minimizes the backaction of the measurement on the atoms. In the weak atom-light interaction regime, we find the mean spin direction is relatively unaffected, while the variance of the spins is squeezed along the axis coupled to the light. Additionally, squeezing persists in the presence of tunneling and dephasing noise.
quantum physics
This paper describes the NEMO submission to the SIGTYP 2020 shared task, which deals with the prediction of linguistic typological features for multiple languages using data derived from the World Atlas of Language Structures (WALS). We employ frequentist inference to represent correlations between typological features and use this representation to train simple multi-class estimators that predict individual features. We describe two submitted ridge-regression-based configurations which ranked second and third overall in the constrained task. Our best configuration achieved a micro-averaged accuracy score of 0.66 on 149 test languages.
computer science
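A minimal sketch of the ridge-regression-based configuration described above: one classifier per typological feature, trained on one-hot encodings of the other features. Random data stands in for WALS, and the encoding and hyperparameters are assumptions, not the submitted system.

import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(5)
n_langs, n_known = 300, 10
known = rng.integers(0, 4, (n_langs, n_known))   # known feature values per language
target = (known[:, 0] + known[:, 3]) % 3         # feature to predict (toy rule)

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(known)

clf = RidgeClassifier(alpha=1.0).fit(X[:250], target[:250])
print("held-out accuracy:", (clf.predict(X[250:]) == target[250:]).mean())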
We propose two new kernel-type estimators of the mean residual life function $m_X(t)$ of distributions with bounded or half-bounded interval support. Though not as severe as in kernel density estimation, the boundary bias problems that occur in the naive kernel estimator of the mean residual life function need to be eliminated. In this article, we utilize the property of bijective transformations. Furthermore, our proposed methods preserve the mean value property, which the naive kernel estimator cannot do. Simulation results showing the estimators' performance and a real data analysis are presented in the last part of this article.
statistics
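For reference, the naive kernel estimator the paper improves upon can be sketched directly from the definition m(t) = E[X - t | X > t] = (integral of S(u) du from t to infinity) / S(t), with the survival function smoothed by an integrated Gaussian kernel. The bandwidth and grid below are illustrative; near the boundary t = 0 this naive version exhibits exactly the bias the proposed transformation-based estimators are designed to remove.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.exponential(scale=2.0, size=500)   # Exp(2): true m(t) = 2 for all t
h = 0.3                                    # bandwidth (illustrative)

def S_hat(t):
    # Kernel-smoothed survival function: mean of Phi((X_i - t) / h).
    return norm.cdf((x - t) / h).mean()

def mrl_hat(t, upper=20.0, n_grid=400):
    u = np.linspace(t, upper, n_grid)
    s = np.array([S_hat(v) for v in u])
    integral = np.sum((s[1:] + s[:-1]) * np.diff(u)) / 2.0   # trapezoid rule
    return integral / s[0]

for t in [0.0, 1.0, 3.0]:
    print(f"m_hat({t}) = {mrl_hat(t):.2f}   (true value 2.00)")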
With the increasing abundance of 'digital footprints' left by human interactions in online environments, e.g., social media and app use, the ability to model complex human behavior has become increasingly possible. Many approaches have been proposed; however, most previous model frameworks are fairly restrictive. We introduce a new social modeling approach that enables the creation of models directly from data with minimal a priori restrictions on the model class. In particular, we infer the minimally complex, maximally predictive representation of an individual's behavior when viewed in isolation and as driven by a social input. We then apply this framework to a heterogeneous catalog of human behavior collected from fifteen thousand users on the microblogging platform Twitter. The models allow us to describe how a user processes their past behavior and their social inputs. Despite the diversity of observed user behavior, most inferred models fall into a small subclass of all possible finite-state processes. Thus, our work demonstrates that user behavior, while quite complex, is underpinned by simple computational structures.
computer science
We investigate general frameworks for calculating transport coefficients for quasiparticle theories at finite temperature. Hadronic transport coefficients are then computed using the linear sigma model (LSM). The bulk viscosity over entropy density ($\zeta/s$) is evaluated in the relaxation time approximation (RTA) and the specific shear viscosity ($\eta/s$) and static electrical conductivity ($\sigma_{el}/T$) are both obtained in the RTA and using a functional variational approach. Results are shown for different values of the scalar-isoscalar hadron vacuum mass with in-medium masses for the interacting fields. The advantages and limitations of the LSM for studies of strongly interacting matter out of equilibrium are discussed and results are compared with others in the literature.
high energy physics phenomenology
In this work we investigate the interaction between spin-zero and spin-one monopoles by making use of an effective field theory with two-body and four-body interaction terms. In particular, we analyze the formation of monopole-antimonopole bound states (i.e., monopolium). Magnetic-charge conjugation symmetry is studied in analogy to the usual charge conjugation to define a particle basis, for which we find bound-state solutions with relatively small binding energies and which allows us to identify bounds on the parameters in the effective Lagrangians. Estimates of the masses, binding energies, and scattering lengths are performed as functions of the monopole masses and interaction strengths in a specific renormalization scheme. We also examine the general validity of the approach and the feasibility of detecting monopolium.
high energy physics theory
This paper proves an equality in law between the invariant measure of a reflected system of Brownian motions and a vector of point-to-line last passage percolation times in a discrete random environment. A consequence describes the distribution of the all-time supremum of Dyson Brownian motion with drift. A finite temperature version relates the point-to-line partition functions of two directed polymers, with an inverse-gamma and a Brownian environment, and generalises Dufresne's identity. Our proof introduces an interacting system of Brownian motions with an invariant measure given by a field of point-to-line log partition functions for the log-gamma polymer.
mathematics
Strong many-body interactions in two-dimensional (2D) semiconductors give rise to efficient exciton-exciton annihilation (EEA). This process is expected to result in the generation of unbound high energy carriers. Here, we report an unconventional photoresponse of van der Waals heterostructure devices resulting from efficient EEA. Our heterostructures, which consist of monolayer transition metal dichalcogenide (TMD), hexagonal boron nitride (hBN), and few-layer graphene, exhibit photocurrent when photoexcited carriers possess sufficient energy to overcome the high energy barrier of hBN. Interestingly, we find that the device exhibits moderate photocurrent quantum efficiency even when the semiconducting TMD layer is excited at its ground exciton resonance despite the high exciton binding energy and large transport barrier. Using ab initio calculations, we show that EEA yields highly energetic electrons and holes with unevenly distributed energies depending on the scattering condition. Our findings highlight the dominant role of EEA in determining the photoresponse of 2D semiconductor optoelectronic devices.
condensed matter
Among the greatest challenges in understanding ultra-cool brown dwarf and exoplanet atmospheres is the evolution of cloud structure as a function of temperature and gravity. In this study, we present the rotational modulations of GU Psc b -- a rare mid-T spectral type planetary-mass companion at the end of the L/T spectral type transition. Based on the HST/WFC3 1.1-1.67$\rm\, \mu m$ time-series spectra, we observe a quasi-sinusoidal light curve with a peak-to-trough flux variation of 2.7 % and a minimum period of eight hours. The rotation-modulated spectral variations are weakly wavelength-dependent, or largely gray between 1.1-1.67$\rm\,\mu$m. The gray modulations indicate that heterogeneous clouds are present in the photosphere of this low-gravity mid-T dwarf. We place the color and brightness variations of GU Psc b in the context of rotational modulations reported for mid-L to late-T dwarfs. Based on these observations, we report a tentative trend: mid-to-late T dwarfs become slightly redder in $J-H$ color with increasing $J$-band brightness, while L dwarfs become slightly bluer with increasing brightness. If this trend is verified with more T-dwarf samples, it suggests that in addition to the mostly gray modulations, there is a second-order spectral-type dependence on the nature of rotational modulations.
astrophysics
Although deep learning models perform remarkably well across a range of tasks such as language translation and object recognition, it remains unclear what high-level logic, if any, they follow. Understanding this logic may lead to more transparency, better model design, and faster experimentation. Recent machine learning research has leveraged statistical methods to identify hidden units that behave (e.g., activate) similarly to human-understandable logic, but those analyses require considerable manual effort. Our insight is that many of those studies follow a common analysis pattern, which we term Deep Neural Inspection. There is an opportunity to provide a declarative abstraction to easily express, execute, and optimize them. This paper describes DeepBase, a system to inspect neural network behaviors through a unified interface. We model logic with user-provided hypothesis functions that annotate the data with high-level labels (e.g., part-of-speech tags, image captions). DeepBase lets users quickly identify individual or groups of units that have strong statistical dependencies with desired hypotheses. We discuss how DeepBase can express existing analyses, propose a set of simple and effective optimizations that speed up a standard Python implementation by up to 72x, and reproduce recent studies from the NLP literature.
computer science
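The common analysis pattern that DeepBase abstracts can be sketched in a few lines: score every hidden unit by a statistical dependency, here absolute Pearson correlation, between its activations and a user-provided hypothesis function evaluated on the same inputs. Random activations with one planted aligned unit stand in for a real model; DeepBase's contribution is expressing and optimizing many such queries declaratively.

import numpy as np

rng = np.random.default_rng(7)
n_inputs, n_units = 1000, 64
acts = rng.normal(size=(n_inputs, n_units))              # hidden-unit activations
hypothesis = (rng.random(n_inputs) > 0.5).astype(float)  # e.g. "token is a noun"
acts[:, 7] += 2.0 * hypothesis                           # plant one aligned unit

def unit_scores(acts, hyp):
    a = (acts - acts.mean(0)) / (acts.std(0) + 1e-9)
    h = (hyp - hyp.mean()) / (hyp.std() + 1e-9)
    return np.abs(a.T @ h) / len(h)                      # |Pearson r| per unit

scores = unit_scores(acts, hypothesis)
print("most aligned units:", np.argsort(scores)[::-1][:3])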
The accurate extraction and the reliable, repeatable reduction of graphene-metal contact resistance (R$_{C}$) are still open issues in graphene technology. Here, we demonstrate the importance of following clear protocols when extracting R$_{C}$ using the transfer length method (TLM). We use the example of back-gated graphene TLM structures with nickel contacts, a complementary metal-oxide-semiconductor compatible metal. The accurate extraction of R$_{C}$ is significantly affected by the generally observed Dirac voltage shifts with increasing channel length in ambient conditions. R$_{C}$ is generally a function of the carrier density in graphene. Hence, the position of the Fermi level and the gate voltage impact the extraction of R$_{C}$. Measurements in high vacuum, on the other hand, result in a dependable extraction of R$_{C}$ as a function of gate voltage, owing to the minimal spread in Dirac voltages. We further assess the accurate measurement and extraction of important parameters such as the contact-end resistance, transfer length, sheet resistance of graphene under the metal contact, and specific contact resistivity as a function of the back-gate voltage. The presented methodology has also been applied to devices with gold and copper contacts, with similar conclusions.
physics
The accuracy of the time information generated by clocks can be enhanced by allowing them to communicate with each other. Here we consider a basic scenario where a quantum clock receives a low-accuracy time signal as input and ask whether it can generate an output of higher accuracy. We propose protocols that use a quantum clock with a $d$-dimensional state space to achieve an accuracy enhancement by a factor of $d$, for large enough $d$. If no feedback to the input signal is allowed, this enhancement is temporary. With feedback the accuracy enhancement can be retained indefinitely. Our protocols are specific to quantum clocks, and may be used to synchronise them in a network, defining a time scale that is more accurate than what can be achieved by non-interacting or classical clocks.
quantum physics
We examine codimension-1 topological defects whose associated worldline is geodesically embedded in $AdS_2$. This discussion extends a previous study of exact analytical solutions to the equations of motion of topological defects in $AdS_n$ in a particular limit where the masses of the scalar and gauge fields vanish. We study the linear perturbations about the zeroth-order kink-like solution and verify that they are stable. We also discuss general features of the perturbation expansion to all orders.
high energy physics theory
At leading-twist accuracy, the form factors for the transitions from a virtual photon to the $\eta$ or $\eta'$ can be expanded into a power series in the variable $\omega$, which is related to the difference of the two photon virtualities. The series possesses the remarkable feature that only the Gegenbauer coefficients of the meson distribution amplitudes of order $l\leq m$ contribute to the term $\sim \omega^m$. Thus, for $\omega\to 0$ only the asymptotic meson distribution amplitude contributes, allowing for a test of the mixing of the $\eta$ and $\eta'$ decay constants. Employing the Gegenbauer coefficients determined in analyses of the form factors in the real-photon limit, we present predictions for the $\gamma^*\eta$ and $\gamma^*\eta'$ form factors and compare them to the BaBar data.
high energy physics phenomenology
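The structure described above can be made explicit with conventional definitions (a sketch under assumed conventions, not taken verbatim from the paper): writing the two photon virtualities as Q_1^2 and Q_2^2,

\begin{align}
  \omega = \frac{Q_1^2 - Q_2^2}{Q_1^2 + Q_2^2}, \qquad
  \overline{Q}^2 = \frac{1}{2}\left(Q_1^2 + Q_2^2\right),
\end{align}

the leading-twist transition form factor admits the power series

\begin{equation}
  \overline{Q}^2\, F_{\gamma^*\gamma^* P}(\omega) = \sum_{m \ge 0} c_m\, \omega^m ,
\end{equation}

where each coefficient c_m receives contributions only from Gegenbauer coefficients a_l of the meson distribution amplitude with l \le m, so that at \omega = 0 only the asymptotic distribution amplitude survives.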