A new approach is presented to improve the performance of semiempirical quantum mechanical (SQM) methods in the description of noncovalent interactions. To illustrate the strategy, the PM6 Hamiltonian was selected, although, in general, the procedure can be applied to other semiempirical Hamiltonians and to different methodologies. In this way, analytical corrections to PM6 were derived from fits to CCSD(T) - PM6 interaction energy differences. A set of small molecules was selected as representative of the common functional groups, and intermolecular potential energy curves were evaluated for the most relevant orientations of the interacting molecular pairs. The resulting method, called PM6-FGC (from Functional Group Corrections), significantly improves the performance of PM6 and of earlier corrected SQM methods, and shows the importance of including a sufficient number of orientations of the interacting molecules in the reference data set in order to obtain well-balanced descriptions.
physics
Studies accumulate over time and meta-analyses are mainly retrospective. These two characteristics introduce dependencies between the analysis time, at which a series of studies is up for meta-analysis, and results within the series. Dependencies introduce bias --- Accumulation Bias --- and invalidate the sampling distribution assumed for p-value tests, thus inflating type-I errors. But dependencies are also inevitable, since for science to accumulate efficiently, new research needs to be informed by past results. Here, we investigate various ways in which time influences error control in meta-analysis testing. We introduce an Accumulation Bias Framework that allows us to model a wide variety of practically occurring dependencies, including study series accumulation, meta-analysis timing, and approaches to multiple testing in living systematic reviews. The strength of this framework is that it shows how all dependencies affect p-value-based tests in a similar manner. This leads to two main conclusions. First, Accumulation Bias is inevitable, and even if it can be approximated and accounted for, no valid p-value tests can be constructed. Second, tests based on likelihood ratios withstand Accumulation Bias: they provide bounds on error probabilities that remain valid despite the bias. We leave the reader with a choice between two proposals to consider time in error control: either treat individual (primary) studies and meta-analyses as two separate worlds --- each with their own timing --- or integrate individual studies in the meta-analysis world. Taking up likelihood ratios in either approach allows for valid tests that relate well to the accumulating nature of scientific knowledge. Likelihood ratios can be interpreted as betting profits, earned in previous studies and invested in new ones, while the meta-analyst is allowed to cash out at any time and advise against future studies.
statistics
Hamiltonian Monte Carlo is a popular sampling technique for smooth target densities. The scale lengths of the target have long been known to influence integration error and sampling efficiency. However, quantitative measures intrinsic to the target have been lacking. In this paper, we restrict attention to the multivariate Gaussian and the leapfrog integrator, and obtain a condition number corresponding to sampling efficiency. This number, based on the spectral and Schatten norms, quantifies the number of leapfrog steps needed to efficiently sample. We demonstrate its utility by using this condition number to analyze HMC preconditioning techniques. We also find the condition number of large inverse Wishart matrices, from which we derive burn-in heuristics.
statistics
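The abstract above studies the leapfrog integrator for multivariate Gaussian targets. As an illustration only (the target covariance, step size and trajectory length below are arbitrary placeholder choices, not quantities from the paper), a minimal leapfrog-based HMC sampler in Python looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.diag([1.0, 0.1, 0.01])           # Gaussian target with disparate scale lengths
Sigma_inv = np.linalg.inv(Sigma)

def grad_log_p(q):
    return -Sigma_inv @ q                    # gradient of log N(0, Sigma)

def leapfrog(q, p, step, n_steps):
    """Standard leapfrog integration of the Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_log_p(q)
    for _ in range(n_steps - 1):
        q = q + step * p
        p = p + step * grad_log_p(q)
    q = q + step * p
    p = p + 0.5 * step * grad_log_p(q)
    return q, p

def hmc_step(q, step=0.05, n_steps=20):
    p = rng.normal(size=q.shape)
    H0 = 0.5 * q @ Sigma_inv @ q + 0.5 * p @ p
    q_new, p_new = leapfrog(q, p, step, n_steps)
    H1 = 0.5 * q_new @ Sigma_inv @ q_new + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < H0 - H1 else q   # Metropolis accept/reject

samples, q = [], np.zeros(3)
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)
print(np.cov(np.array(samples).T))           # should roughly recover Sigma
```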
In this paper, we critically revisit the Horowitz-Maldacena proposal and its generalization by Lloyd. In the original proposal, as well as in Lloyd's generalization, Hawking radiation involves a pair of maximally entangled quantum states in which the ingoing partner state and the collapsed matter form either a maximally entangled pair or a Schmidt decomposed state near the singularity. However, this cannot be the most generic state if there is an interaction between the collapsing matter and the incoming Hawking radiation. In opposition to Lloyd's conclusion that information can almost certainly escape from a black hole, we analytically and numerically confirm that information will almost certainly be lost, because the fidelity approaches zero as the number of degrees of freedom increases.
high energy physics theory
The structure of solids and their phases is mainly determined by static Coulomb forces while the coupling of charges to the dynamical, i.e., quantized degrees of freedom of the electromagnetic field plays only a secondary role. Recently, it has been speculated that this general rule can be overcome in the context of cavity quantum electrodynamics (QED), where the coupling of dipoles to a single field mode can be dramatically enhanced. Here we present a first exact analysis of the ground states of a dipolar cavity QED system in the non-perturbative coupling regime, where electrostatic and dynamical interactions play an equally important role. Specifically, we show how strong and long-range vacuum fluctuations modify the states of dipolar matter and induce novel phases with unusual properties. Beyond a purely fundamental interest, these general mechanisms can be important for potential applications, ranging from cavity-assisted chemistry to quantum technologies based on ultrastrongly coupled circuit QED systems.
quantum physics
Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning. A popular approach for solving it is mapping the observations into a representation space with a simple joint distribution, which can typically be written as a product of its marginals -- thus drawing a connection with the field of nonlinear independent component analysis. Deep density models have been widely used for this task, but their maximum likelihood based training requires estimating the log-determinant of the Jacobian and is computationally expensive, thus imposing a trade-off between computation and expressive power. In this work, we propose a new approach for exact training of such neural networks. Based on relative gradients, we exploit the matrix structure of neural network parameters to compute updates efficiently even in high-dimensional spaces; the computational cost of the training is quadratic in the input size, in contrast with the cubic scaling of naive approaches. This allows fast training with objective functions involving the log-determinant of the Jacobian, without imposing constraints on its structure, in stark contrast to autoregressive normalizing flows.
statistics
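To make the relative-gradient idea above concrete, here is a minimal NumPy sketch for a single linear layer $z = Wx$ with a standard normal base density. It only illustrates how right-multiplying the Euclidean gradient by $W^T W$ turns the $W^{-T}$ term coming from the log-determinant into $W$ itself, removing the matrix inversion; it does not reproduce the per-layer bookkeeping that gives the quadratic overall cost reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
scales = np.array([3.0, 2.0, 1.0, 0.5, 0.2])
X = rng.normal(size=(n, d)) * scales        # toy data with unequal variances

def euclidean_grad(W, X):
    """Gradient of the average loss 0.5*||Wx||^2 - log|det W|.
    The log-determinant contributes -W^{-T}, which needs a matrix inversion."""
    return (W @ X.T @ X) / len(X) - np.linalg.inv(W).T

def relative_grad(W, X):
    """Relative gradient = euclidean_grad(W, X) @ W.T @ W.
    The -W^{-T} piece becomes simply -W, so no inversion appears."""
    return ((W @ X.T @ X) / len(X)) @ W.T @ W - W

W = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # a generic invertible starting point
assert np.allclose(euclidean_grad(W, X) @ W.T @ W, relative_grad(W, X))

# Descending along the relative gradient learns a whitening transform.
for _ in range(500):
    W -= 0.05 * relative_grad(W, X)
print(np.round(np.cov((X @ W.T).T), 2))     # close to the identity matrix
```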
In this paper, we study a class of linear codes defined by characteristic functions of certain subsets of a finite field. We derive a necessary and sufficient condition for such a code to be a minimal linear code by a character-theoretic approach. We obtain new three-weight or four-weight minimal linear codes that do not satisfy the Ashikhmin-Barg condition by using partial difference sets. We show that our construction yields minimal linear codes that do not arise from cutting vectorial blocking sets, and we also discuss their applications in secret sharing schemes.
mathematics
COVID-19 has caused a global epidemic, the most severe infectious-disease disaster in human history. In the absence of specific medication and vaccines, tracing and isolating the sources of infection is the best option to slow the spread of the virus and reduce infection and death rates in the population. There are three main obstacles in tracing infections: 1) patients' electronic health records are stored in traditional centralized databases, where infection data could be stolen or tampered with; 2) the confidential personal identity of an infected user may be revealed to a third party or organization; 3) existing infection-tracing systems do not trace infections along multiple dimensions, being either location-based or individual-based. In this work, we propose a global COVID-19 information sharing system that utilizes Blockchain, Smart Contract, and Bluetooth technologies. The proposed system unifies location-based and Bluetooth-based contact tracing services on a Blockchain platform, where automatically executed smart contracts are deployed so that users can obtain consistent and tamper-proof virus trails. The anonymity provided by the Blockchain and Bluetooth technologies protects users' identity privacy. With our proposed analysis formula for estimating the probability of infection, users can take measures to protect themselves in advance. We also implement a prototype system to demonstrate the feasibility and effectiveness of our approach.
computer science
We study in detail the collider signatures of an $SU(2)_R$ fermionic quintuplet in the framework of the left-right symmetric model in the context of the 13 TeV LHC. Apart from giving a viable dark matter candidate ($\chi^0$), this model provides unique collider imprints in the form of same-sign multileptons through the decays of the multi-charged components of the quintuplet. In particular, we consider the scenario where the quintuplet carries $(B - L) = 4$ charge, allowing for the presence of high charge-multiplicity particles with relatively larger mass differences among them compared to $(B - L) = 0$ or 2. In this paper, we mainly focus on the same-sign n-lepton signatures (nSSL). We show that with an integrated luminosity of 500 fb$^{-1}$, a neutral component mass $M_{\chi^0} \leq 480~(800)$ GeV can be excluded at 95\% CL in the 2SSL (3SSL) channel after imposing several selection criteria. We also show that a $5\sigma$ discovery is achievable if $M_{\chi^0} \leq 390~(750)$ GeV in the 2SSL (3SSL) channel with 1000 fb$^{-1}$ integrated luminosity.
high energy physics phenomenology
Mutual information (MI) is an information-theoretic measure of dependency between two random variables. Several methods have been proposed in the literature to estimate MI from samples of two random variables with unknown underlying probability distributions. Recent methods realize a parametric probability distribution or a critic as a neural network to approximate unknown density ratios. The approximated density ratios are used to estimate different variational lower bounds of MI. While these methods provide reliable estimation when the true MI is low, they produce high-variance estimates when the true MI is high. We argue that this high-variance characteristic is due to the uncontrolled complexity of the critic's hypothesis space. In support of this argument, we use the data-driven Rademacher complexity of the hypothesis space associated with the critic's architecture to analyse the generalization error bound of variational lower bound estimates of MI. In the proposed work, we show that it is possible to negate the high-variance characteristics of these estimators by constraining the critic's hypothesis space to a Reproducing Kernel Hilbert Space (RKHS), which corresponds to a kernel learned using Automated Spectral Kernel Learning (ASKL). By analysing the aforementioned generalization error bounds, we augment the overall optimisation objective with an effective regularisation term. We empirically demonstrate the efficacy of this regularisation in enforcing a proper bias-variance trade-off on four variational lower bounds, namely NWJ, MINE, JS and SMILE.
computer science
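For reference, two of the variational lower bounds named in the abstract above (the Donsker-Varadhan bound behind MINE and the NWJ bound) can be computed from critic scores as in the sketch below. The critic here is the closed-form log density ratio of a toy correlated Gaussian pair, standing in for the trained neural or RKHS critic discussed in the abstract.

```python
import numpy as np

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan (MINE-style) bound: E_p[T] - log E_q[exp(T)]."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

def nwj_bound(t_joint, t_marginal):
    """NWJ bound: E_p[T] - (1/e) * E_q[exp(T)]."""
    return t_joint.mean() - np.exp(t_marginal - 1.0).mean()

# Toy check on correlated Gaussians, where the optimal critic (the log density
# ratio) is known in closed form.
rng = np.random.default_rng(1)
n, rho = 5000, 0.8
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
x_m, y_m = x, rng.permutation(y)            # samples from the product of marginals

def log_ratio(x, y):
    return (-0.5 * np.log(1 - rho**2)
            + (2 * rho * x * y - rho**2 * (x**2 + y**2)) / (2 * (1 - rho**2)))

print("true MI:", -0.5 * np.log(1 - rho**2))
print("DV  estimate:", dv_bound(log_ratio(x, y), log_ratio(x_m, y_m)))
print("NWJ estimate:", nwj_bound(1 + log_ratio(x, y), 1 + log_ratio(x_m, y_m)))
```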
Doped SrTiO$_3$, one of the most dilute bulk systems to display superconductivity, is perhaps the first example of an unconventional superconductor, as it does not fit into the standard BCS paradigm. More than five decades of research has revealed a rich temperature-carrier concentration phase diagram that showcases a superconducting dome, proximity to a putative quantum critical point, Lifshitz transitions, a multi-gap pairing state and unusual normal-state transport properties. Research has also extended beyond bulk SrTiO$_3$, ushering in the new field of SrTiO$_3$-based heterostructures. Because many of these themes are also featured in other quantum materials of contemporary interest, recent years have seen renewed interest in SrTiO$_3$. Here, we review the challenges and recent progress in elucidating the superconducting state of this model system. While its extreme dilution requires revisiting several of the approximations that constitute the successful Migdal-Eliashberg description of electron-phonon superconductivity, including the suppression of the Coulomb repulsion via the Tolmachev-Anderson-Morel mechanism, it also opens interesting routes for alternative pairing mechanisms whose applicability remains under debate. For instance, pairing mechanisms involving longitudinal optical phonons have to overcome the hurdles created by the anti-adiabatic nature of the pairing interaction, whereas mechanisms that rely on the soft transverse optical phonons associated with incipient ferroelectricity face challenges related to the nature of the electron-phonon coupling. Proposals in which pairing is mediated by plasmons or promoted locally by defects are also discussed. We finish by surveying the existing evidence for multi-band superconductivity and outlining promising directions that can potentially shed new light on the rich problem of superconductivity in SrTiO$_3$.
condensed matter
We analyzed Chandra X-ray observations of five galaxy clusters whose atmospheric cooling times, entropy parameters, and cooling time to free-fall time ratios within the central galaxies lie below 1 Gyr, below 30 keV cm^2, and in the range 20 < tcool/tff < 50, respectively. These thermodynamic properties are commonly associated with molecular clouds, bright H-alpha emission, and star formation in central galaxies. However, none of these clusters have detectable H-alpha emission indicated in the ACCEPT database, nor do they have significant star formation rates or detectable molecular gas. Among these, only RBS0533 has a detectable radio/X-ray bubble of the kind commonly observed in cooling atmospheres. Signatures of uplifted, high-metallicity atmospheric gas are absent. Despite its prominent X-ray bubble, RBS0533 lacks significant levels of molecular gas. Cold gas is absent at appreciable levels in these systems, perhaps because their radio sources have failed to lift low-entropy atmospheric gas to an altitude where the ratio of the cooling time to the free-fall time falls below unity.
astrophysics
Analysis of dental radiographs is an important part of the diagnostic process in daily clinical practice. Interpretation by an expert includes teeth detection and numbering. In this project, a novel solution based on adaptive histogram equalization and a convolutional neural network (CNN) is proposed, which automatically performs this task for dental X-rays. In order to improve the detection accuracy, we propose three pre-processing techniques to supplement the baseline CNN based on some prior domain knowledge. Firstly, image sharpening and median filtering are used to remove impulse noise and enhance edges to some extent. Next, adaptive histogram equalization is used to overcome the excessive noise amplification of ordinary histogram equalization (HE). Finally, a multi-CNN hybrid model is proposed to classify six different locations of dental slices. The results show that the accuracy and specificity on the test set exceed 90\%, and the AUC reaches 0.97. In addition, four dentists were invited to independently annotate the test data set by hand, and their annotations were compared with the labels obtained by our proposed algorithm. The results show that our method can effectively identify the X-ray location of teeth.
electrical engineering and systems science
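A sketch of the pre-processing steps described above (median filtering, sharpening, and contrast-limited adaptive histogram equalization), using standard OpenCV calls; the kernel sizes, CLAHE parameters and file name are illustrative placeholders, not the values used in the study:

```python
import cv2
import numpy as np

def preprocess_radiograph(path):
    """Denoise, sharpen and contrast-enhance a grayscale dental X-ray."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # 1) Median filtering removes impulse (salt-and-pepper) noise.
    denoised = cv2.medianBlur(img, 3)

    # 2) A simple sharpening kernel enhances edges to some extent.
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(denoised, -1, kernel)

    # 3) CLAHE avoids the excessive noise amplification of plain
    #    histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(sharpened)

# enhanced = preprocess_radiograph("panoramic_xray.png")  # hypothetical file name
```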
Color correction for underwater images has received increasing interest, due to its critical role in making mature vision algorithms applicable to underwater scenarios. Inspired by the stunning success of deep convolutional neural network (DCNN) techniques in many vision tasks, especially their strength in extracting features at multiple scales, we propose a deep multi-scale feature fusion network based on the conditional generative adversarial network (GAN) for underwater image color correction. In our network, multi-scale features are extracted first, and local features on each scale are then augmented with global features. This design was shown to facilitate more effective and faster network learning, resulting in better performance in both color correction and detail preservation. We conducted extensive experiments and compared our method with state-of-the-art approaches quantitatively and qualitatively, showing that it achieves significant improvements.
electrical engineering and systems science
Analytic representations of both the position and momentum waveforms of the two-dimensional (2D) circular quantum dots with the Dirichlet and Neumann boundary conditions (BCs) allowed an efficient computation in either space of the Shannon $S$, R\'{e}nyi $R(\alpha)$ and Tsallis $T(\alpha)$ entropies, Onicescu energies $O$ and Fisher informations $I$. It is shown that a transition to the 2D geometry lifts the 1D degeneracy of the position components $S_\rho$, $O_\rho$, $R_\rho(\alpha)$. Among many other findings, it is established that the lower limit $\alpha_{TH}$ of the semi-infinite range of the dimensionless R\'{e}nyi/Tsallis coefficient where one-parameter momentum entropies exist is equal to 2/5 for the Dirichlet requirement and 2/3 for the Neumann one. Since their 1D counterparts are $1/4$ and $1/2$, respectively, this simultaneously reveals that this critical value crucially depends not only on the position BC but also on the dimensionality of the structure. As the 2D Neumann threshold $\alpha_{TH}^N$ is greater than one half, its R\'{e}nyi uncertainty relation for the sum of the position and wave vector components $R_\rho(\alpha)+R_\gamma\left(\frac{\alpha}{2\alpha-1}\right)$ is valid in the range $[1/2,2)$ only, with a logarithmic divergence at the right edge, whereas for all other systems it is defined at any coefficient $\alpha$ not smaller than one half. For both configurations, the lowest-energy level at $\alpha=1/2$ saturates the R\'{e}nyi and Tsallis entropic inequalities. Other properties are discussed and analyzed from the mathematical and physical points of view.
quantum physics
The continuous Multi-Scale Entanglement Renormalization Ansatz (cMERA) is a variational method which carries out a real-space renormalization scheme on the wavefunctionals of quantum field theories. In this work we calculate the entanglement entropy of the half space for a free scalar theory through a Gaussian cMERA circuit. We obtain the correct entropy written in terms of the optimized cMERA variational parameter, the local density of disentanglers. Accordingly, using the entanglement entropy production per unit scale, we study local areas in the bulk of the tensor network in terms of the differential entanglement generated along the cMERA flow. This result spurs us to establish an explicit relation between the cMERA variational parameter and the radial component of a dual AdS geometry through the Ryu-Takayanagi formula. Finally, based on recent formulations of non-Gaussian cMERA circuits, we argue that the entanglement entropy for the half space can be written as an integral along the renormalization scale whose measure is given by the Fisher information metric of the cMERA circuit. Consequently, a straightforward relation between the AdS geometry and the Fisher information metric is also established.
high energy physics theory
The primary goal of this document is to record the asymptotic effects that preimage constraints impose upon the sizes of the iterated images of a random function. Specifically, given a subset $\mathcal{P}\subseteq \mathbb{Z}_{\geq 0}$ and a finite set $S$ of size $n$, choose a function uniformly from the set of functions $f:S\rightarrow S$ that satisfy the condition that $|f^{-1}(x)|\in\mathcal{P}$ for each $x\in S$, and ask what $|f^k(S)|$ looks like as $n$ goes to infinity. The robust theory of singularity analysis allows one to completely answer this question if one accepts that $0\in\mathcal{P}$, that $\mathcal{P}$ contains an element bigger than 1, and that $\gcd(\mathcal{P})=1$; only the third of these conditions is a meaningful restriction. The secondary goal of this paper is to record much of the background necessary to achieve the primary goal.
mathematics
I work out the general expressions for the first relativistic correction, of order $\dfrac{1}{c^2}$, to thermodynamics. The starting point is the relativistic Hamiltonian that I derived in a previous paper, which I expand in powers of $\dfrac{1}{c^2}$ to obtain a Hamiltonian that is local in time. Limiting myself to the first relativistic correction, I work out in general how the relativistic corrections to thermodynamics arise. I then apply the formalism to the problem of N particles with harmonic oscillator interactions in 3D to obtain explicit expressions for the relativistic corrections.
condensed matter
We present deep spectroscopy from Keck/DEIMOS of Andromeda I, III, V, VII, and X, all of which are dwarf spheroidal satellites of M31. The sample includes 256 spectroscopic members across all five dSphs. We confirm previous measurements of the velocity dispersions and dynamical masses, and we provide upper limits on bulk rotation. Our measurements confirm that M31 satellites obey the same relation between stellar mass and stellar metallicity as Milky Way (MW) satellites and other dwarf galaxies in the Local Group. The metallicity distributions show similar trends with stellar mass as MW satellites, including evidence in massive satellites for external influence, like pre-enrichment or gas accretion. We present the first measurements of individual element ratios, like [Si/Fe], in the M31 system, as well as measurements of the average [alpha/Fe] ratio. The trends of [alpha/Fe] with [Fe/H] also follow the same galaxy mass-dependent patterns as MW satellites. Less massive galaxies have more steeply declining slopes of [alpha/Fe] that begin at lower [Fe/H]. Finally, we compare the chemical evolution of M31 satellites to M31's Giant Stellar Stream and smooth halo. The properties of the M31 system support the theoretical prediction that the inner halo is composed primarily of massive galaxies that were accreted early. As a result, the inner halo exhibits higher [Fe/H] and [alpha/Fe] than surviving satellite galaxies.
astrophysics
Three basic ingredients are required to generate a simple stellar population (SSP) library, i.e., an initial mass function (IMF), a stellar evolution model/isochrones, and an empirical/theoretical stellar spectral library. However, there are still some uncertainties in the determination and understanding of these ingredients. We perform spectral fitting to test the relative parameter offsets caused by these uncertainties using two different stellar population models, two different empirical stellar libraries, two different isochrones, and the Salpeter and Chabrier IMFs. Based on these setups, we select five SSP libraries generated with the Galaxev/STELIB and Vazdekis/MILES models, and apply them to the pPXF full-spectrum fitting of both MaNGA and mock spectra. We find that: 1) Compared to the Galaxev/STELIB model, spectral fitting qualities with the Vazdekis/MILES model improve significantly for metal-rich (especially super-solar) spectra, which leads to better reduced $\chi^2$ distributions and more precisely fitted absorption lines. This might be due to the lack of metal-rich stars in the empirical STELIB library, or to code improvements in the Vazdekis model. 2) When applying the Vazdekis/MILES model for spectral fitting, the IMF variation leads not only to a systematic offset in $M_*/L_r$, but also to offsets in age and metallicity, and these offsets increase with increasing stellar population age. However, the IMF-variation-induced metallicity offsets disappear in the case of the Galaxev/STELIB based libraries. 3) The Padova2000 model provides a better match to the MaNGA galaxy spectra at [M/H]$_L<-1.0$, while the BaSTI model matches the local galaxy spectra better at [M/H]$_L>-1.0$. Current tests suggest that spectral fitting with the Vazdekis/MILES+BaSTI combination would be a better choice for local galaxies.
astrophysics
We give an explicit component Lagrangian construction of massive higher spin on-shell $N=1$ supermultiplets in four-dimensional Anti-de Sitter space $AdS_4$. We use a frame-like gauge invariant description of massive higher spin bosonic and fermionic fields. For the two types of supermultiplets (with integer and half-integer superspins), each containing two massive bosonic and two massive fermionic fields, we derive the supertransformations that leave the sum of their four free Lagrangians invariant, such that the algebra of these supertransformations is closed on-shell.
high energy physics theory
We report the discovery of a high-redshift, massive molecular outflow in the starburst galaxy SPT0346-52 ($z=5.656$) via the detected absorption of high-excitation water transitions (H$_2$O $4_{2,3}-4_{1,4}$ and H$_2$O $3_{3,0}-3_{2,1}$) with the Atacama Large Millimeter/submillimeter Array (ALMA). The host galaxy is one of the most powerful starburst galaxies at high redshift (star formation rate; SFR $\sim3,600$M$_\odot$year$^{-1}$), with an extremely compact ($\sim320$pc) star formation region and a star formation rate surface density ($\Sigma_{SFR}\sim5,500$M$_{\odot}~$year$^{-1}~$kpc$^{-2}$) five times higher than `maximum' (i.e. Eddington-limited) starbursts, implying a highly transient phase. The estimated outflow rate is $\sim500$M$_{\odot}$year$^{-1}$, which is much lower than the SFR, implying that in this extreme starburst the outflow capabilities saturate and the outflow is no longer capable of regulating star formation, resulting in a runaway process in which star formation will use up all available gas in less than 30Myr. Finally, while previous kinematic investigations of this source revealed possible evidence for an ongoing major merger, the coincidence of the hyper-compact starburst and high-excitation water absorption indicates that this is a single starburst galaxy surrounded by a disc.
astrophysics
We propose a method for generating video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model, and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person that is tracked in order to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state-of-the-art in learning-based human image synthesis. Project page: http://gvv.mpi-inf.mpg.de/projects/wxu/HumanReenactment/
computer science
Quasars accreting matter at very high rates (known as extreme Population A [xA] quasars, possibly associated with super-Eddington accreting massive black holes) may provide a new class of distance indicators covering cosmic epochs from the present day up to less than 1 Gyr from the Big Bang. At a more fundamental level, xA quasars are of special interest in studies of the physics of AGNs and host galaxy evolution. However, their observational properties are largely unknown. xA quasars can be identified in relatively large numbers from major optical surveys over a broad range of redshifts, thanks to selection criteria defined from the systematic changes along the quasar main sequence. It has been possible to build a sample of about 250 quasars at low and intermediate redshift, and larger samples can be easily selected from the latest data releases of the Sloan Digital Sky Survey. A large sample can clarify the main properties of xA quasars, which are expected - unlike the general population of quasars - to radiate at an extreme, well-defined Eddington ratio with small scatter. As a result of the small scatter in Eddington ratio shown by xA quasars, we propose a method to derive the main cosmological parameters based on redshift-independent "virial luminosity" estimates from measurements of emission line widths, roughly equivalent to the luminosity estimates based on line width in early- and late-type galaxies. A major issue related to the cosmological application of the xA quasar luminosity estimates from line widths is the identification of proper emission lines whose broadening is predominantly virial over a wide range of redshift and luminosity. We report on preliminary developments using the AlIII 1860 intermediate ionization line and the Hydrogen Balmer line H-beta as virial broadening estimators, and we briefly discuss the perspectives of the method based on xA quasars.
astrophysics
Molecules constitute compact hybrid quantum optical systems that can interface photons, electronic degrees of freedom, localized mechanical vibrations and phonons. In particular, the strong vibronic interaction between electrons and nuclear motion in a molecule resembles the optomechanical radiation pressure Hamiltonian. While molecular vibrations are often in the ground state even at elevated temperatures, one still needs to get a handle on decoherence channels associated with phonons before an efficient quantum optical network based on opto-vibrational interactions in solid-state molecular systems could be realized. As a step towards a better understanding of decoherence in phononic environments, we take here an open quantum system approach to the non-equilibrium dynamics of guest molecules embedded in a crystal, identifying regimes of Markovian versus non-Markovian vibrational relaxation. A stochastic treatment based on quantum Langevin equations predicts collective vibron-vibron dynamics that resembles processes of sub- and superradiance for radiative transitions. This in turn leads to the possibility of decoupling intramolecular vibrations from the phononic bath, allowing for enhanced coherence times of collective vibrations. For molecular polaritonics in strongly confined geometries, we also show that the imprint of opto-vibrational couplings onto the emerging output field results in effective polariton cross-talk rates for finite bath occupancies.
quantum physics
Bow-shaped mid-infrared emission regions have been discovered in satellite observations of numerous late-type O and early-type B stars with moderate velocities relative to the ambient interstellar medium. Previously, hydrodynamical bow shock models have been used to study this emission. It appears that such models are incomplete in that they neglect kinetic effects associated with long mean free paths of stellar wind particles, and the complexity of Weibel instability fronts. Wind ions are scattered in the Weibel instability and mix with the interstellar gas. However, they do not lose their momentum and most ultimately diffuse further into the ambient gas like cosmic rays, and share their energy and momentum. Lacking other coolants, the heated gas transfers energy to interstellar dust grains, which radiate it. This process, in addition to grain photo-heating, provides the energy for the emission. A weak R-type ionization front, formed well outside the infrared emission region, generally moderates the interstellar gas flow into the emission region. The theory suggests that the infrared emission process is limited to cases of moderate stellar peculiar velocities, evidently in accord with the observations.
astrophysics
In jammed packings, it is usually thought that local structure only plays a significant role in specific regimes. The standard deviation of the relative excess coordination, $\sigma_Z/ Z_\mathrm{c}$, decays like $1/\sqrt{d}$, so that local structure should play no role in high spatial dimensions. Furthermore, in any fixed dimension $d \geq 2$, there are diverging length scales as the pressure vanishes approaching the unjamming transition, again suggesting that local structure should not be sufficient to describe response. Here we challenge the assumption that local structure does not matter in these cases. In simulations of jammed packings under athermal, quasistatic shear, we use machine learning to identify a local structural variable, softness, that correlates with rearrangements in dimensions $d=2$ to $d=5$. We find that softness - and even just the coordination number $Z$ - are quite predictive of rearrangements over a wide range of pressures, all the way down to unjamming, in all $d$ studied. This result provides direct evidence that local structure can play a role in higher spatial dimensions.
condensed matter
We present a combined theoretical and experimental study of X-ray optical wave mixing. This class of nonlinear phenomena combines the strengths of spectroscopic techniques from the optical domain with the high-resolution capabilities of X-rays. In particular, the spectroscopic sensitivity of these phenomena can be exploited to selectively probe valence dynamics. Specifically, we focus on the effect of X-ray parametric down-conversion. We present a theoretical description of the process, from which we deduce the observable nonlinear response of valence charges. Subsequently, we simulate scattering patterns for realistic conditions and identify characteristic signatures of the nonlinear conversion. For the observation of this signature, we present a dedicated experimental setup and the results of a detailed investigation. However, we do not find evidence of the nonlinear effect. This finding stands in strong contradiction to previous claims of proof-of-principle demonstrations. Nevertheless, we are optimistic that related X-ray optical wave-mixing processes can be employed, on the basis of the methods presented here, to probe valence dynamics in the future.
physics
Most existing artificial neural networks (ANNs) fail to learn continually due to catastrophic forgetting, whereas humans can do so while maintaining their performance on previous tasks. Although storing all the previous data can alleviate the problem, it requires a large memory, which is infeasible in real-world use. We propose a continual zero-shot learning model (A-CZSL), better suited to real-world scenarios, that can learn sequentially and distinguish classes the model has not seen during training. Further, to enhance reliability, we develop A-CZSL for a single-head continual learning setting, where task identity is revealed during training but not during testing. We present a hybrid network that consists of a shared VAE module to hold information about all tasks and task-specific private VAE modules for each task. The model's size grows with each task to prevent catastrophic forgetting of task-specific skills, and it includes a replay approach to preserve shared skills. We demonstrate that our hybrid model outperforms the baselines and is effective on several datasets, i.e., CUB, AWA1, AWA2, and aPY. We show that our method is superior in sequential class learning with ZSL (Zero-Shot Learning) and GZSL (Generalized Zero-Shot Learning).
computer science
This paper provides triangular spherical designs for the complex unit sphere $\Omega^d$ by exploiting the natural correspondence between the complex unit sphere in $d$ dimensions and the real unit sphere $S^{2d-1}$. The existence of triangular and square complex spherical $t$-designs with a number of points of the optimal order is established. A variational characterization of triangular complex designs is provided, with particular emphasis on the numerical computation of efficient triangular complex designs with good geometric properties as measured by their mesh ratio. We give numerical examples of triangular spherical $t$-designs on complex unit spheres of dimension $d=2$ to $6$.
statistics
A type I superconductor expels a magnetic field from its interior to a surface layer of thickness $\lambda_L$, the London penetration depth. $\lambda_L$ is a function of temperature, becoming smaller as the temperature decreases. Here we analyze the process of cooling (or heating) a type I superconductor in a magnetic field, with the system remaining always in the superconducting state. The conventional theory predicts that Joule heat is generated in this process, the amount of which depends on the rate at which the temperature changes. Assuming the final state of the superconductor is independent of history, as the conventional theory assumes, we show that this process violates the first and second laws of thermodynamics. We conclude that the conventional theory of superconductivity is internally inconsistent. Instead, we suggest that the alternative theory of hole superconductivity may be able to resolve this problem.
condensed matter
We estimate the rate at which collisions between ultra-high energy cosmic rays can form small black holes in models with extra dimensions. If recent conjectures about false vacuum decay catalyzed by black hole evaporation apply, the lack of vacuum decay events in our past light cone may place new bounds on the black hole formation rate and thus on the fundamental scale of gravity in these models. For theories with fundamental scale $E_{*}$ above the Higgs instability scale of the Standard Model, we find a lower bound on $E_{*}$ that is within about an order of magnitude of the energy where the cosmic ray spectrum begins to show suppression from the GZK effect. Otherwise, the abundant formation of semiclassical black holes with short lifetimes would likely initiate vacuum decay. Assuming a Higgs instability scale at the low end of the range compatible with experimental data, the excluded range is approximately $10^{17} \,\text{eV} \lesssim E_{*} \leq 10^{18.8}\,\text{eV}$ for theories with $n=1$ extra dimension, narrowing to $10^{17}\,\text{eV} \lesssim E_{*} \leq 10^{18.1}\,\text{eV}$ for $n=6$. These bounds rule out regions of parameter space that are inaccessible to collider experiments, small-scale gravity tests, or estimates of Kaluza-Klein processes in neutron stars and supernovae.
high energy physics phenomenology
In this paper we investigate quasinormal modes (QNM) for a scalar field around a noncommutative Schwarzschild black hole. We verify the effect of noncommutativity on quasinormal frequencies by applying two procedures widely used in the literature. The first is the Wentzel-Kramers-Brillouin (WKB) approximation up to sixth order. In the second case we use the continuous fraction method developed by Leaver. We find that the quasinormal frequencies obtained for nonzero noncommutative parameter resemble those of the Reissner-Nordstr\"{o}m geometry. Besides, we also show that due to noncommutativity, the shadow radius is reduced when we increase the noncommutative parameter. In addition, we find that the shadow radius is nonzero even at the zero mass limit for finite noncommutative parameter.
high energy physics theory
In this paper we obtain a decoupling feature of the random interlacements process $\mathcal{I}^u \subset \mathbb{Z}^d$, at level $u$, $d\geq 3$. More precisely, we show that the trace of the random interlacements process on two disjoint finite sets, $\textsf{F}$ and its translate $\textsf{F}+x$, can be coupled with high probability of success, when $\|x\|$ is large, with the trace of a process of independent excursions, which we call the noodle soup process. As a consequence, we obtain an upper bound on the covariance between two $[0,1]$-valued functions depending on the configuration of the random interlacements on $\textsf{F}$ and $\textsf{F}+x$, respectively. This improves a previous bound obtained by Sznitman in [12].
mathematics
We perform Monte-Carlo computations of four-point cluster connectivities in the critical 2d Potts model, for numbers of states $Q\in (0,4)$ that are not necessarily integer. We compare these connectivities to four-point functions in a CFT that interpolates between D-series minimal models. We find that 3 combinations of the 4 independent connectivities agree with CFT four-point functions, down to the $2$ to $4$ significant digits of our Monte-Carlo computations. However, we argue that the agreement is exact only in the special cases $Q=0, 3, 4$. We conjecture that the Potts model can be analytically continued to a double cover of the half-plane $\{\Re c <13\}$, where $c$ is the central charge of the Virasoro symmetry algebra.
high energy physics theory
Auto-correlated noise appears in many solid state qubit systems and hence needs to be taken into account when developing gate operations for quantum information processing. However, explicitly simulating this kind of noise is often less efficient than approximate methods. Here, we focus on the filter function formalism, which allows the computation of gate fidelities in the presence of auto-correlated classical noise. Hence, this formalism can be combined with optimal control algorithms to design control pulses, which optimally implement quantum gates. To enable the use of gradient-based algorithms with fast convergence, we present analytically derived filter function gradients with respect to control pulse amplitudes, and analyze the computational complexity of our results. When comparing pulse optimization using our derivatives to a gradient-free approach, we find that the gradient-based method is roughly two orders of magnitude faster for our test cases. We also provide a modular computational implementation compatible with quantum optimal control packages.
quantum physics
Sound policy and decision making in developing countries is often limited by the lack of timely and reliable data. Crowdsourced data may provide a valuable alternative for data collection and analysis, e.g., in remote, insecure, or poorly accessible areas where traditional methods are difficult or costly. However, crowdsourced data are not directly usable to draw sound statistical inference. Indeed, their use involves statistical problems because the data do not obey any formal sampling design and may also suffer from various non-sampling errors. To overcome this, we propose the use of a special form of post-stratification with which crowdsourced data are reweighted prior to their use in an inferential context. An example in Nigeria illustrates the applicability of the method.
statistics
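A minimal sketch of the post-stratification reweighting idea described above: crowdsourced observations are reweighted so that strata appear in their known population proportions. The strata, population shares and data below are invented for illustration, and the paper's special form of post-stratification is not reproduced here.

```python
import pandas as pd

# Crowdsourced sample (hypothetical): an outcome plus a stratifying variable.
sample = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "south", "east"],
    "outcome": [12.0, 10.0, 7.0, 8.0, 6.0, 9.0],
})

# Known population shares of each stratum, e.g. from a census (hypothetical).
pop_share = pd.Series({"north": 0.50, "south": 0.30, "east": 0.20})

# Post-stratification weight: population share divided by sample share.
sample_share = sample["region"].value_counts(normalize=True)
sample["weight"] = sample["region"].map(pop_share / sample_share)

naive_mean = sample["outcome"].mean()
ps_mean = (sample["weight"] * sample["outcome"]).sum() / sample["weight"].sum()
print(f"naive mean: {naive_mean:.2f}, post-stratified mean: {ps_mean:.2f}")
```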
The rapid increase of traffic volume on urban roads over time has changed the traffic scenario globally. It has also increased the rate of road accidents, which can be severe or even fatal. To improve traffic safety and its management on urban roads, there is a need to predict the severity level of accidents. Various machine learning models are being used for accident prediction. In this study, tree-based ensemble models (Random Forest, AdaBoost, Extra Trees, and Gradient Boosting) and an ensemble of two statistical models (Logistic Regression and Stochastic Gradient Descent) as a voting classifier are compared for the prediction of road accident severity. Significant features that are strongly correlated with accident severity are identified by Random Forest. The analysis shows Random Forest to be the best-performing model, with the highest classification results (0.974 accuracy, 0.954 precision, 0.930 recall and 0.942 F-score) using the 20 most significant features, as compared to the other techniques for classifying road accident severity.
computer science
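The model comparison described above can be sketched with scikit-learn as follows; the feature matrix, labels and hyperparameters are placeholders rather than the study's accident data or settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the accident records (severity = class label).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    # Voting ensemble of the two statistical models, as in the abstract.
    "Voting(LR+SGD)": VotingClassifier([
        ("lr", LogisticRegression(max_iter=1000)),
        ("sgd", SGDClassifier(random_state=0)),
    ], voting="hard"),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), digits=3))

# Feature importances from the Random Forest identify the most significant features.
rf = models["RandomForest"]
top = sorted(enumerate(rf.feature_importances_), key=lambda t: -t[1])[:5]
print("most significant features (index, importance):",
      [(i, round(v, 3)) for i, v in top])
```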
We present new planning and learning algorithms for RAE, the Refinement Acting Engine. RAE uses hierarchical operational models to perform tasks in dynamically changing environments. Our planning procedure, UPOM, does a UCT-like search in the space of operational models in order to find a near-optimal method to use for the task and context at hand. Our learning strategies acquire, from online acting experiences and/or simulated planning results, a mapping from decision contexts to method instances as well as a heuristic function to guide UPOM. Our experimental results show that UPOM and our learning strategies significantly improve RAE's performance in four test domains using two different metrics: efficiency and success ratio.
computer science
We construct closed, embedded, ancient mean curvature flows in each dimension $n\ge 2$ with the topology of $S^1 \times S^{n-1}$. These examples are not mean convex and not solitons. They are constructed by analyzing perturbations of the self-shrinking doughnuts constructed by Drugan and Nguyen (or, alternatively, Angenent's self-shrinking torus when $n = 2$).
mathematics
Modern longitudinal studies collect feature data at many timepoints, often of the same order as the sample size. Such studies are typically affected by dropout and positivity violations. We tackle these problems by generalizing effects of recent incremental interventions (which shift propensity scores rather than set treatment values deterministically) to accommodate multiple outcomes and subject dropout. We give an identifying expression for incremental effects when dropout is conditionally ignorable (without requiring treatment positivity), and derive the nonparametric efficiency bound for estimating such effects. Then we present efficient nonparametric estimators, showing that they converge at fast parametric rates and yield uniform inferential guarantees, even when nuisance functions are estimated flexibly at slower rates. We also study the efficiency of incremental effects relative to more conventional deterministic effects in a novel infinite time horizon setting, where the number of timepoints can grow with sample size, and show that incremental effects yield near-exponential gains in this setup. Finally we conclude with simulations and apply our methods in a study of the effect of low-dose aspirin on pregnancy outcomes.
statistics
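For context, one common way to formalize the incremental interventions mentioned in the abstract above (following the incremental propensity score intervention literature) is to multiply the odds of receiving a binary treatment by a user-chosen factor $\delta$; this is a generic illustration of the idea, not necessarily the exact estimand used in the paper:
\[
q_t(a \mid h_t) \;=\; \frac{a\,\delta\,\pi_t(h_t) + (1-a)\big(1-\pi_t(h_t)\big)}{\delta\,\pi_t(h_t) + 1 - \pi_t(h_t)}, \qquad a \in \{0,1\},
\]
where $\pi_t(h_t) = P(A_t = 1 \mid H_t = h_t)$ is the observational propensity score given past history $h_t$. Under $q_t$ the odds of treatment equal $\delta$ times the observational odds, and the intervention remains well defined even when $\pi_t(h_t)$ equals 0 or 1, which is why treatment positivity is not needed.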
Optimal Transport is a theory that allows us to define geometrical notions of distance between probability distributions and to find correspondences, or relationships, between sets of points. Many machine learning applications are derived from this theory, at the frontier between mathematics and optimization. This thesis proposes to study the complex scenario in which the different data belong to incomparable spaces. In particular, we address the following questions: how can Optimal Transport be defined and applied between graphs, or between structured data? How can it be adapted when the data are varied and not embedded in the same metric space? This thesis proposes a set of Optimal Transport tools for these different cases. An important part is notably devoted to the study of the Gromov-Wasserstein distance, whose properties allow interesting transport problems to be defined on incomparable spaces. More broadly, we analyze the mathematical properties of the various proposed tools, we establish algorithmic solutions to compute them, and we study their applicability in numerous machine learning scenarios, covering, in particular, classification, simplification, and partitioning of structured data, as well as heterogeneous domain adaptation.
statistics
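As an illustration of the Gromov-Wasserstein distance studied in the thesis above, here is a small sketch using the POT (Python Optimal Transport) package on toy point clouds of different dimensions; exact function signatures may differ slightly between POT versions:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)

# Two point clouds living in incomparable spaces (different dimensions).
X = rng.normal(size=(30, 2))      # 30 points in R^2
Y = rng.normal(size=(40, 5))      # 40 points in R^5

# Gromov-Wasserstein only needs the *intra*-space distance matrices,
# which is what makes it suitable for incomparable spaces.
C1 = ot.dist(X, X)
C2 = ot.dist(Y, Y)
p = ot.unif(len(X))               # uniform weights on each cloud
q = ot.unif(len(Y))

T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun="square_loss")
gw_value = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun="square_loss")
print("coupling shape:", T.shape, "GW discrepancy:", gw_value)
```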
In this project, we try to find the correlation between the non-local pressure inside massive neutron stars that resists the gravitational collapse of the core and the ECSK dark energy generated by the spin-torsion coupling between quark fields and space-time at densities much larger than nuclear density. The injection of dark energy into the core of massive neutron stars (MANs), together with the extra resistance that this dark energy, through its anti-gravity, offers against the collapse of MANs, opens the possibility of the existence of neutron stars in the unobserved mass range $[2.16M_{\odot},5M_{\odot}]$. Deriving the ECSK TOV equation gives the local pressure of the ambient medium of MANs. Moreover, the negative pressure from the ECSK dark energy is again obtained from the Lagrangian, from which we are able to investigate the hydrostatic equilibrium of the core and the ambient medium of MANs. If the equilibrium state is satisfied over the unobserved mass gap of MANs within the ECSK framework, this will imply that our model predicts this vast, as yet unobserved, mass range of MANs in astrophysical studies.
physics
This paper provides a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, while considering all layers of the protocol stack (PHY, MAC and network). First, the related work and paper contributions are discussed, followed by the necessary background on data-driven approaches and machine learning, so that non-machine-learning experts can understand all discussed techniques. Then, a comprehensive review is presented of works employing ML-based approaches to optimize wireless communication parameter settings to achieve improved network quality-of-service (QoS) and quality-of-experience (QoE). We first categorize these works into radio analysis, MAC analysis and network prediction approaches, followed by subcategories within each. Finally, open challenges and broader perspectives are discussed.
computer science
The McLachlan "minimum-distance" principle for optimizing approximate solutions of the time-dependent Schrodinger equation is revisited, with a focus on the local-in-time error accompanying the variational solutions. Simple, exact expressions are provided for this error, which are then evaluated in illustrative cases, notably the widely used mean-field approach and the adiabatic quantum molecular dynamics. These findings pave the way for the rigorous development of adaptive schemes that re-size on-the-fly the underlying variational manifold and thus optimize the overall computational cost of a quantum dynamical simulation.
quantum physics
We show that the relaxion, which addresses the hierarchy problem, can account for the observed dark matter (DM) relic density. The setup is similar to the case of axion DM models topped with a dynamical misalignment mechanism. After reheating, when the temperature is well above the electroweak scale, the backreaction potential disappears and the relaxion is displaced from its vacuum. When the "wiggles" reappear, the relaxion coherently oscillates around its minimum as in vanilla axion DM models. We identify the parameter space such that the relaxion is retrapped, leading to the standard cosmology. When the relaxion is lighter than $10^{-7}\,$eV, Hubble friction during radiation domination is sufficiently strong for retrapping, and even minimal models are found to be viable. This also leads to a new constraint on relaxion models, as a certain region of their parameter space could lead to overabundant relaxion DM. Alternatively, an even larger parameter space exists when additional friction is obtained from particle production due to an additional coupling to a dark photon field. The phenomenology of this class of models is quite unique, as it implies that we are surrounded by a time-dependent axion-like field which, due to relaxion-Higgs mixing, implies a time-dependent Higgs vacuum expectation value and hence time variation of all coupling constants of nature.
high energy physics phenomenology
Measurements of the forward-backward asymmetry in neutral-current Drell-Yan di-lepton production have primarily been used for determinations of the weak mixing angle $\theta_W$. We observe that, unlike the case of Run-I of the Large Hadron Collider (LHC Run-I), for the first time at the LHC Run-II the reconstructed forward-backward asymmetry has the capability of placing useful constraints on the determination of the parton distribution functions (PDFs). By examining the statistical and the PDF uncertainties on the reconstructed forward-backward asymmetry, we investigate its potential for disentangling the flavour content of quark and antiquark PDFs. Access to the valence/sea $u$-quark and to the sea up-type antiquark PDFs, in particular, may be gained by the appropriate use of selection cuts in the rapidity of the emerging lepton pair in regions both near the $Z$-boson peak and away from it, in a manner complementary, though more indirect, to the case of the charged-current asymmetry. We study the extension of these results for the planned high-luminosity (HL) LHC.
high energy physics phenomenology
The sequence of Ap\'ery numbers is a moment sequence in the sense of Stieltjes. This is the short version of the proof; an appendix was added in v2.
mathematics
We observe the abrupt end of solar activity cycles at the Sun's equator by combining almost 140 years of observations from ground and space. These "terminator" events appear to be very closely related to the onset of magnetic activity belonging to the next sunspot cycle at mid-latitudes and to the polar-reversal process at high latitudes. Using multi-scale tracers of solar activity, we examine the timing of these events in relation to the excitation of new activity and find that the time taken for the solar plasma to communicate this transition is of the order of one solar rotation, but could be shorter. Utilizing uniquely comprehensive solar observations from the Solar Terrestrial Relations Observatory (STEREO) and the Solar Dynamics Observatory (SDO), we see that this transitional event is strongly longitudinal in nature. Combined, these characteristics imply that magnetic information is communicated through the solar interior rapidly. A range of possibilities exists to explain such behavior: the presence of magnetic reconnection in the deep interior, internal gravity waves on the solar tachocline, or magnetic fields in the Sun's convection zone that are very large, with poloidal field strengths reaching 50k - considerably larger than conventional explorations of solar and stellar dynamos estimate. Regardless of the mechanism responsible, the rapid timescales demonstrated by the Sun's global magnetic field reconfiguration present strong constraints on first-principles numerical simulations of the solar interior and, by extension, of other stars.
astrophysics
Based on factorization in perturbative QCD, jet cross sections in heavy-ion collisions can be expressed as a convolution of the jet cross section in $p+p$ collisions and a jet energy loss distribution. Using this simple expression and the Markov Chain Monte Carlo method, we carry out Bayesian analyses of experimental data on jet spectra to extract energy loss distributions for both single inclusive and $\gamma$-triggered jets in $Pb+Pb$ collisions with different centralities at two colliding energies at the Large Hadron Collider. The average jet energy loss has a dependence on the initial jet energy that is slightly stronger than logarithmic and decreases from central to peripheral collisions. The extracted jet energy loss distributions, which exhibit a scaling behavior in $x=\Delta p_T /\langle \Delta p_T\rangle$, have a large width. These findings are consistent with linear Boltzmann transport model simulations, in which the observed jet quenching is caused on average by only a few out-of-cone scatterings.
high energy physics phenomenology
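Schematically, the factorized form invoked above writes the medium-modified spectrum as a convolution of the $p+p$ spectrum with an energy loss distribution; the notation below is chosen here for illustration and follows the generic structure described in the abstract:
\[
\frac{d\sigma^{AA}}{dp_T} \;=\; \int d\Delta p_T \; W_{AA}(\Delta p_T)\, \frac{d\sigma^{pp}}{dp_T'}\Bigg|_{p_T' = p_T + \Delta p_T},
\qquad
W_{AA}(\Delta p_T) \;=\; \frac{1}{\langle \Delta p_T \rangle}\, w\!\left(\frac{\Delta p_T}{\langle \Delta p_T \rangle}\right),
\]
where the second relation expresses the scaling behavior in $x = \Delta p_T/\langle \Delta p_T \rangle$ mentioned in the abstract, with $w(x)$ the rescaled energy loss distribution.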
Learning invariant representations has been proposed as a key technique for addressing the domain generalization problem. However, the question of identifying the right conditions for invariance remains unanswered. In this work, we propose a causal interpretation of domain generalization that defines domains as interventions under a data-generating process. Based on a general causal model for data from multiple domains, we show that prior methods for learning an invariant representation optimize for an incorrect objective. We highlight an alternative condition: inputs across domains should have the same representation if they are derived from the same base object. Inputs that share the same base object may be available through data augmentation or in some specific contexts, but base object information is not always available. Hence we propose an iterative algorithm called MatchDG that approximates base object similarity by using a contrastive loss formulation adapted for multiple domains. We then match inputs that are similar under the resultant representation to build an invariant classifier. We evaluate our matching-based methods on rotated MNIST, Fashion-MNIST, PACS and Chest X-ray datasets and find that they outperform prior work on out-of-domain accuracy. In particular, top-10 matches from MatchDG have over 50% overlap with ground-truth matches in MNIST and Fashion-MNIST. Code repository can be accessed here: \textit{https://github.com/microsoft/robustdg}
computer science
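The matching idea summarized above can be sketched with a standard cross-domain contrastive loss. Everything below (the encoder interface, temperature, and the assumption that matched pairs are already aligned row-by-row) is an illustrative stand-in, not the MatchDG implementation from the linked repository.

```python
import torch
import torch.nn.functional as F

def match_contrastive_loss(z_a, z_b, temperature=0.1):
    """z_a, z_b: (N, d) representations of inputs matched across two domains,
    where row i of z_a and row i of z_b are assumed to share a base object."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature            # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)         # pull matched pairs together

# usage sketch: loss = match_contrastive_loss(encoder(x_dom1), encoder(x_dom2))
```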
This study presents a nonreciprocal-beam phased-array antenna constituted of phase-gradient patch radiators integrated with transistor-based nonreciprocal phase shifters. Such an antenna exhibits different beams for transmission and reception states. The proposed phased-array antenna provides power amplification for both transmission and reception states, which is of paramount importance in most practical applications. In addition, in contrast to the recently proposed time-modulated antennas, the proposed nonreciprocal-beam phased-array antenna introduces no undesired time harmonics or unwanted frequency conversion, and requires no radio-frequency bias signal. Furthermore, the nonreciprocal phased-array antenna is lightweight and amenable to integrated-circuit fabrication. The transmission and reception beam angles, the beam shapes, and the power amplification level may be easily tuned by changing the direct-current (dc) bias of the transistors and the phase of the passive phase shifters. Such a nonreciprocal-beam phased-array antenna is expected to find military and commercial applications.
physics
We give a non-technical summary of the classification program, very dear to the hearts of both authors, of four-dimensional $\mathcal{N}=2$ superconformal field theories (SCFTs) based on the study of their Coulomb branch geometries. We outline the main ideas behind this program, review the most important results obtained thus far, and discuss the prospects for future results. This contribution will appear in the volume "the Pollica perspective on the (super)-conformal world", but we decided to also make it available separately in the hope that it could be useful to those who are interested in obtaining a quick grasp of this rapidly developing program.
high energy physics theory
The strong constraints on R-parity conserving supersymmetry (SUSY) from the LHC searches motivate us to consider new models in which low-scale SUSY is still allowed. We propose a class of R-parity violating SUSY scenarios with a non-zero $U^c_2 D^c_2 D^c_3$ operator. Three relevant LHC searches are recast to test the status of this scenario in terms of four simplified models, with either a light stop-Bino, stop-Higgsino, sbottom-Bino or sbottom-Higgsino spectrum. Some scenarios that are difficult for the LHC SUSY searches in these simplified models are identified. By extrapolating the current LHC searches to the future 14 TeV LHC with an integrated luminosity of 300 fb$^{-1}$, most of the difficult scenarios can be probed, except the sbottom-Bino simplified model. An improved search, which utilises multiple b-jets as well as their kinematic features, is expected to probe the signature of this case.
high energy physics phenomenology
Separate evaluation of males and females using Dunnett tests is the current standard for assessing dose or treatment effects in in-vivo toxicology. The alternative, a pre-test for sex-by-treatment interaction, is problematic. Here, a joint test is proposed that considers the two sex-specific and the pooled Dunnett-type comparisons. The calculation of either simultaneous confidence intervals or adjusted p-values with the R package multcomp is demonstrated using a real data example.
statistics
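As a rough illustration of the comparisons involved (not the proposed joint test, which adjusts across the sex-specific and pooled families simultaneously via multcomp in R), the sketch below runs many-to-one Dunnett comparisons on simulated data using scipy.stats.dunnett, available in recent SciPy releases; all data and effect sizes are made up.

```python
import numpy as np
from scipy.stats import dunnett  # requires SciPy >= 1.11

rng = np.random.default_rng(1)

def simulate(n=10, shift=0.0):
    """Simulated control, low-dose and high-dose groups (illustrative only)."""
    return (rng.normal(0.0, 1.0, n),
            rng.normal(0.2 + shift, 1.0, n),
            rng.normal(0.8 + shift, 1.0, n))

male = simulate(shift=0.0)
female = simulate(shift=0.3)
pooled = tuple(np.concatenate(groups) for groups in zip(male, female))

# Naive per-family Dunnett comparisons (low vs. control, high vs. control);
# the paper's joint procedure would adjust over all three families at once.
for name, (ctrl, low, high) in [("males", male), ("females", female), ("pooled", pooled)]:
    res = dunnett(low, high, control=ctrl)
    print(name, res.pvalue)
```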
We outline a strategy to compute deeply inelastic scattering structure functions using a hybrid quantum computer. Our approach takes advantage of the representation of the fermion determinant in the QCD path integral as a quantum mechanical path integral over 0+1-dimensional fermionic and bosonic worldlines. The proper time evolution of these worldlines can be determined on a quantum computer. While extremely challenging in general, the problem simplifies in the Regge limit of QCD, where the interaction of the worldlines with gauge fields is strongly localized in proper time and the corresponding quantum circuits can be written down. As a first application, we employ the Color Glass Condensate effective theory to construct the quantum algorithm for a simple dipole model of the $F_2$ structure function. We outline further how this computation scales up in complexity and extends in scope to other real-time correlation functions.
high energy physics theory
In this paper we present a multi-rate control architecture for safety critical systems. We consider a high level planner and a low level controller which operate at different frequencies. This multi-rate behavior is described by a piecewise nonlinear model which evolves on a continuous and a discrete level. First, we present sufficient conditions which guarantee recursive constraint satisfaction for the closed-loop system. Afterwards, we propose a control design methodology which leverages Control Barrier Functions (CBFs) for low level control and Model Predictive Control (MPC) policies for high level planning. The control barrier function is designed using the full nonlinear dynamical model and the MPC is based on a simplified planning model. When the nonlinear system is control affine and the high level planning model is linear, the control actions are computed by solving convex optimization problems at each level of the hierarchy. Finally, we show the effectiveness of the proposed strategy on a simulation example, where the low level control action is updated at a higher frequency than the high level command.
electrical engineering and systems science
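For reference, the low-level safety condition behind the Control Barrier Function layer mentioned above is usually stated as follows (standard textbook form for a control-affine system $\dot{x}=f(x)+g(x)u$ with safe set $\mathcal{C}=\{x:\,h(x)\ge 0\}$, not the paper's specific multi-rate construction): $\sup_{u\in U}\left[L_f h(x)+L_g h(x)\,u\right]\ge-\alpha\big(h(x)\big)$ for an extended class-$\mathcal{K}$ function $\alpha$; enforcing this inequality pointwise, e.g. as a constraint in a quadratic program that filters the high-level MPC command, renders $\mathcal{C}$ forward invariant.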
The Kondo-Heisenberg chain is an interesting model of a strongly correlated system which has a broad superconducting state with pair-density wave (PDW) order. Some of us have recently proposed that this PDW state is a symmetry-protected topological (SPT) state, and the gapped spin sector of the model supports Majorana zero modes. In this work, we reexamine this problem using a combination of numeric and analytic methods. In extensive density matrix renormalization group calculations, we find no evidence of a topological ground state degeneracy or the previously proposed Majorana zero modes in the PDW phase of this model. This result motivated us to reexamine the original arguments for the existence of the Majorana zero modes. A careful analysis of the effective continuum field theory of the model shows that the Hilbert space of the spin sector of the theory does not contain any single Majorana fermion excitations. This analysis shows that the PDW state of the doped 1D Kondo-Heisenberg model is not an SPT with Majorana zero modes.
condensed matter
We prove that for every parity decision tree of depth $d$ on $n$ variables, the sum of absolute values of Fourier coefficients at level $\ell$ is at most $d^{\ell/2} \cdot O(\ell \cdot \log(n))^\ell$. Our result is nearly tight for small values of $\ell$ and extends a previous Fourier bound for standard decision trees by Sherstov, Storozhenko, and Wu (STOC, 2021). As an application of our Fourier bounds, using the results of Bansal and Sinha (STOC, 2021), we show that the $k$-fold Forrelation problem has (randomized) parity decision tree complexity $\tilde{\Omega}\left(n^{1-1/k}\right)$, while having quantum query complexity $\lceil k/2\rceil$. Our proof follows a random-walk approach, analyzing the contribution of a random path in the decision tree to the level-$\ell$ Fourier expression. To carry the argument, we apply a careful cleanup procedure to the parity decision tree, ensuring that the value of the random walk is bounded with high probability. We observe that step sizes for the level-$\ell$ walks can be computed by the intermediate values of level $\le \ell-1$ walks, which calls for an inductive argument. Our approach differs from previous proofs of Tal (FOCS, 2020) and Sherstov, Storozhenko, and Wu (STOC, 2021) that relied on decompositions of the tree. In particular, for the special case of standard decision trees we view our proof as slightly simpler and more intuitive. In addition, we prove a similar bound for noisy decision trees of cost at most $d$ -- a model that was recently introduced by Ben-David and Blais (FOCS, 2020).
computer science
Social networks amplify inequalities due to fundamental mechanisms of social tie formation such as homophily and triadic closure. These forces sharpen social segregation reflected in network fragmentation. Yet, little is known about what structural factors facilitate fragmentation. In this paper we use big data from a widely-used online social network to demonstrate that there is a significant relationship between social network fragmentation and income inequality in cities and towns. We find that the organization of the physical urban space has a stronger relationship with fragmentation than unequal access to education, political segregation, or the presence of ethnic and religious minorities. Fragmentation of social networks is significantly higher in towns in which residential neighborhoods are divided by physical barriers such as rivers and railroads and are relatively distant from the center of town. Towns in which amenities are spatially concentrated are also typically more socially segregated. These relationships suggest how urban planning may be a useful point of intervention to mitigate inequalities in the long run.
physics
Scalar field theories with particular U(1)-symmetric potentials contain non-topological soliton solutions called Q-balls. Promoting the U(1) to a gauge symmetry leads to the more complicated situation of gauged Q-balls. The soliton solutions to the resulting set of nonlinear differential equations have markedly different properties, such as a maximal possible size and charge. Despite these differences, we discover a relation that allows one to extract the properties of gauged Q-balls (such as the radius, charge, and energy) from the more easily obtained properties of global Q-balls. These results provide a new guide to understanding gauged Q-balls as well as providing simple and accurate analytical characterization of the Q-ball properties.
high energy physics theory
If the time evolution of a system can be understood classically, then there must exist an underlying probability distribution for the variables describing the system at all times. It is well known that for systems described by a single time-evolving dichotomic variable $Q$ and for which a given set of temporal correlation functions are specified, a necessary set of conditions for the existence of such a probability are provided by the Leggett-Garg (LG) inequalities. Fine's theorem in this context is the non-trivial result that a suitably augmented set of LG inequalities are both necessary and sufficient conditions for the existence of an underlying probability. We present a proof of Fine's theorem for the case of measurements on a dichotomic variable at an arbitrary number of times, thereby generalizing the familiar proofs for three and four times. We demonstrate how the LG framework and Fine's theorem can be extended to the case in which all possible two-time correlation functions are measured (instead of the partial set of two-time correlators normally studied). We examine the limit of a large number of measurements for both of the above cases.
quantum physics
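For concreteness (standard three-time form, not a new result of the paper above): for a dichotomic variable $Q=\pm 1$ measured at three times, with two-time correlators $C_{ij}=\langle Q_i Q_j\rangle$, the Leggett-Garg inequalities read $1+s_1 C_{12}+s_2 C_{23}+s_1 s_2 C_{13}\ge 0$ for all sign choices $s_1,s_2=\pm 1$; Fine's theorem is then the statement that this set of inequalities is not only necessary but also sufficient for the existence of a joint probability distribution reproducing the given pairwise correlators.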
The wear volume is known to keep increasing during frictional processes, and Archard notably proposed a model to describe the probability of wear particle formation upon asperity collision in a two-body contact configuration. While this model is largely adopted in the investigations of wear, the presence of wear debris trapped between the surfaces changes the system into a three-body contact configuration from the early stages of the process. In such a configuration, a significant amount of wear is produced at the interface between the trapped debris and the sliding bodies. Here, relying on analytical models, we develop a framework that describes crack growth in a three-body configuration at the particle-surface interface. We then show that crack growth is favoured within the sliding surfaces, instead of within the debris particle, and test this result by means of numerical simulations with a phase-field approach to fracture. This leads to an increase in the wear volume and to debris particle accretion, rather than its breakdown. The effects of adhesion, coefficient of friction, and ratio of the applied global tangential and normal forces are also investigated.
condensed matter
Variants of the black hole information paradox are studied in Type IIB string theory setups that realize four-dimensional gravity coupled to a bath. The setups are string theory versions of doubly-holographic Karch/Randall brane worlds, with black holes coupled to non-gravitating and gravitating baths. The 10d versions are based on fully backreacted solutions for configurations of D3, D5 and NS5 branes, and admit dual descriptions as $\mathcal N=4$ SYM on a half space and 3d $T_\rho^\sigma[SU(N)]$ SCFTs. Island contributions to the entanglement entropy of black hole radiation regions are identified through Ryu/Takayanagi surfaces and lead to Page curves. Analogs of the critical angles found in the Karch/Randall models are identified in 10d, as critical parameters in the brane configurations and dual field theories.
high energy physics theory
Several methods for handling sloping fluid-solid interfaces with the elastic parabolic equation are tested. A single-scattering approach that is modified for the fluid-solid case is accurate for some problems but breaks down when the contrast across the interface is sufficiently large and when there is a Scholte wave. An approximate condition for conserving energy breaks down when a Scholte wave propagates along a sloping interface but otherwise performs well for a large class of problems involving gradual slopes, a wide range of sediment parameters, and ice cover. An approach based on treating part of the fluid layer as a solid with low shear speed handles Scholte waves and a wide range of sediment parameters accurately, but this approach needs further development. The variable rotated parabolic equation is not effective for problems involving frequent or continuous changes in slope, but it provides a high level of accuracy for most of the test cases, which have regions of constant slope. Approaches based on a coordinate mapping and on using a film of solid material with low shear speed on the rises of the stair steps that approximate a sloping interface are also tested and found to produce accurate results for some cases.
physics
A tree with $n$ vertices has at most $95^{n/13}$ minimal dominating sets. The growth constant $\lambda = \sqrt[13]{95} \approx 1.4194908$ is best possible. It is obtained in a semi-automatic way as a kind of "dominant eigenvalue" of a bilinear operation on sixtuples that is derived from the dynamic-programming recursion for computing the number of minimal dominating sets of a tree. We also derive an output-sensitive algorithm for listing all minimal dominating sets with linear set-up time and linear delay between successive solutions.
computer science
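A quick numerical check of the growth constant quoted above (plain arithmetic; nothing here reproduces the paper's proof):

```python
# Growth constant for the number of minimal dominating sets in trees.
lam = 95 ** (1 / 13)
print(f"lambda = {lam:.7f}")          # ~1.4194908, matching the quoted value
for n in (13, 26, 52):
    print(n, round(95 ** (n / 13)))   # the bound 95**(n/13) at a few tree sizes
```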
Randomized trials of infectious disease interventions, such as vaccines, often focus on groups of connected or potentially interacting individuals. When the pathogen of interest is transmissible between study subjects, interference may occur: individual infection outcomes may depend on treatments received by others. Epidemiologists have defined the primary causal effect of interest -- called the "susceptibility effect" -- as a contrast in infection risk under treatment versus no treatment, while holding exposure to infectiousness constant. A related quantity -- the "direct effect" -- is defined as an unconditional contrast between the infection risk under treatment versus no treatment. The purpose of this paper is to show that under a widely recommended randomization design, the direct effect may fail to recover the sign of the true susceptibility effect of the intervention in a randomized trial when outcomes are contagious. The analytical approach uses structural features of infectious disease transmission to define the susceptibility effect. A new probabilistic coupling argument reveals stochastic dominance relations between potential infection outcomes under different treatment allocations. The results suggest that estimating the direct effect under randomization may provide misleading inferences about the effect of an intervention -- such as a vaccine -- when outcomes are contagious.
statistics
Based on the extended free energy of soft-matter quasicrystals and the variational principle for thermodynamic stability, this study reports results on the stability of the first kind of soft-matter quasicrystals. The stability conditions depend only upon the material constants, are quite simple and intuitive, and the material constants can be measured by experiments. The results are significant for studying the thermodynamics of this class of matter.
condensed matter
The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature. The resulting approach, called CNAPs, comprises a classifier whose parameters are modulated by an adaptation network that takes the current task's dataset as input. We demonstrate that CNAPs achieves state-of-the-art results on the challenging Meta-Dataset benchmark, indicating high-quality transfer learning. We show that the approach is robust, avoiding both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPs is computationally efficient at test-time as it does not involve gradient-based adaptation. Finally, we show that trained models are immediately deployable to continual learning and active learning, where they can outperform existing approaches that do not leverage transfer learning.
statistics
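The "classifier modulated by an adaptation network" idea can be sketched as below; the pooling scheme, FiLM-style modulation, and layer sizes are illustrative placeholders and do not reproduce the actual CNAPs architecture.

```python
import torch
import torch.nn as nn

class AdaptationNetwork(nn.Module):
    """Maps the current task's support-set features to modulation parameters."""
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * feat_dim))

    def forward(self, support_feats):               # (N_support, feat_dim)
        task_embedding = support_feats.mean(dim=0)  # permutation-invariant pooling
        gamma, beta = self.net(task_embedding).chunk(2)
        return gamma, beta                          # FiLM-style scale and shift

def modulate(features, gamma, beta):
    """Task-conditioned features; no gradient-based adaptation at test time."""
    return features * (1 + gamma) + beta
```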
A computational study of capsule ejection from a narrow channel into a reservoir is undertaken for a combination of varying deformable capsule sizes and channel dimensions. A mass-spring membrane model is coupled to an Immersed Boundary Lattice Boltzmann solver. The aim of the present work is the description of the capsules' motion and deformation and of the response of the fluid due to the complex particle dynamics. The interactions between the capsules affect the local velocity field significantly and are responsible for the dynamics observed. Capsule membrane deformability is also seen to affect inter-capsule interaction, and we observe that the train of three particles locally homogenizes the velocity field and the leading capsule travels faster than the other two trailing capsules. On the contrary, variations in the size of the reservoir do not seem to be relevant, while the ratio of capsule diameter to channel diameter plays a major role, as does the ratio of capsule diameter to inter-capsule spacing. This flow set-up has not been covered in the literature, and consequently we focus on describing capsule motion, membrane deformation and fluid dynamics, as a preliminary investigation in this field.
condensed matter
Multiscale thermodynamics is a theory of relations among levels of investigation of complex systems. It includes the classical equilibrium thermodynamics as a special case but it is applicable to both static and time evolving processes in externally and internally driven macroscopic systems that are far from equilibrium and are investigated on microscopic, mesoscopic, and macroscopic levels. In this paper we formulate the multiscale thermodynamics, explain its origin, and illustrate it in mesoscopic dynamics that combines levels.
condensed matter
Co-prime arrays and samplers with multiple periods constitute a framework in which the co-prime structure is repeated multiple times. In this paper, the effects of perturbations in sampling locations on the difference set of the prototype co-prime structure with multiple periods are analysed. Based on this analysis, a method to estimate the autocorrelation that maximizes the amount of information extracted from the data is proposed. The advantage is limited only to samplers, and is not observed in antenna arrays. The expression for the number of additional contributors available for estimation is derived. The contributors increase with the increase in the number of periods. In addition, the expressions for the computational complexity are derived in the presence of jitter. This provides an upper bound on the number of multiplications and additions for hardware implementation.
electrical engineering and systems science
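For orientation, the sketch below computes the difference set of a single-period prototype co-prime sampler (the standard two-subarray construction); the paper's multi-period and perturbation analysis is not reproduced, and M, N are arbitrary example values.

```python
M, N = 3, 5  # coprime integers (example values)

# Standard co-prime geometry: N samples at multiples of M, 2M samples at multiples of N.
positions = sorted(set(range(0, N * M, M)) | set(range(0, 2 * M * N, N)))
diffs = sorted({a - b for a in positions for b in positions})

print("sampling positions:", positions)
print("number of unique lags in the difference set:", len(diffs))
```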
The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally-measured error rates. Our results demonstrate that randomized compiling can be utilized to leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
quantum physics
We carry out assessments of the life cycle of Loop Current vortices, so-called rings, in the Gulf of Mexico by applying three objective (i.e., observer-independent) coherent Lagrangian vortex detection methods on velocities derived from satellite altimetry measurements of sea-surface height (SSH). The methods reveal material vortices with boundaries that withstand stretching or diffusion, or whose fluid elements rotate evenly. This involved a technology advance that enables framing vortex genesis and apocalypse robustly and with precision. We find that the stretching- and diffusion-withstanding assessments produce consistent results, which show large discrepancies with Eulerian assessments that identify vortices with regions instantaneously filled with streamlines of the SSH field. The even-rotation assessment, which is vorticity-based, is found to be quite unstable, suggesting life expectancies much shorter than those produced by all other assessments.
physics
We study minimizers of the Dirichlet phi-energy integral with generalized Orlicz growth. We prove the Kellogg property, namely that the set of irregular points has zero capacity, and give characterizations of semiregular boundary points. The results are new even for the special cases of double phase and Orlicz growth.
mathematics
In this paper, we evaluate a semi-autonomous brain-computer interface (BCI) for manipulation tasks. In such a system, the user controls a robotic arm through motor imagery commands. In traditional process-control BCI systems, the user has to provide those commands continuously in order to manipulate the effector of the robot step-by-step, which results in a tiresome process for simple tasks such as picking up and replacing an item from a surface. Here, we take a semi-autonomous approach based on a conformal geometric algebra model that solves the inverse kinematics of the robot on the fly, so that the user only has to decide on the start of the movement and the final position of the effector (goal-selection approach). Under these conditions, we implemented pick-and-place tasks with a disk as an item and two target areas placed on the table at arbitrary positions. An artificial vision (AV) algorithm was used to obtain the positions of the items expressed in the robot frame through images captured with a webcam. Then, the AV algorithm is integrated into the inverse kinematics model to perform the manipulation tasks. As proof-of-concept, different users were trained to control the pick-and-place tasks through the process-control and semi-autonomous goal-selection approaches, so that the performance of both schemes could be compared. Our results show the superiority in performance of the semi-autonomous approach, as well as evidence of less mental fatigue with it.
electrical engineering and systems science
Modern computing systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in computing that cause performance, scalability and energy bottlenecks: (1) data access is a key bottleneck as many important applications are increasingly data-intensive, and memory bandwidth and energy do not scale well, (2) energy consumption is a key limiter in almost all computing platforms, especially server and mobile systems, (3) data movement, especially off-chip to on-chip, is very expensive in terms of bandwidth, energy and latency, much more so than computation. These trends are especially severely felt in the data-intensive server and energy-constrained mobile systems of today. At the same time, conventional memory technology is facing many technology scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error correcting codes inside the latest DRAM chips, the proliferation of different main memory standards and chips specialized for different purposes (e.g., graphics, low-power, high bandwidth, low latency), and the necessity of designing new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are evidence of this trend. This chapter discusses recent research that aims to practically enable computation close to data, an approach we call processing-in-memory (PIM). PIM places computation mechanisms in or near where the data is stored (i.e., inside the memory chips, in the logic layer of 3D-stacked memory, or in the memory controllers), so that data movement between the computation units and memory is reduced or eliminated.
computer science
Quantum computers promise tremendous impact across applications -- and have shown great strides in hardware engineering -- but remain notoriously error prone. Careful design of low-level controls has been shown to compensate for the processes which induce hardware errors, leveraging techniques from optimal and robust control. However, these techniques rely heavily on the availability of highly accurate and detailed physical models which generally only achieve sufficient representative fidelity for the most simple operations and generic noise modes. In this work, we use deep reinforcement learning to design a universal set of error-robust quantum logic gates on a superconducting quantum computer, without requiring knowledge of a specific Hamiltonian model of the system, its controls, or its underlying error processes. We experimentally demonstrate that a fully autonomous deep reinforcement learning agent can design single qubit gates up to $3\times$ faster than default DRAG operations without additional leakage error, and exhibiting robustness against calibration drifts over weeks. We then show that $ZX(-\pi/2)$ operations implemented using the cross-resonance interaction can outperform hardware default gates by over $2\times$ and equivalently exhibit superior calibration-free performance up to 25 days post optimization using various metrics. We benchmark the performance of deep reinforcement learning derived gates against other black box optimization techniques, showing that deep reinforcement learning can achieve comparable or marginally superior performance, even with limited hardware access.
quantum physics
Thermodynamic irreversibility is well characterized by the entropy production arising from non-equilibrium quantum processes. We show that the entropy production of a quantum system undergoing open-system dynamics can be formally split into a term that only depends on population unbalances, and one that is underpinned by quantum coherences. The population unbalances are found to contribute to both an entropy flux and an entropy production rate. The decoherence, on the other hand, contributes only to the entropy production rate. This allows us to identify a genuine quantum contribution to the entropy production in non-equilibrium quantum processes. We make use of such a division to address the open-system dynamics of a spin $J$ particle, which we describe in phase space through a spin-coherent representation.
quantum physics
We show a new family of neural networks based on the Schr\"{o}dinger equation (SE-NET). In this analogy, the trainable weights of the neural networks correspond to the physical quantities of the Schr\"{o}dinger equation. These physical quantities can be trained using the complex-valued adjoint method. Since the propagation of the SE-NET can be described by the evolution of physical systems, its outputs can be computed by using a physical solver. As a demonstration, we implemented the SE-NET using the finite difference method. The trained network is transferable to actual optical systems. Based on this concept, we show a numerical demonstration of end-to-end machine learning with an optical frontend. Our results extend the application field of machine learning to hybrid physical-digital optimizations.
physics
We generate representative structural models of amorphous carbon (a-C) from constant-volume quenching from the liquid with subsequent relaxation of internal stresses in molecular dynamics simulations using empirical and machine-learning interatomic potentials. By varying volume and quench rate we generate structures with a range of density and amorphous morphologies. We find that all a-C samples show a universal relationship between hybridization, bulk modulus and density despite having distinct cohesive energies. Differences in cohesive energy are traced back to slight changes in the distribution of bond-angles that will likely affect thermal stability of these structures.
condensed matter
We study the bottom flavor abundance in the early Universe near the temperature $T_\mathrm{H}\simeq150\,\mathrm{MeV}$, the condition for hadronization of the deconfined quark-gluon plasma (QGP). We show that the bottom flavor abundance remains out of equilibrium for microseconds. In our study we use the fact that in both the QGP and the hadronic gas (HG) phase, $b$ and $\bar b$ quarks near $T_\mathrm{H}$ are bound in B-mesons and antimesons subject to $CP$-violating weak decays. A coincident non-equilibrium abundance of bottom flavor can lead to matter genesis at the required strength: a) The specific thermal yield per entropy is $n_b^{th}/\sigma=10^{-10}\sim10^{-13}$. b) On the relevant time scales, millions of cycles of B-meson decays and $b\bar b$-pair recreation processes occur.
high energy physics phenomenology
Free Probability Theory (FPT) provides rich knowledge for handling mathematical difficulties caused by random matrices that appear in research related to deep neural networks (DNNs), such as the dynamical isometry, Fisher information matrix, and training dynamics. FPT suits this line of research because the DNN's parameter-Jacobian and input-Jacobian are polynomials of layerwise Jacobians. However, the critical assumption of asymptotic freeness of the layerwise Jacobians has not been proven completely so far. The asymptotic freeness assumption plays a fundamental role when propagating spectral distributions through the layers. Haar-distributed orthogonal matrices are essential for achieving dynamical isometry. In this work, we prove the asymptotic freeness of the layerwise Jacobians of multilayer perceptrons in this case.
statistics
This theoretical study considers the chiral spin texture induced in a 2D electron gas (2DEG) by magnetic skyrmions. We calculate the electron gas spin density as a linear response to the exchange interaction between the 2DEG and the magnetization field of a magnetic skyrmion. Two physically distinct regimes occur. When the size of the skyrmion is larger than the inverse Fermi wavevector $k_F^{-1}$, the spin density response follows the magnetization profile of the skyrmion. In the opposite case of a small skyrmion, the emerging spin structure of the 2DEG has a characteristic size of $k_F^{-1}$ and the response becomes non-local; it can be viewed as chiral Friedel oscillations. Notably, the emerging spin structure of the oscillations appears to be more complex than that of the skyrmion itself.
condensed matter
Triple differential cross sections (TDCSs) for electron vortex projectile ionization of helium into the azimuthal plane are calculated using the distorted wave Born approximation. In this collision geometry, the TDCSs at low and intermediate energies exhibit unique qualitative features that can be used to identify single and double scattering mechanisms. In general, our results predict that the ionization dynamics for vortex projectiles are similar to those of their non-vortex counterparts. However, some key differences are observed. For non-vortex projectiles, a double scattering mechanism is required to emit electrons into the azimuthal plane, and this mechanism becomes more important with increasing energy. Our results demonstrate that for vortex projectiles, emission into the azimuthal plane does not require a double scattering mechanism, although this process still significantly influences the shape of the TDCS at higher energies. At low projectile energies, non-vortex ionization proceeds primarily through single binary collisions. The same is generally true for vortex projectiles, although our results indicate that double scattering is also important, even at low energy. Vortex projectiles have an inherent uncertainty in their incident momentum, which causes a broadening of the binary peak at all energies and results in a splitting of the binary peak at higher energies. The results presented here lead to several predictions that can be experimentally tested.
physics
The 1D hydrostatic base state of electroconvection driven by unipolar charge injection between two parallel electrodes is investigated using a finite difference method. A boundary layer near the anode surface is derived analytically. The computational grid is required to resolve this boundary layer to maintain high order accuracy.
physics
In the present work we discuss how to address the solution of electrostatic problems, in the professional cycle, using Green's functions and Poisson's equation. Using this procedure, it was possible to establish its relation to the method of images as an interdisciplinary approach in didactic physics textbooks. To this end, we consider the structural role that mathematics, especially the Green's function, has in the physical reasoning underlying the method of images.
physics
We present the observation and analysis of newly discovered coherent structures in the L1688 region of Ophiuchus and the B18 region of Taurus. Using data from the Green Bank Ammonia Survey (GAS), we identify regions of high density and near-constant, almost-thermal, velocity dispersion. Eighteen coherent structures are revealed, twelve in L1688 and six in B18, each of which shows a sharp "transition to coherence" in velocity dispersion around its periphery. The identification of these structures provides a chance to study the coherent structures in molecular clouds statistically. The identified coherent structures have a typical radius of 0.04 pc and a typical mass of 0.4 Msun, generally smaller than previously known coherent cores identified by Goodman et al. (1998), Caselli et al. (2002), and Pineda et al. (2010). We call these structures "droplets." We find that unlike previously known coherent cores, these structures are not virially bound by self-gravity and are instead predominantly confined by ambient pressure. The droplets have density profiles shallower than a critical Bonnor-Ebert sphere, and they have a velocity (VLSR) distribution consistent with the dense gas motions traced by NH3 emission. These results point to a potential formation mechanism through pressure compression and turbulent processes in the dense gas. We present a comparison with a magnetohydrodynamic simulation of a star-forming region, and we speculate on the relationship of droplets with larger, gravitationally bound coherent cores, as well as on the role that droplets and other coherent structures play in the star formation process.
astrophysics
We propose a novel type of balanced clustering algorithm to approximate attention. Attention complexity is reduced from $O(N^2)$ to $O(N \log N)$, where $N$ is the sequence length. Our algorithm, SMYRF, uses Locality Sensitive Hashing (LSH) in a novel way by defining new Asymmetric transformations and an adaptive scheme that produces balanced clusters. The biggest advantage of SMYRF is that it can be used as a drop-in replacement for dense attention layers without any retraining. On the contrary, prior fast attention methods impose constraints (e.g. queries and keys share the same vector representations) and require re-training from scratch. We apply our method to pre-trained state-of-the-art Natural Language Processing and Computer Vision models and we report significant memory and speed benefits. Notably, SMYRF-BERT outperforms (slightly) BERT on GLUE, while using $50\%$ less memory. We also show that SMYRF can be used interchangeably with dense attention before and after training. Finally, we use SMYRF to train GANs with attention in high resolutions. Using a single TPU, we were able to scale attention to 128x128=16k and 256x256=65k tokens on BigGAN on CelebA-HQ.
computer science
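A very rough sketch of the clustered-attention idea described above: queries and keys are bucketed by a single random projection, sorted, and split into equal-size groups, and attention is computed only within each bucket. This is a crude stand-in for SMYRF's asymmetric LSH scheme, not the actual algorithm.

```python
import torch

def clustered_attention(q, k, v, n_clusters):
    """q, k, v: (N, d) tensors; attention is restricted to per-bucket blocks."""
    n, d = q.shape
    proj = torch.randn(d)                          # shared random projection
    q_order = torch.argsort(q @ proj)              # bucket queries by projection value
    k_order = torch.argsort(k @ proj)              # bucket keys the same way
    out = torch.zeros_like(v)
    for q_idx, k_idx in zip(q_order.chunk(n_clusters), k_order.chunk(n_clusters)):
        attn = torch.softmax(q[q_idx] @ k[k_idx].t() / d ** 0.5, dim=-1)
        out[q_idx] = attn @ v[k_idx]               # each query attends within its bucket
    return out
```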
We contrast the gas kinematics and dark matter contents of $z=2$ star-forming galaxies (SFGs) from state-of-the-art cosmological simulations within the $\Lambda$CDM framework to observations. To this end, we create realistic mock observations of massive SFGs ($M_*>4\times10^{10} M_{\odot}$, SFR $>50~M_{\odot}$ yr$^{-1}$) from the TNG50 simulation of the IllustrisTNG suite, resembling near-infrared, adaptive-optics assisted integral-field observations from the ground. Using observational line fitting and modeling techniques, we analyse in detail the kinematics of seven TNG50 galaxies from five different projections per galaxy, and compare them to observations of twelve massive SFGs by Genzel et al. (2020). The simulated galaxies show clear signs of disc rotation but mostly exhibit more asymmetric rotation curves, partly due to large intrinsic radial and vertical velocity components. At identical inclination angle, their one-dimensional velocity profiles can vary along different lines of sight by up to $\Delta v=200$ km s$^{-1}$. From dynamical modelling we infer rotation speeds and velocity dispersions that are broadly consistent with observational results. We find low central dark matter fractions compatible with observations ($f_{\rm DM}^v(<R_e)=v_{\rm DM}^2(R_e)/v_{\rm circ}^2(R_e)\sim0.32\pm0.10$), however for disc effective radii $R_e$ that are mostly too small: at fixed $R_e$ the TNG50 dark matter fractions are too high by a factor of $\sim2$. We speculate that the differences in gas kinematics and dark matter content compared to the observations may be due to physical processes that are not resolved in sufficient detail with the numerical resolution available in current cosmological simulations.
astrophysics
Chiral skyrmions are stable particle-like solutions of the Landau-Lifshitz equation for ferromagnets with the Dzyaloshinskii-Moriya (DM) interaction, characterized by a topological number. We study the profile of an axially symmetric skyrmion and give exact formulas for the solution of the corresponding far-field and near-field equations, in the asymptotic limit of small DM parameter (alternatively large anisotropy). The matching of these two fields leads to a formula for the skyrmion radius as a function of the DM parameter. The derived solutions show the different length scales which are present in the skyrmion profiles. The picture is thus created of a chiral skyrmion that is born out of a Belavin-Polyakov solution with an infinitesimally small radius, as the DM parameter is increased from zero. The skyrmion retains the Belavin-Polyakov profile over and well-beyond the core before it assumes an exponential decay; the profile of an axially-symmetric Belavin-Polyakov solution of unit degree plays the role of the universal core profile of chiral skyrmions.
condensed matter
Risk is a 6-player game with significant randomness and a large game-tree complexity, which poses a challenge to creating an agent that plays the game effectively. Previous AIs focus on creating high-level handcrafted features to determine agent decision making. In this project, I create D.A.D., a Risk agent using temporal-difference reinforcement learning to train a Deep Neural Network, including a Graph Convolutional Network, to evaluate player positions. This is used in a game tree to select optimal moves. This allows minimal handcrafting of knowledge into the AI, ensuring input features are as low-level as possible so the network can extract useful and sophisticated features itself, even when starting from a random initialisation. I also tackle the issue of non-determinism in Risk by introducing a new method of interpreting attack moves necessary for the search. The result is an AI which wins 35% of the time versus 5 of the best inbuilt AIs in Lux Delux, a Risk variant.
computer science
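A minimal sketch of the temporal-difference value update behind an agent of this kind; the network, input features, reward convention, and hyperparameters are placeholders, not the project's actual architecture or training setup.

```python
import torch
import torch.nn as nn

# Placeholder evaluator: flat board-state features -> win-probability estimate.
value_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)

def td0_update(state, next_state, reward, done, discount=1.0):
    """One TD(0) step: move V(state) toward reward + discount * V(next_state)."""
    with torch.no_grad():
        target = torch.tensor([reward], dtype=torch.float32) if done \
            else discount * value_net(next_state)
    loss = nn.functional.mse_loss(value_net(state), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```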
The presence of topological edge modes at the interface of two perturbed honeycomb photonic crystals with $C_6$ symmetry is often attributed to the different signs of Berry curvature at the K and K$'$ valleys. In contrast to the electronic counterpart, the Chern number defined in photonic valley Hall effect is not a quantized quantity but can be tuned to finite values including zero simply by changing geometrical perturbations. Here, we argue that the edge modes in photonic valley Hall effect can exist even when Berry curvature vanishes. We numerically demonstrate the presence of the zero-Berry-curvature edge modes in triangular lattice photonic crystal slab structures in which $C_3$ symmetry is maintained but inversion symmetry is broken. We investigate the evolution of the Berry curvature from the honeycomb-lattice photonic crystal slab to the triangular-lattice photonic crystal slab and show that the triangular-lattice photonic crystals still support edge modes in a very wide photonic bandgap. Additionally, we find that the edge modes with zero Berry curvature can propagate with extremely low bending loss.
physics
We calculate the collinear anomalous dimensions in massless four-loop QCD and $\mathcal{N} = 4$ supersymmetric Yang-Mills theory from the infrared poles of vertex form factors. We give very precise numerical approximations and a conjecture for the complete analytic results in both models we consider.
high energy physics phenomenology
We characterize the extreme heartbeat star system MACHO 80.7443.1718 in the LMC using TESS photometry and spectroscopic observations from the Magellan Inamori Kyocera Echelle (MIKE) and SOAR Goodman spectrographs. MACHO 80.7443.1718 was first identified as a heartbeat star system in the All-Sky Automated Survey for SuperNovae (ASAS-SN) with $P_{\rm orb}=32.836\pm0.008\,{\rm d}$. MACHO 80.7443.1718 is a young (${\sim}6$~Myr), massive binary, composed of a B0 Iae supergiant with $M_1 \simeq 35 M_\odot$ and an O9.5V secondary with $M_2 \simeq 16 M_\odot$ on an eccentric ($e=0.51\pm0.03$) orbit. In addition to having the largest variability amplitude amongst all known heartbeat stars, MACHO 80.7443.1718 is also one of the most massive heartbeat stars yet discovered. The B[e] supergiant has Balmer emission lines and permitted/forbidden metallic emission lines associated with a circumstellar disk. The disk rapidly dissipates at periastron, which could indicate mass transfer to the secondary, but re-emerges immediately following periastron passage. MACHO 80.7443.1718 also shows tidally excited oscillations at the $N=25$ and $N=41$ orbital harmonics and has a rotational period of 4.4 d.
astrophysics
Flowing granular materials segregate due to differences in particle size (driven by percolation) and density (driven by buoyancy). Modelling the segregation of mixtures of large/heavy particles and small/light particles is challenging due to the opposing effects of the two segregation mechanisms. Using discrete element method (DEM) simulations of combined size and density segregation we show that the segregation velocity is well described by a model that depends linearly on the local shear rate and quadratically on the species concentration. Concentration profiles predicted by incorporating this segregation velocity model into a continuum advection-diffusion-segregation transport model match DEM simulation results well for a wide range of particle size and density ratios. Most surprisingly, the DEM simulations and the segregation velocity model both show that the segregation direction for a range of size and density ratios depends on the local species concentration. This leads to a methodology to determine the combination of particle size ratio, density ratio, and particle concentration for which a bidisperse mixture will not segregate.
condensed matter
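For context, the continuum framework referred to above is usually written as an advection-diffusion-segregation equation of the form $\partial c_i/\partial t+\nabla\cdot(\boldsymbol{u}\,c_i)+\partial(w_{i}\,c_i)/\partial z=\nabla\cdot(D\,\nabla c_i)$, where $c_i$ is the concentration of species $i$, $w_i$ its segregation velocity normal to the flow (modelled as linear in the local shear rate and quadratic in concentration, as stated above), and $D$ the diffusion coefficient; this is the generic form from the segregation-modelling literature, not a quotation of the paper's specific closure.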
We analyze the known results for the eigenvalue of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation in the perturbative regime using the analytic continuation of harmonic sums from even positive arguments to the complex plane. The resulting meromorphic functions have poles at negative integer values of the argument. The typical classification of harmonic sums is determined by two major parameters: $a)$ the \textit{weight} - a sum of inverse powers of the summation indices; $b)$ the \textit{depth} - a number of nested summations. We introduce a third parameter: the \textit{alternation} - the number of nested sign-alternating summations in a given harmonic sum. We claim that the maximal alternation of the nested summation in the functions building the BFKL eigenvalue is preserved from loop to loop in the perturbative expansion. The BFKL equation is formulated for an arbitrary color configuration of the propagating states in the $t$-channel. Based on known results, one can state that the color-adjoint BFKL eigenvalue can be written using only harmonic sums with positive indices, maximal alternation zero, and at most depth one, whereas the singlet BFKL eigenvalue is constructed of harmonic sums with maximal sign alternation equal to one. We also note that for maximal alternation equal to unity the harmonic sums can be expressed through alternation-zero harmonic sums with half-shifted arguments.
high energy physics theory
We present a measurement-device-independent quantum key distribution (MDI-QKD) using single photons in a linear superposition of three orthogonal time-bin states, for generating the key. The orthogonal states correspond to three distinct paths in the delay line interferometers used by two (trusted) sources. The key information is decoded based on the measurement outcomes obtained by an untrusted third party Charles, who uses a beamsplitter to measure the phase difference between pulses traveling through different paths of the two delay lines. The proposed scheme combines the best of both differential-phase-shift (DPS) QKD and MDI-QKD. It is more robust against phase fluctuations, and also ensures protection against detector side-channel attacks. We prove unconditional security by demonstrating an equivalent protocol involving shared entanglement between the two trusted parties. We show that the secure key rate for our protocol compares well to existing protocols in the asymptotic regime. For the decoy-state variant of our protocol, we evaluate the secure key rate by using a phase-post-selection technique. Finally, we estimate the bit error rate and the phase error rate, in the finite key regime.
quantum physics
Chuang and Rouquier describe an action by perverse equivalences on the set of bases of a triangulated category of Calabi-Yau dimension $-1$. We develop an analogue of their theory for Calabi-Yau categories of dimension $w<0$ and show it is equivalent to the mutation theory of $w$-simple-minded systems. Given a non-positively graded, finite-dimensional symmetric algebra $A$, we show that the differential graded stable category of $A$ has negative Calabi-Yau dimension. When $A$ is a Brauer tree algebra, we construct a combinatorial model of the dg-stable category and show that perverse equivalences act transitively on the set of $|w|$-bases.
mathematics
We present a new acquisition method that enables high-resolution, fine-detail full reconstruction of the three-dimensional movement and structure of individual human sperm cells swimming freely. We achieve both retrieval of the three-dimensional refractive-index profile of the sperm head, revealing its fine internal organelles and time-varying orientation, and the detailed four-dimensional localization of the thin, highly-dynamic flagellum of the sperm cell. Live human sperm cells were acquired during free swim using a high-speed off-axis holographic system that does not require any moving elements or cell staining. The reconstruction is based solely on the natural movement of the sperm cell and a novel set of algorithms, enabling the detailed four-dimensional recovery. Using this refractive-index imaging approach, we believe we have detected an area in the cell that is attributed to the centriole. This method has great potential for both biological assays and clinical use of intact sperm cells.
physics