text | label
---|---
Understanding physical rules underlying collective motions requires perturbation of controllable parameters in self-propelled particles. However, controlling parameters in animals is generally not easy, which makes collective behaviours of animals elusive. Here, we report an experimental system in which a conventional model animal, \textit {Caenorhabditis elegans}, collectively forms dynamical networks of bundle-shaped aggregates. We investigate the dependence of our experimental system on various extrinsic parameters (material of substrate, ambient humidity and density of worms). Taking advantage of well-established \textit {C.~elegans} genetics, we also control the intrinsic parameters (genetically determined motility) by mutations and by forced neural activation via optogenetics. Furthermore, we develop a minimal agent-based model that reproduces the dynamical network formation and its dependence on the parameters, suggesting that the key factors are alignment of worms after collision and smooth turning. Our findings imply that the concepts of active matter physics may help us to understand biological functions of animal groups.
|
physics
|
In this article, we present an extension of the Tensorflow Playground, called Tensorflow Meter (TFMeter for short). TFMeter is an interactive neural-network architecting tool that allows the visual creation of different neural-network architectures. Beyond its ancestor, the Playground, our tool shows information-theoretic measurements while constructing, training, and testing the network. Each change to the architecture is therefore reflected in at least one of the measurements, providing better engineering intuition of what different architectures are able to learn. The measurements are drawn from various places in the literature. In this demo, we describe our web application, which is available online at http://tfmeter.icsi.berkeley.edu/, and argue that, in the same way that the original Playground is meant to build an intuition about neural networks, our extension educates users on the available measurements, which we hope will ultimately improve experimental design and reproducibility in the field.
|
computer science
|
Mechanical deformation of amorphous solids can be described as consisting of an elastic part in which the stress increases linearly with strain, up to a yield point at which the solid either fractures or starts deforming plastically. It is well established, however, that the apparent linearity of stress with strain is actually a proxy for a much more complex behavior, with a microscopic plasticity that is reflected in diverging nonlinear elastic coefficients. Very generally, the complex structure of the energy landscape is expected to induce a singular response to small perturbations. In the athermal quasistatic regime, this response manifests itself in the form of a scale free plastic activity. The distribution of the corresponding avalanches should reflect, according to theoretical mean field calculations (Franz and Spigler, Phys. Rev. E., 2017, 95, 022139), the geometry of phase space in the vicinity of a typical local minimum. In this work, we characterize this distribution for simple models of glass forming systems, and we find that its scaling is compatible with the mean field predictions for systems above the jamming transition. These systems exhibit marginal stability, and scaling relations that hold in the stationary state are examined and confirmed in the elastic regime. By studying the respective influence of system size and age, we suggest that marginal stability is systematic in the thermodynamic limit.
|
condensed matter
|
A novel two-mode non-degenerate squeezed light is generated based on a four-wave mixing (4WM) process driven by two pump fields crossing at a small angle. By exchanging the roles of the pump beams and the probe and conjugate beams, we have demonstrated the frequency-degenerate two-mode squeezed light with separated spatial patterns. Different from a 4WM process driven by one pump field, the refractive index of the corresponding probe field $n_{p}$ can be converted to a value that is greater than $1$ or less than $1$ by an angle adjustment. In the new region with $n_{p}<1$, the bandwidth of the gain is relatively large due to the slow change in the refractive index with the two-photon detuning. As the bandwidth is important for the practical application of a quantum memory, the wide-bandwidth intensity-squeezed light fields provide new prospects for quantum memories.
|
quantum physics
|
This paper introduces a new dataset called "ToyADMOS" designed for anomaly detection in machine operating sounds (ADMOS). To the best of our knowledge, no large-scale datasets are available for ADMOS, although large-scale datasets have contributed to recent advancements in acoustic signal processing. This is because anomalous sound data are difficult to collect. To build a large-scale dataset for ADMOS, we collected anomalous operating sounds of miniature machines (toys) by deliberately damaging them. The released dataset consists of three sub-datasets for machine-condition inspection, fault diagnosis of machines with geometrically fixed tasks, and fault diagnosis of machines with moving tasks. Each sub-dataset includes over 180 hours of normal machine-operating sounds and over 4,000 samples of anomalous sounds collected with four microphones at a 48-kHz sampling rate. The dataset is freely available for download at https://github.com/YumaKoizumi/ToyADMOS-dataset.
|
electrical engineering and systems science
|
This is an idiosyncratic account of the main results presented at the 31st Rencontres de Blois, which took place from June 2nd to June 7th, 2019 in the Castle of Blois, France.
|
high energy physics phenomenology
|
We consider the problem of designing minimax estimators for estimating the parameters of a probability distribution. Unlike classical approaches such as the MLE and minimum distance estimators, we consider an algorithmic approach for constructing such estimators. We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game. By leveraging recent results in online learning with non-convex losses, we provide a general algorithm for finding a mixed-strategy Nash equilibrium of general non-convex non-concave zero-sum games. Our algorithm requires access to two subroutines: (a) one which outputs a Bayes estimator corresponding to a given prior probability distribution, and (b) one which computes the worst-case risk of any given estimator. Given access to these two subroutines, we show that our algorithm outputs both a minimax estimator and a least favorable prior. To demonstrate the power of this approach, we use it to construct provably minimax estimators for classical problems such as estimation in the finite Gaussian sequence model, and linear regression.
|
statistics
|
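The abstract above describes an algorithm built from two subroutines: a Bayes estimator for a given prior and a worst-case-risk evaluation for a given estimator. The sketch below is a fictitious-play caricature of that loop for a bounded Gaussian location model under squared error loss; the parameter grid, the Monte Carlo risk estimate, and the simple best-response update are illustrative assumptions, not the paper's online-learning algorithm.

```python
# Illustrative fictitious-play sketch (not the paper's online-learning algorithm)
# for a minimax estimator of a Gaussian mean restricted to [-2, 2], built from the
# two subroutines named in the abstract: a Bayes estimator for a given prior and
# a worst-case-risk evaluation for a given estimator.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 41)        # discretised parameter space Theta = [-2, 2]
prior_counts = np.ones_like(grid)        # start from a uniform prior

def bayes_estimator(prior):
    """Subroutine (a): posterior mean under `prior` for X ~ N(theta, 1), squared loss."""
    def estimate(x):
        w = prior * np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2)
        return (w @ grid) / w.sum(axis=1)
    return estimate

def risk_profile(estimate, n_mc=2000):
    """Subroutine (b): Monte Carlo risk of `estimate` at every theta on the grid."""
    x = grid[:, None] + rng.normal(size=(grid.size, n_mc))
    err = estimate(x.ravel()).reshape(grid.size, n_mc) - grid[:, None]
    return np.mean(err ** 2, axis=1)

for _ in range(150):
    est = bayes_estimator(prior_counts / prior_counts.sum())
    risks = risk_profile(est)
    prior_counts[np.argmax(risks)] += 1  # nature best-responds; the prior accumulates mass

print("approximate minimax (worst-case) risk:", round(float(risks.max()), 3))
print("least-favourable prior concentrates near:", grid[prior_counts > prior_counts.mean()])
```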
FIRE is a program performing reduction of Feynman integrals to master integrals. The C++ version of FIRE was presented in 2014. There have been multiple changes and upgrades since then including the possibility to use multiple computers for one reduction task and to perform reduction with modular arithmetic. The goal of this paper is to present the current version of FIRE.
|
high energy physics phenomenology
|
The couplings between neutrinos and an exotic fermion can be probed in both neutrino scattering experiments and dark matter direct detection experiments. We present a detailed analysis of the general neutrino interactions with an exotic fermion and electrons at neutrino-electron scattering experiments. We obtain the constraints on the coupling coefficients of the scalar, pseudoscalar, vector, axial-vector, tensor and electromagnetic dipole interactions from the CHARM-II, TEXONO and Borexino experiments. For the flavor-universal interactions, we find that the Borexino experiment sets the strongest bounds in the low mass region for the electromagnetic dipole interactions, and the CHARM-II experiment dominates the bounds for the other scenarios. If the interactions are flavor dependent, the bounds from the CHARM-II or TEXONO experiment can be avoided, and there are correlations between the flavored coupling coefficients for the Borexino experiment. We also discuss the detection of sub-MeV DM absorbed by bound electron targets and illustrate that the vector coefficients preferred by the XENON1T data are allowed by the neutrino experiments.
|
high energy physics phenomenology
|
We present here various techniques to work with clean and disordered quantum Ising chains, for the benefit of students and non-experts. Starting from the Jordan-Wigner transformation, which maps spin-1/2 systems into fermionic ones, we review some of the basic approaches to deal with the superconducting correlations that naturally emerge in this context. In particular, we analyse the form of the ground state and excitations of the model, relating them to the symmetry-breaking physics, and illustrate aspects connected to calculating dynamical quantities, thermal averages, correlation functions and entanglement entropy. A few problems provide simple applications of the techniques.
|
quantum physics
|
The thermal Sunyaev-Zeldovich (SZ) effect presents a relatively new tool for characterizing galaxy cluster merger shocks, traditionally studied through X-ray observations. Widely regarded as the "textbook example" of a cluster merger bow shock, the western shock front in the Bullet Cluster (1E0657-56) represents the ideal test case for such an SZ study. We aim to reconstruct a parametric model for the shock SZ signal by directly and jointly fitting deep, high-resolution interferometric data from the Atacama Large Millimeter/submillimeter Array (ALMA) and Atacama Compact Array (ACA) in Fourier space. The ALMA+ACA data are primarily sensitive to the electron pressure difference across the shock front. To estimate the shock Mach number $M$, this difference can be combined with the value for the upstream electron pressure derived from an independent Chandra X-ray analysis. In the case of instantaneous electron-ion temperature equilibration, we find $M=2.08^{+0.12}_{-0.12}$, in $\approx 2.4\sigma$ tension with the independent constraint from Chandra, $M_X=2.74\pm0.25$. The assumption of purely adiabatic electron temperature change across the shock leads to $M=2.53^{+0.33}_{-0.25}$, in better agreement with the X-ray estimate $M_X=2.57\pm0.23$ derived for the same heating scenario. We have demonstrated that interferometric observations of the SZ effect provide constraints on the properties of the shock in the Bullet Cluster that are highly complementary to X-ray observations. The combination of X-ray and SZ data yields a powerful probe of the shock properties, capable of measuring $M$ and addressing the question of electron-ion equilibration in cluster shocks. Our analysis is however limited by systematics related to the overall cluster geometry and the complexity of the post-shock gas distribution. To overcome these limitations, a joint analysis of SZ and X-ray data is needed.
|
astrophysics
|
We apply a tension metric $Q_\textrm{UDM}$, the update difference in mean parameters, to understand the source of the difference in the measured Hubble constant $H_0$ inferred with cosmic microwave background lensing measurements from the Planck satellite ($H_0=67.9^{+1.1}_{-1.3}\, \mathrm{km/s/Mpc}$) and from the South Pole Telescope ($H_0=72.0^{+2.1}_{-2.5}\, \mathrm{km/s/Mpc}$) when both are combined with baryon acoustic oscillation (BAO) measurements with priors on the baryon density (BBN). $Q_\textrm{UDM}$ isolates the relevant parameter directions for tension or concordance where the two data sets are both informative, and aids in the identification of subsets of data that source the observed tension. With $Q_\textrm{UDM}$, we uncover that the difference in $H_0$ is driven by the tension between Planck lensing and BAO+BBN, at a probability-to-exceed of 6.6%. Most of this mild tension comes from the galaxy BAO measurements parallel to the line of sight. The redshift dependence of the parallel BAOs pulls both the matter density $\Omega_m$ and $H_0$ high in $\Lambda$CDM, but these parameter anomalies are usually hidden when the BAO measurements are combined with other cosmological data sets with much stronger $\Omega_m$ constraints.
|
astrophysics
|
Random forest is a popular prediction approach for handling high-dimensional covariates. However, it is often infeasible to interpret the resulting high-dimensional and non-parametric model. Aiming at an interpretable predictive model, we develop a forward variable selection method using the continuous ranked probability score (CRPS) as the loss function. Our stepwise procedure leads to the smallest set of variables that optimizes the CRPS risk, by performing at each step a hypothesis test on a significant decrease in CRPS risk. We provide mathematical motivation for our method by proving that, in the population sense, the method attains the optimal set. Additionally, we show that the test is consistent provided that the random forest estimator of a quantile function is consistent. In a simulation study, we compare the performance of our method with an existing variable selection method, for different sample sizes and different correlation strengths of the covariates. Our method is observed to have a much lower false positive rate. We also demonstrate an application of our method to statistical post-processing of daily maximum temperature forecasts in the Netherlands. Our method selects about 10% of the covariates while retaining the same predictive power.
|
statistics
|
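A minimal sketch of the forward-selection loop described in the abstract above is given below, using the per-tree predictions of a scikit-learn random forest as an ensemble forecast and a sample-based CRPS estimate. The synthetic data, the crude improvement threshold standing in for the paper's hypothesis test, and all variable names are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the authors' code): greedy forward variable selection
# driven by a sample-based CRPS estimate of a random-forest ensemble forecast.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 500, 8
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 2] + 0.5 * rng.normal(size=n)        # only x0 and x2 matter

def crps_ensemble(y_true, ens):
    """Sample CRPS: mean over observations of E|X - y| - 0.5 E|X - X'|."""
    term1 = np.mean(np.abs(ens - y_true[:, None]), axis=1)
    term2 = 0.5 * np.mean(np.abs(ens[:, :, None] - ens[:, None, :]), axis=(1, 2))
    return np.mean(term1 - term2)

def forest_crps(cols, X_tr, y_tr, X_te, y_te):
    """CRPS of a forest trained on `cols`, using per-tree predictions as the ensemble."""
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr[:, cols], y_tr)
    ens = np.column_stack([t.predict(X_te[:, cols]) for t in rf.estimators_])
    return crps_ensemble(y_te, ens)

half = n // 2
selected, remaining, best = [], list(range(p)), np.inf
while remaining:
    scores = {j: forest_crps(selected + [j], X[:half], y[:half], X[half:], y[half:])
              for j in remaining}
    j_best = min(scores, key=scores.get)
    if best - scores[j_best] < 1e-3 * abs(best):             # crude stand-in for the CRPS-risk test
        break
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected variables:", selected, "held-out CRPS:", round(best, 3))
```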
We present nonperturbative fragmentation functions (FFs) for bottom-flavored ($B$) hadrons both at next-to-leading (NLO) and, for the first time, at next-to-next-to-leading order (NNLO) in the $\overline{\mathrm{MS}}$ factorization scheme with five massless quark flavors. They are determined by fitting all available experimental data of inclusive single $B$-hadron production in $e^+e^-$ annihilation, from the ALEPH, DELPHI, and OPAL Collaborations at CERN LEP1 and the SLD Collaboration at SLAC SLC. The uncertainties in these FFs as well as in the corresponding observables are estimated using the Hessian approach. We perform comparisons with available NLO sets of $B$-hadron FFs. We apply our new FFs to generate theoretical predictions for the energy distribution of $B$ hadrons produced through the decay of unpolarized or polarized top quarks, to be measured at the CERN LHC.
|
high energy physics phenomenology
|
In transportation applications such as real-time route guidance, ramp metering, congestion pricing and special events traffic management, accurate short-term traffic flow prediction is needed. For this purpose, this paper proposes several novel \textit{online} Grey system models (GM): GM(1,1$|cos(\omega t)$), GM(1,1$|sin(\omega t)$, $cos(\omega t)$), and GM(1,1$|e^{-at}$,$sin(\omega t)$,$cos(\omega t)$). To evaluate the performance of the proposed models, they are compared against a set of benchmark models: the GM(1,1) model, Grey Verhulst models with and without Fourier error correction, a linear time series model, and a nonlinear time series model. The evaluation is performed using loop detector and probe vehicle data from California, Virginia, and Oregon. Among the benchmark models, the Grey Verhulst model with Fourier error correction outperformed the GM(1,1) model and the linear and nonlinear time series models. In turn, the three proposed models, GM(1,1$|cos(\omega t)$), GM(1,1$|sin(\omega t)$,$cos(\omega t)$), and GM(1,1$|e^{-at}$,$sin(\omega t)$,$cos(\omega t)$), outperformed the Grey Verhulst model in prediction by at least $65\%$, $16\%$, and $11\%$ in terms of Root Mean Squared Error, and by $82\%$, $58\%$, and $42\%$ in terms of Mean Absolute Percentage Error, respectively. The proposed Grey system models are observed to be more adaptive to location (e.g., they perform well for all roadway types) and traffic parameters (e.g., speed, travel time, occupancy, and volume), and they do not require many data points for training (4 observations are found to be sufficient).
|
statistics
|
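For reference, the classical GM(1,1) model that serves as the baseline in the abstract above can be sketched in a few lines; the proposed GM(1,1$|\cdot$) variants add trigonometric and exponential terms to the grey differential equation and are not reproduced here. The toy traffic series and step counts below are illustrative only.

```python
# Illustrative sketch of the classical GM(1,1) grey model used as the baseline in
# the abstract above; the proposed GM(1,1|...) variants extend the grey differential
# equation with sin/cos and exponential terms and are not shown here.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to a short positive series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development and control coefficients
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # response of dx1/dt + a*x1 = b
    x1_hat = np.concatenate([[x0[0]], x1_hat])
    x0_hat = np.diff(x1_hat)                      # inverse AGO recovers the original scale
    return np.concatenate([[x0[0]], x0_hat])

# Toy traffic-flow-like series (vehicles per interval); 4 observations suffice to fit.
flow = [820, 860, 910, 965]
print(gm11_forecast(flow, steps=2))               # fitted values plus a two-step forecast
```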
We present a simple method for calculation of diffraction effects in a beam passing an aperture. It follows the well-known approach of Miyamoto and Wolf, but is simpler and does not lead to singularities. It is thus shown that in the near-field region, i.e., at short propagation distances, most results depend on values of the beam's field at the aperture's boundaries, making it possible to derive diffraction effects in the form of a simple contour integral over the boundaries. For a uniform, i.e., plane-wave incident beam, the contour integral predicts the diffraction effects exactly. Comparisons of the analytical method and full numerical solutions demonstrate highly accurate agreement between them.
|
physics
|
We systematically study the semileptonic decays of ${\bf B_c} \to {\bf B_n}\ell^+ \nu_{\ell}$ in the light-front constituent quark model, where ${\bf B_c}$ represent the anti-triplet charmed baryons $(\Xi_c^0,\Xi_c^+,\Lambda_c^+)$ and ${\bf B_n}$ correspond to the octet ones. We determine the spin-flavor structures of the constituents in the baryons with the Fermi statistics and calculate the decay branching ratios (${\cal B}$s) and averaged asymmetry parameters ($\alpha$s) with the helicity formalism. In particular, we find that ${\cal B}( \Lambda_c^+ \to \Lambda e^+ \nu_{e}, ne^+ \nu_{e})=(3.55\pm1.04, 0.36\pm0.15)\%$, ${\cal B}( \Xi_c^+ \to \Xi^0 e^+ \nu_{e},\Sigma^0 e^+ \nu_{e},\Lambda e^+ \nu_{e})=(11.3\pm3.35, 0.33\pm0.09, 0.12\pm0.04)\%$ and ${\cal B}( \Xi_c^0 \to \Xi^- e^+ \nu_{e},\Sigma^- e^+ \nu_{e})=(3.49\pm0.95,0.22\pm0.06)\%$. Our results agree with the current experimental data. Our prediction for ${\cal B}( \Lambda_c^+ \to n e^+ \nu_{e})$ is consistent with those in the literature and can be measured by charm facilities, such as BESIII and BELLE. Some of our results for the $\Xi_c^{+(0)}$ semileptonic channels can be tested by the experiments at BELLE as well as the ongoing ones at LHCb and BELLEII.
|
high energy physics phenomenology
|
Exploration of treatment effect heterogeneity (TEH) is an increasingly important aspect of modern statistical analysis for stratified medicine in randomised controlled trials (RCTs), as we start to gather more information on trial participants and wish to maximise the opportunities for learning from data. However, the analyst should refrain from including a large number of variables in the treatment-interaction discovery stage, because doing so can significantly dilute the power to detect any true outcome-predictive interactions between treatments and covariates. Current guidance is limited and mainly relies on the use of unsupervised learning methods, such as hierarchical clustering or principal components analysis, to reduce the dimension of the variable space prior to interaction tests. In this article we show that outcome-driven dimension reduction, i.e. supervised variable selection, can maintain power without inflating the type-I error or false-positive rate. We provide theoretical and applied results to support our approach. The applied results are obtained by illustrating our framework on the dataset from an RCT in severe malaria. We also pay particular attention to the internal risk model approach for TEH discovery, which we show is a particular case of our method, and we point to improvements over the current implementation.
|
statistics
|
We introduce Virasoro operators for any Landau-Ginzburg pair (W, G) where W is a non-degenerate quasi-homogeneous polynomial and G is a certain group of diagonal symmetries. We propose a conjecture that the total ancestor potential of the FJRW theory of the pair (W, G) is annihilated by these Virasoro operators. We prove the conjecture in various cases, including: (1) invertible polynomials with the maximal group, (2) two-variable homogeneous Fermat polynomials with the minimal group, (3) certain Calabi-Yau polynomials with groups. We also discuss the connections among Virasoro constraints, mirror symmetry of Landau-Ginzburg models, and Landau-Ginzburg/Calabi-Yau correspondence.
|
mathematics
|
We present an elementary way to transform an expander graph into a simplicial complex where all high order random walks have a constant spectral gap, i.e., they converge rapidly to the stationary distribution. As an upshot, we obtain new constructions, as well as a natural probabilistic model to sample constant degree high-dimensional expanders. In particular, we show that given an expander graph $G$, adding self loops to $G$ and taking the tensor product of the modified graph with a high-dimensional expander produces a new high-dimensional expander. Our proof of rapid mixing of high order random walks is based on the decomposable Markov chains framework introduced by Jerrum et al.
|
computer science
|
With high spatial resolution, polarimetric imaging of a supermassive black hole, like M87$^\star$ or Sgr A$^\star$, by the Event Horizon Telescope can be used to probe the existence of ultralight bosonic particles, such as axions. Such particles can accumulate around a rotating black hole through the superradiance mechanism, forming an axion cloud. When linearly polarized photons are emitted from an accretion disk near the horizon, their position angles oscillate due to the birefringent effect when traveling through the axion background. In particular, the observations of the supermassive black holes M87$^\star$ (Sgr A$^\star$) can probe the dimensionless axion-photon coupling $c = 2 \pi g_{a \gamma} f_a$ for axions with mass around $O(10^{-20})$~eV ($O(10^{-17})$~eV) and decay constant $f_a < O(10^{16})$ GeV, which is complementary to other axion measurements.
|
high energy physics phenomenology
|
Maximum likelihood estimators are proposed for the parameters and the densities in a semiparametric density ratio model in which the nonparametric baseline density is approximated by the Bernstein polynomial model. The EM algorithm is used to obtain the maximum approximate Bernstein likelihood estimates. A simulation study shows that the performance of the proposed method is much better than that of existing methods. The proposed method is illustrated by real data examples. Some asymptotic results are also presented and proved.
|
statistics
|
Models of Co-Decaying dark matter lead to an early matter dominated epoch -- prior to BBN -- which results in an enhancement of the growth of dark matter substructure. If these primordial structures collapse further they can form primordial black holes providing an additional dark matter candidate. We derive the mass fraction in these black holes (which is not monochromatic) and consider observational constraints on how much of the dark matter could be comprised in these relics. We find that in many cases they can be a significant fraction of the dark matter. Interestingly, the masses of these black holes can be near the solar-mass range providing a new mechanism for producing black holes like those recently detected by LIGO.
|
astrophysics
|
The behavior of an electron spin interacting with a linearly polarized laser field is analyzed. In contrast to previous considerations of the problem, the initial state of the electron represents a localized wave packet, and a spatial envelope is introduced for the laser pulse, which allows one to take into account the finite size of both objects. Special attention is paid to ultrashort pulses possessing a high degree of unipolarity. Within a classical treatment (both nonrelativistic and relativistic), proportionality between the change of the electron spin projections and the electric field area of the pulse is clearly demonstrated. We also perform calculations of the electron spin dynamics according to the Dirac equation. Evolving the electron wave function in time, we compute the mean values of the spin operator in various forms. It is shown that the classical relativistic predictions are accurately reproduced when using the Foldy-Wouthuysen operator. The same results are obtained when using the Lorentz transformation and the nonrelativistic (Pauli) spin operator in the particle's rest frame.
|
quantum physics
|
For the last two decades we have been observing a huge increase in discoveries of long-period comets (LPCs), especially those with large perihelion distances. We collected data for a full sample of LPCs discovered over the 1801-2017 period, including their osculating orbits, discovery moments (to study the discovery distances), and original semimajor axes (to study the number ratio of large-perihelion to small-perihelion LPCs as a function of 1/a-original, and to construct a precise distribution of 1/a-original). To minimize the influence of parabolic comets on these distributions, we determined definitive orbits (which include eccentricities) for more than 20 LPCs previously classified as parabolic comets. We show that the percentage of large-perihelion comets is significantly higher within Oort spike comets than in a group of LPCs with a<10000 au, and this ratio of large-perihelion to small-perihelion comets for both groups has grown systematically since 1970. The different shape of the Oort spike for small-perihelion and large-perihelion LPCs is also discussed. A spectacular decrease of the ratio of large-perihelion to small-perihelion LPCs with the shortening of the semimajor axis within the range of 5000-100 au is also noticed. Analysing discovery circumstances, we found that Oort spike comets are discovered statistically at larger geocentric and heliocentric distances than the remaining LPCs. The difference in the percentage of large-perihelion comets between the two groups of LPCs is probably a direct consequence of the well-known fading of comets caused by the ageing of their surfaces during consecutive perihelion passages, and/or reflects different actual q-distributions.
|
astrophysics
|
The early detection of tipping points, which describe a rapid departure from a stable state, is an important theoretical and practical challenge. Tipping points are most commonly associated with the disappearance of steady-state or periodic solutions at fold bifurcations. We discuss here multi-frequency tipping (M-tipping), which is tipping due to the disappearance of an attracting torus. M-tipping is a generic phenomenon in systems with at least two intrinsic or external frequencies that can interact and, hence, is relevant to a wide variety of systems of interest. We show that the more complicated sequence of bifurcations involved in M-tipping provides a possible consistent explanation for as yet unexplained behavior observed near tipping in climate models for the Atlantic meridional overturning circulation. More generally, this work provides a path towards identifying possible early-warning signs of tipping in multiple-frequency systems.
|
mathematics
|
We consider an extension of the Standard Model by right-handed neutrinos and we argue that, under plausible assumptions, a neutrino mass of ${\cal O}(0.1)\,{\rm eV}$ is naturally generated by the breaking of the lepton number at the Planck scale, possibly by gravitational effects, without the necessity of introducing new mass scales in the model. Some implications of this framework are also briefly discussed.
|
high energy physics phenomenology
|
"What are the consequences ... that Fermi particles cannot get into the same state ... " R. P. Feynman wrote of the Pauli exclusion principle, "In fact, almost all the peculiarities of the material world hinge on this wonderful fact." In 1972 Borland and Dennis showed that there exist powerful constraints beyond the Pauli exclusion principle on the orbital occupations of Fermi particles, providing important restrictions on quantum correlation and entanglement. Here we use computations on quantum computers to experimentally verify the existence of these additional constraints. Quantum many-fermion states are randomly prepared on the quantum computer and tested for constraint violations. Measurements show no violation and confirm the generalized Pauli exclusion principle with an error of one part in one quintillion.
|
quantum physics
|
The chiral symmetry of QCD requires energy-dependent pionic strong interactions at low energies. This constraint, however, is not fulfilled by the usual Breit--Wigner parameterization of pionic resonances, leading to masses larger than the real ones. We derive relations between nonleptonic three-body decays of the $B$-meson into a $D$-meson and a pair of light pseudoscalar mesons based on SU(3) chiral symmetry. Employing effective field theory methods, we demonstrate that, taking into account the final-state interactions, the experimental data of the decays $B^-\to D^+\pi^-\pi^-$, $B_s^0\to \bar{D}^0K^-\pi^+$, $B^0\to\bar{D}^0\pi^-\pi^+$, $B^-\to D^+\pi^-K^-$ and $B^0\to\bar{D}^0\pi^-K^+$ can all be described by the nonperturbative $\pi/\eta/K$-$D/D_s$ scattering amplitudes previously obtained from a combination of chiral effective field theory and lattice QCD calculations. The results provide strong support for the scenario that the broad scalar charmed meson $D^\ast_0(2400)$ should be replaced by two states, the lower one of which has a mass of around 2.1 GeV, much smaller than that extracted from experimental data using a Breit--Wigner parameterization.
|
high energy physics phenomenology
|
A fundamental difficulty in the study of automorphic representations, representations of $p$-adic groups and the Langlands program is to handle the non-generic case. In this work we develop a complete local and global theory of tensor product $L$-functions of $G\times GL_k$, where $G$ is a symplectic group, split special orthogonal group or the split general spin group, that includes both generic and non-generic representations of $G$. Our theory is based on a recent collaboration with David Ginzburg, where we presented a new integral representation that applies to all cuspidal automorphic representations. Here we develop the local theory over any field (of characteristic $0$), define the local $\gamma$-factors and provide a complete description of their properties. We then define $L$- and $\epsilon$-factors, and obtain the properties of the completed $L$-function. By combining our results with the Converse Theorem, we obtain a full proof of the global functorial lifting of cuspidal automorphic representations of $G$ to the natural general linear group.
|
mathematics
|
The recently proposed MUonE experiment at CERN aims at providing a novel determination of the leading-order hadronic contribution to the muon anomalous magnetic moment through the study of elastic muon-electron scattering at relatively small momentum transfer. The anticipated accuracy of the order of 10 ppm demands high-precision predictions, including all the relevant radiative corrections. The theoretical formulation of the fixed-order NNLO photonic radiative corrections is described, and the impact of the numerical results obtained with the corresponding Monte Carlo code is discussed for typical event selections of the MUonE experiment. In particular, the gauge-invariant subsets of corrections due to electron radiation as well as to muon radiation are treated exactly. The two-loop contribution due to diagrams where at least two virtual photons connect the electron and muon lines is approximated taking inspiration from the classical Yennie-Frautschi-Suura approach. The calculation and its Monte Carlo implementation pave the way towards the realization of a simulation code incorporating the full set of NNLO corrections matched to multiple photon radiation, which will ultimately be needed for data analysis.
|
high energy physics phenomenology
|
Mellin-Barnes (MB) integrals are well-known objects appearing in many branches of mathematics and physics, ranging from hypergeometric functions theory to quantum field theory, solid state physics, asymptotic theory, etc. Although MB integrals have been studied for more than one century, to our knowledge there is no systematic computational technique of the multiple series representations of $N$-fold MB integrals (for any given positive integer $N$). We present here a breakthrough in this context, which is based on simple geometrical considerations related to the gamma functions present in the numerator of the MB integrand. The method rests on a study of $N$-dimensional conic hulls constructed out of normal vectors of the singular (hyper)planes associated with each of the gamma functions. Specific sets of these conic hulls are shown to be in one-to-one correspondence with series representations of the MB integral. This provides a way to express the series representations as linear combinations of multiple series, the derivation of which does not depend on the convergence properties of the latter. Our method can be applied to $N$-fold MB integrals with straight as well as nonstraight contours, in the resonant and nonresonant cases and, depending on the form of the MB integrand, it gives rise to convergent series representations or diverging asymptotic ones. When convergent series are obtained the method also allows, in general, the determination of a single ``master series'' for each linear combination, which considerably simplifies convergence studies and numerical checks.
|
high energy physics theory
|
Bayesian optimisation is a well-known sample-efficient method for the optimisation of expensive black-box functions. However, when dealing with big search spaces the algorithm goes through several low-function-value regions before reaching the optimum of the function. Since the function evaluations are expensive in terms of both money and time, it may be desirable to alleviate this problem. One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation. In its standard form, Bayesian optimisation assumes that every point in the search space is equally likely to be the optimum. Therefore any prior knowledge that can provide information about the optimum of the function would improve the optimisation performance. In this paper, we represent the prior knowledge about the function optimum through a prior distribution. The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability regions. We incorporate this prior directly into the function model (a Gaussian process) by redefining the kernel matrix, which allows the method to work with any acquisition function, i.e. it is an acquisition-agnostic approach. We show the superiority of our method over the standard Bayesian optimisation method through the optimisation of several benchmark functions and the hyperparameter tuning of two algorithms: Support Vector Machine (SVM) and Random Forest.
|
computer science
|
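The key construction in the abstract above, folding a prior over the optimum's location into the Gaussian-process covariance by warping the inputs, can be sketched as follows. The Beta prior, the CDF warp, and the RBF kernel with a fixed length scale are assumptions made for illustration and are not necessarily the paper's exact choices.

```python
# Illustrative sketch: warping a 1-D search space through the CDF of a prior over
# the optimum's location, then building the GP kernel matrix on the warped inputs.
# The Beta prior, RBF kernel and length scale are assumptions for illustration.
import numpy as np
from scipy.stats import beta

prior = beta(a=5, b=2)                 # belief: optimum likely near the right end of [0, 1]

def warp(x):
    # CDF warp: stretches the search space where the prior density is high,
    # compresses it where the density is low.
    return prior.cdf(x)

def rbf_kernel(xa, xb, length_scale=0.1):
    d = xa[:, None] - xb[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def warped_kernel(xa, xb, length_scale=0.1):
    # Equivalent to redefining the kernel matrix as K(x, x') = k(warp(x), warp(x')),
    # so any acquisition function can be used unchanged on top of the warped GP.
    return rbf_kernel(warp(xa), warp(xb), length_scale)

x = np.linspace(0, 1, 6)
print(np.round(warped_kernel(x, x), 3))   # correlations decay faster in high-prior regions
```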
One of the central principles of quantum mechanics is that if there are multiple paths that lead to the same event, and there is no way to distinguish between them, interference occurs. It is usually assumed that distinguishing information in the preparation, evolution or measurement of a system is sufficient to destroy interference. For example, determining which slit a particle takes in Young's double slit experiment or using distinguishable photons in the two-photon Hong-Ou-Mandel effect allows discrimination of the paths leading to detection events, so in both cases interference vanishes. Remarkably, for more than three independent quantum particles, distinguishability of the prepared states is not a sufficient condition for multiparticle interference to disappear. Here we experimentally demonstrate this for four photons prepared in pairwise distinguishable states, thus fundamentally challenging intuition about multiparticle interference.
|
quantum physics
|
Evidence from experimental studies shows that oscillations due to electro-mechanical coupling can be generated spontaneously in smooth muscle cells. Such cellular dynamics are known as \textit{pacemaker dynamics}. In this article we address pacemaker dynamics associated with the interaction of $\text{Ca}^{2+}$ and $\text{K}^+$ fluxes in the cell membrane of a smooth muscle cell. First we reduce a pacemaker model to a two-dimensional system equivalent to the reduced Morris-Lecar model and then perform a detailed numerical bifurcation analysis of the reduced model. Existing bifurcation analyses of the Morris-Lecar model concentrate on external applied current whereas we focus on parameters that model the response of the cell to changes in transmural pressure. We reveal a transition between Type I and Type II excitabilities with no external current required. We also compute a two-parameter bifurcation diagram and show how the transition is explained by the bifurcation structure.
|
mathematics
|
Positivity bounds coming from consistency of UV scattering amplitudes are in general insufficient to prove the weak gravity conjecture for theories beyond Einstein-Maxwell. Additional ingredients about the UV may be necessary to exclude those regions of parameter space which are na\"ively in conflict with the predictions of the weak gravity conjecture. In this paper we explore the consequences of imposing additional symmetries inherited from the UV theory on higher-derivative operators for Einstein-Maxwell-dilaton-axion theory. Using black hole thermodynamics, for a preserved SL($2,\mathbb{R}$) symmetry we find that the weak gravity conjecture then does follow from positivity bounds. For a preserved O($d,d;\mathbb{R}$) symmetry we find a simple condition on the two Wilson coefficients which ensures the positivity of corrections to the charge-to-mass ratio and that follows from the null energy condition alone. We find that imposing supersymmetry on top of either of these symmetries gives corrections which vanish identically, as expected for BPS states.
|
high energy physics theory
|
Among the possible superalgebras that contain the AdS$_3$ isometries, two interesting possibilities are the exceptional $F(4)$ and $G(3)$. Their R-symmetry is respectively SO(7) and $G_2$, and the amount of supersymmetry ${\cal N}=8$ and ${\cal N}=7$. We find that there exist two (locally) unique solutions in type IIA supergravity that realize these superalgebras, and we provide their analytic expressions. In both cases, the internal space is obtained by a round six-sphere fibred over an interval, with an O8-plane at one end. The R-symmetry is the symmetry group of the sphere; in the $G(3)$ case, it is broken to $G_2$ by fluxes. We also find several numerical ${\cal N}=1$ solutions with $G_2$ flavor symmetry, with various localized sources, including O2-planes and O8-planes.
|
high energy physics theory
|
We explore the 1-loop renormalization group flow of two models coming from a generalization of the Connes-Lott version of Noncommutative Geometry in Lorentzian signature: the Noncommutative Standard Model and its B-L extension. Both make predictions on coupling constants at high energy, but only the latter is found to be compatible with the top quark and Higgs boson masses at the electroweak scale. We took into account corrections introduced by threshold effects and the relative positions of the Dirac and Majorana neutrino mass matrices and found them to be important. Some effects of 2-loop corrections are briefly discussed. The model is consistent with experiments only for a very small part of its parameter space and is thus predictive. The masses of the $Z'$ and B-L breaking scalar are found to be of the order $10^{14}$ GeV.
|
high energy physics phenomenology
|
This paper proposes a novel maneuvering technique for the complex-Laplacian-based formation control. We show how to modify the original weights that build the Laplacian such that a designed steady-state motion of the desired shape emerges from the local interactions among the agents. These collective motions can be exploited to solve problems such as the shaped consensus (the rendezvous with a particular shape), the enclosing of a target, or translations with controlled speed and heading to assist mobile robots in area coverage, escorting, and traveling missions, respectively. The designed steady-state collective motions correspond to rotations around the centroid, translations, and scalings of a reference shape. The proposed modification of the weights relocates one of the Laplacian's zero eigenvalues while preserving its associated eigenvector that constructs the desired shape. For example, such relocation on the imaginary or real axis induces rotational and scaling motions, respectively. We will show how to satisfy a sufficient condition to guarantee the global convergence to the desired shape and motions. Finally, we provide simulations and comparisons with other maneuvering techniques.
|
electrical engineering and systems science
|
In this paper we find that confining a second-order topological superconductor with a harmonic potential leads to a proliferation of Majorana corner modes. As a consequence, this results in the formation of Majorana corner flat bands which have a fundamentally different origin from that of the conventional mechanism. This is due to the fact that they arise solely from the one-dimensional gapped boundary states of the hybrid system that become gapless without the bulk gap closing under the increase of the trapping potential magnitude. The Majorana corner states are found to be robust against the strength of the harmonic trap and the transition from Majorana corner states to Majorana flat bands is merely a smooth crossover. As a harmonic trap can potentially be realized in heterostructures, this proposal paves a way to observe these Majorana corner flat bands in an experimental context.
|
condensed matter
|
A wide binary system (LB-1), consisting of a 70 $M_{\odot}$ black hole (BH) and an 8 $M_{\odot}$ main-sequence star, has, if confirmed, been observed to reside in the Milky Way (MW). We show that the long-term evolution of an 8 $M_{\odot}$ star around a BH with mass between 5-70 $M_{\odot}$ makes such systems visible as ultra-luminous X-ray (ULX) sources in the sky. Given the expected ULX phase lifetime ($\approx0.1$ Myr) and their lack of detection in the MW, we conclude that the frequency of an 8-20 $M_{\odot}$ star being in a binary around a stellar-mass BH should satisfy $f<2\times10^{-3}$. This is in tension with the detection frequency of LB-1-like systems around 8-20 $M_{\odot}$ stars claimed by Liu et al. (2019) ($f\approx3\times10^{-2}$). Moreover, the 8 $M_{\odot}$ star is likely to end as a neutron star (NS) born with a very small kick from an electron capture supernova (ECSN), leaving behind a wide NS-BH binary. So far less than 1\% of all the detectable pulsars in the MW have been mapped, and there has been no detection of any pulsar in a binary system around a BH, which sets an upper bound of about 100 possible pulsar-BH systems in the MW. We show that if the NS is born from an ECSN, this implies a frequency upper limit of $f=5\times10^{-4}$ for stars with masses $\approx 8-20~M_{\odot}$ in the MW to have a BH companion. The rate discrepancy will further increase as more pulsars are mapped in the MW, yet these searches would not be able to rule out the Liu et al. detection frequency if NSs are instead born in core-collapse SNe with the commonly inferred high kick velocities.
|
astrophysics
|
We study wireless collaborative machine learning (ML), where mobile edge devices, each with its own dataset, carry out distributed stochastic gradient descent (DSGD) over-the-air with the help of a wireless access point acting as the parameter server (PS). At each iteration of the DSGD algorithm, wireless devices compute gradient estimates with their local datasets, and send them to the PS over a wireless fading multiple access channel (MAC). Motivated by the additive nature of the wireless MAC, we propose an analog DSGD scheme, in which the devices transmit scaled versions of their gradient estimates in an uncoded fashion. We assume that the channel state information (CSI) is available only at the PS. We instead allow the PS to employ multiple antennas to alleviate the destructive fading effect, which cannot be cancelled by the transmitters due to the lack of CSI. Theoretical analysis indicates that, with the proposed DSGD scheme, increasing the number of PS antennas mitigates the fading effect, and, in the limit, the effects of fading and noise disappear, and the PS receives aligned signals used to update the model parameter. The theoretical results are then corroborated with the experimental ones.
|
computer science
|
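The effect described in the abstract above, namely that fading and noise are averaged out as the number of parameter-server antennas grows, can be illustrated with a small simulation. The conjugate-CSI combiner below is a simplified stand-in for the paper's receiver, and the device count, model dimension, and SNR are arbitrary illustrative values.

```python
# Illustrative simulation (not the paper's scheme): devices send their gradient
# vectors uncoded over a fading MAC; the PS combines K antennas with conjugate CSI,
# and the relative error of the recovered gradient sum shrinks as K grows.
import numpy as np

rng = np.random.default_rng(1)
M, d = 20, 1000                          # devices, model dimension
true_grads = rng.normal(size=(M, d))
target = true_grads.sum(axis=0)          # what the PS would like to recover

def ota_estimate_error(K, snr_db=10.0):
    """Relative error of over-the-air aggregation with K PS antennas."""
    noise_std = 10 ** (-snr_db / 20)
    h = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)   # E|h|^2 = 1
    n = noise_std * (rng.normal(size=(K, d)) + 1j * rng.normal(size=(K, d))) / np.sqrt(2)
    y = h.T @ true_grads + n             # antenna k receives sum_m h[m, k] * g_m + noise
    # Conjugate-CSI combining (simplified stand-in for the paper's receiver):
    est = np.real(np.einsum('mk,kd->d', h.conj(), y)) / K
    return np.linalg.norm(est - target) / np.linalg.norm(target)

for K in (1, 8, 64, 512):
    print(K, round(ota_estimate_error(K), 3))   # relative error shrinks as antennas increase
```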
We offer a generalization of a formula of Popov involving the von Mangoldt function. Some commentary on its relation to other results in analytic number theory is mentioned, as well as an analogue involving the M\"obius function.
|
mathematics
|
We present Gemini-S and {\it Spitzer}-IRAC optical-through-near-IR observations in the field of the SPT2349-56 proto-cluster at $z=4.3$. We detect optical/IR counterparts for only nine of the 14 submillimetre galaxies (SMGs) previously identified by ALMA in the core of SPT2349-56. In addition, we detect four $z\sim4$ Lyman-break galaxies (LBGs) in the 30 arcsec diameter region surrounding this proto-cluster core. Three of the four LBGs are new systems, while one appears to be a counterpart of one of the nine observed SMGs. We identify a candidate brightest cluster galaxy (BCG) with a stellar mass of $(3.2^{+2.5}_{-1.4})\times10^{11}\,{\rm M}_{\odot}$. The stellar masses of the eight other SMGs place them on, above, and below the main sequence of star formation at $z\approx4.5$. The cumulative stellar mass for the SPT2349-56 core is at least $(11.5\pm2.9)\times10^{11}\,{\rm M}_{\odot}$, a sizeable fraction of the stellar mass in local BCGs, and close to the universal baryon fraction (0.16) relative to the virial mass of the core ($10^{13}\,{\rm M}_{\odot}$). As all 14 of these SMGs are destined to quickly merge, we conclude that the proto-cluster core has already developed a significant stellar mass at this early stage, comparable to $z=1$ BCGs. Importantly, we also find that the SPT2349-56 core structure would be difficult to uncover in optical surveys, with none of the ALMA sources being easily identifiable or constrained through $g,r,$ and $i$ colour-selection in deep optical surveys and only a modest overdensity of LBGs over the extended core structure. SPT2349-56 therefore represents a truly dust-obscured phase of a massive cluster core under formation.
|
astrophysics
|
The Hodge decomposition provides a very powerful mathematical method for the analysis of 2D and 3D vector fields. It states roughly that any vector field can be $L^2$-orthogonally decomposed into a curl-free, a divergence-free, and a harmonic field. The harmonic field itself can be further decomposed into three components, two of which are closely tied to the topology of the underlying domain. For practical computations it is desirable to find a discretization which preserves as many aspects inherent to the smooth theory as possible while at the same time remaining computationally tractable, in particular on large-sized models. The correctness and convergence of such a discretization depend strongly on the choice of ansatz spaces defined on the surface or volumetric mesh to approximate infinite-dimensional subspaces. This paper presents a consistent discretization of Hodge-type decompositions for piecewise constant vector fields on volumetric meshes. Our approach is based on a careful interplay between edge-based N\'ed\'elec elements and face-based Crouzeix-Raviart elements, resulting in a very simple formulation. The method is stable with respect to noisy vector fields and mesh resolution, and performs well on large models. We give pseudocode for a possible implementation of the method, together with some insights on how the Hodge decomposition could answer some central questions in computational fluid dynamics.
|
mathematics
|
Superpositions of spin helices can yield topological spin textures, such as two-dimensional vortices and skyrmions, and three-dimensional hedgehogs. Their topological nature and spatial dimensionality depend on the number and relative directions of the constituent helices. This allows mutual transformation between the topological spin textures by controlling the spatial anisotropy. Here we theoretically study the effect of anisotropy in the magnetic interactions for an effective spin model for chiral magnetic metals. By variational calculations for both cases with triple and quadruple superpositions, we find that the hedgehog lattices, which are stable in the isotropic case, are deformed by the anisotropy, and eventually changed into other spin textures with reduced dimension, such as helices and vortices. We also clarify the changes of topological properties by tracing the real-space positions of magnetic monopoles and antimonopoles as well as the emergent magnetic field generated by the noncoplanar spin textures. Our results suggest possible control of the topological spin textures, e.g., by uniaxial pressure and chemical substitution in chiral materials.
|
condensed matter
|
A bipartite entanglement between two nearest-neighbor Heisenberg spins of a spin-1/2 Ising-Heisenberg model on a triangulated Husimi lattice is quantified using a concurrence. It is shown that the concurrence equals zero in a classical ferromagnetic and a quantum disordered phase, while it becomes sizable though unsaturated in a quantum ferromagnetic phase. A thermally-assisted reentrance of the concurrence is found above a classical ferromagnetic phase, whereas a quantum ferromagnetic phase displays a striking cusp of the concurrence at a critical temperature.
|
condensed matter
|
We study the multifield dynamics of axion models nonminimally coupled to gravity. As usual, we consider a canonical $U(1)$ symmetry-breaking model in which the axion is the phase of a complex scalar field. If the complex scalar field has a nonminimal coupling to gravity, then the (oft-forgotten) radial component can drive a phase of inflation prior to an inflationary phase driven by the axion field. In this setup, the mass of the axion field is dependent on the radial field because of the nonminimal coupling, and the axion remains extremely light during the phase of radial inflation. As the radial field approaches the minimum of its potential, there is a transition to natural inflation in the angular direction. In the language of multifield inflation, this system exhibits ultra-light isocurvature perturbations, which are converted to adiabatic perturbations at a fast turn, namely the onset of axion inflation. For models wherein the CMB pivot scale exited the horizon during radial inflation, this acts to suppress the tensor-to-scalar ratio $r$, without generating CMB non-Gaussianity or observable isocurvature perturbations. Finally, we note that the interaction strength between axion and gauge fields is suppressed during the radial phase relative to its value during the axion inflation phase by several orders of magnitude. This decouples the constraints on the inflationary production of gauge fields (e.g., from primordial black holes) from the constraints on their production during (p)reheating.
|
high energy physics theory
|
Within the next few years, the Square Kilometer Array (SKA) or one of its pathfinders will hopefully provide a detection of the 21-cm signal fluctuations from the Epoch of Reionization (EoR). Then, the main goal will be to accurately constrain the underlying astrophysical parameters. Currently, this is mainly done with Bayesian inference using Markov Chain Monte Carlo sampling. Recently, studies using neural networks trained to perform inverse modelling have shown interesting results. We build on these by improving the accuracy of the neural-network predictions and by exploring other supervised learning methods: kernel and ridge regressions. Based on a large training set of 21-cm power spectra, we compare the performances of these supervised learning methods. When using an un-noised signal as input, we improve on the previous neural-network accuracy by one order of magnitude and, using local ridge kernel regression, we gain another factor of a few. We then reach an rms prediction error of a few percent of the 1-sigma confidence level due to SKA thermal noise (as estimated with Bayesian inference). This last performance level requires optimizing the hyper-parameters of the method: how to do that perfectly in the case of an unknown signal remains an open question. For an input signal altered by SKA-type thermal noise, our neural network recovers the astrophysical parameter values with an error within half of the 1$\sigma$ confidence level due to the SKA thermal noise. This accuracy improves to 10$\%$ of the 1$\sigma$ level when using the local ridge kernel regression (with optimized hyper-parameters). We are thus reaching a performance level where supervised learning methods are a viable alternative for determining the best-fit parameter values.
|
astrophysics
|
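Of the supervised-learning alternatives mentioned in the abstract above, kernel ridge regression with a small hyper-parameter grid is the simplest to sketch; the snippet below maps mock power-spectrum-like feature vectors to parameters with scikit-learn. The synthetic data and the hyper-parameter grid are placeholders, not the study's actual training set or settings.

```python
# Illustrative sketch: kernel ridge regression from 21-cm power-spectrum-like features
# to astrophysical parameters. Data and hyper-parameters are synthetic placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n_models, n_bins, n_params = 2000, 40, 3
theta = rng.uniform(0, 1, size=(n_models, n_params))     # astrophysical parameters
basis = rng.normal(size=(n_params, n_bins))
spectra = np.tanh(theta @ basis) + 0.01 * rng.normal(size=(n_models, n_bins))  # mock spectra

X_tr, X_te, y_tr, y_te = train_test_split(spectra, theta, random_state=0)

krr = GridSearchCV(
    KernelRidge(kernel="rbf"),                           # ridge regression in an RBF feature space
    {"alpha": [1e-3, 1e-2, 1e-1], "gamma": [0.01, 0.1, 1.0]},   # hyper-parameters to optimise
    cv=5,
).fit(X_tr, y_tr)

pred = krr.predict(X_te)
print("best hyper-parameters:", krr.best_params_)
print("rms error per parameter:", np.sqrt(np.mean((pred - y_te) ** 2, axis=0)))
```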
The basic physics and recent progress in theoretical and particle-in-cell (PIC) simulation studies of particle acceleration in multi-island magnetic reconnection are briefly reviewed. Particle acceleration in multi-island magnetic reconnection is considered a plausible mechanism for the acceleration of energetic particles in solar flares and the solar wind. Theoretical studies have demonstrated that such a mechanism can produce the observed power-law energy distribution of energetic particles if the particle motion is sufficiently randomized in the reconnection event. However, PIC simulations seem to suggest that the first-order Fermi acceleration mechanism is unable to produce a power-law particle energy distribution function in mildly relativistic multi-island magnetic reconnections. On the other hand, while simulations of highly relativistic reconnections appear to be able to produce a power-law energy spectrum, the spectral indices obtained are generally harder than the soft power-law spectra with indices $\sim -5$ commonly observed in the solar wind and solar flare events. In addition, the plasma heating due to kinetic instabilities in 3D magnetic reconnection may "thermalize" the power-law particles, making it even more difficult for multi-island reconnections to generate a power-law spectrum. We discuss the possible reasons that may lead to these problems.
|
astrophysics
|
Free Electron Lasers (FELs) are a solution for providing intense, coherent and bright radiation in the hard X-ray regime. Due to the low wall-plug efficiency of FEL facilities, it is crucial, and additionally very useful, to develop complete and accurate simulation tools for better optimizing the FEL interaction. The highly sophisticated dynamics involved in the FEL process was the main obstacle hindering the development of general simulation tools for this problem. The software MITHRA tackles this problem by offering a platform for full-wave simulation of the FEL process. The full-wave solver employs a numerical algorithm based on the finite-difference time-domain/particle-in-cell (FDTD/PIC) method in a Lorentz-boosted coordinate system. The developed software offers a suitable tool for the analysis of FEL interactions without considering any of the usual approximations. This contribution is the complete manual for the second version of this software, MITHRA 2.0, which is released after extensive efforts to optimize the software performance and to remove several bugs in the code.
|
physics
|
The simplest extensions of single-particle dynamics in a momentum-conserving active fluid - an active suspension of two colloidal particles, or a single particle confined by a wall - exhibit strong departures from Boltzmann behavior, resulting in either a breakdown of an effective temperature description or a steady state with a nonzero entropy production rate. This is a consequence of hydrodynamic interactions that introduce multiplicative noise into the stochastic description of the particle positions. This results in fluctuation-induced interactions that depend on distance as a power law. We find that the dynamics of activated colloids in a passive fluid, with stochastic forcing localized on the particle, is different from that of passive colloids in an active fluctuating fluid.
|
condensed matter
|
Motivated by the realisation of the Yang-Baxter equation of 2d integrable models in the 4d gauge theory of Costello-Witten-Yamazaki (CWY), we study the embedding of integrable 2d Toda field models inside this construction. This is done by using the Lax formulation of 2d integrable systems and by thinking of the standard Lax pair $L_{\pm}$ in terms of components of the CWY gauge connection propagating along particular directions in the gauge bundle. We also use results of the CWY theory to build quantum line operators for 2d Toda models and compute the one-loop contribution of two intersecting lines exchanging one gluon. Other features, such as local symmetries, and comments on the extension of our method to other 2d integrable models are also discussed. We also comment on some basic points that still need refinement before talking about a fully consistent embedding of Lax pairs into the CWY theory.
|
high energy physics theory
|
We consider the effective field theory of multiple interacting massive spin-2 fields. We focus on the case where the interactions are chosen so that the cutoff is the highest possible, and highlight two distinct classes of theories. In the first class, the mass eigenstates only interact through potential operators that carry no derivatives in unitary gauge at leading order. In the second class, a specific kinetic mixing between the mass eigenstates is included non-linearly. Performing a decoupling and ADM analysis, we point out the existence of a ghost present at a low scale for the first class of interactions. For the second class of interactions where kinetic mixing is included, we derive the full $\Lambda_3$ decoupling limit and confirm the absence of any ghosts. Nevertheless both formulations can be used to consistently describe an EFT of interacting massive spin-2 fields which, for a suitable technically natural tuning of the EFT, have the same strong coupling scale $\Lambda_3$. We identify the generic form of EFT corrections in each case. By using Galileon Duality transformations for the specific case of two massive spin-2 fields with suitable couplings, the decoupling limit theory is shown to be a bi-Galileon.
|
high energy physics theory
|
We introduce the sudden variant (SNZ) of the Net Zero scheme realizing controlled-$Z$ (CZ) gates by baseband flux control of transmon frequency. SNZ CZ gates operate at the speed limit of transverse coupling between computational and non-computational states by maximizing intermediate leakage. The key advantage of SNZ is tuneup simplicity, owing to the regular structure of conditional phase and leakage as a function of two control parameters. We realize SNZ CZ gates in a multi-transmon processor, achieving $99.93\pm0.24\%$ fidelity and $0.10\pm0.02\%$ leakage. SNZ is compatible with scalable schemes for quantum error correction and adaptable to generalized conditional-phase gates useful in intermediate-scale applications.
|
quantum physics
|
The concept of homology, originally developed as a useful tool in algebraic topology, has by now become pervasive in quite different branches of mathematics. The notion particularly carries over quite naturally to the setup of measure-preserving transformations arising from various group actions or, equivalently, the setup of stationary sequences considered in this paper. Our main result provides a sharp criterion which determines (and rules out) when two stationary processes belong to the same {\it null-homology equivalence class}. We also discuss some concrete cases where the notion of null-homology turns up in a relevant manner.
|
mathematics
|
Improved statistics and mass-composition-sensitive observations are required to clarify the origin of ultra-high energy cosmic rays (UHECRs). Inevitably in the future, the UHECR observatories will have to be expanded due to the small flux; however, the upgrade will be expensive with the detectors currently in use. Hence, we are developing a new fluorescence detector for UHECR observation. The proposed fluorescence detector, called the cosmic ray air fluorescence Fresnel-lens telescope (CRAFFT), has an extremely simple structure and can observe the longitudinal development of an air shower. Furthermore, CRAFFT has the potential to significantly reduce costs for the realization of a huge observatory for UHECR research. We deployed four CRAFFT detectors at the Telescope Array site and conducted the test observation. We have successfully observed ten air-shower events using CRAFFT. Thus, CRAFFT can be a solution to realize the next generation of UHECR observatories.
|
astrophysics
|
The quantum approximate optimisation algorithm (QAOA) has become a cornerstone of contemporary quantum applications development. Here we show that the $density$ of problem constraints versus problem variables acts as a performance indicator. Density is found to correlate strongly with approximation inefficiency for fixed depth QAOA applied to random graph minimization problem instances. Further, the required depth for accurate QAOA solution to graph problem instances scales critically with density. We performed a detailed reanalysis of the data reproduced from Google's Sycamore superconducting qubit quantum processor executing QAOA applied to minimization problems on graphs. We found that Sycamore approaches a rapid fall-off in approximation quality beyond intermediate-density instances. Our findings offer new insight into performance analysis of contemporary quantum optimization algorithms and contradict recent speculation regarding low-depth QAOA performance benefits.
|
quantum physics
|
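For concreteness, the "density" in the QAOA abstract above is the ratio of problem constraints to problem variables; for a graph minimization instance a natural reading is edges per node. The snippet below, with arbitrary Erdos-Renyi parameters, just shows how that quantity is computed for random instances.

```python
import numpy as np

rng = np.random.default_rng(0)

def constraint_density(n_nodes, edge_prob, rng):
    """Edges per node of an Erdos-Renyi graph, treating each edge as one
    constraint and each node as one binary variable of the instance."""
    upper = rng.random((n_nodes, n_nodes)) < edge_prob
    n_edges = int(np.triu(upper, k=1).sum())
    return n_edges / n_nodes

for p in (0.1, 0.3, 0.5):
    mean_d = np.mean([constraint_density(20, p, rng) for _ in range(200)])
    print(f"edge probability {p}: mean density {mean_d:.2f}")
```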
Statistical modeling can involve a tension between assumptions and statistical identification. The law of the observable data may not uniquely determine the value of a target parameter without invoking a key assumption, and, while plausible, this assumption may not be obviously true in the scientific context at hand. Moreover, there are many instances of key assumptions which are untestable, hence we cannot rely on the data to resolve the question of whether the target is legitimately identified. Working in the Bayesian paradigm, we consider the grey zone of situations where a key assumption, in the form of a parameter space restriction, is scientifically reasonable but not incontrovertible for the problem being tackled. Specifically, we investigate statistical properties that ensue if we structure a prior distribution to assert that `maybe' or `perhaps' the assumption holds. Technically this simply devolves to using a mixture prior distribution putting just some prior weight on the assumption, or one of several assumptions, holding. However, while the construct is straightforward, there is very little literature discussing situations where Bayesian model averaging is employed across a mix of fully identified and partially identified models.
|
statistics
|
In a first-of-its-kind study, this paper proposes the formulation of constructing prediction intervals (PIs) in a time series as a bi-objective optimization problem and solves it with the help of the Nondominated Sorting Genetic Algorithm (NSGA-II). We also propose modeling the chaos present in the time series as a preprocessor in order to model the deterministic uncertainty present in the time series. Even though the proposed models are general in purpose, they are used here for quantifying the uncertainty in macroeconomic time series forecasting. Ideal PIs should be as narrow as possible while capturing most of the data points. Based on these two objectives, we formulated a bi-objective optimization problem to generate PIs in 2 stages, wherein reconstructing the phase space using Chaos theory (stage-1) is followed by generating optimal point predictions using NSGA-II, and these point predictions are in turn used to obtain PIs (stage-2). We also propose a 3-stage hybrid, wherein the 3rd stage invokes NSGA-II too in order to solve the problem of constructing PIs from the point predictions obtained in the 2nd stage. The proposed models, when applied to the macroeconomic time series, yielded better results in terms of both prediction interval coverage probability (PICP) and prediction interval average width (PIAW) compared to the state-of-the-art Lower Upper Bound Estimation Method (LUBE) with Gradient Descent (GD). The 3-stage model yielded better PICP compared to the 2-stage model but showed similar performance in PIAW with the added computation cost of running NSGA-II a second time.
|
computer science
|
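The two objectives named in the abstract above, PICP and PIAW, are simple to compute; the sketch below defines both on synthetic intervals (the data and interval widths are made up purely for illustration).

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction Interval Coverage Probability: fraction of observations
    that fall inside their interval (to be maximized)."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return float(np.mean((y >= lower) & (y <= upper)))

def piaw(lower, upper):
    """Prediction Interval Average Width (to be minimized)."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))

rng = np.random.default_rng(1)
y = rng.normal(size=200)                 # observations
pred = np.zeros(200)                     # toy point predictions
half_width = rng.uniform(0.5, 1.5, 200)  # toy interval half-widths
lower, upper = pred - half_width, pred + half_width
print("PICP:", round(picp(y, lower, upper), 3),
      "PIAW:", round(piaw(lower, upper), 3))
```

Any bi-objective optimizer such as NSGA-II then trades these two quantities off against each other.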
Modern nanotechnology techniques offer new opportunities for fabricating structures and devices at the micron and sub-micron level. Here, we use focused ion beam techniques to realize drift tube Zernike phase plates for electrons, whose operation is based on the presence of contact potentials in Janus bimetallic cylinders, in a similar manner to the electrostatic Aharonov-Bohm effect in bimetallic wires. We use electron Fraunhofer interference to demonstrate that such bimetallic pillar structures introduce phase shifts that can be tuned to desired values by varying their dimensions, in particular their heights.
|
physics
|
We review recent work on low-frequency Floquet engineering and its application to quantum materials driven by light, emphasizing van der Waals systems hosting Moir\'e superlattices. These non-equilibrium systems combine the twist-angle sensitivity of the band structures with the flexibility of light drives. The frequency, amplitude, and polarization of light can be easily tuned in experimental setups, leading to platforms with on-demand properties. First, we review recent theoretical developments to derive effective Floquet Hamiltonians in different frequency regimes. We apply some of these theories to study twisted graphene and twisted transition metal dichalcogenide systems irradiated by light in free space and inside a waveguide. We study the changes induced in the quasienergies and steady-states, which can lead to topological transitions. Next, we consider van der Waals magnetic materials driven by low-frequency light pulses in resonance with the phonons. We discuss the phonon dynamics induced by the light and resulting magnetic transitions from a Floquet perspective. We finish by outlining new directions for Moir\'e-Floquet engineering in the low-frequency regime and their relevance for technological applications.
|
condensed matter
|
Galaxy evolution is driven by many complex interrelated processes as galaxies accrete gas, form new stars, grow their stellar masses and central black holes, and subsequently quench. The processes that drive these transformations are poorly understood, but it is clear that the local environment on multiple scales plays a significant role. Today's massive clusters are dominated by spheroidal galaxies with low levels of star formation while those in the field are mostly still actively forming their stars. In order to understand the physical processes that drive both the mass build up in galaxies and the quenching of star formation, we need to investigate galaxies and their surrounding gas within and around the precursors of today's massive galaxy clusters -- protoclusters at z>2. The transition period before protoclusters began to quench and become the massive clusters we observe today is a crucial time to investigate their properties and the mechanisms driving their evolution. However, until now, progress characterizing the galaxies within protoclusters has been slow, due to the difficulty of obtaining highly complete spectroscopic observations of faint galaxies at z>2 over large areas of the sky. The next decade will see a transformational shift in our understanding of protoclusters as deep spectroscopy over wide fields of view will be possible in conjunction with high resolution deep imaging in the optical and near-infrared.
|
astrophysics
|
We consider holography of two pp-wave metrics in conformal gravity, their one-point functions, and asymptotic symmetries. One of the metrics is a generalization of the standard pp-waves in Einstein gravity to conformal gravity. The holography of this metric shows that within conformal gravity one can have a realised solution which has a non-vanishing partially massless response (PMR) tensor even for a vanishing subleading term in the Fefferman-Graham expansion (i.e. Neumann boundary conditions), and vice-versa.
|
high energy physics theory
|
We established a production method for a good millimeter-wave absorber by using a 3D-printed mold. The mold has a periodic pyramid shape, and an absorptive material is filled into the mold. This shape reduces the surface reflection. The 3D-printed mold is made from a transparent material in the millimeter-wave range. Therefore, unmolding is not necessary. A significant benefit of this production method is easy prototyping with various shapes and various absorptive materials. We produced a test model and used a two-component epoxy encapsulant as the absorptive material. The test model achieved a low reflectance: $\sim 1\%$ at 100 GHz. The absorber is sometimes maintained at low temperature in cases where superconducting detectors are used. Therefore, cryogenic performance is required in terms of mechanical strength for the thermal cycles, adhesive strength, and sufficient thermal conductivity. We confirmed the test-model strength by immersing the model into a liquid-nitrogen bath.
|
astrophysics
|
The standard formulation of quantum theory relies on a fixed space-time metric determining the localisation and causal order of events. In general relativity, the metric is influenced by matter, and is expected to become indefinite when matter behaves quantum mechanically. Here, we develop a framework to operationally define events and their localisation with respect to a quantum clock reference frame, also in the presence of gravitating quantum systems. We find that, when clocks interact gravitationally, the time localisability of events becomes relative, depending on the reference frame. This relativity is a signature of an indefinite metric, where events can occur in an indefinite causal order. Even if the metric is indefinite, for any event we can find a reference frame where local quantum operations take their standard unitary dilation form. This form is preserved when changing clock reference frames, yielding physics covariant with respect to quantum reference frame transformations.
|
quantum physics
|
Convolutional Neural Networks have been known as black-box models as humans cannot interpret their inner functionalities. With an attempt to make CNNs more interpretable and trustworthy, we propose IS-CAM (Integrated Score-CAM), where we introduce the integration operation within the Score-CAM pipeline to achieve visually sharper attribution maps quantitatively. Our method is evaluated on 2000 randomly selected images from the ILSVRC 2012 Validation dataset, which proves the versatility of IS-CAM to account for different models and methods.
|
computer science
|
The density profile of the ambient medium around a supermassive black hole plays an important role in understanding the inflow-outflow mechanisms in the Galactic Centre. We constrain the spherical density profile using the stellar bow shock of the star S2 which orbits the supermassive black hole at the Galactic Centre with a pericentre distance of 14.4 mas ($\sim$ 1500 R$_\text{s}$). Assuming an elliptical orbit, we apply celestial mechanics and the theory of bow shocks that are at ram pressure equilibrium. We analyse the measured infrared flux density and magnitudes of S2 in the L'-band (3.8 micron) obtained over seven epochs in the years between 2004-2018. We detect no significant change in S2 flux density until the recent periapse in May 2018. The intrinsic flux variability of S2 is at the level of 2 - 3%. Based on the dust-extinction model, the upper limit on the number density at the S2 periapse is $\sim$1.87$\times$10$^9$ cm$^{-3}$, which yields a density slope of at most 3.20. Using the synchrotron bow-shock emission, we obtain an ambient density of $\leq$ 1.01$\times$10$^5$ cm$^{-3}$ and a slope of $\leq$ 1.47. These values are consistent with a wide variety of media from hot accretion flows to potentially colder and denser media comparable in properties to broad-line region clouds. A standard thin disc can, however, be excluded at the distance of the S2 pericentre. Based on our sensitivity of 0.01 mag, we can distinguish between hot accretion flows and thin, cold discs, where the latter can be excluded at the scale of the S2 periapse. Future observations of stars with smaller pericentre distances in the S cluster using instruments such as METIS@ELT with a photometric sensitivity of as much as 10$^{-3}$ mag will make it possible to probe the Galactic Centre medium at intermediate scales at densities as low as $\sim$ 700 cm$^{-3}$ in case of non-thermal bow-shock emission.
|
astrophysics
|
First-principles calculations based on density-functional theory in the pseudo-potential approach have been performed for the total energy, crystal structure and cell polarization for SrTaO$_2$N/SrTiO$_3$ heterostructures. Different heterojunctions were analyzed in terms of the termination atoms at the interface plane, and periodic or non-periodic stacking in the perpendicular direction. The calculations show that the SrTaO$_2$N layer is compressed along the $ab$-plane, while the SrTiO$_3$ is elongated, thus favoring the formation of P4mm local environment on both sides of the interface, leading to net macroscopic polarization. The analysis of the local polarization as a function of the distance to the interface, for each individual unit cell was found to depend on the presence of a N or an O atom at the interface, and also on the asymmetric and not uniform $c$-axis deformation due to the induced strain in the $ab$-plane. The resulting total polarization in the periodic array was $ \approx 0.54$ C/m$^2$, which makes this type of arrangement suitable for microelectronic applications.
|
condensed matter
|
Measurements of cosmic neutrinos have a rich potential for providing insight into fundamental neutrino properties. For this, precise knowledge about the astrophysical environment of cosmic neutrino propagation is needed. However, this is not always possible, and the lack of information can bring about theoretical uncertainties in our physical interpretation of the results of experiments on cosmic neutrino fluxes. We formulate an approach that allows one to quantify the uncertainties using the apparatus of quantum measurement theory. We consider high-energy Dirac neutrinos emitted by some distant source and propagating towards the earth in interstellar space. It is supposed that neutrinos can encounter, on their way to the detector at the earth, a dense cosmic object serving as a filter that stops active, left-handed neutrinos and lets only sterile, right-handed neutrinos propagate further. Such a filter mimics the strongest effect on the neutrino flux that can be induced by the cosmic object and that can be missed in the theoretical interpretation of the lab measurements due to the insufficient information about the astrophysical environment of the neutrino propagation. Treating the neutrino interaction with the cosmic object as the first, neutrino-spin measurement, whose result is not recorded, we study its invasive effect on the second, neutrino-flavor measurement in the lab.
|
high energy physics phenomenology
|
Consider a real matrix $\Theta$ consisting of rows $(\theta_{i,1},\ldots,\theta_{i,n})$, for $1\leq i\leq m$. The problem of making the system of linear forms $x_{1}\theta_{i,1}+\cdots+x_{n}\theta_{i,n}-y_{i}$ small for integers $x_{j},y_{i}$ naturally induces an ordinary and a uniform exponent of approximation, denoted by $w(\Theta)$ and $\widehat{w}(\Theta)$ respectively. For $m=1$, a sharp lower bound for the ratio $w(\Theta)/\widehat{w}(\Theta)$ was recently established by Marnat and Moshchevitin. We give a short, new proof of this result upon a hypothesis on the best approximation integer vectors associated to $\Theta$. Our conditional result extends to general $m>1$ (but may not be optimal in this case). Moreover, our hypothesis is always satisfied in particular for $m=1, n=2$ and thereby unconditionally confirms a previous observation of Jarn\'ik. We formulate our results in the more general context of approximation of subspaces of Euclidean spaces by lattices. We further establish criteria upon which a given number $\ell$ of consecutive best approximation vectors are linearly independent. Our method is based on Siegel's Lemma.
|
mathematics
|
We present a very minimal model for baryogenesis by a dark first-order phase transition. It employs a new dark $SU(2)_{D}$ gauge group with two doublet Higgs bosons, two lepton doublets, and two singlets. The singlets act as a neutrino portal that transfers the generated asymmetry to the Standard Model. The model predicts $\Delta N_\text{eff} = 0.09-0.13$, detectable by future experiments, as well as possible signals from exotic decays of the Higgs and $Z$ bosons and stochastic gravitational waves.
|
high energy physics phenomenology
|
Phonons, and in particular surface acoustic wave phonons, have been proposed as a means to coherently couple distant solid-state quantum systems. Recent experiments have shown that superconducting qubits can control and detect individual phonons in a resonant structure, enabling the coherent generation and measurement of complex stationary phonon states. Here, we report the deterministic emission and capture of itinerant surface acoustic wave phonons, enabling the quantum entanglement of two superconducting qubits. Using a 2 mm-long acoustic quantum communication channel, equivalent to a 500 ns delay line, we demonstrate the emission and re-capture of a phonon by one qubit; quantum state transfer between two qubits with a 67\% efficiency; and, by partial transfer of a phonon between two qubits, generation of an entangled Bell pair with a fidelity of $\mathcal{F}_B = 84 \pm 1\%$.
|
quantum physics
|
It is well-known that unitary irreducible representations of groups can be usefully classified in a 3-fold classification scheme: Real, Complex, Quaternionic. In 1962 Freeman Dyson pointed out that there is an analogous 10-fold classification of irreducible representations of groups involving both unitary and antiunitary operators. More recently it was realized that there is also a 10-fold classification scheme involving superdivision algebras. Here we give a careful proof of the equivalence of these two 10-fold ways.
|
high energy physics theory
|
Liquid argon is being employed as a detector medium in neutrino physics and Dark Matter searches. A recent push to expand the applications of scintillation light in Liquid Argon Time Projection Chamber neutrino detectors has necessitated the development of advanced methods of simulating this light. The presently available methods tend to be prohibitively slow or imprecise due to the combination of detector size and the amount of energy deposited by neutrino beam interactions. In this work we present a semi-analytical model to predict the quantity of argon scintillation light observed by a light detector with a precision better than $10\%$, based only on the relative positions between the scintillation and light detector. We also provide a method to predict the distribution of arrival times of these photons accounting for propagation effects. Additionally, we present an equivalent model to predict the number of photons and their arrival times in the case of a wavelength-shifting, highly-reflective layer being present on the detector cathode. Our proposed method can be used to simulate light propagation in large-scale liquid argon detectors such as DUNE or SBND, and could also be applied to other detector mediums such as liquid xenon or xenon-doped liquid argon.
|
physics
|
In this paper we study lumpy black holes with AdS${}_p \times S^q$ asymptotics, where the isometry group coming from the sphere factor is broken down to SO($q$). Depending on the values of $p$ and $q$, these are solutions to a certain Supergravity theory with a particular gauge field. We have considered the values $(p,q) = (5,5)$ and $(p,q) = (4,7)$, corresponding to type IIB supergravity in ten dimensions and eleven-dimensional supergravity respectively. These theories presumably contain an infinite spectrum of families of lumpy black holes, labeled by a harmonic number $\ell$, whose endpoints in solution space merge with another type of black holes with different horizon topology. We have numerically constructed the first four families of lumpy solutions, corresponding to $\ell = 1, 2^+, 2^-$ and $3$. We show that the geometry of the horizon near the merger is well-described by a cone over a triple product of spheres, thus extending Kol's local model to the present asymptotics. Interestingly, the presence of non-trivial fluxes in the internal sphere implies that the cone is no longer Ricci flat. This conical manifold accounts for the geometry and the behavior of the physical quantities of the solutions sufficiently close to the critical point. Additionally, we show that the vacuum expectation values of the dual scalar operators approach their critical values with a power law whose exponents are dictated by the local cone geometry in the bulk.
|
high energy physics theory
|
This paper proposes a new robust update rule of target network for deep reinforcement learning (DRL), to replace the conventional update rule, given as an exponential moving average. The target network is for smoothly generating the reference signals for a main network in DRL, thereby reducing learning variance. The problem with its conventional update rule is the fact that all the parameters are smoothly copied with the same speed from the main network, even when some of them are trying to update toward the wrong directions. This behavior increases the risk of generating the wrong reference signals. Although slowing down the overall update speed is a naive way to mitigate wrong updates, it would decrease learning speed. To robustly update the parameters while keeping learning speed, a t-soft update method, which is inspired by student-t distribution, is derived with reference to the analogy between the exponential moving average and the normal distribution. Through the analysis of the derived t-soft update, we show that it takes over the properties of the student-t distribution. Specifically, with a heavy-tailed property of the student-t distribution, the t-soft update automatically excludes extreme updates that differ from past experiences. In addition, when the updates are similar to the past experiences, it can mitigate the learning delay by increasing the amount of updates. In PyBullet robotics simulations for DRL, an online actor-critic algorithm with the t-soft update outperformed the conventional methods in terms of the obtained return and/or its variance. From the training process by the t-soft update, we found that the t-soft update is globally consistent with the standard soft update, and the update rates are locally adjusted for acceleration or suppression.
|
computer science
|
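The abstract above contrasts the conventional exponential-moving-average target update with a heavy-tailed, student-t-inspired rule. The sketch below shows the conventional soft update and a crude heavy-tailed stand-in of my own, for intuition only; the exact t-soft update rule is derived in the paper and is not reproduced here.

```python
import numpy as np

def soft_update(target, main, tau=0.005):
    """Conventional target-network update: exponential moving average."""
    return (1.0 - tau) * target + tau * main

def heavy_tailed_update(target, main, tau=0.005, nu=1.0):
    """Illustrative stand-in (NOT the paper's t-soft rule): parameters whose
    update deviates strongly from the target receive a student-t-like
    down-weighted step, suppressing extreme updates."""
    delta = main - target
    weight = (nu + 1.0) / (nu + delta**2)
    return target + tau * weight * delta

target = np.zeros(5)
main = np.array([0.1, 0.1, 0.1, 0.1, 5.0])   # last entry mimics an outlier
print("soft update  :", soft_update(target, main))
print("heavy-tailed :", heavy_tailed_update(target, main))
```

Note that the weight exceeds one for small deviations and shrinks toward zero for large ones, loosely mirroring the acceleration-and-suppression behaviour the abstract describes.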
It is well known that for all $n\geq1$ the number $n+ 1$ is a divisor of the central binomial coefficient ${2n\choose n}$. Since the $n$th central binomial coefficient equals the number of lattice paths from $(0,0)$ to $(n,n)$ by unit steps north or east, a natural question is whether there is a way to partition these paths into sets of $n+ 1$ paths or $n+1$ equinumerous sets of paths. The Chung-Feller theorem gives an elegant answer to this question. We pose and deliver an answer to the analogous question for $2n-1$, another divisor of ${2n\choose n}$. We then show our main result follows from a more general observation regarding binomial coefficients ${n\choose k}$ with $n$ and $k$ relatively prime. A discussion of the case where $n$ and $k$ are not relatively prime is also given, highlighting the limitations of our methods. Finally, we come full circle and give a novel interpretation of the Catalan numbers.
|
mathematics
|
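Both divisibility facts quoted in the abstract above are easy to spot-check with the standard library; the snippet below verifies them for small n.

```python
from math import comb

# Check that n + 1 and 2n - 1 both divide C(2n, n) for small n.
# C(2n, n) / (n + 1) is the n-th Catalan number.
for n in range(1, 16):
    c = comb(2 * n, n)
    assert c % (n + 1) == 0 and c % (2 * n - 1) == 0
    print(f"n={n:2d}  C(2n,n)={c:10d}  /(n+1)={c // (n + 1):9d}  "
          f"/(2n-1)={c // (2 * n - 1):9d}")
```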
Twist-untwist protocols for quantum metrology consist of a serial application of: 1. unitary nonlinear dynamics (e.g., spin squeezing or Kerr nonlinearity), 2. parameterized dynamics $U(\phi)$ (e.g., a collective rotation or phase space displacement), 3. time reversed application of step 1. Such protocols are known to produce states that allow Heisenberg scaling for experimentally accessible estimators of $\phi$ even when the nonlinearities are applied for times much shorter than required to produce Schr\"{o}dinger cat states. In this work, we prove that twist-untwist protocols provide the lowest estimation error among quantum metrology protocols that utilize two calls to a weakly nonlinear evolution and a readout involving only measurement of a spin operator $\vec{n}\cdot \vec{J}$, asymptotically in the number of particles. We consider the following physical settings: all-to-all interactions generated by one-axis twisting $J_{z}^{2}$ (e.g., interacting Bose gases), constant finite range spin-spin interactions of distinguishable or bosonic atoms (e.g., trapped ions or Rydberg atoms, or lattice bosons). In these settings, we further show that the optimal twist-untwist protocols asymptotically achieve $85\%$ and $92\%$ of the respective quantum Cram\'{e}r-Rao bounds. We show that the error of a twist-untwist protocol can be decreased by a factor of $L$ without an increase in the noise of the spin measurement if the twist-untwist protocol can be noiselessly iterated as an $L$ layer quantum alternating operator ansatz.
|
quantum physics
|
Gridding operation, which is to map non-uniform data samples onto a uniformly distributed grid, is one of the key steps in the radio astronomical data reduction process. One of the main bottlenecks of gridding is the poor computing performance, and a typical solution for such a performance issue is the implementation of multi-core CPU platforms. Although such a method could usually achieve good results, in many cases, the performance of gridding is still restricted to an extent due to the limitations of CPU, since the main workload of gridding is a combination of a large number of single instruction, multi-data-stream operations, which is more suitable for GPU, rather than CPU implementations. To meet the challenge of massive data gridding for the modern large single-dish radio telescopes, e.g., the Five-hundred-meter Aperture Spherical radio Telescope (FAST), inspired by existing multi-core CPU gridding algorithms such as Cygrid, here we present an easy-to-install, high-performance, and open-source convolutional gridding framework, HCGrid, for CPU-GPU heterogeneous platforms. It optimises data search by employing multi-threading on CPU, and accelerates the convolution process by utilising massive parallelisation of GPU. In order to make HCGrid a more adaptive solution, we also propose the strategies of thread organisation and coarsening, as well as optimal parameter settings under various GPU architectures. A thorough analysis of computing time and performance gain with several GPU parallel optimisation strategies shows that it can lead to excellent performance in hybrid computing environments.
|
astrophysics
|
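As a point of reference for the gridding operation described above, here is a brute-force, serial convolutional gridding of scattered samples onto a regular grid with a Gaussian kernel. It is only a sketch: real gridders such as Cygrid or HCGrid truncate the kernel support and parallelise the data search, and none of the sizes or kernel parameters below come from the paper.

```python
import numpy as np

def grid_gaussian(x, y, values, grid_size=64, sigma=0.05):
    """Weighted convolutional gridding of samples (x, y, value) in [0, 1]^2
    onto a grid_size x grid_size image using a Gaussian kernel."""
    cells = np.linspace(0.0, 1.0, grid_size)
    gx, gy = np.meshgrid(cells, cells)
    num = np.zeros((grid_size, grid_size))
    den = np.zeros_like(num)
    for xi, yi, vi in zip(x, y, values):
        k = np.exp(-((gx - xi) ** 2 + (gy - yi) ** 2) / (2.0 * sigma**2))
        num += vi * k
        den += k
    return np.where(den > 0, num / den, 0.0)

rng = np.random.default_rng(2)
x, y = rng.random(500), rng.random(500)
values = np.sin(6.0 * x) + rng.normal(scale=0.1, size=500)
image = grid_gaussian(x, y, values)
print(image.shape, round(float(image.min()), 2), round(float(image.max()), 2))
```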
The Langevin Markov chain algorithms are widely deployed methods to sample from distributions in challenging high-dimensional and non-convex statistics and machine learning applications. Despite this, current bounds for the Langevin algorithms are slower than those of competing algorithms in many important situations, for instance when sampling from weakly log-concave distributions, or when sampling or optimizing non-convex log-densities. In this paper, we obtain improved bounds in many of these situations, showing that the Metropolis-adjusted Langevin algorithm (MALA) is faster than the best bounds for its competitor algorithms when the target distribution satisfies weak third- and fourth- order regularity properties associated with the input data. In many settings, our regularity conditions are weaker than the usual Euclidean operator norm regularity properties, allowing us to show faster bounds for a much larger class of distributions than would be possible with the usual Euclidean operator norm approach, including in statistics and machine learning applications where the data satisfy a certain incoherence condition. In particular, we show that using our regularity conditions one can obtain faster bounds for applications which include sampling problems in Bayesian logistic regression with weakly convex priors, and the nonconvex optimization problem of learning linear classifiers with zero-one loss functions. Our main technical contribution in this paper is our analysis of the Metropolis acceptance probability of MALA in terms of its "energy-conservation error," and our bound for this error in terms of third- and fourth- order regularity conditions. Our combination of this higher-order analysis of the energy conservation error with the conductance method is key to obtaining bounds which have a sub-linear dependence on the dimension $d$ in the non-strongly logconcave setting.
|
computer science
|
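For readers who have not seen it written out, the Metropolis-adjusted Langevin algorithm analysed above combines a Langevin proposal with a Metropolis correction for the asymmetric proposal density. The sketch below samples a toy 2D Gaussian; the step size and chain length are arbitrary.

```python
import numpy as np

def mala(log_p, grad_log_p, x0, step, n_steps, rng):
    """Metropolis-adjusted Langevin algorithm (MALA)."""
    x = np.array(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        mean_fwd = x + 0.5 * step**2 * grad_log_p(x)
        prop = mean_fwd + step * rng.normal(size=x.shape)
        mean_rev = prop + 0.5 * step**2 * grad_log_p(prop)
        # log proposal densities (up to a common constant)
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * step**2)
        log_q_rev = -np.sum((x - mean_rev) ** 2) / (2.0 * step**2)
        log_alpha = log_p(prop) - log_p(x) + log_q_rev - log_q_fwd
        if np.log(rng.random()) < log_alpha:
            x = prop
        chain[t] = x
    return chain

rng = np.random.default_rng(3)
chain = mala(lambda x: -0.5 * np.sum(x**2),   # standard 2D Gaussian target
             lambda x: -x, np.zeros(2), step=0.5, n_steps=5000, rng=rng)
print("posterior mean estimate:", chain[1000:].mean(axis=0).round(2))
```

The accept/reject step is exactly where the "energy-conservation error" analysed in the paper enters.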
Schwarzites are porous crystalline structures with Gaussian negative curvature. In this work, we investigated the mechanical behavior and energy absorption properties of two carbon-based diamond schwarzites (D688 and D8bal). We carried out fully atomistic molecular dynamics (MD) simulations. The optimized MD atomic models were used to generate macro-scale models for 3D-printing (PolyLactic Acid (PLA) polymer filaments) through Fused Deposition Modelling (FDM). Mechanical properties under uniaxial compression were investigated for both the atomic models and the 3D-printed ones. Mechanical testings were performed on the 3D-printed schwarzites where the deformation mechanisms were found to be similar to those observed in MD simulations. These results are suggestive of a scale-independent mechanical behavior that is dominated by structural topology. The structures exhibit high specific energy absorption and crush force efficiency ~0.8, which suggest that the 3D-printed diamond schwarzites are good candidates as energy-absorbing materials.
|
physics
|
The subject of radiation reaction in classical electromagnetism remains controversial over 120 years after the pioneering work of Lorentz. We give a simple but rigorous treatment of the subject at the textbook level that explains the apparent paradoxes that are much discussed in the literature on the subject. We first derive the equation of motion of a charged particle from conservation of energy and momentum, which includes the self-force term. We then show that this theory is unstable if charged particles are pointlike: the energy is unbounded from below, and charged particles self-accelerate (`over-react') due to their negative `bare' mass. This theory clearly does not describe our world, but we show that these instabilities are absent if the particle has a finite size larger than its classical radius. For such finite-size charged particles, the effects of radiation reaction can be computed in a systematic expansion in the size of the particle. The leading term in this expansion is the reduced-order Abraham-Lorentz equation of motion, which has no stability problems. We also discuss the apparent paradox that a particle with constant acceleration radiates, but does not suffer radiation reaction (`under-reaction'). Along the way, we introduce the ideas of renormalization and effective theories, which are important in many areas of modern theoretical physics. We hope that this will be a useful addition to the literature that will remove some of the air of mystery and paradox surrounding the subject.
|
high energy physics theory
|
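For orientation, the textbook nonrelativistic Abraham-Lorentz equation and its reduced-order form referred to above are usually written as follows (SI units); the paper's own derivation proceeds from energy-momentum conservation for finite-size charges.

```latex
\begin{align}
  m\,\dot{\mathbf{v}} &= \mathbf{F}_{\mathrm{ext}} + m\tau\,\ddot{\mathbf{v}},
  \qquad
  \tau \equiv \frac{q^{2}}{6\pi\varepsilon_{0} m c^{3}},
  \\[4pt]
  m\,\dot{\mathbf{v}} &\simeq \mathbf{F}_{\mathrm{ext}}
    + \tau\,\frac{\mathrm{d}\mathbf{F}_{\mathrm{ext}}}{\mathrm{d}t}
  \qquad \text{(reduced order: no runaway or pre-accelerating solutions).}
\end{align}
```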
Model-based 3D pose and shape estimation methods reconstruct a full 3D mesh for the human body by estimating several parameters. However, learning the abstract parameters is a highly non-linear process and suffers from image-model misalignment, leading to mediocre model performance. In contrast, 3D keypoint estimation methods combine deep CNN network with the volumetric representation to achieve pixel-level localization accuracy but may predict unrealistic body structure. In this paper, we address the above issues by bridging the gap between body mesh estimation and 3D keypoint estimation. We propose a novel hybrid inverse kinematics solution (HybrIK). HybrIK directly transforms accurate 3D joints to relative body-part rotations for 3D body mesh reconstruction, via the twist-and-swing decomposition. The swing rotation is analytically solved with 3D joints, and the twist rotation is derived from the visual cues through the neural network. We show that HybrIK preserves both the accuracy of 3D pose and the realistic body structure of the parametric human model, leading to a pixel-aligned 3D body mesh and a more accurate 3D pose than the pure 3D keypoint estimation methods. Without bells and whistles, the proposed method surpasses the state-of-the-art methods by a large margin on various 3D human pose and shape benchmarks. As an illustrative example, HybrIK outperforms all the previous methods by 13.2 mm MPJPE and 21.9 mm PVE on 3DPW dataset. Our code is available at https://github.com/Jeff-sjtu/HybrIK.
|
computer science
|
The ESA Gaia astrometric mission has enabled the remarkable discovery that a large fraction of the stars near the Solar neighbourhood appear to be debris from a single in-falling system, the so-called Gaia-Enceladus. One exciting feature of this result is that it gives astronomers for the first time a large sample of easily observable unevolved stars that formed in an extra-Galactic environment, which can be compared to stars that formed within our Milky Way. Here we use these stars to investigate the "Spite Plateau" -- the near-constant lithium abundance observed in metal-poor dwarf stars across a wide range of metallicities (-3<[Fe/H]<-1). In particular our aim is to test whether the stars that formed in the Gaia-Enceladus show a different Spite Plateau to other Milky Way stars that inhabit the disk and halo. Individual galaxies could have different Spite Plateaus --- e.g., the ISM could be more depleted in lithium in a lower galactic mass system due to it having a smaller reservoir of gas. We identified 76 Gaia-Enceladus dwarf stars observed and analyzed by the GALactic Archeology with HERMES (GALAH) survey as part of its Third Data Release. Orbital actions were used to select samples of Gaia-Enceladus stars, and comparison samples of halo and disk stars. We find that the Gaia-Enceladus stars show the same lithium abundance as other likely accreted stars and in situ Milky Way stars, strongly suggesting that the "lithium problem" is not a consequence of the formation environment. This result fits within the growing consensus that the Spite Plateau, and more generally the "cosmological lithium problem" -- the observed discrepancy between the amount of lithium in warm, metal-poor dwarf stars in our Galaxy, and the amount of lithium predicted to have been produced by Big Bang Nucleosynthesis -- is the result of lithium depletion processes within stars.
|
astrophysics
|
Heliospheric plasmas require multi-scale and multi-physics considerations. On one hand, MHD codes are widely used for global simulations of the solar-terrestrial environments, but do not provide the most elaborate physical description of space plasmas. Hybrid codes, on the other hand, capture important physical processes, such as electric currents and effects of finite Larmor radius, but they can be used locally only, since the limitations in available computational resources do not allow for their use throughout a global computational domain. In the present work, we present a new coupled scheme which allows to switch blocks in the block-adaptive grids from fluid MHD to hybrid simulations, without modifying the self-consistent computation of the electromagnetic fields acting on fluids (in MHD simulation) or charged ion macroparticles (in hybrid simulation). In this way, the hybrid scheme can refine the description in specified regions of interest without compromising the efficiency of the global MHD code.
|
physics
|
The main theme of the paper is the detailed discussion of the renormalization of the quantum field theory comprising two interacting scalar fields. The potential of the model is a fourth-order homogeneous polynomial of the fields, symmetric with respect to the transformation $\phi_{i}\rightarrow{-\phi_{i}}$. We determine the Feynman rules for the model and then present a detailed discussion of the renormalization of the theory at one loop. Next, we derive the one-loop renormalization group equations for the running masses and coupling constants. At the level of two loops, we use the FeynArts package of Mathematica to generate the two-loop Feynman diagrams and calculate in detail the setting sun diagram.
|
high energy physics theory
|
We study correlated switching of a superconducting nanobridge probed with a train of current pulses. For pulses with a low repetition rate, each pulse drives the superconducting bridge to the normal state with a probability $P$ independent of the outcomes in the preceding pulses. We show that with reduction of the time interval between pulses, a long-range correlation between pulses occurs: a stochastic switching event in a single pulse raises the temperature of the bridge and affects the outcome of the probing for the next pulses. As a result, an artificial intricate stochastic process with adjustable strength of correlation is produced. We identify a regime where the apparent switching probability exhibits thermal hysteresis with a discontinuity at a critical current amplitude of the probing pulse. This engineered stochastic process can be viewed as an artificial phase transition and provides an interesting framework for studying correlated systems. Due to its extreme sensitivity to the control parameter, i.e. electric current, temperature or magnetic field, it offers an opportunity for ultra-sensitive detection.
|
condensed matter
|
We study the minimum-cost metric perfect matching problem under online i.i.d arrivals. We are given a fixed metric with a server at each of the points, and then requests arrive online, each drawn independently from a known probability distribution over the points. Each request has to be matched to a free server, with cost equal to the distance. The goal is to minimize the expected total cost of the matching. Such stochastic arrival models have been widely studied for the maximization variants of the online matching problem; however, the only known result for the minimization problem is a tight $O(\log n)$-competitiveness for the random-order arrival model. This is in contrast with the adversarial model, where an optimal competitive ratio of $O(\log n)$ has long been conjectured and remains a tantalizing open question. In this paper, we show improved results in the i.i.d arrival model. We show how the i.i.d model can be used to give substantially better algorithms: our main result is an $O((\log \log \log n)^2)$-competitive algorithm in this model. Along the way we give a $9$-competitive algorithm for the line and tree metrics. Both results imply a strict separation between the i.i.d model and the adversarial and random order models, both for general metrics and these much-studied metrics.
|
computer science
|
Observables in quantum mechanics are represented by self-adjoint operators on Hilbert space. Such a ubiquitous, well-known, and very foundational fact, however, is traditionally subtle to explain in typical first classes in quantum mechanics, as well as to senior physicists who have grown up with the lesson that self-adjointness is "just technical". The usual difficulties are to clarify the connection between the demand for certain physical features in the theory and the corresponding mathematical requirement of self-adjointness, and to distinguish between self-adjoint and hermitian operators not just at the level of the mathematical definition but most importantly from the perspective that mere hermiticity, without self-adjointness, does not ensure the desired physical requirements and leaves the theory inconsistent. In this work we organise a number of standard facts on the physical role of self-adjointness into a coherent pedagogical path aimed at making quantum observables emerge as necessarily self-adjoint, and not merely hermitian, operators. Next to the central core of our line of reasoning -- the necessity of a non-trivial declaration of a domain to associate to the formal action of an observable, and the emergence of self-adjointness as a consequence of fundamental physical requirements -- we include some complementary materials consisting of a few instructive mathematical proofs and a short retrospective, ranging from the past decades to the current research agenda, on the self-adjointness problem for quantum Hamiltonians of relevance in applications.
|
quantum physics
|
In this report, we summarize the end-to-end signal-to-noise ratio and the rate of half-duplex, full-duplex, amplify-and-forward, and decode-and-forward relay-aided communications, as well as the signal-to-noise ratio and the rate of the emerging technology known as reconfigurable intelligent surfaces.
|
electrical engineering and systems science
|
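As a reminder of the quantities being summarized above, the commonly used dual-hop expressions are listed below, with $\gamma_1,\gamma_2$ the per-hop SNRs; these are standard textbook forms, not quoted from the report.

```latex
\begin{align}
  \gamma_{\mathrm{DF}} &= \min(\gamma_{1},\gamma_{2}),
  &
  \gamma_{\mathrm{AF}} &= \frac{\gamma_{1}\gamma_{2}}{\gamma_{1}+\gamma_{2}+1},
  \\[4pt]
  R_{\mathrm{HD}} &= \tfrac{1}{2}\log_{2}\bigl(1+\gamma_{\mathrm{e2e}}\bigr),
  &
  R_{\mathrm{FD}} &= \log_{2}\bigl(1+\gamma_{\mathrm{e2e}}\bigr),
\end{align}
```

where the half-duplex pre-log factor 1/2 accounts for the two transmission phases, and for full duplex the end-to-end SNR $\gamma_{\mathrm{e2e}}$ must include residual self-interference.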
We develop a novel theory of one-parameter families of multi-view varieties. These families are induced by quotient lattices over discrete valuation rings and generalise the notion of \textit{Mustafin varieties}. We study the geometry and the combinatorics of the limit of these families.
|
mathematics
|
In this paper, two new subspace minimization conjugate gradient methods based on $p$-regularization models are proposed, where a special scaled norm in the $p$-regularization model is analyzed. Different choices for the special scaled norm lead to different solutions to the $p$-regularized subproblem. Based on the analyses of the solutions in a two-dimensional subspace, we derive new directions satisfying the sufficient descent condition. With a modified nonmonotone line search, we establish the global convergence of the proposed methods under mild assumptions. $R$-linear convergence of the proposed methods is also analyzed. Numerical results show that, for the CUTEr library, the proposed methods are superior to four conjugate gradient methods, which were proposed by Hager and Zhang (SIAM J Optim 16(1):170-192, 2005), Dai and Kou (SIAM J Optim 23(1):296-320, 2013), Liu and Liu (J Optim Theory Appl 180(3):879-906, 2019) and Li et al. (Comput Appl Math 38(1): 2019), respectively.
|
mathematics
|
An issue has increasingly emerged in medical imaging surrounding the reconstruction of noisy images from raw measurement data. Where the forward problem is the generation of raw measurement data from a ground truth image, the inverse problem is the reconstruction of those images from the measurement data. In most cases with medical imaging, classical inverse transforms, such as the inverse Fourier transform for MRI, work well for recovering clean images from the measured data. Unfortunately, in the case of X-Ray CT, where undersampled data are very common, more than this is needed to recover faithful and usable images. In this paper, we explore the history of classical methods for solving the inverse problem for X-Ray CT, followed by an analysis of state-of-the-art methods that utilize supervised deep learning. Finally, we provide some possible avenues for future research.
|
electrical engineering and systems science
|
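The sparse-view difficulty described above is easy to reproduce with the classical filtered back-projection pipeline. Assuming scikit-image is installed, the snippet below compares a well-sampled and an undersampled reconstruction of the Shepp-Logan phantom; it is generic skimage usage, not code from the paper.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # 200 x 200 ground truth

full = np.linspace(0.0, 180.0, 180, endpoint=False)    # well-sampled angles
sparse = np.linspace(0.0, 180.0, 20, endpoint=False)   # undersampled angles

for theta in (full, sparse):
    sinogram = radon(image, theta=theta)     # forward problem
    recon = iradon(sinogram, theta=theta)    # filtered back-projection
    rmse = np.sqrt(np.mean((recon - image) ** 2))
    print(f"{len(theta):3d} projection angles -> RMSE {rmse:.3f}")
```

The degraded sparse-view reconstruction is the regime where learned methods are brought in.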
In this article we propose an optimal method referred to as SPlit for splitting a dataset into training and testing sets. SPlit is based on the method of Support Points (SP), which was initially developed for finding the optimal representative points of a continuous distribution. We adapt SP for subsampling from a dataset using a sequential nearest neighbor algorithm. We also extend SP to deal with categorical variables so that SPlit can be applied to both regression and classification problems. The implementation of SPlit on real datasets shows substantial improvement in the worst-case testing performance for several modeling methods compared to the commonly used random splitting procedure.
|
statistics
|
A circularly polarized leaky-wave antenna capable of frequency scanning is proposed in this paper. The main objective is to achieve high gain and polarization purity without the need for a complex feed network. The antenna consists of two independent modules: (1) an anisotropic modulated metasurface antenna (MoMetA) for generating vertically polarized radiation with an operational scanning range of 19 to 47 degrees in elevation; (2) a wide-band polarization converter with high angular stability, capable of converting vertical linear polarization into circular polarization (CP). The aperture field estimation (AFE) method is used to design the antenna with high accuracy in predicting the far-field pattern without the need for full-wave simulations. The gain of the antenna in the bandwidth of 16 to 21 GHz is better than 18 dBi. Simulation results show that the axial ratio in the maximum gain direction is lower than 3 dB over the entire operational frequency bandwidth, demonstrating the proper operation of the polarizer. In order to verify the simulation results, one prototype of the antenna was fabricated and its radiation patterns were measured in an anechoic chamber.
|
physics
|
Recent advances in 3D sensing have created unique challenges for computer vision. One fundamental challenge is finding a good representation for 3D sensor data. Most popular representations (such as PointNet) are proposed in the context of processing truly 3D data (e.g. points sampled from mesh models), ignoring the fact that 3D sensor data such as a LiDAR sweep is in fact 2.5D. We argue that representing 2.5D data as collections of (x, y, z) points fundamentally destroys hidden information about freespace. In this paper, we demonstrate such knowledge can be efficiently recovered through 3D raycasting and readily incorporated into batch-based gradient learning. We describe a simple approach to augmenting voxel-based networks with visibility: we add a voxelized visibility map as an additional input stream. In addition, we show that visibility can be combined with two crucial modifications common to state-of-the-art 3D detectors: synthetic data augmentation of virtual objects and temporal aggregation of LiDAR sweeps over multiple time frames. On the NuScenes 3D detection benchmark, we show that, by adding an additional stream for visibility input, we can significantly improve the overall detection accuracy of a state-of-the-art 3D detector.
|
computer science
|
We present a novel open-source Python software package, bfieldtools, for magneto-quasistatic calculations with current densities on surfaces of arbitrary shape. The core functionality of the software relies on a stream-function representation of surface-current density and its discretization on a triangle mesh. Although this stream-function technique is well-known in certain fields, to date the related software implementations have not been published or have been limited to specific applications. With bfieldtools, we aimed to produce a general, easy-to-use and well-documented open-source software. The software package is written purely in Python; instead of explicitly using lower-level languages, we address computational bottlenecks through extensive vectorization and use of the NumPy library. The package enables easy deployment, rapid code development and facilitates application of the software to practical problems. In this paper, we describe the software package and give an extensive demonstration of its use with an emphasis on one of its main applications -- coil design.
|
physics
|
Many statistical estimands can be expressed as continuous linear functionals of a conditional expectation function. This includes the average treatment effect under unconfoundedness and generalizations for continuous-valued and personalized treatments. In this paper, we discuss a general approach to estimating such quantities: we begin with a simple plug-in estimator based on an estimate of the conditional expectation function, and then correct the plug-in estimator by subtracting a minimax linear estimate of its error. We show that our method is semiparametrically efficient under weak conditions and observe promising performance on both real and simulated data.
|
statistics
|
In power systems, an asset class is a group of power equipment that has the same function and shares similar electrical or mechanical characteristics. Predicting failures for different asset classes is critical for electric utilities towards developing cost-effective asset management strategies. Previously, the physical-age-based Weibull distribution has been widely used for failure prediction. However, this mathematical model cannot incorporate asset condition data such as inspection or testing results. As a result, the prediction cannot be very specific and accurate for individual assets. To solve this important problem, this paper proposes a novel and comprehensive data-driven approach based on asset condition data: K-means clustering as an unsupervised learning method is used to analyze the inner structure of historical asset condition data and produce the asset conditional ages; logistic regression as a supervised learning method takes in both asset physical ages and conditional ages to classify and predict asset statuses. Furthermore, an index called the average aging rate is defined to quantify, track and estimate the relationship between asset physical age and conditional age. This approach was applied to an urban distribution system in West Canada to predict medium-voltage cable failures. Case studies and comparison with the standard Weibull distribution are provided. The proposed approach demonstrates superior performance and practicality for predicting asset class failures in power systems.
|
statistics
|
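The two-stage pipeline described above maps naturally onto standard scikit-learn components. The sketch below runs K-means on synthetic "condition" features to obtain a conditional age (here taken as the rank of the cluster by mean condition score, which is my own simplification) and then fits a logistic regression on physical and conditional age; all data are fabricated for illustration and nothing here comes from the utility dataset.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
physical_age = rng.uniform(0, 40, n)                 # years in service
condition = np.column_stack([                        # e.g. test/inspection scores
    physical_age + rng.normal(0, 5, n),
    0.5 * physical_age + rng.normal(0, 5, n),
])
failed = (physical_age + rng.normal(0, 8, n) > 30).astype(int)

# Stage 1: unsupervised clustering of condition data -> conditional age.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(condition)
rank_of_cluster = np.argsort(np.argsort(km.cluster_centers_.mean(axis=1)))
conditional_age = rank_of_cluster[km.labels_]

# Stage 2: supervised classification from physical age and conditional age.
X = np.column_stack([physical_age, conditional_age])
clf = LogisticRegression().fit(X, failed)
print("failure probability at physical age 35, conditional age 4:",
      round(float(clf.predict_proba([[35, 4]])[0, 1]), 2))
```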