| text (stringlengths 11–9.77k) | label (stringlengths 2–104) |
---|---|
The new precision era of jet quenching observables at both RHIC and the LHC calls for a more precise description of in-medium gluon emissions. The development of new theoretical tools and analytical calculations to tackle this challenge has been hampered by the inability to include the effects of multiple scatterings with the medium using a realistic model for the parton-medium interactions. In this paper, we show how the analytical expressions for the full in-medium spectrum, including the resummation of all multiple scatterings, can be written in a form whose numerical evaluation can be easily performed without the need for the usually employed harmonic or single-hard approximations. We present the transverse momentum and energy-dependent medium-induced gluon emission distributions for known realistic interaction models to illustrate how our framework can be applied beyond the limited kinematic regions of previous calculations.
|
high energy physics phenomenology
|
This paper investigates the distributed consensus tracking control problem for general linear multi-agent systems (MASs) with external disturbances and heterogeneous time-varying input and communication delays under a directed communication graph topology containing a spanning tree. First, for the case where no agent's state matrix has eigenvalues with positive real parts, a communication-delay-related observer, used to construct the controller, is designed for the followers to estimate the leader's state information. Second, by means of output regulation theory, the results are relaxed to the case where only the leader's state matrix is required to have eigenvalues with non-positive real parts and, under these relaxed conditions, the controller is redesigned. Both cases lead to a closed-loop error system whose stability is guaranteed via a Lyapunov-Krasovskii functional, with sufficient conditions in terms of input-delay-dependent linear matrix inequalities (LMIs). An extended LMI is proposed which, in conjunction with the other LMIs, yields a solution with a larger upper bound on the delays than would be feasible without it. It is highlighted that the integration of the communication-delay-related observer and the input-delay-dependent LMI to construct a fully distributed controller (which requires no global information) is scalable to arbitrarily large networks. The efficacy of the proposed scheme is demonstrated via illustrative numerical examples.
|
electrical engineering and systems science
|
In recent years there has been an increase in the number of scientific papers suggesting the use of conformal predictions in drug discovery. We argue that some versions of conformal predictions applied in binary settings suffer from pitfalls that are not obvious at first sight, and that it is important to inform the scientific community about them. In this paper we first introduce the general theory of conformal predictions and follow with an explanation of the versions currently dominant in drug discovery research. Finally, we provide cases for their critical assessment in binary classification settings.
|
statistics
|
A compact Riemannian manifold is associated with geometric data given by the eigenvalues of various Laplacian operators on the manifold and the triple overlap integrals of the corresponding eigenmodes. This geometric data must satisfy certain consistency conditions that follow from associativity and the completeness of eigenmodes. We show that it is possible to obtain nontrivial bounds on the geometric data of closed Einstein manifolds by using semidefinite programming to study these consistency conditions, in analogy to the conformal bootstrap bounds on conformal field theories. These bootstrap bounds translate to constraints on the tree-level masses and cubic couplings of Kaluza-Klein modes in theories with compact extra dimensions. We show that in some cases the bounds are saturated by known manifolds.
|
high energy physics theory
|
We argue for a foundational epistemic claim and a hypothesis about the production and uses of mathematical epidemiological models, exploring the consequences for our political and socio-economic lives. First, in order to make the best use of scientific models, we need to understand why models are not truly representational of our world, but are already pitched towards various uses. Second, we need to understand the implicit power relations in numbers and models in public policy, and, thus, the implications for good governance if numbers and models are used as the exclusive drivers of decision making.
|
computer science
|
We propose spatially nonlocal response functions which nearly coincide with the commonly used local response for electromagnetic fields and fluctuations on the mass shell, but differ significantly for off-shell fluctuating fields. It is shown that the fundamental Lifshitz theory using the suggested response functions comes into agreement with the measurement data for the Casimir force without neglecting the dissipation of free electrons. We demonstrate that the reflectances of on-shell electromagnetic waves calculated using the nonlocal and the commonly employed local responses differ only slightly. The Kramers-Kronig relations for nonlocal response functions possessing first- and second-order poles at zero frequency are derived, i.e., the proposed response satisfies the principle of causality. An application of these results to the resolution of the Casimir puzzle, which lies in the fact that the Lifshitz theory is experimentally consistent only when dissipation is discarded, is discussed.
|
quantum physics
|
We review the main features of the relativistic Snyder model and its generalizations. We discuss quantum field theory on this background using the standard formalism of noncommutative QFT and discuss the possibility of obtaining a finite theory.
|
high energy physics theory
|
Numerous theories extending beyond the standard model of particle physics predict the existence of bosons that could constitute the dark matter (DM) permeating the universe. In the standard halo model (SHM) of galactic dark matter the velocity distribution of the bosonic DM field defines a characteristic coherence time $\tau_c$. Until recently, laboratory experiments searching for bosonic DM fields have been in the regime where the measurement time $T$ significantly exceeds $\tau_c$, so null results have been interpreted as constraints on the coupling of bosonic DM to standard model particles with a bosonic DM field amplitude $\Phi_0$ fixed by the average local DM density. However, motivated by new theoretical developments, a number of recent searches probe the regime where $T\ll\tau_c$. Here we show that experiments operating in this regime do not sample the full distribution of bosonic DM field amplitudes and therefore it is incorrect to assume a fixed value of $\Phi_0$ when inferring constraints on the coupling strength of bosonic DM to standard model particles. Instead, in order to interpret laboratory measurements (even in the event of a discovery), it is necessary to account for the stochastic nature of such a virialized ultralight field (VULF). The constraints inferred from several previous null experiments searching for ultralight bosonic DM were overestimated by factors ranging from 3 to 10 depending on experimental details, model assumptions, and choice of inference framework.
|
astrophysics
|
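The stochastic-amplitude argument in the abstract above lends itself to a quick numerical illustration. The toy sketch below is an assumption (not the paper's analysis): it samples Rayleigh-distributed field amplitudes, as expected for a virialized ultralight field measured over times short compared to the coherence time, with the mean-square amplitude fixed to the value implied by the average local DM density.

```python
import numpy as np

# Illustrative toy (not from the paper): over T << tau_c, the bosonic field
# amplitude Phi_0 is a single draw from a Rayleigh distribution whose mean
# square is fixed by the average local DM density.
rng = np.random.default_rng(0)
phi_rms = 1.0  # amplitude implied by the average DM density (arbitrary units)
samples = rng.rayleigh(scale=phi_rms / np.sqrt(2), size=200_000)

# With this scale choice, <Phi_0^2> equals phi_rms^2 ...
print(np.mean(samples**2))         # ~1.0
# ... but a typical single realization sees an amplitude below phi_rms:
print(np.mean(samples < phi_rms))  # ~0.63
```

Roughly 63% of realizations lie below the RMS amplitude, which is the intuition behind the abstract's point that assuming a fixed $\Phi_0$ overestimates experimental sensitivity in the $T\ll\tau_c$ regime.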
The proper classification of the major eye movements, saccades, fixations, and smooth pursuits, remains essential to utilizing eye-tracking data. It is difficult to separate smooth pursuits from the other behavior types, particularly from fixations. To this end, we propose a new offline algorithm, I-VDT-HMM, for ternary classification of eye movements. The algorithm combines the simplicity of two foundational algorithms, I-VT and I-DT, as implemented in I-VDT, with the statistical predictive power of the Viterbi algorithm. We evaluate the fitness across a dataset of eight eye movement records at eight sampling rates gathered from previous research, with a comparison to the current state of the art using the proposed quantitative and qualitative behavioral scores. The proposed algorithm achieves promising results on clean high-sampling-frequency data and, with slight modifications, could show similar results with lower-quality data. However, the statistical aspect of the algorithm comes at the cost of classification time.
|
computer science
|
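To make the Viterbi component of the abstract above concrete, here is a minimal, self-contained sketch of Viterbi decoding over three eye-movement states. The state set, transition matrix, and velocity-based emission model are invented for illustration; the paper's tuned I-VDT-HMM parameters and preprocessing are not reproduced.

```python
import numpy as np

# Hedged sketch: generic Viterbi decoding over three eye-movement states.
# All probabilities and velocity statistics below are illustrative guesses.
STATES = ["fixation", "pursuit", "saccade"]
log_trans = np.log(np.array([
    [0.90, 0.08, 0.02],   # fixation -> ...
    [0.08, 0.90, 0.02],   # pursuit  -> ...
    [0.10, 0.10, 0.80],   # saccade  -> ...
]))

def log_emission(velocity):
    """Crude per-state log-likelihoods of an angular velocity (deg/s)."""
    means, stds = np.array([5.0, 25.0, 300.0]), np.array([5.0, 15.0, 150.0])
    return -0.5 * ((velocity - means) / stds) ** 2 - np.log(stds)

def viterbi(velocities):
    n, k = len(velocities), len(STATES)
    score = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    score[0] = np.log(1.0 / k) + log_emission(velocities[0])
    for t in range(1, n):
        cand = score[t - 1][:, None] + log_trans          # k x k candidates
        back[t] = np.argmax(cand, axis=0)                 # best predecessor
        score[t] = cand[back[t], np.arange(k)] + log_emission(velocities[t])
    path = [int(np.argmax(score[-1]))]                    # backtrack
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

labels = viterbi(np.array([4.0, 6.0, 350.0, 320.0, 22.0, 28.0]))
print(labels)
```

The temporal smoothing from the transition matrix is what distinguishes this from a pure threshold method such as I-VT: an isolated noisy velocity sample is less likely to flip the decoded state.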
Logical inference leads to one of the major interpretations of probability theory, the logical interpretation, in which probability is seen as a measure of the plausibility of a logical statement under incomplete information. In this paper, assuming that our usual inference procedure makes sense for every set of logical propositions represented in terms of commuting projectors on a given Hilbert space, we extend the logical interpretation to quantum mechanics and derive the Born rule. Our result implies that, from an epistemological viewpoint, we can regard quantum mechanics as a natural extension of classical probability theory.
|
quantum physics
|
Using intrinsic multiple Andreev reflections effect (IMARE) spectroscopy, we studied ballistic superconductor - normal metal - superconductor (SnS) contacts in the layered oxypnictide superconductor NdFeAsO$_{0.6}$H$_{0.36}$ with critical temperatures $T_c = 45-48$ K. We directly determined the magnitude of two bulk superconducting order parameters, the large gap $\Delta_L \approx 10.4$ meV and a possible small gap $\Delta_S \approx 1.8$ meV, and their temperature dependence. Additionally, a resonant coupling with a characteristic bosonic mode was observed; the boson energy at 4.2 K, $\varepsilon_0 = 10.5-11.0$ meV, is less than the indirect gap ($\Delta_L < \varepsilon_0 < \Delta_L +\Delta_S$).
|
condensed matter
|
We investigate chemo-photothermal effects of gold nanorods (GNRs) coated with mesoporous silica (mSiO2) and loaded with doxorubicin (DOX). When the mesoporous silica layer is loaded with doxorubicin, a significant change in the absorption spectra enables us to quantify the drug loading. We carry out photothermal experiments on saline and on the livers of mice treated with GNRs@mSiO2 and GNRs@mSiO2-DOX. We also inject the gold nanostructures into tumor-implanted mice and apply laser illumination to some of them. By measuring the weight and size of the tumors, the distinct treatment efficiencies of photothermal therapy and chemotherapy are determined. We experimentally confirm the accumulation of gold nanostructures in the liver.
|
physics
|
Markov decision processes (MDPs) in queues and networks have been an interesting topic in many practical areas since the 1960s. This paper provides a detailed overview of this topic and tracks the evolution of many basic results. It also summarizes several interesting directions for future research. We hope that this overview can shed light on MDPs in queues and networks, as well as on their extensive applications in various practical areas.
|
mathematics
|
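As a concrete, hedged example of the kind of model the survey above covers, the sketch below runs value iteration for a toy admission-control MDP on a uniformized single-server queue. The rates, costs, and reward are illustrative placeholders, not results from the survey.

```python
# Toy admission-control MDP (uniformized single-server queue), solved by
# value iteration. All parameters below are illustrative assumptions.
LAM, MU, GAMMA = 0.3, 0.5, 0.95   # arrival rate, service rate, discount
CAP, REWARD, HOLD = 10, 5.0, 1.0  # buffer size, admission reward, holding cost

def value_iteration(tol=1e-8):
    v = [0.0] * (CAP + 1)
    while True:
        new = []
        for n in range(CAP + 1):
            serve = v[max(n - 1, 0)]             # service completion (or idle)
            admit = REWARD + v[min(n + 1, CAP)]  # collect reward, queue grows
            stay = v[n]                          # reject the arrival
            arrival = max(admit, stay) if n < CAP else stay
            new.append(-HOLD * n + GAMMA * (LAM * arrival + MU * serve
                                            + (1.0 - LAM - MU) * v[n]))
        if max(abs(a - b) for a, b in zip(new, v)) < tol:
            return new
        v = new

v = value_iteration()
print([round(x, 2) for x in v])
```

An extra queued customer only adds future holding cost here (the admission reward was already collected), so the computed value function decreases in the queue length, a typical structural property in this literature.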
The Kramers-Kronig (KK) receiver, which is equivalent to heterodyne detection with a single photodetector, provides an efficient method to reconstruct the complex-valued optical field by means of intensity detection, given a minimum-phase signal. In this letter, the quantum noise of the KK receiver is derived analytically and compared with that of balanced heterodyne detection. We show that the KK receiver keeps the tangential fluctuation of the measured signal the same as that of a coherent state while concentrating the excess noise on the amplitude. Consequently, the radial fluctuation of the KK receiver is three times as large as the tangential one, which presents an asymmetric distribution and differs from that of balanced heterodyne detection. More interestingly, the projected in-phase and quadrature field operators of the retrieved signal after down-conversion have a quantum noise distribution that depends on the time-varying phase.
|
quantum physics
|
Coronavirus Disease 2019 (COVID-19) is an emerging respiratory disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) with rapid human-to-human transmission and a high case fatality rate, particularly in older patients. Due to the exponential growth of infections, many healthcare systems across the world are under pressure to care for increasing numbers of at-risk patients. Given the high number of infected patients, identifying patients with the highest mortality risk early is critical to enable effective intervention and optimal prioritisation of care. Here, we present the COVID-19 Early Warning System (CovEWS), a clinical risk scoring system for assessing COVID-19 related mortality risk. CovEWS provides continuous real-time risk scores for individual patients with clinically meaningful predictive performance up to 192 hours (8 days) in advance, and is automatically derived from patients' electronic health records (EHRs) using machine learning. We trained and evaluated CovEWS using de-identified data from a cohort of 66430 COVID-19 positive patients seen at over 69 healthcare institutions in the United States (US), Australia, Malaysia and India, amounting to an aggregated total of over 2863 years of patient observation time. On an external test cohort of 5005 patients, CovEWS predicts COVID-19 related mortality from $78.8\%$ ($95\%$ confidence interval [CI]: $76.0$, $84.7\%$) to $69.4\%$ ($95\%$ CI: $57.6, 75.2\%$) specificity at a sensitivity greater than $95\%$ between 1 and 192 hours, respectively, prior to observed mortality events - significantly outperforming existing generic and COVID-19 specific clinical risk scores. CovEWS could enable clinicians to intervene at an earlier stage, and may therefore help prevent or mitigate COVID-19 related mortality.
|
statistics
|
The reverse shock in the ejecta of core-collapse supernovae is potentially able to destroy newly formed dust material. In order to determine dust survival rates, we have performed a set of hydrodynamic simulations using the grid-based code AstroBEAR in order to model a shock wave interacting with clumpy supernova ejecta. Dust motions and destruction rates were computed using our newly developed external, post-processing code Paperboats, which includes gas drag, grain charging, sputtering and grain-grain collisions. We have determined dust destruction rates for the oxygen-rich supernova remnant Cassiopeia A as a function of initial grain sizes and clump gas density. We found that up to 30 % of the carbon dust mass is able to survive the passage of the reverse shock if the initial grain size distribution is narrow with radii around ~10 - 50 nm for high gas densities, or with radii around ~0.5 - 1.5 ${\mu}$m for low and medium gas densities. Silicate grains with initial radii around 10 - 30 nm show survival rates of up to 40 % for medium and high density contrasts, while silicate material with micron sized distributions is mostly destroyed. For both materials, the surviving dust mass is rearranged into a new size distribution that can be approximated by two components: a power-law distribution of small grains and a log-normal distribution of grains having the same size range as the initial distribution. Our results show that grain-grain collisions and sputtering are synergistic and that grain-grain collisions can play a crucial role in determining the surviving dust budget in supernova remnants.
|
astrophysics
|
We report the existence of a phase transition at high temperature in the 3D Kitaev candidate material, $\beta$-Li$_2$IrO$_3$. We show that the transition is bulk, intrinsic and orders a tiny magnetic moment with a spatially anisotropic saturation moment. We show that even though this transition is global, it does not freeze the local Ir moments, which order at much lower temperatures into an incommensurate state. Rather, the ordered moment has an orbital origin that is coupled to spin correlations, likely of a Kitaev origin. The separate ordering of spin-correlated orbital moments and of local Ir moments reveals a novel way in which magnetic frustration in Kitaev systems can lead to coexisting magnetic states.
|
condensed matter
|
We present weighted Sobolev spaces and prove a trace theorem for these spaces. As an application, we discuss non-zero boundary value problems for parabolic equations. The weighted parabolic Sobolev spaces we consider are designed, in particular, for the regularity theory of stochastic partial differential equations on bounded domains.
|
mathematics
|
We investigate simple extensions of the Mirror Twin Higgs model in which the twin color gauge symmetry and the discrete $Z_2$ mirror symmetry are spontaneously broken. This is accomplished in a minimal way by introducing a single new colored triplet, sextet, or octet scalar field and its twin along with a suitable scalar potential. This spontaneous $Z_2$ breaking allows for a phenomenologically viable alignment of the electroweak vacuum, and leads to dramatic differences between the visible and mirror sectors with regard to the residual gauge symmetries at low energies, color confinement scales, and particle spectra. In particular, several of our models feature a remnant $SU(2)$ or $SO(3)$ twin color gauge symmetry with a very low confinement scale in comparison to $\Lambda_{\rm QCD}$. Furthermore, couplings between the colored scalar and matter provide a new dynamical source of twin fermion masses, and due to the mirror symmetry, these lead to a variety of correlated visible sector effects that can be probed through precision measurements and collider searches.
|
high energy physics phenomenology
|
In this paper, we extend the unified gas-kinetic wave-particle (UGKWP) method to multi-species gas mixtures and multiscale plasma transport. The construction of the scheme is based on direct modeling on the mesh size and time step scales, and the local cell's Knudsen number determines the flow physics. The proposed scheme has multiscale and asymptotic complexity diminishing properties. The multiscale property means that, according to the cell's Knudsen number, the scheme can capture non-equilibrium flow physics in the rarefied flow regime and preserve the asymptotic Euler, Navier-Stokes, and magnetohydrodynamic limits in the continuum regime. The asymptotic complexity diminishing property means that the total degrees of freedom of the scheme automatically decrease as the cell's Knudsen number decreases. In the continuum regime, the scheme automatically degenerates from a kinetic solver to a hydrodynamic solver. In UGKWP, the evolution of the microscopic velocity distribution is coupled with the evolution of the macroscopic variables, and the particle evolution as well as the macroscopic fluxes are modeled from the time-accumulating solution, up to a time step scale, of the kinetic model equation. For plasma transport, the current scheme provides a smooth transition from the particle-in-cell (PIC) method in the rarefied regime to a magnetohydrodynamic (MHD) solver in the continuum regime. In the continuum limit, the cell size and time step of the UGKWP method are not restricted to be less than the mean free path and mean collision time. In the highly magnetized regime, the cell size and time step are not restricted by the Debye length and plasma cyclotron period. The multiscale and asymptotic complexity diminishing properties of the scheme are verified by numerical tests in multiple flow regimes.
|
physics
|
An important conjecture in knot theory relates the large-$N$, double scaling limit of the colored Jones polynomial $J_{K,N}(q)$ of a knot $K$ to the hyperbolic volume of the knot complement, $\text{Vol}(K)$. A less studied question is whether $\text{Vol}(K)$ can be recovered directly from the original Jones polynomial ($N = 2$). In this report we use a deep neural network to approximate $\text{Vol}(K)$ from the Jones polynomial. Our network is robust and correctly predicts the volume with $97.6\%$ accuracy when training on $10\%$ of the data. This points to the existence of a more direct connection between the hyperbolic volume and the Jones polynomial.
|
high energy physics theory
|
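The regression task in the abstract above, mapping polynomial coefficients to a scalar invariant, can be sketched with a deliberately tiny network. Everything below is a toy assumption: the data are synthetic stand-ins for Jones coefficients and volumes, and the paper's actual architecture, features, and dataset are not reproduced.

```python
import numpy as np

# Toy one-hidden-layer regressor trained by plain full-batch gradient
# descent, in the spirit of "coefficients in, scalar out". Synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8))                 # stand-in "Jones coefficients"
y = np.abs(X @ rng.normal(size=8)) + 2.0      # stand-in positive "volumes"

W1 = rng.normal(scale=0.3, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.3, size=16);      b2 = 0.0

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)          # ReLU hidden layer
    return h, h @ W2 + b2

losses, lr = [], 0.05
for _ in range(300):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))   # mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (h > 0)          # backprop through ReLU
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0] > losses[-1])  # training reduces the fit error: True
```

The point of the sketch is only that a smooth map from coefficients to a scalar is learnable by gradient descent; reaching the paper's reported $97.6\%$ accuracy obviously requires their real data and architecture.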
Bayesian network (BN) modelling is extensively used in systems epidemiology. Usually it consists of selecting and reporting the best-fitting structure conditional on the data. A major practical concern is avoiding overfitting, on account of the extreme flexibility and modelling richness of BNs. Many approaches have been proposed to control for overfitting, but they essentially all rely on very crude decisions that are too simplistic for complex systems learned from limited data. An alternative is to use Markov chain Monte Carlo model choice (MC3) over the network structures to learn the landscape of reasonably supported networks, and then to present all possible arcs with their MCMC support. This paper presents an R implementation, called mcmcabn, of a flexible structural MC3 that is accessible to non-specialists.
|
statistics
|
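The MC3 idea in the abstract above, sampling structures rather than reporting a single best fit, can be sketched in a few lines. This is a toy assumption throughout: three nodes, a made-up score favouring two arcs, and single-arc toggle moves; the real mcmcabn package's moves and scores (e.g. BDeu) are not reproduced.

```python
import math
import random

# Toy structural MCMC over 3-node DAGs: Metropolis with symmetric
# single-arc-toggle proposals and an invented log-score.
random.seed(0)
ARCS = [(a, b) for a in "ABC" for b in "ABC" if a != b]

def is_dag(arcs):
    # On 3 nodes, any cycle is a 2-cycle or one of the two directed 3-cycles.
    return not any((b, a) in arcs for (a, b) in arcs) and not any(
        {(x, y), (y, z), (z, x)} <= arcs
        for x, y, z in [("A", "B", "C"), ("A", "C", "B")])

def score(arcs):
    # Invented log-score: favour A->B and B->C, penalise every extra arc.
    return 2.0 * ((("A", "B") in arcs) + (("B", "C") in arcs)) - len(arcs)

counts = {arc: 0 for arc in ARCS}
current, n_samples = set(), 20_000
for _ in range(n_samples):
    arc = random.choice(ARCS)
    proposal = current ^ {arc}  # toggle one arc
    if is_dag(proposal) and random.random() < min(
            1.0, math.exp(score(proposal) - score(current))):
        current = proposal
    for a in current:
        counts[a] += 1

support = {arc: c / n_samples for arc, c in counts.items()}
print(support[("A", "B")], support[("C", "A")])
</imports>```

The output is exactly what the abstract advocates reporting: per-arc posterior support across the landscape of structures, instead of a single best-fitting network.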
Residential demand for space heating and hot water accounts for 31% of total European energy demand. Space heating is highly dependent on ambient conditions and thus susceptible to climate change. We adopt a techno-economic standpoint and assess the impact of climate change on decentralised heating demand and on the cost-optimal mix of heat pump and gas boiler technologies. Temperature data with high spatial resolution from nine climate models implementing three IPCC Representative Concentration Pathways are used to estimate climate-induced changes on the European heating demand side. The demand side is modelled by the proxy of heating degree days. The supply side is modelled by a screening-curve approach to the economics of heat generation. We find that space heating demand decreases by about 16%, 24% and 42% in the low, intermediate and extreme global warming scenarios, respectively. When considering historic weather data, we find that a heterogeneous mix of technologies is cost-optimal, depending on the heating load factor (number of full-load hours per year). Increasing ambient temperatures toward the end of the century improve the economic performance of heat pumps in all concentration pathways. The cost-optimal technologies broadly correspond to heat markets and policies in Europe, with some exceptions.
|
electrical engineering and systems science
|
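The two modelling proxies named in the abstract above, heating degree days on the demand side and a screening curve on the supply side, are simple enough to sketch. All numbers below (base temperature, costs) are illustrative placeholders, not the paper's data.

```python
# Toy sketch of the two proxies: heating degree days (demand) and a
# screening-curve comparison of two technologies (supply). Illustrative only.
BASE = 15.5  # deg C threshold below which heating is assumed to be needed

def heating_degree_days(daily_mean_temps):
    """Sum of degree shortfalls below the base temperature."""
    return sum(max(BASE - t, 0.0) for t in daily_mean_temps)

def cheaper_technology(full_load_hours):
    """Screening curve: annualised fixed cost + variable cost * hours."""
    gas_boiler = 60.0 + 0.09 * full_load_hours   # low fixed, high variable
    heat_pump = 180.0 + 0.05 * full_load_hours   # high fixed, low variable
    return "heat pump" if heat_pump < gas_boiler else "gas boiler"

print(heating_degree_days([2.0, 10.0, 18.0]))  # 13.5 + 5.5 + 0 = 19.0
print(cheaper_technology(1000))  # low load factor  -> gas boiler
print(cheaper_technology(4000))  # high load factor -> heat pump
```

With these placeholder costs the screening curves cross at 3000 full-load hours, which mirrors the abstract's finding that the cost-optimal technology depends on the heating load factor.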
We have developed a unique neutralizer device that uses an yttrium target surrounded by a platinum wall to magneto-optically trap radioactive atoms. In general, a radioactive nucleus produced in a nuclear reaction is extracted and transported in ion form; for magneto-optical trapping, thermal neutralization must occur on the surface of a metal with a small work function. The converter can produce a neutral atomic beam with small angular divergence and, owing to the recycling of atoms and ions, converts ions into neutral atoms with remarkable efficiency. We demonstrated the ion neutralization process using stable rubidium and confirmed $10^6$ neutralized atoms in the magneto-optical trap. Additionally, an experiment using francium demonstrated that neutralized francium atoms can be obtained.
|
physics
|
Piecewise Linear-Quadratic (PLQ) penalties are widely used to develop models in statistical inference, signal processing, and machine learning. Common examples of PLQ penalties include least squares, Huber, Vapnik, 1-norm, and their asymmetric generalizations. Properties of these estimators depend on the choice of penalty and its shape parameters, such as the degree of asymmetry for the quantile loss, and the transition point between the linear and quadratic pieces for the Huber function. In this paper, we develop a statistical framework that can help the modeler to automatically tune the shape parameters once the shape of the penalty has been chosen. The choice of the parameters is informed by the basic notion that each PLQ penalty should correspond to a true statistical density. The normalization constant inherent in this requirement helps to inform the optimization over shape parameters, giving a joint optimization problem over these as well as the primary parameters of interest. A second contribution is to consider optimization methods for these joint problems. We show that basic first-order methods can be immediately brought to bear, and design specialized extensions of interior point (IP) methods for PLQ problems that can quickly and efficiently solve the joint problem. Synthetic problems and larger-scale practical examples illustrate the potential of the approach.
|
statistics
|
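The normalization-constant idea in the abstract above, that each penalty should correspond to a true density, can be illustrated with the Huber penalty. The sketch below is an assumption-level toy (numerical quadrature, one shape parameter), not the paper's framework or its interior-point machinery.

```python
import math

# Toy sketch: the Huber penalty (a PLQ function) and the normalization
# constant that turns exp(-penalty) into a proper probability density.
def huber(r, kappa):
    """Quadratic for |r| <= kappa, linear beyond (slope kappa)."""
    return 0.5 * r * r if abs(r) <= kappa else kappa * (abs(r) - 0.5 * kappa)

def normalizer(kappa, half_width=60.0, n=200_000):
    """Trapezoid-rule integral of exp(-huber) over [-half_width, half_width]."""
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n + 1):
        x = -half_width + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-huber(x, kappa)) * h
    return total

# As kappa grows, the Huber density approaches a standard Gaussian:
print(abs(normalizer(10.0) - math.sqrt(2.0 * math.pi)) < 1e-3)  # True
```

Because the normalizer depends on the shape parameter $\kappa$, maximizing the resulting log-density over $\kappa$ jointly with the fit parameters is a well-posed problem, which is the abstract's central observation.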
We propose an alternate construction to compute the minimal entanglement wedge cross section (EWCS) for a single interval in a $(1+1)$ dimensional holographic conformal field theory at a finite temperature, dual to a bulk planar BTZ black hole geometry. Utilizing this construction we compute the holographic entanglement negativity for the above mixed state configuration from a recent conjecture in the literature. Our results exactly reproduce the corresponding replica technique results in the large central charge limit and resolves the issue of the missing thermal term for the holographic entanglement negativity computed earlier in the literature. In this context we compare the results for the holographic entanglement negativity utilizing the minimum EWCS and an alternate earlier proposal involving an algebraic sum of the lengths of the geodesics homologous to specific combinations of appropriate intervals. From our analysis we conclude that the two quantities are proportional in the context of the $AdS_3/CFT_2$ scenario and this possibly extends to the higher dimensional $AdS_{d+1}/CFT_d$ framework.
|
high energy physics theory
|
Data augmentation by incorporating cheap unlabeled data from multiple domains is a powerful way to improve prediction especially when there is limited labeled data. In this work, we investigate how adversarial robustness can be enhanced by leveraging out-of-domain unlabeled data. We demonstrate that for broad classes of distributions and classifiers, there exists a sample complexity gap between standard and robust classification. We quantify to what degree this gap can be bridged via leveraging unlabeled samples from a shifted domain by providing both upper and lower bounds. Moreover, we show settings where we achieve better adversarial robustness when the unlabeled data come from a shifted domain rather than the same domain as the labeled data. We also investigate how to leverage out-of-domain data when some structural information, such as sparsity, is shared between labeled and unlabeled domains. Experimentally, we augment two object recognition datasets (CIFAR-10 and SVHN) with easy to obtain and unlabeled out-of-domain data and demonstrate substantial improvement in the model's robustness against $\ell_\infty$ adversarial attacks on the original domain.
|
computer science
|
We present the analysis of nine radio sources belonging to the Third Cambridge Revised catalog (3CR), observed with $Chandra$ during Cycle 20, in the redshift range between 1.5 and 2.5. This study completes the 3CR $Chandra$ Snapshot Survey, thus guaranteeing X-ray coverage of all 3CR sources identified to date. The sample comprises two compact steep spectrum sources, four radio galaxies and three quasars. We detected X-ray emission from all nuclei, with the only exceptions of 3C 326.1 and 3C 454.1, and from the radio lobes in 6 out of 9 sources at a confidence level larger than $\sim$5$\sigma$. We measured X-ray fluxes and luminosities for all nuclei and lobes in the soft (0.5 - 1 keV), medium (1 - 2 keV) and hard (2 - 7 keV) X-ray bands. Since the discovered extended X-ray emission is spatially coincident with the radio structure in all cases, its origin could be Inverse Compton scattering of the Cosmic Microwave Background (IC/CMB) occurring in the radio lobes.
|
astrophysics
|
$BMO$ commutators of some sublinear operators, such as singular integral operators and the Hardy-Littlewood maximal operator, are well known to be bounded from $L_w^p$ to itself for all $1<p<\infty$ and all $w\in A_p$ (the classical Muckenhoupt class), but for these commutators it has been an open question how to extend the estimates to all $0<p<\infty$. In addition, there are many classical operators, especially some maximal operators such as the Carleson maximal operator, for which the $L^p$-boundedness holds for $1<p<\infty$, but for the $BMO$ commutators of these operators it has also been an open question whether there are similar unweighted or weighted estimates for each $0<p<\infty$. In this paper, introducing a weight class $A_p^+$ with $0<p\leq \infty$, which is an extension of $A_p$ when $1\leq p\leq \infty$ and a refinement of $A_1$ when $0< p\leq 1$, we obtain, for $BMO$ commutators of some sublinear operators, weighted estimates from some subspaces of $L^p_w$ to $L^p_w$ (and to themselves) for all $0<p<\infty$ and $w\in A_q^+$ with $0<q<p$. These results are applied to the commutators in the above two questions. In particular, they imply that the $BMO$ commutators of many classical operators, including those mentioned above, are bounded from $H^p_w$ to $L^p_w$ and from $H^p_w$ to itself for all $0<p\leq 1$ and $w\in A_q^+$ with $0<q<p$.
|
mathematics
|
The entropy of $1/16$-th BPS AdS$_5$ black holes can be microscopically accounted for by the superconformal index of the $\mathcal{N}=4$ super-Yang-Mills theory. One way to compute this is through a Cardy-like limit of a formula for the index obtained in [1] using the $S$-transformation of the elliptic $\Gamma$ function. In this paper, we derive more general $SL(3,\mathbb{Z})$ modular properties of the elliptic $\Gamma$ function. We then use these properties to obtain a three integer parameter family of generalized Cardy-like limits of the $\mathcal{N}=4$ superconformal index. From these limits, we obtain entropy formulae that have a similar form as that of the original AdS$_5$ black hole, up to an overall rescaling of the entropy and a shift in the charges. We interpret this both on the field theory and the gravitational side. Finally, we comment on how our work suggests a generalization of the Farey tail story to four dimensions.
|
high energy physics theory
|
Radio fingerprinting provides a reliable and energy-efficient IoT authentication strategy. By mapping inputs onto a very large feature space, deep learning algorithms can be trained to fingerprint large populations of devices operating under any wireless standard. One of the most crucial challenges in radio fingerprinting is to counteract the action of the wireless channel, which decreases fingerprinting accuracy significantly by disrupting hardware impairments. On the other hand, due to their sheer size, deep learning algorithms are hardly re-trainable in real time. Another aspect yet to be investigated is whether an adversary can successfully impersonate another device's fingerprint. To address these key issues, this paper proposes DeepRadioID, a system to optimize the accuracy of deep-learning-based radio fingerprinting algorithms without retraining the underlying deep learning model. We extensively evaluate DeepRadioID on an experimental testbed of 20 nominally identical software-defined radios, as well as on two datasets made up of 500 ADS-B devices and 500 WiFi devices provided by the DARPA RFMLS program. Experimental results show that DeepRadioID (i) increases fingerprinting accuracy by about 35%, 50% and 58% in the three scenarios considered; (ii) decreases an adversary's accuracy by about 54% when trying to imitate other devices' fingerprints by using their filters; and (iii) achieves a 27% improvement over the state of the art on a 100-device dataset.
|
computer science
|
Understanding program code is a complicated endeavor. As such, myriad different factors can influence the outcome. Investigations of program comprehension, and in particular those using controlled experiments, have to take these factors into account. In order to promote the development and use of sound experimental methodology, we discuss potential problems with regard to the experimental subjects, the code they work on, the tasks they are asked to perform, and the metrics for their performance.
|
computer science
|
We investigate a new class of LRS Bianchi type-II cosmological models by revisiting the paper of Mishra {\it et al} (2013), considering a new time-dependent deceleration parameter (DP) in string cosmology for the modified gravity theory suggested by S$\acute{a}$ez \& Ballester (1986). We consider the energy-momentum tensor proposed by Letelier (1983) for bulk viscous and perfect fluid under some assumptions. To make our models consistent with recent astronomical observations, we use the scale factor (Sharma {\it et al} 2018; Garg {\it et al} 2019) $ a(t)=\exp{[\frac{1}{\beta}\sqrt{2 \beta t + k}]}$, where $\beta $ and $k$ are positive constants, which provides a time-varying DP. By using the recent constraints ($H_{0}=73.8$ and $q_{0} = -0.54$) from SN Ia data in combination with BAO and CMB observations (Giostri {\it et al}, arXiv:1203.3213v2 [astro-ph.CO]), we obtain $\beta = 0.0062$ and $k = 0.000016$. For these constraints, we establish a new class of cosmological transit models in which the expansion passes from an early decelerated phase to the current accelerated phase. We also study some physical, kinematic and geometric properties of the models and find them consistent with observations and well-established theoretical results. Finally, we compare our present results with those of Mishra {\it et al} (2013) and find that the results in this paper are better, stable under perturbation and in good agreement with cosmological observations.
|
physics
|
In this paper we study a finite-depth layer of viscous incompressible fluid in dimension $n \ge 2$, modeled by the Navier-Stokes equations. The fluid is assumed to be bounded below by a flat rigid surface and above by a free, moving interface. A uniform gravitational field acts perpendicularly to the flat surface, and we consider the cases with and without surface tension acting on the free interface. In addition to these gravity-capillary effects, we allow for a second force field in the bulk and an external stress tensor on the free interface, both of which are posited to be in traveling wave form, i.e. time-independent when viewed in a coordinate system moving at a constant velocity parallel to the rigid lower boundary. We prove that, with surface tension in dimension $n \ge 2$ and without surface tension in dimension $n=2$, for every nontrivial traveling velocity there exists a nonempty open set of force and stress data that give rise to traveling wave solutions. While the existence of inviscid traveling waves is well known, to the best of our knowledge this is the first construction of viscous traveling wave solutions. Our proof involves a number of novel analytic ingredients, including: the study of an over-determined Stokes problem and its under-determined adjoint, a delicate asymptotic development of the symbol for a normal-stress to normal-Dirichlet map defined via the Stokes operator, a new scale of specialized anisotropic Sobolev spaces, and the study of a pseudodifferential operator that synthesizes the various operators acting on the free surface functions.
|
mathematics
|
In recent work we computed the path integral of three-dimensional gravity with negative cosmological constant on spaces which are topologically a torus times an interval. Here we employ a modular bootstrap to show that the amplitude is completely fixed by consistency conditions and a few basic inputs from gravity. Notably, this bootstrap applies to an ensemble of CFTs rather than to a single instance. We also compare the 3d gravity result with the Narain ensemble. The former is well-approximated at low temperature by a random matrix theory ansatz, and we conjecture that this behavior is generic for an ensemble of CFTs at large central charge with a chaotic spectrum of heavy operators.
|
high energy physics theory
|
A recently discovered high-$T_c$ cuprate superconductor, Ba$_2$CuO$_{4-\delta}$, exhibits exceptional Jahn-Teller distortion, wherein the CuO$_6$ octahedra are compressed along the c axis. As a consequence, the O vacancies prefer to reside in the CuO$_2$ plane, but the exact structure is not known. By combining first-principles total energy calculations with the automated structure inversion method, we map out the effective cluster interactions of O vacancies. Around $\delta=0.8$, where 73 K superconductivity was observed experimentally, we predict that the ordered O vacancies slice the CuO$_2$ plane into not only 1D chains but also two-leg ladders. A Monte Carlo simulation based on the effective cluster interaction model shows that such an ordering pattern is stable up to ~900 K. Our results put forth a concrete structural basis on which to discuss the underlying superconducting mechanism.
|
condensed matter
|
The motion of the floating bridge of the banjo, in conjunction with the break angle of the strings over that bridge, produces string tension modulation that is first order in the amplitude of the string motion. This note refines a previous suggestion regarding the impact on the frequencies of the strings' and bridge's motion. For a given mode frequency pair of string and bridge, the resulting tension modulation produces a new, additional motion characterized by the sum and difference of the original ones. Strictly speaking, this corresponds to canonical "frequency modulation" only in the limit of modulation slow compared to the string frequency. The more general result is precisely an example of what is known as "parametric oscillation," first analyzed by Rayleigh. The qualitative impact of tension modulation on banjo timbre remains as suggested previously. It is only the precise math and physics that warrants this correction.
|
physics
|
This work explores how self-supervised learning can be universally used to discover speaker-specific features towards enabling personalized speech enhancement models. We specifically address the few-shot learning scenario where access to clean recordings of a test-time speaker is limited to a few seconds, but noisy recordings of the speaker are abundant. We develop a simple contrastive learning procedure which treats the abundant noisy data as makeshift training targets through pairwise noise injection: the model is pretrained to maximize agreement between pairs of differently deformed identical utterances and to minimize agreement between pairs of similarly deformed nonidentical utterances. Our experiments compare the proposed pretraining approach with two baseline alternatives: speaker-agnostic fully-supervised pretraining, and speaker-specific self-supervised pretraining without contrastive loss terms. Of all three approaches, the proposed method using contrastive mixtures is found to be most robust to model compression (using 85% fewer parameters) and reduced clean speech (requiring only 3 seconds).
|
electrical engineering and systems science
|
We compute the compactly supported Euler characteristic of the space of degree $d$ irreducible polynomials in $n$ variables with real coefficients and show that the values are given by the digits in the so-called balanced binary expansion of the number of variables $n$.
|
mathematics
|
Space-division multiplexing optical fibers can provide not only parallel channel transmission but also parallel distributed signal processing, a feature that is particularly attractive in Microwave Photonics applications. We present here few-mode fiber links with tailored modal propagation and dispersion properties that operate as sampled true time delay lines for radiofrequency signals.
|
electrical engineering and systems science
|
In this paper, we study the capped vertex functions associated to certain zero-dimensional type-$A$ Nakajima quiver varieties. The insertion of descendants into the vertex functions can be expressed by the Macdonald operators, which leads to explicit combinatorial formulas for the capped vertex functions. We determine the monodromy of the vertex functions and show that it coincides with the elliptic R-matrix of the symplectic dual variety. We apply our results to give the vertex functions and the characters of the tautological bundles on the quiver varieties arising from arbitrary stability conditions.
|
mathematics
|
We compute the three-loop master integrals required for the calculation of the triple-real contribution to the N$^3$LO quark beam function due to the splitting of a quark into a virtual quark and three collinear gluons, $q \to q^*+ggg$. This provides an important ingredient for the calculation of the leading-color contribution to the quark beam function at N$^3$LO.
|
high energy physics phenomenology
|
Accurate characterization of microcalcifications (MCs) in 2D full-field digital screening mammography is a necessary step towards reducing the diagnostic uncertainty associated with the callback of women with suspicious MCs. Quantitative analysis of MCs has the potential to better identify MCs that have a higher likelihood of corresponding to invasive cancer. However, automated identification and segmentation of MCs remains a challenging task with high false positive rates. We present Hessian Difference of Gaussians Regression (HDoGReg), a two-stage multi-scale approach to MC segmentation. Candidate high optical density objects are first delineated using blob detection and Hessian analysis. A regression convolutional network, trained to output a function with higher response near MCs, selects the objects which constitute actual MCs. The method is trained and validated on 435 mammograms from two separate datasets. HDoGReg achieved a mean intersection over union of 0.670$\pm$0.121 per image, an intersection over union per MC object of 0.607$\pm$0.250, and a true positive rate of 0.744 at 0.4 false positive detections per $cm^2$. HDoGReg performs better than state-of-the-art MC segmentation and detection methods.
|
electrical engineering and systems science
|
We study decays of a spin-$1$ boson within the formalism of the Hagen-Hurley equations. Such a particle can decay into two spin-$\frac{1}{2}$ particles, a Weyl neutrino and a massive fermion, whose spins can couple to $S=0$ or $S=1$. Since spin-$0$ and spin-$1$ bosons can be described by the Dirac equation within the same representation of the $\gamma ^{\mu }$ matrices, mixing of $S=0$ and $S=1$ states is possible. We argue that the Hagen-Hurley equations describe the $W$ boson with spin $S$ in the $0\oplus 1$ space, and analyse mixed beta decays as well as top quark decays from this perspective.
|
physics
|
We report a tunable optical filter based on the phase change material $Ge_{2}Sb_{2}Te_{5}$ embedded in a silicon microring resonator. The high thermo-optic coefficient of $Ge_{2}Sb_{2}Te_{5}$ in the amorphous phase enables tuning of the resonance wavelength over a broad range with a very small active volume. Low-loss indium-tin-oxide electrodes are employed to induce Joule heating in the $Ge_{2}Sb_{2}Te_{5}$-Si active waveguide region. The electrically induced heating in the active region alters the effective refractive index of the hybrid microring, resulting in a wavelength tuning of 1.04 nm for an applied voltage of only 3 V. The device exhibits high extinction ratios in the range of 20-41 dB and a compact active footprint of only 0.96 $\mu m^{2}$, making it suitable for large-scale reconfigurable integrated photonic circuits.
|
physics
|
In a universe with quintessence isocurvature, or perturbations in dark energy that are independent from the usual curvature perturbations, structure formation is changed qualitatively. The existence of two independent fields, curvature and isocurvature, causes the growth rate of matter perturbations to depend on their initial conditions. The quintessence perturbations cause their growth to depend on scale. We perform the first separate universe simulations for this cosmology. We demonstrate that the power spectrum response and the halo bias depend on scale and initial conditions and that the presence of the isocurvature mode changes the mapping from these quantities to the halo auto- and cross-power spectra, and the squeezed-limit bispectrum. We compare the bias to several models, finding reasonable agreement with both a power-spectrum-response model with one free parameter and a model that fits two independent bias parameters for curvature and isocurvature sourced fluctuations. We also verify that simulation responses to pure isocurvature and pure curvature modes can be linearly combined to reproduce responses with different ratios of isocurvature and curvature. This allows our results to be used to predict the halo power spectrum and stochasticity with arbitrary large-scale curvature and isocurvature power spectra. In an appendix, we study the generation of quintessence isocurvature during inflation and show that a modified kinetic term is typically required to produce observable isocurvature modes in a field with $w_Q\approx -1$.
|
astrophysics
|
We derive an expression for conserved charges in Lovelock AdS gravity for solutions having $k$-fold degenerate vacua, making manifest a link between the degeneracy of a given vacuum and the nonlinearity of the energy formula. We show for a black hole solution to the field equations on a branch of multiplicity $k$ that its mass comes from an expression that contains the product of $k$ Weyl tensors. We prove that all divergent contributions of the type (Weyl)$^q$, with $1\le q<k$, are suppressed. Our conserved charge definition is a natural generalization of the Conformal Mass by Ashtekar, Magnon and Das to the cases when $k>1$. Our results provide insight on the holographic properties of degenerate Lovelock theories.
|
high energy physics theory
|
We derive a necessary and sufficient condition for Poincar\'e Lie superalgebras in any dimension and signature to be isomorphic. This reduces the classification problem, up to certain discrete operations, to classifying the orbits of the Schur group on the vector space of superbrackets. We then classify four-dimensional ${\cal N}=2$ supersymmetry algebras, which are found to be unique in Euclidean and in neutral signature, while in Lorentz signature there exist two algebras with R-symmetry groups $\mathrm{U}(2)$ and $\mathrm{U}(1,1)$, respectively. By dimensional reduction we construct two off shell vector multiplet representations for each possible signature, and find that the corresponding Lagrangians always have a different relative sign between the scalar and the Maxwell term. In Lorentzian signature this is related to the existence of two non-isomorphic algebras, while in Euclidean and neutral signature the two theories are related by a local field redefinition which implements an isomorphism between the underlying supersymmetry algebras.
|
high energy physics theory
|
We obtain approximate values of the quantized momentum eigenvalues, $P_n$, together with the space-like coherent eigenvectors, for the space-like counterpart of the Schr\"odinger equation, the Feinberg-Horodecki equation, with a screened Kratzer-Hellmann potential constructed from the temporal counterpart of the spatial form of this potential. In addition, we obtain exact momentum eigenvalues and eigenstates by solving the Feinberg-Horodecki equation with the Kratzer potential. The present work is illustrated with three special cases of the screened Kratzer-Hellmann potential: the time-dependent screened Kratzer potential, the time-dependent Hellmann potential, and the time-dependent screened Coulomb potential.
|
quantum physics
|
Newly-introduced deep learning architectures, namely BERT, XLNet, RoBERTa and ALBERT, have proved to be robust on several NLP tasks. However, the datasets these architectures are trained on are fixed in size and generalizability. To alleviate this issue, we apply one of the most inexpensive solutions to update these datasets. We call this approach BET, by which we analyze backtranslation data augmentation on transformer-based architectures. Using the Google Translate API with ten intermediary languages from ten different language families, we externally evaluate the results in the context of automatic paraphrase identification in a transformer-based framework. Our findings suggest that BET improves paraphrase identification performance on the Microsoft Research Paraphrase Corpus (MRPC) by more than 3% in both accuracy and F1 score. We also analyze the augmentation in the low-data regime with downsampled versions of MRPC, the Twitter Paraphrase Corpus (TPC) and Quora Question Pairs. In many low-data cases, we observe a switch from a failing model on the test set to reasonable performance. The results demonstrate that BET is a highly promising data augmentation technique: to push the current state of the art on existing datasets and to bootstrap the use of deep learning architectures in the low-data regime of a hundred samples.
|
computer science
|
The task of motion transfer between a source dancer and a target person is a special case of the pose transfer problem, in which the target person changes their pose in accordance with the motions of the dancer. In this work, we propose a novel method that can reanimate a single image by arbitrary video sequences, unseen during training. The method combines three networks: (i) a segmentation-mapping network, (ii) a realistic frame-rendering network, and (iii) a face refinement network. By separating this task into three stages, we are able to attain a novel sequence of realistic frames, capturing natural motion and appearance. Our method obtains significantly better visual quality than previous methods and is able to animate diverse body types and appearances, which are captured in challenging poses, as shown in the experiments and supplementary video.
|
computer science
|
We extend the tight distance-dependent estimator proposed by Hollman et al. [J. Chem. Phys. 142, 154106 (2015)] for the three-center Coulomb integrals over Gaussian atomic orbitals to handle the two-center case. We also propose minor modifications of the original three-center estimator for the case of contracted ket Gaussians and concentric bra Gaussians.
|
physics
|
We analyse what happens when the Horndeski Lagrangian is varied within the Palatini approach by considering the metric and connection as independent variables. Assuming the connection to be torsionless, there can be infinitely many metric-affine versions $L_{\rm P}$ of the original Lagrangian which differ from each other by terms proportional to the non-metricity tensor. After integrating out the connection, each $L_{\rm P}$ defines a metric theory, which can either belong to the original Horndeski family, or it can be of a more general DHOST type, or it shows the Ostrogradsky ghost. We analyse in detail the subclass of the theory for which the equations are linear in the connection and find that its metric-affine version is ghost-free. We present a detailed classifications of homogeneous and isotropic cosmologies in these theories. Taking into consideration other pieces of the Horndeski Lagrangian which are non-linear in the connection leads to more complex metric-affine theories which generically show the ghost. In some special cases the ghost can be removed by carefully adjusting the non-metricity contribution, but it is unclear if this is always possible. Therefore, the metric-affine generalisations of the Horndeski theory can be ghost-free, but not all of them are ghost-free, neither are they the only metric-affine theories for a gravity-coupled scalar field which can be ghost-free.
|
high energy physics theory
|
In this work, a method for directly measuring target velocity in three dimensions using a dual axis correlation interferometric radar is presented. Recent advances have shown that the measurement of the angular velocity of a target is possible by correlating, or mixing, the signals measured at spatially diverse aperture locations. By utilizing multiple orthogonal baselines, and using conventional Doppler velocity methods to obtain radial velocity, a full three-dimensional velocity vector can be obtained using only three receive antennas and a single transmitter, without the need for tracking. A 40.5 GHz dual axis interferometric radar with a $7\lambda$ antenna baseline is presented along with measurements of a target moving fully tangentially to the radar, and of a target with a component of both radial and tangential velocity. These experiments obtain total velocity root-mean-square errors (RMSEs) of $\text{15.404 mm}\cdot\text{s}^{-1}$ for a target moving purely tangentially to the array, and $\text{39.22 mm}\cdot\text{s}^{-1}$ for a target moving with a radial component up to 30{\deg} off of tangent to the array, and estimated trajectory angle RMSEs of 2.33{\deg} and 2.35{\deg} for each experiment respectively.
|
electrical engineering and systems science
|
The modification of the hard core of jets in a dense QCD medium is studied. In particular, we consider partons which possess a virtuality somewhat larger than the multiple scattering scale of the medium ($\hat{q} \tau$, where $\hat{q}$ is the transverse broadening jet transport coefficient, and $\tau$ is the formation length of a particular emission). We delineate the region of parameter space where the higher-twist approach is applicable, and derive the in-medium DGLAP evolution equation. We study a region in parameter space where this is the dominant mechanism of energy loss. We argue that such a regime is pervasive in most cases of jets in $A$-$A$ and future $e$-$A$ collisions, and controls the modification of the hard core of jets and the leading single particle spectrum at high transverse momentum ($p_\mathrm{T}$).
|
high energy physics phenomenology
|
We present high-statistics, precision measurements by AMS of the detailed time and rigidity dependence of the primary cosmic-ray electron, positron, proton and helium fluxes over 79 Bartels rotations from May 2011 to May 2017 in the energy range from 1 to 50 GeV. For the first time, the charge-sign dependent modulation during solar maximum has been investigated in detail using leptons alone. We report the observation of short-term structures on the timescale of months, coincident in all the fluxes. These structures are not visible in the positron-to-electron flux ratio. The precision measurements across the solar polarity reversal show that the ratio exhibits a smooth transition over ~800 days from one value to another.
|
astrophysics
|
This work is a galoisian study of the spectral problem $L\Psi=\lambda\Psi$, for algebro-geometric second order differential operators $L$, with coefficients in a differential field whose field of constants $C$ is algebraically closed and of characteristic zero. Our approach regards the spectral parameter $\lambda$ as an algebraic variable over $C$, forcing the consideration of a new field of coefficients for $L-\lambda$, whose field of constants is the field $C(\Gamma)$ of the spectral curve $\Gamma$. Since $C(\Gamma)$ is no longer algebraically closed, the need arises for a new algebraic structure, generated by the solutions of the spectral problem over $\Gamma$, called the "Spectral Picard-Vessiot field" of $L-\lambda$. An existence theorem is proved using differential algebra, allowing us to recover classical Picard-Vessiot theory for each $ \lambda = \lambda_0 $. For rational spectral curves, the appropriate algebraic setting is established to solve $L\Psi=\lambda\Psi$ analytically and to use symbolic integration. We illustrate our results for Rosen-Morse solitons.
|
mathematics
|
If the $X(3872)$ is a weakly bound charm-meson molecule, it can be produced in $e^+ e^-$ annihilation by the creation of $D^{*0} \bar D^{*0}$ from a virtual photon followed by the rescattering of the charm-meson pair into $X$ and a photon. A triangle singularity produces a narrow peak in the cross section for $e^+ e^- \to X \gamma$ about 2.2 MeV above the $D^{*0} \bar{D}^{*0}$ threshold. We predict the normalized cross section in the region near the peak. The peak from the triangle singularity may be observable by the BESIII detector.
|
high energy physics phenomenology
|
In this paper, we calculate the total decay widths for the $W^+$-boson decays, $W^+ \to B_c+b+\bar{s}+X$ and $W^+ \to B^*_c+b+\bar{s}+X$, up to next-to-leading order (NLO) accuracy within the framework of nonrelativistic QCD theory. Both the fixed-order and the fragmentation approaches are adopted for the calculation. Differential decay widths $d\Gamma/dz$ and $d\Gamma/ds_1$ are also given. We find that the NLO corrections are significant in these two $W^+$ decay channels. Our numerical results show that at the LHC, about $7.03\times 10^4$ $B_c$-meson events and $5.10\times 10^4$ $B^*_c$-meson events will be produced via $W^+$-boson decays per year of operation.
|
high energy physics phenomenology
|
Healing process assessment of the Achilles tendon is usually a complex procedure that relies on a combination of biomechanical and medical imaging tests. As a result, diagnostics remains a tedious and long-lasting task. Recently, a novel method for the automatic assessment of tendon healing based on Magnetic Resonance Imaging and deep learning was introduced. The method assesses six parameters related to the treatment progress utilizing a modified pre-trained network, PCA-reduced space, and linear regression. In this paper, we propose to improve this approach by incorporating hand-crafted features. We first perform a feature selection in order to obtain optimal sets of mixed hand-crafted and deep learning predictors. With the use of approx. 20,000 MRI slices, we then train a meta-regression algorithm that performs the tendon healing assessment. Finally, we evaluate the method against scores given by an experienced radiologist. In comparison with the previous baseline method, our approach significantly improves correlation in all of the six parameters assessed. Furthermore, our method uses only one MRI protocol and saves up to 60\% of the time needed for data acquisition.
|
electrical engineering and systems science
|
Nonreciprocal devices effectively mimic the breaking of time-reversal symmetry for the subspace of dynamical variables that they couple, and can be used to create chiral information processing networks. We study the systematic inclusion of ideal gyrators and circulators into Lagrangian and Hamiltonian descriptions of lumped-element electrical networks. The proposed theory is of wide applicability to general nonreciprocal networks in the quantum regime. We apply it to pedagogical and pathological examples of circuits containing Josephson junctions and ideal nonreciprocal elements described by admittance matrices, and compare it with the more involved treatment of circuits based on nonreciprocal devices characterized by impedance or scattering matrices. Finally, we discuss the dual quantization of circuits containing phase-slip junctions and nonreciprocal devices.
|
quantum physics
|
The advance of modern sensor technologies enables the collection of multi-stream longitudinal data, where multiple signals from different units are collected in real-time. In this article, we present a non-parametric approach to predict the evolution of multi-stream longitudinal data for an in-service unit by borrowing strength from other historical units. Our approach first decomposes each stream into a linear combination of eigenfunctions and their corresponding functional principal component (FPC) scores. A Gaussian process prior for the FPC scores is then established based on a functional semi-metric that measures similarities between streams of historical units and the in-service unit. Finally, an empirical Bayesian updating strategy is derived to update the established prior using real-time stream data obtained from the in-service unit. Experiments on synthetic and real-world data show that the proposed framework outperforms state-of-the-art approaches and can effectively account for heterogeneity as well as achieve high predictive accuracy.
|
statistics
|
We study the predictions for the p/He ratio in galactic cosmic rays according to the force-field approximation. The dependence of the time variation of p/He on the local interstellar spectrum (LIS) shape and on the mass-to-charge ratio, A/Z, is analyzed in detail. We find that, depending on the rigidity range and the sign of the spectral index of the p/He LIS ratio, the p/He time variation can be correlated or anti-correlated with the phase of the solar cycle. We show that the A/Z dependence is the most probable cause for the p/He decrease recently observed by AMS-02 after 2015 between 2 and 3 GV.
|
astrophysics
|
The concept of a simplicial complex from Algebraic Topology is applied to understand and model the flow of genetic information, processes and organisms between areas of unimpaired habitat, in order to design a network of wildlife corridors for tigers (Panthera tigris tigris) in the Central India Eastern Ghats landscape complex. The work extends and improves on a previous study that used the minimum spanning tree of the weighted graph of the focal landscape, which suggested a viable corridor network for the tiger population of the Protected Areas (PAs) in the landscape complex. Centralities of the network identify the habitat patches and the critical parameters that are central to tiger movement across the network. We extend the concept of vertex centrality to that of simplicial centrality, yielding inter-vertex adjacency and connection. As a result, ecological information propagates expeditiously, even on a local scale, in these networks, representing a well-integrated and self-explanatory community structure. A simplicial complex network based on the network centralities calculated in the landscape matrix yields a tiger corridor network in the landscape complex that is proposed to correspond better to reality than the previous model. Because of the aforementioned functional and structural properties of the network, we propose an ecological network of corridors for the most tenable usage by the tiger populations both inside and outside the PAs in the focal landscape.
|
physics
|
We describe a prescription for constructing conformal blocks in conformal field theories in any space-time dimension with arbitrary quantum numbers. Our procedure reduces the calculation of conformal blocks to constructing certain group theoretic structures that depend on the quantum numbers of primary operators. These structures project into irreducible Lorentz representations. Once the Lorentz quantum numbers are accounted for there are no further calculations left to do. We compute a multivariable generalization of the Exton function. This generalized Exton function, together with the group theoretic structures, can be used to construct conformal blocks for four-point as well as higher-point correlation functions.
|
high energy physics theory
|
Let $p: M \to B$ be a Lagrangian fibration on a hyperk\"ahler manifold of maximal holonomy (also known as IHS), and let $H$ be the generator of the Picard group of $B$. We prove that the pullback $p^*(H)$ is a primitive class on $M$.
|
mathematics
|
Artificial intelligence (AI) is already part of our daily lives and is playing a key role in defining the economic and social shape of the future. In 2018, the European Commission introduced its AI strategy, designed to compete in the coming years with world powers such as China and the US while respecting European values and fundamental rights. As a result, most of the Member States have published their own National Strategies with the aim of working on a coordinated plan for Europe. In this paper, we present an ongoing study of how European countries are approaching the field of Artificial Intelligence, with its promises and risks, through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans can contribute to the benefit of the whole society. This paper reports the main findings of a qualitative analysis of the investment plans reported in 15 European National Strategies.
|
computer science
|
Compared to the abundant classical statistics-based literature, very little Bayesian literature exists to date on Procrustes shape analysis in Geometric Morphometrics, probably because it is a relatively new branch of statistical research and because of the inherent computational difficulty of Bayesian analysis. Yet a plethora of novel inferences may be obtained from a Bayesian Procrustes analysis of shape parameter distributions. In this paper, we propose to regard the posterior of the Procrustes shape variance as an indicator of morphological variability. We propose novel Bayesian methodologies for Procrustes shape analysis under an isotropic variance assumption on the landmark data, together with a Bayesian statistical test for model validation of new species discovery using the morphological variation reflected in the posterior distribution of the landmark variance of objects studied under Geometric Morphometrics. We consider both Gaussian and heavy-tailed t distribution-based models for Procrustes analysis. To date, we are not aware of any R package dedicated to Bayesian Procrustes analysis for landmark-based Geometric Morphometrics. Hence, we introduce a simple R package, \textbf{BPviGM1} ("Bayesian Procrustes Variance-based inferences in Geometric Morphometrics 1"), which contains R implementations of the computations for the proposed models and methodologies, including a function running Markov Chain Monte Carlo (MCMC) to draw samples from the posterior of the parameters of concern and a function for the proposed Bayesian test of model validation based on significant morphological variation. As an application, we quantitatively show that male primate faces may be genetically disposed to greater shape variation than female faces.
|
statistics
|
Active galactic nuclei (AGN) feedback operated by the expansion of radio jets can play a crucial role in driving gaseous outflows on galaxy scales. Galaxies hosting young radio AGN, whose jets are in the first phases of expansion through the surrounding interstellar medium (ISM), are the ideal targets to probe the energetic significance of this mechanism. In this paper, we characterise the warm ionised gas outflows in a sample of nine young radio sources from the 2Jy sample, combining X-shooter spectroscopy and Hubble Space Telescope (HST) imaging data. We find that the warm outflows have radial extents (~0.06-2 kpc) similar to those of the radio sources, consistent with the idea that `jet mode' AGN feedback is the dominant driver of the outflows detected in young radio galaxies. Exploiting the broad spectral coverage of the X-shooter data, we have used the ratios of the trans-auroral emission lines of [SII] and [OII] to estimate the electron densities, finding that most of the outflows have high gas densities ($\log(n_e/\mathrm{cm}^{-3}) \sim 3$-$4.8$), which we speculate could be the result of compression by jet-induced shocks. Combining our estimates of the emission-line luminosities, radii, and densities, we find that the kinetic powers of the warm outflows are a relatively small fraction of the energies available from the accretion of material onto the central supermassive black hole (SMBH), reflecting AGN feedback efficiencies below 1% in most cases. Overall, the warm outflows detected in our sample are strikingly similar to those found in nearby ultraluminous infrared galaxies (ULIRGs), but are more energetic and have higher feedback efficiencies on average than those of the general population of nearby AGN of similar bolometric luminosity; this is likely to reflect a high degree of coupling between the jets and the near-nuclear ISM in the early stages of radio source evolution.
|
astrophysics
|
Devices that exploit quantum advantages for storing energy in the degrees of freedom of quantum systems have drawn attention due to their potential to work as quantum batteries (QBs). However, a number of problems need to be adequately solved before these devices can actually be manufactured. In particular, it is important to pay attention to the ability of QBs to store energy when no consumption center is connected to them. In this paper, considering QBs disconnected from both external charging fields and any consumption center, we study the decoherence effects that lead to charge leakage into the surrounding environment. We identify this phenomenon as the self-discharging of QBs, in analogy with the inherent decay of the stored charge of conventional classical batteries in an open-circuit configuration. The quantum advantage over the classical counterpart is highlighted for single- and multi-cell quantum batteries.
|
quantum physics
|
In d-dimensional CFTs with a large number of degrees of freedom an important set of operators consists of the stress tensor and its products, multi stress tensors. Thermalization of such operators, the equality between their expectation values in heavy states and at finite temperature, is equivalent to a universal behavior of their OPE coefficients with a pair of identical heavy operators. We verify this behavior in a number of examples which include holographic and free CFTs and provide a bootstrap argument for the general case. In a free CFT we check the thermalization of multi stress tensor operators directly and also confirm the equality between the contributions of multi stress tensors to heavy-heavy-light-light correlators and to the corresponding thermal light-light two-point functions by disentangling the contributions of other light operators. Unlike multi stress tensors, these light operators violate the Eigenstate Thermalization Hypothesis and do not thermalize.
|
high energy physics theory
|
Processes occurring in the strong-field regime of QED are characterized by background electromagnetic fields of the order of the critical field $F_{cr}=m^2c^3/\hbar|e|$ in the rest frame of the participating charges. It has been conjectured that if electrons/positrons experience field strengths of the order of $F_{cr}/\alpha^{3/2}\approx 1600\,F_{cr}$ in their rest frame, with $\alpha\approx 1/137$ being the fine-structure constant, their effective coupling with radiation becomes of the order of unity. Here we show that channeling radiation by ultrarelativistic electrons with energies of the order of a few TeV on thin tungsten crystals allows one to test the predictions of QED close to this fully non-perturbative regime by measuring the angularly resolved single-photon intensity spectrum. The proposed setup features the unique characteristics that essentially all electrons 1) undergo at most a single photon emission and 2) experience, at the moment of emission and in the angular region of interest, the maximum allowed value of the field strength, which at $2\;\text{TeV}$ exceeds $F_{cr}$ in their rest frame by more than two orders of magnitude.
|
high energy physics phenomenology
|
In recent proposals for achieving optical super-resolution, variants of the Quantum Fisher Information (QFI) quantify the attainable precision. We find that claims about a strong enhancement of the resolution resulting from coherence effects are questionable because they refer to very small subsets of the data without proper normalization. When the QFI is normalized, accounting for the strength of the signal, there is no advantage of coherent sources over incoherent ones. Our findings have a bearing on further studies of the achievable precision of optical instruments.
|
quantum physics
|
An adiabatic quantum algorithm is essentially specified by three elements: an initial Hamiltonian with a known ground state, a problem Hamiltonian whose ground state encodes the solution of the given problem, and an evolution schedule such that the adiabatic condition is satisfied. A correct choice of these elements is crucial for an efficient adiabatic quantum computation. In this paper we propose a hybrid quantum-classical algorithm to solve optimization problems with an adiabatic machine, assuming restrictions on the class of available problem Hamiltonians. The scheme is based on repeated calls to the quantum machine within a classical iterative structure. In particular, we present a technique to learn the encoding of a given optimization problem into a problem Hamiltonian, and we prove the convergence of the algorithm. Moreover, the output of the proposed algorithm can be used to learn efficient adiabatic algorithms from examples.
|
quantum physics
|
We derive the $2$d Zakharov-Mikhailov action from $4$d Chern-Simons theory. This $2$d action is known to produce as equations of motion the flatness condition of a large class of Lax connections of Zakharov-Shabat type, which includes an ultralocal variant of the principal chiral model as a special case. At the $2$d level, we determine for the first time the covariant Poisson bracket $r$-matrix structure of the Zakharov-Shabat Lax connection, which is of rational type. The flatness condition is then derived as a covariant Hamilton equation. We obtain a remarkable formula for the covariant Hamiltonian in terms of the Lax connection, which is the covariant analogue of the well-known formula "$H=\operatorname{Tr} L^2$".
|
high energy physics theory
|
We show that any $d$-Ahlfors regular subset of $\mathbb{R}^{n}$ supporting a weak $(1,d)$-Poincar\'e inequality with respect to surface measure is uniformly rectifiable.
|
mathematics
|
The transition between the N\'{e}el antiferromagnet and the valence-bond solid state in two dimensions has become a paradigmatic example of deconfined quantum criticality, a non-Landau transition characterized by fractionalized excitations (spinons). We consider an extension of this scenario whereby the deconfined spinons are subject to a magnetic field. The primary purpose is to identify the exotic scenario of a Bose-Einstein condensate of spinons. We employ quantum Monte Carlo simulations of the \mbox{$J$-$Q$} model with a magnetic field and perform a quantum field theoretic analysis of the magnetic field and temperature dependence of thermodynamic quantities. The combined analysis provides compelling evidence for the Bose-Einstein condensation of spinons and also demonstrates an extended temperature regime in which the system is best described as a gas of spinons interacting with an emergent gauge field.
|
condensed matter
|
Controlling the False Discovery Rate (FDR) in a variable selection procedure is critical for reproducible discoveries and has received extensive study in sparse linear models. However, in many scenarios the sparsity constraint is imposed not directly on the parameters but on a linear transformation of the parameters to be estimated. Examples can be found in total variation, wavelet transforms, fused LASSO, and trend filtering, among others. In this paper, we propose the Split Knockoff method, a data-adaptive FDR control for this structural sparsity setting. The proposed scheme relaxes the linear subspace constraint to a neighborhood of it, a device often known as variable splitting in optimization, which here brings new statistical benefits. It yields orthogonal design and split knockoff matrices that exhibit the desired FDR control empirically in structural sparsity discovery, and it improves the power of strong feature selection by enhancing the incoherence condition for model selection consistency. However, the split knockoff statistics fail to satisfy exchangeability, a property crucial to the provable FDR control of the classical knockoff method. To address this challenge, we introduce an almost-supermartingale construction under a perturbation of exchangeability, which enables us to establish FDR control up to an arbitrarily small inflation that vanishes as the relaxed neighborhood enlarges. Simulation experiments show the effectiveness of split knockoffs, with possible improvements over knockoffs in both FDR control and power. An application to an Alzheimer's Disease study with MRI data demonstrates that the split knockoff method can disclose important lesion regions in the brain associated with the disease, as well as connections between neighboring regions of high contrast variation during disease progression.
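For orientation, the data-dependent selection threshold of the classical knockoff filter, which the split knockoff statistics adapt, can be sketched as follows. This is a minimal illustration of the standard knockoff+ rule, not the authors' split variant, and the toy statistics `W` are hypothetical:

```python
import numpy as np

def knockoff_threshold(W, q=0.1, offset=1):
    """Smallest threshold t whose estimated false discovery proportion is <= q.

    W: feature statistics (large positive values suggest true signals);
    offset=1 gives the 'knockoff+' variant with finite-sample FDR control.
    """
    ts = np.sort(np.abs(W[W != 0]))
    for t in ts:
        fdp_hat = (offset + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf  # nothing can be selected at this FDR level

# toy statistics: 5 strong signals, 20 nulls roughly symmetric about zero
rng = np.random.default_rng(0)
W = np.concatenate([np.full(5, 8.0), rng.normal(0.0, 1.0, 20)])
t_hat = knockoff_threshold(W, q=0.2)
selected = np.where(W >= t_hat)[0]
```

Features whose statistic clears the threshold are selected; the `offset` term is what upgrades the empirical FDR estimate to exact finite-sample control in the exchangeable setting that, per the abstract, the split construction only satisfies up to a perturbation.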
|
statistics
|
Ferromagnetism was observed in a Pt(100) ultrathin film deposited on a SrTiO3(100) substrate. The ferromagnetism, which appears in films with thicknesses of 2.2-4.4 nm, oscillates with the film thickness with a period of approximately 1 nm (5-6 ML). This is consistent with the period derived from the quantum-well states formed in the thin film. X-ray magnetic circular dichroism measurements were conducted to probe the intrinsic nature of the ferromagnetism in the Pt(100) ultrathin films and, contrary to our expectations, the orbital magnetic moment of pure Pt is much smaller than that in Pt/ferromagnet multilayer systems. These results suggest that the origin of the large magnetic anisotropy of Pt components cannot be explained by the strength of the spin-orbit coupling in Pt alone.
|
condensed matter
|
Flavor-violating processes in the lepton sector have highly suppressed branching ratios in the standard model, mainly due to the tiny neutrino masses. This means that observing lepton flavor violation (LFV) in the next round of experiments would constitute a clear indication of physics beyond the standard model (BSM). We revisit one possible way to search for LFV: muonium-antimuonium oscillations. This process violates muon lepton number by two units and could be sensitive to types of BSM physics that are not probed by other LFV processes. Using techniques of effective field theory, we calculate the mass and width differences of the mass eigenstates of muonium. We argue that its invisible decays give the parametrically leading contribution to the lifetime difference, and we put constraints on the scales of new physics probed by effective operators in muonium oscillations.
|
high energy physics phenomenology
|
Population adjustment methods such as matching-adjusted indirect comparison (MAIC) are increasingly used to compare marginal treatment effects when there are cross-trial differences in effect modifiers and limited patient-level data. MAIC is based on propensity score weighting, which is sensitive to poor covariate overlap because of its inability to extrapolate. Current regression adjustment methods can extrapolate beyond the observed covariate space but target conditional treatment effects. This is problematic when the measure of effect is non-collapsible. To overcome these limitations, we develop a novel method based on multiple imputation called predictive-adjusted indirect comparison (PAIC). PAIC is a regression adjustment method that targets marginal treatment effects. It proceeds by splitting the adjustment into two separate stages: the generation of synthetic datasets and their analysis. We compare two versions of PAIC to MAIC in a comprehensive simulation study of 162 scenarios. This simulation study is based on binary outcomes and binary covariates and uses the log-odds ratio as the measure of effect. The simulation scenarios vary the trial sample size, prognostic variable effects, interaction effects, covariate correlations and covariate overlap. Generally, both PAIC and MAIC yield unbiased treatment effect estimates and valid coverage rates. In the simulations, PAIC provides more precise and more accurate estimates than MAIC, particularly when overlap is poor. MAIC and PAIC use different adjustment mechanisms and considering their results jointly may be helpful to evaluate the robustness of analyses.
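MAIC's weighting stage, which the abstract contrasts with the proposed PAIC, reduces to a convex method-of-moments fit: exponential tilting of the individual patient data (IPD) so that the weighted covariate means match the published aggregate means of the comparator trial. The sketch below, with hypothetical trial data and target means, shows this standard estimator (solved here by Newton's method), not the authors' implementation:

```python
import numpy as np

def maic_weights(X_ipd, target_means, iters=50):
    """Exponential-tilting (method-of-moments) MAIC weights.

    Minimizes Q(a) = sum_i exp((x_i - target)' a), whose stationarity
    condition is exactly 'weighted covariate means = target means'.
    """
    Xc = X_ipd - target_means            # centre covariates at the target
    a = np.zeros(X_ipd.shape[1])
    for _ in range(iters):               # Newton iterations on convex Q
        e = np.exp(Xc @ a)
        grad = Xc.T @ e
        hess = (Xc * e[:, None]).T @ Xc
        a -= np.linalg.solve(hess, grad)
    w = np.exp(Xc @ a)
    return w / w.sum()

# hypothetical IPD trial and published aggregate means of the other trial
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
target = np.array([0.3, -0.2])
w = maic_weights(X, target)
balanced_means = w @ X                   # should reproduce `target`
```

The sensitivity to poor overlap noted in the abstract is visible here: as `target` moves toward the edge of the observed covariate cloud, a few extreme weights dominate and the effective sample size collapses.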
|
statistics
|
The success of the Higgs mechanism in the standard model has led to the speculation that the standard model gauge group might arise through an analogous breaking of a yet more unified group. Such `grand unified theories' have the advantage of unifying both the gauge structure and fermion representations of the standard model. Unfortunately, the theories that most elegantly unify the fermions, without predicting extra unobserved fermion states, do not explain the existence of the three fermion generations. They also typically predict a proliferation of bosonic states, which lead to so-far unobserved processes like proton decay. In this paper we introduce an alternative explanation for why one might only observe a subgroup of a larger `unified' group in nature. The approach we introduce gives rise naturally to a generation structure without the appearance of unwanted fermion states, and is cleaner in the sense that it avoids the usual proliferation of unobserved bosonic states and resulting unobserved processes.
|
high energy physics phenomenology
|
We study various relations governing quasi-automorphic forms associated to discrete subgroups of ${\rm SL}(2,\mathbb{R}) $ called Hecke groups. We show that the Eisenstein series associated to a Hecke group ${\rm H}(m)$ satisfy a set of $m$ coupled linear differential equations, which are natural analogues of the well-known Ramanujan identities for quasi-modular forms of ${\rm SL}(2,\mathbb{Z})$. Each Hecke group is then associated to a (hyper-)elliptic curve, whose coefficients are determined by an anomaly equation. For the $m=3$ and $4$ cases, the Ramanujan identities admit a natural geometric interpretation as a Gauss-Manin connection on the parameter space of the elliptic curve. The Ramanujan identities also allow us to associate a nonlinear differential equation of order $ m $ to each Hecke group. These equations are higher-order analogues of the Chazy equation, and we show that they are solved by the quasi-automorphic Eisenstein series $E_2^{(m)}$ associated to ${\rm H}(m) $ and its orbit under the Hecke group. We conclude by demonstrating that these nonlinear equations possess the Painlev\'e property.
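For the ${\rm SL}(2,\mathbb{Z})$ case that these identities generalize, the classical Ramanujan system can be checked numerically from truncated $q$-expansions. The sketch below verifies $12\,qE_2' = E_2^2 - E_4$, $3\,qE_4' = E_2E_4 - E_6$, and $2\,qE_6' = E_2E_6 - E_4^2$ coefficient by coefficient; the truncation order `N` is an arbitrary choice:

```python
import numpy as np

N = 30  # truncation order in q

def sigma(k, n):
    """Divisor power sum sigma_k(n)."""
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def eisenstein(c, k):
    """Coefficient array of E_k = 1 + c * sum_n sigma_{k-1}(n) q^n."""
    E = np.zeros(N)
    E[0] = 1.0
    for n in range(1, N):
        E[n] = c * sigma(k - 1, n)
    return E

def mul(a, b):
    """Product of two truncated q-series."""
    out = np.zeros(N)
    for i in range(N):
        for j in range(N - i):
            out[i + j] += a[i] * b[j]
    return out

def qd(a):
    """The derivation q d/dq acting on a q-series."""
    return a * np.arange(N)

E2, E4, E6 = eisenstein(-24, 2), eisenstein(240, 4), eisenstein(-504, 6)
r1 = 12 * qd(E2) - (mul(E2, E2) - E4)   # residuals: all zero if the
r2 = 3 * qd(E4) - (mul(E2, E4) - E6)    # Ramanujan identities hold
r3 = 2 * qd(E6) - (mul(E2, E6) - mul(E4, E4))
```

In the abstract's notation this is the ${\rm H}(3)$ member of the family; the generalization replaces the triple $(E_2, E_4, E_6)$ by $m$ coupled quasi-automorphic series.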
|
high energy physics theory
|
The design of optical systems for underwater vehicles is a complex process where the selection of cameras, lenses, housings, and operational parameters greatly influence the performance of the complete system. Determining the correct combination of components and parameters for a given set of operational requirements is currently a process based on trial and error as well as the specialized knowledge and experience of the designer. In this paper, we introduce an open-source tool for the parametric exploration of the design space of underwater optical systems and review the most significant underwater light effects with the corresponding models to estimate the response and performance of the complete imaging system.
|
electrical engineering and systems science
|
We present longitudinal-field muon-spin relaxation (LF $\mu$SR) measurements on two systems that stabilize a skyrmion lattice (SkL): Cu$_2$OSeO$_3$, and Co$_x$Zn$_y$Mn$_{20-x-y}$ for $(x,y)~=~(10,10)$, $(8,9)$ and $(8,8)$. We find that the SkL phase of Cu$_2$OSeO$_3$ exhibits emergent dynamic behavior at megahertz frequencies, likely due to collective excitations, allowing the SkL to be identified from the $\mu$SR response. From measurements following different cooling protocols and calculations of the muon stopping site, we suggest that the metastable SkL is not the majority phase throughout the bulk of this material at the fields and temperatures where it is often observed. The dynamics of bulk Co$_8$Zn$_9$Mn$_3$ are well described by $\simeq~2$ GHz excitations that reduce in frequency near the critical temperature, while in Co$_8$Zn$_8$Mn$_4$ we observe similar behavior over a wide range of temperatures, implying that dynamics of this kind persist beyond the SkL phase.
|
condensed matter
|
A fundamental task in AI is to assess (in)dependence between mixed-type variables (text, image, sound). We propose a Bayesian kernelised correlation test of (in)dependence using a Dirichlet process model. The new measure of (in)dependence allows us to answer some fundamental questions: Based on data, are (mixed-type) variables independent? How likely is dependence/independence to hold? How high is the probability that two mixed-type variables are more than just weakly dependent? We theoretically show the properties of the approach, as well as algorithms for fast computation with it. We empirically demonstrate the effectiveness of the proposed method by analysing its performance and by comparing it with other frequentist and Bayesian approaches on a range of datasets and tasks with mixed-type variables.
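The proposed measure is Bayesian, built on a Dirichlet process model; as a frequentist point of reference, a (biased) empirical Hilbert-Schmidt Independence Criterion with RBF kernels, one of the kernelised dependence statistics this line of work builds on, can be computed as below. This is not the paper's method, and the toy data are hypothetical:

```python
import numpy as np

def rbf_gram(v, gamma=1.0):
    """RBF kernel Gram matrix of a 1-D sample."""
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-gamma * d2)

def hsic(x, y, gamma=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2 with centring matrix H.

    Zero (in population) iff x and y are independent, for characteristic
    kernels such as the RBF kernel.
    """
    n = len(x)
    K, L = rbf_gram(x, gamma), rbf_gram(y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(2)
x = rng.normal(size=200)
h_dep = hsic(x, x + 0.1 * rng.normal(size=200))   # strongly dependent pair
h_ind = hsic(x, rng.normal(size=200))             # independent pair
```

The Bayesian construction in the paper replaces the point estimate above with a posterior over the dependence measure, which is what lets it answer "how likely is dependence to hold" rather than only reporting a test statistic.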
|
statistics
|
We present a novel neural network architecture called AutoAtlas for fully unsupervised partitioning and representation learning of 3D brain Magnetic Resonance Imaging (MRI) volumes. AutoAtlas consists of two neural network components: one that performs multi-label partitioning based on local texture in the volume and a second that compresses the information contained within each partition. We train both of these components simultaneously by optimizing a loss function that is designed to promote accurate reconstruction of each partition, while encouraging spatially smooth and contiguous partitioning, and discouraging relatively small partitions. We show that the partitions adapt to the subject specific structural variations of brain tissue while consistently appearing at similar spatial locations across subjects. AutoAtlas also produces very low dimensional features that represent local texture of each partition. We demonstrate prediction of metadata associated with each subject using the derived feature representations and compare the results to prediction using features derived from FreeSurfer anatomical parcellation. Since our features are intrinsically linked to distinct partitions, we can then map values of interest, such as partition-specific feature importance scores onto the brain for visualization.
|
electrical engineering and systems science
|
Under the assumption that the product of two spin operators decomposes uniquely into the degenerate conformal fields $\{\Phi_{n',n}\}$, the general expression for the correlation function of four spins is defined for the $q$-state Potts model, with $q$ taking general values in the interval $1 \leq q \leq 4$. The limit $q \rightarrow 1$ is considered in detail, and the four-spin function is obtained for the percolation model.
|
high energy physics theory
|
In the upcoming sixth-generation (6G) era, the demand for constructing a wide-area time-sensitive Internet of Things (IoT) keeps increasing. As conventional cellular technologies are hard to use directly for wide-area time-sensitive IoT, it is beneficial to use non-terrestrial infrastructures including satellites and unmanned aerial vehicles (UAVs), with which a non-terrestrial network (NTN) can be built under the cell-free architecture. Driven by the time-sensitive requirements and the uneven distribution of machines, the NTN must be empowered by mobile edge computing (MEC) while providing oasis-oriented on-demand coverage for machines. Nevertheless, the communication and MEC systems are coupled with each other under the influence of the complex propagation environment in the MEC-empowered NTN, which makes it hard to orchestrate the resources. In this paper, we propose a process-oriented framework to design the communication and MEC systems in a time-decoupled manner. Under this framework, the large-scale channel state information (CSI) is used to characterize the complex propagation environment at an affordable cost, and a non-convex task completion latency minimization problem is formulated. After that, the approximate dual problem is given, which can be decomposed into subproblems that are solved iteratively. Simulation results demonstrate the superiority of the proposed process-oriented scheme over other algorithms. These results also indicate that the payload deployments of UAVs should be appropriately predesigned to improve the efficiency of resource use. Furthermore, the results imply that it is advantageous to integrate the NTN with MEC for wide-area time-sensitive IoT.
|
electrical engineering and systems science
|
The dark photon is a massive vector field which interacts with the physical photon only through kinetic mixing. This coupling is assumed to be weak, so that the dark photon is almost unobservable in processes with elementary particles but can serve as a dark matter particle. We argue that in the very early Universe ($z>3000$) this vector field may have the equation of state of radiation ($w=1/3$) but later behaves as cold dark matter ($w=0$). This may slightly change the expansion rate of the Universe at early times and reduce the value of the sound horizon of baryon acoustic oscillations (the standard ruler). As a result, in this model the value of the Hubble constant appears larger than in the standard $\Lambda$CDM model. In particular, a dark photon mass of order $m\sim 10^{-27}-10^{-25}$ eV is sufficient to fit the value of the Hubble constant to $H_0 = 73$ km$\cdot$s$^{-1}$Mpc$^{-1}$, thus resolving the Hubble tension.
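The mechanism can be illustrated with a toy background-expansion calculation: a component that redshifts like radiation ($w=1/3$) above a transition redshift and like matter ($w=0$) below it raises $H(z)$ before recombination and so shrinks the comoving sound horizon relative to a counterpart that is matter-like at all times. The density parameters, the transition redshift, and the constant sound speed $c_s = 1/\sqrt{3}$ below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# illustrative density parameters (in units of the critical density today)
Om, Or, Ox = 0.31, 9.0e-5, 1.0e-3      # matter, radiation, dark photon
OL = 1.0 - Om - Or - Ox                # cosmological constant (flat universe)
zt = 3000.0                            # w = 1/3 -> w = 0 transition redshift

def rho_x(z, transition=True):
    """Dark-photon energy density: radiation-like above zt, matter-like
    below (continuous at zt); the baseline is matter-like at all times."""
    z = np.asarray(z, dtype=float)
    matter = Ox * (1.0 + z) ** 3
    if not transition:
        return matter
    radiation = Ox * (1.0 + z) ** 4 / (1.0 + zt)
    return np.where(z > zt, radiation, matter)

def H_rel(z, transition=True):
    """H(z)/H0 for the flat toy model."""
    return np.sqrt(Om * (1 + z) ** 3 + Or * (1 + z) ** 4 + OL
                   + rho_x(z, transition))

def sound_horizon(transition=True, zstar=1090.0, zmax=1.0e7, n=4000):
    """r_s (up to overall units): integral of c_s dz / H from recombination
    to very early times, with the simplification c_s = 1/sqrt(3)."""
    lz = np.linspace(np.log(1 + zstar), np.log(1 + zmax), n)
    z = np.expm1(lz)
    f = (1 + z) / (np.sqrt(3.0) * H_rel(z, transition))
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lz))  # trapezoid rule

rs_mod, rs_base = sound_horizon(True), sound_horizon(False)
```

A smaller sound horizon, with the observed CMB angular scale held fixed, is what drives the inferred $H_0$ upward in such models.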
|
astrophysics
|
Vortex beams carrying orbital angular momentum (OAM) have been widely applied in various electromagnetic, optical, and quantum systems. A tailored OAM spectrum composed of several specific modes holds promise for expanding the degrees of freedom of such systems. However, a broadband high-purity tailored spectrum is difficult to achieve with present devices, in which broadband amplitude manipulation has not yet been explored. In this work, inspired by envelope-modulation theory, an elegant and universal way to manipulate the OAM spectrum over a wide bandwidth is proposed using a shape-tailored metasurface. First, rotating meta-atoms on a triangular lattice are shown to have smaller coupling distortion than those on a square lattice, a behavior critical for high-purity vortex spectrum generation by Pancharatnam-Berry-based metasurfaces. Second, a universal modulation relation is established between the spatial arrangement of the metasurfaces and the generated vortex beams. Finally, broadband modulated OAM spectra and comb-like OAM spectra are theoretically and experimentally demonstrated with shape-tailored metasurfaces. The proposed amplitude-modulation scheme offers a novel concept and engineering route for manipulating the OAM spectrum over a wide bandwidth, which could promote the development of OAM-based applications.
|
physics
|
In the 1980s, work by Coleman and by Giddings and Strominger linked the physics of spacetime wormholes to `baby universes' and an ensemble of theories. We revisit such ideas, using features associated with a negative cosmological constant and asymptotically AdS boundaries to strengthen the results, introduce a change in perspective, and connect with recent replica wormhole discussions of the Page curve. A key new feature is an emphasis on the role of null states. We explore this structure in detail in simple topological models of the bulk that allow us to compute the full spectrum of associated boundary theories. The dimension of the asymptotically AdS Hilbert space turns out to become a random variable $Z$, whose value can be less than the naive number $k$ of independent states in the theory. For $k>Z$, consistency arises from an exact degeneracy in the inner product defined by the gravitational path integral, so that many a priori independent states differ only by a null state. We argue that a similar property must hold in any consistent gravitational path integral. We also comment on other aspects of extrapolations to more complicated models, and on possible implications for the black hole information problem in the individual members of the above ensemble.
|
high energy physics theory
|
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver. In this paper, we minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming. Due to uncertain channel conditions, we formulate a robust power minimization problem subject to the receiver's signal-to-noise ratio (SNR) requirement and the IRS's power budget constraint. We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences. To improve the learning performance, we derive a convex approximation as a lower bound on the robust problem, which is integrated into the DRL framework, leading to a novel optimization-driven deep deterministic policy gradient (DDPG) approach. In particular, when the DDPG algorithm generates a part of the action (e.g., passive beamforming), we can use the model-based convex approximation to optimize the other part (e.g., active beamforming) of the action more efficiently. Our simulation results demonstrate that the optimization-driven DDPG algorithm can improve both the learning rate and reward performance significantly compared to the conventional model-free DDPG algorithm.
|
electrical engineering and systems science
|
A key open problem in M-theory is the mechanism of "gauge enhancement", which supposedly makes M-branes exhibit the nonabelian gauge degrees of freedom that are seen perturbatively in the limit of 10d string theory. In fact, since only the twisted K-theory classes represented by nonabelian Chan-Paton gauge fields on D-branes have invariant meaning, the problem is really the lift to M-theory of the twisted K-theory classification of D-brane charges. Here we show how this problem has a solution by universal constructions in super homotopy theory, at least rationally. We recall how double dimensional reduction of super M-brane charges is described by the cyclification adjunction applied to the 4-sphere, and how M-theory degrees of freedom hidden at ADE-singularities are induced by the suspended Hopf action on the 4-sphere. Combining these, we demonstrate, at the level of rational homotopy theory, that gauge enhancement in M-theory is exhibited by lifting against the fiberwise stabilization of the unit of this cyclification adjunction on the A-type orbispace of the 4-sphere. This explains how the fundamental D6 and D8 brane cocycles can be lifted from twisted K-theory to a cohomology theory for M-brane charge, at least rationally.
|
high energy physics theory
|
The low temperatures and high ultraviolet (UV) radiation levels at the surface of Mars today currently preclude the survival of life anywhere except perhaps in limited subsurface niches. Several ideas for making the martian surface more habitable have been put forward previously, but they all involve massive environmental modification that will be well beyond human capability for the foreseeable future. Here we present a new approach to this problem. We show that widespread regions of the surface of Mars could be made habitable to photosynthetic life in the future via a solid-state analogue to Earth's atmospheric greenhouse effect. Specifically, we demonstrate via experiments and modelling that under martian environmental conditions, a 2 to 3-cm thick layer of silica (SiO2) aerogel will simultaneously transmit sufficient visible light for photosynthesis, block hazardous ultraviolet radiation, and raise temperatures underneath permanently to above the melting point of water, without the need for any internal heat source. Placing silica aerogel shields over sufficiently ice-rich regions of the martian surface could therefore allow photosynthetic life to survive there with minimal subsequent intervention. This regional approach to making Mars habitable is much more achievable than global atmospheric modification. In addition, it can be developed systematically starting from minimal resources, and can be further tested in extreme environments on Earth today.
|
astrophysics
|
Adding or subtracting a single quantum of excitation to a thermal state of a bosonic system has the counter-intuitive effect of approximately doubling its mean occupation. We perform the first experimental demonstration of this effect outside optics by implementing single-phonon addition and subtraction to a thermal state of a mechanical oscillator via Brillouin optomechanics in an optical whispering-gallery microresonator. Using a detection scheme that combines single-photon counting and optical heterodyne detection, we observe this doubling of the mechanical thermal fluctuations to a high precision. The capabilities of this joint click-dyne detection scheme add a significant new dimension to optomechanical quantum science and applications.
|
quantum physics
|
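The doubling effect described in the abstract above can be checked numerically. The following sketch (not from the paper; the mean occupation `nbar` and truncation are arbitrary choices) applies the standard single-quantum maps to a Bose-Einstein number distribution: addition acts as rho -> a† rho a, giving P_add(n) ∝ n·P_th(n-1), and subtraction as rho -> a rho a†, giving P_sub(n) ∝ (n+1)·P_th(n+1). The resulting means are 2·nbar + 1 and 2·nbar, i.e. roughly double the original occupation for large nbar.

```python
import numpy as np

# Thermal (Bose-Einstein) number distribution; nbar is an arbitrary example value
nbar = 10.0
n = np.arange(5000)
p_th = (nbar / (1 + nbar)) ** n / (1 + nbar)

# Single-phonon addition: rho -> a† rho a, so P_add(n) ∝ n * P_th(n-1)
p_add = np.zeros_like(p_th)
p_add[1:] = n[1:] * p_th[:-1]
p_add /= p_add.sum()

# Single-phonon subtraction: rho -> a rho a†, so P_sub(n) ∝ (n+1) * P_th(n+1)
p_sub = (n + 1) * np.append(p_th[1:], 0.0)
p_sub /= p_sub.sum()

mean_add = (n * p_add).sum()  # analytic value: 2*nbar + 1
mean_sub = (n * p_sub).sum()  # analytic value: 2*nbar
```

Both conditioned means come out close to twice the initial occupation, which is the "approximate doubling" the experiment observes.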
In situations where it is difficult to enroll patients in randomized controlled trials, external data can improve efficiency and feasibility. In such cases, adaptive trial designs could be used to decrease enrollment in the control arm of the trial by updating the randomization ratio at the interim analysis. Updating the randomization ratio requires an estimate of the amount of information effectively borrowed from external data, which is typically done with a linear approximation. However, this linear approximation is not always a reliable estimate, which can lead to sub-optimal updates of the randomization ratio. In this note, we highlight this issue through simulations for exponential time-to-event outcomes, because in this simple setting an exact solution is available for comparison. We also propose a potential generalization that could complement the linear approximation in more complex settings, discuss challenges for this generalization, and recommend best practices for computing and interpreting estimates of the effective number of events borrowed.
|
statistics
|
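The "exact solution" available in the exponential setting of the abstract above can be illustrated with a generic power-prior sketch (this is an illustration under conjugacy assumptions, not the note's actual simulation; all event counts, follow-up times, and the weight `a0` are hypothetical). For exponential outcomes, the Fisher information about the hazard scales with the number of events, so under a power prior with weight `a0` the effective number of external events borrowed is exactly `a0` times the external event count:

```python
# Gamma-conjugate update for an exponential hazard lambda
# (hypothetical numbers for illustration)
d_int, t_int = 40, 200.0   # internal events, internal total follow-up
d_ext, t_ext = 100, 480.0  # external events, external total follow-up
a0 = 0.5                   # power-prior discounting weight in [0, 1]

# Posterior for lambda is Gamma(shape, rate) with discounted external data
shape = d_int + a0 * d_ext
rate = t_int + a0 * t_ext

# Effective number of events borrowed (exact in this conjugate setting)
ess_borrowed = shape - d_int  # equals a0 * d_ext
```

In more complex models no such closed form exists, which is why the linear approximation is used; the point of the note is that the approximation can deviate from this kind of exact benchmark.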
As a step towards quantization of Higher Spin Gravities we construct the presymplectic AKSZ sigma-model for $4d$ Higher Spin Gravity which is AdS/CFT dual of Chern-Simons vector models. It is shown that the presymplectic structure leads to the correct quantum commutator of higher spin fields and to the correct algebra of the global higher spin symmetry currents. The presymplectic AKSZ model is proved to be unique, it depends on two coupling constants in accordance with the AdS/CFT duality, and it passes some simple checks of interactions.
|
high energy physics theory
|
Let $\mathfrak{g}$ be a real finite-dimensional Lie algebra equipped with a symmetric bilinear form $\langle\cdot,\cdot\rangle$. We assume that $\langle\cdot,\cdot\rangle $ is nil-invariant. This means that every nilpotent operator in the smallest algebraic Lie subalgebra of endomorphisms containing the adjoint representation of $\mathfrak{g}$ is an infinitesimal isometry for $\langle\cdot,\cdot\rangle $. Among these Lie algebras are the isometry Lie algebras of pseudo-Riemannian manifolds of finite volume. We prove a strong invariance property for nil-invariant symmetric bilinear forms, which states that the adjoint representations of the solvable radical and all simple subalgebras of non-compact type of $\mathfrak{g} $ act by infinitesimal isometries for $\langle\cdot,\cdot\rangle $. Moreover, we study properties of the kernel of $\langle\cdot,\cdot\rangle $ and the totally isotropic ideals in $\mathfrak{g} $ in relation to the index of $\langle\cdot,\cdot\rangle $. Based on this, we derive a structure theorem and a classification for the isometry algebras of indefinite homogeneous spaces of finite volume with metric index at most two. Examples show that the theory becomes significantly more complicated for index greater than two.
|
mathematics
|
In this paper, by combining the Tietze extension theorem with the Baer criterion, we build a new mathematical structure that resembles a triangular pyramid. We then prove that the topological space, which we call Tb, that arises from this combination and sits at the apex of the pyramid is a Tychonoff space. Finally, we obtain three new extension theorems.
|
mathematics
|