In this article, we construct the axialvector-diquark-axialvector-antidiquark type currents to study both the vector and axialvector $QQ\bar{Q}\bar{Q}$ tetraquark states with the QCD sum rules, and obtain the masses $M_{Y(cc\bar{c}\bar{c},1^{+-})} =6.05\pm0.08\,\rm{GeV}$, $M_{Y(cc\bar{c}\bar{c},1^{--})} =6.11\pm0.08\,\rm{GeV}$, $M_{Y(bb\bar{b}\bar{b},1^{+-})} =18.84\pm0.09\,\rm{GeV}$, $M_{Y(bb\bar{b}\bar{b},1^{--})} =18.89\pm0.09\,\rm{GeV}$. The vector tetraquark states lie $40\,\rm{MeV}$ above the corresponding centroids of the $0^{++}$, $1^{+-}$ and $2^{++}$ tetraquark states, which is a typical feature of vector tetraquark states consisting of four heavy quarks.
high energy physics phenomenology
The number of studies analyzing remote sensing images has grown exponentially in recent decades. Many studies, however, only report results---in the form of certain performance metrics---for a few selected algorithms on a training and testing sample. While this often yields valuable insights, it says little about some important aspects. For example, one might be interested in understanding the nature of a study through the interaction of algorithm, features, and sample, as these collectively contribute to the outcome; in identifying which of these three would be a more productive direction for improving a study; or in assessing the quality of the sample or the value of a set of features. With a focus on land-use classification, we advocate the use of a structured analysis. The output of a study is viewed as the result of the interplay among three input dimensions: feature, sample, and algorithm. Similarly, another dimension, the error, can be decomposed into error along each input dimension. Such a structural decomposition of the inputs or error could help better understand the nature of the problem and potentially suggest directions for improvement. We use the analysis of a remote sensing image at a study site in Guangzhou, China, to demonstrate how such a structured analysis can be carried out and what insights it generates. The structured analysis can be applied to a new study, or as a diagnosis of an existing one. We expect this will inform practice in the analysis of remote sensing images, and help advance the state of the art of land-use classification.
statistics
We propose a visual SLAM method by predicting and updating line flows that represent sequential 2D projections of 3D line segments. While feature-based SLAM methods have achieved excellent results, they still face problems in challenging scenes containing occlusions, blurred images, and repetitive textures. To address these problems, we leverage a line flow to encode the coherence of line segment observations of the same 3D line along the temporal dimension, which has been neglected in prior SLAM systems. Thanks to this line flow representation, line segments in a new frame can be predicted according to their corresponding 3D lines and their predecessors along the temporal dimension. We create, update, merge, and discard line flows on-the-fly. We model the proposed line flow based SLAM (LF-SLAM) using a Bayesian network. Extensive experimental results demonstrate that the proposed LF-SLAM method achieves state-of-the-art results due to the utilization of line flows. Specifically, LF-SLAM obtains good localization and mapping results in challenging scenes with occlusions, blurred images, and repetitive textures.
computer science
Let $F$ be a totally real field, and $\mathbb{A}_F$ be the adele ring of $F$. Let us fix $N$ to be a positive integer. Let $\pi_1=\otimes\pi_{1,v}$ and $\pi_2=\otimes\pi_{2,v}$ be distinct cohomological cuspidal automorphic representations of $\mathrm{GL}_n(\mathbb{A}_{F})$ with levels less than or equal to $N$. Let $\mathcal{N}(\pi_1,\pi_2)$ be the minimum of the absolute norm of $v \nmid \infty$ such that $\pi_{1,v} \not \simeq \pi_{2,v}$ and that $\pi_{1,v}$ and $\pi_{2,v}$ are unramified. We prove that there exists a constant $C_N$ such that for every pair $\pi_1$ and $\pi_2$, $$\mathcal{N}(\pi_1,\pi_2) \leq C_N.$$ This improves known bounds $$ \mathcal{N}(\pi_1,\pi_2)=O(Q^A) \;\;\; (\text{some } A \text{ depending only on } n), $$ where $Q$ is the maximum of the analytic conductors of $\pi_1$ and $\pi_2$. This result applies to newforms on $\Gamma_1(N)$. In particular, assume that $f_1$ and $f_2$ are Hecke eigenforms of weight $k_1$ and $k_2$ on $\mathrm{SL}_2(\mathbb{Z})$, respectively. We prove that if for all $p \in \{2,7\}$, $$\lambda_{f_1}(p)/\sqrt{p}^{(k_1-1)} = \lambda_{f_2}(p)/\sqrt{p}^{(k_2-1)},$$ then $f_1=cf_2$ for some constant $c$. Here, for each prime $p$, $\lambda_{f_i}(p)$ denotes the $p$-th Hecke eigenvalue of $f_i$.
mathematics
We give a survey on the recent development of the Novikov conjecture and its applications to topological rigidity and non-rigidity.
mathematics
We present full expressions for the surface part of polarization tensor of a Dirac fermion confined in a half-space in $3+1$ dimensions. We compare this tensor to the polarization tensor of eventual surface mode (which is a $2+1$ dimensional Dirac fermion) and find essential differences in the conductivities in both Hall and normal sectors. Thus, the interaction with electromagnetic field near the boundary differs significantly in the full model and in the effective theory for the surface mode.
high energy physics theory
Using density-functional theory (DFT), we investigate the selectivity of adsorption of CO2 over N2 and CH4 on planar-type B clusters, based on our previous finding of strong chemisorption of CO2 on the B10-13 planar and quasiplanar clusters. We consider the prototype B8 and B12 planar-type clusters and perform a comparative study of the adsorption of the three molecules on these clusters. We find that, at room temperature, CO2 can be separated from N2 by selective binding to the B12 cluster and not to the B8 cluster. Selective adsorption of CO2 over CH4 at room temperature is possible for both clusters. Based on our DFT adsorption data (including also a semi-infinite boron sheet) and the available literature adsorption value for N2 on the planar-type B36 cluster, we discuss the selectivity trend of CO2 adsorption over N2 and CH4 with planar-cluster size, showing that it extends over sizes including the B10-13 clusters and significantly larger.
condensed matter
An axion background field induces tiny oscillating electric and magnetic fields in an external static magnetic field. This signature is used to search for axion dark matter. We use standard quantum field theory techniques to obtain an expression for a transition amplitude, from which we identify the classical electromagnetic fields induced by the background axion field. We confirm previous results, that if the spatial size $R$ of the applied static magnetic field is small compared to the axion Compton wavelength $\lambda$, the induced electric and magnetic fields are parametrically suppressed by the small numbers $(R/\lambda)^2$ and $R/\lambda$, respectively, relative to the case when $R$ is larger than $\lambda$. Our approach allows an intuitive interpretation in terms of 4-momentum conservation and momentum exchange via the photon propagator.
high energy physics phenomenology
We present the results of a large multi-wavelength follow-up campaign of the Tidal Disruption Event (TDE) \dsg, focusing on low to high resolution optical spectroscopy, X-ray, and radio observations. The galaxy hosts a supermassive black hole of mass $\rm (5.4\pm3.2)\times10^6\,M_\odot$, and careful analysis finds no evidence for the presence of an Active Galactic Nucleus; instead, the TDE host galaxy shows narrow optical emission lines that likely arise from star formation activity. The transient is luminous in the X-rays, radio, UV and optical. The X-ray emission becomes undetected after $\sim$125 days, and the radio luminosity density starts to decay at frequencies above 5.4 GHz by $\sim$180 days. Optical emission line signatures of the TDE are present up to $\sim$250 days after the discovery of the transient. The medium to high resolution spectra show traces of absorption lines that we propose originate in the self-gravitating debris streams. At late times, after $\sim$200 days, narrow Fe lines appear in the spectra. The TDE was previously classified as N-strong, but after careful subtraction of the host galaxy's stellar contribution, we find no evidence for these N lines in the TDE spectrum, even though O Bowen lines are detected. The observed properties of the X-ray emission are fully consistent with emission from the inner regions of a cooling accretion disc. The optical and radio properties are consistent with this central engine seen at a low inclination (i.e., seen from the poles).
astrophysics
Soft drop has been shown to reduce hadronisation effects at $e^+e^-$ colliders for the thrust event shape. In this context, we perform fits of the strong coupling constant for the soft-drop thrust distribution at NLO+NLL accuracy to pseudo data generated by the \textsf{Sherpa}~event generator. In particular, we focus on the impact of hadronisation corrections, which we estimate both with an analytical model and a Monte-Carlo based one, on the fitted value of $\alpha_s(m_Z)$. We find that grooming can reduce the size of the shift in the fitted value of $\alpha_s$ due to hadronisation. In addition, we also explore the possibility of extending the fitting range down to significantly lower values of (one minus) thrust. Here, soft drop is shown to play a crucial role, allowing us to maintain good fit qualities and stable values of the fitted strong coupling. The results of these studies show that soft-drop thrust is a promising candidate for fitting $\alpha_s$ at $e^+ e^-$ colliders with reduced impact of hadronisation effects.
high energy physics phenomenology
We study a real-time bidding problem resulting from a set of contractual obligations stipulating that a firm win a specified number of heterogeneous impressions or ad placements over a defined duration in a real-time auction. The contracts specify item targeting criteria (which may be overlapping), and a supply requirement. Using the Pontryagin maximum principle, we show that the resulting continuous time and time inhomogeneous planning problem can be reduced into a finite dimensional convex optimization problem and solved to optimality. In addition, we provide algorithms to update the bidding plan over time via a receding horizon. Finally, we provide numerical results based on real data and show a connection to production-transportation problems.
electrical engineering and systems science
The modular spaces are a family of polarizations of the Hilbert space that are based on Aharonov's modular variables and carry a rich geometric structure. We construct here, step by step, a Feynman path integral for the quantum harmonic oscillator in a modular polarization. This modular path integral is endowed with novel features such as a new action, winding modes, and an Aharonov-Bohm phase. Its saddle points are sequences of superposition states and they carry a non-classical concept of locality in alignment with the understanding of quantum reference frames. The action found in the modular path integral can be understood as living on a compact phase space and it possesses a new set of symmetries. Finally, we propose a prescription analogous to the Legendre transform, which can be applied generally to the Hamiltonian of a variety of physical systems to produce similar modular actions.
quantum physics
In this paper, we present the one-loop radiative corrections to the electroweak precision observable $\Delta \rho$ coming from the $I_W=1$ multiplet excited leptons. We have calculated the couplings of the exotic lepton triplet to the vector bosons and ordinary leptons using an effective Lagrangian approach. These couplings are then used to estimate the excited lepton triplet contribution to the $\Delta \rho$ parameter. The mass-degenerate excited lepton contribution to $\Delta \rho$ is small and can be neglected. However, if the excited leptons are non-degenerate, their contribution can be large, which can result in more stringent constraints on the excited fermion parameter space than the constraints from present experimental searches and the perturbative unitarity condition.
high energy physics phenomenology
The ROC curve is the gold standard for measuring the performance of a test/scoring statistic regarding its capacity to discriminate between two statistical populations in a wide variety of applications, ranging from anomaly detection in signal processing to information retrieval, through medical diagnosis. Most practical performance measures used in scoring/ranking applications such as the AUC, the local AUC, the p-norm push, the DCG and others, can be viewed as summaries of the ROC curve. In this paper, the fact that most of these empirical criteria can be expressed as two-sample linear rank statistics is highlighted and concentration inequalities for collections of such random variables, referred to as two-sample rank processes here, are proved, when indexed by VC classes of scoring functions. Based on these nonasymptotic bounds, the generalization capacity of empirical maximizers of a wide class of ranking performance criteria is next investigated from a theoretical perspective. It is also supported by empirical evidence through convincing numerical experiments.
mathematics
We study theoretically the radiative lifetime of bound two-particle excitations in a waveguide with an array of two-level atoms, realising a 1D Dicke-like model. Recently, Zhang et al. [arXiv:1908.01818] have numerically found an unexpected sharp maximum of the bound pair lifetime when the array period $d$ is equal to $1/12$th of the light wavelength $\lambda_0$. We uncover a rigorous transformation from the non-Hermitian Hamiltonian with the long-ranged radiative coupling to a nearest-neighbor coupling model with radiative losses only at the edges. This naturally explains the puzzle of the long lifetime: the effective mass of the bound photon pair also diverges for $d=\lambda_0/12$, hampering the escape of photons through the edges. We also link the oscillations of the lifetime with the number of atoms to the nonmonotonic quasi-flat-band dispersion of the bound pair.
quantum physics
We derive a topological classification of the steady states of $d$-dimensional lattice models driven by $D$ incommensurate tones. Mapping to a unifying $(d+D)$-dimensional localized model in frequency space reveals anomalous localized topological phases (ALTPs) with no static analog. While the formal classification is determined by $d+D$, the observable signatures of each ALTP depend on the spatial dimension $d$. For each $d$, with $d+D=3$, we identify a quantized circulating current, and corresponding topological edge states. The edge states for a driven wire ($d=1$) function as a quantized, nonadiabatic energy pump between the drives. We design concrete models of quasiperiodically driven qubits and wires that achieve ALTPs of several topological classes. Our results provide a route to experimentally access higher dimensional ALTPs in driven low-dimensional systems.
condensed matter
Neural network applications generally benefit from larger-sized models, but for current speech enhancement models, larger scale networks often suffer from decreased robustness to the variety of real-world use cases beyond what is encountered in training data. We introduce several innovations that lead to better large neural networks for speech enhancement. The novel PoCoNet architecture is a convolutional neural network that, with the use of frequency-positional embeddings, is able to more efficiently build frequency-dependent features in the early layers. A semi-supervised method helps increase the amount of conversational training data by pre-enhancing noisy datasets, improving performance on real recordings. A new loss function biased towards preserving speech quality helps the optimization better match human perceptual opinions on speech quality. Ablation experiments and objective and human opinion metrics show the benefits of the proposed improvements.
electrical engineering and systems science
In this paper, we estimate the errors of Gaussian transformations implemented using one-way quantum computations on cluster states of various configurations. From all possible cluster state configurations, we choose those that give the smallest computation error. Furthermore, we evaluate errors in hybrid computational schemes, in which Gaussian operations are performed using one-way computations with additional linear transformations. As a result, we find the optimal strategy for the implementation of universal Gaussian computations with minimal errors.
quantum physics
Shape-constrained inference has wide applicability in bioassay, medicine, economics, risk assessment, and many other fields. Although there has been a large amount of work on monotone-constrained univariate curve estimation, multivariate shape-constrained problems are much more challenging, and fewer advances have been made in this direction. With a focus on monotone regression with multiple predictors, this current work proposes a projection approach to estimate a multiple monotone regression function. An initial unconstrained estimator -- such as a local polynomial estimator or spline estimator -- is first obtained, which is then projected onto the shape-constrained space. A shape-constrained estimate (with multiple predictors) is obtained by sequentially projecting an (adjusted) initial estimator along each univariate direction. Compared to the initial unconstrained estimator, the projection estimate results in a reduction of estimation error in terms of both $L^p$ ($p\geq 1$) distance and supremum distance. We also derive the asymptotic distribution of the projection estimate. Simple computational algorithms are available for implementing the projection in both the unidimensional and higher dimensional cases. Our work provides a simple recipe for practitioners to use in real applications, and is illustrated with a joint-action example from environmental toxicology.
statistics
Inflation is often described through the dynamics of a scalar field, slow-rolling in a suitable potential. Ultimately, this inflaton must be identified as the expectation value of a quantum field, evolving in a quantum effective potential. The shape of this potential is determined by the underlying tree-level potential, dressed by quantum corrections from the scalar field itself and the metric perturbations. Following [1], we compute the effective scalar field equations and the corrected Friedmann equations to quadratic order in both scalar field, scalar metric and tensor perturbations. We identify the quantum corrections from different sources at leading order in slow-roll, and estimate their magnitude in benchmark models of inflation. We comment on the implications of non-minimal coupling to gravity in this context.
high energy physics phenomenology
We obtain cosmological solutions with Kasner-like asymptotics in N=2 gauged and ungauged supergravity by maximal analytic continuation of planar versions of non-extremal black hole solutions. Initially, we construct static solutions with planar symmetry by solving the time-reduced field equations. Upon lifting back to four dimensions, the resulting static regions are incomplete and bounded by a curvature singularity on one side and a Killing horizon on the other. Analytic continuation reveals the existence of dynamic patches in the past and future, with Kasner-like asymptotics. For the ungauged STU-model, our solutions contain previously known solutions with the same conformal diagram as a subset. We find explicit lifts to five, six, ten and eleven dimensions which show that in the extremal limit, the underlying brane configuration is the same as for STU black holes. The extremal limit of the six-dimensional lift is shown to be BPS for special choices of the integration constants. We argue that there is a universal correspondence between spherically symmetric black hole solutions and planar cosmological solutions which can be illustrated using the Reissner-Nordstr\"om solution of Einstein-Maxwell theory.
high energy physics theory
We report the results of the transit timing variation (TTV) analysis of the extra-solar planet Qatar-1b using thirty-eight light curves. Our analysis combines thirty-five previously available transit light curves with three new transits observed by us between June 2016 and September 2016 using the 2-m Himalayan Chandra Telescope (HCT) at the Indian Astronomical Observatory (Hanle, India). From these transit data, the physical and orbital parameters of the Qatar-1 system are determined. In addition, the ephemeris for the orbital period and mid-transit time is refined to investigate the possible TTV. We find that the null-TTV model provides a better fit to the (O-C) data. This indicates that there is no evidence for TTVs to confirm the presence of additional planets in the Qatar-1 system. The use of the 3.6-m Devasthal Optical Telescope (DOT) operated by the Aryabhatta Research Institute of Observational Sciences (ARIES, Nainital, India) could improve the photometric precision to examine the signature of TTVs in this system with a greater accuracy than in the present work.
astrophysics
GitHub projects can be easily replicated through the site's fork process or through a Git clone-push sequence. This is a problem for empirical software engineering, because it can lead to skewed results or mistrained machine learning models. We provide a dataset of 10.6 million GitHub projects that are copies of others, and link each record with the project's ultimate parent. The ultimate parents were derived from a ranking along six metrics. The related projects were calculated as the connected components of an 18.2 million node and 12 million edge denoised graph created by directing edges to ultimate parents. The graph was created by filtering out more than 30 hand-picked and 2.3 million pattern-matched clumping projects. Projects that introduced unwanted clumping were identified by repeatedly visualizing shortest path distances between unrelated important projects. Our dataset identified 30 thousand duplicate projects in an existing popular reference dataset of 1.8 million projects. An evaluation of our dataset against another created independently with different methods found a significant overlap, but also differences attributed to the operational definition of what projects are considered as related.
computer science
Anderson localization is a striking phenomenon wherein transport of light is arrested due to the formation of disorder-induced resonances. Hitherto, Anderson localization has been demonstrated separately in two limits of disorder, namely, amorphous disorder and nearly-periodic disorder. However, transport properties in the two limits have not yet been compared, particularly in a statistically consistent manner. Here, we experimentally measure light transport across two-dimensional open mesoscopic structures, wherein the disorder systematically ranges from nearly-periodic to amorphous. We measure the generalized conductance, which quantifies the transport probability in the sample. Although localization was identified in both limits, statistical measurements revealed a discrepant behavior in the generalized conductance fluctuations in the two disorder regimes. Under amorphous disorder, the generalized conductance remains below unity for any configuration of the disorder, attesting to the arrested nature of transport. Contrarily, at near-periodic disorder, the distribution of generalized conductance is heavy-tailed towards large conductance values, indicating that the overall transport is delocalized. Theoretical results from a model based on the tight-binding approximation, augmented to include open boundaries, are in excellent agreement with experiments, and also endorse the results over much larger ensembles. These results quantify the differences between the two disorder regimes, and advance the study of disordered systems toward the actual consequences of Anderson localization in light transport.
physics
Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs. Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries. This work proposes an adaptive attentional network for few-shot KG completion by learning adaptive entity and reference representations. Specifically, entities are modeled by an adaptive neighbor encoder to discern their task-oriented roles, while references are modeled by an adaptive query-aware aggregator to differentiate their contributions. Through the attention mechanism, both entities and references can capture their fine-grained semantic meanings, and thus render more expressive representations. This will be more predictive for knowledge acquisition in the few-shot scenario. Evaluation in link prediction on two public datasets shows that our approach achieves new state-of-the-art results with different few-shot sizes.
computer science
We study solutions to the quantum trajectory evolution of $N$-mode open quantum systems possessing a time-independent Hamiltonian, linear Heisenberg-picture dynamics, and Gaussian measurement noise. In terms of the mode annihilation and creation operators, a system will have linear Heisenberg-picture dynamics under two conditions. First, the Hamiltonian must be quadratic. Second, the Lindblad operators describing the coupling to the environment (including those corresponding to the measurement) must be linear. In cases where we can solve the $2N$-degree polynomials that arise in our calculations, we provide an analytical solution for initial states that are arbitrary (i.e. they are not required to have Gaussian Wigner functions). The solution takes the form of an evolution operator, with the measurement-result dependence captured in $2N$ stochastic integrals over these classical random signals. The solutions also allow the POVM, which generates the probabilities of obtaining measurement outcomes, to be determined. To illustrate our results, we solve some single-mode example systems, with the POVMs being of practical relevance to the inference of an initial state, via quantum state tomography. Our key tool is the representation of mixed states of quantum mechanical oscillators as state vectors rather than state matrices (albeit in a larger Hilbert space). Together with methods from Lie algebra, this allows a more straightforward manipulation of the exponential operators comprising the system evolution than is possible in the original Hilbert space.
quantum physics
This article is an introductory survey of index theory in the context of noncommutative geometry, written for the occasion of the 70th birthday of Alain Connes.
mathematics
We investigate how the detectability of signatures of self-gravity in a protoplanetary disc depends on its temporal evolution. We run a one-dimensional model for secular timescales to follow the disc mass as a function of time. We then combine this with three-dimensional global hydrodynamics simulations that employ a hybrid radiative transfer method to approximate realistic heating and cooling. We simulate ALMA continuum observations of these systems, and find that structures induced by the gravitational instability (GI) are readily detectable when $q=M_\mathrm{disc}/M_*\gtrsim 0.25$ and $R_\mathrm{outer}\lesssim 100$ au. The high accretion rate generated by gravito-turbulence in such a massive disc drains its mass to below the detection threshold in $\sim10^4$ years, or approximately 1% of the typical disc lifetime. Therefore, discs with spiral arms detected in ALMA dust observations, if generated by self-gravity, must either be still receiving infall to maintain a high $q$ value, or have just emerged from their natal envelope. Detection of substructure in systems with lower $q$ is possible, but would require a specialist integration with the most extended configuration over several days. This disfavours the possibility of GI-caused spiral structure in systems with $q<0.25$ being detected in relatively short integration times, such as those found in the DSHARP ALMA survey (Andrews et al. 2018; Huang et al. 2018). We find no temporal dependence of detectability on dynamical timescales.
astrophysics
We analyze the phase diagram of twisted graphene bilayers near a magic angle. We consider the effect of the long range Coulomb interaction, treated within the self-consistent Hartree-Fock approximation, and we study arbitrary band fillings. We find a rich phase diagram, with different broken symmetry phases, although they do not necessarily show a gap at the Fermi energy. There are nontrivial effects of the electrostatic potential on the shape and the gaps of the bands in the broken symmetry phases. The results suggest that the non-superconducting broken symmetry phases observed experimentally are induced by the long range exchange interaction.
condensed matter
Classical molecular dynamics simulations are based on Newton's equations of motion and rely on numerical integrators to solve them. Using a small timestep to avoid discretization errors, Verlet integrators generate a trajectory of particle positions as solutions to Newton's equations. We introduce an integrator based on deep neural networks that is trained on trajectories generated using the Verlet integrator and learns to propagate the dynamics of particles with a timestep up to 4000$\times$ larger than the Verlet timestep. We demonstrate a significant net speedup of up to 32000 for 1-16 particle 3D systems and over a variety of force fields.
physics
We exploit the theoretical strength of the supervariable and Becchi-Rouet-Stora-Tyutin (BRST) formalisms to derive the proper (i.e. off-shell nilpotent and absolutely anticommuting) (anti-)BRST symmetry transformations for the reparameterization invariant model of a non-relativistic (NR) free particle whose space $(x)$ and time $(t)$ variables are functions of an evolution parameter $(\tau)$. The infinitesimal reparameterization (i.e. 1D diffeomorphism) symmetry transformation of our theory is defined w.r.t. this evolution parameter $(\tau)$. We apply the modified Bonora-Tonin (BT) supervariable approach (MBTSA) as well as the (anti-)chiral supervariable approach (ACSA) to BRST formalism to discuss various aspects of our present system. For this purpose, our 1D ordinary theory (parameterized by $\tau$) is generalized onto a $(1, 2)$-dimensional supermanifold which is characterized by the superspace coordinates $Z^M = (\tau, \theta, \bar\theta)$ where a pair of Grassmannian variables satisfy the fermionic relationships: $\theta^2 = {\bar\theta}^2 = 0, \, \theta\,\bar\theta + \bar\theta\,\theta = 0$ and $\tau$ is the bosonic evolution parameter. In the context of ACSA, we take into account only the (1, 1)-dimensional (anti-)chiral super sub-manifolds of the general (1, 2)-dimensional supermanifold. The derivation of the universal Curci-Ferrari (CF)-type restriction, from various underlying theoretical methods, is a novel observation in our present endeavor. Furthermore, we note that the form of the gauge-fixing and Faddeev-Popov ghost terms for our NR and non-SUSY system is exactly the same as that of the reparameterization invariant SUSY (i.e. spinning) and non-SUSY (i.e. scalar) relativistic particles. This is a novel observation, too.
high energy physics theory
We study the entanglement wedge cross-section (EWCS) in holographic massive gravity theory, in which a first and second-order phase transition can occur. We find that the mixed state entanglement measures, the EWCS and mutual information (MI) can characterize the phase transitions. The interesting phenomenon is that the EWCS and MI show exactly the opposite behavior in the critical region, which suggests that the EWCS captures distinct degrees of freedom from that of the MI. Meanwhile, the holographic entanglement entropy (HEE) exhibits more complicated behaviors, totally distinct from that of the AdS-RN black holes. Moreover, EWCS, MI and HEE all show obvious critical scaling law near the critical point of the second-order phase transition and have the same critical exponent. We offer an analytical understanding on this and point out that this critical phenomenon is distinct from that of the superconductivity phase transition.
high energy physics theory
In this paper we derive for the first time the complete gravitational quartic-in-spin interaction of generic compact binaries at the next-to-leading order in the post-Newtonian (PN) expansion. The derivation builds on the effective field theory for gravitating spinning objects, and its recent extensions, in which a new type of worldline coupling must be considered, as well as on the extension of the effective action of a spinning particle to quadratic order in the curvature. The latter extension entails a new Wilson coefficient that appears in this sector. This work pushes the precision frontier with spins to the fifth PN (5PN) order for maximally-spinning compact objects, and at the same time informs us about the gravitational Compton scattering with higher spins.
high energy physics theory
Quantifying tidal interactions in close-in two-body systems is of prime interest since they have a crucial impact on the architecture and on the rotational history of the bodies. Various studies have shown that the dissipation of tides in either body is very sensitive to its structure and to its dynamics, like differential rotation which exists in the outer convective envelope of solar-like stars and giant gaseous planets. In particular, tidal waves may strongly interact with zonal flows at the so-called corotation resonances, where the wave's Doppler-shifted frequency cancels out. We aim to provide a deep physical understanding of the dynamics of tidal inertial waves at corotation resonances, in the presence of differential rotation profiles typical of low-mass stars and giant planets. By developing an inclined shearing box, we investigate the propagation and the transmission of free inertial waves at corotation, and more generally at critical levels, which are singularities in the governing wave differential equation. Through the construction of an invariant called the wave action flux, we identify different regimes of wave transmission at critical levels, which are confirmed with a one-dimensional three-layer numerical model. We find that inertial waves can be either fully transmitted, strongly damped, or even amplified after crossing a critical level. The occurrence of these regimes depends on the assumed profile of differential rotation, on the nature as well as the latitude of the critical level, and on wave parameters such as the inertial frequency and the longitudinal and vertical wavenumbers. Waves can thus either deposit their action flux to the fluid when damped at critical levels, or they can extract action flux from the fluid when amplified at critical levels. Both situations could lead to significant angular momentum exchange between the tidally interacting bodies.
astrophysics
Fluctuating Cooper pairs in a superconductor strongly impact its electrical and thermal transport phenomena above the critical temperature. We demonstrate that the nontrivial band geometry of the pairing electrons leaves fingerprints on the spectrum of fluctuations. Different superconducting channels become intertwined by a coupling that has an inter-winding phase shift and mimics spin-orbit interactions. The spectrum of fluctuating Cooper pairs is described by an effective Ginzburg-Landau Hamiltonian, can be topologically nontrivial, and can be characterized by a nonzero Chern number. We interpret such fluctuating pairs as topological ones. It is shown that the nontrivial geometry manifests itself in an anomalous Hall paraconductivity mediated by fluctuating Cooper pairs.
condensed matter
We consider the combined use of resampling and partial rejection control in sequential Monte Carlo methods, also known as particle filters. While the variance reducing properties of rejection control are known, there has not been (to the best of our knowledge) any work on unbiased estimation of the marginal likelihood (also known as the model evidence or the normalizing constant) in this type of particle filter. Being able to estimate the marginal likelihood without bias is highly relevant for model comparison, computation of interpretable and reliable confidence intervals, and in exact approximation methods, such as particle Markov chain Monte Carlo. In the paper we present a particle filter with rejection control that enables unbiased estimation of the marginal likelihood.
statistics
Spin diodes are usually resonant in nature (GHz frequencies) and tuneable by magnetic field and bias current, with performances, in terms of sensitivity and minimum detectable power, exceeding those of their semiconductor counterparts, i.e. Schottky diodes. Recently, spin diodes characterized by low-frequency detection (MHz frequencies) have been proposed. Here, we show a strategy to design low-frequency detectors based on magnetic tunnel junctions in which the interfacial perpendicular anisotropy is of the same order as the out-of-plane component of the demagnetizing field. Micromagnetic calculations show that to reach this detection regime a threshold input power has to be overcome, and that the phase shift between the oscillating magnetoresistive signal and the input radiofrequency current plays the key role in determining the value of the rectification voltage.
physics
Minimal $D=5$ supergravity admits asymptotically globally AdS$_5$ gravitational solitons (strictly stationary, geodesically complete spacetimes with positive mass). We show that, like asymptotically flat gravitational solitons, these solutions satisfy mass and mass variation formulas analogous to those satisfied by AdS black holes. A thermodynamic volume associated to the non-trivial topology of the spacetime plays an important role in this construction. We then consider these solitons within the holographic ``complexity equals action'' and ``complexity equals volume'' conjectures as simple examples of spacetimes with nontrivial rotation and topology. We find distinct behaviours for the volume and action, with the counterterm for null boundaries playing a significant role in the latter case. For large solitons we find that both proposals yield a complexity of formation proportional to a power of the thermodynamic volume, $V^{3/4}$. In fact, up to numerical prefactors, the result coincides with the analogous one for large black holes.
high energy physics theory
Generative latent-variable models are emerging as promising tools in robotics and reinforcement learning. Yet, even though tasks in these domains typically involve distinct objects, most state-of-the-art generative models do not explicitly capture the compositional nature of visual scenes. Two recent exceptions, MONet and IODINE, decompose scenes into objects in an unsupervised fashion. Their underlying generative processes, however, do not account for component interactions. Hence, neither of them allows for principled sampling of novel scenes. Here we present GENESIS, the first object-centric generative model of 3D visual scenes capable of both decomposing and generating scenes by capturing relationships between scene components. GENESIS parameterises a spatial GMM over images which is decoded from a set of object-centric latent variables that are either inferred sequentially in an amortised fashion or sampled from an autoregressive prior. We train GENESIS on several publicly available datasets and evaluate its performance on scene generation, decomposition, and semi-supervised learning.
computer science
The thermodynamic cost of resetting an arbitrary initial state to a particular desired state is lower bounded by Landauer's bound. However, here we demonstrate that this lower bound is necessarily unachievable for nearly every initial state, for any reliable reset mechanism. Since local heating threatens rapid decoherence, this issue is of substantial importance beyond mere energy efficiency. For the case of qubit reset, we find the minimally dissipative state analytically for any reliable reset protocol, in terms of the entropy-flow vector introduced here. This allows us to verify a recent theorem about initial-state dependence of entropy production for any finite-time transformation, as it pertains to quantum state preparation.
quantum physics
Ever since the discovery of all-optical magnetization switching (AOS) around a decade ago, this phenomenon of manipulating magnetization using only femtosecond laser pulses has promised a large potential for future data storage and logic devices. Two distinct mechanisms have been observed, where the final magnetization state is either defined by the helicity of many incoming laser pulses, or toggled by a single pulse. What has thus far been elusive, yet essential for applications, is the deterministic writing of a specific magnetization state with a single laser pulse. In this work we experimentally demonstrate such a mechanism by making use of a spin polarized current which is optically generated in a ferromagnetic reference layer, assisting or hindering switching in an adjacent Co/Gd bilayer. We show deterministic writing of an 'up' and 'down' state using a sequence of 1 or 2 pulses, respectively. Moreover, we demonstrate the non-local origin of the effect by varying the magnitude of the generated spin current. Our demonstration of deterministic magnetization writing could provide an essential step towards the implementation of future optically addressable spintronic memory devices.
condensed matter
Human drivers may behave in an imprecise/unstable manner, leading to traffic oscillations which are harmful to traffic throughput. Recent field experiments have shown that the control of a single autonomous vehicle (AV) can increase traffic throughput on a circular test track, as well as reduce traffic oscillations on straight roads. We consider a mixed traffic environment consisting of humans and autonomous vehicles, where the goal is to find a control policy for the autonomous vehicles which maximizes traffic throughput by preventing oscillations in speed. We formulate this problem as an optimization problem which can be solved using gradient based optimization. Numerical experiments on a circular road show that the optimized control policy improves traffic throughput by 28%.
electrical engineering and systems science
Large-scale screening for potential threats with limited resources and capacity for screening is a problem of interest at airports, seaports, and other ports of entry. Adversaries can observe screening procedures and arrive at a time when there will be gaps in screening due to limited resource capacities. To capture this game between ports and adversaries, this problem has been previously represented as a Stackelberg game, referred to as a Threat Screening Game (TSG). Given the significant complexity associated with solving TSGs and uncertainty in arrivals of customers, existing work has assumed that screenees arrive and are allocated security resources at the beginning of the time window. In practice, screenees such as airport passengers arrive in bursts correlated with flight time and are not bound by fixed time windows. To address this, we propose an online threat screening model in which screening strategy is determined adaptively as a passenger arrives while satisfying a hard bound on acceptable risk of not screening a threat. To solve the online problem with a hard bound on risk, we formulate it as a Reinforcement Learning (RL) problem with constraints on the action space (hard bound on risk). We provide a novel way to efficiently enforce linear inequality constraints on the action output in Deep Reinforcement Learning. We show that our solution allows us to significantly reduce screenee wait time while guaranteeing a bound on risk.
computer science
We propose a scheme to realize optical nonreciprocal response and conversion in a Tavis-Cummings coupling optomechanical system, where a single cavity mode interacts with the vibrational mode of a flexible membrane with an embedded ensemble of two-level quantum emitters. Due to the introduction of the Tavis-Cummings interaction, we find that the phases between the mechanical mode and the optical mode, as well as between the mechanical mode and the dopant mode, are correlated with each other, and we further give the analytical relationship between them. By optimizing the system parameters, especially the relative phase between the two paths, the optimal nonreciprocal response can be achieved. In the frequency domain, we derive the transmission matrix of the system analytically based on the input-output relation and study the influence of the system parameters on the nonreciprocal response of the quantum input signal. Moreover, compared with conventional optomechanical systems, the Tavis-Cummings coupling optomechanical system exhibits richer nonreciprocal conversion phenomena among the optical mode, mechanical mode, and dopant mode, which provide a new route to realizing phonon-photon transducers and optomechanical circulators in practice.
quantum physics
The Gottesman-Kitaev-Preskill encoding of a qubit in a harmonic oscillator is a promising building block towards fault-tolerant quantum computation. Recently, this encoding was experimentally demonstrated for the first time in trapped-ion and superconducting circuit systems. However, these systems lack some of the Gaussian operations which are critical to efficiently manipulate the encoded qubits. In particular, homodyne detection, which is the preferred method for readout of the encoded qubit, is not readily available, heavily limiting the readout fidelity. Here, we present an alternative readout strategy designed for qubit-coupled systems. Our method can improve the readout fidelity by several orders of magnitude for such systems and, surprisingly, even surpass the fidelity of homodyne detection in the low squeezing regime.
quantum physics
Quantum control for error correction is critical for the practical use of quantum computers. We address quantum optimal control for single-shot multi-qubit gates by framing as a feasibility problem for the Hamiltonian model and then solving with standard global-optimization software. Our approach yields faster high-fidelity ($>$99.99\%) single-shot three-qubit-gate control than obtained previously, and our method has enabled us to solve the quantum-control problem for a fast high-fidelity four-qubit gate.
quantum physics
In a retroreflective scheme, atomic Raman diffraction adopts some of the properties of Bragg diffraction due to additional couplings to off-resonant momenta. As a consequence, double Raman diffraction has to be performed in a Bragg-type regime. Taking advantage of this regime, double Raman diffraction allows for resonant higher-order diffraction. We study theoretically the case of third-order diffraction and compare it to first order as well as to a sequence of first-order pulses giving rise to the same momentum transfer as the third-order pulse. In fact, third-order diffraction constitutes a competitive tool for the diffraction of ultracold atoms and for interferometry based on large momentum transfer, since it reduces both the complexity of the experiment and the total duration of the diffraction process compared to a sequence.
quantum physics
Noninvasive reconstruction of cardiac transmembrane potential (TMP) from surface electrocardiograms (ECG) involves an ill-posed inverse problem. Model-constrained regularization is powerful for incorporating rich physiological knowledge about spatiotemporal TMP dynamics. These models are controlled by high-dimensional physical parameters which, if fixed, can introduce model errors and reduce the accuracy of TMP reconstruction. Simultaneous adaptation of these parameters during TMP reconstruction, however, is difficult due to their high dimensionality. We introduce a novel model-constrained inference framework that replaces conventional physiological models with a deep generative model trained to generate TMP sequences from low-dimensional generative factors. Using a variational auto-encoder (VAE) with long short-term memory (LSTM) networks, we train the VAE decoder to learn the conditional likelihood of TMP, while the encoder learns the prior distribution of generative factors. These two components allow us to develop an efficient algorithm to simultaneously infer the generative factors and TMP signals from ECG data. Synthetic and real-data experiments demonstrate that the presented method significantly improves the accuracy of TMP reconstruction compared with methods constrained by conventional physiological models or without physiological constraints.
electrical engineering and systems science
In this work, we connect two distinct concepts for unsupervised domain adaptation: feature distribution alignment between domains by utilizing the task-specific decision boundary and the Wasserstein metric. Our proposed sliced Wasserstein discrepancy (SWD) is designed to capture the natural notion of dissimilarity between the outputs of task-specific classifiers. It provides geometrically meaningful guidance to detect target samples that are far from the support of the source and enables efficient distribution alignment in an end-to-end trainable fashion. In the experiments, we validate the effectiveness and generality of our method on digit and sign recognition, image classification, semantic segmentation, and object detection.
computer science
Randomization ensures that observed and unobserved covariates are balanced, on average. However, randomizing units to treatment and control often leads to covariate imbalances in realization, and such imbalances can inflate the variance of estimators of the treatment effect. One solution to this problem is rerandomization---an experimental design strategy that randomizes units until some balance criterion is fulfilled---which yields more precise estimators of the treatment effect if covariates are correlated with the outcome. Most rerandomization schemes in the literature utilize the Mahalanobis distance, which may not be preferable when covariates are correlated or vary in importance. As an alternative, we introduce an experimental design strategy called ridge rerandomization, which utilizes a modified Mahalanobis distance that addresses collinearities among covariates and automatically places a hierarchy of importance on the covariates according to their eigenstructure. This modified Mahalanobis distance has connections to principal components and the Euclidean distance, and---to our knowledge---has remained unexplored. We establish several theoretical properties of this modified Mahalanobis distance and our ridge rerandomization scheme. These results guarantee that ridge rerandomization is preferable over randomization and suggest when ridge rerandomization is preferable over standard rerandomization schemes. We also provide simulation evidence that suggests that ridge rerandomization is particularly preferable over typical rerandomization schemes in high-dimensional or high-collinearity settings.
mathematics
The intrinsic luminosity of Uranus is a factor of 10 less than that of Neptune, an observation that standard giant planetary evolution models, which assume negligible viscosity, fail to capture. Here we show that more than half of the interior of Uranus is likely to be in a solid state, and that thermal evolution models that account for this high viscosity region satisfy the observed faintness of Uranus by storing accretional heat deep in the interior. A frozen interior also explains the quality factor of Uranus required by the evolution of the orbits of its satellites.
astrophysics
We consider the problem of the estimation of a high-dimensional probability distribution using model classes of functions in tree-based tensor formats, a particular case of tensor networks associated with a dimension partition tree. The distribution is assumed to admit a density with respect to a product measure, possibly discrete for handling the case of discrete random variables. After discussing the representation of classical model classes in tree-based tensor formats, we present learning algorithms based on empirical risk minimization using a $L^2$ contrast. These algorithms exploit the multilinear parametrization of the formats to recast the nonlinear minimization problem into a sequence of empirical risk minimization problems with linear models. A suitable parametrization of the tensor in tree-based tensor format allows us to obtain a linear model with orthogonal bases, so that each problem admits an explicit expression of the solution and cross-validation risk estimates. These risk estimates enable model selection, for instance when exploiting sparsity in the coefficients of the representation. A strategy for the adaptation of the tensor format (dimension tree and tree-based ranks) is provided, which allows us to discover and exploit some specific structures of high-dimensional probability distributions such as independence or conditional independence. We illustrate the performance of the proposed algorithms for the approximation of classical probabilistic models (such as the Gaussian distribution, graphical models, Markov chains).
statistics
In this study, the melting and coalescence of Au nanoparticles were investigated using molecular dynamics simulation. The melting points of nanoparticles were calculated by studying the potential energy and Lindemann indices as a function of temperature. The simulations show that coalescence of two Au nanoparticles of the same size occurs at far lower temperatures than their corresponding melting temperature. For smaller nanoparticles, the difference between melting and coalescence temperature increases. Detailed analyses of the Lindemann indices and potential energy distribution across the nanoparticles show that the surface melting in nanoparticles begins several hundred degrees below the melting point. This suggests that the coalescence is governed by liquid-phase surface diffusion. Furthermore, the surface reduction during the coalescence accelerates its kinetics. It is found that for small enough particles and/or at elevated temperatures, the heat released due to the surface reduction results in a melting transition of the two attached nanoparticles.
condensed matter
We provide a new framework to model the day side and night side atmospheres of irradiated exoplanets using 1-D radiative transfer by incorporating a self-consistent heat flux carried by circulation currents (winds) between the two sides. The advantages of our model are its physical motivation and computational efficiency, which allows for an exploration of a wide range of atmospheric parameters. We use this forward model to explore the day and night side atmosphere of WASP-76~b, an ultra-hot Jupiter which shows evidence for a thermal inversion and Fe condensation, and WASP-43~b, comparing our model against high precision phase curves and general circulation models. We are able to closely match the observations as well as prior theoretical predictions for both of these planets with our model. We also model a range of hot Jupiters with equilibrium temperatures between 1000-3000~K and reproduce the observed trend that the day-night temperature contrast increases with equilibrium temperature up to $\sim$2500~K beyond which the dissociation of H$_2$ becomes significant and the relative temperature difference declines.
astrophysics
Tidal Disruption Events (TDEs) are processes where stars are torn apart by the strong gravitational force near a massive or supermassive black hole. If a jet is launched in such a process, particle acceleration may take place in internal shocks. We demonstrate that jetted TDEs can simultaneously describe the observed neutrino and cosmic ray fluxes at the highest energies if stars with heavier compositions, such as carbon-oxygen white dwarfs, are tidally disrupted and these events are sufficiently abundant. We simulate the photo-hadronic interactions both in the TDE jet and in the propagation through extragalactic space, and we show that the simultaneous description of Ultra-High Energy Cosmic Ray (UHECR) and PeV neutrino data implies that a nuclear cascade develops in the jet through photo-hadronic interactions.
astrophysics
We describe a new technique to measure the EDM of $^{129}$Xe with $^3$He comagnetometry. Both species are polarized using spin-exchange optical pumping, transferred to a measurement cell, and transported into a magnetically shielded room, where SQUID magnetometers detect free precession in applied electric and magnetic fields. The result of a one week run combined with a detailed study of systematic effects is $d_A(^{129}\mathrm{Xe}) = (0.26 \pm 2.33_\mathrm{stat} \pm 0.72_\mathrm{syst})\times10^{-27}~e\,\mathrm{cm}$. This corresponds to an upper limit of $|d_A(^{129}\mathrm{Xe})| < 4.81\times 10^{-27} ~e\,\mathrm{cm}~(95\%~\mathrm{CL})$, a factor of 1.4 more sensitive than the previous limit.
physics
We study a model of aggregation and fragmentation of clusters of particles on an open segment of a single-lane road. The particles and clusters obey the stochastic discrete-time discrete-space kinetics of the Totally Asymmetric Simple Exclusion Process (TASEP) with backward ordered sequential update (dynamics), endowed with two hopping probabilities, p and pm. The second, modified probability, pm, models a special kinematic interaction between the particles belonging to the same cluster. This modification is called generalized TASEP (gTASEP) since it contains as special cases TASEP with parallel update and TASEP with backward ordered sequential update for specific values of the second hopping probability pm. We focus here on exemplifying the effect of the additional attraction interaction on the system properties in the non-equilibrium steady state. We estimate various physical quantities (bulk density, density distribution, and the current) in the system and how they change with the increase of pm (p < pm < 1). Within a random walk theory we consider the evolution of the gaps under different boundary conditions and present space-time plots generated by MC simulations, illustrating the applicability of the random walk theory for the study of gTASEP.
condensed matter
We discover the quantum analog of the well-known classical maximum power transfer theorem. Our theoretical framework considers the continuous steady-state problem of coherent energy transfer through an N-node bosonic network coupled to an external dissipative load. We present an exact solution for optimal power transfer in the form of the maximum power transfer theorem known in the design of electrical circuits. We provide analytical expressions for both the maximum power delivered to the load as well as the energy transfer efficiency which are exact analogs to their classical counterparts. Our results are applicable to both ordered and disordered quantum networks with graph-like structures ranging from nearest-neighbour to all-to-all connectivities. This work points towards universal design principles which adapt ideas of power transfer from the classical domain to the quantum regime for applications in energy-harvesting, wireless power transfer, energy transduction, as well as future applications in quantum power circuit design.
quantum physics
Background: While machine learning (ML) models are rapidly emerging as promising screening tools in critical care medicine, the identification of homogeneous subphenotypes within populations with heterogeneous conditions such as pediatric sepsis may facilitate attainment of high predictive performance of these prognostic algorithms. This study aimed to identify subphenotypes of pediatric sepsis and to demonstrate the potential value of partitioned data/subtyping-based training. Methods: This was a retrospective study of clinical data extracted from medical records of 6,446 pediatric patients admitted to a major hospital system in the DC area. Vitals and labs associated with patients meeting the diagnostic criteria for sepsis were used to perform latent profile analysis. Modern ML algorithms were used to explore the predictive performance benefits of reduced training data heterogeneity via label profiling. Results: In total, 134 (2.1%) patients met the diagnostic criteria for sepsis in this cohort, and latent profile analysis identified four profiles/subphenotypes of pediatric sepsis. Profiles 1 and 3 had the lowest mortality and included pediatric patients from different age groups. Profile 2 was characterized by respiratory dysfunction; profile 4 by neurological dysfunction and the highest mortality rate (22.2%). Machine learning experiments comparing the predictive performance of models derived without training data profiling against profile-targeted models suggest that statistically significant improvements in prediction can be obtained. For example, the area under the ROC curve (AUC) obtained to predict profile 4 with 24-hour data (AUC = .998, p < .0001) compared favorably with the AUC obtained from the model considering all profiles as a single homogeneous group (AUC = .918) with 24-hour data.
statistics
Active materials are capable of converting free energy into mechanical work to produce autonomous motion, and exhibit striking collective dynamics that biology relies on for essential functions. Controlling those dynamics and transport in synthetic systems has been particularly challenging. Here, we introduce the concept of spatially structured activity as a means to control and manipulate transport in active nematic liquid crystals consisting of actin filaments and light-sensitive myosin motors. Simulations and experiments are used to demonstrate that topological defects can be generated at will, and then constrained to move along specified trajectories, by inducing local stresses in an otherwise passive material. These results provide a foundation for design of autonomous and reconfigurable microfluidic systems where transport is controlled by modulating activity with light.
condensed matter
We present a method to learn single-view reconstruction of the 3D shape, pose, and texture of objects from categorized natural images in a self-supervised manner. Since this is a severely ill-posed problem, carefully designing a training method and introducing constraints are essential. To avoid the difficulty of training all elements at the same time, we propose training category-specific base shapes with a fixed pose distribution and simple textures first, and subsequently training poses and textures using the obtained shapes. Another difficulty is that shapes and backgrounds sometimes become excessively complicated, causing textures on object surfaces to be reconstructed incorrectly. To suppress this, we propose using strong regularization and constraints on object surfaces and background images. With these two techniques, we demonstrate that we can use natural image collections such as CIFAR-10 and PASCAL objects for training, which indicates the possibility of realizing 3D object reconstruction on diverse object categories beyond synthetic datasets.
computer science
According to the holographic bound, there is only a finite density of degrees of freedom in space when gravity is taken into account. Conventional quantum field theory does not conform to this bound, since in this framework, infinitely many degrees of freedom may be localized to any given region of space. In this paper, we explore the viewpoint that quantum field theory may emerge from an underlying theory that is locally finite-dimensional, and we construct a locally finite-dimensional version of a Klein-Gordon scalar field using generalized Clifford algebras. Demanding that the finite-dimensional field operators obey a suitable version of the canonical commutation relations makes this construction essentially unique. We then find that enforcing local finite dimensionality in a holographically consistent way leads to a huge suppression of the quantum contribution to vacuum energy, to the point that the theoretical prediction becomes plausibly consistent with observations.
high energy physics theory
This article outlines our point of view regarding the applicability, state-of-the-art, and potential of quantum computing for problems in finance. We provide an introduction to quantum computing as well as a survey on problem classes in finance that are computationally challenging classically and for which quantum computing algorithms are promising. In the main part, we describe in detail quantum algorithms for specific applications arising in financial services, such as those involving simulation, optimization, and machine learning problems. In addition, we include demonstrations of quantum algorithms on IBM Quantum back-ends and discuss the potential benefits of quantum algorithms for problems in financial services. We conclude with a summary of technical challenges and future prospects.
quantum physics
Analytic quantifiers of the symmetric quantum discord for two-qubit X-type states and block-diagonal states, and of the symmetric measurement-induced nonlocality for any two-qubit states, are established on the basis of the quantum skew information.
quantum physics
Online data assimilation in time series models over a large spatial extent is an important problem in both geosciences and robotics. Such models are intrinsically high-dimensional, rendering traditional particle filter algorithms ineffective. Though methods that begin to address this problem exist, they either rely on additional assumptions or lead to error that is spatially inhomogeneous. I present a novel particle-based algorithm for online approximation of the filtering problem on such models, using the fact that each locus affects only nearby loci at the next time step. The algorithm is based on a Metropolis-Hastings-like MCMC for creating hybrid particles at each step. I show simulation results that suggest the error of this algorithm is uniform in both space and time, with a lower bias, though higher variance, as compared to a previously-proposed algorithm.
statistics
In biomedical research, unified access to up-to-date domain-specific knowledge is crucial, as such knowledge is continuously accumulated in scientific literature and structured resources. Identifying and extracting specific information is a challenging task and computational analysis of knowledge bases can be valuable in this direction. However, for disease-specific analyses researchers often need to compile their own datasets, integrating knowledge from different resources, or reuse existing datasets, that can be out-of-date. In this study, we propose a framework to automatically retrieve and integrate disease-specific knowledge into an up-to-date semantic graph, the iASiS Open Data Graph. This disease-specific semantic graph provides access to knowledge relevant to specific concepts and their individual aspects, in the form of concept relations and attributes. The proposed approach is implemented as an open-source framework and applied to three diseases (Lung Cancer, Dementia, and Duchenne Muscular Dystrophy). Exemplary queries are presented, investigating the potential of this automatically generated semantic graph as a basis for retrieval and analysis of disease-specific knowledge.
computer science
Complex or co-existing diseases are commonly treated using drug combinations, which can lead to a higher risk of adverse side effects. The detection of polypharmacy side effects is usually done in Phase IV clinical trials, but many remain undiscovered when the drugs are put on the market. Such adverse events have been affecting an increasing proportion of the population (now 15% in the US), and it is thus of high interest to be able to predict the potential side effects as early as possible. Systematic combinatorial screening of possible drug-drug interactions (DDI) is challenging and expensive. However, the recent significant increases in data availability from pharmaceutical research and development efforts offer a novel paradigm for recovering relevant insights for DDI prediction. Accordingly, several recent approaches focus on curating massive DDI datasets (with millions of examples) and training machine learning models on them. Here we propose a neural network architecture able to set state-of-the-art results on this task---using the type of the side effect and the molecular structure of the drugs alone---by leveraging a co-attentional mechanism. In particular, we show the importance of integrating joint information from the drug pairs early on when learning each drug's representation.
statistics
The possibility of a superradiant phase transition in light-matter systems is the subject of much debate, due to numerous apparently conflicting no-go and counter no-go theorems. Using an arbitrary gauge approach we show that a unique phase transition does occur in archetypal many-dipole cavity QED systems, and that it manifests unambiguously via a macroscopic gauge-invariant polarisation. We find that the gauge choice controls the extent to which this polarisation is included as part of the radiative quantum subsystem and thereby determines the degree to which the abnormal phase is classed as superradiant. This resolves the long-standing paradox of no-go and counter no-go theorems for superradiance, which are shown to refer to different definitions of radiation.
quantum physics
This paper considers testing a covariance matrix following a Gaussian graphical model (GGM) based on observations made at a set of distributed sensors grouped into clusters. Ordered transmissions are proposed to achieve the same Bayes risk as the optimum centralized, energy-unconstrained approach but with fewer transmissions and a completely distributed approach. In this approach, we represent the Bayes optimum test statistic as a sum of local test statistics, each of which can be calculated using only the observations available at one cluster. We select one sensor to be the cluster head (CH) to collect and summarize the observed data in each cluster, and intercluster communications are assumed to be inexpensive. The CHs with more informative observations transmit their data to the fusion center (FC) first. By halting before all transmissions have taken place, transmissions can be saved without performance loss. It is shown that this ordering approach can guarantee a lower bound on the average number of transmissions saved for any given GGM, and that the lower bound can approach approximately half the number of clusters when the minimum eigenvalue of the covariance matrix under the alternative hypothesis in each cluster becomes sufficiently large.
electrical engineering and systems science
Millimeter (mm) wave picocellular networks are a promising approach for delivering the 1000-fold capacity increase required to keep up with projected demand for wireless data: the available bandwidth is orders of magnitude larger than that in existing cellular systems, and the small carrier wavelength enables the realization of highly directive antenna arrays in compact form factor, thus drastically increasing spatial reuse. In this paper, we carry out an interference analysis for mm-wave picocells in an urban canyon with a dense deployment of base stations. Each base station sector can serve multiple simultaneous users, which implies that both intra- and inter-cell interference must be managed. We propose a \textit{cross-layer} approach to interference management based on (i) suppressing interference at the physical layer and (ii) managing the residual interference at the medium access control layer. We provide an estimate of network capacity and establish that 1000-fold increase relative to conventional LTE cellular networks is indeed feasible.
electrical engineering and systems science
Fracton models exhibit a variety of exotic properties and lie beyond the conventional framework of gapped topological order. In a previous work, we generalized the notion of gapped phase to one of foliated fracton phase by allowing the addition of layers of gapped two-dimensional resources in the adiabatic evolution between gapped three-dimensional models. Moreover, we showed that the X-cube model is a fixed point of one such phase. In this paper, according to this definition, we look for universal properties of such phases which remain invariant throughout the entire phase. We propose multi-partite entanglement quantities, generalizing the proposal of topological entanglement entropy designed for conventional topological phases. We present arguments for the universality of these quantities and show that they attain non-zero constant value in non-trivial foliated fracton phases.
condensed matter
We develop a model to describe the motional (i.e., external degree of freedom) energy spectra of atoms trapped in a one-dimensional optical lattice, taking into account both axial and radial confinement relative to the lattice axis. Our model respects the coupling between axial and radial degrees of freedom, as well as other anharmonicities inherent in the confining potential. We further demonstrate how our model can be used to characterize lattice light shifts in optical lattice clocks, including shifts due to higher multipolar (magnetic dipole and electric quadrupole) and higher order (hyperpolarizability) coupling to the lattice field. We compare results for our model with results from other lattice light shift models in the literature under similar conditions.
physics
An essential characteristic for a distribution to play a central role in limit theory is infinite divisibility. In this note, we prove that the Conway-Maxwell-Poisson (CMP) distribution is infinitely divisible iff it is the Poisson or geometric distribution. This explains why, despite its applications in a wide range of fields, there is no theoretical foundation for the CMP distribution to be a natural candidate for the law of small numbers.
mathematics
The search for artificial topological superconductivity has been limited by the stringent conditions required for its emergence. As exemplified by the recent discoveries of various correlated electronic states in twisted van der Waals materials, moir\'e patterns can act as a powerful knob to create artificial electronic structures. Here we demonstrate that a moir\'e pattern between a van der Waals superconductor and a monolayer ferromagnet creates a periodic potential modulation that enables the realization of a topological superconducting state that would not be accessible in the absence of the moir\'e. We show that the existence of a magnetic moir\'e pattern gives rise to Yu-Shiba-Rusinov minibands and periodic modulation of the Majorana edge modes that we detect using low-temperature scanning tunneling microscopy (STM) and spectroscopy (STS). Our results put forward moir\'e patterns as a powerful tool to overcome conventional constraints for topological superconductivity in van der Waals heterostructures. In a broader picture, periodic potential modulation provides a general way of controlling topological superconductivity towards the realisation of topological qubits in the future.
condensed matter
Producing or sharing Child Sexual Exploitation Material (CSEM) is a serious crime fought vigorously by Law Enforcement Agencies (LEAs). When an LEA seizes a computer from a potential producer or consumer of CSEM, they need to analyze the files on the suspect's hard disk looking for pieces of evidence. However, a manual inspection of the file content looking for CSEM is a time-consuming task. In most cases, it is unfeasible within the time available to the Spanish police under a search warrant. Instead of analyzing its content, another approach that can be used to speed up the process is to identify CSEM by analyzing the file names and their absolute paths. The main challenge for this task lies in dealing with short text distorted deliberately by the owners of this material using obfuscated words and user-defined naming patterns. This paper presents and compares two approaches based on short text classification to identify CSEM files. The first one employs two independent supervised classifiers, one for the file name and the other for the path, and their outputs are later fused into a single score. Conversely, the second approach uses only the file name classifier to iterate over the file's absolute path. Both approaches operate at the character n-gram level, while binary and orthographic features enrich the file name representation, and a binary Logistic Regression model is used for classification. The presented file classifier achieved an average class recall of 0.98. This solution could be integrated into forensic tools and services to support Law Enforcement Agencies in identifying CSEM without inspecting every file's visual content, which is computationally much more demanding.
computer science
We provide the first stochastic convergence rates for adaptive Gauss--Hermite quadrature applied to normalizing the posterior distribution in Bayesian models. Our results apply to the uniform relative error in the approximate posterior density, the coverage probabilities of approximate credible sets, and approximate moments and quantiles, therefore guaranteeing fast asymptotic convergence of approximate summary statistics used in practice. We demonstrate via simulation a simple model that matches our stochastic upper bound for small sample sizes, and apply the method in two challenging low-dimensional examples. Further, we demonstrate how adaptive quadrature can be used as a crucial component of a complex Bayesian inference procedure for high-dimensional parameters. The method is implemented and made publicly available in the \texttt{aghq} package for the \texttt{R} language.
statistics
Atomically thin crystals of transition metal dichalcogenides are ideally suited to study the interplay of light-matter coupling, polarization and magnetic field effects. In this work, we investigate the formation of exciton-polaritons in a MoSe2 monolayer, which is integrated in a fully-grown, monolithic microcavity. Due to the narrow linewidth of the polaritonic resonances, we are able to directly investigate the emerging valley Zeeman splitting of the hybrid light-matter resonances in the presence of a magnetic field. At a detuning of -54.5 meV (13.5 % matter constituent of the lower polariton branch), we find a Zeeman splitting of the lower polariton branch of 0.36 meV, which can be directly associated with an excitonic g factor of 3.94\pm0.13. Remarkably, we find that a magnetic field of 6 T is sufficient to induce a notable valley polarization of 15 % in our polariton system, which approaches 30 % at 9 T. Strikingly, this circular polarization degree of the polariton (ground) state exceeds the polarization of the exciton reservoir for equal magnetic field magnitudes by approximately 50%, as a consequence of enhanced relaxation of bosons in our monolayer-based system.
condensed matter
Context: Seasonal variations and climate stability of a planet are very sensitive to the planet's obliquity and its evolution. This is of particular interest for the emergence and sustainability of land-based life, but the orbital and rotational parameters of exoplanets are still poorly constrained. The numerical explorations usually undertaken in this situation thus contrast heavily with the uncertain nature of the available data. Aims: We aim to provide an analytical formulation of the long-term spin-axis dynamics of exoplanets, linking it directly to physical and dynamical parameters, while still giving precise quantitative results if the parameters are well known. Together with bounds for the poorly constrained parameters of exoplanets, this analysis is designed to allow a quick and straightforward exploration of the spin-axis dynamics. Methods: The long-term orbital solution is decomposed in quasi-periodic series and the spin-axis Hamiltonian is expanded in powers of eccentricity and inclination. Chaotic zones are measured by the resonance overlap criterion. Bounds for the poorly known parameters of exoplanets are obtained from physical grounds (rotational breakup) and dynamical considerations (equipartition of AMD). Results: This method gives accurate results when the orbital evolution is well known. The chaotic zones for planets of the Solar System can be retrieved in detail from simple analytical formulas. For less constrained planetary systems, the maximal extent of the chaotic regions can be computed, requiring only the mass, the semi-major axis and the eccentricity of the planets present in the system. Additionally, some estimated bounds of the precession constant allow us to classify which observed exoplanets are necessarily out of major spin-orbit secular resonances (unless the precession rate is affected by the presence of massive satellites).
astrophysics
The transition metal carbides (namely MXenes) and their functionalized derivatives exhibit various physical and chemical characteristics and offer many potential applications in electronic devices and sensors. Using density functional theory (DFT), it is revealed that the nearly free electron (NFE) states lie near the Fermi levels in hydroxyl (OH) functionalized MXenes. Most of the OH-terminated MXenes are metallic, but some of them, e.g. Sc2C(OH)2, are semiconductors in which the NFE states are conduction bands. In this paper, to investigate the NFE states in MXenes, an attractive image-potential well model is adopted. Comparing the solutions of this model with the DFT calculations, it is found that, due to the overlap of the spatially extensive wave functions of NFE states and their hybridization between the artificial neighboring layers imposed by the periodic boundary conditions (PBCs), the DFT results intrinsically represent the properties of multiple layers. Based on the DFT calculations, it is found that the energy gap widths are affected by the interlayer distances. We show that the energetics of the NFE states can be modulated by external electric fields and that it is possible to convert semiconducting MXenes into metals. This band-gap manipulation makes the OH-terminated semiconducting MXenes an excellent candidate for electronic switch applications. Finally, using a set of electron transport calculations, the I-V characteristics of Sc2C(OH)2 devices are investigated under gate voltages.
condensed matter
The radiative baryonic decay $\mathcal{B}^{*}\left(\frac{3}{2}\right)\to\mathcal{B}\left(\frac{1}{2}\right)+\gamma$ is a magnetic dipole $(M1)$ transition. It requires the transition magnetic moment $\mu_{\mathcal{B}\left(3/2\right)\to\mathcal{B}\left(1/2\right)}$. The transition magnetic moments for the helicities $1/2$ and $3/2$ are evaluated in the framework of the constituent quark model, in which the intrinsic spin and the magnetic moments of the quarks $u,d$ and $s$ play a key role. Within this framework, the radiative decays $\Delta^{+}\to p +\gamma$, $\Sigma^{*0}\to \Lambda+\gamma$, $\Sigma^{*+}\to \Sigma^{+}+\gamma$ and $\Xi^{*0}\to \Xi^{0}+\gamma$ are analyzed in detail. The branching ratios for these decays are found to be in good agreement with the corresponding experimental values.
high energy physics phenomenology
Recently the superconductor and topological semimetal PbTaSe$_2$ was experimentally found to exhibit surface-only lattice rotational symmetry breaking below $T_c$. We exploit the Ginzburg-Landau free energy and propose a microscopic two-channel model to study possible superconducting states on the surface of PbTaSe$_2$. We identify two types of topological superconducting states. One is time-reversal invariant and preserves the lattice hexagonal symmetry while the other breaks both symmetries. We find that such time-reversal symmetry breaking is unavoidable for a superconducting state in a two dimensional irreducible representation of crystal point group in a system where the spatial inversion symmetry is broken and the strong spin-orbit coupling is present. Our findings will guide the search for topological chiral superconductors.
condensed matter
We present an efficient method to shorten the analytic integration-by-parts (IBP) reduction coefficients of multi-loop Feynman integrals. For our approach, we develop an improved version of Leinartas' multivariate partial fraction algorithm, and provide a modern implementation based on the computer algebra system Singular. Furthermore, we observe that for an integral basis with uniform transcendental (UT) weights, the denominators of IBP reduction coefficients with respect to the UT basis are either symbol letters or polynomials purely in the spacetime dimension $D$. With a UT basis, the partial fraction algorithm is more efficient both in performance and in size reduction. We show that in complicated examples where a UT basis exists, the size of the IBP reduction coefficients can be reduced by a factor as large as $\sim 100$. We observe that our algorithm also works well for settings without a UT basis.
high energy physics phenomenology
The $\cal{N}=4$ supersymmetric spinning particle admits several consistent quantizations, related to the gauging of different subgroups of the $SO(4)$ $R$-symmetry on the worldline. We construct the background independent BRST quantization for all of these choices, which are shown to reproduce either the massless NS-NS spectrum of the string, or Einstein theory with or without the antisymmetric tensor field and/or dilaton, corresponding to different restrictions. Quantum consistency of the worldline implies equations of motion for the background which, in addition to the admissible string backgrounds, admit Einstein manifolds with or without a cosmological constant. The vertex operators for the Kalb-Ramond, graviton and dilaton fields are obtained from the linear variations of the BRST charge. They produce the physical states by action on the diffeomorphism ghost states.
high energy physics theory
We use conformal field theory to construct model wavefunctions for a gapless interface between lattice versions of a bosonic Laughlin state and a fermionic Moore-Read state, both at $\nu=1/2$. The properties of the resulting model state, such as particle density, correlation function and R\'enyi entanglement entropy are then studied using the Monte Carlo approach. Moreover, we construct the wavefunctions also for anyonic excitations (quasiparticles and quasiholes). We study their density profile, charge and statistics. We show that, similarly to the Laughlin-Laughlin case studied earlier, some anyons (the Laughlin Abelian ones) can cross the interface, while others (the non-Abelian ones) lose their anyonic character in such a process. Also, we argue that, under an assumption of local particle exchange, multiple interfaces give rise to a topological degeneracy, which can be interpreted as originating from Majorana zero modes.
condensed matter
We revisit the physics of neutrino magnetic moments, focusing in particular on the case where the right-handed, or sterile, neutrinos are heavier (up to several MeV) than the left-handed Standard Model neutrinos. The discussion is centered around the idea of detecting an upscattering event mediated by a transition magnetic moment in a neutrino or dark matter experiment. Considering neutrinos from all known sources, as well as including all available data from XENON1T and Borexino, we derive the strongest up-to-date exclusion limits on the active-to-sterile neutrino transition magnetic moment. We then study complementary constraints from astrophysics and cosmology, performing, in particular, a thorough analysis of BBN. We find that these data sets scrutinize most of the relevant parameter space. Explaining the XENON1T excess with transition magnetic moments is marginally possible if conservative assumptions are adopted regarding the supernova 1987A and CMB constraints. Finally, we discuss model-building challenges that arise in scenarios that feature large magnetic moments while keeping neutrino masses well below 1 eV. We present a successful ultraviolet-complete model of this type based on TeV-scale leptoquarks, establishing links with muon magnetic moment, B physics anomalies, and collider searches at the LHC.
high energy physics phenomenology
Reservoir Computing (RC) is a popular methodology for the efficient design of Recurrent Neural Networks (RNNs). Recently, the advantages of the RC approach have been extended to the context of multi-layered RNNs, with the introduction of the Deep Echo State Network (DeepESN) model. In this paper, we study the quality of state dynamics in progressively higher layers of DeepESNs, using tools from the areas of information theory and numerical analysis. Our experimental results on RC benchmark datasets reveal the fundamental role played by the strength of inter-reservoir connections to increasingly enrich the representations developed in higher layers. Our analysis also gives interesting insights into the possibility of effective exploitation of training algorithms based on stochastic gradient descent in the RC field.
computer science
We present the results from a search for gravitational-wave transients associated with core-collapse supernovae observed within a source distance of approximately 20 Mpc during the first and second observing runs of Advanced LIGO and Advanced Virgo. No significant gravitational-wave candidate was detected. We report the detection efficiencies as a function of the distance for waveforms derived from multidimensional numerical simulations and phenomenological extreme emission models. For neutrino-driven explosions the distance at which we reach 50% detection efficiency approaches 5 kpc, and for magnetorotationally-driven explosions it is up to 54 kpc. However, waveforms for extreme emission models are detectable up to 28 Mpc. For the first time, the gravitational-wave data enabled us to exclude part of the parameter spaces of two extreme emission models with confidence up to 83%, limited by coincident data coverage. In addition, using ad hoc harmonic signals windowed with Gaussian envelopes, we constrained the gravitational-wave energy emitted during core-collapse at the levels of $4.27\times 10^{-4}\,M_\odot c^2$ and $1.28\times 10^{-1}\,M_\odot c^2$ for emissions at 235 Hz and 1304 Hz respectively. These constraints are two orders of magnitude more stringent than previously derived in the corresponding analysis using initial LIGO, initial Virgo and GEO 600 data.
astrophysics
We study aspects of anti-de Sitter space in the context of the Swampland. In particular, we conjecture that the near-flat limit of pure AdS belongs to the Swampland, as it is necessarily accompanied by an infinite tower of light states. The mass of the tower is power-law in the cosmological constant, with a power of $\frac{1}{2}$ for the supersymmetric case. We discuss relations between this behaviour and other Swampland conjectures such as the censorship of an unbounded number of massless fields, and the refined de Sitter conjecture. Moreover, we propose that changes to the AdS radius have an interpretation in terms of a generalised distance conjecture which associates a distance to variations of all fields. In this framework, we argue that the distance to the $\Lambda \rightarrow 0$ limit of AdS is infinite, leading to the light tower of states. We also discuss implications of the conjecture for de Sitter space.
high energy physics theory
This paper deals with a situation in which one is interested in the dependence structure of a multidimensional response variable in the presence of a multivariate covariate. It is assumed that the covariate affects only the marginal distributions through regression models, while the dependence structure, which is described by a copula, is unaffected. A parametric estimation of the copula function is considered with focus on the maximum pseudo-likelihood method. It is proved that under some appropriate regularity assumptions the estimator calculated from the residuals is asymptotically equivalent to the estimator based on the unobserved errors. In such a case one can ignore the fact that the response is first adjusted for the effect of the covariate. A Monte Carlo simulation study explores (among others) situations where the regularity assumptions are not satisfied and the claimed result does not hold. It shows that in such situations the maximum pseudo-likelihood estimator may behave poorly and the moment estimation of the copula parameter is of interest. Our results complement the results available for nonparametric estimation of the copula function.
mathematics
Historically, the implementation of research-based assessments (RBAs) has been a driver of educational change within physics and helped motivate adoption of interactive engagement pedagogies. Until recently, RBAs were given to students exclusively on paper and in-class; however, this approach has important drawbacks including decentralized data collection and the need to sacrifice class time. Recently, some RBAs have been moved to online platforms to address these limitations. Yet, online RBAs present new concerns such as student participation rates, test security, and students' use of outside resources. Here, we report on a study addressing these concerns in both upper-division and lower-division undergraduate physics courses. We gave RBAs to courses at five institutions; the RBAs were hosted online and featured embedded JavaScript code which collected information on students' behaviors (e.g., copying text, printing). With these data, we examine the prevalence of these behaviors, and their correlation with students' scores, to determine if online and paper-based RBAs are comparable. We find that browser loss of focus is the most common online behavior while copying and printing events were rarer. We found that correlations between these behaviors and student performance varied significantly between introductory and upper-division student populations, particularly with respect to the impact of students copying text in order to utilize internet resources. However, while the majority of students engaged in one or more of the targeted online behaviors, we found that, for our sample, none of these behaviors resulted in a significant change in the population's average performance that would threaten our ability to interpret this performance or compare it to paper-based implementations of the RBA.
physics
The Symmetries of Feynman Integrals (SFI) method is extended for the first time to incorporate an irreducible numerator. This is done in the context of the so-called vacuum and propagator seagull diagrams, which have 3 and 2 loops, respectively, and both have a single irreducible numerator. For this purpose, an extended version of SFI (xSFI) is developed. For the seagull diagrams with general masses, the SFI equation system is found to extend by two additional equations. The first is a recursion equation in the numerator power, which has an alternative form as a differential equation for the generating function. The second equation applies only to the propagator seagull and does not involve the numerator. We solve the equation system in two cases: over the singular locus and in a certain 3 scale sector where we obtain novel closed-form evaluations and epsilon expansions, thereby extending previous results for the numerator-free case.
high energy physics theory
We present a novel paradigm that allows us to define a composite theory at the electroweak scale that is well defined all the way up to any energy by means of safety in the UV. The theory flows from a complete UV fixed point to an IR fixed point for the strong dynamics (which gives the desired walking) before generating a mass gap at the TeV scale. We discuss two models featuring a composite Higgs, Dark Matter and partial compositeness for all SM fermions. The UV theories can also be embedded in a Pati-Salam partial unification, thus removing the instability generated by the $\mbox{U}(1)$ running. Finally, we find a Dark Matter candidate still allowed at masses of $260$ GeV, or $1.5 \sim 2$ TeV, where the latter mass range will be covered by next generation direct detection experiments.
high energy physics phenomenology
Motivated by Vafa's model, we study the $tt^{*}$ geometry of a degenerate class of FQHE models with an abelian group of symmetry acting transitively on the classical vacua. Although it is not relevant for the phenomenology of the FQHE, this class of theories has interesting mathematical properties. We find that these models are parametrized by the family of modular curves $Y_{1}(N)= \mathbb{H}/\Gamma_{1}(N)$, labelled by an integer $N\geq 2$. Each point of the space of level $N$ is in correspondence with a one-dimensional $\mathcal{N}=4$ Landau-Ginzburg theory, which is defined on an elliptic curve with $N$ vacua and $N$ poles in the fundamental cell. The modular curve $Y(N)= \mathbb{H}/\Gamma(N)$ is a cover of degree $N$ of $Y_{1}(N)$ and plays the role of spectral cover for the space of models. The presence of an abelian symmetry allows us to diagonalize the Berry connection of the vacuum bundle, and the $tt^{*}$ equations turn out to be the well-known $\hat{A}_{N-1}$ Toda equations. The underlying structure of the modular curves and the connection between geometry and number theory emerge clearly when we study the modular properties and classify the critical limits of these models.
high energy physics theory
We study active feature selection, a novel feature selection setting in which unlabeled data is available, but the budget for labels is limited, and the examples to label can be actively selected by the algorithm. We focus on feature selection using the classical mutual information criterion, which selects the $k$ features with the largest mutual information with the label. In the active feature selection setting, the goal is to use significantly fewer labels than the data set size and still find $k$ features whose mutual information with the label based on the \emph{entire} data set is large. We explain and experimentally study the choices made in the algorithm, and show that they lead to a successful method compared to other, more naive approaches. Our design draws on insights which relate the problem of active feature selection to the study of pure-exploration multi-armed bandits settings. While we focus here on mutual information, our general methodology can be adapted to other feature-quality measures as well. The code is available at the following url: https://github.com/ShacharSchnapp/ActiveFeatureSelection.
computer science
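The mutual-information criterion that the abstract above builds on can be sketched directly: score each feature by its empirical mutual information with the label and keep the top $k$. This is only the passive variant on fully labeled discrete data; the paper's contribution, actively choosing which examples to label, is not reproduced here, and the function names are illustrative.

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def select_top_k(features, labels, k):
    """Rank feature columns by MI with the label and keep the k best indices."""
    scored = [(mutual_information(col, labels), j) for j, col in enumerate(features)]
    scored.sort(reverse=True)
    return [j for _, j in scored[:k]]

# Toy data: feature 0 copies the label, feature 1 is constant (uninformative).
labels = [0, 1, 0, 1, 0, 1]
features = [[0, 1, 0, 1, 0, 1],
            [1, 1, 1, 1, 1, 1]]
print(select_top_k(features, labels, 1))  # -> [0]
```

The active setting replaces the exact counts above with estimates from a small, adaptively chosen labeled subset, which is where the multi-armed-bandit connection enters.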
We consider the online $k$-taxi problem, a generalization of the $k$-server problem, in which $k$ servers are located in a metric space. A sequence of requests is revealed one by one, where each request is a pair of two points, representing the start and destination of a travel request by a passenger. The goal is to serve all requests while minimizing the distance traveled without carrying a passenger. We show that the classic Double Coverage algorithm has competitive ratio $2^k-1$ on HSTs, matching a recent lower bound for deterministic algorithms. For bounded depth HSTs, the competitive ratio turns out to be much better and we obtain tight bounds. When the depth is $d\ll k$, these bounds are approximately $k^d/d!$. By standard embedding results, we obtain a randomized algorithm for arbitrary $n$-point metrics with (polynomial) competitive ratio $O(k^c\Delta^{1/c}\log_{\Delta} n)$, where $\Delta$ is the aspect ratio and $c\ge 1$ is an arbitrary positive integer constant. The only previous known bound was $O(2^k\log n)$. For general (weighted) tree metrics, we prove the competitive ratio of Double Coverage to be $\Theta(k^d)$ for any fixed depth $d$, but unlike on HSTs it is not bounded by $2^k-1$. We obtain our results by a dual fitting analysis where the dual solution is constructed step-by-step backwards in time. Unlike the forward-time approach typical of online primal-dual analyses, this allows us to combine information from the past and the future when assigning dual variables. We believe this method can be useful also for other problems. Using this technique, we also provide a dual fitting proof of the $k$-competitiveness of Double Coverage for the $k$-server problem on trees.
computer science
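The Double Coverage rule analysed in the abstract above is easy to state in its classic $k$-server form on the real line: if a request falls between two servers, both move toward it at equal speed until one arrives; if it falls outside the convex hull, only the nearest server moves. The sketch below covers just this special case (the $k$-taxi start/destination structure and tree metrics are omitted), and the function name is illustrative.

```python
def double_coverage_line(servers, requests):
    """Serve point requests on the real line with the Double Coverage rule.

    servers: list of initial server positions; requests: list of points.
    Returns the total distance moved (the online cost)."""
    servers = sorted(servers)
    cost = 0.0
    for r in requests:
        if r in servers:
            continue  # a server already covers the request
        left = [s for s in servers if s < r]
        right = [s for s in servers if s > r]
        if left and right:
            # Request lies between two servers: both move distance d toward
            # it, and the nearer one lands exactly on r.
            a, b = max(left), min(right)
            d = min(r - a, b - r)
            servers.remove(a)
            servers.remove(b)
            servers += [a + d, b - d]
            cost += 2 * d
        else:
            # Request outside the convex hull: only the nearest server moves.
            s = max(left) if left else min(right)
            servers.remove(s)
            servers.append(r)
            cost += abs(r - s)
        servers.sort()
    return cost

print(double_coverage_line([0.0, 10.0], [4.0]))  # -> 8.0 (both servers move 4)
```

The "wasted" motion of the second server is what the dual-fitting analysis in the paper has to charge against the adversary's cost.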
Free-space optical communication is emerging as a low-power, low-cost, and high data rate alternative to radio-frequency communication in short- to medium-range applications. However, it requires a close-to-line-of-sight link between the transmitter and the receiver. This paper proposes a robust $\mathcal{H}_{\infty}$ control law for free-space optical (FSO) beam pointing error systems under controlled weak turbulence conditions. The objective is to maintain the transmitter-receiver line of sight, i.e., to keep the center of the optical beam as close as possible to the center of the receiving aperture, within a prescribed disturbance attenuation level. First, we derive an augmented nonlinear discrete-time model for the pointing error loss due to misalignment caused by weak atmospheric turbulence. We then investigate the $\mathcal{H}_{\infty}$-norm optimization problem that guarantees that the closed-loop pointing error is stable and ensures the prescribed weak-disturbance attenuation. Furthermore, we evaluate the closed-loop outage probability and bit error rate (BER) that quantify free-space optical communication performance in fading channels. Finally, the paper concludes with a numerical simulation of the proposed approach to the FSO link's error performance.
electrical engineering and systems science
In this letter we consider two different models of our present universe, each consisting of a different set of two separate fluids. In each set, the first fluid accounts for the late-time acceleration and the second is a barotropic fluid. The former model considers our present universe to be homogeneously filled with a Generalized Chaplygin Gas interacting with a barotropic fluid. The latter model instead attributes the cosmic acceleration to a Modified Chaplygin Gas interacting with matter described by a barotropic equation of state. For both models, we take the interaction term to vary proportionally with the Hubble parameter as well as with the energy density of the exotic matter/dark energy. We find an explicit functional form of the energy density of the cosmos, which depends on different cosmological parameters such as the scale factor, the dark energy and barotropic fluid EoS parameters, and other constants such as the interaction constants. We draw curves of the effective EoS and of different cosmological parameters, such as the deceleration parameter $q$ and the statefinder parameters $r$ and $s$, with respect to the redshift $z$ (for different values of the dark energy and barotropic fluid parameters) and study them thoroughly. We compare the two models as well as the nature of their dependence on the interaction coefficients. We point out the particular redshift at which the universe may transit from a decelerating to an accelerating phase, and tally all these values with different observational data. We also analyse how this particular redshift changes for different values of the interaction coefficients and for different dark energy models.
physics
This is the second of two papers on the injective spectrum of a right noetherian ring. In the prequel, we considered the injective spectrum as a topological space associated to a ring (or, more generally, a Grothendieck category), which generalises the Zariski spectrum. We established some results about the topology and its links with Krull dimension, and computed a number of examples. In the present paper, which can largely be read independently of the first, we extend these results by defining a sheaf of rings on the injective spectrum and considering sheaves of modules over this structure sheaf and their relation to modules over the original ring. We then explore links with the spectrum of prime torsion theories developed by Golan and use this torsion-theoretic viewpoint to prove further results about the topology.
mathematics
In a multipartite scenario quantum entanglement manifests its most dramatic form when the state is genuinely entangled. Such a state is more beneficial for information theoretic applications if it contains distillable entanglement in every bipartition. It is, therefore, of significant operational interest to identify subspaces of multipartite quantum systems that contain such properties a priori. In this letter, we introduce the notion of unextendible biseparable bases (UBBs), which provide an adequate method to construct genuinely entangled subspaces (GESs). We provide an explicit construction of two types of UBBs -- party symmetric and party asymmetric -- for every $3$-{\it qudit} quantum system with local dimension $d\ge 3$. Further, we show that the GES resulting from the symmetric construction is indeed a {\it bidistillable} subspace, i.e., all the states supported on it contain distillable entanglement across every bipartition.
quantum physics
Markov chain Monte Carlo (MCMC) is a powerful methodology for the approximation of posterior distributions. However, the iterative nature of MCMC does not naturally facilitate its use with modern highly parallelisable computation on HPC and cloud environments. Another concern is the identification of the bias and Monte Carlo error of produced averages. The above have prompted the recent development of fully (`embarrassingly') parallelisable unbiased Monte Carlo methodology based on couplings of MCMC algorithms. A caveat is that formulation of effective couplings is typically not trivial and requires model-specific technical effort. We propose couplings of sequential Monte Carlo (SMC) by considering adaptive SMC to approximate complex, high-dimensional posteriors combined with recent advances in unbiased estimation for state-space models. Coupling is then achieved at the SMC level and is, in general, not problem-specific. The resulting methodology enjoys desirable theoretical properties. We illustrate the effectiveness of the algorithm via application to two statistical models in high dimensions: (i) horseshoe regression; (ii) Gaussian graphical models.
statistics
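The coupled-chain estimators discussed in the abstract above rest on one basic ingredient: a maximal coupling of two transition kernels, i.e., a joint draw $(X, Y)$ with the correct marginals that maximises $P(X = Y)$, so that the two chains can meet exactly. Below is a minimal sketch of the standard rejection coupler for two unit-variance Gaussians; it is a textbook building block, not the paper's SMC-level construction, and the function names are illustrative.

```python
import math
import random

def normal_pdf(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def maximal_coupling(mu_p, mu_q, rng=random):
    """Sample (X, Y) with X ~ N(mu_p, 1), Y ~ N(mu_q, 1), maximising P(X == Y).

    Standard rejection coupler: draw X from p; accept X as Y with
    probability min(1, q(X)/p(X)); otherwise draw Y from the residual of q."""
    x = rng.gauss(mu_p, 1.0)
    if rng.random() * normal_pdf(x, mu_p) <= normal_pdf(x, mu_q):
        return x, x  # the chains meet
    while True:
        y = rng.gauss(mu_q, 1.0)
        if rng.random() * normal_pdf(y, mu_q) > normal_pdf(y, mu_p):
            return x, y

# When the two marginals coincide, the coupler always makes the draws meet.
x, y = maximal_coupling(0.0, 0.0)
print(x == y)  # -> True
```

In an unbiased MCMC scheme, one chain is run with a lag, the pair evolves under such a coupling of the transition kernels until it meets, and the (random) meeting time yields the bias-correction term; achieving this at the SMC level without problem-specific couplings is the point of the abstract above.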