In this paper, we construct the supersymmetric spinning polynomials. These are orthogonal polynomials that serve as an expansion basis for the residue or discontinuity of four-point scattering amplitudes, respecting four-dimensional super Poincare invariance. The polynomials are constructed by gluing on-shell supersymmetric three-point amplitudes of one massive and two massless multiplets, and are identified with algebraic Jacobi polynomials. Equipped with these, we construct the supersymmetric EFThedron, which geometrically defines the allowed region of Wilson coefficients respecting UV unitarity and super Poincare invariance.
high energy physics theory
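Since the polynomials above are identified with Jacobi polynomials, a quick generic check of Jacobi-polynomial orthogonality (standard textbook material evaluated with SciPy, not the paper's supersymmetric construction) may help fix conventions:

```python
from scipy.integrate import quad
from scipy.special import eval_jacobi

# Jacobi polynomials P_n^{(a,b)} are orthogonal on [-1, 1] with respect
# to the weight (1 - x)^a (1 + x)^b.  Illustrative parameter choice:
a, b = 1.0, 2.0

def inner(m, n):
    """Weighted inner product <P_m, P_n> on [-1, 1]."""
    integrand = lambda x: ((1 - x)**a * (1 + x)**b
                           * eval_jacobi(m, a, b, x)
                           * eval_jacobi(n, a, b, x))
    value, _ = quad(integrand, -1.0, 1.0)
    return value

off_diag = inner(2, 3)   # distinct degrees: should vanish
diag = inner(2, 2)       # squared norm: strictly positive
print(off_diag, diag)
```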
Social and behavioral scientists are increasingly employing technologies such as fMRI, smartphones, and gene sequencing, which yield 'high-dimensional' datasets with more columns than rows. There is increasing interest, but little substantive theory, in the role the variables in these data play in known processes. This necessitates exploratory mediation analysis, for which structural equation modeling is the benchmark method. However, this method cannot perform mediation analysis with more variables than observations. One option is to run a series of univariate mediation models, which incorrectly assumes independence of the mediators. Another option is regularization, but the available implementations may lead to high false positive rates. In this paper, we develop a hybrid approach which uses components of both filter and regularization: the 'Coordinate-wise Mediation Filter'. It performs filtering conditional on the other selected mediators. We show through simulation that it improves performance over existing methods. Finally, we provide an empirical example, showing how our method may be used for epigenetic research.
statistics
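The abstract does not spell out the Coordinate-wise Mediation Filter itself; the sketch below is a heavily simplified toy version of conditional, coordinate-wise mediator screening on synthetic data. The scoring rule and thresholds are our own assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 10

# Synthetic data: X -> M[:, 0] -> Y is the only true mediation path.
X = rng.normal(size=n)
M = rng.normal(size=(n, p))
M[:, 0] += 1.0 * X                              # exposure drives mediator 0
Y = 1.0 * M[:, 0] + 0.1 * rng.normal(size=n)    # mediator 0 drives outcome

def cmf_sketch(X, M, Y, thresh=0.3):
    """Toy coordinate-wise screening: keep mediator j only if it is
    associated with X *and* with Y conditional on the mediators already
    selected (here: Y residualized on the selected set)."""
    selected = []
    resid = Y - Y.mean()
    for j in range(M.shape[1]):
        a = abs(np.corrcoef(X, M[:, j])[0, 1])      # X -> M_j path
        b = abs(np.corrcoef(resid, M[:, j])[0, 1])  # M_j -> Y | selected
        if a > thresh and b > thresh:
            selected.append(j)
            # Remove the selected mediator's contribution from the residual.
            beta = np.dot(resid, M[:, j]) / np.dot(M[:, j], M[:, j])
            resid = resid - beta * M[:, j]
    return selected

sel = cmf_sketch(X, M, Y)
print(sel)   # only the true mediator should survive the filter
```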
A few-layer palladium diselenide (PdSe2) field effect transistor is studied under external stimuli such as electrical and optical fields, electron irradiation and gas pressure. We observe ambipolar conduction and hysteresis in the transfer curves of the unprotected, as-exfoliated PdSe2 material. We tune the ambipolar conduction and its hysteretic behavior in air and pure nitrogen environments. The prevailing p-type transport observed at room pressure is reversibly turned into dominant n-type conduction by reducing the pressure, which can simultaneously suppress the hysteresis. The pressure control can be exploited to symmetrize and stabilize the transfer characteristic of the device as required in high-performance logic circuits. The transistor is immune from short channel effects but is affected by trap states with characteristic times on the order of minutes. The channel conductance, dramatically reduced by the electron irradiation during scanning electron microscope imaging, is restored after several minutes of annealing at room temperature. The work paves the way toward the exploitation of PdSe2 in electronic devices by providing an experiment-based and deeper understanding of charge transport in PdSe2 transistors subjected to electrical stress and other external agents.
condensed matter
The $f$-sum rule and the Kohn formula are well-established general constraints on the electric conductivity in quantum many-body systems. We present their generalization to non-linear conductivities at all orders of the response in a unified manner, by considering two limiting quantum time-evolution processes: a quench process and an adiabatic process. Our generalized formulas are valid in any stationary state, including the ground state and finite temperature Gibbs states, regardless of the details of the system such as the specific form of the kinetic term, the strength of the many-body interactions, or the presence of disorders.
condensed matter
We realize mode-multiplexed full-field reconstruction over six spatial and polarization modes after 30-km multimode fiber transmission using intensity-only measurements without any optical carrier or local oscillator at the receiver or transmitter. The receiver's capabilities to cope with modal dispersion and mode dependent loss are experimentally demonstrated.
electrical engineering and systems science
We give a pedagogical introduction to time-independent scattering theory in one dimension focusing on the basic properties and recent applications of transfer matrices. In particular, we begin by surveying some basic notions of potential scattering such as the transfer matrix and its analyticity, multi-delta-function and locally periodic potentials, Jost solutions, spectral singularities and their time-reversal, and unidirectional reflectionlessness and invisibility. We then offer a simple derivation of the Lippmann-Schwinger equation and Born series, and discuss the Born approximation. Next, we outline a recently developed dynamical formulation of time-independent scattering theory in one dimension. This formulation relates the transfer matrix and therefore the solution of the scattering problem for a given potential to the solution of the time-dependent Schr\"odinger equation for an effective non-unitary two-level quantum system. We provide a self-contained treatment of this formulation and some of its most important applications. Specifically, we use it to devise a powerful alternative to the Born series and Born approximation, derive dynamical equations for the reflection and transmission amplitudes, discuss their application in constructing exact tunable unidirectionally invisible potentials, and use them to provide an exact solution for single-mode inverse scattering problems. The latter, which has important applications in designing optical devices with a variety of functionalities, amounts to providing an explicit construction for a finite-range complex potential whose reflection and transmission amplitudes take arbitrary prescribed values at any given wavenumber.
quantum physics
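The transfer-matrix formalism surveyed above can be made concrete with a short numerical sketch. The code below builds the transfer matrix of a square barrier from interface and propagation matrices (one common convention, with hbar = m = 1; the conventions in the survey may differ) and checks flux conservation for a real potential.

```python
import numpy as np

def interface(k1, k2):
    """Match psi and psi' across a step where the wavenumber jumps k1 -> k2."""
    r = k1 / k2
    return 0.5 * np.array([[1 + r, 1 - r],
                           [1 - r, 1 + r]], dtype=complex)

def propagate(k, d):
    """Free propagation of (right, left)-moving amplitudes over a width d."""
    return np.array([[np.exp(1j * k * d), 0],
                     [0, np.exp(-1j * k * d)]], dtype=complex)

def barrier_rt(E, V0, d):
    """Reflection/transmission amplitudes for a square barrier (hbar = m = 1)."""
    k = np.sqrt(2 * E + 0j)            # wavenumber outside the barrier
    q = np.sqrt(2 * (E - V0) + 0j)     # imaginary under the barrier (E < V0)
    M = interface(q, k) @ propagate(q, d) @ interface(k, q)
    r = -M[1, 0] / M[1, 1]             # from B_right = 0 (incidence from left)
    t = np.linalg.det(M) / M[1, 1]     # det(M) = 1 for equal asymptotic k
    return r, t

r, t = barrier_rt(E=1.0, V0=2.0, d=1.0)   # tunneling regime
print(abs(r)**2 + abs(t)**2)               # unitarity: ~1 for a real potential
```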
We study quantum tomography from a continuous measurement record obtained by measuring expectation values of a set of Hermitian operators generated by unitary evolution of an initial observable. For this purpose, we consider the application of a random unitary, diagonal in a fixed basis at each time step, and quantify the information gain in tomography using the Fisher information of the measurement record and the Shannon entropy associated with the eigenvalues of the covariance matrix of the estimation. Surprisingly, very high fidelity of reconstruction is obtained using random unitaries diagonal in a fixed basis even though the measurement record is not informationally complete. We then compare this with the information generated and fidelities obtained by application of a different Haar random unitary at each time step. We give an upper bound on the maximal information that can be obtained in tomography and show that a covariance matrix taken from the Wishart-Laguerre ensemble of random matrices and the associated Marchenko-Pastur distribution saturates this bound. We find that physically, this corresponds to an application of a different Haar random unitary at each time step. We show that repeated application of random diagonal unitaries gives a covariance matrix in tomographic estimation that corresponds to a new ensemble of random matrices. We analytically and numerically estimate the eigenvalues of this ensemble and show the information gain to be bounded from below by the Porter-Thomas distribution.
quantum physics
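As a standalone illustration of the random-matrix statement above (not the tomography protocol itself), one can sample a covariance matrix from the Wishart-Laguerre ensemble and check that its spectrum lies within the Marchenko-Pastur support:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 400, 1600                 # matrix dimension and number of samples
c = N / T                        # aspect ratio of the data matrix

# Wishart-Laguerre sample covariance: W = X X^T / T with Gaussian entries.
X = rng.normal(size=(N, T))
W = X @ X.T / T
eigs = np.linalg.eigvalsh(W)

# Marchenko-Pastur support edges for unit-variance entries.
lam_minus = (1 - np.sqrt(c))**2
lam_plus = (1 + np.sqrt(c))**2
print(eigs.min(), eigs.max(), lam_minus, lam_plus)
```

At finite N the extreme eigenvalues fluctuate slightly around the edges (Tracy-Widom scale), so the comparison should allow a small margin.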
We consider conformal and 't Hooft anomalies in six-dimensional ${\cal N}=(1,0)$ superconformal field theories, focusing on those conformal anomalies that determine the two- and three-point functions of conserved flavor and $SU(2)_R$ currents, as well as stress tensors. By analyzing these correlators in superspace, we explain why the number of independent conformal anomalies is reduced in supersymmetric theories. For instance, non-supersymmetric CFTs in six dimensions have three independent conformal $c$-anomalies, which determine the stress-tensor two- and three-point functions, but in superconformal theories the three $c$-anomalies are subject to a linear constraint. We also describe anomaly multiplet relations, which express the conformal anomalies of a superconformal theory in terms of its 't Hooft anomalies. Following earlier work on the conformal $a$-anomaly, we argue for these relations by considering the supersymmetric dilaton effective action on the tensor branch of such a theory. We illustrate the utility of these anomaly multiplet relations by presenting exact results for conformal anomalies, and hence current and stress-tensor correlators, in several interacting examples.
high energy physics theory
We construct a series of one-dimensional non-unitary dynamics consisting of both unitary and imaginary evolutions based on the Sachdev-Ye-Kitaev model. Starting from a short-range entangled state, we analyze the entanglement dynamics using the path integral formalism in the large $N$ limit. Among all the results that we obtain, two of them are particularly interesting: (1) By varying the strength of the imaginary evolution, the interacting model exhibits a first order phase transition from the highly entangled volume law phase to an area law phase; (2) The one-dimensional free fermion model displays an extensive critical regime with emergent two-dimensional conformal symmetry.
condensed matter
We elucidate the mechanism by which a Mott insulator transforms into a non-Fermi liquid metal upon increasing disorder at half filling. By correlating maps of the local density of states, the local magnetization and the local bond conductivity, we find a collapse of the Mott gap toward a V-shape pseudogapped density of states that occurs concomitantly with the decrease of magnetism around the highly disordered sites but an increase of bond conductivity. These metallic regions percolate to form an emergent non-Fermi liquid phase with a conductivity that increases with temperature. Bond conductivity measured via local microwave impedance combined with charge and spin local spectroscopies are ideal tools to corroborate our predictions.
condensed matter
The neutrino oscillation probabilities in vacuum and matter are discussed, considering the framework of three active neutrinos and one light sterile neutrino. We study in detail the rephasing invariants and CP asymmetry observables, and investigate the four-neutrino oscillations in long-baseline neutrino experiments, such as DUNE, NO$\nu$A and T2HK. Our results show that the matter effect significantly enhances the oscillation probabilities of the electron neutrino and electron antineutrino appearance channels within a certain energy range, while no considerable change arises in the CP asymmetry analysis due to the matter effect. Moreover, the separation between the results with and without the sterile neutrino is not so significant, and it is also affected by the CP-violating phases. Comparing the results for these three experiments, all of them have similar features; nevertheless, the sizes and separations of the oscillation probabilities in DUNE are a bit larger.
high energy physics phenomenology
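For orientation only: the paper works in a 3+1 framework with matter effects, but the familiar two-flavor vacuum formula already shows the L/E dependence at stake. The parameter values below are illustrative assumptions, not the paper's fit.

```python
import numpy as np

def p_oscillation(L_km, E_GeV, dm2_ev2, sin2_2theta):
    """Two-flavor appearance probability in vacuum:
    P = sin^2(2 theta) * sin^2(1.267 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * np.sin(1.267 * dm2_ev2 * L_km / E_GeV)**2

# DUNE-like baseline and energy, atmospheric mass splitting,
# sin^2(2 theta) ~ 0.085 as a rough nu_e-appearance-scale amplitude.
P = p_oscillation(1300.0, 2.5, 2.5e-3, 0.085)
print(P)
```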
This report is a survey of the different autonomous driving datasets which have been published up to date. The first section introduces the many sensor types used in autonomous driving datasets. The second section investigates the calibration and synchronization procedure required to generate accurate data. The third section describes the diverse driving tasks explored by the datasets. Finally, the fourth section provides comprehensive lists of datasets, mainly in the form of tables.
computer science
In 1971 I announced what I described as a nice proof of Tychonoff's Theorem, an immediate corollary of a result concerning closed projections combined with Mrowka's characterization of compactness: a space X is compact if and only if for each space Y the projection from X x Y to Y is closed. I described the proof as "to appear", but to date it has not appeared. In 2019 I published a generalization of the stronger closed projection result which yields a different look to the proof. Both versions are presented here.
mathematics
We calculate all contributions $\propto T_F$ to the polarized three-loop anomalous dimensions in the M-scheme using massive operator matrix elements and compare to results in the literature. This includes the complete anomalous dimensions $\gamma_{qq}^{(2),\rm PS}$ and $\gamma_{qg}^{(2)}$. We also obtain the complete two-loop polarized anomalous dimensions in an independent calculation. While for most of the anomalous dimensions the usual direct computation methods in Mellin $N$-space can be applied since all recurrences factorize at first order, this is not the case for $\gamma_{qg}^{(2)}$. Due to the necessity of deeper expansions of the master integrals in the dimensional parameter $\varepsilon = D-4$, we had to use the method of arbitrary high moments to eliminate elliptic contributions in intermediate steps. 4000 moments were generated to determine this anomalous dimension and 2640 moments turned out to be sufficient. As an aside, we also recalculate the contributions $\propto T_F$ to the three-loop QCD $\beta$-function.
high energy physics phenomenology
Very high quality factor superconducting radio frequency cavities developed for accelerators can enable fundamental physics searches with orders of magnitude higher sensitivity, as well as offer a path to a 1000-fold increase in the achievable coherence times for cavity-stored quantum states in the 3D circuit QED architecture. Here we report the first measurements of multiple accelerator cavities of $f_0=$1.3, 2.6, 5 GHz resonant frequencies down to temperatures of about 10~mK and field levels down to a few photons, which reveal record high photon lifetimes up to 2 seconds, while also further exposing the role of the two level systems (TLS) in the niobium oxide. We also demonstrate how the TLS contribution can be greatly suppressed by the vacuum heat treatments at 340-450$^\circ$C.
quantum physics
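A quick back-of-envelope check of the numbers quoted above, assuming the usual relation tau = Q/omega between the photon lifetime and the (loaded) quality factor:

```python
import math

# Photon lifetime tau relates to the loaded quality factor via tau = Q / omega.
f0 = 1.3e9          # Hz, one of the cavity frequencies reported
tau = 2.0           # s, the record photon lifetime quoted
Q = 2 * math.pi * f0 * tau
print(f"Q ~ {Q:.2e}")   # a 2 s lifetime at 1.3 GHz implies Q of order 1e10
```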
In this paper, we study the following fractional Schr\"{o}dinger-Poisson system \begin{equation*} \left\{ \begin{array}{ll} \varepsilon^{2s}(-\Delta)^su+V(x)u+\phi u=g(u) & \hbox{in $\mathbb{R}^3$,} \\ \varepsilon^{2t}(-\Delta)^t\phi=u^2,\,\, u>0 & \hbox{in $\mathbb{R}^3$,} \end{array} \right. \end{equation*} where $s,t\in(0,1)$ and $\varepsilon>0$ is a small parameter. Under some suitable assumptions on the potential function $V(x)$ and the critical nonlinearity term $g(u)$, we construct a family of positive solutions $u_{\varepsilon}\in H^s(\mathbb{R}^3)$ which concentrates around the global minima of $V$ as $\varepsilon\rightarrow0$.
mathematics
Flux ratio anomalies in strong gravitationally lensed quasars constitute a unique way to probe the abundance of non-luminous dark matter haloes, and hence the nature of dark matter. In this paper we identify double imaged quasars as a statistically efficient probe of dark matter, since they are 20 times more abundant than quadruply imaged quasars. Using N-body simulations that include realistic baryonic feedback, we measure the full distribution of flux ratios in doubly imaged quasars for cold (CDM) and warm dark matter (WDM) cosmologies. Through this method, we fold in two key systematics - quasar variability and line-of-sight structures. We find that WDM cosmologies predict a ~6 per cent difference in the cumulative distribution functions of flux ratios relative to CDM, with CDM predicting many more small ratios. Finally, we estimate that ~600 doubly imaged quasars will need to be observed in order to be able to unambiguously discern between CDM and the two WDM models studied here. Such sample sizes will be easily within reach of future large scale surveys such as Euclid. In preparation for these survey data, we will need to determine the scale of the uncertainties in modelling lens galaxies and their substructure in simulations, together with developing a strong understanding of the selection function of observed lensed quasars.
astrophysics
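A crude analytic counterpart to the sample-size estimate above: for two cumulative distributions differing by at most D ~ 0.06, an equal-size two-sample Kolmogorov-Smirnov criterion gives the order of magnitude of the required sample. This simplification ignores the paper's full lens modelling and systematics, so it need not reproduce the ~600 figure exactly.

```python
import math

def ks_sample_size(D, alpha=0.05):
    """Equal-size two-sample KS test: detection roughly requires
    D > c(alpha) * sqrt(2/n), so n ~ 2 * (c/D)^2 per sample."""
    c = math.sqrt(-0.5 * math.log(alpha / 2))   # c(0.05) ~ 1.358
    return 2 * (c / D)**2

n = ks_sample_size(0.06)
print(round(n))   # order 10^3 objects per cosmology
```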
Surface recombination has a major impact on the open-circuit voltage ($V_\text{oc}$) of organic photovoltaics. Here, we study how this loss mechanism is influenced by imbalanced charge transport in the photoactive layer. As a model system, we use organic solar cells with a two orders of magnitude higher electron than hole mobility. We find that small variations in the work function of the anode have a strong effect on the light intensity dependence of $V_\text{oc}$. Transient measurements and drift-diffusion simulations reveal that this is due to a change in the surface recombination rather than the bulk recombination. We use our numerical model to generalize these findings and determine under which circumstances the effect of contacts is stronger or weaker compared to the idealized case of balanced charge transport. Finally, we derive analytical expressions for $V_\text{oc}$ in the case that a pile-up of space charge is present due to highly imbalanced mobilities.
physics
Dielectric loaded structures are promising candidates for use in the structure wakefield acceleration (SWFA) technique, for both the collinear wakefield and the two-beam acceleration (CWA and TBA respectively) approaches, due to their low fabrication cost, low rf losses, and the potential to withstand high gradients. A short pulse (<=20 ns) TBA program is under development at the Argonne Wakefield Accelerator (AWA) facility where dielectric loaded structures are being used for both the power extractor/transfer structure (PETS) and the accelerator. In this study, an X-band 11.7 GHz dielectric PETS was developed and tested at the AWA facility to demonstrate high power wakefield generation. The PETS was driven by a train of eight electron bunches separated by 769.2 ps (9 times the X-band rf period) in order to achieve coherent wakefield superposition. A total train charge of 360 nC was passed through the PETS structure to generate ~200 MW, ~3 ns flat-top rf pulses without rf breakdown. A future experiment is being planned to increase the generated rf power to approximately 1 GW by optimizing the structure design and improving the drive beam quality.
physics
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, including a high-resolution camera surrounded by multiple low-resolution cameras. The performance of existing methods is still limited, as they produce either blurry results on plain textured areas or distortions around depth discontinuous boundaries. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation, while the other module warps another intermediate estimation, which maintains the high-frequency textures, by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations adaptively via the learned attention maps, leading to the final high-resolution LF image with satisfactory results on both plain textured areas and depth discontinuous boundaries. Besides, to promote the effectiveness of our method trained with simulated hybrid data on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy. Extensive experiments on both real and simulated hybrid data demonstrate the significant superiority of our approach over state-of-the-art ones. To the best of our knowledge, this is the first end-to-end deep learning method for LF reconstruction from a real hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
electrical engineering and systems science
Exponential dichotomies play a central role in stability theory for dynamical systems. They allow one to split the state space into two subspaces, where all trajectories in one subspace decay whereas all trajectories in the other subspace grow, uniformly and exponentially. This paper studies uniform detectability and observability notions for linear time-varying systems that admit an exponential dichotomy. The main contributions are necessary and sufficient detectability conditions for this class of systems.
electrical engineering and systems science
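For reference, one standard definition of an exponential dichotomy (Coppel's convention; the notation is ours and may differ from the paper's):

```latex
% x' = A(t) x with fundamental matrix \Phi(t); P is a projection,
% and K \ge 1, \alpha > 0 are constants.
\|\Phi(t)\, P\, \Phi^{-1}(s)\| \le K e^{-\alpha (t-s)}, \qquad t \ge s,
\|\Phi(t)\, (I - P)\, \Phi^{-1}(s)\| \le K e^{-\alpha (s-t)}, \qquad s \ge t.
```

The range of P collects the uniformly decaying trajectories, its kernel the uniformly growing ones, matching the splitting described in the abstract.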
Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H2O2 in alkaline solution.
physics
We study the superfluid critical temperature in a two-band attractive Fermi system with strong pairing fluctuations associated with both interband and intraband couplings. We focus specifically on a configuration where the intraband coupling is varied from weak to strong in a shallow band coupled to a weakly-interacting deeper band. The whole crossover from the Bardeen-Cooper-Schrieffer (BCS) condensation of largely overlapping Cooper pairs to the Bose-Einstein condensation (BEC) of tightly bound molecules is covered by our analysis, which is based on the extension of the Nozi\`{e}res-Schmitt-Rink (NSR) approach to a two-band system. In comparison with the single-band case, we find a strong enhancement of the critical temperature, a significant reduction of the preformed pair region where pseudogap effects are expected, and the entanglement of two kinds of composite bosons in the strong-coupling BEC regime.
condensed matter
Infinite width limits of deep neural networks often have tractable forms. They have been used to analyse the behaviour of finite networks, as well as being useful methods in their own right. When investigating infinitely wide CNNs it was observed that the correlations arising from spatial weight sharing disappear in the infinite limit. This is undesirable, as spatial correlation is the main motivation behind CNNs. We show that the loss of this property is not a consequence of the infinite limit, but rather of choosing an independent weight prior. Correlating the weights maintains the correlations in the activations. Varying the amount of correlation interpolates between independent-weight limits and mean-pooling. Empirical evaluation of the infinitely wide network shows that optimal performance is achieved between the extremes, indicating that correlations can be useful.
statistics
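The mean-pooling endpoint of the interpolation described above has an elementary finite-width analogue: if every tap of a filter shares a single weight (perfect spatial correlation), convolution collapses to mean pooling up to a scale. A toy numpy check of this limiting case (our construction, not the paper's Gaussian-process limit):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(size=(8, 8))
k, w = 3, 0.7                         # filter size; the single shared weight

def conv2d_valid(x, filt):
    """Plain 'valid' 2D cross-correlation, looped for clarity."""
    kh, kw = filt.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * filt)
    return out

# Fully correlated prior: every filter tap equals the same weight w.
shared = conv2d_valid(img, w * np.ones((k, k)))

# Mean pooling over the same windows, rescaled by w * k^2.
pooled = w * k * k * conv2d_valid(img, np.ones((k, k)) / (k * k))

print(np.allclose(shared, pooled))   # the two operations coincide
```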
We implement the so-called Weyl-Heisenberg covariant integral quantization in the case of a classical system constrained by a bounded or semi-bounded geometry. The procedure, which is free of the ordering problem of operators, is illustrated with the basic example of the one-dimensional motion of a free particle in an interval, and yields a fuzzy boundary, a position-dependent mass (PDM), and an extra potential on the quantum level. The consistency of our quantization is discussed by analyzing the semi-classical phase space portrait of the derived quantum dynamics, which is obtained as a regularization of its original classical counterpart.
quantum physics
We present an experimental study of the saturated non-linear dynamics of an inertial wave attractor in an axisymmetric geometrical setting. The experiments are carried out in a rotating ring-shaped fluid domain delimited by two vertical coaxial cylinders, a conical bottom, and a horizontal deformable upper lid as a wave generator: the meridional cross-section of the fluid volume is a trapezium, while the horizontal cross-section is a ring. First, the fluid is set into a rigid-body rotation. Thereafter, forcing is introduced into the system via axisymmetric low-amplitude volume-conserving oscillatory motion of the upper lid. After a short transient of about 10 forcing periods, a quasi-linear regime is established, with an axisymmetric inertial wave attractor. The attractor is prone to instability: at long time-scale (order 100 forcing periods) a saturated fully non-linear regime develops as a consequence of an energy cascade draining energy towards a slow two-dimensional manifold represented by a regular polygonal system of axially-oriented cyclonic vortices that are slowly precessing around the inner cylinder. We show that this slow two-dimensional manifold manifests a persistent slow prograde motion and a strong cyclonic-anticyclonic asymmetry quantified by the time-evolution of the probability density function of the vertical vorticity.
physics
$L_{\infty}$ algebras describe the underlying algebraic structure of many consistent classical field theories. In this work we analyze the algebraic structure of Gauged Double Field Theory in the generalized flux formalism. The symmetry transformations consist of a generalized deformed Lie derivative and Double Lorentz transformations. We obtain all the non-trivial products in closed form by considering a Generalized Kerr-Schild ansatz for the generalized frame, and we include a linear perturbation for the generalized dilaton. The off-shell structure can be cast in an $L_{3}$ algebra, and when one considers the dynamics the former is promoted exactly to an $L_{4}$ algebra. The present computations exhibit the full algebraic structure of the fundamental charged heterotic string and the $L^{\rm{gauge}}_{3}$ structure of (Bosonic) Enhanced Double Field Theory.
high energy physics theory
It has long been expected that the 3d Ising model can be thought of as a string theory, where one interprets the domain walls that separate up spins from down spins as two-dimensional string worldsheets. The usual Ising Hamiltonian measures the area of these domain walls. This theory has string coupling of unit magnitude. We add new local terms to the Ising Hamiltonian that further weight each spin configuration by a factor depending on the genus of the corresponding domain wall, resulting in a new 3d Ising model that has a tunable bare string coupling $g_s$. We use a combination of analytical and numerical methods to analyze the phase structure of this model as $g_s$ is varied. We study statistical properties of the topology of worldsheets and discuss the prospects of using this new deformation at weak string coupling to find a worldsheet description of the 3d Ising transition.
high energy physics theory
We show that the typical dynamical system sometimes begins to behave like a non-deterministic system with a small classical entropy, and this behavior lasts an extremely long time, until the system starts decreasing its entropy. Then again it becomes almost non-deterministic for a very long time, but with an even smaller classical entropy. Exploiting this fact and considering sigma-compact families of measure-preserving zero-entropy transformations, for example the rectangle exchange transformations, we choose the Kushnirenko entropy so that it is equal to zero for the transformations under consideration, but is infinite for the generic transformation.
mathematics
Dynamic MRI is a technique of acquiring a series of images continuously to follow physiological changes over time. However, such fast imaging results in low-resolution images. In this work, an abdominal deformation model computed from dynamic low-resolution images has been applied to a previously acquired high-resolution image to generate dynamic high-resolution MRI. Dynamic low-resolution images were simulated for different breathing phases (inhale and exhale). Then, the image registration between breathing time points was performed using the B-spline SyN deformable model with cross-correlation as the similarity metric. The deformation model between different breathing phases was estimated from highly undersampled data. This deformation model was then applied to the high-resolution images to obtain high-resolution images of the different breathing phases. The results indicated that the deformation model can be computed from very low-resolution images.
electrical engineering and systems science
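The final warping step described above can be sketched with SciPy standing in for the registration toolkit: once a displacement field is available, applying it to the high-resolution image is a resampling operation. The field below is a hard-coded stand-in for one that would be estimated from the low-resolution dynamic frames.

```python
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(3)
hires = rng.normal(size=(64, 64))    # stand-in for the high-resolution image

# Identity sampling grid plus a displacement field (here: a pure 2-pixel
# shift in y, standing in for an estimated breathing deformation).
yy, xx = np.mgrid[0:64, 0:64].astype(float)
dy = 2.0 * np.ones_like(yy)
dx = np.zeros_like(xx)

# Warp the high-resolution image with cubic-spline interpolation.
warped = map_coordinates(hires, [yy + dy, xx + dx], order=3, mode='nearest')

# Sanity check: zero displacement must return the image unchanged.
identity = map_coordinates(hires, [yy, xx], order=3, mode='nearest')
print(np.allclose(identity, hires))
```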
We study the potential of future Electron-Ion Collider (EIC) data to probe four-fermion operators in the Standard Model Effective Field Theory (SMEFT). The ability to perform measurements with both polarized electron and proton beams at the EIC provides a powerful tool that can disentangle the effects from different SMEFT operators. We compare the potential constraints from an EIC with those obtained from Drell-Yan data at the Large Hadron Collider. We show that EIC data plays an important complementary role since it probes combinations of Wilson coefficients not accessible through available Drell-Yan measurements.
high energy physics phenomenology
Cosmic Microwave Background (CMB) observations are used to constrain reheating to Standard Model (SM) particles after a period of inflation. As a light spectator field, the SM Higgs boson acquires large field values from its quantum fluctuations during inflation, gives masses to SM particles that vary from one Hubble patch to another, and thereby produces large density fluctuations. We consider both perturbative and resonant decay of the inflaton to SM particles. For the case of perturbative decay from coherent oscillations of the inflaton after high scale inflation, we find strong constraints on the reheat temperature for the inflaton decay into heavy SM particles. For the case of resonant particle production (preheating) to (Higgsed) SM gauge bosons, we find temperature fluctuations larger than observed in the CMB for a range of gauge coupling that includes those found in the SM and conclude that such preheating cannot be the main source of reheating the Universe after inflation.
high energy physics phenomenology
We present a scenario where an axion-like field drives inflation until a potential barrier, which keeps a waterfall field at the origin, disappears and a waterfall transition occurs. Such a barrier separates the scale of inflation from that of the waterfall transition. We find the observed spectrum of the cosmic microwave background indicates that the decay constant of the inflaton is well below the Planck scale, with the inflationary Hubble parameter spanning a wide range. Further, our model involves dark matter candidates including the inflaton itself. Also, for a complex waterfall field, we can determine cosmologically the Peccei-Quinn scale associated with the strong CP problem.
high energy physics phenomenology
Semantic image segmentation is the process of labeling each pixel of an image with its corresponding class. An encoder-decoder based approach, like U-Net and its variants, is a popular strategy for solving medical image segmentation tasks. To improve the performance of U-Net on various segmentation tasks, we propose a novel architecture called DoubleU-Net, which is a combination of two U-Net architectures stacked on top of each other. The first U-Net uses a pre-trained VGG-19 as the encoder, which has already learned features from ImageNet and can be transferred to another task easily. To capture more semantic information efficiently, we added another U-Net at the bottom. We also adopt Atrous Spatial Pyramid Pooling (ASPP) to capture contextual information within the network. We have evaluated DoubleU-Net using four medical segmentation datasets, covering various imaging modalities such as colonoscopy, dermoscopy, and microscopy. Experiments on the MICCAI 2015 segmentation challenge, the CVC-ClinicDB, the 2018 Data Science Bowl challenge, and the Lesion boundary segmentation datasets demonstrate that the DoubleU-Net outperforms U-Net and the baseline models. Moreover, DoubleU-Net produces more accurate segmentation masks, especially in the case of the CVC-ClinicDB and MICCAI 2015 segmentation challenge datasets, which have challenging images such as small and flat polyps. These results show the improvement over the existing U-Net model. The encouraging results, produced on various medical image segmentation datasets, show that DoubleU-Net can be used as a strong baseline for both medical image segmentation and cross-dataset evaluation testing to measure the generalizability of Deep Learning (DL) models.
electrical engineering and systems science
To exploit properly the precision physics program at the FCC-ee, the theoretical precision tag on the respective luminosity will need to be improved from the 0.054$\%$ (0.061$\%$) results at LEP to 0.01$\%$, where the former (latter) LEP result has (does not have) the pairs correction. We present an overview of the roads one may take to reach the required 0.01$\%$ precision tag at the FCC-ee and we discuss possible synergistic effects of the walk along these roads for other FCC precision theory requirements.
high energy physics phenomenology
The decays $B_d^0\to J/\psi K^0_{\text{S}}$ and $B_s^0\to J/\psi\phi$ play a key role for the determination of the $B^0_q$-$\bar B^0_q$ ($q=d,s$) mixing phases $\phi_d$ and $\phi_s$, respectively. The theoretical precision of the extraction of these quantities is limited by doubly Cabibbo-suppressed penguin topologies, which can be included through control channels by means of the $SU(3)$ flavour symmetry of strong interactions. Using the currently available data and a new simultaneous analysis, we discuss the state-of-the-art picture of these effects and include them in the extracted $\phi_q$ values. We have a critical look at the Standard Model predictions of these phases and explore the room left for new physics. Considering future scenarios for the high-precision era of flavour physics, we illustrate that we may obtain signals for physics beyond the Standard Model with a significance well above five standard deviations. We also determine effective colour-suppression factors of $B_d^0\to J/\psi K^0$, $B_d^0\to J/\psi K^0_{\text{S}}$ and $B_d^0\to J/\psi\pi^0$ decays, which serve as benchmarks for QCD calculations of the underlying decay dynamics, and present a new method using information from semileptonic $B_d^0\to\pi^-\ell^+\nu_{\ell}$ and $B_s^0\to K^-\ell^+\nu_{\ell}$ decays.
high energy physics phenomenology
The impact of granular microstructure in permanent magnets on eddy current losses is investigated. A numerical homogenization procedure for electrical conductivity is defined. Then, a simple approximate analytical model for the homogenized conductivity, able to capture the main features of the geometrical and material dependences, is derived. Finally, analytical calculations of eddy current losses are given, and the two asymptotic expressions for losses in the stationary conduction limit and the advanced skin effect limit are derived and discussed.
physics
Whether supernovae are a significant source of dust has been a long-standing debate. The large quantities of dust observed in high-redshift galaxies raise a fundamental question as to the origin of dust in the Universe since stars cannot have evolved to the AGB dust-producing phase in high-redshift galaxies. In contrast, supernovae occur within several millions of years after the onset of star formation. This white paper focuses on dust formation in supernova ejecta with US-Extremely Large Telescope (ELT) perspective during the era of JWST and LSST.
astrophysics
We have studied the $J/\psi\phi$ mass distribution of the process $B^+\to J/\psi\phi K^+$ from the threshold to about 4250 MeV, by considering the contribution of the $X(4140)$ with a narrow width, together with the $X(4160)$ state. Our results show that the cusp structure at the $D^*_s\bar{D}^*_s$ threshold is tied to the molecular nature of the $X(4160)$ state.
high energy physics phenomenology
One key use of k-means clustering is to identify cluster prototypes which can serve as representative points for a dataset. However, a drawback of using k-means cluster centers as representative points is that such points distort the distribution of the underlying data. This can be highly disadvantageous in problems where the representative points are subsequently used to gain insights on the data distribution, as these points do not mimic the distribution of the data. To this end, we propose a new clustering method called "distributional clustering", which ensures cluster centers capture the distribution of the underlying data. We first prove the asymptotic convergence of the proposed cluster centers to the data generating distribution, then present an efficient algorithm for computing these cluster centers in practice. Finally, we demonstrate the effectiveness of distributional clustering on synthetic and real datasets.
statistics
Granular systems are not always homogeneous and can be composed of grains with very different mechanical properties. To improve our understanding of the behavior of real granular systems, in this experimental study, we compress 2D bidisperse systems made of both soft and rigid grains. By means of a recently developed experimental set-up, from the measurement of the displacement field we can follow all the mechanical observables of this granular medium from the inside of each particle up to the whole system scale. We are able to detect the jamming transition from these observables and study their evolution deep in the jammed state for packing fractions as high as $0.915$. We show the uniqueness of the behavior of such a system, in which ways it is similar to purely soft or rigid systems, and how it differs from them. This study constitutes the first step toward a better understanding of the mechanical behavior of granular materials that are polydisperse in terms of grain rheology.
condensed matter
We introduce new adjustments and advances in space-borne 3D volumetric scattering-tomography of cloud micro-physics. The micro-physical properties retrieved are the liquid water content and effective radius within a cloud. New adjustments include an advanced perspective polarization imager model, and the assumption of 3D variation of the effective radius. Under these assumptions, we advanced the retrieval to yield results that (compared to the simulated ground-truth) have smaller errors than the prior art. Elements of our advancement include initialization by a parametric horizontally-uniform micro-physical model. The parameters of this initialization are determined by a grid search of the cost function. Furthermore, we added viewpoints corresponding to single-scattering angles, where polarization yields enhanced sensitivity to the droplet micro-physics (i.e., the cloudbow region). In addition, we introduce an optional adjustment, in which optimization of the liquid water content and effective radius are separated to alternating periods. The suggested initialization model and additional advances have been evaluated by retrieval of a set of large-eddy simulation clouds.
electrical engineering and systems science
The pure effects described by Robins and Greenland, and later called natural effects by Pearl, have been criticized because they require a cross-world independence assumption. In this paper, we use potential outcomes and sufficient causal sets to present a conceptual perspective of the cross-world independence assumption that explains why the clinical utility of natural effects is sometimes greater than that of controlled effects. Our perspective is consistent with recent work on mediation of natural effects, path specific effects and separable effects.
statistics
High performance quantum information processing requires efficient control of undesired decohering effects, which are present in realistic quantum dynamics. To deal with this issue, a powerful strategy is to employ transitionless quantum driving (TQD), where additional fields are added to speed up the evolution of the quantum system, achieving a desired state in a short time in comparison with the natural decoherence time scales. In this paper, we provide an experimental investigation of the performance of a generalized approach for TQD to implement shortcuts to adiabaticity in nuclear magnetic resonance (NMR). As a first discussion, we consider a single nuclear spin-$\frac{1}{2}$ system in a time-dependent rotating magnetic field. While the adiabatic dynamics is violated at a resonance situation, the TQD Hamiltonian is shown to be robust against resonance, allowing us to mimic the adiabatic behavior in a fast evolution even under the resonant configurations of the original (adiabatic) Hamiltonian. Moreover, we show that the generalized TQD theory requires less energy resources, with the strength of the magnetic field less than that required by standard TQD. As a second discussion, we analyze the experimental implementation of shortcuts to single-qubit adiabatic gates. By adopting generalized TQD, we can provide feasible time-independent driving Hamiltonians, which are optimized in terms of the number of pulses used to implement the quantum dynamics. The robustness of adiabatic and generalized TQD evolutions against typical decoherence processes in NMR is also analyzed.
quantum physics
We predict the existence of the Einstein-de Haas effect in topological magnon insulators. The temperature variation of angular momentum in the topological state shows a sign-change behavior, akin to the low-temperature thermal Hall conductance response. This manifests itself as a macroscopic mechanical rotation of the material hosting topological magnons. We show that an experimentally observable Einstein-de Haas effect can be measured in the square-octagon, the kagome, and the honeycomb lattices, the effect being strongest in the square-octagon lattice. We treat both the low and the high temperature phases using spin wave and Schwinger boson theory, respectively. We propose an experimental set-up to detect our theoretical predictions. We suggest candidate square-octagon materials where our theory can be tested.
condensed matter
We applied a recently published modified Fock-Schwinger (MFS) method to find the exact solution of the propagator equation for a charged vector boson in the presence of a constant magnetic field, directly in momentum space, as a sum over Landau levels in arbitrary $\xi$-gauge. In contrast to the standard approaches for finding propagators, the MFS method demonstrated several improvements in terms of reduced computational complexity and revealed simple internal structures in intermediate and final expressions, thus allowing us to obtain new useful representations of the propagator.
high energy physics theory
We propose a novel video understanding task by fusing knowledge-based and video question answering. First, we introduce KnowIT VQA, a video dataset with 24,282 human-generated question-answer pairs about a popular sitcom. The dataset combines visual, textual and temporal coherence reasoning together with knowledge-based questions, which require experience gained from watching the series to be answered. Second, we propose a video understanding model by combining the visual and textual video content with specific knowledge about the show. Our main findings are: (i) the incorporation of knowledge produces outstanding improvements for VQA in video, and (ii) the performance on KnowIT VQA still lags well behind human accuracy, indicating its usefulness for studying current video modelling limitations.
computer science
Novel analog-to-digital converter (ADC) architectures are motivated by the demand for rising sampling rates and effective number of bits (ENOB). The main limitation on ENOB in purely electrical ADCs lies in the relatively high jitter of oscillators, in the order of a few tens of fs for state-of-the-art components. When compared to the extremely low jitter obtained with best-in-class Ti:sapphire mode-locked lasers (MLL), in the attosecond range, it is apparent that a mixed electrical-optical architecture could significantly improve the converters' ENOB. We model and analyze the ENOB limitations arising from optical sources in optically enabled, spectrally sliced ADCs, after discussing the system architecture and implementation details. The phase noise of the optical carrier, serving for electro-optic signal transduction, is shown not to propagate to the reconstructed digitized signal and therefore not to represent a fundamental limit. The optical phase noise of the MLL used to generate reference tones for individual slices also does not fundamentally impact the converted signal, so long as it remains correlated among all the comb lines. On the other hand, the timing jitter of the MLL, as also reflected in its RF linewidth, is fundamentally limiting the ADC performance, since it is directly mapped as jitter to the converted signal. The hybrid nature of a photonically enabled, spectrally sliced ADC implies the utilization of a number of reduced bandwidth electrical ADCs to convert parallel slices, resulting in the propagation of jitter from the electrical oscillator supplying their clock. Due to the reduced sampling rate of the electrical ADCs, as compared to the overall system, the overall noise performance of the presented architecture is substantially improved with respect to a fully electrical ADC.
electrical engineering and systems science
We use machine learning (ML) and non-ML techniques to study optimized $CP$-odd observables, directly and maximally sensitive to the $CP$-odd $i \tilde \kappa \bar t \gamma^5 t h$ interaction at the LHC and prospective future hadron colliders using the final state with a Higgs boson and a top quark pair, $pp\to t\bar t h$, followed by semileptonic $t$ decays. We perform phase-space optimization of manifestly $CP$-odd observables ($\boldsymbol \omega$), sensitive to the sign of $\tilde \kappa$, and constructed from experimentally accessible final state momenta. We identify a simple optimized linear combination $\boldsymbol \alpha\cdot \boldsymbol\omega$ that gives similar sensitivity as the studied fully fledged ML models. Using $\boldsymbol\alpha\cdot \boldsymbol\omega$ we project the expected sensitivities to $\tilde \kappa$ at HL-LHC, HE-LHC, and FCC-hh.
high energy physics phenomenology
K2-146 is a mid-M dwarf ($M_\star = 0.331 \pm 0.009 M_\odot$; $R_\star = 0.330 \pm 0.010 R_\odot$), observed in Campaigns 5, 16, and 18 of the K2 mission. In Campaign 5 data, a single planet was discovered with an orbital period of $2.6$~days and large transit timing variations due to an unknown perturber. Here we analyze data from Campaigns 16 and 18, detecting the transits of a second planet, c, with an orbital period of $4.0$~days, librating in a 3:2 resonance with planet b. Large, anti-correlated timing variations of both planets exist due to their resonant perturbations. The planets have a mutual inclination of $2.40^\circ\pm0.25^\circ$, which torqued planet c more closely into our line-of-sight. Planet c was grazing in Campaign 5 and thus missed in previous searches; in Campaigns 16 and 18 it is fully transiting, and its transit depth is three times larger. We improve the stellar properties using data from Gaia DR2, and using dynamical fits find that both planets are sub-Neptunes: their masses are $5.77\pm0.18$ and $7.50\pm0.23 M_{\oplus}$ and their radii are $2.04\pm0.06$ and $2.19\pm0.07$ R$_\oplus$, respectively. These mass constraints set the precision record for small exoplanets (a few gas giants have comparable relative precision). These planets lie in the photoevaporation valley when viewed in Radius-Period space, but due to the low-luminosity M-dwarf host star, they lie among the atmosphere-bearing planets when viewed in Radius-Irradiation space. This, along with their densities being 60%-80% that of Earth, suggests that they may both have retained a substantial gaseous envelope.
astrophysics
We show that, for $n\geq 3$, $\lim_{t \to 0} e^{it\Delta}f(x) = f(x)$ holds almost everywhere for all $f \in H^s (\mathbb{R}^n)$ provided that $s>\frac{n}{2(n+1)}$. Due to a counterexample by Bourgain, up to the endpoint, this result is sharp and fully resolves a problem raised by Carleson. Our main theorem is a fractal $L^2$ restriction estimate, which also gives improved results on the size of divergence set of Schr\"odinger solutions, the Falconer distance set problem and the spherical average Fourier decay rates of fractal measures. The key ingredients of the proof include multilinear Kakeya estimates, decoupling and induction on scales.
mathematics
The BRST quantization on the hypersurface V_(N-1) embedded in Euclidean space R_N is carried out in both the Hamiltonian and Lagrangian formalisms. Using the Batalin-Fradkin-Fradkina-Tyutin (BFFT) formalism, the second class constraints obtained from the Hamiltonian analysis are converted into first class constraints. Then, using BFV analysis, the BRST symmetry is constructed. We give a simple example of this kind of system. We also try to establish an equivalence between the canonical Dirac and BRST quantizations. In the end, we discuss the Batalin-Vilkovisky formalism in the context of this (BFFT modified) system.
high energy physics theory
Lepton flavor violating processes are strongly suppressed in the Standard Model due to the very small neutrino masses, but can be sizable in some extended models. The current experimental bounds on the decay modes $\ell^{'\pm} \to a \ell^{\pm}$ are much weaker than those on other flavor violating processes because of the huge irreducible backgrounds $\ell' \to \ell \bar{\nu}_{\ell} \nu_{\ell'}$. In this paper, we give the full helicity density matrix of both the signal and backgrounds, and then study polarization effects. In particular, we treat the two missing neutrinos in the background inclusively, and we find that both longitudinal and transverse polarization effects survive even when the relative kinematical degrees of freedom of the two neutrinos are integrated out. Furthermore, we show that the signal and backgrounds have distinctive polarization effects which can be measured by using the energy fractions of the charged decay products. This is particularly useful because no kinematical reconstruction is required. In case the decaying lepton is not polarized, we show that correlations between the polarizations of a lepton pair generated at colliders are still useful to search for the signals. More interestingly, the polarization correlations depend on the product of the scalar and pseudo-scalar ALP couplings, and hence are sensitive to their relative sign. We demonstrate how the polarization correlation effects can be used to investigate the flavor violating decays $\tau^{\pm} \to a \ell^{\pm}$ at the Belle II experiment.
high energy physics phenomenology
We propose a model that explains the fermion mass hierarchy by the Froggatt-Nielsen mechanism with a discrete $Z_N^F$ symmetry. As a concrete model, we study a supersymmetric model with a single flavon coupled to the minimal supersymmetric Standard Model. The flavon develops a TeV-scale vacuum expectation value that realizes the flavor hierarchy, an appropriate $\mu$-term and the electroweak scale; hence the model has a low cutoff scale. We demonstrate how the flavon is successfully stabilized together with the Higgs bosons in the model. The discrete flavor symmetry $Z_N^F$ controls not only the Standard Model fermion masses, but also the Higgs potential and the mass of the Higgsino, which is a good candidate for dark matter. The hierarchy in the Higgs-flavon sector is determined in order to make the model anomaly-free and realize a stable electroweak vacuum. We show that this model can simultaneously explain the fermion mass hierarchy, a realistic Higgs-flavon potential and thermally produced dark matter. We discuss flavor violating processes induced by the light flavon which could be detected in future experiments.
high energy physics phenomenology
The statistical modeling of multivariate count data observed on a space-time lattice has generally focused on using a hierarchical modeling approach where space-time correlation structure is placed on a continuous, latent, process. The count distribution is then assumed to be conditionally independent given the latent process. However, in many real-world applications, especially in the modeling of criminal or terrorism data, the conditional independence between the count distributions is inappropriate. In this manuscript we propose a class of models that capture spatial variation and also account for the possibility of data model dependence. The resulting model allows both data model dependence, or self-excitation, as well as spatial dependence in a latent structure. We demonstrate how second-order properties can be used to characterize the spatio-temporal process and how misspecification of the error may inflate self-excitation in a model. Finally, we give an algorithm for efficient Bayesian inference for the model, demonstrating its use in capturing the spatio-temporal structure of burglaries in Chicago from 2010 to 2015.
statistics
Parkinson's disease (PD) is a common neurodegenerative disease with both motor and non-motor symptoms. Postural instability and freezing of gait (FOG) are motor symptoms of PD that result in falls. In this study, we investigated the effect on gait features of the simultaneous use of a robotic walker and a pneumatic walking assist device (PWAD) for PD patients. The pneumatically actuated artificial muscle on the leg and the actuators on the walker produce mutually induced stimulation, allowing the user to suppress FOG and maintain a stable gait pattern while walking. The performance of the proposed system was evaluated by having a healthy subject conduct an 8 [m] straight-line walking task with (a) an RW (robotic walker) alone and (b) simultaneous use of an RW and a PWAD, and gait features for each condition were analyzed. The increased stride length and decreased stance phase duration in the gait cycle suggest that simultaneous use of a robotic walker and a pneumatic walking assist device would effectively decrease FOG and maintain a stable gait pattern for PD patients.
computer science
It is well known that the convex hull of $\{(x,y,xy)\}$, where $(x,y)$ is constrained to lie in a box, is given by the Reformulation-Linearization Technique (RLT) constraints. Belotti {\em et al.\,}(2010) and Miller {\em et al.\,}(2011) showed that if there are additional upper and/or lower bounds on the product $z=xy$, then the convex hull can be represented by adding an infinite family of inequalities, requiring a separation algorithm to implement. Nguyen {\em et al.\,}(2018) derived convex hulls with bounds on $z$ for the more general case of $z=x^{b_1} y^{b_2}$, where $b_1\ge 1$, $b_2\ge 1$. We focus on the most important case where $b_1=b_2=1$ and show that the convex hull with either an upper bound or lower bound on the product is given by RLT constraints, the bound on $z$ and a single Second-Order Cone (SOC) constraint. With both upper and lower bounds on the product, the convex hull can be represented using no more than three SOC constraints, each applicable on a subset of $(x,y)$ values. In addition to the convex hull characterizations, volumes of the convex hulls with either an upper or lower bound on $z$ are calculated and compared to the relaxation that imposes only the RLT constraints. As an application of these volume results, we show how spatial branching can be applied to the product variable so as to minimize the sum of the volumes for the two resulting subproblems.
mathematics
Motivated by the interest in communication-efficient methods for distributed machine learning, we consider the communication complexity of minimising a sum of $d$-dimensional functions $\sum_{i = 1}^N f_i (x)$, where each function $f_i$ is held by one of the $N$ different machines. Such tasks arise naturally in large-scale optimisation, where a standard solution is to apply variants of (stochastic) gradient descent. As our main result, we show that $\Omega( Nd \log d / \varepsilon)$ bits in total need to be communicated between the machines to find an additive $\varepsilon$-approximation to the minimum of $\sum_{i = 1}^N f_i (x)$. The result holds for deterministic algorithms, and for randomised algorithms under some restrictions on the parameter values. Importantly, our lower bounds require no assumptions on the structure of the algorithm, and are matched within constant factors for strongly convex objectives by a new variant of quantised gradient descent. The lower bounds are obtained by bringing over tools from communication complexity to distributed optimisation, an approach we hope will find further use in the future.
computer science
We consider the equilibrium thermal state of photons and determine the mean number and number variance of squeezed coherent photons. We use an integral representation for electro-magnetic radiation applicable both to systems in equilibrium and to systems in nonequilibrium to determine the spectral function of the radiation. The system considered is in thermal equilibrium and we find that the squeezed coherent photons are at a higher temperature than the photons themselves. Also, as expected, the mean number of squeezed coherent photons is greater than that of photons.
quantum physics
Maximum likelihood estimation of generalized linear mixed models (GLMMs) is difficult due to marginalization of the random effects. Computing derivatives of a fitted GLMM's likelihood (with respect to model parameters) is also difficult, especially because the derivatives are not by-products of popular estimation algorithms. In this paper, we describe GLMM derivatives along with a quadrature method to efficiently compute them, focusing on lme4 models with a single clustering variable. We describe how psychometric results related to IRT are helpful for obtaining these derivatives, as well as for verifying the derivatives' accuracy. After describing the derivative computation methods, we illustrate the many possible uses of these derivatives, including robust standard errors, score tests of fixed effect parameters, and likelihood ratio tests of non-nested models. The derivative computation methods and applications described in the paper are all available in easily-obtained R packages.
statistics
In this paper we introduce a new approach to the study of the effects that an impulsive wave, containing a mixture of material sources and gravitational waves, has on a geodesic congruence that traverses it. We find that the effect of the wave on the congruence is a discontinuity in the B-tensor of the congruence. Our results thus provide a detector independent and covariant characterization of gravitational memory.
high energy physics theory
Accurate model selection is a fundamental requirement for statistical analysis. In many real-world applications of graphical modelling, correct model structure identification is the ultimate objective. Standard model validation procedures such as information theoretic scores and cross validation have demonstrated poor performance in the high dimensional setting. Specialised methods such as EBIC, StARS and RIC have been developed for the explicit purpose of high-dimensional Gaussian graphical model selection. We present a novel model score criterion, Graphical Neighbour Information. This method demonstrates oracle performance in high-dimensional model selection, outperforming the current state-of-the-art in our simulations. The Graphical Neighbour Information criterion has the additional advantage of efficient, closed-form computability, sparing the costly inference of multiple models on data subsamples. We provide a theoretical analysis of the method and benchmark simulations versus the current state of the art.
statistics
Recently, direct modeling of raw waveforms using deep neural networks has been widely studied for a number of tasks in audio domains. In speaker verification, however, utilization of raw waveforms is in its preliminary phase, requiring further investigation. In this study, we explore end-to-end deep neural networks that input raw waveforms to improve various aspects: front-end speaker embedding extraction including model architecture, pre-training scheme, additional objective functions, and back-end classification. Adjustment of model architecture using a pre-training scheme can extract speaker embeddings, giving a significant improvement in performance. Additional objective functions simplify the process of extracting speaker embeddings by merging conventional two-phase processes: extracting utterance-level features such as i-vectors or x-vectors and the feature enhancement phase, e.g., linear discriminant analysis. Effective back-end classification models that suit the proposed speaker embedding are also explored. We propose an end-to-end system that comprises two deep neural networks, one front-end for utterance-level speaker embedding extraction and the other for back-end classification. Experiments conducted on the VoxCeleb1 dataset demonstrate that the proposed model achieves state-of-the-art performance among systems without data augmentation. The proposed system is also comparable to the state-of-the-art x-vector system that adopts data augmentation.
electrical engineering and systems science
We construct a pairing, which we call factorization homology, between framed manifolds and higher categories. The essential geometric notion is that of a vari-framing of a stratified manifold, which is a framing on each stratum together with a coherent system of compatibilities of framings along links between strata. Our main result constructs labeling systems on disk-stratified vari-framed $n$-manifolds from $(\infty,n)$-categories. These $(\infty,n)$-categories, in contrast with the literature to date, are not required to have adjoints. This allows the following conceptual definition: the factorization homology \[ \int_M\mathcal{C} \] of a framed $n$-manifold $M$ with coefficients in an $(\infty,n)$-category $\mathcal{C}$ is the classifying space of $\mathcal{C}$-labeled disk-stratifications over $M$.
mathematics
We propose an orbital optimized method for unitary coupled cluster theory (OO-UCC) within the variational quantum eigensolver (VQE) framework for quantum computers. OO-UCC variationally determines the coupled cluster amplitudes and also molecular orbital coefficients. Owing to its fully variational nature, first-order properties are readily available. This feature allows the optimization of molecular structures in VQE without solving any additional equations. Furthermore, the method requires smaller active space and shallower quantum circuit than UCC to achieve the same accuracy. We present numerical examples of OO-UCC using quantum simulators, which include the geometry optimization of the water and ammonia molecules using analytical first derivatives of the VQE.
condensed matter
If a bulk gravitational path integral can be identified with an average of partition functions over an ensemble of boundary quantum theories, then a corresponding moment problem can be solved. We review existence and uniqueness criteria for the Stieltjes moment problem, which include an infinite set of positivity conditions. The existence criteria are useful to rule out an ensemble interpretation of a theory of gravity, or to indicate incompleteness of the gravitational data. We illustrate this in a particular class of 2D gravities including variants of the CGHS model and JT supergravity. The uniqueness criterion is relevant for an unambiguous determination of quantities such as $\overline{\log Z(\beta)}$ or the quenched free energy. We prove in JT gravity that perturbation theory, both in the coupling which suppresses higher-genus surfaces and in the temperature, fails when the number of boundaries is taken to infinity. Since this asymptotic data is necessary for the uniqueness problem, the question cannot be settled without a nonperturbative completion of the theory.
high energy physics theory
DNA toroidal bundles form upon condensation of one or multiple DNA filaments. DNA filaments in toroidal bundles are hexagonally packed, and collectively twist around the center line of the toroid. In a previous study, we and our coworkers argue that the filaments' curvature locally correlates with their density in the bundle, with the filaments less closely packed where their curvature appears to be higher. We base our claim on the assumption that twist has a negligible effect on the local curvature of filaments in DNA toroids. However, this remains to be proven. We fill this gap here, by calculating the distribution of filaments' curvature in a geometric model of twisted toroidal bundle, which we use to describe DNA toroids by an appropriate choice of parameters. This allows us to substantiate our previous study and suggest directions for future experiments.
condensed matter
As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIF's unique properties, which potentially limits the recognition performance. In this study, we demonstrate the importance of human-related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual features with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability.
computer science
High-entropy ceramics generally exhibit reduced thermal conductivity, but little is known about what controls this suppression and which descriptor can predict it. Herein, 18 medium- and high-entropy pyrochlores were synthesized to measure their thermal conductivity and Young's modulus. Up to 35% reductions in thermal conductivity were achieved with retained moduli, thereby attaining insulative yet stiff properties for potential thermal barrier coating applications. Notably, the measured thermal conductivity correlates well with a modified size disorder parameter. Thus, this modified size disorder parameter is suggested as a useful descriptor for designing thermally-insulative medium- and high-entropy ceramics (broadly defined as "compositionally-complex ceramics").
condensed matter
We establish that the Wu-Yang monopole needs the introduction of a magnetic point source at the origin in order for it to be a solution of the differential and integral equations for the Yang-Mills theory. That result is corroborated by the analysis, through distribution theory, of the two types of magnetic fields relevant for the local and global properties of the Wu-Yang solution. The subtlety lies in the fact that, with the non-vanishing magnetic point source required by the Yang-Mills integral equations, the Wu-Yang monopole configuration does not violate, in the sense of distribution theory, the differential Bianchi identity.
high energy physics theory
We present results for the two-loop helicity amplitudes entering the NLO QCD corrections to the production of a Higgs boson in association with a $Z$-boson in gluon fusion. The two-loop integrals, involving massive top quarks, are calculated numerically. Results for the interference of the finite part of the two-loop amplitudes with the Born amplitude are shown as a function of the two kinematic invariants on which the amplitudes depend.
high energy physics phenomenology
We initiate a quantum treatment of chameleon-like particles, deriving classical and quantum forces directly from the path integral. It is found that the quantum force can potentially dominate the classical one by many orders of magnitude. We calculate the quantum chameleon pressure between infinite plates, which is found to interpolate between the Casimir and the integrated Casimir-Polder pressures, respectively in the limits of full screening and no screening. To this end we calculate the chameleon propagator in the presence of an arbitrary number of one-dimensional layers of material. For the E\"ot-Wash experiment, the five-layer propagator is used to take into account the intermediate shielding sheet, and it is found that the presence of the sheet enhances the quantum pressure by two orders of magnitude. As an example of the implications, we show that in both the standard chameleon and symmetron models, large and previously unconstrained regions of the parameter space are excluded once the quantum pressure is taken into account.
high energy physics phenomenology
Compressing piecewise smooth images is important for many data types such as depth maps in 3D videos or optic flow fields for motion compensation. Specialised codecs that rely on explicitly stored segmentations excel in this task since they preserve discontinuities between smooth regions. However, current approaches rely on ad hoc segmentations that lack a clean interpretation in terms of energy minimisation. As a remedy, we derive a generic region merging algorithm from the Mumford-Shah cartoon model. It adapts the segmentation to arbitrary reconstruction operators for the segment content. In spite of its conceptual simplicity, our framework can outperform previous segment-based compression methods as well as BPG by up to 3 dB.
electrical engineering and systems science
Here, in an effort towards facile and fast screening/diagnosis of novel coronavirus disease 2019 (COVID-19), we combined the unprecedentedly sensitive graphene field-effect transistor (Gr-FET) with highly selective antibody-antigen interaction to develop a coronavirus immunosensor. The Gr-FET immunosensors can rapidly identify (in about 2 minutes) and accurately capture the COVID-19 spike protein S1 (which contains a receptor binding domain, RBD) at a limit of detection down to 0.2 pM, in a real-time and label-free manner. Further results ensure that the Gr-FET immunosensors can be promisingly applied to screen for high-affinity antibodies (with binding constant up to 2*10^11 M^-1 against the RBD) at concentrations down to 0.1 pM. Thus, our developed electrical Gr-FET immunosensors provide an appealing alternative to address early screening/diagnosis as well as the analysis and rational design of neutralizing-antibody locking methods during this ongoing public health crisis.
physics
This work proposes the application of independent component analysis to the problem of ranking different alternatives by considering criteria that are not necessarily statistically independent. In this case, the observed data (the criteria values for all alternatives) can be modeled as mixtures of latent variables. Therefore, in the proposed approach, we perform ranking by means of the TOPSIS approach and based on the independent components extracted from the collected decision data. Numerical experiments attest to the usefulness of the proposed approach, as they show that working with latent variables leads to better results compared to existing methods.
electrical engineering and systems science
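For readers unfamiliar with TOPSIS, the ranking step can be sketched as follows. This is a generic TOPSIS implementation, not the paper's code; in the proposed approach, the columns of the decision matrix would be the independent components extracted from the criteria rather than the raw criteria themselves:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by relative closeness to the ideal solution.

    X       : (n_alternatives, n_criteria) decision matrix
    weights : (n_criteria,) importance weights
    benefit : (n_criteria,) boolean mask, True for criteria to maximize
    Returns scores in [0, 1]; higher score means better rank.
    """
    # Vector-normalize each criterion column, then apply weights.
    R = X / np.linalg.norm(X, axis=0)
    V = R * weights
    # Ideal and anti-ideal points depend on whether a criterion is to be
    # maximized (benefit) or minimized (cost).
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
    d_minus = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    return d_minus / (d_plus + d_minus)
```

An alternative that dominates all others in every benefit criterion coincides with the ideal point and receives a score of exactly 1.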
Because of the large mass differences between electrons and ions, the heat diffusion in electron-ion plasmas exhibits more complex behavior than simple heat diffusion found in typical gas mixtures. In particular, heat is diffused in two distinct, but coupled, channels. Conventional single fluid models neglect the resulting complexity, and can often inaccurately interpret the results of heat pulse experiments. However, by recognizing the sensitivity of the electron temperature evolution to the ion diffusivity, not only can previous experiments be interpreted correctly, but informative simultaneous measurements can be made of both ion and electron heat channels.
physics
The CARMENES survey is searching for Earth-like planets orbiting M dwarfs using the radial velocity method. Studying the stellar activity of the target stars is important to avoid false planet detections and to improve our understanding of the atmospheres of late-type stars. In this work we present measurements of activity indicators at visible and near-infrared wavelengths for 331 M dwarfs observed with CARMENES. Our aim is to identify the activity indicators that are most sensitive and easiest to measure, and the correlations among these indicators. We also wish to characterise their variability. Using a spectral subtraction technique, we measured pseudo-equivalent widths of the He I D3, H$\alpha$, He I $\lambda$10833 {\AA}, and Pa$\beta$ lines, the Na I D doublet, and the Ca II infrared triplet, which have a chromospheric component in active M dwarfs. In addition, we measured an index of the strength of two TiO and two VO bands, which are formed in the photosphere. We also searched for periodicities in these activity indicators for all sample stars using generalised Lomb-Scargle periodograms. We find that the most slowly rotating stars of each spectral subtype have the strongest H$\alpha$ absorption. H$\alpha$ is correlated most strongly with He I D3, whereas Na I D and the Ca II infrared triplet are also correlated with H$\alpha$. He I $\lambda$10833 {\AA} and Pa$\beta$ show no clear correlations with the other indicators. The TiO bands show an activity effect that does not appear in the VO bands. We find that the relative variations of H$\alpha$ and He I D3 are smaller for stars with higher activity levels, while this anti-correlation is weaker for Na I D and the Ca II infrared triplet, and is absent for He I $\lambda$10833 {\AA} and Pa$\beta$. Periodic variation with the rotation period most commonly appears in the TiO bands, H$\alpha$, and in the Ca II infrared triplet.
astrophysics
TileCal is the central hadronic calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC). It is a sampling detector where scintillating tiles are embedded in steel absorber plates. The tiles are grouped forming cells, which are read-out on both sides by photomultiplier tubes (PMTs). The PMT digital samples are transmitted to the Read-Out Drivers (ROD) located in the back-end system for the events accepted by the Level 1 trigger system. The ROD is the core element of the back-end electronics and it represents the interface between the front-end electronics and the ATLAS overall Data AcQuisition (DAQ) system. It is responsible for energy and time reconstruction, trigger and data synchronization, busy handling, data integrity checking and lossless data compression. The TileCal ROD is a standard 9U VME board equipped with DSP based Processing Units mezzanine cards. A total of 32 ROD modules are required to read-out the entire TileCal detector. Each ROD module has to process the data from up to 360 PMTs in real time in less than 10 microseconds. The commissioning of the RODs was completed in 2008 before the first LHC collisions. Since then, several hardware and firmware updates have been implemented to accommodate the RODs to the evolving ATLAS Trigger and DAQ conditions adjusted to follow the LHC parameters. The initial ROD system, the different updates implemented and the operational experience during the LHC Run 1 and Run 2 are presented.
physics
Neutrino-nucleus $\nu A\to \nu A$ and antineutrino-nucleus $\bar\nu A\to \bar\nu A$ interactions, when the nucleus conserves its integrity, are discussed with coherent (elastic) and incoherent (inelastic) scattering regimes taken into account. In the first regime the nucleus remains in the same quantum state after the scattering and the cross-section depends quadratically on the number of nucleons. In the second regime the nucleus changes its quantum state and the cross-section depends essentially linearly on the number of nucleons. The coherent and incoherent cross-sections are driven by a nuclear nucleon form-factor squared $|F|^2$ term and a $(1-|F|^2)$ term, respectively. One has a smooth transition between the regimes of coherent and incoherent (anti)neutrino-nucleus scattering. Due to their neutral-current nature, these elastic and inelastic processes are indistinguishable if only the nuclear recoil energy is observed. One way to separate the coherent signal from the incoherent one is to register $\gamma$ quanta from deexcitation of the nucleus excited during the incoherent scattering. Another way is to use a very low-energy threshold detector and collect data at very low recoil energies, where the incoherent scattering is vanishingly small. In particular, for ${}^{133}\text{Cs}$ and neutrino energies of 30--50 MeV the incoherent cross-section is about 15-20\% of the coherent one. Therefore, the COHERENT experiment (with ${}^{133}\text{Cs}$) has measured the coherent elastic neutrino-nucleus scattering (CE$\nu$NS) with an inelastic admixture at a level of 15-20\%, if the excitation $\gamma$ quantum escapes detection.
high energy physics phenomenology
We demonstrate a method to enhance the atom loading rate of a ytterbium (Yb) magneto-optic trap (MOT) operating on the 556 nm ${^1S}_0 \rightarrow {^3P}_1$ intercombination transition (narrow linewidth $\Gamma_g = 2\pi \times 182$ kHz). Following traditional Zeeman slowing of an atomic beam near the 399 nm ${^1S}_0 \rightarrow {^1P}_1$ transition (broad linewidth $\Gamma_p = 2\pi \times 29 $ MHz), two laser beams in a crossed-beam geometry, frequency tuned near the same transition, provide additional slowing immediately prior to the MOT. Using this technique, we observe an improvement by a factor of 6 in the atom loading rate of a narrow-line Yb MOT. The relative simplicity and generality of this approach make it readily adoptable to other experiments involving narrow-line MOTs. We also present a numerical simulation of this two-stage slowing process which shows good agreement with the observed dependence on experimental parameters, and use it to assess potential improvements to the method.
physics
A uniform derivation is presented of the self-consistent field equations in a finite basis set. Both restricted and unrestricted Hartree-Fock (HF) theory as well as various density functional (DF) approximations are considered. The unitary invariance of the HF and DF models is discussed, paving the way for the use of localized molecular orbitals. The self-consistent field equations are derived in a non-orthogonal basis set, and their solution is discussed in the presence of linear dependencies in the basis set. It is argued why iterative diagonalization of the Kohn-Sham-Fock matrix leads to the minimization of the total energy. Alternative methods for the solution of the self-consistent field equations via direct minimization as well as stability analysis are also briefly discussed. Explicit expressions are given for the contributions to the Kohn-Sham-Fock matrix up to meta-GGA functionals. Range-separated hybrids and non-local correlation functionals are also briefly discussed.
physics
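The core numerical step described above, solving the self-consistent field equations $FC = SC\varepsilon$ in a non-orthogonal basis while handling linear dependencies, can be illustrated with a minimal sketch. The canonical-orthogonalization approach and the tolerance `tol` are standard textbook choices, not necessarily the paper's exact procedure:

```python
import numpy as np

def canonical_orthogonalization(S, tol=1e-7):
    """Build a transformation X with X.T @ S @ X = I.

    Near-linear-dependent basis combinations (eigenvalues of the overlap
    matrix S below tol) are dropped, which is how linear dependencies in
    the basis set are commonly handled.
    """
    w, U = np.linalg.eigh(S)
    keep = w > tol
    return U[:, keep] / np.sqrt(w[keep])

def solve_scf_step(F, S):
    """One Roothaan-type step: transform the Kohn-Sham-Fock matrix F to an
    orthonormal basis, diagonalize, and back-transform the coefficients."""
    X = canonical_orthogonalization(S)
    e, Cp = np.linalg.eigh(X.T @ F @ X)
    return e, X @ Cp  # orbital energies and coefficients C
```

The returned coefficients satisfy both the orthonormality condition $C^{T}SC = I$ and the generalized eigenvalue problem $FC = SC\,\mathrm{diag}(\varepsilon)$ when no basis vectors are dropped.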
Intelligent reflecting surface (IRS), which consists of a large number of tunable reflective elements, is capable of enhancing the wireless propagation environment in a cellular network by intelligently reflecting the electromagnetic waves from the base-station (BS) toward the users. The optimal tuning of the phase shifters at the IRS is, however, a challenging problem, because due to the passive nature of reflective elements, it is difficult to directly measure the channels between the IRS, the BS, and the users. Instead of following the traditional paradigm of first estimating the channels then optimizing the system parameters, this paper advocates a machine learning approach capable of directly optimizing both the beamformers at the BS and the reflective coefficients at the IRS based on a system objective. This is achieved by using a deep neural network to parameterize the mapping from the received pilots (plus any additional information, such as the user locations) to an optimized system configuration, and by adopting a permutation invariant/equivariant graph neural network (GNN) architecture to capture the interactions among the different users in the cellular network. Simulation results show that the proposed implicit channel estimation based approach is generalizable, can be interpreted, and can efficiently learn to maximize a sum-rate or minimum-rate objective from a much fewer number of pilots than the traditional explicit channel estimation based approaches.
electrical engineering and systems science
We study the vector-valued spectrum $\mathcal{M}_\infty(B_{c_0},B_{c_0})$, that is, the set of non-null algebra homomorphisms from $\mathcal H^\infty(B_{c_0})$ to $\mathcal H^\infty(B_{c_0})$, which is naturally projected onto the closed unit ball of $\mathcal H^\infty(B_{c_0}, \ell_\infty)$, just as the scalar-valued spectrum $\mathcal M_\infty(B_{c_0})$ is projected onto $\bar{B}_{\ell_\infty}$. Our itinerary begins in the scalar-valued spectrum $\mathcal{M}_\infty(B_{c_0})$: by expanding a result by Cole, Gamelin and Johnson (1992) we prove that on each fiber there are $2^c$ disjoint analytic Gleason isometric copies of $B_{\ell_\infty}$. For the vector-valued case, building on the previous result we obtain $2^c$ disjoint analytic Gleason isometric copies of $B_{\mathcal{H}^\infty(B_{c_0},\ell_\infty)}$ on each fiber. We also take a look at the relationship between fibers and Gleason parts for both vector-valued spectra $\mathcal{M}_{u,\infty}(B_{c_0},B_{c_0})$ and $\mathcal{M}_\infty(B_{c_0},B_{c_0})$.
mathematics
The IRDC SDC335.579-0.292 (SDC335) is a massive star-forming cloud found to be globally collapsing towards one of the most massive star forming cores in the Galaxy. SDC335 hosts three high-mass protostellar objects at early stages of their evolution and archival ALMA Cycle 0 data indicate the presence of at least one molecular outflow in the region. Observations of molecular outflows from massive protostellar objects allow us to estimate the accretion rates of the protostars as well as to assess the disruptive impact that stars have on their natal clouds. The aim of this work is to identify and analyse the properties of the protostellar-driven molecular outflows within SDC335 and use these outflows to help refine the properties of the protostars. We imaged the molecular outflows in SDC335 using new data from the ATCA of SiO and Class I CH$_3$OH maser emission (~3 arcsec) alongside observations of four CO transitions made with APEX and archival ALMA CO, $^{13}$CO (~1 arcsec), and HNC data. We introduced a generalised argument to constrain outflow inclination angles based on observed outflow properties. We used the properties of each outflow to infer the accretion rates on the protostellar sources driving them and to deduce the evolutionary characteristics of the sources. We identify three molecular outflows in SDC335, one associated with each of the known compact HII regions. The outflow properties show that the SDC335 protostars are in the early stages (Class 0) of their evolution, with the potential to form stars in excess of 50 M$_{\odot}$. The measured total accretion rate onto the protostars is $1.4(\pm 0.1) \times 10^{-3}$M$_{\odot}$ yr$^{-1}$, comparable to the total mass infall rate toward the cloud centre on parsec scales of 2.5$(\pm 1.0) \times 10^{-3}$M$_{\odot}$ yr$^{-1}$, suggesting a near-continuous flow of material from cloud to core scales. [abridged].
astrophysics
We propose a model where the masses of the active neutrinos and a dark matter candidate are generated radiatively through the $U(1)_{B-L}$ gauge symmetry breaking. It is realized by a non-universal $U(1)_{B-L}$ charge assignment on the right-handed neutrinos, one of which becomes the DM. The dark matter mass is generally small compared with the typical mass of Weakly Interacting Massive Particles, and we have milder constraints on the dark matter. We consider the case where the dark matter is produced through the freeze-in mechanism and show that the observed dark matter relic density can be realized consistently with the current experimental constraints on the neutrino masses and the lepton flavor structure.
high energy physics phenomenology
Superconducting qubits are a promising platform for building a larger-scale quantum processor capable of solving otherwise intractable problems. In order for the processor to reach practical viability, the gate errors need to be further suppressed and remain stable for extended periods of time. With recent advances in qubit control, both single- and two-qubit gate fidelities are now in many cases limited by the coherence times of the qubits. Here we experimentally employ closed-loop feedback to stabilize the frequency fluctuations of a superconducting transmon qubit, thereby increasing its coherence time by 26\% and reducing the single-qubit error rate from $(8.5 \pm 2.1)\times 10^{-4}$ to $(5.9 \pm 0.7)\times 10^{-4}$. Importantly, the resulting high-fidelity operation remains effective even away from the qubit flux-noise insensitive point, significantly increasing the frequency bandwidth over which the qubit can be operated with high fidelity. This approach is helpful in large qubit grids, where frequency crowding and parasitic interactions between the qubits limit their performance.
quantum physics
Ultrahigh-resolution fiber-optic sensing has been demonstrated with a meter-long, high-finesse fiber Fabry-Perot interferometer (FFPI). The main technical challenge of large, environment-induced resonance frequency drift is addressed by locking the interrogation laser to a similar meter-long FFPI, which, along with the FFPI sensor, is thermally and mechanically isolated from the ambient. A nominal, noise-limited strain resolution of 800 f{\epsilon} /sqrt(Hz) has been achieved within 1 to 100 Hz. Strain resolution further improves to 75 f{\epsilon} /sqrt(Hz) at 1 kHz, 60 f{\epsilon} /sqrt(Hz) at 2 kHz and 40 f{\epsilon} /sqrt(Hz) at 23 kHz, demonstrating comparable or even better resolutions than proven techniques such as {\pi}-phase-shifted and slow-light fiber Bragg gratings. Limitations of the current system are analyzed and improvement strategies are presented. The work lays out a feasible path toward ultrahigh-resolution fiber-optic sensing based on long FFPIs.
physics
We propose a refined but natural notion of toric degenerations that respect a given embedding and show that within this framework a Gorenstein Fano variety can only be degenerated to a Gorenstein Fano toric variety if it is embedded via its anticanonical embedding. This also gives a precise criterion for reflexive polytopes to appear, which might be required for applications in mirror symmetry. For the proof of this statement we will study polytopes whose polar dual is a lattice polytope. As a byproduct we generalize a connection between the number of lattice points in a rational convex polytope and the Euler characteristic of an associated torus invariant rational Weil divisor, allowing us to show that Ehrhart-Macdonald Reciprocity and Serre Duality are equivalent statements for a broad class of varieties. Additionally, we conjecture a necessary and sufficient condition for the Ehrhart quasi-polynomial of a rational convex polytope to be a polynomial. Finally, we show that the anticanonical line bundle on a Gorenstein Fano variety with at worst rational singularities is uniquely determined by a combinatorial condition of its Hilbert polynomial.
mathematics
The forthcoming Laser Interferometer Space Antenna (LISA) will probe the population of coalescing massive black hole (MBH) binaries up to the onset of structure formation. Here we simulate the galactic-scale pairing of $\sim10^6 M_\odot$ MBHs in a typical, non-clumpy main-sequence galaxy embedded in a cosmological environment at $z = 7-6$. In order to increase our statistical sample, we adopt a strategy that allows us to follow the evolution of six secondary MBHs concomitantly. We find that the magnitude of the dynamical-friction induced torques is significantly smaller than that of the large-scale, stochastic gravitational torques arising from the perturbed and morphologically evolving galactic disc, suggesting that the standard dynamical friction treatment is inadequate for realistic galaxies at high redshift. The dynamical evolution of MBHs is very stochastic, and a variation in the initial orbital phase can lead to a drastically different time-scale for the inspiral. Most remarkably, the development of a galactic bar in the host system either significantly accelerates the inspiral by dragging a secondary MBH into the centre, or ultimately hinders the orbital decay by scattering the MBH in the galaxy outskirts. The latter occurs more rarely, suggesting that galactic bars overall promote MBH inspiral and binary coalescence. The orbital decay time can be an order of magnitude shorter than what would be predicted relying on dynamical friction alone. The stochasticity, and the important role of global torques, have crucial implications for the rates of MBH coalescences in the early Universe: both have to be accounted for when making predictions for the upcoming LISA observatory.
astrophysics
Sgr A*, the supermassive black hole (SMBH) in our Galaxy, is dormant today, but it should have gone through multiple gas-accretion episodes in the past billions of years to grow to its current mass of $4\times10^6\,M_\odot$. Each episode temporarily ignites the SMBH and turns the Galactic Center into an active galactic nucleus (AGN). Recently, we showed that the AGN could produce a large amount of hard X-rays that can penetrate the dense interstellar medium in the Galactic plane. Here we further study the impact of the X-rays on the molecular chemistry in our Galaxy. We use a chemical reaction network to simulate the evolution of several molecular species including H2O, CH3OH, and H2CO, both in the gas phase and on the surface of dust grains. We find that the X-ray irradiation could significantly enhance the abundances of these species. The effect is most significant in young, high-density molecular clouds, and could be prominent at a Galactic distance of $8$ kpc or smaller. The imprint in the chemical abundance is visible even several million years after the AGN turns off.
astrophysics
In the framework of fair learning, we consider clustering methods that avoid or limit the influence of a set of protected attributes, $S$, (race, sex, etc) over the resulting clusters, with the goal of producing a {\it fair clustering}. For this, we introduce perturbations to the Euclidean distance that take into account $S$ in a way that resembles attraction-repulsion in charged particles in Physics and results in dissimilarities with an easy interpretation. Cluster analysis based on these dissimilarities penalizes homogeneity of the clusters in the attributes $S$, and leads to an improvement in fairness. We illustrate the use of our procedures with both synthetic and real data. Our procedures are implemented in an R package freely available at https://github.com/HristoInouzhe/AttractionRepulsionClustering.
statistics
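One way the attraction-repulsion idea could be realized is as a multiplicative perturbation of the Euclidean distance. The specific form below and the strength parameter `lam` are illustrative assumptions for exposition, not the paper's exact construction (that is implemented in the linked R package):

```python
import numpy as np

def fair_dissimilarity(X, s, lam=0.5):
    """Euclidean distances perturbed by an attraction-repulsion term.

    Points sharing the protected attribute s "repel" (their distance is
    inflated), while points with different values of s "attract" (their
    distance is deflated), so that clusters built from this dissimilarity
    are pushed toward mixing the protected groups.

    X : (n, d) data matrix; s : (n,) protected-attribute labels.
    """
    diff = X[:, None, :] - X[None, :, :]
    D = np.linalg.norm(diff, axis=-1)          # plain Euclidean distances
    same = (s[:, None] == s[None, :])          # same protected group?
    factor = np.where(same, 1.0 + lam, 1.0 - lam)
    np.fill_diagonal(factor, 1.0)              # self-distance stays zero
    return D * factor
```

The resulting symmetric dissimilarity matrix can be fed to any dissimilarity-based clustering routine, for example hierarchical clustering via `scipy.spatial.distance.squareform` and `scipy.cluster.hierarchy.linkage`.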
Quantum theory (QT) has been confirmed by numerous experiments, yet we still cannot fully grasp the meaning of the theory. As a consequence, the quantum world appears to us paradoxical. Here we shed new light on QT by having it follow from two main postulates: (i) the theory should be logically consistent; (ii) inferences in the theory should be computable in polynomial time. The first postulate is what we require of each well-founded mathematical theory. The computation postulate defines the physical component of the theory. We show that the computation postulate is the only true divide between QT, seen as a generalised theory of probability, and classical probability. All quantum paradoxes, and entanglement in particular, arise from the clash of trying to reconcile a computationally intractable, somewhat idealised, theory (classical physics) with a computationally tractable theory (QT) or, in other words, from regarding physics as fundamental rather than computation.
quantum physics
In the present investigation, we evaluate the in-medium partial decay widths of the excited states $\psi$(4040) and $Y$(4008) decaying to pairs of non-strange pseudoscalar $D \bar{D}$ mesons, strange pseudoscalar $D_s \bar{D}_s$ mesons, non-strange vector $D^* \bar{D}^*$ mesons and pseudoscalar-vector $D \bar{D}^*$ mesons using the $^3P_0$ model. The in-medium effects are incorporated through the in-medium masses of the daughter mesons (calculated using the chiral SU(3) model and the QCD sum rule approach in our previous works). We consider the $\psi$(4040) and $Y$(4008) states as 3$^3S_1$ states and observe the in-medium dominance of one state over the other for a given decay mode. The results of the present investigation will serve as a step forward in assigning the correct spectroscopic state to the controversial $Y$(4008) state.
high energy physics phenomenology
Human-robot interaction plays a crucial role in making robots closer to humans. Robots are usually limited by their own capabilities and therefore utilise Cloud Robotics to enhance their dexterity, which includes the sharing of information such as maps and images as well as processing power. This whole process involves distributing data whose volume is expected to rise enormously. New issues can arise, such as bandwidth limits and network congestion at backhaul and fronthaul systems, resulting in high latency. This can impair seamless connectivity between the robots, the users and the cloud, and a robot may fail to accomplish its goal within a stipulated time. As a consequence, Cloud Robotics may not be in a position to handle the traffic imposed by robots. In contrast, the emerging Fog Robotics paradigm can act as a solution by addressing the major problems of Cloud Robotics. To assess its feasibility, we discuss the need for Fog Robotics and its architectures in this paper. To evaluate the architectures, we use a realistic Fog Robotics scenario and compare it with Cloud Robotics, choosing latency as the primary factor for validating the effectiveness of the system. We also measure real-time latency using a Pepper robot, a Fog robot server and a Cloud server. Experimental results show that Fog Robotics reduces latency significantly compared to Cloud Robotics. Finally, the advantages, challenges and future scope of the Fog Robotics system are discussed.
computer science
Schr\"{o}dinger's Cat is the famous thought experiment, proposed by Erwin Schr\"{o}dinger, in which a cat in a box is both alive and dead simultaneously, illustrating the quantum phenomenon known as superposition. In 2013, Yakir Aharonov and his co-authors conceived of an experiment suggesting that a particle can be separated from its property. They called the effect a "Quantum Cheshire Cat", and it was experimentally verified in the succeeding year. The name Quantum Cheshire Cat is inspired by the fanciful Cheshire Cat character in \textit{Alice's Adventures in Wonderland}, a novel by Lewis Carroll, where the grin of a cat is found without the cat. An important question arises here: once the grin of the Cheshire Cat is separated, is there any correlation still left between the grin and the cat? To answer this question we propose a thought experiment in which the Quantum Cheshire Cat is also a Schr\"{o}dinger's Cat, existing in a superposition of happy (smiling) and sad (frowning) states. We name this cat a "Quantum Mona Lisa Cat", because it is historically presumed that the Mona Lisa's portrait contains both happy (smiling) and sad (frowning) characteristics, and either is observed depending upon the mood of the observer. We show that a property separated from its particle behaves as a "Quantum Mona Lisa Cat".
quantum physics
Recent work has attempted to directly approximate the `function-space' or predictive posterior distribution of Bayesian models, without approximating the posterior distribution over the parameters. This is appealing in e.g. Bayesian neural networks, where we only need the former, and the latter is hard to represent. In this work, we highlight some advantages and limitations of employing the Kullback-Leibler divergence in this setting. For example, we show that minimizing the KL divergence between a wide class of parametric distributions and the posterior induced by a (non-degenerate) Gaussian process prior leads to an ill-defined objective function. Then, we propose (featurized) Bayesian linear regression as a benchmark for `function-space' inference methods that directly measures approximation quality. We apply this methodology to assess aspects of the objective function and inference scheme considered in Sun, Zhang, Shi, and Grosse (2018), emphasizing the quality of approximation to Bayesian inference as opposed to predictive performance.
statistics
Uplift is a particular case of conditional treatment effect modeling. Such models deal with cause-and-effect inference for a specific factor, such as a marketing intervention or a medical treatment. In practice, these models are built on individual data from randomized clinical trials, where the goal is to partition the participants into heterogeneous groups depending on the uplift. Most existing approaches are adaptations of random forests to the uplift case. Several split criteria have been proposed in the literature, all relying on maximizing heterogeneity. In practice, however, these approaches are prone to overfitting. In this work, we bring a new vision to uplift modeling. We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk. Our solution is developed for a specific twin neural network architecture that allows joint optimization of the marginal probabilities of success for treated and control individuals. We show that this model is a generalization of the uplift logistic interaction model. We modify the stochastic gradient descent algorithm to allow for structured sparse solutions, which greatly helps in training our uplift models. We show that our proposed method is competitive with the state of the art in simulation settings and on real data from large-scale randomized experiments.
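The uplift logistic interaction model that the proposed architecture generalizes can be sketched concretely. The twin-network loss itself is not reproduced here; this shows only the baseline model, with illustrative parameter values: two "heads" sharing the same base coefficients, differing by the treatment interaction terms, with the predicted uplift being their difference.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uplift_interaction(x, beta0, beta, gamma0, gamma):
    """Uplift under the logistic interaction model:
    P(y=1 | x, t) = sigmoid(beta0 + x @ beta + t * (gamma0 + x @ gamma)),
    so the predicted uplift is P(y=1 | x, t=1) - P(y=1 | x, t=0)."""
    p0 = sigmoid(beta0 + x @ beta)                       # control head
    p1 = sigmoid(beta0 + x @ beta + gamma0 + x @ gamma)  # treated head
    return p1 - p0

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 2))                  # five individuals, two covariates
beta0, beta = -0.3, np.array([0.8, -0.5])    # shared (main-effect) parameters
gamma0, gamma = 0.4, np.array([0.2, 0.1])    # treatment-interaction parameters

u = uplift_interaction(x, beta0, beta, gamma0, gamma)

# With no interaction terms the two heads coincide and the uplift vanishes
u_null = uplift_interaction(x, beta0, beta, 0.0, np.zeros(2))
```

Structured sparsity in the interaction parameters (driving entries of `gamma` to zero) then directly zeroes out an individual covariate's contribution to the uplift, which is one way to read the paper's modified SGD procedure.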
statistics
We present a novel symmetry of the colored HOMFLY polynomial. It relates pairs of polynomials colored by different representations at specific values of $N$ and generalizes the previously known "tug-the-hook" symmetry of the colored Alexander polynomial. As we show, the symmetry has a superalgebra origin, which we discuss qualitatively. Our main focus is the constraints that such a property imposes on the general group-theoretical structure, namely the $\mathfrak{sl}(N)$ weight system, arising in the perturbative expansion of the invariant. Finally, we demonstrate its tight relation to the eigenvalue conjecture.
high energy physics theory
Data centers owned and operated by large companies consume large amounts of power, and this consumption is expected to increase in the future. However, the ability to shift computing loads geographically and in time can provide flexibility to the power grid. We introduce the concept of virtual links to capture the space-time load flexibility provided by geographically distributed data centers in market clearing procedures. We show that the virtual link abstraction fits well into existing market clearing frameworks and can help analyze and establish market design properties. This is demonstrated using illustrative case studies.
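The intuition behind space-time load flexibility can be sketched with a toy allocation: a data center's flexible compute demand may be served at different (node, hour) slots, each with a price and a capacity headroom, and a "virtual link" lets the load flow to the cheapest feasible slots. The paper's actual market-clearing formulation is an optimization problem; the greedy fill below, with made-up slot names and numbers, is only an illustration (greedy is optimal here because there is a single total-demand constraint).

```python
def allocate(demand, slots):
    """Greedily serve `demand` from the cheapest (price, headroom, name)
    slots first; returns a {slot name: amount served} plan."""
    plan, remaining = {}, demand
    for price, headroom, name in sorted(slots):
        take = min(remaining, headroom)
        if take > 0:
            plan[name] = take
            remaining -= take
        if remaining == 0:
            break
    assert remaining == 0, "total flexible demand exceeds total headroom"
    return plan

# Illustrative (node, hour) slots: $/MWh price and MW of headroom
slots = [
    (20.0, 6.0, "node1-offpeak"),
    (35.0, 8.0, "node1-peak"),
    (25.0, 5.0, "node2-offpeak"),
]
plan = allocate(10.0, slots)
```

The cheap off-peak slot at node 1 fills first, and the remainder shifts across the virtual link to node 2's off-peak slot rather than node 1's expensive peak hour, which is precisely the kind of geographic and temporal substitution the abstraction captures.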
electrical engineering and systems science
We observe that in the presence of excitation, a thermodynamic Smarr-like relation corresponding to a generalized entanglement temperature ($T_g$) can be holographically obtained for the entanglement entropy of a subsystem. Such a relation emerges naturally by demanding that the generalized entanglement temperature produce the exact Hawking temperature as the leading term in the IR limit ($l\rightarrow \infty$). Remarkably, this relation has the same form as the Smarr relation in black hole thermodynamics. We demonstrate this for three spacetime geometries: a background with a nonconformal factor, a hyperscaling violating geometry background, and a charged black hole background, which corresponds to a field theory with a finite chemical potential.
high energy physics theory