text — string (lengths 11 to 9.77k)
label — string (lengths 2 to 104)
An analysis of the experimental search for supernarrow dibaryons (SNDs) has been performed. Sum rules for the SND masses have been constructed. The calculated values of the SND masses are in good agreement with the existing experimental values. It has been shown that SND decay leads to the formation of N* with small masses. Experimental observation of these N* is an additional confirmation of the possibility of SND existence.
high energy physics phenomenology
In this paper, we show several rigorous results on the phase transition of Finitary Random Interlacement (FRI). For the high-intensity regime, we show the existence of a critical fiber length and give its exact asymptotics as the intensity goes to infinity. At the same time, our result for the low-intensity regime proves the global existence of a non-trivial phase transition with respect to the system intensity.
mathematics
We consider entanglement measures in 2-2 scattering in quantum field theories, focusing on relative entropy which distinguishes two different density matrices. Relative entropy is investigated in several cases which include $\phi^4$ theory, chiral perturbation theory ($\chi PT$) describing pion scattering and dilaton scattering in type II superstring theory. We derive a high energy bound on the relative entropy using known bounds on the elastic differential cross-sections in massive QFTs. In $\chi PT$, relative entropy close to threshold has simple expressions in terms of ratios of scattering lengths. Definite sign properties are found for the relative entropy which are over and above the usual positivity of relative entropy in certain cases. We then turn to the recent numerical investigations of the S-matrix bootstrap in the context of pion scattering. By imposing these sign constraints and the $\rho$ resonance, we find restrictions on the allowed S-matrices. By performing hypothesis testing using relative entropy, we isolate two sets of S-matrices living on the boundary which give scattering lengths comparable to experiments but one of which is far from the 1-loop $\chi PT$ Adler zeros. We perform a preliminary analysis to constrain the allowed space further, using ideas involving positivity inside the extended Mandelstam region, and elastic unitarity.
high energy physics theory
Aims. GRB 190829A (z = 0.0785), detected by Fermi and Swift with two emission episodes separated by a quiescent gap of ~40 s, was also observed by the H.E.S.S. telescopes at Very High Energy (VHE). We present the 10.4m GTC observations of the afterglow of GRB 190829A and its underlying supernova, compare it with the similar GRB 180728A, and discuss the implications for the underlying physical mechanisms producing these two GRBs. Methods. We present multi-band photometric data along with spectroscopic follow-up observations taken with the 10.4m GTC telescope. Together with the data from the prompt emission, the 10.4m GTC data are used to understand the emission mechanisms and the possible progenitor. Results. A detailed analysis of the multi-band afterglow data requires the cooling frequency to pass between the optical and X-ray bands at early epochs, with the underlying SN 2019oyw dominating later on. Conclusions. The prompt-emission temporal properties of GRB 190829A and GRB 180728A are similar; however, the two pulses seem different in the spectral domain. We find that the supernova (SN) 2019oyw associated with GRB 190829A, powered by Ni decay, is of Type Ic-BL and that the spectroscopic/photometric properties of this SN are consistent with those observed for SN 1998bw, though evolving comparatively early.
astrophysics
Logic regression was developed more than a decade ago as a tool to construct predictors from Boolean combinations of binary covariates. It has mainly been used to model epistatic effects in genetic association studies, which is very appealing due to the intuitive interpretation of logic expressions describing the interaction between genetic variations. Nevertheless, logic regression has (partly due to computational challenges) remained less well known than other approaches to epistatic association mapping. Here we adapt an advanced evolutionary algorithm called GMJMCMC (Genetically Modified Mode Jumping Markov Chain Monte Carlo) to perform Bayesian model selection in the space of logic regression models. After describing the algorithmic details of GMJMCMC, we perform a comprehensive simulation study that illustrates its performance given logic regression terms of various complexity. Specifically, GMJMCMC is shown to be able to identify three-way and even four-way interactions with relatively large power, a level of complexity which has not been achieved by previous implementations of logic regression. We apply GMJMCMC to reanalyze QTL mapping data for Recombinant Inbred Lines in \textit{Arabidopsis thaliana} and from a backcross population in \textit{Drosophila}, where we identify several interesting epistatic effects. The method is implemented in an R package which is available on GitHub.
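As a toy illustration of the model space described above (not the GMJMCMC algorithm itself), the following Python sketch evaluates a single logic-regression term, a Boolean combination of binary covariates, and scores the resulting logistic model by BIC; the specific term, simulated data, and scoring rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logic_term(X):
    """Example logic-regression feature: (x0 AND NOT x1) OR x2 (hypothetical)."""
    return ((X[:, 0] & ~X[:, 1]) | X[:, 2]).astype(float)

def bic_score(feature, y):
    """BIC of a logistic model with one logic feature plus intercept."""
    Z = feature.reshape(-1, 1)
    model = LogisticRegression(C=1e6).fit(Z, y)   # essentially unpenalized
    p = model.predict_proba(Z)[:, 1]
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return -2.0 * loglik + 2.0 * np.log(len(y))   # 2 parameters

# Simulated binary covariates and a response driven by the logic term
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 5)).astype(bool)
logit = 2.0 * logic_term(X) - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
print(bic_score(logic_term(X), y))
```

A model search like GMJMCMC would compare many such terms, using scores of this kind to guide jumps between models.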
statistics
Comets are considered to be some of the most pristine and unprocessed solar system objects accessible to in-situ exploration. Investigating their molecular and elemental composition takes us on a journey back to the early period of our solar system and possibly even further. In this work, we deduce the bulk abundances of the major volatile species in comet 67P/Churyumov-Gerasimenko, the target of the European Space Agency's Rosetta mission. The basis is the set of measurements obtained with the ROSINA instrument suite on board the Rosetta orbiter during a suitable period of high outgassing near perihelion. The results are combined with both gas and dust composition measurements published in the literature. This provides an integrated inventory of the major elements present in the nucleus of 67P/Churyumov-Gerasimenko. Similar to comet 1P/Halley, which was visited by ESA's Giotto spacecraft in 1986, comet 67P/Churyumov-Gerasimenko shows near-solar abundances of oxygen and carbon, whereas hydrogen and nitrogen are depleted compared to solar. Still, the degree of devolatilization is lower than that of inner solar system objects, including meteorites and the Earth. This supports the idea that comets are among the most pristine objects in our solar system.
astrophysics
Due to the publicly known and deterministic nature of pilot tones, pilot authentication (PA) in multi-user multi-antenna orthogonal frequency-division multiplexing systems is very susceptible to jamming/nulling/spoofing behaviors. To solve this, we develop in this paper a hierarchical 2-D feature (H2DF) coding theory that exploits hidden pilot signal features, namely, the energy feature and the independence feature, to secure pilot information coding, which is applied between legitimate parties through a well-designed five-layer hierarchical coding model to achieve secure multiuser PA (SMPA). The reliability of SMPA is characterized by the identification error probability (IEP) of pilot encoding and decoding, with exact closed-form upper and lower bounds. However, the non-tightness of these bounds brings the risk of long-term instability in SMPA. Therefore, a reliability bound contraction theory is developed to shrink the bound interval; practically, this is done by an easy-to-implement technique, namely, codebook partition within the H2DF code. In this process, a tradeoff between the upper and lower bounds of IEP is identified, and an optimal upper and lower bound tradeoff problem is formulated, with the objective of optimizing the cardinality of sub-codebooks such that the upper and lower bounds coincide. Solving this, we finally derive an exact closed-form expression for IEP, which yields a stable and highly reliable SMPA. Numerical results validate the stability and resilience of H2DF coding in SMPA.
electrical engineering and systems science
An approach to study a generalization of the classical-quantum transition for general systems is proposed. In order to develop the idea, a deformation of the ladder operators algebra is proposed that contains a realization of the quantum group $SU(2)_q$ as a particular case. In this deformation Planck's constant becomes an operator whose eigenvalues approach $\hbar $ for small values of $n$ (the eigenvalue of the number operator), and zero for large values of $n$ (the system is classicalized).
high energy physics theory
We describe the Springer correspondence explicitly for exceptional Lie algebras of type $G_2$ and $F_4$ and their duals in bad characteristics, i.e. in characteristics 2 and 3.
mathematics
Principal component analysis (PCA) is a versatile tool for dimensionality reduction with wide applications in the statistics and machine learning communities. It is particularly useful for modeling data in high-dimensional scenarios where the number of variables $p$ is comparable to, or much larger than, the sample size $n$. Despite extensive literature on this topic, research has focused on modeling static principal eigenvectors or subspaces, which is unsuitable for stochastic processes that are dynamic in nature. To characterize the change over the whole course of high-dimensional data collection, we propose a unified framework to estimate dynamic principal subspaces spanned by leading eigenvectors of covariance matrices. In the proposed framework, we formulate an optimization problem combining kernel smoothing and a regularization penalty together with an orthogonality constraint, which can be effectively solved by the proximal gradient method for manifold optimization. We show that our method is suitable for high-dimensional data observed under both common and irregular designs. In addition, theoretical properties of the estimators are investigated under $\ell_q$ ($0 \leq q \leq 1$) sparsity. Extensive experiments demonstrate the effectiveness of the proposed method in both simulated and real data examples.
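A minimal numpy sketch of the kernel-smoothing idea described above, assuming a Gaussian kernel and omitting the sparsity penalty and manifold-constrained optimization that the full method uses: the time-varying covariance is estimated by kernel-weighted averaging, and its leading eigenvectors give the dynamic principal subspace. All names and the bandwidth choice are illustrative, not the authors' implementation.

```python
import numpy as np

def dynamic_principal_subspace(X, obs_times, t, bandwidth, k):
    """Estimate the rank-k principal subspace at time t.

    X          : (n, p) array, one centered observation per row
    obs_times  : (n,) array of observation times in [0, 1]
    t          : time point at which the subspace is estimated
    bandwidth  : kernel bandwidth h (illustrative choice)
    k          : number of leading eigenvectors to keep
    """
    # Gaussian kernel weights, normalized to sum to one
    w = np.exp(-0.5 * ((obs_times - t) / bandwidth) ** 2)
    w /= w.sum()
    # Kernel-smoothed covariance estimate at time t
    sigma_t = (X * w[:, None]).T @ X
    # Leading eigenvectors span the estimated principal subspace
    eigvals, eigvecs = np.linalg.eigh(sigma_t)
    return eigvecs[:, -k:][:, ::-1]  # columns sorted by decreasing eigenvalue

# Toy usage: a slowly varying signal embedded in 20 dimensions
rng = np.random.default_rng(0)
n, p = 500, 20
times = np.sort(rng.uniform(0, 1, n))
X = rng.normal(size=(n, p)) * 0.1
X[:, 0] += np.cos(2 * np.pi * times) * rng.normal(1.0, 0.1, n)
U_hat = dynamic_principal_subspace(X, times, t=0.5, bandwidth=0.1, k=2)
print(U_hat.shape)  # (20, 2)
```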
statistics
For a given group $G$ and a collection of subgroups $\mathcal F$ of $G$, we show that there exists a left-induced model structure on the category of right $G$-simplicial sets, in which the weak equivalences and cofibrations are the maps that induce weak equivalences and cofibrations on the $H$-orbits for all $H$ in $\mathcal F$. This gives a model-categorical criterion for maps that induce weak equivalences on $H$-orbits to be weak equivalences in the $\mathcal F$-model structure.
mathematics
4U 1543-47 is a low-mass X-ray binary in our Milky Way galaxy which harbours a stellar-mass black hole. In this paper, we revisit 7 data sets taken in the Steep Power Law state of the 2002 outburst. The spectra were observed by the Rossi X-ray Timing Explorer. We have carefully modelled the X-ray reflection spectra and made a joint fit to these spectra with relxill for the reflected emission. We find a moderate black hole spin of $0.67_{-0.08}^{+0.15}$ at 90% statistical confidence. Negative and low spins (< 0.5) are ruled out at more than 99% statistical confidence. In addition, our results indicate that the model requires a super-solar iron abundance, $5.05_{-0.26}^{+1.21}$, and an inner-disc inclination angle of $36.3_{-3.4}^{+5.3}$ degrees. This inclination angle is appreciably larger than the binary orbital inclination angle (~21 degrees); the difference is possibly a systematic artefact of the artificially low density employed in the reflection model for this X-ray binary system.
astrophysics
This paper presents TurboNet, a novel model-driven deep learning (DL) architecture for turbo decoding that combines DL with the traditional max-log maximum a posteriori (MAP) algorithm. To design TurboNet, we unfold the original iterative structure for turbo decoding and replace each iteration by a deep neural network (DNN) decoding unit. In particular, the DNN decoding unit is obtained by parameterizing the max-log-MAP algorithm rather than replacing the whole decoder with a black-box fully connected DNN architecture. With the proposed architecture, the parameters can be efficiently learned from training data, and thus TurboNet learns to appropriately use systematic and parity information to offer higher error-correction capability and lower computational complexity compared with existing methods. Furthermore, simulation results demonstrate TurboNet's superiority in generalization across signal-to-noise ratios.
electrical engineering and systems science
Deep learning has fostered many novel applications in materials informatics. However, the inverse design of inorganic crystals, $\textit{i.e.}$, generating new crystal structures with targeted properties, remains a grand challenge. An important ingredient for such generative models is an invertible representation that accesses the full periodic table. This is challenging due to limited data availability and the complexity of 3D periodic crystal structures. In this paper, we present a generalized invertible representation that encodes the crystallographic information into descriptors in both real space and reciprocal space. Combined with a generative variational autoencoder (VAE), a wide range of crystallographic structures and chemistries with desired properties can be inverse-designed. We show that our VAE model predicts novel crystal structures that do not exist in the training and test database (Materials Project) with targeted formation energies and band gaps. We validate those predicted crystals by first-principles calculations. Finally, to design solids with practical applications, we address the sparse label problem by building a semi-supervised VAE and demonstrate its successful prediction of unique thermoelectric materials.
physics
Quantum measurement is essential to both the foundations and practical applications of quantum information science. Among many possible models of quantum measurement, feedback measurements that dynamically update their physical structure are highly interesting due to their flexibility which enables a wide range of measurements that might otherwise be hard to implement. Here we investigate by detector tomography a measurement consisting of a displacement operation combined with photon detection followed by a real time feedback operation. We design the measurement in order to discriminate the superposition of vacuum and single photon states -- the single-rail qubit -- and find that it can discriminate the superposition states with a certainty of 96\%. Such a feedback-controlled photon counter will facilitate the realization of quantum information protocols with single-rail qubits as well as the non-locality test of certain entangled states.
quantum physics
We derive forms of light-state dominance for correlators in CFT$_d$, making precise the sense in which correlators can be approximated by the contribution of light operator exchanges. Our main result is that the four-point function of operators with dimension $\Delta$ is approximated, with bounded error, by the contribution of operators with scaling dimension below $\Delta_c > 2\Delta$ in the appropriate OPE channel. Adapting an existing modular invariance argument, we use crossing symmetry to show that the heavy-state contribution is suppressed by a relative factor of $e^{2\Delta-\Delta_c}$. We extend this result to the first sheet and derivatives of the correlator. Further exploiting technical similarities between crossing and modular invariance, we prove analogous results for the $2d$ partition function along the way. We then turn to effective field theory in gapped theories and AdS/CFT, and make some general comments about the effect of integrating out heavy particles in the bulk. Combining our bounds with the Lorentzian OPE inversion formula we show that, under certain conditions, light-state dominance implies that integrating out heavy exchanges leads to higher-derivative couplings suppressed at large $\Delta_{gap}$.
high energy physics theory
The most powerful superflares, reaching 10$^{39}$ erg in bolometric energy, are from giant stars. The mechanism behind flaring is thought to be magnetic reconnection, which is closely related to magnetic activity, including starspots. However, it is poorly understood how the underlying magnetic dynamo works and what role the observed stellar properties, which eventually control the dynamo action, play in flare activity. We analyse the flaring activity of KIC 2852961, a late-type giant star, in order to understand how its flare statistics are related to those of other stars with flares and superflares, and what the role of the observed stellar properties in generating flares is. We search for flares in the full Kepler dataset of the star by an automated technique together with visual inspection, and set a final list of 59 verified flares during the observing term. We calculate flare energies for the sample and perform a statistical analysis. The stellar properties of KIC 2852961 are revised and a more consistent set of parameters is proposed. The cumulative flare energy distribution can be characterized by a broken power-law, i.e., in log-log representation the distribution function is fitted by two linear functions with different slopes, depending on the energy range fitted. We find that the total flare energy integrated over a few rotation periods correlates with the average amplitude of the rotational modulation due to starspots. Flares and superflares seem to be the result of the same physical mechanism at different energetic levels, also implying that late-type stars on the main sequence and flaring giant stars share the same underlying physical process for emitting flares. There might be a scaling effect behind the generation of flares and superflares, in the sense that the higher the magnetic activity, the higher the overall magnetic energy released by flares and/or superflares.
astrophysics
We establish a correspondence between a class of Wilson-'t Hooft lines in four-dimensional $\mathcal{N} = 2$ supersymmetric gauge theories described by circular quivers and transfer matrices constructed from dynamical L-operators for trigonometric quantum integrable systems. We compute the vacuum expectation values of the Wilson-'t Hooft lines in a twisted product space $S^1 \times_\epsilon \mathbb{R}^2 \times \mathbb{R}$ by supersymmetric localization and show that they are equal to the Wigner transforms of the transfer matrices. A variant of the AGT correspondence implies an identification of the transfer matrices with Verlinde operators in Toda theory, which we also verify. We explain how these field theory setups are related to four-dimensional Chern-Simons theory via embedding into string theory and dualities.
high energy physics theory
In this study, we focus on the question of stability of NISQ devices. The parameters that define the device stability profile are motivated by the work of DiVincenzo where the requirements for physical implementation of quantum computing are discussed. We develop the metrics and theoretical framework to quantify the DiVincenzo requirements and study the stability of those key metrics. The basis of our assessment is histogram similarity (in time and space). For identical experiments, devices which produce reproducible histograms in time, and similar histograms in space, are considered more reliable. To investigate such reliability concerns robustly, we propose a moment-based distance (MBD) metric. We illustrate our methodology using data collected from IBM's Yorktown device. Two types of assessments are discussed: spatial stability and temporal stability.
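The abstract does not spell out the moment-based distance, so the following is only a plausible numpy sketch of how a distance between two measurement histograms could be built from their empirical moments; the weighting and the number of moments compared are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def moment_based_distance(p, q, outcomes=None, n_moments=4):
    """Distance between two outcome histograms based on empirical moments.

    p, q      : histograms (relative frequencies) over the same outcomes
    outcomes  : numeric values attached to the bins (default 0, 1, 2, ...)
    n_moments : number of raw moments to compare (assumed, not from the paper)
    """
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    if outcomes is None:
        outcomes = np.arange(len(p), dtype=float)
    # k-th raw moments of each histogram, k = 1..n_moments
    mp = np.array([np.sum(p * outcomes**k) for k in range(1, n_moments + 1)])
    mq = np.array([np.sum(q * outcomes**k) for k in range(1, n_moments + 1)])
    return float(np.linalg.norm(mp - mq))

# Toy example: the same 2-qubit circuit measured on two different days
day1 = [0.48, 0.02, 0.03, 0.47]   # frequencies over bitstrings 00, 01, 10, 11
day2 = [0.40, 0.06, 0.07, 0.47]
print(moment_based_distance(day1, day2))
```

Temporal stability would then correspond to this distance staying small across repeated runs, and spatial stability to it staying small across qubit registers.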
quantum physics
The aura of mystery surrounding quantum physics makes it difficult to advance quantum technologies. Demystification requires methodological techniques that explain the basics of quantum technologies without metaphors and abstract mathematics. This article provides an example of such an explanation for the BB84 quantum key distribution protocol based on phase coding. This allows one to become seamlessly acquainted with the real QRate cryptographic installation used at the WorldSkills competition in the "Quantum Technologies" competence.
physics
In response to the growing need for low-radioactivity argon, community experts and interested parties came together for a 2-day workshop to discuss the worldwide low-radioactivity argon needs and the challenges associated with its production and characterization. Several topics were covered: experimental needs and requirements for low-radioactivity argon, the sources of low-radioactivity argon and its production, how long-lived argon radionuclides are created in nature, measuring argon radionuclides, and other applicable topics. The Low-Radioactivity Underground Argon (LRUA) workshop took place on March 19-20, 2018 at Pacific Northwest National Laboratory in Richland, Washington, USA. This paper is a synopsis of the workshop with the associated abstracts from the talks.
physics
Dynamical quantum phase transitions (DQPTs), which refer to criticality in time of a quantum many-body system, have attracted much theoretical and experimental research interest recently. Although DQPTs are defined and signalled by the non-analyticities in the Loschmidt rate, their interrelation with various correlation measures, such as the equilibrium order parameters of the system, remains unclear. In this work, by considering the quench dynamics of an interacting topological model, we find that the equilibrium order parameters of the model in general exhibit signatures around the DQPT in the short-time regime. The first extrema of the equilibrium order parameters are connected to the first Loschmidt-rate peak. By studying the unequal-time two-point correlation, we also find that, in the non-interacting case and upon the addition of repulsive intra-cell interactions, the correlation between nearest neighbors decays while that with neighbors further away builds up as time grows. On the other hand, the inter-cell interaction tends to suppress the two-site correlations. These findings provide insight into the characteristics of the system around DQPTs and pave the way to a better understanding of the dynamics of non-equilibrium quantum many-body systems.
condensed matter
Previous studies have shown that sea-ice drift effectively promotes the onset of a globally ice-covered snowball climate for paleo Earth and for tidally locked planets around low-mass stars. Here, we investigate whether sea-ice drift can influence the stellar flux threshold for snowball climate onset on rapidly rotating aqua-planets around a Sun-like star. Using a fully coupled atmosphere--land--ocean--sea-ice model with sea-ice drift turned on or off, a circular orbit with no eccentricity (e=0) and an eccentric orbit (e=0.2) are examined. When sea-ice drift is turned off, the stellar flux threshold for the snowball onset is 1250--1275 and 1173--1199 W m^-2 for e=0 and 0.2, respectively. The difference is mainly due to the poleward retreat of the sea ice and snow edges when the planet is close to perihelion in the eccentric orbit. When sea-ice drift is turned on, the respective stellar flux thresholds are 1335--1350 and 1250--1276 W m^-2. This means that sea-ice drift increases the snowball onset threshold by ~80 W m^-2 for both e=0 and 0.2, promoting the formation of a snowball climate state. We further show that oceanic dynamics have a small effect, <26 W m^-2, on the snowball onset threshold. This is because oceanic heat transport becomes progressively weaker as the sea-ice edge approaches the equator. These results imply that sea-ice dynamics are important for the climate of planets close to the outer edge of the habitable zone, whereas oceanic heat transport is less important.
astrophysics
Given any Koszul algebra of finite global dimension one can define a new algebra, which we call a higher zigzag algebra, as a twisted trivial extension of the Koszul dual of our original algebra. If our original algebra is the path algebra of a quiver whose underlying graph is a tree, this construction recovers the zigzag algebras of Huerfano and Khovanov. We study examples of higher zigzag algebras coming from Iyama's iterative construction of type A higher representation finite algebras. We give presentations of these algebras by quivers and relations, and describe relations between spherical twists acting on their derived categories. We then make a connection to the McKay correspondence in higher dimensions: if G is a finite abelian subgroup of the special linear group acting on affine space, then the skew group algebra which controls the category of G-equivariant sheaves is Koszul dual to a higher zigzag algebra. Using this, we show that our relations between spherical twists appear naturally in examples from algebraic geometry.
mathematics
Recently it was proposed that microscopic models of braneworld cosmology could be realized in the context of AdS/CFT using black hole microstates containing an end-of-the-world brane. Motivated by a desire to establish the microscopic existence of such microstates, which so far have been discussed primarily in bottom-up models, we have studied similar microstates in a simpler version of AdS/CFT. On one side, we define and study boundary states in the charged Sachdev-Ye-Kitaev model and show that these states typically look thermal with a certain pattern of symmetry breaking. On the other side, we study the dimensional reduction of microstates in Einstein-Maxwell theory featuring an end-of-the-world brane and show that they have an equivalent description in terms of 2D Jackiw-Teitelboim gravity coupled to an end-of-the-world particle. In particular, the same pattern of symmetry breaking is realized in both sides of the proposed duality. These results give significant evidence that such black hole microstates have a sensible microscopic realization.
high energy physics theory
We address the swimming problem at low Reynolds number. This regime, which is typical of micro-swimmers, is described by the Stokes equations. We couple a PDE solver for the Stokes equations, derived from the Feel++ finite element library, to a quaternion-based rigid-body solver. We validate our numerical results against a 2D exact solution and an exact solution for a rotating rigid body. Finally, we apply the method to simulate the motion of a one-hinged swimmer, which obeys the scallop theorem.
mathematics
Calculations of central exclusive diffractive di-pion continuum production are presented in the Regge-eikonal approach. Data from the ISR, STAR, CDF, and CMS were analyzed and compared with the theoretical description. We also present theoretical predictions for the LHC, discuss possible nuances and problems of the calculations, and outline prospects for investigations at present and future hadron colliders.
high energy physics phenomenology
An affine surface is said to be an affine Zoll surface if all affine geodesics close smoothly. It is said to be an affine almost Zoll surface if, through any point, every affine geodesic but one closes smoothly (the exceptional geodesic is said to be alienated, as it does not return). We exhibit an affine structure on the cylinder which is almost Zoll. This structure is geodesically complete, affine Killing complete, and affine symmetric.
mathematics
The Near-IR Imaging Spectropolarimeter (NIRIS) is a polarimeter installed at the New Solar Telescope at Big Bear Solar Observatory. This instrument takes advantage of the telescope's high spatial resolution and flux. The primary mirror is an on-axis type, so it was of interest to evaluate its contribution to the crosstalk among the Stokes parameters, since we could not put our calibration optics before the mirror. We present our efforts to compensate for the crosstalk among Stokes profiles caused by the relay optics from the telescope to the detector. The overall data-processing pipeline is also introduced.
astrophysics
The Mixed Conventional/Braking Actuation Mobile Robot (MAMR) was designed to tackle some of the drawbacks of conventional mobile robots, such as loss of controllability due to primary actuator failures, mechanical complexity, weight, and cost. It replaces conventional steering wheels with braking actuators and conventional drive wheels with a single omni-directional wheel, which places it in the category of under-actuated mobile robots. The brakes have only two states, ON and OFF, resulting in discontinuous dynamics. This motivates the use of a discontinuous control law to control the system. This work presents a Sliding Mode Controller (SMC) design to park the MAMR from a given initial configuration to a desired final configuration. Experimental results are presented to validate the parking control of the MAMR.
electrical engineering and systems science
The Jellyfish network has recently been proposed as an alternative to the fat-tree network as the interconnect for data centers and high-performance computing clusters. Jellyfish adopts a random regular graph as its topology and has been shown to be more cost-effective than fat-trees. Effective routing on Jellyfish is challenging: it is known that shortest-path routing and equal-cost multi-path routing (ECMP) do not work well on Jellyfish, and existing schemes use variations of k-shortest-path routing (KSP). In this work, we study two routing components for Jellyfish: path selection, which decides the paths used to route traffic, and routing mechanisms, which decide the path to be used for each packet. We show that the performance of existing KSP can be significantly improved by incorporating two heuristics, randomization and edge-disjointness. We evaluate a range of routing mechanisms, including traffic-oblivious and traffic-adaptive schemes, and identify an adaptive routing scheme with significantly higher performance than the others, including Universal Globally Adaptive Load-balanced (UGAL) routing.
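A small networkx sketch of one plausible reading of the two heuristics named above: draw candidate shortest paths, shuffle ties randomly, and greedily keep edge-disjoint ones. The selection rule is an illustrative assumption, not the paper's exact algorithm.

```python
import random
from itertools import islice

import networkx as nx

def disjoint_ksp(G, src, dst, k, n_candidates=16, seed=0):
    """Pick up to k paths from src to dst, preferring edge-disjoint ones.

    Candidates are the n_candidates shortest simple paths; randomization
    breaks ties among equal-length paths before the greedy disjoint pass.
    """
    rng = random.Random(seed)
    candidates = list(islice(nx.shortest_simple_paths(G, src, dst), n_candidates))
    rng.shuffle(candidates)                  # randomization heuristic
    candidates.sort(key=len)                 # stable sort keeps shuffled ties
    chosen, used_edges = [], set()
    for path in candidates:                  # edge-disjointness heuristic
        edges = {frozenset(e) for e in zip(path, path[1:])}
        if not edges & used_edges:
            chosen.append(path)
            used_edges |= edges
        if len(chosen) == k:
            break
    return chosen

# Jellyfish-style topology: a random regular graph (degree 4, 32 switches)
G = nx.random_regular_graph(d=4, n=32, seed=1)
for p in disjoint_ksp(G, 0, 17, k=3):
    print(p)
```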
computer science
We consider nonregular fractions of factorial experiments for a class of linear models. These models have a common general mean and main effects, but they may have different 2-factor interactions. Here we assume for simplicity that 3-factor and higher-order interactions are negligible. In the absence of a priori knowledge about which interactions are important, it is reasonable to prefer a design that yields equal variance for the estimates of all interaction effects to aid in model discrimination. Such designs are called common variance designs and can be quite challenging to identify without performing an exhaustive search of possible designs. In this work, we introduce an extension of common variance designs called approximate common variance, or A-ComVar, designs. We develop a numerical approach to finding A-ComVar designs that is much more efficient than an exhaustive search. We present the types of A-ComVar designs that can be found for different numbers of factors, runs, and interactions. We further demonstrate the competitive performance of both common variance and A-ComVar designs through several comparisons with other popular designs in the literature.
statistics
We study vortex penetration into two-layer structures of superconducting plates under a perpendicular magnetic field. We solve the heat-transport equation and the Maxwell equations, with the current-voltage relation for a superconductor, simultaneously, and obtain the magnetic flux and current densities. We show how the magnetic flux structure depends on the geometry, especially the distance between the two superconducting layers.
condensed matter
We study the details of preheating in Palatini Higgs inflation. We show that, contrary to what happens in the metric formulation of the model, the Universe does not reheat through the creation of gauge bosons only, but also through the tachyonic production of Higgs excitations. The latter entropy-production channel turns out to be very efficient and leads to an almost instantaneous onset of radiation domination after the end of inflation. Compared to the metric case, this reduces the number of e-folds needed to solve the usual hot big bang problems, while leading to a smaller spectral index for the primordial spectrum of density perturbations.
high energy physics phenomenology
Multicomponent T2-mapping using a gradient and spin-echo (GraSE) acquisition has become standard for myelin water imaging at 3T. Higher magnetic field strengths promise SNR benefits but face specific absorption rate limits and shortened T2 times. This study investigates compartmental T2 times in vivo and addresses advantages and challenges of multi-component T2-mapping at 7T. We acquired 3D multi-echo GraSE data in seven healthy adults at 7T, with three subjects also scanned at 3T. Stimulated echoes arising from B1+ inhomogeneities were accounted for by the extended phase graph (EPG) algorithm. We used the computed T2 distributions to determine T2 times that identify different water pools and assessed signal-to-noise and fit-to-noise characteristics of the signal estimation. We compared short T2 fractions and T2 properties of the intermediate water pool at 3T and 7T. Flip angle mapping confirmed that EPG accurately determined the larger inhomogeneity at 7T. Multi-component T2 analysis demonstrated shortened T2 times at 7T compared to 3T. Fit-to-noise and signal-to-noise ratios were improved at 7T but depended on B1 homogeneity. Lowering the shortest T2 to 8 ms and adjusting the T2 threshold that separates different water compartments to 20 ms yielded short T2 fractions at 7T that conformed to 3T data. Short T2 fractions in myelin-rich white matter regions were lower at 7T than at 3T, and higher in iron-rich structures. Adjusting the T2 compartment boundaries was required due to the shorter T2 relaxation times at 7T. Shorter echo spacing would better sample the fast-decaying signal but would increase peripheral nerve stimulation. We used a multi-echo 3D-GraSE sequence to characterize the multi-exponential T2 decay at 7T. We adapted T2 parameters for evaluation of the short T2 fraction. The obtained 7T multicomponent T2-maps were in good agreement with 3T data.
physics
We present results from \textit{Chandra} X-ray observations and 325 MHz Giant Metrewave Radio Telescope (GMRT) observations of the massive and X-ray luminous cluster of galaxies Abell S1063. We report the detection of large-scale "excess brightness" in the residual \textit{Chandra} X-ray surface brightness map, which extends at least 2.7 Mpc towards the north-east from the center of the cluster. We also present a high-fidelity X-ray flux and temperature map based on 122 ksec of \textit{Chandra} archival data, which shows the disturbed morphology of the cluster. The residual flux map provides the first observational confirmation of the merging axis proposed by the earlier simulation of \citet{Gomez2012AJ....144...79G}. The average temperature within $R_{500}$ is $11.7 \pm 0.56$ keV, which makes AS1063 one of the hottest clusters in the nearby Universe. The integrated radio flux density at 325 MHz is found to be $62.0\pm6.3$ mJy. The integrated spectrum of the radio halo follows a power-law with a spectral index $\alpha=-1.43\pm 0.13$. The radio halo is found to be significantly under-luminous, which favors both the hadronic and the turbulent re-acceleration mechanisms for its origin.
astrophysics
The prisoner's dilemma game is the best-known contribution of game theory to the social sciences. Here we describe new implications of this game for transactional and transformative leadership. While autocratic (Stackelberg) leadership is inefficient for this game, we discuss a Pareto-optimal scenario in which the leader L commits to react probabilistically to pure strategies of the follower F, who is free to make the first move. By offering F a resolution of the dilemma, L is able to secure a larger average pay-off. The exploitation can be stabilized via repeated interaction of L and F, and turns out to be more stable than the egalitarian regime, where the pay-offs of L and F are equal. The total (summary) pay-off of the exploiting regime is never larger than in the egalitarian case. We discuss applications of this solution to a soft method of fighting corruption and to modeling Machiavellian leadership. Whenever the defection benefit is large, the optimal strategies of F are mixed, while the summary pay-off is maximal. One mechanism for sustaining this solution is that L recognizes the intentions of F.
physics
We propose an orthogonal series density estimator for complex surveys, where samples are neither independent nor identically distributed. The proposed estimator is proved to be design-unbiased and asymptotically design-consistent. Asymptotic normality is proved in both the design and combined spaces. Two data-driven estimators are proposed based on the proposed oracle estimator. We show the efficiency of the proposed estimators in simulation studies, and a real survey data example is provided for illustration.
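As a hedged illustration of the estimator family named above (not the paper's exact construction), here is a numpy sketch of a design-weighted orthogonal series density estimator on [0, 1] using the cosine basis, with survey weights standing in for inverse inclusion probabilities; the truncation point J would in practice be chosen by a data-driven rule such as those the paper proposes.

```python
import numpy as np

def cosine_basis(j, x):
    """Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_j = sqrt(2) cos(j pi x)."""
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(j * np.pi * x)

def series_density(sample, weights, grid, J=10):
    """Weighted orthogonal series density estimate evaluated on a grid.

    sample  : observed values rescaled to [0, 1]
    weights : survey weights (e.g., inverse inclusion probabilities)
    grid    : points at which the density is evaluated
    J       : truncation point of the series (assumed fixed here)
    """
    w = np.asarray(weights, float) / np.sum(weights)
    # Weighted coefficient estimates theta_j = sum_i w_i * phi_j(x_i)
    theta = np.array([np.sum(w * cosine_basis(j, sample)) for j in range(J + 1)])
    est = sum(theta[j] * cosine_basis(j, grid) for j in range(J + 1))
    return np.clip(est, 0.0, None)   # truncate negative wiggles

# Toy usage with unequal weights
rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=800)
wts = rng.uniform(0.5, 2.0, size=800)
grid = np.linspace(0, 1, 101)
f_hat = series_density(x, wts, grid, J=8)
print(grid[np.argmax(f_hat)])  # rough mode location
```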
statistics
Computational complexity is a new quantum information concept that may play an important role in holography and in understanding the physics of the black hole interior. We consider quantum computational complexity for $n$ qubits using Nielsen's geometrical approach. We investigate a choice of penalties which, compared to previous definitions, increases in a more progressive way with the number of qubits simultaneously entangled by a given operation. This choice turns out to be free from singularities. We also analyze the relation between operator and state complexities, framing the discussion with the language of Riemannian submersions. This provides a direct relation between geodesics and curvatures in the unitaries and the states spaces, which we also exploit to give a closed-form expression for the metric on the states in terms of the one for the operators. Finally, we study conjugate points for a large number of qubits in the unitary space and we provide a strong indication that maximal complexity scales exponentially with the number of qubits in a certain regime of the penalties space.
high energy physics theory
Recent work in machine learning shows that deep neural networks can be used to solve a wide variety of inverse problems arising in computational imaging. We explore the central prevailing themes of this emerging area and present a taxonomy that can be used to categorize different problems and reconstruction methods. Our taxonomy is organized along two central axes: (1) whether or not a forward model is known and to what extent it is used in training and testing, and (2) whether or not the learning is supervised or unsupervised, i.e., whether or not the training relies on access to matched ground truth image and measurement pairs. We also discuss the trade-offs associated with these different reconstruction approaches, caveats and common failure modes, plus open problems and avenues for future work.
electrical engineering and systems science
The novel coronavirus disease (COVID-19) has become a global pandemic and has spread to most countries and regions of the world. Understanding the development trend of a regional epidemic allows it to be controlled through appropriate policy. Traditional mathematical differential equations and population prediction models have limitations for time-series population prediction and can suffer from large estimation errors. To address this issue, we propose an improved method for predicting confirmed cases based on an LSTM (Long Short-Term Memory) neural network. This work compares the deviation between the experimental results of the improved LSTM prediction model and those of digital prediction models (such as the Logistic and Hill equations), with the real data as reference, and uses goodness of fit to evaluate the improvement. Experiments show that the proposed approach has a smaller prediction deviation and a better fitting effect. Compared with previous forecasting methods, the contributions of our improvements are mainly the following: 1) we fully consider the spatiotemporal characteristics of the data, rather than a single standardized series; 2) the improved parameter settings and evaluation indicators are more accurate for fitting and forecasting; 3) we consider the impact of the epidemic stage and conduct reasonable data processing for the different stages.
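A minimal PyTorch sketch of the kind of LSTM forecaster the abstract describes, assuming a univariate series of daily confirmed cases turned into sliding windows; the window length, layer sizes, and training schedule are illustrative guesses, not the authors' settings.

```python
import torch
import torch.nn as nn

class CaseLSTM(nn.Module):
    """One-step-ahead forecaster for a univariate case-count series."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # predict the next value

def make_windows(series, window=7):
    """Turn a 1-D series into (window, next-value) training pairs."""
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:].unsqueeze(-1)
    return xs.unsqueeze(-1), ys

# Toy series standing in for normalized cumulative case counts
series = torch.linspace(0, 1, 120) ** 2
x, y = make_windows(series)
model = CaseLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))  # training error after fitting
```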
physics
We calculate the electromagnetic self-force of a uniformly charged spherical ball moving on a rectilinear trajectory, neglecting the Lorentz contraction.
physics
Analytical expressions for calculating the energy density and spatial correlation function of thermal emission by a homogeneous, isothermal sphere of arbitrary size and material are presented. The spectral distribution and the power law governing the distance-dependent energy density are investigated in the near-field and far-field regimes for silicon carbide, silicon and tungsten spheres of various size parameters ranging from X = 0.002 to 5. The spatial coherence of the thermal field emitted by spheres is also studied in both radial and polar directions. The energy density follows a power law of d^-2 (d is the observation distance) in the far field for all sizes and materials. The power law in the near field is strongly dependent on the material, size parameter, and the ratio d/a (a is the sphere radius). In the near field, the energy density follows a power law of d^-6 when X<<1 and d/a>>1 (similar to an electric point dipole). With increasing X or decreasing d/a, the contribution of multipoles to the energy density increases, resulting in an increase in the power of d until the power law converges to that for a semi-infinite medium. The spatial correlation length in the radial direction is on the order of $\lambda$, 0.1$\lambda$, and 0.001$\lambda$ in the far field, intermediate near field, and extreme near field, respectively. The correlation angle in the extreme near field is strongly dependent on the sphere size parameter, such that it decreases by three orders of magnitude (from 0.5$\pi$ to 0.001$\pi$) when X increases from 0.002 to 5. In the intermediate near field and far field, the correlation angle retains the same order of magnitude (0.15$\pi$ - 0.7$\pi$) for all considered Xs. While the excitation of dipolar localized surface phonons (LSPhs) does not affect the correlation length and angle, the multipolar LSPhs reduce the spatial coherence in both directions.
physics
The nonlinear and ill-posed nature of full waveform inversion (FWI) requires sophisticated regularization techniques to solve it. In most applications, the model parameters may be described by physical properties (e.g., wave speeds, density, attenuation, anisotropic parameters) which are piecewise functions of space. Compound regularizations are thus necessary to reconstruct such parameters properly by FWI. We consider different implementations of compound regularization in the wavefield reconstruction inversion (WRI) method, a formulation of FWI that extends its search space and prevents the so-called cycle-skipping pathology. Our hybrid regularizations rely on Tikhonov and total variation (TV) functionals, from which we build two classes of hybrid regularizers: the first class is simply obtained by a convex combination (CC) of the two functionals, while the second relies on their infimal convolution (IC). In the former class, the model of parameters is required to simultaneously satisfy different priors, while in the latter the model is broken into its basic components, each satisfying a distinct prior (e.g., smooth, piecewise constant, piecewise linear). We implement these types of compound regularization in the WRI optimization problem using the alternating direction method of multipliers (ADMM). Then, we assess our regularized WRI in the framework of seismic imaging applications. Using a wide range of subsurface models, we conclude that the compound regularizer based on IC leads to the lowest error in the parameter reconstruction, compared to that obtained with the CC counterpart and with the Tikhonov and TV regularizers used independently.
mathematics
We disclose the effects of the Lifshitz dynamical exponent $z$ on the properties of the holographic paramagnetic-ferromagnetic phase transition in the background of 4D and 5D Lifshitz spacetimes. To preserve conformal invariance in higher dimensions, we consider Power-Maxwell (PM) electrodynamics as our gauge field. We introduce a massive $2$-form coupled to the PM field and perform the numerical shooting method in the probe limit, assuming the PM and $2$-form fields do not back-react on the background geometry. The results indicate that the critical temperature decreases with increasing strength of the power parameter $q$ and dynamical exponent $z$. Besides, the formation of the magnetic moment in the black hole background is harder in the absence of an external magnetic field. At low temperatures, when there is no external magnetic field, our results show spontaneous magnetization and a ferromagnetic phase transition. We find that the critical exponent has its universal value of $\beta = 1/2$ regardless of the parameters $q$ and $z$ as well as the dimension $d$, which is in agreement with the result of mean-field theory. In the presence of an external magnetic field, the magnetic susceptibility satisfies the Curie-Weiss law.
high energy physics theory
We study the existence and characterization of self-trapping phenomena in discrete-time quantum walks. By considering a Kerr-like nonlinearity, we associate an acquisition of an intensity-dependent phase with the walker as it propagates on the lattice. Adjusting the nonlinear parameter ($\chi$) and the quantum gates ($\theta$), we show the existence of different quantum walking regimes, including those with travelling soliton-like structures and those localized by self-trapping. The latter scenario is absent for quantum gates close enough to Pauli-X; it appears for intermediate configurations and becomes predominant as the quantum gates get closer to Pauli-Z. Using $\chi$ versus $\theta$ diagrams, we show that the threshold between delocalized and localized quantum-walk regimes exhibits an unusual aspect, in which an increment in the nonlinear strength can drive the system from a localized to a delocalized regime.
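To make the protocol concrete, here is a small numpy sketch of a discrete-time quantum walk with a Kerr-like intensity-dependent phase, under the common convention of a coin rotation followed by a conditional shift; the specific gate parametrization and phase convention are assumptions that may differ from the paper's.

```python
import numpy as np

def nonlinear_qw(steps=100, n_sites=201, chi=np.pi, theta=np.pi / 4):
    """Discrete-time quantum walk with a Kerr-like self-phase at each site."""
    psi = np.zeros((n_sites, 2), dtype=complex)
    psi[n_sites // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric start
    # theta -> pi/2 approaches Pauli-X; theta -> 0 approaches Pauli-Z
    coin = np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])
    for _ in range(steps):
        # Kerr-like nonlinearity: phase proportional to the local intensity
        psi *= np.exp(1j * chi * np.abs(psi) ** 2)
        psi = psi @ coin.T                    # apply the coin (quantum gate)
        psi[:, 0] = np.roll(psi[:, 0], -1)    # left-moving component
        psi[:, 1] = np.roll(psi[:, 1], +1)    # right-moving component
    return np.sum(np.abs(psi) ** 2, axis=1)   # position distribution

# A large probability remaining at the origin signals self-trapping
prob = nonlinear_qw()
print(prob[201 // 2])
```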
quantum physics
We construct novel web diagrams with a trivalent or quadrivalent gluing for various 6d/5d theories from certain Higgsings of 6d conformal matter theories on a circle. The theories realized on the web diagrams include 5d Kaluza-Klein theories from circle compactifications of the 6d $G_2$ gauge theory with 4 flavors, the 6d $F_4$ gauge theory with 3 flavors, the 6d $E_6$ gauge theory with 4 flavors and the 6d $E_7$ gauge theory with 3 flavors. The Higgsings also give rise to 5d Kaluza-Klein theories from twisted compactifications of 6d theories including the 5d pure SU(3) gauge theory with the Chern-Simons level 9 and the 5d pure SU(4) gauge theory with the Chern-Simons level 8. We also compute the Nekrasov partition functions of the theories by applying the topological vertex formalism to the newly obtained web diagrams.
high energy physics theory
We present the Generalized Polarization CALibration pipeline (GPCAL), an automated pipeline for instrumental polarization calibration of very long baseline interferometry (VLBI) data. The pipeline is designed to achieve a high calibration accuracy by fitting the instrumental polarization model, including second-order terms, to data from multiple calibrators simultaneously. It also allows the use of more accurate linear polarization models of calibrators for D-term estimation, compared to the conventional approach of assuming similar linear polarization and total intensity structures. This assumption is widely used in existing packages for instrumental polarization calibration but can be a source of significant uncertainty when no suitable calibrator satisfies it. We demonstrate the capabilities of GPCAL using simulated data, archival Very Long Baseline Array (VLBA) data of many active galactic nuclei (AGN) jets at 15 and 43 GHz, and our Korean VLBI Network (KVN) observations of many AGN jets at 86, 95, 130, and 142 GHz. The pipeline reproduces the complex linear polarization structures of several sources shown in previous studies using the same VLBA data. GPCAL also reveals a complex linear polarization structure in the flat-spectrum radio quasar 3C 273 in the KVN data at all four frequencies. These results demonstrate that GPCAL can achieve a high calibration accuracy for various VLBI arrays.
astrophysics
In the summer of 2017, the National Basketball Association reduced the number of total timeouts, along with other rule changes, to regulate the flow of the game. With these rule changes, it becomes increasingly important for coaches to manage their timeouts effectively. Understanding the utility of a timeout under various game scenarios, e.g., during an opposing team's run, is of the utmost importance. There are two schools of thought when the opposition is on a run: (1) call a timeout and allow your team to rest and regroup, or (2) save a timeout and hope your team can make corrections during play. This paper investigates the credence of these tenets using the Rubin causal model framework to quantify the causal effect of a timeout in the presence of an opposing team's run. We carefully consider the too-often-overlooked stable unit treatment value assumption (SUTVA) in this context and use SUTVA to motivate our definition of units. To measure the effect of a timeout, we introduce a novel, interpretable outcome based on the score difference to describe broad changes in the scoring dynamics. This outcome is well-suited for situations where the quantity of interest fluctuates frequently, a commonality in many sports analytics applications. We conclude from our analysis that while comebacks frequently occur after a run, it is slightly disadvantageous to call a timeout during a run by the opposing team, and we further demonstrate that the magnitude of this effect varies by franchise.
statistics
Artificial intelligence (AI) products can be trained to recognize tuberculosis (TB)-related abnormalities on chest radiographs. Various AI products are available commercially, yet there is a lack of evidence on how their performance compares with each other and with radiologists. We evaluated five AI software products for screening and triaging TB using a large dataset that had not been used to train any commercial AI products. Individuals (>=15 years old) presenting to three TB screening centers in Dhaka, Bangladesh, were recruited consecutively. All CXRs were read independently by a group of three Bangladeshi registered radiologists and five commercial AI products: CAD4TB (v7), InferReadDR (v2), Lunit INSIGHT CXR (v4.9.0), JF CXR-1 (v2), and qXR (v3). All five AI products significantly outperformed the Bangladeshi radiologists. The areas under the receiver operating characteristic curve were qXR: 90.81% (95% CI: 90.33-91.29%), CAD4TB: 90.34% (95% CI: 89.81-90.87%), Lunit INSIGHT CXR: 88.61% (95% CI: 88.03-89.20%), InferReadDR: 84.90% (95% CI: 84.27-85.54%) and JF CXR-1: 84.89% (95% CI: 84.26-85.53%). Only qXR met the TPP with 74.3% specificity at 90% sensitivity. The five AI algorithms can reduce the number of Xpert tests required by 50% while maintaining a sensitivity above 90%. All AI algorithms performed worse among people of older age and those with prior TB history. AI products can be highly accurate and useful screening and triage tools for TB detection in high-burden regions, and they outperform human readers.
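For readers wanting to reproduce this style of evaluation on their own labeled scores, a scikit-learn sketch follows: it computes the ROC AUC and the specificity achieved at 90% sensitivity, the operating point used in the comparison above. The score arrays here are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def specificity_at_sensitivity(y_true, y_score, target_sens=0.90):
    """Specificity at the first ROC point reaching the target sensitivity."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    idx = np.argmax(tpr >= target_sens)   # first index with tpr >= target
    return 1.0 - fpr[idx]

# Placeholder data: 1 = confirmed TB, scores in [0, 1] from a hypothetical reader
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * y + rng.normal(0.3, 0.2, size=1000), 0, 1)

print("AUC:", roc_auc_score(y, scores))
print("Specificity @ 90% sensitivity:", specificity_at_sensitivity(y, scores))
```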
electrical engineering and systems science
We present a three dimensional, time dependent model for bone regeneration in the presence of porous scaffolds to bridge critical size bone defects. Our approach uses homogenized quantities, thus drastically reducing computational cost compared to models resolving the microstructural scale of the scaffold. Using abstract functional relationships instead of concrete effective material properties, our model can incorporate the homogenized material tensors for a large class of scaffold microstructure designs. We prove an existence and uniqueness theorem for solutions based on a fixed point argument. We include the cases of mixed boundary conditions and multiple, interacting signalling molecules, both being important for application. Furthermore we present numerical simulations showing good agreement with experimental findings.
mathematics
We study the right-angled Artin group action on the extension graph. We show that this action satisfies a certain finiteness property, which is a variation of a condition introduced by Delzant and Bowditch. As an application we show that the asymptotic translation lengths of elements of a given right-angled Artin group are always rational and once the defining graph has girth at least 6, they have a common denominator. We construct explicit examples which show the denominator of the asymptotic translation length of such an action can be arbitrary. We also observe that if either an element has a small syllable length or the defining graph for the right-angled Artin group is a tree then the asymptotic translation lengths are integers.
mathematics
We compute $S$-wave quarkonium wavefunctions at the origin in the $\overline{\rm MS}$ scheme based on nonrelativistic effective field theories. We include the effects of nonperturbative long-distance behaviors of the potentials, while we determine the short-distance behaviors of the potentials in perturbative QCD. We obtain $\overline{\rm MS}$-renormalized quarkonium wavefunctions at the origin that have the correct scale dependences expected from perturbative QCD, so that the scale dependences cancel in physical quantities. Based on the calculation of the wavefunctions at the origin, we make model-independent predictions of decay constants and electromagnetic decay rates of $S$-wave charmonia and bottomonia, and compare them with measurements. We find that the poor convergence of perturbative QCD corrections is substantially improved when we include corrections to the wavefunctions at the origin in the calculation of decay constants and decay rates.
high energy physics phenomenology
Ultra-light dark matter (ULDM) is currently one of the most popular classes of cosmological dark matter. The most important advantage is that ULDM with mass $m \sim 10^{-22}$ eV can account for the small-scale problems encountered in the standard cold dark matter (CDM) model like the core-cusp problem, missing satellite problem and the too-big-to-fail problem in galaxies. In this article, we formulate a new simple model-independent analysis using the SPARC data to constrain the range of ULDM mass. In particular, the most stringent constraint comes from the data of a galaxy ESO563-G021, which can conservatively exclude a ULDM mass range $m=(0.14-3.11)\times 10^{-22}$ eV. This model-independent excluded range is consistent with many bounds obtained by recent studies and it suggests that the ULDM proposal may not be able to alleviate the small-scale problems.
astrophysics
We propose a simple grand unified theory (GUT) scenario in which supersymmetry (SUSY) is spontaneously broken in the visible sector. Our model is based on a GUT model that has been proposed to solve almost all problems of conventional GUT scenarios. In previous work, these problems were solved by a natural assumption in a supersymmetric vacuum. In this paper, we consider an extension of the model (i.e., omitting one singlet field) and break SUSY spontaneously without a new sector. Our model does not have a hidden sector and predicts high-scale SUSY, where sfermion masses are of order 100-1000 TeV and flavor-violating processes are suppressed. In this scenario, we can see an explicit signature of the GUT in the sfermion mass spectrum, since it respects SU(5) matter unification. In addition, we find a superheavy long-lived charged lepton as a proof of our scenario, which may be seen at the LHC.
high energy physics phenomenology
Automatic cell segmentation is an essential step in the pipeline of computer-aided diagnosis (CAD), such as the detection and grading of breast cancer. Accurate segmentation of cells can not only assist pathologists in making a more precise diagnosis, but also save much time and labor. However, this task suffers from stain variation, inhomogeneous cell intensities, background clutter, and cells from different tissues. To address these issues, we propose an Attention Enforced Network (AENet), built on a spatial attention module and a channel attention module, to integrate local features with global dependencies and weight effective channels adaptively. Besides, we introduce a feature fusion branch to bridge high-level and low-level features. Finally, the marker-controlled watershed algorithm is applied to post-process the predicted segmentation maps to reduce fragmented regions. In the test stage, we present an individual color normalization method to deal with the stain variation problem. We evaluate this model on the MoNuSeg dataset. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
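The abstract names spatial and channel attention modules without giving their details; the following PyTorch sketch shows one standard formulation of each (squeeze-and-excitation-style channel attention and a position-wise spatial gate) purely as an illustration, not AENet's actual layers.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight channels using globally pooled statistics (SE-style sketch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))        # (B, C) channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Gate each spatial position using a 1x1 conv over the feature map."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)                 # broadcast (B, 1, H, W) gate

# Toy usage on a random feature map
feats = torch.randn(2, 64, 32, 32)
out = SpatialAttention(64)(ChannelAttention(64)(feats))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```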
electrical engineering and systems science
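The spatial attention in the AENet abstract above follows a standard position-attention pattern: pixel-to-pixel affinities reweight features so that each location can attend to the whole image. Below is a minimal PyTorch sketch of such a block; it is a generic illustration, not the authors' AENet code, and all module and parameter names are our own.

```python
# Minimal spatial (position) attention block in PyTorch.
# Generic sketch of the kind of module AENet builds on; not the authors' code.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, in_ch, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(in_ch, in_ch // reduction, 1)
        self.key = nn.Conv2d(in_ch, in_ch // reduction, 1)
        self.value = nn.Conv2d(in_ch, in_ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                     # (b, c', hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw) pixel affinities
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

feat = torch.randn(2, 64, 32, 32)
print(SpatialAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```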
We derive the partition function of an $\mathcal N=2$ chiral multiplet on topologically twisted $H^2\times S^1$. The chiral multiplet is coupled to a background vector multiplet encoding a real mass deformation. We consider an $H^2\times S^1$ metric containing two parameters: one is the $S^1$ radius, while the other gives a fugacity $q$ for the angular momentum on $H^2$. The computation is carried out by means of supersymmetric localization, which provides a finite answer written in terms of $q$-Pochhammer symbols and multiple zeta functions. In particular, the partition function of normalizable fields reproduces three-dimensional holomorphic blocks.
high energy physics theory
We first study the thermodynamics of the Bardeen-AdS black hole through the $T$-$r_{h}$ diagram, where $T$ is the Hawking temperature and $r_{h}$ is the radius of the event horizon. The cut-off radius, the minimal radius of a thermodynamically stable Bardeen black hole, can be obtained, and it agrees with the result of the heat capacity analysis. Moreover, by studying the parameter $g$ of the Bardeen black hole, which is interpreted as a gravitationally collapsed magnetic monopole arising in a specific form of non-linear electrodynamics, we find a critical value $g_{m}$ and different phenomena for different values of the parameter $g$. For $g>g_{m}$, there is no second-order phase transition. We also study the thermodynamic stability of the Bardeen black hole through the Gibbs free energy and the heat capacity. In addition, the phase transition is discussed.
high energy physics theory
If, during the early-Universe epoch, the dark matter particle thermalizes in a hidden sector that does not thermalize with the Standard Model thermal bath, its relativistic thermal decoupling can easily lead to the observed relic density, even if the dark matter particle mass is many orders of magnitude heavier than the usual $\sim$ eV hot relic mass scale. This straightforward scenario simply requires that the temperature of the hidden sector thermal bath be one to five orders of magnitude cooler than the temperature of the Standard Model thermal bath. In this way the resulting relic density turns out to be determined only by the dark matter mass scale and the ratio of the temperatures of the two sectors. In a model-independent way we determine that this can work for a dark matter mass all the way from $\sim 1$ keV to $\sim 30$ PeV. We also show how this scenario works explicitly in the framework of two illustrative models. One of them can lead to a PeV neutrino flux from dark matter decay of the order of that needed to account for the high-energy neutrinos observed by IceCube.
high energy physics phenomenology
We propose a roadmap for bootstrapping conformal field theories (CFTs) described by gauge theories in dimensions $d>2$. In particular, we provide a simple and workable answer to the question of how to detect the gauge group in the bootstrap calculation. Our recipe is based on the notion of \emph{decoupling operator}, which has a simple (gauge) group theoretical origin, and is reminiscent of the null operator of $2d$ Wess-Zumino-Witten CFTs in higher dimensions. Using the decoupling operator we can efficiently detect the rank (i.e. color number) of gauge groups, e.g., by imposing gap conditions in the CFT spectrum. We also discuss the physics of the equation of motion, which has interesting consequences in the CFT spectrum as well. As an application of our recipes, we study a prototypical critical gauge theory, namely the scalar QED which has a $U(1)$ gauge field interacting with critical bosons. We show that the scalar QED can be solved by conformal bootstrap, namely we have obtained its kinks and islands in both $d=3$ and $d=2+\epsilon$ dimensions.
high energy physics theory
The compact size and high wavelength-selectivity of microring resonators (MRs) enable photonic networks-on-chip (PNoCs) to utilize dense-wavelength-division-multiplexing (DWDM) in their photonic waveguides, and as a result, attain high bandwidth on-chip data transfers. Unfortunately, a Hardware Trojan in a PNoC can manipulate the electrical driving circuit of its MRs to cause the MRs to snoop data from the neighboring wavelength channels in a shared photonic waveguide, which introduces a serious security threat. This paper presents a framework that utilizes process variation-based authentication signatures along with architecture-level enhancements to protect against data-snooping Hardware Trojans during unicast as well as multicast transfers in PNoCs. Evaluation results indicate that our framework can improve hardware security across various PNoC architectures with minimal overheads of up to 14.2% in average latency and of up to 14.6% in energy-delay-product (EDP).
computer science
Active Traffic Management strategies are often adopted in real time to address sudden flow breakdowns. When queuing is imminent, Speed Harmonization (SH), which adjusts speeds in upstream traffic to mitigate traffic shockwaves downstream, can be applied. However, because SH depends on driver awareness and compliance, it may not always be effective in mitigating congestion. The use of multi-agent reinforcement learning (RL) for collaborative learning is a promising solution to this challenge. By incorporating this technique in the control algorithms of connected and autonomous vehicles (CAVs), it may be possible to train the CAVs to make joint decisions that mitigate highway bottleneck congestion without relying on human driver compliance with altered speed limits. In this regard, we present an RL-based multi-agent CAV control model that operates in mixed traffic (both CAVs and human-driven vehicles (HDVs)). The results suggest that even at a CAV share of corridor traffic as low as 10%, CAVs can significantly mitigate bottlenecks in highway traffic. Another objective was to assess the efficacy of the RL-based controller vis-\`a-vis a rule-based controller. In addressing this objective, we duly recognize that one of the main challenges of RL-based CAV controllers is the variety and complexity of inputs that exist in the real world, such as the information provided to the CAV by other connected entities and sensed information. These translate into dynamic-length inputs, which are difficult to process and learn from. For this reason, we propose the use of Graph Convolutional Networks (GCNs) to preserve the information network topology and handle the corresponding dynamic-length inputs. We combine this with Deep Deterministic Policy Gradient (DDPG) to carry out multi-agent training for congestion mitigation using the CAV controllers.
computer science
A handedness in the arrival directions of high-energy photons from outside our Galaxy can be related to the helicity of an intergalactic magnetic field. Previous estimates by arXiv:1310.4826 and arXiv:1412.3171 showed a hint of a signal present in the photons observed by the Fermi Large Area Telescope (LAT). An update on the measurement of handedness in Fermi-LAT data is presented using more than 10 years of observations. Simulations are performed to study the uncertainty of the measurements, taking into account the structure of the exposure caused by the energy-dependent instrument response and its observing profile, as well as the background from the interstellar medium. The simulations are required to estimate the uncertainty accurately, and they show that the uncertainty was previously significantly underestimated. The apparent signal in the earlier analysis of Fermi-LAT data is rendered non-significant.
astrophysics
Multiple imputation is a well-established general technique for analyzing data with missing values. A convenient way to implement multiple imputation is sequential regression multiple imputation (SRMI), also called chained-equations multiple imputation. In this approach, we impute missing values using regression models for each variable, conditional on the other variables in the data. This approach, however, assumes that data are missing at random, and it is not well-justified under missingness that is not at random without additional modification. In this paper, we describe how the SRMI imputation procedure can be generalized to handle missing-not-at-random (MNAR) mechanisms in the setting where missingness may depend on other variables that are also missing. We provide algebraic justification for several generalizations of standard SRMI using Taylor series and other approximations of the target imputation distribution under MNAR. The resulting regression-model approximations include indicators for missingness, interactions, or other functions of the MNAR missingness model and observed data. In a simulation study, we demonstrate that the proposed SRMI modifications reduce bias in the final analysis compared to standard SRMI, with an approximation strategy that includes an offset in the imputation model performing best overall. The method is illustrated in a breast cancer study, where the goal is to estimate the prevalence of a specific genetic pathogenic variant.
statistics
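As a concrete illustration of the SRMI idea above, here is a minimal sketch of a chained-equations loop in which the imputation model for a MNAR variable is augmented with a missingness indicator, one of the model modifications the abstract mentions. The data-generating setup and model choices are illustrative assumptions, not the authors' simulation design.

```python
# One-variable chained-equations imputation with a missingness-indicator
# augmentation for MNAR data. Illustrative sketch, not the authors' algorithm.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
z = 0.5 * x - 1.0 * y + rng.normal(size=n)
miss_y = rng.random(n) < 1 / (1 + np.exp(-(y - 1)))   # MNAR: depends on y itself
y_obs = np.where(miss_y, np.nan, y)

y_imp = np.where(miss_y, np.nanmean(y_obs), y_obs)    # crude initialization
for _ in range(10):                                   # chained-equation sweeps
    # Regress y on (x, z, missingness indicator); the indicator is one of the
    # MNAR approximations the paper studies (offsets/interactions are variants).
    design = np.column_stack([x, z, miss_y.astype(float)])
    model = LinearRegression().fit(design, y_imp)
    pred = model.predict(design)
    sigma = np.std(y_imp - pred)
    y_imp = np.where(miss_y, pred + rng.normal(scale=sigma, size=n), y_obs)

print("true mean of y:", y.mean(), " imputed-data mean:", y_imp.mean())
```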
Since discrete symmetries are preferred for explaining the neutrino phenomenology, we choose the simplest such group, $S_3$, and explore the implications of its modular form on neutrino masses and mixing. Non-trivial transformations of the Yukawa couplings under this symmetry make the model phenomenologically interesting by reducing the need for multiple scalar fields. The symmetry imposes a specific flavor structure on the neutrino mass matrix within the framework of the less frequently used type III seesaw mechanism and helps to explore neutrino mixing consistent with current observations. In addition, we explain the preferred scenario of leptogenesis to account for the baryon asymmetry of the universe, generating the lepton asymmetry from the decay of a heavy fermion triplet at the TeV scale.
high energy physics phenomenology
Network meta-analysis (NMA) is a statistical technique for the comparison of treatment options. The nodes of the network are the competing treatments and edges represent comparisons of treatments in trials. Outcomes of Bayesian NMA include estimates of treatment effects, and the probabilities that each treatment is ranked best, second best and so on. How exactly network geometry affects the accuracy and precision of these outcomes is not fully understood. Here we carry out a simulation study and find that disparity in the number of trials involving different treatments leads to a systematic bias in estimated rank probabilities. This bias is associated with an increased variation in the precision of treatment effect estimates. Using ideas from the theory of complex networks, we define a measure of `degree irregularity' to quantify asymmetry in the number of studies involving each treatment. Our simulations indicate that more regular networks have more precise treatment effect estimates and smaller bias of rank probabilities. We also find that degree regularity is a better indicator of NMA quality than both the total number of studies in a network and the disparity in the number of trials per comparison. These results have implications for planning future trials. We demonstrate that choosing trials which reduce the network's irregularity can improve the precision and accuracy of NMA outcomes.
statistics
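The paper above defines its own "degree irregularity" measure; as a rough illustration of the idea, the sketch below quantifies the spread of per-treatment trial counts in a toy evidence network with a coefficient-of-variation-style statistic. The exact formula used in the paper may differ.

```python
# Quantify how unevenly trials are spread over treatments in an evidence
# network. The coefficient-of-variation measure below is an illustrative
# stand-in for the paper's "degree irregularity", not their exact formula.
import numpy as np

trials = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
treatments = sorted({t for pair in trials for t in pair})
degree = {t: sum(t in pair for pair in trials) for t in treatments}
deg = np.array([degree[t] for t in treatments], dtype=float)

irregularity = deg.std() / deg.mean()   # 0 for a perfectly regular network
print(degree, " irregularity:", round(irregularity, 3))
```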
In this article we report the acceleration observed for ions in a forward ablation geometry of a 50 nm thick nickel film coated on a glass plate under nanosecond laser ablation. A detailed study of the time-of-flight (TOF) spectra of ionic and neutral transitions from the plasma, with varying background pressure and laser energy, is undertaken. The results indicate that TOF spectra recorded for different neutral transitions exhibit different dynamical behavior (the slower peak of the neutral emission becomes faster with background pressure). A similar observation is made for the ionic species. These observations resemble some of the reported double-layer concepts; however, the electric fields calculated from the acceleration appear anomalously high. Along with the acceleration observed for the ionic and neutral lines, significant asymmetry in the spectral shape of the neutral nickel line (712.2 nm) is observed, together with spectral broadening as the background pressure increases. This large asymmetry is indicative of micro electric fields present inside the laser-produced plasma plume, which may result in a continuous acceleration of ions. Interestingly, the asymmetry in the spectral broadening exhibits temporal and spatial dependence, which indicates that the electric field persists in the plasma plume for longer durations and over a significant distance.
physics
In \cite{Covolo:2016}, \cite{Covolo:2012} and \cite{Poncin:2016}, we introduced the category of colored supermanifolds ($\mathbb{Z}_2^n$-supermanifolds, or just $\mathbb{Z}_2^n$-manifolds, where $\mathbb{Z}_2^n=\mathbb{Z}_2\times\ldots\times\mathbb{Z}_2$ ($n$ times)), explicitly described the corresponding $\mathbb{Z}_2^n$-Berezinian and gave first insights into $\mathbb{Z}_2^n$-integration theory. The present paper contains a detailed account of parts of the $\mathbb{Z}_2^n$-differential calculus and of the $\mathbb{Z}_2^n$-variants of the trilogy of local theorems, which consists of the inverse function theorem, the implicit function theorem and the constant rank theorem.
mathematics
We show that results about spaces or moduli spaces of positive scalar curvature metrics proved using index theory can typically be extended to non-negative scalar curvature metrics. We illustrate this by providing explicit generalizations of some classical results concerning moduli spaces of positive scalar curvature metrics.
mathematics
This paper is devoted to the study of affine quaternionic manifolds and to a possible classification of all compact affine quaternionic curves and surfaces. It is established that on an affine quaternionic manifold there is one and only one affine quaternionic structure. A direct result, based on the celebrated Kodaira Theorem that studies compact complex manifolds in complex dimension 2, states that the only compact affine quaternionic curves are the quaternionic tori and the primary Hopf surface $S^3 \times S^1$. As for compact affine quaternionic surfaces, we restrict to the complete ones: the study of their fundamental groups, together with the inspection of all nilpotent hypercomplex simply connected 8-dimensional Lie groups, identifies a path towards their classification.
mathematics
A common environment acting on a pair of qubits gives rise to a plethora of different phenomena, such as the generation of qubit-qubit entanglement, quantum synchronization and subradiance. Here we define time-independent figures of merit for entanglement generation, quantum synchronization and subradiance, and perform an extensive analytical and numerical study of their dependence on model parameters. We also address a recently proposed measure of the collectiveness of the dynamics driven by the bath, and find that it almost perfectly witnesses the behavior of entanglement generation. Our results show that synchronization and subradiance can be employed as reliable local signatures of an entangling common-bath in a general scenario. Finally, we propose an experimental implementation of the model based on two transmon qubits capacitively coupled to a common resistor, which provides a versatile quantum simulation platform of the open system in any regime.
quantum physics
We study gravity wave production and baryogenesis at the electroweak phase transition, in a real singlet scalar extension of the Standard Model, including vector-like top partners to generate the CP violation needed for electroweak baryogenesis (EWBG). The singlet makes the phase transition strongly first-order through its coupling to the Higgs boson, and it spontaneously breaks CP invariance through a dimension-5 contribution to the top quark mass term, generated by integrating out the heavy top quark partners. We improve on previous studies by incorporating updated transport equations, compatible with large bubble wall velocities. The wall speed and thickness are computed directly from the microphysical parameters rather than being treated as free parameters, allowing for a first-principles computation of the baryon asymmetry. The size of the CP-violating dimension-5 operator needed for EWBG is constrained by collider, electroweak precision, and renormalization group running constraints. We identify regions of parameter space that can produce the observed baryon asymmetry or observable gravitational-wave (GW) signals. Contrary to standard lore, we find that for strong deflagrations, the efficiencies of large baryon asymmetry production and strong GW signals can be positively correlated. However, we find the overall likelihood of observably large GW signals to be smaller than estimated in previous studies. In particular, only detonation-type transitions are predicted to produce observably large gravitational waves.
high energy physics phenomenology
The recent advancements in cloud services, the Internet of Things (IoT) and cellular networks have made cloud computing an attractive option for intelligent traffic signal control (ITSC). Such an approach significantly reduces the cost of cables, installation, the number of devices used, and maintenance. ITSC systems based on cloud computing are cheaper and can be scaled by utilizing existing powerful cloud platforms. While such systems have significant potential, one critical problem that must be addressed is network delay. Delay in message propagation is hard to prevent, and it can degrade the performance of the system or even create safety issues for vehicles at intersections. In this paper, we introduce a new traffic signal control algorithm based on reinforcement learning, which performs well even under severe network delay. The framework introduced in this paper can be helpful for all agent-based systems using remote computing resources where network delay could be a critical concern. Extensive simulation results obtained for different scenarios show the viability of the designed algorithm to cope with network delay.
electrical engineering and systems science
We realize surface code quantum memories for nearest-neighbor qubits with always-on Ising interactions. This is done by utilizing multi-qubit gates that mimic the functionality of several gates. Previously proposed surface code memories rely on error syndrome detection circuits based on CNOT gates. In a two-dimensional planar architecture, to realize a two-qubit CNOT gate in the presence of couplings to other neighboring qubits, the interaction of the target qubit with its three other neighbors must cancel out. Here we present a new error syndrome detection circuit utilizing multi-qubit parity gates. In addition to speeding up the error correction cycles, in our approach the depth of the error syndrome detection circuit does not grow with the number of qubits in the logical qubit layout. We analytically design the system parameters to realize new five-qubit gates suitable for error syndrome detection in a nearest-neighbor two-dimensional array of qubits. The five-qubit gates are designed such that the middle qubit is the target qubit and all four coupled neighbors are the control qubits. In our scheme, only one control parameter of the target qubit must be adjusted to realize controlled-unitary operations. The gate operations are confirmed with a fidelity of >99.9% in a simulated system consisting of nine nearest-neighbor qubits.
quantum physics
Recently, in [Phys. Rev. D $99$ $(2019)$ 104010], the non-relativistic Feynman propagator for the harmonic oscillator system was presented for the case where the generalized uncertainty principle is employed. In this short comment we show that the expression is incorrect, and we derive its correct form.
quantum physics
The Fast Blue Optical Transient (FBOT) ATLAS18qqn (AT2018cow) has a light curve as bright as those of superluminous supernovae, but it rises and falls much faster. We model this light curve by circumstellar interaction of a pulsational pair-instability (PPI) supernova (SN) model based on our PPISN models studied in previous work. We focus on the 42 $M_\odot$ He star (the core of an 80 $M_{\odot}$ star), which has circumstellar matter of mass 0.50 $M_\odot$. With the parameterized mass cut and the kinetic energy of explosion $E$, we perform hydrodynamical calculations of nucleosynthesis and optical light curves of PPISN models. The optical light curve of the first $\sim$ 20 days of AT2018cow is well reproduced by the shock heating of circumstellar matter for the $42~M_{\odot}$ He star with $E = 5 \times 10^{51}$ erg. After day 20, the light curve is reproduced by the radioactive decay of 0.6 $M_\odot$ of $^{56}$Co, a decay product of $^{56}$Ni in the explosion. We also examine how the light curve shape depends on the various model parameters, such as the CSM structure and composition. We further discuss (1) other possible energy sources and their constraints, (2) the origin of the observed high-energy radiation, and (3) how our result depends on the radiative transfer codes. Based on our successful model for AT2018cow and the model for SLSNe with CSM mass as large as $20~M_\odot$, we propose the working hypothesis that PPISNe produce SLSNe if the CSM is massive enough and FBOTs if the CSM is less than $\sim 1~M_\odot$.
astrophysics
In this paper we use radiative transfer + N-body simulations to explore the feasibility of measuring cross-correlations between the 21cm field observed by the Square Kilometre Array (SKA) and high-z Lyman Alpha Emitters (LAEs) detected in galaxy surveys with the Subaru Hyper Suprime-Cam (HSC), the Subaru Prime Focus Spectrograph (PFS) and the Wide Field Infrared Survey Telescope (WFIRST). 21cm-LAE cross-correlations are in fact a powerful probe of the epoch of reionization, as they are expected to provide precious information on the progress of reionization and the typical scale of ionized regions at different redshifts. The next generation of observations with SKA will have a noise level much lower than those with its precursor radio facilities, introducing a significant improvement in the measurement of the cross-correlations. We find that an SKA-HSC/PFS observation will allow us to investigate scales below ~10 Mpc/h and ~60 Mpc/h at z=7.3 and 6.6, respectively. WFIRST will give access also to higher redshifts, as it is expected to observe spectroscopically ~900 LAEs per square degree and unit redshift in the range 7.5<z<8.5. Because of the reduced shot noise compared to HSC and PFS, observations with WFIRST will result in more precise cross-correlations and larger observable scales.
astrophysics
Let $\mathfrak{g}_0$ be a simple Lie algebra of type ADE and let $U'_q(\mathfrak{g})$ be the corresponding untwisted quantum affine algebra. We show that there exists an action of the braid group $B(\mathfrak{g}_0)$ on the quantum Grothendieck ring $K_t(\mathfrak{g})$ of Hernandez-Leclerc's category $C_{\mathfrak{g}}^0$. Focused on the case of type $A_{N-1}$, we construct a family of monoidal autofunctors $\{\mathscr{S}_i\}_{i\in \mathbb{Z}}$ on a localization $T_N$ of the category of finite-dimensional graded modules over the quiver Hecke algebra of type $A_{\infty}$. Under an isomorphism between the Grothendieck ring $K(T_N)$ of $T_N$ and the quantum Grothendieck ring $K_t({A^{(1)}_{N-1}})$, the functors $\{\mathscr{S}_i\}_{1\le i\le N-1}$ recover the action of the braid group $B(A_{N-1})$. We investigate further properties of these functors.
mathematics
In this paper, a novel active disturbance rejection control (N-ADRC) strategy is proposed that replaces the linear extended state observer (LESO) used in conventional ADRC (C-ADRC) with a nested LESO. In the nested LESO, the inner-loop LESO actively estimates and eliminates the generalized disturbance. Increasing the bandwidth improves the estimation accuracy, but it may degrade noise tolerance and conflict with hardware limitations and the sampling frequency of the system. Therefore, an alternative is offered that does not increase the bandwidth of the inner-loop LESO, provided that the rate of change of the generalized disturbance estimation error is upper bounded. This is achieved by placing an outer-loop LESO in parallel with the inner one, which estimates and eliminates the remaining generalized disturbance that eluded the inner-loop LESO due to bandwidth limitations. The stability of the LESO and the nested LESO is investigated using Lyapunov analysis. Simulations on an uncertain nonlinear SISO system with a time-varying exogenous disturbance reveal that the proposed nested LESO can successfully deal with a generalized disturbance in both noisy and noise-free environments, and the Integral Time Absolute Error (ITAE) of the tracking error for the nested LESO is reduced by 69.87% relative to that of a single LESO.
computer science
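To fix ideas about what the inner-loop observer in the abstract above does, here is a minimal simulation of a single (non-nested) LESO estimating the generalized disturbance of a second-order plant, using the standard bandwidth parameterization for the observer gains. The plant, gains, and disturbance are illustrative assumptions; the paper's contribution is the additional outer-loop LESO, which is not implemented here.

```python
# Minimal linear extended state observer (LESO) on a second-order plant with an
# unknown disturbance, using the bandwidth parameterization for the gains.
# Sketches the inner-loop observer only; the paper nests a second LESO on top.
import numpy as np

dt, T = 1e-3, 5.0
w0 = 30.0                                 # observer bandwidth (our choice)
b1, b2, b3 = 3 * w0, 3 * w0**2, w0**3     # LESO gains

x = np.array([0.0, 0.0])                  # plant state: position, velocity
z = np.array([0.0, 0.0, 0.0])             # LESO state: pos, vel, total disturbance
b0, u = 1.0, 0.0                          # input gain and (zero) control input
log = []
for k in range(int(T / dt)):
    t = k * dt
    f = -2.0 * x[0] - 0.5 * x[1] + np.sin(2 * t)   # "generalized disturbance"
    x = x + dt * np.array([x[1], f + b0 * u])      # plant: xdd = f + b0*u
    e = x[0] - z[0]                                # output estimation error
    z = z + dt * np.array([z[1] + b1 * e,
                           z[2] + b0 * u + b2 * e,
                           b3 * e])
    log.append((f, z[2]))

f_true, f_hat = np.array(log).T
print("RMS disturbance estimation error:", np.sqrt(np.mean((f_true - f_hat)**2)))
```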
The bulk-to-boundary dictionary for 4D celestial holography is given a new entry defining 2D boundary states living on oriented circles on the celestial sphere. The states are constructed using the 2D CFT state-operator correspondence from operator insertions corresponding to either incoming or outgoing particles which cross the celestial sphere inside the circle. The BPZ construction is applied to give an inner product on such states whose associated bulk adjoints are shown to involve a shadow transform. Scattering amplitudes are then given by BPZ inner products between states living on the same circle but with opposite orientations. 2D boundary states are found to encode the same information as their 4D bulk counterparts, but organized in a radically different manner.
high energy physics theory
SU(1,1) interferometry, proposed in a classic 1986 paper by Yurke, McCall, and Klauder [Phys. Rev. A 33, 4033 (1986)], involves squeezing, displacing, and then unsqueezing two bosonic modes. It has, over the past decade, been implemented in a variety of experiments. Here I take SU(1,1) interferometry apart, to see how and why it ticks. SU(1,1) interferometry arises naturally as the two-mode version of active-squeezing-enhanced, back-action-evading measurements aimed at detecting the phase-space displacement of a harmonic oscillator subjected to a classical force. Truncating an SU(1,1) interferometer, by omitting the second two-mode squeezer, leaves a prototype that uses the entanglement of two-mode squeezing to detect and characterize a disturbance on one of the two modes from measurement statistics gathered from both modes.
quantum physics
Scanning transmission electron microscopy (STEM) combined with electron energy loss spectroscopy (EELS) has become a standard technique to map localized surface plasmon resonances with a nanometer spatial and a sufficient energy resolution over the last 15 years. However, no experimental work discussing the influence of experimental conditions during the measurement has been published up to now. We present an experimental study of the influence of the primary beam energy and the collection semi-angle on the plasmon resonances measurement by STEM-EELS. To explore the influence of these two experimental parameters we study a series of gold rods and gold bow-tie and diabolo antennas. We discuss the impact on experimental characteristics which are important for successful detection of the plasmon peak in EELS, namely: the intensity of plasmonic signal, the signal to background ratio, and the signal to zero-loss peak ratio. We show that the best results are obtained using a medium primary beam energy, in our case 120 keV, and an arbitrary collection semi-angle, as it is not a critical parameter at this primary beam energy. Our instructive overview will help microscopists in the field of plasmonics to arrange their experiments.
condensed matter
This paper addresses the problem of optimizing sensor deployment locations to reconstruct and also predict a spatiotemporal field. A novel deep learning framework is developed to find a limited number of optimal sampling locations and based on that, improve the accuracy of spatiotemporal field reconstruction and prediction. The proposed approach first optimizes the sampling locations of a wireless sensor network to retrieve maximum information from a spatiotemporal field. A spatiotemporal reconstructor is then used to reconstruct and predict the spatiotemporal field, using collected in-situ measurements. A simulation is conducted using global climate datasets from the National Oceanic and Atmospheric Administration, to implement and validate the developed methodology. The results demonstrate a significant improvement made by the proposed algorithm. Specifically, compared to traditional approaches, the proposed method provides superior performance in terms of both reconstruction error and long-term prediction robustness.
electrical engineering and systems science
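For context on the sensor-placement problem in the abstract above, the sketch below shows a classical baseline: pivoted QR on a POD/PCA basis of training snapshots selects sampling locations, and least squares reconstructs the field from the sparse measurements. This is not the authors' deep-learning method; all data and dimensions are synthetic.

```python
# Classical baseline for sensor placement + field reconstruction: pivoted QR
# on a POD (PCA) basis picks sampling locations, least squares reconstructs.
# Reference point only; the paper's method is a deep network.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
n_grid, n_snap, r = 200, 60, 8
s = np.linspace(0, 1, n_grid)[:, None]
modes = np.hstack([np.sin((k + 1) * np.pi * s) for k in range(r)])
snapshots = modes @ rng.normal(size=(r, n_snap))      # training snapshots

U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                          # POD basis
_, _, piv = qr(basis.T, pivoting=True)    # greedy, near-optimal column pivots
sensors = np.sort(piv[:r])                # chosen sampling locations

field = modes @ rng.normal(size=r)        # an unseen field to reconstruct
coef, *_ = np.linalg.lstsq(basis[sensors], field[sensors], rcond=None)
recon = basis @ coef
print("sensors:", sensors)
print("relative reconstruction error:",
      np.linalg.norm(recon - field) / np.linalg.norm(field))
```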
A quantum system driven by a weak deterministic force while under strong continuous energy measurement exhibits quantum jumps between its energy levels (Nagourney et al., 1986, Sauter et al., 1986, Bergquist et al., 1986). This celebrated phenomenon is emblematic of the special nature of randomness in quantum physics. The times at which the jumps occur are reputed to be fundamentally unpredictable. However, certain classical phenomena, like tsunamis, while unpredictable in the long term, may possess a degree of predictability in the short term, and in some cases it may be possible to prevent a disaster by detecting an advance warning signal. Can there be, despite the indeterminism of quantum physics, a possibility to know if a quantum jump is about to occur or not? In this dissertation, we answer this question affirmatively by experimentally demonstrating that the completed jump from the ground to an excited state of a superconducting artificial atom can be tracked, as it follows its predictable "flight," by monitoring the population of an auxiliary level coupled to the ground state. Furthermore, the experimental results demonstrate that the jump when completed is continuous, coherent, and deterministic. Exploiting these features, we catch and reverse a quantum jump mid-flight, thus deterministically preventing its completion. This real-time intervention is based on a particular lull period in the population of the auxiliary level, which serves as our advance warning signal. Our results, which agree with theoretical predictions essentially without adjustable parameters, support the modern quantum trajectory theory and provide new ground for the exploration of real-time intervention techniques in the control of quantum systems, such as early detection of error syndromes.
quantum physics
Using generating functions and some trivial bijections, we show in this paper that the binomial coefficients count the sets of (123,132)- and (123,213)-avoiding permutations according to the number of crossings. We also define a $q$-tableau of powers of two and prove that it counts the sets of (213,312)- and (132,312)-avoiding permutations according to the number of crossings.
mathematics
The behavior of the gluon distribution function and the reduced cross section is considered from the proton structure function and its derivatives at low values of $x$. These behaviors are studied and compared with the experimental data. The results are augmented by including an additional higher-twist term in the description of the nonlinear correction. This additional term, a modified nonlinear correction, improves the description of the reduced cross sections significantly at low values of $Q^{2}$. We discuss, furthermore, how this behavior can determine the reduced cross section with respect to the proton parameterization at high-$y$ values. The resulting predictions for $\sigma_{r}$ suggest that further corrections are required for $Q^{2}$ less than about $3~\mathrm{GeV}^{2}$.
high energy physics phenomenology
Quantum communication has led the way to many remarkable theoretical results and experimental tests in physics. In this context, quantum communication complexity (QCC) has recently drawn earnest research attention as a tool to optimize the number of transmitted qubits and the energy required to implement distributed computational tasks. On this matter, we introduce a novel multi-user quantum fingerprinting protocol that is ready to be implemented with existing technology. In particular, we extend a well-known two-user coherent-state fingerprinting scheme to the multi-user framework. This generalization is highly non-trivial for a twofold reason: it requires not only extending the set of protocol rules but also specifying a procedure for designing the optical devices intended for the generalized protocol. Much of the importance of our work arises from the fact that the obtained QCC figures of merit allow direct comparison with the best-known classical multi-user fingerprinting protocol, of significance in the field of computer technologies and networking. Furthermore, as one of the main contributions of the manuscript, we deduce novel analytical upper bounds on the amount of transmitted quantum information that are also valid for the two-user protocol as a particular case. Finally, comparative results are provided to contrast different protocol implementation strategies and, importantly, to show that, under realistic circumstances, the multi-user protocol can achieve tasks that are impossible using classical communication alone. Our work provides relevant contributions towards understanding the nature and the limitations of quantum fingerprinting and, on a broader scope, also the limitations and possibilities of quantum-communication networks embracing a node that is accessed by multiple users at the same time.
quantum physics
We present an algorithm for projecting superoperators onto the set of completely positive, trace-preserving maps. When combined with gradient descent of a cost function, the procedure results in an algorithm for quantum process tomography: finding the quantum process that best fits a set of sufficient observations. We compare the performance of our algorithm to the diluted iterative algorithm as well as second-order solvers interfaced with the popular CVX package for MATLAB, and find it to be significantly faster and more accurate while guaranteeing a physical estimate.
quantum physics
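The abstract above concerns projecting a superoperator onto the CPTP set. As a simple baseline (not the authors' algorithm, which is a different and faster projection routine), the following sketch alternates projections of a Choi matrix between the positive-semidefinite cone and the trace-preserving affine subspace; Dykstra's correction would be needed to recover the exact nearest CPTP map.

```python
# Drive a superoperator (as a Choi matrix) toward the CPTP set by alternating
# projections onto (i) the PSD cone and (ii) the trace-preserving affine set.
# Baseline illustration of the problem the paper solves, not their algorithm.
import numpy as np

d = 2                                  # qubit channel: Choi matrix is d^2 x d^2
rng = np.random.default_rng(0)

def proj_psd(J):
    w, V = np.linalg.eigh((J + J.conj().T) / 2)
    return (V * np.clip(w, 0, None)) @ V.conj().T

def partial_trace_out(J):              # trace over the output (second) factor
    return np.trace(J.reshape(d, d, d, d), axis1=1, axis2=3)

def proj_tp(J):                        # orthogonal projection onto Tr_out(J) = I
    gap = np.eye(d) - partial_trace_out(J)
    return J + np.kron(gap / d, np.eye(d))

J = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
J = (J + J.conj().T) / 2               # start from a Hermitian guess
for _ in range(200):                   # converges to a point in the intersection
    J = proj_tp(proj_psd(J))

print("min eigenvalue (≈ 0):", np.linalg.eigvalsh(J).min())
print("Tr_out(J) = I:", np.allclose(partial_trace_out(J), np.eye(d), atol=1e-6))
```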
This paper presents a configurable lattice cryptography processor which enables quantum-resistant security protocols for IoT. Efficient sampling architectures, coupled with a low-power SHA-3 core, provide two orders of magnitude energy savings over software. A single-port RAM-based NTT architecture is proposed, which provides ~124k-gate area savings. This is the first ASIC implementation which demonstrates multiple lattice-based protocols proposed for NIST post-quantum standardization.
computer science
To handle time series with complicated oscillatory structure, we propose a novel time-frequency (TF) analysis tool that fuses the short-time Fourier transform (STFT) and the periodic transform (PT). Since many time series oscillate with time-varying frequency, amplitude and non-sinusoidal oscillatory pattern, a direct application of the PT or the STFT might not be suitable. However, we show that by combining them in a proper way, we obtain a powerful TF analysis tool. We first combine the Ramanujan sums and $l_1$ penalization to implement the PT; we call the algorithm the Ramanujan PT (RPT). The RPT is of interest in its own right for other applications, like analyzing short signals composed of components with integer periods, but that is not the focus of this paper. Second, the RPT is applied to modify the STFT and generate a novel TF representation of the complicated time series that faithfully reflects the instantaneous frequency information of each oscillatory component. We coin the proposed TF analysis the Ramanujan de-shape (RDS) and vectorized RDS (vRDS). In addition to showing some preliminary analysis results on complicated biomedical signals, we provide a theoretical analysis of the RPT. Specifically, we show that the RPT is robust to three commonly encountered noises: envelope fluctuation, jitter and additive noise.
electrical engineering and systems science
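The two ingredients the RPT combines, Ramanujan sums and $l_1$ penalization, can be illustrated on a toy signal as follows. The dictionary construction and penalty weight below are our own illustrative choices; the paper's exact RPT formulation may differ.

```python
# Ramanujan sums + l1 fit: a toy version of the "RPT" ingredients named in the
# abstract above (the paper's exact penalty/normalization may differ).
import numpy as np
from math import gcd
from sklearn.linear_model import Lasso

def ramanujan_sum(q, n):
    return sum(np.cos(2 * np.pi * k * n / q)
               for k in range(1, q + 1) if gcd(k, q) == 1)

N, Qmax = 60, 10
n = np.arange(N)
signal = (2.0 * (n % 3 == 0) + 1.0 * (n % 7 == 0)
          + 0.05 * np.random.default_rng(2).normal(size=N))

cols, labels = [], []
for q in range(1, Qmax + 1):
    base = np.array([ramanujan_sum(q, m) for m in range(N)])
    for s in range(q):                   # circular shifts span the period-q space
        cols.append(np.roll(base, s))
        labels.append(q)
A = np.array(cols).T

fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=50000).fit(A, signal)
energy = {q: np.linalg.norm(fit.coef_[np.array(labels) == q]) for q in set(labels)}
# dominant energies should appear at the true periods (and their divisors)
print({q: round(e, 3) for q, e in energy.items() if e > 1e-3})
```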
Binary classification is a fundamental problem in machine learning. The recent development of quantum similarity-based binary classifiers and kernel methods that exploit quantum interference and the quantum feature Hilbert space has opened up tremendous opportunities for quantum-enhanced machine learning. To lay the groundwork for its further advancement, this work extends the general theory of quantum kernel-based classifiers. Existing quantum kernel-based classifiers are compared and the connections among them are analyzed. Focusing on the squared overlap between quantum states as a similarity measure, the essential and minimal ingredients for quantum binary classification are examined. The classifier is also extended with respect to various aspects, such as data type, measurement, and ensemble learning. The validity of the Hilbert-Schmidt inner product, which becomes the squared overlap for pure states, as a positive definite and symmetric kernel is explicitly shown, thereby connecting the quantum binary classifier and kernel methods.
quantum physics
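The squared overlap named in the abstract above, $K(x,y)=|\langle\phi(x)|\phi(y)\rangle|^2$, is a positive semidefinite kernel because it equals the Hilbert-Schmidt inner product of the corresponding pure-state density matrices. A small numeric check, with an illustrative single-qubit angle-encoding feature map of our own choosing:

```python
# Squared-overlap "quantum kernel" K(x, y) = |<phi(x)|phi(y)>|^2 with a toy
# single-qubit angle-encoding feature map (the encoding is our illustrative
# choice; the PSD property holds for any quantum feature map).
import numpy as np

def feature_state(x):
    # |phi(x)> = cos(x/2)|0> + sin(x/2)|1>
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def kernel(xs):
    states = np.array([feature_state(x) for x in xs])
    overlaps = states @ states.conj().T
    return np.abs(overlaps) ** 2          # squared overlap = pure-state fidelity

xs = np.linspace(0, np.pi, 8)
K = kernel(xs)
eigs = np.linalg.eigvalsh(K)
print("symmetric:", np.allclose(K, K.T), " PSD:", eigs.min() > -1e-12)
```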
For a finite abstract simplicial complex $G$ with $n$ sets, let $W^-(x)$ be the set of sets in $G$ contained in $x$ and $W^+(x)$ the set of sets in $G$ containing $x$. An integer-valued function $h$ on $G$ defines for every $A \subset G$ an energy $E[A]=\sum_{x \in A} h(x)$. The function energizes the geometry similarly to how divisors do in the continuum, where the Riemann-Roch quantity $\chi(G)+\deg(D)$ plays the role of the energy. Define the $n \times n$ matrices $L(x,y)=L^{--}(x,y)=E[W^-(x) \cap W^-(y)]$ and $L^{++}(x,y) = E[W^+(x) \cap W^+(y)]$. With the notation $S(x,y)=\delta(x,y)\,\omega(x)$, where $\omega(x)=(-1)^{\dim(x)}$, and $\mathrm{str}(A)=\mathrm{tr}(SA)$, define $g=S L^{++} S$. The results are: $\det(L)=\det(g) = \prod_{x \in G} h(x)$ and $E[G] = \sum_{x,y} g(x,y) = \mathrm{str}(g)$. The number of positive eigenvalues of $g$ is equal to the number of positive energy values of $h$. In special cases, more is true: A) If $h(x) \in \{-1, 1\}$, the matrices $L^{--}$ and $L^{++}$ are unimodular and $L^{-1} = g$, even if $G$ is a set of sets. B) In the constant-energy case $h(x)=1$, $L$ and $g$ are isospectral, positive definite matrices in $\mathrm{SL}(n,\mathbb{Z})$. For any set of sets $G$ we thus get isospectral multi-graphs, defined by the adjacency matrices $L^{++}$ or $L^{--}$, which have identical spectral and Ihara zeta functions. The positive definiteness holds for positive divisors in general. C) In the topological case $h(x)=\omega(x)$, the energy $E[G]=\mathrm{str}(L) = \mathrm{str}(g) = \sum_{x,y} g(x,y)=\chi(G)$ is the Euler characteristic of $G$, and $\phi(G)=\prod_{x} \omega(x)$, a product identity which holds for an arbitrary set of sets. D) For $h(x)=t^{|x|}$ with some parameter $t$ we have $E[H]=1-f_H(t)$ with $f_H(t)=1+f_0 t + \cdots + f_d t^{d+1}$ for the $f$-vector of $H$, and $L(x,y) = 1-f_{W^-(x) \cap W^-(y)}(t)$ and $g(x,y)=\omega(x) \omega(y) \bigl(1-f_{W^+(x) \cap W^+(y)}(t)\bigr)$. The inverse of $g$ is then $g^{-1}(x,y) = \bigl(1-f_{W^-(x) \cap W^-(y)}(t)\bigr)/t^{\dim(x \cap y)}$ and $E[G] = 1-f_G(t)=\sum_{x,y} g(x,y)$.
mathematics
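The identities stated in the abstract above can be checked numerically from the definitions alone. The sketch below does this for a random energy function $h$ with values in $\{-1,1\}$ on the full simplicial complex on four vertices; the choice of complex and random seed is arbitrary.

```python
# Numerical check of the stated identities (L^{-1} = g for h in {-1,1},
# det(L) = prod h(x), E[G] = sum_{x,y} g(x,y)) on a small simplicial complex.
import numpy as np
from itertools import combinations

# Full complex on 4 vertices: all nonempty subsets of {0,1,2,3} (15 sets).
V = range(4)
G = [frozenset(c) for k in range(1, 5) for c in combinations(V, k)]

rng = np.random.default_rng(0)
h = {x: int(rng.choice([-1, 1])) for x in G}        # energy values in {-1, 1}
omega = {x: (-1) ** (len(x) - 1) for x in G}        # (-1)^dim(x)
E = lambda A: sum(h[z] for z in A)

n = len(G)
L = np.array([[E([z for z in G if z <= x and z <= y]) for y in G] for x in G])
g = np.array([[omega[x] * omega[y] * E([z for z in G if x <= z and y <= z])
               for y in G] for x in G])

print("L g = I:", np.allclose(L @ g, np.eye(n)))
print("det(L) =", round(np.linalg.det(L)), " prod h =", np.prod(list(h.values())))
print("E[G] =", E(G), " sum g =", int(g.sum()))
```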
In this paper we study compactifications of the ${\cal N}=2$ heterotic $E_8\times E_8$ string on $(K3\times T^2)/\mathbb{Z}_3$ with various gauge backgrounds and calculate the topological couplings in the effective supergravity action that arise from one-loop amplitudes. We then identify candidates for dual type IIA compactifications on Calabi-Yau threefolds and compare the heterotic results with the corresponding topological string amplitudes. We find that the dual Calabi-Yau geometries are $K3$ fibrations that are also genus one fibered with three-sections. Moreover, we show that the intersection form on the polarization lattice of the $K3$ fibration has to be three times the intersection form on the Narain lattice $\Gamma^{1,1}$.
high energy physics theory
This article describes location-aware temperature profiles from six strawberry shipments across the continental United States. Three pallets were instrumented in each shipment with three vertically placed loggers to take a longitudinal and latitudinal snapshot of 9 strategically different locations (including the top, middle and bottom layers of the pallets placed in the back, middle and front of the shipping container), for a combined 54 measurement points across shipments of varying lengths. The sensors were instrumented in the field, right at the point of harvest, recorded temperatures every 5 to 10 minutes depending on the shipment, and uploaded their data periodically via cellular radios on each device. The data is the result of a significant collaboration between stakeholders, from farmers to distributors to retailers to academics, and can play an important role for researchers and educators in food engineering, cold-chain logistics, machine learning, and data mining, as well as in other disciplines related to food and transportation.
electrical engineering and systems science
In this work, we study the implications of Higgs precision measurements at future Higgs factories for the MSSM parameter space, focusing on the dominant stop-sector contributions. We perform a multi-variable fit to both the signal strengths for various Higgs decay channels at Higgs factories and the Higgs mass. The chi-square fit results show sensitivity to $m_A$, $\tan\beta$, the stop mass parameter $m_{\rm SUSY}$, as well as the stop left-right mixing parameter $X_t$. We also study the impact of the Higgs mass prediction on the MSSM and compare the sensitivities of different Higgs factories.
high energy physics phenomenology
It is well known that no local model - in theory - can simulate the outcome statistics of a Bell-type experiment as long as the detection efficiency is higher than a threshold value. For the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality this theoretical threshold value is $\eta_{\text{T}} = 2 (\sqrt{2}-1) \approx 0.8284$. On the other hand, Phys.\ Rev.\ Lett.\ 107, 170404 (2011) outlined an explicit practical model that can fake the CHSH inequality for a detection efficiency of up to $0.5$. In this work, we close this gap. More specifically, we propose a method to emulate a Bell inequality at the threshold detection efficiency using existing optical detector control techniques. For a Clauser-Horne-Shimony-Holt inequality, it emulates the CHSH violation predicted by quantum mechanics up to $\eta_{\text{T}}$. For the Garg-Mermin inequality - re-calibrated by incorporating non-detection events - our method emulates its exact local bound at any efficiency above the threshold. This confirms that attacks on secure quantum communication protocols based on Bell violation are a real threat if the detection efficiency loophole is not closed.
quantum physics
This paper is concerned with developing a theory of traces for functions that are integrable but need not possess any differentiability within their domain. Moreover, the domain can have an irregular boundary with cusp-like features and codimension not necessarily equal to one, or even an integer. Given $\Omega\subseteq\mathbb{R}^n$ and $\Gamma\subseteq\partial\Omega$, we introduce a function space $\mathscr{N}^{s(\cdot),p}(\Omega)\subseteq L^p_{\text{loc}}(\Omega)$ for which a well-defined trace operator can be identified. Membership in $\mathscr{N}^{s(\cdot),p}(\Omega)$ constrains the oscillations in the function values as $\Gamma$ is approached, but does not imply any regularity away from $\Gamma$. Under connectivity assumptions between $\Omega$ and $\Gamma$, we produce a linear trace operator from $\mathscr{N}^{s(\cdot),p}(\Omega)$ to the space of measurable functions on $\Gamma$. The connectivity assumptions are satisfied, for example, by all $1$-sided nontangentially accessible domains. If $\Gamma$ is upper Ahlfors-regular, then the trace is a continuous operator into a Sobolev-Slobodeckij space. If $\Gamma=\partial\Omega$ and is further assumed to be lower Ahlfors-regular, then the trace exhibits the standard Lebesgue point property. To demonstrate the generality of the results, we construct $\Omega\subseteq\mathbb{R}^2$ with a $t>1$-dimensional Ahlfors-regular $\Gamma\subseteq\partial\Omega$ satisfying the main domain hypotheses, yet $\Gamma$ is nowhere rectifiable and for every neighborhood of every point in $\Gamma$, there exists a boundary point within that neighborhood that is only tangentially accessible.
mathematics
For an element $r$ of a ring $R$, a Diophantine $D(r)$ $m$-tuple is an $m$-tuple $(a_1,a_2,\ldots,a_m)$ of elements of $R$ such that for all $i,j$ with $i\neq j$, $a_ia_j+r$ is a perfect square in $R$. In this article, we compute and estimate the measures of the sets of $D(r)$ $m$-tuples in the ring $\mathbb{Z}_p$ of $p$-adic integers, as well as its residue field $\mathbb{F}_p$.
mathematics
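The definition above is easy to experiment with in the residue field $\mathbb{F}_p$. A brute-force sketch follows; conventions such as restricting to nonzero, distinct entries are our illustrative choices, and the paper works with measures in $\mathbb{Z}_p$ under its own conventions.

```python
# Brute-force count of Diophantine D(r) pairs in the residue field F_p:
# tuples whose pairwise products plus r are squares mod p.
from itertools import combinations

def squares(p):
    return {(k * k) % p for k in range(p)}   # includes 0

def is_D_r_tuple(tup, r, p):
    sq = squares(p)
    return all((a * b + r) % p in sq for a, b in combinations(tup, 2))

p, r = 13, 1
pairs = [(a, b) for a in range(1, p) for b in range(a + 1, p)
         if is_D_r_tuple((a, b), r, p)]
print(f"D({r}) pairs in F_{p}:", len(pairs))
# The classical D(1) triple (1, 3, 8) over Z remains one mod 13:
# 1*3+1=4, 1*8+1=9, 3*8+1=25=12 (mod 13), and 12 = 5^2 (mod 13).
print("is (1, 3, 8) a D(1) triple mod 13?", is_D_r_tuple((1, 3, 8), 1, 13))
```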
Different decompositions of the nucleon mass, in terms of the masses and energies of the underlying constituents, have been proposed in the literature. We explore the corresponding sum rules in quantum electrodynamics for an electron at one-loop order in perturbation theory. To this end we compute the form factors of the energy-momentum tensor, by paying particular attention to the renormalization of ultraviolet divergences, operator mixing and scheme dependence. We clarify the expressions of all the proposed sum rules in the electron rest frame in terms of renormalized operators. Furthermore, we consider the same sum rules in a moving frame, where they become energy decompositions. Finally, we discuss some implications of our study on the mass sum rules for the nucleon.
high energy physics phenomenology
Deformable templates are essential to large-scale medical image registration, segmentation, and population analysis. Current conventional and deep network-based methods for template construction use only regularized registration objectives and often yield templates with blurry and/or anatomically implausible appearance, confounding downstream biomedical interpretation. We reformulate deformable registration and conditional template estimation as an adversarial game wherein we encourage realism in the moved templates with a generative adversarial registration framework conditioned on flexible image covariates. The resulting templates exhibit significant gain in specificity to attributes such as age and disease, better fit underlying group-wise spatiotemporal trends, and achieve improved sharpness and centrality. These improvements enable more accurate population modeling with diverse covariates for standardized downstream analyses and easier anatomical delineation for structures of interest.
computer science