text | label
---|---|
The partial entanglement entropy $s_{\mathcal{A}}(\mathcal{A}_i)$ captures the contribution from the subset $\mathcal{A}_i$ of the region $\mathcal{A}$ to the total entanglement entropy $S_{\mathcal{A}}$ of $\mathcal{A}$. The partial entanglement entropy proposal \cite{Wen:2018whg,Wen:2019ubu} claims that $s_{\mathcal{A}}(\mathcal{A}_i)$ equals a linear combination of the entanglement entropies of certain relevant subsets of $\mathcal{A}$. We derive the differential version of this proposal, which directly generates the scheme-independent entanglement contour function, and we derive a sufficient condition for applying it. Conversely, the proposal indicates that, given the partial entanglement entropy, we can calculate the entanglement entropies of the subsets. Following this idea, on the field theory side we analytically calculate the entanglement contour functions and entanglement entropies for annuli and spherical shells in holographic CFTs in arbitrary dimensions. We also comment on the phase transition of the holographically calculated mutual information across an annulus.
|
high energy physics theory
|
Hierarchical forecasting with intermittent time series is a challenge in both research and empirical studies. The overall forecasting performance is heavily affected by the forecasting accuracy of the intermittent time series at the bottom levels. In this paper, we present a forecast reconciliation approach that treats the bottom-level forecasts as latent in order to ensure higher forecasting accuracy at the upper levels of the hierarchy. We employ a pure deep-learning forecasting approach, N-BEATS, for the continuous time series at the top levels, and a widely used tree-based algorithm, LightGBM, for the bottom-level intermittent time series. The hierarchical-forecasting-with-alignment approach is simple and straightforward to implement in practice, and it sheds light on an orthogonal direction for forecast reconciliation: when finding an optimal reconciliation is difficult, allowing suboptimal forecasts at a lower level can retain high overall performance. The approach in this empirical study was developed by the first author during the M5 Forecasting Accuracy competition, in which it ranked second place. The approach is business oriented and could be beneficial for business strategic planning.
|
statistics
|
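As an illustration of the alignment idea in the abstract above, the following sketch rescales bottom-level forecasts so that they aggregate exactly to a trusted top-level forecast. The proportional-scaling rule and the function name are hypothetical simplifications of mine, not the authors' actual procedure.

```python
import numpy as np

def align_bottom_to_top(top_forecast, bottom_forecasts):
    """Rescale bottom-level forecasts so they sum to the trusted
    top-level forecast (one hypothetical 'alignment' step)."""
    total = bottom_forecasts.sum()
    if total == 0:
        # spread evenly when all bottom forecasts are zero,
        # a common situation with intermittent series
        return np.full_like(bottom_forecasts, top_forecast / len(bottom_forecasts), dtype=float)
    return bottom_forecasts * (top_forecast / total)
```

In this toy rule the relative proportions among bottom series are preserved while hierarchy coherence is enforced; the paper's approach instead trains the levels with different models and tolerates suboptimal bottom-level forecasts.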
We present a quantum LDPC code family that has distance $\Omega(N^{3/5}/\operatorname{polylog}(N))$ and $\tilde\Theta(N^{3/5})$ logical qubits. This is the first quantum LDPC code construction which achieves distance greater than $N^{1/2} \operatorname{polylog}(N)$. The construction is based on generalizing the homological product of codes to a fiber bundle.
|
quantum physics
|
In this paper we consider linear relations among the conjugates of a Salem number $\alpha$. We show that every such relation arises from a linear relation among the conjugates of the corresponding totally real algebraic integer $\alpha+1/\alpha$. It is also shown that the smallest degree of a Salem number with a nontrivial relation between its conjugates is $8$, whereas the smallest length of a nontrivial linear relation between the conjugates of a Salem number is $6$.
|
mathematics
|
A quantum radar is generally defined as a detection sensor that utilizes microwave photons, like a classical radar, while employing quantum phenomena to improve its detection, identification, and resolution capabilities. However, entanglement is fragile, unstable, and difficult to create and preserve for long times; more importantly, entangled states tend to leak away as a result of noise. These points imply that the entangled states should be carefully studied at each step of the quantum radar detection process, as follows. Firstly, entanglement between microwave and optical photons is created in the tripartite system. Secondly, the entangled microwave photons are intensified. Thirdly, the intensified photons propagate through the atmosphere (an attenuating medium) and are reflected from a target. Finally, the backscattered photons are intensified before detection. At each step, the parameters of the real media and the target material can cause the entangled states to leak away easily. In this article, the entanglement behavior of a designed quantum radar is specifically investigated. Quantum electrodynamics is used to analyze the quantum radar system and to identify the parameters influencing the entanglement behavior. The equations of motion of the tripartite system are derived using the quantum canonical conjugate method. The simulation results indicate that the features of the tripartite system and the amplifier can be designed so that the detected photons remain entangled with the optical modes.
|
quantum physics
|
Depth information matters in the RGB-D semantic segmentation task, as it provides additional geometric information to complement color images. Most existing methods exploit a multi-stage fusion strategy to propagate depth features to the RGB branch. However, at very deep stages, propagation by simple element-wise addition cannot fully utilize the depth information. We propose the Global-Local Propagation Network (GLPNet) to solve this problem. Specifically, a local context fusion module (L-CFM) is introduced to dynamically align the two modalities before element-wise fusion, and a global context fusion module (G-CFM) is introduced to propagate the depth information to the RGB branch by jointly modeling the multi-modal global context features. Extensive experiments demonstrate the effectiveness and complementarity of the proposed fusion modules. Embedding the two fusion modules into a two-stream encoder-decoder structure, our GLPNet achieves new state-of-the-art performance on two challenging indoor scene segmentation datasets, i.e., the NYU-Depth v2 and SUN-RGBD datasets.
|
computer science
|
The data drawn from biological, economic, and social systems are often confounded due to the presence of unmeasured variables. Prior work in causal discovery has focused on discrete search procedures for selecting acyclic directed mixed graphs (ADMGs), specifically ancestral ADMGs, that encode ordinary conditional independence constraints among the observed variables of the system. However, confounded systems also exhibit more general equality restrictions that cannot be represented via these graphs, placing a limit on the kinds of structures that can be learned using ancestral ADMGs. In this work, we derive differentiable algebraic constraints that fully characterize the space of ancestral ADMGs, as well as more general classes of ADMGs, arid ADMGs and bow-free ADMGs, that capture all equality restrictions on the observed variables. We use these constraints to cast causal discovery as a continuous optimization problem and design differentiable procedures to find the best fitting ADMG when the data comes from a confounded linear system of equations with correlated errors. We demonstrate the efficacy of our method through simulations and application to a protein expression dataset. Code implementing our methods is open-source and publicly available at https://gitlab.com/rbhatta8/dcd and will be incorporated into the Ananke package.
|
computer science
|
Linear regression on a set of observations linked by a network has been an essential tool in modeling the relationship between response and covariates with additional network data. Despite its wide range of applications in many areas, such as the social sciences and health-related research, the problem has not been well studied in statistics so far. Previous methods either lack inference tools or rely on restrictive assumptions about social effects, and they usually assume that networks are observed without errors, which is unrealistic in many problems. This paper proposes a linear regression model with nonparametric network effects. The model does not assume that the relational data or network structure is exactly observed; thus, the method can be provably robust to a certain level of network perturbation. A set of asymptotic inference results is established under a general requirement on the network observational errors, and the robustness of the method is studied in the specific setting where the errors come from random network models. We discover a phase-transition phenomenon of the inference validity concerning the network density when no prior knowledge of the network model is available, while also showing a significant improvement achieved by knowing the network model. As a by-product of this analysis, we derive a rate-optimal concentration bound for random subspace projection that may be of independent interest. Extensive simulation studies are conducted to verify these theoretical results and demonstrate the advantage of the proposed method over existing work in terms of accuracy and computational efficiency under different data-generating models. The method is then applied to adolescent network data to study gender and racial differences in social activities.
|
statistics
|
We propose an iteration-free source separation algorithm based on Winner-Take-All (WTA) hash codes, a faster yet accurate alternative to a complex machine learning model for single-channel source separation in a resource-constrained environment. We first generate random permutations with WTA hashing to encode the shape of the multidimensional audio spectrum into a reduced bitstring representation. A nearest neighbor search on the hash codes, with an incoming noisy spectrum as the query string, returns the closest matches among the hashed mixture spectra. Using the indices of the matching frames, we obtain the corresponding ideal binary mask vectors for denoising. Since both the training data and the search operation are bitwise, the procedure can be implemented efficiently in hardware. Experimental results show that the WTA hash codes are discriminative and provide an affordable dictionary search mechanism that leads to competitive performance compared to a comprehensive model and oracle masking.
|
electrical engineering and systems science
|
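The hash-and-search pipeline in the abstract above can be sketched as follows. The permutation count, subset size, and function names are illustrative assumptions of mine, not the authors' exact configuration.

```python
import numpy as np

def make_perms(d, n_perms=64, k=4, seed=0):
    """Draw n_perms random k-subsets of the d spectrum bins (hypothetical sizes)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, d, size=(n_perms, k))

def wta_codes(X, perms):
    """WTA hashing: for each frame (row of X) and each permutation, keep only
    the argmax position among the k sampled bins, reducing each frame to
    n_perms small integers -- a short, bitwise-comparable code."""
    return np.argmax(X[:, perms], axis=2)

def nearest_frame(query, db_codes, perms):
    """Match a query spectrum against hashed mixture frames by counting
    agreeing hash symbols (a Hamming-style similarity)."""
    q = wta_codes(query[None, :], perms)[0]
    return int(np.argmax((db_codes == q).sum(axis=1)))
```

In the paper's setting, the returned frame index would then be used to look up the corresponding ideal binary mask for denoising.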
The existence of a global causal order between events places constraints on the correlations that parties may share. Such "causal correlations" have been the focus of recent attention, driven by the realization that some extensions of quantum mechanics may violate so-called causal inequalities. In this paper we study causal correlations from an entropic perspective, and we show how to use this framework to derive entropic causal inequalities. We consider two different ways to derive such inequalities. Firstly, we consider a method based on the causal Bayesian networks describing the causal relations between the parties. In contrast to the Bell-nonlocality scenario, where this method has previously been shown to be ineffective, we show that it leads to several interesting entropic causal inequalities. Secondly, we consider an alternative method based on counterfactual variables that has previously been used to derive entropic Bell inequalities. We compare the inequalities obtained via these two methods and discuss their violation by noncausal correlations. As an application of our approach, we derive bounds on the quantity of information - which is more naturally expressed in the entropic framework - that parties can communicate when operating in a definite causal order.
|
quantum physics
|
This paper presents an iterative data-driven algorithm for solving dynamic multi-objective (MO) optimal control problems arising in the control of nonlinear continuous-time systems. It is first shown that the Hamiltonian functional corresponding to each objective can be leveraged to compare the performance of admissible policies. Hamiltonian inequalities are then introduced whose satisfaction guarantees that the objectives' aspirations are met. An aspiration-satisfying dynamic optimization framework is then presented to optimize the main objective while satisfying the aspirations of the other objectives, and its relation to the satisficing (good-enough) decision-making framework is shown. A sum-of-squares (SOS) based iterative algorithm is developed to solve the formulated aspiration-satisfying MO optimization. To obviate the requirement of complete knowledge of the system dynamics, a data-driven satisficing reinforcement learning approach is proposed that solves the SOS optimization problem in real time using only system trajectories measured over a time interval. Finally, two simulation examples are provided to show the effectiveness of the proposed algorithm.
|
electrical engineering and systems science
|
Emergence of deterministic and irreversible macroscopic behavior from deterministic and reversible microscopic dynamics is understood as a result of the law of large numbers. In this paper, we prove on the basis of the theory of algorithmic randomness that Martin-L\"of random initial microstates satisfy an irreversible macroscopic law in the Kac infinite chain model. We find that the time-reversed state of a random state is not random and, moreover, violates the macroscopic law.
|
condensed matter
|
The entanglement wedge cross section (EWCS) is investigated numerically, both statically and dynamically, in a five-dimensional AdS-Vaidya spacetime with Gauss-Bonnet (GB) corrections, focusing on two identical rectangular strips on the boundary. In the static case, the EWCS increases with the GB coupling constant $\alpha$, and the two strips disentangle at smaller separations for smaller $\alpha$. In the dynamical case, we observe that the monotonic relation between the EWCS and $\alpha$ holds, but the two strips no longer disentangle monotonically. In the early stage of the thermal quench, disentanglement occurs at greater separations for smaller $\alpha$; as time evolves, the two strips instead disentangle at larger separations for larger $\alpha$. Our results suggest that higher-order derivative corrections have nontrivial effects on the EWCS, and hence on the entanglement of purification in the dual boundary theory.
|
high energy physics theory
|
In this paper we study finite element discretizations of a surface vector-Laplace eigenproblem. We consider two known classes of finite element methods, namely one based on a vector analogue of the Dziuk-Elliott surface finite element method and one based on the so-called trace finite element technique. A key ingredient in both classes of methods is a penalization technique used to enforce tangentiality of the vector field in a weak sense. This penalization, together with the perturbations that arise from the numerical approximation of the surface, leads to essential nonconformities in the discretization of the variational formulation of the vector-Laplace eigenproblem. We present a general abstract framework applicable to such nonconforming discretizations of eigenproblems. Error bounds for both eigenvalue and eigenvector approximations are derived that depend on certain consistency and approximability parameters, and the sharpness of these bounds is discussed. Results of a numerical experiment illustrate certain convergence properties of such finite element discretizations of the surface vector-Laplace eigenproblem.
|
mathematics
|
The electric conductivity is considered in a fully anisotropic holographic theory. It is calculated in two different ways, and their equivalence is established for the fully anisotropic case. Numerical calculations of the electric conductivity are performed for the Einstein-dilaton-three-Maxwell holographic model [1]. The dependence of the conductivity on the temperature, the chemical potential, the external magnetic field, and the spatial anisotropy associated with heavy-ion collisions is studied. Jumps of the electric conductivity near the first-order phase transition are observed. This effect is similar to the jumps of holographic entanglement studied previously.
|
high energy physics theory
|
Image quality is the basis of image communication and understanding tasks, and it is degraded by the blur and noise introduced during imaging, transmission, and other processes. Blind image restoration is widely used to improve image quality; its main goal is to faithfully estimate the blur kernel and the latent sharp image. In this study, based on experimental observation and analysis, a novel adaptively sparse regularized minimization method is proposed. High-order gradients are combined with low-order ones to form a hybrid regularization term, and an adaptive operator derived from the image entropy is introduced to maintain good convergence. Extensive experiments were conducted on different blur kernels and images. Compared with existing state-of-the-art blind deblurring methods, our method demonstrates superior recovery accuracy.
|
electrical engineering and systems science
|
We present a model describing the dark sector (DS) featured by two interactions remaining efficient until late times in the matter-dominated era after recombination: the interaction among dark radiation (DR), and the interaction between a small fraction of dark matter and dark radiation. The dark sector consists of (1) a dominant component of cold collisionless DM (DM1), (2) a sub-dominant cold DM (DM2) and (3) a self-interacting DR. When a sufficient amount of DR is ensured and a few percent of the total DM density is contributed by DM2 interacting with DR, this set-up is known to be able to resolve both the Hubble and $\sigma_{8}$ tensions. In light of this, we propose a scenario which is logically natural and has an intriguing theoretical structure with a hidden unbroken gauge group ${\rm SU}(5)_{\rm X}\otimes {\rm U}(1)_{\rm X}$. Our model of the dark sector does not introduce any new scalar field, but contains only massless chiral fermions and gauge fields in the ultraviolet (UV) regime. As such, it introduces a new scale (the DM2 mass, $m_{\rm DM2}$) based on the confinement resulting from the strong dynamics of ${\rm SU}(5)_{\rm X}$. Both DM2-DR and DR-DR interactions are attributed to an identical long-range interaction of ${\rm U}(1)_{\rm X}$. We show that our model can address the cosmological tensions when it is characterized by $g_{\rm X}=\mathcal{O}(10^{-3})-\mathcal{O}(10^{-2})$, $m_{\rm DM2}=\mathcal{O}(1)-\mathcal{O}(100)\,{\rm GeV}$ and $T_{\rm DS}/T_{\rm SM}\simeq0.3-0.4$, where $g_{\rm X}$ is the gauge coupling of ${\rm U}(1)_{\rm X}$ and $T_{\rm DS}$ ($T_{\rm SM}$) is the temperature of the DS (Standard Model sector). Our model provides candidates for DM2 and DR, and DM1 can be any kind of CDM.
|
high energy physics phenomenology
|
We study the spatio-temporal two-point correlation function of passively advected scalar fields in the inertial-convective range in three dimensions by means of numerical simulations. We show that at small time delays $t$ the correlations decay as a Gaussian in the variable $tp$, where $p$ is the wavenumber. At large time delays, a crossover to an exponential decay in $tp^2$ is expected from a recent functional renormalization group (FRG) analysis. We study this regime for a scalar field advected by Kraichnan's ``synthetic'' velocity field, and accurately confirm the FRG result, including the form of the prefactor in the exponential. By introducing finite time correlations in the synthetic velocity field, we uncover the crossover between the two regimes.
|
physics
|
The presence of multiparticle entanglement is an important benchmark for the performance of intermediate-scale quantum technologies. In this work we consider statistical methods based on locally randomized measurements in order to characterize different degrees of multiparticle entanglement in qubit systems. We introduce hierarchies of criteria, satisfied by states which are separable with respect to partitions of different size, involving only second moments of the underlying probability distribution. Furthermore, we study in detail the resources required for a statistical estimation of the respective moments if only a finite number of samples is available, and discuss their scaling with the system size.
|
quantum physics
|
Disentangling the relationship between the insulating state with a charge gap and the magnetic order in an antiferromagnetic (AF) Mott insulator remains difficult due to the inherent phase separation that occurs as the Mott state is perturbed. Measuring magnetic and electronic properties at atomic length scales would provide crucial insight, but this has yet to be achieved experimentally. Here we use spectroscopic-imaging spin-polarized scanning tunneling microscopy (SP-STM) to visualize periodic spin-resolved modulations originating from the AF order in the relativistic Mott insulator Sr2IrO4, and study these as a function of doping. We find that near the insulator-to-metal transition (IMT), the long-range AF order melts into a fragmented state with short-range AF correlations. Crucially, we discover that the short-range AF order is locally uncorrelated with the observed spectral gap magnitude. This strongly suggests that short-range AF correlations are unlikely to be the culprit behind the inhomogeneous gap closing and the emergence of pseudogap regions near the IMT. Our work establishes SP-STM as a powerful tool for revealing atomic-scale magnetic information in complex oxides.
|
condensed matter
|
Decentralized coordination of a robot swarm requires addressing the tension between local perceptions and actions and the accomplishment of a global objective. In this work, we propose to learn decentralized controllers based solely on raw visual inputs. For the first time, we integrate the learning of two key components, communication and visual perception, in one end-to-end framework. More specifically, we consider that each robot has access to a visual perception of its immediate surroundings and to communication capabilities for transmitting and receiving messages from neighboring robots. Our proposed learning framework combines a convolutional neural network (CNN) on each robot, which extracts messages from the visual inputs, with a graph neural network (GNN) over the entire swarm, which transmits, receives, and processes these messages in order to decide on actions. The use of a GNN together with locally run CNNs naturally yields a decentralized controller. We jointly train the CNNs and the GNN so that each robot learns to extract messages from its images that are adequate for the team as a whole. Our experiments demonstrate the proposed architecture on the problem of drone flocking and show its promising performance and scalability, e.g., achieving successful decentralized flocking for swarms of up to 75 drones.
|
electrical engineering and systems science
|
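A minimal sketch of the decentralized message-passing step described in the abstract above: each robot combines the message extracted from its own camera with an aggregate of its neighbors' messages. The layer shape, the tanh nonlinearity, and all names are illustrative assumptions of mine, not the paper's architecture.

```python
import numpy as np

def gnn_layer(messages, adjacency, w_self, w_neigh):
    """One round of message passing over the swarm.

    messages:  (n_robots, dim) array, one message per robot (e.g. a CNN output)
    adjacency: (n_robots, n_robots) 0/1 communication graph
    Each robot only needs its own row of `adjacency` (its neighbors),
    so the update can be executed locally on every robot.
    """
    neighbor_sum = adjacency @ messages              # aggregate neighbors' messages
    return np.tanh(messages @ w_self + neighbor_sum @ w_neigh)
```

Stacking several such rounds lets information diffuse over multi-hop neighborhoods; in the paper, the output would feed each robot's action head.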
The theory of orbital magnetization is reconsidered by defining additional quantities that incorporate a non-Hermitian effect due to anomalous operators that break the domain of definition of the Hermitian Hamiltonian. As a result, boundary contributions to the observable are rigorously and analytically taken into account. In this framework, we extend the standard velocity operator definition in order to incorporate an anomaly of the position operator that is inherent in band theory, which results in an explicit boundary velocity contribution. Using the extended velocity, we define the electrons' intrinsic orbital circulation and we argue that this is the main quantity that captures the orbital magnetization phenomenon. As evidence of this assertion, we demonstrate the explicit relation between the $n$th-band electrons' collective intrinsic circulation and the approximated local and itinerant circulation contributions, evaluated with respect to Wannier states, that are frequently used in the modern theory of orbital magnetization. A quantum mechanical formalism for the orbital magnetization of extended and periodic topological solids (insulators or metals) is redeveloped without any Wannier localization approximation or heuristic extension [Ceresoli, Thonhauser, Vanderbilt and Resta, Phys. Rev. B 74, 024408 (2006)]. It is rigorously shown that, as a result of the non-Hermitian effect, an emerging covariant derivative enters the one-band (adiabatically deformed) approximation k-space expression for the orbital magnetization. In the corresponding many-band (unrestricted) k-space formula, the non-Hermitian effect contributes an additional boundary quantity, which is expected to give locally (in momentum space) giant contributions whenever band crossings occur, along with a Hall voltage due to an imbalance of electron accumulation at the opposite boundaries of the material.
|
condensed matter
|
Combining ocean model data and in-situ Lagrangian data, I show that an array of surface drifting buoys tracked by a Global Navigation Satellite System (GNSS), such as the Global Drifter Program, could provide estimates of global mean sea level (GMSL) and its changes, including linear decadal trends. For a sustained array of 1250 globally distributed buoys with a standardized design, I demonstrate that GMSL decadal linear trend estimates with an uncertainty of less than 0.3 mm yr$^{-1}$ could be achieved with a GNSS daily random error of 1.6 m or less in the vertical direction. This demonstration assumes that controlled vertical position measurements can be acquired from drifting buoys, which has yet to be demonstrated. Development and implementation of such measurements could ultimately provide an independent and resilient observational system to infer natural and anthropogenic sea level changes, augmenting the ongoing tide gauge and satellite records.
|
physics
|
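The uncertainty claim in the abstract above can be sanity-checked with a back-of-the-envelope calculation: averaging 1250 independent buoys reduces a 1.6 m daily vertical error to a few centimeters, and an ordinary-least-squares fit over a decade of daily samples then yields a trend error below 0.3 mm/yr. The iid-noise assumption and the exact ten-year window are simplifications of mine, not the paper's full error budget.

```python
import numpy as np

n_buoys = 1250        # array size from the abstract
sigma_gnss = 1600.0   # daily vertical random error per buoy, in mm
years = 10.0          # decadal trend
t = np.arange(int(365 * years)) / 365.0   # daily sampling times, in years

# Averaging independent buoy errors gives the daily GMSL error:
sigma_daily = sigma_gnss / np.sqrt(n_buoys)   # ~45 mm

# OLS slope standard error for iid noise: sigma / sqrt(sum((t - tbar)^2))
sigma_trend = sigma_daily / np.sqrt(np.sum((t - t.mean()) ** 2))
print(round(sigma_trend, 2))   # ~0.26 mm/yr, below the 0.3 mm/yr target
```

Correlated GNSS errors or a shorter record would inflate this estimate, which is presumably why the paper relies on ocean model and Lagrangian data rather than this idealized calculation.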
We consider the task of estimating the expectation value of an $n$-qubit tensor product observable $O_1\otimes O_2\otimes \cdots \otimes O_n$ in the output state of a shallow quantum circuit. This task is a cornerstone of variational quantum algorithms for optimization, machine learning, and the simulation of quantum many-body systems. Here we study its computational complexity for constant-depth quantum circuits and three types of single-qubit observables $O_j$ which are (a) close to the identity, (b) positive semidefinite, (c) arbitrary. It is shown that the mean value problem admits a classical approximation algorithm with runtime scaling as $\mathrm{poly}(n)$ and $2^{\tilde{O}(\sqrt{n})}$ in cases (a,b) respectively. In case (c) we give a linear-time algorithm for geometrically local circuits on a two-dimensional grid. The mean value is approximated with a small relative error in case (a), while in cases (b,c) we satisfy a less demanding additive error bound. The algorithms are based on (respectively) Barvinok's polynomial interpolation method, a polynomial approximation for the OR function arising from quantum query complexity, and a Monte Carlo method combined with Matrix Product State techniques. We also prove a technical lemma characterizing a zero-free region for certain polynomials associated with a quantum circuit, which may be of independent interest.
|
quantum physics
|
Graphical functions are special position space Feynman integrals, which can be used to calculate Feynman periods and one- or two-scale processes at high loop orders. With graphical functions, renormalization constants have been calculated to loop orders seven and eight in four-dimensional $\phi^4$ theory and to order five in six-dimensional $\phi^3$ theory. In this article we present the theory of graphical functions in even dimensions $\geq4$ with detailed reviews of known properties and full proofs whenever possible.
|
high energy physics theory
|
The interplay between strong electron correlation and band topology is at the forefront of condensed matter research. As a direct consequence of correlation, magnetism enriches topological phases and also has promising functional applications. However, the influence of topology on magnetism remains unclear, and the main research effort has been limited to ground-state magnetic orders. Here we report a novel order above the magnetic transition temperature in the magnetic Weyl semimetal (WSM) CeAlGe. This order exhibits a number of anomalies in electrical and thermal transport and in neutron scattering measurements. We attribute this order to the coupling of Weyl fermions and magnetic fluctuations originating from a three-dimensional Seiberg-Witten monopole, which agrees well qualitatively with the observations. Our work reveals a prominent role that topology may play in tailoring electron correlation beyond ground-state ordering, and offers a new avenue for investigating emergent electronic properties in magnetic topological materials.
|
condensed matter
|
Some changes to a recent convolution formula are performed here in order to clean it up, by using more conventional notation and by making use of better-referenced and better-documented components (namely Sierpi\'nski's polynomials, the Thue-Morse sequence, and the binomial modulo~2 transform and its inverse). Several variants are published here, obtained by afterwards reading the summed coefficients in another order; the last formula is then converted back from a summation into a new divide-and-conquer recursive formula.
|
mathematics
|
We report the identification from multi-wavelength observations of the Fermi Large Area Telescope (LAT) source 4FGL J1405.1-6119 (= 3FGL J1405.4-6119) as a high-mass gamma-ray binary. Observations with the LAT show that gamma-ray emission from the system is modulated at a period of 13.7135 +/- 0.0019 days, with the presence of two maxima per orbit with different spectral properties. X-ray observations using the Neil Gehrels Swift Observatory X-ray Telescope (XRT) show that X-ray emission is also modulated at this period, but with a single maximum that is closer to the secondary lower-energy gamma-ray maximum. A radio source, coincident with the X-ray source, is also found from Australia Telescope Compact Array (ATCA) observations, and the radio emission is modulated on the gamma-ray period with similar phasing to the X-ray emission. A large degree of interstellar obscuration severely hampers optical observations, but a near-infrared counterpart is found. Near-infrared spectroscopy indicates an O6 III spectral classification. This is the third gamma-ray binary to be discovered with the Fermi LAT from periodic modulation of the gamma-ray emission; the other two sources also have early O-star, rather than Be-star, counterparts. We consider at what distances we can detect such modulated gamma-ray emission with the LAT, and examine constraints on the gamma-ray binary population of the Milky Way.
|
astrophysics
|
Many studies of possible new physics employ effective field theory (EFT), whereby corrections to the Standard Model take the form of higher-dimensional operators, suppressed by a large energy scale. Fits of such a theory to data typically use parton level observables, which limits the datasets one can use. In order to theoretically model search channels involving many additional jets, it is important to include tree-level matrix elements matched to a parton shower algorithm, and a suitable matching procedure to remove the double counting of additional radiation. There are then two potential problems: (i) EFT corrections are absent in the shower, leading to an extra source of discontinuities in the matching procedure; (ii) the uncertainty in the matching procedure may be such that no additional constraints are obtained from observables sensitive to radiation. In this paper, we review why the first of these is not a problem in practice, and perform a detailed study of the second. In particular, we quantify the additional constraints on EFT expected from top pair plus multijet events, relative to inclusive top pair production alone.
|
high energy physics phenomenology
|
We study the modular symmetry on magnetized toroidal orbifolds with Scherk-Schwarz phases. In particular, we investigate finite modular flavor groups for three-generation modes on magnetized orbifolds. The three-generation modes can be the three-dimensional irreducible representations of covering groups and centrally extended groups of $\Gamma_N$ for $N=3,4,5,7,8,16$, that is, covering groups of $\Delta(6(N/2)^2)$ for even $N$ and central extensions of $PSL(2,\mathbb{Z}_{N})$ for odd $N$. We also study anomalous behaviors.
|
high energy physics theory
|
A new estimate of the one loop contributions of the standard model to the chromomagnetic dipole moment (CMDM) $\hat \mu_q(q^2)$ of quarks is presented with the aim to address a few disagreements arising in previous calculations. We consider the most general case with an off-shell gluon with transfer momentum $q^2$ and obtain analytical results in terms of Feynman parameter integrals and Passarino-Veltman scalar functions, which are then expressed in terms of closed form functions when possible. The calculation is done via a renormalizable linear $R_\xi$ gauge and the background field method, which allows one to corroborate that the resulting $\hat \mu_q(q^2)$ is gauge independent and thus a valid observable quantity. It is found that the QCD contribution from a three-gluon Feynman diagram has an infrared divergence, which agrees with a previous evaluation and stems from the fact that the static CMDM [$\hat\mu(0)$] has no sense in perturbative QCD. For the numerical analysis we consider the region 30 GeV$<\|q\|<$ 1000 GeV and analyze the behavior of $\hat \mu_q(q^2)$ for all the standard model quarks. It is found that the CMDM of light quarks is considerably smaller than that of the top quark as it is directly proportional to the quark mass. In the considered energy interval, both the real and imaginary parts of $\hat\mu_t(q^2)$ are of the order of $10^{-2}-10^{-3}$, with the largest contribution arising from the QCD induced diagrams, though around the threshold $q^2=4m_t^2$ there are also important contributions from diagrams with $Z$ gauge boson and Higgs boson exchange.
|
high energy physics phenomenology
|
Advances in graphics and machine learning have led to the general availability of easy-to-use tools for modifying and synthesizing media. The proliferation of these tools threatens to cast doubt on the veracity of all media. One approach to thwarting the flow of fake media is to detect modified or synthesized media through machine learning methods. While detection may help in the short term, we believe that it is destined to fail as the quality of fake media generation continues to improve. Soon, neither humans nor algorithms will be able to reliably distinguish fake versus real content. Thus, pipelines for assuring the source and integrity of media will be required---and increasingly relied upon. We propose AMP, a system that ensures the authentication of media via certifying provenance. AMP creates one or more publisher-signed manifests for a media instance uploaded by a content provider. These manifests are stored in a database allowing fast lookup from applications such as browsers. For reference, the manifests are also registered and signed by a permissioned ledger, implemented using the Confidential Consortium Framework (CCF). CCF employs both software and hardware techniques to ensure the integrity and transparency of all registered manifests. AMP, through its use of CCF, enables a consortium of media providers to govern the service while making all its operations auditable. The authenticity of the media can be communicated to the user via visual elements in the browser, indicating that an AMP manifest has been successfully located and verified.
|
computer science
|
The Next-to-Minimal Supersymmetric Standard Model (NMSSM) with a Type-I seesaw mechanism extends the NMSSM by three generations of right-handed neutrino fields to generate neutrino masses. As a byproduct, it renders the lightest sneutrino a viable DM candidate. Due to the gauge-singlet nature of the DM, its scattering with nucleons is suppressed in most cases, making the theory naturally consistent with the latest XENON-1T results. Consequently, broad parameter spaces in the Higgs sector, especially a light Higgsino mass, are resurrected as experimentally allowed, which makes the theory well suited to explain the long-standing $b \bar{b}$ excess at LEP-II and the continuously observed $\gamma \gamma$ excess by the CMS collaboration. We show by both analytic formulas and numerical results that the theory can naturally predict the central values of the excesses in broad regions of its parameter space, and that the explanations are consistent with the Higgs data of the discovered Higgs boson, $B$-physics and DM physics measurements, and the electroweak precision data, as well as the LHC searches for sparticles. Part of the explanations may be tested by future DM experiments and the SUSY searches at the LHC.
|
high energy physics phenomenology
|
Lithium-Niobate-On-Insulator (LNOI) has emerged as a promising platform in the field of integrated photonics. Nonlinear optical processes and fast electro-optic modulation have been reported with outstanding performance in ultra-low loss waveguides. In order to harness the advantages offered by the LNOI technology, suitable fiber-to-chip interconnects operating at different wavelength ranges are demanded. Here we present easily manufacturable, self-imaging apodized grating couplers, featuring a coupling efficiency of the TE0 mode as high as $\simeq 47.1\%$ at $\lambda$=1550 nm and $\simeq 44.9\%$ at $\lambda$=775 nm. Our approach avoids the use of any metal back-reflector for an improved directivity or multi-layer structures for an enhanced grating strength.
|
physics
|
Do black holes rotate, and if yes, how fast? This question is fundamental and has broad implications, but still remains open. There are significant observational challenges in current spin determinations, and future facilities offer prospects for precision measurements.
|
astrophysics
|
We propose a novel strategy to search for new physics in timing spectra, envisioning the situation in which a new particle comes from the decay of its heavier partner with a finite particle width. The timing distribution of events induced by the dark matter particle scattering at the detector may populate a relatively narrow range, forming a "resonance-like" shape. Due to this structural feature, the signal may be isolated from the backgrounds, in particular when the backgrounds are uniformly distributed in energy and time. As a proof of principle, we investigate the discovery potential for dark matter from the decay of a dark photon in the COHERENT experiment, and show the exciting prospects for exploring the associated parameter space with this experiment. We analyze the existing CsI detector data with a timing cut and an energy cut, and find, for the first time, an excess in the timing distribution which can be explained by such dark matter. We compare the sensitivity to the kinetic mixing parameter ($\epsilon$) for current and future COHERENT experiments with the projected limits from LDMX and DUNE.
|
high energy physics phenomenology
|
Comprehensive spectral analyses of the Galactic Wolf-Rayet stars of the nitrogen sequence (i.e.\ the WN subclass) have been performed in a previous paper. However, the distances of these objects were poorly known. Distances have a direct impact on the "absolute" parameters, such as luminosities and mass-loss rates. The recent Gaia Data Release (DR2) of trigonometric parallaxes includes nearly all WN stars of our Galactic sample. In the present paper, we apply the new distances to the previously analyzed Galactic WN stars and rescale the results accordingly. On this basis, we present a revised catalog of 55 Galactic WN stars with their stellar and wind parameters. The correlations between mass-loss rate and luminosity show a large scatter, for the hydrogen-free WN stars as well as for those with detectable hydrogen. The slopes of the $\log L - \log \dot{M}$ correlations are shallower than found previously. The empirical Hertzsprung-Russell diagram (HRD) still shows the previously established dichotomy between the hydrogen-free early WN subtypes that are located on the hot side of the zero-age main sequence (ZAMS), and the late WN subtypes, which show hydrogen and reside mostly at cooler temperatures than the ZAMS (with few exceptions). However, with the new distances, the distribution of stellar luminosities became more continuous than obtained previously. The hydrogen-showing stars of late WN subtype are still found to be typically more luminous than the hydrogen-free early subtypes, but there is a range of luminosities where both subclasses overlap. The empirical HRD of the Galactic single WN stars is compared with recent evolutionary tracks. Neither these single-star evolutionary models nor binary scenarios can provide a fully satisfactory explanation for the parameters of these objects and their location in the HRD.
|
astrophysics
|
We propose \emph{MaxUp}, an embarrassingly simple, highly effective technique for improving the generalization performance of machine learning models, especially deep neural networks. The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst-case, loss over the augmented data. By doing so, we implicitly introduce a smoothness or robustness regularization against the random perturbations, and hence improve the generalization performance. For example, in the case of Gaussian perturbation, \emph{MaxUp} is asymptotically equivalent to using the gradient norm of the loss as a penalty to encourage smoothness. We test \emph{MaxUp} on a range of tasks, including image classification, language modeling, and adversarial certification, on which \emph{MaxUp} consistently outperforms the existing best baseline methods without introducing substantial computational overhead. In particular, we improve ImageNet classification from the state-of-the-art top-1 accuracy of $85.5\%$ without extra data to $85.8\%$. Code will be released soon.
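The minimize-the-worst-case-over-augmentations idea can be sketched in a few lines of numpy. This is an illustrative toy only (a linear model with squared-error loss and Gaussian input perturbations), not the authors' implementation; `maxup_grad`, the learning rate, and the perturbation scale are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # toy squared-error loss of a linear model; x may hold one row per copy
    return (x @ w - y) ** 2

def maxup_grad(w, x, y, m=4, sigma=0.1):
    """Gradient of the worst-case loss over m Gaussian-perturbed copies of x."""
    xs = x + sigma * rng.standard_normal((m, x.size))  # m augmented inputs
    worst = xs[np.argmax(loss(w, xs, y))]              # copy with the largest loss
    return 2 * (worst @ w - y) * worst                 # gradient at that copy

# a few SGD-style steps on the MaxUp objective for one training point
w = np.zeros(3)
x, y = np.array([1.0, 2.0, -1.0]), 0.5
for _ in range(200):
    w -= 0.05 * maxup_grad(w, x, y)
```

Setting `m=1` would recover ordinary SGD on randomly augmented data; the max over `m > 1` copies is what supplies the implicit smoothness regularization described in the abstract.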
|
computer science
|
Distributed or multi-area optimal power flow is a promising way to cope with computational burdens in large-scale grids without the regional system operators losing control over their respective areas. However, algorithms are usually tested either on small test cases or on single countries. We present a realistic case study in the interconnected European transmission grid with over 5000 buses and 315 GW load. The grid is partitioned into 24 areas which correspond to the respective single countries. We use a full alternating current model and integrate multi-terminal direct current systems. The decomposed problem is solved via a modified Alternating Direction Method of Multipliers (ADMM), in which the single countries only exchange border node information with each neighbor. In terms of generation costs, the solution of the distributed optimal power flow problem deviates only 0.02% from a centrally computed one. Consensus between all regions is reached before 200 iterations, which is remarkable in such a large system.
|
electrical engineering and systems science
|
We further develop a recently proposed new approach to the description of the relativistic neutrino flavour $\nu_e^L \leftrightarrow \nu_{\mu}^L$, spin $\nu_e^L \leftrightarrow \nu_{e}^R$ and spin-flavour $\nu_e^L \leftrightarrow \nu_{\mu}^R$ oscillations in a constant magnetic field that is based on the use of the exact neutrino stationary states in the magnetic field. The neutrino flavour, spin and spin-flavour oscillations probabilities are calculated accounting for the whole set of possible conversions between four neutrino states. In general, the obtained expressions for the neutrino oscillations probabilities exhibit new inherent features in the oscillation patterns. It is shown, in particular, that: 1) in the presence of the transversal magnetic field for a given choice of parameters (the energy and magnetic moments of neutrinos and the strength of the magnetic field) the amplitude of the flavour oscillations $\nu_e^L \leftrightarrow \nu_{\mu}^L$ at the vacuum frequency is modulated by the magnetic field frequency, 2) the neutrino spin oscillation probability (without change of the neutrino flavour) exhibits the dependence on the mass square difference $\Delta m^2$.
|
high energy physics phenomenology
|
The resonance band in hollow-core photonic crystal fiber (HC-PCF), while leading to a high-loss region in the fiber transmission spectrum, has been successfully used for generating phase-matched dispersive waves (DWs). Here, we report that the spectral width of the resonance-induced DW can be largely broadened due to a plasma-driven blueshifting soliton. In the experiment, we observed that in a short length of Ar-filled single-ring HC-PCF the soliton self-compression and photoionization effects caused a strong spectral blueshift of the pump pulse, changing the phase-matching condition of the DW emission process. Therefore, broadening of the DW spectrum to the longer-wavelength side was obtained with several spectral peaks, which correspond to the generation of DWs at different positions along the fiber. In the simulation, we used super-Gauss windows with different central wavelengths to filter out these DW spectral peaks, and studied the time-domain characteristics of these peaks respectively using the Fourier transform method. The simulation results verified that these multiple peaks on the DW spectrum have different delays in the time domain, agreeing well with our theoretical prediction. Remarkably, we found that the whole time-domain DW trace can be compressed to ~29 fs using proper chirp compensation. The experimental and numerical results reported here provide some insight into the resonance-induced DW generation process in gas-filled HC-PCFs; they could also pave the way to ultrafast pulse generation using the DW-emission mechanism.
|
physics
|
The emission of real photons from a momentum-anisotropic quark-gluon plasma (QGP) is affected by both the collective flow of the radiating medium and the modification of local rest frame emission rate due to the anisotropic momentum distribution of partonic degrees of freedom. In this paper, we first calculate the photon production rate from an ellipsoidally momentum-anisotropic QGP including hard contributions from Compton scattering and quark pair annihilation and soft contribution calculated using the hard thermal loop (HTL) approximation. We introduce a parametrization of the nonequilibrium rate in order to facilitate its further application in yield and flow calculations. We convolve the anisotropic photon rate with the space-time evolution of QGP provided by 3+1d anisotropic hydrodynamics (aHydro) to obtain the yield and the elliptic flow coefficient $v_2$ of photons from QGP generated at Pb-Pb collisions at LHC at 2.76 TeV and Au-Au collisions at RHIC at 200 GeV. We investigate the effects of various parameters on the results. In particular we analyze the sensitivity of results to initial momentum anisotropy.
|
high energy physics phenomenology
|
We consider a class of two-degree-of-freedom Hamiltonian systems with saddle-centers connected by heteroclinic orbits and discuss some relationships between the existence of transverse heteroclinic orbits and nonintegrability. By the Lyapunov center theorem there is a family of periodic orbits near each of the saddle-centers, and the Hessian matrices of the Hamiltonian at the two saddle-centers are assumed to have the same number of positive eigenvalues. We show that if the associated Jacobian matrices have the same pair of purely imaginary eigenvalues, then the stable and unstable manifolds of the periodic orbits intersect transversely on the same Hamiltonian energy surface when sufficient conditions obtained in previous work for real-meromorphic nonintegrability of the Hamiltonian systems hold; if not, then these manifolds intersect transversely on the same energy surface, have quadratic tangencies or do not intersect whether the sufficient conditions hold or not. Our theory is illustrated for a system with quartic single-well potential and some numerical results are given to support the theoretical results.
|
mathematics
|
A hypergraph $\mathcal{H}$ on $n$ vertices and $m$ edges is said to be {\it nearly-intersecting} if every edge of $\mathcal{H}$ intersects all but at most polylogarithmically many (in $m$ and $n$) other edges. Given lists of colors $\mathcal{L}(v)$, for each vertex $v\in V$, $\mathcal{H}$ is said to be $\mathcal{L}$-(list) colorable if each vertex can be assigned a color from its list such that no edge in $\mathcal{H}$ is monochromatic. We show that list-colorability for any nearly-intersecting hypergraph, with lists drawn from a set of constant size, can be checked in quasi-polynomial time in $m$ and $n$.
|
computer science
|
Cosmic rays (CRs) leave their sources mainly along the local magnetic field present in the region around the source and in doing so they excite both resonant and non-resonant modes through streaming instabilities. The excitation of these modes leads to enhanced scattering and in turn to a large pressure gradient that causes the formation of expanding bubbles of gas and self-generated magnetic fields. By means of hybrid particle-in-cell simulations, we here demonstrate that, by exciting this instability, CRs excavate a cavity around their source where the diffusivity is strongly suppressed. This phenomenon is general and is expected to occur around any sufficiently powerful CR source in the Galaxy. Our results are consistent with recent $\gamma$-ray observations where emission from the region around supernova remnants, stellar clusters and pulsar wind nebulae have been used to infer that the diffusion coefficient around these sources is $\sim 10-100$ times smaller than the typical Galactic one.
|
astrophysics
|
Recent deep learning based single image super-resolution (SISR) methods mostly train their models in a clean data domain, where the low-resolution (LR) and the high-resolution (HR) images come from noise-free settings (same domain) due to the bicubic down-sampling assumption. However, such a degradation process is rarely encountered in real-world settings. We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions, inspired by the recent success of CycleGAN in image-to-image translation applications. We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN), trained with a generative adversarial network (GAN) framework for LR to HR domain translation in an end-to-end manner. We demonstrate through quantitative and qualitative experiments that our proposed approach generalizes well to real image super-resolution and is easy to deploy on mobile/embedded devices. In addition, our SR results on the AIM 2020 Real Image SR Challenge datasets demonstrate that the proposed SR approach achieves results comparable to other state-of-the-art methods.
|
electrical engineering and systems science
|
We introduce a new regime of cascaded quadratic nonlinearities which result in a continuous red shift of the optical pump, analogous to a Raman shifting process rather than self-phase modulation. This is particularly relevant to terahertz generation, where a continuous red shift of the pump can resolve current issues such as dispersion management and laser-induced damage. We show that in the absence of absorption or dispersion, the presented Raman shifting method will result in optical-to-terahertz energy conversion efficiencies that approach $100\%$ which is not possible with conventional cascaded difference-frequency generation. Furthermore, we present designs of aperiodically poled structures which result in energy conversion efficiencies of $\approx 35\%$ even in the presence of dispersion and absorption.
|
physics
|
We present the optical transmission spectrum of the highly inflated Saturn-mass exoplanet WASP-21b, using three transits obtained with the ACAM instrument on the William Herschel Telescope through the LRG-BEASTS survey (Low Resolution Ground-Based Exoplanet Atmosphere Survey using Transmission Spectroscopy). Our transmission spectrum covers a wavelength range of 4635-9000 Angstrom, achieving an average transit depth precision of 197ppm compared to one atmospheric scale height at 246ppm. We detect Na I absorption in a bin width of 30 Angstrom, at >4$\sigma$ confidence, which extends over 100 Angstrom. We see no evidence of absorption from K I. Atmospheric retrieval analysis of the scattering slope indicates it is too steep for Rayleigh scattering from H$_2$, but is very similar to that of HD 189733b. The features observed in our transmission spectrum cannot be caused by stellar activity alone, with photometric monitoring of WASP-21 showing it to be an inactive star. We therefore conclude that aerosols in the atmosphere of WASP-21b are giving rise to the steep slope that we observe, and that WASP-21b is an excellent target for infra-red observations to constrain its atmospheric metallicity.
|
astrophysics
|
This paper reports a breakdown in linear stability theory under conditions of neutral stability that is deduced by an examination of exponential modes of the form $h\approx {{e}^{i(kx-\omega t)}}$, where $h$ is a response to a disturbance, $k$ is a real wavenumber, and $\omega(k)$ is a wavelength-dependent complex frequency. In a previous paper, King et al. (Stability of algebraically unstable dispersive flows, \textit{Phys. Rev. Fluids}, 1(073604), 2016) demonstrate that when Im$[\omega(k)]$=0 for all $k$, it is possible for a system response to grow or damp algebraically as $h\approx {{t}^{s}}$, where $s$ is a fractional power. The growth is deduced through an asymptotic analysis of the Fourier integral that inherently invokes the superposition of an infinite number of modes. In this paper, the more typical case associated with the transition from stability to instability is examined, in which Im$[\omega(k)]$=0 for a single mode (i.e., for one value of $k$) at neutral stability. Two partial differential equation systems are examined, one that has been constructed to elucidate key features of the stability threshold, and a second that models the well-studied problem of rectilinear Newtonian flow down an inclined plane. In both cases, algebraic growth/decay is deduced at the neutral stability boundary, and the propagation features of the responses are examined.
|
physics
|
The high temperature and electron degeneracy attained during a supernova allow for the formation of a large muon abundance within the core of the resulting proto-neutron star. If new pseudoscalar degrees of freedom have large couplings to the muon, they can be produced by this muon abundance and contribute to the cooling of the star. By generating the largest collection of supernova simulations with muons to date, we show that observations of the cooling rate of SN 1987A place strong constraints on the coupling of axion-like particles to muons, limiting the coupling to $g_{a\mu} < 10^{-7.4}~\text{GeV}^{-1}$ (see Erratum).
|
high energy physics phenomenology
|
Emergent electromagnetism in magnets originates from the strong coupling between conduction electron spins and those of noncollinearly ordered moments and the consequent Berry phase. This offers possibilities to develop new functions of quantum transport and optical responses. The emergent inductance in spiral magnets is an example recently proposed and experimentally demonstrated, using the emergent electric field induced by alternating currents. However, a microscopic theory of this phenomenon is missing; such a theory should reveal the factors that determine the magnitude, sign, frequency dependence, and nonlinearity of the inductance $L$. Here we theoretically study electromagnetic responses of spiral magnets taking into account their collective modes. In sharp contrast to the collinear spin-density wave, the system remains metallic even in one dimension, and the canonical conjugate relation between the uniform magnetization and the phason coordinate plays an essential role in determining the properties of $L$. This result opens a way to design emergent inductance with desired properties.
|
condensed matter
|
Within this work, we explore intention inference for user actions in the context of a handheld robot setup. Handheld robots share the shape and properties of handheld tools while being able to process task information and aid manipulation. Here, we propose an intention prediction model to enhance cooperative task solving. The model derives intention from the user's gaze pattern which is captured using a robot-mounted remote eye tracker. The proposed model yields real-time capabilities and reliable accuracy up to 1.5s prior to predicted actions being executed. We assess the model in an assisted pick and place task and show how the robot's intention obedience or rebellion affects the cooperation with the robot.
|
computer science
|
Advances in quantum computing are a rapidly growing threat towards modern cryptography. Quantum key distribution (QKD) provides long-term security without assuming the computational power of an adversary. However, inconsistencies between theory and experiment have raised questions in terms of real-world security, while large and power-hungry commercial systems have slowed wide-scale adoption. Measurement-device-independent QKD (MDI-QKD) provides a method of sharing secret keys that removes all possible detector side-channel attacks which drastically improves security claims. In this letter, we experimentally demonstrate a key step required to perform MDI-QKD with scalable integrated devices. We show Hong-Ou-Mandel interference between weak coherent states carved from two independent indium phosphide transmitters at $431$ MHz with a visibility of $46.5 \pm 0.8\%$. This work demonstrates the feasibility of using integrated devices to lower a major barrier towards adoption of QKD in metropolitan networks.
|
quantum physics
|
We consider the problem of approximating smoothing spline estimators in a nonparametric regression model. When applied to a sample of size $n$, the smoothing spline estimator can be expressed as a linear combination of $n$ basis functions, requiring $O(n^3)$ computational time when the number of predictors $d\geq 2$. Such a sizable computational cost hinders the broad applicability of smoothing splines. In practice, the full sample smoothing spline estimator can be approximated by an estimator based on $q$ randomly selected basis functions, resulting in a computational cost of $O(nq^2)$. It is known that these two estimators converge at the identical rate when $q$ is of the order $O\{n^{2/(pr+1)}\}$, where $p\in [1,2]$ depends on the true function $\eta$, and $r > 1$ depends on the type of spline. Such a $q$ is called the essential number of basis functions. In this article, we develop a more efficient basis selection method. By selecting the basis functions corresponding to roughly equally spaced observations, the proposed method chooses a set of basis functions with large diversity. The asymptotic analysis shows our proposed smoothing spline estimator can decrease $q$ to roughly $O\{n^{1/(pr+1)}\}$ when $d\leq pr+1$. Applications on synthetic and real-world datasets show the proposed method leads to a smaller prediction error compared with other basis selection methods.
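The diversity-oriented selection step, taking basis functions at roughly equally spaced (in rank) observations rather than at random, can be sketched as below. This is only an illustration of the idea under assumed toy data; the function name `select_basis` and the uniform sample are not the authors' code.

```python
import numpy as np

def select_basis(x, q):
    """Pick q basis-function centers at roughly equally spaced observations.

    x : 1-d array of observed predictor values (the full sample of size n)
    q : number of basis functions to keep (q << n)
    Returns the selected observation values in increasing order.
    """
    order = np.argsort(x)                              # rank the observations
    idx = np.linspace(0, len(x) - 1, q).round().astype(int)  # even spread of ranks
    return x[order[idx]]

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 1000)   # toy sample of n = 1000 predictor values
centers = select_basis(x, q=20)   # 20 well-spread basis centers
```

Choosing centers by rank rather than by value makes the spread adapt to the empirical distribution of the predictors, which is what yields the "large diversity" of the selected basis.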
|
statistics
|
This paper presents new sufficient conditions for convergence and asymptotic or exponential stability of a stochastic discrete-time system, under which the constructed Lyapunov function always decreases in expectation along the system's solutions after a finite number of steps, but without necessarily strict decrease at every step, in contrast to the classical stochastic Lyapunov theory. As the first application of this new Lyapunov criterion, we look at the product of any random sequence of stochastic matrices, including those with zero diagonal entries, and obtain sufficient conditions to ensure the product almost surely converges to a matrix with identical rows; we also show that the rate of convergence can be exponential under additional conditions. As the second application, we study a distributed network algorithm for solving linear algebraic equations. We relax existing conditions on the network structures, while still guaranteeing the equations are solved asymptotically.
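The first application can be illustrated numerically: a long product of random row-stochastic matrices with strictly positive entries converges to a matrix with identical rows. The sketch below is a deliberately simple special case (i.i.d. factors, no zero diagonal entries), not the paper's construction, and the matrix generator is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_stochastic(n):
    """A random row-stochastic matrix with strictly positive entries."""
    m = rng.uniform(0.1, 1.0, (n, n))
    return m / m.sum(axis=1, keepdims=True)  # normalize each row to sum to 1

# product of a long random sequence of stochastic matrices
n = 5
prod = np.eye(n)
for _ in range(200):
    prod = random_stochastic(n) @ prod  # left-multiply by a fresh factor

# rows become (numerically) identical: each column collapses to one value
row_spread = float(np.ptp(prod, axis=0).max())
```

With positive entries the classical Dobrushin contraction argument already gives convergence; the point of the paper's Lyapunov criterion is to handle harder sequences, e.g. factors with zero diagonal entries, where decrease in expectation only holds after a finite number of steps.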
|
computer science
|
We describe the integrable structure of the space of local operators for the supersymmetric sine-Gordon model. Namely, we conjecture that this space is created by acting on the primary fields by fermions and a Kac-Moody current. We proceed with the computation of the one-point functions. In the UV limit they are shown to agree with the alternative results obtained by solving the reflection relations.
|
high energy physics theory
|
We study the quantum tunnel effect through a potential barrier employing a semiclassical formulation of quantum mechanics based on expectation values of configuration variables and quantum dispersions as dynamical variables. The evolution of the system is given in terms of a dynamical system for which we are able to determine effective trajectories for individual particles, in close resemblance to the Bohmian description of quantum mechanics. We obtain a type of semiclassical confinement for particles in a similar way as with the quantum potential, and also determine a semiclassical transmission coefficient for the tunneling process.
|
quantum physics
|
In this study, we investigate the operation of an optimal home energy management system (HEMS) with an integrated renewable energy system (RES) and energy storage system (ESS) that support electricity selling. A multi-objective mixed integer nonlinear programming model, including the RES, ESS, home appliances and the main grid, is proposed to optimize different and conflicting objectives, namely energy cost, user comfort and PAR. The effect of different selling prices on the objectives is also considered in detail. We further develop a formula for the lower bound of the energy cost to help residents or engineers quickly choose the best RES and ESS parameters for their homes during the installation process. The performance of our system is verified through extensive simulations under three different scenarios (normal, economic, and smart) with different selling prices using real data, and simulation results are compared in terms of daily energy cost, PAR, user convenience and consecutive waiting time to use appliances. Numerical results clearly show that the economic scenario achieves a 51.6% reduction of daily energy cost compared to the normal scenario while sacrificing user convenience, PAR, and consecutive waiting time by 49%, 132%, and 1 hour, respectively. On the other hand, the smart scenario shows only slight degradations of user convenience and PAR, by 2% and 18%, respectively, while achieving a 46.4% reduction of daily energy cost and the same level of consecutive waiting time. Furthermore, our simulation results show that a decrease of selling prices has only a tiny impact on PAR and user comfort even though the daily energy cost increases.
|
electrical engineering and systems science
|
The authors propose a methodology to perform seismic damage assessment of instrumented wood-frame buildings using response measurements. The proposed methodology employs a nonlinear model-based state observer that combines sparse acceleration measurements and a nonlinear structural model of a building to estimate the complete seismic response including displacements, velocity, acceleration and internal forces in all structural members. From the estimated seismic response and structural characteristics of each shear wall of the building, element-by-element seismic damage indices are computed and remaining useful life (pertaining to seismic effects) is predicted. The methodology is illustrated using measured data from the 2009 NEESWood Capstone full-scale shake table tests at the E-Defense facility in Japan.
|
computer science
|
The automatic detection of frames containing polyps from a colonoscopy video sequence is an important first step for a fully automated colonoscopy analysis tool. Typically, such a detection system is built using a large annotated data set of frames with and without polyps, which is expensive to obtain. In this paper, we introduce a new system that detects frames containing polyps as anomalies from a distribution of frames from exams that do not contain any polyps. The system is trained using a one-class training set consisting of colonoscopy frames without polyps -- such a training set is considerably less expensive to obtain than the two-class data set mentioned above. During inference, the system is only able to reconstruct frames without polyps, and when it tries to reconstruct a frame with a polyp, it automatically removes (i.e., photoshops) the polyp from the frame -- the difference between the input and reconstructed frames is used to detect frames with polyps. We name our proposed model the anomaly detection generative adversarial network (ADGAN), comprising a dual GAN with two generators and two discriminators. We show that our proposed approach achieves the state-of-the-art result on this data set, compared with recently proposed anomaly detection systems.
|
electrical engineering and systems science
|
A new $\beta$-metastable Ti-alloy is designed with the aim of obtaining a TWIP alloy positioned at the limit between the TRIP/TWIP and the TWIP-dominated regimes. The designed alloy exhibits a large ductility combined with an elevated and stable work-hardening rate. Deformation occurring by formation and multiplication of {332}<113> twins is evidenced and followed by in-situ electron microscopy, and no primary stress-induced martensite is observed. Since microstructural investigations of the deformation mechanisms show a highly heterogeneous deformation, the reason for the large ductility is then investigated. The spatial strain distribution is characterized by micro-scale digital image correlation, and the highly deformed regions are found to lie at the crossover between twins, or at the intersection between deformation twins and grain boundaries. Detailed electron back-scattered imaging in such regions of interest finally allowed us to evidence the formation of thin needles of stress-induced martensite. The latter is thus interpreted as an accommodation mechanism, relaxing the local high strain fields, which ensures a large and stable plastic deformation of this newly designed Ti-alloy.
|
condensed matter
|
This work continues the development of the raytracing method of [1] for computing the scattered fields from metasurfaces characterized by locally periodic reflection and transmission coefficients. In this work, instead of describing the metasurface in terms of scattering coefficients that depend on the incidence direction, its scattering behavior is characterized by the surface susceptibility tensors that appear in the generalized sheet transition conditions (GSTCs). As the latter quantities are constitutive parameters, they do not depend on the incident field and thus enable a more compact and physically motivated description of the surface. The locally periodic susceptibility profile is expanded into a Fourier series, and the GSTCs are rewritten in a form that enables them to be numerically solved for in terms of the reflected and transmitted surface fields. The scattered field at arbitrary detector locations is constructed by evaluating critical-point contributions of the first and second kinds using a Forward Ray Tracing (FRT) scheme. The accuracy of the resulting framework has been verified with an Integral Equation based Boundary Element Method (BEM)-GSTC full-wave solver for a variety of examples such as a periodically modulated metasurface, a metasurface diffuser and a beam collimator.
|
physics
|
We study the asymptotic behavior of the Castelnuovo-Mumford regularity along chains of graded ideals in increasingly larger polynomial rings that are invariant under the action of symmetric groups. A linear upper bound for the regularity of such ideals is established. We conjecture that their regularity grows eventually precisely linearly. We establish this conjecture in several cases, most notably when the ideals are Artinian or squarefree monomial.
|
mathematics
|
We study the spectrum-generating closed nonlinear superconformal algebra that describes $\mathcal{N}=2$ super-extensions of rationally deformed quantum harmonic oscillator and conformal mechanics models with coupling constant $g=m(m+1)$, $m\in {\mathbb N}$. It has the nature of a nonlinear finite $W$ superalgebra, being generated by higher-derivative integrals, and generally contains several different copies of either the deformed superconformal $\mathfrak{osp}(2|2)$ algebra, in the case of super-extended rationally deformed conformal mechanics models, or the deformed super-Schr\"odinger algebra, in the case of super-extensions of rationally deformed harmonic oscillator systems.
|
high energy physics theory
|
We propose an idea of the constrained Feynman amplitude for the scattering of the charged lepton and the virtual W-boson, $l_{\beta} + W_{\rho} \rightarrow l_{\alpha} + W_{\lambda}$, from which the conventional Pontecorvo oscillation formula of relativistic neutrinos is readily obtained using plane waves for all the particles involved. In a path integral picture, the neutrino propagates forward in time between the production and detection vertices, which are constrained respectively on the 3-dimensional spacelike hypersurfaces separated by a macroscopic positive time $\tau$. The covariant Feynman amplitude is formally recovered if one sums over all possible values of $\tau$ (including negative $\tau$).
|
high energy physics phenomenology
|
In this supplementary article to Kostinkskiy et al. (2020), we evaluate how it is possible to initiate and synchronize the start of a large number of streamer flashes, which can provide a powerful VHF signal, in the time range of ~1-3 us. As described in Kostinskiy et al. (2020), we will assume streamer flashes occur due to the voluminous network of 'air electrode' (Eth-volumes), the number of which is dynamically supported in highly turbulent regions of a thundercloud until an extensive air shower (EAS) passes through this region. The first numerical estimates are given herein. In the near future we plan a separate article based on these estimates, where we will present the main points in more detail.
|
physics
|
In the present paper, an essential generalization of symbolic dynamics is considered. We apply the notions of abstract self-similar sets and the similarity map to introduce chaos whose orbits are expanded among infinitely many modules. The dynamics is free of dimensional, metrical and topological assumptions. It unites all three types of Poincare, Li-Yorke and Devaney chaos in a single model, which can be unbounded. The research demonstrates that the dynamics of Poincare chaos is of exceptional use in analyzing discrete and continuous-time random processes. Examples illustrating the results are provided.
|
mathematics
|
We study biasing as a physical phenomenon by analysing power spectra (PS) and correlation functions (CF) of simulated galaxy samples and dark matter (DM) samples. We apply an algorithm based on the local densities of particles, $\rho$, to form populations of simulated galaxies, using particles with $\rho \ge \rho_0$. We calculate two-point CFs of projected (2D) and spatial (3D) density fields of simulated galaxies for various particle-density limits $\rho_0$. We compare 3D and 2D CFs; in the 2D case we use samples of various thickness to find the dependence of 2D CFs on the thickness of the samples. The dominant elements of the cosmic web are clusters and filaments, separated by voids filling most of the volume. In individual 2D sheets the positions of clusters and filaments do not coincide. As a result, in projection clusters and filaments fill in 2D voids. This leads to a decrease in the amplitudes of CFs in projection. For this reason the amplitudes of 2D CFs are lower than those of 3D CFs, and the difference is larger for thicker 2D samples. Using PS and CFs of simulated galaxies and DM we estimate the bias factor for $L^\ast$ galaxies, $b^\ast =1.85 \pm 0.15$.
|
astrophysics
|
Maintaining a robust communication network plays an important role in the success of a multi-robot team jointly performing an optimization task. A key characteristic of a robust multi-robot system is the ability to repair the communication topology itself in the case of robot failure. In this paper, we focus on the Fast Biconnectivity Restoration (FBR) problem, which aims to repair a connected network to make it biconnected as fast as possible, where a biconnected network is a communication topology that cannot be disconnected by removing one node. We develop a Quadratically Constrained Program (QCP) formulation of the FBR problem, which provides a way to optimally solve the problem. We also propose an approximation algorithm for the FBR problem based on graph theory. By conducting empirical studies, we demonstrate that our proposed approximation algorithm performs close to the optimal while significantly outperforming the existing solutions.
|
computer science
|
Very Long Baseline Atom Interferometry (VLBAI) corresponds to ground-based atomic matter-wave interferometry on large scales in space and time, letting the atomic wave functions interfere after free evolution times of several seconds or wave packet separation at the scale of meters. As inertial sensors, e.g., accelerometers, these devices take advantage of the quadratic scaling of the leading order phase shift with the free evolution time to enhance their sensitivity, giving rise to compelling experiments. With shot noise-limited instabilities better than $10^{-9}$ m/s$^2$ at 1 s at the horizon, VLBAI may compete with state-of-the-art superconducting gravimeters, while providing absolute instead of relative measurements. When operated with several atomic states, isotopes, or species simultaneously, tests of the universality of free fall at a level of parts in $10^{13}$ and beyond are in reach. Finally, the large spatial extent of the interferometer allows one to probe the limits of coherence at macroscopic scales as well as the interplay of quantum mechanics and gravity. We report on the status of the VLBAI facility, its key features, and future prospects in fundamental science.
|
physics
|
We report an analytical representation of the correlation energy ec(rs, zeta) for a uniform electron gas (UEG), where rs is the Seitz radius or density parameter and zeta is the relative spin polarization. The new functional, called W20, is constructed to capture the known high-density and low-density limit (for zeta = 0 and 1) without any fitting parameters. The comparative assessment against the recent quantum Monte Carlo (QMC) results shows that the performance of the W20 functional is comparable to the popular parametrized UEG correlation functionals. On average, W20 agrees with QMC and PW92 [Phys. Rev. B 45, 13244 (1992)] within 0.01 eV, and W20 recovers the correct high- and low-density limits, whereas the QMC-data fitted UEG correlation functionals do not.
|
physics
|
This talk reviews Feynman integrals which are associated to elliptic curves. The talk gives an introduction to the mathematics behind them, covering the topics of elliptic curves, elliptic integrals, modular forms and the moduli space of $n$ marked points on a genus-one curve. The latter will be important, as elliptic Feynman integrals can be expressed as iterated integrals on the moduli space ${\mathcal M}_{1,n}$, in the same way as Feynman integrals which evaluate to multiple polylogarithms can be expressed as iterated integrals on the moduli space ${\mathcal M}_{0,n}$. With the right language, many methods from the genus-zero case carry over to the genus-one case. In particular we will see in specific examples that the differential equation for elliptic Feynman integrals can be cast into an $\varepsilon$-form. This allows one to systematically obtain a solution order by order in the dimensional regularisation parameter.
|
high energy physics theory
|
Two-dimensional Heisenberg antiferromagnets play a central role in quantum magnetism, yet the nature of dynamic correlations in these systems at finite temperature has remained poorly understood for decades. We solve this long-standing problem by using a novel quantum-classical duality to calculate the dynamic structure factor analytically and, paradoxically, find a broad frequency spectrum despite the very long quasiparticle lifetime. The solution reveals new multi-scale physics whereby an external probe creates a classical radiation field containing infinitely-many quanta. Crucially, it is the multi-scale nature of this phenomenon which prevents a conventional renormalization group approach. We also challenge the common wisdom on static correlations and perform Monte Carlo simulations which demonstrate excellent agreement with our theory.
|
condensed matter
|
The unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix has been well established by both direct and indirect measurements without any evidence of discrepancy. The CKM weak phase $\alpha$ is directly measured using an isospin analysis in $B\rightarrow \pi\pi$ and $B\to \rho\rho$ assuming that electroweak penguin contributions are negligible. However, electroweak penguins are sensitive to new physics; hence, it is important to estimate their effects experimentally. We determine the size of both the electroweak penguin and isospin amplitudes directly from $B\rightarrow \pi\pi$ and $B\to \rho\rho$ experimental data, using in addition the indirectly measured value of $\alpha$. We find that the electroweak penguin contributions are indeed small and agree with SM expectations within $1\sigma$. We also find that there is a mild enhancement of the $\Delta I=\tfrac{1}{2}$ transition amplitude.
|
high energy physics phenomenology
|
We study the product of Selberg Zeta function and hyperbolic Eisenstein series on a family of degenerating hyperbolic surfaces.
|
mathematics
|
The integration of satellite and terrestrial communication systems plays a vital role in the fifth-generation mobile communication system (5G) for ubiquitous coverage, reliable service and flexible networking. Moreover, millimeter wave (mmWave) communication with large bandwidth is a key enabler for 5G intelligent rail transportation. In this paper, the satellite-terrestrial channel at 22.6 GHz is characterized for a typical high-speed railway (HSR) environment. The three-dimensional model of the railway scenario is reconstructed and imported into the Cloud Ray-Tracing (CloudRT) simulation platform. Based on extensive ray-tracing simulations, the channels for the terrestrial HSR system and the satellite-terrestrial system under two weather conditions are characterized, and the interference between them is evaluated. The results of this paper can aid the design and evaluation of satellite-terrestrial communication systems enabling future intelligent rail transportation.
|
electrical engineering and systems science
|
The equations governing the flow of a viscous incompressible fluid around a rigid body that performs a prescribed time-periodic motion with constant axes of translation and rotation are investigated. Under the assumption that the period and the angular velocity of the prescribed rigid-body motion are compatible, and that the mean translational velocity is non-zero, existence of a time-periodic solution is established. The proof is based on an appropriate linearization, which is examined within a setting of absolutely convergent Fourier series. Since the corresponding resolvent problem is ill-posed in classical Sobolev spaces, a linear theory is developed in a framework of homogeneous Sobolev spaces.
|
mathematics
|
Synthesizing high quality saliency maps from noisy images is a challenging problem in computer vision and has many practical applications. Samples generated by existing techniques for saliency detection cannot handle the noise perturbations smoothly and fail to delineate the salient objects present in the given scene. In this paper, we present a novel end-to-end coupled Denoising based Saliency Prediction with Generative Adversarial Network (DSAL-GAN) framework to address the problem of salient object detection in noisy images. DSAL-GAN consists of two generative adversarial-networks (GAN) trained end-to-end to perform denoising and saliency prediction altogether in a holistic manner. The first GAN consists of a generator which denoises the noisy input image, and in the discriminator counterpart we check whether the output is a denoised image or ground truth original image. The second GAN predicts the saliency maps from raw pixels of the input denoised image using a data-driven metric based on saliency prediction method with adversarial loss. Cycle consistency loss is also incorporated to further improve salient region prediction. We demonstrate with comprehensive evaluation that the proposed framework outperforms several baseline saliency models on various performance benchmarks.
|
computer science
|
We introduce a planar embedding of the k-regular k-XORSAT problem, in which solutions are encoded in the ground state of a classical statistical mechanics model of reversible logic gates arranged on a square grid and acting on bits that represent the Boolean variables of the problem. The special feature of this embedding is that the resulting model lacks a finite-temperature phase transition, thus bypassing the first-order thermodynamic transition known to occur in the random graph representation of XORSAT. In spite of this attractive feature, the thermal relaxation into the ground state displays remarkably slow glassy behavior. The question addressed in this paper is whether this planar embedding can afford an efficient path to solution of k-regular k-XORSAT via quantum adiabatic annealing. We first show that our model bypasses an avoided level crossing and consequent exponentially small gap in the limit of small transverse fields. We then present quantum Monte Carlo results for our embedding of the k-regular k-XORSAT that strongly support a picture in which second-order and first-order transitions develop at a finite transverse field for k = 2 and k = 3, respectively. This translates into power-law and exponential dependences in the scaling of energy gaps with system size, corresponding to times-to-solution which are, respectively, polynomial and exponential in the number of variables. We conclude that neither classical nor quantum annealing can efficiently solve our reformulation of XORSAT, even though the original problem can be solved in polynomial time by Gaussian elimination.
|
condensed matter
|
We develop the scale transformed power prior for settings where historical and current data involve different data types, such as binary and continuous data, respectively. This situation arises often in clinical trials, for example, when historical data involve binary responses and the current data involve time-to-event or some other type of continuous or discrete outcome. The power prior proposed by Ibrahim and Chen (2000) does not address the issue of different data types. Herein, we develop a new type of power prior, which we call the scale transformed power prior (straPP). The straPP is constructed by transforming the power prior for the historical data by rescaling the parameter using a function of the Fisher information matrices for the historical and current data models, thereby shifting the scale of the parameter vector from that of the historical to that of the current data. Examples are presented to motivate the need for a scale transformation and simulation studies are presented to illustrate the performance advantages of the straPP over the power prior and other informative and non-informative priors. A real dataset from a clinical trial undertaken to study a novel transitional care model for stroke survivors is used to illustrate the methodology.
|
statistics
|
In order to work with non-Nagata rings which are Nagata "up-to-completely-decomposed-universal-homeomorphism", specifically finite rank hensel valuation rings, we introduce the notions of pseudo-integral closure and pseudo-normalisation. We use this notion to give a much more direct and shorter proof that $H^n_{cdh}(X, F) = H^n_{ldh}(X, F)$ for homotopy sheaves $F$ of modules over the $\mathbb{Z}_{(l)}$-linear motivic Eilenberg-Maclane spectrum. This comparison is an alternative to the first half of the authors volume Ast\'erisque 391, whose main theorem is a cdh-descent result for Voevodsky motives. The motivating new insight is really accepting that Voevodsky's motivic cohomology (with $\mathbb{Z}[1/p]$-coefficients) is invariant not just for nilpotent thickenings, but for all universal homeomorphisms.
|
mathematics
|
In this paper we propose a wider class of symmetries including the Galilean shift symmetry as a subclass. We will show how to construct ghost-free nonlocal actions, consisting of infinite derivative operators, which are invariant under such symmetries, but whose functional form is not simply given by exponentials of entire functions. Motivated by this, we will consider the case of a scalar field and discuss the pole structure of the propagator which has infinitely many complex conjugate poles, but satisfies the tree-level unitarity. We will also consider the possibility to construct UV complete Galilean theories by showing how the ultraviolet behavior of loop integrals can be ameliorated. Moreover, we will consider kinetic operators respecting the same symmetries in the context of linearized gravity. In such a scenario, the graviton propagator turns out to be ghost-free and the spacetime metric generated by a point-like source is nonsingular. These new nonlocal models can be seen as an infinite derivative generalization of Lee-Wick theories and open a new branch of nonlocal theories.
|
high energy physics theory
|
In this note we give new proofs of rectifiability of RCD(K,N) spaces as metric measure spaces and lower semicontinuity of the essential dimension, via $\delta$-splitting maps. The arguments are inspired by the Cheeger-Colding theory for Ricci limits and rely on the second order differential calculus developed by Gigli and on the convergence and stability results by Ambrosio-Honda.
|
mathematics
|
We investigate the role of four-fermion interactions, rotational and particle-hole asymmetries, and their interplay in three-dimensional systems with a quadratic band touching point by virtue of the renormalization group approach, which allows us to treat all these facets unbiasedly. The coupled flow equations of the interaction parameters are derived by taking into account one-loop corrections in order to explore the behavior of the low-energy states. We find that four-fermion interactions can drive Gaussian fixed points to be unstable in the low-energy regime. In addition, the rotational and particle-hole asymmetries, together with the fermion-fermion interactions, conspire to split the trajectories of distinct types of fermionic couplings and induce a superconductivity instability under appropriate starting conditions. Furthermore, we present schematic phase diagrams in the parameter space, showing the overall behavior of the low-energy states caused by both fermionic interactions and asymmetries.
|
condensed matter
|
Lie algebra valued equations expressing the integrability of a general two-dimensional Wess-Zumino-Witten model are given. We find simple solutions to these equations and identify three types of new integrable non-linear sigma models. One of them is a modified Yang-Baxter sigma model supplemented with a Wess-Zumino-Witten term.
|
high energy physics theory
|
In this letter, we present a novel Gaussian Process Learning-based Probabilistic Optimal Power Flow (GP-POPF) for solving POPF under renewable and load uncertainties of arbitrary distribution. The proposed method relies on a non-parametric Bayesian inference-based uncertainty propagation approach, called Gaussian Process (GP). We also suggest a new type of sensitivity, called subspace-wise sensitivity, based on observations on the interpretability of GP-POPF hyperparameters. The simulation results on 14-bus and 30-bus systems show that the proposed method provides reasonably accurate solutions when compared with Monte-Carlo Simulation (MCS) solutions at different levels of uncertain renewable penetration as well as load uncertainties, while requiring far fewer samples and much less elapsed time.
|
electrical engineering and systems science
|
Natural Human-Robot Interaction (HRI) is one of the key components for service robots to be able to work in human-centric environments. In such dynamic environments, the robot needs to understand the intention of the user to accomplish a task successfully. Towards addressing this point, we propose a software architecture that segments a target object from a crowded scene, indicated verbally by a human user. At the core of our system, we employ a multi-modal deep neural network for visual grounding. Unlike most grounding methods that tackle the challenge using pre-trained object detectors via a two-stepped process, we develop a single stage zero-shot model that is able to provide predictions in unseen data. We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets. Experimental results showed that the proposed model performs well in terms of accuracy and speed, while showcasing robustness to variation in the natural language input.
|
computer science
|
We present here a completely operatorial approach, using Hilbert-Schmidt operators, to compute spectral distances between time-like separated "events", associated with the pure states of the algebra describing the Lorentzian Moyal plane, using the axiomatic framework given by [13, 14]. The result shows no deformations of non-commutative origin, as in the Euclidean case.
|
high energy physics theory
|
A time-reversal invariant topological insulator occupying a Euclidean half-space determines a 'Quaternionic' self-adjoint Fredholm family. We show that the discrete spectrum data for such a family is geometrically encoded in a non-trivial 'Real' gerbe. The gerbe invariant, rather than a na\"ive counting of Dirac points, precisely captures how edge states completely fill up the bulk spectral gap in a topologically protected manner.
|
high energy physics theory
|
Magnetic flux emergence has been shown to be a key mechanism for unleashing a wide variety of solar phenomena. However, there are still open questions concerning the rise of the magnetized plasma through the atmosphere, mainly in the chromosphere, where the plasma departs from local thermodynamic equilibrium (LTE) and is partially ionized. We aim to investigate the impact of the nonequilibrium (NEQ) ionization and recombination and molecule formation of hydrogen, as well as ambipolar diffusion, on the dynamics and thermodynamics of the flux emergence process. Using the Bifrost code, we performed 2.5D numerical experiments of magnetic flux emergence from the convection zone up to the corona. The experiments include the NEQ ionization and recombination of atomic hydrogen, the NEQ formation and dissociation of H2 molecules, and the ambipolar diffusion term of the Generalized Ohm's Law. Our experiments show that the LTE assumption substantially underestimates the ionization fraction in most of the emerged region, leading to an artificial increase in the ambipolar diffusion and, therefore, in the heating and temperatures as compared to those found when taking the NEQ effects on the hydrogen ion population into account. We see that LTE also overestimates the number density of H2 molecules within the emerged region, thus mistakenly magnifying the exothermic contribution of the H2 molecule formation to the thermal energy during the flux emergence process. We find that the ambipolar diffusion does not significantly affect the amount of total unsigned emerged magnetic flux, but it is important in the shocks that cross the emerged region, heating the plasma on characteristic times ranging from 0.1 to 100 s. We also briefly discuss the importance of including elements heavier than hydrogen in the equation of state so as not to overestimate the role of ambipolar diffusion in the atmosphere.
|
astrophysics
|
We introduce three representation formulas for the fractional $p$-Laplace operator in the whole range of parameters $0<s<1$ and $1<p<\infty$. Note that for $p\ne 2$ this is a nonlinear operator. The first representation is based on a splitting procedure that combines a renormalized nonlinearity with the linear heat semigroup. The second adapts the nonlinearity to the Caffarelli-Silvestre linear extension technique. The third one is the corresponding nonlinear version of the Balakrishnan formula. We also discuss the correct choice of the constant of the fractional $p$-Laplace operator in order to have continuous dependence as $p\to 2$ and $s \to 0^+, 1^-$. A number of consequences and proposals are derived. Thus, we propose a natural spectral-type operator in domains, different from the standard restriction of the fractional $p$-Laplace operator acting on the whole space. We also propose numerical schemes, a new definition of the fractional $p$-Laplacian on manifolds, as well as alternative characterizations of the $W^{s,p}(\mathbb{R}^n)$ seminorms.
|
mathematics
|
In this work we investigate quantum-enhanced target detection in the presence of large background noise using multidimensional quantum correlations between photon pairs generated through spontaneous parametric down-conversion. Until now, similar experiments have only utilized one of the photon pairs' many degrees of freedom, such as temporal correlations or photon number correlations. Here, we utilized both temporal and spectral correlations of the photon pairs and achieved over an order of magnitude reduction in the background noise and, in turn, a significant reduction in data acquisition time when compared to utilizing only temporal modes. We believe this work represents an important step in realizing a practical, real-time quantum-enhanced target detection system. The demonstrated technique will also be of importance in many other quantum sensing applications and quantum communications.
|
quantum physics
|
We derive the AdS${}_5\times$S${}^5$ Green-Schwarz superstring from four-dimensional Beltrami-Chern-Simons theory reduced on a manifold with singular boundary conditions. In this construction, the Lax connection and spectral parameter of the integrable superstring have a simple geometric origin in four dimensions as gauge connection and reduction coordinate. Kappa symmetry arises as a certain class of singular gauge transformations, while the worldsheet metric comes from complex-structure-changing Beltrami differentials. Our approach offers the possibility of investigating integrable holography using traditional field theory methods.
|
high energy physics theory
|
The e_LiBANS project aims at creating accelerator based compact neutron facilities for diverse interdisciplinary applications. After the successful setting up and characterization of a thermal neutron source based on a medical electron LINAC, a similar assembly for epithermal neutrons has been developed. The project is based on an Elekta 18 MV LINAC coupled with a photoconverter-moderator system which deploys the ({\gamma},n) photonuclear reaction to convert a bremsstrahlung photon beam into a neutron field. This communication describes the development of novel diagnostics to qualify the thermal and epithermal neutron fields that have been produced. In particular, a proof of concept for the use of silicon carbide photodiodes as a thermal neutron rate detector is presented.
|
physics
|
Context. Monte Carlo methods can be used to evaluate the uncertainty of a reaction rate that arises from many uncertain nuclear inputs. However, until now no attempt has been made to determine the effect of correlated energy uncertainties in input resonance parameters. Aims. To investigate the impact of correlated energy uncertainties on reaction rates. Methods. Using a combination of numerical and Monte Carlo variation of resonance energies, the effect of correlations is investigated. Five reactions are considered: two fictional, illustrative cases and three reactions whose rates are of current interest. Results. The effect of correlations in resonance energies depends on the specific reaction cross section and the temperatures considered. When several resonances contribute equally to a reaction rate and are located on either side of the Gamow peak, correlations between their energies dilute their effect on the reaction rate uncertainties. If they are all located above or all below the maximum of the Gamow peak, however, correlations between their resonance energies can increase the reaction rate uncertainties. This effect can be hard to predict for complex reactions with both wide and narrow resonances contributing to the reaction rate.
|
astrophysics
|
We calculate the masses of $\chi_{\rm c}(3P)$ states with threshold corrections in a coupled-channel model. The model was recently applied to the description of the properties of the $\chi_{\rm c}(2P)$ and $\chi_{\rm b}(3P)$ multiplets [Phys.\ Lett.\ B {\bf 789}, 550 (2019)]. We also compute the open-charm strong decay widths of the $\chi_{\rm c}(3P)$ states and their radiative transitions. According to our predictions, the $\chi_{\rm c}(3P)$ states should be dominated by the charmonium core, but they may also show small meson-meson components. The $X(4274)$ is interpreted as a $c \bar c$ $\chi_{\rm c1}(3P)$ state. More information on the other members of the $\chi_{\rm c}(3P)$ multiplet, as well as a more rigorous analysis of the $X(4274)$'s decay modes, is needed to provide further indications on the quark structure of this resonance.
|
high energy physics phenomenology
|
In robotics, methods and software usually require hyperparameter optimization in order to be efficient for specific tasks, for instance industrial bin-picking from homogeneous heaps of different objects. We present a developmental framework based on long-term memory and reasoning modules (Bayesian optimization, visual similarity, and parameter-bound reduction) that allows a robot to use a meta-learning mechanism to increase the efficiency of such continuous and constrained parameter optimizations. The new optimization, viewed as learning by the robot, can take advantage of past experiences (stored in the episodic and procedural memories) to shrink the search space by using reduced parameter bounds computed from the robot's best optimizations on tasks similar to the new one (e.g. bin-picking from a homogeneous heap of a similar object, based on the visual similarity of objects stored in the semantic memory). As an example, we confronted the system with the constrained optimization of 9 continuous hyperparameters of a professional software package (Kamido) in industrial robotic-arm bin-picking tasks, a step that is needed each time a new object must be handled correctly. We used a simulator to create bin-picking tasks for 8 different objects (7 in simulation and one with a real setup, without and with meta-learning using experiences from other similar objects), achieving good results despite a very small optimization budget, with better performance reached when meta-learning is used (84.3% vs 78.9% success overall, with a small budget of 30 iterations per optimization) for every object tested (p-value = 0.036).
|
computer science
|
A promising approach toward efficient energy management is non-intrusive load monitoring (NILM), that is to extract the consumption profiles of appliances within a residence by analyzing the aggregated consumption signal. Among efficient NILM methods are event-based algorithms in which events of the aggregated signal are detected and classified in accordance with the appliances causing them. The large number of appliances and the presence of appliances with close consumption values are known to limit the performance of event-based NILM methods. To tackle these challenges, one could enhance the feature space which in turn results in extra hardware costs, installation complexity, and concerns regarding the consumer's comfort and privacy. This has led to the emergence of an alternative approach, namely semi-intrusive load monitoring (SILM), where appliances are partitioned into blocks and the consumption of each block is monitored via separate power meters. While a greater number of meters can result in more accurate disaggregation, it increases the monetary cost of load monitoring, indicating a trade-off that represents an important gap in this field. In this paper, we take a comprehensive approach to close this gap by establishing a so-called notion of "disaggregation difficulty metric (DDM)," which quantifies how difficult it is to monitor the events of any given group of appliances based on both their power values and the consumer's usage behavior. Thus, DDM in essence quantifies how much is expected to be gained in terms of disaggregation accuracy of a generic event-based algorithm by installing meters on the blocks of any partition of the appliances. Experimental results based on the REDD dataset illustrate the practicality of the proposed approach in addressing the aforementioned trade-off.
|
electrical engineering and systems science
|
This work deals with thick branes in a bulk with a single extra dimension, modeled by a two-field configuration. We first consider the inclusion of the cuscuton to also control the dynamics of one of the fields and investigate how it contributes to changing the internal structure of the configuration in three distinct situations: the standard, the modified, and the asymmetric Bloch brane. The results show that the branes acquire a rich internal structure, with the geometry presenting a novel behavior which is also governed by the parameter that controls the strength of the cuscuton term. We also study the case where the dynamics of one of the two fields is described by the cuscuton alone. All the models support analytical solutions which are stable against fluctuations in the metric, and the main results unveil significant modifications in the warp factor and energy density of the branes.
|
high energy physics theory
|
A primary ideal in a polynomial ring can be described by the variety it defines and a finite set of Noetherian operators, which are differential operators with polynomial coefficients. We implement both symbolic and numerical algorithms to produce such a description in various scenarios as well as routines for studying affine schemes through the prism of Noetherian operators and Macaulay dual spaces.
|
mathematics
|