text | label
---|---
Inspired by recent connections between spectral theory and topological string theory, we propose exact quantization conditions for the relativistic Toda lattice of N particles. These conditions involve the Nekrasov-Shatashvili free energy, which resums the perturbative WKB expansion, but they require in addition a non-perturbative contribution, which is related to the perturbative result by an S-duality transformation of the Planck constant. We test the quantization conditions against explicit calculations of the spectrum for N=3. Our proposal can be generalized to arbitrary toric Calabi-Yau manifolds and might solve the corresponding quantum integrable system of Goncharov and Kenyon.
|
high energy physics theory
|
Perturbative quantum field theory usually uses second quantisation and Feynman diagrams. The worldline formalism provides an alternative approach based on first quantised particle path integrals, similar in spirit to string perturbation theory. Here we review the history, main features and present applications of the formalism. Our emphasis is on recent developments such as the path integral representation of open fermion lines, the description of colour using auxiliary worldline fields, incorporation of higher spin, and extension of the formalism to non-commutative space.
|
high energy physics theory
|
This paper focuses on the optimal control of multi-agent systems consisting of one leader and a number of followers in the presence of noise. The dynamics of every agent are assumed to be linear, and the performance index is a quadratic function of the states and actions of the leader and followers. The leader and followers are coupled in both dynamics and cost. The state of the leader and the average of the states of all followers (called the mean-field) are common information and known to all agents; however, the local states of the followers are private information, unknown to the other agents. It is shown that the optimal distributed control strategy is linear time-varying, and its computational complexity is independent of the number of followers. This strategy can be computed in a distributed manner: the leader solves one Riccati equation to determine its optimal strategy, while each follower solves two Riccati equations to obtain its own. This result is subsequently extended to infinite-horizon discounted and undiscounted cost functions, where the optimal distributed strategy is shown to be stationary. A numerical example with $100$ followers demonstrates the efficacy of the results.
|
mathematics
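The Riccati-based strategy described in this entry can be illustrated with a generic finite-horizon linear-quadratic problem. The following sketch (a toy example with assumed matrices, not the paper's leader-follower equations) shows the kind of backward Riccati recursion each agent would solve:

```python
import numpy as np

def riccati_recursion(A, B, Q, R, T):
    """Backward Riccati recursion for a finite-horizon LQ problem.

    Returns the time-varying gains K_t (so that u_t = -K_t x_t is optimal
    for cost sum x'Qx + u'Ru) and the final cost-to-go matrix P.
    """
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P  # gains ordered t = 0..T-1

# Toy double-integrator agent (assumed matrices, for illustration only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
gains, P = riccati_recursion(A, B, Q, R, T=50)

# The time-varying gains drive the state toward the origin.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
```

The recursion is solved backward from the terminal time; the resulting time-varying gains mirror the linear time-varying structure of the optimal strategy described above.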
|
It has been suggested that a certain class of UV-incomplete quantum field theories can avoid unitarity violation above the cut-off energy scale by forming classical configurations at a length scale much larger than the cut-off length. This phenomenon has been named classicalization and is characterized by a length scale called the classicalization radius $r_*$, which increases with energy. It has been argued that scalar field theories with derivative self-interactions are likely candidates for UV-completion by classicalization and are much more likely to form classicalons than non-classicalizing theories such as $\phi^4$ scalar field theory. To examine this claim, in this paper the 2-to-$N$ particle scattering amplitude, the scattering cross-section, and the amplitude of classical structure formation are calculated and compared for a classicalizing and a non-classicalizing theory. As the phenomenon of classicalization relies on creating a large number of low-energy particles from high-energy two-particle scattering, the ratios of the scattering amplitudes and of the amplitude of classical structure formation between these two cases indicate the feasibility of the classicalization process. Our calculation shows that, as the energy increases, these ratios between the classicalizing and non-classicalizing theory actually decrease, which is quite contrary to the behaviour expected if classicalization is to self-unitarize a certain class of theories beyond the cut-off energy.
|
high energy physics theory
|
In this paper, we consider a set of new symmetries in the SM, {\it diagonal reflection} symmetries $R \, m_{u,\nu}^{*} \, R = m_{u,\nu}, ~ m_{d,e}^{*} = m_{d,e}$ with $R =$ diag $(-1,1,1)$. These generalized $CP$ symmetries predict the Majorana phases to be $\alpha_{2,3} /2 \sim 0$ or $\pi /2$. A realization of the reflection symmetries suggests a broken chiral $U(1)_{\rm PQ}$ symmetry and a flavored axion. The axion scale is suggested to be $\langle \theta_{u,d} \rangle \sim \Lambda_{\rm GUT} \, \sqrt{m_{u,d} \, m_{c,s}} / v \sim 10^{12} \, $[GeV]. By combining the symmetries with the four-zero texture, the mass eigenvalues and mixing matrices of quarks and leptons are reproduced well. This scheme predicts the normal hierarchy, the Dirac phase $\delta_{CP} \simeq 203^{\circ},$ and $|m_{1}| \simeq 2.5$ or $6.2 \, $[meV]. In this scheme, the type-I seesaw mechanism and a given neutrino Yukawa matrix $Y_{\nu}$ completely determine the structure of the right-handed neutrino mass matrix $M_{R}$. A $u$-$\nu$ unification predicts the mass eigenvalues to be $ (M_{R1} \, , M_{R2} \, , M_{R3}) = (O (10^{5}) \, , O (10^{9}) \, , O (10^{14})) \, $[GeV].
|
high energy physics phenomenology
|
Generating 3D speech-driven talking heads has received increasing attention in recent years. Recent approaches mainly have the following limitations: 1) most speaker-independent methods need handcrafted features that are time-consuming to design or unreliable; 2) there is no convincing method to support multilingual or mixed-lingual speech as input. In this work, we propose a novel approach using phonetic posteriorgrams (PPGs). In this way, our method does not need handcrafted features and is more robust to noise than recent approaches. Furthermore, our method can support multilingual speech as input by building a universal phoneme space. To the best of our knowledge, our model is the first to support multilingual/mixed-lingual speech as input with convincing results. Objective and subjective experiments show that our model can generate high-quality animations given speech from unseen languages or speakers, and that it is robust to noise.
|
electrical engineering and systems science
|
The cost of annotating transcriptions for large speech corpora has become a bottleneck to fully exploiting the capacity of deep neural network-based automatic speech recognition models. In this paper, we present a new training pipeline that boosts the conventional active learning approach targeting label-efficient learning. Existing active learning methods focus only on selecting a set of informative samples under a labeling budget. Going one step further, we suggest that training efficiency can be improved by also utilizing the unlabeled samples that exceed the labeling budget, by introducing a carefully configured unsupervised loss that effectively complements the supervised loss. We propose a new unsupervised loss based on consistency regularization, and we configure appropriate augmentation techniques for utterances to adopt consistency regularization in the automatic speech recognition task. Qualitative and quantitative experiments on a real-world dataset and under real-usage scenarios show that the proposed training pipeline can boost the efficacy of active learning approaches, successfully reducing a substantial amount of human labeling cost.
|
electrical engineering and systems science
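The consistency-regularization idea above can be sketched in a few lines: penalize the model for disagreeing with itself on two augmented views of the same unlabeled utterance. A toy numpy version follows; the linear-softmax "model", the noise augmentation, and the squared-error loss shape are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    # Toy stand-in for an ASR model: softmax over a linear projection.
    logits = x @ w
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def augment(x, noise_scale=0.1):
    # Stand-in for utterance augmentation (e.g. additive noise or
    # SpecAugment-style perturbations).
    return x + noise_scale * rng.standard_normal(x.shape)

def consistency_loss(x, w):
    # Disagreement between predictions on two augmented views of the
    # same unlabeled batch; added to the supervised loss during training.
    p1 = model(augment(x), w)
    p2 = model(augment(x), w)
    return np.mean((p1 - p2) ** 2)

x_unlabeled = rng.standard_normal((8, 16))   # 8 utterances, 16-dim features
w = rng.standard_normal((16, 10))            # 10 output classes
loss = consistency_loss(x_unlabeled, w)
```

In a real pipeline this term would be computed on the unlabeled pool that exceeds the labeling budget and weighted against the supervised loss.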
|
In this paper we begin an exploration of several domination-related parameters (among which are the total, restrained, total restrained, paired, outer connected and total outer connected domination numbers) in the generalized lexicographic product (GLP for short) of graphs. We prove that for each GLP of graphs there exist several equality chains containing these parameters. Some known results on the standard lexicographic product of two graphs are generalized and/or extended. We also obtain results on well $\mu$-dominated GLPs of graphs, where $\mu$ stands for any of the above-mentioned domination parameters. In particular, we present a characterization of well $\mu$-dominated GLPs of graphs in the cases when $\mu$ is the domination number or the total domination number.
|
mathematics
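For small graphs, the domination parameters discussed above can be computed by brute force, which is handy for checking equality chains on examples. A minimal sketch (not the paper's method) for the domination and total domination numbers, tried on the 5-cycle:

```python
from itertools import combinations

def domination_number(adj):
    """Smallest |S| such that every vertex is in S or adjacent to S."""
    n = len(adj)
    closed = [set(adj[v]) | {v} for v in range(n)]
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if set().union(*(closed[v] for v in S)) == set(range(n)):
                return k
    return n

def total_domination_number(adj):
    """Smallest |S| such that every vertex (including those in S) has a
    neighbor in S; uses open neighborhoods."""
    n = len(adj)
    nbrs = [set(adj[v]) for v in range(n)]
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if set().union(*(nbrs[v] for v in S)) == set(range(n)):
                return k
    return None

# 5-cycle C5: gamma(C5) = 2 and gamma_t(C5) = 3.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
```

This exhaustive search is exponential in the vertex count and only meant for sanity-checking small instances.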
|
We study the $(1+1)$-dimensional generalized Dirac oscillator with a position-dependent mass. In particular, bound states with zero energy as well as nonzero energy are obtained for suitable choices of the mass function/oscillator interaction. It is also shown that in the presence of an electric field, bound states exist if the magnitude of the electric field does not exceed a critical value.
|
quantum physics
|
In recent years, generative adversarial networks (GANs) have demonstrated impressive experimental results while there are only a few works that foster statistical learning theory for GANs. In this work, we propose an infinite dimensional theoretical framework for generative adversarial learning. Assuming the class of uniformly bounded $k$-times $\alpha$-H\"older differentiable and uniformly positive densities, we show that the Rosenblatt transformation induces an optimal generator, which is realizable in the hypothesis space of $\alpha$-H\"older differentiable generators. With a consistent definition of the hypothesis space of discriminators, we further show that in our framework the Jensen-Shannon divergence between the distribution induced by the generator from the adversarial learning procedure and the data generating distribution converges to zero. Under sufficiently strict regularity assumptions on the density of the data generating process, we also provide rates of convergence based on concentration and chaining.
|
computer science
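The Rosenblatt transformation at the heart of this construction maps a random vector to independent uniforms via conditional CDFs, and its inverse acts as a generator. A concrete sketch for a bivariate Gaussian with correlation rho, using only the standard library; this simple special case is an illustrative assumption, not the paper's general Hölder-density setting:

```python
from statistics import NormalDist

std = NormalDist()

def rosenblatt(x1, x2, rho):
    """Rosenblatt transform of a correlated bivariate normal:
    u1 = F(x1), u2 = F(x2 | x1); (u1, u2) are independent Uniform(0,1)."""
    u1 = std.cdf(x1)
    u2 = std.cdf((x2 - rho * x1) / (1 - rho ** 2) ** 0.5)
    return u1, u2

def generator(u1, u2, rho):
    """Inverse Rosenblatt transform: maps uniform noise to the target
    distribution, i.e. an explicit 'optimal generator' for this density."""
    x1 = std.inv_cdf(u1)
    x2 = rho * x1 + (1 - rho ** 2) ** 0.5 * std.inv_cdf(u2)
    return x1, x2

# Round trip: the generator and the Rosenblatt transform are inverses.
rho = 0.6
u1, u2 = 0.3, 0.8
x1, x2 = generator(u1, u2, rho)
v1, v2 = rosenblatt(x1, x2, rho)
```

The round trip recovering (u1, u2) is exactly the property that makes the inverse transform a realizable generator.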
|
Most conditional generation tasks expect diverse outputs given a single conditional context. However, conditional generative adversarial networks (cGANs) often focus on the prior conditional information and ignore the input noise vectors, which contribute to the output variations. Recent attempts to resolve the mode collapse issue for cGANs are usually task-specific and computationally expensive. In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs. The proposed method explicitly maximizes the ratio of the distance between generated images to the distance between their corresponding latent codes, thus encouraging the generators to explore more minor modes during training. This mode-seeking regularization term is readily applicable to various conditional generation tasks without imposing training overhead or modifying the original network structures. We validate the proposed algorithm on three conditional image synthesis tasks, including categorical generation, image-to-image translation, and text-to-image synthesis, with different baseline models. Both qualitative and quantitative results demonstrate the effectiveness of the proposed regularization method for improving diversity without loss of quality.
|
computer science
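The mode-seeking term can be written down directly: it is the ratio of the distance between two generated samples to the distance between their latent codes, and the generator is trained to maximize it. A toy numpy illustration; the linear-tanh "generator" is an assumption for demonstration, not a real cGAN:

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z, w):
    # Toy stand-in for a conditional generator G(c, z).
    return np.tanh(z @ w)

def mode_seeking_term(z1, z2, w, eps=1e-8):
    """Mode-seeking regularization: distance between generated samples
    divided by distance between latent codes; the generator maximizes it."""
    d_img = np.linalg.norm(generator(z1, w) - generator(z2, w))
    d_z = np.linalg.norm(z1 - z2)
    return d_img / (d_z + eps)

w = rng.standard_normal((8, 32))
z1, z2 = rng.standard_normal((2, 8))
ratio = mode_seeking_term(z1, z2, w)

# A collapsed generator (constant output) drives the ratio to zero,
# which is exactly what the regularizer penalizes.
collapsed = mode_seeking_term(z1, z2, np.zeros((8, 32)))
```

In practice the negative of this ratio is added to the generator loss with a weighting coefficient.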
|
The key requirement for quantum networking is the distribution of entanglement between nodes. Surprisingly, entanglement can be generated across a network without direct transfer - or communication - of entanglement. In contrast to information gain, which cannot exceed the communicated information, the entanglement gain is bounded by the communicated quantum discord, a more general measure of quantum correlation that includes but is not limited to entanglement. Here, we experimentally entangle two communicating parties sharing three initially separable photonic qubits by exchange of a carrier photon that is unentangled with either party at all times. We show that distributing entanglement with separable carriers is resilient to noise and in some cases becomes the only way of distributing entanglement through noisy environments.
|
quantum physics
|
We investigate the critical behavior of the two-dimensional spin-$1$ Baxter-Wu model in a crystal field using entropic sampling simulations with the joint density of states. We obtain the temperature-crystal field phase diagram, which includes a tetracritical line ending at a pentacritical point. A finite-size scaling analysis of the maximum of the specific heat, while changing the crystal field anisotropy, is used to obtain a precise location of the pentacritical point. Our results give the critical temperature and crystal field as $T_{pc}=0.98030(10)$ and $D_{pc}=1.68288(62)$. We also find that in the first-order region of the phase diagram, the specific heat exhibits a double-peak structure, as in a Schottky-like anomaly, which is associated with an order-disorder transition.
|
condensed matter
|
We study a multiple-input single-output (MISO) communication system assisted by a reconfigurable intelligent surface (RIS). A base station (BS) with multiple antennas communicates with a single-antenna user equipment (UE) with the help of a RIS. We assume that the system operates in an environment with line-of-sight (LoS) between the BS and the RIS, whereas the RIS-UE link experiences Rayleigh fading. We present closed-form expressions for the optimal active and passive beamforming vectors at the BS and RIS, respectively. Then, by characterizing the statistical properties of the received SNR at the UE, we derive analytical approximations for several system performance measures, including the outage probability, the average achievable rate, and the average symbol error probability (SEP). Our results demonstrate that the gain due to the RIS can be substantial, and can be significantly greater than the gains reaped by using multiple BS antennas.
|
electrical engineering and systems science
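The closed-form beamforming solution has a simple structure in the rank-one LoS case: the BS applies maximum ratio transmission along the LoS direction, and each RIS element rotates its phase so the cascaded paths add coherently. A numpy sketch under these assumptions (illustrative, not the paper's full derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 16  # BS antennas, RIS elements

# Rank-one LoS channel BS -> RIS: H = a b^H; Rayleigh RIS -> UE link h.
a = np.exp(1j * 2 * np.pi * rng.random(N))
b = np.exp(1j * 2 * np.pi * rng.random(M))
H = np.outer(a, b.conj())
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Active beamforming: MRT along the LoS direction b.
w = b / np.linalg.norm(b)

# Passive beamforming: each RIS phase cancels its cascaded path's phase.
cascade = h.conj() * (H @ w)            # per-element cascaded coefficients
phi_opt = np.exp(-1j * np.angle(cascade))
snr_opt = np.abs(np.sum(phi_opt * cascade)) ** 2

# Baseline: random RIS phases.
phi_rand = np.exp(1j * 2 * np.pi * rng.random(N))
snr_rand = np.abs(np.sum(phi_rand * cascade)) ** 2
```

By the triangle inequality the phase-aligned configuration attains the maximum |sum| = sum of magnitudes, so `snr_opt` upper-bounds any other phase choice.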
|
This paper proposes a modeling-by-generation (MbG) excitation vocoder for a neural text-to-speech (TTS) system. Recently proposed neural excitation vocoders can achieve high-quality waveform generation by combining a vocal tract filter with a WaveNet-based glottal excitation generator. However, when these vocoders are used in a TTS system, the quality of synthesized speech is often degraded owing to a mismatch between the training and synthesis steps. Specifically, the vocoder is trained separately from the acoustic model front-end, so estimation errors of the acoustic model are inevitably boosted throughout the synthesis process of the vocoder back-end. To address this problem, we propose to incorporate an MbG structure into the vocoder's training process. In the proposed method, the excitation signal is extracted using the spectral parameters generated by the acoustic model, and the neural vocoder is then optimized not only to learn the target excitation's distribution but also to compensate for the estimation errors of the acoustic model. Furthermore, as the generated spectral parameters are shared between the training and synthesis steps, their mismatch can be reduced effectively. The experimental results verify that the proposed system provides high-quality synthetic speech, achieving a mean opinion score of 4.57 within the TTS framework.
|
electrical engineering and systems science
|
It has recently been shown that vacuum expectation values and Feynman path integrals can be regularized using the Fourier integral operator $\zeta$-function, yet the physical meaning of these $\zeta$-regularized objects was unknown. Here we show that $\zeta$-regularized vacuum expectations appear as continuum limits using a certain discretization scheme. Furthermore, we study the rate of convergence of the discretization scheme using the example of a one-dimensional hydrogen atom in $(-\pi,\pi)$, which we evaluate classically, on the Rigetti Quantum Virtual Machine, and on the Rigetti 8Q "Agave" quantum chip. We also provide the free radiation field as an example of the computation of $\zeta$-regularized vacuum expectation values in a gauge theory.
|
quantum physics
|
We present a dual-channel optical transmitter (MTx+)/transceiver (MTRx+) for the front-end readout electronics of high-energy physics experiments. MTx+ utilizes two Transmitter Optical Sub-Assemblies (TOSAs), and MTRx+ utilizes a TOSA and a Receiver Optical Sub-Assembly (ROSA). Both MTx+ and MTRx+ accept multimode fibers with standard Lucent Connectors (LCs) as the optical interface and can be panel- or board-mounted to a motherboard with a standard Enhanced Small Form-factor Pluggable (SFP+) connector as the electrical interface. MTx+ and MTRx+ employ a dual-channel Vertical-Cavity Surface-Emitting Laser (VCSEL) driver ASIC called LOCld65, which brings the transmission data rate up to 14 Gbps per channel. MTx+ and MTRx+ have been tested to survive 4.9 kGy(SiO2).
|
physics
|
Efficient simulation of plasmas in various contexts often involves the use of meshes that conform to the intrinsic geometry of the system under consideration. We present here a description of a new magnetohydrodynamic code, Gamera (Grid Agnostic MHD for Extended Research Applications), designed to combine geometric flexibility with high-order spatial reconstruction and constrained transport to maintain the divergence-free magnetic field. Gamera carries on the legacy of its predecessor, the LFM (Lyon-Fedder-Mobarry), a research code whose use in space physics has spanned three decades. At the time of its initial development the LFM code had a number of novel features: eighth-order centered spatial differencing, the Partial Donor Cell Method limiter for shock capturing, a non-orthogonal staggered mesh with constrained transport, and conservative averaging-reconstruction for axis singularities. A capability to handle multiple ion species was also added later. Gamera preserves the core numerical philosophy of LFM while also incorporating numerous algorithmic and computational improvements. The upgrades in the numerical schemes include accurate grid metric calculations using high-order Gaussian quadrature techniques, high-order upwind reconstruction, non-clipping options for interface values, and improved treatment of axis singularities. The improvements in the code implementation include the use of data structures and memory access patterns conducive to aligned vector operations and the implementation of hybrid parallelism, using MPI and OMP. Gamera is designed to be a portable and easy-to-use code that implements multi-dimensional MHD simulations in arbitrary non-orthogonal curvilinear geometries on modern supercomputer architectures.
|
physics
|
Precision determinations of Standard Model (SM) Electro-Weak (EW) parameters at the Large Hadron Collider (LHC) are dominated by uncertainties due to Parton Distribution Functions (PDFs). Reweighting and profiling techniques are routinely employed to treat this. We explore approaches based on combining measurements of charged current and neutral current Drell-Yan (DY) asymmetries to improve PDF uncertainties. We present the results of a numerical analysis performed with the open-source platform xFitter. PDF uncertainties are examined for lepton-charge and forward-backward asymmetries in regions of transverse and invariant masses near the vector-boson peak, based on LHC Run III and HL-LHC luminosity scenarios. We discuss the complementarity of the asymmetries in reducing PDF uncertainties in observables relevant to both SM and Beyond the SM (BSM) physics.
|
high energy physics phenomenology
|
In the framework of the QCD light-cone sum rules (LCSRs) we present the analysis of all $B, B_{s}\to \eta^{(\prime)}$ and $D, D_{s}\to \eta^{(\prime)}$ form factors ($f^+, f^0$ and $f^T$) by including $m_{\eta^{(\prime)}}^2$ corrections in the leading (up to the twist-four) and next-to-leading order (up to the twist-three) in QCD, and two-gluon contributions to the form factors at the leading twist. The SU(3)-flavour breaking corrections and the axial anomaly contributions to the distribution amplitudes are also consistently taken into account. The complete results for the $f^0$ and $f^T$ form factors of $B,B_s \to \eta^{(\prime)}$ and $D, D_{s} \to \eta^{(\prime)}$ relevant for processes like $B \to \eta^{(\prime)} \tau \nu_{\tau}$ or $B_{s} \to \eta^{(\prime)} l^+ l^-$ are given for the first time, as well as the two-gluon contribution to the tensor form factors. The values obtained for the $f^+$ form factors are as follows: $f^+_{B\eta}(0)= 0.168^{+0.042}_{-0.047}$, $|f^+_{B_s\eta}(0)|= 0.212^{+0.015}_{-0.013}$, $f^+_{B\eta^\prime}(0)= 0.130^{+0.036}_{-0.032}$, $f^+_{B_s\eta^\prime}(0)= 0.252^{+0.023}_{-0.020}$ and $f^+_{D\eta}(0)= 0.429^{+0.165}_{-0.141}$, $|f^+_{D_s\eta}(0)|= 0.495^{+0.030}_{-0.029}$, $f^+_{D\eta^\prime}(0)= 0.292^{+0.113}_{-0.104}$, $f^+_{D_s\eta^\prime}(0)= 0.558^{+0.047}_{-0.045}$. Also phenomenological predictions for semileptonic $B, B_{s}\to \eta^{(\prime)}$ and $D, D_{s}\to \eta^{(\prime)}$ decay modes are given.
|
high energy physics phenomenology
|
The quantum singular value transformation is a powerful quantum algorithm that allows one to apply a polynomial transformation to the singular values of a matrix that is embedded as a block of a unitary transformation. This paper shows how to perform the quantum singular value transformation for a matrix that can be embedded as a block of a Hamiltonian. The transformation can be implemented in a purely Hamiltonian context by the alternating application of Hamiltonians for chosen intervals: it is an example of the Quantum Alternating Operator Ansatz (generalized QAOA). We also show how to use the Hamiltonian quantum singular value transformation to perform inverse block encoding to implement a unitary of which a given Hamiltonian is a block. Inverse block encoding leads to novel procedures for matrix multiplication and for solving differential equations on quantum information processors in a purely Hamiltonian fashion.
|
quantum physics
|
Coronal Mass Ejections (CMEs) are key drivers of space weather activity, but most predictions have been limited to the expected arrival time of a CME rather than the internal properties that affect the severity of an impact. Many properties, such as the magnetic field density and mass density, follow conservation laws and vary systematically with changes in the size of a CME. We present ANTEATR-PARADE, the newest version of the ANTEATR arrival time model, which now includes physics-driven changes in the size and shape of both the CME's central axis and its cross section. Internal magnetic and thermal forces and external drag affect the acceleration of the CME in different directions, inducing asymmetries between the radial and perpendicular directions. These improvements should lead to more realistic CME bulk and expansion velocities, sizes and shapes, and internal properties. We present the model details, an initial illustration of the general behavior, and a study of the relative importance of the different forces. The model shows a pancaking of both the cross section and the central axis of the CME, so that their radial extent becomes smaller than their extent in the perpendicular direction. We find that the initial velocities, drag, any form of cross-section expansion, and the precise form of thermal expansion have strong effects. The results are less sensitive to axial forces and to the specific form of the cross-section expansion.
|
astrophysics
|
The dynamical assembly of binary black holes (BBHs) in dense star clusters (SCs) is one of the most promising pathways for producing observable gravitational wave (GW) sources; however, several other formation scenarios likely operate as well. One of the current outstanding questions is how these different pathways may be distinguished from one another. In this paper we suggest a new multi-messenger observable that can be used to constrain the formation of BBH mergers originating from SCs: the electromagnetic signal from tidal disruptions (TDs) of stars by BBHs. Such TDs will show variability in their light curves from the orbital motion of the disruptive BBHs, and can therefore be used to map the BBH orbital period distribution, and thereby also the dynamical mechanisms that eventually drive the BBHs to merger. Using an analytical approach including general relativistic effects, we find that the orbital period distribution of BBHs within globular clusters peaks on timescales of days, which we argue is unique to this assembly pathway. We propose that the search for variable TDs in current and future EM transient surveys might be used to constrain the merger history of BBHs in SCs.
|
astrophysics
|
In order to clarify the properties of the secondary clump star HD 226808 (KIC 5307747), we combined four years of Kepler space photometry with high-resolution spectroscopy from the High Efficiency and Resolution Mercator Échelle Spectrograph (HERMES) mounted on the Mercator telescope. The fundamental atmospheric parameters, radial velocities, rotation velocities, and elemental abundances for Fe and Li were determined by analyzing line strengths and fitting line profiles, based on a 1D local thermodynamic equilibrium model atmosphere. Next, we analyzed the photometric light curve obtained by Kepler and extracted asteroseismic data for this target using Lets Analysis, Use and Report of Asteroseismology, a new seismic tool developed for the study of evolved FGK solar-like stars. We determined the evolutionary status, effective temperature, surface gravity, metallicity, microturbulence, and chemical abundances for Li, Ti, Fe, and Ni for HD 226808 by employing spectroscopy, asteroseismic scaling relations, and evolutionary structure models built to match the observed data. Our results also show that an accurate synergy between good spectroscopic analysis and asteroseismology can provide a jump toward understanding evolved stars.
|
astrophysics
|
We propose a new framework named DS-WGAN that integrates the doubly stochastic (DS) structure and the Wasserstein generative adversarial networks (WGAN) to model, estimate, and simulate a wide class of arrival processes with general non-stationary and random arrival rates. Regarding statistical properties, we prove consistency and a convergence rate for the estimator solved by the DS-WGAN framework under a non-parametric smoothness condition. Regarding computational efficiency and tractability, we address a challenge in gradient evaluation and model estimation arising from the discontinuity in the simulator. We then show that the DS-WGAN framework can conveniently facilitate what-if simulation and predictive simulation for future scenarios that are different from the history. Numerical experiments with synthetic and real data sets are implemented to demonstrate the performance of DS-WGAN. The performance is measured from both a statistical perspective and an operational performance evaluation perspective. Numerical experiments suggest that successful model estimation for DS-WGAN requires only a moderate amount of representative data, which can be appealing in many contexts of operational management.
|
statistics
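A doubly stochastic arrival process of the kind DS-WGAN models can be simulated in two stages: first draw a random rate path, then generate arrivals from the resulting non-stationary Poisson process (here by Ogata thinning). A minimal sketch with an assumed sinusoidal rate shape and a gamma-distributed random daily volume (both illustrative choices, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cox(T=10.0, base=20.0):
    """Simulate one horizon of a doubly stochastic Poisson process:
    draw a random intensity path, then thin a homogeneous process."""
    # Stage 1: random, non-stationary intensity.
    level = rng.gamma(shape=4.0, scale=base / 4.0)    # random overall volume
    lam = lambda t: level * (1.0 + 0.5 * np.sin(2 * np.pi * t / T))
    lam_max = 1.5 * level                             # dominating rate

    # Stage 2: Ogata thinning against the dominating homogeneous process.
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            break
        if rng.random() < lam(t) / lam_max:
            arrivals.append(t)
    return np.array(arrivals)

arr = simulate_cox()
```

Repeating the two-stage draw produces day-to-day variation in both the arrival counts and the within-day pattern, which is the structure DS-WGAN is designed to learn from data.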
|
The Carnegie-Chicago Hubble Program (CCHP) is building a direct path to the Hubble constant (H0) using Population II stars as the calibrator of the SN Ia-based distance scale. This path to calibrating the SN Ia is independent of the systematics in the traditional Cepheid-based technique. In this paper, we present the distance to M101, the host of SN 2011fe, using the I-band tip of the red giant branch (TRGB) based on observations from the ACS/WFC instrument on the Hubble Space Telescope. The CCHP targets the halo of M101, where there is little to no host-galaxy dust, the red giant branch is isolated from nearly all other stellar populations, and there is virtually no source confusion or crowding at the magnitude of the tip. Applying the standard procedure for the TRGB method from the other works in the CCHP series, we find a foreground-extinction-corrected M101 distance modulus of {\mu_0}=29.07+/-0.04(stat)+/-0.05(sys) mag, which corresponds to a distance of D=6.52+/-0.12(stat)+/-0.15(sys) Mpc. This result is consistent with several recent Cepheid-based determinations, suggesting agreement between the Population I and II distance scales for this nearby SN Ia host galaxy. We further analyze four archival datasets for M101 that targeted its outer disk to argue that targeting the stellar halo provides much more reliable TRGB distance measurements, since the disk suffers from the combination of multiple structural components and heavy population contamination. Application of the TRGB in such complex regions introduces sources of uncertainty not accounted for in commonly used uncertainty measurement techniques.
|
astrophysics
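The quoted modulus-to-distance conversion follows from the definition of the distance modulus, mu = 5 log10(D / 10 pc), and the statistical error propagates linearly for small sigma_mu. A quick check of the numbers in the entry above:

```python
import math

def modulus_to_distance_mpc(mu):
    """Convert a distance modulus to a distance in Mpc:
    mu = 5 log10(D / 10 pc)  =>  D = 10**((mu - 25) / 5) Mpc."""
    return 10 ** ((mu - 25.0) / 5.0)

def modulus_err_to_distance_err(mu, sigma_mu):
    """First-order error propagation: dD = D * ln(10)/5 * d(mu)."""
    d = modulus_to_distance_mpc(mu)
    return d * math.log(10) / 5.0 * sigma_mu

d = modulus_to_distance_mpc(29.07)          # ~6.52 Mpc
err_stat = modulus_err_to_distance_err(29.07, 0.04)  # ~0.12 Mpc
```

This reproduces both the distance of about 6.52 Mpc and the roughly 0.12 Mpc statistical error quoted above.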
|
We consider the scattering matrices of massive quantum field theories with no bound states and a global $O(N)$ symmetry in two spacetime dimensions. In particular, we explore the space of two-to-two S-matrices of particles of mass $m$ transforming in the vector representation, as restricted by the general conditions of unitarity, crossing, analyticity, and $O(N)$ symmetry. We find a rich structure in that space by using convex maximization and, in particular, its convex dual minimization problem. At the boundary of the allowed space, special geometric points such as vertices are found to correspond to integrable models. The dual convex minimization problem provides a novel and useful approach, allowing us, for example, to prove that generically the S-matrices so obtained saturate unitarity and, in some cases, that they lie at vertices of the allowed space.
|
high energy physics theory
|
Recent work in Neural Machine Translation (NMT) has shown significant quality gains from noised-beam decoding during back-translation, a method to generate synthetic parallel data. We show that the main role of such synthetic noise is not to diversify the source side, as previously suggested, but simply to indicate to the model that the given source is synthetic. We propose a simpler alternative to noising techniques, consisting of tagging back-translated source sentences with an extra token. Our results on WMT outperform noised back-translation in English-Romanian and match performance on English-German, re-defining state-of-the-art in the former.
|
computer science
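Tagged back-translation is simple to implement: synthetic source sentences get a reserved prefix token, while genuine bitext is left untouched. A sketch in which the tag string and the example corpora are illustrative:

```python
BT_TAG = "<BT>"

def tag_back_translated(source_sentences):
    """Prepend a reserved token to synthetic (back-translated) source
    sentences so the model can tell them apart from genuine bitext."""
    return [f"{BT_TAG} {s}" for s in source_sentences]

def mix_corpora(genuine, synthetic):
    # Genuine parallel sources are left untouched; only synthetic
    # (back-translated) sources receive the tag.
    return genuine + tag_back_translated(synthetic)

corpus = mix_corpora(["the cat sat"], ["der hund lief", "es regnet"])
```

The tag must also be added to the model's vocabulary; at inference time no tag is used, so the model treats test inputs as genuine sources.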
|
We show that the four-point functions in conformal field theory are defined as distributions on the boundary of the region of convergence of the conformal block expansion. The conformal block expansion converges in the sense of distributions on this boundary, i.e. it can be integrated term by term against appropriate test functions. This can be interpreted as giving a new class of functionals that satisfy the swapping property when applied to the crossing equation, and we comment on the relation of our construction to other types of functionals. Our language is useful in all considerations involving the boundary of the region of convergence, e.g. for deriving the dispersion relations. We establish our results by elementary methods, relying only on crossing symmetry and the standard convergence properties of the conformal block expansion. This is the first in a series of papers on distributional properties of correlation functions in conformal field theory.
|
high energy physics theory
|
The relative contributions of explicit and dynamical chiral symmetry breaking in QCD models of the quark gap equation are studied for frequently employed ans\"atze for the dressed interaction and quark-gluon vertex. The explicit symmetry breaking contributions are defined by a constituent-quark sigma term, whereas the combined effects of explicit and dynamical symmetry breaking are described by a Euclidean constituent-mass solution. We extend this study of the gap equation to a quark-gluon vertex beyond the Abelian approximation, complemented with numerical gluon- and ghost-dressing functions from lattice QCD. We find that the ratio of the sigma term to the Euclidean mass is largely independent of the nonperturbative interaction and vertex models for current-quark masses $m_{u,d}(\mu) \leq m(\mu) \leq m_b(\mu)$, and that equal contributions of explicit and dynamical chiral symmetry breaking occur at $m(\mu) \approx 400$~MeV. For massive solutions of the gap equation with lattice propagators this value decreases to about 200~MeV.
|
high energy physics phenomenology
|
Stellar winds are an integral part of the underlying dynamo, the motor of stellar activity. The wind controls the star's angular momentum loss, which depends on the magnetic field geometry which varies significantly in time and latitude. Here we study basic properties of a self-consistent model that includes simple representations of both the global stellar dynamo in a spherical shell and the exterior in which the wind accelerates and becomes supersonic. We numerically solve an axisymmetric mean-field model for the induction, momentum, and continuity equations using an isothermal equation of state. The model allows for the simultaneous generation of a mean magnetic field and the development of a Parker wind. The resulting flow is transonic at the critical point, which we arrange to be between the inner and outer radii of the model. The boundary conditions are assumed to be such that the magnetic field is antisymmetric about the equator, i.e., dipolar. At the solar rotation rate, the dynamo is oscillatory and of $\alpha^2$ type. In most of the domain, the magnetic field corresponds to that of a split monopole. The magnetic energy flux is largest between the stellar surface and the critical point. The angular momentum flux is highly variable in time and can reach negative values, especially at midlatitudes. At rapid rotation of up to 50 times the solar value, most of the magnetic field is lost along the axis within the inner tangential cylinder of the model. The model reveals unexpected features that are not generally anticipated from models that are designed to reproduce the solar wind: highly variable angular momentum fluxes even from just an $\alpha^2$ dynamo in the star. A major caveat of our isothermal models with a magnetic field produced by a dynamo is the difficulty to reach small enough plasma betas without the dynamo itself becoming unrealistically strong inside the star.
|
astrophysics
|
Radio signal classification has a very wide range of applications in cognitive radio networks and electromagnetic spectrum monitoring. In this article, we consider scenarios where multiple nodes in the network participate in cooperative classification. We propose cooperative radio signal classification methods based on deep learning for decision fusion, signal fusion and feature fusion, respectively. We analyze the performance of these methods through simulation experiments. We conclude the article with a discussion of research challenges and open problems.
|
electrical engineering and systems science
|
With the onset of gated quantum machine learning, the architecture for such a system is an open question. Many architectures are either created ad hoc or adapted directly from known classical architectures. Presented here is a novel algorithm which learns a gated quantum machine learning architecture while simultaneously learning its parameters. This proof of concept and some of its variations are explored and discussed.
|
quantum physics
|
In this paper we consider the unstable chaotic attractor of the Toda potential and stabilize it using a control in integral form. In order to obtain stability results, we propose a special technique based on the idea of reducing the integro-differential equations to a system of ordinary differential equations.
|
physics
|
The original Finite Selection Model (FSM) was developed in the 1970s to enhance the design of the RAND Health Insurance Experiment (HIE; Newhouse et al. 1993). At the time of its development by Carl Morris (Morris 1979), there were fundamental computational limitations that prevented the method from becoming widely available to practitioners. Today, as randomized experiments become increasingly common, there is a need to implement experimental designs that are randomized, balanced, robust, and easily applicable to several treatment groups. To help address this problem, we revisit the original FSM under the potential outcome framework for causal inference and provide its first readily available software implementation. In this paper, we provide an introduction to the FSM and a step-by-step guide for its use in R.
|
statistics
|
We report the discovery of an eclipsing binary millisecond pulsar in the globular cluster M92 (NGC 6341) with the Five-hundred-meter Aperture Spherical radio Telescope (FAST). PSR J1717+4308A, or M92A, has a pulse frequency of 316.5~Hz (3.16~ms) and a dispersion measure of 35.45 pc cm$^{-3}$. The pulsar is a member of a binary system with an orbital period of 0.20~days around a low-mass companion with a median mass of $\sim$0.18~\Ms. In the observations so far, at least two eclipsing events have been observed in each orbit. The longer one lasted for ~5000~s in the orbital phase range 0.1--0.5. The other lasted for ~500~s and occurred 1000--2000~s before or after the longer eclipsing event. The durations of these two eclipsing events also vary. These properties suggest that J1717+4308A is a ``redback'' system with a low-mass main-sequence or subgiant companion. Timing observations of the pulsar and further searches of the data for additional pulsars are ongoing.
|
astrophysics
|
While relatively easy to engineer, static transverse coupling between a qubit and a cavity mode satisfies the criteria for a quantum non-demolition (QND) measurement only if the coupling between the qubit and cavity is much less than their mutual detuning. This can put significant limits on the speed of the measurement, requiring trade-offs in the circuit design between coupling, detuning, and decoherence introduced by the cavity mode. Here, we study a circuit in which the qubit-cavity and the cavity-feedline coupling can be turned on and off, which helps to isolate the qubit. We do not rely on the rotating-wave or dispersive approximations, but solve the full transverse interaction between the qubit and the cavity mode. We show that by carefully choosing the detuning and interaction time, we can exploit a recurrence in the qubit-cavity dynamics in a way that makes it possible to perform very fast, high fidelity, QND measurements. Here, the qubit measurement is performed more like a gate operation between the qubit and the cavity, where the cavity state can be amplified, squeezed, and released in a time-sequenced fashion. In addition, we also show that the non-demolition property of the off-resonant approximation breaks down much faster than its dispersive property, suggesting that many of the dispersive measurements to date have been implemented outside the QND regime.
|
quantum physics
|
We provide a computationally and statistically efficient method for estimating the parameters of a stochastic Gaussian model observed on a regular spatial grid in any number of dimensions. Our proposed method, which we call the debiased spatial Whittle likelihood, makes important corrections to the well-known Whittle likelihood to account for large sources of bias caused by boundary effects and aliasing. We generalise the approach to flexibly allow for significant volumes of missing data, for the usage of irregular sampling schemes including those with lower-dimensional substructure, and for irregular sampling boundaries. We build a theoretical framework under relatively weak assumptions which ensures consistency and asymptotic normality in numerous practical settings. We provide detailed implementation guidelines which ensure the estimation procedure can still be conducted in $\mathcal{O}(n\log n)$ operations, where $n$ is the number of points of the encapsulating rectangular grid, thus keeping the computational scalability of Fourier and Whittle-based methods for large data sets. We validate our procedure over a range of simulated and real world settings, and compare with state-of-the-art alternatives, demonstrating the enduring significant practical appeal of Fourier-based methods, provided they are corrected by the constructive procedures developed in this paper.
|
statistics
|
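The debiasing idea in the abstract above can be illustrated in one dimension: instead of comparing the periodogram to the spectral density (the classical Whittle likelihood), one compares it to its exact finite-sample expectation, which is computable in $\mathcal{O}(n\log n)$ as the FFT of the triangle-tapered autocovariance. Below is a minimal 1D sketch assuming an AR(1) process and a grid search over its coefficient; the model, grid, and parameter names are illustrative and not the paper's multi-dimensional implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_periodogram(acv):
    """E[I(w_j)] at the Fourier frequencies: FFT of the triangle-tapered
    autocovariance. This exact finite-n expectation replaces the spectral
    density in the debiased Whittle likelihood, removing boundary bias."""
    n = len(acv)
    g = acv * (1 - np.arange(n) / n)
    return 2 * np.real(np.fft.fft(g)) - acv[0]

def debiased_whittle_nll(I, acv):
    """Negative debiased Whittle log-likelihood (up to constants)."""
    Ibar = expected_periodogram(acv)
    return np.sum(np.log(Ibar) + I / Ibar)

# simulated data: AR(1) with phi = 0.7 and unit-variance innovations
phi_true, n = 0.7, 4096
e = rng.standard_normal(n + 500)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = phi_true * x[t - 1] + e[t]
x = x[500:]                                # discard burn-in
I = np.abs(np.fft.fft(x)) ** 2 / n         # periodogram, O(n log n)

# fit phi by grid search; AR(1) autocovariance: c(tau) = phi^|tau|/(1-phi^2)
taus = np.arange(n)
grid = np.linspace(0.05, 0.95, 91)
nlls = [debiased_whittle_nll(I, p ** taus / (1 - p ** 2)) for p in grid]
phi_hat = grid[int(np.argmin(nlls))]
print(phi_hat)
```

Each likelihood evaluation costs one FFT, so the estimator keeps the Fourier-based scalability the abstract emphasizes.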
Let $\mathscr{C}$ be a 2-Calabi-Yau triangulated category, and let $\mathscr{T}$ be a cluster tilting subcategory of $\mathscr{C}$. An important result from Dehy and Keller tells us that a rigid object $c \in \mathscr{C}$ is uniquely defined by its index with respect to $\mathscr{T}$. The notion of triangulated categories extends to the notion of $(d+2)$-angulated categories. Thanks to a paper by Oppermann and Thomas, we now have a definition for cluster tilting subcategories in higher dimensions. This paper proves that under a technical assumption, an indecomposable object in a $(d+2)$-angulated category is uniquely defined by its index with respect to a higher dimensional cluster tilting subcategory. We also demonstrate an application of this result in higher dimensional cluster categories.
|
mathematics
|
This document is the final report of the Community Planning Process (CPP) that describes a comprehensive plan to deliver fusion energy and to advance plasma science. The CPP was initiated by the executive committee of the American Physical Society Division of Plasma Physics (APS DPP) to help the Fusion Energy Sciences Advisory Committee (FESAC) fulfill a charge from the U.S. Department of Energy (DOE) to develop a strategic plan for the DOE Office of Fusion Energy Sciences (FES). In this charge, dated Nov 30, 2018, DOE Deputy Director for Science Dr. Stephen Binkley requested that FESAC "undertake a new long range strategic planning activity for the Fusion Energy Sciences (FES) program. The strategic planning activity to encompass the entire FES research portfolio (namely, burning plasma science and discovery plasma science) should identify and prioritize the research required to advance both the scientific foundation needed to develop a fusion energy source, as well as the broader FES mission to steward plasma science." The CPP represents the first phase in developing a long range strategic plan for FES, and will serve as the basis for the second phase activity conducted by FESAC. It is worth noting that enacting the full scope of the recommendations in the strategic plan in this document will require suitable partnerships with other offices and governmental agencies, as well as with private industry and international partners.
|
physics
|
One of the pillars of any machine learning model is its concepts. Using software engineering, we can engineer these concepts and then develop and expand them. In this article, we present SELM, a framework for Software Engineering of machine Learning Models, and evaluate it through a case study. Using the SELM framework, we can improve the efficiency of a machine learning process and achieve higher learning accuracy with fewer hardware resources and a smaller training dataset. This highlights the importance of an interdisciplinary approach to machine learning. Therefore, in this article, we also provide proposals for interdisciplinary machine learning teams.
|
computer science
|
We propose and demonstrate a joint model of anatomical shapes, image features and clinical indicators for statistical shape modeling and medical image analysis. The key idea is to employ a copula model to separate the joint dependency structure from the marginal distributions of variables of interest. This separation provides flexibility in the assumptions made during the modeling process. The proposed method can handle binary, discrete, ordinal and continuous variables. We demonstrate a simple and efficient way to include binary, discrete and ordinal variables into the modeling. We build Bayesian conditional models, given partially observed clinical indicators, features or shape, with Gaussian processes capturing the dependency structure. We apply the proposed method to a stroke dataset to jointly model the shape of the lateral ventricles, the spatial distribution of the white matter hyperintensity associated with periventricular white matter disease, and clinical indicators. The proposed method yields interpretable joint models for data exploration and patient-specific statistical shape models for medical image analysis.
|
electrical engineering and systems science
|
We study the mixed scalar-tensor two-point function in icosahedral inflation. Within the regime of validity of the effective field theory, this has to be perturbatively small; in particular, much smaller than the scalar spectrum. However, it can be much larger than the tensor spectrum itself. We discuss observational implications for the CMB temperature-polarization spectrum.
|
high energy physics theory
|
Reducing the number of non-Clifford quantum gates present in a circuit is an important task for efficiently implementing quantum computations, especially in the fault-tolerant regime. We present a new method for reducing the number of T-gates in a quantum circuit based on the ZX-calculus, which matches or beats previous approaches to T-count reduction on the majority of our benchmark circuits in the ancilla-free case, in some cases yielding up to 50% improvement. Our method begins by representing the quantum circuit as a ZX-diagram, a tensor network-like structure that can be transformed and simplified according to the rules of the ZX-calculus. We then show that a recently-proposed simplification strategy can be extended to reduce T-count using a new technique called phase teleportation. This technique allows non-Clifford phases to combine and cancel by propagating non-locally through a generic quantum circuit. Phase teleportation does not change the number or location of non-phase gates and the method also applies to arbitrary non-Clifford phase gates as well as gates with unknown phase parameters in parametrised circuits. Furthermore, the simplification strategy we use is powerful enough to validate equality of many circuits. In particular, we use it to show that our optimised circuits are indeed equal to the original ones. We have implemented the routines of this paper in the open-source library PyZX.
|
quantum physics
|
We present a detailed study of the fields and propagation characteristics around the focus of ultrashort radially-polarized laser beams (RPLBs) having low-order spatio-temporal couplings (STCs). The three STCs considered are the focusing of the different frequencies to different positions along the longitudinal coordinate, the focusing of the frequencies to different positions along one transverse coordinate, and the beam waist or Rayleigh range having a power-law frequency dependence. The STCs considered are deemed low-order because they are generally linear in frequency. The combination of a low-order vector beam, ultrashort pulse duration, and the three different STCs shows promise for exotic applications in dielectric or charged particle manipulation and potentially other strong-field phenomena. The STCs presented are all developed in a standard frequency-domain model where each case involves a different chromatic term. We present the results unique to the vector nature of the RPLBs and compare them to the linearly polarized cases, opening up opportunities for control of the electric field around the focus with the additional element of polarization.
|
physics
|
Very recently, there has been significant progress with establishing a common phenomenology of the superconducting cuprates in terms of nuclear magnetic resonance (NMR) shift and relaxation. Different from the old interpretation, it was shown that the shifts demand two coupled spin components with different temperature dependencies. One spin component couples isotropically to the planar Cu nucleus and is likely to reside at planar O, while the other, anisotropic component has its origin in the planar copper $3d(x^2-y^2)$ orbital. Nuclear relaxation, on the other hand, was found to be rather ubiquitous and Fermi liquid-like for planar Cu, i.e., it is independent of doping and material, apart from the sudden drop at the superconducting transition temperature, $T_{\rm c}$. However, there is a doping- and material-dependent anisotropy that is independent of temperature, above and below $T_{\rm c}$. Here we present a slightly different analysis of the shifts that fits all planar Cu shift data. In addition, we are able to derive a simple model that explains nuclear relaxation based on these two spin components. In particular, the only outlier so far, La$_{2-x}$Sr$_x$CuO$_4$, can be understood as well. While this concerns predominantly planar Cu, it is argued that the two-component model should fit all cuprate shift and relaxation data.
|
condensed matter
|
The thiophenoxy radical (C6H5S) is a species of possible astrophysical interest due to an electronic transition in the 5000 Å region. The B <-- X electronic transition of this radical in the discharge of thiophenol was measured using a cavity ring-down spectrometer. The optical absorption spectrum of this transition was obtained in the range covering from the origin band (0-0) to a frequency of 1750 cm-1. The vibronic bands in the 400-1700 cm-1 region are stronger than the origin band, suggesting a structural difference between the ground and excited electronic states. The prominent progression was assigned to the 6a symmetric in-plane CCC bending mode starting from the forbidden 6b10 band. Band origins of individual bands were determined by analysis of the rotational profiles. Although these vibronic bands were not found in optical spectra of diffuse clouds, the upper limits of the column densities of the thiophenoxy radical in the diffuse clouds toward HD 183143 and HD 204827 were evaluated to be 4 x 10^13 cm-2.
|
astrophysics
|
Quantum key distribution allows secure key distribution between remote communication parties. In a quantum network, multiple users are connected by quantum links for key distribution and classical links for encrypted data transmission. When the quantum network structure becomes complicated with a large number of users, it is important to investigate network issues, including security, key management, latency, reliability, scalability, and cost. In this work, we utilize the classical network theory and graph theory to establish a framework for a quantum network, addressing two critical issues, security and key management. First, we design a communication scheme with the highest security level that trusts a minimum number of intermediate nodes. Second, when the quantum key is a limited resource, we design key management and data scheduling schemes to optimize the utility of data transmission. Our results can be directly applied to the current metropolitan and free-space quantum network implementations and can potentially be a standard approach for future quantum network designs.
|
quantum physics
|
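The first design goal in the quantum-network abstract above, a communication scheme that trusts a minimum number of intermediate nodes, can be sketched with standard graph search: if every intermediate node on a key-relay path must be trusted, then a path with the fewest hops trusts the fewest nodes, which breadth-first search finds. The network topology and node names below are hypothetical, and the paper's actual scheme may weight nodes and links differently; this is only an illustrative sketch.

```python
from collections import deque

def fewest_trusted_relays(links, src, dst):
    """BFS over the quantum-link graph. A hop-minimal path from src to dst
    also minimizes the number of intermediate (key-relaying) trusted nodes."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                 # reconstruct the path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None                      # no quantum path exists

# toy metropolitan network (hypothetical topology)
links = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "E"), ("E", "D"), ("B", "D")]
path = fewest_trusted_relays(links, "A", "D")
print(path, "trusted relays:", len(path) - 2)
```

Here the scheme would pick a route with a single trusted relay rather than the three-node chain A-B-C-D.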
The spin Hall effect is the transverse flow of electron spin in conductors under an external electric field. Similarly, a thermal gradient in magnetic insulators can drive a transverse flow of the spin angular momentum of magnons, which provides a thermal alternative for spin manipulation. Recently, the phonon angular momentum (PAM), which is the angular momentum of atoms as a result of their orbital motion around their equilibrium positions, has garnered attention as a quantity analogous to the magnon spin. However, can we manipulate PAM like magnon spin? Here, we show that a temperature gradient generally induces a transverse flow of PAM, which we term the phonon angular momentum Hall effect (PAMHE). The PAMHE relies only on the presence of transverse and longitudinal acoustic phonons, and it is therefore ubiquitous in condensed matter systems. As a consequence of the PAMHE, PAM accumulates at the edges of a crystal. When the atoms in the crystal carry nonzero Born effective charge, the edge PAM induces edge magnetization, which may be observed through optical measurement. We believe that the PAMHE provides a new principle for the manipulation of angular momenta in insulators and opens up an avenue for developing functional materials based on phonon engineering.
|
condensed matter
|
The work presented here attempts to answer the question: how do we decide whether a given detection is a planet or just residual noise in exoplanet direct imaging data? To this end we present a method implemented within a Bayesian framework: (1) to unify 'source detection' and 'source characterization' into one single rigorous mathematical framework; (2) to enable adequate hypothesis testing given the S/N of the data; (3) to enhance the detectability of a planet's faint signal in the presence of instrumental and background noise and to optimize the characterization of the planet. As a proof of concept we implemented a routine named ${\tt PlanetEvidence}$ that integrates the nested sampling technique (MultiNest) with a post-processing technique, the Karhunen-Loeve Image Processing (KLIP) algorithm. This is a first step toward recasting such post-processing methods into a fully Bayesian perspective. We test our approach on real direct imaging data, specifically GPI data of $\beta$ Pictoris b, and on synthetic data. We find that for the former the method strongly favors the presence of a planet (as expected) and recovers the true parameter posterior distributions, while for the latter our approach allows us to detect (true) dim sources invisible to the naked eye as real planets, rather than background noise, and to set a new lower threshold for detection at approximately the 2$\sigma$ level. Further, it allows us to quantify our confidence that a given detection is a real planet and not just residual noise (for example residual speckles). The next natural step is to extend this approach to construct a Bayesian-based algorithm for blind detection, that is, one not requiring an initial guess as to the location of the planet. This is the subject of ongoing work.
|
astrophysics
|
We present a technique for the resummation of the series of flux-tube excitations arising in the pentagon operator product expansion program for polygonal Wilson loops in N=4 SYM. Here we restrict ourselves to contributions of one-particle effective states and consider, as a particular example, the NMHV 6-particle amplitude at one loop. The presented technique is also applicable at higher loops for one-effective-particle contributions and can potentially be generalized to contributions with more effective particles.
|
high energy physics theory
|
We describe the wartime challenges associated with the rapid developments in plutonium chemistry and metallurgy that were necessary to produce the core of the Trinity Device. Beginning with microgram quantities of plutonium metal late in 1943, initial measurements showed a wide and confusing variance in density and other properties. These confusing results were the first clues to the astounding complexity of plutonium. As this complexity was revealed, it introduced new challenges for the fabrication of kilogram-scale parts. In a remarkable period from January 1944 to June 1945, Manhattan Project scientists made rapid progress in understanding plutonium chemistry and metallurgy. By early 1945, they had discovered five of the six ambient-pressure phases of unalloyed plutonium and reported the density of these phases to within a value of 0.1 g/cm$^3$ of those accepted today. They solved the stability problem introduced by these phases with a rapid alloy development program that ultimately identified gallium as the preferred element to stabilize the delta-phase, producing a plutonium alloy still of scientific and technical interest today. We conclude with a description of post-war developments in these areas, including applications of wartime plutonium metallurgy to civilian applications in nuclear reactors. We dedicate this paper to the memory of Ed Hammel, the Manhattan Project plutonium metallurgist whose previous description and documentation of plutonium history during the war has been essential in our research.
|
physics
|
We calculate the Standard Model (SM) predictions for the differential branching ratio of the rare $B_s \to \phi\mu^+ \mu^-$ decays using $B_s \to \phi$ transition form factors (TFFs) obtained with holographic light-front QCD (hQCD) instead of the traditional QCD sum rules (QCDSR). Our predictions for the differential branching ratio are in better agreement with the LHCb data. Also, we find that the hQCD prediction for $R_{K^*\phi}$, the ratio of the branching fraction of $B \to K^* \mu^+ \mu^-$ to that of $B_s \to \phi\mu^+ \mu^-$, is in excellent agreement with both the LHCb and CDF results in the low-$q^2$ range.
|
high energy physics phenomenology
|
Hydrogen column densities inferred from X-ray absorption are typically 5 - 30 times larger than the neutral atomic hydrogen column densities derived from 21cm HI absorption toward radio-loud active galactic nuclei. Some part of the difference is ascribed to uncertainty in the spin temperature $T_{\rm sp}$ = 100 K that is often used to convert 21cm HI absorption to N(HI). Here we propose another way to infer the gas column from HI absorption. In our Galaxy there is a nearly linear correlation between the interferometrically measured integrated 21cm HI absorption $W_{\rm HI}$ and reddening, $W_{\rm HI} \propto E(B-V)^{1.10}$ for $W_{\rm HI} \gtrsim 0.7$ km s$^{-1}$ or $E(B-V) \gtrsim 0.04$ mag. Scaling $E(B-V)$ then provides the total gas column density N(H) from the same dust column that is responsible for optical obscuration and X-ray absorption, without calculating N(HI). Values of N(H) so derived typically exceed N(HI) by a factor of 4 because the ubiquitous Galactic 21cm HI absorption samples only a portion of the interstellar gas. If the well-studied case of Hydra A is a guide, even very large disparities in X-ray and 21cm gas column densities can be explained by resolving the core radio continuum and inferring N(H) from 21cm HI absorption. Milky Way conditions are often invoked in discussions of obscured AGN, so the empirical relationship seen in the Milky Way should be a relevant benchmark.
|
astrophysics
|
We are interested in the numerical solution of coupled nonlinear partial differential equations (PDEs) in two and three dimensions. Under certain assumptions on the domain, we take advantage of the Kronecker structure arising in standard space discretizations of the differential operators and illustrate how the resulting system of ordinary differential equations (ODEs) can be treated directly in matrix or tensor form. Moreover, in the framework of the proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM) we derive a two- and three-sided model order reduction strategy that is applied directly to the ODE system in matrix and tensor form respectively. We discuss how to integrate the reduced order model and, in particular, how to solve the tensor-valued linear system arising at each timestep of a semi-implicit time discretization scheme. We illustrate the efficiency of the proposed method through a comparison to existing techniques on classical benchmark problems such as the two- and three-dimensional Burgers equation.
|
mathematics
|
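The Kronecker structure exploited in the PDE abstract above can be made concrete for the 2D Laplacian: on a tensor-product grid the discretized operator is the Kronecker sum $A \otimes I + I \otimes A$ of the 1D operator $A$, so its action on the flattened unknown vec($U$) reduces to two small matrix products, which is exactly what lets the ODE system be treated directly in matrix form. A minimal numpy sketch (illustrative only, not the paper's code):

```python
import numpy as np

def lap1d(n, h):
    """1D Dirichlet Laplacian, second-order central differences."""
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)
    np.fill_diagonal(A[:, 1:], 1.0)
    return A / h ** 2

n, h = 8, 1.0 / 9
A = lap1d(n, h)
I = np.eye(n)

# 2D Laplacian as a Kronecker sum acting on the row-major flattening of U
L2 = np.kron(A, I) + np.kron(I, A)

# For row-major vec: (A ⊗ I) vec(U) = vec(A U) and (I ⊗ A) vec(U) = vec(U Aᵀ),
# so the big matrix-vector product equals two n×n matrix products.
U = np.random.default_rng(0).standard_normal((n, n))
lhs = (L2 @ U.reshape(-1)).reshape(n, n)
rhs = A @ U + U @ A.T
print(np.allclose(lhs, rhs))
```

The matrix form never assembles the $n^2 \times n^2$ operator, which is the computational point of working with the ODE system in matrix or tensor form.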
The global dynamics of event cascades are often governed by the local dynamics of peer influence. However, detecting social influence from observational data is challenging due to confounds like homophily and practical issues like missing data. We propose a simple discriminative method to detect influence from observational data. The core of the approach is to train a ranking algorithm to predict the source of the next event in a cascade, and compare its out-of-sample accuracy against a competitive baseline which lacks access to features corresponding to social influence. We analyze synthetically generated data to show that this method correctly identifies influence in the presence of confounds, and is robust to both missing data and misspecification --- unlike well-known alternatives. We apply the method to two real-world datasets: (1) the co-sponsorship of legislation in the U.S. House of Representatives on a social network of shared campaign donors; (2) rumors about the Higgs boson discovery on a follower network of $10^5$ Twitter accounts. Our model identifies the role of social influence in these scenarios and uses it to make more accurate predictions about the future trajectory of cascades.
|
computer science
|
We consider a natural local dynamic on the set of all rooted planar maps with $n$ edges that is in some sense analogous to "edge flip" Markov chains, which have been considered before on a variety of combinatorial structures (triangulations of the $n$-gon and quadrangulations of the sphere, among others). We provide the first polynomial upper bound for the mixing time of this "edge rotation" chain on planar maps: we show that the spectral gap of the edge rotation chain is bounded below by an appropriate constant times $n^{-11/2}$. In doing so, we provide a partially new proof of the fact that the same bound applies to the spectral gap of edge flips on quadrangulations, which makes it possible to generalise a recent result of the author and Stauffer to a chain that relates to edge rotations via Tutte's bijection.
|
mathematics
|
Nuclear spins of noble gases feature extremely long coherence times but are inaccessible to optical photons. Here we realize a coherent interface between light and noble-gas spins that is mediated by alkali atoms. We demonstrate the optical excitation of the noble-gas spins and observe the coherent back-action on the light in the form of high-contrast two-photon spectra. We report on a record two-photon linewidth of 5$\pm$0.7 mHz (millihertz) above room-temperature, corresponding to a one-minute coherence time. This experiment provides a demonstration of coherent bi-directional coupling between light and noble-gas spins, rendering their long-lived spin coherence accessible for manipulations in the optical domain.
|
quantum physics
|
We use renormalization group methods to study composite operators existing at a boundary of an interacting conformal field theory. In particular we relate the data on boundary operators to short-distance (near-boundary) divergences of bulk two-point functions. We further argue that in the presence of running couplings at the boundary the anomalous dimensions of certain composite operators can be computed from the relevant beta functions and remark on the implications for the boundary (pseudo) stress-energy tensor. We apply the formalism to a scalar field theory in $d=3-\epsilon$ dimensions with a quartic coupling at the boundary whose beta function we determine to the first non-trivial order. We study the operators in this theory and compute their conformal data using the $\epsilon$-expansion at the Wilson-Fisher fixed point of the boundary renormalization group flow. We find that the model possesses a non-zero boundary stress-energy tensor and displacement operator both with vanishing anomalous dimensions. The boundary stress tensor decouples at the fixed point in accordance with Cardy's condition for conformal invariance. We end the main part of the paper by discussing the possible physical significance of this fixed point for various values of $\epsilon$.
|
high energy physics theory
|
We propose a new method of generating gamma rays with orbital angular momentum (OAM). Accelerated partially-stripped ions are used as an energy up-converter. Irradiating an optical laser beam with OAM on ultrarelativistic ions, they are excited to a state of large angular momentum. Gamma rays with OAM are emitted in their deexcitation process. We examine the excitation cross section and deexcitation rate.
|
high energy physics phenomenology
|
In this paper, we first investigate the thermal stability of black holes in conformal Weyl gravity, comparing them with the Schwarzschild black holes. Then, we consider a minimally coupled massive scalar perturbation and calculate the quasinormal modes in asymptotically dS spacetime by employing the sixth-order WKB approximation and the asymptotic iteration method. The deviations from those of the Schwarzschild-dS solutions are obtained and the possibility of the presence of quasi-resonance modes for Weyl black hole solutions is investigated. Finally, we consider a massless scalar perturbation in the background of asymptotically AdS solutions and calculate the quasinormal modes by using the pseudospectral method. The effects of the free parameter of the theory on the quasinormal modes are studied and deviations from those of the Schwarzschild-AdS black holes are investigated. The imaginary part of the quasinormal frequencies in AdS spacetime sets the time scale for a thermal state (in the dual conformal field theory) to approach thermal equilibrium.
|
high energy physics theory
|
Objective: A person's affective state has known relationships to physiological processes which can be measured by wearable sensors. However, while there are general trends, those relationships can be person-specific. This work proposes using neural processes as a way to address individual differences. Methods: Stress classifiers built from classic machine learning models and from neural processes are compared on two datasets using leave-one-participant-out cross-validation. The neural processes models are contextualized on data from a brief period of a particular person's recording. Results: The neural processes models outperformed the standard machine learning models, and had the best performance when using periods of stress and baseline as context. Contextual points chosen from other participants led to lower performance. Conclusion: Neural processes can learn to adapt to person-specific physiological sensor data. There are a wide range of affective and medical applications for which this model could prove useful.
|
statistics
|
A manual for computations in direct Dark Matter detection phenomenology. Featuring self-contained sections on non-relativistic expansion, elastic and inelastic scattering kinematics, Dark Matter velocity distribution, hadronic matrix elements, nuclear form factors, cross sections, rate spectra and parameter-space constraints, as well as a handy two-page summary and Q&A section for a quick reference. A pedagogical, yet general and model independent guide, with examples from standard and non-standard particle Dark Matter models.
|
high energy physics phenomenology
|
We introduce a new method for generation of phase screen samples with an arbitrary spatial spectrum: Sparse Spectrum with uniform wave vectors (SU). Similar to the known Sparse Spectrum (SS) technique, it uses trigonometric series with random discrete support on the wave vector plane, but, unlike the SS technique, the random wave vectors are uniformly distributed on the individual segments of the wave vector plane partition. We compare the accuracy and computational effectiveness of the SU technique with the ubiquitous subharmonics-complemented DFT method, the SS method, and the recently published randomized DFT technique [J. Opt. Soc. Am. B 36, 3249 (2019)]. The SU and SS algorithms generate unbiased samples for screens with bounded phase variance, and show superior computational effectiveness for one-megapixel and larger screens.
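A minimal sketch of the sparse-spectrum idea, with one wave vector drawn uniformly within each annular segment of a wave vector plane partition; the ring partition, the Kolmogorov-like weighting, and all parameter names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sparse_spectrum_screen(n=64, n_modes=100, L=1.0, rng=None):
    """Toy sparse-spectrum phase screen: a short sum of cosines whose
    random wave vectors are drawn uniformly from annular rings."""
    rng = np.random.default_rng(rng)
    x = np.linspace(0, L, n)
    X, Y = np.meshgrid(x, x)
    screen = np.zeros((n, n))
    # Partition the wave-vector plane into annular rings (log-spaced here,
    # as an assumption) and draw one uniform wave vector per ring.
    edges = np.logspace(0, 2, n_modes + 1) * 2 * np.pi / L
    for k_lo, k_hi in zip(edges[:-1], edges[1:]):
        k = rng.uniform(k_lo, k_hi)          # radial magnitude, uniform in ring
        theta = rng.uniform(0, 2 * np.pi)    # isotropic direction
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        amp = k ** (-11 / 6)                 # Kolmogorov-like weighting (assumed)
        phase = rng.uniform(0, 2 * np.pi)
        screen += amp * np.cos(kx * X + ky * Y + phase)
    return screen
```

With one wave vector per ring, the screen is a short trigonometric series rather than a full DFT over the grid, which is the source of the computational savings for large screens.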
|
electrical engineering and systems science
|
We prove an Atiyah-Segal isomorphism for the higher $K$-theory of coherent sheaves on quotient Deligne-Mumford stacks over $\mathbb{C}$. As an application, we prove the Grothendieck-Riemann-Roch theorem for such stacks. This theorem establishes an isomorphism between the higher $K$-theory of coherent sheaves on a Deligne-Mumford stack and the higher Chow groups of its inertia stack. Furthermore, this isomorphism is covariant for proper maps between Deligne-Mumford stacks.
|
mathematics
|
A scaled-up quantum computer will require a highly efficient control interface that autonomously manipulates and reads out large numbers of qubits, which for solid-state implementations are usually held at millikelvin (mK) temperatures. Advanced CMOS technology, tightly integrated with the quantum system, would be ideal for implementing such a control interface but is generally discounted on the basis of its power dissipation that leads to heating of the fragile qubits. Here, we demonstrate an ultra low power, CMOS-based quantum control platform that takes digital commands as input and generates many parallel qubit control signals. Realized using 100,000 transistors operating near 100 mK, our platform alleviates the need for separate control lines to every qubit by exploiting the low leakage of transistors at cryogenic temperatures to store charge on floating gate structures that are used to tune-up quantum devices. This charge can then be rapidly shuffled between on-chip capacitors to generate the fast voltage pulses required for dynamic qubit control. We benchmark this architecture on a quantum dot test device, showing that the control of thousands of gate electrodes is feasible within the cooling power of commercially available dilution refrigerators.
|
quantum physics
|
Let $p$ be a prime. The $2$-primary part of the class group of the pure quartic field $\mathbb{Q}(\sqrt[4]{p})$ has been determined by Parry and Lemmermeyer when $p \not\equiv \pm 1\bmod 16$. In this paper, we improve the known results in the case $p\equiv \pm 1\bmod 16$. In particular, we determine all primes $p$ such that $4$ does not divide the class number of $\mathbb{Q}(\sqrt[4]{p})$. We also conjecture a relation between the class numbers of $\mathbb{Q}(\sqrt[4]{p})$ and $\mathbb{Q}(\sqrt{-2p})$. We show that this conjecture implies a distribution result of the $2$-class numbers of $\mathbb{Q}(\sqrt[4]{p})$.
|
mathematics
|
We computationally study the Fermi arc states in a Dirac semimetal, both in a semi-infinite slab and in the thin-film limit. We use Cd$_3$As$_2$ as a model system, and include perturbations that break the $C_4$ symmetry and inversion symmetry. The surface states are protected by the mirror symmetries present in the bulk states and thus survive these perturbations. The Fermi arc states persist down to very thin films, thinner than presently measured experimentally, but are affected by breaking the symmetry of the Hamiltonian. Our findings are compatible with experimental observations of transport in Cd$_3$As$_2$, and also suggest that symmetry-breaking terms that preserve the Fermi arc states nevertheless can have a profound effect in the thin film limit.
|
condensed matter
|
We propose a differential radial basis function (RBF) network termed RBF-DiffNet -- whose hidden layer blocks are partial differential equations (PDEs) linear in terms of the RBF -- to make the baseline RBF network robust to noise in sequential data. Assuming that the sequential data derives from the discretisation of the solution to an underlying PDE, the differential RBF network learns constant linear coefficients of the PDE, consequently regularising the RBF network by following modified backward-Euler updates. We experimentally validate the differential RBF network on the logistic map chaotic timeseries as well as on 30 real-world timeseries provided by Walmart in the M5 forecasting competition. The proposed model is compared with the normalised and unnormalised RBF networks, ARIMA, and ensembles of multilayer perceptrons (MLPs) and recurrent networks with long short-term memory (LSTM) blocks. From the experimental results, RBF-DiffNet consistently shows a marked reduction over the baseline RBF network in terms of the prediction error (e.g., 26% reduction in the root mean squared scaled error on the M5 dataset); RBF-DiffNet also shows a comparable performance to the LSTM ensemble at less than one-sixteenth the LSTM computational time. Our proposed network consequently enables more accurate predictions -- in the presence of observational noise -- in sequence modelling tasks such as timeseries forecasting that leverage the model interpretability, fast training, and function approximation properties of the RBF network.
|
computer science
|
Ensemble Kalman Filters (EnKFs) employ a Monte Carlo approach to represent covariance information, and are affected by sampling errors in operational settings where the number of model realizations is much smaller than the model state dimension. To alleviate the effects of these errors, the EnKF relies on model-specific heuristics such as covariance localization, which takes advantage of the spatial locality of correlations among the model variables. This work proposes an approach to alleviating sampling errors that utilizes a locally averaged-in-time dynamics of the model, described in terms of a climatological covariance of the dynamical system. We use this covariance as the target matrix in covariance shrinkage methods, and develop a stochastic covariance shrinkage approach in which synthetic ensemble members are drawn to enrich both the ensemble subspace and the ensemble transformation.
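The shrinkage step can be sketched as a convex combination of the noisy ensemble covariance and the climatological target; the fixed weight `gamma` and the plain linear form are assumptions for illustration, not the paper's exact stochastic estimator:

```python
import numpy as np

def shrink_covariance(ensemble, target, gamma):
    """Linear shrinkage of the sample covariance toward a target matrix.

    ensemble : (n_state, n_members) array of model realizations
    target   : (n_state, n_state) climatological covariance (assumed given)
    gamma    : shrinkage weight in [0, 1] (assumed fixed here)
    """
    # Sample covariance from the (small) ensemble, mean removed.
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    sample_cov = anomalies @ anomalies.T / (ensemble.shape[1] - 1)
    # Convex combination: pulls noisy sample estimates toward climatology.
    return gamma * target + (1.0 - gamma) * sample_cov
```

In practice the weight would be chosen adaptively (e.g., by a Ledoit-Wolf-type rule) rather than hand-set, and the paper's stochastic variant additionally draws synthetic members consistent with the shrunk covariance.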
|
statistics
|
We study the triply heavy baryons $\Omega_{QQQ}$ $(Q=c, b)$ in the QCD Sum Rules by calculating the next-to-leading order (NLO) contribution in the perturbative QCD part of the correlation functions. Compared with the leading order (LO) result, the NLO contribution is found to be very important to the $\Omega_{QQQ}$, for it not only leads to a large correction, but also reduces the parameters dependence and makes the Borel platform more distinct, especially for the $\Omega_{bbb}$ in the $\overline{\rm{MS}}$ scheme, where the platform appears only at NLO but not at LO. In particular, due to the inclusion of the NLO contribution, the renormalization schemes ($\overline{\rm{MS}}$ and On-Shell) dependence and scale dependence are significantly improved. As a result, after including the NLO contribution of the perturbative part in QCD sum rules, the masses are predicted to be $4.53^{+0.26}_{-0.11}$ GeV for $\Omega_{ccc}$ and $14.27^{+0.33}_{-0.32}$ GeV for $\Omega_{bbb}$, where the results are obtained at $\mu=M_B$ with errors including that from the variation of the renormalization scale $\mu$ in the range $(0.8-1.2) M_B$. A careful study for the $\mu$ dependence in a wider range is further performed, which shows that the LO results are very sensitive to the choice of $\mu$ whereas the NLO results are much better. In addition to the $\mu=M_B$ result, a quite stable value, (4.75-4.80) GeV, for the $\Omega_{ccc}$ mass is found in the range of $\mu=(1.2-2.0) M_B$.
|
high energy physics phenomenology
|
The physics aims at the proposed future high-energy linear $e^+e^-$ collider CLIC pose challenging demands on the performance of the detector system. In particular, the vertex and tracking detectors have to combine a spatial resolution of a few micrometres and a low material budget with a time-stamping accuracy of a few nanoseconds. For the vertex detector, fine-pitch sensors, dedicated 65nm readout ASICs, fine-pitch bonding techniques using solder bumps or anisotropic conductive films as well as monolithic devices based on Silicon-On-Insulator technology are explored. Fully monolithic CMOS sensors with large and small collection electrodes are under investigation for the large surface CLIC tracker. This contribution gives an overview of the CLIC vertex and tracking detector R&D, focusing on recent results from test-beam campaigns and simulation-based sensor optimisation studies.
|
physics
|
The substantial growth of network traffic speed and volume presents practical challenges to network data analysis. Packet thinning and flow aggregation protocols such as NetFlow reduce the size of datasets by providing structured data summaries, but conversely this impedes statistical inference. Methods which aim to model patterns of traffic propagation typically do not incorporate the packet thinning and summarisation process into the analysis, and are often simplistic, e.g.~method-of-moments. As a result, they can be of limited practical use. We introduce a likelihood-based analysis which fully incorporates packet thinning and NetFlow summarisation into the analysis. Consequently, inferences can be made for models on the level of individual packets while only observing thinned flow summary information. We establish consistency of the resulting maximum likelihood estimator, derive bounds on the volume of traffic which should be observed to achieve required levels of estimator accuracy, and identify an ideal family of models. The robust performance of the estimator is examined through simulated analyses and an application on a publicly available trace dataset containing over 36m packets over a 1 minute period.
|
statistics
|
The quantum walk is a counterpart of the random walk. The 2-state quantum walk in one dimension can be determined by a measure on the unit circle in the complex plane. As for the singular continuous measure, results on the corresponding quantum walk are limited. In this situation, we focus on a quantum walk, called the Riesz walk, given by the Riesz measure which is one of the famous singular continuous measures. The present paper is devoted to the return probability of the Riesz walk. Furthermore, we present some conjectures on the self-similarity of the walk.
|
quantum physics
|
Eating is a fundamental activity in people's daily life. Studies have shown that many health-related problems such as obesity, diabetes and anemia are closely associated with people's unhealthy eating habits (e.g., skipping meals, eating irregularly and overeating). Traditional eating monitoring solutions relying on self-reports remain an onerous task, while the recent trend requiring users to wear expensive dedicated hardware is still invasive. To overcome these limitations, in this paper, we develop a device-free eating monitoring system using WiFi-enabled devices (e.g., smartphone or laptop). Our system aims to automatically monitor users' eating activities through identifying the fine-grained eating motions and detecting the chewing and swallowing. In particular, our system extracts the fine-grained Channel State Information (CSI) from WiFi signals to distinguish eating from non-eating activities and further recognize users' detailed eating motions with different utensils (e.g., using a fork, knife, spoon or bare hands). Moreover, the system has the capability of identifying chewing and swallowing through detecting users' minute facial muscle movements based on the derived CSI spectrogram. Such fine-grained eating monitoring results are beneficial to the understanding of the user's eating behaviors and can be used to estimate food intake types and amounts. Extensive experiments with 20 users over 1600 minutes of eating show that the proposed system can recognize the user's eating motions with up to 95% accuracy and estimate the chewing and swallowing amount with a 10% percentage error.
|
computer science
|
We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound's prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we rely on a result showing that data-dependent priors obtained by stochastic gradient Langevin dynamics (SGLD) yield valid PAC-Bayes bounds provided the target distribution of SGLD is $\epsilon$-differentially private. We observe that test error on MNIST and CIFAR10 falls within the (empirically nonvacuous) risk bounds computed under the assumption that SGLD reaches stationarity. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.
|
statistics
|
We propose a novel codimension-n holography, called cone holography, between a gravitational theory in $d+1$ dimensional conical spacetime and a CFT on the $d+1-n$ dimensional defects. For one general class of solutions, we prove that the cone holography is equivalent to AdS/CFT, by showing that the classical gravitational action and thus the CFT partition function in large N limit are the same for the two theories. We test our proposal by studying Weyl anomaly, Entanglement/R\'enyi entropy and correlation functions, and find good agreements between the holographic and the CFT results. In particular, the c-theorem is obeyed by cone holography. These are strong supports for our proposal. We discuss two kinds of boundary conditions, the mixed boundary condition and Neumann boundary condition, and find that they both define a consistent theory of cone holography. The cone holography can be regarded as a generalization of the wedge holography, and it is closely related to the defect CFT, entanglement/R\'enyi entropy and AdS/BCFT(dCFT). Thus it is expected to have a wide range of applications.
|
high energy physics theory
|
Accreting black holes show characteristic 'reflection' features in their X-ray spectra, including the iron K$\alpha$ fluorescence line, which result from X-rays radiated by a compact central corona being reprocessed in the accretion disc atmosphere. The observed line profile is distorted by relativistic effects, providing a diagnostic for disc geometry. Nearly all previous X-ray reflection spectroscopy studies have made the simplifying assumption that the disc ionization state is independent of radius in order to calculate the restframe reflection spectrum. However, this is unlikely to be the case in reality, since the irradiating flux should drop off steeply with radius. Here we analyse a Nuclear Spectroscopic Telescope ARray observation of GRS 1915+105 that exhibits strong reflection features. We find that using a self-consistently calculated radial ionization profile returns a better fit than assuming constant ionization. Our results are consistent with the inner disc being radiation pressure dominated, as is expected from the high inferred accretion rate for this observation. We also find that the assumed ionization profile impacts on the best fitting disc inner radius. This implies that the black hole spin values previously inferred for active galactic nuclei and X-ray binaries by equating the disc inner radius with the innermost stable circular orbit may be changed significantly by the inclusion of a self-consistent ionization profile.
|
astrophysics
|
We have investigated the benefits of spin squeezed states for clocks operated with typical Brownian frequency noise-limited laser sources. Based on an analytic model of the closed servo-loop of an optical atomic clock, we can give quantitative predictions on the optimal clock stability for a given dead time and laser noise. Our analytic predictions are in very good agreement with numerical simulations of the closed servo-loop. We find that for usual cyclic Ramsey interrogation of single atomic ensembles with dead time, even with the current most stable lasers spin squeezing can only improve the clock stability for ensembles below a critical atom number of about one thousand in an optical Sr lattice clock. Even with a future improvement of the laser performance by one order of magnitude the critical atom number still remains below 100,000. In contrast, clocks based on smaller, non-scalable ensembles, such as ion clocks, can already benefit from squeezed states with current clock lasers.
|
quantum physics
|
Maunakea, the proposed site of the Thirty Meter Telescope (TMT), is a lightning-rod topic for Native Hawaiians, Hawaii residents, and the international astronomy community. In this paper we, Native Hawaiian natural scientists and allies, identify historical decisions that impact current circumstances on Maunakea and provide approaches to acknowledging their presence. Our aim is to provide an Indigenous viewpoint centered in Native Hawaiian perspectives on the impacts of the TMT project on the Hawaiian community. We summarize the current Maunakea context from the perspective of the authors who are trained in the natural sciences (inclusive of and beyond astronomy and physics), the majority of whom are Native Hawaiian or Indigenous. We highlight three major themes in the conflict surrounding TMT: 1) physical demonstrations and the use of law enforcement against the protectors of Maunakea; 2) an assessment of the benefit of Maunakea astronomy to Native Hawaiians; and 3) the disconnect between astronomers and Native Hawaiians. We close with general short- and long- term recommendations for the astronomy community, which represent steps that can be taken to re-establish trust and engage in meaningful reciprocity and collaboration with Native Hawaiians and other Indigenous communities. Our recommendations are based on established best principles of free, prior, and informed consent and researcher-community interactions that extend beyond transactional exchanges. We emphasize that development of large-scale astronomical instrumentation must be predicated on consensus from the local Indigenous community about whether development is allowed on their homelands. Proactive steps must be taken to center Indigenous voices in the earliest stages of project design.
|
astrophysics
|
Although forward error-correction (FEC) coding is an essential part of modern fiber-optic communication systems, it is impractical to implement and evaluate FEC in transmission experiments and simulations. Therefore, it is desirable to accurately predict the end-to-end link performance including FEC from transmission data recorded without FEC. In this tutorial, we provide ready-to-implement "recipes" for such prediction techniques, which apply to arbitrary channels and require no knowledge of information or coding theory. The appropriate choice of recipe depends on properties of the FEC encoder and decoder. The covered metrics include bit error rate, symbol error rate, achievable information rate, and asymptotic information, in all cases computed using a mismatched receiver. Supplementary software implementations are available.
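As a flavor of the kind of "recipe" involved, two common pre-FEC metrics can be computed from the transmitted bits and the receiver's log-likelihood ratios (LLRs). The LLR sign convention (positive favors bit 0) and the standard GMI estimator below are assumptions; the tutorial's recipes may differ in detail:

```python
import numpy as np

def pre_fec_ber(tx_bits, llrs):
    """Hard-decision bit error rate: an LLR > 0 is decided as bit 0
    (sign convention assumed)."""
    decisions = (np.asarray(llrs) < 0).astype(int)
    return np.mean(decisions != np.asarray(tx_bits))

def gmi_per_bit(tx_bits, llrs):
    """Generalized mutual information estimate, in bits per transmitted bit,
    for a mismatched bit-metric decoder (standard LLR-based estimator)."""
    signs = 1 - 2 * np.asarray(tx_bits)          # bit 0 -> +1, bit 1 -> -1
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-signs * np.asarray(llrs))))
```

For confident, correct LLRs the GMI estimate approaches 1 bit/bit, while miscalibrated or noisy LLRs drive it down; which metric predicts post-FEC performance depends on whether the decoder is hard- or soft-decision.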
|
computer science
|
Neutrino and dark matter experiments with large-volume ($\gtrsim 1$ ton) detectors can provide excellent sensitivity to signals induced by energetic light dark matter coming from the present universe. Taking boosted dark matter as a concrete example of energetic light dark matter, we scrutinize two representative search channels, electron scattering and proton scattering including deep inelastic scattering processes, in the context of elastic and inelastic boosted dark matter, in a completely detector-independent manner. In this work, a dark gauge boson is adopted as the particle to mediate the interactions between the Standard Model particles and boosted dark matter. We find that the signal sensitivity of the two channels highly depends on the (mass-)parameter region to probe, so search strategies and channels should be designed sensibly especially at the earlier stage of experiments. In particular, the contribution from the boosted-dark-matter-initiated deep inelastic scattering can be subleading (important) compared to the quasi-elastic proton scattering, if the mass of the mediator is below (above) $\mathcal{O}$(GeV). We demonstrate how to practically perform searches and relevant analyses, employing example detectors such as DarkSide-20k, DUNE, Hyper-Kamiokande, and DeepCore, with their respective detector specifications taken into consideration. For other potential detectors we provide a summary table, collecting relevant information, from which similar studies can be fulfilled readily.
|
high energy physics phenomenology
|
The minimal standard model of quarks and leptons is extended with a set of vectorlike fermions to allow baryon number $B$ to become a gauged $U(1)_B$ symmetry. The $B$ assignments of the new particles are determined by renormalizable interactions with the known quarks through a color triplet scalar diquark. The spontaneous breaking of $U(1)_B$ by a scalar with $B=3$ results in a conserved residual global $B$ symmetry. A singlet neutral scalar with $B=2$ is a possible long-lived dark-matter candidate.
|
high energy physics phenomenology
|
The satellite of (225088) 2007 OR10 was discovered on archival Hubble Space Telescope images and along with new observations with the WFC3 camera in late 2017 we have been able to determine the orbit. The orbit's notable eccentricity, e$\approx$0.3, may be a consequence of an intrinsically eccentric orbit and slow tidal evolution, but may also be caused by the Kozai mechanism. Dynamical considerations also suggest that the moon is small, D$_{eff}$ $<$ 100 km. Based on the newly determined system mass of $1.75\times10^{21}$ kg, 2007 OR10 is the fifth most massive dwarf planet after Eris, Pluto, Haumea and Makemake. The newly determined orbit has also been considered as an additional option in our radiometric analysis, provided that the moon orbits in the equatorial plane of the primary. Assuming a spherical shape for the primary this approach provides a size of 1230$\pm$50 km, with a slight dependence on the satellite orbit orientation and primary rotation rate chosen, and a bulk density of 1.75$\pm$0.07 g cm$^{-3}$ for the primary. A previous size estimate that assumed an equator-on configuration (1535$^{+75}_{-225}$ km) would provide a density of 0.92$^{+0.46}_{-0.14}$ g cm$^{-3}$, unexpectedly low for a 1000 km-sized dwarf planet.
|
astrophysics
|
Transition metal dichalcogenides (TMDC) are a rich family of two-dimensional materials displaying a multitude of different quantum ground states. In particular, d$^3$ TMDCs are paradigmatic materials hosting a variety of symmetry broken states, including charge density waves, superconductivity, and magnetism. Among this family, NbSe$_2$ is one of the best-studied superconducting materials down to the monolayer limit. Despite its superconducting nature, a variety of results point towards strong electronic repulsions in NbSe$_2$. Here, we control the strength of the interactions experimentally via quantum confinement effects and use low-temperature scanning tunneling microscopy (STM) and spectroscopy (STS) to demonstrate that NbSe$_2$ is in strong proximity to a correlated insulating state. This reveals the coexistence of competing interactions in NbSe$_2$, creating a transition from a superconducting to an insulating quantum correlated state by confinement-controlled interactions. Our results demonstrate the dramatic role of interactions in NbSe$_2$, establishing NbSe$_2$ as a correlated superconductor with competing interactions.
|
condensed matter
|
This work extends previous kinematic studies of young stars in the Head of the Orion A cloud (OMC-1/2/3/4/5). It is based on large samples of infrared, optical, and X-ray selected pre-main sequence stars with reliable radial velocities and Gaia-derived parallaxes and proper motions. Stellar kinematic groups are identified assuming they mimic the motion of their parental gas. Several groups are found to have peculiar kinematics: the NGC 1977 cluster and two stellar groups in the Extended Orion Nebula (EON) cavity are caught in the act of departing their birthplaces. The abnormal motion of NGC 1977 may have been caused by a global hierarchical cloud collapse, feedback by massive Ori OB1ab stars, supersonic turbulence, cloud-cloud collision, and/or slingshot effect; the former two models are favored by us. EON groups might have inherited anomalous motions of their parental cloudlets due to small-scale `rocket effects' from nearby OB stars. We also identify sparse stellar groups to the east and west of Orion A that are drifting from the central region, possibly a slowly expanding halo of the Orion Nebula Cluster. We confirm previously reported findings of varying line-of-sight distances to different parts of the cloud's Head with associated differences in gas velocity. Three-dimensional movies of star kinematics show contraction of the groups of stars in OMC-1 and global contraction of OMC-123 stars. Overall, the Head of Orion A region exhibits complex motions consistent with theoretical models involving hierarchical gravitational collapse in (possibly turbulent) clouds with OB stellar feedback.
|
astrophysics
|
Assessing the impact of the individual actions performed by soccer players during games is a crucial aspect of the player recruitment process. Unfortunately, most traditional metrics fall short in addressing this task as they either focus on rare actions like shots and goals alone or fail to account for the context in which the actions occurred. This paper introduces (1) a new language for describing individual player actions on the pitch and (2) a framework for valuing any type of player action based on its impact on the game outcome while accounting for the context in which the action happened. By aggregating soccer players' action values, their total offensive and defensive contributions to their team can be quantified. We show how our approach considers relevant contextual information that traditional player evaluation metrics ignore and present a number of use cases related to scouting and playing style characterization in the 2016/2017 and 2017/2018 seasons in Europe's top competitions.
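A toy sketch of the aggregation idea: if each action's impact is expressed as changes in the team's scoring and conceding probabilities (assumed given here; the paper's framework estimates them from the game context), player totals are simple sums. The function and field names are hypothetical:

```python
from collections import defaultdict

def player_contributions(actions):
    """Sum per-action values into offensive/defensive totals per player.

    actions: iterable of (player, d_score, d_concede), where d_score is the
    change in the team's scoring probability caused by the action and
    d_concede the change in its conceding probability.
    """
    totals = defaultdict(lambda: {"offensive": 0.0, "defensive": 0.0})
    for player, d_score, d_concede in actions:
        totals[player]["offensive"] += d_score     # credit for raising P(score)
        totals[player]["defensive"] -= d_concede   # credit for lowering P(concede)
    return dict(totals)
```

Because every on-ball action contributes, this kind of aggregate credits players for passes, dribbles, and tackles, not only shots, which is the context-awareness the abstract argues traditional metrics lack.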
|
statistics
|
Retaining the spatial characteristics of the panchromatic image and the spectral information of the multispectral bands is a critical issue in pansharpening. This paper proposes a pyramid-based deep fusion framework that preserves spectral and spatial characteristics at different scales. The spectral information is preserved by passing the corresponding low-resolution multispectral image as the residual component of the network at each scale. The spatial information is preserved by training the network at each scale with the high frequencies of the panchromatic image alongside the corresponding low-resolution multispectral image. The parameters of different networks are shared across the pyramid in order to add spatial details consistently across scales. The parameters are also shared across fusion layers within a network at a specific scale. Experiments suggest that the proposed architecture outperforms state-of-the-art pansharpening models. The proposed model, code, and dataset are publicly available at https://github.com/sohaibali01/deep_pyramid_fusion.
|
electrical engineering and systems science
|
Increasing attention to autonomous passenger vehicles has also attracted interest in autonomous racing series. Because of this, platforms such as Roborace and the Indy Autonomous Challenge are currently evolving. Electric racecars face the challenge of a limited amount of stored energy within their batteries. Furthermore, the thermodynamical influence of an all-electric powertrain on the race performance is crucial. Severe damage can occur to the powertrain components when thermally overstressed. In this work we present a race-time minimal control strategy deduced from an Optimal Control Problem (OCP) that is transcribed into a Nonlinear Problem (NLP). Its optimization variables stem from the driving dynamics as well as from a thermodynamical description of the electric powertrain. We deduce the necessary first-order Ordinary Differential Equations (ODEs) and form simplified loss models for the implementation within the numerical optimization. The significant influence of the powertrain behavior on the race strategy is shown.
|
electrical engineering and systems science
|
We consider the problem of memory holes in slab allocators, where an item entered into memory occupies more memory than it actually requires due to the difference between the nearest larger slab class size and the size of the entered item. We solve this problem with a greedy algorithm that analyses the pattern of sizes of items previously entered into memory and accordingly re-configures the default slab classes to better suit the learned traffic pattern and minimize memory holes. In our findings, using this approach for a consistent data pattern yielded significant reductions in memory wastage. We consider Memcached, as it is one of the most widely used implementations of slab allocators today and has native support for reconfiguring its default slab classes.
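A minimal sketch of the idea, using a hypothetical greedy chooser that makes the most frequent observed item sizes exact slab class boundaries; the paper's actual algorithm and its Memcached integration may differ:

```python
from collections import Counter

def choose_slab_classes(item_sizes, n_classes):
    """Greedily pick slab class sizes from an observed size histogram.

    Each item is stored in the smallest class >= its size; the gap
    between the two is the 'memory hole' we want to minimize.
    """
    histogram = Counter(item_sizes)
    # Greedy: make the most frequent sizes exact class boundaries, so the
    # heaviest traffic incurs zero waste.
    classes = sorted(size for size, _ in histogram.most_common(n_classes))
    if classes[-1] < max(item_sizes):
        classes.append(max(item_sizes))  # ensure every item fits
    return classes

def total_waste(item_sizes, classes):
    """Memory wasted: sum over items of (chosen class size - item size)."""
    waste = 0
    for size in item_sizes:
        slab = min(c for c in classes if c >= size)
        waste += slab - size
    return waste
```

Comparing `total_waste` under the tuned classes against a default geometric progression (Memcached's growth-factor classes) is the kind of before/after measurement the reductions above refer to.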
|
computer science
|
Predicting future frames of video sequences is challenging due to the complex and stochastic nature of the problem. Video prediction methods based on variational auto-encoders (VAEs) have been a great success, but they require the training data to contain multiple possible futures for an observed video sequence. This is hard to fulfil when videos are captured in the wild, where any given observation only has a determinate future. As a result, training a vanilla VAE model with these videos inevitably causes posterior collapse. To alleviate this problem, we propose a novel VAE structure, dubbed VAE-in-VAE or VAE$^2$. The key idea is to explicitly introduce stochasticity into the VAE. We treat part of the observed video sequence as a random transition state that bridges its past and future, and maximize the likelihood of a Markov Chain over the video sequence under all possible transition states. A tractable lower bound is proposed for this intractable objective function and an end-to-end optimization algorithm is designed accordingly. VAE$^2$ can mitigate the posterior collapse problem to a large extent, as it breaks the direct dependence between future and observation and does not directly regress the determinate future provided by the training data. We carry out experiments on a large-scale dataset called Cityscapes, which contains videos collected from a number of urban cities. Results show that VAE$^2$ is capable of predicting diverse futures and is more resistant to posterior collapse than other state-of-the-art VAE-based approaches. We believe that VAE$^2$ is also applicable to other stochastic sequence prediction problems where training data lack stochasticity.
|
computer science
|
This study examines the influence of a salesperson's customer orientation on customer loyalty. Customer orientation is the approach taken by a salesperson to improve customer relationships and increase sales. Many organizations prefer sales orientation as a strategic approach to increasing sales. Though successful in that objective, sales orientation fails to attract repeat purchases. It has become a necessity to train frontline employees to better understand customer needs, keeping in mind the firm's ultimate objective. This study examines the improvements customer orientation can bring to increase repurchases, thus leading to customer loyalty. The findings suggest that product assortment, long lines of customers, customers' annual income, and the listening skills of the salesperson were the significant antecedents of customer loyalty.
|
statistics
|
The Hochschild cohomology rings for self-injective algebras of tree classes $E_7$ and $E_8$ with finite representation type are described in terms of generators and relations.
|
mathematics
|
Transport of electrons in a bulk metal is usually well captured by their particle-like aspects, while their wave-like nature is commonly harder to observe. Microstructures can be carefully designed to reveal the quantum phase, for example mesoscopic metal rings resembling interferometers. Here we report a new type of phase-coherent oscillation of the out-of-plane magnetoresistance in the layered delafossites PdCoO$_2$ and PtCoO$_2$. The oscillation period is equivalent to that determined by the magnetic flux quantum, $h/e$, threading an area defined by the atomic interlayer separation and the sample width. The phase of the electron wave function in these crystals appears remarkably robust, coherent over macroscopic length scales exceeding 10 $\mu$m and persisting up to elevated temperatures of $T > 50$ K. We show that, while the experimental signal cannot be explained in a standard Aharonov-Bohm analysis, it arises from periodic field-modulation of the out-of-plane hopping. These results demonstrate extraordinary single-particle quantum coherence lengths in the delafossites, and identify a new form of quantum interference in solids.
|
condensed matter
|
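The quoted period relation can be checked numerically: a flux quantum $h/e$ threading the area $d \times w$ gives a field period $\Delta B = (h/e)/(d\,w)$. A minimal sketch; the specific values $d \approx 0.59$ nm and $w = 10\ \mu$m are illustrative assumptions, not figures taken from the abstract.

```python
# Flux-quantum oscillation period for an area set by the interlayer
# spacing d and sample width w:  delta_B = (h/e) / (d * w).
H = 6.62607015e-34   # Planck constant, J s (exact, 2019 SI)
E = 1.602176634e-19  # elementary charge, C (exact, 2019 SI)

def oscillation_period(d, w):
    """Magnetic-field period (tesla) of h/e oscillations threading d*w."""
    return (H / E) / (d * w)

# Illustrative numbers (assumed): d ~ 0.59 nm interlayer spacing,
# w = 10 um sample width -> a period of order 1 T.
delta_B = oscillation_period(0.59e-9, 10e-6)
```

The sub-tesla period for micron-wide samples is what makes the oscillations experimentally resolvable in an ordinary laboratory magnet.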
Recovering 3D phase features of complex, multiple-scattering biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. This trade-off hinders the rapid analysis of living, dynamic biological samples that are often of greatest interest to biological research. Here, we overcome this bottleneck by combining annular intensity diffraction tomography (aIDT) with an approximant-guided deep learning framework. Using a novel physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples of arbitrary size and structure. This approach highlights that large-scale multiple-scattering models can be leveraged in place of acquiring experimental datasets for achieving highly generalizable deep learning models. We devise a new model-based data normalization pre-processing procedure for homogenizing the sample contrast and achieving uniform prediction quality regardless of scattering strength. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework's capabilities on experimental measurements of epithelial buccal cells and Caenorhabditis elegans worms. We highlight the robustness of this approach by evaluating dynamic samples on a living worm video, and we emphasize our approach's generalizability by recovering algae samples evaluated with different experimental setups. To assess the prediction quality, we develop a novel quantitative evaluation metric and show that our predictions are consistent with our experimental measurements and multiple-scattering physics.
|
electrical engineering and systems science
|
Variants of triplet networks are robust entities for learning a discriminative embedding subspace. There exist different triplet mining approaches for selecting the most suitable training triplets. Some of these mining methods rely on the extreme distances between instances, and some others make use of sampling. However, sampling from stochastic distributions of data rather than merely from the existing embedding instances can provide more discriminative information. In this work, we sample triplets from distributions of data rather than from existing instances. We consider a multivariate normal distribution for the embedding of each class. Using Bayesian updating and conjugate priors, we update the distributions of classes dynamically as new mini-batches of training data arrive. The proposed triplet mining with Bayesian updating can be used with any triplet-based loss function, e.g., triplet loss or Neighborhood Component Analysis (NCA) loss. Accordingly, our triplet mining approaches are called Bayesian Updating Triplet (BUT) and Bayesian Updating NCA (BUNCA), depending on which loss function is being used. Experimental results on two public datasets, namely MNIST and histopathology colorectal cancer (CRC), substantiate the effectiveness of the proposed triplet mining method.
|
statistics
|
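The conjugate-prior update underlying the approach above can be sketched as follows. This assumes a known isotropic observation variance so the normal prior on each class mean stays conjugate; the hyperparameters, class bookkeeping, and sampling scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class ClassGaussian:
    """Gaussian over one class's embeddings with a conjugate normal
    prior on the mean (known isotropic observation variance)."""
    def __init__(self, dim, obs_var=1.0, prior_var=10.0):
        self.mu = np.zeros(dim)        # posterior mean of the class mean
        self.prec = 1.0 / prior_var    # posterior precision (per dim)
        self.obs_prec = 1.0 / obs_var

    def update(self, batch):
        """Conjugate update from a (n, dim) mini-batch of embeddings."""
        n = len(batch)
        new_prec = self.prec + n * self.obs_prec
        self.mu = (self.prec * self.mu
                   + n * self.obs_prec * batch.mean(axis=0)) / new_prec
        self.prec = new_prec

    def sample(self, rng):
        """Synthetic embedding: mean uncertainty plus observation noise."""
        var = 1.0 / self.prec + 1.0 / self.obs_prec
        return rng.normal(self.mu, np.sqrt(var))

def sample_triplet(anchor, anchor_cls, gaussians, rng):
    """Anchor from real data; positive/negative drawn from the class
    distributions rather than from existing instances."""
    pos = gaussians[anchor_cls].sample(rng)
    neg_cls = rng.choice([c for c in gaussians if c != anchor_cls])
    return anchor, pos, gaussians[neg_cls].sample(rng)
```

Because precisions add under the update, later mini-batches tighten each class distribution, and sampled positives/negatives track the evolving embedding space.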
We examine the equilibrium fluctuation spectrum of a constituent semiflexible filament segment in a network. The effect of this cross-linking is to modify the mechanical boundary conditions at the end of the filament. We consider the effect of both tensile stress in the network and its elastic compliance. Most significantly, the network's compliance introduces a nonlinear term into the filament Hamiltonian even in the small-bending approximation. We analyze the effect of this nonlinearity upon the filament's fluctuation profile. We find that there are three principal fluctuation regimes, dominated by one of the following: (i) network tension, (ii) filament bending stiffness, or (iii) network compliance. We propose that one can use observed filament fluctuations as a noninvasive probe of network tension, and we provide the necessary response function to quantitatively analyze this sort of "tension microrheology" in cross-linked semiflexible filament networks.
|
condensed matter
|
Clustering is one of the fundamental tasks in data analytics and machine learning. In many situations, different clusterings of the same data set become relevant. For example, different algorithms for the same clustering task may return dramatically different solutions. We are interested in applications in which one clustering has to be transformed into another; e.g., when a gradual transition from an old solution to a new one is required. In this paper, we devise methods for constructing such a transition based on linear programming and network theory. We use a so-called clustering-difference graph to model the desired transformation and provide methods for decomposing the graph into a sequence of elementary moves that accomplishes the transformation. These moves are equivalent to the edge directions, or circuits, of the underlying partition polytopes. Therefore, in addition to a conceptually new metric for measuring the distance between clusterings, we provide new bounds on the circuit diameter of these partition polytopes.
|
mathematics
|
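The clustering-difference graph from the abstract above can be sketched minimally: nodes are cluster labels, and a directed edge $(i, j)$ is weighted by the number of items that must move from cluster $i$ of the old clustering to cluster $j$ of the new one. The naive one-item-per-step transition shown here is an illustrative assumption; the paper's decomposition into circuit moves of the partition polytope is not reproduced.

```python
from collections import Counter

def clustering_difference_graph(old, new):
    """Weighted directed graph of required moves between two clusterings
    of the same items (dicts: item -> cluster label)."""
    assert old.keys() == new.keys()
    return dict(Counter((old[x], new[x])
                        for x in old if old[x] != new[x]))

def naive_transition(old, new):
    """A gradual transformation of `old` into `new`, one item per step.
    (Illustrative only; not the paper's circuit-based decomposition.)"""
    return [(x, old[x], new[x]) for x in old if old[x] != new[x]]
```

Each edge weight tells how many elementary moves any transition must spend on that cluster pair, which is the starting point for the distance and diameter bounds the abstract mentions.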
Light sterile neutrinos with a mass around 1 eV have been studied for many years as a possible explanation of the so called short-baseline neutrino oscillation anomalies. Recently, several neutrino oscillation experiments reported preferences for non-zero values of the mixing angles and squared mass differences for active-sterile mixing, which however are not always in agreement. I review our current knowledge of the light sterile neutrino in the 3+1 model, starting with a separate discussion on the status of the most relevant searches and then analyzing the problems that arise when combining different probes in a global fit. A short summary on the tension with cosmological observations is also provided.
|
high energy physics phenomenology
|
The higher-derivative $\alpha'$ corrections consistent with $O(d,d)$ duality invariance can be completely classified for cosmological, purely time-dependent backgrounds. This result is used to show that there are duality invariant theories featuring string-frame de Sitter vacua as solutions that are non-perturbative in $\alpha'$, thus suggesting that classical string theory may realize de Sitter solutions in an unexpected fashion.
|
high energy physics theory
|