Columns: text (string, 11 to 9.77k characters); label (string, 2 to 104 characters)
Non-autoregressive models generate target words in parallel, achieving faster decoding at the cost of translation accuracy. A promising approach to remedying flawed non-autoregressive translations is to train a conditional masked translation model (CMTM) and refine the generated results over several iterations. Unfortunately, such an approach hardly considers the \textit{sequential dependency} among target words, which inevitably degrades translation quality. Hence, instead of solely training a Transformer-based CMTM, we propose a Self-Review Mechanism to infuse sequential information into it. Concretely, we insert a left-to-right mask into the same decoder of the CMTM, and then induce it to autoregressively review whether each word generated by the CMTM should be replaced or kept. The experimental results (WMT14 En$\leftrightarrow$De and WMT16 En$\leftrightarrow$Ro) demonstrate that our model requires dramatically less training computation than the typical CMTM, while outperforming several state-of-the-art non-autoregressive models by over 1 BLEU. Through knowledge distillation, our model even surpasses a typical left-to-right Transformer model, while significantly speeding up decoding.
computer science
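The self-review idea above hinges on running the same decoder under two different attention masks. The PyTorch sketch below is a minimal illustration of that contrast only: the decoder layer, tensor shapes, and the binary replace-or-keep head are invented placeholders, not the authors' architecture.

```python
# Illustrative sketch (not the authors' code): the same self-attention block can be
# driven with two attention masks -- a fully visible mask for CMTM-style parallel
# prediction, and a left-to-right (causal) mask for an autoregressive review pass.
import torch
import torch.nn as nn

d_model, n_heads, seq_len = 512, 8, 6
decoder_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

x = torch.randn(2, seq_len, d_model)           # hypothetical decoder inputs (batch, T, d)

full_mask = torch.zeros(seq_len, seq_len)       # 0 everywhere = attend to all positions (CMTM pass)
causal_mask = torch.triu(                       # -inf above the diagonal = left-to-right only
    torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

cmtm_states = decoder_layer(x, src_mask=full_mask)      # predict all masked words in parallel
review_states = decoder_layer(x, src_mask=causal_mask)  # review each word given its left context

# A binary "replace or keep" head over the review states (purely illustrative):
review_head = nn.Linear(d_model, 2)
keep_or_replace = review_head(review_states).argmax(-1)  # (batch, T) decisions
print(keep_or_replace.shape)
```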
In this paper, we extend the original criss-cross algorithms for computing the $\varepsilon$-pseudospectral abscissa and radius to general spectral value sets. By proposing new root-finding-based strategies for the horizontal/radial search subphases, we significantly reduce the number of expensive Hamiltonian eigenvalue decompositions incurred, which typically translates to meaningful speedups in overall computation times. Furthermore, and partly necessitated by our root-finding approach, we develop a new way of handling the singular pencils or problematic interior searches that can arise when computing the $\varepsilon$-spectral value set radius. Compared to would-be direct extensions of the original algorithms, that is, without our additional modifications, our improved criss-cross algorithms are not only noticeably faster but also more robust and numerically accurate, for both spectral value set and pseudospectral problems.
mathematics
We analyze a generalization of the hard sphere dipole system in two dimensions in which the range of the interaction can be varied. We focus on the system in the limit in which the interaction becomes increasingly short-ranged while the temperature becomes low. By using a cluster expansion and taking advantage of low temperatures to perform saddle-point approximations, we argue that a well-defined double limit exists in which the only structures that contribute to the free energy are chains. We then argue that the dominance of chain structures is equivalent to the dominance of chain diagrams in a cluster expansion, but only if the expansion is performed around a hard sphere system (rather than the standard ideal gas). We show that this leads to non-standard factorization rules for diagrams, and use this to construct a closed-form expression for the free energy at low densities. We then compare this construction to several models previously developed for the hard sphere dipole system in the regime where chain structures dominate, and argue that the comparison provides evidence in favor of one model over the others. We also use this construction to incorporate some finite density effects through the hard sphere radial distribution function, and analyze the impact of these effects on chain length and the equation of state.
condensed matter
Nowadays, the prevalence of sensor networks has enabled tracking of the states of dynamic objects for a wide spectrum of applications from autonomous driving to environmental monitoring and urban planning. However, tracking real-world objects often faces two key challenges: First, due to the limitation of individual sensors, state estimation needs to be solved in a collaborative and distributed manner. Second, the objects' movement behavior is unknown, and needs to be learned using sensor observations. In this work, for the first time, we formally formulate the problem of simultaneous state estimation and behavior learning in a sensor network. We then propose a simple yet effective solution to this new problem by extending the Gaussian process-based Bayes filters (GP-BayesFilters) to an online, distributed setting. The effectiveness of the proposed method is evaluated on tracking objects with unknown movement behaviors using both synthetic data and data collected from a multi-robot platform.
computer science
We consider zero temperature packings of soft spheres that undergo a jamming to unjamming transition as a function of packing fraction. We compare differences in the structure, as measured from the contact statistics, between a finite subsystem of a large packing and a whole packing with periodic boundaries of equivalent size and pressure. We find that the fluctuations of the ensemble of whole packings are smaller than those of the ensemble of subsystems. Convergence of these two quantities appears to occur only for very large systems, which are usually not attainable in numerical simulations. Finding differences between packings in two dimensions and three dimensions, we also consider four dimensions and mean-field models, and find that they show similar system size dependence. Mean-field critical exponents appear to be consistent with the 3d and 4d packings, suggesting they are above the upper critical dimension. We also find that the convergence as a function of system size to the thermodynamic limit is characterized by two different length scales. We argue that this is the result of the system being above the upper critical dimension.
condensed matter
In the presence of heterogeneous data, where randomly rotated objects fall into multiple underlying categories, it is challenging to simultaneously classify them into clusters and synchronize them based on pairwise relations. This gives rise to the joint problem of community detection and synchronization. We propose a series of semidefinite relaxations, and prove their exact recovery when extending the celebrated stochastic block model to this new setting where both rotations and cluster identities are to be determined. Numerical experiments demonstrate the efficacy of our proposed algorithms and confirm our theoretical result which indicates a sharp phase transition for exact recovery.
statistics
We study the motion of an interface separating two regions with different electronic orders following a short duration pump that drives the system out of equilibrium. Using a generalized Ginzburg-Landau approach and assuming that the main effect of the nonequilibrium drive is to transiently heat the system we address the question of the direction of interface motion; in other words, which ordered region expands and which contracts after the pump. Our analysis includes the effects of differences in free energy landscape and in order parameter dynamics and identifies circumstances in which the drive may act to increase the volume associated with the subdominant order, for example when the subdominant order has a second order free energy landscape while the dominant order has a first order one.
condensed matter
We consider a pentadiagonal matrix which will be described in the text. We demonstrate practical methods for obtaining weak coupling expressions for the lowest eigenvector in terms of the parameters in the matrix, v and w. It is found that the expressions simplify if the wave function coefficients are put in the denominator.
physics
Conventionally, the observed charged leptons are regarded as simultaneous eigenstates of "mass" and "family". Against this view, we discuss the possibility that the observed charged leptons $e_i=(e, \mu, \tau)$ are not identical with the eigenstates of family $e^0_\alpha =(e_1^0, e_2^0, e_3^0)$. Here, we define the eigenstates of family, $e^0_\alpha$, as the states which interact with family gauge bosons in the mass eigenstates of the broken U(3)$_{family}$ gauge symmetry. There is at present no experimental evidence for $e^0_1$-$e^0_2$ mixing, and we have only an upper limit on the mixing from the present experimental data. We conclude that the $e$-$\mu$ mixing angle $\theta$ must satisfy $\theta \lesssim 10^{-3}$. Thus, we cannot exclude the possibility $\theta\neq 0$. If a smaller upper limit on $\theta$ is desired, a rare-decay search for $\mu \rightarrow e + \gamma$ will be useful.
high energy physics phenomenology
We show that the set of frequently universal harmonic functions on a tree $T$ contains, apart from $0$, a vector space which is dense in the space of harmonic functions on $T$, seen as a subset of $C^T$. In order to prove this, we replace the complex plane $C$ by an arbitrary separable Fréchet space $E$ and develop the whole theory in that more general setting.
mathematics
Given a log Calabi-Yau surface $Y$ with maximal boundary $D$ and distinguished complex structure, we explain how to construct a mirror Lefschetz fibration $w: M \to \mathbb{C}$, where $M$ is a Weinstein four-manifold, such that the directed Fukaya category of $w$ is isomorphic to $D^b \text{Coh}(Y)$, and the wrapped Fukaya category $\mathcal{W} (M)$ is isomorphic to $D^b \text{Coh}(Y \backslash D)$. We construct an explicit isomorphism between $M$ and the total space of the almost-toric fibration arising in the work of Gross-Hacking-Keel; when $D$ is negative definite this is expected to be the Milnor fibre of a smoothing of the dual cusp of $D$. We also match our mirror potential $w$ with existing constructions for a range of special cases of $(Y,D)$, notably in work of Auroux-Katzarkov-Orlov and Abouzaid.
mathematics
We show that it makes sense to coarse-grain hadronic interactions such as $\pi\pi$ and $\pi N$ reactions, following previous work on NN scattering. Moreover, if the interaction is taken to be given by chiral dynamics at long distances $r > r_c$, with $r_c$ larger than the elementary radii of the interacting hadrons, the unknown short-distance region $r< r_c$ is characterized by a {\it finite} number of fitting parameters. The number of independent parameters needed for a presumably complete description of scattering data for CM energies below $\sqrt{s}$ is found to be $N_{\rm Par} = N_S \times N_I \times (p r_c )^2 /2 $, with $N_S$ and $N_I$ the number of spin and isospin channels, respectively, and $p$ the CM momentum. Therefore, for an experiment (or set of experiments) with a total number of data $N_{\rm Dat}$, the number of degrees of freedom involved in a $\chi^2$-fit is $\nu = N_{\rm Dat}-N_{\rm Par}$, and confidence levels can be obtained accordingly by standard means. Namely, a $1 \sigma$ confidence level corresponds to $\chi_{\rm min}^2/\nu \in (1- \sqrt{2/\nu},1+\sqrt{2/\nu})$. We discuss the approach for $\pi\pi$ and $\pi N$ with an eye on a data selection program and the eventual validation of chiral symmetry.
high energy physics phenomenology
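A worked numerical reading of the quoted formulas, with made-up inputs (the channel counts, cutoff radius, momentum, and data count below are assumptions for illustration, not values from the paper):

```python
# Worked illustration of the counting formula quoted in the abstract,
# N_Par = N_S * N_I * (p * r_c)^2 / 2, with purely illustrative inputs.
import math

hbarc = 197.327          # MeV fm, conversion constant
N_S, N_I = 1, 3          # assumed numbers of spin and isospin channels
r_c = 1.0                # fm, assumed short-distance cutoff radius
p = 300.0 / hbarc        # CM momentum of 300 MeV expressed in fm^-1

N_par = N_S * N_I * (p * r_c) ** 2 / 2
N_dat = 500              # assumed total number of data points in the fit
nu = N_dat - N_par       # degrees of freedom of the chi^2 fit

band = (1 - math.sqrt(2 / nu), 1 + math.sqrt(2 / nu))   # 1-sigma band for chi^2_min / nu
print(f"N_par ~ {N_par:.1f}, nu ~ {nu:.1f}, "
      f"1-sigma band for chi2/nu: ({band[0]:.3f}, {band[1]:.3f})")
```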
Deep active inference has been proposed as a scalable approach to perception and action that deals with large policy and state spaces. However, current models are limited to fully observable domains. In this paper, we describe a deep active inference model that can learn successful policies directly from high-dimensional sensory inputs. The deep learning architecture optimizes a variant of the expected free energy and encodes the continuous state representation by means of a variational autoencoder. We show, in the OpenAI benchmark, that our approach has comparable or better performance than deep Q-learning, a state-of-the-art deep reinforcement learning algorithm.
computer science
A $q$-deformed Weyl-Heisenberg algebra is used to define a deformed displacement operator, giving rise to a naturally normalized type of nonlinear coherent states. Robust maximally entangled deformed coherent states are studied, and the effect of such a deformation on the amount of entanglement is discussed. The analogy between environment decoherence and algebra deformation is made through the deformation parameter.
quantum physics
Recent observational constraints indicate that primordial black holes (PBHs) with the mass scale $\sim 10^{-12}M_{\odot}$ can explain most of the dark matter in the Universe. To produce this kind of PBHs, we need an enhancement of the primordial scalar curvature perturbations to the order of ${\mathcal{O}(10^{-2})}$ at the scale $ k \sim 10^{12}~\rm Mpc^{-1}$. Here, we investigate the production of PBHs and induced gravitational waves (GWs) in the framework of Galileon inflation. We solve the Mukhanov-Sasaki equation numerically to obtain the primordial scalar power spectrum. In addition, we estimate the PBH abundance $f_{\text{PBH}}^{\text{peak}}$ as well as the energy density parameter $\Omega_{\rm GW,0}$ of induced GWs. Interestingly, for a special set of model parameters, we estimate the mass scale and the abundance of PBHs as $\sim{\cal O}(10^{-13})M_{\odot}$ and $f_{\text{PBH}}^{\text{peak}}=0.96$, respectively. This confirms that the mechanism of PBH production in Galileon inflation can account for most of the dark matter. Furthermore, we evaluate the GW energy density parameter and conclude that it behaves like a power-law function $\Omega_{\rm GW}\sim (f/f_c)^n$, where in the infrared limit $f\ll f_{c}$ the power index reads $n=3-2/\ln(f_c/f)$.
astrophysics
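A quick numerical check of the quoted infrared scaling; the pivot frequency used below is an arbitrary placeholder, not the paper's value:

```python
# Evaluate the quoted infrared power index n(f) = 3 - 2/ln(f_c/f) for f << f_c.
# The pivot frequency f_c below is an assumed placeholder, not the paper's value.
import numpy as np

f_c = 1e-2                                   # Hz, assumed pivot frequency
f = np.array([1e-6, 1e-5, 1e-4, 1e-3])       # Hz, frequencies deep in the infrared limit

n = 3.0 - 2.0 / np.log(f_c / f)
for fi, ni in zip(f, n):
    print(f"f = {fi:.0e} Hz -> n = {ni:.3f}")   # n approaches 3 as f/f_c -> 0
```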
Hand gestures are an intuitive, socially acceptable, and non-intrusive interaction modality in Mixed Reality (MR) and smartphone-based applications. Unlike speech interfaces, they tend to perform well even in shared and public spaces. Hand gestures can also be used to interact with smartphones in situations where the user's ability to physically touch the device is impaired. However, accurate gesture recognition typically requires state-of-the-art deep learning models or expensive sensors. Despite the robustness of these deep learning models, they are computationally heavy and memory hungry, and obtaining real-time performance on-device without additional hardware is still a challenge. To address this, we propose AirPen: an analogue to pen on paper, but in air, for in-air writing and gestural commands that works seamlessly in First and Second Person View. The models are trained on a GPU machine and ported to an Android smartphone. AirPen comprises three deep learning models that work in tandem: MobileNetV2 for hand localisation, our custom fingertip regression architecture, followed by a Bi-LSTM model for gesture classification. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80% with an average latency of only 0.12 s.
computer science
The crystal structure of BiFeO3/BaxSr1-xTiO3 (BFO/BST) heterostructures with x = 0.2, 0.6 and 0.8, grown on single-crystal MgO (001) substrates, was investigated by x-ray diffraction and Raman spectroscopy in order to determine the influence of mismatch-induced strains and spontaneous polarization in the BST buffer layers on the BFO layers. The lattice parameter of the BFO layers was shown to decrease with increasing concentration of Ba ions, despite the increasing in-plane lattice parameters of the tetragonal unit cells of the BST layers. The rhombohedral angle of the crystal structure of the BFO layers increases towards that of the ideal cubic perovskite structure with the appearance of the built-in electric field induced by the spontaneous polarization in the buffer layers. This result provides a remarkable tool for controlling the polarization in BFO layers, and in other ferroelectric films in general, by changing the built-in electric field from the ferroelectric buffer layer without changing the single-crystal substrate.
condensed matter
This paper aims to develop a Fault Tolerant Control (FTC) architecture for a multirotor UAV with a damaged actuator that can be applied across multirotor platforms based on their Attainable Virtual Control Set (AVCS). The research studies the AVCS and identifies the parameters that limit the controllability of a multirotor UAV after an actuator failure. Based on this study of controllability, the requirements for an FTC are laid out. The implemented control solution will be tested on a quadrotor, the Intel Shooting Star UAV platform, in indoor and outdoor flights using only the onboard sensors. The attitude control solution is implemented with reduced attitude control, and the control allocation is performed with pseudo-inverse based model inversion with sequential desaturation to ensure tilt priority. The model is identified with an offline Ordinary Least Squares routine and subsequently updated with the Recursive Least Squares method. An offline calibration routine is implemented to correct for the IMU's offset from the centre of rotation, which otherwise causes an accelerometer bias under the high-speed spin that follows a failure in a quadrotor.
computer science
In this paper, we focus on learning the underlying product graph structure from multidomain training data. We assume that the product graph is formed from a Cartesian graph product of two smaller factor graphs. We then pose the product graph learning problem as the factor graph Laplacian matrix estimation problem. To estimate the factor graph Laplacian matrices, we assume that the data is smooth with respect to the underlying product graph. When the training data is noise free or complete, learning factor graphs can be formulated as a convex optimization problem, which has an explicit solution based on the water-filling algorithm. The developed framework is illustrated using numerical experiments on synthetic data as well as real data related to air quality monitoring in India.
electrical engineering and systems science
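A useful fact behind the Cartesian-product assumption is that the product-graph Laplacian is the Kronecker sum of the factor Laplacians; the short NumPy sketch below verifies this on toy factor graphs (it is not the paper's estimation algorithm, and the graphs are invented):

```python
# The Laplacian of a Cartesian product graph is the Kronecker sum of the factor
# Laplacians: L = kron(L1, I) + kron(I, L2).  Small illustrative check.
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian from an adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

A1 = np.array([[0, 1, 0],       # path graph on 3 nodes (assumed toy factor graph)
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
A2 = np.array([[0, 1],          # single edge on 2 nodes (assumed toy factor graph)
               [1, 0]], dtype=float)

L1, L2 = laplacian(A1), laplacian(A2)
I1, I2 = np.eye(len(A1)), np.eye(len(A2))

L_product = np.kron(L1, I2) + np.kron(I1, L2)   # 6 x 6 Laplacian of the product graph

# Its eigenvalues are all pairwise sums of the factor eigenvalues:
ev = np.sort(np.linalg.eigvalsh(L_product))
ev_sum = np.sort((np.linalg.eigvalsh(L1)[:, None] + np.linalg.eigvalsh(L2)[None, :]).ravel())
print(np.allclose(ev, ev_sum))   # True
```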
Bayesian inference for models with intractable likelihood functions represents a challenging suite of problems in modern statistics. In this work we analyse the Conway-Maxwell-Poisson (COM-Poisson) distribution, a two-parameter generalisation of the Poisson distribution. COM-Poisson regression modelling allows the flexibility to model dispersed count data as part of a generalised linear model (GLM) with a COM-Poisson response, where exogenous covariates control the mean and dispersion level of the response. The major difficulty with COM-Poisson regression is that the likelihood function contains multiple intractable normalising constants and is not amenable to standard inference and MCMC techniques. Recent work by Chanialidis et al. (2017) developed a rejection sampling algorithm to draw random variates from the COM-Poisson distribution. We provide a new rejection sampler for the COM-Poisson distribution which significantly reduces the CPU time required to perform inference for COM-Poisson regression models. A novel extension of this work shows that, for any intractable likelihood function with an associated rejection sampler, it is possible to construct unbiased estimators of the intractable likelihood, which proves useful for model selection or for use within pseudo-marginal MCMC algorithms (Andrieu and Roberts, 2009). We demonstrate all of these methods on a real-world dataset of takeover bids.
statistics
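To see where the intractability comes from, the snippet below evaluates the COM-Poisson pmf by brute-force truncation of its normalising constant; this is only an illustration of the problem, not the rejection sampler or the unbiased likelihood estimator developed in the paper, and the parameter values are arbitrary:

```python
# The COM-Poisson pmf is  P(Y = y) = lambda^y / (y!)^nu / Z(lambda, nu),  where
# Z(lambda, nu) = sum_{j>=0} lambda^j / (j!)^nu has no closed form in general.
# This sketch only truncates the infinite sum numerically to illustrate the cost
# a regression model with many (lambda, nu) pairs would face.
import math

def com_poisson_unnormalised(y, lam, nu):
    return lam ** y / math.factorial(y) ** nu

def com_poisson_log_Z(lam, nu, truncation=200):
    # Brute-force truncated normalising constant via a log-sum-exp.
    terms = [y * math.log(lam) - nu * math.lgamma(y + 1) for y in range(truncation)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

lam, nu = 3.0, 1.5          # nu > 1: under-dispersed relative to the Poisson
logZ = com_poisson_log_Z(lam, nu)
pmf = [com_poisson_unnormalised(y, lam, nu) / math.exp(logZ) for y in range(10)]
print(round(sum(pmf), 4), [round(p, 4) for p in pmf])
```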
A thermally isolated quantum system undergoes unitary evolution by interacting with an external work source. The two-point energy measurement (TPM) protocol defines the work exchanged between the system and the work source by performing ideal energy measurements on the system before, and after, the unitary evolution. However, the ideal energy measurements used in the TPM protocol ultimately result from a unitary coupling with a measurement apparatus, which requires an interaction with an external work source. For the TPM protocol to be self-consistent, we must be able to perform the TPM protocol on the compound of system plus apparatus, thus revealing the total work distribution, such that when ignoring the apparatus degrees of freedom, we recover the original TPM work distribution for the system of interest. In the present manuscript, we show that such self-consistency is satisfied so long as the apparatus is initially prepared in an energy eigenstate. Moreover, we demonstrate that if the apparatus Hamiltonian is equivalent to the "pointer observable", then: (i) the total work distribution will satisfy the "unmeasured" first law of thermodynamics for all system states and system-only unitary processes; and (ii) the total work distribution will be identical to the system-only work distribution, for all system states and system-only unitary processes, if and only if the unmeasured work due to the unitary coupling between system and apparatus is zero for all system states.
quantum physics
We present a systematic investigation of parton-shower and matching perturbative uncertainties for Higgs-boson production via vector-boson fusion. To this end we employ different generators at next-to-leading order QCD accuracy matched with the shower Monte Carlo programs PYTHIA8 and HERWIG7, as well as a next-to-next-to-leading order QCD calculation. We thoroughly analyse the intrinsic sources of uncertainty within each generator, and then compare predictions among the different tools using the respective recommended setups. Within typical vector-boson fusion cuts, the resulting uncertainties on observables that are accurate to next-to-leading order are at the 10% level for rates and even smaller for shapes. For observables sensitive to extra radiation effects, uncertainties of about 20% are found. We furthermore show that a specific recoil scheme is needed when PYTHIA8 is employed, in order not to encounter unphysical enhancements for these observables. We conclude that for vector-boson fusion processes an assessment of the uncertainties of next-to-leading-order simulations matched to parton showers based only on the variation of renormalisation, factorisation and shower scales systematically underestimates their true size.
high energy physics phenomenology
We introduce a new model of correlated randomly growing graphs and study the fundamental questions of detecting correlation and estimating aspects of the correlated structure. The model is simple and starts with any model of randomly growing graphs, such as uniform attachment (UA) or preferential attachment (PA). Given such a model, a pair of graphs $(G_1, G_2)$ is grown in two stages: until time $t_{\star}$ they are grown together (i.e., $G_1 = G_2$), after which they grow independently according to the underlying growth model. We show that whenever the seed graph has an influence in the underlying graph growth model---this has been shown for PA and UA trees and is conjectured to hold broadly---then correlation can be detected in this model, even if the graphs are grown together for just a single time step. We also give a general sufficient condition (which holds for PA and UA trees) under which detection is possible with probability going to $1$ as $t_{\star} \to \infty$. Finally, we show for PA and UA trees that the amount of correlation, measured by $t_{\star}$, can be estimated with vanishing relative error as $t_{\star} \to \infty$.
mathematics
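A toy simulation of the growth model described above, with preferential attachment as the underlying rule; sizes and seeds are arbitrary, and the code is illustrative rather than the paper's detection procedure:

```python
# Toy simulation of the correlated growth model: grow a preferential-attachment
# (PA) tree up to time t_star, copy it, then let the two copies grow independently
# under the same PA rule.
import random
import copy

def pa_step(parents, degrees):
    """Attach a new node to an existing node chosen proportionally to its degree."""
    new = len(parents)
    target = random.choices(range(new), weights=degrees[:new], k=1)[0]
    parents.append(target)
    degrees[target] += 1
    degrees.append(1)

def grow(parents, degrees, steps):
    for _ in range(steps):
        pa_step(parents, degrees)

random.seed(0)
t_star, t_final = 20, 200
parents, degrees = [None, 0], [1, 1]          # start from a single edge (nodes 0 and 1)

grow(parents, degrees, t_star - 2)            # shared growth until the tree has t_star nodes
G1 = (copy.copy(parents), copy.copy(degrees))
G2 = (copy.copy(parents), copy.copy(degrees))
grow(*G1, t_final - t_star)                   # independent growth afterwards
grow(*G2, t_final - t_star)

shared = sum(p1 == p2 for p1, p2 in zip(G1[0], G2[0]))
print(f"nodes with identical parent in both trees: {shared}/{t_final}")
```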
We demonstrate fast (0.25 s/pixel) nanoscale chemical imaging in aqueous solution via tip-enhanced Raman scattering (TERS), with sub-15 nm spatial resolution. The substrate consists of 4-mercaptobenzonitrile-functionalized monocrystalline gold triangular platelets immersed in H2O, which we map using a gold-coated AFM probe illuminated using a 633 nm laser source. We find that the recorded TERS images trace the enhanced local optical fields that are sustained towards the edges and corners of the triangles. In effect, we directly map local optical fields of a plasmonic substrate in aqueous solution through molecular TERS. Overall, our described platform and approach may generally be used for chemical and biological imaging, and potentially, to follow chemical transformations at solid-liquid interfaces through TERS.
physics
Quantum simulation is one of the core areas in the development of quantum technologies. Among the various simulation platforms, integrated waveguides have proven to be a powerful and versatile tool. One important aspect of quantum simulation is the analysis of open quantum systems. In most practical applications it is in fact essential to understand and control the dynamics between the quantum system and its environment. Typical effects of interest for example involve decoherence or memory effects, which lead to non-Markovian behavior. In this work we establish integrated waveguides as a tool for the simulation of open quantum systems. Using polarization as well as a photon's path degree of freedom, we implement the coupling of a qubit with a low dimensional discrete environment. Based on a measure of non-Markovianity, we define and quantify specific types of information and information transfer between system and environment. Experimentally we implement and verify non-Markovian evolution in an optical waveguide array and analyze types of information flow between system and environment realizable through integrated photonics.
quantum physics
We consider the partial data inverse boundary problem for the Schr\"odinger operator at a frequency $k>0$ on a bounded domain in $\mathbb{R}^n$, $n\ge 3$, with impedance boundary conditions. Assuming that the potential is known in a neighborhood of the boundary, we first show that the knowledge of the partial Robin-to-Dirichlet map at the fixed frequency $k>0$ along an arbitrarily small portion of the boundary, determines the potential in a logarithmically stable way. We prove, as the principal result of this work, that the logarithmic stability can be improved to the one of H\"older type in the high frequency regime.
mathematics
The rather elusive High-Frequency Quasi-Periodic Oscillations (HFQPOs) observed in the X-ray lightcurves of black holes have been seen in a wide range of frequencies, even within one source. Notably, HFQPOs have also been detected in "pairs" with a close-to-integer ratio between the frequencies. The aim of this paper is to investigate some of the possible observables that we could obtain from having the Rossby Wave Instability (RWI) active in the accretion disc surrounding the compact object. Using the newly developed GR-AMRVAC code, able to follow the evolution of the RWI in a full general relativistic framework, we explore how the RWI can reproduce observed HFQPO frequency ratios and whether it is compatible with the observations. In order to model the emission coming from the disc, we have linked our general relativistic simulations to the general relativistic ray-tracing GYOTO code and delivered synthetic observables that can be confronted with actual data from binary systems hosting HFQPOs. We have demonstrated in our study that changes in the physical conditions prevailing in the part of the disc where the RWI can be triggered lead to various dominant RWI modes whose ratios recover the frequency ratios observed in various X-ray binary systems. In addition, we have also highlighted that when the RWI is triggered near the last stable orbit of a spinning black hole, the amplitude of the X-ray modulation increases with the spin of the black hole. Revisiting published data on X-ray binary systems, we show that this type of relationship actually exists in the five systems where an indirect measure of the spin of the black hole is available.
astrophysics
Final results are reported from operation of the PICO-60 C$_3$F$_8$ dark matter detector, a bubble chamber filled with 52 kg of C$_3$F$_8$ located in the SNOLAB underground laboratory. The chamber was operated at thermodynamic thresholds as low as 1.2 keV without loss of stability. A new blind 1404-kg-day exposure at 2.45 keV threshold was acquired with approximately the same expected total background rate as the previous 1167-kg-day exposure at 3.3 keV. This increased exposure is enabled in part by a new optical tracking analysis to better identify events near detector walls, permitting a larger fiducial volume. These results set the most stringent direct-detection constraint to date on the WIMP-proton spin-dependent cross section at 2.5 $\times$ 10$^{-41}$ cm$^2$ for a 25 GeV WIMP, and improve on previous PICO results for 3-5 GeV WIMPs by an order of magnitude.
astrophysics
We consider the maximum information measurable from the decay distributions of polarised baryon decays via amplitude analysis in the helicity formalism. We focus in particular on the analytical study of the $\Lambda^+_c \to pK^-\pi^+$ decay distributions, demonstrating that the full information on its decay amplitudes can be extracted from its distributions, allowing a simultaneous measurement of both the helicity amplitudes and the polarisation vector. This opens the possibility of using the $\Lambda^+_c \to pK^-\pi^+$ decay for applications ranging from New Physics searches to low-energy QCD studies, in particular its use as an absolute polarimeter for the $\Lambda^+_c$ baryon. This result also holds for baryon decays having the same spin structure, and it is cross-checked numerically by means of a toy amplitude fit with Monte Carlo pseudo-data.
high energy physics phenomenology
Learning from the data stored in a database is an important function increasingly available in relational engines. Methods using lower-precision input data are of special interest given their overall higher efficiency, but, in databases, these methods have a hidden cost: the quantization of the real value into a smaller number is an expensive step. To address the issue, in this paper we present MLWeaving, a data structure and hardware acceleration technique intended to speed up learning of generalized linear models in databases. MLWeaving provides a compact, in-memory representation enabling the retrieval of data at any level of precision. MLWeaving also takes advantage of the increasing availability of FPGA-based accelerators to provide a highly efficient implementation of stochastic gradient descent. The solution adopted in MLWeaving is more efficient than existing designs in terms of space (since it can process any resolution on the same design) and resources (via the use of bit-serial multipliers). MLWeaving also enables the runtime tuning of precision, instead of a fixed precision level during the training. We illustrate this using a simple, dynamic precision schedule. Experimental results show MLWeaving achieves up to 16x performance improvement over low-precision CPU implementations of first-order methods.
computer science
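The "any level of precision" property can be pictured with a bit-sliced layout: store the quantised values as bit-planes, most significant first, and read back only as many planes as the desired precision. The NumPy sketch below illustrates that idea only; the actual MLWeaving design concerns the in-memory layout and FPGA bit-serial arithmetic:

```python
# Sketch of a bit-sliced ("weaved") layout: quantised values are stored as
# bit-planes (most significant bit first), so an arbitrary precision can be read
# back by consuming only the first few planes.  Illustrative NumPy only.
import numpy as np

BITS = 8
x = np.random.rand(6).astype(np.float32)            # features normalised to [0, 1)
q = np.floor(x * (1 << BITS)).astype(np.uint32)     # 8-bit fixed-point quantisation

# "Weave": plane k holds bit (BITS-1-k) of every value, MSB plane first.
planes = np.stack([(q >> (BITS - 1 - k)) & 1 for k in range(BITS)])

def dequantise(planes, precision):
    """Rebuild an approximation using only the first `precision` bit-planes."""
    acc = np.zeros(planes.shape[1], dtype=np.float64)
    for k in range(precision):
        acc += planes[k] * 2.0 ** (-(k + 1))        # weight 1/2, 1/4, ... per plane
    return acc

for prec in (2, 4, 8):
    err = np.abs(dequantise(planes, prec) - x).max()
    print(f"{prec}-bit read-back, max abs error = {err:.4f}")
```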
In this paper, we define the holographic multipartite entanglement entropy for $N$ separated subsystems living in a compact $\text{CFT}_d$ space-time. In the large $N$ limit, we find that the first-order holographic entanglement entropy perturbation is proportional to the change in the length of the subsystem and has a symmetric (Gaussian) distribution. From that finding, we propose to construct two-point correlation functions of holographic entanglement entropy fluctuations analogous to those used in cosmic microwave background (CMB) temperature fluctuation analyses. Using the first law of thermodynamics, we may correlate tiny changes in entanglement entropy with the temperature fluctuations. By comparing with Planck 2018 CMB data from $j=2$ to $j=2499$, where $j$ denotes the multipole moment, we extract the distribution of the entangling region size that corresponds to the temperature fluctuations. Since the distribution of the entangling region size can be interpreted as the CMB temperature fluctuations, we conclude that entanglement might play a role in the quantum aspects of cosmology.
high energy physics theory
Online field experiments are the gold-standard way of evaluating changes to real-world interactive machine learning systems. Yet our ability to explore complex, multi-dimensional policy spaces - such as those found in recommendation and ranking problems - is often constrained by the limited number of experiments that can be run simultaneously. To alleviate these constraints, we augment online experiments with an offline simulator and apply multi-task Bayesian optimization to tune live machine learning systems. We describe practical issues that arise in these types of applications, including biases that arise from using a simulator and assumptions for the multi-task kernel. We measure empirical learning curves which show substantial gains from including data from biased offline experiments, and show how these learning curves are consistent with theoretical results for multi-task Gaussian process generalization. We find that improved kernel inference is a significant driver of multi-task generalization. Finally, we show several examples of Bayesian optimization efficiently tuning a live machine learning system by combining offline and online experiments.
statistics
We propose practical transceiver structures for double-sided massive multiple-input-multiple-output (MIMO) systems. Unlike standard massive MIMO, both transmit and receive sides are equipped with high-dimensional antenna arrays. We leverage the multi-layer filtering architecture and propose novel layered transceiver schemes with practical channel state information requirements to simplify the complexity of our double-sided massive MIMO system. We conduct a comprehensive simulation campaign to investigate the performance of the proposed transceivers under different channel propagation conditions and to identify the most suitable strategy. Our results show that the covariance matrix eigenfilter design at the outer transceiver layer combined with maximum eigenmode transmission precoding/minimum mean square error combining at the inner transceiver layer yields the best achievable sum rate performance for different propagation conditions and multi-user interference levels.
electrical engineering and systems science
Large, pre-trained generative models have been increasingly popular and useful to both the research and wider communities. Specifically, BigGANs (class-conditional Generative Adversarial Networks trained on ImageNet) achieved excellent, state-of-the-art capability in generating realistic photos. However, fine-tuning or training BigGANs from scratch is practically impossible for most researchers and engineers because (1) GAN training is often unstable and prone to mode collapse; and (2) the training requires a significant amount of computation, 256 Google TPUs for 2 days or 8xV100 GPUs for 15 days. Importantly, many pre-trained generative models, both in NLP and image domains, were found to contain biases that are harmful to society. Thus, we need computationally feasible methods for modifying and re-purposing these huge, pre-trained models for downstream tasks. In this paper, we propose a cost-effective optimization method for improving and re-purposing BigGANs by fine-tuning only the class-embedding layer. We show the effectiveness of our model-editing approach in three tasks: (1) significantly improving the realism and diversity of samples of complete mode-collapse classes; (2) re-purposing ImageNet BigGANs for generating images for Places365; and (3) de-biasing or improving the sample diversity for selected ImageNet classes.
computer science
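The core recipe, freezing every generator weight except the class-embedding table, can be sketched in a few lines of PyTorch. The toy conditional generator below is a stand-in for BigGAN and the loss is a placeholder; only the freeze-and-optimise pattern is the point:

```python
# Minimal sketch of the "fine-tune only the class embedding" idea: freeze every
# generator weight and optimise just the class-embedding table.  The tiny
# generator below is a stand-in, not the actual BigGAN architecture.
import torch
import torch.nn as nn

class ToyConditionalGenerator(nn.Module):
    def __init__(self, n_classes=10, z_dim=16, emb_dim=16, out_dim=64):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, emb_dim)   # the only part we will train
        self.net = nn.Sequential(nn.Linear(z_dim + emb_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, z, y):
        return self.net(torch.cat([z, self.class_emb(y)], dim=-1))

G = ToyConditionalGenerator()
for p in G.parameters():
    p.requires_grad_(False)                       # freeze the whole (pre-trained) generator ...
G.class_emb.weight.requires_grad_(True)           # ... except the class-embedding table

opt = torch.optim.Adam([G.class_emb.weight], lr=1e-2)

z = torch.randn(8, 16)
y = torch.randint(0, 10, (8,))
target = torch.randn(8, 64)                       # placeholder objective (e.g. a realism/diversity loss)

loss = ((G(z, y) - target) ** 2).mean()
loss.backward()
opt.step()
print("trainable parameters:", sum(p.numel() for p in G.parameters() if p.requires_grad))
```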
In a Riemannian manifold with a smooth positive function that weights the associated Hausdorff measures we study stable sets, i.e., second order minima of the weighted perimeter under variations preserving the weighted volume. By assuming local convexity of the boundary and certain behaviour of the Bakry-\'Emery-Ricci tensor we deduce rigidity properties for stable sets by using deformations constructed from parallel vector fields tangent to the boundary. As a consequence, we completely classify the stable sets in some Riemannian cylinders $\Omega\times\mathbb{R}$ with product weights. Finally, we also establish uniqueness results showing that any minimizer of the weighted perimeter for fixed weighted volume is bounded by a horizontal slice $\Omega\times\{t\}$.
mathematics
In this paper, the existence and uniqueness of solutions to distribution-dependent SDEs with H\"{o}lder continuous drift driven by an $\alpha$-stable process are investigated. Moreover, by using a Zvonkin-type transformation, the convergence rate of the Euler-Maruyama method is also obtained. The results cover those for the case of distribution-independent SDEs.
mathematics
We present a strip transition-edge sensor microcalorimeter linear array detector developed for energy-dispersive X-ray diffraction imaging and Compton scattering applications. The prototype detector is an array of 20 transition-edge sensors with absorbers in strip geometry arranged in a linear array. We discuss the fabrication steps needed to develop this array, including the Mo/Cu bilayer, Au electroplating, and proof-of-principle fabrication of long strips of SiNx membranes. We demonstrate a minimal unwanted effect of the strip geometry on the X-ray pulse response, and show a linear relationship of the inverse pulse height and the pulse decay time with absorber length. For the absorber lengths studied, preliminary measurements show energy resolutions of 40 eV to 180 eV near 17 keV. Furthermore, we show that the heat flow to the cold bath is nearly independent of the absorber area and depends on the SiNx membrane geometry.
physics
Spin-fluctuation-mediated unconventional superconductivity can emerge at the border of magnetism, featuring a superconducting order parameter that changes sign in momentum space. Detection of such a sign-change is experimentally challenging, since most probes are not phase-sensitive. The observation of a spin resonance mode (SRM) from inelastic neutron scattering is often seen as strong phase-sensitive evidence for a sign-changing superconducting order parameter, by assuming the SRM is a spin-excitonic bound state. Here, we show that for the heavy fermion superconductor CeCoIn$_5$, its SRM defies expectations for a spin-excitonic bound state, and is not a manifestation of sign-changing superconductivity. Instead, the SRM in CeCoIn$_5$ likely arises from a reduction of damping to a magnon-like mode in the superconducting state, due to its proximity to magnetic quantum criticality. Our findings emphasize the need for more stringent tests of whether SRMs are spin-excitonic, when using their presence to evidence sign-changing superconductivity.
condensed matter
The search for a dark photon produced at $e^{+}e^{-}$ colliders, which subsequently decays into inelastic dark matter particles, is discussed. The heavier dark matter particle decays into a pair of visible charged particles and a lighter dark matter particle after traveling some distance. The visible decay products can be recorded by a dark matter detector made of emulsions and gas detectors, placed near the main $e^{+}e^{-}$ detector. This setup can not only explore new parameter regions not reached before, but also re-open some regions thought to be excluded by previous experimental data. The physics potential for such a detector around BESIII and Belle II is presented.
high energy physics phenomenology
The increasing penetration of photovoltaic systems in the power grid makes it vulnerable to cloud shadow projection. Real-time cloud segmentation in ground-based infrared images is important to reduce the noise in intra-hour global solar irradiance forecasting. We present a comparison between discriminative and generative models for cloud segmentation. The performances of supervised and unsupervised learning methods in cloud segmentation are evaluated. The discriminative models are solved in the primal formulation to make them feasible in real-time applications. The performances are compared using the j-statistic. Infrared image preprocessing to remove stationary artifacts increases the overall performance in the analyzed methods. The inclusion of features from neighboring pixels in the feature vectors leads to a performance improvement in some of the cases. Markov Random Fields achieve the best performance in both unsupervised and supervised generative models. Discriminative models solved in the primal yield a dramatically lower computing time along with high performance in the segmentation. Generative and discriminative models are comparable when preprocessing is applied to the infrared images.
electrical engineering and systems science
In this article we initiate the study of 1+2 dimensional wave maps on a curved spacetime in the low regularity setting. Our main result asserts that in this context the wave maps equation is locally well-posed at almost critical regularity. As a key part of the proof of this result, we generalize the classical optimal bilinear L^2 estimates for the wave equation to variable coefficients, by means of wave packet decompositions and characteristic energy estimates. This allows us to iterate in a curved X^{s,b} space.
mathematics
Multi-user schedulers are designed to achieve optimal average system utility (e.g. throughput) subject to a set of fairness criteria. In this work, scheduling under temporal fairness constraints is considered. Prior works have shown that a class of scheduling strategies called threshold based strategies (TBSs) achieve optimal system utility under temporal fairness constraints. The optimal TBS thresholds are determined as a function of the channel statistics. In order to provide performance guarantees for TBSs in practical scenarios --- where the scheduler learns the optimal thresholds based on the empirical observations of the channel realizations --- it is necessary to evaluate the rates of convergence of TBS thresholds to the optimal value. In this work, these rates of convergence and the effect on the resulting system utility are investigated. It is shown that the best estimate of the threshold vector is at least $\omega(\frac{1}{\sqrt{t}})$ away from the optimal value, where $t$ is the number of observations of the independent and identically distributed channel realizations. Furthermore, it is shown that under long-term fairness constraints, the scheduler may achieve an average utility that is higher than the optimal long-term utility by violating the fairness criteria for a long initial period. Consequently, the resulting system utility may converge to its optimal long-term value from above. The results are verified by providing simulations of practical scheduling scenarios.
electrical engineering and systems science
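A small Monte Carlo sketch of the 1/sqrt(t) flavour of such convergence results: estimate a scheduling threshold as an empirical quantile of the channel metric from t i.i.d. realizations and track the error. The exponential channel model and quantile level are assumptions for illustration, not the paper's TBS construction:

```python
# Monte-Carlo sketch of the ~1/sqrt(t) convergence of an empirically learned
# threshold: estimate a fixed quantile of the channel metric from t i.i.d.
# realizations.  Illustration only, not the paper's threshold-based scheduler.
import numpy as np

rng = np.random.default_rng(0)
target_quantile = 0.7                          # assumed temporal-share constraint
true_threshold = -np.log(1 - target_quantile)  # exact quantile of an Exp(1) channel metric

for t in (10**2, 10**3, 10**4, 10**5):
    errs = []
    for _ in range(200):                       # repeat to average out randomness
        samples = rng.exponential(1.0, size=t)
        est = np.quantile(samples, target_quantile)
        errs.append(abs(est - true_threshold))
    print(f"t = {t:>6d}   mean |threshold error| = {np.mean(errs):.4f}   "
          f"sqrt(t) * error = {np.sqrt(t) * np.mean(errs):.3f}")
```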
We propose a convolutional recurrent sparse auto-encoder model. The model consists of a sparse encoder, which is a convolutional extension of the learned ISTA (LISTA) method, and a linear convolutional decoder. Our strategy offers a simple method for learning a task-driven sparse convolutional dictionary (CD) and producing an approximate convolutional sparse code (CSC) over the learned dictionary. We trained the model to minimize the reconstruction loss via gradient descent with back-propagation, and achieved results competitive with KSVD image denoising and with leading CSC methods in image inpainting, while requiring only a small fraction of their run-time.
electrical engineering and systems science
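A minimal PyTorch sketch of a convolutional LISTA-style encoder with a linear convolutional decoder, in the spirit of the model described above; filter counts, kernel size, and the number of unrolled iterations are illustrative choices, not the paper's configuration:

```python
# Minimal sketch of a convolutional LISTA-style sparse encoder followed by a
# linear convolutional decoder.  Hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLISTA(nn.Module):
    def __init__(self, n_filters=32, kernel=7, n_iters=3):
        super().__init__()
        self.We = nn.Conv2d(1, n_filters, kernel, padding=kernel // 2)         # y -> code space
        self.S = nn.Conv2d(n_filters, n_filters, kernel, padding=kernel // 2)  # lateral update
        self.decoder = nn.Conv2d(n_filters, 1, kernel, padding=kernel // 2)    # linear decoder
        self.theta = nn.Parameter(torch.full((1, n_filters, 1, 1), 0.1))       # learned thresholds
        self.n_iters = n_iters

    def forward(self, y):
        b = self.We(y)
        z = F.softshrink(b, 0.1)                  # initial sparse code estimate
        for _ in range(self.n_iters):             # unrolled soft-thresholding iterations
            c = b + self.S(z)
            z = torch.sign(c) * torch.relu(torch.abs(c) - self.theta)
        return self.decoder(z), z                 # reconstruction and convolutional sparse code

model = ConvLISTA()
y = torch.randn(4, 1, 32, 32)                     # toy grey-level image batch
recon, code = model(y)
loss = F.mse_loss(recon, y)                       # reconstruction loss, trained by backprop
loss.backward()
print(recon.shape, float((code == 0).float().mean()))   # output size and code sparsity
```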
Higher-order quantum theory is an extension of quantum theory where one introduces transformations whose input and output are transformations, thus generalizing the notion of channels and quantum operations. The generalization then goes recursively, with the construction of a full hierarchy of maps of increasingly higher order. The analysis of special cases already showed that higher-order quantum functions exhibit features that cannot be tracked down to the usual circuits, such as indefinite causal structures, providing provable advantages over circuital maps. The present treatment provides a general framework where this kind of analysis can be carried out in full generality. The hierarchy of higher-order quantum maps is introduced axiomatically with a formulation based on the language of types of transformations. Complete positivity of higher-order maps is derived from the general admissibility conditions instead of being postulated as in previous approaches. The recursive characterization of convex sets of maps of a given type is used to prove equivalence relations between different types. The axioms of the framework do not refer to the specific mathematical structure of quantum theory, and can therefore be exported in the context of any operational probabilistic theory.
quantum physics
We construct the spin-projection operators for a theory containing a symmetric two-index tensor and a general three-index tensor. We then use them to analyse, at linearized level, the most general action for a metric-affine theory of gravity with terms up to second order in curvature, which depends on 28 parameters. In the metric case we recover known results. In the torsion-free case, we are able to determine the most general six-parameter class of theories that are projective invariant, contain only one massless spin 2 and no spin 3, and are free of ghosts and tachyons.
high energy physics theory
We present results on the calculation of the polarized 2- and 3-loop anomalous dimensions in a massive computation of the associated operator matrix element. We also discuss the treatment of $\gamma_5$ and derive results in the M-scheme.
high energy physics phenomenology
Let $p$ be an odd natural number $\ge 3$. Inspired by results from Euclid's {\em Elements}, we express the irrational $$y=\sqrt[p]{d+\sqrt R}, $$ whose degree is $2p$, as a polynomial function of irrationals of degrees $\le p$. In certain cases $y$ is expressed by simple radicals. This reduction of the degree exhibits remarkably regular patterns of the polynomials involved. The proof is based on hypergeometric summation, in particular, on Zeilberger's algorithm.
mathematics
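The degree-2p statement can be made concrete by eliminating the square root; the short derivation below is standard and does not reproduce the paper's degree-reduction construction:

```latex
% Why y = (d + \sqrt{R})^{1/p} has degree 2p: eliminate the square root.
\[
  y^{p} = d + \sqrt{R}
  \;\Longrightarrow\;
  \bigl(y^{p} - d\bigr)^{2} = R
  \;\Longrightarrow\;
  y^{2p} - 2d\,y^{p} + \bigl(d^{2} - R\bigr) = 0 ,
\]
% so y is a root of a monic polynomial of degree 2p (the degree equals 2p exactly
% when this polynomial is irreducible).  The paper's contribution is to rewrite y
% as a polynomial function of irrationals of degree at most p.
```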
As a modified gravity theory that introduces new gravitational degrees of freedom, the generalized SU(2) Proca theory (GSU2P for short) is the non-Abelian version of the well-known generalized Proca theory where the action is invariant under global transformations of the SU(2) group. This theory was formulated for the first time in Phys. Rev. D 94 (2016) 084041, having implemented the required primary constraint-enforcing relation to make the Lagrangian degenerate and remove one degree of freedom from the vector field in accordance with the irreducible representations of the Poincar\'e group. It was later shown in Phys. Rev. D 101 (2020) 045008, ibid 045009, that a secondary constraint-enforcing relation, which trivializes for the generalized Proca theory but not for the SU(2) version, was needed to close the constraint algebra. It is the purpose of this paper to implement this secondary constraint-enforcing relation in GSU2P and to make the construction of the theory more transparent. Since several terms in the Lagrangian were dismissed in Phys. Rev. D 94 (2016) 084041 via their equivalence to other terms through total derivatives, not all of the latter satisfying the secondary constraint-enforcing relation, the work was not so simple as directly applying this relation to the resultant Lagrangian pieces of the old theory. Thus, we were motivated to reconstruct the theory from scratch. In the process, we found the beyond GSU2P.
high energy physics theory
We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex. We characterize the stability properties of a primal-dual gradient based approach, analyzing its convergence in the presence of worst case deterministic noise. As a main example, we specialize and illustrate the results for the problem of robust sparse recovery. Key to our analysis is a combination of ideas from regularization theory and optimization in the presence of errors. Theoretical results are complemented by experiments showing that state-of-the-art performances can be achieved with considerable computational speed-ups.
statistics
We revisit the multi-loop structure of the anomalous-dimension matrix governing the infrared divergences of massless $n$-particle scattering amplitudes in non-abelian gauge theories. In particular, we derive its most general form at four-loop order, significantly simplifying corresponding expressions given previously. By carefully reevaluating the constraints imposed by two-particle collinear limits, we find that at four-loop order color structures involving $d_R^{abcd}$, the symmetrized trace of four group generators, appear along with cusp logarithms $\ln[\mu^2/(-s_{ij})]$. As a consequence, naive Casimir scaling of the cusp anomalous dimensions associated with the quark and gluon form factors is violated, while a generalized form of Casimir scaling still holds. Our results provide an important ingredient for resummations of large logarithms in $n$-jet cross sections with next-to-next-to-next-to leading logarithmic (N$^3$LL) accuracy.
high energy physics phenomenology
Since the discovery of graphene, two-dimensional materials with atomic-level thickness have rapidly grown into a prosperous field of physical science with interdisciplinary interest, owing to their fascinating properties and broad applications. Very recently, the experimental observation of ferromagnetism in Cr$_2$Ge$_2$Te$_6$ bilayers and CrI$_3$ monolayers opened a door to the pursuit of long-absent intrinsic magnetic orders in two-dimensional materials. Meanwhile, ferroelectricity was also experimentally found in SnTe monolayers and CuInP$_2$S$_6$ few-layers. The emergence of these ferroic orders in the two-dimensional limit not only brings new challenges to our physical knowledge, but also provides more functionalities for potential applications. Among various two-dimensional ferroic ordered materials, transition/rare-earth metal halides and their derivatives are very common. In this Research Update, based on transition/rare-earth metal halides, the physics of various ferroic orders in two dimensions will be illustrated. The potential applications based on their magnetic and polar properties will also be discussed.
condensed matter
We present LOFAR observations at 150 MHz of the borderline FRI/FRII giant radio galaxy NGC 6251. This paper presents the most sensitive and highest-resolution images of NGC 6251 at these frequencies to date, revealing for the first time a low-surface-brightness extension to the northern lobe, and a possible backflow associated with the southern lobe. The integrated spectra of components of NGC 6251 are consistent with previous measurements at higher frequencies, similar to results from other LOFAR studies of nearby radio galaxies. We find the outer structures of NGC 6251 to be either at equipartition or slightly electron dominated, similar to those of FRII sources rather than FRIs; but this conclusion remains tentative because of uncertainties associated with the geometry and the extrapolation of X-ray measurements to determine the external pressure distribution on the scale of the outer lobes. We place lower limits on the ages of the extension of the northern lobe and the backflow of the southern lobe of $t \gtrsim 250$ Myr and $t \gtrsim 210$ Myr respectively. We present the first detection of polarisation at 150 MHz in NGC 6251. Taking advantage of the high Faraday resolution of LOFAR, we place an upper limit on the magnetic field in the group of $B < 0.2 (\Lambda_B / 10 {\rm kpc})^{-0.5} \mu$G for a coherence scale of $\Lambda_B < 60 {\rm kpc}$ and $B < 13 \mu$G for $\Lambda_B = 240$ kpc.
astrophysics
Migration of immune cells within the human body allows them to fulfill their main function of detecting pathogens. Adopting an optimal navigation and search strategy by these cells is of crucial importance to achieve an efficient immune response. Analyzing the dynamics of dendritic cells in our in vitro experiments reveals that the directional persistence of these cells is highly correlated with their migration speed, and that the persistence-speed coupling enables the migrating cells to reduce their search time. We theoretically introduce a new class of random search optimization problems by minimizing the mean first-passage time (MFPT) with respect to the strength of the coupling between influential parameters such as speed and persistence length. We derive an analytical expression for the MFPT in a confined geometry and verify that the correlated motion improves the search efficiency if the mean persistence length is sufficiently shorter than the confinement size. In contrast, a positive persistence-speed correlation even increases the MFPT in the long-persistence-length regime; thus, such a strategy is disadvantageous for highly persistent active agents.
condensed matter
We present a classification of non-hermitian random matrices based on implementing commuting discrete symmetries. It contains 38 classes. This generalizes the classification of hermitian random matrices due to Altland-Zirnbauer and it also extends the Ginibre ensembles of non-hermitian matrices.
condensed matter
Accounting for undecided and uncertain voters is a challenging issue for predicting election results from public opinion polls. Undecided voters typify the uncertainty of swing voters in polls but are often ignored or allocated to each candidate in a simple, deterministic manner. Historically this may have been adequate because the undecided were comparatively few, so it could be assumed that they did not affect the relative proportions of the decided voters. However, in the presence of high numbers of undecided voters, these static rules may in fact bias election predictions from election poll authors and meta-poll analysts. In this paper, we examine the effect of undecided voters in the 2016 US presidential election in comparison to the previous three presidential elections. We show there was a relatively high number of undecided voters over the campaign and on election day, and that the allocation of undecided voters in this election was not consistent with two-party proportional (or even) allocations. We find evidence that static allocation regimes are inadequate for election prediction models and that probabilistic allocations may be superior. We also estimate the bias attributable to polling agencies, often referred to as "house effects".
statistics
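The contrast between static and probabilistic allocation can be sketched numerically; the poll shares below are invented, and the Dirichlet split is just one simple probabilistic rule, not the paper's model:

```python
# Contrast a static, even split of undecided voters with a probabilistic
# (Dirichlet) allocation.  The poll shares below are made-up numbers; the point
# is how the undecided share propagates into uncertainty about the outcome.
import numpy as np

rng = np.random.default_rng(1)
decided = np.array([0.42, 0.40])     # two-party decided shares from a hypothetical poll
undecided = 1.0 - decided.sum()      # 18% undecided

# Static rule: split the undecided evenly between the two candidates.
static = decided + undecided / 2
print("static allocation:", static)

# Probabilistic rule: draw the undecided split from a Dirichlet distribution,
# yielding a distribution over final vote shares rather than a point estimate.
splits = rng.dirichlet(alpha=[1.0, 1.0], size=10_000)
final = decided + undecided * splits
lead = final[:, 0] - final[:, 1]
print(f"P(candidate 1 leads) = {np.mean(lead > 0):.3f}, "
      f"90% interval for the lead: {np.percentile(lead, [5, 95]).round(3)}")
```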
Contrastive divergence is a popular method for training energy-based models, but it is known to have difficulties with training stability. We propose an adaptation to improve contrastive divergence training by scrutinizing a gradient term that is difficult to calculate and is often left out for convenience. We show that this gradient term is numerically significant and in practice is important to avoid training instabilities, while being tractable to estimate. We further highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Finally, we empirically evaluate the stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation.
computer science
Recently, sparsification scale-spaces have been obtained as a sequence of inpainted images by gradually removing known image data. Thus, these scale-spaces rely on spatial sparsity. In the present paper, we show that sparsification of the co-domain, the set of admissible grey values, also constitutes scale-spaces with induced hierarchical quantisation techniques. These quantisation scale-spaces are closely tied to information theoretical measures for coding cost, and therefore particularly interesting for inpainting-based compression. Based on this observation, we propose a sparsification algorithm for the grey-value domain that outperforms uniform quantisation as well as classical clustering approaches.
electrical engineering and systems science
Zero-shot learning (ZSL) aims to recognize novel classes by transferring semantic knowledge from seen classes to unseen classes. Though many ZSL methods rely on a direct mapping between the visual and the semantic space, the calibration deviation and hubness problems limit the generalization capability to unseen classes. Recently emerged generative ZSL methods generate unseen image features to transform ZSL into a supervised classification problem. However, most generative models still suffer from the seen-unseen bias problem as only seen data is used for training. To address these issues, we propose a novel bidirectional embedding based generative model with a tight visual-semantic coupling constraint. We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces. Since the embedding from high-dimensional visual features comprises much non-semantic information, the alignment of visual and semantic information in the latent space would inevitably be deviated. Therefore, we introduce an information bottleneck (IB) constraint to ZSL for the first time to preserve essential attribute information during the mapping. Specifically, we utilize uncertainty estimation and the wake-sleep procedure to alleviate feature noise and improve model abstraction capability. In addition, our method can be easily extended to the transductive ZSL setting by generating labels for unseen images. We then introduce a robust loss to solve this label-noise problem. Extensive experimental results show that our method outperforms the state-of-the-art methods in different ZSL settings on most benchmark datasets. The code will be available at https://github.com/osierboy/IBZSL.
computer science
The goal of human action recognition is to temporally or spatially localize the human action of interest in video sequences. Temporal localization (i.e. indicating the start and end frames of the action in a video) is referred to as frame-level detection. Spatial localization, which is more challenging, means identifying the pixels within each action frame that correspond to the action. This setting is usually referred to as pixel-level detection. In this chapter, we use the terms action, activity, and event interchangeably.
computer science
The discovery rate of fast radio bursts (FRBs) is increasing dramatically thanks to new radio facilities. Meanwhile, wide-field instruments such as the 47 deg$^2$ Zwicky Transient Facility (ZTF) survey the optical sky to study transient and variable sources. We present serendipitous ZTF observations of the CHIME repeating source FRB 180916.J0158+65, which was localized to a spiral galaxy 149 Mpc away and is the first FRB suggesting periodic modulation in its activity. While 147 ZTF exposures corresponded to expected high-activity periods of this FRB, no single ZTF exposure was at the same time as a CHIME detection. No $>3\sigma$ optical source was found at the FRB location in 683 ZTF exposures, totalling 5.69 hours of integration time. We combined ZTF upper limits and expected repetitions from FRB 180916.J0158+65 in a statistical framework using a Weibull distribution, agnostic of periodic modulation priors. The analysis yielded a constraint on the ratio between the optical and radio fluences of $\eta \lesssim 200$, corresponding to an optical energy $E_{\rm opt} \lesssim 3 \times 10^{46}$ erg for a fiducial 10 Jy ms FRB (90% confidence). A deeper (but less statistically robust) constraint of $\eta \lesssim 3$ can be placed assuming a rate of $r(>5$ Jy ms)= hr$^{-1}$ and $1.2\pm 1.1$ FRBs occurring during exposures taken in high-activity windows. The constraint can be improved with shorter per-image exposures and longer integration time, or by observing FRBs at higher Galactic latitudes. This work demonstrates how current surveys can statistically constrain multi-wavelength counterparts to FRBs even without deliberately scheduled simultaneous radio observation.
astrophysics
Most existing methods for CRF estimation from a single image fail to handle general real images. For instance, EdgeCRF, based on colour patches extracted from edges, works effectively only when the presence of noise is insignificant, which is not the case for many real images; and CRFNet, a recent method based on fully supervised deep learning, works only for the CRFs that are in the training data and hence fails to deal with other possible CRFs beyond the training data. To address these problems, we introduce a non-deep-learning method using prediction consistency and gradual refinement. First, we rely more on the patches of the input image that provide more consistent predictions. If the predictions from a patch are more consistent, it means that the patch is likely to be less affected by noise or any inferior colour combinations, and hence, it can be more reliable for CRF estimation. Second, we employ a gradual refinement scheme in which we start from a simple CRF model to generate a result that is more robust to noise but less accurate, and then gradually increase the model's complexity to improve the result. This is because a simple model, while being less accurate, overfits less to noise than a complex model does. Our experiments show that our method outperforms the existing single-image methods for daytime and nighttime real images. We further propose a more efficient deep learning extension that performs test-time training (based on unsupervised losses) on the test input image. This gives our method better generalization performance than CRFNet, making it more practically applicable for CRF estimation on general real images.
computer science
We compute the leading asymptotics as $N\to\infty$ of the maximum of the field $Q_N(q)= \log\det|q- A_N|$, $q\in \mathbb{C}$, for any unitarily invariant Hermitian random matrix $A_N$ associated to a non-critical real-analytic potential. Hence, we verify the leading order in a conjecture of Fyodorov and Simm formulated for the GUE. The method relies on a classical upper-bound and a more sophisticated lower-bound based on a variant of the second-moment method which exploits the hyperbolic branching structure of the field $Q_N(q)$, $q$ in the upper half plane. Specifically, we compare $Q_N$ to an idealized Gaussian field by means of exponential moments. In principle, this method could also be applied to random fields coming from other point processes provided that one can compute certain mixed exponential moments. For unitarily invariant ensembles, we show that these assumptions follow from the Fyodorov-Strahov formula and asymptotics of orthogonal polynomials derived by Deift, Kriecherbauer, McLaughlin, Venakides, and Zhou.
mathematics
Open-dissipative systems obeying parity-time ($\mathcal{PT}$) symmetry are capable of demonstrating oscillatory dynamics akin to conservative systems. In contrast to the limit cycle solutions characteristic of nonlinear systems, the $\mathcal{PT}$-symmetric oscillations form a continuum of non-isolated orbits. However, the precise sculpting of the real potential and the gain-loss spatial profiles required for establishing the $\mathcal{PT}$-symmetry is practically challenging. Optical devices, such as lasers, exhibit relaxation dynamics and do not operate as $\mathcal{PT}$-symmetric systems. Here we demonstrate how these constraints can be overcome. We predict that a pair of optically trapped polariton condensates (a polariton dimer) can be excited and operated in the oscillating regime typical of isolated systems. This regime can be realized in the presence of both dissipative and conservative coupling between the condensates and can be maintained at an arbitrary external pump intensity. Every orbit is characterised by a frequency comb appearing in the spectrum of the dimer in the presence of the conservative nonlinearity. Our results pave the way for the creation of optical computing devices operating under continuous-wave external pumping.
condensed matter
This is a chapter of the forthcoming Handbook of Multiple Testing. We consider a variety of model selection strategies in a high-dimensional setting, where the number of potential predictors p is large compared to the number of available observations n. In particular, modifications of information criteria that are suitable in the case of p > n are introduced and compared with a variety of penalized likelihood methods, in particular SLOPE and SLOBE. The focus is on methods which control the FDR in terms of model identification. Theoretical results are provided with respect to both model identification and prediction, and various simulation results are presented which illustrate the performance of the different methods in different situations.
statistics
We introduce Multi-Frame Cross-Entropy training (MFCE) for convolutional neural network acoustic models. Recognizing that, like RNNs, CNNs are by nature sequence models that take variable-length inputs, we propose to take as input to the CNN a part of an utterance long enough that multiple labels are predicted at once, thereby obtaining cross-entropy loss signal from multiple adjacent frames. This drastically increases the amount of label information for a small marginal computational cost. We show large WER improvements on hub5 and rt02 after training on the 2000-hour Switchboard benchmark.
electrical engineering and systems science
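The multi-frame idea above can be sketched as follows: a 1D convolutional acoustic model consumes a long window of frames and emits predictions for several adjacent centre frames, with cross-entropy accumulated over all of them. The architecture, dimensions, and data below are placeholders, not the authors' Switchboard setup.

```python
import torch
import torch.nn as nn

NUM_CLASSES, FEAT_DIM, NUM_TARGETS = 100, 40, 8  # placeholder sizes

class MultiFrameCNN(nn.Module):
    """1D CNN over a window of acoustic frames, predicting several adjacent labels."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(FEAT_DIM, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.out = nn.Conv1d(128, NUM_CLASSES, kernel_size=1)

    def forward(self, feats):            # feats: (batch, FEAT_DIM, time)
        logits = self.out(self.conv(feats))   # (batch, NUM_CLASSES, time)
        # Keep only the central NUM_TARGETS frames as training targets.
        mid, half = logits.shape[-1] // 2, NUM_TARGETS // 2
        return logits[..., mid - half: mid + half]

model = MultiFrameCNN()
feats = torch.randn(4, FEAT_DIM, 64)                   # batch of 4 windows, 64 frames each
labels = torch.randint(0, NUM_CLASSES, (4, NUM_TARGETS))
loss = nn.CrossEntropyLoss()(model(feats), labels)     # CE accumulated over adjacent frames
loss.backward()
```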
We investigate self-avoiding walk models of linear block copolymers adsorbed at a surface and desorbed by the action of a force. We rigorously establish the dependence of the free energy on the adsorption and force parameters, and the form of the phase diagram for several cases, including $AB$-diblock copolymers and $ABA$-triblock copolymers, pulled from an end vertex and from the central vertex. Our interest in block copolymers is partly motivated by the occurrence of a novel mixed phase in a directed walk model of diblock copolymers \cite{Iliev} and we believe that this paper is the first rigorous treatment of a self-avoiding walk model of the situation.
condensed matter
We develop new flexible univariate models for light-tailed and heavy-tailed data, which extend a hierarchical representation of the generalized Pareto (GP) limit for threshold exceedances. These models can accommodate departure from asymptotic threshold stability in finite samples while keeping the asymptotic GP distribution as a special (or boundary) case and can capture the tails and the bulk jointly without losing much flexibility. Spatial dependence is modeled through a latent process, while the data are assumed to be conditionally independent. Focusing on a gamma-gamma model construction, we design penalized complexity priors for crucial model parameters, shrinking our proposed spatial Bayesian hierarchical model toward a simpler reference whose marginal distributions are GP with moderately heavy tails. Our model can be fitted in fairly high dimensions using Markov chain Monte Carlo by exploiting the Metropolis-adjusted Langevin algorithm (MALA), which guarantees fast convergence of Markov chains with efficient block proposals for the latent variables. We also develop an adaptive scheme to calibrate the MALA tuning parameters. Moreover, our model avoids the expensive numerical evaluations of multifold integrals in censored likelihood expressions. We demonstrate our new methodology by simulation and application to a dataset of extreme rainfall events that occurred in Germany. Our fitted gamma-gamma model provides a satisfactory performance and can be successfully used to predict rainfall extremes at unobserved locations.
statistics
Likelihood-free methods such as approximate Bayesian computation (ABC) have extended the reach of statistical inference to problems with computationally intractable likelihoods. Such approaches perform well for small-to-moderate dimensional problems, but suffer a curse of dimensionality in the number of model parameters. We introduce a likelihood-free approximate Gibbs sampler that naturally circumvents the dimensionality issue by focusing on lower-dimensional conditional distributions. These distributions are estimated by flexible regression models either before the sampler is run, or adaptively during sampler implementation. As a result, and in comparison to Metropolis-Hastings based approaches, we are able to fit substantially more challenging statistical models than would otherwise be possible. We demonstrate the sampler's performance via two simulated examples, and a real analysis of Airbnb rental prices using an intractable high-dimensional multivariate non-linear state space model containing 13,140 parameters, which presents a real challenge to standard ABC techniques.
statistics
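To make the likelihood-free Gibbs idea above concrete, here is a stripped-down two-parameter sketch in which simple nearest-neighbour resampling over a pre-simulated reference table stands in for the flexible regression models of the paper: each Gibbs update draws one coordinate from an approximate conditional given the other coordinate and the observed summary statistic. The toy simulator, summaries, and distance weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta1, theta2, n=50):
    """Toy simulator: Normal(theta1, exp(theta2)) data, summarised by (mean, log sd)."""
    y = rng.normal(theta1, np.exp(theta2), size=n)
    return np.array([y.mean(), np.log(y.std())])

# 1) Pre-simulate a reference table from the prior.
N = 20000
prior = rng.normal(0.0, 2.0, size=(N, 2))                  # theta = (theta1, theta2)
summaries = np.array([simulate(t1, t2) for t1, t2 in prior])

y_obs = rng.normal(1.0, 0.5, size=50)
s_obs = np.array([y_obs.mean(), np.log(y_obs.std())])
summary_dist = np.linalg.norm(summaries - s_obs, axis=1)    # fixed part of the distance

def approx_conditional_draw(j, theta, k=50):
    """Draw theta[j] from an approximate conditional given theta[-j] and s_obs,
    by resampling among the k nearest reference-table entries."""
    other = 1 - j
    dist = np.abs(prior[:, other] - theta[other]) + summary_dist
    nearest = np.argsort(dist)[:k]
    return prior[rng.choice(nearest), j]

# 2) Gibbs sweeps over the coordinates using the approximate conditionals.
theta, chain = np.zeros(2), []
for _ in range(500):
    for j in (0, 1):
        theta[j] = approx_conditional_draw(j, theta)
    chain.append(theta.copy())
chain = np.array(chain)
```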
We consider the efficient numerical approximation of acoustic wave propagation in the time domain by a finite element method with mass lumping. In the presence of internal damping, the problem can be reduced to a second order formulation in time for the velocity field alone. For the spatial approximation we consider $H(\mathrm{div})$--conforming finite elements of second order. In order to allow for an efficient time integration, we propose a mass-lumping strategy based on approximation of the $L^2$-scalar product by inexact numerical integration, which leads to a block-diagonal mass matrix. A careful error analysis shows that second order accuracy is not reduced by the quadrature errors, which is also illustrated by numerical tests.
mathematics
We release RELIKE (Reionization Effective Likelihood), a fast and accurate effective likelihood code based on the latest Planck 2018 data that allows one to constrain any model for reionization between $6 < z < 30$ using five constraints from the CMB reionization principal components (PC). We tested the code on two example models, which showed excellent agreement with sampling the exact Planck likelihoods using either a simple Gaussian PC likelihood or its full kernel density estimate. This code enables a fast and consistent means for combining Planck constraints with other reionization data sets, such as kinetic Sunyaev-Zeldovich effects, line-intensity mapping, luminosity functions, star formation history, and quasar spectra, where the redshift dependence of the ionization history is important. Since the PC technique tests any reionization history in the given range, we also derive model-independent constraints for the total Thomson optical depth $\tau_{\rm PC} = 0.0619^{+0.0056}_{-0.0068}$ and its $15\le z \le 30$ high redshift component $\tau_{\rm PC}(15, 30) < 0.020 $ (95\% C.L.). The upper limit on the high-redshift optical depth is a factor of $\sim3$ larger than that reported in the Planck 2018 cosmological parameter paper using the FlexKnot method, and we validate our results with a direct analysis of a two-step model which permits this small high-$z$ component.
astrophysics
As the 5th Generation (5G) mobile networks are bringing about global societal benefits, the design phase for the 6th Generation (6G) has started. 6G will need to enable greater levels of autonomy, improve human-machine interfacing, and achieve deep connectivity in more diverse environments. The need for increased explainability to enable trust is critical for 6G as it manages a wide range of mission-critical services (e.g. autonomous driving) and safety-critical tasks (e.g. remote surgery). As we migrate from traditional model-based optimisation to deep learning, the trust we have in our optimisation modules decreases. This loss of trust means we cannot understand the impact of: 1) poor/biased/malicious data, and 2) neural network design on decisions; nor can we explain to the engineer or the public the network's actions. In this review, we outline the core concepts of Explainable Artificial Intelligence (XAI) for 6G, including: public and legal motivations, definitions of explainability, performance vs. explainability trade-offs, methods to improve explainability, and frameworks to incorporate XAI into future wireless systems. Our review is grounded in case studies for both PHY- and MAC-layer optimisation, and provides the community with an important research area to embark upon.
electrical engineering and systems science
We show that the leading semiclassical behavior of soliton form factors at arbitrary momentum transfer is controlled by solutions to a new wave-like integro-differential equation that describes solitons undergoing acceleration. We work in the context of two-dimensional linear sigma models with kink solitons for concreteness, but our methods are purely semiclassical and generalizable.
high energy physics theory
Multiple systems estimation is a key approach for quantifying hidden populations such as the number of victims of modern slavery. The UK Government published an estimate of 10,000 to 13,000 victims, constructed by the present author, as part of the strategy leading to the Modern Slavery Act 2015. This estimate was obtained by a stepwise multiple systems method based on six lists. Further investigation shows that a small proportion of the possible models give rather different answers, and that other model fitting approaches may choose one of these. Three data sets collected in the field of modern slavery, together with a data set about the death toll in the Kosovo conflict, are used to investigate the stability and robustness of various multiple systems estimate methods. The crucial aspect is the way that interactions between lists are modelled, because these can substantially affect the results. Model selection and Bayesian approaches are considered in detail, in particular to assess their stability and robustness when applied to real modern slavery data. A new Markov Chain Monte Carlo Bayesian approach is developed; overall, this gives robust and stable results at least for the examples considered. The software and datasets are freely and publicly available to facilitate wider implementation and further research.
statistics
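For readers unfamiliar with multiple systems estimation, the sketch below shows a minimal three-list Poisson log-linear analysis of the kind that underlies such estimates: list-overlap counts are modelled with main effects and pairwise interactions, and the hidden population is the predicted count of the unobserved all-zero cell. The counts and the chosen interaction structure are purely illustrative and do not reproduce the six-list analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Observed counts for each combination of presence (1) / absence (0) on lists A, B, C.
# The all-zero cell (people appearing on no list) is the unobserved quantity of interest.
df = pd.DataFrame({
    "A": [1, 1, 1, 1, 0, 0, 0],
    "B": [1, 1, 0, 0, 1, 1, 0],
    "C": [1, 0, 1, 0, 1, 0, 1],
    "count": [15, 40, 25, 110, 30, 95, 60],   # illustrative counts only
})

# Log-linear model with main effects and pairwise list interactions.
fit = smf.glm("count ~ A + B + C + A:B + A:C + B:C",
              data=df, family=sm.families.Poisson()).fit()

# Estimate of the hidden (unlisted) population: predicted count in the (0,0,0) cell.
hidden = float(np.asarray(fit.predict(pd.DataFrame({"A": [0], "B": [0], "C": [0]})))[0])
total_estimate = df["count"].sum() + hidden
print(f"estimated hidden count: {hidden:.0f}, total population: {total_estimate:.0f}")
```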
Involutive MCMC is a unifying mathematical construction for MCMC kernels that generalizes many classic and state-of-the-art MCMC algorithms, from reversible jump MCMC to kernels based on deep neural networks. But as with MCMC samplers more generally, implementing involutive MCMC kernels is often tedious and error-prone, especially when sampling on complex state spaces. This paper describes a technique for automating the implementation of involutive MCMC kernels given (i) a pair of probabilistic programs defining the target distribution and an auxiliary distribution respectively and (ii) a differentiable program that transforms the execution traces of these probabilistic programs. The technique, which is implemented as part of the Gen probabilistic programming system, also automatically detects user errors in the specification of involutive MCMC kernels and exploits sparsity in the kernels for improved efficiency. The paper shows example Gen code for a split-merge reversible jump move in an infinite Gaussian mixture model and a state-dependent mixture of proposals on a combinatorial space of covariance functions for a Gaussian process.
statistics
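Outside of Gen, the involutive MCMC construction can be written generically in a few lines: sample auxiliary variables, apply a deterministic involution to the (state, auxiliary) pair, and accept with the usual ratio including the Jacobian factor. The Gaussian target and the simple momentum-style involution below are illustrative choices, not Gen's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalised log density of the target; a standard normal for illustration."""
    return -0.5 * x ** 2

def log_aux(v):
    """Log density of the auxiliary variable (standard normal)."""
    return -0.5 * v ** 2 - 0.5 * np.log(2 * np.pi)

def involution(x, v):
    """A simple involution on the extended space: applying it twice returns (x, v)."""
    return x + v, -v

def involutive_mcmc_step(x):
    v = rng.normal()                          # sample auxiliary variable
    x_new, v_new = involution(x, v)           # apply the involution
    # |det Jacobian| of this particular involution equals 1, so it is omitted below.
    log_alpha = (log_target(x_new) + log_aux(v_new)) - (log_target(x) + log_aux(v))
    return x_new if np.log(rng.uniform()) < log_alpha else x

# Usage: run the kernel and check the empirical variance is close to 1.
x, samples = 0.0, []
for _ in range(20000):
    x = involutive_mcmc_step(x)
    samples.append(x)
print(np.var(samples))
```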
We explain a newfound security breach in the physical nature of the single photon detectors that are generally used in quantum key distribution: the bit contents of a quantum key transmission system can be intercepted from far away by exploiting the ultrawideband electromagnetic signals radiated from the high-voltage avalanche effect of single photon detectors. In effect, any Geiger-mode avalanche photodiode used inside a single photon detector systematically acts like a downconverter that converts optical-wavelength photons to radio-wavelength photons that can be intercepted by an antenna as a side-channel attack. Our experiment showed that the radiated waveforms captured by the antenna can be used as a fingerprint. These fingerprints were fed to a deep learning neural network as training data, and after training the neural network was able to clone the bit content of the quantum transmission.
quantum physics
Type-III-burst radio signals can be mimicked in the laboratory via laser-plasma interaction. Instead of an electron beam generating Langmuir waves (LW) in the interplanetary medium, the LWs are created by a laser interacting with a millimeter-sized plasma through the stimulated Raman instability. In both cases, the LWs feed the Langmuir decay instability which scatters them in several directions. The resulting LWs may couple to form electromagnetic emission at twice the plasma frequency, which has been detected in the interplanetary medium, and recently in a laboratory laser experiment [Marqu\`es et al. Phys. Rev. Lett. 124, 135001 (2020)]. This article presents the first numerical analysis of this laser configuration using particle-in-cell simulations, providing details on the wave spectra that are too difficult to measure in experiments. The role of some parameters is addressed, with a focus on laser intensity, in order to illustrate the behavior of the electromagnetic emission's angular distribution and polarization.
physics
We examine in detail the structure of the Regge limit of the (nonplanar) ${\cal N}=4$ SYM four-point amplitude. We begin by developing a basis of color factors $C_{ik}$ suitable for the Regge limit of the amplitude at any loop order, and then calculate explicitly the coefficients of the amplitude in that basis through three-loop order using the Regge limit of the full amplitude previously calculated by Henn and Mistlberger. We compute these coefficients exactly at one loop, through ${\cal O} (\epsilon^2)$ at two loops, and through ${\cal O} (\epsilon^0)$ at three loops, verifying that the IR-divergent pieces are consistent with (the Regge limit of) the expected infrared divergence structure, including a contribution from the three-loop correction to the dipole formula. We also verify consistency with the IR-finite NLL and NNLL predictions of Caron-Huot et al. Finally we use these results to motivate the conjecture of an all-orders relation between one of the coefficients and the Regge limit of the ${\cal N} =8$ supergravity four-point amplitude.
high energy physics theory
Recent developments in astronomical observations enable direct imaging of circumstellar disks. Precise characterization of such extended structure is essential to our understanding of stellar systems. However, the faint intensity of the circumstellar disks compared to the brightness of the host star compels astronomers to use tailored observation strategies, in addition to state-of-the-art optical devices. Even then, extracting the signal of circumstellar disks heavily relies on post-processing techniques. In this work, we propose a morphological component analysis (MCA) approach that leverages low-complexity models of both the disks and the stellar light corrupting the data. In addition to disks, our method allows to image exoplanets. Our approach is tested through numerical experiments.
astrophysics
We give a general construction of a setup that verifies bulk reconstruction, conservation of relative entropies, and equality of modular flows between the bulk and the boundary, for infinite-dimensional systems with operator-pushing. In our setup, a bulk-to-boundary map is defined at the level of the $C^*$-algebras of state-independent observables. We then show that if the boundary dynamics allow for the existence of a KMS state, physically relevant Hilbert spaces and von Neumann algebras can be constructed directly from our framework. Our construction should be seen as a state-dependent construction of the other side of a wormhole and clarifies the meaning of black hole reconstruction claims such as the Papadodimas-Raju proposal. As an illustration, we apply our result to construct a wormhole based on the HaPPY code, which satisfies all properties of entanglement wedge reconstruction.
high energy physics theory
Given incomplete ratings data over a set of users and items, the preference completion problem aims to estimate a personalized total preference order over a subset of the items. In practical settings, a ranked list of the top-$k$ items from the estimated preference order is recommended to the end user in decreasing order of preference for final consumption. We analyze this model and observe that such a ranking model results in suboptimal performance when the payoffs associated with the recommended items differ. We propose a novel and very efficient algorithm for preference ranking that accounts for the uncertainty regarding the payoffs of the items. Once the preference scores for the users are obtained using any preference learning algorithm, we show that ranking the items using a risk-seeking utility function results in the best ranking performance.
computer science
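As a toy illustration of ranking under payoff uncertainty, the sketch below scores each item by (the log of) its expected exponential utility under a Gaussian model of the estimated preference score — risk-seeking because the utility is convex — and returns the top-$k$. The utility form, the Gaussian assumption, and the parameter `a` are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def risk_seeking_topk(mean, std, k, a=1.0):
    """Rank items by E[exp(a * score)] under a Gaussian score model.

    log E[exp(a*score)] = a*mu + 0.5 * a^2 * sigma^2, so high-uncertainty items get a boost.
    """
    log_expected_utility = a * mean + 0.5 * (a ** 2) * (std ** 2)
    return np.argsort(-log_expected_utility)[:k]

# Usage: the highly uncertain item 1 overtakes the higher-mean item 0.
mean = np.array([0.90, 0.70, 0.80])
std = np.array([0.05, 0.70, 0.02])
print(risk_seeking_topk(mean, std, k=2))   # -> [1, 0], whereas a greedy ranking gives [0, 2]
```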
We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices.
computer science
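The Federated Averaging procedure referenced above can be summarised in a few lines: each client runs local SGD from the current global model, and the server takes a dataset-size-weighted average of the returned weights. The linear model and synthetic client data are simplifications for illustration, not the production keyboard training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A client's local update: plain SGD on squared loss for a linear model."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w -= lr * 2 * (xi @ w - yi) * xi
    return w

def federated_averaging(w_global, clients, rounds=20):
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:                          # in practice, a sampled subset of clients
            updates.append(local_sgd(w_global, X, y))
            sizes.append(len(y))
        # Server step: average client weights, weighted by local dataset size.
        w_global = np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
    return w_global

# Usage: three clients with data drawn from the same underlying linear model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=40)))
w = federated_averaging(np.zeros(2), clients)
print(w)   # should be close to [2, -1]
```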
We study the asymptotic Dirichlet problem for Killing graphs with prescribed mean curvature $H$ in warped product manifolds $M\times_\varrho \mathbb{R}$. In the first part of the paper, we prove the existence of Killing graphs with prescribed boundary on geodesic balls under suitable assumptions on $H$ and the mean curvature of the Killing cylinders over geodesic spheres. In the process we obtain a uniform interior gradient estimate improving previous results by Dajczer and de Lira. In the second part we solve the asymptotic Dirichlet problem in a large class of manifolds whose sectional curvatures are allowed to go to $0$ or to $-\infty$ provided that $H$ satisfies certain bounds with respect to the sectional curvatures of $M$ and the norm of the Killing vector field. Finally we obtain non-existence results if the prescribed mean curvature function $H$ grows too fast.
mathematics
Elastic neutron scattering on a single crystal and powder X-ray diffraction measurements were carried out to investigate how the crystal structure evolves as a function of temperature in the Weyl semimetal WTe$_{2}$. A sharp transition from the low-temperature orthorhombic phase (T$_{d}$) to the high-temperature monoclinic phase (1T$^{\prime}$) was observed at ambient pressure in the single crystal near $\sim$565 K. Unlike in MoTe$_{2}$, the solid-solid transition from T$_{d}$ to 1T$^{\prime}$ occurs without the cell doubling of the intermediate T$_{d}^{*}$ phase with AABB (or ABBA) layer stacking. In powders, however, the thermal transition from the T$_{d}$ to the 1T$^{\prime}$ phase is broadened, and a two-phase coexistence was observed up to 700 K, well above the structural transition.
condensed matter
Leptonic CP violation searches, neutrino mass hierarchy determination, and precision measurements of oscillation parameters for a unitarity test of the neutrino mixing matrix are among the major targets of the ongoing and future neutrino oscillation experiments. This work explores the physics reach for these targets by around 2027, when the third generation of neutrino experiments starts operation, with the combined sensitivity of three experiments: T2K-II, the NOvA extension, and JUNO. It is shown that a joint analysis of these three experiments can conclusively determine the neutrino mass hierarchy. It also provides roughly $5\sigma$ C.L. to exclude CP-conserving values if the true $\delta_{\text{CP}}\sim\pm\frac{\pi}{2}$, and more than $50\%$ of the fractional region of true $\delta_{\text{CP}}$ values can be explored with a significance of at least $3\sigma$ C.L. Besides, the joint analysis can provide unprecedented precision measurements of the atmospheric neutrino oscillation parameters and a good opportunity to resolve the $\theta_{23}$ octant degeneracy in the case of non-maximal mixing.
high energy physics phenomenology
A generalization of the de Gennes-Alexander micronetworks theory is presented. In this framework, the phase transition of synthetic networks of superconducting islands is described by means of a Ginzburg-Landau approach adapted to the case of granular systems. The general implications of the theory are carefully explained. As a specific example, we demonstrate that star networks support the exponential localization of the order parameter accompanied by an enhancement of the critical temperature of the system. These findings contribute to clarify the physics of the phase transitions in synthetic networks of Josephson-coupled superconducting islands.
condensed matter
Sampling-based motion planning is an effective tool to compute safe trajectories for automated vehicles in complex environments. However, a fast convergence to the optimal solution can only be ensured with the use of problem-specific sampling distributions. Due to the large variety of driving situations within the context of automated driving, it is very challenging to manually design such distributions. This paper therefore introduces a data-driven approach utilizing a deep convolutional neural network (CNN): Given the current driving situation, future ego-vehicle poses can be directly generated from the output of the CNN, allowing the motion planner to be guided efficiently towards the optimal solution. A benchmark highlights that the CNN predicts future vehicle poses with a higher accuracy compared to uniform sampling and a state-of-the-art A*-based approach. Combining this CNN-guided sampling with the motion planner Bidirectional RRT* reduces the computation time by up to an order of magnitude and yields a faster convergence to a lower cost as well as a success rate of 100% in the tested scenarios.
computer science
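The CNN-guided sampling step described above can be illustrated independently of any specific network: given a predicted probability grid over future ego-vehicle positions, planner samples are drawn proportionally to the grid and converted to metric poses. The synthetic "CNN output" and grid geometry below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poses_from_heatmap(heatmap, x_range, y_range, n_samples):
    """Draw planner samples proportional to a CNN-predicted heatmap.

    heatmap: (H, W) non-negative scores over the planning area (a CNN output stands in here).
    x_range, y_range: metric extents of the grid; n_samples: number of poses to draw.
    """
    probs = heatmap.ravel() / heatmap.sum()
    idx = rng.choice(heatmap.size, size=n_samples, p=probs)
    rows, cols = np.unravel_index(idx, heatmap.shape)
    # Convert cell indices to metric coordinates with sub-cell jitter.
    xs = x_range[0] + (cols + rng.random(n_samples)) / heatmap.shape[1] * (x_range[1] - x_range[0])
    ys = y_range[0] + (rows + rng.random(n_samples)) / heatmap.shape[0] * (y_range[1] - y_range[0])
    return np.stack([xs, ys], axis=1)

# Usage with a placeholder "CNN prediction" concentrated around a manoeuvre corridor.
grid = np.exp(-((np.arange(64)[:, None] - 20) ** 2 / 50 + (np.arange(64)[None, :] - 40) ** 2 / 200))
samples = sample_poses_from_heatmap(grid, x_range=(0.0, 50.0), y_range=(-5.0, 5.0), n_samples=100)
# `samples` would then be handed to the sampling-based planner (e.g. Bidirectional RRT*).
```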
In this work, we study a gauge invariant local non-polynomial composite spinor field in the fundamental representation in order to establish its renormalizability. Similar studies were already done in the case of pure Yang-Mills theories where a local composite gauge invariant vector field was obtained and an invariant renormalizable mass term could be introduced. Our model consists of a massive Euclidean Yang-Mills action with gauge group $SU(N)$ coupled to fermionic matter in the presence of an invariant spinor composite field and quantized in the linear covariant gauges. The whole set of Ward identities is analysed and the algebraic proof of the renormalizability of the model is obtained to all orders in a loop expansion.
high energy physics theory
In this paper we investigate a linear chain of qubits and determine that it can be configured into a conditional two-qubit swapping gate, where the first and last qubits of the chain are the swapped qubits and the remaining middle ancilla qubits control the state of the gate. The swapping gate introduces different phases on the final states depending on the initial states. In particular we focus on a chain of four qubits and show the swapping gate it implements. We simulate the chain with realistic parameters, including decoherence noise and leakage to higher excited states, and find an average fidelity of around 0.99. We propose a superconducting circuit which implements this chain of qubits and present its circuit design. We also discuss how to operate the superconducting circuit such that the state of the gate can be controlled. Lastly, we discuss how the circuit can be straightforwardly altered and may be used to simulate Hamiltonians with non-trivial topological properties.
quantum physics
A single-axis microelectromechanical system (MEMS) gravimeter has recently been developed at the University of Glasgow. The sensitivity and stability of this device were demonstrated by measuring the Earth tides. The success of this device was enabled in part by its extremely low resonant frequency. This low frequency was achieved with a geometric anti-spring design, fabricated using well-established photolithography and dry etch techniques. Analytical models can be used to calculate the results of these non-linear oscillating systems, but the power of finite element analysis has not been fully utilised to explore the parameter space before now. In this article, the results of previous analytical solutions are replicated using finite element models, before applying the same techniques to optimise the design of the gravimeter. These computer models provide the ability to investigate the effect of the fabrication material of the device: anisotropic <100> crystalline silicon. This is a parameter that is difficult to investigate analytically, but finite element modelling is demonstrated here to provide accurate predictions of real gravimeter behaviour by taking anisotropy into account. The finite element models are then used to demonstrate the design of a three-axis gravimeter, enabling the gravity tensor to be measured - a significantly more powerful surveying tool than the original single-axis device.
physics
Boron arsenide (BAs) has been the least investigated cubic III-V compound, but it has recently attracted significant attention since the confirmation of its unusually high thermal conductivity above 1000 W/m-K. However, determining how to achieve growth of a BAs single crystal on the centimeter scale remains unsolved, which strongly limits further research into, and potential applications of, this interesting material. Here we report our technique to grow a 7-mm-long BAs single crystal via the chemical vapor transport method by applying an additional nucleation site. The different thermal conductivity values obtained from BAs single crystals grown on nucleation sites of different compositions show the importance of choosing the proper nucleation-site material. We believe these findings will inspire further research into the growth of this unique semiconductor.
condensed matter
Nonparametric varying coefficient (NVC) models are useful for modeling time-varying effects on responses that are measured repeatedly. In this paper, we introduce the nonparametric varying coefficient spike-and-slab lasso (NVC-SSL) for Bayesian estimation and variable selection in NVC models. The NVC-SSL simultaneously selects and estimates the significant varying coefficients, while also accounting for temporal correlations. Our model can be implemented using a computationally efficient expectation-maximization (EM) algorithm. We also employ a simple method to make our model robust to misspecification of the temporal correlation structure. In contrast to frequentist approaches, little is known about the large-sample properties for Bayesian NVC models when the dimension of the covariates $p$ grows much faster than sample size $n$. In this paper, we derive posterior contraction rates for the NVC-SSL model when $p \gg n$ under both correct specification and misspecification of the temporal correlation structure. Thus, our results are derived under weaker assumptions than those seen in other high-dimensional NVC models which assume independent and identically distributed (iid) random errors. Finally, we illustrate our methodology through simulation studies and data analysis. Our method is implemented in the publicly available R package NVCSSL.
statistics
In this paper, we propose a class of monitoring statistics for a mean shift in a sequence of high-dimensional observations. Inspired by the recent U-statistic based retrospective tests developed by Wang et al.(2019) and Zhang et al.(2020), we advance the U-statistic based approach to the sequential monitoring problem by developing a new adaptive monitoring procedure that can detect both dense and sparse changes in real-time. Unlike Wang et al.(2019) and Zhang et al.(2020), where self-normalization was used in their tests, we instead introduce a class of estimators for $q$-norm of the covariance matrix and prove their ratio consistency. To facilitate fast computation, we further develop recursive algorithms to improve the computational efficiency of the monitoring procedure. The advantage of the proposed methodology is demonstrated via simulation studies and real data illustrations.
statistics
We present a family of topological quantum gravity theories associated with the geometric theory of the Ricci flow on Riemannian manifolds. First we use BRST quantization to construct a "primitive" topological Lifshitz-type theory for only the spatial metric, with spatial diffeomorphism invariance and no gauge symmetry, associated with Hamilton's Ricci flow: Hamilton's flow equation appears as the localization equation of the primitive theory. Then we extend the primitive theory by gauging foliation-preserving spacetime symmetries. Crucially, all our theories are required to exhibit an ${\cal N}=2$ extended BRST symmetry. First, we gauge spatial diffeomorphisms, and show that this gives us access to the mathematical technique known as the DeTurck trick. Finally, we gauge foliation-preserving time reparametrizations, both with the projectable and nonprojectable lapse function. The path integral of the full theory is localized to the solutions of Ricci-type flow equations, generalizing those of Perelman. The role of Perelman's dilaton is played by the nonprojectable lapse function. Perelman's ${\cal F}$-functional appears as the superpotential of our theory. Since there is no spin-statistics theorem in nonrelativistic quantum field theory, the two supercharges of our gravity theory do not have to be interpreted as BRST charges and, after the continuation to real time, the theory can be studied as a candidate for nonrelativistic quantum gravity with propagating bosonic and fermionic degrees of freedom.
high energy physics theory
We propose a detector of microwave photons which can distinguish the vacuum state, the one-photon state, and states with two or more photons. Its operation is based on the two-photon transition in a biased Josephson junction, and detection occurs when it switches from the superconducting to the normal state. We model the detector theoretically. The detector performs with more than 90% success probability within several microseconds. It is sensitive to 8.2 GHz photons. The working frequency can be set at the design stage in the range from about 1 GHz to 20 GHz.
quantum physics
In this paper, we study the potential benefits from smart charging for a fleet of electric vehicles (EVs) providing autonomous mobility-on-demand (AMoD) services. We first consider a profit-maximizing platform operator who makes decisions for routing, charging, rebalancing, and pricing for rides based on a network flow model. Clearly, each of these decisions directly influence the fleet's smart charging potential; however, it is not possible to directly characterize the effects of various system parameters on smart charging under a classical network flow model. As such, we propose a modeling variation that allows us to decouple the charging and routing problems faced by the operator. This variation allows us to provide closed-form mathematical expressions relating the charging costs to the maximum battery capacity of the vehicles as well as the fleet operational costs. We show that investing in larger battery capacities and operating more vehicles for rebalancing reduces the charging costs, while increasing the fleet operational costs. Hence, we study the trade-off the operator faces, analyze the minimum cost fleet charging strategy, and provide numerical results illustrating the smart charging benefits to the operator.
electrical engineering and systems science
Understanding sub-cellular protein localisation is an essential component of analysing context-specific protein function. Recent advances in quantitative mass-spectrometry (MS) have led to high resolution mapping of thousands of proteins to sub-cellular locations within the cell. Novel modelling considerations to capture the complex nature of these data are thus necessary. We approach the analysis of spatial proteomics data in a non-parametric Bayesian framework, using mixtures of Gaussian process regression models. The Gaussian process regression model accounts for correlation structure within a sub-cellular niche, with each mixture component capturing the distinct correlation structure observed within each niche. Proteins with a priori labelled locations motivate using semi-supervised learning to inform the Gaussian process hyperparameters. We moreover provide an efficient Hamiltonian-within-Gibbs sampler for our model. As in other recent work, we reduce the computational burden associated with inversion of covariance matrices by exploiting the structure in the covariance matrix. A tensor decomposition of our covariance matrices allows extended Trench and Durbin algorithms to be applied in order to reduce the computational complexity of inversion and hence accelerate computation. A stand-alone R-package implementing these methods using high-performance C++ libraries is available at: https://github.com/ococrook/toeplitz
statistics
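The computational point about exploiting Toeplitz structure can be demonstrated directly with SciPy's Levinson-Durbin-type solver, which solves a Toeplitz system in $O(n^2)$ instead of the $O(n^3)$ of a generic dense solve. The authors' own implementation is the R/C++ package linked above, so the snippet below is only an independent illustration with an assumed exponential covariance.

```python
import numpy as np
from scipy.linalg import solve, solve_toeplitz

n = 2000
t = np.arange(n)
first_col = np.exp(-0.1 * t)          # stationary (exponential) covariance: Toeplitz structure
rhs = np.random.default_rng(0).normal(size=n)

# Structure-exploiting solve (Levinson-Durbin style), O(n^2):
x_fast = solve_toeplitz(first_col, rhs)

# Generic dense solve for comparison, O(n^3):
C = np.exp(-0.1 * np.abs(t[:, None] - t[None, :]))
x_dense = solve(C, rhs)

print(np.max(np.abs(x_fast - x_dense)))   # agreement up to numerical error
```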
In this paper, we study the well-known stochastic linear bandit problem, where a decision-maker sequentially chooses among a set of given actions, observes their noisy rewards, and aims to maximize her cumulative expected reward over a horizon of length $T$. We first introduce a general analysis framework and a family of rate-optimal algorithms for the problem. We show that this family of algorithms includes well-known algorithms such as optimism in the face of uncertainty linear bandit (OFUL) and Thompson sampling (TS) as special cases. The proposed analysis technique directly captures the complexity of uncertainty in the action sets, which we show is tied to the regret analysis of any policy. This insight allows us to design a new rate-optimal policy, called Sieved-Greedy (SG), that reduces the over-exploration problem in existing algorithms. SG utilizes the data to discard the actions with relatively low uncertainty and then chooses one of the remaining actions greedily. In addition to proving that SG is theoretically rate-optimal, our empirical simulations show that SG significantly outperforms existing benchmarks such as greedy, OFUL, and TS. Moreover, our analysis technique yields a number of new results, such as obtaining poly-logarithmic (in $T$) regret bounds for OFUL and TS under a generalized gap assumption and a margin condition, as in the literature on contextual bandits. We also improve the regret bounds of these algorithms for the sub-class of $k$-armed contextual bandit problems by a factor of $\sqrt{k}$.
computer science
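Following the description of Sieved-Greedy above, a minimal sketch: maintain a ridge-regression estimate, compute each action's uncertainty width, sieve out the actions whose width is small relative to the largest, and play greedily among the survivors. The sieve threshold, action-set generation, and noise level are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, lam, sieve_frac = 5, 2000, 1.0, 0.5
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

V = lam * np.eye(d)          # regularised design matrix
b = np.zeros(d)

for t in range(T):
    actions = rng.normal(size=(20, d))                      # fresh action set each round
    actions /= np.linalg.norm(actions, axis=1, keepdims=True)
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b                                   # ridge estimate of theta
    widths = np.sqrt(np.einsum("ij,jk,ik->i", actions, V_inv, actions))
    # Sieve: discard actions whose uncertainty is small relative to the largest,
    # then act greedily among the survivors.
    survivors = np.where(widths >= sieve_frac * widths.max())[0]
    choice = survivors[np.argmax(actions[survivors] @ theta_hat)]
    x = actions[choice]
    reward = x @ theta_star + 0.1 * rng.normal()
    V += np.outer(x, x)
    b += reward * x
```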
We analyze the many-body phases of an ensemble of particles interacting via a Lifshitz--Petrich--Gaussian pair potential in a harmonic confinement. We focus on specific parameter regimes where we expect decagonal quasiperiodic cluster arrangements. Performing classical Monte Carlo as well as path integral quantum Monte Carlo methods, we numerically simulate systems of a few thousand particles including thermal and quantum fluctuations. Our findings indicate that the competition between the intrinsic length scale of the harmonic oscillator and the wavelengths associated to the minima of the pair potential generically lead to a destruction of the quasicrystalline pattern. Extensions of this work are also discussed.
condensed matter
In the absence of spin-orbit coupling, the conventional dogma of Anderson localization asserts that all states localize in two dimensions, with a glaring exception: the quantum Hall plateau transition (QHPT). In that case, the localization length diverges and interference-induced quantum-critical spatial fluctuations appear at all length scales. Normally QHPT states occur only at isolated energies; accessing them therefore requires fine-tuning of the electron density or magnetic field. In this paper we show that QHPT states can be realized throughout an energy continuum, i.e. as an "energy stack" of critical states wherein each state in the stack exhibits QHPT phenomenology. The stacking occurs without fine-tuning at the surface of a class AIII topological phase, where it is protected by U(1) and (anomalous) chiral or time-reversal symmetries. Spectrum-wide criticality is diagnosed by comparing numerics to universal results for the longitudinal Landauer conductance and wave function multifractality at the QHPT. Results are obtained from an effective 2D surface field theory and from a bulk 3D lattice model. We demonstrate that the stacking of quantum-critical QHPT states is a robust phenomenon that occurs for AIII topological phases with both odd and even winding numbers. The latter conclusion may have important implications for the still poorly-understood logarithmic conformal field theory believed to describe the QHPT.
condensed matter
We study the link between classical scattering of spinning black holes and quantum amplitudes for massive spin-$s$ particles. Generic spin orientations of the black holes are considered, allowing their spins to be deflected on par with their momenta. We rederive the spin-exponentiated structure of the relevant tree-level amplitude from minimal coupling to Einstein's gravity, which in the $s\to\infty$ limit generates the black holes' complete series of spin-induced multipoles. The resulting scattering function is seen to encode in a simple way the known net changes in the black-hole momenta and spins at first post-Minkowskian order. We connect our findings to a rigorous framework developed elsewhere for computing such observables from amplitudes.
high energy physics theory