This paper applies the dual-signal transformation LSTM network (DTLN) to the task of real-time acoustic echo cancellation (AEC). The DTLN combines a short-time Fourier transformation and a learned feature representation in a stacked network approach, which enables robust information processing both in the time-frequency domain and in the time domain, where phase information is included. The model is trained on only 60 h of real and synthetic echo scenarios. The training setup includes multi-lingual speech, data augmentation, additional noise and reverberation to create a model that should generalize well to a large variety of real-world conditions. The DTLN approach produces state-of-the-art performance on clean and noisy echo conditions, reducing acoustic echo and additional noise robustly. The method outperforms the AEC-Challenge baseline by 0.30 in terms of Mean Opinion Score (MOS).
electrical engineering and systems science
The present thesis examines the effects of vorticity on the thermodynamics of relativistic quantum systems. We extend Zubarev's non-equilibrium statistical operator method to address quantum effects induced by vorticity in the presence of chiral matter and an external electromagnetic field, keeping the full covariant and quantum properties of the system. To investigate the effects of vorticity, this work focuses on systems consisting of massless chiral fermions. We recover the significant quantum phenomena known in the literature, namely the chiral magnetic effect, the chiral vortical effect, the axial vortical effect and the chiral separation effect, and we also reveal the presence of additional effects at second order in thermal vorticity. This study also identifies and presents the exact solutions of thermal states for a system at global thermal equilibrium consisting of chiral massless fermions under the action of an external constant homogeneous magnetic field. Taking advantage of these exact solutions and of the conservation equations, the study also proves that the thermal coefficients related to first-order effects in thermal vorticity do not receive corrections from the external electromagnetic field. The same argument reveals relations between those thermal coefficients, even connecting coefficients related to vorticity to others related to the electromagnetic field. For instance, this analysis finds that the chiral vortical effect and the chiral magnetic effect conductivities are connected to each other by a differential equation. This research therefore provides the first steps toward deriving the relations between these effects and the interplay of electromagnetic fields and vorticity.
high energy physics theory
We consider the Bose gas on a $d$-dimensional anisotropic lattice employing the imperfect (mean-field) gas as a prototype example. We study the dimensional crossover arising as a result of varying the dispersion relation at finite temperature $T$. We analyze in particular situations where one of the relevant effective dimensionalities is located at or below the lower critical dimension, so that the Bose-Einstein condensate becomes expelled from the system by anisotropically modifying the lattice parameters controlling the kinetic term in the Hamiltonian. We clarify the mechanism governing this phenomenon. Subsequently we study the thermodynamic Casimir effect occurring in this system. We compute the exact profile of the scaling function for the Casimir energy. As an effect of strongly anisotropic scale invariance, the Casimir force below or at the critical temperature $T_c$ may be repulsive even for periodic boundary conditions. The corresponding Casimir amplitude is universal only in a restricted sense, and the power law governing the decay of the Casimir interaction becomes modified. We also demonstrate that, under certain circumstances, the scaling function is constant for sufficiently large values of the scaling variable, and in consequence is not an analytic function. At $T > T_c$ the Casimir-like interactions reflect the structure of the correlation function, and, for certain orientations of the confining walls, show exponentially damped oscillatory behavior so that the corresponding force is attractive or repulsive depending on the distance.
condensed matter
We consider a scenario where the dark sector includes two Feebly Interacting Massive Particles (FIMPs), with couplings to the Standard Model particles that allow their production in the Early Universe via thermal freeze-in. These couplings generically lead to the decay of the heavier dark matter component into the lighter, possibly leading to observable signals of the otherwise elusive FIMPs. Concretely, we argue that the loop induced decay $\psi_2\rightarrow\psi_1\gamma$ for fermionic FIMPs, or $\phi_2\rightarrow\phi_1\gamma\gamma$ for scalar FIMPs, could have detectable rates for model parameters compatible with the observed dark matter abundance.
high energy physics phenomenology
We study stellar-halo formation using six Milky Way-mass galaxies in FIRE-2 cosmological zoom simulations. We find that $5-40\%$ of the outer ($50-300$ kpc) stellar halo in each system consists of $\textit{in-situ}$ stars that were born in outflows from the main galaxy. Outflow stars originate from gas accelerated by super-bubble winds, which can be compressed, cool, and form co-moving stars. The majority of these stars remain bound to the halo and fall back with orbital properties similar to the rest of the stellar halo at $z=0$. In the outer halo, outflow stars are more spatially homogeneous, metal rich, and alpha-element-enhanced than the accreted stellar halo. At the solar location, up to $\sim 10 \%$ of our kinematically-identified halo stars were born in outflows; the fraction rises to as high as $\sim 40\%$ for the most metal-rich local halo stars ([Fe/H] $> -0.5$). We conclude that the Milky Way stellar halo could contain local counterparts to stars that are observed to form in molecular outflows in distant galaxies. Searches for such a population may provide a new, near-field approach to constraining feedback and outflow physics. A stellar halo contribution from outflows is a phase-reversal of the classic halo formation scenario of Eggen, Lynden-Bell $\&$ Sandage, who suggested that halo stars formed in rapidly $\textit{infalling}$ gas clouds. Stellar outflows may be observable in direct imaging of external galaxies and could provide a source for metal-rich, extreme velocity stars in the Milky Way.
astrophysics
We develop a new hybrid WKB technique to compute boundary-to-boundary scalar Green functions in asymptotically-AdS backgrounds in which the scalar wave equation is separable and is explicitly solvable in the asymptotic region. We apply this technique to a family of six-dimensional $\frac{1}{8}$-BPS asymptotically AdS$_3\,\times\,$S$^3$ horizonless geometries that have the same charges and angular momenta as a D1-D5-P black hole with a large horizon area. At large and intermediate distances, these geometries very closely approximate the extremal-BTZ$\,\times\,$S$^3$ geometry of the black hole, but instead of having an event horizon, these geometries have a smooth highly-redshifted global-AdS$_3\,\times\,$S$^3$ cap in the IR. We show that the response function of a scalar probe, in momentum space, is essentially given by the pole structure of the highly-redshifted global-AdS$_3$ modulated by the BTZ response function. In position space, this translates into a sharp exponential black-hole-like decay for times shorter than $N_1 N_5$, followed by the emergence of evenly spaced "echoes from the cap," with period $\sim N_1 N_5$. Our result shows that horizonless microstate geometries can have the same thermal decay as black holes without the associated information loss.
high energy physics theory
An integrable nonlinear model for the time-dependent equilibration of a bosonic system that has been devised earlier is solved exactly with boundary conditions that are appropriate for a truncated Bose-Einstein distribution, and include the singularity at $\epsilon = \mu$. The buildup of a thermal tail during evaporative cooling, as well as the transition to the condensed state are accounted for. To enforce particle-number conservation during the cooling process with an energy-dependent density of states for a three-dimensional thermal cloud, a time-dependent chemical potential is introduced.
condensed matter
We study the extent to which knot and link states (that is, states in 3d Chern-Simons theory prepared by path integration on knot and link complements) can or cannot be described by stabilizer states. States which are not classical mixtures of stabilizer states are known as "magic states" and play a key role in quantum resource theory. By implementing a particular magic monotone known as the "mana" we quantify the magic of knot and link states. In particular, for $SU(2)_k$ Chern-Simons theory we show that knot and link states are generically magical. For link states, we further investigate the mana associated to correlations between separate boundaries which characterizes the state's long-range magic. Our numerical results suggest that the magic of a majority of link states is entirely long-range. We make these statements sharper for torus links.
high energy physics theory
I obtain new evaluations of special values of multiple polylogarithms by using a limiting case of a basic hypergeometric identity of G. E. Andrews.
mathematics
We revisit our earlier work, which led to a periodic table of Borcherds-Kac-Moody algebras that appeared in the context of the refined generating function of quarter-BPS dyons in $\mathcal{N}=4$ supersymmetric four-dimensional string theory. We make new additions to the periodic table by making use of connections with generalized Mathieu moonshine as well as umbral moonshine. We show the modularity of some Siegel modular forms that appear in umbral moonshine associated with Niemeier lattices constructed from A-type root systems and further show that the same Siegel modular forms appear for generalized Mathieu moonshine in some cases. We argue for the existence of a new kind of BKM Lie superalgebras that arise from the dyon generating functions for the $\mathbb{Z}_5$ and $\mathbb{Z}_6$ CHL orbifolds.
high energy physics theory
We consider the involutions known as "toggles," which have been used to give simplified proofs of the fundamental properties of the promotion and evacuation maps. We transfer these involutions so that they generate a group $\mathscr P_n$ that acts on the set $S_n$ of permutations of $\{1,\ldots,n\}$. After characterizing its orbits in terms of permutation skeletons, we apply the action in order to understand West's stack-sorting map. We obtain a very simple proof of a result that clarifies and extensively generalizes a theorem of Bouvel and Guibert and also generalizes a theorem of Bousquet-M\'elou. We also settle a conjecture of Bouvel and Guibert. We prove a result related to the recently-introduced notion of postorder Wilf equivalence. Finally, we investigate an interesting connection among the action of $\mathscr P_n$ on $S_n$, the group structure of $S_n$, and the stack-sorting map.
mathematics
Machine learning is used extensively in recommender systems deployed in products. The decisions made by these systems can influence user beliefs and preferences, which in turn affect the feedback the learning system receives - thus creating a feedback loop. This phenomenon can give rise to the so-called "echo chambers" or "filter bubbles", which have user and societal implications. In this paper, we provide a novel theoretical analysis that examines both the role of user dynamics and the behavior of recommender systems, disentangling the echo-chamber effect from the filter-bubble effect. In addition, we offer practical solutions to slow down system degeneracy. Our study contributes toward understanding and developing solutions for commonly cited issues in this complex temporal scenario, an area that is still largely unexplored.
statistics
We obtain estimates on the uniform convergence rate of the Birkhoff average of a continuous observable over torus translations and affine skew product toral transformations. The convergence rate depends explicitly on the modulus of continuity of the observable and on the arithmetic properties of the frequency defining the transformation. Furthermore, we show that for the one-dimensional torus translation, these estimates are nearly optimal.
mathematics
Uncertainty quantification is a fundamental problem in the analysis and interpretation of synthetic control (SC) methods. We develop conditional prediction intervals in the SC framework, and provide conditions under which these intervals offer finite-sample probability guarantees. Our method allows for covariate adjustment and non-stationary data, among other practically relevant features. The construction begins by noting that the statistical uncertainty of the SC prediction is governed by two distinct sources of randomness: one coming from the construction of the (likely misspecified) SC weights in the pre-treatment period, and the other coming from the unobservable stochastic error in the post-treatment period when the treatment effect is analyzed. Accordingly, our proposed prediction intervals are constructed taking into account both sources of randomness. For implementation, we propose a simulation-based approach along with finite-sample-based probability bound arguments, naturally leading to principled sensitivity analysis methods. We illustrate the numerical performance of our methods using empirical applications and a small simulation study.
statistics
Power spectral density (PSD) estimates of various microphone signal components are essential to many speech enhancement procedures. As speech is highly non-stationary, performance improvements may be gained by maintaining time-variations in PSD estimates. In this paper, we propose an instantaneous PSD estimation approach based on generalized principal components. Similarly to other eigenspace-based PSD estimation approaches, we rely on recursive averaging in order to obtain a microphone signal correlation matrix estimate to be decomposed. However, instead of estimating the PSDs directly from the temporally smooth generalized eigenvalues of this matrix, yielding temporally smooth PSD estimates, we propose to estimate the PSDs from newly defined instantaneous generalized eigenvalues, yielding instantaneous PSD estimates. The instantaneous generalized eigenvalues are defined from the generalized principal components, i.e., a generalized eigenvector-based transform of the microphone signals. We further show that the smooth generalized eigenvalues can be understood as a recursive average of the instantaneous generalized eigenvalues. Simulation results comparing the multi-channel Wiener filter (MWF) with smooth and instantaneous PSD estimates indicate better speech enhancement performance for the latter. A MATLAB implementation is available online.
electrical engineering and systems science
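To make the pipeline above concrete, here is a minimal numerical sketch (not the authors' MATLAB code; the array sizes, smoothing factor, and synthetic single-source data are assumptions for illustration). It recursively averages a microphone correlation matrix, takes its generalized eigendecomposition against a known noise correlation matrix, forms the generalized principal components, and checks that the recursive average of the instantaneous generalized eigenvalues approximately matches the smooth ones:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
M, T, alpha = 4, 500, 0.95                   # mics, frames, smoothing factor

# Synthetic complex STFT-bin data: one "speech" direction plus diffuse noise.
d = rng.normal(size=M) + 1j * rng.normal(size=M)      # steering vector
s = rng.normal(size=T) + 1j * rng.normal(size=T)      # source signal
X = np.outer(d, s) + 0.3 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))

Rv = 0.18 * np.eye(M)                        # noise correlation matrix (known here)
Rx = np.eye(M, dtype=complex)                # recursively averaged signal correlation
inst_avg = np.zeros(M)
for t in range(T):
    x = X[:, t]
    Rx = alpha * Rx + (1 - alpha) * np.outer(x, x.conj())
    # GEVD of (Rx, Rv); eigh normalizes eigenvectors so that Q^H Rv Q = I
    lam_smooth, Q = eigh(Rx, Rv)
    z = Q.conj().T @ x                       # generalized principal components
    lam_inst = np.abs(z) ** 2                # instantaneous generalized eigenvalues
    inst_avg = alpha * inst_avg + (1 - alpha) * lam_inst

# The recursive average of the instantaneous eigenvalues should be close to
# the smooth generalized eigenvalues once the recursion has converged.
print("smooth eigenvalues:        ", np.round(lam_smooth, 2))
print("averaged inst. eigenvalues:", np.round(inst_avg, 2))
```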
In this study, we analyze how changes in the geometry of a potential energy surface in terms of depth and flatness can affect the reaction dynamics. We formulate depth and flatness in the context of one and two degree-of-freedom (DOF) Hamiltonian normal form for the saddle-node bifurcation and quantify their influence on chemical reaction dynamics. In a recent work, Garc\'ia-Garrido, Naik, and Wiggins illustrated how changing the well-depth of a potential energy surface (PES) can lead to a saddle-node bifurcation. They have shown how the geometry of cylindrical manifolds associated with the rank-1 saddle changes en route to the saddle-node bifurcation. Using the formulation presented here, we show how changes in the parameters of the potential energy control the depth and flatness and show their role in the quantitative measures of a chemical reaction. We quantify this role of the depth and flatness by calculating the ratio of the bottleneck-width and well-width, reaction probability (also known as transition fraction or population fraction), gap time (or first passage time) distribution, and directional flux through the dividing surface (DS) for small to high values of total energy. The results obtained for these quantitative measures are in agreement with the qualitative understanding of the reaction dynamics.
physics
Actin cytoskeleton networks generate local topological signatures due to the natural variations in the number, size, and shape of holes of the networks. Persistent homology is a method that explores these topological properties of data and summarizes them as persistence diagrams. In this work, we analyze and classify these filament networks by transforming them into persistence diagrams whose variability is quantified via a Bayesian framework on the space of persistence diagrams. The proposed generalized Bayesian framework adopts an independent and identically distributed cluster point process characterization of persistence diagrams and relies on a substitution likelihood argument. This framework provides the flexibility to estimate the posterior cardinality distribution of points in a persistence diagram and the posterior spatial distribution simultaneously. We present a closed form of the posteriors under the assumption of Gaussian mixtures and binomials for prior intensity and cardinality respectively. Using this posterior calculation, we implement a Bayes factor algorithm to classify the actin filament networks and benchmark it against several state-of-the-art classification methods.
statistics
We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a regular (aperiodic and irreducible) finite Markov chain. Specifically, consider a random walk on a regular Markov chain and a Hermitian matrix-valued function on its state space. Our result gives exponentially decreasing bounds on the tail distributions of the extreme eigenvalues of the sample mean matrix. Our proof is based on the matrix expander (regular undirected graph) Chernoff bound [Garg et al. STOC '18] and scalar Chernoff-Hoeffding bounds for Markov chains [Chung et al. STACS '12]. Our matrix Chernoff bound for Markov chains can be applied to analyze the behavior of co-occurrence statistics for sequential data, which are common and important data signals in machine learning. We show that given a regular Markov chain with $n$ states and mixing time $\tau$, we need a trajectory of length $O(\tau (\log{(n)}+\log{(\tau)})/\epsilon^2)$ to achieve an estimator of the co-occurrence matrix with error bound $\epsilon$. We conduct several experiments and the experimental results are consistent with the exponentially fast convergence rate from theoretical analysis. Our result gives the first bound on the convergence rate of the co-occurrence matrix and the first sample complexity analysis in graph representation learning.
statistics
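As a small numerical illustration of the co-occurrence estimation result (a sketch under my own assumptions: the chain, window size, and normalization are invented, and the paper's actual estimator may differ in details), the following compares the empirical co-occurrence matrix of a single trajectory with its asymptotic target $\frac{1}{w}\sum_{r=1}^{w}\mathrm{diag}(\pi)P^r$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, w, T = 6, 2, 100_000                  # states, window size, trajectory length

# A regular (irreducible, aperiodic) chain: lazy random walk on a cycle.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

traj = np.empty(T, dtype=int)            # sample one long trajectory
traj[0] = 0
for t in range(1, T):
    traj[t] = rng.choice(n, p=P[traj[t - 1]])

C = np.zeros((n, n))                     # co-occurrence counts at offsets 1..w
for r in range(1, w + 1):
    np.add.at(C, (traj[:-r], traj[r:]), 1.0)
C /= C.sum()

pi = np.ones(n) / n                      # stationary (P is doubly stochastic)
target = sum(np.diag(pi) @ np.linalg.matrix_power(P, r) for r in range(1, w + 1)) / w
print("max entrywise error:", np.abs(C - target).max())  # shrinks as T grows
```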
We prove the relative Grauert-Riemenschneider vanishing, Kawamata-Viehweg vanishing, and Koll\'ar injectivity theorems for $\mathbf{Q}$-schemes, solving conjectures of Boutot and Kawakita. Our proof uses Grothendieck's limit theorem for sheaf cohomology and Zariski-Riemann spaces. As an application, we extend Boutot's theorem to the case of locally quasi-excellent $\mathbf{Q}$-algebras by showing that if $R \to R'$ is a cyclically pure homomorphism of locally quasi-excellent $\mathbf{Q}$-algebras, and $R'$ has rational singularities, then $R$ has rational singularities. This solves a conjecture of Boutot and answers a question of Schoutens in the locally quasi-excellent case.
mathematics
The computational burden of running a complex computer model can make optimization impractical. Gaussian Processes (GPs) are statistical surrogates (also known as emulators) that alleviate this issue since they cheaply replace the computer model. As a result, the exploration vs. exploitation trade-off strategy can be accelerated by building a GP surrogate. In this paper, we propose a new surrogate-based optimization scheme that minimizes the number of evaluations of the computationally expensive function. Taking advantage of the parallel evaluation of the unknown function, the uncertain regions are explored simultaneously, and a batch of input points is chosen using Mutual Information for Computer Experiments (MICE), a sequential design algorithm which maximizes the information-theoretic mutual information over the input space. The computational efficiency of interweaving the optimization scheme with MICE (optim-MICE) is examined and demonstrated on test functions. Optim-MICE is compared with state-of-the-art heuristics such as Efficient Global Optimization (EGO) and GP-Upper Confidence Bound (GP-UCB). We demonstrate that optim-MICE outperforms these schemes on a large range of computational experiments.
statistics
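The sketch below conveys the overall loop structure only: a GP surrogate is refit, a batch of candidates is selected, and the batch is evaluated "in parallel". For simplicity it uses the well-known kriging-believer batch heuristic with a lower-confidence-bound acquisition in place of the MICE mutual-information criterion, so the acquisition rule, kernel, and test function are all stand-in assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                   # toy stand-in for the expensive model
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 0])

rng = np.random.default_rng(2)
X = rng.uniform(0, 2, size=(5, 1))
y = f(X)
cand = np.linspace(0, 2, 201).reshape(-1, 1)  # candidate design points

for it in range(6):                           # outer optimization loop
    gp = GaussianProcessRegressor(RBF(0.3), alpha=1e-6).fit(X, y)
    Xf, yf, batch = X.copy(), y.copy(), []
    for _ in range(3):                        # greedy batch of 3 points
        mu, sd = gp.predict(cand, return_std=True)
        j = int(np.argmin(mu - 2.0 * sd))     # LCB acquisition (minimization)
        batch.append(cand[j])
        # "believe" the GP mean at the chosen point so the next pick spreads out
        Xf = np.vstack([Xf, cand[j:j + 1]])
        yf = np.append(yf, mu[j])
        gp = GaussianProcessRegressor(RBF(0.3), alpha=1e-6).fit(Xf, yf)
    X = np.vstack([X, np.array(batch)])       # evaluate the whole batch in parallel
    y = np.append(y, f(np.array(batch)))

print("best observed value:", y.min())
```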
We present a general theory of comparison of quantum channels, concerned with the question of simulability or approximate simulability of a given quantum channel by allowed transformations of another given channel. We introduce a modification of conditional min-entropies, with respect to the set F of allowed transformations, and show that under some conditions on F, these quantities characterize approximate simulability. If F is the set of free superchannels in a quantum resource theory of processes, the modified conditional min-entropies form a complete set of resource monotones. If the transformations in F consist of a preprocessing and a postprocessing of specified forms, approximate simulability is also characterized in terms of success probabilities in certain guessing games, where a preprocessing of a given form can be chosen and the measurements are restricted. These results are applied to several specific cases of simulability of quantum channels, including postprocessings, preprocessings and processing of bipartite channels by LOCC superchannels and by partial superchannels, as well as simulability of sets of quantum measurements. These questions are first studied in a general setting that is an extension of the framework of general probabilistic theories (GPT), suitable for dealing with channels. Here we prove a general theorem showing that approximate simulability can be characterized by comparing outcome probabilities in certain tests. This result is inspired by the classical Le Cam randomization criterion for statistical experiments and contains its finite-dimensional version as a special case.
quantum physics
Many real-world decision-making tasks require learning causal relationships between a set of variables. Typical causal discovery methods, however, require that all variables are observed, which might not be realistic in practice. Unfortunately, in the presence of latent confounding, recovering causal relationships from observational data without making additional assumptions is an ill-posed problem. Fortunately, in practice, additional structure among the confounders can be expected; one such example is pervasive confounding, which has been exploited for consistent causal estimation in the special case of linear causal models. In this paper, we provide a proof and method to estimate causal relationships in the non-linear, pervasive confounding setting. The heart of our procedure relies on the ability to estimate the pervasive confounding variation through a simple spectral decomposition of the observed data matrix. We derive a DAG score function based on this insight, and empirically compare our method to existing procedures. We show improved performance on both simulated and real datasets by explicitly accounting for both confounders and non-linear effects.
statistics
This is a brief pedagogic introduction to searches for new physics with top quarks. It covers indirect searches for heavy new particles based on standard model effective theory and direct searches for new signatures of a light hidden sector. LHC and flavor observables complement and strengthen each other in this endeavor.
high energy physics phenomenology
The Mission Accessible Near-Earth Object Survey (MANOS) aims to observe and characterize small (mean absolute magnitude H ~ 25 mag) Near-Earth Objects (NEOs) that are accessible by spacecraft (mean $\Delta v$ ~ 5.7 km/s) and that make close approaches with the Earth (mean Minimum Orbital Intersection Distance MOID ~ 0.03 AU). We present here the first results of the MANOS visible spectroscopic survey. The spectra were obtained from August 2013 to March 2018 at Lowell Observatory's 4.3 m Discovery Channel Telescope, and at both the Gemini North and South facilities. In total, 210 NEOs have been observed and taxonomically classified. Our taxonomic distribution shows significant variations with respect to surveys of larger objects. We suspect these to be due to a dependence of Main Belt source regions on object size. Compared to previous surveys of larger objects (Binzel et al. 2019, 2004; Perna et al. 2018), we report a lower fraction of S+Q-complex asteroids of 43.8 $\pm$ 4.6%. We associate this decrease with a lack of Phocaea family members at very small size. We also report higher fractions of X-complex and A-type asteroids of 23.8 $\pm$ 3.3% and 3.8 $\pm$ 1.3% respectively, due to an increase of Hungaria family objects at small size. We find a strong correlation between the Q/S ratio and perihelion distance. We suggest this correlation is due to planetary close encounters with Venus playing a major role in turning asteroids from S to Q-type. This hypothesis is supported by a similar correlation between the Q/S ratio and Venus MOID.
astrophysics
We describe and investigate a connection between the topology of isolated singularities of plane curves and the mutation equivalence, in the sense of cluster algebra theory, of the quivers associated with their morsifications.
mathematics
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires further research. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
computer science
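A minimal single-step sketch of the AdvGAN training idea (MLPs on flat inputs, an untargeted adversarial loss, and arbitrary loss weights are my simplifications; the paper's network architectures, hinge loss on the perturbation norm, and distillation step are not reproduced here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_cls, eps = 784, 10, 0.3
f = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, n_cls))  # target
G = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim), nn.Tanh())
D = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))
for p in f.parameters():
    p.requires_grad_(False)                   # the target model stays fixed
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(x, y):
    x_adv = (x + eps * G(x)).clamp(0, 1)      # bounded perturbation of the input

    # Discriminator: distinguish clean inputs from adversarial ones.
    d_loss = F.binary_cross_entropy_with_logits(D(x), torch.ones(len(x), 1)) \
           + F.binary_cross_entropy_with_logits(D(x_adv.detach()), torch.zeros(len(x), 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator: look "clean" to D while making the target f mislabel x_adv.
    gan_loss = F.binary_cross_entropy_with_logits(D(x_adv), torch.ones(len(x), 1))
    adv_loss = -F.cross_entropy(f(x_adv), y)  # untargeted: push away from true label
    g_loss = gan_loss + 10.0 * adv_loss       # weight is an arbitrary choice here
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

x, y = torch.rand(32, dim), torch.randint(0, n_cls, (32,))
print(train_step(x, y))
```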
This work develops a rigorous theoretical basis for the claim that the deep Bayesian neural network (BNN) is an effective tool for high-dimensional variable selection with rigorous uncertainty quantification. We develop new Bayesian non-parametric theorems to show that a properly configured deep BNN (1) learns variable importance effectively in high dimensions, with a learning rate that can sometimes "break" the curse of dimensionality, and (2) quantifies the uncertainty of variable importance rigorously, in the sense that its 95% credible intervals for variable importance indeed cover the truth 95% of the time (i.e., the Bernstein-von Mises (BvM) phenomenon). The theoretical results suggest a simple variable selection algorithm based on the BNN's credible intervals. Extensive simulation confirms the theoretical findings and shows that the proposed algorithm outperforms existing classic and neural-network-based variable selection methods, particularly in high dimensions.
statistics
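The selection rule suggested by the abstract can be stated in a few lines. In this sketch the posterior draws of variable importance are simulated; in practice they would come from the fitted BNN posterior, and the importance measure and all numbers below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
p, S = 20, 1000                      # number of variables, posterior draws

# Stand-in for BNN posterior draws of a variable-importance measure:
# the first three variables truly matter, the rest are noise.
true_imp = np.zeros(p)
true_imp[:3] = [2.0, 1.5, 1.0]
draws = true_imp + 0.3 * rng.normal(size=(S, p))

lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)   # 95% credible intervals
selected = np.where(lo > 0)[0]       # keep variables whose interval excludes zero
print("selected variables:", selected)
```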
We study conformal field theories (CFTs) on curved spaces, including both orientable and unorientable manifolds, possibly with boundaries. We first review conformal transformations on curved manifolds. We then compute the identity components of conformal groups acting on various metric spaces using a simple fact: local coordinate systems must be single-valued. The boundary conditions thus obtained, which must be satisfied by conformal Killing vectors (CKVs), correctly reproduce known conformal groups. As a byproduct, on $\mathbb S^1_l\times\mathbb H^2_r$, by setting the radii $l=Nr$ with $N\in\mathbb N^\times$, we find that (the identity component of) the conformal group enhances, and we argue that this persists in higher dimensions. We also discuss the forms of correlation functions on these spaces using the symmetries. Finally, we study the $d$-torus $\mathbb T^d$ in detail, and show that the identity component of the conformal group acting on the manifold is in general given by $\text{Conf}_0(\mathbb T^d)\simeq U(1)^d$ when $d\ge2$. Using this fact, we suggest some candidates for conformal manifolds of CFTs on $\mathbb T^d$ without assuming the presence of supersymmetry (SUSY). In order to clarify which parts of correlation functions are physical, we also discuss the renormalization group (RG) and local counterterms on curved spaces.
high energy physics theory
By employing a pseudo-orthonormal coordinate-free approach, the Dirac equation for particles in the Kerr--Newman spacetime is separated into its radial and angular parts. In the massless case, to which special attention is given, the general Heun-type equations turn into their confluent form. We show how one recovers some results previously obtained in the literature by other means.
high energy physics theory
Neglecting spin effects, one introduces here a subtle approximation for the scattering angle, which allows one to obtain a logarithmic leading Regge pole consistent with the Froissart-Martin bound. A simple parameterization is also introduced for the proton-proton total cross section. Fitting procedures are implemented only for the highest-energy experimental data available. The intercept for a linear approach is obtained, indicating the presence of a soft pomeron. The Tsallis entropy in impact parameter space is calculated using the Regge pole formalism. This entropy depends on a free parameter, whose value implies the existence of a central or non-central maximum value for the entropy. The hollowness effect is discussed in terms of this parameter.
high energy physics phenomenology
One of the main factors limiting the efficiency of spark-ignited engines is the occurrence of engine knock. Under high-temperature, high-pressure in-cylinder conditions, the fuel-air mixture auto-ignites, creating pressure shock waves in the cylinder. Knock can significantly damage the engine and hinder its performance; as such, conservative knock control strategies are generally implemented that avoid such operating conditions at the cost of lower thermal efficiency. Significant improvements in the performance of conventional knock controllers are possible if the properties of the knock process are better characterized and exploited in knock controller designs. One method for better characterizing knocking instances is to employ a probabilistic approach, in which the likelihood of knock is derived from the statistical distribution of knock intensity. In this paper, it is shown that knock intensity values at a fixed operating point for single-fuel and dual-fuel engines are accurately described by a mixed lognormal distribution. The fitting accuracy is compared against that for a randomly generated mixed-lognormally distributed data set, and shown to exceed a 95% accuracy threshold for almost all of the operating points tested. Additionally, this paper discusses a stochastic knock control approach that leverages the mixed lognormal distribution to adjust spark timing based on knock intensity measurements. This more informed knock control strategy would allow for improvements in engine performance and fuel efficiency by minimizing knock occurrences.
electrical engineering and systems science
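Because a lognormal mixture in knock intensity is a Gaussian mixture in log intensity, the fit itself is short. This is an illustrative sketch with synthetic data, not the paper's data or code; the two-component choice, threshold, and parameters are assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic knock-intensity samples from a two-component lognormal mixture.
k = np.concatenate([rng.lognormal(mean=-1.0, sigma=0.4, size=700),
                    rng.lognormal(mean=0.8, sigma=0.6, size=300)])

gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(k).reshape(-1, 1))
w = gm.weights_
mu = gm.means_.ravel()
sig = np.sqrt(gm.covariances_.ravel())
print("weights:", np.round(w, 2), "log-means:", np.round(mu, 2), "log-sds:", np.round(sig, 2))

# Likelihood of knock above a threshold k*, the quantity a stochastic
# spark-timing controller could act on.
k_star = 2.0
p_knock = sum(wi * (1 - norm.cdf((np.log(k_star) - mi) / si))
              for wi, mi, si in zip(w, mu, sig))
print("P(K > k*):", round(float(p_knock), 4))
```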
This tutorial demonstrates the estimation and interpretation of the Multilevel Social Relations Model for dyadic data. The Social Relations Model is appropriate for data structures in which individuals appear multiple times as both the source and recipient of dyadic outcomes. Estimated using Stat-JR statistical software, the models are fitted to multiple outcome types: continuous, count, and binary outcomes. In addition, models are demonstrated for dyadic data from a single group and from multiple groups. The modeling approaches are illustrated via a series of case studies, and the data and software to replicate these analyses are available as supplemental files.
statistics
Recent research has demonstrated that even seemingly well-trained machine learning (ML) models are highly vulnerable to adversarial examples. As ML techniques become a popular solution for cyber-physical system (CPS) applications in the research literature, the security of these applications is of concern. However, current studies on adversarial machine learning (AML) mainly focus on pure cyberspace domains. The risks that adversarial examples pose to CPS applications have not been well investigated. In particular, due to the distributed property of data sources and the inherent physical constraints imposed by CPSs, the widely used threat models and the state-of-the-art AML algorithms of previous cyberspace research become infeasible. We study the potential vulnerabilities of ML applied in CPSs by proposing Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples that satisfy the intrinsic constraints of the physical systems. We first summarize the differences between AML in CPSs and AML in existing cyberspace systems and propose a general threat model for ConAML. We then design a best-effort search algorithm to iteratively generate adversarial examples with linear physical constraints. We evaluate our algorithms with simulations of two typical CPSs, the power grid and the water treatment system. The results show that our ConAML algorithms can effectively generate adversarial examples which significantly decrease the performance of the ML models even under practical constraints.
computer science
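A minimal sketch of a best-effort constrained attack of this kind (all specifics are assumptions: a logistic-regression stand-in for the ML model, a single sum-to-zero equality constraint as the "physical law", and FGSM-style steps; the paper's algorithms and threat model are richer):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 8
w, b = rng.normal(size=d), 0.1            # toy ML model: logistic regression
x0 = rng.uniform(0, 1, size=d)            # original (physically consistent) input

# Linear physical constraint on the perturbation delta: A @ delta = 0,
# e.g. a conservation law every consistent measurement vector must satisfy.
A = np.ones((1, d))
P = np.eye(d) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto null(A)

def model(x):                             # P(class 1 | x)
    return 1 / (1 + np.exp(-(w @ x + b)))

delta, eps, step = np.zeros(d), 0.2, 0.05
for _ in range(50):                       # best-effort projected gradient ascent
    s = model(x0 + delta)
    g = s * (1 - s) * w                   # gradient of the model score w.r.t. x
    delta = np.clip(delta + step * np.sign(g), -eps, eps)
    delta = P @ delta                     # equality constraint enforced exactly;
                                          # the box bound then holds only approximately

print("clean score:", round(model(x0), 3),
      " adversarial score:", round(model(x0 + delta), 3))
print("constraint violation |A @ delta|:", np.abs(A @ delta).max())
```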
We investigated the time-variability and spectral properties of the eclipsing X-ray source Circinus Galaxy X-1 (CG X-1), using Chandra, XMM-Newton and ROSAT. We phase-connected the lightcurves observed over 20 years, and obtained a best-fitting period $P = (25,970.0 \pm 0.1)$ s $\approx$7.2 hr, and a period derivative $\dot{P}/P = (10.2\pm4.6) \times 10^{-7}$ yr$^{-1}$. The X-ray lightcurve shows asymmetric eclipses, with sharp ingresses and slow, irregular egresses. The eclipse profile and duration vary substantially from cycle to cycle. We show that the X-ray spectra are consistent with a power-law-like component, absorbed by neutral and ionized Compton-thin material, and by a Compton-thick, partial-covering medium, responsible for the irregular dips. The high X-ray/optical flux ratio rules out the possibility that CG X-1 is a foreground Cataclysmic Variable; in agreement with previous studies, we conclude that it is the first example of a compact ultraluminous X-ray source fed by a Wolf-Rayet star or stripped Helium star. Its unocculted luminosity varies between $\approx$4 $\times 10^{39}$ erg s$^{-1}$ and $\approx$3 $\times 10^{40}$ erg s$^{-1}$. Both the donor star and the super-Eddington compact object drive powerful outflows: we suggest that the occulting clouds are produced in the wind-wind collision region and in the bow shock in front of the compact object. Among the rare sample of Wolf-Rayet X-ray binaries, CG X-1 is an exceptional target for studies of super-critical accretion and close binary evolution; it is also a likely progenitor of gravitational wave events.
astrophysics
The modified XY model is a modification of the XY model by the addition of a half-periodic term. The modified Goldstone model is a regular, continuum version of the modified XY model. The former admits a vortex molecule, that is, two half-quantized vortices connected by a domain wall, as a regular topological soliton solution to the equation of motion, while the latter admits it as a singular configuration. Here we define the ${\mathbb Z}_n$ modified XY and Goldstone models so that the $n=2$ cases reduce to the modified XY and Goldstone models, respectively. We exhaust all stable and metastable vortex solutions for $n=2,3$ and find a vortex confinement transition from an integer vortex to a vortex molecule of $n$ $1/n$-quantized vortices, depending on the ratio between the term of the XY model and the modified term. We find that for $n=3$ a rod-shaped molecule is the most stable, while a Y-shaped molecule is metastable. We also construct some solutions for the case of $n=4$. The vortex confinement transition can be understood in terms of the ${\mathbb C}/{\mathbb Z}_n$ orbifold geometry.
high energy physics theory
The chromospheric Lyman-alpha line of neutral hydrogen (\lya; 1216\AA) is the strongest emission line in the solar spectrum. Fluctuations in \lya\ are known to drive changes in planetary atmospheres, although few instruments have had the ability to capture rapid \lya\ enhancements during solar flares. In this paper we describe flare-associated emissions via a statistical study of 477 M- and X-class flares as observed by the EUV Sensor on board the 15th Geostationary Operational Environmental Satellite, which has been monitoring the full-disk solar \lya\ irradiance on 10~s timescales over the course of Solar Cycle 24. The vast majority (95\%) of these flares produced \lya\ enhancements of 10\% or less above background levels, with a maximum increase of $\sim$30\%. The irradiance in \lya\ was found to exceed that of the 1-8 \AA\ X-ray irradiance by as much as two orders of magnitude in some cases, although flares that occurred closer to the solar limb were found to exhibit less of a \lya\ enhancement. This center-to-limb variation was verified through a joint, stereoscopic observation of an X-class flare that appeared near the limb as viewed from Earth, but close to disk center as viewed by the MAVEN spacecraft in orbit around Mars. The frequency distribution of peak \lya\ was found to have a power-law slope of $2.8\pm0.27$. We also show that increased \lya\ flux is closely correlated with induced currents in the ionospheric E-layer through the detection of the solar flare effect as observed by the Kakioka magnetometer.
astrophysics
In this paper, we develop a new neural network family based on power series expansion, which is proved to achieve better approximation accuracy than existing neural networks. This new set of neural networks can improve expressive power while preserving comparable computational cost, by increasing the degree of the network instead of increasing the depth or width. Numerical results demonstrate the advantage of this new neural network.
mathematics
Barium stars are peculiar red giants characterized by an overabundance of s-process elements along with an enrichment in carbon. These stars are found in binaries with white dwarf companions. The more recently formed of these stars are still surrounded by a planetary nebula. Precise abundance determinations of the various s-process elements, especially of the lightest short-lived radionuclide technetium, will establish constraints on the formation of s-process elements in asymptotic giant branch stars, as well as on mass transfer through, for example, stellar wind, Roche-lobe overflow, and common-envelope evolution. We performed a detailed spectral analysis of the K-type subgiant central star of the planetary nebula Hen 2-39 based on high-resolution optical spectra obtained with the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope using LTE model atmospheres. We confirm the effective temperature of $T_\mathrm{eff} = 4350 \pm 150$ K for the central star of the planetary nebula Hen 2-39. It has a photospheric carbon enrichment of $[\mathrm{C/H}]= 0.36 \pm 0.08$ and a barium overabundance of $[\mathrm{Ba/Fe}]= 1.8 \pm 0.5$. We find a deficiency for most of the iron-group elements (calcium to iron) and establish an upper abundance limit for technetium ($\log \epsilon_\mathrm{Tc} < 2.5$). The quality of the available optical spectra is not sufficient to measure the abundances of all s-process elements accurately. Despite large uncertainties in the abundances as well as in the model yields, the derived abundances are most consistent with a progenitor mass in the range 1.75-3.00 $M_\odot$ and a metallicity of $[\mathrm{Fe/H}]= -0.3 \pm 1.0$. This result leads to the conclusion that the formation of such systems requires a relatively large mass transfer that is most easily obtained via wind-Roche lobe overflow.
astrophysics
The state-of-the-art neural network architectures make it possible to create spoken language understanding systems with high quality and fast processing time. One major challenge for real-world applications is the high latency of these systems caused by triggered actions with long execution times. If an action can be separated into subactions, the reaction time of the system can be improved through incremental processing of the user utterance, starting subactions while the utterance is still being spoken. In this work, we present a model-agnostic method to achieve high quality when processing incrementally produced partial utterances. Based on clean and noisy versions of the ATIS dataset, we show how our method can be used to create datasets for low-latency natural language understanding components. We obtain improvements of up to 47.91 absolute percentage points in F1-score.
computer science
Let $A$ and $B$ be C*-algebras and $\varphi\colon A\to B$ be a $*$-homomorphism. We discuss the properties of the kernel and (co-)image of the induced map $\mathrm{K}_{0}(\varphi)\colon \mathrm{K}_{0}(A) \to \mathrm{K}_{0}(B)$ on the level of K-theory. In particular, we are interested in the case that the co-image is torsion free, and show that it holds when $A$ and $ B $ are commutative and unital, $B$ has real rank zero, and $\varphi$ is unital and injective. We also show that $ A$ is embeddable in $B$ if $ \mathrm{K}_{0}(\varphi)$ is injective and $A$ has stable rank one and real rank zero.
mathematics
In this paper we present a phenomenological analysis of the Partially Aligned Two Higgs Doublet Model (PA-2HDM) using leptonic decays of mesons and $B^0_{d,s}$-$\bar B^0_{d,s}$ mixing. We focus our attention on a scenario where the leading contribution to FCNC is given by the tree-level interaction with the light pseudoscalar $A^0$ ($M_{A^0}\sim 250$ GeV). We show how an underlying flavor symmetry controls FCNC in the quark and lepton couplings with the pseudoscalar, without alignment between Yukawa matrices. Upper bounds on the free parameters are calculated in the context of the leptonic decays $B^0_{s,d}\to\mu^+\mu^-$ and $K^0_L\to \mu^+\mu^-$ and $B^0_{s,d}$ mixing. Moreover, our assumptions imply that bounds on New Physics contributions to the quark sector coming from $B^0_{s,d}$ mixing impose an upper bound on the parameters of the leptonic sector. Finally, we give predictions of branching ratios for leptonic decays of mesons with FCNC and LFV.
high energy physics phenomenology
Football forecasting models traditionally rate teams on past match results, that is, on the number of goals scored. Goals, however, involve a high element of chance, and thus past results often do not reflect the performances of the teams. In recent years, it has become increasingly clear that accounting for other match events such as shots at goal can provide a better indication of the relative strengths of two teams than the number of goals scored. Forecast models based on this information have been shown to be successful in outperforming those based purely on match results. A notable weakness, however, is that this approach does not take into account differences in the probability of shot success among teams. A team that is more likely to score from a shot will need fewer shots to win a match, on average. In this paper, we propose a simple parametric model to predict the probability of a team scoring, given it has taken a shot at goal. We show that the resulting forecasts are able to outperform a model assuming an equal probability of shot success among all teams. We then show that the model can be combined with predictions of the number of shots achieved by each team, and can increase the skill of forecasts of both the match outcome and of whether the total number of goals in a match will exceed 2.5. We assess the performance of the forecasts alongside two betting strategies and find mixed evidence for improved performance.
statistics
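A sketch of one simple parametric form such a model could take (this is my guess at a natural choice, not necessarily the paper's: logit of scoring probability = constant + shooting-team attack effect + opposing-team defence effect, fitted by logistic regression on synthetic shots):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(6)
n_teams, n_shots = 8, 5000

# Synthetic shots: scoring probability depends on who shoots and who defends.
attack = rng.normal(0, 0.4, n_teams)
defence = rng.normal(0, 0.3, n_teams)
shooter = rng.integers(0, n_teams, n_shots)
opponent = (shooter + rng.integers(1, n_teams, n_shots)) % n_teams
logit = -1.9 + attack[shooter] - defence[opponent]   # base rate ~1 goal per 8 shots
goal = rng.random(n_shots) < 1 / (1 + np.exp(-logit))

# Model: logit P(goal | shot) = c + alpha_shooter + beta_opponent.
X = OneHotEncoder(sparse_output=False).fit_transform(
        np.column_stack([shooter, opponent]))
clf = LogisticRegression(C=10.0).fit(X, goal)

# Team-specific model vs. equal shot-success probability for every team.
print("log loss, team model:", round(log_loss(goal, clf.predict_proba(X)[:, 1]), 4))
print("log loss, equal-prob:", round(log_loss(goal, np.full(n_shots, goal.mean())), 4))
```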
We revisited the young Large Magellanic Cloud star cluster NGC1971 with the aim of providing additional clues to our understanding of its observed extended Main Sequence turnoff (eMSTO), a feature commonly seen in young star clusters, which was recently argued to be caused by a real age spread similar to the cluster age (~160 Myr). We combined accurate Washington and Stromgren photometry of high membership probability stars to explore the nature of such an eMSTO. From different ad hoc defined pseudo colors we found that bluer and redder stars distributed throughout the eMSTO do not show any inhomogeneities of light and heavy-element abundances. These 'blue' and 'red' stars split into two clearly different groups only when the Washington $M$ magnitudes are employed, which delimits the number of spectral features responsible for the appearance of the eMSTO. We speculate that Be stars populate the eMSTO of NGC1971 because: i) Hbeta contributes to the M passband; ii) Hbeta emissions are common features of Be stars; and iii) Washington M and T1 magnitudes show a tight correlation, the latter measuring the observed contribution of the Halpha emission line in Be stars, which in turn correlates with Hbeta emissions. As far as we are aware, this is the first observational result pointing to Hbeta emissions as the origin of eMSTOs observed in young star clusters. The present outcome will certainly open new possibilities of studying eMSTOs with photometric systems whose passbands are centered at features commonly seen in Be stars.
astrophysics
Connected and automated vehicles (CAVs) provide the most intriguing opportunity to reduce pollution, energy consumption, and travel delays. In earlier work, we addressed the optimal coordination of CAVs using Hamiltonian analysis. In this paper, we investigate the nature of the unconstrained problem and provide conditions under which the state and control constraints become active. We derive a closed-form analytical solution of the constrained optimization problem and evaluate the solution using numerical simulation.
mathematics
We study ferromagnetism and its stability in twisted bilayer graphene. We work with a Hubbard-like interaction that corresponds to the screened Coulomb interaction in a well-defined limit where the Thomas-Fermi screening length $l_\text{TF}$ is much larger than monolayer graphene's lattice spacing, $l_g \ll l_\text{TF}$, and much smaller than the Moir\'e superlattice spacing, $l_\text{TF} \ll l_{\text{Moir\'e}}$. We show that in the perfectly flat band "chiral" limit and at filling fractions $\pm 3/4$, the saturated ferromagnetic (spin- and valley-polarized) states are ideal ground-state candidates in the large band-gap limit. By assuming a large enough substrate (hBN) induced sub-lattice potential, the same argument can be applied to filling fractions $\pm 1/4$. We estimate the regime of stability of the ferromagnetic phase around the chiral limit by studying the exactly calculated spectrum of one-magnon excitations. The instability of the ferromagnetic state is signaled by a negative magnon excitation energy. This approach allows us to deform the results of the idealized chiral model (by increasing the bandwidth and/or modifying interactions) towards more realistic systems. Furthermore, we use the low-energy part of the exact one-magnon spectrum to calculate the spin-stiffness of the Goldstone modes throughout the ferromagnetic phase. The calculated value of the spin-stiffness can determine the excitation energy of charged skyrmions.
condensed matter
We consider perturbations of the $4$-dimensional Reissner-Nordstr\"om spacetime induced by the probe scattering of a point particle with charge and mass moving on an unbound trajectory with an asymptotically large velocity. The resulting classical radiative solutions are the gravitational and electromagnetic bremsstrahlung. We use these classical solutions to derive the universal photon and graviton soft factor contributions at tree level, which have the same form as noted in the literature. The soft factor expressions enable us to investigate the tail contribution to the memory effect in late-time gravitational and electromagnetic waveforms. We find that, generically, the contribution from the charge dominates that from the mass in the late-time radiation.
high energy physics theory
We review in detail the Batalin-Vilkovisky formalism for Lagrangian field theories and its mathematical foundations, with an emphasis on higher algebraic structures and classical field theories. In particular, we show how a field theory gives rise to an $L_\infty$-algebra and how quasi-isomorphisms between $L_\infty$-algebras correspond to classical equivalences of field theories. A few experts may be familiar with parts of our discussion; however, the material is presented from the perspective of a very general notion of a gauge theory. We also make a number of new observations and present some new results. Most importantly, we discuss in great detail higher (categorified) Chern-Simons theories and give some useful shortcuts in usually rather involved computations.
high energy physics theory
Motivated by the classical expressions for the mean squared displacement and the velocity autocorrelation function of Brownian particles suspended either in a Newtonian viscous fluid or trapped in a harmonic potential, we show that at all time-scales the mean squared displacement of Brownian microspheres with mass $m$ and radius $R$ suspended in any linear, isotropic viscoelastic material is identical to the creep compliance of a linear mechanical network that is a parallel connection of the linear viscoelastic material with an inerter with distributed inertance, $m_R=\frac{m}{6\pi R}$. The synthesis of this mechanical network leads to the statement of a viscous-viscoelastic correspondence principle for Brownian motion, which simplifies appreciably the calculations of the mean squared displacement and the velocity autocorrelation function of Brownian particles suspended in viscoelastic materials where inertia effects are non-negligible at longer time-scales. The viscous-viscoelastic correspondence principle established in this paper by introducing the concept of the inerter is equivalent to the viscous-viscoelastic analogy adopted by Mason and Weitz (1995).
condensed matter
Air conditioning (AC) accounts for a critical portion of global energy consumption. To improve its energy performance, it is important to fairly benchmark AC energy performance and provide the evaluation feedback to users. However, this task has not been well tackled in the residential sector. In this paper, we propose a data-driven approach to fairly benchmark the AC energy performance of residential rooms. First, a regression model is built for each benchmarked room so that its power consumption can be predicted given different weather conditions and AC settings. Then, all the rooms are clustered based on their areas and usual AC temperature set points. Lastly, within each cluster, rooms are benchmarked based on their predicted power consumption under uniform weather conditions and AC settings. A real-world case study was conducted with data collected from 44 residential rooms. Results show that the constructed regression models have an average prediction accuracy of 85.1% in cross-validation tests, and that support vector regression with a Gaussian kernel is overall the most suitable model structure for building the regression model. In the clustering step, the 44 rooms are successfully grouped into seven clusters. By comparing the benchmarking scores generated by the proposed approach with two sets of scores computed from historical power consumption data, we demonstrate that the proposed approach is able to eliminate the influences of room areas, weather conditions, and AC settings on the benchmarking results. The proposed benchmarking approach is therefore valid and fair. As a by-product, the approach is also shown to be useful for investigating how room areas, weather conditions, and AC settings affect the AC power consumption of rooms in real life.
electrical engineering and systems science
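The three steps translate into very little code. Below is an illustrative sketch with synthetic rooms rather than the study's data; the feature set, SVR hyperparameters, cluster count, and the single "uniform condition" are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n_rooms, n_obs = 12, 300
rooms = []
for r in range(n_rooms):
    area = rng.uniform(10, 30)                    # m^2
    usual_sp = rng.uniform(22, 26)                # usual AC set point (C)
    T_out = rng.uniform(26, 36, n_obs)            # outdoor temperature (C)
    sp = usual_sp + rng.normal(0, 0.5, n_obs)
    eff = rng.uniform(0.8, 1.2)                   # hidden efficiency = what we benchmark
    power = eff * 0.02 * area * (T_out - sp) + rng.normal(0, 0.1, n_obs)   # kW
    # Step 1: per-room regression (SVR with Gaussian kernel, as in the paper).
    model = SVR(kernel="rbf", C=10.0).fit(np.column_stack([T_out, sp]), power)
    rooms.append((area, usual_sp, model))

# Step 2: cluster rooms on area and usual set point.
feats = np.array([[a, s] for a, s, _ in rooms])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)

# Step 3: benchmark within clusters under one uniform condition, so that
# weather and settings no longer influence the comparison.
uniform = np.array([[32.0, 24.0]])                # [outdoor temp, set point]
pred = np.array([m.predict(uniform)[0] for _, _, m in rooms])
for c in range(3):
    idx = np.where(labels == c)[0]
    print(f"cluster {c}, best-to-worst rooms:", list(idx[np.argsort(pred[idx])]))
```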
We investigate bound states in the continuum (BICs) in a planar dielectric waveguide structure consisting of a gold grating on a dielectric layer with a back layer of metal. In this structure, Friedrich-Wintgen (FW) BICs caused by the destructive interference between the radiations from two waveguide modes appear near the anti-crossing point of the dispersion curves. In this study, it is revealed that the branch at which the BIC appears changes according to the polarization of incident radiation. Based on a temporal coupled mode theory, it is shown that the BIC branch is determined by the sign of the product of the coupling coefficients between the two waveguide modes and external radiation, which is consistent with FW theory. The signs of the coupling coefficients are estimated by the waveguide-mode decomposition of the numerically obtained electric fields and are confirmed to vary depending on the polarization.
physics
We adopt a robust numerical continuation scheme to examine the global bifurcation of periodic traveling waves of the capillary-gravity Whitham equation, which combines the dispersion in the linear theory of capillary-gravity waves and a shallow water nonlinearity. We employ a highly accurate numerical method for space discretization and time stepping, to address orbital stability and instability for a rich variety of the solutions. Our findings can help classify capillary-gravity waves and understand their long-term dynamics.
physics
I present the tensor computer algebra package FieldsX, which extends the xAct suite of tensor algebra packages to perform computations in field theory with fermions and gauge fields. This includes the standard tools of curved-space $\gamma$ matrices, Fierz identities, invariant tensors on Lie algebras, arbitrary gradings and left and right variational derivatives, as well as the decomposition of spinor products into irreducible components following the approach of d'Auria, Fr\'e, Maina and Regge [Annals Phys. 139 (1982) 93]. Lastly, it also includes functions to work with nilpotent differentials such as the BV-BRST differential for (supersymmetric) gauge theories and to compute their (relative) cohomologies, from which anomalies and gauge-invariant operators can be determined. I illustrate the use of the package with the example of $\mathcal{N}=1$ Super-Yang--Mills theory.
high energy physics theory
I examine the fate of a kinetic Potts ferromagnet with a high ground-state degeneracy that undergoes a deep quench to zero temperature. I consider single spin-flip dynamics on triangular lattices of linear dimension $8 \le L \le 128$ and set the number of spin states $q$ equal to the number of lattice sites $L \times L$. The ground state is the most abundant final state, and is reached with probability $\approx 0.71$. Three-hexagon states occur with probability $\approx 0.26$, and hexagonal tessellations with more than three clusters form with probabilities of $\mathcal{O}(10^{-3})$ or less. Spanning stripe states -- where the domain walls run along one of the three lattice directions -- appear with probability $\approx 0.03$. "Blinker" configurations, which contain perpetually flippable spins, also emerge, but with a probability that vanishes as the system size grows.
physics
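A toy zero-temperature quench of the $q = L \times L$ Potts model on a triangular lattice (here embedded as a square grid with two added diagonal neighbors); the update rule is simplified for illustration and is not the paper's production code.

```python
# Toy zero-temperature single spin-flip quench (illustrative only).
import numpy as np

L = 16
rng = np.random.default_rng(1)
spins = rng.integers(0, L * L, size=(L, L))   # q equals the number of sites

# Six neighbors of the triangular lattice in this square-grid embedding.
NBRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (-1, -1)]

def sweep(spins):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        states = [spins[(i + di) % L, (j + dj) % L] for di, dj in NBRS]
        vals, counts = np.unique(states, return_counts=True)
        best = vals[counts == counts.max()]   # energy-minimizing neighbor states
        spins[i, j] = rng.choice(best)        # ties broken at random

for _ in range(200):
    sweep(spins)
print("clusters remaining:", len(np.unique(spins)))
```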
Anomalous metallic properties are often observed in the proximity of quantum critical points (QCPs), with violation of the Fermi Liquid paradigm. We propose a scenario where, due to the presence of a nearby QCP, dynamical fluctuations of the order parameter with finite correlation length mediate a nearly isotropic scattering among the quasiparticles over the entire Fermi surface. This scattering produces an anomalous metallic behavior, which is extended to the lowest temperatures by an increase of the damping of the fluctuations. We phenomenologically identify one single parameter ruling this increasing damping when the temperature decreases, accounting for both the linear-in-temperature resistivity and the seemingly divergent specific heat observed, e.g., in high-temperature superconducting cuprates and some heavy-fermion metals.
condensed matter
We describe the global geometry, symmetries and tensors for Double Field Theory over pairs of nilmanifolds with fluxes or gerbes. This is achieved by a rather straightforward application of a formalism we developed previously. This formalism constructs the analogue of a Courant algebroid over the correspondence space of a T-duality using the language of graded manifolds and derived brackets; we describe nilmanifolds in terms of periodicity conditions rather than local patches. The strong section condition arises purely algebraically, and we show that for a particularly symmetric solution of this condition, we recover the Courant algebroids of both nilmanifolds with fluxes. We also discuss the finite, global symmetries of general local Double Field Theory and explain how this specializes to the case of T-duality between nilmanifolds.
high energy physics theory
In the recent works [arXiv:1803.05809], [arXiv:1806.01842], the Halohedron emerged as the amplituhedron for 1-loop planar diagrams in the bi-adjoint massless $\phi^3$ theory. The Halohedron is a special case of a graph cubeahedron in which the considered graph is a cycle graph. In [arXiv:1906.06861], [arXiv:1501.07152], the authors provide a construction of the graph cubeahedron for any graph, and we use this construction to find a polytopal realization of the Halohedron. We show that the Halohedron we obtain is equivalent to the realization of the Halohedron proposed in `Big Kinematic Space' [arXiv:1806.01842].
high energy physics theory
Particle detectors record the interactions of subatomic particles and their passage through matter. The identification of these particles is necessary for in-depth physics analysis. While particles can be identified by their individual behavior as they travel through matter, the full context of the interaction in which they are produced can aid the classification task substantially. We have developed the first convolutional neural network for particle identification which uses context information. This is also the first implementation of a four-tower siamese-type architecture both for separation of independent inputs and inclusion of context information. The network classifies clusters of energy deposits from the NOvA neutrino detectors as electrons, muons, photons, pions, and protons with an overall efficiency and purity of 83.3% and 83.5%, respectively. We show that providing the network with context information improves performance by comparing our results with a network trained without context information.
physics
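A schematic of a four-tower siamese-type network: separate towers embed the cluster's detector views and the full-event context views before a joint classification head. Channel sizes, pooling, and the five-class head are illustrative guesses, not the NOvA network.

```python
# Schematic four-tower architecture for context-aware classification (toy sizes).
import torch
import torch.nn as nn

def tower():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten())

class FourTower(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        # Two towers for the cluster views, two for the event-context views.
        self.towers = nn.ModuleList(tower() for _ in range(4))
        self.head = nn.Linear(4 * 8 * 16, n_classes)

    def forward(self, views):   # views: list of 4 (batch, 1, H, W) tensors
        z = torch.cat([t(v) for t, v in zip(self.towers, views)], dim=1)
        return self.head(z)

net = FourTower()
views = [torch.randn(2, 1, 32, 32) for _ in range(4)]
print(net(views).shape)   # torch.Size([2, 5])
```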
The exotic states $X_{0,1}(2900)$ with the quark flavor $cs\bar{u}\bar{d}$ were recently observed in the $D^+K^-$ mass spectrum of $B^-\to D^-D^+K^-$ by the LHCb collaboration. To explore the nature of $X_{0,1}(2900)$, besides analyzing their masses and decay widths as is usually done in the literature, the study of their production mechanism in $B$-meson weak decays provides additional important information. The amplitude of $B^-\to D^- X_{0,1}$ is non-factorizable. We consider the final-state-interaction effects and calculate them via the rescattering mechanism. The measured branching fractions of $B^-\to D^- X_{0,1}$ are reproduced. As demonstrated by ${B}^-\to \Lambda_c^-\Xi_c^{(\prime)0}$ and $\Lambda_b^0\to P_c^+K^-$, the rescattering mechanism can result in relatively large branching fractions. The similar processes $B^-\to \pi^-X_{0,1}$ are also analyzed. The isospins of $X_{0,1}$ can be investigated via $B\to DX_{0,1}^{\pm,0}$ decays.
high energy physics phenomenology
Producing nano-structures with embedded bright ensembles of lifetime-limited emitters is a challenge with potential high impact in a broad range of physical sciences. In this work, we demonstrate controlled charge transfer to and from dark states exhibiting very long lifetimes in high density ensembles of SiV centers hosted in a CVD-grown diamond nano-pyramid. Further, using a combination of resonant photoluminescence excitation and a frequency-selective persistent hole burning technique that exploits such charge state transfer, we could demonstrate close to lifetime-limited linewidths from the SiV centers. Such a nanostructure with thousands of bright narrow linewidth emitters in a volume much below $\lambda^3$ will be useful for coherent light-matter coupling, for biological sensing, and nanoscale thermometry.
quantum physics
In this paper, we propose a novel supervised single-channel speech enhancement method combining Kullback-Leibler divergence-based non-negative matrix factorization (NMF) and a hidden Markov model (NMF-HMM). With the application of the HMM, the temporal dynamics of speech signals can be taken into account. In the training stage, a sum of Poisson distributions, which leads to the KL divergence measure, is used as the observation model for each state of the HMM. This ensures that a computationally efficient multiplicative update can be used for the parameter updates of the proposed model. In the online enhancement stage, we propose a novel minimum mean-square error (MMSE) estimator for the proposed NMF-HMM. This estimator can be implemented using parallel computing, reducing the computation time. The performance of the proposed algorithm is verified by objective measures. The experimental results show that the proposed strategy achieves better speech enhancement performance than state-of-the-art speech enhancement methods. More specifically, compared with traditional NMF-based speech enhancement methods, our proposed algorithm achieves a 5\% improvement in short-time objective intelligibility (STOI) and a 0.18 improvement in perceptual evaluation of speech quality (PESQ).
electrical engineering and systems science
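For concreteness, the KL-divergence multiplicative updates that underlie the training stage are sketched below for plain NMF (standard Lee-Seung form), without the HMM state structure; the spectrogram is a random stand-in.

```python
# Plain KL-NMF with Lee-Seung multiplicative updates (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(129, 400)))   # magnitude spectrogram (toy)
K = 32                                    # number of basis vectors
W = np.abs(rng.normal(size=(129, K)))
H = np.abs(rng.normal(size=(K, 400)))
eps = 1e-12

for _ in range(100):
    # H <- H * (W^T (V / WH)) / (W^T 1), then the symmetric update for W.
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)

kl = np.sum(V * np.log((V + eps) / (W @ H + eps)) - V + W @ H)
print("KL divergence after training:", kl)
```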
Continuous-variable quantum key distribution (CV-QKD) is realized with coherent detection and is therefore very suitable for a cost-efficient implementation. The major challenge in CV-QKD is the mitigation of laser phase noise at a signal-to-noise ratio far below 0 dB. So far, this has been achieved with a remote local oscillator or with auxiliary signals. For the first time, we experimentally demonstrate that CV-QKD can be performed with a real local oscillator and without auxiliary signals, which is achieved by applying machine learning methods. It is shown that, with the most established discrete modulation protocol, the experimental system works down to a quantum-channel signal-to-noise ratio of -19.1 dB. The performance of the experimental system allows CV-QKD at a key rate of 9.2 Mbit/s over a fiber distance of 26 km. After remote-local-oscillator and auxiliary-signal-aided CV-QKD, this could mark the starting point for a third generation of CV-QKD systems that are even more attractive for wide implementation because they are almost identical to standard coherent systems.
quantum physics
Using the volume proposal, we compute the change of complexity of holographic states caused by a small conformal transformation in AdS$_{3}$/CFT$_{2}$. This computation is done perturbatively to second order. We give a general result and discuss some of its properties. As operators generating such conformal transformations can be explicitly constructed in CFT terms, these results allow for a comparison between holographic methods of defining and computing computational complexity and purely field-theoretic proposals. A comparison of our results to one such proposal is given.
high energy physics theory
The coma of comet 67P/Churyumov-Gerasimenko has been probed by the Rosetta spacecraft and shows a variety of different molecules. The ROSINA COmet Pressure Sensor and the Double Focusing Mass Spectrometer provide in-situ densities for many volatile compounds, including the 14 gas species H2O, CO2, CO, H2S, O2, C2H6, CH3OH, H2CO, CH4, NH3, HCN, C2H5OH, OCS, and CS2. We fit the observed densities during the entire comet mission between August 2014 and September 2016 to an inverse coma model. We retrieve surface emissions on a cometary shape with 3996 triangular elements for 50 separate time intervals. For each gas we derive systematic error bounds and report the temporal evolution of the production, the peak production, and the time-integrated total production. We discuss the production for the two lobes of the nucleus and for the northern and southern hemispheres. Moreover, we provide a comparison of the gas production with the seasonal illumination.
astrophysics
This paper discusses the historical evidence for jets from the central black hole of the Galaxy in the 4th and 14th centuries. We suggest that the apparitions of a "lightning cross" during daytime recorded in 312, 351, and 1317 were caused by the line of two jets beamed back-to-back from the central black hole crossing the visible projection of the Galactic disc. All three historical accounts that record the flashing signs of a cross give a precise time and geographical location of these astronomical events (the vicinity of Rome, Jerusalem, and the vicinity of Moscow, respectively) and, most importantly, the position in the sky relative to the Sun. These positions coincide with the location of the Milky Way center in the sky at these specific places and dates. Therefore, it is logical to assume that the intersection of the jets with the lighted projection of the Galactic disc was the source of the cross visions in the Middle Ages. A fourth piece of evidence of crossing lights is found in a Russian chronicle under the year 1377. This event took place in the night sky near Moscow.
physics
Individual Treatment Effect (ITE) estimation is an extensively researched problem, with applications in various domains. We model the case where there exists heterogeneous non-compliance to a randomly assigned treatment, a typical situation in health (because of non-compliance with prescriptions) or digital advertising (because of competition and ad blockers, for instance). The lower the compliance, the more the signal of the effect of treatment prescription, or individual prescription effect (IPE), fades away and becomes hard to estimate. We propose a new approach for the estimation of the IPE that takes advantage of observed compliance information to prevent signal fading. Using the Structural Causal Model framework and do-calculus, we define a general mediated causal effect setting and propose a corresponding estimator which consistently recovers the IPE with asymptotic variance guarantees. Finally, we conduct experiments on both synthetic and real-world datasets that highlight the benefit of the approach, which consistently improves on the state of the art in low-compliance settings.
statistics
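An illustrative simulation of the mediated-effect idea: when prescription Z affects the outcome only through actual treatment T, the prescription effect factorizes into a compliance difference times a treatment effect. The plug-in estimator below assumes this exclusion restriction and correct model specification; it is a toy sketch, not the paper's estimator.

```python
# Toy mediated prescription-effect estimator (assumes no direct Z -> Y path).
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 20000
X = rng.normal(size=(n, 3))
Z = rng.integers(0, 2, n)                         # randomized prescription
p_comply = 1 / (1 + np.exp(-(X[:, 0] - 0.5)))     # heterogeneous compliance
T = (rng.random(n) < np.where(Z == 1, p_comply, 0.05)).astype(int)
Y = 2.0 * T + X[:, 1] + rng.normal(size=n)        # effect flows through T only

comp = LogisticRegression().fit(np.c_[X, Z], T)   # compliance model
out = LinearRegression().fit(np.c_[X, T], Y)      # outcome model

def ipe(x):
    """Plug-in IPE(x) = (compliance difference) * (treatment effect)."""
    x = np.atleast_2d(x)
    dT = (comp.predict_proba(np.c_[x, np.ones(len(x))])[:, 1]
          - comp.predict_proba(np.c_[x, np.zeros(len(x))])[:, 1])
    dY = (out.predict(np.c_[x, np.ones(len(x))])
          - out.predict(np.c_[x, np.zeros(len(x))]))
    return dT * dY

print(ipe(np.zeros(3)))   # approx (sigmoid(-0.5) - 0.05) * 2.0
```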
We perform global fits within Standard Model Effective Field Theory (SMEFT) combining top-quark pair production processes and decay with $b\rightarrow s$ flavor changing neutral current transitions and $Z \to b \bar b$ in three stages: using existing data from the LHC and $B$-factories, using projections for the HL-LHC and Belle II, and studying the additional new physics impact from a future lepton collider. The latter is ideally suited to directly probe $\ell^+\ell^-\rightarrow t\bar t$ transitions. We observe powerful synergies in combining both top and beauty observables as flat directions are removed and more operators can be probed. We find that a future lepton collider significantly enhances this interplay and qualitatively improves global SMEFT fits.
high energy physics phenomenology
For ensembles of Hamiltonians that fall under the Dyson classification of random matrices with $\beta \in \{1,2,4\}$, the low-temperature mean entropy can be shown to vanish as $\langle S(T)\rangle\sim \kappa T^{\beta+1}$. A similar relation holds for Altland-Zirnbauer ensembles. JT gravity has been shown to be dual to the double-scaling limit of a $\beta =2$ ensemble, with a classical eigenvalue density $\propto e^{S_0}\sqrt{E}$ when $0 < E \ll 1$. We use universal results about the distribution of the smallest eigenvalues in such ensembles to calculate $\kappa$ up to corrections that we argue are doubly exponentially small in $S_0$.
high energy physics theory
Learning a particular task from a dataset whose samples originate from diverse contexts is challenging, and is usually addressed by deepening or widening standard neural networks. As opposed to conventional network widening, multi-path architectures restrict the quadratic increment of complexity to a linear scale. However, existing multi-column/path networks or model ensembling methods do not consider any feature-dependent allocation of parallel resources, and therefore tend to learn redundant features. Given a layer in a multi-path network, if we restrict each path to learn a context-specific set of features and introduce a mechanism to intelligently allocate incoming feature maps to such paths, each path can specialize in a certain context, reducing the redundancy and improving the quality of extracted features. This eventually leads to better-optimized usage of parallel resources. To do this, we propose inserting feature-dependent cross-connections between parallel sets of feature maps in successive layers. The weighting coefficients of these cross-connections are computed from the input features of the particular layer. Our multi-path networks show improved image recognition accuracy at a similar complexity compared to conventional and state-of-the-art methods for deepening, widening and adaptive feature extraction, in both small and large scale datasets.
computer science
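A minimal PyTorch sketch of feature-dependent cross-connections between two parallel paths, with gate weights computed from the layer's input features; the two-path setting, gating head, and sizes are illustrative choices rather than the paper's exact design.

```python
# Feature-dependent cross-connections between parallel paths (toy sketch).
import torch
import torch.nn as nn

class CrossConnect(nn.Module):
    def __init__(self, channels, n_paths=2):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_paths))
        # Tiny gating head: pooled features -> (n_paths x n_paths) mixing weights.
        self.gate = nn.Linear(n_paths * channels, n_paths * n_paths)
        self.n_paths = n_paths

    def forward(self, xs):   # xs: list of per-path (B, C, H, W) tensors
        feats = [p(x) for p, x in zip(self.paths, xs)]
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in feats], dim=1)
        w = self.gate(pooled).view(-1, self.n_paths, self.n_paths).softmax(dim=1)
        # Route every path's maps to every path, weighted by the learned gates
        # (weights over incoming paths sum to one for each outgoing path).
        return [sum(w[:, i, j, None, None, None] * feats[i]
                    for i in range(self.n_paths)) for j in range(self.n_paths)]

layer = CrossConnect(16)
xs = [torch.randn(4, 16, 8, 8) for _ in range(2)]
print(layer(xs)[0].shape)   # torch.Size([4, 16, 8, 8])
```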
This article describes the development of a software package named GoldEnvSim for the simulation of the dispersion of radionuclides in the atmosphere. The software is written in Java using the JavaFX framework and couples the Weather Research and Forecasting (WRF) model with the FLEXPART-WRF model. Its highlight is a user-friendly interface that lets users run the full simulation workflow conveniently. Many toolkits for post-processing and visualizing output are also incorporated to make the software more comprehensive. In this first version, GoldEnvSim is specifically designed to analyze and predict the dispersion of radioactive materials in the atmosphere, but it has potential for further development and application to other fields of environmental science. As a demonstration, a simulation of the dispersion of Cs-137 hypothetically released from the Fangchenggang nuclear power plant over the whole territory of Vietnam was performed. The simulated meteorology was compared with monitoring data from a first-class meteorological observatory to evaluate the accuracy of the dispersion simulation.
physics
Astrophysical magnetic fields decay primarily via two processes, namely ambipolar diffusion and turbulence. Constraints on the strength and the spectral index of non-helical magnetic fields have been derived earlier in the literature through the effect of the above-mentioned processes on the Cosmic Microwave Background (CMB) radiation. A helical component of the magnetic field is also produced in various models of magnetogenesis, which can explain magnetic fields with larger coherence lengths. In this study, we focus on the effects of the post-recombination decay of maximally helical magnetic fields through ambipolar diffusion and decaying magnetic turbulence, and the impact of this decay on the CMB. We find that helical magnetic fields lead to changes in the evolution of the baryon temperature and ionization fraction, which in turn lead to modifications in the CMB temperature and polarization anisotropy. These modifications are different from those arising due to non-helical magnetic fields, with the changes dependent on the strength and the spectral index of the magnetic field power spectra.
astrophysics
Closed-loop control of turbulent flows is a challenging problem with important practical and fundamental implications. We perform closed-loop control of forced, turbulent jets based on a wave-cancellation strategy. The study is motivated by the success of recent studies in applying wave cancellation to control instability waves in transitional boundary layers and free-shear flows. Using a control law obtained through a system-identification technique, we successfully implement wave-cancellation-based, closed-loop control, achieving order-of-magnitude attenuations of velocity fluctuations. Control is shown to reduce fluctuation levels over an extensive streamwise range.
physics
A non-local hidden variables theory for non-relativistic quantum theory is presented, which gives a realist completion of quantum mechanics, in the sense of a complete description of individual events. The proposed fundamental theory is an extension of an energetic causal set theory, which assumes that time, events, causal structure, momentum and energy are fundamental, but space and the wave function are emergent. The beables of the theory are the views of the events, which are a subset of their causal pasts. Thus, this theory asserts that the universe is a causal network of events, which consists of partial views of itself as seen by looking backwards from each event. The fundamental dynamics is based on an action whose potential energy is proportional to the variety, which is a measure of the diversity of the views of the events, while the kinetic energy is proportional to its rate of change. The Schroedinger equation is derived to leading order in an expansion in the density of the events of the fundamental histories. To higher order, there are computable corrections, non-linear in the wave function, from which new physical effects may be predicted.
quantum physics
Quantum simulation promises to have wide applications in many fields where problems are hard to model with classical computers. Various quantum devices on different platforms have been built to tackle problems in, say, quantum chemistry, condensed matter physics, and high-energy physics. Here, we report an experiment towards the simulation of quantum gravity by simulating the holographic entanglement entropy. On a six-qubit nuclear magnetic resonance quantum simulator, we demonstrate a key result of the Anti-de Sitter/conformal field theory (AdS/CFT) correspondence: the Ryu-Takayanagi formula is demonstrated by measuring the relevant entanglement entropies on the perfect tensor state. The fidelity of our experimentally prepared six-qubit state is 85.0\% via full state tomography and reaches 93.7\% if the signal decay due to decoherence is taken into account. Our experiment serves as a basic module for simulating more complex tensor-network states exploring the AdS/CFT correspondence. As an initial experimental attempt to study AdS/CFT via quantum information processing, our work opens up new avenues for exploring quantum gravity phenomena on quantum simulators.
quantum physics
With the overwhelming popularity of Knowledge Graphs (KGs), researchers have long devoted attention to link prediction to fill in missing facts. However, they mainly focus on link prediction on binary relational data, where facts are usually represented as triples in the form (head entity, relation, tail entity). In practice, n-ary relational facts are also ubiquitous. When encountering such facts, existing studies usually decompose them into triples by introducing a multitude of auxiliary virtual entities and additional triples. These conversions result in complexity when carrying out link prediction on n-ary relational data. It has even been proven that they may cause loss of structural information. To overcome these problems, in this paper, we represent each n-ary relational fact as a set of its role and role-value pairs. We then propose a method called NaLP to conduct link prediction on n-ary relational data, which explicitly models the relatedness of all the role and role-value pairs in an n-ary relational fact. We further extend NaLP by introducing type constraints on roles and role-values without any external type-specific supervision, and by proposing a more reasonable negative sampling mechanism. Experimental results validate the effectiveness and merits of the proposed methods.
computer science
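A toy sketch of the role/role-value representation and a relatedness-style scorer in the spirit of NaLP; the embedding sizes, pooling choice, and scoring head are illustrative guesses, not the published architecture.

```python
# Scoring an n-ary fact given as role:role-value pairs (toy sketch).
import torch
import torch.nn as nn

class NaryScorer(nn.Module):
    def __init__(self, n_roles, n_values, dim=32):
        super().__init__()
        self.role_emb = nn.Embedding(n_roles, dim)
        self.val_emb = nn.Embedding(n_values, dim)
        self.pair = nn.Linear(2 * dim, dim)   # embed one role:value pair
        self.score = nn.Linear(dim, 1)

    def forward(self, roles, values):   # (batch, arity) index tensors
        p = torch.relu(self.pair(torch.cat(
            [self.role_emb(roles), self.val_emb(values)], dim=-1)))
        # Min-pooling over pairs: a fact is only as plausible as its
        # least-related pair (one simple aggregation choice).
        g, _ = p.min(dim=1)
        return self.score(g).squeeze(-1)

m = NaryScorer(n_roles=10, n_values=100)
roles = torch.tensor([[0, 3, 7]])    # e.g. {person, award, year}
values = torch.tensor([[5, 42, 99]])
print(m(roles, values))
```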
We report the discovery of the planet OGLE-2018-BLG-0532Lb, with very obvious signatures in the light curve that lead to an estimate of the planet-host mass ratio $q=M_{\rm planet}/M_{\rm host}\simeq 1\times10^{-4}$. Although there are no obvious systematic residuals to this double-lens/single-source (2L1S) fit, we find that $\chi^2$ can be significantly improved by adding either a third lens (3L1S, $\Delta\chi^2=81$) or second source (2L2S, $\Delta\chi^2=65$) to the lens-source geometry. After thorough investigation, we conclude that we cannot decisively distinguish between these two scenarios and therefore focus on the robustly-detected planet. However, given the possible presence of a second planet, we investigate to what degree and with what probability such additional planets may affect seemingly single-planet light curves. Our best estimates for the properties of the lens star and the secure planet are: a host mass $M\sim 0.25\,M_\odot$, system distance $D_L\sim 1\,$kpc and planet mass $m_{p,1}= 8\,M_\oplus$ with projected separation $a_{1,\perp}=1.4\,$au. However, there is a relatively bright $I=18.6$ (and also relatively blue) star projected within $<50\,$mas of the lens, and if future high-resolution images show that this is coincident with the lens, then it is possible that it is the lens, in which case, the lens would be both more massive and more distant than the best-estimated values above.
astrophysics
With the prevalence of the Internet, online reviews have become a valuable information resource for people. However, the authenticity of online reviews remains a concern, and deceptive reviews have become one of the most urgent network security problems to be solved. Review spam misleads users into making suboptimal choices and erodes their trust in online reviews. Most existing research manually extracts features and labels training samples, which is usually complicated and time-consuming. This paper focuses primarily on a neglected emerging domain - movie reviews - and develops a novel unsupervised spam detection model with an attention mechanism. By extracting the statistical features of reviews, it is revealed that users express their sentiments on different aspects of movies in reviews. An attention mechanism is introduced into the review embedding, and a conditional generative adversarial network is exploited to learn users' review styles for different genres of movies. The proposed model is evaluated on movie reviews crawled from Douban, a Chinese online community where people can express their feelings about movies. The experimental results demonstrate the superior performance of the proposed approach.
computer science
We present a bipartite two-level system coupled to electromagnetic quantum vacuum fluctuations through a general dipolar coupling. We derive the master equation in the framework of open quantum systems, assuming an environment composed of (i) solely vacuum fluctuations and (ii) vacuum fluctuations and a conducting plate located at a fixed distance from the bipartite system. For both cases considered, we study the dynamics of the bipartite system and the temporal evolution of the concurrence of an initial entangled bipartite state. We further analyze the generation of entanglement due to the vacuum structure. Finally, we study the different induced contributions to the correction of the unitary geometric phase of a bipartite quantum state so as to explore the possibility of future experimental setups by considering the influence of boundary conditions in vacuum.
quantum physics
Restricting the $\mathbb{Z}_2$-graded tensor product of Clifford algebras $C\ell_4\hat{\otimes}C\ell_6 $ to the particle subspace allows a natural definition of the Higgs field $\Phi$, the scalar part of Quillen's superconnection, as an element of $C\ell_4^1$. We emphasize the role of the exactly conserved weak hypercharge Y, promoted here to a superselection rule for both observables and gauge transformations. This yields a change of the definition of the particle subspace adopted in recent work with Michel Dubois-Violette \cite{DT20}; here we exclude the zero eigensubspace of Y consisting of the sterile (anti)neutrinos which are allowed to mix. One thus modifies the Lie superalgebra generated by the Higgs field. Equating the normalizations of $\Phi$ in the lepton and the quark subalgebras we obtain a relation between the masses of the W boson and the Higgs that fits the experimental values within one percent accuracy.
high energy physics phenomenology
The Jaynes-Cummings (JC) model represents one of the simplest ways in which single qubits can interact with single photon modes, leading to profound quantum phenomena like superpositions of light and matter states. One system that can be described with the JC model is a single quantum dot embedded in a micropillar cavity. In this joint experimental and theoretical study we investigate such a system using four-wave mixing (FWM) micro-spectroscopy. Special emphasis is laid on the dependence of the FWM signals on the number of photons injected into the microcavity. By comparing simulation and experiment, which are in excellent agreement with each other, we infer that up to ~20 photons take part in the observed FWM dynamics. Thus we verify the validity of the JC model for the system under consideration in this non-trivial regime. We find that the inevitable coupling between the quantum dot exciton and longitudinal acoustic phonons of the host lattice influences the real-time FWM dynamics and has to be taken into account for a sufficient description of the quantum dot-microcavity system. Performing additional simulations in an idealized dissipation-less regime, we observe that the FWM signal exhibits quasi-periodic dynamics, analogous to the collapse and revival phenomenon of the JC model. In these simulations we also see that the FWM spectrum has a triplet structure if a large number of photons is injected into the cavity.
condensed matter
We study a host of spacetimes where the Weyl curvature may be expressed algebraically in terms of an Abelian field strength. These include Type D spacetimes in four and higher dimensions which obey a simple quadratic relation between the field strength and the Weyl tensor, following the Weyl spinor double copy relation. However, we diverge from the usual double copy paradigm by taking the gauge fields to be in the curved spacetime as opposed to an auxiliary flat space. We show how for Gibbons-Hawking spacetimes with more than two centres a generalisation of the Weyl doubling formula is needed by including a derivative-dependent expression which is linear in the Abelian field strength. We also find a type of twisted doubling formula in a case of a manifold with Spin(7) holonomy in eight dimensions. For Einstein Maxwell theories where there is an independent gauge field defined on spacetime, we investigate how the gauge fields determine the Weyl spacetime curvature via a doubling formula. We first show that this occurs for the Reissner-Nordstrom metric in any dimension, and that this generalises to the electrically-charged Born-Infeld solutions. Finally, we consider brane systems in supergravity, showing that a similar doubling formula applies. This Weyl formula is based on the field strength of the p-form potential that minimally couples to the brane and the brane world volume Killing vectors.
high energy physics theory
The finite time, $\tau_{\rm dep}$, over which positrons from $\beta^{+}$ decays of $^{56}$Co deposit energy in type Ia supernova ejecta leads, in case the positrons are trapped, to a slower decay of the bolometric luminosity compared to an exponential decline. Significant light-curve flattening is obtained when the ejecta density drops below the value for which $\tau_{\rm dep}$ equals the $^{56}$Co life-time. We provide a simple method to accurately describe this "delayed deposition" effect, which is straightforward to use for the analysis of observed light curves. We find that the ejecta heating is dominated by delayed deposition typically from 600 to 1200~day, and only later by the decay of the longer-lived isotopes $^{57}$Co and $^{55}$Fe (assuming solar abundance). For the relatively narrow $^{56}$Ni velocity distributions of commonly studied explosion models, the modification of the light curve depends mainly on the $^{56}$Ni mass-weighted average density, $\langle \rho \rangle t^{3}$. Accurate late-time bolometric light curves, which may be obtained with JWST far-infrared (far-IR) measurements, will thus make it possible to discriminate between explosion models by determining $\langle \rho \rangle t^3$ (and the $^{57}$Co and $^{55}$Fe abundances). The flattening of light curves inferred from recent observations, which is uncertain due to the lack of far-IR data, is readily explained by delayed deposition in models with $\langle \rho\rangle t^{3} \approx 0.2\,M_{\odot}\,(10^{4}\, \textrm{km}\,\textrm{s}^{-1})^{-3}$, and does not imply supersolar $^{57}$Co and $^{55}$Fe abundances.
astrophysics
We propose a new approach to computing global minimizers of singular value functions in two real variables. Specifically, we present new algorithms to compute the Kreiss constant of a matrix and the distance to uncontrollability of a linear control system, both to arbitrary accuracy. Previous state-of-the-art methods for these two quantities rely on 2D level-set tests that are based on solving large eigenvalue problems. Consequently, these methods are costly, i.e., $\mathcal{O}(n^6)$ work using dense eigensolvers, and often multiple tests are needed before convergence. Divide-and-conquer techniques have been proposed that reduce the work complexity to $\mathcal{O}(n^4)$ on average and $\mathcal{O}(n^5)$ in the worst case, but these variants are nevertheless still very expensive and can be numerically unreliable. In contrast, our new interpolation-based globality certificates perform level-set tests by building interpolant approximations to certain one-variable continuous functions that are both relatively cheap and numerically robust to evaluate. Our new approach has a $\mathcal{O}(kn^3)$ work complexity and uses $\mathcal{O}(n^2)$ memory, where $k$ is the number of function evaluations necessary to build the interpolants. Not only is this interpolation process mostly "embarrassingly parallel," but also low-fidelity approximations typically suffice for all but the final interpolant, which must be built to high accuracy. Even without taking advantage of the aforementioned parallelism, $k$ is sufficiently small that our new approach is generally orders of magnitude faster than the previous state-of-the-art.
mathematics
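The paper's interpolation-based certificates are the efficient approach; purely as a point of reference, the objective itself can be evaluated by brute force. The sketch below bounds the Kreiss constant $K(A) = \sup_{\mathrm{Re}\,z > 0} \mathrm{Re}(z)\,\|(zI - A)^{-1}\|_2$ on a crude grid; the grid ranges and test matrix are arbitrary illustrative choices.

```python
# Brute-force lower bound on the (continuous-time) Kreiss constant.
import numpy as np

def kreiss_lower_bound(A, re=np.geomspace(1e-3, 10, 60),
                       im=np.linspace(-10, 10, 121)):
    n = A.shape[0]
    best = 0.0
    for x in re:
        for y in im:
            z = x + 1j * y
            s = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)
            # x * ||(zI - A)^{-1}||_2 = x / sigma_min(zI - A)
            best = max(best, x / s[-1])
    return best

A = np.array([[-0.1, 4.0], [0.0, -0.1]])   # stable but highly non-normal
print(kreiss_lower_bound(A))
```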
We examine the effect that the subtraction of multiple photons has on the statistical characteristics of a light field. In particular, we are interested in the question whether an initial state transforms into a lasing state, i.e.,~a (phase diffused) coherent state, after infinitely many photon subtractions. This question is discussed in terms of the Glauber P-representation $P(\alpha)$, the photon number distribution $P[n]$, and the experimentally relevant autocorrelation functions $g^{(m)}$. We show that a thermal state does not converge to a lasing state, although all of its autocorrelation functions at zero delay time converge to one. This contradiction is resolved by the analysis of the involved limits, and a general criterion for an initial state to reach at least such a pseudo-lasing state ($g^{(m)}\to 1$) is derived, revealing that they can be generated from a large class of initial states.
physics
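A numeric illustration of the thermal-state claim above: photon subtraction maps $P[n]$ to $(n+1)P[n+1]$ up to normalization, and $g^{(2)}(0)$ drifts from 2 toward 1 under repeated subtractions even though the state never becomes coherent. The truncation and mean photon number below are toy choices.

```python
# Repeated photon subtraction on a thermal state (truncated Fock space).
import numpy as np

N = 400                 # Fock-space truncation
nbar = 5.0
n = np.arange(N)
P = (nbar ** n) / (1 + nbar) ** (n + 1)   # thermal photon-number distribution

def g2(P):
    return np.sum(n * (n - 1) * P) / np.sum(n * P) ** 2

def subtract(P):
    Q = (n + 1) * np.append(P[1:], 0.0)   # P'[n] proportional to (n+1) P[n+1]
    return Q / Q.sum()

for m in range(8):
    print(m, "subtractions: g2 =", round(g2(P), 4))   # follows (m+2)/(m+1)
    P = subtract(P)
```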
We propose that the recently defined persistent homology dimensions are a practical tool for fractal dimension estimation of point samples. We implement an algorithm to estimate the persistent homology dimension, and compare its performance to classical methods to compute the correlation and box-counting dimensions in examples of self-similar fractals, chaotic attractors, and an empirical dataset. The performance of the $0$-dimensional persistent homology dimension is comparable to that of the correlation dimension, and better than box-counting.
mathematics
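A sketch of how the 0-dimensional persistent homology dimension can be estimated in practice, using the fact that the total PH0 persistence of a Euclidean point sample equals the total edge length of its minimum spanning tree, together with the Steele-type scaling $E(n) \sim n^{(d-1)/d}$; the sample choice and sizes are illustrative.

```python
# PH0 dimension estimate via minimum spanning tree length scaling.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(pts):
    return minimum_spanning_tree(squareform(pdist(pts))).sum()

rng = np.random.default_rng(0)
sizes = [200, 400, 800, 1600, 3200]
E = [mst_length(rng.random((n, 2))) for n in sizes]   # uniform in unit square

slope = np.polyfit(np.log(sizes), np.log(E), 1)[0]    # slope = (d - 1) / d
print("estimated dimension:", 1 / (1 - slope))        # ~2 for the square
```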
The analysis of the variability of active galactic nuclei (AGNs) at different wavelengths and the study of possible correlations among different spectral windows are nowadays a major field of inquiry. Optical variability has been largely used to identify AGNs in multivisit surveys. The strength of a selection based on optical variability lies in the chance to analyze data from surveys of large sky areas by ground-based telescopes. However the effectiveness of optical variability selection, with respect to other multiwavelength techniques, has been poorly studied down to the depth expected from next generation surveys. Here we present the results of our r-band analysis of a sample of 299 optically variable AGN candidates in the VST survey of the COSMOS field, counting 54 visits spread over three observing seasons spanning > 3 yr. This dataset is > 3 times larger in size than the one presented in our previous analysis (De Cicco et al. 2015), and the observing baseline is ~8 times longer. We push towards deeper magnitudes (r(AB) ~23.5 mag) compared to past studies; we make wide use of ancillary multiwavelength catalogs in order to confirm the nature of our AGN candidates, and constrain the accuracy of the method based on spectroscopic and photometric diagnostics. We also perform tests aimed at assessing the relevance of dense sampling in view of future wide-field surveys. We demonstrate that the method allows the selection of high-purity (> 86%) samples. We take advantage of the longer observing baseline to achieve great improvement in the completeness of our sample with respect to X-ray and spectroscopically confirmed samples of AGNs (59%, vs. ~15% in our previous work), as well as in the completeness of unobscured and obscured AGNs. The effectiveness of the method confirms the importance to develop future, more refined techniques for the automated analysis of larger datasets.
astrophysics
Inverter based renewable generation (RG), especially at the distribution level, is supposed to trip offline during an islanding situation. However, islanding detection is done by comparing the voltage and frequency measurements at the point of common coupling (PCC), with limits defined in the form of ride-through curves. Current practice is to use the same limit throughout the year independent of the operating conditions. This could result in the tripping of RG at times when the system is already weak, thereby posing a threat to voltage security by heavily limiting the load margin (LM). Conversely, heavily relaxing these limits would result in scenarios where the generation does not go offline even during an islanding situation. The proposed methodology focuses on optimizing low-voltage ride-through (LVRT) settings at selective RGs as a preventive control for maintaining a desired steady-state voltage stability margin while not sacrificing dependability during islanding. The proposed process is a multi-stage approach, in which at each stage, a subset of estimated poor-quality solutions is screened out based on various sensitivities. A full continuation power flow (CPFLOW) is only run at the beginning and in the last stage on a handful of remaining candidate solutions, thereby cutting down heavily on the computation time. The effectiveness of the approach is demonstrated on the IEEE 9-bus system.
electrical engineering and systems science
We report the Se substitution effects on the crystal structure, superconducting properties, and valence states of self-doped BiCh2-based compound CeOBiS2-xSex. Polycrystalline CeOBiS2-xSex samples with x = 0-1.0 were synthesized. For x = 0.4 and 0.6, bulk superconducting transitions with a large shielding volume fraction were observed in magnetic susceptibility measurements; the highest transition temperature (Tc) was 3.0 K for x = 0.6. A superconductivity phase diagram of CeOBiS2-xSex was established based on Tc estimated from the electrical resistivity and magnetization measurements. The emergence of superconductivity in CeOBiS2-xSex was explained with two essential parameters of in-plane chemical pressure and carrier concentration, which systematically changed with increasing Se concentration.
condensed matter
The state of supranuclear matter in compact stars remains puzzling, and it has been argued that pulsars could be strangeon stars. The consequences of merging double strangeon stars are worth exploring, especially in the new era of multi-messenger astronomy. This paper gives a first qualitative description of the evolution of the ejecta and finds that the ejecta could end up with two components. In the hot environment of the merger, the strangeon nuggets ejected by tidal disruption and hydrodynamical squeezing would suffer from evaporation, in which process particles such as strangeons, neutrons and protons are emitted. Taking into account both the evaporation of strangeon nuggets and the decay of strangeons, most of the ejected strangeon nuggets would turn into neutrons and protons within 10 ms, and the dependence of the evaporation rate on temperature leads to the two-component ejecta. Light curves are derived for both the high- and low-opacity components, where the former would be ejected from directions around the equatorial plane, and the latter would be ejected in a broad range of angular directions. Although the total ejected mass would be only $\sim 10^{-3} M_{\odot}$, the spin-down power of the long-lived remnant would account for the whole emission of the kilonova AT2017gfo associated with GW170817. The detailed picture of merging double strangeon stars is expected to be tested by future numerical simulations.
astrophysics
In many real-world planning problems with factored, mixed discrete and continuous state and action spaces such as Reservoir Control, Heating, Ventilation, and Air Conditioning, and Navigation domains, it is difficult to obtain a model of the complex nonlinear dynamics that govern state evolution. However, the ubiquity of modern sensors allows us to collect large quantities of data from each of these complex systems and build accurate, nonlinear deep neural network models of their state transitions. But there remains one major problem for the task of control -- how can we plan with deep network learned transition models without resorting to Monte Carlo Tree Search and other black-box transition model techniques that ignore model structure and do not easily extend to mixed discrete and continuous domains? In this paper, we introduce two types of nonlinear planning methods that can leverage deep neural network learned transition models: Hybrid Deep MILP Planner (HD-MILP-Plan) and Tensorflow Planner (TF-Plan). In HD-MILP-Plan, we make the critical observation that the Rectified Linear Unit transfer function for deep networks not only allows faster convergence of model learning, but also permits a direct compilation of the deep network transition model to a Mixed-Integer Linear Program encoding. Further, we identify deep network specific optimizations for HD-MILP-Plan that improve performance over a base encoding and show that we can plan optimally with respect to the learned deep networks. In TF-Plan, we take advantage of the efficiency of auto-differentiation tools and GPU-based computation where we encode a subclass of purely continuous planning problems as Recurrent Neural Networks and directly optimize the actions through backpropagation. We compare both planners and show that TF-Plan is able to approximate the optimal plans found by HD-MILP-Plan in less computation time...
computer science
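A minimal sketch in the spirit of TF-Plan: encode the planning problem as a differentiable rollout and optimize the action sequence directly by backpropagation. The dynamics function below is a toy stand-in for a learned neural transition model, and all constants are illustrative.

```python
# Gradient-based planning through a differentiable transition model (toy 1-D task).
import torch

horizon, goal = 20, torch.tensor(5.0)
actions = torch.zeros(horizon, requires_grad=True)
opt = torch.optim.Adam([actions], lr=0.1)

def dynamics(s, a):   # stand-in for a learned neural transition model
    return s + 0.5 * torch.tanh(a)

for step in range(500):
    s, cost = torch.tensor(0.0), 0.0
    for t in range(horizon):          # unroll the model like an RNN
        s = dynamics(s, actions[t])
        cost = cost + 0.01 * actions[t] ** 2   # action penalty
    cost = cost + (s - goal) ** 2              # terminal goal cost
    opt.zero_grad(); cost.backward(); opt.step()

print("final state:", s.item())   # should approach the goal of 5.0
```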
This paper proposes a learning-based model predictive control (MPC) approach for the thermal control of a four-zone smart building. The objectives are to minimize energy consumption and maintain the residents' comfort. The proposed control scheme incorporates learning with model-based control. The occupancy profiles in the building zones are estimated over a long-term horizon through an artificial neural network (ANN), and these data are fed into the model-based predictor to obtain the indoor temperature predictions. The EnergyPlus software is utilized as the provider of the actual dataset (weather data, indoor temperature, energy consumption). The optimization problem, including the actual and predicted data, is solved at each step of the simulation, and the input setpoint temperature for the heating/cooling system is generated. Comparing the results of the proposed approach with those of conventional MPC demonstrates the significantly better performance of the proposed method in energy savings (40.56% less cooling power consumption and 16.73% less heating power consumption) and residents' comfort.
electrical engineering and systems science
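A simplified sketch of the control loop described above: an ANN-style occupancy forecast feeds a model-based temperature predictor, and the setpoint sequence is optimized over the horizon. All models and constants here are toy stand-ins, not the paper's EnergyPlus-driven setup.

```python
# Toy learning-based MPC step: optimize setpoints over a finite horizon.
import numpy as np
from scipy.optimize import minimize

horizon = 12
# Stand-in for the ANN occupancy forecast over the horizon.
occupancy = (np.sin(np.linspace(0, np.pi, horizon)) > 0.3).astype(float)
T_out, T0, comfort = 30.0, 26.0, 23.0

def rollout(setpoints):
    T, energy, discomfort = T0, 0.0, 0.0
    for k in range(horizon):
        power = max(0.0, T - setpoints[k])     # toy cooling model
        T += 0.2 * (T_out - T) - 0.5 * power   # toy zone thermal dynamics
        energy += power
        discomfort += occupancy[k] * (T - comfort) ** 2
    return energy + 5.0 * discomfort           # weighted objective

res = minimize(rollout, np.full(horizon, 24.0),
               bounds=[(20.0, 28.0)] * horizon, method="L-BFGS-B")
print("optimal setpoints:", np.round(res.x, 1))
```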
Quantum statistics can be considered from the perspective of postquantum no-signaling theories in which either none or only a certain number of quantum systems are trusted. In these scenarios, the role of states is played by the so-called no-signaling boxes or no-signaling assemblages respectively. It has been shown so far that in the usual Bell non-locality scenario with a single measurement run, quantum statistics can never reproduce an extremal non-local point within the set of no-signaling boxes. We provide here a general no-go rule showing that the latter stays true even if arbitrary sequential measurements are allowed. On the other hand, we prove a positive result showing that already a single trusted qubit is enough for quantum theory to produce a self-testable extremal point within the corresponding set of no-signaling assemblages. This result opens up the possibility for security proofs of cryptographic protocols against general no-signaling adversaries.
quantum physics
Faraday and Kerr rotations are magnetooptical (MO) effects used for rotating the polarization of light in transmission and reflection from a magnetized medium, respectively. MO effects combined with intrinsically fast magnetization reversal, which can go down to a few tens of femtoseconds or less, can be applied in magnetooptical spatial light modulators (MOSLMs) promising for nonvolatile, ultrafast, and high-resolution spatial modulation of light. With the recent progress in low-power switching of magnetic and MO materials, MOSLMs may lead to major breakthroughs and benefit beyond state-of-the-art holography, data storage, optical communications, heads-up displays, virtual and augmented reality devices, and solid-state light detection and ranging (LIDAR). In this study, the recent developments in the growth, processing, and engineering of advanced materials with high MO figures of merit for practical MOSLM devices are reviewed. The challenges with MOSLM functionalities including the intrinsic weakness of MO effect and large power requirement for switching are assessed. The suggested solutions are evaluated, different driving systems are investigated, and resulting device architectures are benchmarked. Finally, the research opportunities on MOSLMs for achieving integrated, high-contrast, and low-power devices are presented.
physics
We show that a form of strong simulation for $n$-qubit quantum stabilizer circuits $C$ is computable in $O(s + n^\omega)$ time, where $\omega$ is the exponent of matrix multiplication. Solution counting for quadratic forms over $\mathbb{F}_2$ is also placed into $O(n^\omega)$ time. This improves previous $O(n^3)$ bounds. Our methods in fact show an $O(n^2)$-time reduction from matrix rank over $\mathbb{F}_2$ to computing $p = |\langle \; 0^n \;|\; C \;|\; 0^n \;\rangle|^2$ (hence also to solution counting) and a converse reduction that is $O(s + n^2)$ except for matrix multiplications used to decide whether $p > 0$. The current best-known worst-case time for matrix rank is $O(n^{\omega})$ over $\mathbb{F}_2$, indeed over any field, while $\omega$ is currently upper-bounded by $2.3728\dots$ Our methods draw on properties of classical quadratic forms over $\mathbb{Z}_4$. We study possible distributions of Feynman paths in the circuits and prove that the differences in $+1$ vs. $-1$ counts and $+i$ vs. $-i$ counts are always $0$ or a power of $2$. Further properties of quantum graph states and connections to graph theory are discussed.
computer science
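The classical endpoint of the reduction described above is matrix rank over $\mathbb{F}_2$; a plain Gaussian-elimination sketch ($O(n^3)$, rather than the $O(n^{\omega})$ fast-multiplication variant) is given below for concreteness.

```python
# Matrix rank over F_2 by Gaussian elimination with XOR row operations.
import numpy as np

def rank_gf2(M):
    M = M.copy() % 2
    rank, rows, cols = 0, *M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]               # eliminate column c (mod 2)
        rank += 1
    return rank

M = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=np.uint8)
print(rank_gf2(M))   # 2, since row 3 = row 1 + row 2 over F_2
```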
Automated game design is the problem of automatically producing games through computational processes. Traditionally, these methods have relied on the authoring of search spaces by a designer, defining the space of all possible games for the system to author. In this paper, we instead learn representations of existing games from gameplay video and use these to approximate a search space of novel games. In a human subject study we demonstrate that these novel games are indistinguishable from human games in terms of challenge, and that one of the novel games was equivalent to one of the human games in terms of fun, frustration, and likeability.
computer science
Recent years have seen new general notions of contextuality emerge. Most of these employ context-independent symbols to represent random variables in different contexts. As an example, the operational theory of Spekkens [1] treats an observable being measured in two different contexts identically. Non-contextuality in this approach is the impossibility of drawing ontological distinctions between identical elements of the operational theory. However, a recent collection of work seeks to exploit context-dependent symbols of random variables to interpret contextuality [2, 3]. This approach associates contextuality with the possibility of imposing a particular joint distribution on random variables recorded under different experimental contexts. This paper compares these two different treatments of random variables and highlights the limitations of the context-dependent approach as a physical theory.
quantum physics
We demonstrate an optomechanical platform where optical mode conversion mediated by mechanical motion enables arbitrary tailoring of polarization states of propagating light fields. Optomechanical interactions are realized in a Fabry-P\'erot resonator, which naturally supports two polarization-degenerate states while an optical control field induces rotational symmetry breaking. Applying such principles, the entire Poincar\'e sphere is spanned by just optical control of the driving field, realizing reciprocal and non-reciprocal optomechanically-induced birefringence for linearly polarized and circularly polarized control driving. A straightforward extension of this setup also enables all-optical tunable isolation and circulation. Our findings open new avenues to exploit optomechanics for arbitrary manipulation of light polarization.
physics
The renormalization of theories with flavor mixing is discussed, and it is shown that the physical unstable particles should be interpreted as quasiparticles which cannot be regarded as external states. Several popular beliefs on renormalization are disproved accordingly, and the limitations of physical renormalization schemes are discussed. In addition, the properties of unstable particles with flavor mixing such as decay widths are studied from scattering mediated by them.
high energy physics phenomenology
Quasars have long been known as intrinsically variable sources, but the physical mechanism underlying the temporal optical/UV variability is still not well understood. We propose a novel nonparametric method for modeling and forecasting the optical variability of quasars utilizing an autoencoder neural network to gain insight into the underlying processes. The autoencoder is trained with ~15,000 decade-long quasar light curves obtained by the Catalina Real-time Transient Survey selected with negligible flux contamination from the host galaxy. The autoencoder's performance in forecasting the temporal flux variation of quasars is superior to that of the damped random walk process. We find a temporal asymmetry in the optical variability and a novel relation - the amplitude of the variability asymmetry decreases as luminosity and/or black hole mass increases - is suggested with the help of autoencoded features. The characteristics of the variability asymmetry are in agreement with those from the self-organized disk instability model, which predicts that the magnitude of the variability asymmetry decreases as the ratio of the diffusion mass to inflow mass in the accretion disk increases.
astrophysics
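A compact sketch of the forecasting setup: a small autoencoder-style network maps a window of a light curve to its continuation through a low-dimensional bottleneck. The architecture, window sizes, and the toy random-walk light curves are illustrative stand-ins, not the study's network or the CRTS data.

```python
# Autoencoder-style forecasting of light-curve continuations (toy data).
import torch
import torch.nn as nn

win_in, win_out = 64, 16
model = nn.Sequential(                    # encoder -> bottleneck -> decoder
    nn.Linear(win_in, 32), nn.ReLU(),
    nn.Linear(32, 8), nn.ReLU(),          # low-dim "autoencoded features"
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, win_out))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy stochastic light curves standing in for the quasar sample.
curves = torch.cumsum(0.1 * torch.randn(512, win_in + win_out), dim=1)

for epoch in range(200):
    pred = model(curves[:, :win_in])      # forecast the next win_out points
    loss = ((pred - curves[:, win_in:]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("forecast MSE:", loss.item())
```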
In this paper, three-dimensional relativistic hydrodynamic simulations of AGN jets are presented to investigate the FR I/FR II dichotomy. Three simulations are presented, which illustrate the difference in morphology for high/low Lorentz factor injection as well as a stratified background medium. Lorentz factors of 10 and 1.0014 were used for the high and low Lorentz factor cases, respectively. The hydrodynamic simulations show a division in the morphology of jets based on their initial injection luminosity. An additional simulation was set up to investigate the evolution of the low Lorentz factor jet if the mass injection was lowered after a certain time. A synchrotron emission model was applied to these simulations to produce intensity maps at radio frequencies (1.5 GHz), which were compared to the observed emission structures of FR I/FR II radio galaxies. The effect of Doppler boosting on the intensity maps was also investigated for different polar angles. The intensity maps of both the high and low Lorentz factor cases reproduced emission structures that resemble those of FR II type radio galaxies, with a dominant cocoon region containing time-dependent hot spots and filaments. An FR I like structure was, however, produced for the low Lorentz factor case if the mass injection rate was lowered after a set time period.
astrophysics
One of the central problems in the study of quantum resource theories is to provide a given resource with an operational meaning, characterizing physical tasks in which the resource can give an explicit advantage over all resourceless states. We show that this can always be accomplished for all convex resource theories. We establish in particular that any resource state enables an advantage in a channel discrimination task, allowing for a strictly greater success probability than any state without the given resource. Furthermore, we find that the generalized robustness measure serves as an exact quantifier for the maximal advantage enabled by the given resource state in a class of subchannel discrimination problems, providing a universal operational interpretation to this fundamental resource quantifier. We also consider a wider range of subchannel discrimination tasks and show that the generalized robustness still serves as the operational advantage quantifier for several well-known theories such as entanglement, coherence, and magic.
quantum physics