We prove the validity of an inequality involving a mean of the area and the length of the boundary of immersed disks whose boundaries are homotopically non-trivial curves in an oriented compact manifold which has convex mean curvature boundary, positive scalar curvature, and admits a map to $\mathbb{D}^2\times T^{n}$ with nonzero degree, where $\mathbb{D}^2$ is a disk and $T^n$ is an $n$-dimensional torus. We also prove a rigidity result for the equality case when the boundary is totally geodesic. This can be viewed as a partial generalization of a result due to Lucas Ambr\'ozio in \cite{AMB} to higher dimensions.
mathematics
In this paper, we describe all finite Wajsberg algebras of order $n \leq 9$.
mathematics
M-theory is known to possess supersymmetric solutions where the geometry is $\mathrm{AdS}_3\times S^3\times S^3$ warped over a Riemann surface $\Sigma_{2}$. The simplest examples in this class can be engineered by placing M2 and M5 branes as defects inside of a stack of background M5 branes. In this paper we show that a generalization of this construction yields more general solutions in the aforementioned class. The background branes are now M5's carrying M2 brane charge, while the defect branes are now placed at the origin of a flat hyperplane with a conical defect. The equations of motion imply a relation between the deficit angle produced by the conical defect and the M2 charge carried by the background branes.
high energy physics theory
A weakly supervised learning framework named complementary-label learning has been proposed recently, where each sample is equipped with a single complementary label that denotes one of the classes the sample does not belong to. However, the existing complementary-label learning methods cannot learn from the easily accessible unlabeled samples and samples with multiple complementary labels, which are more informative. In this paper, to remove these limitations, we propose a novel multi-complementary and unlabeled learning framework that allows unbiased estimation of the classification risk from samples with any number of complementary labels and unlabeled samples, for arbitrary loss functions and models. We first give an unbiased estimator of the classification risk from samples with multiple complementary labels, and then further improve the estimator by incorporating unlabeled samples into the risk formulation. The estimation error bounds show that the proposed methods attain the optimal parametric convergence rate. Finally, experiments on both linear and deep models show the effectiveness of our methods.
statistics
We study the partial breaking of ${\cal N}=2$ global supersymmetry, using a novel formalism that allows for the off-shell nonlinear realization of the broken supersymmetry, extending previous results scattered in the literature. We focus on the Goldstone degrees of freedom of a massive ${\cal N}=1$ gravitino multiplet which are described by deformed ${\cal N}=2$ vector and single-tensor superfields satisfying nilpotent constraints. We derive the corresponding actions and study the interactions of the superfields involved, as well as constraints describing incomplete ${\cal N}=2$ matter multiplets of non-linear supersymmetry (vectors and single-tensors).
high energy physics theory
Semiclassical electrodynamics is an appealing approach for studying light-matter interactions, especially for realistic molecular systems. However, there is no unique semiclassical scheme. On the one hand, intermolecular interactions can be described instantaneously by static two-body interactions connecting different molecules plus a classical transverse E-field; we will call this Hamiltonian #I. On the other hand, intermolecular interactions can also be described as effects that are mediated exclusively through a classical one-body E-field without any quantum effects at all (assuming we ignore electronic exchange); we will call this Hamiltonian #II. Moreover, one can also mix these two Hamiltonians into a third, hybrid Hamiltonian, which preserves quantum electron-electron correlations for lower excitations but describes higher excitations in a mean-field way. To investigate which semiclassical scheme is most reliable for practical use, here we study the real-time dynamics of a pair of identical two-level systems (TLSs) undergoing either resonance energy transfer (RET) or collectively driven dynamics. While all approaches perform reasonably well when there is no strong external excitation, we find that no single approach is perfect for all conditions. Each method has its own distinct problems: Hamiltonian #I performs best for RET but behaves in a complicated manner for driven dynamics. Hamiltonian #II is always stable, but obviously fails for RET at short distances. One key finding is that, under external driving, a full configuration interaction description of Hamiltonian #I strongly overestimates the long-time electronic energy, highlighting the non-obvious fact that, if one plans to merge quantum molecules with classical light, a full, exact treatment of electron-electron correlations can actually lead to worse results than a simple mean-field treatment.
physics
The advent of learning-based methods in speech enhancement has revived the need for robust and reliable training features that can compactly represent speech signals while preserving their vital information. Time-frequency domain features, such as the Short-Term Fourier Transform (STFT) and Mel-Frequency Cepstral Coefficients (MFCC), are preferred in many approaches. While the MFCC provide a compact representation, they ignore the dynamics and distribution of energy in each mel-scale subband. In this work, a speech enhancement system based on a Generative Adversarial Network (GAN) is implemented and tested with a combination of Audio FingerPrinting (AFP) features obtained from the MFCC and the Normalized Spectral Subband Centroids (NSSC). The NSSC capture the locations of speech formants and complement the MFCC in a crucial way. In experiments with diverse speakers and noise types, GAN-based speech enhancement with the proposed AFP feature combination achieves the best objective performance while reducing memory requirements and training time.
electrical engineering and systems science
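To make the AFP feature combination above more concrete, here is a minimal illustrative sketch (ours, not the authors' implementation) of extracting MFCC together with normalized spectral subband centroids using librosa; the mel-spaced band layout, frame parameters, and normalization by the Nyquist frequency are assumptions.

import numpy as np
import librosa

def afp_features(y, sr, n_mfcc=13, n_bands=13, n_fft=512, hop=128):
    # Standard MFCCs: shape (n_mfcc, n_frames)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop)

    # Power spectrogram and mel-spaced subband weights
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2    # (n_freq, n_frames)
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)              # (n_freq,)
    fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_bands)     # (n_bands, n_freq)

    # Subband centroids: energy-weighted mean frequency per band,
    # normalized by the Nyquist frequency so values lie in [0, 1] (our convention).
    num = fb @ (freqs[:, None] * S)
    den = fb @ S + 1e-10
    nssc = (num / den) / (sr / 2)

    return np.vstack([mfcc, nssc])    # (n_mfcc + n_bands, n_frames) feature matrix

# Usage (hypothetical file name):
# y, sr = librosa.load("speech.wav", sr=16000)
# feats = afp_features(y, sr)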
We make a theoretical and experimental summary of the state-of-the-art status of hot and dense QCD matter studies on selected topics. We review the Beam Energy Scan program for the QCD phase diagram and present the current status of the search for the QCD critical point, particle production in the high-baryon-density region, hypernuclei production, and global polarization effects in nucleus-nucleus collisions. The available experimental data in the strangeness sector suggests that a grand canonical approach in the thermal model at high collision energy makes a transition to canonical ensemble behavior at low energy. We further discuss future prospects of nuclear collisions to probe properties of baryon-rich matter. Creation of a quark-gluon plasma at high temperature and low baryon density has been called the "Little-Bang" and, analogously, a femtometer-scale explosion of baryon-rich matter at lower collision energy could be called the "Femto-Nova", which may possibly sustain substantial vorticity and magnetic field for non-head-on collisions.
high energy physics phenomenology
The magnetism of $S$ = 1 Heisenberg antiferromagnets on the spatially anisotropic square lattice has scarcely been explored. Here we report a study of the magnetism, specific heat, and thermal conductivity of Ni[SC(NH$_2$)$_2$]$_6$Br$_2$ (DHN) single crystals. Ni$^{2+}$ ions form an $S$ = 1 rectangular lattice in the $bc$ plane, which can be viewed as an unfrustrated spatially anisotropic square lattice. A long-range antiferromagnetic order develops at $T_{\rm N} = 2.23$ K. Below $T_{\rm N}$, an upturn is observed in the $b$-axis magnetic susceptibility and the resultant minimum might be an indication of $XY$ anisotropy in the ordered state. A gapped spin-wave dispersion is confirmed from the temperature dependence of the magnetic specific heat. Anisotropic temperature-field phase diagrams are mapped out and possible magnetic structures are proposed.
condensed matter
In this study we focus on the prediction of basketball games in the Euroleague competition using machine learning modelling. The prediction is a binary classification problem, predicting whether a match finishes 1 (home win) or 2 (away win). Data is collected from the Euroleague's official website for the seasons 2016-2017, 2017-2018 and 2018-2019, i.e. in the new format era. Features are extracted from matches' data and off-the-shelf supervised machine learning techniques are applied. We calibrate and validate our models. We find that simple machine learning models give accuracy not greater than 67% on the test set, worse than some sophisticated benchmark models. Additionally, the importance of this study lies in the "wisdom of the basketball crowd" and we demonstrate how the predicting power of a collective group of basketball enthusiasts can outperform machine learning models discussed in this study. We argue why the accuracy level of this group of "experts" should be set as the benchmark for future studies in the prediction of (European) basketball games using machine learning.
statistics
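As a rough sketch of the calibrated, off-the-shelf modelling pipeline described above (ours; the synthetic features stand in for the match statistics actually engineered from Euroleague data):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))      # placeholder features, e.g. rolling team form, ratings, rest days
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.5, size=600) > 0).astype(int)  # 1 = home win

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Calibrate the classifier so that predicted probabilities are interpretable,
# then validate on a held-out test set.
clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), method="isotonic", cv=5)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("accuracy   :", round(accuracy_score(y_te, clf.predict(X_te)), 3))
print("Brier score:", round(brier_score_loss(y_te, proba), 3))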
The availability of data sets with large numbers of variables is rapidly increasing. The effective application of Bayesian variable selection methods for regression with these data sets has proved difficult since available Markov chain Monte Carlo methods do not perform well in typical problem sizes of interest. The current paper proposes new adaptive Markov chain Monte Carlo algorithms to address this shortcoming. The adaptive design of these algorithms exploits the observation that in large $p$ small $n$ settings, the majority of the $p$ variables will be approximately uncorrelated a posteriori. The algorithms adaptively build suitable non-local proposals that result in moves with squared jumping distance significantly larger than standard methods. Their performance is studied empirically in high-dimensional problems (with both simulated and actual data) and speedups of up to 4 orders of magnitude are observed. The proposed algorithms are easily implementable on multi-core architectures and are well suited for parallel tempering or sequential Monte Carlo implementations.
statistics
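The following is a minimal toy sketch of adaptive MCMC over variable-inclusion indicators, in the spirit of the abstract above but not the authors' algorithm: a BIC-style score stands in for the marginal likelihood, and an independent Bernoulli proposal adapts per-variable inclusion probabilities from the chain history.

import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 2.0
y = X @ beta + rng.standard_normal(n)

def log_score(gamma):
    # BIC-like stand-in for the log marginal likelihood of model gamma.
    k = int(gamma.sum())
    if k == 0:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        Xg = X[:, gamma.astype(bool)]
        coef, _, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        rss = np.sum((y - Xg @ coef) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

gamma = np.zeros(p, dtype=int)
pi = np.full(p, 0.1)                 # adaptive per-variable proposal probabilities
cur = log_score(gamma)
inclusion_counts = np.zeros(p)

n_iter = 5000
for t in range(1, n_iter + 1):
    prop = (rng.random(p) < pi).astype(int)          # independent proposal
    # Log proposal densities q(prop) and q(gamma) under the current pi
    lq_prop = np.sum(np.where(prop == 1, np.log(pi), np.log1p(-pi)))
    lq_cur = np.sum(np.where(gamma == 1, np.log(pi), np.log1p(-pi)))
    new = log_score(prop)
    if np.log(rng.random()) < (new - cur) + (lq_cur - lq_prop):
        gamma, cur = prop, new
    inclusion_counts += gamma
    # Adapt: track inclusion frequencies, clipped away from 0 and 1.
    pi = np.clip(inclusion_counts / t, 0.01, 0.99)

print("estimated inclusion probabilities (first 10):",
      np.round(inclusion_counts / n_iter, 2)[:10])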
We study the structure of the non-perturbative free energy of a one-parameter class of little string theories (LSTs) of A-type in the so-called unrefined limit. These theories are engineered by $N$ M5-branes probing a transverse flat space. By analysing a number of examples, we observe a pattern which suggests writing the free energy in a fashion that resembles a decomposition into higher-point functions which can be presented in a graphical way reminiscent of sums of (effective) Feynman diagrams: to leading order in the instanton parameter of the LST, the $N$ external states are given either by the fundamental building blocks of the theory with $N=1$, or the function that governs the counting of BPS states of a single M5-brane coupling to one M2-brane on either side. These states are connected via an effective coupling function which encodes the details of the gauge algebra of the LST and which in its simplest (non-trivial) form is captured by the scalar Green's function on the torus. More complicated incarnations of this function show certain similarities with so-called modular graph functions, which have appeared in the study of Feynman amplitudes in string and field theory. Finally, similar structures continue to exist at higher instanton orders, which, however, also contain contributions that can be understood as the action of (Hecke) operators on the leading instanton result.
high energy physics theory
We consider the electronic spectrum near $M=(\pi,\pi)$ in the nematic phase of FeSe ($T<T_{{\rm nem}}$) and make a detailed comparison with recent ARPES and STM experiments. Our main focus is the unexpected temperature dependence of the excitations at the $M$ point. These have been identified as having $xz$ and $yz$ orbital character well below $T_{{\rm nem}}$, but remain split at $T>T_{{\rm nem}}$, in apparent contradiction to the fact that in the tetragonal phase the $xz$ and $yz$ orbitals are degenerate. Here we present two scenarios which can describe the data. In both scenarios, hybridization terms present in the tetragonal phase lead to an orbital transmutation, a change in the dominant orbital character of some of the bands, between $T > T_{\rm nem}$ and $T \ll T_{\rm nem}$. The first scenario relies on the spin-orbit coupling at the $M$ point. We show that a finite spin-orbit coupling gives rise to orbital transmutation, in which one of the modes, identified as $xz$ ($yz$) at $T \ll T_{{\rm nem}}$, becomes predominantly $xy$ at $T > T_{{\rm nem}}$ and hence does not merge with the predominantly $yz$ ($xz$) mode. The second scenario, complementary to the first, takes into consideration the fact that both ARPES and STM are surface probes. In the bulk, a direct hybridization between the $xz$ and $yz$ orbitals is not allowed at the $M$ point; however, it is permitted on the surface. In the presence of a direct $xz/yz$ hybridization, the orbital character of the $xz/yz$ modes changes from pure $xz$ and pure $yz$ at $T \ll T_{{\rm nem}}$ to $xz \pm yz$ at $T > T_{{\rm nem}}$, i.e., the two modes again have mono-orbital character at low $T$, but do not merge at $T_{{\rm nem}}$. We discuss how these scenarios can be distinguished in polarized ARPES experiments.
condensed matter
Large tensor (multi-dimensional array) data are now routinely collected in a wide range of applications, due to modern data collection capabilities. Often such observations are taken over time, forming tensor time series. In this paper we present a factor model approach for analyzing high-dimensional dynamic tensor time series and multi-category dynamic transport networks. Two estimation procedures along with their theoretical properties and simulation results are presented. Two applications are used to illustrate the model and its interpretations.
statistics
We demonstrate that adding a conical deficit to a black hole holographic heat engine increases its efficiency; in contrast, allowing a black hole to accelerate {\it decreases} efficiency if the same average conical deficit is maintained. Adding other charges to the black hole does not change this qualitative effect. We also present a simple formula to calculate the efficiency of elliptical cycles for any $C_V\neq 0$ black hole, which allows a more efficient numerical algorithm for computation.
high energy physics theory
The BV formalism is proposed for the theories where the gauge symmetry parameters are unfree, being constrained by differential equations.
high energy physics theory
The dynamics of symbolic systems, such as multidimensional subshifts of finite type or cellular automata, are known to be closely related to computability theory. In particular, the appropriate tools to describe and classify topological entropy for this kind of system turned out to be of a computational nature. Part of the great importance of these symbolic systems relies on the role they have played in understanding more general systems over non-symbolic spaces. The aim of this article is to investigate topological entropy from a computability point of view in this more general, not necessarily symbolic setting. In analogy to effective subshifts, we consider computable maps over effective compact sets in general metric spaces, and study the computability properties of their topological entropies. We show that even in this general setting, the entropy is always a $\Sigma_2$-computable number. We then study how various dynamical and analytical constraints affect this upper bound, and prove that it can be lowered in different ways depending on the constraint considered. In particular, we obtain that all $\Sigma_2$-computable numbers can already be realized within the class of surjective computable maps over $\{0,1\}^{\mathbb{N}}$, but that this bound decreases to $\Pi_{1}$ (or upper)-computable numbers when restricted to expansive maps. On the other hand, if we change the geometry of the ambient space from the symbolic $\{0,1\}^{\mathbb{N}}$ to the unit interval $[0,1]$, then we find a quite different situation -- we show that the possible entropies of computable systems over $[0,1]$ are exactly the $\Sigma_{1}$ (or lower)-computable numbers and that this characterization switches down to precisely the computable numbers when we restrict the class of systems to the quadratic family.
mathematics
When developing AI systems that interact with humans, it is essential to design both a system that can understand humans, and a system that humans can understand. Most deep network based agent-modeling approaches are 1) not interpretable and 2) only model external behavior, ignoring internal mental states, which potentially limits their capability for assistance, interventions, discovering false beliefs, etc. To this end, we develop an interpretable modular neural framework for modeling the intentions of other observed entities. We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft, and show that incorporating interpretability can significantly increase predictive performance under the right conditions.
computer science
Spatio-temporal deterministic chaos at small Taylor-Reynolds numbers $Re_{\lambda} \lesssim 40$ and distributed chaos at turbulent $Re_{\lambda} \gtrsim 40$ in passive scalar dynamics have been studied using results of direct numerical simulations of homogeneous incompressible flows (with and without a mean gradient of the passive scalar) for $8 \leq Re_{\lambda} < 700$ and of a reacting turbulent mixing layer. It is shown that the deterministic chaos in the passive scalar fluctuations at small $Re_{\lambda}$ is characterized by an exponential spatial (wavenumber) spectrum, $E(k) \propto \exp(-k/k_c)$, whereas the distributed chaos at turbulent $Re_{\lambda}$ is characterized by a stretched exponential spectrum, $E(k) \propto \exp[-(k/k_{\beta})^{3/4}]$. The Birkhoff-Saffman invariant, related to momentum conservation and, via the Noether theorem, to spatial homogeneity, has been used as the theoretical basis for this stretched exponential spectrum. Although $k_c$ and $k_{\beta}$ represent the large-scale structures, the relevance of the Batchelor scale $k_{bat}$ has been established as well: the normalized values $k_c/k_{bat}$ and $k_{\beta}/k_{bat}$ exhibit universality.
physics
Half-maximal, $\,\mathcal{N}=4\,$, sectors of $\,D=4\,$ $\,\mathcal{N}=8\,$ supergravity with a dyonic ISO(7) gauging are investigated. We focus on a half-maximal sector including three vector multiplets, that arises as a certain $\,\textrm{SO}(3)_{\textrm{R}}$-invariant sector of the full theory. We discuss the embedding of this sector into the largest half-maximal sector of the $\,\mathcal{N}=8\,$ supergravity retaining six vector multiplets. We also provide its canonical $\,\mathcal{N}=4\,$ formulation and show that, from this perspective, our model leads in its own right to a new explicit gauging of $\,\mathcal{N}=4\,$ supergravity. Finally, expressions for the restricted duality hierarchy are given and the vacuum structure is investigated. Five new non-supersymmetric AdS vacua are found numerically. The previously known $\,\mathcal{N}=2\,$ and $\,\mathcal{N}=3\,$ AdS vacua are also contained in our $\,\mathcal{N}=4\,$ model. Unlike when embedded in previously considered sectors with fewer fields, these vacua exhibit their full $\,\mathcal{N}=2\,$ and $\,\mathcal{N}=3\,$ supersymmetry within our $\,\mathcal{N}=4\,$ model.
high energy physics theory
This paper addresses the ways AI ethics research operates on an ideology of ideal theory, in the sense discussed by Mills (2005) and recently applied to AI ethics by Fazelpour \& Lipton (2020). I address the structural and methodological conditions that attract AI ethics researchers to ideal theorizing, and the consequences this approach has for the quality and future of our research community. Finally, I discuss the possibilities for a nonideal future in AI ethics.
computer science
CYRA (CrYogenic solar spectrogRAph) is a facility instrument of the 1.6-meter Goode Solar Telescope (GST) at the Big Bear Solar Observatory (BBSO). CYRA focuses on the study of the near-infrared solar spectrum between 1 and 5 microns, an underexplored region which is not only a fertile ground for photospheric magnetic diagnostics, but also provides a unique window into the chromosphere lying atop the photosphere. CYRA is the first ever fully cryogenic spectrograph in any solar observatory, with its two predecessors, on the McMath-Pierce and Mees Telescopes, being based on warm optics except for the detectors and order sorting filters. CYRA is used to probe magnetic fields in various solar features and the quiet photosphere. CYRA measurements will allow new and better 3D extrapolations of the solar magnetic field and will provide more accurate boundary conditions for solar activity models. A superior spectral resolution of 150,000 and better allows enhanced observations of the chromosphere in the carbon monoxide (CO) spectral bands and will yield a better understanding of energy transport in the solar atmosphere. CYRA is divided into two optical sub-systems: the Fore-Optics Module and the Spectrograph. The Spectrograph is the heart of the instrument and contains the IR detector, grating, slits, filters, and imaging optics, all in a cryogenically cooled Dewar (cryostat). The detector is a 2048 by 2048 pixel HAWAII 2 array produced by Teledyne Scientific & Imaging, LLC. The interior of the cryostat and the readout electronics are maintained at 90 Kelvin by helium refrigerant based cryo-coolers, while the IR array is cooled to 30 Kelvin. The Fore-Optics Module de-rotates and stabilizes the solar image, provides scanning capabilities, and transfers the GST image to the Spectrograph. CYRA has been installed and is undergoing its commissioning phase.
astrophysics
Refactoring is the art of improving the design of a system without altering its external behavior. Refactoring has become a well-established and disciplined software engineering practice that has attracted a significant amount of research presuming that refactoring is primarily motivated by the need to improve system structures. However, recent studies have shown that developers may incorporate refactorings in other development activities that go beyond improving the design. Unfortunately, these studies are limited to developer interviews and a reduced set of projects. To cope with the above-mentioned limitations, we aim to better understand what motivates developers to apply refactoring by mining and classifying a large set of 111,884 commits containing refactorings, extracted from 800 Java projects. We trained a multi-class classifier to categorize these commits into 3 categories, namely, Internal QA, External QA, and Code Smell Resolution, along with the traditional BugFix and Functional categories. This classification challenges the original definition of refactoring, being exclusive to improving the design and fixing code smells. Further, to better understand our classification results, we analyzed commit messages to extract textual patterns that developers regularly use to describe their refactorings. The results show that (1) fixing code smells is not the main driver for developers to refactor their codebases. Refactoring is solicited for a wide variety of reasons, going beyond its traditional definition; (2) the distribution of refactorings differs between production and test files; (3) developers use several patterns to purposefully target refactoring; (4) the textual patterns, extracted from commit messages, provide better coverage for how developers document their refactorings.
computer science
Dynamical universality classes are distinguished by their dynamical exponent $z$ and unique scaling functions encoding space-time asymmetry for, e.g. slow-relaxation modes or the distribution of time-integrated currents. So far the universality class of the Nagel-Schreckenberg (NaSch) model, which is a paradigmatic model for traffic flow on highways, was not known except for the special case $v_{\text{max}}=1$. Here the model corresponds to the TASEP (totally asymmetric simple exclusion process) that is known to belong to the superdiffusive Kardar-Parisi-Zhang (KPZ) class with $z=3/2$. In this paper, we show that the NaSch model also belongs to the KPZ class \cite{KPZ} for general maximum velocities $v_{\text{max}}>1$. Using nonlinear fluctuating hydrodynamics theory we calculate the nonuniversal coefficients, fixing the exact asymptotic solutions for the dynamical structure function and the distribution of time-integrated currents. Performing large-scale Monte-Carlo simulations we show that the simulation results match the exact asymptotic KPZ solutions without any fitting parameter left. Additionally, we find that nonuniversal early-time effects or the choice of initial conditions might have a strong impact on the numerical determination of the dynamical exponent and therefore lead to inconclusive results. We also show that the universality class is not changed by extending the model to a two-lane NaSch model with dynamical lane changing rules.
condensed matter
In phylogenetics, tree-based networks are used to model and visualize the evolutionary history of species where reticulate events such as horizontal gene transfer have occurred. Formally, a tree-based network $N$ consists of a phylogenetic tree $T$ (a rooted, binary, leaf-labeled tree) and so-called reticulation edges that span between edges of $T$. The network $N$ is typically visualized by drawing $T$ downward and planar and reticulation edges with one of several different styles. One aesthetic criterion is to minimize the number of crossings between tree edges and reticulation edges. This optimization problem has not yet been researched. We show that, if reticulation edges are drawn x-monotone, the problem is NP-complete, but fixed-parameter tractable in the number of reticulation edges. If, on the other hand, reticulation edges are drawn like "ears", the crossing minimization problem can be solved in quadratic time.
computer science
We systematically study the holographic phase transition of the radion field in a five-dimensional warped model which includes a scalar potential with a power-like behavior. We consider Kaluza-Klein (KK) resonances with masses $m_{\rm KK}$ at the TeV scale or beyond. The backreaction of the radion field on the gravitational metric is taken into account by using the superpotential formalism. The confinement/deconfinement first order phase transition leads to a gravitational wave stochastic background which mainly depends on the scale $m_{\rm KK}$ and the number of colors, $N$, in the dual theory. Its power spectrum peaks at a frequency that depends on the amount of tuning required in the electroweak sector. It turns out that the present and forthcoming gravitational wave observatories can probe scenarios where the KK resonances are very heavy. Current aLIGO data already rule out vector boson KK resonances with masses in the interval $m_{\rm KK}\sim(1 - 10) \times 10^5$ TeV. Future gravitational experiments will be sensitive to resonances with masses $m_{\rm KK}\lesssim 10^5$ TeV (LISA), $10^8$ TeV (aLIGO Design) and $10^9$ TeV (ET). Finally, we also find that the Big Bang Nucleosynthesis bound in the frequency spectrum turns into a lower bound for the nucleation temperature as $T_n \gtrsim 10^{-4}\sqrt{N} \,m_{\rm KK}$.
high energy physics phenomenology
We present a new Bayesian inference method for compartmental models that takes into account the intrinsic stochasticity of the process. We show how to formulate a SIR-type Markov jump process as the solution of a stochastic differential equation with respect to a Poisson Random Measure (PRM), and how to simulate the process trajectory deterministically from a parameter value and a PRM realisation. This forms the basis of our Data Augmented MCMC, which consists in augmenting the parameter space with the unobserved PRM value. The resulting simple Metropolis-Hastings sampler acts as an efficient simulation-based inference method that can easily be transferred from model to model. Compared with a recent Data Augmentation method based on Gibbs sampling of individual infection histories, PRM-augmented MCMC scales much better with epidemic size and is far more flexible. PRM-augmented MCMC also yields a posteriori estimates of the PRM, which represent process stochasticity and can be used to validate the model. If the model is good, the posterior distribution should exhibit no pattern and be close to the PRM prior distribution. We illustrate this by fitting a non-seasonal model to some simulated seasonal case count data. Applied to the Zika epidemic of 2013 in French Polynesia, our approach shows that a simple SEIR model cannot correctly reproduce both the initial sharp increase in the number of cases and the final proportion of seropositive individuals. PRM-augmentation thus provides a coherent story for Stochastic Epidemic Model inference, where explicitly inferring process stochasticity helps with model validation.
statistics
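The key device in the abstract above is that a trajectory is a deterministic function of (parameters, noise realisation). A minimal sketch of that idea for an SIR Markov jump process, with a pre-drawn stream of uniforms standing in for the PRM realisation (our simplification, not the authors' code):

import numpy as np

def simulate_sir(theta, noise, S0=990, I0=10, R0=0, t_max=100.0):
    # Gillespie-style simulation driven entirely by the fixed "noise" stream:
    # re-running with the same theta and noise reproduces the same trajectory exactly.
    beta, gamma = theta
    S, I, R, t = S0, I0, R0, 0.0
    N = S0 + I0 + R0
    path = [(t, S, I, R)]
    k = 0
    while I > 0 and t < t_max and k + 1 < len(noise):
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += -np.log(1.0 - noise[k]) / total      # exponential waiting time
        if noise[k + 1] * total < rate_inf:       # event type: infection vs recovery
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        k += 2
        path.append((t, S, I, R))
    return np.array(path)

rng = np.random.default_rng(1)
u = rng.random(100_000)                    # the "augmented" noise variable
traj_a = simulate_sir((0.3, 0.1), u)
traj_b = simulate_sir((0.3, 0.1), u)       # identical: same parameters, same noise
assert np.allclose(traj_a, traj_b)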
The Sheaf-Theoretic Contextuality (STC) theory developed by Abramsky and colleagues is a very general account of whether multiply overlapping subsets of a set, each of which is endowed with certain "local" structure, can be viewed as inheriting this structure from a global structure imposed on the entire set. A fundamental requirement of STC is that any intersection of subsets inherit one and the same structure from all intersecting subsets. I show that when STC is applied to systems of random variables, it can be recast in the language of the Contextuality-by-Default (CbD) theory, and this allows one to extend STC to arbitrary systems, in which the requirement in question (called "consistent connectedness" in CbD) is not necessarily satisfied. When applied to deterministic systems, such as systems of logical statements with fixed truth values, a problem arises of distinguishing lack of consistent connectedness from contextuality. I show that it can be resolved by considering systems with multiple possible deterministic realizations as quasi-probabilistic systems with Bayesian priors assigned to the realizations. Although STC and CbD have distinct native languages and distinct aims and means, the conceptual modifications presented in this paper seem to make them essentially coextensive.
quantum physics
The advent of modern technology, permitting the measurement of thousands of characteristics simultaneously, has given rise to floods of data characterized by many large or even huge datasets. This new paradigm presents extraordinary challenges to data analysis and the question arises: how can conventional data analysis methods, devised for moderate or small datasets, cope with the complexities of modern data? The case of high dimensional data is particularly revealing of some of the drawbacks. We look at the case where the number of characteristics measured in an object is at least the number of observed objects and conclude that this configuration leads to geometrical and mathematical oddities and is an insurmountable barrier for the direct application of traditional methodologies. If scientists are going to ignore fundamental mathematical results arrived at in this paper and blindly use software to analyze data, the results of their analyses may not be trustworthy, and the findings of their experiments may never be validated. That is why new methods, together with the wise use of traditional approaches, are essential to progress safely through the present reality.
statistics
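One of the geometrical oddities alluded to above can be seen directly: when the number of measured characteristics is at least the number of observed objects, the sample covariance matrix is rank-deficient, so any classical procedure that inverts it breaks down. A small numerical illustration (ours):

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50                      # more variables than observations
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)        # p x p sample covariance matrix

print("shape:", S.shape)
print("rank :", np.linalg.matrix_rank(S))          # at most n - 1 = 19, far below p = 50
print("smallest eigenvalue (numerically ~ 0):", np.linalg.eigvalsh(S)[0])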
As deep learning based models are increasingly being used for information retrieval (IR), a major challenge is to ensure the availability of test collections for measuring their quality. Test collections are generated based on pooling results of various retrieval systems, but until recently this did not include deep learning systems. This raises a major challenge for reusable evaluation: since deep learning based models use external resources (e.g. word embeddings) and advanced representations as opposed to traditional methods that are mainly based on lexical similarity, they may return different types of relevant documents that were not identified in the original pooling. If so, test collections constructed using traditional methods are likely to lead to biased and unfair evaluation results for deep learning (neural) systems. This paper uses simulated pooling to test the fairness and reusability of test collections, showing that pooling based on traditional systems only can lead to biased evaluation of deep learning systems.
computer science
Recent observations of lensed sources have shown that the faintest ($M_{\mathrm{UV}} \approx -15\,\mathrm{mag}$) galaxies observed at z=6-8 appear to be extremely compact. Some of them have inferred sizes of less than 40 pc for stellar masses between $10^6$ and $10^7\,\mathrm{M}_{\odot}$, comparable to individual super star clusters or star cluster complexes at low redshift. High-redshift, low-mass galaxies are expected to show a clumpy, irregular morphology and if star clusters form in each of these well-separated clumps, the observed galaxy size would be much larger than the size of an individual star forming region. As supernova explosions impact the galaxy with a minimum delay time that exceeds the time required to form a massive star cluster, other processes are required to explain the absence of additional massive star forming regions. In this work we investigate whether the radiation of a young massive star cluster can suppress the formation of other detectable clusters within the same galaxy already before supernova feedback can affect the galaxy. We find that in low-mass ($M_{200} \lesssim 10^{10}\,\mathrm{M}_{\odot}$) haloes, the radiation from a compact star forming region with an initial mass of $10^{7}\,\mathrm{M}_{\odot}$ can keep gas clumps with Jeans masses larger than $\approx 10^{7}\,\mathrm{M}_{\odot}$ warm and ionized throughout the galaxy. In this picture, the small intrinsic sizes measured in the faintest $z=6-8$ galaxies are a natural consequence of the strong radiation field that stabilises massive gas clumps. A prediction of this mechanism is that the escape fraction for ionizing radiation is high for the extremely compact, high-z sources.
astrophysics
From a new rigorous formulation of the general axiomatic foundations of thermodynamics we derive an operational definition of entropy that responds to the emergent need in many technological frameworks to understand and deploy thermodynamic entropy well beyond the traditional realm of equilibrium states of macroscopic systems. The new treatment starts from a previously developed set of carefully worded operational definitions for all the necessary basic concepts, and is not based on the traditional ones of "heat" and of "thermal reservoir." It is achieved in three steps. First, a new definition of thermodynamic temperature is stated, for any stable equilibrium state. Then, by employing this definition, a measurement procedure is developed which defines uniquely the property entropy in a broad domain of states, which could include, \textit{in principle}, even some non-equilibrium states of few-particle systems, provided they are separable and uncorrelated. Finally, the domain of validity of the definition is extended, possibly to every state of every system, by a different procedure, based on the preceding one, which associates a range of entropy values to any state not included in the previous domain. The principle of entropy non-decrease and the additivity of entropy are proved in both the domains considered.
quantum physics
In a standard computed tomography (CT) image, pixels having the same Hounsfield Units (HU) can correspond to different materials and it is therefore challenging to differentiate and quantify materials. Dual-energy CT (DECT) is desirable to differentiate multiple materials, but DECT scanners are not as widely available as single-energy CT (SECT) scanners. Here we propose a deep learning approach to perform DECT imaging by using standard SECT data. We designed a predenoising and difference learning mechanism to generate DECT images from SECT data. The performance of the deep learning-based DECT approach was studied using images from patients who received contrast-enhanced abdomen DECT scans with a popular DE application: virtual non-contrast (VNC) imaging and contrast quantification. Clinically relevant metrics were used for quantitative assessment. The absolute HU differences between the predicted and original high-energy CT images are 1.3 HU, 1.6 HU, 1.8 HU and 1.3 HU for the ROIs on aorta, liver, spine and stomach, respectively. The aorta iodine quantification difference between iodine maps obtained from the original and deep learning DECT images is smaller than 1.0\%, and the noise levels in the material images have been reduced by more than 7-fold for the latter. This study demonstrates that highly accurate DECT imaging with single low-energy data is achievable by using a deep learning approach. The proposed method allows us to obtain high-quality DECT images without paying the overhead of conventional hardware-based DECT solutions and thus leads to a new paradigm of spectral CT imaging.
physics
HD 62658 (B9p V) is a little-studied chemically peculiar star. Light curves obtained by the Kilodegree Extremely Little Telescope (KELT) and Transiting Exoplanet Survey Satellite (TESS) show clear eclipses with a period of about 4.75 d, as well as out-of-eclipse brightness modulation with the same 4.75 d period, consistent with synchronized rotational modulation of surface chemical spots. High-resolution ESPaDOnS circular spectropolarimetry shows a clear Zeeman signature in the line profile of the primary; there is no indication of a magnetic field in the secondary. PHOEBE modelling of the light curve and radial velocities indicates that the two components have almost identical masses of about 3 M$_\odot$. The primary's longitudinal magnetic field $\langle B_z \rangle$ varies between about $+100$ and $-250$ G, suggesting a surface magnetic dipole strength $B_{\rm d} = 850$~G. Bayesian analysis of the Stokes $V$ profiles indicates $B_{\rm d} = 650$~G for the primary and $B_{\rm d} < 110$ G for the secondary. The primary's line profiles are highly variable, consistent with the hypothesis that the out-of-eclipse brightness modulation is a consequence of rotational modulation of that star's chemical spots. We also detect a residual signal in the light curve after removal of the orbital and rotational modulations, which might be pulsational in origin; this could be consistent with the weak line profile variability of the secondary. This system represents an excellent opportunity to examine the consequences of magnetic fields for stellar structure via comparison of two stars that are essentially identical with the exception that one is magnetic. The existence of such a system furthermore suggests that purely environmental explanations for the origin of fossil magnetic fields are incomplete.
astrophysics
We propose a non-destructive means of characterizing a semiconductor wafer via measuring parameters of an induced quantum dot on the material system of interest with a separate probe chip that can also house the measurement circuitry. We show that a single wire can create the dot, determine if an electron is present, and be used to measure critical device parameters. Adding more wires enables more complicated (potentially multi-dot) systems and measurements. As one application for this concept we consider silicon metal-oxide-semiconductor and silicon/silicon-germanium quantum dot qubits relevant to quantum computing and show how to measure low-lying excited states (so-called "valley" states). This approach provides an alternative method for characterization of parameters that are critical for various semiconductor-based quantum dot devices without fabricating such devices.
condensed matter
The puzzling idea that the combination of independent estimates of the magnitude of a quantity results in a very accurate prediction, which is superior to any or, at least, to most of the individual estimates is known as the wisdom of crowds. Here we use the Federal Reserve Bank of Philadelphia's Survey of Professional Forecasters database to confront the statistical and psychophysical explanations of this phenomenon. Overall we find that the data do not support any of the proposed explanations of the wisdom of crowds. In particular, we find a positive correlation between the variance (or diversity) of the estimates and the crowd error in disagreement with some interpretations of the diversity prediction theorem. In addition, contra the predictions of the psychophysical augmented quincunx model, we find that the skew of the estimates offers no information about the crowd error. More importantly, we find that the crowd beats all individuals in less than 2% of the forecasts and beats most individuals in less than 70% of the forecasts, which means that there is a sporting chance that an individual selected at random will perform better than the crowd. These results contrast starkly with the performance of non-natural crowds composed of unbiased forecasters which beat most individuals in practically all forecasts. The moderate statistical advantage of a real-world crowd over its members does not justify the ado about its wisdom, which is most likely a product of the selective attention fallacy.
statistics
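For contrast with the empirical findings above, the behaviour of the "non-natural" crowd of unbiased forecasters is easy to reproduce in simulation (our sketch): averaging independent unbiased estimates shrinks the error, so the crowd beats the majority of individuals in essentially every forecast.

import numpy as np

rng = np.random.default_rng(0)
n_forecasts, n_people = 1000, 30
truth = rng.normal(size=n_forecasts)
# Unbiased, independent forecasters: truth plus individual noise
estimates = truth[:, None] + rng.normal(scale=1.0, size=(n_forecasts, n_people))

crowd_err = np.abs(estimates.mean(axis=1) - truth)
indiv_err = np.abs(estimates - truth[:, None])

# Fraction of forecasts in which the crowd average beats more than half of the individuals
beats_most = (crowd_err[:, None] < indiv_err).mean(axis=1) > 0.5
print(f"crowd beats the majority of individuals in {beats_most.mean():.0%} of forecasts")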
The physical characteristics and evolution of a large-scale helium plume are examined through a series of numerical simulations with increasing physical resolution using adaptive mesh refinement (AMR). The five simulations each model a 1~m diameter circular helium plume exiting into a (4~m)$^3$ domain, and differ solely with respect to the smallest scales resolved using the AMR, spanning resolutions from 15.6~mm down to 0.976~mm. As the physical resolution becomes finer, the helium-air shear layer and subsequent Kelvin-Helmholtz instability are better resolved, leading to a shift in the observed plume structure and dynamics. In particular, a critical resolution is found between 3.91~mm and 1.95~mm, below which the mean statistics and frequency content of the plume are altered by the development of a Rayleigh-Taylor instability near the centerline in close proximity to the base of the plume. This shift corresponds to a plume "puffing" frequency that is slightly higher than would be predicted using empirical relationships developed for buoyant jets. Ultimately, the high-fidelity simulations performed here are intended as a new validation dataset for the development of subgrid-scale models used in large eddy simulations of real-world buoyancy-driven flows.
physics
Phase retrieval, i.e. the reconstruction of phase information from intensity information, is a central problem in many optical systems. Here, we demonstrate that a deep residual neural net is able to quickly and accurately perform this task for arbitrary point spread functions (PSFs) formed by Zernike-type phase modulations. Five slices of the 3D PSF at different focal positions within a two micron range around the focus are sufficient to retrieve the first six orders of Zernike coefficients.
electrical engineering and systems science
Active learning for sentence understanding aims at discovering informative unlabeled data for annotation and therefore reducing the demand for labeled data. We argue that the typical uncertainty sampling method for active learning is time-consuming and can hardly work in real-time, which may lead to ineffective sample selection. We propose adversarial uncertainty sampling in discrete space (AUSDS) to retrieve informative unlabeled samples more efficiently. AUSDS maps sentences into a latent space generated by popular pre-trained language models, and discovers informative unlabeled text samples for annotation via adversarial attack. The proposed approach is extremely efficient compared with traditional uncertainty sampling, with more than 10x speedup. Experimental results on five datasets show that AUSDS outperforms strong baselines on effectiveness.
computer science
Data competitions rely on real-time leaderboards to rank competitor entries and stimulate algorithm improvement. While such competitions have become quite popular and prevalent, particularly in supervised learning formats, their implementations by the host are highly variable. Without careful planning, a supervised learning competition is vulnerable to overfitting, where the winning solutions are so closely tuned to the particular set of provided data that they cannot generalize to the underlying problem of interest to the host. This paper outlines some important considerations for strategically designing relevant and informative data sets to maximize the learning outcome from hosting a competition based on our experience. It also describes a post-competition analysis that enables robust and efficient assessment of the strengths and weaknesses of solutions from different competitors, as well as greater understanding of the regions of the input space that are well-solved. The post-competition analysis, which complements the leaderboard, uses exploratory data analysis and generalized linear models (GLMs). The GLMs not only expand the range of results we can explore, they also provide more detailed analysis of individual sub-questions including similarities and differences between algorithms across different types of scenarios, universally easy or hard regions of the input space, and different learning objectives. When coupled with a strategically planned data generation approach, the methods provide richer and more informative summaries to enhance the interpretation of results beyond just the rankings on the leaderboard. The methods are illustrated with a recently completed competition to evaluate algorithms capable of detecting, identifying, and locating radioactive materials in an urban environment.
statistics
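A minimal sketch of the kind of post-competition GLM described above (ours; the data frame and column names are hypothetical), modelling per-scenario detection counts with a binomial family so that algorithm and scenario effects can be compared beyond leaderboard rank:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per (algorithm, scenario): number of correct detections out of n_trials.
results = pd.DataFrame({
    "algorithm": ["A", "A", "B", "B"],
    "source_strength": [1.0, 2.0, 1.0, 2.0],
    "correct": [12, 30, 18, 33],
    "n_trials": [40, 40, 40, 40],
})
results["failures"] = results["n_trials"] - results["correct"]

# Binomial GLM with a two-column (successes, failures) response:
# coefficients quantify algorithm and scenario-difficulty effects.
model = smf.glm("correct + failures ~ algorithm + source_strength",
                data=results, family=sm.families.Binomial()).fit()
print(model.summary())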
The gauge theories underlying gauged supergravity and exceptional field theory are based on tensor hierarchies: generalizations of Yang-Mills theory utilizing algebraic structures that generalize Lie algebras and, as a consequence, require higher-form gauge fields. Recently, we proposed that the algebraic structure allowing for consistent tensor hierarchies is axiomatized by `infinity-enhanced Leibniz algebras' defined on graded vector spaces generalizing Leibniz algebras. It was subsequently shown that, upon appending additional vector spaces, this structure can be reinterpreted as a differential graded Lie algebra. We use this observation to streamline the construction of general tensor hierarchies, and we formulate dynamics in terms of a hierarchy of first-order duality relations, including scalar fields with a potential.
high energy physics theory
A numerical semigroup $S$ is a subset of the non-negative integers containing $0$ that is closed under addition. The Hilbert series of $S$ (a formal power series equal to the sum of terms $t^n$ over all $n \in S$) can be expressed as a rational function in $t$ whose numerator is characterized in terms of the topology of a simplicial complex determined by membership in $S$. In this paper, we obtain analogous rational expressions for the related power series whose coefficient of $t^n$ equals $f(n)$ for one of several semigroup-theoretic invariants $f:S \to \mathbb R$ known to be eventually quasipolynomial.
mathematics
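As a concrete illustration of the objects above (our example, not taken from the paper): for the numerical semigroup $S = \langle 2,3 \rangle = \{0,2,3,4,5,\dots\}$, whose only gap is $1$,
$$\mathcal{H}_S(t) \;=\; \sum_{n \in S} t^n \;=\; 1 + t^2 + t^3 + t^4 + \cdots \;=\; \frac{1}{1-t} - t \;=\; \frac{1 - t + t^2}{1-t},$$
or, written over the denominator determined by the generators, $\mathcal{H}_S(t) = \dfrac{1 - t^6}{(1-t^2)(1-t^3)}$; it is numerators of this kind that the topological characterization mentioned above concerns.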
A new approach to supercavitating hull optimization was proposed, which combines CFD simulation and analytical methods. High-speed penetration into water at a velocity of 1 km/s was considered with fixed body mass and caliber. Six different axisymmetric hulls with disc, 30{\deg} and 10{\deg} conical cavitators were simulated with the use of FLUENT for the first stage of penetration (down to a depth of 6 m). The optimized hull shapes were calculated with the use of a quasi-steady approach and the characteristics obtained by CFD simulation. The use of slender cavitators drastically decreases the local pressures, but the higher friction drag also reduces the depth of supercavitating penetration. Optimized conical hulls can ensure much deeper penetration in comparison with the initial cylindrical hulls. The slenderest cavitator with the cylindrical hull ensures small loads and very deep penetration.
physics
We show how to use random field theory in a supervised, energy-based model for multiple pseudo image classification of 2D integer matrices. In the model, each row of a 2D integer matrix is a pseudo image where a local receptive field focuses on multiple portions of individual rows for simultaneous learning. The model is used for a classification task consisting of presence of patient biomarkers indicative of a particular disease.
electrical engineering and systems science
The localization of stellar-mass binary black hole mergers using gravitational waves is critical in understanding the properties of the binaries' host galaxies, observing possible electromagnetic emission from the mergers, or using them as a cosmological distance ladder. The precision of this localization can be substantially increased with prior astrophysical information about the binary system. In particular, constraining the inclination of the binary can reduce the distance uncertainty of the source. Here we present the first realistic set of localizations for binary black hole mergers, including different prior constraints on the binaries' inclinations. We find that prior information on the inclination can reduce the localization volume by a factor of 3. We discuss two astrophysical scenarios of interest: (i) follow-up searches for beamed electromagnetic/neutrino counterparts and (ii) mergers in the accretion disks of active galactic nuclei.
astrophysics
The Drell-Yan hadronic tensor for the electromagnetic (EM) current is calculated in the Sudakov region $s\gg Q^2\gg q_\perp^2$ with ${1\over Q^2}$ accuracy, first at the tree level and then with double-log accuracy. It is demonstrated that in the leading order in $N_c$ the higher-twist quark-quark-gluon TMDs reduce to leading-twist TMDs due to the QCD equations of motion. The resulting tensor for unpolarized hadrons is EM gauge-invariant and depends on two leading-twist TMDs: $f_1$, responsible for the total DY cross section, and the Boer-Mulders function $h_1^\perp$. Order-of-magnitude estimates of angular distributions for the DY process seem to agree with LHC results at the corresponding kinematics.
high energy physics phenomenology
The prediction of electrical power in combined cycle power plants is a key challenge in the electrical power and energy systems field. This power output can vary depending on environmental variables, such as temperature, pressure, and humidity. Thus, the business problem is how to predict the power output as a function of these environmental conditions in order to maximize profit. The research community has solved this problem by applying machine learning techniques and has managed to reduce the computational and time costs in comparison with traditional thermodynamical analysis. Until now, this challenge has been tackled from a batch learning perspective, in which data is assumed to be at rest and models do not continuously integrate new information into already constructed models. We present an approach closer to the Big Data and Internet of Things paradigms, in which data is arriving continuously and models learn incrementally, achieving significant enhancements in terms of data processing (time, memory and computational costs) and obtaining competitive performance. This work compares and examines the hourly electrical power prediction of several streaming regressors, and discusses the best technique in terms of processing time and performance to be applied in this streaming scenario.
electrical engineering and systems science
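A minimal sketch of the streaming/incremental setup described above (ours, with synthetic data standing in for the plant's sensor readings), using scikit-learn's partial_fit in a prequential test-then-train loop:

import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
scaler = StandardScaler()
model = SGDRegressor(learning_rate="constant", eta0=0.01)

maes = []
for hour in range(500):
    # One "arriving" batch: [temperature, pressure, humidity] -> power output (synthetic)
    X = rng.normal(size=(16, 3))
    y = 450 - 10 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 2] + rng.normal(scale=2, size=16)

    scaler.partial_fit(X)
    Xs = scaler.transform(X)
    if hour > 0:                       # test on the new batch before training on it
        maes.append(mean_absolute_error(y, model.predict(Xs)))
    model.partial_fit(Xs, y)           # then update the model incrementally

print("final prequential MAE:", round(np.mean(maes[-50:]), 2))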
Motivated by the need to statistically quantify differences between modern (complex) data sets, which commonly result as high-resolution measurements of stochastic processes varying over a continuum, we propose novel testing procedures to detect relevant differences between the second-order dynamics of two functional time series. In order to take into account the between-function dynamics that characterize this type of functional data, a frequency domain approach is taken. Test statistics are developed to compare differences in the spectral density operators and in the primary modes of variation as encoded in the associated eigenelements. Under mild moment conditions, we show convergence of the underlying statistics to Brownian motions and construct pivotal test statistics. The latter is essential because the nuisance parameters can be unwieldy and their robust estimation infeasible, especially if the two functional time series are dependent. Besides these novel features, the properties of the tests are robust to any choice of frequency band, also enabling comparison of energy content at a single frequency. The finite-sample performance of the tests is verified through a simulation study and illustrated with an application to fMRI data.
statistics
We introduce a new quantum R\'enyi divergence $D^{\#}_{\alpha}$ for $\alpha \in (1,\infty)$ defined in terms of a convex optimization program. This divergence has several desirable computational and operational properties such as an efficient semidefinite programming representation for states and channels, and a chain rule property. An important property of this new divergence is that its regularization is equal to the sandwiched (also known as the minimal) quantum R\'enyi divergence. This allows us to prove several results. First, we use it to get a converging hierarchy of upper bounds on the regularized sandwiched $\alpha$-R\'enyi divergence between quantum channels for $\alpha > 1$. Second it allows us to prove a chain rule property for the sandwiched $\alpha$-R\'enyi divergence for $\alpha > 1$ which we use to characterize the strong converse exponent for channel discrimination. Finally it allows us to get improved bounds on quantum channel capacities.
quantum physics
We define jet transition values for the anti-$k_{\bot}$ algorithm for both hadron and $e^+e^-$ colliders. We show how these transition values can be computed and how they can be used to improve the performance of clusterization when jet resolution parameters are varied over a larger set of values. Finally we present a simple performance test to illustrate the behavior of the new method compared to the original one.
high energy physics phenomenology
We show how one may test macroscopic local realism where, different from conventional Bell tests, all relevant measurements need only distinguish between two macroscopically distinct states of the system being measured. Here, measurements give macroscopically distinguishable outcomes for a system observable and do not resolve microscopic properties (of order $\hbar$). Macroscopic local realism assumes: (1) macroscopic realism (the system prior to measurement is in a state which will lead to just one of the macroscopically distinguishable outcomes) and (2) macroscopic locality (a measurement on a system at one location cannot affect the macroscopic outcome of the measurement on a system at another location, if the measurement events are spacelike separated). To obtain a quantifiable test, we define $M$-scopic local realism where the outcomes are separated by an amount $\sim M$. We first show for $N$ up to $20$ that $N$-scopic Bell violations are predicted for entangled superpositions of $N$ bosons (at each of two sites). Secondly, we show violation of $M$-scopic local realism for entangled superpositions of coherent states of amplitude $\alpha$, for arbitrarily large $M=\alpha$. In both cases, the systems evolve dynamically according to a local nonlinear interaction. The first uses nonlinear beam splitters realised through nonlinear Josephson interactions; the second is based on nonlinear Kerr interactions. To achieve the Bell violations, the traditional choice between two spin measurement settings is replaced by a choice between different times of evolution at each site.
quantum physics
Stellar bars and spiral arms co-exist and co-evolve in most disc galaxies in the local Universe. However, the physical nature of this interaction remains a matter of debate. In this work, we present a set of numerical simulations based on isolated galactic models aimed at exploring how the bar properties affect the induced spiral structure. We cover a large combination of bar properties, including the bar length, axial ratio, mass and rotation rate. We use three galactic models describing galaxies with rising, flat and declining rotation curves. We find that the pitch angle best correlates with the bar pattern speed and the spiral amplitude with the bar quadrupole moment. Our results suggest that galaxies with declining rotation curves are the most efficient at forming grand-design spiral structure, evidenced by spirals with larger amplitude and pitch angle. We also test the effects of the velocity ellipsoid in a subset of simulations. We find that as we increase the radial anisotropy, spirals increase their pitch angle but become less coherent, with smaller amplitude.
astrophysics
In many real-world scientific problems, generating ground truth (GT) for supervised learning is almost impossible. The causes include limitations imposed by the scientific instrument, the physical phenomenon itself, or the complexity of modeling. Performing artificial intelligence (AI) tasks such as segmentation, tracking, and analytics of small sub-cellular structures such as mitochondria in microscopy videos of living cells is a prime example. The 3D blurring function of the microscope, the digital resolution set by the pixel size, the optical resolution due to the nature of light, noise characteristics, and the complex 3D deformable shapes of mitochondria all contribute to making this problem GT hard. Manual segmentation of hundreds of mitochondria across thousands of frames, and then across many such videos, is not only herculean but also physically inaccurate because of the instrument- and phenomenon-imposed limitations. Unsupervised learning produces less than optimal results, and accuracy is important if inferences relevant to therapy are to be derived. To solve this seemingly insurmountable problem, we bring modeling and deep learning together. We show that accurate physics-based modeling of microscopy data, including all its limitations, can be the solution for generating simulated training datasets for supervised learning. We show here that our simulation-supervised segmentation approach is a great enabler for studying mitochondrial states and behaviour in heart muscle cells, where mitochondria have a significant role to play in the health of the cells. We report an unprecedented mean IoU score of 91% for binary segmentation (19% better than the best performing unsupervised approach) of mitochondria in actual microscopy videos of living cells. We further demonstrate the possibility of performing multi-class classification, tracking, and morphology-associated analytics at the scale of individual mitochondria.
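For reference, the reported mean IoU is the intersection-over-union, typically averaged over evaluation images or classes, where for a predicted mask $A$ and a ground-truth mask $B$

$$\mathrm{IoU}(A,B) = \frac{|A \cap B|}{|A \cup B|}.$$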
electrical engineering and systems science
We analyze the infrared behavior of the two- and four-point functions for the massless $O(N)$ model in Lorentzian de Sitter spacetime, using the $1/N$ expansion. Our approach is based on the study of the Schwinger-Dyson equations on the sphere (Euclidean de Sitter space), using the fact that the infrared behavior in Lorentzian spacetime is determined by the pole structure of the Euclidean correlation functions. We compute the two-point function up to next-to-leading order (NTLO) in $1/N$, and show that in the infrared it behaves as the superposition of two massive free propagators with effective masses of the same order as, but not equal to, the dynamical mass $m_{dyn}$. We compare our results with those obtained using other approaches, and find that they are equivalent but retrieved in a considerably simpler way. We also discuss the infrared behavior of the equal-time four-point functions.
high energy physics theory
Computing control invariant sets is paramount in many applications. The families of sets commonly used for computations are ellipsoids and polyhedra. However, searching for a control invariant set over the family of ellipsoids is conservative for systems more complex than unconstrained linear time-invariant systems. Moreover, even though the control invariant set may be approximated arbitrarily closely by polyhedra, the complexity of the polyhedra may grow rapidly in certain directions. An attractive generalization of these two families is the family of piecewise semi-ellipsoids. We provide in this paper a convex programming approach for computing control invariant sets of this family.
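As background (not the paper's construction), invariance conditions for the ellipsoidal case are classical: for a discrete-time linear system $x^{+} = Ax + Bu$ with a fixed linear feedback $u = Kx$, the ellipsoid $\mathcal{E} = \{x : x^{\top} P x \le 1\}$ with $P \succ 0$ is invariant whenever

$$(A + BK)^{\top} P\, (A + BK) \preceq P ,$$

a linear matrix inequality in $P$ for fixed $K$. The convex program proposed in the paper for piecewise semi-ellipsoids generalizes this type of condition and is not reproduced here.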
mathematics
We review gauge-freedom in quantum electrodynamics (QED) outside of textbook regimes. We emphasise that QED subsystems are defined relative to a choice of gauge. Each definition uses different gauge-invariant observables. We show that this relativity is only eliminated if a sufficient number of Markovian and weak-coupling approximations are employed. All physical predictions are gauge-invariant, including subsystem properties such as photon number and entanglement. However, subsystem properties naturally differ for different physical subsystems. Gauge-ambiguities arise not because it is unclear how to obtain gauge-invariant predictions, but because it is not always clear which physical observables are the most operationally relevant. The gauge-invariance of a prediction is necessary but not sufficient to ensure its operational relevance. We show that in controlling which gauge-invariant observables are used to define a material system, the choice of gauge affects the balance between the material system's localisation and its electromagnetic dressing. We review various implications of subsystem gauge-relativity for deriving effective models, for describing time-dependent interactions, for photodetection theory, and for describing matter within a cavity.
quantum physics
The search for materials with topological properties is an ongoing effort. In this article we propose a systematic statistical method supported by machine learning techniques that is capable of constructing topological models for a generic lattice without prior knowledge of the phase diagram. By sampling tight-binding parameter vectors from a random distribution we obtain data sets that we label with the corresponding topological index. This labeled data is then analyzed to extract those parameters most relevant for the topological classification and to find their most likely values. We find that the marginal distributions of the parameters already define a topological model. Additional information is hidden in correlations between parameters. Here we present as a proof of concept the prediction of the Haldane model as the prototypical topological insulator for the honeycomb lattice in Altland-Zirnbauer (AZ) class A. The algorithm is straightforwardly applicable to any other AZ class or lattice and could be generalized to interacting systems.
condensed matter
This paper introduces an R package ForecastTB that can be used to compare the accuracy of different forecasting methods as related to the characteristics of a time series dataset. ForecastTB is a plug-and-play structured module, and several forecasting methods can be included with simple instructions. The proposed test-bench is not limited to the default forecasting and error metric functions, and users are able to append, remove, or choose the desired methods as required. In addition, several plotting functions and statistical performance metrics are provided to visualize the comparative performance and accuracy of different forecasting methods. Furthermore, this paper presents real application examples with natural time series datasets (i.e., wind speed and solar radiation) to demonstrate how the features of the ForecastTB package can be used to compare forecasting methods as affected by the characteristics of a dataset. Modeling results indicate the applicability and robustness of the proposed R package ForecastTB for time series forecasting.
statistics
Hexagonally deformed Fermi surfaces and strong nesting, found in topological insulators (TIs) such as $Bi_2Se_3$ and $Bi_2Te_3$ over the past decade, have led to several predictions of the existence of Density Wave order in these systems. Recent evidence for strong Fermi nesting in superconducting $Cu-Bi_2Se_3$ and $Nb-Bi_2Se_3$ has further led to the speculation about the importance of charge order in the context of unconventional superconductivity. Here, we report what we believe is the first direct observation of Charge Density Wave (CDW) order in $Bi_2Se_3$. Our results include the observation of a 140K metal-insulator-metal transition in resistivity as a function of temperature. This is corroborated by nuclear magnetic resonance (NMR) studies of the spin-lattice relaxation rate of the $^{209}Bi$ nucleus, which also displays a transition at 140K associated with an opening of an energy gap of ~8meV. Additionally, we use electron diffraction to reveal a periodic lattice distortion (PLD) in $Bi_2Se_3$, together with diffuse charge order between $\vec{k}$ and $\vec{k}\pm\Delta\vec{k}$. This diffuse scattering points toward the presence of an incommensurate charge density wave (I-CDW) above room temperature, which locks into a CDW upon cooling below $\sim140K$. We also observe an additional transition in $\frac{1}{T_1}$ near 200K, which appears to display anisotropy with the direction of applied magnetic field. In this report, we focus on the CDW transition at 140K and include some speculation of the other transition observed at 200K by NMR, also revealed here for the first time.
condensed matter
Time irreversibility, which characterizes nonequilibrium processes, can be measured based on the probabilistic differences between symmetric vectors. To simplify the quantification of time irreversibility, symmetric permutations instead of symmetric vectors have been employed in some studies. However, although effective in practical applications, this approach is conceptually incorrect. Time irreversibility should be measured based on the permutations of symmetric vectors rather than symmetric permutations, whereas symmetric permutations can instead be employed to determine the quantitative amplitude irreversibility -- a novel parameter for nonequilibrium proposed in this paper, calculated by means of the probabilistic difference in amplitude fluctuations. Through theoretical and experimental analyses, we highlight the strong similarities and close associations between the time irreversibility and amplitude irreversibility measures. Our paper clarifies the connections and differences between the two types of permutation-based parameters for quantitative nonequilibrium, and by doing so, we bridge the concepts of amplitude irreversibility and time irreversibility and broaden the selection of quantitative tools for studying nonequilibrium processes in complex systems.
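As a minimal illustration of a permutation-based asymmetry measure (not the specific estimators developed in the paper), one can compare the ordinal-pattern distribution of a series with that of its time reversal; the pattern length m = 3 and the total-variation-style distance below are assumptions made only for this sketch.

```python
import numpy as np
from itertools import permutations

def pattern_distribution(x, m=3):
    """Relative frequencies of ordinal (permutation) patterns of length m."""
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - m + 1):
        counts[tuple(np.argsort(x[i:i + m]))] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def permutation_asymmetry(x, m=3):
    """Total-variation distance between pattern distributions of x and reversed x."""
    x = np.asarray(x, dtype=float)
    forward = pattern_distribution(x, m)
    backward = pattern_distribution(x[::-1], m)
    return 0.5 * sum(abs(forward[p] - backward[p]) for p in forward)
```

A reversible (e.g. Gaussian linear) series gives a value near zero, while nonequilibrium dynamics typically give a clearly positive value.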
electrical engineering and systems science
Recent methods for long-tailed instance segmentation still struggle on rare object classes with little training data. We propose a simple yet effective method, Feature Augmentation and Sampling Adaptation (FASA), that addresses the data scarcity issue by augmenting the feature space especially for rare classes. Both the Feature Augmentation (FA) and feature sampling components are adaptive to the actual training status -- FA is informed by the feature mean and variance of observed real samples from past iterations, and we sample the generated virtual features in a loss-adapted manner to avoid over-fitting. FASA does not require any elaborate loss design, and removes the need for inter-class transfer learning, which often involves a large cost and manually-defined head/tail class groups. We show FASA is a fast, generic method that can be easily plugged into standard or long-tailed segmentation frameworks, with consistent performance gains and little added cost. FASA is also applicable to other tasks like long-tailed classification with state-of-the-art performance. Code will be released.
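A minimal NumPy sketch of the core feature-augmentation idea (not the authors' implementation: in FASA the class-wise statistics are maintained online over training iterations and the amount of sampling per class is adapted from a loss signal, both omitted here):

```python
import numpy as np

def sample_virtual_features(features, labels, n_virtual, rng=None):
    """Draw class-conditional Gaussian virtual features.

    features: (N, D) array of real features observed so far.
    labels:   (N,) array of class ids.
    Returns n_virtual synthetic features per class, sampled from a diagonal
    Gaussian with that class's empirical mean and standard deviation.
    """
    rng = rng or np.random.default_rng()
    virt_x, virt_y = [], []
    for c in np.unique(labels):
        feats_c = features[labels == c]
        mu = feats_c.mean(axis=0)
        sigma = feats_c.std(axis=0) + 1e-6      # avoid zero variance
        virt_x.append(rng.normal(mu, sigma, size=(n_virtual, features.shape[1])))
        virt_y.append(np.full(n_virtual, c))
    return np.concatenate(virt_x), np.concatenate(virt_y)
```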
computer science
In this work, we develop a robust adaptive well-balanced and positivity-preserving central-upwind scheme on unstructured triangular grids for the shallow water equations. The numerical method is an extension of the scheme from [{\sc Liu {\em et al.}}, J. Comput. Phys., 374 (2018), pp. 213--236]. As a part of the adaptive central-upwind algorithm, we obtain a local a posteriori error estimator for an efficient mesh refinement strategy. The accuracy, high resolution, and efficiency of the new adaptive central-upwind scheme are demonstrated on a number of challenging tests for shallow water models.
mathematics
Two-dimensional superfluidity and quantum turbulence are directly connected to the microscopic dynamics of quantized vortices. However, surface effects have prevented direct observations of coherent vortex dynamics in strongly-interacting two-dimensional systems. Here, we overcome this challenge by confining a two-dimensional droplet of superfluid helium at the microscale on the atomically-smooth surface of a silicon chip. An on-chip optical microcavity allows laser initiation of vortex clusters and nondestructive observation of their decay in a single shot. Coherent dynamics dominate, with thermal vortex diffusion suppressed by six orders of magnitude. This establishes a new on-chip platform to study emergent phenomena in strongly-interacting superfluids, test astrophysical dynamics such as those in the superfluid core of neutron stars in the laboratory, and construct quantum technologies such as precision inertial sensors.
condensed matter
We study the Littlewood-Paley-Stein functions associated with Hodge-de Rham and Schr{\"o}dinger operators on Riemannian manifolds. Under conditions on the Ricci curvature we prove their boundedness on $L^p$ for $p$ in some interval $(p_1, 2]$ and make a link to the Riesz Transform. An important fact is that we do not make assumptions of doubling measure or estimates on the heat kernel in this case. For $p > 2$ we give a criterion to obtain the boundedness of the vertical Littlewood-Paley-Stein function associated with Schr{\"o}dinger operators on $L^p$.
mathematics
We provide a brief but self-contained review of two-dimensional conformal field theory, from the basic principles to some of the simplest models. From the representations of the Virasoro algebra on the one hand, and the state-field correspondence on the other hand, we deduce Ward identities and Belavin--Polyakov--Zamolodchikov equations for correlation functions. We then explain the principles of the conformal bootstrap method, and introduce conformal blocks. This allows us to define and solve minimal models and Liouville theory. In particular, we study their three- and four-point functions, and discuss their existence and uniqueness. In appendices, we introduce the free boson theory (with an arbitrary central charge), and the modular bootstrap in minimal models.
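As a reminder of the central algebraic input (standard material, stated here only for reference), the Virasoro algebra with central charge $c$ has generators $L_n$, $n \in \mathbb{Z}$, obeying

$$[L_m, L_n] = (m-n)\,L_{m+n} + \frac{c}{12}\,m\,(m^2-1)\,\delta_{m+n,0}.$$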
high energy physics theory
We propose the use of superconducting nanowires as both target and sensor for direct detection of sub-GeV dark matter. With excellent sensitivity to small energy deposits on electrons, and demonstrated low dark counts, such devices could be used to probe electron recoils from dark matter scattering and absorption processes. We demonstrate the feasibility of this idea using measurements of an existing fabricated tungsten-silicide nanowire prototype with a 0.8 eV energy threshold and a mass of 4.3 nanograms, which showed no dark counts over 10 thousand seconds of exposure. The results from this device already place meaningful bounds on dark matter-electron interactions, including the strongest terrestrial bounds on sub-eV dark photon absorption to date. Expected future fabrication at larger scales and with lower thresholds should enable probing new territory in the direct detection landscape, establishing the complementarity of this approach to other existing proposals.
high energy physics phenomenology
We compare analytic predictions for real- and Fourier-space two-point statistics for biased tracers from a variety of Lagrangian Perturbation Theory approaches against those from state-of-the-art N-body simulations in $f(R)$ Hu-Sawicki and the nDGP braneworld modified gravity theories. We show that the novel physics of gravitational collapse in scalar-tensor theories with the chameleon or the Vainshtein screening mechanism can be effectively factored in with bias parameters analytically predicted using the Peak-Background Split formalism when updated to include the environmental sensitivity of modified gravity theories as well as changes to the halo mass function. We demonstrate that Convolution Lagrangian Perturbation Theory (CLPT) and Standard Perturbation Theory (SPT) approaches provide accurate analytic methods to predict the correlation function and power spectra, respectively, for biased tracers in modified gravity models and are able to characterize the BAO, power-law and small-scale regimes needed for upcoming galaxy surveys such as DESI, Euclid, LSST and WFIRST.
astrophysics
We prove that no eigenvalue of the clamped disk can have multiplicity greater than six. Our method of proof is based on a new recursion formula, linear algebra arguments and a transcendency theorem due to Siegel and Shidlovskii.
mathematics
We investigate approaches to reduce the computational complexity of Volterra nonlinear equalizers (VNLEs) for short-reach optical transmission systems using intensity modulation and direct detection (IM/DD). In this contribution we focus on a structural reduction of the number of kernels, i.e. we define rules to decide which terms need to be implemented and which can be neglected before the kernels are calculated. This static complexity reduction is to be distinguished from other approaches such as pruning or L1 regularization, which are applied after the adaptation of the full Volterra equalizer, e.g. by thresholding. We investigate the impact of the complexity reduction on 90 GBd PAM6 IM/DD experimental data acquired in a back-to-back setup as well as in the case of transmission over 1 km of SSMF. First, we show that the third-order VNLE terms have a significant impact on the overall performance of the system and that a high number of coefficients is necessary for optimal performance. Afterwards, we show that restrictions, for example on the tap spacing among samples participating in the same kernel, can lead to an improved tradeoff between performance and complexity compared to a full third-order VNLE. We show an example in which the number of third-order kernels is halved without any appreciable performance degradation.
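For context, a generic third-order Volterra equalizer (the baseline whose kernel count is being reduced) maps the received samples $x[n]$ to

$$y[n] = \sum_{k_1} h_1[k_1]\, x[n-k_1] + \sum_{k_1 \le k_2} h_2[k_1,k_2]\, x[n-k_1]\, x[n-k_2] + \sum_{k_1 \le k_2 \le k_3} h_3[k_1,k_2,k_3]\, x[n-k_1]\, x[n-k_2]\, x[n-k_3],$$

so that restricting, for example, the allowed spacing $|k_i - k_j|$ within a kernel directly removes higher-order terms before adaptation; the specific selection rules used in the paper are not reproduced here.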
electrical engineering and systems science
The R package abn is designed to fit additive Bayesian models to observational datasets. It contains routines to score Bayesian networks based on Bayesian or information theoretic formulations of generalized linear models. It is equipped with exact search and greedy search algorithms to select the best network. It supports a possible blend of continuous, discrete and count data and input of prior knowledge at a structural level. The Bayesian implementation supports random effects to control for one-layer clustering. In this paper, we give an overview of the methodology and illustrate the package's functionalities using a veterinary dataset about respiratory diseases in commercial swine production.
statistics
We study color allowed bottom baryon to $s$-wave and $p$-wave charmed baryon non-leptonic decays in this work. The charmed baryons include spin-1/2 and spin-3/2 states. Explicitly, we consider $\Lambda_b\to \Lambda^{(*,**)}_c M^-$, $\Xi_b\to\Xi_c^{(**)} M^-$ and $\Omega_b\to\Omega^{(*,**)}_c M^-$ decays with $M=\pi, K, \rho, K^*, a_1, D, D_s, D^*, D^*_s$, $\Lambda^{(*,**)}_c=\Lambda_c, \Lambda_c(2595), \Lambda_c(2625), \Lambda_c(2765), \Lambda_c(2940)$, $\Xi_c^{(**)}=\Xi_c, \Xi_c(2815), \Xi_c(2790)$ and $\Omega^{(*,**)}_c=\Omega_c, \Omega_c(2770), \Omega_c(3050), \Omega_c(3090), \Omega_c(3120)$. There are six types of transitions, namely, (i) ${\cal B}_b({\bf \bar 3_f},1/2^+)$ to ${\cal B}_c({\bf \bar 3_f},1/2^+)$, (ii) ${\cal B}_b({\bf 6_f},1/2^+)$ to ${\cal B}_c({\bf 6_f},1/2^+)$, (iii) ${\cal B}_b({\bf 6_f},1/2^+)$ to ${\cal B}_c({\bf 6_f},3/2^+)$, (iv) ${\cal B}_b({\bf 6_f},1/2^+)$ to ${\cal B}_c({\bf 6_f},3/2^-)$, (v) ${\cal B}_b({\bf \bar 3_f},1/2^+)$ to ${\cal B}_c({\bf \bar 3_f},1/2^-)$, and (vi) ${\cal B}_b( {\bf \bar 3_f},1/2^+)$ to ${\cal B}_c({\bf \bar 3_f},3/2^-)$ transitions. Types (i) to (iii) involve spin 1/2 and 3/2 $s$-wave charmed baryons, while types (iv) to (vi) involve spin 1/2 and 3/2 $p$-wave charmed baryons. The transition form factors are calculated in the light-front quark model approach. All of the form factors in the $1/2\to 1/2$ and $1/2 \to 3/2$ transitions are extracted, and they are found to reasonably satisfy the relations obtained in the heavy quark limit, as we are using heavy but finite $m_b$ and $m_c$. Using na\"{i}ve factorization, decay rates and up-down asymmetries of the above modes are predicted and can be checked experimentally. The study on these decay modes may shed light on the quantum numbers of $\Lambda_c(2765)$, $\Lambda_c(2940)$ $\Omega_c(3050)$, $\Omega_c(3090)$ and $\Omega_c(3120)$.
high energy physics phenomenology
The aim of this article is to prove that the Torelli group action on the G-character varieties is ergodic for G a connected, semi-simple and compact Lie group.
mathematics
While hurdle Poisson regression is a popular class of models for count data with excessive zeros, the link function in the binary component may be unsuitable for highly imbalanced cases. Ordinary Poisson regression is unable to handle over- or underdispersion. In this paper, we introduce the Conway-Maxwell-Poisson (CMP) distribution and integrate the use of flexible skewed Weibull link functions as a better alternative. We take a fully Bayesian approach to draw inference from the underlying models to better explain skewness and quantify dispersion, with the Deviance Information Criterion (DIC) used for model selection. For empirical investigation, we analyze mining injury data for the period 2013-2016 from the U.S. Mine Safety and Health Administration (MSHA). The risk factors describing proportions of employee hours spent in each type of mining work are compositional data; probabilistic principal components analysis (PPCA) is deployed to deal with such covariates. The hurdle CMP regression is additionally adjusted for exposure, measured by the total employee working hours, to make inference on the rate of mining injuries; we test its competitiveness against other models. This can be used as a predictive model in the mining workplace to identify features that increase the risk of injuries so that prevention can be implemented.
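For reference (standard background, not the paper's full hurdle specification), the CMP distribution has probability mass function

$$P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda,\nu)}, \qquad Z(\lambda,\nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}, \qquad y = 0,1,2,\ldots,$$

where $\nu < 1$ accommodates overdispersion, $\nu > 1$ underdispersion, and $\nu = 1$ recovers the Poisson distribution.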
statistics
A new technique for nonparametric regression of multichannel signals is presented. The technique is based on the use of the Rational-Dilation Wavelet Transform (RADWT), equipped with a tunable Q-factor able to provide sparse representations of functions with different oscillation persistence. In particular, two different frames are obtained from two RADWTs with different Q-factors that give sparse representations of functions with low and high resonance. It is assumed that the signals are measured simultaneously on several independent channels and that they share the low resonance component and the spectral characteristics of the high resonance component. Then, a regression analysis is performed by means of the group lasso penalty. Furthermore, a result of asymptotic optimality of the estimator is presented under reasonable assumptions, exploiting recent results on group-lasso-like procedures. Numerical experiments show the performance of the proposed method in different synthetic scenarios as well as in a real-case example for the analysis and joint detection of sleep spindles and K-complex events for multiple electroencephalogram (EEG) signals.
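For reference, the group lasso penalty used in the regression step has the standard form (the paper's exact design, built from the two RADWT frames and multiple channels, is not reproduced here): with coefficients partitioned into groups $\beta_1,\ldots,\beta_G$ of sizes $p_1,\ldots,p_G$,

$$\hat{\beta} = \arg\min_{\beta}\; \tfrac{1}{2}\,\| y - X\beta \|_2^2 + \lambda \sum_{g=1}^{G} \sqrt{p_g}\; \| \beta_g \|_2 ,$$

which induces sparsity at the level of whole groups rather than individual coefficients.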
statistics
We explore scenarios for the accretion of ornamental ridges on Saturn's moons Pan, Atlas, and Daphnis from material in Saturn's rings. Accretion of complex-shaped ridges from ring material should be possible when the torque from accreted material does not exceed the tidal torque from Saturn that ordinarily maintains tidal lock. This gives a limit on the maximum accretion rate and the minimum duration for equatorial ridge growth. We explore the longitude distribution of ridges accreted from ring material, initially in circular orbits, onto a moon that is on a circular, inclined or eccentric orbit. Sloped and lobed ridges can be accreted if the moon tidally realigns during accretion due to its change in shape or because the disk edge surface density profile allows ring material originating at different initial semi-major axes to impact the moon at different locations on its equatorial ridge. We find that accretion from an asymmetric gap might account for a depression on Atlas's equatorial ridge. Accretion from an asymmetric gap at an orbital eccentricity similar to the Hill eccentricity might allow accretion of multiple lobes, as seen on Pan. Two possibly connected scenarios are promising for the growth of ornamental equatorial ridges: the moon migrates through the ring, narrowing its gap and facilitating accretion, or the moon's orbital eccentricity increases due to an orbital resonance with another moon, pushing it into its gap edges and facilitating accretion.
astrophysics
Given a DT-operator $Z$ whose Brown measure is radially symmetric and has a certain concentration property, it is shown that $Z$ is not spectral in the sense of Dunford. This is accomplished by showing that the angles between certain complementary Haagerup-Schultz projections of $Z$ are zero. New estimates on norms and traces of powers of algebra-valued circular operators over commutative C$^*$-algebras are also proved.
mathematics
We study the estimation of the average causal effect (ACE) on the survival scale where right-censoring exists and high-dimensional covariate information is available. We propose new estimators using regularized survival regression and survival random forests (SRF) to make adjustments with high-dimensional covariates to improve efficiency. We study the behavior of the general adjusted estimator when the adjustments are `risk consistent' and `jackknife compatible'. The theoretical results guarantee that the proposed estimators are asymptotically more efficient than the unadjusted one when using SRF for adjustment. The finite sample behavior of our methods is studied by simulation, and the results are in agreement with our theoretical results. We also illustrate our methods by analyzing real data from transplant research to identify the relative effectiveness of identical sibling donors compared to unrelated donors with adjustment for cytogenetic abnormalities.
statistics
We present a windowed technique to learn parsimonious time-varying autoregressive models from multivariate time series. This unsupervised method uncovers interpretable spatiotemporal structure in data via non-smooth and non-convex optimization. In each time window, we assume the data follow a linear model parameterized by a system matrix, and we model this stack of potentially different system matrices as a low-rank tensor. Because of its structure, the model is scalable to high-dimensional data and can easily incorporate priors such as smoothness over time. We find the components of the tensor using alternating minimization and prove that any stationary point of this algorithm is a local minimum. We demonstrate on a synthetic example that our method identifies the true rank of a switching linear system in the presence of noise. We illustrate our model's utility and superior scalability over extant methods when applied to several synthetic and real-world examples: two types of time-varying linear systems, worm behavior, sea surface temperature, and monkey brain datasets.
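A minimal Python sketch of the windowed first stage, assuming an AR(1) model per non-overlapping window fitted by ordinary least squares; the paper's actual contribution, modeling the resulting stack of system matrices as a low-rank tensor via alternating minimization (with optional smoothness priors), is not reproduced here.

```python
import numpy as np

def windowed_ar1_matrices(X, window):
    """Fit one AR(1) system matrix per time window by least squares.

    X: (T, d) multivariate time series with T > window.
    Returns an array of shape (n_windows, d, d); this stack of per-window
    system matrices is what would then be modeled as a low-rank tensor.
    """
    T, d = X.shape
    mats = []
    for start in range(0, T - window, window):
        seg = X[start:start + window + 1]
        past, future = seg[:-1], seg[1:]
        # future ~= past @ A.T, so solve the least-squares problem for A.T
        At, *_ = np.linalg.lstsq(past, future, rcond=None)
        mats.append(At.T)
    return np.stack(mats)
```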
statistics
The aim of this work is to find a progenitor for Canes Venatici I (CVn I), under the assumption that it is a dark matter free object that is undergoing tidal disruption. With a simple point mass integrator, we searched for an orbit for this galaxy using its current position, position angle, and radial velocity in the sky as constraints. The orbit that gives the best results has the pair of proper motions $\mu_\alpha$ = -0.099 mas yr$^{-1}$ and $\mu_\delta$ = -0.147 mas yr$^{-1}$, corresponding to an apogalactic distance of 242.79 kpc and a perigalactic distance of 20.01 kpc. Using a dark matter free progenitor that undergoes tidal disruption, the best-fitting model matches the final mass, surface brightness, effective radius, and velocity dispersion of CVn I simultaneously. This model has an initial Plummer mass of 2.47 x $10^7$ M$_\odot$ and a Plummer radius of 653 pc, producing a remnant after 10 Gyr with a final mass of 2.45 x 10$^5$ M$_\odot$, a central surface brightness of 26.9 mag arcsec$^{-2}$, an effective radius of 545.7 pc, and a velocity dispersion of 7.58 km s$^{-1}$. Furthermore, it matches the position angle and ellipticity of the projected object in the sky.
astrophysics
We use Gaia DR2 systemic proper motions of 45 satellite galaxies to constrain the mass of the Milky Way using the scale free mass estimator of Watkins et al. (2010). We first determine the anisotropy parameter $\beta$, and the tracer satellites' radial density index $\gamma$ to be $\beta$=$-0.67^{+0.45}_{-0.62}$ and $\gamma=2.11\pm0.23$. When we exclude possible former satellites of the Large Magellanic Cloud, the anisotropy changes to $\beta$=$-0.21^{+0.37}_{-0.51}$. We find that the index of the Milky Way's gravitational potential $\alpha$, which is dependent on the mass itself, is the parameter with the largest impact on the mass determination. Via comparison with cosmological simulations of Milky Way-like galaxies, we carried out a detailed analysis of the estimation of the observational uncertainties and their impact on the mass estimator. We found that the mass estimator is biased when applied naively to the satellites of simulated Milky Way halos. Correcting for this bias, we obtain for our Galaxy a mass of $0.58^{+0.15}_{-0.14}\times10^{12}$M$_\odot$ within 64 kpc, as computed from the inner half of our observational sample, and $1.43^{+0.35}_{-0.32}\times10^{12}$M$_\odot$ within 273 kpc, from the full sample; this latter value extrapolates to a virial mass of $M_\mathrm{vir\,\Delta=97}$=$1.51^{+0.45}_{-0.40} \times 10^{12}M_{\odot}$ corresponding to a virial radius of R$_\mathrm{vir}$=$308\pm29$ kpc. This value of the Milky Way mass lies in-between other mass estimates reported in the literature, from various different methods.
astrophysics
Entangled two-photon spectroscopy is expected to provide advantages compared with classical protocols. It is achieved by coherently controlling the spectral properties of energy-entangled photons. We present here an experimental setup that allows the spectral shaping of entangled photons with high resolution. We evaluate its performance by detecting sum-frequency generation in a non-linear crystal. The efficiency of the process is compared when it is performed with classical or entangled light.
quantum physics
Several particles are not observed directly, but only through their decay products. We consider the possibility that they might be fakeons, i.e. fake particles, which mediate interactions but are not asymptotic states. A crucial role in determining the true nature of a particle is played by the imaginary parts of the one-loop radiative corrections, which are affected in nontrivial ways by the presence of fakeons in the loop. The knowledge we have today is sufficient to prove that most particles that are not directly observed are true physical particles. However, in the case of the Higgs boson the possibility that it might be a fakeon remains open. The issue can be resolved by means of precision measurements in existing and future accelerators.
high energy physics phenomenology
A computational methodology is introduced to minimize infection opportunities for people suffering some degree of lockdown in response to a pandemic, such as the 2020 COVID-19 pandemic. Persons use their mobile phone or computational device to request trips to places of need or interest, indicating the desired place to visit and a rough time of day (`morning', `afternoon', `night' or `any time') when they would like to undertake these outings. An artificial intelligence methodology, a variant of Genetic Programming, studies all requests and responds with specific time allocations for such visits that minimize the overall risks of infection, hospitalization and death of people. A number of alternatives for this computation are presented, together with results of numerical experiments involving over 230 people of various ages and background health levels in over 1700 visits that take place over three consecutive days. A novel partial infection model is introduced to discuss these proof-of-concept solutions, which are compared to round-robin uninformed time scheduling for visits to places. The computations indicate vast improvements with far fewer dead and hospitalized. These results augur well for a more realistic study using accurate infection models with a view to testing deployment in the real world. The input that drives the infection model is the degree of infection by taxonomic class, such as the information that may arise from population testing for COVID-19 or, alternatively, any contamination model. The taxonomy class assumed in the computations is the likely level of infection by age group.
computer science
Testing the implementation of deep learning systems and their training routines is crucial to maintain a reliable code base. Modern software development employs processes, such as Continuous Integration, in which changes to the software are frequently integrated and tested. However, testing the training routines requires running them, and fully training a deep learning model can be resource-intensive when using the full data set. Using only a subset of the training data can improve test run time, but can also reduce its effectiveness. We evaluate different approaches to training set reduction and their ability to mimic the characteristics of model training with the original full data set. Our results underline the usefulness of training set reduction, especially in resource-constrained environments.
statistics
I introduce the consequences of neutrino mass and mixing in the dense environments of the early Universe and in astrophysical environments. Thermal and matter effects are reviewed in the context of a two-neutrino formalism, with methods of extension to multiple neutrinos. The observed large neutrino mixing angles place the strongest constraint on cosmological lepton (or neutrino) asymmetries, while new sterile neutrinos provide a wealth of possible new physics, including lepton asymmetry generation as well as candidates for dark matter. I also review cosmic microwave background and large-scale structure constraints on neutrino mass and energy density. Lastly, I review how X-ray astronomy has become a branch of neutrino physics in searches for keV-scale sterile neutrino dark matter radiative decay.
high energy physics phenomenology
Deconfined quantum criticality with emergent SO(5) symmetry in correlated systems remains elusive. Here, by performing numerically exact state-of-the-art quantum Monte Carlo (QMC) simulations, we show convincing evidence of deconfined quantum critical points (DQCP) between antiferromagnetic and valence-bond-solid phases in the extended Hubbard model of fermions on the honeycomb lattice with large system sizes. We further demonstrate evidence of the SO(5) symmetry at the DQCP. It is important to note that the critical exponents obtained by finite-size scaling at the DQCP here are consistent with the rigorous conformal bounds. Consequently, we establish a promising arena of DQCP with emergent SO(5) symmetry in interacting systems of fermions. Its possible experimental relevance in correlated systems of Dirac fermions is discussed briefly.
condensed matter
The exceptional Euclidean Jordan algebra of 3x3 Hermitian octonionic matrices appears to be tailor-made for the internal space of the three generations of quarks and leptons. The maximal rank subgroup of its automorphism group F4 that respects the lepton-quark splitting is the product of the colour SU(3) with an "electroweak" SU(3) factor. Its intersection with the automorphism group Spin(9) of the special Jordan subalgebra J, associated with a single generation of fundamental fermions, is precisely the symmetry group S(U(3)xU(2)) of the Standard Model. The Euclidean extension of J involves 32 primitive idempotents giving the states of the first-generation fermions. The triality relating left and right Spin(8) spinors to 8-vectors corresponds to the Yukawa coupling of the Higgs boson to quarks and leptons.
high energy physics theory
Astrophysical jets are launched from strongly magnetized systems that host an accretion disk surrounding a central object. Here we address the question of how to generate the accretion disk magnetization and field structure required for jet launching. We continue our work from Paper I (Mattia & Fendt 2020a), considering a non-scalar accretion disk mean-field $\alpha^2\Omega$-dynamo in the context of large-scale disk-jet simulations. We now investigate a disk dynamo that follows analytical solutions of mean-field dynamo theory, essentially based only on a single parameter, the Coriolis number. We thereby confirm the anisotropy of the dynamo tensor acting in accretion disks, which allows us to relate both the resistivity and the mean-field dynamo to the disk turbulence. Our new model recovers previous simulations that apply a purely radial initial field, while allowing for a more stable evolution for seed fields with a vertical component. We also present correlations between the strength of the disk dynamo coefficients and the dynamical parameters of the jet that is launched, and discuss their implications for observed jet quantities.
astrophysics
We investigate the effect of a four-dimensional Fourier transform on the formulation of the Navier-Stokes equation in Fourier space and the way the energy is transferred between Fourier components (modal interactions, commonly referred to as triad interactions in the classical 3-dimensional analysis). We specifically consider the effect of a spatially and temporally finite, digitally sampled velocity record on the modal interactions and find that Fourier components may interact within a broadened frequency window as compared to the usual integrals over infinite ranges. We also see how these finite velocity records have a significant effect on the efficiency of the different modal interactions and thereby on the shape and development of velocity power spectra. The observation that mismatches in the wavevector triadic interactions may be compensated for by a corresponding mismatch in the frequencies supports the empirically deduced delayed interactions reported in [Josserand \textit{et al.}, \textit{J. Stat. Phys.} (2017)]. Collectively, these results explain the occurrence and time development of the so-called Richardson cascade and also why deviations from the classical Richardson cascade may occur. Finally, we quote results from companion papers that deal with measurements and computer simulations of the time development of velocity power spectra in a turbulent jet flow into which a single Fourier mode (narrow-band oscillation) is injected.
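Schematically (standard background; the broadening described above is the point of the paper), the quadratic nonlinearity couples Fourier modes in triads whose wavevectors satisfy

$$\vec{k} = \vec{p} + \vec{q},$$

and, once a temporal Fourier transform over an infinite record is included, an analogous matching $\omega = \omega_{p} + \omega_{q}$ of the frequencies; for a finite record of duration $T$, this frequency constraint is relaxed to a window of width of order $1/T$.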
physics
Context. Coronal mass ejections (CMEs) on the Sun are the largest explosions in the Solar System that can drive powerful plasma shocks. The eruptions, shocks, and other processes associated to CMEs are efficient particle accelerators and the accelerated electrons in particular can produce radio bursts through the plasma emission mechanism. Aims. Coronal mass ejections and associated radio bursts have been well studied in cases where the CME originates close to the solar limb or within the frontside disc. Here, we study the radio emission associated with a CME eruption on the back side of the Sun on 22 July 2012. Methods. Using radio imaging from the Nan\c{c}ay Radioheliograph, spectroscopic data from the Nan\c{c}ay Decametric Array, and extreme-ultraviolet observations from the Solar Dynamics Observatory and Solar Terrestrial Relations Observatory spacecraft, we determine the nature of the observed radio emission as well as the location and propagation of the CME. Results. We show that the observed low-intensity radio emission corresponds to a type II radio burst or a short-duration type IV radio burst associated with a CME eruption due to breakout reconnection on the back side of the Sun, as suggested by the pre-eruptive magnetic field configuration. The radio emission consists of a large, extended structure, initially located ahead of the CME, that corresponds to various electron acceleration locations. Conclusions. The observations presented here are consistent with the breakout model of CME eruptions. The extended radio emission coincides with the location of the current sheet and quasi-separatrix boundary of the CME flux and the overlying helmet streamer and also with that of a large shock expected to form ahead of the CME in this configuration.
astrophysics
We study the prospects of a displaced-vertex search of sterile neutrinos at the Large Hadron Collider (LHC) in the framework of the neutrino-extended Standard Model Effective Field Theory ($\nu$SMEFT). The production and decay of sterile neutrinos can proceed via the standard active-sterile neutrino mixing in the weak current, as well as through higher-dimensional operators arising from decoupled new physics. If sterile neutrinos are long-lived, their decay can lead to displaced vertices which can be reconstructed. We investigate the search sensitivities for the ATLAS/CMS detector, the future far-detector experiments: AL3X, ANUBIS, CODEX-b, FASER, MATHUSLA, and MoEDAL-MAPP, and at the proposed fixed-target experiment SHiP. We study scenarios where sterile neutrinos are predominantly produced via rare charm and bottom meson decays through minimal mixing and/or dimension-six operators in the $\nu$SMEFT Lagrangian. We perform simulations to determine the potential reach of high-luminosity LHC experiments in probing the EFT operators, finding that these experiments are very competitive with other searches.
high energy physics phenomenology
For every integer k>0, we consider the regularised I-function of the family of del Pezzo surfaces of degree 8k+4 in P(2, 2k+1, 2k+1, 4k+1), first constructed by Johnson and Koll\'ar. We show that this function, which is of hypergeometric type, is a period of an explicit pencil of curves. Thus the pencil is a candidate LG mirror of the family of del Pezzo surfaces. The main feature of these surfaces, which makes the mirror construction especially interesting, is that the anticanonical system is empty: because of this, our mirrors are not covered by any other construction known to us. We discuss connections to the work of Beukers, Cohen and Mellit on hypergeometric functions.
mathematics
Identification of features is a critical task in microbiome studies that is complicated by the fact that microbial data are high dimensional and heterogeneous. Masked by the complexity of the data, the problem of separating signals from noise becomes challenging and troublesome. For instance, when performing differential abundance tests, multiple testing adjustments tend to be overconservative, as the probability of a type I error (false positive) increases dramatically with the large numbers of hypotheses. Moreover, the grouping effect of interest can be obscured by heterogeneity. These factors can incorrectly lead to the conclusion that there are no differences in the microbiome compositions. We translate and represent the problem of identifying differential features as a dynamic layout of separating the signal from its random background. We propose progressive permutation as a method to achieve this process and show converging patterns. More specifically, we progressively permute the grouping factor labels of the microbiome samples and perform multiple differential abundance tests in each scenario. We then compare the signal strength of the top features from the original data with their performance in permutations, and observe an apparent decreasing trend if these top features are true positives identified from the data. We have developed this into a user-friendly RShiny tool and R package, which consist of functions that can convey the overall association between the microbiome and the grouping factor, rank the robustness of the discovered microbes, and list the discoveries, their effect sizes, and individual abundances.
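A minimal conceptual sketch of the progressive permutation idea in Python (the authors provide an RShiny tool and R package; this is not their implementation). It assumes a binary grouping factor coded 0/1, a samples-by-features abundance matrix, and uses a simple t-statistic as a stand-in for a differential abundance test.

```python
import numpy as np
from scipy import stats

def progressive_permutation_curve(abundance, groups, n_steps=10, seed=0):
    """Max |t| across features as group labels are progressively shuffled.

    abundance: (n_samples, n_features) array; groups: array of 0/1 labels.
    A genuine group signal should decay towards the permuted background as
    more labels are shuffled, while pure noise gives a roughly flat curve.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(groups)
    n = len(labels)
    curve = []
    for step in range(n_steps + 1):
        perm = labels.copy()
        k = int(round(n * step / n_steps))      # how many labels to shuffle
        idx = rng.choice(n, size=k, replace=False)
        perm[idx] = rng.permutation(perm[idx])
        t, _ = stats.ttest_ind(abundance[perm == 1], abundance[perm == 0], axis=0)
        curve.append(np.nanmax(np.abs(t)))
    return curve
```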
statistics
We resum the leading logarithms $\alpha_s^n \ln^{2 n-1}(1-z)$, $n=1,2,\ldots$ near the kinematic threshold $z=Q^2/\hat{s}\to 1$ of the Drell-Yan process at next-to-leading power in the expansion in $(1-z)$. The derivation of this result employs soft-collinear effective theory in position space and the anomalous dimensions of subleading-power soft functions, which are computed. Expansion of the resummed result leads to the leading logarithms at fixed loop order, in agreement with exact results at NLO and NNLO and predictions from the physical evolution kernel at N$^3$LO and N$^4$LO, and to new results at the five-loop order and beyond.
high energy physics phenomenology
Describing the comprehensive evolutionary scenario for asteroids is key to explaining the various physical processes of the solar system. Bulk-scale carbonaceous chondrites (CCs) possibly record the primordial information associated with the formation processes of their parent bodies. In this study, we estimate the relative formation region of volatile-rich asteroids by utilizing the nucleosynthetic Cr isotopic variation ($^{54}$Cr/$^{52}$Cr) in bulk-scale CCs. Numerical calculations were conducted to track the temporal evolution of isotopically different (solar and presolar) dust and $^{54}$Cr/$^{52}$Cr values for mixed materials with disk radius. First, we found that isotopic heterogeneities in CC formation regions would be preserved with a weak turbulence setting that would increase the timescales of the advection and diffusion in the disk. Second, we assessed the effects of gaps formed by giant planets. Finally, the distance from the injected supernovae and Cr isotopic compositions of the presolar grains were investigated in terms of the estimated formation region of CCs. In our results, a plausible formation region of four types of CCs can be obtained with a supernova injected from approximately 2 pc away and typical Cr isotopic compositions of presolar grains. Among the parent bodies of CCs (i.e., volatile-rich asteroids), B-type asteroids formed in the outermost region, which is inconsistent with the present population, showing that D-type asteroids are generally located beyond most of the C-complex asteroids.
astrophysics
Quantum bits can be isolated to perform useful information-theoretic tasks, even though physical systems are fundamentally described by very high-dimensional operator algebras. This is because qubits can be consistently embedded into higher-dimensional Hilbert spaces. A similar embedding of classical probability distributions into quantum theory enables the emergence of classical physics via decoherence. Here, we ask which other probabilistic models can similarly be embedded into finite-dimensional quantum theory. We show that the embeddable models are exactly those that correspond to the Euclidean special Jordan algebras: quantum theory over the reals, the complex numbers, or the quaternions, and "spin factors" (qubits with more than three degrees of freedom), and direct sums thereof. Among those, only classical and standard quantum theory with superselection rules can arise from a physical decoherence map. Our results have significant consequences for some experimental tests of quantum theory, by clarifying how they could (or could not) falsify it. Furthermore, they imply that all unrestricted non-classical models must be contextual.
quantum physics
We present a method to obtain the average and the typical value of the number of critical points of the empirical risk landscape for generalized linear estimation problems and variants. This represents a substantial extension of previous applications of the Kac-Rice method since it allows one to analyze the critical points of high-dimensional non-Gaussian random functions. We obtain a rigorous explicit variational formula for the annealed complexity, which is the logarithm of the average number of critical points at a fixed value of the empirical risk. This result is simplified, and extended, using the non-rigorous replicated Kac-Rice method from theoretical physics. In this way we find an explicit variational formula for the quenched complexity, which is generally different from its annealed counterpart, and allows one to obtain the number of critical points for typical instances up to exponential accuracy.
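For reference, the Kac-Rice formula underlying both complexities expresses the expected number of critical points of a smooth random function $f$ on a domain $D$ (under suitable regularity conditions) as

$$\mathbb{E}\big[\#\{x \in D : \nabla f(x) = 0\}\big] = \int_{D} \mathbb{E}\Big[\big|\det \nabla^2 f(x)\big| \,\Big|\, \nabla f(x) = 0\Big]\; p_{\nabla f(x)}(0)\, \mathrm{d}x ,$$

where $p_{\nabla f(x)}$ is the density of the gradient at $x$; the paper's variational formulas for the annealed and quenched complexities build on this and are not reproduced here.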
statistics
In this paper, we study a novel problem: "automatic prescription recommendation for PD patients." To realize this goal, we first build a dataset by collecting 1) symptoms of PD patients, and 2) their prescription drugs provided by neurologists. Then, we build a novel computer-aided prescription model by learning the relation between observed symptoms and prescription drugs. Finally, for newly arriving patients, our prescription model can recommend (predict) suitable prescription drugs based on their observed symptoms. Methodologically, our proposed model, namely Prescription viA Learning lAtent Symptoms (PALAS), recommends prescriptions using a multi-modality representation of the data. In PALAS, a latent symptom space is learned to better model the relationship between symptoms and prescription drugs, as there is a large semantic gap between them. Moreover, we present an efficient alternating optimization method for PALAS. We evaluate our method using data collected from 136 PD patients at Nanjing Brain Hospital, which can be regarded as a large dataset in the PD research community. The experimental results demonstrate the effectiveness and clinical potential of our method in this recommendation task compared with other competing methods.
computer science
The increasing importance of solar power for electricity generation leads to an increasing demand for probabilistic forecasting of local and aggregated PV yields. In this paper we use an indirect modeling approach for hourly medium- to long-term local PV yields based on publicly available irradiation data. We suggest a time series model for global horizontal irradiation from which it is easy to generate an arbitrary number of scenarios, and which thus allows for multivariate probabilistic forecasts over arbitrary time horizons. In contrast to many simplified models that have been considered in the literature so far, it features several important stylized facts. Sharp time-dependent lower and upper bounds of global horizontal irradiation are estimated that improve on the often-used physical bounds. The parameters of the beta-distributed marginals of the transformed data are allowed to be time-dependent. A copula-based time series model is introduced for the hourly and daily dependence structure based on a simple graphical structure known from the theory of vine copulas. Non-Gaussian copulas such as the Gumbel and BB1 copulas are used, which allow for the important feature of so-called tail dependence. Evaluation methods like the continuous ranked probability score (CRPS), the energy score (ES) and the variogram score (VS) are used to compare the power of the model for multivariate probabilistic forecasting with that of other models used in the literature, showing that our model outperforms the others in many respects.
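For reference, one of the evaluation measures mentioned, the continuous ranked probability score, is defined for a predictive distribution function $F$ and an observation $y$ by

$$\mathrm{CRPS}(F, y) = \int_{-\infty}^{\infty} \big(F(x) - \mathbf{1}\{x \ge y\}\big)^{2}\, \mathrm{d}x ,$$

with lower values indicating better calibrated and sharper forecasts; the multivariate energy and variogram scores play the analogous role for scenario-based forecasts.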
statistics
Wireless sensor networks (WSNs) are promising solutions for large infrastructure monitoring because of their ease of installation, computing and communication capability, and cost-effectiveness. Long-term structural health monitoring (SHM), however, is still a challenge because it requires continuous data acquisition for the detection of random events such as earthquakes and structural collapse. To achieve long-term operation, it is necessary to reduce the power consumption of sensor nodes designed to capture random events and, thus, enhance structural safety. In this paper, we present an event-based sensing system design based on an ultra-low-power microcontroller with a programmable event-detection mechanism to allow continuous monitoring; the device is triggered by vibration, strain, or a timer and has a programmed threshold, resulting in ultra-low-power consumption of the sensor node. Furthermore, the proposed system can be easily reconfigured to any existing wireless sensor platform to enable ultra-low power operation. For validation, the proposed system was integrated with a commercial wireless platform to allow strain, acceleration, and time-based triggering with programmed thresholds and current consumptions of 7.43 and 0.85 mA in active and inactive modes, respectively.
electrical engineering and systems science