We analysed the deep archival Chandra observations of the high-temperature galaxy cluster Abell 2319 to investigate the prominent cold front in its core. The main sharp arc of the front shows wiggles, or variations of the radius of the density jump along the arc. At the southern end of the arc is a feature that resembles a Kelvin-Helmholtz (KH) eddy, beyond which the sharp front dissolves. These features suggest that KH instabilities develop at the front. Under this assumption, we can place an upper limit on the ICM viscosity that is several times below the isotropic Spitzer value. Other features include a split of the cold front at its northern edge, which may be another KH eddy. There is a small pocket of hot, less-dense gas inside the cold front, which may indicate a `hole' in the front's magnetic insulation layer that lets heat from the outer gas penetrate inside the front. Finally, a large concave brightness feature southwest of the cluster core may be caused by gasdynamic instabilities. We speculate that it could also be the inner boundary of a giant AGN bubble, similar to that in Ophiuchus. If the latter interpretation is supported by better radio data, this could be a remnant of another extremely powerful AGN outburst.
astrophysics
Text classification plays a vital role today, especially with the intensive use of social networking media. Recently, different architectures of convolutional neural networks have been used for text classification, in which one-hot vectors and word embeddings are the commonly used input encodings. This paper presents a new language-independent word encoding method for text classification. The proposed model converts raw text data to a low-level feature dimension with minimal or no preprocessing steps by using a new approach called binary unique number of word (BUNOW). BUNOW allows each unique word to have an integer ID in a dictionary that is represented as a k-dimensional vector of its binary equivalent. The output vector of this encoding is fed into a convolutional neural network (CNN) model for classification. Moreover, the proposed model reduces the number of neural network parameters and allows faster computation with fewer network layers: a word is the atomic representation of the document, as in word-level models, while memory consumption is lower than in character-level representations. The provided CNN model is able to work with other languages or multilingual text without any changes in the encoding method. The model outperforms character-level and very deep character-level CNN models in terms of accuracy, network parameters, and memory consumption; the results show a total classification accuracy of 91.99% (error 8.01%) on the AG's News dataset, compared to state-of-the-art methods with a total classification accuracy of 91.45% (error 8.55%), in addition to a reduction in the input feature vector and neural network parameters by 62% and 34%, respectively.
computer science
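The BUNOW encoding described above is simple enough to sketch. A minimal Python illustration, assuming IDs are assigned in order of first appearance (the paper's exact ID-assignment scheme may differ):

```python
import numpy as np

def bunow_encode(tokens, vocab, k):
    """Map each token to the k-bit binary vector of its dictionary ID."""
    out = np.zeros((len(tokens), k), dtype=np.float32)
    for i, tok in enumerate(tokens):
        word_id = vocab.setdefault(tok, len(vocab) + 1)  # assign next free ID
        bits = format(word_id, f"0{k}b")                 # k-bit binary string
        out[i] = [int(b) for b in bits]
    return out

vocab = {}
doc = "the cat sat on the mat".split()
X = bunow_encode(doc, vocab, k=20)   # 20 bits covers ~1M unique words
print(X.shape)  # (6, 20) -- one k-dimensional binary vector per word
```

The matrix X is what would be fed to the CNN; note how repeated words ("the") reuse the same ID, keeping the dictionary compact.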
We develop a resource theory of symmetric distinguishability, the fundamental objects of which are elementary quantum information sources, i.e., sources that emit one of two possible quantum states with given prior probabilities. Such a source can be represented by a classical-quantum state of a composite system $XA$, corresponding to an ensemble of two quantum states, with $X$ being classical and $A$ being quantum. We study the resource theory for two different classes of free operations: $(i)$ ${\rm{CPTP}}_A$, which consists of quantum channels acting only on $A$, and $(ii)$ conditional doubly stochastic (CDS) maps acting on $XA$. We introduce the notion of symmetric distinguishability of an elementary source and prove that it is a monotone under both these classes of free operations. We study the tasks of distillation and dilution of symmetric distinguishability, both in the one-shot and asymptotic regimes. We prove that in the asymptotic regime, the optimal rate of converting one elementary source to another is equal to the ratio of their quantum Chernoff divergences, under both these classes of free operations. This imparts a new operational interpretation to the quantum Chernoff divergence. We also obtain interesting operational interpretations of the Thompson metric, in the context of the dilution of symmetric distinguishability.
quantum physics
Monte Carlo tree search (MCTS) has received considerable interest due to its spectacular success in the difficult problem of computer Go and has also proved beneficial in a range of other domains. A major issue that has received little attention in the MCTS literature is the fact that, in most games, different actions can lead to the same state, which may cause a high degree of redundancy in the tree representation and unnecessary additional computational cost. We extend MCTS to single rooted directed acyclic graphs (SR-DAGs), and consider the Best Arm Identification (BAI) and the Best Leaf Identification (BLI) problems for an expanding SR-DAG of arbitrary depth. We propose algorithms that are $(\epsilon, \delta)$-correct in the fixed confidence setting, and prove an asymptotic upper bound on the sample complexity of our BAI algorithm. As a major application of our BLI algorithm, a novel approach to feature selection is proposed by representing the feature set space as an SR-DAG and repeatedly evaluating feature subsets until a candidate for the best leaf is returned; a proof of concept is shown on benchmark data sets.
computer science
Quantum Electrodynamics (QED) is considered the most accurate theory in the history of science. However, this precision is limited to a single experimental value: the anomalous magnetic moment of the electron (g-factor). The calculation of the electron g-factor was carried out in 1950 by Karplus and Kroll. Seven years later, Petermann detected and corrected a serious error in the calculation of a Feynman diagram; however, neither the original calculation nor the subsequent correction was ever published. Therefore, the entire prestige of QED depends on the calculation of a single Feynman diagram (IIc) that has never been published and cannot be independently verified.
physics
Recent advances have shown that satellite communication (SatCom) will be an important enabler for next generation terrestrial networks, as it can provide numerous advantages, including global coverage, high speed connectivity, reliability, and instant deployment. An ideal alternative to radio frequency (RF) satellites is their free-space optical (FSO) counterpart. FSO or laser SatCom can mitigate the problems occurring in RF SatCom while providing important advantages, including reduced mass, lower power consumption, better throughput, and lower costs. Furthermore, laser SatCom is inherently resistant to jamming, interception, and interference. Owing to these benefits, this paper focuses on downlink laser SatCom, where the best ground station (GS) is selected among numerous candidates to provide reliable connectivity and maximum site diversity. To quantify the performance of the proposed scheme, we derive closed-form outage probability and ergodic capacity expressions for two different practical GS deployment scenarios. Furthermore, an asymptotic analysis is conducted to obtain the overall site diversity gain, and aperture averaging is studied to illustrate the impact of aperture diameter on the overall performance. Finally, important design guidelines that can be useful in the design of practical laser SatComs are outlined.
electrical engineering and systems science
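The site-diversity principle above (selecting the best of several ground stations) can be illustrated with a toy calculation, assuming independent per-station outages, which is a simplification of the paper's actual channel model:

```python
import numpy as np

# With best-GS selection, the link is in outage only if all candidate
# ground stations are simultaneously in outage (independence assumed).
p_gs = np.array([0.12, 0.20, 0.35])   # hypothetical per-station outage probs
print(np.prod(p_gs))                  # selection-diversity outage ~ 0.0084
```

The product structure is what produces the diversity gain: adding a station multiplies the outage probability by a factor below one.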
In this paper, we are interested in automata over infinite words and infinite-duration games, which we view as general transition systems. We study transformations of systems using a Muller condition into ones using a parity condition, extending Zielonka's construction. We introduce the alternating cycle decomposition transformation, and we prove a strong optimality result: for any given deterministic Muller automaton, the obtained parity automaton is minimal both in size and in number of priorities among those automata admitting a morphism into the original Muller automaton. We give two applications. The first is an improvement in the process of determinisation of B\"uchi automata into parity automata by Piterman and Schewe. The second is to present characterisations of the possibility of relabelling automata with different acceptance conditions.
computer science
This paper reports on how the topology of three-dimensional (3D) porous graphene with tunable pore sizes, which preserves the 2D graphene system of Dirac quasiparticles, affects its electrical properties. This 3D architecture is characterized by the intrinsic curvature of smoothly interconnected graphene sheets without edges, whose structures and properties can be controlled through the pore size. The impact of pore size on the electrical transport properties was investigated through magnetoresistance measurements. We observed that 3D graphene with small pores exhibits a transition to weak localization with decreasing temperature. A comparison with the theory based on the quantum correction clarified that an increase in the intrinsic curvature significantly enhances intervalley scattering, which breaks the chirality. This increase in the intervalley scattering rate originates from the unique topological effects of 3D graphene, i.e., the topological defects required to form the high curvature and the resulting chirality mixing. We also discuss the scattering processes due to microscopic chemical bonding states, as found by spatially resolved X-ray photoemission spectral imaging, to support the validity of our findings.
condensed matter
Informal romanization is an idiosyncratic process used by humans in informal digital communication to encode non-Latin script languages into Latin character sets found on common keyboards. Character substitution choices differ between users but have been shown to be governed by the same main principles observed across a variety of languages---namely, character pairs are often associated through phonetic or visual similarity. We propose a noisy-channel WFST cascade model for deciphering the original non-Latin script from observed romanized text in an unsupervised fashion. We train our model directly on romanized data from two languages: Egyptian Arabic and Russian. We demonstrate that adding inductive bias through phonetic and visual priors on character mappings substantially improves the model's performance on both languages, yielding results much closer to the supervised skyline. Finally, we introduce a new dataset of romanized Russian, collected from a Russian social network website and partially annotated for our experiments.
computer science
Mediation analysis aims at disentangling the effects of a treatment on an outcome through alternative causal mechanisms and has become a popular practice in biomedical and social science applications. The causal framework based on counterfactuals is currently the standard approach to mediation, with important methodological advances introduced in the literature in the last decade, especially for simple mediation, that is, with one mediator at a time. Among a variety of alternative approaches, K. Imai et al. showed theoretical results and developed an R package to deal with simple mediation as well as with multiple mediation involving multiple mediators conditionally independent given the treatment and baseline covariates. This approach does not allow one to consider the often encountered situation in which an unobserved common cause induces a spurious correlation between the mediators. In this context, which we refer to as mediation with uncausally related mediators, we show that, under appropriate hypotheses, the natural direct and joint indirect effects are non-parametrically identifiable. Moreover, we adopt the quasi-Bayesian algorithm developed by Imai et al. and propose a procedure based on the simulation of counterfactual distributions to estimate not only the direct and joint indirect effects but also the indirect effects through individual mediators. We study the properties of the proposed estimators through simulations. As an illustration, we apply our method to a real data set from a large cohort to assess the effect of hormone replacement treatment on breast cancer risk through three mediators, namely dense mammographic area, nondense area and body mass index.
statistics
Our ability to control a whole network can be achieved via a small set of driver nodes. While the minimum number of driver nodes needed for control is fixed in a given network, there are multiple choices for the driver node set. A quantity used to investigate this multiplicity is the fraction of redundant nodes in the network, referring to nodes that do not need any external control. Previous work has discovered a bimodality feature characterized by a bifurcation diagram: networks with the same statistical properties are equally likely to have either a large or a small fraction of redundant nodes. Here we find that this feature is rooted in the symmetry of the directed network, where both the degree distribution and the degree correlation can play a role. The in-in and out-out degree correlations suppress the bifurcation, as networks with such degree correlations are asymmetric under network transpose. The out-in and in-out degree correlations do not change the network symmetry, hence the bimodality feature is preserved. However, the out-in degree correlation will change the critical average degree needed for the bifurcation. Hence, by fixing the average degree of networks and tuning the out-in degree correlation alone, we can observe a similar bifurcation diagram. We conduct analytical analyses that adequately explain the emergence of bimodality caused by the out-in degree correlation. We also propose a quantity, taking both degree distribution and degree correlation into consideration, to predict whether a network would be at the upper or lower branch of the bifurcation. As most real networks are known not to be neutral, our results extend our understanding of the controllability of complex networks.
physics
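For context, the minimum driver node set referenced above is classically obtained from a maximum matching of the directed network (Liu et al. 2011): nodes whose "in" copy is unmatched need external control. A sketch using networkx, assuming an unweighted directed network:

```python
import networkx as nx

def driver_nodes(edges, nodes):
    """Minimum driver node set via bipartite maximum matching."""
    B = nx.Graph()
    B.add_nodes_from((f"out_{u}" for u in nodes), bipartite=0)
    B.add_nodes_from((f"in_{v}" for v in nodes), bipartite=1)
    B.add_edges_from((f"out_{u}", f"in_{v}") for u, v in edges)
    top = {f"out_{u}" for u in nodes}
    match = nx.bipartite.maximum_matching(B, top_nodes=top)
    matched_in = {v for v in match if v.startswith("in_")}
    # unmatched "in" copies correspond to nodes needing external control
    return [v for v in nodes if f"in_{v}" not in matched_in]

nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (2, 4)]   # node 2 drives both 3 and 4
print(driver_nodes(edges, nodes))  # [1, 4]: two drivers are needed
```

The redundant nodes discussed in the abstract are those that appear in no minimum driver set over all maximum matchings.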
The eigenstate decoherence hypothesis (EDH) asserts that each individual eigenstate of a large closed system is locally classical-like. We test this hypothesis for a heavy particle interacting with a gas of light particles. This system is paradigmatic for studies of the quantum-to-classical transition: the reduced state of the heavy particle is widely believed to rapidly lose any nonclassical features due to the interaction with the gas. Yet, we find numerical evidence that the EDH is violated: certain eigenstates of this model are manifestly non-classical. Only the weak version of the EDH, referring to the majority (instead of the totality) of eigenstates, holds.
quantum physics
Let $h \geq 2$ and let ${ \mathcal A} = (A_1,\ldots, A_h)$ be an $h$-tuple of sets of integers. For nonzero integers $c_1,\ldots, c_h$, consider the linear form $\varphi = c_1 x_1 + c_2x_2 + \cdots + c_h x_h$. The \emph{representation function} $R_{ \mathcal{A},\varphi}(n)$ counts the number of $h$-tuples $(a_1,\ldots, a_h) \in A_1 \times \cdots \times A_h$ such that $\varphi(a_1,\ldots, a_h) = n$. The $h$-tuple $\mathcal{A}$ is a \emph{$\varphi$-Sidon system of multiplicity $g$} if $R_{\mathcal A,\varphi}(n) \leq g$ for all $n \in \mathbf{Z}$. For every positive integer $g$, let $F_{\varphi,g}(n)$ denote the largest integer $q$ such that there exists a $\varphi$-Sidon system $\mathcal {A} = (A_1,\ldots, A_h)$ of multiplicity $g$ with \[ A_i \subseteq [1,n] \qquad \text{and} \qquad |A_i| = q \] for all $i =1,\ldots, h$. It is proved that, for all linear forms $\varphi$, \[ \limsup_{n\rightarrow \infty} \frac{F_{\varphi,g}(n)}{n^{1/h}} < \infty \] and, for linear forms $\varphi$ whose coefficients $c_i$ satisfy a certain divisibility condition, \[ \liminf_{n\rightarrow\infty} \frac{F_{\varphi,h!}(n)}{n^{1/h}} \geq 1. \]
mathematics
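The representation function $R_{\mathcal{A},\varphi}(n)$ defined above can be checked by brute force for small sets; a sketch for the illustrative choice $\varphi = x_1 + 2x_2$:

```python
from itertools import product
from collections import Counter

def representation_counts(sets, coeffs):
    """R_{A,phi}(n): number of tuples with sum of c_i * a_i equal to n."""
    counts = Counter()
    for tup in product(*sets):
        counts[sum(c * a for c, a in zip(coeffs, tup))] += 1
    return counts

# Is (A1, A2) a phi-Sidon system of multiplicity 1 for phi = x1 + 2*x2?
A1, A2 = [1, 2, 5], [1, 3, 7]
R = representation_counts([A1, A2], [1, 2])
g = max(R.values())
print(g, g <= 1)   # here g = 2 (7 = 1 + 2*3 = 5 + 2*1), so not multiplicity 1
```

$F_{\varphi,g}(n)$ then asks how large such sets inside $[1,n]$ can be while keeping $g$ bounded.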
A growing proportion of human interactions are digitized on social media platforms and subjected to algorithmic decision-making, and it has become increasingly important to ensure fair treatment from these algorithms. In this work, we investigate gender bias in collaborative-filtering recommender systems trained on social media data. We develop neural fair collaborative filtering (NFCF), a practical framework for mitigating gender bias in recommending sensitive items (e.g. jobs, academic concentrations, or courses of study) using a pre-training and fine-tuning approach to neural collaborative filtering, augmented with bias correction techniques. We show the utility of our methods for gender de-biased career and college major recommendations on the MovieLens dataset and a Facebook dataset, respectively, and achieve better performance and fairer behavior than several state-of-the-art models.
computer science
We introduce a new set of boundary conditions for three-dimensional higher spin gravity with gauge group $SL(3,\mathbb{R})\times SL(3,\mathbb{R})$, where its dynamics at the boundary is described by the members of the modified Boussinesq integrable hierarchy. In the asymptotic region the gauge fields are written in the diagonal gauge, where the excitations go along the generators of the Cartan subalgebra of $sl(3,\mathbb{R})\oplus sl(3,\mathbb{R})$. We show that the entire integrable structure of the modified Boussinesq hierarchy, i.e., the phase space, the Poisson brackets and the infinite number of commuting conserved charges, are obtained from the asymptotic structure of the higher spin theory. Furthermore, its known relation with the Boussinesq hierarchy is inherited from our analysis once the asymptotic conditions are re-expressed in the highest weight gauge. Hence, the Miura map is recovered from a purely geometric construction in the bulk. Black holes that fit within our boundary conditions, the Hamiltonian reduction at the boundary, and the generalization to higher spin gravity with gauge group $SL(N,\mathbb{R})\times SL(N,\mathbb{R})$ are also discussed.
high energy physics theory
We present a robot kinematic calibration method that combines complementary calibration approaches: self-contact, planar constraints, and self-observation. We analyze the estimation of the end effector parameters, joint offsets of the manipulators, calibration of the complete kinematic chain (DH parameters), and we compare our results with ground truth measurements provided by a laser tracker. Our main findings are: (1) When applying the complementary calibration approaches in isolation, the self-contact approach yields the best and most stable results. (2) All combinations of more than one approach were always superior to using any single approach in terms of calibration errors as well as the observability of the estimated parameters. Combining more approaches delivers robot parameters that better generalize to the parts of workspace not used for the calibration. (3) Sequential calibration, i.e.\ calibrating cameras first and then robot kinematics, is more effective than simultaneous calibration of all parameters. In real experiments, we employ two industrial manipulators mounted on a common base. The manipulators are equipped with force/torque sensors at their wrists, with two cameras attached to the robot base, and with special end effectors with fiducial markers. We collect a new comprehensive dataset for robot kinematic calibration and make it publicly available. The dataset and its analysis provide quantitative and qualitative insights that go beyond the specific manipulators used in this work and are applicable to self-contained robot kinematic calibration in general.
computer science
Many causal processes have spatial and temporal dimensions. Yet the classic causal inference framework is not directly applicable when the treatment and outcome variables are generated by spatio-temporal processes with an infinite number of possible event locations at each point in time. We take up the challenge of extending the potential outcomes framework to these settings by formulating the treatment point process as a stochastic intervention. Our causal estimands include the expected number of outcome events in a specified area of interest under a particular stochastic treatment assignment strategy. We develop an estimation technique that applies the inverse probability of treatment weighting method to spatially-smoothed outcome surfaces. We demonstrate that the proposed estimator is consistent and asymptotically normal as the number of time periods approaches infinity. A primary advantage of our methodology is its ability to avoid structural assumptions about spatial spillover and temporal carryover effects. We use the proposed methods to estimate the effects of American airstrikes on insurgent violence in Iraq (February 2007-July 2008). We find that increasing the average number of daily airstrikes for up to one month increases insurgent attacks across Iraq and within Baghdad. We also find evidence that airstrikes can displace attacks from Baghdad to new locations up to 400 kilometers away.
statistics
Despite the large number of patients in Electronic Health Records (EHRs), the subset of usable data for modeling outcomes of specific phenotypes is often imbalanced and of modest size. This can be attributed to the uneven coverage of medical concepts in EHRs. In this paper, we propose OMTL, an Ontology-driven Multi-Task Learning framework, that is designed to overcome such data limitations. The key contribution of our work is the effective use of knowledge from a predefined, well-established medical relationship graph (ontology) to construct a novel deep learning network architecture that mirrors this ontology. This enables common representations to be shared across related phenotypes, which was found to improve learning performance. The proposed OMTL naturally allows for multi-task learning of different phenotypes on distinct predictive tasks. These phenotypes are tied together by their semantic distance according to the external medical ontology. Using the publicly available MIMIC-III database, we evaluate OMTL and demonstrate its efficacy on several real patient outcome predictions over state-of-the-art multi-task learning schemes.
computer science
The study of complex networks with multi-weights has been a hot topic recently. For a network with a single weight, previous studies have shown that coupling can promote synchronization. But for complex networks with multi-weights, there has been no rigorous analysis showing that synchronization can be reached faster. In this paper, the complex network is allowed to be directed, which makes the synchronization analysis difficult for multiple couplings. By virtue of the normalized left eigenvectors (NLEVec) corresponding to the zero eigenvalue of the coupling matrices, we prove that if the Chebyshev distance between the NLEVec is less than some value, which we define as the allowable deviation bound, then synchronization and control will be realized with sufficiently large coupling strengths, i.e., all coupling matrices do accelerate synchronization. Moreover, adaptive rules are also designed for the coupling strength.
electrical engineering and systems science
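The quantity driving the result above, the Chebyshev distance between normalized left eigenvectors for the zero eigenvalue of the coupling matrices, can be computed directly; a sketch on two toy 3-node Laplacian-like coupling matrices:

```python
import numpy as np

def nlevec(L):
    """Normalized left eigenvector for the eigenvalue of L closest to zero."""
    w, V = np.linalg.eig(L.T)                # columns of V: left eigvecs of L
    v = np.real(V[:, np.argmin(np.abs(w))])
    return v / v.sum()                       # normalize entries to sum to 1

L1 = np.array([[2., -1., -1.], [-1., 1., 0.], [0., -1., 1.]])
L2 = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
xi1, xi2 = nlevec(L1), nlevec(L2)
print(np.max(np.abs(xi1 - xi2)))             # Chebyshev distance, here 1/6
```

In the paper's terms, synchronization under all couplings is guaranteed when this distance stays below the allowable deviation bound.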
We study holographic subregion volume complexity for a line segment in the AdS$_3$ Vaidya geometry. On the field theory side, this gravity background corresponds to a sudden quench which leads to the thermalization of the strongly-coupled dual conformal field theory. We find the time-dependent extremal volume surface by numerically solving a partial differential equation with boundary condition given by the Hubeny-Rangamani-Takayanagi surface, and we use this solution to compute holographic subregion complexity as a function of time. Approximate analytical expressions valid at early and at late times are derived.
high energy physics theory
Tensor network operators, such as the matrix product operator (MPO) and the projected entangled-pair operator (PEPO), can provide efficient representations of certain linear operators in high dimensional spaces. This paper focuses on the efficient representation of tensor network operators with long-range pairwise interactions such as the Coulomb interaction. For MPOs, we find that all existing efficient methods exploit a peculiar "upper-triangular low-rank" (UTLR) property, i.e. the upper-triangular part of the matrix can be well approximated by a low-rank matrix, while the matrix itself can be full-rank. This allows us to convert the problem of finding the efficient MPO representation into a matrix completion problem. We develop a modified incremental singular value decomposition (ISVD) method to solve this ill-conditioned matrix completion problem. This algorithm yields an MPO representation equivalent to that developed in [Stoudenmire and White, Phys. Rev. Lett. 2017]. In order to efficiently treat more general tensor network operators, we develop another strategy for compressing tensor network operators based on hierarchical low-rank matrix formats, such as the hierarchical off-diagonal low-rank (HODLR) format and the $\mathcal{H}$-matrix format. Though the prefactor in the complexity is larger, the advantage of using the hierarchical low-rank matrix format is that it is applicable to both MPOs and PEPOs. For the Coulomb interaction, the operator can be represented by a linear combination of $\mathcal{O}(\log(N)\log(N/\epsilon))$ MPOs/PEPOs, each with a constant bond dimension, where $N$ is the system size and $\epsilon$ is the accuracy of the low-rank truncation. Neither the modified ISVD nor the hierarchical low-rank algorithm assumes that the long-range interaction takes a translation-invariant form.
physics
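The "upper-triangular low-rank" (UTLR) property described above is easy to observe numerically; a sketch for a Coulomb-like kernel $1/|i-j|$ (sizes and tolerances are illustrative):

```python
import numpy as np

# Coulomb-like pairwise interaction v(i, j) = 1 / |i - j| on N sites
N = 200
i, j = np.indices((N, N))
V = np.where(i != j, 1.0 / np.maximum(np.abs(i - j), 1), 0.0)

# The full matrix is essentially full-rank...
print(np.linalg.matrix_rank(V))
# ...but its strictly upper-triangular part is numerically low-rank (UTLR):
U = np.triu(V, k=1)
s = np.linalg.svd(U, compute_uv=False)
print(np.sum(s > 1e-8 * s[0]))   # only a handful of significant singular values
```

It is this rapid singular-value decay of the triangular part, not of the full matrix, that the MPO constructions exploit.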
The relation between the lepton and quark mixings, $U_{PMNS} \approx V_{CKM}^{\dagger} U_X$, where $U_X$ is the BM or TBM mixing matrix, implies quark-lepton (Grand) unification and the existence of a hidden sector with certain flavor symmetries. The latter couples to the visible sector via the neutrino portal and is responsible for $U_X$, as well as for the smallness of the neutrino mass. GUT ensures the appearance of $\sim V_{CKM}$ in the lepton mixing. General features of this scenario (inverse or double seesaw, screening of the Dirac structures, basis-fixing symmetry) are described and two realizations are presented. The high-energy realization is based on $SO(10)$ GUT with the hidden sector at the Planck scale. The low-energy realization includes $L-R$ symmetry at the 100 TeV scale and the hidden sector at the keV - MeV scale.
high energy physics phenomenology
The worldsheet theory of the string, consisting of 26 free scalar fields in Minkowski space, is a two-dimensional conformal field theory. If we represent the two-dimensional conformal field theory by an elliptic curve and the partition function of the string theory by a modular form, then the relation between the conformal field theory and the string theory can be represented as the Taniyama-Shimura conjecture. Moreover, it can also be generalized to $F$-theory.
high energy physics theory
The computation of strongly correlated quantum systems is challenging because of its potentially exponential scaling in the number of electron configurations. Variational calculation of the two-electron reduced density matrix (2-RDM) without the many-electron wave function exploits the pairwise nature of the electronic Coulomb interaction to compute a lower bound on the ground-state energy with polynomial computational scaling. Recently, a dual-cone formulation of the variational 2-RDM calculation was shown to generate the ground-state energy, albeit not the 2-RDM, at a substantially reduced computational cost, especially for higher $N$-representability conditions such as the T2 constraint. Here we generalize the dual-cone variational 2-RDM method to compute not only the ground-state energy but also the 2-RDM. The central result is that we can compute the 2-RDM from a generalization of the Hellmann-Feynman theorem. Specifically, we prove that in the Lagrangian formulation of the dual-cone optimization the 2-RDM is the Lagrange multiplier. We apply the method to computing the energies and properties of strongly correlated electrons -- including atomic charges, electron densities, dipole moments, and orbital occupations -- in an illustrative hydrogen chain and the nitrogen-fixation catalyst FeMoco. The dual variational computation of the 2-RDM with T2 or higher $N$-representability conditions provides a polynomially scaling approach to strongly correlated molecules and materials with significant applications in atomic and molecular and condensed-matter chemistry and physics.
quantum physics
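The Hellmann-Feynman mechanism invoked above, by which a Lagrange multiplier yields a derivative of the energy, can be verified on a toy Hermitian matrix; a sketch checking $dE_0/dt = \langle\psi_0|V|\psi_0\rangle$ for $H(t) = H_0 + tV$ (a generic finite-dimensional stand-in, not the paper's semidefinite program):

```python
import numpy as np

rng = np.random.default_rng(2)
H0 = rng.normal(size=(6, 6)); H0 = (H0 + H0.T) / 2   # symmetric "Hamiltonian"
V = rng.normal(size=(6, 6)); V = (V + V.T) / 2       # symmetric perturbation

def e0(t):
    return np.linalg.eigvalsh(H0 + t * V)[0]         # ground-state energy

w, U = np.linalg.eigh(H0)
psi0 = U[:, 0]                                       # ground state at t = 0
print((e0(1e-6) - e0(-1e-6)) / 2e-6)                 # finite-difference dE0/dt
print(psi0 @ V @ psi0)                               # Hellmann-Feynman value
```

The two printed numbers agree to high precision; in the dual-cone formulation the analogous derivative with respect to the Hamiltonian coefficients is what identifies the Lagrange multiplier with the 2-RDM.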
Quantum simulation has shown great potential in many fields due to its powerful computational capabilities. However, limited fidelity can lead to a severe limitation on the number of gate operations, which requires us to find optimized algorithms. Trotter decomposition and higher-order Trotter decompositions are widely used in quantum simulations. We find that they can be significantly improved by the force-gradient integrator used in lattice QCD. Therefore, force-gradient decomposition shows great promise for future applications of quantum simulation.
quantum physics
When first principle models cannot be derived due to the complexity of the real system, data-driven methods allow us to build models from system observations. As these models are employed in learning-based control, the quality of the data plays a crucial role for the performance of the resulting control law. Nevertheless, there hardly exist measures for assessing training data sets, and the impact of the distribution of the data on the closed-loop system properties is largely unknown. This paper derives - based on Gaussian process models - an analytical relationship between the density of the training data and the control performance. We formulate a quality measure for the data set, which we refer to as $\rho$-gap, and derive the ultimate bound for the tracking error under consideration of the model uncertainty. We show how the $\rho$-gap can be applied to a feedback linearizing control law and provide numerical illustrations for our approach.
electrical engineering and systems science
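While the paper's $\rho$-gap measure is not reproduced here, the underlying link between training-data density and model uncertainty can be illustrated with a Gaussian process posterior variance, which grows away from the data; a hypothetical sketch with an RBF kernel:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# training inputs clustered in [0, 1]; test inputs span [0, 3]
Xtr = np.random.default_rng(0).uniform(0, 1, 30)
Xte = np.linspace(0, 3, 7)
K = rbf(Xtr, Xtr) + 1e-4 * np.eye(len(Xtr))          # kernel + noise jitter
k_star = rbf(Xte, Xtr)
# posterior variance: high where the training-data density is low
var = 1.0 - np.sum(k_star @ np.linalg.inv(K) * k_star, axis=1)
print(np.round(var, 3))   # grows with distance from the data
```

In the paper, this kind of uncertainty growth in sparsely covered regions is what enters the ultimate bound on the tracking error of the learning-based controller.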
Holographic CFTs and holographic RG flows on space-time manifolds which are $d$-dimensional products of spheres are investigated. On the gravity side, this corresponds to Einstein-dilaton gravity on an asymptotically $AdS_{d+1}$ geometry, foliated by a product of spheres. Focusing on holographic theories on $S^2\times S^2$, we show that the only regular five-dimensional bulk geometries have an IR endpoint where one of the spheres shrinks to zero size, while the other remains finite. In the $Z_2$-symmetric limit, where the two spheres have the same UV radii, we show the existence of an infinite discrete set of regular solutions, satisfying an Efimov-like discrete scaling. The $Z_2$-symmetric solution in which both spheres shrink to zero at the endpoint is singular, whereas the solution with lowest free energy is regular and breaks $Z_2$ symmetry spontaneously. We explain this phenomenon analytically by identifying an unstable mode in the bulk around the would-be $Z_2$-symmetric solution. The space of theories has two branches that are connected by a conifold transition in the bulk, which is regular and corresponds to a quantum first-order transition. Our results also imply that $AdS_5$ does not admit a regular slicing by $S^2\times S^2$.
high energy physics theory
In this paper, we propose a method to calculate the exact Taylor series of the scattering matrix in general multiterminal tight-binding systems to arbitrary order $N$, which allows us to find the Taylor expansion of the Landauer conductance in mesoscopic systems. The method is based on the recursive scattering matrix method (RSMM), which permits us to find the scattering matrix of a system from the scattering matrices of its subsystems. Following ideas of automatic differentiation, we determine expressions for the sum, product, inverse, and diagonalization of a matrix Taylor expansion, and use them within the RSMM to find Taylor series of scattering matrices. The method is validated by obtaining the transmission function of atomic chains with site defects and of graphene nanoconstrictions. Finally, an analysis of the convergence radius and error estimates of these Taylor expansions is presented.
condensed matter
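The matrix Taylor series arithmetic mentioned above (sums, products, and inverses computed order by order, in the spirit of automatic differentiation) can be sketched in a few lines of numpy; the inverse recursion assumes the zeroth-order coefficient is invertible:

```python
import numpy as np

def series_mul(A, B):
    """Cauchy product of two matrix Taylor series (lists of coefficients)."""
    N = min(len(A), len(B))
    return [sum(A[j] @ B[k - j] for j in range(k + 1)) for k in range(N)]

def series_inv(A):
    """Coefficients of A(x)^(-1), assuming A[0] is invertible."""
    inv0 = np.linalg.inv(A[0])
    C = [inv0]
    for k in range(1, len(A)):
        C.append(-inv0 @ sum(A[j] @ C[k - j] for j in range(1, k + 1)))
    return C

A = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), 0.5 * np.eye(2)]
C = series_inv(A)
check = series_mul(A, C)            # should equal I, 0, 0 up to order 2
print([np.round(m, 10) for m in check])
```

Applying such rules inside the recursive scattering matrix composition is what propagates the expansion to arbitrary order.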
In disruption-tolerant networking (DTN), data is transmitted in a store-carry-forward fashion from network node to network node. In this paper, we present an open source DTN implementation, called DTN7, of the recently released Bundle Protocol Version 7 (draft version 13). DTN7 is written in Go and provides features like memory safety and concurrent execution. With its modular design and interchangeable components, DTN7 facilitates DTN research and application development. Furthermore, we present results of a comparative experimental evaluation of DTN7 and other DTN systems including Serval, IBR-DTN, and Forban. Our results indicate that DTN7 is a flexible and efficient open-source multi-platform implementation of the most recent Bundle Protocol Version 7.
computer science
We propose a statistical model to understand people's perception of their carbon footprint. Driven by the observation that few people think of CO2 impact in absolute terms, we design a system to probe people's perception from simple pairwise comparisons of the relative carbon footprint of their actions. The formulation of the model enables us to take an active-learning approach to selecting the pairs of actions that are maximally informative about the model parameters. We define a set of 18 actions and collect a dataset of 2183 comparisons from 176 users on a university campus. The early results reveal promising directions to improve climate communication and enhance climate mitigation.
statistics
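A minimal model for learning latent scores from pairwise comparisons, as in the system above, is Bradley-Terry; this sketch fits log-footprint scores by gradient ascent (the paper's actual model and active-learning pair selection are more elaborate):

```python
import numpy as np

# hypothetical data: (a, b) means action a was judged to have the larger
# carbon footprint than action b
comparisons = [(0, 1), (0, 2), (2, 1), (0, 1), (2, 1)]
n_actions = 3
w = np.zeros(n_actions)            # latent log-footprint scores

for _ in range(500):               # gradient ascent on the BT log-likelihood
    grad = np.zeros(n_actions)
    for a, b in comparisons:
        p = 1.0 / (1.0 + np.exp(w[b] - w[a]))   # P(a beats b)
        grad[a] += 1 - p
        grad[b] -= 1 - p
    w += 0.1 * grad
print(np.round(w - w.mean(), 2))   # relative perceived footprints
```

Active learning then amounts to asking next about the pair whose outcome is most informative about w.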
We study $\widehat{\text{CGHS}}$ gravity, a variant of the matterless Callan-Giddings-Harvey-Strominger model. We show that it describes a universal sector of the near horizon perturbations of non-extremal black holes in higher dimensions. In many respects this theory can be viewed as a flat space analog of Jackiw-Teitelboim gravity. The result for the Euclidean path integral implies that $\widehat{\text{CGHS}}$ is dual to a Gaussian ensemble that we describe in detail. The simplicity of this theory allows us to compute exact quantities such as the quenched free energy and provides a useful playground to study baby universes, averages and factorization. We also give evidence for the existence of a non-perturbative completion in terms of a matrix model. Finally, flat wormhole solutions are discussed.
high energy physics theory
A Horizontal Visibility Graph (HVG) is a simple graph extracted from an ordered sequence of real values, and this mapping has been used to provide a combinatorial encryption of time series for the task of performing network based time series analysis. While some properties of the spectrum of these graphs --such as the largest eigenvalue of the adjacency matrix-- have been routinely used as measures to characterise time series complexity, a theoretic understanding of such properties is lacking. In this work we explore some algebraic and spectral properties of these graphs associated to periodic and chaotic time series. We focus on the family of Feigenbaum graphs, which are HVGs constructed in correspondence with the trajectories of one-parameter unimodal maps undergoing a period-doubling route to chaos (Feigenbaum scenario). For the set of values of the map's parameter $\mu$ for which the orbits are periodic with period $2^n$, Feigenbaum graphs are fully characterised by two integers (n,k) and admit an algebraic structure. We explore the spectral properties of these graphs for finite n and k, and among other interesting patterns we find a scaling relation for the maximal eigenvalue and we prove some bounds explaining it. We also provide numerical and rigorous results on a few other properties including the determinant or the number of spanning trees. In a second step, we explore the set of Feigenbaum graphs obtained for the range of values of the map's parameter $\mu$ for which the system displays chaos. We show that in this case, Feigenbaum graphs form an ensemble for each value of $\mu$ and the system is typically weakly self-averaging. Unexpectedly, we find that while the largest eigenvalue can distinguish chaos from an iid process, it is not a good measure to quantify the chaoticity of the process, and that the eigenvalue density does a better job.
physics
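The horizontal visibility graph construction underlying the paper above follows a one-line criterion: nodes $i$ and $j$ are linked when every intermediate value lies strictly below $\min(x_i, x_j)$. A brute-force sketch, including the largest adjacency eigenvalue used as a complexity measure:

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Edges (i, j) such that x[k] < min(x[i], x[j]) for all i < k < j."""
    edges = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

x = [3.0, 1.0, 2.0, 4.0, 1.5]
E = horizontal_visibility_graph(x)
A = np.zeros((len(x), len(x)))
for i, j in E:
    A[i, j] = A[j, i] = 1
lam = max(np.linalg.eigvalsh(A))   # largest adjacency eigenvalue
print(E, round(lam, 3))
```

Consecutive points are always linked, so the HVG of an $n$-point series has at least $n-1$ edges; the Feigenbaum graphs studied in the paper arise from series generated by unimodal maps.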
We report time-of-flight neutron spectroscopic and diffraction studies of the 5$d^2$ cubic double perovskite magnets Ba$_2$MOsO$_6$ ($M$ = Zn, Mg, Ca). These cubic materials are all described by antiferromagnetically-coupled 5$d^2$ Os$^{6+}$ ions decorating a face-centred cubic (FCC) lattice. They all exhibit thermodynamic anomalies consistent with phase transitions at a temperature $T^*$, and exhibit a gapped magnetic excitation spectrum with spectral weight concentrated at wavevectors typical of type I antiferromagnetic order. While muon spin resonance experiments show clear evidence for time reversal symmetry breaking, no corresponding magnetic Bragg scattering is observed at low temperatures. These results, consistent with low temperature octupolar or quadrupolar order, are discussed in the context of other 5$d^2$ DP magnets, and theories for $d^2$ ions on an FCC lattice which predict exotic orders driven by multipolar interactions.
condensed matter
The Cauchy summation formula plays a central role in the application of character calculus to many problems, from the AGT-implied Nekrasov decomposition of conformal blocks to topological-vertex decompositions of link invariants. We briefly review the equivalence between the Cauchy formula and the expressibility of skew characters through the Littlewood-Richardson coefficients. As a not-quite-trivial illustration, we consider how this equivalence works in the case of plane partitions -- at the simplest truly interesting level of just four boxes.
high energy physics theory
With the hypothesis of minimal flavor violation, we find that there exists a power-aligned relation between the Yukawa couplings of the two scalar doublets in the two-Higgs-doublet model with Hermitian Yukawa matrices. Within such a power-aligned framework, it is found that a simultaneous explanation of the anomalies observed in the electron and muon anomalous magnetic moments can be reached with TeV-scale quasi-degenerate Higgs masses, and the resulting parameter space is also phenomenologically safer under the B-physics, $Z$ and $\tau$ decay data, as well as the current LHC bounds. Furthermore, the flavor-universal power that enhances the charged-lepton Yukawa couplings prompts an interesting correlation between the two anomalies, which makes the model distinguishable from the (generalized) linearly aligned and the lepton-specific two-Higgs-doublet models that address the same anomalies but in a non-correlative manner, and hence testable by future precise measurements.
high energy physics phenomenology
We present IrRep - a Python code that calculates the symmetry eigenvalues of electronic Bloch states in crystalline solids and the irreducible representations under which they transform. As input it receives bandstructures computed with state-of-the-art Density Functional Theory codes such as VASP, Quantum Espresso, or Abinit, as well as any other code that has an interface to Wannier90. Our code is applicable to materials in any of the 230 space groups and double groups preserving time-reversal symmetry with or without spin-orbit coupling included, for primitive or conventional unit cells. This makes IrRep a powerful tool to systematically analyze the connectivity and topological classification of bands, as well as to detect insulators with non-trivial topology, following the Topological Quantum Chemistry formalism: IrRep can generate the input files needed to calculate the (physical) elementary band representations and the symmetry-based indicators using the CheckTopologicalMat routine of the Bilbao Crystallographic Server. It is also particularly suitable for interfaces with other plane-waves based codes, due to its flexible structure.
condensed matter
We propose a deep learning method to build an AdS/QCD model from the data of hadron spectra. A major problem of generic AdS/QCD models is that a large ambiguity is allowed for the bulk gravity metric with which QCD observables are holographically calculated. We adopt the experimentally measured spectra of $\rho$ and $a_2$ mesons as training data, and perform supervised machine learning which concretely determines a bulk metric and a dilaton profile of an AdS/QCD model. Our deep learning (DL) architecture is based on the AdS/DL correspondence (arXiv:1802.08313), where the deep neural network is identified with the emergent bulk spacetime.
high energy physics theory
X-ray observations of the hot gas filling the intra-cluster medium provide a wealth of information on the dynamics of clusters of galaxies. The global equilibrium of the ICM is believed to be partially ensured by non-thermal pressure support, notably the dissipation of energy through turbulent motions. Accurate mapping of turbulence using X-ray emission lines is challenging due to the lack of spatially-resolved spectroscopy. Only future instruments such as the X-ray Integral Field Unit (X-IFU) on Athena will have the spatial and spectral resolution to quantitatively investigate the ICM turbulence at all scales. Powerful diagnostics for these studies are the line shift and line broadening maps, and the second-order structure function. When estimating these quantities, instruments will be limited by the uncertainties of their measurements and by the sample variance (aka cosmic variance) of the observation. We extend here the formalism started in our companion paper I to include the effect of statistical uncertainties in the estimation of these line diagnostics, in particular for structure functions. We demonstrate that statistics contribute to the total variance through different terms, which depend on the geometry of the detector, the spatial binning and the nature of the turbulent field. These terms are important when probing the small scales of the turbulence. An application of these equations is performed for the X-IFU, using synthetic turbulent velocity maps of a Coma-like cluster of galaxies. Results are in excellent agreement with the formulas both for the structure function estimation (<3%) and its variance (<10%). The expressions derived here and in paper I are generic, and ensure an estimation of the total errors in any X-ray measurement of turbulent structure functions. They also open the way for optimisations in the upcoming instrumentation and in observational strategies.
astrophysics
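The second-order structure function central to the analysis above has a simple estimator, $SF_2(r) = \langle |v(x+r) - v(x)|^2 \rangle$; a sketch on a synthetic velocity map, using only axis-aligned separations and ignoring the instrumental and sample-variance terms the paper accounts for:

```python
import numpy as np

def structure_function(v, max_r):
    """Second-order structure function SF2(r) = <|v(x+r) - v(x)|^2>."""
    sf = []
    for r in range(1, max_r + 1):
        dx = (v[:, r:] - v[:, :-r]) ** 2   # separations along x
        dy = (v[r:, :] - v[:-r, :]) ** 2   # separations along y
        sf.append(np.concatenate([dx.ravel(), dy.ravel()]).mean())
    return np.array(sf)

rng = np.random.default_rng(1)
v = rng.normal(size=(64, 64))      # stand-in for a centroid-shift map (km/s)
print(np.round(structure_function(v, 5), 2))
```

For a real turbulent map, SF2(r) rises with separation and saturates near twice the velocity variance; the paper quantifies how measurement noise biases and scatters this estimate.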
In this paper, we investigate the fixed-time behavioral control problem for a team of second-order nonlinear agents, aiming to achieve a desired formation with collision/obstacle avoidance. In the proposed approach, the two behaviors (tasks) for each agent are prioritized and integrated via the framework of null-space-based behavioral projection, leading to a desired merged velocity that guarantees the fixed-time convergence of task errors. To track this desired velocity, we design a fixed-time sliding mode controller for each agent with state-independent adaptive gains, which provides fixed-time convergence of the tracking error. The control scheme is implemented in a distributed manner, where each agent only acquires information from its neighbors in the network. Moreover, we adopt an online learning algorithm to improve the robustness of the closed-loop system with respect to uncertainties/disturbances. Finally, simulation results are provided to show the effectiveness of the proposed approach.
mathematics
We present a new physics informed neural network (PINN) algorithm for solving brittle fracture problems. While most of the PINN algorithms available in the literature minimize the residual of the governing partial differential equation, the proposed approach takes a different path by minimizing the variational energy of the system. Additionally, we modify the neural network output such that the boundary conditions associated with the problem are exactly satisfied. Compared to conventional residual-based PINNs, the proposed approach has two major advantages. First, the imposition of boundary conditions is relatively simpler and more robust. Second, the order of derivatives present in the functional form of the variational energy is lower than in the residual form; hence, training the network is faster. To compute the total variational energy of the system, an efficient scheme that takes as input a geometry described by a spline-based CAD model and employs Gauss quadrature rules for numerical integration is proposed. Moreover, we utilize the concept of transfer learning to obtain the crack path in an efficient manner. The proposed approach is used to solve four fracture mechanics problems. For all the examples, results obtained using the proposed approach match closely with the results available in the literature. For the first two examples, we compare the results obtained using the proposed approach with conventional residual-based neural network results. For both problems, the proposed approach is found to yield better accuracy than conventional residual-based PINN algorithms.
statistics
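The two advantages claimed above (exact boundary-condition imposition and lower-order derivatives in the energy functional) can be seen in a one-dimensional toy version of the method; a PyTorch sketch for $-u'' = 1$ on $(0,1)$ with the ansatz $u(x) = x(1-x)N(x)$ and trapezoidal quadrature standing in for the paper's Gauss quadrature on CAD geometries:

```python
import torch

# Variational PINN sketch: minimize E[u] = int (1/2 u'^2 - u) dx,
# whose minimizer solves -u'' = 1 with u(0) = u(1) = 0.
net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.linspace(0, 1, 101).reshape(-1, 1)

for _ in range(2000):
    xg = x.clone().requires_grad_(True)
    u = xg * (1 - xg) * net(xg)          # BCs satisfied exactly by the ansatz
    du = torch.autograd.grad(u.sum(), xg, create_graph=True)[0]
    integrand = 0.5 * du**2 - u          # only first derivatives appear
    loss = torch.trapz(integrand.squeeze(), xg.squeeze())
    opt.zero_grad(); loss.backward(); opt.step()

print(float(u[50]))   # compare with the exact value u(0.5) = 0.125
```

Note that the energy form needs only u', whereas a residual-form PINN for the same problem would require u'' inside the loss.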
With the increasing adoption of Automatic Vehicle Location (AVL) and Automatic Passenger Count (APC) technologies by transit agencies, a massive amount of time-stamped and location-based passenger boarding and alighting count data can be collected on a continuous basis. The availability of such large-scale transit data offers new opportunities to produce estimates for Origin-Destination (O-D) flows, helping inform transportation planning and transit management. However, the state-of-the-art methodologies for AVL/APC data analysis mostly tackle the O-D flow estimation problem within routes and barely infer the transfer activities across the entire transit network. This paper proposes three optimization models to identify transfers and approximate network-level O-D flows by minimizing the deviations between estimated and observed proportions or counts of transferring passengers: a Quadratic Integer Program (QIP), a feasible rounding procedure for the Quadratic Convex Programming (QCP) relaxation of the QIP, and an Integer Program (IP). The inputs of the models are readily available by applying the various route-level flow estimation algorithms to the automatically collected AVL/APC data, and the output of the models is a network O-D estimation at varying geographical resolutions. The optimization models were evaluated on a case study for the Ann Arbor-Ypsilanti area in Michigan. The IP model outperforms the QCP approach in terms of accuracy and remains tractable from an efficiency standpoint, contrary to the QIP. Its estimated O-D matrix achieves an R-Squared metric of 95.57% at the Traffic Analysis Zone level and 92.39% at the stop level, compared to the ground-truth estimates inferred from the state-of-practice trip-chaining methods.
mathematics
The effect of pressure on the room temperature solubility of hydrogen in Zircaloy-4 was examined using synchrotron X-ray diffraction on small ground flake samples in a diamond anvil cell at pressures up to 20.9 GPa. Different combinations of hydrogen level/state in the sample and of pressure transmitting medium were examined; in all three cases examined, it could be concluded that pressure resulted in the dissolution of δ hydrides and that interstitial hydrogen retards the formation of ω Zr. A pressure of around 9 GPa was required to halve the hydride fraction. These results imply that the effect of pressure is thermodynamically analogous to that of increasing temperature, but that the effect is small. The results are consistent with the volume per Zr atom of the α, δ and ω phases, with the bulk moduli of α and δ, and with previous measurements of the hydrogen site molar volumes in the α and δ phases. The results are interpreted in terms of their implication for our understanding of the driving forces for hydride precipitation at crack tips, which are in a region of hydrostatic tensile stress on the order of 1.5 GPa.
condensed matter
Gravitational waves are ripples in the fabric of space-time produced when high-energy events such as black hole mergers or neutron star collisions take place. The first gravitational wave (GW) detection (GW150914) was made by the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo Collaboration on September 14, 2015. Furthermore, the proof of the existence of GWs had countless implications, from stellar evolution to general relativity. Gravitational wave detection requires multiple filters, and the filtered data has to be studied intensively to conclude whether the data is just a glitch or an actual gravitational wave detection. However, with the use of deep learning the process is simplified heavily, as it reduces the level of filtering greatly, and the output is more definitive, even though the model produces a probabilistic result. Our deep learning technique utilizes a different implementation of a one-dimensional convolutional neural network (CNN). The model is trained on a composite of real LIGO noise and injections of GW waveform templates. The CNN effectively uses classification to differentiate weak GW time series from non-Gaussian noise and glitches in the LIGO data stream. In addition, we are the first study to utilize fine-tuning as a means to train the model with a second pass of data, while maintaining all the learned features from the initial training iteration. This enables our model to reach a sensitivity of 100%, higher than all prior studies in this field, when making real-time detections of GWs at extremely low signal-to-noise ratios (SNR), while still being less computationally expensive. This sensitivity is achieved in part through the use of deep signal manifolds from both the Hanford and Livingston detectors, which enable the neural network to be responsive to false positives.
astrophysics
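A one-dimensional CNN classifier of the general kind described above can be sketched in PyTorch; the layer sizes, segment length, and two-class head are illustrative placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Minimal 1D CNN for binary GW-vs-noise classification of strain segments
# (here: length-2048 whitened segments, e.g. 1 s at 2048 Hz).
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.LazyLinear(64), nn.ReLU(),
    nn.Linear(64, 2),                 # logits: [noise, signal]
)

x = torch.randn(8, 1, 2048)           # batch of whitened strain segments
print(model(x).shape)                 # torch.Size([8, 2])
```

Training on noise-plus-injection composites and then fine-tuning with a second data pass, as the abstract describes, would reuse this same network with a lower learning rate in the second stage.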
The development of high-throughput sequencing and targeted therapies has led to the emergence of personalized medicine: a patient's molecular profile or the presence of a specific biomarker of drug response will correspond to a treatment recommendation made either by a physician or by a treatment assignment algorithm. The growing number of such algorithms raises the question of how to quantify their clinical impact, knowing that a personalized medicine strategy will inherently include different versions of treatment. We thus specify an appropriate causal framework with multiple versions of treatment to define the causal effects of interest for precision medicine strategies and estimate them by emulating clinical trials with observational data. We therefore determine whether the treatment assignment algorithm is more efficient than different control arms: gold standard treatment, observed treatments or random assignment of targeted treatments. Causal estimates of the precision medicine effects are first evaluated on simulated data and demonstrate lower bias and variance than naive estimation of the difference in expected outcome between treatment arms. The various simulation scenarios also point out the different bias sources depending on the clinical situation (heterogeneity of response, assignment of observed treatments, etc.). An RShiny interactive application is also provided to further explore other user-defined scenarios. The method is then applied to data from patient-derived xenografts (PDX): each patient's tumour is implanted in several immunodeficient cloned mice later treated with different drugs, thus providing access to all corresponding drug sensitivities for all patients. Access to these unique pre-clinical data emulating counterfactual outcomes allows us to validate the reliability of causal estimates obtained with the proposed method.
statistics
The FeSe nematic phase has been the focus of recent research on iron-based superconductors (IBSs) due to its unique properties. A number of electronic structure studies were performed to find the origin of the phase. However, such attempts produced conflicting results and caused additional controversies. Here, we report results from angle-resolved photoemission and X-ray absorption spectroscopy studies on FeSe detwinned by a piezo stack. We have fully resolved band dispersions with orbital characters near the Brillouin zone corner, which reveal the absence of a Fermi pocket at the Y point in the 1-Fe Brillouin zone. In addition, the occupation imbalance between the $d_{xz}$ and $d_{yz}$ orbitals is found to be opposite to that of iron pnictides, consistent with the identified band characters. These results settle controversial issues in the FeSe nematic phase and shed light on the origin of nematic phases in IBSs.
condensed matter
Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in many practical scenarios, as such heavy models can hardly be readily implemented with limited resources. To improve their efficiency with an assured model performance, we propose a novel speed-tunable FastBERT with adaptive inference time. The speed at inference can be flexibly adjusted under varying demands, while redundant calculation of samples is avoided. Moreover, this model adopts a unique self-distillation mechanism at fine-tuning, further enabling greater computational efficacy with minimal loss in performance. Our model achieves promising results on twelve English and Chinese datasets. It can speed up inference by a factor of 1 to 12 relative to BERT, depending on the speedup threshold chosen to make the speed-performance tradeoff.
computer science
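The adaptive-inference idea above amounts to exiting at the first layer whose classifier is sufficiently confident; a sketch with an entropy threshold, where the layer and head modules and the `speed` parameter are illustrative stand-ins rather than FastBERT's actual API:

```python
import torch

def entropy(logits):
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(-1)

def adaptive_inference(layers, heads, h, speed):
    """Exit at the first layer whose student head is confident enough.

    `layers`/`heads` stand in for transformer blocks and their distilled
    student classifiers; `speed` is the entropy threshold."""
    for layer, clf in zip(layers, heads):
        h = layer(h)
        logits = clf(h.mean(dim=1))           # pool tokens, then classify
        if entropy(logits).max() < speed:     # confident: stop early
            return logits
    return logits                             # fell through: last head decides

layers = [torch.nn.Linear(32, 32) for _ in range(4)]
heads = [torch.nn.Linear(32, 2) for _ in range(4)]
out = adaptive_inference(layers, heads, torch.randn(3, 10, 32), speed=0.3)
print(out.shape)   # torch.Size([3, 2])
```

Lowering `speed` forces more samples through more layers (slower, more accurate); raising it exits earlier, which is the tunable tradeoff the abstract describes.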
We study the cohomology of the complexes of differential, integral and pseudo forms on odd symplectic manifolds, taking the wedge product with the symplectic form as the differential. We show that the cohomology classes are in correspondence with inequivalent Lagrangian submanifolds and that they all define semidensities on them. Further, we introduce new operators that move from one Lagrangian submanifold to another and we investigate their relation with the so-called picture changing operators for the de Rham differential. Finally, we prove the isomorphism between the cohomology of the de Rham differential and the cohomology of the BV Laplacian in the extended framework of differential, integral and pseudo forms.
high energy physics theory
A new meta-algorithm for estimating the conditional average treatment effects is proposed in this paper. The main idea underlying the algorithm is to consider a new dataset consisting of feature vectors produced by concatenating examples from the control and treatment groups that are close to each other. Outcomes of the new data are defined as the difference between the outcomes of the corresponding examples comprising the new feature vectors. The second idea is based on the assumption that the number of controls is rather large and the control outcome function is precisely determined. This assumption allows us to augment the treatments by generating feature vectors which are close to available treatments. The outcome regression function constructed on the augmented set of concatenated feature vectors can be viewed as an estimator of the conditional average treatment effects. A simple modification of the Co-learner based on the random subspace method, or feature bagging, is also proposed. Various numerical simulation experiments illustrate the proposed algorithm and show that it outperforms the well-known T-learner and X-learner for several types of control and treatment outcome functions.
statistics
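The concatenation idea above can be sketched end to end: pair each treated example with a nearby control, concatenate features, and regress the outcome difference (the augmentation step and the random-subspace modification are omitted from this sketch):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
Xc, Xt = rng.normal(size=(500, 3)), rng.normal(size=(60, 3))
yc = Xc.sum(1) + rng.normal(0, 0.1, 500)          # control outcomes
yt = Xt.sum(1) + 1.0 + rng.normal(0, 0.1, 60)     # true treatment effect = 1

# pair each treated example with its nearest control, concatenate features,
# and regress the outcome difference on the concatenated vector
nn = NearestNeighbors(n_neighbors=1).fit(Xc)
idx = nn.kneighbors(Xt, return_distance=False).ravel()
Z = np.hstack([Xc[idx], Xt])                      # concatenated pairs
d = yt - yc[idx]                                  # outcome differences
cate = RandomForestRegressor(random_state=0).fit(Z, d)

x_new = np.zeros(3)
print(cate.predict(np.hstack([x_new, x_new]).reshape(1, -1)))  # approx. 1.0
```

Evaluating the fitted regressor at a point concatenated with itself, as in the last line, is what yields the CATE estimate at that point.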
This paper is concerned with the affine-invariant ternary codes which are defined by Hermitian functions. We compute the incidence matrices of 2-designs that are supported by the minimum weight codewords of these ternary codes. The linear codes generated by the rows of these incidence matrices are subcodes of the extended codes of the 4-th order generalized Reed-Muller codes, and they also hold 2-designs. Finally, we give the dimensions and lower bounds on the minimum weights of these linear codes.
computer science
The rhombohedral B$_{12}$ unit can be viewed as a host matrix embedding linear triatomic arrangements of elements (E), resulting in a relatively large family of boron-rich compounds with the generic formulation B$_{12}${E-E-E}. The present work focuses on the boron subnitride B$_{13}$N$_2$, which we express in this context as B$_{12}${N-B-N}. Within well-established quantum density functional theory (DFT), a full study of its electronic properties is provided. In view of the existence of linear triatomic arrangements in simple compounds such as sodium azide NaN$_3$, i.e., Na$^I${N-N-N}, and calcium cyanamide, Ca$^{II}${N-C-N}, we devised Sc$^{III}${N-B-N} to establish a comparison with B$_{12}${N-B-N}. ScBN$_2$ is calculated to be cohesive, with the N-B-N unit isolated from Sc$^{III}$ and d(B-N) = 1.33 {\AA}. In B$_{12}${N-B-N}, an elongated d(B-N) = 1.43 {\AA} is identified, due to the bonding of N with one of the two boron substructures of B$_{12}$, B1, forming a "3B...N-B-N...3B"-like complex accompanied by a magnetic instability. Spin-polarized (SP) calculations reveal the onset of magnetization on the central boron, with M = 1 $\mu_B$, in a stable half-ferromagnetic ground state observed in the electronic density of states (DOS). The results are backed by total-energy calculations in both non-spin-polarized (NSP) and spin-polarized configurations, with M(V) plots stabilizing the latter over a broad range of volumes. Further illustrative results are given with the charge densities (total and magnetic) and the electron localization function (ELF).
condensed matter
Understanding spatiotemporal road network accessibility during a hurricane evacuation - the ease with which residents of an area can reach evacuation destination sites through the road network - is a critical component of emergency management. While many studies have attempted to measure road accessibility (either in the scope of evacuation or beyond), few have considered both dynamic evacuation demand and the characteristics of the hurricane. This study proposes a methodological framework to achieve this goal. At six-hour intervals, the method first estimates the evacuation demand, in terms of the number of vehicles per household in each county subdivision, by considering the hurricane's wind radius and track. A closest-facility analysis is then employed to model evacuees' route choices towards the predefined evacuation destinations. The potential crowdedness index (PCI), a metric capturing the level of crowdedness of each road segment, is then computed by coupling the estimated evacuation demand and route choices. Finally, the road accessibility of each sub-county is measured as the reciprocal of the sum of the PCI values of the roads connecting evacuees from the sub-county to the designated destinations. The method is applied to the entire state of Florida during Hurricane Irma in September 2017. Results show that I-75 and I-95 northbound have a high level of congestion, and that sub-counties along northbound I-95 suffer from the worst road accessibility. In addition, this research performs a sensitivity analysis examining the impact of different choices of behavioral response curves on the accessibility results.
physics
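A toy Python sketch of the accessibility computation described above. The specific PCI formula (assigned demand over segment capacity) and all data structures are assumptions for illustration; the study couples demand estimates with closest-facility route choices rather than the hard-coded routes used here.

```python
from collections import defaultdict

routes = {            # zone -> road segments on its chosen route (toy data)
    "zone_a": ["s1", "s2", "s3"],
    "zone_b": ["s2", "s3"],
    "zone_c": ["s4"],
}
demand = {"zone_a": 1200, "zone_b": 800, "zone_c": 300}   # vehicles
capacity = {"s1": 2000, "s2": 1500, "s3": 1500, "s4": 1000}

# Potential crowdedness: total assigned demand over segment capacity
# (an assumed proxy for the paper's PCI).
load = defaultdict(float)
for zone, segs in routes.items():
    for s in segs:
        load[s] += demand[zone]
pci = {s: load[s] / capacity[s] for s in capacity}

# Zone accessibility: reciprocal of the summed PCI along its route.
access = {z: 1.0 / sum(pci[s] for s in segs) for z, segs in routes.items()}
print(pci)
print(access)   # lower summed crowdedness -> higher accessibility
```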
In this paper we consider closed orientable surfaces $M$ of positive genus and diffeomorphisms $f:M\rightarrow M$. The main objective is to show that such maps, under certain natural conditions and some generic assumptions, can have unbounded periodic open disks, which are very weird objects. For instance, if $D\subset M$ is such a disk (a maximal one), then the connected components of its lift to the universal cover of $M$, denoted $\widetilde{M}$, all share the same boundary $K$, which is the lift of $\partial D$ to $\widetilde{M}$. In other words, there is an equivariant closed connected set $K$ in $\widetilde{M}$ (in particular, $K$ is unbounded in every direction) which is equal to the boundary of all connected components of the lift of $D$. We present two main results. The first describes the dynamics of $f\mid_D$ when the rotation number of the prime ends compactification of $D$ is rational. It implies in particular that $D$ is either the basin of an attractor or of a repeller contained in $D$. The other result shows that unbounded periodic open disks appear for generic one-parameter families $f_t:M\rightarrow M$ whenever the rotation set of $f_t$ grows. When $M$ is the torus, apart from some $C^r$-generic conditions (which hold for all $r\geq 1$), we also assume that the rotation set of $f$ (or $f_t$) has non-empty interior. For higher genus surfaces, we need a more complicated hypothesis: the existence of a fully essential system of curves.
mathematics
The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning. Most works study this problem under very weak assumptions, in which case it is provably hard. We consider the case of stronger structural assumptions that are commonly satisfied in modern machine learning, including representation learning and generative modelling with autoencoder architectures. Under these assumptions we propose and study an estimator that can be easily implemented, works well in high dimensions, and enjoys faster rates of convergence. We verify the behavior of our estimator empirically in both synthetic and real-data experiments, and discuss its direct implications for total correlation, entropy, and mutual information estimation.
statistics
It is known that some GIT compactifications associated to moduli spaces of either points in the projective line or cubic surfaces are isomorphic to Baily-Borel compactifications of appropriate ball quotients. In this paper, we show that their respective toroidal compactifications are isomorphic to moduli spaces of stable pairs as defined in the context of the MMP. Moreover, we give a precise mixed-Hodge-theoretic interpretation of this isomorphism for the case of eight labeled points in the projective line.
mathematics
In this paper, a non-isolated high step-up dc-dc converter is presented. The proposed converter is composed of an interleaved structure and diode-capacitor multiplier cells for interfacing low-voltage renewable energy sources to high-voltage distribution buses. This topology can provide a very high voltage gain by employing coupled inductors and diode-capacitor cells. The coupled inductors are connected to the diode-capacitor multiplier cells to achieve interleaved energy storage on the output side. Furthermore, the proposed topology provides continuous input current with low voltage stress on the power devices, and the reverse-recovery problem of the diodes is reduced. The topology can be operated at a reduced duty cycle by adjusting the turns ratio of the coupled inductors. Moreover, a performance comparison between the proposed topology and other converters is presented. The design considerations, operation principle, steady-state analysis, simulation results, and experimental verification are presented. Finally, a 500-W hardware prototype with an input voltage of 30 V and an output voltage of 1000 V is built to verify the performance and the theoretical analysis.
electrical engineering and systems science
In this work, we address the possibility of finding domain wall solutions in Horndeski gravity. Our principal motivation is based on recent investigations of applications of the first-order formalism to cosmology and braneworlds in Horndeski gravity. We find that the first-order equations for the corresponding cosmology/domain wall solutions in four dimensions play the role of the braneworld solutions in five dimensions. We also find a mechanism of gravity localization on the domain wall for suitable values of the Horndeski parameters coupling to four-dimensional gravity on a 3-brane in Minkowski (M$_{4}$) or anti-de Sitter (AdS$_{4}$) space.
high energy physics theory
We investigate the quantum numbers of the pentaquark states $\textrm{P}_{c}^{+}$, which are composed of four quarks (of three flavors) and an antiquark, by analyzing their inherent nodal structure. Assuming that the four quarks form a tetrahedron or a square, with the antiquark located at the center of the four-quark cluster, we determine the nodeless structure of the states with orbital angular momentum $L \leq 3$ and, in turn, the accessible low-lying states. Since the inherent nodal structure depends only on the inherent geometric symmetry, we propose that the quantum numbers $J^{P}$ of the low-lying pentaquark states $\textrm{P}_{c}^{+}$ may be ${\frac{3}{2}}^{-}$, ${\frac{5}{2}}^{-}$, ${\frac{3}{2}}^{+}$, ${\frac{5}{2}}^{+}$, independent of dynamical models.
high energy physics phenomenology
Numerical models of gas inflow towards a supermassive black hole (SMBH) show that star formation may occur in such an environment through the growth of a gravitationally unstable gas disc. We consider the effect of nuclear activity on such a scenario. We present the first three-dimensional grid-based radiative hydrodynamic simulations of direct collisions between infalling gas streams and a $4 \times 10^6~\text{M}_\odot$ SMBH, using ray-tracing to incorporate radiation consistent with an active galactic nucleus (AGN). We assume inflow masses of $ \approx 10^5~\text{M}_\odot$ and explore radiation fields of 10% and 100% of the Eddington luminosity ($L_\text{edd}$). We follow our models to the point of central gas disc formation preceding star formation and use the Toomre Q parameter ($Q_T$) to test for gravitational instability. We find that radiation pressure from UV photons inhibits inflow. Yet, for weak radiation fields, a central disc forms on timescales similar to that of models without feedback. Average densities of $> 10^{8}~\text{cm}^{-3}$ limit photo-heating to the disc surface allowing for $Q_T\approx1$. For strong radiation fields, the disc forms more gradually resulting in lower surface densities and larger $Q_T$ values. Mass accretion rates in our models are consistent with 1%--60% of the Eddington limit, thus we conclude that it is unlikely that radiative feedback from AGN activity would inhibit circumnuclear star formation arising from a massive inflow event.
astrophysics
In this work, we propose an efficient method for solving box-constrained derivative-free optimization problems in high dimensions. The proposed method explores the feasible region using a direct-search approach based on scaled conjugate gradients with quadratic interpolation models. Extensive numerical computations on test problems of varying dimensions demonstrate the performance of the proposed method for derivative-free optimization.
mathematics
Diffusion-weighted magnetic resonance imaging (D-MRI) is an in-vivo, non-invasive imaging technology for probing the anatomical architecture of biological samples. The anatomy of white matter fiber tracts in the brain can be revealed to help understand the connectivity patterns among different brain regions. In this paper, we propose a novel Nearest-neighbor Adaptive Regression Model (NARM) for adaptive estimation of the fiber orientation distribution (FOD) function based on D-MRI data, where spatial homogeneity is used to improve FOD estimation by incorporating neighborhood information. Specifically, we formulate the FOD estimation problem as a weighted linear regression problem, where the weights are chosen to account for spatial proximity and potential heterogeneity due to different fiber configurations. The weights are adaptively updated, and a stopping rule based on nearest-neighbor distance is designed to prevent over-smoothing. NARM is further extended to accommodate D-MRI data with multiple b-values. Comprehensive simulation results demonstrate that NARM leads to satisfactory FOD reconstructions and performs better than voxel-wise estimation as well as competing smoothing methods. By applying NARM to real 3T D-MRI datasets, we demonstrate the effectiveness of NARM in recovering more realistic crossing fiber patterns and producing more coherent fiber tracking results, establishing the practical value of NARM for analyzing D-MRI data and providing reliable information on brain structural connectivity.
statistics
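The following Python sketch illustrates the weighted-linear-regression view of FOD estimation described in the abstract above: the coefficients at a voxel are fit by pooling neighbors with weights that decay with spatial distance and signal dissimilarity. The Gaussian weight kernel and all names are illustrative assumptions, not NARM's actual adaptive weighting or stopping rule.

```python
import numpy as np

def weighted_fod_estimate(A, signals, center, coords, h_space=1.0, h_sig=0.5):
    """Estimate FOD coefficients at one voxel by weighted least squares,
    pooling neighboring voxels. A: (n_grad, n_coef) design matrix mapping
    FOD coefficients to D-MRI signals; signals: (n_vox, n_grad);
    coords: (n_vox, 3). Weights shrink with spatial distance and with
    dissimilarity of the measured signals (a stand-in for NARM's
    adaptive weights)."""
    d_space = np.linalg.norm(coords - coords[center], axis=1)
    d_sig = np.linalg.norm(signals - signals[center], axis=1)
    w = np.exp(-(d_space / h_space) ** 2 - (d_sig / h_sig) ** 2)
    # Stack the weighted per-voxel systems into one least-squares problem.
    Aw = np.vstack([np.sqrt(wi) * A for wi in w])
    yw = np.concatenate([np.sqrt(wi) * s for wi, s in zip(w, signals)])
    coef, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    return coef

# Toy usage: 5 voxels on a line, random design, smooth signals.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 8))
coords = np.array([[i, 0, 0] for i in range(5)], dtype=float)
truth = rng.normal(size=8)
signals = np.array([A @ truth + 0.05 * rng.normal(size=30) for _ in range(5)])
print(weighted_fod_estimate(A, signals, center=2, coords=coords)[:4])
```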
Entanglement plays a prominent role in the study of condensed matter many-body systems: Entanglement measures not only quantify the possible use of these systems in quantum information protocols, but also shed light on their physics. However, exact analytical results remain scarce, especially for systems out of equilibrium. In this work we examine a paradigmatic one-dimensional fermionic system that consists of a uniform tight-binding chain with an arbitrary scattering region near its center, which is subject to a DC bias voltage at zero temperature. The system is thus held in a current-carrying nonequilibrium steady state, which can nevertheless be described by a pure quantum state. Using a generalization of the Fisher-Hartwig conjecture, we present an exact calculation of the bipartite entanglement entropy of a subsystem with its complement, and show that the scaling of entanglement with the length of the subsystem is highly unusual, containing both a volume-law linear term and a logarithmic term. The linear term is related to imperfect transmission due to scattering, and provides a generalization of the Levitov-Lesovik full counting statistics formula. The logarithmic term arises from the Fermi discontinuities in the distribution function. Our analysis also produces an exact expression for the particle-number-resolved entanglement. We find that although to leading order entanglement equipartition applies, the first term breaking it grows with the size of the subsystem, a novel behavior not observed in previously studied systems. We apply our general results to a concrete model of a tight-binding chain with a single impurity site, and show that the analytical expressions are in good agreement with numerical calculations. The analytical results are further generalized to accommodate the case of multiple scattering regions.
quantum physics
We study the homogenization of a Hamilton-Jacobi equation forced by rapidly oscillating noise that is colored in space and white in time. It is shown that the homogenized equation is deterministic, and, in general, the noise has an enhancement effect, for which we provide a quantitative estimate. As an application, we perform a noise sensitivity analysis for Hamilton-Jacobi equations forced by a noise term with small amplitude, and identify the scaling at which the macroscopic enhancement effect is felt. The results depend on new, probabilistic estimates for the large scale H\"older regularity of the solutions, which are of independent interest.
mathematics
We study QCD with massless quarks on $\mathbb{R}^3\times S^1$ under symmetry-twisted boundary conditions with small compactification radius, i.e. at high temperatures. Under suitable boundary conditions, the theory acquires a part of the center symmetry, which is spontaneously broken at high temperatures. We show that these high-temperature vacua can be regarded as different symmetry-protected topological orders, and the domain walls between them support nontrivial massless gauge theories as a consequence of the anomaly-inflow mechanism. At sufficiently high temperatures, we can perform a semiclassical analysis to obtain the domain-wall theory, and $2$d $U(N_\mathrm{c}-1)$ gauge theories with massless fermions match the 't~Hooft anomaly. We perform these analyses for the high-temperature domain wall of $\mathbb{Z}_{N_\mathrm{c}}$-QCD and also of the Roberge-Weiss phase transitions.
high energy physics theory
We describe Parrotron, an end-to-end-trained speech-to-speech conversion model that maps an input spectrogram directly to another spectrogram, without utilizing any intermediate discrete representation. The network is composed of an encoder, spectrogram and phoneme decoders, followed by a vocoder to synthesize a time-domain waveform. We demonstrate that this model can be trained to normalize speech from any speaker, regardless of accent, prosody, and background noise, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normalization model can be adapted to normalize highly atypical speech from a deaf speaker, resulting in significant improvements in intelligibility and naturalness, measured via a speech recognizer and listening tests. Finally, demonstrating the utility of this model on other speech tasks, we show that the same model architecture can be trained to perform a speech separation task.
electrical engineering and systems science
We propose a quantum algorithm to solve systems of nonlinear algebraic equations. In the ideal case, the complexity of the algorithm is linear in the number of variables $n$, which is below the $O(n^{3})$ complexity of the corresponding classical algorithms when the system has many variables and not-too-high orders. Unlike the most popular way of representing the results in a state vector, we obtain the results in the computational basis, where they can be read out directly.
quantum physics
Deep Inelastic Scattering (DIS) experiments at the planned Electron-Ion Collider will be affected by the details of hadron formation inside the nuclear volume. Besides semi-inclusive particle production experiments, decays of the target nucleus via emission of neutrons provide an additional opportunity to probe this domain. This paper reports on hybrid dynamical+statistical calculations of low-energy neutron production in muon- and virtual-photon-induced collisions with nuclei. We confirm the conclusion that the E665 data on neutron production in $\mu^-$ + Pb DIS at 470 GeV indicate a strong suppression of the final-state interaction for hadrons with momenta above $\sim 1$ GeV/c. Ultraperipheral heavy-ion collisions at the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) can be used to test this suppression. The calculations of the neutron multiplicity distributions and $p_t$-spectra in photon-nucleus collisions at the energies accessible at the LHC and RHIC are presented for several models of hadron formation. We argue that studies of neutron production in ultraperipheral heavy-ion collisions open a new window on small-$x$ dynamics and the hadronic component of the photon wave function.
high energy physics phenomenology
We use the functional renormalization group equation for the effective average action to study the fixed point structure of gravity-fermion systems on a curved background spacetime. We approximate the effective average action by the Einstein-Hilbert action supplemented by a fermion kinetic term and a coupling of the fermion bilinears to the spacetime curvature. The latter interaction is singled out based on a "smart truncation building principle". The resulting renormalization group flow possesses two families of interacting renormalization group fixed points extending to any number of fermions. The first family exhibits an upper bound on the number of fermions for which the fixed points could provide a phenomenologically interesting high-energy completion via the asymptotic safety mechanism. The second family comes without such a bound. The inclusion of the non-minimal gravity-matter interaction is crucial for discriminating the two families. Our work also clarifies the origin of the strong regulator-dependence of the fixed point structure reported in earlier literature and we comment on the relation of our findings to studies of the same system based on a vertex expansion of the effective average action around a flat background spacetime.
high energy physics theory
The frequency stability of power systems becomes more vulnerable with the increase of solar photovoltaic (PV) generation. Energy storage provides an option to mitigate the impact of high PV penetration. Using the U.S. Eastern Interconnection (EI) and Texas Interconnection (ERCOT) power grid models, this paper investigates the capability of energy storage to improve frequency response under high PV penetration. The study results help to identify the potential of, and the impact factors in, utilizing energy storage to improve frequency response in power grids with high renewable penetration.
electrical engineering and systems science
Period-color (PC) relations may be used to study the interaction of the stellar photosphere and the hydrogen ionization front (HIF). RR Lyraes (RRLs) and long-period classical Cepheids (P > 10 d) have been found to exhibit different PC behavior at minimum and maximum light, which can be explained by the HIF-photosphere interaction based on their location on the HR diagram. In this work, we extend the study to include type II Cepheids (T2Cs), with the aim of testing the HIF-photosphere interaction theory across a broad spectrum of variable star types. We find W Vir stars and BL Her stars to have PC relations similar to those of long-period and short-period classical Cepheids, respectively. We also use MESA to compute RRL, BL Her and classical Cepheid models to study the theoretical HIF-photosphere distance, and find the results to be fairly consistent with the HIF-photosphere interaction theory.
astrophysics
Shannon quantum information entropies $S_{\rho,\gamma}$, Fisher informations $I_{\rho,\gamma}$, Onicescu energies $O_{\rho,\gamma}$ and complexities $e^{S}O$ are calculated in both position (subscript $\rho$) and momentum ($\gamma$) spaces for an azimuthally symmetric 2D nanoring that is placed into a combination of a transverse uniform magnetic field $\bf B$ and an Aharonov-Bohm (AB) flux $\phi_{AB}$, and whose potential profile is modeled by a superposition of quadratic and inverse-quadratic dependencies on the radius $r$. An increasing intensity $B$ flattens the momentum waveforms $\Phi_{nm}({\bf k})$, and in the limit of infinitely large fields they turn to zero, which means that the position wave functions $\Psi_{nm}({\bf r})$, which are their Fourier counterparts, tend in this limit to $\delta$-functions. The position (momentum) Shannon entropy depends on the field $B$ as a negative (positive) logarithm of $\omega_{eff}\equiv\left(\omega_0^2+\omega_c^2/4\right)^{1/2}$, where $\omega_0$ determines the quadratic steepness of the confining potential and $\omega_c$ is the cyclotron frequency. This makes the sum ${S_\rho}_{nm}+{S_\gamma}_{nm}$ a field-independent quantity that increases with the principal $n$ and azimuthal $m$ quantum numbers and satisfies the entropic uncertainty relation. The position Fisher information does not depend on $m$, increases linearly with $n$ and varies as $\omega_{eff}$, whereas its $n$- and $m$-dependent Onicescu counterpart ${O_\rho}_{nm}$ changes as $\omega_{eff}^{-1}$. The products ${I_\rho}_{nm}{I_\gamma}_{nm}$ and ${O_\rho}_{nm}{O_\gamma}_{nm}$ are $B$-independent quantities. The dependence of the measures on the ring geometry is discussed. It is argued that a variation of the position Shannon entropy or Onicescu energy with the AB field uniquely determines the associated persistent current as a function of $\phi_{AB}$ at $B=0$. The inverse statement is correct too.
quantum physics
If at least one of the members of a compact binary coalescence is charged, the inspiral of the two members would generate a Poynting flux with increasing power, giving rise to a brief electromagnetic counterpart temporally associated with the chirp signal of the merger (with possibly a small temporal offset), which we term the {\em charged compact binary coalescence} (cCBC) signal. We develop a general theory of the cCBC for any mass and amount of charge of each member. Neutron stars (NSs), as spinning magnets, are guaranteed to be charged, so the cCBC signal should accompany all neutron star mergers. The cCBC signal is clean in a BH-NS merger with a small mass ratio ($q \equiv m_2/m_1 < 0.2$), in which the NS plunges into the BH as a whole, and its luminosity/energy can reach that of a fast radio burst if the NS is Crab-like. The strength of the cCBC signal in extreme mass ratio inspiral (EMRI) systems is also estimated.
astrophysics
Reservoir computing is an emerging methodology for neuromorphic computing that is especially well-suited for hardware implementations in size, weight, and power (SWaP) constrained environments. This work proposes a novel hardware implementation of a reservoir computer using a planar nanomagnet array. A small nanomagnet reservoir is demonstrated via micromagnetic simulations to be able to identify simple waveforms with 100% accuracy. Planar nanomagnet reservoirs are a promising new solution to the growing need for dedicated neuromorphic hardware.
computer science
Searching for the top squark (stop) is a key task in testing the naturalness of SUSY. Unlike stop pair production, single stop production relies on its electroweak properties and can provide some unique signatures. In the single production process $pp \to \tilde t_1 \tilde{\chi}^-_1 \to t \tilde{\chi}^0_1 \tilde{\chi}^-_1$, the top quark has two decay channels: the leptonic channel and the hadronic channel. In this paper, we probe the observability of these two channels in a simplified MSSM scenario. We find that, at the 27 TeV LHC with an integrated luminosity of ${\cal L} = 15~\text{ab}^{-1}$, $m_{\tilde{t}_1}<1900$ GeV and $\mu<750$ GeV can be excluded at $2\sigma$ through the leptonic mono-top channel, while $m_{\tilde{t}_1}<1200$ GeV and $\mu<350$ GeV can be excluded at $2\sigma$ through the hadronic channel.
high energy physics phenomenology
The many-worlds interpretation (MWI) of quantum mechanics is studied from an unprecedented ontological perspective, based on the reality of the (semi-)deterministic parallel worlds in the interpretation. It is demonstrated that, thanks to the uncertainty principle, there would be no consistent way to specify the correct ontology of the Universe; hence the MWI is subject to an inherent contradiction, implying that the world we live in is unreal.
quantum physics
In this paper we address the benefit of adding adversarial training to the task of monocular depth estimation. A model can be trained in a self-supervised setting on stereo pairs of images, where depth (disparity) is an intermediate result in a right-to-left image reconstruction pipeline. For the quality of the image reconstruction and disparity prediction, a combination of different losses is used, including L1 image reconstruction losses and left-right disparity smoothness. These are local pixel-wise losses, while depth prediction requires global consistency. Therefore, we extend the self-supervised network to become a Generative Adversarial Network (GAN), by including a discriminator that should tell apart reconstructed (fake) images from real images. We evaluate Vanilla GANs, LSGANs and Wasserstein GANs in combination with different pixel-wise reconstruction losses. Based on extensive experimental evaluation, we conclude that adversarial training is beneficial if and only if the reconstruction loss is not too constrained. Even though adversarial training seems promising because it promotes global consistency, non-adversarial training outperforms (or is on par with) any method trained with a GAN when a constrained reconstruction loss is used in combination with batch normalisation. Based on the insights of our experimental evaluation, we obtain state-of-the-art monocular depth estimation results by using batch normalisation and different output scales.
electrical engineering and systems science
The article [HPS] established a monotonicity inequality for the Helmholtz equation and presented applications to shape detection and local uniqueness in inverse boundary problems. The monotonicity inequality states that if two scattering coefficients satisfy $q_1 \leq q_2$, then the corresponding Neumann-to-Dirichlet operators satisfy $\Lambda(q_1) \leq \Lambda(q_2)$ up to a finite dimensional subspace. Here we improve the bounds for the dimension of this space. In particular, if $q_1$ and $q_2$ have the same number of positive Neumann eigenvalues, then the finite dimensional space is trivial.
mathematics
Negative magnetoresistance is rare in non-magnetic materials. Recently, a negative magnetoresistance has been observed in the quantum limit of $\beta$-Ag$_2$Se, where only one band of Landau levels is occupied in a strong magnetic field parallel to the applied current. $\beta$-Ag$_2$Se is a material that hosts a Kramers Weyl cone with band degeneracy near the Fermi energy. Kramers Weyl cones exist at time-reversal invariant momenta in all symmorphic chiral crystals, and at a subset of these momenta, including the $\Gamma$ point, in non-symmorphic chiral crystals. Here, we present a theory for the negative magnetoresistance in the quantum limit of Kramers Weyl semimetals. We show that, although there is a band touching similar to those in Weyl semimetals, negative magnetoresistance can exist without a chiral anomaly. We find that it requires screened Coulomb scattering potentials between electrons and impurities, which is naturally the case in $\beta$-Ag$_2$Se.
condensed matter
Within this paper, an evolutionary approach to an alternative CellLineNet, a convolutional neural network adept at the classification of epithelial breast cancer cell lines, is explored. The evolutionary algorithm introduces control variables that guide the search over architectures in a space of inverted residual blocks, bottleneck blocks, residual blocks and a basic 2x2 convolutional block. The promise of EvoCELL is predicting which combination or arrangement of the feature-extracting blocks produces the best model architecture for a given task. We show how the performance of the fittest model evolved after each generation. The final evolved model, CellLineNet V2, classifies five types of epithelial breast cell lines: two human cancer lines, two normal immortalized lines, and one immortalized mouse line (MDA-MB-468, MCF7, 10A, 12A and HC11). The multiclass cell line classification convolutional neural network extends our earlier work on a binary breast cancer cell line classification model. This paper presents an ongoing exploratory approach to neural network architecture design and is presented for further study.
computer science
Axion-like particles (ALPs) are a promising kind of dark matter candidate particle that are predicted to couple with photons in the presence of magnetic fields. The oscillations between photons and ALPs travelling in magnetic fields have been used to constrain ALP properties. In this work, we obtain new constraints on the ALP mass $m_{\rm a}$ and the photon-ALP coupling constant $g$ with two different magnetic field models, using TeV photons from PKS 2155-304. One is the discrete-$\varphi$ model, in which the magnetic field orientation angle $\varphi$ changes discretely and randomly from one coherent domain to the next; the other is the linearly-continuous-$\varphi$ model, in which $\varphi$ varies continuously across neighboring coherent domains. For the discrete-$\varphi$ model, we obtain the best constraints on the ALP mass, $m_{1}=m_{\rm a}/(1\ {\rm neV})=0.1$, and on the photon-ALP coupling constant, $g_{11}=g/(10^{-11}\ {\rm GeV^{-1}})=5$; the reasonable range of the ALP mass $m_{1}$ is 0.08 $\thicksim$ 0.2 when $g_{11}=5$, and the only reasonable value of the photon-ALP coupling constant is $g_{11}=5$ when $m_{1}=0.1$. For the linearly-continuous-$\varphi$ model, we obtain the best constraints $m_{1}=0.1$ and $g_{11}=0.7$; the reasonable range of the ALP mass $m_{1}$ is 0.05 $\thicksim$ 0.4 when $g_{11}=0.7$, and the reasonable range of the photon-ALP coupling constant $g_{11}$ is 0.5 $\thicksim$ 1 when $m_{1}=0.1$. All the results are consistent with the upper bound ($g<6.6\times10^{-11}\ {\rm GeV^{-1}}$, i.e. $g_{11}<6.6$) set by the CAST experiment.
astrophysics
Tensor network methods are powerful and efficient tools for studying the properties and dynamics of statistical and quantum systems, in particular in one and two dimensions. In recent years, these methods have been applied to lattice gauge theories, yet these theories remain a challenge in $(2+1)$ dimensions. In this article, we present a new (decorated) tensor network algorithm, in which the tensors encode the lattice gauge amplitude expressed in the fusion basis. This has several advantages. Firstly, the fusion basis diagonalizes the operators measuring the magnetic fluxes and electric charges associated to a hierarchical set of regions; the algorithm therefore gives direct access to these observables. Secondly, the fusion basis is, as opposed to the previously employed spin network basis, stable under coarse graining. Thirdly, due to the hierarchical structure of the fusion basis, the algorithm implements predefined disentanglers that remove short-scale entanglement. We apply this new algorithm to lattice gauge theories defined for the quantum group $\text{SU}(2)_{\rm k}$ and identify a weak and a strong coupling phase for various levels $\rm k$. As we increase the level $\rm k$, the critical coupling $g_c$ decreases linearly, suggesting the absence of a deconfining phase for the continuous group $\text{SU}(2)$. Moreover, we illustrate the scaling behaviour of the Wilson loops in the two phases.
high energy physics theory
Primary neuronal cultures have been widely used to study neuronal morphology, neurophysiology, neurodegenerative processes, and the molecular mechanisms of synaptic plasticity underlying learning and memory. Yet, the unique behavioral properties of neurons make them challenging to study, with phenotypic differences expressed as subtle changes in neuronal arborization rather than easy-to-assay features such as cell count. The need to analyze morphology, growth, and intracellular transport has motivated the development of increasingly sophisticated microscopes and image analysis techniques. Due to its high-contrast, high-specificity output, many assays rely on confocal fluorescence microscopy, genetic methods, or antibody staining techniques. These approaches often limit the ability to quantitatively measure dynamic activity such as intracellular transport and growth. In this work, we describe a method for label-free live-cell imaging with antibody-staining specificity, by estimating the associated fluorescent signals via quantitative phase imaging and deep convolutional neural networks. The computationally inferred fluorescence image is then used to generate a semantic segmentation map, annotating subcellular compartments of live unlabeled neural cultures. These synthetic fluorescence maps were further applied to study the time-lapse development of hippocampal neurons, highlighting the relationships between cellular dry mass production and the dynamic transport activity within the nucleus and neurites. Our implementation provides a high-throughput strategy to analyze neural network arborization dynamically, with high specificity and without the typical phototoxicity and photobleaching limitations associated with fluorescent markers.
electrical engineering and systems science
If the weak equivalence principle is violated, then different types of neutrinos would couple differently to gravity, and that may produce a gravity-induced oscillation for neutrinos of different flavour. We explore here the possibility that a very small violation of the weak equivalence principle (VEP) can be probed by ultra-high-energy neutrinos from distant astrophysical sources. The very long baseline and the ultra-high energies of such neutrinos could be helpful in probing very small VEP. We consider a 4-flavour neutrino scenario (3 active + 1 sterile) with both mass-flavour and gravity-induced oscillations and compare the detection signatures for these neutrinos (muon tracks and shower events), with and without gravity-induced oscillations, at a kilometer-scale detector such as IceCube. We find that even a very small VEP ($\sim 10^{-42}$) can considerably affect the detected muon yield produced by UHE neutrinos from distant Gamma Ray Bursts (GRBs).
high energy physics phenomenology
Exploratory cancer drug studies test multiple tumor cell lines against multiple candidate drugs. The goal in each paired (cell line, drug) experiment is to map out the dose-response curve of the cell line as the dose level of the drug increases. We propose Bayesian Tensor Filtering (BTF), a hierarchical Bayesian model for dose-response modeling in multi-sample, multi-treatment cancer drug studies. BTF uses low-dimensional embeddings to share statistical strength between similar drugs and similar cell lines. Structured shrinkage priors in BTF encourage smoothness in the dose-response curves while remaining adaptive to sharp jumps when the data call for it. We focus on a pair of cancer drug studies exhibiting a particular pathology in their experimental design, leading us to a non-conjugate monotone mixture-of-Gammas likelihood. To perform posterior inference, we develop a variant of the elliptical slice sampling algorithm for sampling from linearly-constrained multivariate normal priors with non-conjugate likelihoods. In benchmarks, BTF outperforms state-of-the-art methods for covariance regression and dynamic Poisson matrix factorization. On the two cancer drug studies, BTF outperforms the current standard approach in biology and reveals potential new biomarkers of drug sensitivity in cancer. Code is available at https://github.com/tansey/functionalmf.
statistics
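For context, here is a minimal Python sketch of standard elliptical slice sampling (Murray, Adams and MacKay, 2010), the algorithm the abstract above builds its constrained variant upon; the handling of linear constraints and the non-conjugate monotone likelihood of BTF are omitted.

```python
import numpy as np

def elliptical_slice(x, prior_sample, log_lik, rng):
    """One update of standard elliptical slice sampling for a N(0, Sigma)
    prior; the constrained variant used by BTF would additionally have to
    respect the linear constraints."""
    nu = prior_sample()                       # auxiliary draw from the prior
    log_y = log_lik(x) + np.log(rng.uniform())
    theta = rng.uniform(0.0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        x_new = x * np.cos(theta) + nu * np.sin(theta)
        if log_lik(x_new) > log_y:
            return x_new
        if theta < 0:                         # shrink the angle bracket
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy usage: N(0, I) prior, Gaussian likelihood centered at 1.
rng = np.random.default_rng(3)
log_lik = lambda x: -0.5 * np.sum((x - 1.0) ** 2)
x = np.zeros(2)
draws = []
for _ in range(2000):
    x = elliptical_slice(x, lambda: rng.normal(size=2), log_lik, rng)
    draws.append(x)
print(np.mean(draws, axis=0))   # posterior mean ~ [0.5, 0.5]
```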
Three-wave mixing spectroscopy of chiral molecules, which exist in left-handed and right-handed conformations, allows for enantio-selective population transfer despite random orientation of the molecules. This is based on constructive interference of the three-photon pathways for one enantiomer and destructive one for the other. We prove here that three mutually orthogonal polarization directions are required to this end. Two different dynamical regimes exist to realize enantio-selective population transfer, and we show that they correspond to different phase conditions in the three-wave mixing. We find the excitation scheme used in current rotational three-wave mixing experiments of chiral molecules with $C_1$ symmetry to be close to optimal and discuss prospects for ro-vibrational three-wave mixing experiments of axially chiral molecules. Our comprehensive study allows us to clarify earlier misconceptions in the literature.
quantum physics
We propose and experimentally demonstrate a new spectroscopic method, image-charge detection, for the Rydberg states of surface electrons on liquid helium. The excitation of the Rydberg states of the electrons induces an image current in the circuit to which the electrons are capacitively coupled. In contrast to the conventional microwave absorption measurement, this method makes it possible to resolve the transitions to high-lying Rydberg states of the surface electrons. We also show that this method can potentially be used to detect quantum states of a single electron, which paves a way to utilize the quantum states of the surface electrons on liquid helium for quantum computing.
condensed matter
An important objective of experimental biology is the quantification of the relationship between predictor and response variables, a statistical analysis often termed variance partitioning (VP). In this paper, a series of simulations is presented, aiming to generate quantitative estimates of the expected statistical uncertainty of VP analyses. We demonstrate scenarios in which the uncertainty in VP estimates is large enough to significantly reduce the statistical reliability of the obtained results. Especially when a predictor variable of a dataset shows a low between-group variance, VP estimates may show a high margin of error. This becomes particularly important when the respective predictor variable explains only a small fraction of the overall variance, or when the number of replicates is particularly small. Moreover, it is demonstrated that the expected error of the VP estimates of a dataset can be approximated by bootstrap resampling, giving researchers a tool for quantifying the uncertainty associated with an arbitrary VP analysis. The applicability of this method is demonstrated by a re-analysis of the Oribatid mite dataset introduced by Borcard and Legendre in 1994 and the Barro Colorado Island tree count dataset by Condit and colleagues. We believe that this study may encourage biologists to approach routine statistical analyses such as VP more critically, and to report the error associated with them more frequently.
statistics
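A minimal Python sketch of the bootstrap idea described above, using the fraction of variance explained by a single grouping factor as a stand-in VP estimate; the simple one-way design is our assumption, chosen only to keep the example short.

```python
import numpy as np

def explained_variance(groups, values):
    """Fraction of total variance explained by the grouping factor
    (a simple stand-in for a variance-partitioning estimate)."""
    grand = values.mean()
    ss_tot = np.sum((values - grand) ** 2)
    ss_between = sum(len(v) * (v.mean() - grand) ** 2
                     for g in np.unique(groups)
                     for v in [values[groups == g]])
    return ss_between / ss_tot

def bootstrap_se(groups, values, n_boot=2000, seed=0):
    """Bootstrap standard error of the VP estimate, obtained by
    resampling observations with replacement."""
    rng = np.random.default_rng(seed)
    n = len(values)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        stats.append(explained_variance(groups[idx], values[idx]))
    return np.std(stats)

# Toy usage: two groups with a small mean difference and few replicates.
rng = np.random.default_rng(4)
groups = np.repeat([0, 1], 8)
values = rng.normal(loc=groups * 0.5, scale=1.0)
print(explained_variance(groups, values), "+/-", bootstrap_se(groups, values))
```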
Submodularity is desirable for a variety of objectives in content selection where the current neural encoder-decoder framework is inadequate. However, it has so far not been explored in neural encoder-decoder systems for text generation. In this work, we define diminishing attentions with submodular functions and, in turn, prove the submodularity of the effective neural coverage. The greedy algorithm approximating the solution to the submodular maximization problem is not suited to attention-score optimization in auto-regressive generation. Therefore, instead of following the way submodular functions have typically been used, we propose a simplified yet principled solution. The resulting attention module offers an architecturally simple and empirically effective method to improve the coverage of neural text generation. We run experiments on three directed text generation tasks with different levels of recovering rate, across two modalities, three different neural model architectures and two training strategy variations. The results and analyses demonstrate that our method generalizes well across these settings, produces texts of good quality and outperforms state-of-the-art baselines.
computer science
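One simple way to realize a diminishing attention, in the spirit of the abstract above, is to rescale raw attention scores by the marginal gain of a concave function of accumulated coverage; marginal gains of a concave function of a sum are a standard source of submodularity. The Python sketch below is a generic instance under that assumption, not the paper's exact construction.

```python
import numpy as np

def diminishing_attention(raw_attn, coverage, F=np.sqrt):
    """Rescale a raw attention row by the marginal gain of a concave
    function F of cumulative coverage: positions already attended to
    receive diminishing returns."""
    gains = F(coverage + raw_attn) - F(coverage)
    attn = gains / gains.sum()          # renormalize to a distribution
    return attn, coverage + attn        # return updated coverage too

# Toy usage: decode 3 steps over 4 source positions.
rng = np.random.default_rng(7)
coverage = np.zeros(4)
for t in range(3):
    raw = rng.dirichlet(np.ones(4))     # stand-in for raw attention scores
    attn, coverage = diminishing_attention(raw, coverage)
    print(t, np.round(attn, 3))
```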
In this paper, we propose a novel data augmentation method for training neural networks for Direction of Arrival (DOA) estimation. This method focuses on expanding the representation of the DOA subspace of a dataset. Given some input data, it applies a transformation in order to change the DOA information and simulate new, potentially unseen DOAs. Such a transformation is, in general, a combination of a rotation and a reflection, and it can be applied thanks to a well-known property of First Order Ambisonics (FOA). The same transformation is applied to the labels, in order to maintain consistency between input data and target labels. Three methods with different levels of generality are proposed for applying this augmentation principle. Experiments are conducted on two different DOA networks. The results of both experiments demonstrate the effectiveness of the novel augmentation strategy, reducing the DOA error by around 40%.
electrical engineering and systems science
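A small Python sketch of the FOA rotation/reflection augmentation described above. The channel convention (rows W, X, Y, Z, with the dipole channels transforming as a Cartesian vector) and the unit-vector DOA labels are assumptions for illustration.

```python
import numpy as np

def rotate_foa(signal, doa, R):
    """Rotate a first-order Ambisonics recording and its DOA label by the
    same 3x3 rotation (or reflection) matrix R. Assumed channel order:
    rows are (W, X, Y, Z); W is rotation-invariant, while the dipole
    channels (X, Y, Z) transform like a Cartesian vector."""
    out = signal.copy()
    out[1:4] = R @ signal[1:4]     # rotate the dipole channels
    return out, R @ doa            # keep labels consistent with the audio

# Toy usage: 90-degree rotation about the z-axis, composed with a reflection.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
reflect_x = np.diag([-1.0, 1.0, 1.0])
sig = np.random.default_rng(5).normal(size=(4, 16000))  # 1 s at 16 kHz
doa = np.array([1.0, 0.0, 0.0])                          # unit DOA vector
aug_sig, aug_doa = rotate_foa(sig, doa, reflect_x @ Rz)
print(aug_doa)
```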
Bridge sampling is an effective Monte Carlo method for estimating the ratio of normalizing constants of two probability densities, a routine computational problem in statistics, physics, chemistry, and other fields. The Monte Carlo error of the bridge sampling estimator is determined by the amount of overlap between the two densities. In the case of uni-modal densities, Warp-I, II, and III transformations (Meng and Schilling, 2002) are effective for increasing the initial overlap, but they are less so for multi-modal densities. This paper introduces Warp-U transformations that aim to transform multi-modal densities into Uni-modal ones without altering their normalizing constants. The construction of a Warp-U transformation starts with a Normal (or other convenient) mixture distribution $\phi_{\text{mix}}$ that has reasonable overlap with the target density $p$, whose normalizing constant is unknown. The stochastic transformation that maps $\phi_{\text{mix}}$ back to its generating distribution $N(0,1)$ is then applied to $p$ yielding its Warp-U version, which we denote $\tilde{p}$. Typically, $\tilde{p}$ is uni-modal and has substantially increased overlap with $N(0,1)$. Furthermore, we prove that the overlap between $\tilde{p}$ and $N(0,1)$ is guaranteed to be no less than the overlap between $p$ and $\phi_{\text{mix}}$, in terms of any $f$-divergence. We propose a computationally efficient method to find an appropriate $\phi_{\text{mix}}$, and a simple but effective approach to remove the bias which results from estimating the normalizing constants and fitting $\phi_{\text{mix}}$ with the same data. We illustrate our findings using 10 and 50 dimensional highly irregular multi-modal densities, and demonstrate how Warp-U sampling can be used to improve the final estimation step of the Generalized Wang-Landau algorithm (Liang, 2005), a powerful sampling and estimation method.
statistics
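A minimal Python sketch of the Warp-U map described above: a draw x is assigned to a mixture component with probability proportional to its responsibility and then standardized by that component. The one-dimensional setting and the fixed, pre-fitted phi_mix are simplifying assumptions.

```python
import numpy as np

def warp_u(x, w, mu, sigma, rng):
    """Stochastic Warp-U transformation: pick mixture component k with
    probability proportional to w_k * N(x; mu_k, sigma_k^2), then
    standardize by that component. Applied to draws from a multi-modal p,
    this typically yields a uni-modal density overlapping N(0, 1)."""
    dens = w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / sigma
    k = rng.choice(len(w), p=dens / dens.sum())
    return (x - mu[k]) / sigma[k]

# Toy usage: warp draws from a bimodal target using a 2-component phi_mix.
rng = np.random.default_rng(6)
w = np.array([0.5, 0.5])
mu = np.array([-3.0, 3.0])
sigma = np.array([1.0, 1.0])
x = np.concatenate([rng.normal(-3, 1, 5000), rng.normal(3, 1, 5000)])
z = np.array([warp_u(xi, w, mu, sigma, rng) for xi in x])
print(z.mean(), z.std())   # close to 0 and 1
```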
We study SU(5) chiral gauge theories on $R^3\times S^1$. With an unequal number of fundamental and antifundamental matter representations, we calculate nontrivial pre-ADS superpotentials generated by composite multi-monopoles. We also point out that the structure of the composite multi-monopoles can be determined simply from the affine Dynkin diagrams of the gauge group and its unbroken subgroup. For the case of one flavor, we find that the superpotential is independent of the composite meson. We show that dynamical 4D SUSY breaking in the simplest chiral SU(5) gauge theory can be demonstrated directly via semi-classical effects on the circle.
high energy physics theory
The Lasserre Hierarchy is a set of semidefinite programs which yield increasingly tight bounds on optimal solutions to many NP-hard optimization problems. The hierarchy is parameterized by levels, with a higher level corresponding to a more accurate relaxation. High level programs have proven to be invaluable components of approximation algorithms for many NP-hard optimization problems. There is a natural analogous quantum hierarchy, which is also parameterized by level and provides a relaxation of many (QMA-hard) quantum problems of interest. In contrast to the classical case, however, there is only one approximation algorithm which makes use of higher levels of the hierarchy. Here we provide the first ever use of the level-$2$ hierarchy in an approximation algorithm for a particular QMA-complete problem, so-called Quantum Max Cut. We obtain modest improvements on state-of-the-art approximation factors for this problem, as well as demonstrate that the level-$2$ hierarchy satisfies many physically-motivated constraints that the level-$1$ does not satisfy. Indeed, this observation is at the heart of our analysis and indicates that higher levels of the quantum Lasserre Hierarchy may be very useful tools in the design of approximation algorithms for QMA-complete problems.
quantum physics
In this paper, we propose \texttt{FedGLOMO}, the first (first-order) FL algorithm that achieves the optimal iteration complexity (i.e., matching the known lower bound) on smooth non-convex objectives -- without using clients' full gradients in each round. Our key algorithmic idea enabling this optimal complexity is to apply judicious momentum terms that promote variance reduction in both the local updates at the clients and the global update at the server. Our algorithm is also provably optimal even with compressed communication between the clients and the server, which is an important consideration in the practical deployment of FL algorithms. Our experiments illustrate the intrinsic variance-reduction effect of \texttt{FedGLOMO}, which implicitly suppresses client drift in heterogeneous data distribution settings and promotes communication efficiency. As a prequel to \texttt{FedGLOMO}, we propose \texttt{FedLOMO}, which applies momentum only in the local client updates. We establish that \texttt{FedLOMO} enjoys improved convergence rates under common non-convex settings compared to prior work, and with fewer assumptions.
statistics
All attempts to directly detect particle dark matter (DM) scattering on nuclei suffer from the partial or total loss of sensitivity for DM masses in the GeV range or below. We derive novel constraints from the inevitable existence of a subdominant, but highly energetic, component of DM generated through collisions with cosmic rays. Subsequent scattering inside conventional DM detectors, as well as neutrino detectors sensitive to nuclear recoils, limits the DM-nucleon scattering cross section to be below $10^{-31}$ cm$^2$ for both spin-independent and spin-dependent scattering of light DM.
high energy physics phenomenology
In-bulk processing of materials by laser radiation has largely evolved over the last decades and still opens up new scientific and industrial potentials. The development of any in-bulk processing application relies on the knowledge of laser propagation, and especially the volumetric field distribution near the focus. Many commercial programs can simulate this, but, in order to adapt them, or to develop new methods, one usually needs to create specific software. Besides, most of the time one also needs to measure the actual field distribution near the focus to evaluate the assumptions made in the simulation. To give easy access to this knowledge, we present our high-precision field distribution measuring method and release our in-house software InFocus under the Creative Commons 4.0 License. Our measurements provide 300-nm longitudinal resolution and diffraction-limited lateral resolution. The in-house software allows fast vectorial analysis of the focused volumetric field distribution in the bulk. The simulations of light propagation under different conditions (focusing optics, wavelength, spatial shape, propagation medium) are in excellent agreement with propagation imaging experiments. The aberrations provoked by the refractive index mismatch, as well as those induced by the focusing optics, are both taken into account. The results indicate that the proposed model is suitable for the precise evaluation of energy deposition.
physics
The Stokes resolvent problem $\lambda u - \Delta u + \nabla \phi = f$ with $\mathrm{div}(u) = 0$ subject to homogeneous Dirichlet or homogeneous Neumann-type boundary conditions is investigated. In the first part of the paper we show that for Neumann-type boundary conditions the operator norm of $\mathrm{L}^2_{\sigma} (\Omega) \ni f \mapsto \pi \in \mathrm{L}^2 (\Omega)$ decays like $\lvert \lambda \rvert^{- 1 / 2}$ which agrees exactly with the scaling of the equation. In comparison to that, we show that the operator norm of this mapping under Dirichlet boundary conditions decays like $\lvert \lambda \rvert^{- \alpha}$ for $0 \leq \alpha < 1 / 4$ and we show that this decay rate cannot be improved to any exponent $\alpha > 1 / 4$, thereby, violating the natural scaling of the equation. In the second part of this article, we investigate the Stokes resolvent problem subject to homogeneous Neumann-type boundary conditions if the underlying domain $\Omega$ is convex. We establish optimal resolvent estimates and gradient estimates in $\mathrm{L}^p (\Omega ; \mathbb{C}^d)$ for $2d / (d + 2) < p < 2d / (d - 2)$ (with $1 < p < \infty$ if $d = 2$). This interval is larger than the known interval for resolvent estimates subject to Dirichlet boundary conditions on general Lipschitz domains and is to the best knowledge of the author the first result that provides $\mathrm{L}^p$-estimates for the Stokes resolvent subject to Neumann-type boundary conditions on general convex domains.
mathematics
We present a density functional theory analysis of nitrogen-vacancy (NV) centers in diamond which are located in the vicinity of extended defects, namely intrinsic stacking faults (ISF), extrinsic stacking faults (ESF), and coherent twin boundaries (CTB) on \{111\} planes in diamond crystals. Several sites for NV centers close to the extended defects are energetically preferred with respect to the bulk crystal. This indicates that NV centers may be enriched at extended defects. We report the hyperfine structure (HFS) and zero-field splitting (ZFS) parameters of the NV centers at the extended defects, which typically deviate by about 10\%, but in some cases by up to 90\%, from their bulk values. Furthermore, we find that the influence of the extended defects on the NV centers is of short range: NV centers that are about three double layers (corresponding to $\sim6$ $\mathring{A}$) away from the defect planes already show bulk-like behavior.
condensed matter
We establish the converse of Weyl's eigenvalue inequality for additive Hermitian perturbations of a Hermitian matrix.
mathematics
We consider a matching system where items arrive one by one at each node of a compatibility network according to Poisson processes and depart from it as soon as they are matched to a compatible item. The matching policy considered is a generalized max-weight policy where decisions can be noisy. Additionally, some of the nodes may have impatience, i.e., their items may leave the system before being matched. Using specific properties of the max-weight policy, we construct several Lyapunov functions, including a simple quadratic one. This allows us to establish stability results, to construct bounds for the stationary mean and variance of the total number of items in the system, and to prove exponential speed of convergence towards the stationary measure. We finally illustrate some of these results using simulations on toy examples.
mathematics
We deal with the solution of a generic linear inverse problem in the Hilbert space setting. The exact right-hand side is unknown and only accessible through discretised measurements corrupted by white noise with unknown, arbitrary distribution. The measuring process can be repeated, which allows the measurement error to be estimated through averaging. We show convergence to the true solution of the infinite-dimensional problem, for a priori and a posteriori regularisation schemes, as the number of measurements and the dimension of the discretisation tend to infinity, under natural and easily verifiable conditions on the discretisation.
mathematics
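To illustrate the setting described above, here is a small Python sketch: repeated noisy measurements are averaged, and a Tikhonov regulariser with an a posteriori (discrepancy-principle) parameter choice is applied to the averaged data. The discretised integration operator, the noise-level estimate, and the specific parameter rule are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized solution of A x = y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def discrepancy_solve(A, y, delta, tau=1.2):
    """A posteriori rule: shrink alpha until ||A x - y|| <= tau * delta,
    where delta estimates the noise level of the averaged data."""
    alpha = 1.0
    for _ in range(60):
        x = tikhonov(A, y, alpha)
        if np.linalg.norm(A @ x - y) <= tau * delta:
            return x, alpha
        alpha *= 0.5
    return x, alpha

# Toy usage: m repeated noisy measurements of a mildly ill-posed system.
rng = np.random.default_rng(8)
n, m = 20, 50
A = np.tril(np.ones((n, n))) / n                 # discretized integration
x_true = np.sin(np.linspace(0, np.pi, n))
noise = 0.05
ys = A @ x_true + noise * rng.normal(size=(m, n))
y_bar = ys.mean(axis=0)                          # averaging reduces noise
delta = noise / np.sqrt(m) * np.sqrt(n)          # noise level of the mean
x_hat, alpha = discrepancy_solve(A, y_bar, delta)
print(alpha, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```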
In high-temperature SU(2) gluodynamics, the condensation of the zero-component gauge-field potential A_0 = const and its gauge-fixing dependence are investigated. A_0 is mutually related to the Polyakov loop <L>. The two-loop effective potential W(A_0, xi) is recalculated in the background relativistic R_xi gauge. It depends on the gauge parameter xi, has a nontrivial minimum and satisfies Nielsen's identity. These properties signal the gauge invariance of the condensation phenomenon. Following the idea of Belyaev, we express W(A_0, xi) in terms of <L>. The obtained effective potential of the order parameter differs from that derived by this author: it is independent of xi and has a nontrivial minimum position, from which the A_0 condensation follows. We show that the equation relating A_0 and (A_0)|_(classical) coincides with the special characteristic orbit in the (A_0, xi)-plane along which W(A_0, xi) is xi-independent. In this way the link between these two gauge-invariant descriptions is established. The minimum value of the Polyakov loop is calculated. A comparison with the results of other authors is given.
high energy physics phenomenology