A class of 4d $\mathcal{N}=3$ SCFTs can be obtained by gauging a discrete subgroup of the global symmetry group of $\mathcal{N}=4$ Super Yang-Mills theory. This discrete subgroup contains elements of both the $SU(4)$ R-symmetry group and the $SL(2,\mathbb{Z})$ S-duality group of $\mathcal{N}=4$ SYM. We give a prescription for how to perform the discrete gauging at the level of the superconformal index and the Higgs branch Hilbert series. We interpret and match the information encoded in these indices to known results for rank one $\mathcal{N}=3$ theories. Our prescription is easily generalised to the Coulomb branch and the Higgs branch indices of higher rank theories, allowing us to make new predictions for these theories. Most strikingly, we find that the Coulomb branches of higher rank theories are generically not freely generated.
high energy physics theory
In almost all quantum applications, one of the key steps is to verify that the fidelity of the prepared quantum state meets expectations. In this paper, we propose a new approach to this problem using machine learning techniques. Compared to other fidelity estimation methods, our method is applicable to arbitrary quantum states, the number of required measurement settings is small, and this number does not increase with the size of the system. For example, for a general five-qubit quantum state, only four measurement settings are required to predict its fidelity with $\pm1\%$ precision in a non-adversarial scenario. This machine learning-based approach for estimating quantum state fidelity has the potential to be widely used in the field of quantum information.
quantum physics
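As background for the fidelity-estimation task (this is not the paper's machine learning method, and the function name and shot count below are illustrative), a minimal sketch of the simplest single-setting case: for a single-qubit target state $|0\rangle$, the fidelity is $F = \langle 0|\rho|0\rangle = (1+\langle Z\rangle)/2$, i.e. the probability of outcome 0 in a Z-basis measurement, so it can be estimated from repeated shots in one measurement setting.

```python
import random

def estimate_fidelity_to_zero(p0, shots=10000, seed=1):
    """Estimate the fidelity of a single-qubit state rho to the target
    |0>.  F = <0|rho|0> = (1 + <Z>)/2 is exactly the probability p0 of
    measuring outcome 0 in the Z basis, so the empirical frequency of
    outcome 0 over repeated shots is an unbiased estimator of F."""
    rng = random.Random(seed)
    zeros = sum(1 for _ in range(shots) if rng.random() < p0)
    return zeros / shots

# A state with <0|rho|0> = 0.9: the estimate concentrates near 0.9,
# with standard error sqrt(F * (1 - F) / shots), about 0.003 here.
fidelity = estimate_fidelity_to_zero(0.9)
```

For multi-qubit states and few settings, as in the paper, the mapping from measurement outcomes to fidelity is learned rather than given by a closed-form identity like the one above.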
We compute the dominant two-loop corrections to the Higgs trilinear coupling $\lambda_{hhh}$ and to the Higgs quartic coupling $\lambda_{hhhh}$ in models with extended Higgs sectors, using the effective-potential approximation. We provide in this paper all necessary details about our calculations, and present general $\overline{\text{MS}}$ expressions for derivatives of the integrals appearing in the effective potential at two loops. We also consider three particular Beyond-the-Standard-Model (BSM) scenarios -- namely a typical scenario of an Inert Doublet Model (IDM), and scenarios of a Two-Higgs-Doublet Model (2HDM) and of a Higgs Singlet Model (HSM) without scalar mixing -- and we include all the necessary finite counterterms to obtain (in addition to $\overline{\text{MS}}$ results) on-shell scheme expressions for the corrections to the Higgs self-couplings. With these analytic results, we investigate the possible magnitude of two-loop BSM contributions to the Higgs self-couplings and the fate of the non-decoupling effects that are known to appear at one loop. We find that, at least as long as perturbative unitarity conditions are fulfilled, the size of two-loop corrections remains well below that of one-loop corrections. Typically, two-loop contributions to $\lambda_{hhh}$ amount to approximately 20% of those at one loop, implying that the non-decoupling effects observed at one loop are not significantly modified, but also meaning that higher-order corrections need to be taken into account in view of future precise measurements of the Higgs trilinear coupling.
high energy physics phenomenology
We demonstrate that $\mathbb{Z}_2$ gauge transformations and lattice deformations in Kitaev's honeycomb lattice model can have the same description in the continuum limit of the model in terms of chiral gauge fields. The chiral gauge fields are coupled to the Majorana fermions that satisfy the Dirac dispersion relation in the non-Abelian sector of the model. For particular values, the effective chiral gauge field becomes equivalent to the $\mathbb{Z}_2$ gauge field, enabling us to associate effective fluxes to lattice deformations. Motivated by this equivalence, we consider Majorana-bounding $\pi$ vortices and Majorana-bounding lattice twists and demonstrate that they are adiabatically connected to each other. This equivalence opens the possibility for novel encoding of Majorana-bounding defects that might be easier to realise in experiments.
condensed matter
Traumatic Brain Injury (TBI) is a common cause of death and disability. However, existing tools for TBI diagnosis are either subjective or require extensive clinical setup and expertise. The increasing affordability and reduction in size of relatively high-performance computing systems, combined with promising results from TBI-related machine learning research, make it possible to create compact and portable systems for early detection of TBI. This work describes a Raspberry Pi based portable, real-time data acquisition and automated processing system that uses machine learning to efficiently identify TBI and automatically score sleep stages from a single-channel Electroencephalogram (EEG) signal. We discuss the design, implementation, and verification of the system, which can digitize the EEG signal using an Analog to Digital Converter (ADC) and perform real-time signal classification to detect the presence of mild TBI (mTBI). We utilize Convolutional Neural Network (CNN) and XGBoost based predictive models to evaluate the performance and demonstrate the versatility of the system to operate with multiple types of predictive models. We achieve a peak classification accuracy of more than 90% with a classification time of less than 1 s across 16 s - 64 s epochs for TBI vs control conditions. This work can enable the development of systems suitable for field use without requiring specialized medical equipment for early TBI detection applications and TBI research. Further, this work opens avenues to implement connected, real-time TBI-related health and wellness monitoring systems.
electrical engineering and systems science
This note extends a recently proposed algorithm for model identification and robust MPC of asymptotically stable, linear time-invariant systems subject to process and measurement disturbances. Independent output predictors for different steps ahead are estimated with Set Membership methods. It is here shown that the corresponding prediction error bounds are the least conservative in the considered model class. Then, a new multi-rate robust MPC algorithm is developed, employing said multi-step predictors to robustly enforce constraints and stability against disturbances and model uncertainty, and to reduce conservativeness. A simulation example illustrates the effectiveness of the approach.
electrical engineering and systems science
Removing the field-redefinition, Bianchi-identity and total-derivative freedoms from the general form of gauge invariant NS-NS couplings at order $\alpha'^3$, we find that the minimum number of independent couplings is 872. We find that there are schemes in which no term contains the structures $R,\,R_{\mu\nu},\,\nabla_\mu H^{\mu\alpha\beta}$, $\nabla_\mu\nabla^\mu\Phi$. In these schemes, there are sub-schemes in which, except for one term, the couplings contain no term with more than two derivatives. In the sub-scheme that we have chosen, the 872 couplings appear in 55 different structures. We fix some of the parameters in type II superstring theory using the corresponding four-point functions. The coupling that contains terms with more than two derivatives is constrained to be zero by the four-point functions.
high energy physics theory
Development of new techniques to search for particles beyond the standard model is crucial for understanding the ultraviolet completion of particle physics. Several hypothetical particles are predicted to mediate exotic spin-dependent interactions between particles of the standard model that may be accessible to laboratory experiments. However, laboratory searches are mostly conducted for static spin-dependent interactions, with only a few experiments so far addressing spin- and velocity-dependent interactions. Here, we demonstrate a search for exotic spin- and velocity-dependent interactions with a spin-based amplifier. Our technique makes use of hyperpolarized nuclear spins as a pre-amplifier to enhance the effect of the pseudo-magnetic field produced by exotic interactions by an amplification factor of > 100. Using such a spin-based amplifier, we establish constraints on the spin- and velocity-dependent interactions between polarized and unpolarized nucleons in the force range of 0.03-100 m. Our limits represent at least two orders of magnitude improvement over previous experiments. The established technique can be further extended to investigate other exotic spin-dependent interactions.
quantum physics
We investigate the representation theory of the Temperley-Lieb algebra, $TL_n(\delta)$, defined over a field of positive characteristic. The principal question we seek to answer is the multiplicity of simple modules in cell modules for $TL_n$ over arbitrary rings. This provides us with the decomposition numbers for this algebra, as well as the dimensions of all simple modules. We obtain these results from diagrammatic principles, without appealing to realisations of $TL_n$ as endomorphism algebras of $U_q(\mathfrak{sl}_2)$ modules.
mathematics
We apply general moment identities for Poisson stochastic integrals with random integrands to the computation of the moments of Markovian growth-collapse processes. This extends existing formulas for mean and variance available in the literature to closed form moments expressions of all orders. In comparison with other methods based on differential equations, our approach yields polynomial expressions in the time parameter. We also treat the case of the associated embedded chain.
mathematics
In this paper, a strong multiplicity one theorem for Katz modular forms is studied. We show that a cuspidal Katz eigenform which admits an irreducible Galois representation lies in the level and weight old space of a uniquely associated Katz newform. We also establish multiplicity one results for Katz eigenforms whose Galois representations are reducible.
mathematics
We show that a necessary and sufficient condition for a cyclic code C over Z4 of odd length to be an LCD code is that C=(f(x)), where f is a self-reciprocal polynomial in Z4[x].
mathematics
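To illustrate the condition in the statement (a minimal sketch, assuming the standard definition that f is self-reciprocal over Z4 when its reciprocal $x^{\deg f} f(1/x)$ equals a unit multiple of f; the helper name is ours, not from the paper):

```python
def is_self_reciprocal_mod4(coeffs):
    """coeffs lists the coefficients of f in Z4 = {0,1,2,3}, lowest
    degree first, with nonzero leading and constant coefficients.
    f is self-reciprocal when its coefficient-reversed polynomial
    equals u*f for some unit u of Z4 (the units are 1 and 3)."""
    rev = list(reversed(coeffs))
    return any(all((u * a) % 4 == b for a, b in zip(coeffs, rev))
               for u in (1, 3))

# x^2 + x + 1 is palindromic, hence self-reciprocal over Z4.
# x + 2 reverses to 2x + 1, which is not a unit multiple of x + 2.
```

A candidate generator f of a cyclic LCD code could thus be screened with such a check before any further analysis.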
We demonstrate high-frequency (> 3 GHz), high quality factor radio frequency (RF) resonators in unreleased thin film gallium nitride (GaN) on sapphire and silicon carbide substrates by exploiting acoustic guided mode (Lamb wave) resonances. The associated energy trapping, due to mass loading from the gold electrodes, allows us to efficiently excite these resonances from a 50 $\Omega$ input. The higher phase velocity, combined with lower electrode damping, enables high quality factors with moderate electrode pitch, and provides a viable route towards high-frequency piezoelectric devices. The GaN platform, with its ability to guide and localize high-frequency sound on the surface of a chip with access to high-performance active devices, will serve as a key building block for monolithically integrated RF front-ends.
physics
The last two years have seen widespread acceptance of the idea that the Milky Way halo was largely created in an early (8-10 Gyr ago) and massive ($> 10^{10} M_\odot$) merger. The roots of this idea pre-date the Gaia mission, but the exquisite proper motions available from Gaia have made the hypothesis irresistible. We trace the history of this idea, reviewing the series of papers that led to our current understanding.
astrophysics
With the development of high energy physics experiments, a large number of exotic states in the hadronic sector have been observed. In order to shed some light on the nature of the tetraquark and pentaquark candidates, a constituent quark model, along with the Gaussian expansion method, has been employed systematically in real- and complex-range investigations. We review herein the doubly- and fully-heavy tetraquarks, as well as the hidden-charm, hidden-bottom and doubly charmed pentaquarks. Several experimentally observed exotic hadrons are well reproduced within our approach; moreover, their possible compositeness is suggested and other properties are analyzed accordingly, such as their decay widths and general patterns in the spectrum. Besides, we also report some predictions, not yet seen by experiment, within the studied tetraquark and pentaquark sectors.
high energy physics phenomenology
Multiple sets of measurements on the same objects obtained from different platforms may reflect partially complementary information about the studied system. The integrative analysis of such data sets not only provides the opportunity for a deeper understanding of the studied system, but also introduces new statistical challenges. First, separating the information that is common across all or some of the data sets from the information that is specific to each data set is problematic. Furthermore, these data sets are often a mix of quantitative and discrete (binary or categorical) data types, while commonly used data fusion methods require all data sets to be quantitative. In this paper, we propose an exponential family simultaneous component analysis (ESCA) model to tackle the mixed data types problem of multiple data sets. In addition, a structured sparse pattern of the loading matrix is induced through a nearly unbiased group concave penalty to disentangle the global common, local common and distinct information of the multiple data sets. A Majorization-Minimization based algorithm is derived to fit the proposed model. Analytic solutions are derived for updating all the parameters of the model in each iteration, and the algorithm decreases the objective function monotonically in each iteration. For model selection, a missing-value based cross-validation procedure is implemented. The advantages of the proposed method in comparison with other approaches are assessed using comprehensive simulations as well as the analysis of real data from a chronic lymphocytic leukaemia (CLL) study. Availability: the codes to reproduce the results in this article are available at https://gitlab.com/uvabda.
statistics
We experimentally realize a nonlinear quantum protocol on single-photon qubits with linear optical elements and appropriate measurements. The quantum nonlinearity is induced by post-selecting the polarization qubit based on a measurement result obtained on the spatial degree of freedom of the single photon which plays the role of a second qubit. Initially, both qubits are prepared in the same quantum state and an appropriate two-qubit unitary transformation entangles them before the measurement on the spatial part. We analyze the result by quantum state tomography on the polarization degree of freedom. We then demonstrate the usefulness of the protocol for quantum state discrimination by iteratively applying it on either one of two slightly different quantum states which rapidly converge to different orthogonal states by the iterative dynamics. Our work opens the door to employ effective quantum nonlinear evolution for quantum information processing.
quantum physics
Variational Auto-encoders (VAEs) are deep generative latent variable models consisting of two components: a generative model that captures a data distribution p(x) by transforming a distribution p(z) over latent space, and an inference model that infers likely latent codes for each data point (Kingma and Welling, 2013). Recent work shows that traditional training methods tend to yield solutions that violate modeling desiderata: (1) the learned generative model captures the observed data distribution but does so while ignoring the latent codes, resulting in codes that do not represent the data (e.g. van den Oord et al. (2017); Kim et al. (2018)); (2) the aggregate of the learned latent codes does not match the prior p(z). This mismatch means that the learned generative model will be unable to generate realistic data with samples from p(z) (e.g. Makhzani et al. (2015); Tomczak and Welling (2017)). In this paper, we demonstrate that both issues stem from the fact that the global optima of the VAE training objective often correspond to undesirable solutions. Our analysis builds on two observations: (1) the generative model is unidentifiable - there exist many generative models that explain the data equally well, each with different (and potentially unwanted) properties and (2) bias in the VAE objective - the VAE objective may prefer generative models that explain the data poorly but have posteriors that are easy to approximate. We present a novel inference method, LiBI, mitigating the problems identified in our analysis. On synthetic datasets, we show that LiBI can learn generative models that capture the data distribution and inference models that better satisfy modeling assumptions when traditional methods struggle to do so.
computer science
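For reference, the VAE training objective discussed above is the evidence lower bound (ELBO), maximized over generative parameters $\theta$ and inference parameters $\phi$:

```latex
\mathcal{L}(\theta,\phi;x)
  = \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z\mid x)\,\middle\|\,p(z)\right)
  \le \log p_\theta(x).
```

Issue (1), posterior collapse, corresponds to the KL term being driven to zero so the codes carry no information; issue (2) concerns the aggregate posterior $q_\phi(z) = \mathbb{E}_{p(x)}\left[q_\phi(z\mid x)\right]$ differing from the prior $p(z)$.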
A novel phase retrieval algorithm for broadband hyperspectral phase imaging from noisy intensity observations is proposed. It exploits the advantages of Fourier transform spectroscopy in a self-referencing optical setup and provides, in addition to the spectral intensity distribution, a reconstruction of the investigated object's phase. The noise-amplification (Fellgett's) disadvantage is relaxed by the sparse wavefront noise filtering embedded in the proposed algorithm. The algorithm's reliability is demonstrated by simulation tests and by physical experiments on transparent objects, which show precise phase imaging and object depth (profile) reconstruction.
electrical engineering and systems science
Films of Hf0.5Zr0.5O2 (HZO) contain a network of grain boundaries. In (111) HZO epitaxial films on (001) SrTiO3, for instance, twinned orthorhombic (o-HZO) ferroelectric crystallites coexist with grain boundaries between o-HZO and a residual paraelectric monoclinic (m-HZO) phase. These grain boundaries contribute to the resistive switching response in addition to the genuine ferroelectric polarization switching and have detrimental effects on device performance. Here, it is shown that, by using suitable nanometric capping layers deposited on the HZO film, a radical improvement of the operation window of the tunnel device can be achieved. Crystalline SrTiO3 and amorphous AlOx are explored as capping layers. It is observed that these layers conformally coat the HZO surface and increase the yield and homogeneity of functioning ferroelectric junctions while strengthening endurance. The data show that the capping layers block ionic-like transport channels across grain boundaries. It is suggested that they act as oxygen suppliers to the oxygen-gettering grain boundaries in HZO. In this scenario, it can be envisaged that these and other oxides could also be explored and tested for fully CMOS-compatible technologies.
condensed matter
In this work, we propose a probabilistic teleportation protocol to teleport a single qubit via three-qubit W-states using a two-qubit measurement basis. We show that, for a proper choice of the state parameter of the resource state, the success probability of the protocol can be made very high. We deduce the condition for the successful execution of our teleportation protocol, and this gives us a new class of three-qubit W-states which act as resource states. We construct operators that can be used to verify the teleportation condition in experiment. This verification is necessary to detect whether a given three-qubit state is useful in our teleportation protocol or not. Further, we quantify the amount of entanglement contained in the newly identified shared W-class of states. Moreover, we show that the W-class of shared states used in the teleportation protocol can be prepared using an NMR setup.
quantum physics
Analyzing and utilizing spatiotemporal big data are essential for studies concerning climate change. However, such data are not fully integrated into climate models owing to limitations in statistical frameworks. Herein, we employ VARENN (visually augmented representation of environment for neural networks) to efficiently summarize monthly observations of climate data for 1901-2016 into 2-dimensional graphical images. Using red, green, and blue channels of color images, three different variables are simultaneously represented in a single image. For global datasets, models were trained via convolutional neural networks. These models successfully classified rises and falls in temperature and precipitation. Moreover, similarities between the input and target variables were observed to have a significant effect on model accuracy. The input variables had both seasonal and interannual variations, whose importance was quantified for model efficacy. VARENN is thus an effective method to summarize spatiotemporal data objectively and accurately.
statistics
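The channel-stacking idea described above can be sketched as follows (an illustrative reimplementation, not the VARENN code; the variable choices, spatial layout, and scaling conventions in the actual method may differ):

```python
def to_rgb_channels(var_r, var_g, var_b):
    """Encode three co-registered climate series (e.g. temperature,
    precipitation, solar radiation) as the R, G, B channels of image
    pixels by min-max scaling each series to the 0..255 byte range."""
    def scale(values):
        lo, hi = min(values), max(values)
        if hi == lo:                      # constant series: map to 0
            return [0] * len(values)
        return [round(255 * (v - lo) / (hi - lo)) for v in values]
    return list(zip(scale(var_r), scale(var_g), scale(var_b)))

# Each pixel now carries three variables at once, ready to be reshaped
# into a 2-D image (e.g. months x years) and fed to a CNN.
pixels = to_rgb_channels([0, 5, 10], [1, 2, 3], [7.0, 7.5, 8.0])
```

Min-max scaling per series is one plausible choice; normalising against fixed climatological bounds instead would make images from different regions directly comparable.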
We establish a surface order large deviation estimate for the magnetisation of low temperature $\phi^4_3$. As a byproduct, we obtain a decay of the spectral gap for its Glauber dynamics given by the $\phi^4_3$ singular stochastic PDE. Our main technical contributions are contour bounds for $\phi^4_3$, which extend 2D results by Glimm, Jaffe, and Spencer (1975). We adapt an argument by Bodineau, Velenik, and Ioffe (2000) to use these contour bounds to study phase segregation. The main challenge in obtaining the contour bounds is to handle the ultraviolet divergences of $\phi^4_3$ whilst preserving the structure of the low temperature potential. To do this, we build on the variational approach to ultraviolet stability for $\phi^4_3$ developed recently by Barashkov and Gubinelli (2019).
mathematics
Let $0\leq \alpha<n$, $m\in \mathbb{N}$, and let $T_{\alpha,m}$ be an integral operator given by a kernel of the form $$K(x,y)=k_1(x-A_1y)k_2(x-A_2y)\dots k_m(x-A_my),$$ where the $A_i$ are invertible matrices and each $k_i$ satisfies a fractional size condition and a generalized fractional H\"ormander condition. In [Iba\~nez-Firnkorn, G. H., and Riveros, M. S. (2018). Certain fractional type operators with H\"ormander conditions. To appear in Ann. Acad. Sci. Fenn. Math.] it was proved that $T_{\alpha,m}$ is controlled in $L^p(w)$-norms, $w\in A_{\infty}$, by the sum of the maximal operators $M_{A_i^{-1},\alpha}$. In this paper we present the class of weights $\mathcal{A}_{A,p,q}$, where $A$ is an invertible matrix. This is the class of good weights for the weak-type estimate of $M_{A^{-1},\alpha}$. For certain kernels $k_i$ we can characterize the weights for the strong-type estimate of $T_{\alpha,m}$. Also, we give the strong-type estimate using testing conditions.
mathematics
Thanks to space-borne experiments of cosmic-ray (CR) detection, such as the AMS and PAMELA missions in low-Earth orbit, or the Voyager-1 spacecraft in interstellar space, a large collection of multi-channel and time-resolved CR data has become available. Recently, the AMS experiment has released new precision data on the proton and helium fluxes in CRs, measured on a monthly basis during its first six years of mission. The AMS data reveal a remarkable long-term behavior in the temporal evolution of the proton-to-helium ratio at rigidity $R = p/Z <$ 3 GV. As we have argued in a recent work, such behavior may reflect the transport properties of low-rigidity CRs in interplanetary space. In particular, it can be caused by the mass/charge dependence of the CR diffusion coefficient. In this paper, we present our developments in the numerical modeling of CR transport in the Milky Way and in the heliosphere. Within our model, and with the help of approximate analytical solutions, we describe in detail the relations between the properties of CR diffusion and the time-dependent evolution of the proton-to-helium ratio.
astrophysics
Moir\'e superlattices are emerging as a new route for engineering strongly correlated electronic states in two-dimensional van der Waals heterostructures, as recently demonstrated in the correlated insulating and superconducting states in magic-angle twisted bilayer graphene and ABC trilayer graphene/boron nitride moir\'e superlattices. Transition metal dichalcogenide (TMDC) moir\'e heterostructures provide another exciting model system to explore correlated quantum phenomena, with the addition of strong light-matter interactions and large spin-orbital coupling. Here we report the optical detection of strongly correlated phases in semiconducting WSe2/WS2 moir\'e superlattices. Our sensitive optical detection technique reveals a Mott insulator state at one hole per superlattice site ({\nu} = 1), and surprising insulating phases at fractional filling factors {\nu} = 1/3 and 2/3, which we assign to generalized Wigner crystallization on an underlying lattice. Furthermore, the unique spin-valley optical selection rules of TMDC heterostructures allow us to optically create and investigate low-energy spin excited states in the Mott insulator. We reveal an especially slow spin relaxation lifetime of many microseconds in the Mott insulating state, orders-of-magnitude longer than that of charge excitations. Our studies highlight novel correlated physics that can emerge in moir\'e superlattices beyond graphene.
condensed matter
Classification of the extent of damage suffered by a building in a seismic event is crucial from the perspective of safety and repair work. In this study, the authors propose a CNN-based autonomous damage detection model. Over 1200 images of different types of buildings (1000 for training and 200 for testing) were classified into 4 categories according to the extent of damage suffered, namely: no damage, minor damage, major damage, and collapse. The trained network was tested with various algorithms and different learning rates. The best results were obtained with a VGG16 transfer learning model at a learning rate of 1e-5, which gave a training accuracy of 97.85% and a validation accuracy of up to 89.38%. The model developed has real-time application in the event of an earthquake.
computer science
We present an analytical and numerical analysis of particle creation in an optomechanical cavity in parametric resonance. We treat both the electromagnetic field and the mirror as quantum degrees of freedom and study the dynamical evolution as a closed quantum system. We consider different initial states and investigate the spontaneous emission of photons from phonons in the mirror. We find that for initial phononic product states the evolution of the photon number can be described as a non-harmonic quantum oscillator, providing a useful tool to estimate the maximum and mean number of photons produced at arbitrarily high energies. The efficiency of this mechanism is further analyzed for a detuned cavity, as well as the possibility of stimulating the photon production by adding some initial photons to the cavity. We also find relationships for the maximum and mean entanglement between the mirror and the field in these states. Additionally, we study coherent states for the motion of the mirror to connect this model with previous results from quantum field theory with a classical mirror. Finally, we study thermal states of phonons in the wall and the equilibration process that leads to a stationary distribution.
quantum physics
Applications such as autonomous vehicles and medical screening use deep learning models to localize and identify hundreds of objects in a single frame. In the past, it has been shown how an attacker can fool these models by placing an adversarial patch within a scene. However, these patches must be placed in the target location and do not explicitly alter the semantics elsewhere in the image. In this paper, we introduce a new type of adversarial patch which alters a model's perception of an image's semantics. These patches can be placed anywhere within an image to change the classification or semantics of locations far from the patch. We call this new class of adversarial examples `remote adversarial patches' (RAP). We implement our own RAP called IPatch and perform an in-depth analysis on image segmentation RAP attacks using five state-of-the-art architectures with eight different encoders on the CamVid street view dataset. Moreover, we demonstrate that the attack can be extended to object recognition models with preliminary results on the popular YOLOv3 model. We found that the patch can change the classification of a remote target region with a success rate of up to 93% on average.
computer science
A modified periodic boundary condition adequate for non-hermitian topological systems is proposed. Under this boundary condition a topological number characterizing the system is defined in the same way as in the corresponding hermitian system and hence, at the cost of introducing an additional parameter that characterizes the non-hermitian skin effect, the idea of bulk-edge correspondence in the hermitian limit can be applied almost as it is. We develop this framework through the analysis of a non-hermitian SSH model with chiral symmetry, and prove the bulk-edge correspondence in a generalized parameter space. A finite region in this parameter space with a nontrivial pair of chiral winding numbers is identified as topologically nontrivial, indicating the existence of a topologically protected edge state under open boundary.
condensed matter
Longest common subsequence (LCS) is one of the most fundamental problems in combinatorial optimization. Apart from its theoretical importance, LCS has enormous applications in bioinformatics, revision control systems, and data comparison programs. Although a simple dynamic program computes LCS in quadratic time, it has recently been proven that the problem admits a conditional lower bound and may not be solvable in truly subquadratic time. In addition to this, LCS is notoriously hard with respect to approximation algorithms. Apart from a trivial sampling technique that obtains an $n^{x}$ approximation in time $O(n^{2-2x})$, nothing else is known for LCS. This is in sharp contrast to its dual problem, edit distance, for which several linear time solutions have been obtained in the past two decades.
computer science
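The quadratic-time dynamic program mentioned above is the textbook recurrence: on a character match extend the diagonal, otherwise take the better of dropping a character from either string. A minimal sketch:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b in O(n*m)
    time and space: dp[i][j] holds the LCS length of a[:i] and b[:j]."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

# Classic example: an LCS of "ABCBDAB" and "BDCABA" is "BCBA" (length 4).
```

It is exactly this $O(nm)$ table that the conditional lower bound (under SETH) says cannot be improved to truly subquadratic time.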
The development of high-brightness free-electron lasers (FEL) has revolutionised our ability to create and study matter in the high-energy-density (HED) regime. Current diagnostic techniques have been very successful in yielding information on fundamental thermodynamic plasma properties, but provide only limited or indirect information on the detailed quantum structure of these systems, and on how it is affected by ionization dynamics. Here we show how the electronic structure of solid-density nickel, heated to temperatures of 10's of eV on femtosecond timescales, can be studied by resonant (Raman) inelastic x-ray scattering (RIXS) using the Linac Coherent Light Source FEL. We present single-shot measurements of the valence density of states in the x-ray-heated transient system, and extract simultaneously electron temperatures, ionization, and ionization potential energies. The RIXS spectrum provides a wealth of information on the valence structure of the HED system that goes beyond what can be extracted from x-ray absorption or emission spectroscopy alone.
physics
This article determines how to implement spatial spectral analysis of point processes (in two dimensions or more) by establishing the moments of raw spectral summaries of point processes. We establish the first moments of raw direct spectral estimates such as the discrete Fourier transform of a point pattern. These have a number of surprising features that depart from the properties of raw spectral estimates of random fields and time series. As for random fields, the special case of isotropic processes warrants special attention, which we discuss. For time series and random fields, white noise plays a special role, mirrored by the Poisson process in the case of point processes. For random fields, bilinear estimators are prevalent in spectral analysis. We discuss how to smooth any bilinear spectral estimator for a point process. We also determine how to taper this bilinear spectral estimator, how to calculate the periodogram, and how to sample the wavenumbers, and we discuss the correlation of the periodogram. In part, this corresponds to recommending suitable separable as well as isotropic tapers in d dimensions. This, in aggregate, establishes the foundations for spectral analysis of point processes.
statistics
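The raw direct spectral estimate discussed above can be made concrete in its simplest form. As an illustration only (a 1-D, untapered simplification of the paper's d-dimensional, tapered setting):

```python
import cmath

# Illustration only: a 1-D, untapered version of the raw spectral summaries
# discussed above. For points x_1..x_N observed on [0, T], the DFT of the
# pattern is J(w) = sum_j exp(-i w x_j), and the raw (Bartlett-style)
# periodogram is I(w) = |J(w)|^2 / T. These are standard definitions; the
# paper treats d dimensions, tapering and smoothing, all omitted here.
def point_pattern_dft(points, w):
    return sum(cmath.exp(-1j * w * x) for x in points)

def periodogram(points, w, T):
    return abs(point_pattern_dft(points, w)) ** 2 / T
```

At w = 0 the raw periodogram reduces to N^2/T regardless of the pattern, one of the features that distinguishes point-process spectra from the time-series case.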
This paper proposes to employ an Inception-ResNet-inspired deep learning architecture, called Tiny-Inception-ResNet-v2, to help eliminate bonded labor by identifying brick kilns within the "Brick-Kiln-Belt" of South Asia. The framework is developed by training a network on satellite imagery covering 11 different classes of the South Asian region. The dataset developed during the process includes geo-referenced images of brick kilns, houses, roads, tennis courts, farms, sparse trees, dense trees, orchards, parking lots, parks and barren lands, and is made publicly available for further research. Our proposed network architecture, with far fewer learnable parameters, outperforms all state-of-the-art architectures employed for the recognition of brick kilns. Our proposed solution would enable regional monitoring and evaluation mechanisms for the Sustainable Development Goals.
computer science
Cosmological perturbation theory for the late Universe dominated by dark matter is extended beyond the perfect fluid approximation by taking the dark matter velocity dispersion tensor into account as an additional field. A proper tensor decomposition of the latter leads to two additional scalar fields, as well as a vector and a tensor field. Most importantly, the trace of the velocity dispersion tensor can have a spatially homogeneous and isotropic expectation value. While it decays at early times, we show that a back-reaction effect quadratic in perturbations makes it grow strongly at late times. We compare sterile neutrinos as a candidate for comparatively warm dark matter to weakly interacting massive particles as a rather cold dark matter candidate, and show that the late-time growth of velocity dispersion is stronger for the latter. Another feature of a non-vanishing velocity dispersion expectation value is that it destroys the apparent self-consistency of the single-stream approximation and thereby allows times and scales beyond shell-crossing to be treated.
astrophysics
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos. Previous work suggests that representations can be disentangled if all but a few factors in the environment stay constant at any point in time. As a result, algorithms proposed for this problem have only been tested on carefully constructed datasets with this exact property, leaving it unclear whether they will transfer to natural scenes. Here we provide evidence that objects in segmented natural movies undergo transitions that are typically small in magnitude with occasional large jumps, which is characteristic of a temporally sparse distribution. We leverage this finding and present SlowVAE, a model for unsupervised representation learning that uses a sparse prior on temporally adjacent observations to disentangle generative factors without any assumptions on the number of changing factors. We provide a proof of identifiability and show that the model reliably learns disentangled representations on several established benchmark datasets, often surpassing the current state-of-the-art. We additionally demonstrate transferability towards video datasets with natural dynamics, Natural Sprites and KITTI Masks, which we contribute as benchmarks for guiding disentanglement research towards more natural data domains.
statistics
The addition of porosity to thermoelectric materials can significantly increase the figure of merit, ZT, by reducing the thermal conductivity. Unfortunately, porosity is also detrimental to the thermoelectric power factor in the numerator of the figure of merit ZT. In this manuscript we derive strategies to recoup electrical performance in nanoporous Si by fine-tuning the carrier concentration and through judicious design of the pore size and shape so as to provide energy-selective electron filtering. In this study, we considered phosphorus-doped silicon containing discrete pores that are either spheres, cylinders, cubes, or triangular prisms. The effects from these pores are compared with those from extended pores with circular, square and triangular cross-sectional shape, and infinite length perpendicular to the electrical current. A semiclassical Boltzmann transport equation is used to model the Si thermoelectric power factor. This model reveals three key results. First, the largest enhancement in the Seebeck coefficient occurs with cubic pores: the fractional improvement is about 15% at low carrier concentration ($< 10^{20}\ \mathrm{1/cm^3}$), rising to 60% at high carrier concentration for pores with characteristic length around $\sim 1\ \mathrm{nm}$. Second, to obtain the best energy-filtering effect at room temperature, nanoporous Si needs to be doped to a higher carrier concentration than is optimal for bulk Si. Finally, in $n$-type Si thermoelectrics the electron filtering effect that can be generated with nanoscale porosity is significantly lower than the ideal filtering effect; nevertheless, the enhancement in the Seebeck coefficient that can be obtained is large enough to offset the reduction in electrical conductivity caused by porosity.
condensed matter
In this note we discuss various classical membrane solutions in AdS$_4$ spacetime: simple embeddings given by polynomials in ambient space, solutions with non-linear waves, and piecewise linear solutions.
high energy physics theory
We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, and a more aggressive data augmentation, as well as several minor modifications to the nnU-Net pipeline, we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our final ensemble took first place in the BraTS 2020 competition, with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498, 17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.
electrical engineering and systems science
The Kitaev spin liquid (KSL) system has attracted tremendous attention in recent years because of its fundamental significance in condensed matter physics and its promising applications in fault-tolerant topological quantum computation. Material realization of such a system remains a major challenge in the field, despite great effort, due to the unusual configuration of anisotropic spin interactions required. Here we reveal that the rare-earth chalcohalides REChX (RE = rare earth; Ch = O, S, Se, Te; X = F, Cl, Br, I) can serve as a family of KSL candidates. Most family members have the typical SmSI-type structure with the high symmetry of R-3m, and the rare-earth magnetic ions form an undistorted honeycomb lattice. The strong spin-orbit coupling of the 4f electrons intrinsically offers the anisotropic spin interactions required by the Kitaev model. We have grown crystals of YbOCl, synthesized polycrystals of SmSI, ErOF, HoOF and DyOF, and carried out careful structural characterizations. We perform magnetic and heat capacity measurements down to 1.8 K and find no obvious magnetic transition in any of the samples but DyOF. The van der Waals interlayer coupling highlights the true two-dimensionality of the family, which is vital for the exact realization of Abelian/non-Abelian anyons, and the graphene-like feature is a prominent advantage for developing miniaturized devices. The family is expected to act as an inspiring material platform for the exploration of KSL physics.
condensed matter
We study the conformal window of asymptotically free gauge theories containing $N_f$ flavors of fermion matter transforming under the vector and two-index representations of $SO(N),~SU(N)$ and $Sp(2N)$ gauge groups. For $SO(N)$ we also consider the spinorial representation. We determine the critical number of flavors $N_f^{\rm cr}$, corresponding to the lower end of the conformal window, by using the conjectured critical condition on the anomalous dimension of the fermion bilinear at an infrared fixed point, $\gamma_{\bar{\psi}\psi}=1$ or equivalently $\gamma_{\bar{\psi}\psi}(2-\gamma_{\bar{\psi}\psi})=1$. To compute the anomalous dimension we employ the Banks-Zaks conformal expansion up to the $4$th order in $\Delta_{N_f}=N_f^{\rm AF}-N_f$, with $N_f^{\rm AF}$ denoting the onset of the loss of asymptotic freedom, and we show that the latter critical condition performs better with this conformal expansion. To quantify the uncertainties in our analysis, which potentially originate from nonperturbative effects, we propose two distinct approaches, assuming the large-order behavior of the conformal expansion to be either convergent or divergent asymptotic. In the former case, we take the difference in the Pad\'e approximants to the two definitions of the critical condition, whereas in the latter case the truncation error associated with the singularity in the Borel plane is taken into account. Our results are further compared to other analytical methods as well as lattice results available in the literature. In particular, we find that $SU(2)$ with six and $SU(3)$ with ten fundamental flavors are likely on the lower edge of the conformal window, which is consistent with the recent lattice results. We also predict that $Sp(4)$ theories with fundamental and antisymmetric fermions have critical numbers of flavors of approximately ten and five, respectively.
high energy physics phenomenology
The paper proves that the number of k-skip-n-grams for a corpus of size $L$ is $$\frac{Ln + n + k' - n^2 - nk'}{n} \cdot \binom{n-1+k'}{n-1}$$ where $k' = \min(L - n + 1, k)$.
computer science
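As a quick sanity check of the closed form (ours, not from the paper), the count can be compared against brute-force enumeration, taking a k-skip-n-gram to be any increasing index tuple whose total number of skipped positions is at most k:

```python
from itertools import combinations
from math import comb

def skip_ngrams_brute(L, n, k):
    # Count index tuples i_1 < ... < i_n in a corpus of length L whose total
    # number of skipped positions, (i_n - i_1) - (n - 1), is at most k.
    return sum(1 for idx in combinations(range(L), n)
               if idx[-1] - idx[0] - (n - 1) <= k)

def skip_ngrams_formula(L, n, k):
    # The closed form stated in the abstract, with k' = min(L - n + 1, k).
    kp = min(L - n + 1, k)
    return (L * n + n + kp - n * n - n * kp) * comb(n - 1 + kp, n - 1) // n

# The two counts agree over a small grid of corpus sizes and parameters.
for L in range(2, 12):
    for n in range(2, min(L, 5) + 1):
        for k in range(0, 6):
            assert skip_ngrams_brute(L, n, k) == skip_ngrams_formula(L, n, k)
```

For instance, with L = 4, n = 2, k = 2 both counts give 6: the three contiguous bigrams plus the three skip-bigrams spanning one or two gaps.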
Adaptive control is a control method with an adaptation mechanism that reacts to model uncertainties. Here it is used to realize synchronization of a new chaotic system in a unidirectional master-slave topology. The master chaotic system and the slave system are adopted as transmitter and receiver, respectively, for the purpose of secure communications. Both analog and digital designs are realized. The digital system is a cycle- and bit-accurate design with a system rate of $450\,$MHz, targeted at an Artix-7 Nexys 4 FPGA. The transmitters are modulated by analog signals and by fixed-point signals of different resolutions, sampling rates and frequencies. Although the adaptive controller tends to react to the introduction of a modulating signal, we show that with a suitable detection mechanism or filtering, including exponential smoothing, the choice of which depends on the nature of the modulating signal, it is possible to recover the modulating signal at the receiver. Moreover, chaos-based spread-spectrum communication systems are realized based on adaptive synchronization of the new chaotic system. Furthermore, in order to ascertain the robustness of the adaptive controller, the modulated signal is transmitted via an AWGN channel and the probability of error and bit-error rate (BER) are computed by sweeping across sets of SNR and noise power values in a Monte Carlo simulation. It turns out that the error probabilities are reasonably low for effective communication through media with certain noise conditions. Hence, the adaptively controlled communication system can be considered robust.
electrical engineering and systems science
When confronted with massive data streams, summarizing data with dimension reduction methods such as PCA raises theoretical and algorithmic pitfalls. Principal curves act as a nonlinear generalization of PCA and the present paper proposes a novel algorithm to automatically and sequentially learn principal curves from data streams. We show that our procedure is supported by regret bounds with optimal sublinear remainder terms. A greedy local search implementation (called \texttt{slpc}, for Sequential Learning Principal Curves) that incorporates both sleeping experts and multi-armed bandit ingredients is presented, along with its regret computation and performance on synthetic and real-life data.
statistics
This paper extends a result of Amnon Neeman on strong generators in Dperf(X) from X a quasicompact, separated scheme to X a quasicompact, quasiseparated scheme that admits a separator. Neeman's result gives a necessary and sufficient condition for Dperf(X) to be regular. Together with properness over a noetherian commutative ring R, these conditions yield an interesting description of when an R-linear functor H is representable.
mathematics
When is decoherence "effectively irreversible"? Here we examine this central question of quantum foundations using the tools of quantum computational complexity. We prove that, if one had a quantum circuit to determine if a system was in an equal superposition of two orthogonal states (for example, the $|$Alive$\rangle$ and $|$Dead$\rangle$ states of Schr\"{o}dinger's cat), then with only a slightly larger circuit, one could also $\mathit{swap}$ the two states (e.g., bring a dead cat back to life). In other words, observing interference between the $|$Alive$\rangle$ and $|$Dead$\rangle$ states is a "necromancy-hard" problem, technologically infeasible in any world where death is permanent. As for the converse statement (i.e., ability to swap implies ability to detect interference), we show that it holds modulo a single exception, involving unitaries that (for example) map $|$Alive$\rangle$ to $|$Dead$\rangle$ but $|$Dead$\rangle$ to -$|$Alive$\rangle$. We also show that these statements are robust---i.e., even a $\mathit{partial}$ ability to observe interference implies partial swapping ability, and vice versa. Finally, without relying on any unproved complexity conjectures, we show that all of these results are quantitatively tight. Our results have possible implications for the state dependence of observables in quantum gravity, the subject that originally motivated this study.
quantum physics
Optimizing the discriminator in Generative Adversarial Networks (GANs) to completion in the inner training loop is computationally prohibitive, and on finite datasets would result in overfitting. To address this, a common update strategy is to alternate between k optimization steps for the discriminator D and one optimization step for the generator G. This strategy is repeated in various GAN algorithms where k is selected empirically. In this paper, we show that this update strategy is not optimal in terms of accuracy and convergence speed, and propose a new update strategy for Wasserstein GANs (WGAN) and other GANs using the WGAN loss (e.g. WGAN-GP, Deblur GAN, and Super-resolution GAN). The proposed update strategy is based on a loss change ratio comparison of G and D. We demonstrate that the proposed strategy improves both convergence speed and accuracy.
computer science
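The abstract does not spell out the ratio itself, so the following is only a guessed-at sketch of what a loss-change-ratio schedule might look like: compare the relative loss change of G and D after each step and give the next update to the network that is currently changing less. The rule, and the names `relative_change` and `choose_update`, are our assumptions, not the paper's definitions.

```python
# Framework-agnostic sketch of a loss-change-ratio update schedule
# (hypothetical rule, not the paper's exact criterion): measure the relative
# change r = |L_new - L_old| / |L_old| of each network's loss, and train the
# network whose loss is changing less, so neither player runs far ahead.

def relative_change(loss_old, loss_new, eps=1e-12):
    return abs(loss_new - loss_old) / (abs(loss_old) + eps)

def choose_update(d_old, d_new, g_old, g_new):
    """Return 'D' or 'G', naming the network to update next."""
    r_d = relative_change(d_old, d_new)
    r_g = relative_change(g_old, g_new)
    return 'D' if r_d < r_g else 'G'
```

In a training loop this decision would replace the fixed k-to-1 alternation, making the update ratio adapt to how the two losses actually evolve.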
Determining the equilibrium charge of conducting spheres in plasmas is important for interpreting Langmuir probe measurements, plasma surface interactions and dust particle behaviour. The Monte Carlo code Dust in Magnetised Plasmas (DiMPl) has been developed for the purpose of determining the forces and charging behaviour of conducting spheroids under a variety of conditions and benchmarked against previous numerical results. The floating potentials of spheres in isothermal, collisionless, hydrogen plasmas as a function of magnetic field strength and size relative to Debye length are studied using DiMPl and compared with new results from the N-body tree code (pot) and recent particle in cell measurements. The results of all three simulations are similar, identifying a small range at modest ion magnetisation parameters over which the electron current is reduced relative to the ion current. The potential as a function of magnetic field strength is found to be relatively insensitive to dust size for dust smaller than the Debye length. The potential of large dust is found to depend less strongly on flow speed for modest magnetic field strengths and to decrease with increasing flow speed in the presence of strong magnetic fields for smaller dust. A semi-empirical model for the potential of small dust in a collisionless plasma as a function of magnetic field strength is developed which reproduces the expected currents and potentials in the high and low magnetic field limit.
physics
We examine the Banados-Teitelboim-Zanelli (BTZ) black hole in terms of the information geometry and consider what kind of quantum information produces the black hole metric in close connection with the anti-de Sitter space/conformal field theory (AdS/CFT) correspondence. We find a Hessian potential that exactly produces both the BTZ metric and the entanglement entropy formula for CFT_{1+1} at a finite temperature. Taking a free-falling frame near the event horizon is a key procedure to derive these exact results. We also find an alternative Hessian potential that produces the same BTZ metric, which is found using the duality relation based on the Legendre transformation. We realize that the dual representation originates from the entanglement Hamiltonian on the CFT side. Our results suggest that the present information-geometrical approach is very powerful for understanding the mechanism of the holographic renormalization group such as the AdS/CFT correspondence.
high energy physics theory
This chapter presents an overview of actuator attacks that exploit zero dynamics, and countermeasures against them. First, the zero-dynamics attack is re-introduced based on a canonical representation called the normal form. It is then shown that the target dynamic system is at elevated risk if the associated zero dynamics is unstable. From there on, several questions are raised in turn to ascertain when the target system is immune to attacks of this kind. The first question is: Is the target system secure from zero-dynamics attacks if it does not have any unstable zeros? An answer provided for this question is: No, the target system may still be at risk due to another attack surface emerging in the process of implementation. This is followed by a series of further questions, and in the course of providing answers, variants of the classic zero-dynamics attack are presented, from which the vulnerability of the target system is explored in depth. At the end, countermeasures are proposed to render the attack ineffective. Because it is known that the zero dynamics of continuous-time systems cannot be modified by feedback, the main idea of the countermeasure is to relocate any unstable zero to a stable region at the stage of digital implementation through modified digital samplers and holders. Adversaries can still attack actuators, but due to the relocated zeros, they are of little use in damaging the target system.
electrical engineering and systems science
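The implementation attack surface mentioned above can be illustrated with the classical sampling-zeros phenomenon (our illustration, standard in the sampled-data literature, not taken from the chapter): zero-order-hold discretization of a plant with relative degree three or more introduces extra discrete-time zeros, one of which lies outside the unit circle.

```python
import math

# Zero-order-hold discretization of a triple integrator G(s) = 1/s^3 gives
# G(z) = h^3 (z^2 + 4z + 1) / (6 (z - 1)^3) for any sampling period h, so
# the "sampling zeros" are the roots of z^2 + 4z + 1 (a classical result).
sampling_zeros = [(-4 + math.sqrt(12)) / 2, (-4 - math.sqrt(12)) / 2]

# One root lies outside the unit circle: an unstable zero created purely by
# discretization, i.e. a new surface for zero-dynamics-style attacks even
# though the continuous-time plant has no finite zeros at all.
unstable = [z for z in sampling_zeros if abs(z) > 1]
```

The countermeasure described above, modified digital samplers and holders, aims precisely at relocating such discretization-induced zeros into the stable region.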
The entanglement of purification (EoP), which measures the classical correlations and entanglement of a given mixed state, has been conjectured to be dual to the area of the minimal cross section of the entanglement wedge in holography. Using the surface-state correspondence, we propose a `bit thread' formulation of the EoP. With this formulation, proofs of some known properties of the EoP are performed. Moreover, we show that the quantum advantage of dense code (QAoDC), which reflects the increase in the rate of classical information transmission through quantum channel due to entanglement, also admits a flow interpretation. In this picture, we can prove the monogamy relation of QAoDC with the EoP for tripartite states. We also derive a new lower bound for $S(AB)$ in terms of QAoDC, which is tighter than the one given by the Araki-Lieb inequality.
high energy physics theory
We study the learnability of a class of compact operators known as Schatten--von Neumann operators. These operators between infinite-dimensional function spaces play a central role in a variety of applications in learning theory and inverse problems. We address the question of the sample complexity of learning Schatten--von Neumann operators and provide an upper bound on the number of measurements required for the empirical risk minimizer to generalize with arbitrary precision and probability, as a function of the class parameter $p$. Our results give generalization guarantees for regression of infinite-dimensional signals from infinite-dimensional data. Next, we adapt the representer theorem of Abernethy \emph{et al.} to show that empirical risk minimization over an a priori infinite-dimensional, non-compact set can be converted to a convex finite-dimensional optimization problem over a compact set. In summary, the class of $p$-Schatten--von Neumann operators is probably approximately correct (PAC)-learnable via a practical convex program for any $p < \infty$.
statistics
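For reference (the standard definition, not restated in the abstract), the class parameter $p$ refers to the Schatten norm built from the singular values $s_n(T)$ of a compact operator $T$:

```latex
\|T\|_{S_p} \;=\; \Big( \sum_{n \ge 1} s_n(T)^{\,p} \Big)^{1/p},
\qquad
S_p \;=\; \{\, T \text{ compact} : \|T\|_{S_p} < \infty \,\},
```

so $S_1$ is the trace class, $S_2$ the Hilbert--Schmidt class, and the classes grow with $p$.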
In-situ aeroengine maintenance is highly beneficial as it can significantly shorten the current maintenance cycle, which is extensive and costly due to the requirement to disassemble engines from aircraft. However, navigating in and out via inspection ports and performing multi-axis movements with end-effectors in constrained environments (e.g. the combustion chamber) is fairly challenging. A novel extra-slender (diameter-to-length ratio < 0.02) dual-stage continuum robot (16 degrees of freedom) is proposed to navigate in and out of confined environments and attain the configuration shapes required for further repair operations. Firstly, the robot design presents several innovative mechatronic solutions: (i) a dual-stage tendon-driven structure with bevelled disks to achieve the required shapes and to provide selective stiffness for carrying high payloads; (ii) various rigid-compliant combined joints to enable different flexibility and stiffness in each stage; (iii) three commanding cables for each 2-DoF section to minimise the number of actuators while retaining precise actuation. Secondly, a segment-scaled piecewise-constant-curvature-theory based kinematic model and a Kirchhoff-elastic-rod-theory based static model are established by considering the applied forces/moments (friction, actuation, gravity and external load), where the friction coefficient is modelled as a function of bending angle. Finally, experiments were carried out to validate the proposed static modelling and to evaluate the robot's capability to attain the predefined shape and stiffness.
computer science
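The piecewise-constant-curvature (PCC) assumption behind the kinematic model can be illustrated in its simplest planar form (standard PCC geometry; the paper's segment-scaled, friction-aware model is far richer, and `pcc_tip` is our name):

```python
import math

# Planar PCC tip position for one section of arc length L bending through
# angle theta: the backbone is a circular arc of radius r = L / theta, so
# the tip sits at (r (1 - cos theta), r sin theta). Standard PCC geometry;
# multi-section chaining, friction and statics are omitted.
def pcc_tip(L, theta):
    if abs(theta) < 1e-12:          # straight section: tip directly ahead
        return (0.0, L)
    r = L / theta                   # radius of curvature of the arc
    return (r * (1 - math.cos(theta)), r * math.sin(theta))
```

Chaining one such arc transform per 2-DoF section (with a rotation between sections) gives the forward kinematics of the full tendon-driven backbone.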
In his study of the Radon--Nikod\'ym property of Banach spaces, Bourgain showed (among other things) that in any closed, bounded, convex set $A$ that is nondentable, one can find a separated, weakly closed bush. In this note, we prove a generalization of Bourgain's result: in any bounded, nondentable set $A$ (not necessarily closed or convex) one can find a separated, weakly closed approximate bush. Similarly, we obtain as corollaries the existence of $A$-valued quasimartingales with sharply divergent behavior.
mathematics
In our previous paper I (del Valle--Turbiner, Int. J. Mod. Phys. A34, 1950143, 2019) we developed the formalism to study the general $D$-dimensional radial anharmonic oscillator with potential $V(r)= \frac{1}{g^2}\,\hat{V}(gr)$. It was based on perturbation theory (PT) in powers of $g$ (weak coupling regime) and in inverse, fractional powers of $g$ (strong coupling regime), in $r$-space and in $(gr)$-space, respectively. As a result, the Approximant was introduced: a locally-accurate uniform compact approximation of a wave function. Taken as a trial function in variational calculations, it has led to variational energies of unprecedented accuracy for the cubic anharmonic oscillator. In this paper the formalism is applied to both the quartic and the sextic spherically-symmetric radial anharmonic oscillators with two-term potentials $V(r)= r^2 + g^{2(m-1)}\, r^{2m}$, $m=2,3$, respectively. It is shown that a two-parametric Approximant for the quartic oscillator and a five-parametric one for the sextic oscillator, used as trial functions for the first four eigenstates, yield variational energies accurate to 8-12 figures for any $D=1,2,3\ldots$ and $g \geq 0$, while the relative deviation of the Approximant from the exact eigenfunction is less than $10^{-6}$ for any $r \geq 0$.
quantum physics
We study the role of non-abelian anomalies in relativistic fluids. To this end, we compute the local functional that solves the anomaly equations, and obtain analytical expressions for the covariant currents and the Bardeen-Zumino terms. We particularize these results to a background with two flavors, and consider the cases of unbroken and broken chiral symmetry. Finally, we provide explicit results for the constitutive relations of chiral nuclear matter interacting with external electromagnetic fields and in presence of chiral imbalance. We emphasize the non-dissipative nature of the chiral electric effect.
high energy physics theory
In recent years new functionality for VLBI data processing has been added to the CASA package. This paper presents the new CASA tasks 'fringefit' and 'accor', which are closely matched to their AIPS counterparts FRING and ACCOR. Several CASA tasks received upgrades to handle VLBI specific metadata. With the current CASA release VLBI data processing is possible, and functionality will be expanded in the upcoming release. Longer term developments include fringe fitting of broad, non-continuous frequency bands and dispersive delays, which will ensure that the number of use cases for VLBI calibration will increase in future CASA releases.
astrophysics
In this work we present a clustering technique called \textit{multi-level conformal clustering (MLCC)}. The technique is hierarchical in nature because it can be performed at multiple significance levels which yields greater insight into the data than performing it at just one level. We describe the theoretical underpinnings of MLCC, compare and contrast it with the hierarchical clustering algorithm, and then apply it to real world datasets to assess its performance. There are several advantages to using MLCC over more classical clustering techniques: Once a significance level has been set, MLCC is able to automatically select the number of clusters. Furthermore, thanks to the conformal prediction framework the resulting clustering model has a clear statistical meaning without any assumptions about the distribution of the data. This statistical robustness also allows us to perform clustering and anomaly detection simultaneously. Moreover, due to the flexibility of the conformal prediction framework, our algorithm can be used on top of many other machine learning algorithms.
statistics
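The conformal machinery behind this can be sketched in a deliberately toy form (our simplification, not the authors' MLCC algorithm; `nn_distance`, `conformal_p_values` and `anomalies` are our names): score each point by its distance to its nearest neighbour, convert scores to conformal p-values, and flag points with p-value at most alpha as anomalies. Repeating this at several levels alpha gives the multi-level structure, with clusters read off from the non-anomalous points at each level.

```python
# Toy 1-D sketch of conformal anomaly flagging at a significance level
# (a simplification for illustration, not the MLCC algorithm itself).

def nn_distance(x, data):
    # Nonconformity score: distance to the nearest *other* point.
    return min(abs(x - y) for y in data if y != x)

def conformal_p_values(data):
    # p-value of a point = fraction of scores at least as large as its own,
    # so isolated points (large scores) get small p-values.
    scores = [nn_distance(x, data) for x in data]
    n = len(data)
    return [sum(s >= score for s in scores) / n for score in scores]

def anomalies(data, alpha):
    # At significance level alpha, flag points with p-value <= alpha.
    ps = conformal_p_values(data)
    return [x for x, p in zip(data, ps) if p <= alpha]
```

For the data [0.0, 1.0, 2.0, 50.0] and alpha = 0.3, only the isolated point 50.0 is flagged; sweeping alpha over several levels yields one partition per level, which is the hierarchical aspect described above.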
Blazars are a subclass of AGN, and flaring in multi-TeV gamma-rays seems to be the major activity in high energy blazars, a subgroup of blazars. Flaring is also unpredictable and switches between quiescent and active states involving different time scales and fluxes. While in some high energy blazars a strong temporal correlation between X-ray and multi-TeV gamma-ray emission has been observed, outbursts in some others have no low energy counterparts, and such extreme activity, which is not well understood, needs to be addressed through different mechanisms. The extragalactic background light (EBL) plays an important role in the observation of these high energy gamma-rays, as it attenuates them through electron-positron pair production and also changes the spectral shape of the high energy photons. In the context of the photohadronic model, and taking the EBL correction into account, flaring can be explained very well. In a series of papers we have developed this model to explain multi-TeV flaring events from many blazars. Here in this review, the photohadronic model is discussed and applied to explain the multi-TeV flaring from the nearby high energy blazars Markarian 421, Markarian 501 and 1ES1959+650.
astrophysics
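The EBL attenuation invoked above takes the standard form (a textbook relation, not specific to this review): the intrinsic flux is exponentially suppressed by the pair-production optical depth,

```latex
F_{\mathrm{obs}}(E_\gamma) \;=\; F_{\mathrm{int}}(E_\gamma)\,
e^{-\tau_{\gamma\gamma}(E_\gamma,\, z)},
```

where $\tau_{\gamma\gamma}$ grows with both the gamma-ray energy $E_\gamma$ and the source redshift $z$, which is what reshapes the observed multi-TeV spectrum.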
We classify compact Riemann surfaces of genus $g$, where $g-1$ is a prime $p$, which have a group of automorphisms of order $\rho(g-1)$ for some integer $\rho\ge 1$, and determine isogeny decompositions of the corresponding Jacobian varieties. This extends results of Belolipetzky and the second author for $\rho>6$, and of the first and third authors for $\rho=3, 4, 5$ and $6$. As a corollary we classify the orientably regular hypermaps (including maps) of genus $p+1$, together with the non-orientable regular hypermaps of characteristic $-p$, with automorphism group of order divisible by the prime $p$; this extends results of Conder, \v Sir\'a\v n and Tucker for maps.
mathematics
In the blockchain-based, distributed computing platform Ethereum, programs called smart contracts are compiled to bytecode and executed on the Ethereum Virtual Machine (EVM). Executing EVM bytecode is subject to monetary fees---a clear optimization target. Our aim is to superoptimize EVM bytecode by encoding the operational semantics of EVM instructions as SMT formulas and leveraging a constraint solver to automatically find cheaper bytecode. We implement this approach in our EVM Bytecode SuperOptimizer ebso and perform two large scale evaluations on real-world data sets.
computer science
We extend the range of validity of the ARTIS 3D radiative transfer code up to hundreds of days after explosion, when Type Ia supernovae are in their nebular phase. To achieve this, we add a non-local thermodynamic equilibrium (non-LTE) population and ionisation solver, a new multi-frequency radiation field model, and a new atomic dataset with forbidden transitions. We treat collisions with non-thermal leptons resulting from nuclear decays to account for their contribution to excitation, ionisation, and heating. We validate our method with a variety of tests including comparing our synthetic nebular spectra for the well-known one-dimensional W7 model with the results of other studies. As an illustrative application of the code, we present synthetic nebular spectra for the detonation of a sub-Chandrasekhar white dwarf in which the possible effects of gravitational settling of Ne22 prior to explosion have been explored. Specifically, we compare synthetic nebular spectra for a 1.06 M$_\odot$ white dwarf model obtained when 5.5 Gyr of very-efficient settling is assumed to a similar model without settling. We find that this degree of Ne22 settling has only a modest effect on the resulting nebular spectra due to increased Ni58 abundance. Due to the high ionisation in sub-Chandrasekhar models, the nebular [Ni II] emission remains negligible, while the [Ni III] line strengths are increased and the overall ionisation balance is slightly lowered in the model with Ne22 settling. In common with previous studies of sub-Chandrasekhar models at nebular epochs, these models overproduce [Fe III] emission relative to [Fe II] in comparison to observations of normal Type Ia supernovae.
astrophysics
Cytoarchitectonic maps provide microstructural reference parcellations of the brain, describing its organization in terms of the spatial arrangement of neuronal cell bodies as measured from histological tissue sections. Recent work provided the first automatic segmentations of cytoarchitectonic areas in the visual system using Convolutional Neural Networks. We aim to extend this approach to become applicable to a wider range of brain areas, envisioning a solution for mapping the complete human brain. Inspired by recent success in image classification, we propose a contrastive learning objective for encoding microscopic image patches into robust microstructural features, which are efficient for cytoarchitectonic area classification. We show that a model pre-trained using this learning task outperforms a model trained from scratch, as well as a model pre-trained on a recently proposed auxiliary task. We perform cluster analysis in the feature space to show that the learned representations form anatomically meaningful groups.
electrical engineering and systems science
Numerous examples of functional relations for multiple polylogarithms are known. For elliptic polylogarithms, however, tools for the exploration of functional relations are available, but only very few relations are identified. Starting from an approach of Zagier and Gangl, which in turn is based on considerations about an elliptic version of the Bloch group, we explore functional relations between elliptic polylogarithms and link them to the relations which can be derived using the elliptic symbol formalism. The elliptic symbol formalism in turn allows for an alternative proof of the validity of the elliptic Bloch relation. While the five-term identity is the prime example of a functional identity for multiple polylogarithms and implies many dilogarithm identities, the situation in the elliptic setup is more involved: there is no simple elliptic analogue, but rather a whole class of elliptic identities.
high energy physics theory
Numerous quantum information protocols make use of maximally entangled two-particle states, or Bell states, in which information is stored in the correlations between the two particles rather than their individual properties. Retrieving information stored in this way means distinguishing between different Bell states, yet the well-known no-go theorem establishes that projective linear evolution and local measurement (LELM) detection schemes can only reliably distinguish three of the four qubit Bell states. We establish the maximum distinguishability of the qutrit Bell states of bosons via projective LELM measurements; only three of the nine Bell states can be distinguished. Next, we extend to the case of non-projective measurements. We strengthen the no-go theorem by showing that general LELM measurements cannot reliably distinguish all four qubit Bell states. We also establish that at most five qutrit Bell states can be distinguished with generalized LELM measurements.
quantum physics
The Two Higgs Doublet Model predicts the emergence of three distinct domain wall solutions arising from the breaking of three accidental global symmetries, $Z_2$, CP1 and CP2, at the electroweak scale for specific choices of the model parameters. We present numerical kink solutions to the field equations in all three cases along with dynamical simulations of the models in (2+1) and (3+1) dimensions. For each kink solution we define an associated topological current. In all three cases simulations produce a network of domain walls which deviates from power-law scaling in Minkowski and FRW simulations. This deviation is attributed to a winding of the electroweak group parameters around the domain walls in our simulations. We observe a local violation of the neutral vacuum condition on the domain walls in our simulations. This violation is attributed to relative electroweak transformations across the domain walls, which are a general feature emerging from random initial conditions.
high energy physics phenomenology
We studied electron spin resonance in the quantum magnet NiCl2-4SC(NH2)2, which demonstrates a field-induced quantum phase transition from a quantum-disordered phase to an antiferromagnet. We observe two branches of the antiferromagnetic resonance of the ordered phase: one has a gap, while the other is a Goldstone mode with zero frequency for a magnetic field along the four-fold axis. This zero-frequency mode acquires a gap at a small tilting of the magnetic field with respect to this direction. The upper gap was found to be reduced in the doped compound Ni(Cl(1-x)Br(x))2-4SC(NH2)2 with $x=0.21$. This reduction is unexpected given the previously reported rise of the main exchange constant in the doped compound. Further, a nonresonant diamagnetic susceptibility $\chi^{\prime}$ was found for the ordered phase in a wide frequency range above the quasi-Goldstone mode. This dynamic diamagnetism is as large as the dynamic susceptibility of the paramagnetic resonance. We speculate that it originates from a two-magnon absorption band of the low-frequency dispersive magnon branch.
condensed matter
To reduce storage requirements, remote sensing (RS) images are usually stored in compressed format. Existing scene classification approaches using deep neural networks (DNNs) require the images to be fully decompressed, which is a computationally demanding task in operational applications. To address this issue, in this paper we propose a novel approach to achieve scene classification in JPEG 2000 compressed RS images. The proposed approach consists of two main steps: i) approximation of the finer-resolution sub-bands of the reversible biorthogonal wavelet filters used in JPEG 2000; and ii) characterization of the high-level semantic content of the approximated wavelet sub-bands and scene classification based on the learnt descriptors. This is achieved by taking the codestreams associated with the coarsest-resolution wavelet sub-band as input to approximate finer-resolution sub-bands using a number of transposed convolutional layers. Then, a series of convolutional layers models the high-level semantic content of the approximated wavelet sub-band. Thus, the proposed approach models the multiresolution paradigm of the JPEG 2000 compression algorithm in an end-to-end trainable unified neural network. In the classification stage, the proposed approach takes only the coarsest-resolution wavelet sub-bands as input, thereby reducing the time required for decoding. Experimental results on two benchmark aerial image archives demonstrate that the proposed approach significantly reduces the computational time while achieving similar classification accuracies compared to traditional RS scene classification approaches (which require full image decompression).
electrical engineering and systems science
We show that generalized orbital varieties for Mirkovic-Vybornov slices can be indexed by semi-standard Young tableaux. We also check that the Mirkovic-Vybornov isomorphism sends generalized orbital varieties to (dense subsets of) Mirkovic-Vilonen cycles, such that the (combinatorial) Lusztig datum of a generalized orbital variety, which it inherits from its tableau, is equal to the (geometric) Lusztig datum of its MV cycle.
mathematics
Host load prediction provides the basic decision information for managing computing-resource usage on a cloud platform, and its accuracy is critical for meeting service-level agreements. Host load data in cloud environments exhibit higher volatility and noise than in grid computing, so traditional data-driven methods tend to have low predictive accuracy on cloud host loads. We therefore propose a host load prediction method based on Bidirectional Long Short-Term Memory (BiLSTM). Our BiLSTM-based approach improves on the memory capability and nonlinear modeling ability of LSTM and the LSTM Encoder-Decoder (LSTM-ED) used in recent previous work. To evaluate our approach, we conducted experiments on a one-month trace of a Google data centre with more than twelve thousand machines. Our BiLSTM-based approach achieves higher accuracy than previous models, including the recent LSTM and LSTM-ED ones.
electrical engineering and systems science
We study the potential utility of classical techniques of spectral sparsification of graphs as a preprocessing step for digital quantum algorithms, in particular, for Hamiltonian simulation. Our results indicate that spectral sparsification of a graph with $n$ nodes through a sampling method, e.g.\ as in \cite{Spielman2011resistances} using effective resistances, gives, with high probability, a locally computable matrix $\tilde H$ with row sparsity at most $\mathcal{O}(\text{poly}\log n)$. For a symmetric matrix $H$ of size $n$ with $m$ non-zero entries, a one-time classical runtime overhead of $\mathcal{O}(m||H||t\log n/\epsilon)$ expended in spectral sparsification is then found to be useful as a way to obtain a sparse matrix $\tilde H$ that can be used to approximate time evolution $e^{itH}$ under the Hamiltonian $H$ to precision $\epsilon$. Once such a sparsifier is obtained, it could be used with a variety of quantum algorithms in the query model that make crucial use of row sparsity. We focus on the case of efficient quantum algorithms for sparse Hamiltonian simulation, since Hamiltonian simulation underlies, as a key subroutine, several quantum algorithms, including quantum phase estimation and recent ones for linear algebra. Finally, we also give two simple quantum algorithms to estimate the row sparsity of an input matrix, which achieve a query complexity of $\mathcal{O}(n^{3/2})$ as opposed to $\mathcal{O}(n^2)$ that would be required by any classical algorithm for the task.
quantum physics
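The sampling step underlying such sparsification can be illustrated with a minimal sketch: edges are drawn i.i.d. with probability proportional to an importance score and reweighted so that the graph Laplacian is preserved in expectation. Here the scores are taken proportional to the edge weights, a simplified stand-in for the effective-resistance scores of the cited sampling method; the function name and interface are illustrative assumptions, not the abstract's construction.

```python
import random

def sparsify(edges, q, seed=0):
    # edges: list of (u, v, w). Draw q edges i.i.d. with probability
    # p_e proportional to an importance score, and reweight each kept
    # edge by w / (q * p_e) so the weighted Laplacian is unbiased.
    # Here p_e is proportional to w itself (a stand-in for
    # effective-resistance-based scores).
    rng = random.Random(seed)
    total = sum(w for _, _, w in edges)
    probs = [w / total for _, _, w in edges]
    acc = {}
    for _ in range(q):
        r, c = rng.random(), 0.0
        for (u, v, w), p in zip(edges, probs):
            c += p
            if r <= c:
                acc[(u, v)] = acc.get((u, v), 0.0) + w / (q * p)
                break
    return [(u, v, w) for (u, v), w in acc.items()]
```

With these particular scores every sample contributes total/q to the reweighted mass, so the total edge weight is preserved exactly; with effective-resistance scores it is preserved only in expectation.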
Anticrossing behavior between magnons in a non-collinear chiral magnet Cu$_2$OSeO$_3$ and a two-mode X-band microwave resonator was studied in the temperature range 5-100 K. In the field-induced ferrimagnetic phase, we observed a strong coupling regime between magnons and two microwave cavity modes with a cooperativity reaching 3600. In the conical phase, cavity modes are dispersively coupled to a fundamental helimagnon mode, and we demonstrate that the magnetic phase diagram of Cu$_2$OSeO$_3$ can be reconstructed from the measurements of the cavity resonance frequency. In the helical phase, a hybridized state of a higher-order helimagnon mode and a cavity mode - a helimagnon polariton - was found. Our results reveal a new class of magnetic systems where strong coupling of microwave photons to non-trivial spin textures can be observed.
condensed matter
Materials with high thermal conductivities (k) are valuable for solving the challenge of waste heat dissipation in highly integrated and miniaturized modern devices. Herein, we report the first synthesis of atomically thin isotopically pure hexagonal boron nitride (BN) and show that its k is among the highest of all semiconductors and electrical insulators. Single atomic layer (1L) BN enriched with 11B has a k of up to 1009 W/mK at room temperature. We find that isotope engineering mainly suppresses the out-of-plane optical (ZO) phonon scattering in BN, which subsequently reduces acoustic-optical scattering between ZO and transverse acoustic (TA) and longitudinal acoustic (LA) phonons. On the other hand, reducing the thickness to a single atomic layer diminishes the interlayer interactions and hence Umklapp scattering of the out-of-plane acoustic (ZA) phonons, though this thickness-induced k enhancement is not as dramatic as that in naturally occurring BN. With many of its unique properties, atomically thin monoisotopic BN is promising for heat management in van der Waals (vdW) devices and future flexible electronics. The isotope engineering of atomically thin BN may also open up other appealing applications and opportunities in 2D materials yet to be explored.
condensed matter
We study the problem of optimal subset selection from a set of correlated random variables. In particular, we consider the associated combinatorial optimization problem of maximizing the determinant of a symmetric positive definite matrix that characterizes the chosen subset. This problem arises in many domains, such as experimental design, regression modeling, and environmental statistics. We establish an efficient polynomial-time algorithm using Determinantal Point Processes to approximate the optimal solution to the problem. We demonstrate the advantages of our methods by presenting computational results for both synthetic and real data sets.
statistics
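The combinatorial core of the abstract above (D-optimality: choose the size-k principal submatrix with maximal determinant) can be sketched as follows. This exhaustive search is for illustration only; it is exponential in the number of variables and is not the polynomial-time DPP-based algorithm the abstract proposes.

```python
from itertools import combinations

def det(m):
    # Determinant by Laplace expansion (fine for the small matrices here).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def best_subset(cov, k):
    # Exhaustive D-optimal search: maximize the determinant of the
    # size-k principal submatrix of the covariance matrix `cov`.
    n = len(cov)
    best, best_val = None, float("-inf")
    for s in combinations(range(n), k):
        sub = [[cov[i][j] for j in s] for i in s]
        v = det(sub)
        if v > best_val:
            best, best_val = s, v
    return best, best_val
```

For a diagonal covariance the optimum is simply the k largest variances, which makes the routine easy to sanity-check.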
In the framework of the AdS/CFT correspondence, introducing a scalar field in the bulk spacetime deforms the corresponding CFT on the boundary, which may produce corrections to the entanglement entropy as well as to the so-called subregion complexity. We have computed such corrections for a set of singular subregions, including kinks, cones and creases, in different dimensions. Our calculations show new singular terms, including universal logarithmic corrections, for the entanglement entropy and subregion complexity for some distinct values of the conformal weight.
high energy physics theory
We study the temperature dependence of the electrical resistivity for currents directed along all crystallographic axes of the spin-triplet superconductor UTe$_{2}$. We focus particularly on an accurate determination of the resistivity along the $c$-axis ($\rho_c$) by using transport geometries that allow two resistivities along the principal axis directions to be extracted. Measurement of the absolute values of the resistivities in all current directions reveals surprisingly (given the anticipated highly anisotropic band structure) nearly isotropic transport at temperatures above the Kondo coherence scale, with $\rho_c \sim \rho_b \sim 2\rho_a$, but qualitatively distinct behavior at lower temperatures. The temperature dependence of $\rho_c$ exhibits a Kondo-like maximum at much lower temperatures than that of $\rho_a$ and $\rho_b$, providing important insight into the underlying electronic structure necessary for building a microscopic model of UTe$_{2}$.
condensed matter
In September 2017, the IceCube Neutrino Observatory recorded a very-high-energy neutrino in directional coincidence with a blazar in an unusually bright gamma-ray state, TXS0506+056. Blazars are prominent photon sources in the universe because they harbor a relativistic jet whose radiation is strongly collimated and amplified. High-energy atomic nuclei known as cosmic rays can produce neutrinos; thus the recent detection may help identify the sources of the diffuse neutrino flux and the energetic cosmic rays. Here we report on a self-consistent analysis of the physical relation between the observed neutrino and the blazar, in particular the time evolution and spectral behavior of neutrino and photon emission. We demonstrate that a moderate enhancement in the number of cosmic rays during the flare can yield a very strong increase of the neutrino flux, which is limited by co-produced hard X-rays and TeV gamma rays. We also test typical radiation models for compatibility and identify several model classes as incompatible with the observations. We investigate to what degree the findings can be generalized to the entire population of blazars, to determine the relation between their output in photons, neutrinos, and cosmic rays, and suggest how to optimize the strategy of future observations.
astrophysics
We propose new diagnostics that utilize the [O IV] 25.89 $\mu$m and nuclear (subarcsecond scale) 12 $\mu$m luminosity ratio for identifying whether an AGN is deeply `buried' in its surrounding material. Utilizing a sample of 16 absorbed AGNs at redshifts lower than 0.03 in the Swift/BAT catalog observed with Suzaku, we find that AGNs with small scattering fractions ($<$0.5%) tend to show weaker [O IV]-to-12 $\mu$m luminosity ratios than the average of Seyfert 2 galaxies. This suggests that this ratio is a good indicator for identifying buried AGNs. Then, we apply this criterion to 23 local ultra/luminous infrared galaxies (U/LIRGs) in various merger stages hosting AGNs. We find that AGNs in most of the mid- to late-stage mergers are buried, while those in earlier-stage ones (including non-mergers) are not. This result suggests that the fraction of buried AGNs in U/LIRGs increases as the galaxy-galaxy interaction becomes more significant.
astrophysics
We run three long-timescale general-relativistic magnetohydrodynamic simulations of radiatively inefficient accretion flows onto non-rotating black holes. Our aim is to achieve steady-state behavior out to large radii and understand the resulting flow structure. A simulation with adiabatic index Gamma = 4/3 and small initial alternating poloidal magnetic field loops is run to a time of 440,000 GM/c^3, reaching inflow equilibrium inside a radius of 370 GM/c^2. Variations with larger alternating field loops and with Gamma = 5/3 are run to 220,000 GM/c^3, attaining equilibrium out to 170 GM/c^2 and 440 GM/c^2. There is no universal self-similar behavior obtained at radii in inflow equilibrium: the Gamma = 5/3 simulation shows a radial density profile with power law index ranging from -1 in the inner regions to -1/2 in the outer regions, while the others have a power-law slope ranging from -1/2 to close to -2. Both simulations with small field loops reach a state with polar inflow of matter, while the more ordered initial field has polar outflows. However, unbound outflows remove only a factor of order unity of the inflowing material over a factor of ~300 in radius. Our results suggest that the dynamics of radiatively inefficient accretion flows are sensitive to how the flow is fed from larger radii, and may differ appreciably in different astrophysical systems. Millimeter images appropriate for Sgr A* are qualitatively (but not quantitatively) similar in all simulations, with a prominent asymmetric image due to Doppler boosting.
astrophysics
FASER, the ForwArd Search ExpeRiment, is a proposed experiment dedicated to searching for light, extremely weakly-interacting particles at the LHC. Such particles may be produced in the LHC's high-energy collisions and travel long distances through concrete and rock without interacting. They may then decay to visible particles in FASER, which is placed 480 m downstream of the ATLAS interaction point. In this work we briefly describe the FASER detector layout and the status of potential backgrounds. We then present the sensitivity reach for FASER for a large number of long-lived particle models, updating previous results to a uniform set of detector assumptions, and analyzing new models. In particular, we consider all of the renormalizable portal interactions, leading to dark photons, dark Higgs bosons, and heavy neutral leptons (HNLs); light B-L and $L_i - L_j$ gauge bosons; axion-like particles (ALPs) that are coupled dominantly to photons, fermions, and gluons through non-renormalizable operators; and pseudoscalars with Yukawa-like couplings. We find that FASER and its follow-up, FASER 2, have a full physics program, with discovery sensitivity in all of these models and potentially far-reaching implications for particle physics and cosmology.
high energy physics phenomenology
We study Euclidean formulations of the transverse-momentum-dependent (TMD) soft function, which is a cross section for soft gluon radiation involving color charges moving in two conjugate lightcone directions in quantum chromodynamics. We show it is related to a special form factor of a pair of color sources traveling with nearly-lightlike velocities, which can be matched to TMD physical observables in semi-inclusive deep-inelastic scattering and the Drell-Yan process in the framework of large momentum effective theory. It can also be extracted by combining a large-momentum form factor of a light meson and its leading TMD wave function. These formulations are useful for initiating nonperturbative calculations of this useful quantity.
high energy physics phenomenology
We introduce a new picture of vacuum decay which, in contrast to existing semiclassical techniques, provides a real-time description and does not rely on classically-forbidden tunneling paths. Using lattice simulations, we observe vacuum decay via bubble formation by generating realizations of vacuum fluctuations and evolving with the classical equations of motion. The decay rate obtained from an ensemble of simulations is in excellent agreement with existing techniques. Future applications include bubble correlation functions, fast decay rates, and decay of non-vacuum states.
high energy physics theory
The $ \beta $-functions of marginal couplings are known to be closely related to the $ A $-function through Osborn's equation, derived using the local renormalization group. It is possible to derive strong constraints on the $\beta$-functions by parametrizing the terms in Osborn's equation as polynomials in the couplings, then eliminating unknown $\tilde{A}$ and $T_{IJ}$ coefficients. In this paper we extend this program to completely general gauge theories with arbitrarily many Abelian and non-Abelian factors. We detail the computational strategy used to extract consistency conditions on $ \beta $-functions, and discuss our automation of the procedure. Finally, we implement the procedure up to 4-, 3-, and 2-loop order for the gauge, Yukawa and quartic couplings respectively, corresponding to the present forefront of general $ \beta $-function computations. We find an extensive collection of highly non-trivial constraints, and argue that they constitute a useful supplement to traditional perturbative computations; as a corollary, we present the complete 3-loop gauge $\beta$-function of a general QFT in the $\bar{\text{MS}}$ scheme, including kinetic mixing.
high energy physics theory
The sustainable Internet of Things (IoT) is becoming a promising solution for green living and smart industries. In this article, we investigate the practical issues in radio energy harvesting and data communication systems through extensive field experiments. A number of important characteristics of energy harvesting circuits and communication modules have been studied, including the non-linear energy consumption of the communication system relative to the transmission power, the wake-up time associated with the payload, and the varying system power during consecutive packet transmissions. In order to improve the efficiency of energy harvesting and energy utilization, we propose a new model to accurately describe the energy harvesting process and the power consumption for sustainable IoT devices. Experiments are performed using commercial IoT devices and RF energy harvesters to verify the accuracy of the proposed model. The experiment results show that the new model matches the performance of sustainable IoT devices very well in real scenarios.
electrical engineering and systems science
The dependence of the bandgap on the number of atomic layers in some families of 2D materials can be exploited to engineer lateral heterostructures (LHs) and use them as high-performance Field-Effect Transistors (FETs). This option can provide very good lattice matching as well as high heterointerface quality. More importantly, this bandgap modulation with layer stacking can give rise to steep transitions in the density of states (DOS) of the 2D material, which can eventually be used to achieve a sub-60 mV/decade subthreshold swing in LH-FETs thanks to an energy-filtering source. We have observed this effect in the case of a PdS2 LH-FET due to the particular density of states of its bilayer configuration. Our results are based on ab initio and multiscale materials and device modeling, and motivate the exploration of the 2D-material design space in order to find more abrupt DOS transitions and better suited candidates.
condensed matter
In the deconfined regime of a non-Abelian gauge theory at nonzero temperature, it was previously argued that if a (gauge invariant) source is added to generate nonzero holonomy, then this source must be linear for small holonomy. The simplest example of this is the second Bernoulli polynomial. However, there is then a conundrum in computing the free energy to $\sim g^3$ in the coupling constant $g$, as part of the free energy is discontinuous as the holonomy vanishes. In this paper we investigate two ways of generating the second Bernoulli polynomial dynamically: as a mass derivative of an auxiliary field, and from two dimensional ghosts embedded isotropically in four dimensions. Computing the holonomous hard thermal loop (HHTL) in the gluon self-energy, we find that the limit of small holonomy is only well behaved for two dimensional ghosts, with a free energy which to $\sim g^3$ is continuous as the holonomy vanishes.
high energy physics phenomenology
Molecular transport of biomolecules plays a pivotal role in the machinery of life. Yet, this role is poorly understood due to the lack of quantitative information. Here, the role and properties of the C-terminal region of Escherichia coli Hfq are reported, involved in controlling the flow of a DNA solution. A combination of experimental methodologies has been used to probe the interaction of Hfq with DNA and to measure the rheological properties of the complex. A physical gel with a temperature-reversible elasticity modulus is formed due to the formation of non-covalent crosslinks. The mechanical response of the complexes shows that they are inhomogeneous soft solids. Our experiments indicate that the Hfq C-terminal region could contribute to the genome's mechanical response. The reported viscoelasticity of the DNA-protein complex might have implications for cellular processes involving molecular transport of DNA or segments thereof.
condensed matter
This paper investigates the synchronization of chaotic behavior in a model of a Bose-Einstein condensate (BEC) held in a 1D tilted bichromatic optical lattice potential by using the active control technique. The synchronization is presented in the master-slave configuration, which implies that the master system evolves freely and drives the dynamics of the slave system. Numerical simulations are given to indicate the practicability and effectiveness of the controllers used.
condensed matter
Ultrahigh-dimensional data sets are becoming increasingly prevalent in areas such as bioinformatics, medical imaging, and social network analysis. Sure independence screening is commonly used to analyze such data. Nevertheless, few methods exist for screening for interactions among predictors. Moreover, extant interaction screening methods prove to be highly inaccurate when applied to data sets exhibiting strong interactive effects, but weak marginal effects, on the response. We propose a new interaction screening procedure based on joint cumulants which is not inhibited by such limitations. Under a collection of sensible conditions, we demonstrate that our interaction screening procedure has the strong sure screening property. Four simulations are used to investigate the performance of our method relative to two other interaction screening methods. We also apply a two-stage analysis to a real data example by first employing our proposed method, and then further examining a subset of selected covariates using multifactor dimensionality reduction.
statistics
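As an illustration of the kind of statistic involved, a sample third-order joint cumulant and a ranking of predictor pairs by its magnitude can be sketched as below. The exact statistic, standardization, and thresholding rule of the proposed procedure may differ; the function names are illustrative.

```python
from itertools import combinations

def joint_cumulant(x, y, z):
    # Sample third-order joint cumulant E[(X-EX)(Y-EY)(Z-EZ)].
    n = len(x)
    mx, my, mz = sum(x) / n, sum(y) / n, sum(z) / n
    return sum((a - mx) * (b - my) * (c - mz)
               for a, b, c in zip(x, y, z)) / n

def screen_interactions(cols, y, top):
    # Rank predictor pairs (j, k) by |cum(X_j, X_k, Y)| and keep the
    # `top` highest-scoring pairs as interaction candidates.
    scores = {(j, k): abs(joint_cumulant(cols[j], cols[k], y))
              for j, k in combinations(range(len(cols)), 2)}
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

A pure interaction y = X_0 * X_1 with symmetric predictors has vanishing marginal correlations with y but a large joint cumulant with the pair (X_0, X_1), which is exactly the regime where marginal screening fails.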
We study the following zero-flux attraction-repulsion chemotaxis model, with linear and superlinear production $g$ for the chemorepellent and sublinear rate $f$ for the chemoattractant: \begin{equation}\label{problem_abstract} \tag{$\Diamond$} \begin{cases} u_t= \Delta u - \chi \nabla \cdot (u \nabla v)+\xi \nabla \cdot (u \nabla w) & \text{ in } \Omega \times (0,T_{max}),\\ v_t=\Delta v-f(u)v & \text{ in } \Omega \times (0,T_{max}),\\ 0= \Delta w - \delta w + g(u) & \text{ in } \Omega \times (0,T_{max}). \end{cases} \end{equation} In this problem, $\Omega$ is a bounded and smooth domain of $\mathbb{R}^n$, for $n\geq 1$, $\chi,\xi,\delta>0$, and $f(u)$ and $g(u)$ are reasonably regular functions generalizing the prototypes $f(u)=K u^\alpha$ and $g(u)=\gamma u^l$, with $K,\gamma>0$ and proper $\alpha, l>0$. After showing that any sufficiently smooth initial data $u(x,0)=u_0(x)\geq 0$ and $v(x,0)=v_0(x)\geq 0$ produce a unique classical and nonnegative solution $(u,v,w)$ to \eqref{problem_abstract}, defined in $\Omega \times (0,T_{max})$, we establish that for any such $(u_0,v_0)$ the life span $T_{max}=\infty$ and $u$, $v$ and $w$ are uniformly bounded in $\Omega\times (0,\infty)$: (i) for $l=1$, $n\in \{1,2\}$, $\alpha\in (0,\frac{1}{2}+\frac{1}{n})\cap (0,1)$ and any $\xi>0$; (ii) for $l=1$, $n\geq 3$, $\alpha\in (0,\frac{1}{2}+\frac{1}{n})$ and $\xi$ larger than a quantity depending on $\chi \lVert v_0 \rVert_{L^\infty(\Omega)}$; (iii) for $l>1$, any $\xi>0$ and any dimensional setting. Finally, an indicative analysis of the effect of logistic and repulsive actions on chemotactic phenomena is proposed by comparing the results derived herein for the linear production case with those in \cite{LankeitWangConsumptLogistic}.
mathematics
Premise: This report is intended to provide useful background and information on how we find, detect and compile planetary nebula (PNe) candidates and then verify them. It is intended for postgraduate students entering the field, and for more general interest too.
astrophysics
The standard procedure when evaluating integrals of a given family of Feynman integrals, corresponding to some Feynman graph, is to construct an algorithm which provides the possibility to write any particular integral as a linear combination of so-called master integrals. To do this, public (AIR, FIRE, REDUZE, LiteRed, KIRA) and private codes based on solving integration by parts relations are used. However, the choice of the master integrals provided by these codes is not always optimal. We present an algorithm to improve a given basis of the master integrals, as well as its computer implementation; see also a competitive variant [1].
high energy physics phenomenology
GRB 170817A, detected by Fermi-GBM 1.7\,s after the merger of a neutron star (NS) binary, provides the first direct evidence for a link between such a merger and a short-duration gamma-ray burst. The X-ray observations after GRB 170817A indicate a possible X-ray flare with a peak luminosity $L_{\rm peak} \sim 2\times 10^{39}\,{\rm erg\,s}^{-1}$ near day 156. Here we show that this X-ray flare may be understood based on a slim disc around a compact object. On the one hand, there exists a maximal accretion rate $\dot M_{\rm max}$ for the slim disc, above which an optically thick outflow is significant and radiation from the disc is obscured. Based on the energy balance analysis, we find that $\dot M_{\rm max}$ is in the range of $\sim 4\dot M_{\rm Edd}$ to $\sim 21\dot M_{\rm Edd}$ when the angular velocity of the slim disc is between $(1/5)^{1/2}\Omega_K$ and $\Omega_K$ (where $\dot M_{\rm Edd}$ is the Eddington accretion rate and $\Omega_K$ is the Keplerian angular velocity). With $\dot M_{\rm max}$, the slim disc can provide a luminosity $\sim L_{\rm peak}$ for a compact object of $2.5 M_{\sun}$. On the other hand, if the merger of two NSs forms a typical neutrino-dominated accretion disc whose accretion rate $\dot M$ follows a power-law decline with an index $-1.8$, then the system must pass through the outflow regime and enter the slim-disc regime in $\sim 11-355$ days. These results imply that a post-merger slim accretion disc could account for the observed late-time $L_{\rm peak}$.
astrophysics
Ridge estimators regularize the squared Euclidean lengths of parameters. Such estimators are mathematically and computationally attractive but involve tuning parameters that can be difficult to calibrate. In this paper, we show that ridge estimators can be modified such that tuning parameters can be avoided altogether. We also show that these modified versions can improve on the empirical prediction accuracies of standard ridge estimators combined with cross-validation, and we provide first theoretical guarantees.
statistics
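For reference, the standard one-dimensional ridge estimator that such modifications start from can be written in closed form; `lam` below is the tuning parameter whose calibration the abstract aims to avoid.

```python
def ridge_1d(x, y, lam):
    # One-dimensional ridge estimator: beta = <x, y> / (<x, x> + lam).
    # lam = 0 recovers ordinary least squares; larger lam shrinks
    # beta toward zero.
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)
```

The shrinkage effect is immediate: on data with true slope 2, lam = 0 returns the least-squares slope and a positive lam returns something strictly smaller in magnitude.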
This article aims at developing a high order pressure-based solver for the solution of the 3D compressible Navier-Stokes system at all Mach numbers. We propose a cell-centered discretization of the governing equations that splits the fluxes into a fast and a slow scale part, which are treated implicitly and explicitly, respectively. A novel semi-implicit discretization is proposed for the kinetic energy as well as the enthalpy fluxes in the energy equation, hence avoiding any need for iterative solvers. The implicit discretization yields an elliptic equation on the pressure that can be solved for both ideal gas and general equations of state (EOS). A nested Newton method is used to solve the mildly nonlinear system for the pressure in case of a nonlinear EOS. High order in time is achieved by implicit-explicit (IMEX) time stepping, whereas a novel CWENO technique efficiently implemented in a dimension-by-dimension manner is developed for achieving high order in space in the discretization of the explicit convective and viscous fluxes. A quadrature-free finite volume solver is then derived for the high order approximation of the numerical fluxes. Central schemes with no dissipation of suitable order of accuracy are finally employed for the numerical approximation of the implicit terms. Consequently, the CFL-type stability condition on the maximum admissible time step is based only on the fluid velocity and not on the sound speed, so that the novel schemes work uniformly for all Mach numbers. Convergence and robustness of the proposed method are assessed through a wide set of benchmark problems involving low and high Mach number regimes, as well as inviscid and viscous flows.
mathematics
We present a detailed temporal and spectral study of the blazar 3C\,279 using multi-wavelength observations from Swift-XRT, Swift-UVOT and Fermi-LAT during a flare in 2018 January. The temporal analysis of the $\gamma$-ray light curve indicates a lag of $\sim 1$ d between the 0.1--3 GeV and 3--500 GeV emission. Additionally, the $\gamma$-ray light curve shows asymmetry, with slow rise--fast decay in the 0.1--3 GeV energy band and fast rise--slow decay in the 3--500 GeV band. We interpret this asymmetry as the result of a shift in the Compton spectral peak. This inference is further supported by correlation studies between the flux and the parameters of the log-parabola fit to the source spectra in the energy range 0.1--500 GeV. We find that the flux correlates well with the peak spectral energy, and that the log-parabola fit parameters show a hard index with large curvature at high flux states. Interestingly, the hardest index with large curvature was synchronous with a very high energy flare detected by H.E.S.S. Our study of the spectral behavior of the source suggests that the $\gamma$-ray emission is most likely associated with the Compton up-scattering of IR photons from the dusty environment. Moreover, the fit parameters indicate that an increase in the bulk Lorentz factor of the emission region is a dominant cause of the flux enhancement.
astrophysics
We explore the problem of step-wise explaining how to solve constraint satisfaction problems, with a use case on logic grid puzzles. More specifically, we study the problem of explaining the inference steps that one can take during propagation, in a way that is easy to interpret for a person. Thereby, we aim to give the constraint solver explainable agency, which can help in building trust in the solver by enabling users to understand and even learn from the explanations. The main challenge is that of finding a sequence of simple explanations, where each explanation should aim to be as cognitively easy as possible for a human to verify and understand. This contrasts with the arbitrary combination of facts and constraints that the solver may use when propagating. We propose the use of a cost function to quantify how simple an individual explanation of an inference step is, and identify the explanation-production problem of finding the best sequence of explanations of a CSP. Our approach is agnostic to the underlying constraint propagation mechanisms, and can provide explanations even for inference steps resulting from combinations of constraints. In case multiple constraints are involved, we also develop a mechanism that allows the user to break up the most difficult steps and thus zoom in on specific parts of the explanation. Our proposed algorithm iteratively constructs the explanation sequence by using an optimistic estimate of the cost function to guide the search for the best explanation at each step. Our experiments on logic grid puzzles show the feasibility of the approach in terms of the quality of the individual explanations and the resulting explanation sequences.
computer science
We reconsider the thermodynamics of AdS black holes in the context of gauge-gravity duality. In this new setting, where both the cosmological constant $\Lambda$ and the gravitational Newton constant $G$ are varied in the bulk, we rewrite the first law in a new form containing both $\Lambda$ (associated with thermodynamic pressure) and the central charge $C$ of the dual CFT, together with their conjugate variables. We obtain a novel thermodynamic volume, in turn leading to a new understanding of the Van der Waals behavior of charged AdS black holes, in which phase changes are governed by the degrees of freedom in the CFT. Compared to the "old" $P-V$ criticality, this new criticality is "universal" (independent of the bulk pressure) and directly relates to the thermodynamics of the dual field theory and its central charge.
high energy physics theory
We introduce the notions of overcommutation and overcommutation length in groups, and show that these concepts are closely related to representations of the fundamental groups of 3-manifolds and their Heegaard genus. We give many examples, including translations in the affine group of the line, and provide upper bounds for the overcommutation length in SL_2, related to the Steinberg relation.
mathematics
We present pore-scale simulations of two-phase flows in a reconstructed fibrous porous layer. The three-dimensional microstructure of the material, a fuel cell gas diffusion layer, is acquired via X-ray computed tomography and used as input for lattice Boltzmann simulations. We perform a quantitative analysis of the multiphase pore-scale dynamics and identify the dominant fluid structures governing mass transport. The results show the existence of three different regimes of transport: a fast inertial regime at short times, characterised by a compact uniform front; a viscous-capillary regime at intermediate times, where liquid is transported along a gradually increasing number of preferential flow paths of the size of one to two pores; and a third regime at longer times, where liquid, after having reached the outlet, flows exclusively along such flow paths and the two-phase fluid structures are stabilised. We observe that the fibrous layer presents significant variations in its microscopic morphology, which have an important effect on the pore invasion dynamics and counteract the stabilising viscous force. Liquid transport is indeed affected by the presence of microstructure-induced capillary pressures acting adversely to the flow, leading to a capillary-fingering transport mechanism and unstable front displacement, even in the absence of hydrophobic treatments of the porous material. We propose a macroscopic model based on an effective contact angle that mimics the effects of such a dynamic capillary pressure. Finally, we underline the significance of the results for the optimal design of face masks in an effort to mitigate the current COVID-19 pandemic.
physics