text | label |
---|---|
The Regional Transmission Operators each have unique methods to procure the frequency regulation reserves used to track real power fluctuations on the grid. Some market clearing practices result in high regulation prices with a large spread compared to other markets. We present the frequency regulation market clearing formulations--as derived from operator tariffs--of the ISO New England, PJM Interconnection, and Midcontinent ISO. We offer test case examples to explain the historical market pricing behavior seen within each system operator and conclude that this behavior is due to the inclusion of estimated lost opportunity costs in market clearing. | electrical engineering and systems science |
We study a fundamental model in nonlinear acoustics, namely, the general Blackstock's model (that is, without Becker's assumption) in the whole space $\mathbb{R}^n$. This model describes nonlinear acoustics in perfect gases under irrotational flow. By means of Fourier analysis we derive $L^2$ estimates for the solution of the linear homogeneous problem and its derivatives. Then, we apply these estimates to study three different topics: the optimality of the decay estimates in the case $n\geqslant 5$ and the optimal growth rate for the $L^2$-norm of the solution for $n=3,4$; the singular limit problem in determining the first- and second-order profiles for the solution of the linear Blackstock's model with respect to the small thermal diffusivity; the proof of the existence of global (in time) small data Sobolev solutions with suitable regularity for a nonlinear Blackstock's model. | mathematics |
Rare-earth monopnictides display rich physical behaviors, featuring most notably spin and orbital orders in their ground state. Here, we grow an ErBi single crystal and study its magnetic, thermal and electrical properties. An analysis of the magnetic entropy and magnetization indicates that the weak magnetic anisotropy in ErBi possibly derives from a mixing effect, namely the anisotropic ground state of Er$^{3+}$ (4f$^{11}$) mingles with the isotropic excited state through the exchange interaction. At low temperature, an extremely large magnetoresistance ($\sim 10^4\%$) with a parabolic magnetic-field dependence is observed, which can be ascribed to nearly perfect electron-hole compensation and ultrahigh carrier mobility. When the magnetic field is rotated in the ab (ac) plane and the current flows along the b axis, the angular magnetoresistance in ErBi shows a twofold (fourfold) symmetry. A similar case has been observed in LaBi, where the anisotropic Fermi surface dominates the low-temperature transport. Our theoretical calculation suggests that near the Fermi level ErBi shares similarities with LaBi in its electronic band structure. These findings indicate that the angular magnetoresistance of ErBi could be mainly determined by its anisotropic Fermi surface topology. Besides, contributions to the angular magnetoresistance of ErBi from several other possibilities, including spin-dependent scattering, spin-orbit scattering, and the demagnetization correlation, are also discussed. | condensed matter |
Multiple-merger coalescents, e.g. $\Lambda$-$n$-coalescents, have been proposed as models of the genealogy of $n$ sampled individuals for a range of populations whose genealogical structures are not captured well by Kingman's $n$-coalescent. $\Lambda$-$n$-coalescents can be seen as the limit process of the discrete genealogies of Cannings models with fixed population size, when time is rescaled and population size $N\to\infty$. As established for Kingman's $n$-coalescent, moderate population size fluctuations in the discrete population model should be reflected by a time-change of the limit coalescent. For $\Lambda$-$n$-coalescents, this has been explicitly shown for only a limited subclass of $\Lambda$-$n$-coalescents and exponentially growing populations. This article gives a general construction of time-changed $\Lambda$-$n$-coalescents as limits of specific Cannings models with rather arbitrary time changes. | mathematics |
The exploration of quantum algorithms that possess quantum advantages is a central topic in quantum computation and quantum information processing. One potential candidate in this area is quantum generative adversarial learning (QuGAL), which conceptually has exponential advantages over classical adversarial networks. However, the corresponding learning algorithm has remained elusive. In this paper, we propose the first quantum generative adversarial learning algorithm--the quantum multiplicative matrix weight algorithm (QMMW)--which enables the efficient processing of fundamental tasks. The computational complexity of QMMW is polynomial in the number of training rounds and logarithmic in the input size. The core concept of the proposed algorithm combines QuGAL with online learning. We exploit the implementation of QuGAL with parameterized quantum circuits, and numerical experiments on the task of entanglement testing for pure states are provided to support our claims. | quantum physics |
We provide a simple method to compute the energy in higher curvature gravity in asymptotically AdS spacetimes in even dimensions. It follows from the combined use of topological terms added to the gravity action, and the Wald charges derived from the augmented action functional. No additional boundary terms are needed. As a consistency check, we show that the formula for the conserved quantities derived in this way yields the correct result for the mass of asymptotically AdS black holes. | high energy physics theory |
The survival of time-reversal symmetry in the presence of strong multiple scattering lies at the heart of some of the most robust interference effects of light in complex media. Here, the use of time-reversed light paths for imaging in highly scattering environments is investigated. A common-path Sagnac interferometer is constructed which is able to detect objects behind a layer of strongly scattering material through a total attenuation length of up to 14 mean free paths. A spatial offset between the two light paths is used to suppress non-specific scattering contributions, limiting the signal to the volume of overlap. Scaling of the specific signal intensity indicates a transition from ballistic to quasi-ballistic contributions as the scattering thickness is increased. The characteristic frequency dependence of the coherent modulation signal provides a path-length-dependent signature, while the spatial overlap requirement allows for short-range 3D imaging. The technique of common-path, bistatic interferometry offers a conceptually novel approach which could open new applications in diverse areas such as medical imaging, machine vision, sensors, and lidar. | physics |
This paper investigates the potential of wind turbine generators (WTGs) and load aggregators (LAs) to provide supplementary damping control services for low frequency inter-area oscillations (LFOs) through the additional distributed damping control units (DCUs) proposed in their controllers. In order to provide a scalable methodology for the increasing number of WTGs and LAs, a novel distributed control framework is proposed to coordinate damping controllers. Firstly, a distributed algorithm is designed to reconstruct the system Jacobian matrix for each damping bus (buses with damping controllers). Thus, the critical LFO can be identified locally at each damping bus by applying eigen-analysis to the obtained system Jacobian matrix. Then, if the damping ratio of the critical LFO is less than a preset threshold, the control parameters of DCUs will be tuned in a distributed and coordinated manner to improve the damping ratio and minimize the total control cost at the same time. The proposed control framework is tested in a modified IEEE 39-bus test system. The simulation results with and without the proposed control framework are compared to demonstrate the effectiveness of the proposed framework. | mathematics |
Self-attention models have been successfully applied in end-to-end speech recognition systems, where they greatly improve recognition accuracy. However, such attention-based models cannot be used in online speech recognition, because they usually have to utilize whole acoustic sequences as inputs. A common method is to restrict the field of attention to a fixed left and right window, which makes the computation costs manageable yet also introduces performance degradation. In this paper, we propose Memory-Self-Attention (MSA), which adds history information into the Restricted-Self-Attention unit. MSA only needs local-time features as inputs, and efficiently models long temporal contexts by attending to memory states. Meanwhile, the recurrent neural network transducer (RNN-T) has proved to be a great approach for online ASR tasks, because the alignments of RNN-T are local and monotonic. We propose a novel network structure, called the Memory-Self-Attention (MSA) Transducer. Both the encoder and decoder of the MSA Transducer contain the proposed MSA unit. The experiments demonstrate that our proposed models improve WER results over Restricted-Self-Attention models by $13.5\%$ on WSJ and $7.1\%$ on SWBD datasets relatively, without much increase in computation costs. | electrical engineering and systems science |
In the study of randomly connected neural network dynamics there is a phase transition from a `simple' state with few equilibria to a `complex' state characterised by the number of equilibria growing exponentially with the neuron population. Such phase transitions are often used to describe pathological brain state transitions observed in neurological diseases such as epilepsy. In this paper we investigate how more realistic heterogeneous network structures affect these phase transitions using techniques from random matrix theory. Specifically, we parameterise the network structure according to Dale's Law and use the Kac-Rice formalism to compute the change in the number of equilibria when a phase transition occurs. We also examine the condition where the network is not balanced between excitation and inhibition causing outliers to appear in the eigenspectrum. This enables us to compute the effects of different heterogeneous network connectivities on brain state transitions, which can provide new insights into pathological brain dynamics. | condensed matter |
We propose a dynamic allocation procedure that increases power and efficiency when measuring an average treatment effect in sequential randomized trials by exploiting some subjects' previously assessed responses. Subjects arrive iteratively and are either randomized or paired to a previously randomized subject and administered the alternate treatment. The pairing is made via a dynamic matching criterion that iteratively learns which specific covariates are important to the response. We develop estimators for the average treatment effect as well as an exact test. We illustrate our method's increase in efficiency and power over other allocation procedures in both simulated scenarios and a clinical trial dataset. An R package SeqExpMatch for use by practitioners is available. | statistics |
Energy loss in an anisotropic hot dense QGP in an external magnetic field is studied within the holographic approach. The energy loss is calculated by estimating the behaviour of spatial Wilson loops using the effective potential technique. We examine the dependence of the effective potential on the orientation of the spatial Wilson loops in a fully anisotropic background. For this purpose we obtain general formulas for the effective potential and study the appearance of a dynamical wall in the effective potential. We consider a particular fully anisotropic model [arXiv:2011.07023] supported by an Einstein-Dilaton-three-Maxwell action. The effective potential strongly depends on the parameters of anisotropy and magnetic field; therefore the energy loss depends on the physical parameters $-$ $T$, $\mu$, $c_B$ $-$ and on the orientation. The orientation is determined by angles between the moving heavy quark velocity, the axis of heavy ions collision and their impact parameter vector. | high energy physics theory |
Training neural networks often relies on a machine learning framework such as TensorFlow or Caffe2. These frameworks employ a dataflow model in which NN training is modeled as a directed graph composed of a set of nodes. Operations in neural network training are typically implemented by the frameworks as primitives and represented as nodes in the dataflow graph. Training NN models in a dataflow-based machine learning framework involves a large number of fine-grained operations. Those operations have diverse memory access patterns and computation intensity. Managing and scheduling those operations is challenging, because we have to decide the number of threads to run each operation (concurrency control) and schedule those operations for good hardware utilization and system throughput. In this paper, we extend an existing runtime system (the TensorFlow runtime) to enable automatic concurrency control and scheduling of operations. We explore performance modeling to predict the performance of operations with various degrees of thread-level parallelism. Our performance model is highly accurate and lightweight. Leveraging the performance model, our runtime system employs a set of scheduling strategies that co-run operations to improve hardware utilization and system throughput. Our runtime system demonstrates a significant performance benefit. Compared with the recommended configurations for concurrency control and operation scheduling in TensorFlow, our approach achieves a 33% performance (execution time) improvement on average (up to 49%) for three neural network models, and achieves performance close to the optimum obtained manually by the user. | computer science |
An active region filament in the upper chromosphere is studied using spectropolarimetric data in He I 10830 A from the GREGOR telescope. A Milne-Eddington based inversion of the Unno-Rachkovsky equations is used to retrieve the velocity and the magnetic field vector of the region. The plasma velocity reaches supersonic values closer to the feet of the filament barbs and coexists with a slow velocity component. Such supersonic velocities result from the acceleration of the plasma as it drains from the filament spine through the barbs. The line-of-sight magnetic fields have strengths below 200 G in the filament spine; in the filament barbs, where fast downflows are located, the strengths range between 100 and 700 G. | astrophysics |
Despite the progress within the last decades, weather forecasting is still a challenging and computationally expensive task. Current satellite-based approaches to predict thunderstorms are usually based on the analysis of the observed brightness temperatures in different spectral channels and emit a warning if a critical threshold is reached. Recent progress in data science, however, demonstrates that machine learning can be successfully applied to many research fields in science, especially in areas dealing with large datasets. We therefore present a new approach to the problem of predicting thunderstorms based on machine learning. The core idea of our work is to use the error of two-dimensional optical flow algorithms applied to images of meteorological satellites as a feature for machine learning models. We interpret this optical flow error as an indication of convection potentially leading to thunderstorms and lightning. To factor in spatial proximity we use various manual convolution steps. We also consider effects such as the time of day or the geographic location. We train different tree classifier models as well as a neural network to predict lightning within the next few hours (called nowcasting in meteorology) based on these features. In our evaluation section we compare the predictive power of the different models and the impact of different features on the classification result. Our results show a high accuracy of 96% for predictions over the next 15 minutes, which slightly decreases with increasing forecast period but still remains above 83% for forecasts of up to five hours. The high false positive rate of nearly 6%, however, needs further investigation to allow for an operational use of our approach. | computer science |
We propose a method for inference on moderately high-dimensional, nonlinear, non-Gaussian, partially observed Markov process models for which the transition density is not analytically tractable. Markov processes with intractable transition densities arise in models defined implicitly by simulation algorithms. Widely used particle filter methods are applicable to nonlinear, non-Gaussian models but suffer from the curse of dimensionality. Improved scalability is provided by ensemble Kalman filter methods, but these are inappropriate for highly nonlinear and non-Gaussian models. We propose a particle filter method having improved practical and theoretical scalability with respect to the model dimension. This method is applicable to implicitly defined models having analytically intractable transition densities. Our method is developed based on the assumption that the latent process is defined in continuous time and that a simulator of this latent process is available. In this method, particles are propagated at intermediate time intervals between observations and are resampled based on a forecast likelihood of future observations. We combine this particle filter with parameter estimation methodology to enable likelihood-based inference for highly nonlinear spatiotemporal systems. We demonstrate our methodology on a stochastic Lorenz 96 model and a model for the population dynamics of infectious diseases in a network of linked regions. | statistics |
The cd-index is an invariant of Eulerian posets expressed as a polynomial in noncommuting variables c and d. It determines another invariant, the h-polynomial. In this paper, we study the relative setting, that of subdivisions of posets. We introduce the mixed cd-index, an invariant of strong formal subdivisions of posets, which determines the mixed h-polynomial introduced by the second author with Stapledon. The mixed cd-index is a polynomial in noncommuting variables c',d',c,d, and e and is defined in terms of the local cd-index of Karu. Here, use is made of the decomposition theorem for the cd-index. We extend the proof of the decomposition theorem, originally due to Ehrenborg-Karu, to the class of strong formal subdivisions. We also compute the mixed cd-index in a number of examples. | mathematics |
The segmentation of data into stationary stretches, also known as the multiple change point problem, is important for many applications in time series analysis as well as signal processing. Based on strong invariance principles, we analyse data segmentation methodology using moving sum (MOSUM) statistics for a class of regime-switching multivariate processes where each switch results in a change in the drift. In particular, this framework includes the data segmentation of multivariate partial sum, integrated diffusion and renewal processes even if the distance between change points is sublinear. We study the asymptotic behaviour of the corresponding change point estimators, show consistency and derive the corresponding localisation rates, which are minimax optimal in a variety of situations including an unbounded number of changes in Wiener processes with drift. Furthermore, we derive the limit distribution of the change point estimators for local changes - a result that can in principle be used to derive confidence intervals for the change points. | statistics |
We present a scalable tree search planning algorithm for large multi-agent sequential decision problems that require dynamic collaboration. Teams of agents need to coordinate decisions in many domains, but naive approaches fail due to the exponential growth of the joint action space with the number of agents. We circumvent this complexity through an anytime approach that allows us to trade computation for approximation quality and also dynamically coordinate actions. Our algorithm comprises three elements: online planning with Monte Carlo Tree Search (MCTS), factored representations of local agent interactions with coordination graphs, and the iterative Max-Plus method for joint action selection. We evaluate our approach on the benchmark SysAdmin domain with static coordination graphs and achieve comparable performance with much lower computation cost than our MCTS baselines. We also introduce a multi-drone delivery domain with dynamic, i.e., state-dependent coordination graphs, and demonstrate how our approach scales to large problems on this domain that are intractable for other MCTS methods. We provide an open-source implementation of our algorithm at https://github.com/JuliaPOMDP/FactoredValueMCTS.jl. | computer science |
This short note emphasises a potential tension between string models of inflation based on systems of branes and antibranes and the spectrum of strings in curved space, in particular the requirement that the leading Regge trajectory extends to the Planck scale allowing for the conventional string theory UV completion of gravity. | high energy physics theory |
Within the CERN RD50 Collaboration, MCz Si has been identified as one of the best materials for detectors used in high-luminosity collider experiments. The n-in-p Si detectors are among the most radiation-hard detectors and can be used for the Phase-2 upgrade of the new Compact Muon Solenoid tracker detector in 2026. The choice of n- or p-type MCz Si material rests upon the higher electrical charge collection efficiency of these detectors compared with n- or p-type FZ Si detectors. A bulk radiation damage model for n- and p-type MCz Si needs to be developed for the design, development and optimization of advanced detectors for next-generation High Energy Physics experiments. In this work, an advanced four-level deep-trap mixed-irradiation model (E5, CiOi, E(30K), H(152K)) for n-MCz Si is proposed by comparing experimental data on the full depletion voltage and leakage current with Shockley-Read-Hall (SRH) recombination statistics results for a mixed-irradiated n-MCz Si PAD detector. The prediction uncertainty of the mixed-irradiation radiation damage model in the full depletion voltage and leakage current, which stems from the uncertainty in the macroscopic results of the experimental measurements on the mixed-irradiated n-MCz Si pad detector, is also considered. Very good agreement is observed between the experimental and SRH results for the full depletion voltage and leakage current. This model is also used to extrapolate the full depletion voltage to higher mixed (proton + neutron) irradiation fluences for a thin n-MCz Si microstrip detector. | physics |
Global sensitivity analysis aims at measuring the relative importance of different variables or groups of variables for the variability of a quantity of interest. Among several sensitivity indices, the so-called Shapley effects have recently gained popularity, mainly because the Shapley effects of all the individual variables sum to the overall variance, which gives better interpretability than the classical sensitivity indices called main effects and total effects. In this paper, assuming that all the input variables are independent, we introduce a quite simple Monte Carlo algorithm to estimate the Shapley effects for all the individual variables simultaneously, which drastically simplifies the existing algorithms proposed in the literature. We present a short Matlab implementation of our algorithm and show some numerical results. A possible extension to the case where the input variables are dependent is also discussed. | statistics |
Recognizing that the main difficulty in modeling daily precipitation amounts is the selection of an appropriate probability distribution, this study aims to establish a model selection framework to identify the appropriate probability distribution for modeling daily precipitation amounts from among the commonly used probability distributions, i.e., the exponential, Gamma, Weibull, and mixed exponential distributions. The mixed Gamma-Weibull (MGW) distribution serves this purpose because all the commonly used probability distributions are special cases of the MGW distribution, and the MGW distribution thus integrates all the commonly used probability distributions into one framework. Finally, via large-sample inference on likelihood ratios, a model selection criterion can be established to identify the appropriate model for daily precipitation amounts within the MGW distribution framework. | statistics |
Factorial experiments in research on memory, language, and in other areas are often analyzed using analysis of variance (ANOVA). However, for effects with more than one numerator degree of freedom, e.g., for experimental factors with more than two levels, the ANOVA omnibus F-test is not informative about the source of a main effect or interaction. Because researchers typically have specific hypotheses about which condition means differ from each other, a priori contrasts (i.e., comparisons planned before the sample means are known) between specific conditions or combinations of conditions are the appropriate way to represent such hypotheses in the statistical model. Many researchers have pointed out that contrasts should be "tested instead of, rather than as a supplement to, the ordinary `omnibus' F test" (Hays, 1973, p. 601). In this tutorial, we explain the mathematics underlying different kinds of contrasts (i.e., treatment, sum, repeated, polynomial, custom, nested, and interaction contrasts), discuss their properties, and demonstrate how they are applied in the R System for Statistical Computing (R Core Team, 2018). In this context, we explain the generalized inverse, which is needed to compute the coefficients for contrasts that test hypotheses not covered by the default set of contrasts. A detailed understanding of contrast coding is crucial for successful and correct specification in linear models (including linear mixed models). Contrasts defined a priori yield far more useful confirmatory tests of experimental hypotheses than the standard omnibus F-test. Reproducible code is available from https://osf.io/7ukf6/. | statistics |
We are interested in the study of Gibbs and equilibrium probabilities on the lattice $\mathbb{R}^{\mathbb{N}}$. Consider the unilateral full-shift defined on the non-compact set $\mathbb{R}^{\mathbb{N}}$ and an $\alpha$-H\"older continuous potential $A$ from $\mathbb{R}^{\mathbb{N}}$ into $\mathbb{R}$. From a suitable class of a priori probability measures $\nu$ (over the Borelian sets of $\mathbb{R}$) we define the Ruelle operator associated to $A$ (using an adequate extension of this operator to the compact set $\overline{\mathbb{R}}^\mathbb{N}=(S^1)^\mathbb{N}$) and we show the existence of eigenfunctions, conformal probability measures and equilibrium states associated to $A$. We are also able to show several of the well-known classical properties of Thermodynamic Formalism for both of these probability measures. The above can be seen as a generalization of the results obtained in the compact case for the XY-model. We also introduce an extension of the definition of entropy and show the existence of $A$-maximizing measures (via ground states for $A$); we show the existence of the zero temperature limit under some mild assumptions. Moreover, we prove the existence of an involution kernel for $A$ (this requires considering the bilateral full-shift on $\mathbb{R}^{\mathbb{Z}}$). Finally, we build a Gibbsian specification for the Borelian sets on the set $\mathbb{R}^{\mathbb{N}}$ and we show that this family of probability measures satisfies a \emph{FKG}-inequality. | mathematics |
We investigate the optimal portfolio deleveraging (OPD) problem with permanent and temporary price impacts, where the objective is to maximize equity while meeting a prescribed debt/equity requirement. We take into consideration the realistic situation of cross impact among different assets. The resulting problem is, however, a non-convex quadratic program with a quadratic constraint and a box constraint, which is known to be NP-hard. In this paper, we first develop a successive convex optimization (SCO) approach for solving the OPD problem and show that the SCO algorithm converges to a KKT point of its transformed problem. Second, we propose an effective global algorithm for the OPD problem, which integrates the SCO method, simple convex relaxation and a branch-and-bound framework, to identify a global optimal solution to the OPD problem within a pre-specified $\epsilon$-tolerance. We establish the global convergence of our algorithm and estimate its complexity. We also conduct numerical experiments to demonstrate the effectiveness of our proposed algorithms with both real data and randomly generated medium- and large-scale OPD problem instances. | mathematics |
We find a background solution of string theory which has a special property that distinguishes it from the usual background solutions. This background solution does not produce the NS-NS two-form fields under T-duality, and therefore the background vacua described by this solution essentially do not involve NS-NS type branes in their configurations, unlike the case of the ordinary Calabi-Yau ansatz. As a result the non-linear $\sigma$-models, whose target space metrics are given by these T-dual partners, can both be torsion-free. | high energy physics theory |
The goal of this work is to reduce drivers' range anxiety by estimating the real-time energy consumption of electric vehicles using a deep convolutional neural network. The real-time estimate can be used to accurately predict the remaining range of the vehicle and hence can reduce range anxiety. In contrast to existing techniques, the non-linearity and complexity induced by the combination of influencing factors make the problem more suitable for a deep learning approach. The proposed approach requires three parameters, namely vehicle speed, tractive effort and road elevation. Multiple experiments with different variants are performed to explore the impact of the number of layers and input feature descriptors. The comparison of the proposed approach with five existing techniques shows that the proposed model performs consistently better than existing techniques, with the lowest error. | electrical engineering and systems science |
Conditional generative adversarial networks (cGANs) have been widely researched to generate class conditional images using a single generator. However, in the conventional cGANs techniques, it is still challenging for the generator to learn condition-specific features, since a standard convolutional layer with the same weights is used regardless of the condition. In this paper, we propose a novel convolution layer, called the conditional convolution layer, which directly generates different feature maps by employing the weights which are adjusted depending on the conditions. More specifically, in each conditional convolution layer, the weights are conditioned in a simple but effective way through filter-wise scaling and channel-wise shifting operations. In contrast to the conventional methods, the proposed method with a single generator can effectively handle condition-specific characteristics. The experimental results on CIFAR, LSUN and ImageNet datasets show that the generator with the proposed conditional convolution layer achieves a higher quality of conditional image generation than that with the standard convolution layer. | computer science |
We present optical photometry and spectroscopy of SN 2016iet, an unprecedented Type I supernova (SN) at $z=0.0676$ with no obvious analog in the existing literature. The peculiar light curve has two roughly equal brightness peaks ($\approx -19$ mag) separated by 100 days, and a subsequent slow decline by 5 mag in 650 rest-frame days. The spectra are dominated by emission lines of calcium and oxygen, with a width of only $3400$ km s$^{-1}$, superposed on a strong blue continuum in the first year, and with a large ratio of $L_{\rm [Ca\,II]}/L_{\rm [O\,I]}\approx 4$ at late times. There is no clear evidence for hydrogen or helium associated with the SN at any phase. We model the light curves with several potential energy sources: radioactive decay, central engine, and circumstellar medium (CSM) interaction. Regardless of the model, the inferred progenitor mass near the end of its life (i.e., CO core mass) is $\gtrsim 55$ M$_\odot$ and up to $120$ M$_\odot$, placing the event in the regime of pulsational pair instability supernovae (PPISNe) or pair instability supernovae (PISNe). The models of CSM interaction provide the most consistent explanation for the light curves and spectra, and require a CSM mass of $\approx 35$ M$_\odot$ ejected in the final decade before explosion. We further find that SN 2016iet is located at an unusually large offset ($16.5$ kpc) from its low metallicity dwarf host galaxy ($Z\approx 0.1$ Z$_\odot$, $M\approx 10^{8.5}$ M$_\odot$), supporting the PPISN/PISN interpretation. In the final spectrum, we detect narrow H$\alpha$ emission at the SN location, likely due to a dim underlying galaxy host or an H II region. Despite the overall consistency of the SN and its unusual environment with PPISNe and PISNe, we find that the inferred properties of SN 2016iet challenge existing models of such events. | astrophysics |
Nitrogen Vacancy (NV) centers in diamond are a platform for several important quantum technologies, including sensing, communication and elementary quantum processors. In this letter we demonstrate the creation of NV centers by implantation using a deterministic single ion source. For this we sympathetically laser-cool single $^{15}\textrm{N}_2^+$ molecular ions in a Paul trap and extract them at an energy of 5.9 keV. Subsequently the ions are focused with a lateral resolution of 121(35) nm and are implanted into a diamond substrate without any spatial filtering by apertures or masks. After high-temperature annealing, we detect the NV centers in a confocal microscope and determine a conversion efficiency of about 0.6%. The $^{15}\textrm{NV}$ centers are characterized by optically detected magnetic resonance (ODMR) on the hyperfine transition and coherence time. | quantum physics |
Diffusion broadening of spectral lines is the main limitation to frequency resolution in non-polarized liquid state nano-NMR. This problem arises from the limited amount of information that can be extracted from the signal before coherence is lost. In liquid state NMR, as with most generic sensing experiments, the signal is thought to decay exponentially, severely limiting resolution. However, there is theoretical evidence that predicts a power law decay of the signal's correlations due to diffusion noise in the non-polarized nano-NMR scenario. In this work we show that in the NV based nano-NMR setup such diffusion noise results in high spectral resolution. | quantum physics |
In this paper, we study the coexistence and synergy between edge and central cloud computing in a heterogeneous cellular network (HetNet), which contains a multi-antenna macro base station (MBS), multiple multi-antenna small base stations (SBSs) and multiple single-antenna user equipment (UEs). The SBSs are empowered by edge clouds offering limited computing services for UEs, whereas the MBS provides high-performance central cloud computing services to UEs via a restricted multiple-input multiple-output (MIMO) backhaul to their associated SBSs. With processing latency constraints at the central and edge networks, we aim to minimize the system energy consumption used for task offloading and computation. The problem is formulated by jointly optimizing the cloud selection, the UEs' transmit powers, the SBSs' receive beamformers, and the SBSs' transmit covariance matrices, which is a mixed-integer, non-convex optimization problem. Based on methods such as the decomposition approach and the successive pseudoconvex approach, a tractable solution is proposed via an iterative algorithm. The simulation results show that our proposed solution can achieve a significant performance gain over conventional schemes using the edge or central cloud alone. Also, with large-scale antennas at the MBS, the massive MIMO backhaul can significantly reduce the complexity of the proposed algorithm and obtain even better performance. | computer science |
In this paper, we consider a dual-hop Multiple Input Multiple Output (MIMO) wireless relay network in the presence of imperfect channel state information (CSI), in which a source-destination pair, both equipped with multiple antennas, communicates through a large number of half-duplex amplify-and-forward (AF) relay terminals. We investigate the performance of three linear beamforming schemes when the CSI of the relay-to-destination (R-D) link is not perfect at the relay nodes. The three efficient linear beamforming schemes are based on the matched-filter (MF), zero-forcing (ZF) precoding and regularized zero-forcing (RZF) precoding techniques, which utilize the CSI of both the S-D channel and the R-D channel at the relay nodes. By modeling the R-D CSI error at the relay nodes as independent complex Gaussian random variables, we derive the ergodic capacities of the three beamformers in terms of the instantaneous SNR. Using the Law of Large Numbers, we obtain the asymptotic capacities, upon which the optimized MF-RZF is derived. Simulation results show that the asymptotic capacities match the respective ergodic capacities very well. Analysis and simulation results demonstrate that the optimized MF-RZF outperforms MF and MF-ZF for any power of R-D CSI error. | electrical engineering and systems science |
One of the key challenges in current Noisy Intermediate-Scale Quantum (NISQ) computers is to control a quantum system with high-fidelity quantum gates. There are many reasons a quantum gate can go wrong -- for superconducting transmon qubits in particular, one major source of gate error is the unwanted crosstalk between neighboring qubits due to a phenomenon called frequency crowding. We motivate a systematic approach for understanding and mitigating the crosstalk noise when executing near-term quantum programs on superconducting NISQ computers. We present a general software solution to alleviate frequency crowding by systematically tuning qubit frequencies according to input programs, trading parallelism for higher gate fidelity when necessary. The net result is that our work dramatically improves the crosstalk resilience of tunable-qubit, fixed-coupler hardware, matching or surpassing other more complex architectural designs such as tunable-coupler systems. On NISQ benchmarks, we improve worst-case program success rate by 13.3x on average, compared to existing traditional serialization strategies. | quantum physics |
We expand Mendelian Randomization (MR) methodology to deal with randomly missing data on either the exposure or the outcome variable, and furthermore with data from non-independent individuals (e.g., components of a family). Our method rests on the Bayesian MR framework proposed by Berzuini et al. (2018), which we apply in a study of multiplex Multiple Sclerosis (MS) Sardinian families to characterise the role of certain plasma proteins in MS causation. The method is robust to the presence of pleiotropic effects in an unknown number of instruments, and is able to incorporate inter-individual kinship information. The introduction of missing data allows us to overcome the bias introduced by the (reverse) effect of treatment (in MS cases) on the level of protein. From a substantive point of view, our study results confirm recent suspicion that an increase in circulating IL12A and STAT4 protein levels does not cause an increase in MS risk, as originally believed, suggesting that these two proteins may not be suitable drug targets for MS. | statistics |
The spatial diffusion of epidemic disease follows the distance decay law of geography, but different diffusion processes may be modeled by different mathematical functions under different spatio-temporal conditions. This paper is devoted to modeling the spatial diffusion patterns of COVID-19 spreading from Wuhan city across Hubei province. The methods include gravity and spatial auto-regression analyses. The local gravity model is derived from allometric scaling and the global gravity model, and then the parameters of the local gravity model are estimated from observational data by linear regression. The main results are as follows. The local gravity model based on power law decay can effectively describe the diffusion patterns and process of COVID-19 in Hubei province, whereas the goodness of fit of the gravity model based on negative exponential decay to the observational data is not satisfactory. Further, the goodness of fit of the model to the data improved steadily over time, the size elasticity coefficient increased first and then decreased, and the distance attenuation exponent decreased first and then increased. Moreover, the significance of the spatial autoregressive coefficient in the model is low, with a confidence level of less than 80%. The conclusions are as follows. (1) The spatial diffusion of COVID-19 in Hubei bears a long-range effect, and the size of a city and its distance to Wuhan affect the total number of confirmed cases. (2) Direct transmission from Wuhan was the main process in the spatial diffusion of COVID-19 in Hubei at the early stage, and horizontal transmission between regions was not significant. (3) The effect of the spatial isolation measures taken by the Chinese government against the transmission of COVID-19 is obvious. This study suggests that the role of gravity should be taken into account in preventing and controlling epidemic disease. | physics |
In the 2017 paper by Dougherty, Kim, Ozkaya, Sok, and Sol\'e about the linear programming bound for LCD codes, the notion $\mathrm{LCD}[n,k]$ was defined for binary LCD $[n,k]$-codes. We find the formula for $\mathrm{LCD}[n,2]$. | mathematics |
We scrutinize the paradigm that conventional long-duration Gamma-Ray Bursts (GRBs) are the dominant source of the ultra-high energy cosmic rays (UHECRs) within the internal shock scenario, by describing the UHECR spectrum and composition and by studying the predicted (source and cosmogenic) neutrino fluxes. Since it has been demonstrated that the stacking searches for astrophysical GRB neutrinos strongly constrain the parameter space in single-zone models, we focus on the dynamics of multiple collisions, for which different messengers are expected to come from different regions of the same object. We propose a model which can describe both stochastic and deterministic engines, which we study in a systematic way. We find that GRBs can indeed describe the UHECRs for a wide range of different model assumptions with comparable quality, albeit with the previously known problematic energy requirements; the heavy mass fraction at injection is found to be larger than 70% (95% CL). We demonstrate that the post-dicted (from UHECR data) neutrino fluxes from sources and UHECR propagation are indeed below the current sensitivities, but will be reached by the next generation of experiments. We finally critically review the required source energetics with the specific examples found in this study. | astrophysics |
Recent advances in black hole astrophysics, particularly the first visual evidence of a supermassive black hole at the center of the galaxy M87 by the Event Horizon Telescope (EHT) and the detection of an orbiting "hot spot" near the event horizon of Sgr A* in the Galactic center by the Gravity Collaboration, require the development of novel numerical methods to understand the underlying plasma microphysics. Non-thermal emission related to such hot spots is conjectured to originate from plasmoids that form due to magnetic reconnection in thin current layers in the innermost accretion zone. Resistivity plays a crucial role in current sheet formation, magnetic reconnection, and plasmoid growth in black hole accretion disks and jets. We include resistivity in the three-dimensional general-relativistic magnetohydrodynamics (GRMHD) code BHAC and present the implementation of an Implicit-Explicit scheme to treat the stiff resistive source terms of the GRMHD equations. The algorithm is tested in combination with adaptive mesh refinement to resolve the resistive scales and a constrained transport method to keep the magnetic field solenoidal. Several novel methods for primitive variable recovery, a key step in relativistic magnetohydrodynamics codes, are presented and compared for accuracy, robustness, and efficiency. We propose a new inversion strategy that allows for resistive-GRMHD simulations of low gas-to-magnetic pressure ratio and highly magnetized regimes, as applicable to black hole accretion disks, jets, and neutron star magnetospheres. We apply the new scheme to study the effect of resistivity on accreting black holes, accounting for dissipative effects such as reconnection. | physics |
Weak invariants are time-dependent observables with conserved expectation values. Their fluctuations, however, do not remain constant in time. On the assumption that time evolution of the state of an open quantum system is given in terms of a completely positive map, the fluctuations monotonically grow even if the map is not unital, in contrast to the fact that monotonic increases of both the von Neumann entropy and R\'enyi entropy require the map to be unital. In this way, the weak invariants describe temporal asymmetry in a manner different from the entropies. A formula is presented for time evolution of the covariance matrix associated with the weak invariants in the case when the system density matrix obeys the Gorini-Kossakowski-Lindblad-Sudarshan equation. | quantum physics |
We present sorting algorithms that represent the fastest known techniques for a wide range of input sizes, input distributions, data types, and machines. A part of the speed advantage is due to the ability to work in-place. Previously, the in-place feature often implied performance penalties. Our main algorithmic contribution is a blockwise approach to in-place data distribution that is provably cache-efficient. We also parallelize this approach taking dynamic load balancing and memory locality into account. Our comparison-based algorithm, In-place Superscalar Samplesort (IPS$^4$o), combines this technique with branchless decision trees. By taking cases with many equal elements into account and by adapting the distribution degree dynamically, we obtain a highly robust algorithm that outperforms the best in-place parallel comparison-based competitor by almost a factor of three. IPS$^4$o also outperforms the best comparison-based competitors in the in-place or not in-place, parallel or sequential settings. IPS$^4$o even outperforms the best integer sorting algorithms in a wide range of situations. In many of the remaining cases (often involving near-uniform input distributions, small keys, or a sequential setting), our new in-place radix sorter turns out to be the best algorithm. Claims to have the, in some sense, "best" sorting algorithm can be found in many papers, which cannot all be true. Therefore, we base our conclusions on extensive experiments involving a large part of the cross product of 21 state-of-the-art sorting codes, 6 data types, 10 input distributions, 4 machines, 4 memory allocation strategies, and input sizes varying over 7 orders of magnitude. This confirms the robust performance of our algorithms while revealing major performance problems in many competitors outside the concrete set of measurements reported in the associated publications. | computer science |
A dormant generic Miura $\mathfrak{sl}_2$-oper is a flat $\mathrm{PGL}_2$-bundle over an algebraic curve in positive characteristic equipped with some additional data. In the present paper, we give a combinatorial description of dormant generic Miura $\mathfrak{sl}_2$-opers on a totally degenerate curve. The combinatorial objects that we use are certain branch numberings of $3$-regular graphs. Our description may be thought of as an analogue of the combinatorial description of dormant $\mathfrak{sl}_2$-opers given by S. Mochizuki, F. Liu, and B. Osserman. It allows us to think of the Miura transformation in terms of combinatorics. As an application, we identify the dormant generic Miura $\mathfrak{sl}_2$-opers on totally degenerate curves of genus $>0$. | mathematics |
Surface acoustic wave (SAW) based sensors for applications in gaseous environments have been widely investigated since the late 1970s. More recently, the focus of SAW-based sensors has shifted towards liquid-phase sensing applications: the SAW sensor directly contacts the solution to be tested and can be utilized for characterizing physical and chemical properties of liquids, as well as for biochemical sensor applications. The design of liquid phase sensors requires the selection of several parameters, such as the acoustic wave polarization (i.e., elliptical, longitudinal or shear horizontal), the wave-guiding medium composition (i.e., homogeneous or non-homogeneous half-spaces, finite thickness plates or composite suspended membranes), and the substrate material type and its crystallographic orientation. The paper provides an overview of different types of SAW sensors suitable for application in liquid environments, and intends to direct the attention of designers to the combinations of materials, wave nature and electrode structures that affect the sensor performance. | physics |
We calculate the magnetic dipole moment of the charged hidden-charm open-strange $ Z_{cs}(3985)^- $ state, recently observed by the BESIII Collaboration. Based on the information provided by the experiment and the theoretical studies that followed the observation, we assign the quantum numbers $ J^{P} = 1^{+}$ and the quark composition $ c \bar c s\bar u $ to this state and estimate the magnetic dipole moment of this resonance in both the compact diquark-antidiquark and molecular pictures. We apply the light cone QCD formalism and use the distribution amplitudes of the on-shell photon with different twists. The obtained results can help experimental groups aiming to measure the electromagnetic properties of such states; comparison of these measurements with the theoretical predictions can help determine the exact nature, substructure and quantum numbers of this strange tetraquark. | high energy physics phenomenology |
In this paper, we present a theory to efficiently deal with the mechanical properties of heterogeneous polymer chains in free space; the central problem is to evaluate the diffusion equation and the orientation-orientation correlation function under the condition of varying persistence length (a measure of bending rigidity) along the chain contour. Additionally, we describe a specific experimental method to measure the variable persistence length, so that our theory can be examined. In order to verify the theoretical predictions, we also performed a large number of Brownian dynamics simulations based on the Generalized Bead-Rod (GBR) model and showed that our theory is in good agreement with the simulation results. As an application, the sequence dependence of the mechanical behavior of DNA chains is successfully analyzed, and we give the exact persistence lengths of the basic dinucleotide steps, which are verified by using these basic steps to design other DNA fragments. | physics |
It is known that data rates in standard cellular networks are limited due to inter-cell interference. An effective solution to this problem is multi-cell cooperation. In Cloud Radio Access Networks (C-RAN), a candidate solution for 5G and future communication networks, cooperation is applied by means of central processors (CPs) connected to simple remote radio heads with finite-capacity fronthaul links. In this study, we consider a downlink C-RAN with a wireless fronthaul and aim to minimize the total power spent by jointly designing beamformers for the fronthaul and access links. We consider the case where perfect channel state information is not available at the CP. We first derive a novel theoretical performance bound for the problem defined. Then we propose four algorithms with different complexities to show the tightness of the bound. The first two algorithms apply successive convex optimization with the semi-definite relaxation idea, while the other two are adapted from well-known beamforming design methods. Detailed simulations under realistic channel conditions show that as the complexity of the algorithm increases, the corresponding performance becomes closer to the bound. | computer science |
In the medical field, ultrasound detection is often performed with piezoelectric arrays that enable one to simultaneously map the acoustic fields at several positions. In this work, we develop a novel method for transforming a single-element ultrasound detector into an effective detection array by spatially filtering the incoming acoustic fields with a binary acoustic mask coded with cyclic Hadamard patterns. By scanning the mask in front of the detector, we obtain a multiplexed measurement dataset from which a map of the acoustic field is analytically constructed. We experimentally demonstrate our method by transforming a single-element ultrasound detector into 1D arrays with up to 59 elements. | electrical engineering and systems science |
We investigate infrared dynamics of four-dimensional Einstein gravity in de Sitter space. We set up a general framework to investigate dynamical scaling relations in quantum/classical gravitational theories. The conformal mode dependence of Einstein gravity is renormalized to the extent that general covariance is not manifest. We point out that the introduction of an inflaton is necessary as a counterterm. We observe and postulate a duality between quantum effects in Einstein gravity and classical evolutions in an inflation (or quintessence) model. The effective action of Einstein gravity can be constructed as an inflation model with manifest general covariance. We show that $g=G_N H^2/\pi$: the only dimensionless coupling of the Hubble parameter $H^2$ and Newton's coupling $G_N$ in Einstein gravity is screened by the infrared fluctuations of the conformal mode. We evaluate the one-loop $\beta$ function of $g$ with respect to the cosmic time $\log Ht$ as $\beta(g)=-(1/2)g^2$, i.e., $g$ is asymptotically free toward the future. The exact $\beta$ function with the backreaction of $g$ reveals the existence of the ultraviolet fixed point. It indicates that the de Sitter expansion started at the Planck scale with a minimal entropy $S=2$. We have identified the de Sitter entropy $1/g$ with the von Neumann entropy of the conformal zero mode. The former evolves according to the screening of $g$ and the Gibbons-Hawking formula. The latter is found to increase by diffusion in the stochastic process at the horizon in a consistent way. Our Universe is located very close to the fixed point $g=0$ with a large entropy. We discuss possible physical implications of our results such as logarithmic decay of dark energy. | high energy physics theory |
We report on the non-equilibrium monopole dynamics in the classical spin ice Dy$_2$Ti$_2$O$_7$ detected by means of high-resolution magnetostriction measurements. Significant lattice changes occur at the transition from the kagome-ice to the saturated-ice phase, visible in the longitudinal and transverse magnetostriction. A hysteresis opening at temperatures below 0.6 K suggests a first-order transition between the kagome and saturated state. Extremely slow lattice relaxations, triggered by changes of the magnetic field, were observed. These lattice-relaxation effects result from non-equilibrium monopole formation or annihilation processes. The relaxation times extracted from our experiment are in good agreement with theoretical predictions with decay constants of the order of $10^4$ s at 0.3 K. | condensed matter |
We propose a generic multivariate extension of detrended fluctuation analysis (DFA) that incorporates interchannel dependencies within input multichannel data to perform its long-range correlation analysis. We then demonstrate the utility of the proposed method on the multivariate signal denoising problem. Particularly, our denoising approach first obtains a data-driven multiscale signal representation via the multivariate variational mode decomposition (MVMD) method. Then, the proposed multivariate extension of DFA (MDFA) is used to reject the predominantly noisy modes based on their randomness scores. The denoised signal is reconstructed using the remaining multichannel modes, albeit after removal of the noise traces using principal component analysis (PCA). The utility of our denoising method is demonstrated on a wide range of synthetic and real-life signals. | electrical engineering and systems science |
We investigate the formation processes of the Galactic globular cluster (GC) omega Cen with multiple stellar populations based on our original hydrodynamical simulations with chemical enrichment by Type II supernovae (SNe II), asymptotic giant branch (AGB) stars, and neutron star mergers (NSMs). The principal results are as follows. Multiple stellar populations with a wide range of [Fe/H] can be formed from a rather massive and compact molecular cloud with a mass of 2 * 10^7 M_sun in the central region of its dwarf galaxy within less than a few hundred Myr. Gas ejected from SNe II and AGB stars can mix well to form new stars with higher He abundances (Y) and higher [Fe/H]. The He-rich stars are strongly concentrated in the GC's central region so that the GC can show a steep negative gradient of Y. Relative ratios of light elements to Fe show bimodal distributions for a given [Fe/H] owing to star formation from original gas and AGB ejecta. [La/Fe] and [Ba/Fe] can rapidly increase until [Fe/H]~-1.5 and then decrease owing to Fe ejection from SNe II. Although AGB ejecta can be almost fully retained in the intra-cluster medium, NSM ejecta can be retained only partially. This difference in the retention capability is responsible for the observed unique [Eu/Fe]-[Fe/H] and [La/Eu]-[Fe/H] relations in omega Cen. Some observational results such as the [O/Na]$-$[Fe/H] relation and the radial [Fe/H] gradient are yet to be well reproduced in the present model. | astrophysics
The thermalisation of a strongly-coupled plasma is studied through the AdS/CFT correspondence. The system starts behaving as in viscous hydrodynamics shortly after the end of the perturbation. Local and nonlocal probes are used to characterise the process towards equilibrium. | high energy physics phenomenology |
Analysis of technical efficiency is an important tool in the management of public libraries. We assess the efficiency of 4660 public libraries established by municipalities in the Czech Republic in the year 2017. For this purpose, we utilize data envelopment analysis (DEA) based on the Chebyshev distance. We pay special attention to the operating environment and find that the efficiency scores significantly depend on the population of the municipality and the distance to the municipality with extended powers. To remove the effect of the operating environment, we perform DEA separately for categories based on decision tree analysis as well as categories designed by an expert. | statistics
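For readers unfamiliar with DEA, the sketch below solves the classical input-oriented CCR efficiency program with scipy. Note this is the textbook radial model, not the Chebyshev-distance formulation used in the paper, and the library data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs x n units), Y: (r outputs x n units). Returns theta in (0, 1]."""
    m, n = X.shape
    r = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # variables: [theta, lambda_1..n]
    A_in = np.hstack([-X[:, [j0]], X])           # sum_j lam_j x_ij <= theta * x_i,j0
    A_out = np.hstack([np.zeros((r, 1)), -Y])    # sum_j lam_j y_kj >= y_k,j0
    A = np.vstack([A_in, A_out])
    b = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(None, None)] + [(0, None)] * n
    return linprog(c, A_ub=A, b_ub=b, bounds=bounds).fun

# Toy data: 2 inputs (staff, budget) and 1 output (loans) for 4 libraries.
X = np.array([[4.0, 6.0, 5.0, 8.0],
              [100.0, 120.0, 90.0, 150.0]])
Y = np.array([[900.0, 1000.0, 950.0, 1100.0]])
for j in range(4):
    print(f"library {j}: efficiency = {dea_ccr_input(X, Y, j):.3f}")
```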
We detect the diffuse thermal Sunyaev-Zeldovich (tSZ) effect from the gas filaments between Luminous Red Galaxy (LRG) pairs using a new approach relying on stacking the individual frequency maps. We apply and demonstrate our method on ~88000 LRG pairs in the SDSS DR12 catalogue selected with an improved selection criterion that ensures minimal contamination by the Galactic CO emission as well as the tSZ signal from clusters of galaxies. We first stack the Planck channel maps and then perform the Internal Linear Combination method to extract the diffuse $y_{\rm sz}$ signal. Our $Stack$ $First$ approach makes the component separation a lot easier, as the stacking greatly suppresses the noise and CMB contributions while the dust foreground becomes homogeneous in the spectral domain across the stacked patch. Thus one component, the CMB, is removed while the rest of the foregrounds are made simpler even before the component separation algorithm is applied. We obtain a WHIM signal of $y_{\rm whim}=(3.78\pm 0.37)\times 10^{-8}$ in the gas filaments, corresponding to an electron overdensity of $\sim 13$. We estimate the detection significance to be $\gtrsim 10.2\sigma$. This excess $y_{\rm sz}$ signal traces the warm-hot intergalactic medium and could account for most of the missing baryons of the Universe. We show that the $Stack$ $First$ approach is more robust to systematics and produces a cleaner signal compared to the methods relying on stacking the $y$-maps to detect weak tSZ signals currently used by the cosmology community. | astrophysics
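The Internal Linear Combination step can be illustrated in a few lines: given multifrequency maps, ILC forms the minimum-variance weighted combination with unit response to an assumed spectral signature. The channel responses and noise levels below are placeholders, not the actual Planck tSZ bands.

```python
import numpy as np

rng = np.random.default_rng(10)
n_freq, n_pix = 4, 5000
a = np.array([1.0, 0.5, -0.2, -1.5])      # assumed tSZ spectral response per channel
y_true = rng.normal(0, 1.0, n_pix)        # toy y signal on a stacked patch
maps = a[:, None] * y_true + rng.normal(0, 2.0, (n_freq, n_pix))  # + noise/foregrounds

C = np.cov(maps)                          # empirical channel-channel covariance
Cinv = np.linalg.inv(C)
w = Cinv @ a / (a @ Cinv @ a)             # ILC weights: unit response to a, min variance
y_hat = w @ maps
print(np.corrcoef(y_hat, y_true)[0, 1])   # recovered vs. true signal correlation
```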
In pioneering works of Meyer and of McMullen in the early 1970s, the set of Minkowski summands of a polytope was shown to be a polyhedral cone called the type cone. Explicit computations of type cones are in general intractable. Nevertheless, we show that the type cone of the product of simplices is the cone over a simplex. This remarkably simple result derives from insights about rainbow point configurations and the work of McMullen. | mathematics |
We introduce two types of statistical quasi-separation between local observables to construct two-party Bell-type inequalities for arbitrary dimensional systems and an arbitrary number of measurement settings per site. Note that the main difference between statistical quasi-separations and the usual statistical separations is that the former are not symmetric under exchange of the two local observables, whereas the latter preserve the symmetry. We show that a variety of Bell inequalities can be derived by sequentially applying the triangle inequalities which statistical quasi-separations satisfy. A sufficient condition is presented to show quantum violations of the Bell-type inequalities with infinitesimal values of critical visibility $v_c$. | quantum physics
The long spin coherence times in ambient conditions of color centers in solids, such as nitrogen-vacancy (NV$^{-}$) centers in diamond, make these systems attractive candidates for quantum sensing. Quantum sensing provides remarkable sensitivity at room temperature to very small external perturbations, including magnetic fields, electric fields, and temperature changes. A photoreceptive molecule, such as those involved in vision, changes its charge state or conformation in response to the absorption of a single photon. We show that the resulting change in the local electric field modifies the properties of a nearby quantum coherent spin center in a detectable fashion. Using the formalism of positive operator-valued measures (POVMs), we analyze the photo-excited electric dipole field and, by extension, the arrival of a photon based on a measured readout, using a fluorescence cycle, from the spin center. We determine the jitter time of photon arrival and the probability of measurement errors. We predict that configuring multiple independent spin sensors around the photoreceptive molecule would dramatically suppress the measurement error. | quantum physics
ASASSN-14dx showed an extraordinary outburst whose features are a small outburst amplitude (~ 2.3 mag) and long duration (> 4 years). Because we found a long observational gap of 123 d before the outburst detection, we propose that the main outburst plateau was missed and that this outburst is just a "fading tail" often seen after WZ Sge-type superoutbursts. In order to distinguish between WZ Sge and SU UMa-type dwarf novae (DNe), we investigated Gaia DR2 statistically. We applied a logistic regression model and succeeded in classifying the two types by using absolute Gaia magnitudes $M_{G}$ and Gaia colors $G_{\rm BP}-G_{\rm RP}$. Our new classifier also suggests that ASASSN-14dx is the best candidate for a WZ Sge-type DN. We estimated the distances from Earth of known WZ Sge stars by using Gaia DR2 parallaxes. The result indicates that ASASSN-14dx is the third-nearest WZ Sge star (next to WZ Sge and V455 And), and hence the object can show the third-brightest WZ Sge-type superoutburst, whose maximum is $V$ = 8-9 mag. | astrophysics
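A sketch of a logistic-regression classifier on the Gaia plane $(M_G, G_{\rm BP}-G_{\rm RP})$: the training distributions below are invented stand-ins for the known WZ Sge- and SU UMa-type samples, and the candidate's values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200
# Invented class distributions in (M_G, BP-RP); label 1 = WZ Sge, 0 = SU UMa.
MG_wz, col_wz = rng.normal(11.5, 0.6, n), rng.normal(0.1, 0.15, n)
MG_su, col_su = rng.normal(9.5, 0.8, n), rng.normal(0.5, 0.20, n)
X = np.column_stack([np.concatenate([MG_wz, MG_su]),
                     np.concatenate([col_wz, col_su])])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = LogisticRegression().fit(X, y)

candidate = np.array([[11.0, 0.2]])        # hypothetical absolute magnitude and color
print(clf.predict_proba(candidate))        # [P(SU UMa), P(WZ Sge)]
```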
Amid the coronavirus disease (COVID-19) pandemic, humanity is experiencing a rapid increase in infection numbers across the world. A challenge hospitals face in the fight against the virus is the effective screening of incoming patients. One methodology is the assessment of chest radiography (CXR) images, which usually requires expert radiologists' knowledge. In this paper, we propose an explainable deep neural network (DNN)-based method for automatic detection of COVID-19 symptoms from CXR images, which we call DeepCOVIDExplainer. We used 15,959 CXR images of 15,854 patients, covering normal, pneumonia, and COVID-19 cases. CXR images are first comprehensively preprocessed, before being augmented and classified with a neural ensemble method, followed by highlighting class-discriminating regions using gradient-guided class activation maps (Grad-CAM++) and layer-wise relevance propagation (LRP). Further, we provide human-interpretable explanations of the predictions. Evaluation results based on hold-out data show that our approach can identify COVID-19 confidently, with a positive predictive value (PPV) of 91.6%, 92.45%, and 96.12% for normal, pneumonia, and COVID-19 cases, respectively, and with precision, recall, and F1 score of 94.6%, 94.3%, and 94.6%, results that are comparable to or better than recent approaches. We hope that our findings will be a useful contribution to the fight against COVID-19 and, more generally, towards an increasing acceptance and adoption of AI-assisted applications in clinical practice. | electrical engineering and systems science
This paper considers a reconfigurable intelligent surface (RIS)-aided millimeter wave (mmWave) downlink communication system where hybrid analog-digital beamforming is employed at the base station (BS). We formulate a power minimization problem by jointly optimizing hybrid beamforming at the BS and the response matrix at the RIS, under signal-to-interference-plus-noise ratio (SINR) constraints. The problem is highly challenging due to the non-convex SINR constraints as well as the non-convex unit-modulus constraints for both the phase shifts at the RIS and the analog beamforming at the BS. A penalty-based algorithm in conjunction with the manifold optimization technique is proposed to handle the problem, followed by an individual optimization method with much lower complexity. Simulation results show that the proposed algorithm outperforms the state-of-the-art algorithm. Results also show that the joint optimization of the RIS response matrix and BS hybrid beamforming is much superior to individual optimization. | computer science
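The unit-modulus constraints are what make the problem non-convex; a common manifold-optimization primitive is a Riemannian gradient step on the complex circle manifold, sketched below on a toy single-user RIS phase-alignment cost. The paper's full penalty-based algorithm, hybrid beamformer, and SINR constraints are not reproduced, and the channels are random placeholders.

```python
import numpy as np

def circle_step(theta, egrad, step):
    """One Riemannian gradient-descent step on the complex-circle manifold
    {theta : |theta_i| = 1} (the unit-modulus constraint set)."""
    rgrad = egrad - np.real(egrad * np.conj(theta)) * theta  # tangent projection
    theta_new = theta - step * rgrad
    return theta_new / np.abs(theta_new)                     # retraction

rng = np.random.default_rng(3)
N = 32
h1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h2 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
a = np.conj(h2) * h1          # cascaded BS-RIS-user channel (toy placeholder)

# Tune RIS phases to maximize |h2^H diag(theta) h1|^2, i.e. minimize its negative.
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
for _ in range(1000):
    g = a @ theta
    egrad = -2 * g * np.conj(a)                # Euclidean gradient of -|a^T theta|^2
    theta = circle_step(theta, egrad, step=0.01)

print(np.abs(a @ theta) ** 2)                  # achieved value
print(np.sum(np.abs(a)) ** 2)                  # analytic upper bound (sum |a_i|)^2
```

The tangent-space projection followed by entrywise normalization is the same primitive a manifold-optimization library such as Manopt applies under the hood.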
Let $M$ be a random matrix chosen according to Haar measure from the unitary group $\mathrm{U}(n,\mathbb{C})$. Diaconis and Shahshahani proved that the traces of $M,M^2,\ldots,M^k$ converge in distribution to independent normal variables as $n \to \infty$, and Johansson proved that the rate of convergence is superexponential in $n$. We prove a finite field analogue of these results. Fixing a prime power $q = p^r$, we choose a matrix $M$ uniformly from the finite unitary group $\mathrm{U}(n,q)\subseteq \mathrm{GL}(n,q^2)$ and show that the traces of $\{ M^i \}_{1 \le i \le k,\, p \nmid i}$ converge to independent uniform variables in $\mathbb{F}_{q^2}$ as $n \to \infty$. Moreover we show the rate of convergence is exponential in $n^2$. We also consider the closely related problem of the rate at which the characteristic polynomial of $M$ equidistributes in `short intervals' of $\mathbb{F}_{q^2}[T]$. Analogous results are also proved for the general linear, special linear, symplectic and orthogonal groups over a finite field. In the latter two families we restrict to odd characteristic. The proofs depend upon applying techniques from analytic number theory over function fields to formulas due to Fulman and others for the probability that the characteristic polynomial of a random matrix equals a given polynomial. | mathematics
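The equidistribution phenomenon is easy to observe numerically in the general linear case: sampling uniformly from $\mathrm{GL}(n,\mathbb{F}_p)$ by rejection and histogramming traces of $M$ and $M^2$ already gives nearly uniform frequencies at small $n$. This is an illustration only; the unitary-group case of the theorem is not simulated here.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)
p, n, samples = 5, 4, 4000

def random_gl():
    """Uniform element of GL(n, F_p) by rejection sampling."""
    while True:
        M = rng.integers(0, p, size=(n, n))
        # For tiny matrices the float determinant is exact after rounding.
        if round(np.linalg.det(M)) % p != 0:
            return M

mats = [random_gl() for _ in range(samples)]
tr1 = Counter(int(np.trace(M)) % p for M in mats)
tr2 = Counter(int(np.trace(M @ M)) % p for M in mats)
print(sorted(tr1.items()))   # each residue appears ~ samples/p times
print(sorted(tr2.items()))
```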
Nuclear magnetic resonance (NMR) spectroscopy is a powerful technique for analyzing the structure and function of molecules, and for performing three-dimensional imaging of the spin density. At the heart of NMR spectrometers is the detection of electromagnetic radiation, in the form of a free induction decay (FID) signal, generated by nuclei precessing around an applied magnetic field. While conventional NMR requires signals from 10^12 or more nuclei, recent advances in sensitive magnetometry have dramatically lowered this number to a level where a few or even individual nuclear spins can be detected. It is natural to ask whether continuous FID detection can still be applied at the single spin level, or whether quantum back-action modifies or even suppresses the NMR response. Here we report on tracking of single nuclear spin precession using periodic weak measurements. Our experimental system consists of carbon-13 nuclear spins in diamond that are weakly interacting with the electronic spin of a nearby nitrogen-vacancy center, acting as an optically readable meter qubit. We observe and minimize two important effects of quantum back-action: measurement-induced decoherence and frequency synchronization with the sampling clock. We use periodic weak measurements to demonstrate sensitive, high-resolution NMR spectroscopy of multiple nuclear spins with a priori unknown frequencies. Our method may provide the optimum route for performing single-molecule NMR at atomic resolution. | quantum physics
In this paper, we study three applications of recursion to problems in coding and random permutations. First, we consider locally recoverable codes with partial locality and use recursion to estimate the minimum distance of such codes. Next we consider weighted lattice representative codes and use recursive subadditive techniques to obtain convergence of the minimum code size. Finally, we obtain a recursive relation involving cycle moments in random permutations and as an illustration, evaluate recursions for the mean and variance. | mathematics |
We report anomalous enhancement of the critical current at low temperatures in gate-tunable Josephson junctions made from topological insulator BiSbTeSe$_2$ nanoribbons with superconducting Nb electrodes. In contrast to conventional junctions, as a function of the decreasing temperature $T$, the increasing critical current $I_c$ exhibits a sharp upturn at a temperature $T_*$ around 20$\%$ of the junction critical temperatures for several different samples and various gate voltages. The $I_c$ vs. $T$ demonstrates a short junction behavior for $T>T_*$, but crosses over to a long junction behavior for $T<T_*$ with an exponential $T$-dependence $I_c \propto \exp\big(-k_B T/\delta \big)$, where $k_B$ is the Boltzmann constant. The extracted characteristic energy-scale $\delta$ is found to be an order of magnitude smaller than the induced superconducting gap of the junction. We attribute the long-junction behavior with such a small $\delta$ to low-energy Andreev bound states (ABS) arising from winding of the electronic wavefunction around the circumference of the topological insulator nanoribbon (TINR). Our TINR-based Josephson junctions with low-energy ABS are promising for future topologically protected devices that may host exotic phenomena such as Majorana fermions. | condensed matter |
Line intensity mapping (LIM) is an emerging observational method to study the large-scale structure of the Universe and its evolution. LIM does not resolve individual sources but probes the fluctuations of integrated line emissions. A serious limitation of LIM is that contributions of different emission lines from sources at different redshifts are all confused at an observed wavelength. We propose a deep learning application to solve this problem. We use conditional generative adversarial networks to extract designated information from LIM. We consider a simple case with two populations of emission line galaxies; H$\rm\alpha$ emitting galaxies at $z = 1.3$ are confused with [OIII] emitters at $z = 2.0$ in a single observed waveband at 1.5 $\rm\mu$m. Our networks trained with 30,000 mock observation maps are able to extract the total intensity and the spatial distribution of H$\rm\alpha$ emitting galaxies at $z = 1.3$. The intensity peaks are successfully located with 74% precision. The precision increases to 91% when we combine the results of 5 networks. The mean intensity and the power spectrum are reconstructed with an accuracy of $\sim$10%. The extracted galaxy distributions at a wider range of redshift can be used for studies on cosmology and on galaxy formation and evolution. | astrophysics
Orbital angular momentum of light is a core feature in photonics. Its confinement to surfaces using plasmonics has unlocked many phenomena and potential applications. Here we introduce the reflection from structural boundaries as a new degree of freedom to generate and control plasmonic orbital angular momentum. We experimentally demonstrate plasmonic vortex cavities, generating a succession of vortex pulses with increasing topological charge as a function of time. We track the spatio-temporal dynamics of these angularly decelerating plasmon pulse train within the cavities for over 300 femtoseconds using time-resolved Photoemission Electron Microscopy, showing that the angular momentum grows by multiples of the chiral order of the cavity. The introduction of this degree of freedom to tame orbital angular momentum delivered by plasmonic vortices, could miniaturize pump-probe-like quantum initialization schemes, increase the torque exerted by plasmonic tweezers and potentially achieve vortex lattice cavities with dynamically evolving topology. | physics |
We present an implementation of $WZjj$ production via vector-boson fusion in the POWHEG BOX, a public tool for the matching of next-to-leading order QCD calculations with multi-purpose parton-shower generators. We provide phenomenological results for electroweak $WZjj$ production with fully leptonic decays at the LHC in realistic setups and discuss theoretical uncertainties associated with the simulation. We find that beyond the leading-order approximation the dependence on the unphysical factorization and renormalization scales is mild. The two tagging jets are furthermore very stable against parton-shower effects. However, considerable sensitivities to the shower Monte-Carlo program used are observed for central-jet veto observables. | high energy physics phenomenology |
We construct a scalar dark matter model with $U(1)_{L_\mu-L_\tau}$ symmetry in which the dark matter interacts with the quark flavours, allowing lepton non-universal $b \to s \ell \bar{\ell}$ decays. The model can solve the $b \to s \mu \mu$ ($R_{K^{(*)}}$) anomaly and accommodate the relic abundance of dark matter simultaneously while satisfying the constraints from other low-energy flavour experiments and direct detection experiments of dark matter. The new fields include vector-like heavy quarks $U$ and $D$, a $U(1)_{L_\mu-L_\tau}$ breaking scalar $S$, as well as the dark matter candidate $X_I$ and its heavy partner $X_R$. To explain both the $b \to s \mu \mu$ anomaly and the dark matter, {\it i)} a large mass difference between $X_R$ and $X_I$ is required, {\it ii)} electroweak scale dark matter and heavy quarks are favoured, {\it iii)} not only electroweak-scale but also ${\cal O}(10)$ TeV dark gauge boson $Z'$ and $X_R$ are allowed. | high energy physics phenomenology
The orbits of the least chemically enriched stars open a window on the formation of our Galaxy when it was still in its infancy. The common picture is that these low-metallicity stars are distributed as an isotropic, pressure-supported component since these stars were either accreted from the early building blocks of the assembling Milky Way, or were later brought by the accretion of faint dwarf galaxies. Combining the metallicities and radial velocities from the Pristine and LAMOST surveys and Gaia DR2 parallaxes and proper motions for an unprecedentedly large and unbiased sample of very metal-poor stars at $[Fe/H]\leq-2.5$, we show that this picture is incomplete. This sample shows strong statistical evidence (at the $5.0\sigma$ level) of asymmetry in their kinematics, favouring prograde motion. Moreover, we find that $31\%$ of the stars that currently reside in the disk do not venture outside of the disk plane throughout their orbit. The discovery of this population implies that a significant fraction of stars with iron abundances $[Fe/H]\leq-2.5$ formed within or concurrently with the Milky Way disk and that the history of the disk was quiet enough to allow them to retain their disk-like orbital properties. | astrophysics
Noisy intermediate-scale quantum (NISQ) computers have gate errors and decoherence, limiting the depth of circuits that can be implemented on them. A strategy for NISQ algorithms is to reduce the circuit depth at the expense of increasing the qubit count. Here, we exploit this trade-off for an application called entanglement spectroscopy, where one computes the entanglement of a state $| \psi \rangle$ on systems $AB$ by evaluating the R\'enyi entropy of the reduced state $\rho_A = {\rm Tr}_B(| \psi \rangle \langle \psi |)$. For a $k$-qubit state $\rho(k)$, the R\'enyi entropy of order $n$ is computed via ${\rm Tr}(\rho(k)^{n})$, with the complexity growing exponentially in $k$ for classical computers. Johri, Steiger, and Troyer [PRB 96, 195136 (2017)] introduced a quantum algorithm that requires $n$ copies of $| \psi \rangle$ and whose depth scales linearly in $k*n$. Here, we present a quantum algorithm requiring twice the qubit resources ($2n$ copies of $| \psi \rangle$) but with a depth that is independent of both $k$ and $n$. Surprisingly this depth is only two gates. Our numerical simulations show that this short depth leads to an increased robustness to noise. | quantum physics |
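For orientation, the quantity the quantum circuits estimate can be checked classically for small systems: reshape the pure state, trace out $B$, and evaluate ${\rm Tr}(\rho_A^n)$. The sketch below is a numpy verification of that quantity, not the constant-depth circuit itself.

```python
import numpy as np

def renyi_entropy(psi, dimA, dimB, n=2):
    """Order-n Renyi entropy of rho_A = Tr_B |psi><psi| for a pure state psi."""
    M = psi.reshape(dimA, dimB)     # |psi> = sum_ij M_ij |i>_A |j>_B
    rhoA = M @ M.conj().T           # partial trace over B
    tr_rho_n = np.trace(np.linalg.matrix_power(rhoA, n)).real
    return np.log2(tr_rho_n) / (1 - n)

# Two-qubit example: cos(t)|00> + sin(t)|11>.
t = 0.3
psi = np.array([np.cos(t), 0, 0, np.sin(t)])
print(renyi_entropy(psi, 2, 2, n=2))   # 0 for a product state, 1 for a Bell state
```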
This work characterizes (dyadic) wavelet frames for $L^2({\mathbb R})$ by means of spectral techniques. These techniques use decomposability properties of the frame operator in spectral representations associated to the dilation operator. The approach is closely related to usual Fourier domain fiberization techniques, dual Gramian analysis and extension principles, which are described here on the basis of the periodized Fourier transform. In a second paper of this series, we shall show how the spectral formulas obtained here permit us to calculate all the tight wavelet frames for $L^2({\mathbb R})$ with a fixed number of generators of minimal support. | mathematics |
We show that, unlike the usual topologies, the $g$-topologies are closed under Cartesian products. Moreover, we give detailed explanations and some examples of concepts related to statistical metric spaces. | mathematics
In this paper, three techniques of internal image representation in a quantum computer are compared: Flexible Representation of Quantum Images (FRQI), Novel Enhanced Quantum Representation of digital images (NEQR), and Quantum Boolean Image Processing (QBIP). All salient technical aspects are considered in this comparison for a complete analysis: i) performance as a Classical-to-Quantum (Cl2Qu) interface, ii) characteristics of the employed qubits, iii) sparsity of the used internal registers, iv) number and size of the required registers, v) quality of the outcome recovery, vi) number of required gates and the consequent accumulated noise, vii) decoherence, and viii) fidelity. These analyses and demonstrations are automatically extended to all variants of FRQI and NEQR. This study demonstrates the practical infeasibility of implementing FRQI and NEQR on a physical quantum computer (QPU), while QBIP has proven to be extremely successful on a) the four main quantum simulators on the cloud, b) two QPUs, and c) optical circuits from three labs. Moreover, QBIP also demonstrated its economy regarding the resources needed for its proper function and its great robustness (immunity to noise), among other advantages. | quantum physics
Since upcoming telescopes will observe thousands of strong lensing systems, creating fully-automated analysis pipelines for these images becomes increasingly important. In this work, we take a step in that direction by developing the first end-to-end differentiable strong lensing pipeline. Our approach leverages and combines three important computer science developments: (a) convolutional neural networks, (b) efficient gradient-based sampling techniques, and (c) deep probabilistic programming languages. The latter automatize parameter inference and enable the combination of generative deep neural networks and physics components in a single model. In the current work, we demonstrate that it is possible to combine a convolutional neural network trained on galaxy images as a source model with a fully-differentiable and exact implementation of gravitational lensing physics in a single probabilistic model. This does away with hyperparameter tuning for the source model, enables the simultaneous optimization of nearly one hundred source and lens parameters with gradient-based methods, and allows the use of efficient gradient-based posterior sampling techniques. These features make this automated inference pipeline potentially suitable for processing large amounts of data. By analyzing mock lensing systems with different signal-to-noise ratios, we show that lensing parameters are reconstructed with percent-level accuracy. More generally, we consider this work as one of the first steps in establishing differentiable probabilistic programming techniques in the particle astrophysics community, which have the potential to significantly accelerate and improve many complex data analysis tasks. | astrophysics
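The "exact implementation of gravitational lensing physics" amounts to a smooth forward model; a minimal version with a singular-isothermal-sphere deflection and a Gaussian source is sketched below. The neural source model and the probabilistic-programming layer are not reproduced, and all parameter values are placeholders. Because every operation is smooth, replacing numpy with an autodiff array library would make all parameters optimizable by gradient descent.

```python
import numpy as np

def sis_deflection(x, y, theta_E):
    """Deflection field of a singular isothermal sphere lens."""
    r = np.sqrt(x**2 + y**2) + 1e-12
    return theta_E * x / r, theta_E * y / r

def render(theta_E, src_x, src_y, src_sigma, npix=64, fov=4.0):
    """Ray-trace image-plane pixels to the source plane and evaluate a
    Gaussian source there (all angles in arbitrary angular units)."""
    grid = np.linspace(-fov / 2, fov / 2, npix)
    x, y = np.meshgrid(grid, grid)
    ax, ay = sis_deflection(x, y, theta_E)
    bx, by = x - ax, y - ay                 # lens equation: beta = theta - alpha
    return np.exp(-((bx - src_x)**2 + (by - src_y)**2) / (2 * src_sigma**2))

img = render(theta_E=1.0, src_x=0.1, src_y=0.0, src_sigma=0.2)
print(img.shape, img.max())
```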
We formulate a series of conjectures relating the geometry of conformal manifolds to the spectrum of local operators in conformal field theories in $d>2$ spacetime dimensions. We focus on conformal manifolds with limiting points at infinite distance with respect to the Zamolodchikov metric. Our central conjecture is that all theories at infinite distance possess an emergent higher-spin symmetry, generated by an infinite tower of currents whose anomalous dimensions vanish exponentially in the distance. Stated geometrically, the diameter of a non-compact conformal manifold must diverge logarithmically in the higher-spin gap. In the holographic context our conjectures are related to the Distance Conjecture in the swampland program. Interpreted gravitationally, they imply that approaching infinite distance in moduli space at fixed AdS radius, a tower of higher-spin fields becomes massless at an exponential rate that is bounded from below in Planck units. We discuss further implications for conformal manifolds of superconformal field theories in three and four dimensions. | high energy physics theory |
A geometric formalism is developed which allows one to describe the non-linear regime of higher-spin gravity emerging on a cosmological quantum space-time in the IKKT matrix model. The vacuum solutions are Ricci-flat up to an effective vacuum energy-momentum tensor quadratic in the torsion, which arises from a Weitzenb\"ock-type higher spin connection. Torsion is expected to be significant only at cosmic scales and around very massive objects, and could behave like dark matter. A non-linear equation for the torsion tensor is found, which encodes the Yang-Mills equations of the matrix model. The metric and torsion transform covariantly under a higher-spin generalization of volume-preserving diffeomorphisms, which arises from the gauge invariance of the matrix model. | high energy physics theory
We consider the problem of robust inference under the generalized linear model (GLM) with stochastic covariates. We derive the properties of the minimum density power divergence estimator of the parameters in GLM with random design and use this estimator to propose robust Wald-type tests for testing any general composite null hypothesis about the GLM. The asymptotic and robustness properties of the proposed tests are also examined for the GLM with random design. Application of the proposed robust inference procedures to the popular Poisson regression model for analyzing count data is discussed in detail both theoretically and numerically through simulation studies and real data examples. | statistics |
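A sketch of the minimum density power divergence estimator for the Poisson regression case discussed above: the DPD objective replaces the log-likelihood and automatically downweights outlying counts. The tuning constant alpha and the truncation of the infinite sum over counts are illustrative choices; as alpha tends to 0 the objective approaches the negative log-likelihood, recovering the efficient but non-robust MLE.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(4)
n = 300
x = rng.uniform(-1, 1, n)
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(beta_true[0] + beta_true[1] * x))
y[:10] = 40                                   # gross outliers in the responses

def dpd_loss(beta, alpha=0.5, ymax=80):
    """Density power divergence objective for Poisson regression."""
    mu = np.exp(beta[0] + beta[1] * x)
    grid = np.arange(ymax + 1)
    pmf = poisson.pmf(grid[None, :], mu[:, None])     # n x (ymax+1)
    int_term = np.sum(pmf ** (1 + alpha), axis=1)     # sum_y f(y)^(1+alpha)
    data_term = (1 + 1 / alpha) * poisson.pmf(y, mu) ** alpha
    return np.mean(int_term - data_term)

fit = minimize(dpd_loss, x0=np.zeros(2), method="Nelder-Mead")
print(fit.x)   # close to beta_true despite the contaminated responses
```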
The dicarbon molecular anion is currently of interest as a candidate for laser cooling due to its electronic structure and favorable branching ratios to the ground electronic and vibrational states. Helium has been proposed as a buffer gas to cool the molecule's internal motion. We calculate the cross sections and corresponding rates for rovibrational inelastic collisions of the dicarbon anion with He, and also with Ne and Ar, on three-dimensional ab initio potential energy surfaces using quantum scattering theory. The rates for vibrational quenching with He and Ne are very small and are similar to those for small neutral molecules in collision with helium. The quenching rates for Ar, however, are far larger than those with the other noble gases, suggesting that this may be a more suitable gas for driving vibrational quenching in traps. The implications of these results for laser cooling of the dicarbon anion are discussed. | physics |
Lifetimes of the doubly heavy baryons ${\cal B}_{bb}$ and ${\cal B}_{bc}$ are analyzed within the framework of the heavy quark expansion (HQE). Lifetime differences arise from the spectator effects such as $W$-exchange and Pauli interference. For doubly bottom baryons, the lifetime pattern is $\tau(\Omega_{bb}^-)\sim \tau(\Xi_{bb}^{-})>\tau(\Xi_{bb}^0)$. The $\Xi_{bb}^{0}$ baryon is shortest-lived owing to the $W$-exchange contribution, while $\Xi_{bb}^{-}$ and $\Omega_{bb}^{-}$ have similar lifetimes as they both receive contributions from destructive Pauli interference. We find the lifetime ratio $\tau(\Xi_{bb}^{-})/\tau(\Xi_{bb}^0)=1.26$. The large $W$-exchange contribution to $\Xi_{bc}^0$ through the subprocess $cd\to us\to cd$ and the sizable destructive Pauli interference contribution to $\Xi_{bc}^+$ imply a substantial lifetime difference between $\Xi_{bc}^+$ and $\Xi_{bc}^0$. In the presence of subleading $1/m_c$ and $1/m_b$ corrections to the spectator effects, we find that $\tau(\Omega_{bc}^0)$ becomes the longest-lived. This is because $\Gamma^{\rm int}_+$ and $\Gamma^{\rm semi}$ for $\Omega_{bc}^0$ are subject to large cancellation between dimension-6 and -7 operators. This implies that the subleading corrections are too large to justify the validity of the HQE. Demanding that $\Gamma^{cs}_{{\rm int+}}(\Omega_{bc}^0)$, $\Gamma^{{\rm SL},cs}_{\rm int}(\Omega_{bc}^0)$ be positive and $\Gamma^{cu}_{{\rm int-}}(\Xi^+_{bc})$ be negative, we conjecture that $1.68\times 10^{-13}s<\tau(\Omega_{bc}^0)< 3.70\times 10^{-13}s$, $4.09\times 10^{-13}s<\tau(\Xi_{bc}^+)< 6.07\times 10^{-13}s$ and $0.93\times 10^{-13}s<\tau(\Xi_{bc}^0)< 1.18\times 10^{-13}s$. Hence, the lifetime hierarchy of ${\cal B}_{bc}$ baryons is expected to follow the pattern $\tau(\Xi_{bc}^{+})>\tau(\Omega_{bc}^0)>\tau(\Xi_{bc}^0)$. | high energy physics phenomenology
Fault-tolerant quantum computers offer the promise of dramatically improving machine learning through speed-ups in computation or improved model scalability. In the near-term, however, the benefits of quantum machine learning are not so clear. Understanding the expressibility and trainability of quantum models, and quantum neural networks in particular, requires further investigation. In this work, we use tools from information geometry to define a notion of expressibility for quantum and classical models. The effective dimension, which depends on the Fisher information, is used to prove a novel generalisation bound and establish a robust measure of expressibility. We show that quantum neural networks are able to achieve a significantly better effective dimension than comparable classical neural networks. To then assess the trainability of quantum models, we connect the Fisher information spectrum to barren plateaus, the problem of vanishing gradients. Importantly, certain quantum neural networks can show resilience to this phenomenon and train faster than classical models due to their favourable optimisation landscapes, captured by a more evenly spread Fisher information spectrum. Our work is the first to demonstrate that well-designed quantum neural networks offer an advantage over classical neural networks through a higher effective dimension and faster training ability, which we verify on real quantum hardware. | quantum physics
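The effective dimension can be sketched for a toy classical model: estimate the Fisher information over sampled parameters, normalize its average trace, and evaluate the determinant-based formula. The normalization and the $n/(2\pi\log n)$ scaling below follow the form used in the effective-dimension literature; treat these details as assumptions rather than the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4                                     # number of model parameters

def fisher(theta, n_x=2000):
    """Monte Carlo Fisher information of a logistic model p(y=1|x) = s(theta.x)."""
    X = rng.standard_normal((n_x, d))
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    w = p * (1 - p)
    return (X * w[:, None]).T @ X / n_x

# Sample parameters from the (assumed) parameter space [-1, 1]^d.
thetas = rng.uniform(-1, 1, size=(50, d))
F = np.array([fisher(t) for t in thetas])

# Normalize so the average trace equals d.
F_hat = F * d / np.mean(np.trace(F, axis1=1, axis2=2))

n = 10_000                                # number of data samples
kappa = n / (2 * np.pi * np.log(n))
logdets = [np.linalg.slogdet(np.eye(d) + kappa * Fh)[1] for Fh in F_hat]
# Volume-average of sqrt(det(...)), then the effective-dimension formula.
log_avg = np.log(np.mean(np.exp(0.5 * np.array(logdets))))
d_eff = 2 * log_avg / np.log(kappa)
print(f"effective dimension ~ {d_eff:.2f} of d = {d}")
```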
The search for efficient spin conversion in Bi has attracted great attention in spin-orbitronics. In the present work, we employ spin-torque ferromagnetic resonance to investigate spin conversion in Bi/Ni80Fe20(Py) bilayer films with continuously varying Bi thickness. In contrast with previous studies, sizable spin-transfer torque (i.e., a sizable spin-conversion effect) is observed in Bi/Py bilayer film. Considering the absence of spin conversion in Bi/yttrium-iron-garnet bilayers and the enhancement of spin conversion in Bi-doped Cu, the present results indicate the importance of material combinations to generate substantial spin-conversion effects in Bi. | condensed matter |
Whereas most dimensionality reduction techniques (e.g. PCA, ICA, NMF) for multivariate data essentially rely on linear algebra to a certain extent, summarizing ranking data, viewed as realizations of a random permutation $\Sigma$ on a set of items indexed by $i\in \{1,\ldots,\; n\}$, is a great statistical challenge, due to the absence of vector space structure for the set of permutations $\mathfrak{S}_n$. It is the goal of this article to develop an original framework for possibly reducing the number of parameters required to describe the distribution of a statistical population composed of rankings/permutations, on the premise that the collection of items under study can be partitioned into subsets/buckets, such that, with high probability, items in a certain bucket are either all ranked higher or else all ranked lower than items in another bucket. In this context, $\Sigma$'s distribution can hopefully be represented in a sparse manner by a bucket distribution, i.e. a bucket ordering plus the ranking distributions within each bucket. More precisely, we introduce a dedicated distortion measure, based on a mass transportation metric, in order to quantify the accuracy of such representations. The performance of buckets minimizing an empirical version of the distortion is investigated through a rate bound analysis. Complexity penalization techniques are also considered to select the shape of a bucket order with minimum expected distortion. Beyond theoretical concepts and results, numerical experiments on real ranking data are displayed in order to provide empirical evidence of the relevance of the approach promoted. | statistics
We study the generation of hybrid entanglement in a one-dimensional quantum walk. In particular, we explore the preparation of maximally entangled states between position and spin degrees of freedom. We address it as an optimization problem, where the cost function is the Schmidt norm. We then benchmark the algorithm and compare the generation of entanglement between the Hadamard quantum walk, the random quantum walk and the optimal quantum walk. Finally, we discuss an experimental scheme with a photonic quantum walk in the orbital angular momentum of light. The experimental measurement of entanglement can be achieved with quantum state tomography. | quantum physics |
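The entanglement being optimized is easy to evaluate numerically: simulate the coined walk, then take a Schmidt decomposition (an SVD) of the position-coin amplitude matrix. The sketch below reproduces the standard Hadamard-walk value of roughly 0.87 for a symmetric initial coin; the optimization loop over coin parameters is omitted.

```python
import numpy as np

def walk_entanglement(steps, coin_angle):
    """Simulate a 1D discrete-time quantum walk and return the entanglement
    entropy between the coin and position degrees of freedom."""
    npos = 2 * steps + 1
    psi = np.zeros((npos, 2), dtype=complex)        # (position, coin) amplitudes
    psi[steps, :] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric initial coin
    c, s = np.cos(coin_angle), np.sin(coin_angle)
    C = np.array([[c, s], [s, -c]])                 # coin operator (Hadamard at pi/4)
    for _ in range(steps):
        psi = psi @ C.T                             # apply the coin
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]                # coin 0 moves right
        shifted[:-1, 1] = psi[1:, 1]                # coin 1 moves left
        psi = shifted
    # Schmidt decomposition between position and coin via SVD.
    sv = np.linalg.svd(psi, compute_uv=False)
    p = sv**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))                  # in [0, 1] for a qubit coin

print(walk_entanglement(100, np.pi / 4))            # Hadamard walk: ~0.87
```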
The mechanical performance of Directed Energy Deposition Additive Manufactured (DED-AM) components can be highly material dependent. Through in situ and operando synchrotron X-ray imaging we capture the underlying phenomena controlling build quality of stainless steel (SS316) and titanium alloy (Ti6242 or Ti-6Al-2Sn-4Zr-2Mo). We reveal three mechanisms influencing the build efficiency of titanium alloys compared to stainless steel: blown powder sintering; reduced melt-pool wetting due to the sinter; and pore pushing in the melt-pool. The former two directly increase lack-of-fusion porosity, while the latter causes end-of-track porosity. Each phenomenon influences the melt-pool characteristics, wetting of the substrate and hence build efficacy and undesirable microstructural feature formation. We demonstrate that porosity is related to powder characteristics, pool flow, and solidification front morphology. Our results clarify DED-AM process dynamics, illustrating why each alloy builds differently, facilitating the wider application of additive manufacturing to new materials. | physics
We determine analytically the energy gap at weak coupling in the attractive multi-component Gaudin--Yang model, an integrable model which describes interacting fermions in one dimension with $\kappa$ components. We use three different methods. The first one is based on a direct analysis of the Bethe ansatz equations. The second method uses the theory of resurgence and the large order behavior of the perturbative series for the ground state energy. The third method is based on a renormalization group analysis. The three methods lead to the same answer, providing in this way a non-trivial test of the ideas of resurgence and renormalons as applied to non-relativistic many-body systems. | high energy physics theory |
In the presence of inertia-gravity waves, the geostrophic and hydrostatic balance that characterises the slow dynamics of rapidly rotating, strongly stratified flows holds in a time-averaged sense and applies to the Lagrangian-mean velocity and buoyancy. We give an elementary derivation of this wave-averaged balance and illustrate its accuracy in numerical solutions of the three-dimensional Boussinesq equations, using a simple configuration in which vertically planar near-inertial waves interact with a barotropic anticyclonic vortex. We further use the conservation of the wave-averaged potential vorticity to predict the change in the barotropic vortex induced by the waves. | physics
Approximate Bayesian computation (ABC) provides us with a way to infer parameters of models, for which the likelihood function is not available, from an observation. Using ABC, which depends on many simulations from the considered model, we develop an inferential framework to learn parameters of a stochastic numerical simulator of volcanic eruption. Moreover, the model itself is parallelized using the Message Passing Interface (MPI). Thus, we develop a nested-parallelized MPI communicator to handle the expensive numerical model with ABC algorithms. ABC usually relies on summary statistics of the data in order to measure the discrepancy between model output and observation. However, informative summary statistics cannot be found for the considered model. We therefore develop a technique to learn a distance between model outputs based on deep metric-learning. We use this framework to learn the plume characteristics (e.g. initial plume velocity) of the volcanic eruption from the tephra deposits collected by fieldwork associated with the 2450 BP Pululagua (Ecuador) volcanic eruption. | statistics
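Stripped of the MPI nesting and the learned distance, ABC reduces to a simple rejection loop; below is a toy version with a Gaussian stand-in for the eruption simulator and hand-picked summary statistics (all names, priors, and tolerances are invented for illustration).

```python
import numpy as np

def simulator(theta, n=100, rng=None):
    """Toy stand-in for the volcanic-eruption simulator: 'deposit'
    measurements whose mean and spread depend on the parameters."""
    return rng.normal(theta[0], theta[1], size=n)

def summary(x):
    return np.array([np.mean(x), np.std(x)])

rng = np.random.default_rng(6)
observed = simulator(np.array([2.0, 1.5]), rng=rng)      # pretend field data
s_obs = summary(observed)

# ABC rejection: sample from the prior, keep parameters whose simulated
# summaries fall within epsilon of the observed ones.
n_draws, eps = 20000, 0.2
prior = np.column_stack([rng.uniform(0, 5, n_draws),     # e.g. plume velocity scale
                         rng.uniform(0.1, 3, n_draws)])
kept = []
for theta in prior:
    s = summary(simulator(theta, rng=rng))
    if np.linalg.norm(s - s_obs) < eps:
        kept.append(theta)
kept = np.array(kept)
print(len(kept), kept.mean(axis=0))   # posterior sample, mean near (2.0, 1.5)
```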
Scoliosis is a congenital disease that causes lateral curvature in the spine. Its assessment relies on the identification and localization of vertebrae in spinal X-ray images, conventionally via tedious and time-consuming manual radiographic procedures that are prone to subjectivity and observational variability. Reliability can be improved through the automatic detection and localization of spinal landmarks. To guide a CNN in the learning of spinal shape while detecting landmarks in X-ray images, we propose a novel loss based on a bipartite distance (BPD) measure, and show that it consistently improves landmark detection performance. | electrical engineering and systems science |
To obtain direct measurements of the muon content of extensive air showers with energy above $10^{16.5}$ eV, the Pierre Auger Observatory is currently being equipped with an underground muon detector (UMD), consisting of 219 10 $\mathrm{m^2}$-modules, each segmented into 64 scintillators coupled to silicon photomultipliers (SiPMs). Direct access to the shower muon content allows for the study of both of the composition of primary cosmic rays and of high-energy hadronic interactions in the forward direction. As the muon density can vary between tens of muons per m$^2$ close to the intersection of the shower axis with the ground to much less than one per m$^2$ when far away, the necessary broad dynamic range is achieved by the simultaneous implementation of two acquisition modes in the read-out electronics: the binary mode, tuned to count single muons, and the ADC mode, suited to measure a high number of them. In this work, we present the end-to-end calibration of the muon detector modules: first, the SiPMs are calibrated by means of the binary channel, and then, the ADC channel is calibrated using atmospheric muons, detected in parallel to the shower data acquisition. The laboratory and field measurements performed to develop the implementation of the full calibration chain of both binary and ADC channels are presented and discussed. The calibration procedure is reliable to work with the high amount of channels in the UMD, which will be operated continuously, in changing environmental conditions, for several years. | astrophysics |
Healthcare is an essential application of e-services, in which medical images are acquired, processed, analyzed, stored, and protected for diagnostic testing. Image ciphering during storage and transmission over networks has been implemented using many types of ciphering algorithms for security purposes. Current ciphering algorithms are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, RC5, RSA, ...) and chaos cryptography using continuous (Chua, Rossler, Lorenz, ...) or discrete (Logistic, Henon, ...) algorithms. The traditional algorithms struggle with image data as compared to regular textual data, whereas the chaotic algorithms are more efficient for image ciphering. The significant characteristics of chaos are its extreme sensitivity to initial conditions and algorithm parameters. In this paper, medical image security based on hybrid/mixed chaotic algorithms is proposed. The proposed method is implemented using MATLAB, where an image of the retina of the eye used to detect blood vessels is ciphered. The pseudo-random number generators (PRNGs) from the different chaotic algorithms are implemented, and their statistical properties are evaluated using the National Institute of Standards and Technology (NIST) and other statistical test suites. Then, these algorithms are used to secure the data, where the statistical properties of the cipher-text are also tested. We propose two PRNGs to increase the complexity of the PRNGs and to allow many of the NIST statistical tests to be passed: one based on two hybrid mixed chaotic logistic maps and one based on two hybrid mixed chaotic Henon maps, where each chaotic algorithm runs side-by-side and starts with random initial conditions and parameters (encryption keys). The resulting hybrid PRNGs passed many of the NIST statistical test suites. | computer science
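A minimal Python sketch (the paper's implementation is in MATLAB) of the first proposed generator: two logistic maps run side by side with assumed keys, bits extracted by thresholding and XOR mixing, then checked with the NIST frequency (monobit) test.

```python
import numpy as np
from math import erfc, sqrt

def hybrid_logistic_bits(n_bits, x0=0.411, x1=0.732, r0=3.99, r1=3.97):
    """Two chaotic logistic maps run side by side (assumed keys x0, x1, r0, r1);
    one output bit per step by thresholding and XOR-mixing the two orbits."""
    bits = np.empty(n_bits, dtype=np.uint8)
    x, y = x0, x1
    for i in range(n_bits):
        x = r0 * x * (1 - x)
        y = r1 * y * (1 - y)
        bits[i] = (x > 0.5) ^ (y > 0.5)
    return bits

def nist_monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test."""
    s = np.sum(2 * bits.astype(int) - 1)          # map {0,1} -> {-1,+1}
    return erfc(abs(s) / sqrt(2 * len(bits)))

bits = hybrid_logistic_bits(100_000)
print(nist_monobit_pvalue(bits))   # > 0.01 indicates the sequence passes
```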
Segmenting objects in images and separating sound sources in audio are challenging tasks, in part because traditional approaches require large amounts of labeled data. In this paper we develop a neural network model for visual object segmentation and sound source separation that learns from natural videos through self-supervision. The model is an extension of recently proposed work that maps image pixels to sounds. Here, we introduce a learning approach to disentangle concepts in the neural networks, and assign semantic categories to network feature channels to enable independent image segmentation and sound source separation after audio-visual training on videos. Our evaluations show that the disentangled model outperforms several baselines in semantic segmentation and sound source separation. | computer science |
The application of deep learning to pathology assumes the existence of digital whole slide images of pathology slides. However, slide digitization is bottlenecked by the high cost of the precise motor stages in slide scanners that provide the position information used for slide stitching. We propose GloFlow, a two-stage method for creating a whole slide image using optical flow-based image registration, with global alignment via a computationally tractable graph-pruning approach. In the first stage, we train an optical flow predictor to predict pairwise translations between successive video frames to approximate a stitch. In the second stage, this approximate stitch is used to create a neighborhood graph to produce a corrected stitch. On a simulated dataset of video scans of WSIs, we find that our method outperforms known approaches to slide-stitching, and stitches WSIs resembling those produced by slide scanners. | electrical engineering and systems science
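A classical stand-in for the learned pairwise-translation predictor is phase correlation, which recovers the integer shift between consecutive frames from the cross-power spectrum. The sketch below is not the paper's CNN, but it shows the quantity stage-free stitching must estimate.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between two equally sized frames
    using the phase of their cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(7)
frame = rng.random((128, 128))
shifted = np.roll(frame, (5, -9), axis=(0, 1))   # simulated stage motion
print(phase_correlation(shifted, frame))          # (5, -9)
```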
Steen's (2018) Hintikka set properties for Church's type theory based on primitive equality are reduced to the Hintikka set properties of Brown (2007). Using this reduction, a model existence theorem for Steen's properties is derived. | computer science |
Nasopharyngeal Carcinoma (NPC) is a leading form of Head-and-Neck (HAN) cancer in the Arctic, China, Southeast Asia, and the Middle East/North Africa. Accurate segmentation of Organs-at-Risk (OAR) from Computed Tomography (CT) images with uncertainty information is critical for effective planning of radiation therapy for NPC treatment. Despite the state-of-the-art performance achieved by Convolutional Neural Networks (CNNs) for automatic segmentation of OARs, existing methods do not provide uncertainty estimation of the segmentation results for treatment planning, and their accuracy is still limited by several factors, including the low contrast of soft tissues in CT, highly imbalanced sizes of OARs and large inter-slice spacing. To address these problems, we propose a novel framework for accurate OAR segmentation with reliable uncertainty estimation. First, we propose a Segmental Linear Function (SLF) to transform the intensity of CT images to make multiple organs more distinguishable than existing methods based on a simple window width/level that often gives better visibility of one organ while hiding the others. Second, to deal with the large inter-slice spacing, we introduce a novel 2.5D network (named 3D-SepNet) specially designed for dealing with clinical HAN CT scans with anisotropic spacing. Third, while existing hardness-aware loss functions often deal with class-level hardness, our proposed attention to hard voxels (ATH) uses a voxel-level hardness strategy, which is better suited to hard regions even when the corresponding class is easy. Our code is now available at https://github.com/HiLab-git/SepNet. | electrical engineering and systems science
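The SLF idea, as described, is a piecewise-linear intensity transform, which np.interp makes a one-liner. The breakpoints below are invented for illustration (the paper's actual control points are not given here) and simply allocate most of the output range to soft tissue.

```python
import numpy as np

def segmental_linear_transform(ct_hu, control_in, control_out):
    """Piecewise-linear intensity transform of CT Hounsfield units."""
    return np.interp(ct_hu, control_in, control_out)

# Assumed breakpoints: [-1000, -200] air, [-200, 300] soft tissue, [300, 1500] bone.
control_in  = [-1000.0, -200.0, 300.0, 1500.0]
control_out = [0.0, 0.2, 0.8, 1.0]     # soft tissue gets most of the output range

ct_slice = np.array([[-1000.0, -50.0], [40.0, 700.0]])   # toy 2x2 'scan'
print(segmental_linear_transform(ct_slice, control_in, control_out))
```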
Nanocrystalline materials exhibit properties that can differ substantially from those of their single crystal counterparts. As such, they provide ways to enhance and optimise their functionality for devices and applications. Here we report on the optical, mechanical and thermal properties of nanocrystalline silicon probed by means of optomechanical nanobeams to extract information of the dynamics of optical absorption, mechanical losses, heat generation and dissipation. The optomechanical nanobeams are fabricated using nanocrystalline films prepared by annealing amorphous silicon layers at different temperatures. The resulting crystallite sizes and the stress in the films can be controlled by the annealing temperature and time and, consequently, the properties of the films can be tuned relatively freely, as demonstrated here by means of electron microscopy and Raman scattering. We show that the nanocrystallite size and the volume fraction of the grain boundaries play a key role in the dissipation rates through non-linear optical and thermal processes. Promising optical (13000) and mechanical (1700) quality factors were found in the optomechanical cavity realised in the nanocrystalline Si resulting from annealing at 950 C. The enhanced absorption and recombination rates via the intra-gap states and the reduced thermal conductivity boost the potential to exploit these non-linear effects in applications, including NEMS, phonon lasing and chaos-based devices. | physics |
We study the generalization of the Sachdev-Ye-Kitaev (SYK) model to a $1+1$ dimensional chiral SYK model of $N$ flavors of right-moving chiral Majorana fermions with all-to-all random 4-fermion interactions. The interactions in this model are exactly marginal, leading to an exact scaling symmetry. We show the Schwinger-Dyson equation of this model in the large $N$ limit is exactly solvable. In addition, we show this model is integrable for small $N\le6$ by bosonization. Surprisingly, the two point function in the large $N$ limit has exactly the same form as that for $N=4$, although the four point functions of the two cases are quite different. The ground state entropy in the large $N$ limit is the same as that of $N$ free chiral Majorana fermions, leading to a zero ground state entropy density. The OTOC of the model in the large $N$ limit exhibits a non-trivial spacetime structure reminiscent of that found by Gu and Kitaev for generic SYK-like models. Specifically we find a Lyapunov regime inside an asymmetric butterfly cone, which are signatures of quantum chaos, and that the maximal velocity dependent Lyapunov exponent approaches the chaos bound $2\pi/\beta$ as the interaction strength approaches its physical upper bound. Finally, the model is integrable for (at least) $N\le6$ but chaotic in the large $N$ limit, leading us to conjecture that there is a transition from integrability to chaos as $N$ increases past a critical value. | high energy physics theory
This paper introduces a new property of estimators of the strength of statistical association, which helps characterize how well an estimator will perform in scenarios where dependencies between continuous and discrete random variables need to be rank ordered. The new property, termed the estimator response curve, is easily computable and provides a marginal distribution agnostic way to assess an estimator's performance. It overcomes notable drawbacks of current metrics of assessment, including statistical power, bias, and consistency. We utilize the estimator response curve to test various measures of the strength of association that satisfy the data processing inequality (DPI), and show that the CIM estimator's performance compares favorably to kNN, vME, AP, and H_{MI} estimators of mutual information. The estimators which were identified to be suboptimal, according to the estimator response curve, perform worse than the more optimal estimators when tested with real-world data from four different areas of science, all with varying dimensionalities and sizes. | statistics |
In this paper, we present QCD predictions for $\eta_{c} + \gamma$ production at an electron-positron collider up to next-to-next-to-leading order (NNLO) accuracy without renormalization scale ambiguities. The NNLO total cross-section for $e^{+}+e^{-}\to\gamma+\eta_{c}$ using the conventional scale-setting approach has large renormalization scale ambiguities, usually estimated by choosing the renormalization scale to be the $e^+ e^-$ center-of-mass collision energy $\sqrt{s}$. The Principle of Maximum Conformality (PMC) provides a systematic way to eliminate such renormalization scale ambiguities by summing the nonconformal $\beta$ contributions into the QCD coupling $\alpha_s(Q^2)$. The renormalization group equation then sets the value of $\alpha_s$ for the process. The PMC renormalization scale reflects the virtuality of the underlying process, and the resulting predictions satisfy all of the requirements of renormalization group invariance, including renormalization scheme invariance. After applying the PMC, we obtain a scale-and-scheme independent prediction, $\sigma|_{\rm NNLO, PMC}\simeq 41.18$ fb for $\sqrt{s}$=10.6 GeV. The resulting pQCD series matches the series for conformal theory and thus has no divergent renormalon contributions. The large $K$ factor which contributes to this process reinforces the importance of uncalculated NNNLO and higher-order terms. Using the PMC scale-and-scheme independent conformal series and the $\rm Pad\acute{e}$ approximation approach, we predict $\sigma|_{\rm NNNLO, PMC+Pade} \simeq 21.36$ fb, which is consistent with the recent BELLE measurement $\sigma^{\rm obs}$=$16.58^{+10.51}_{-9.93}$ fb at $\sqrt{s} \simeq 10.6$ GeV. This procedure also provides a first estimate of the NNNLO contribution. | high energy physics phenomenology
We investigate the accuracy of conventional machine-learning-aided algorithms for the prediction of lateral land movement in an area using the precise position time series of permanent GNSS stations. The machine learning algorithms used are the same as those in [1], except for the radial basis functions: multilayer perceptron, Bayesian neural network, Gaussian processes, k-nearest neighbor, generalized regression neural network, classification and regression trees, and support vector regression. A comparative analysis is presented in which the accuracy levels of the mentioned machine learning methods are checked against each other. It is shown that the most accurate method for both components of the time series is the Gaussian process, achieving an accuracy of up to 9.5 centimeters. | electrical engineering and systems science
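As an illustration of one of the listed methods, the sketch below fits a Gaussian process to a synthetic GNSS position series (trend plus seasonal signal plus noise; all numbers are invented) and extrapolates with uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
# Synthetic daily east-component positions (mm): trend + seasonal + noise.
t = np.arange(0, 730.0)[:, None]                     # two years of days
pos = 0.02 * t.ravel() + 3 * np.sin(2 * np.pi * t.ravel() / 365.25) \
      + rng.normal(0, 1.0, t.shape[0])

kernel = RBF(length_scale=100.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, pos)

t_future = np.arange(730.0, 760.0)[:, None]          # 30-day-ahead prediction
mean, std = gpr.predict(t_future, return_std=True)
print(mean[:5], std[:5])                              # forecast with uncertainty
```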