text (stringlengths 11-9.77k) | label (stringlengths 2-104) |
---|---|
The stretching response of polymer chains under an external force is crucial for understanding polymer dynamics under equilibrium and non-equilibrium conditions. Here we measure the elastic response of poly(ethylene glycol) using a lock-in based amplitude-modulation AFM with sub-angstrom amplitude in both the low and high frequency regimes. An appropriate analysis that takes into account the cantilever geometry and the hydrodynamic loading effects of an oscillating cantilever in liquid relates the X signal of the lock-in amplifier linearly to stiffness. Stiffness data extracted from the X signal were compared with stiffness obtained from the derivative of conventional "static" force-extension curves. For the stiffness data from the X signal, fitting to the standard entropic worm-like chain (WLC) model gives a physically meaningful value of the persistence length and also follows the WLC scaling behaviour. An entropy-dominated conformational transition with its characteristic V-shaped signature was observed at around 250 pN. Accurate measurement of stiffness enabled us to understand the thermodynamics of conformational changes. | condensed matter |
As quantum contextuality proves to be a necessary resource for universal quantum computation, we present a general method for vector generation of Kochen-Specker (KS) contextual sets in the form of hypergraphs. The method supersedes all three previous methods: (i) fortuitous discoveries of smallest KS sets, (ii) exhaustive upward hypergraph-generation of sets, and (iii) random downward generation of sets from fortuitously obtained big master sets. In contrast to previous works, we can generate master sets which contain all possible KS sets starting with nothing but a few simple vector components. From them we can readily generate all KS sets obtained over the last half-century and any specified new KS sets. Herewith we can generate sufficiently large sets as well as sets with definite required features and structures to enable varieties of different implementations in quantum computation and communication. | quantum physics |
High penetration of plug-in electric vehicles (PEVs) can potentially put utility assets such as transformers under overload stress, causing a decrease in their lifetime. The decrease in PV and battery energy storage system (BESS) prices has made them viable solutions to mitigate this situation. In this paper, the economic aspect of their optimal coordination is studied to assess transformer hottest spot temperature (HST) and loss of life. Monte Carlo simulation is employed to provide synthetic data of PEV loads in a residential complex and to model their stochastic behavior. For load, temperature, energy price and PV generation, real data for the City of College Station, Texas, USA in 2018 are acquired and a case study is developed for one year. The results illustrate that using BESS and PV is economically effective and mitigates distribution transformer loss of life. | electrical engineering and systems science |
We characterise the sentences in Monadic Second-order Logic (MSO) that are over finite structures equivalent to a Datalog program, in terms of an existential pebble game. We also show that for every class C of finite structures that can be expressed in MSO and is closed under homomorphisms, and for all integers l,k, there exists a *canonical* Datalog program Pi of width (l,k), that is, a Datalog program of width (l,k) which is sound for C (i.e., Pi only derives the goal predicate on a finite structure A if A is in C) and with the property that Pi derives the goal predicate whenever *some* Datalog program of width (l,k) which is sound for C derives the goal predicate. The same characterisations also hold for Guarded Second-order Logic (GSO), which properly extends MSO. To prove our results, we show that every class C in GSO whose complement is closed under homomorphisms is a finite union of constraint satisfaction problems (CSPs) of countably categorical structures. | computer science |
There are two big questions cosmologists would like to answer -- How does the Universe work, and what are its origin and destiny? A long wavelength gravitational wave detector -- with million km interferometer arms, achievable only from space -- gives a unique opportunity to address both of these questions. A sensitive, mHz frequency observatory could use the inspiral and merger of massive black hole binaries as standard sirens, extending our ability to characterize the expansion history of the Universe from the onset of dark energy-domination out to a redshift z ~ 10. A low-frequency detector, furthermore, offers the best chance for discovery of exotic gravitational wave sources, including a primordial stochastic background, that could reveal clues to the origin of our Universe. | astrophysics |
We present a covariant phase space construction of Hamiltonian generators of asymptotic symmetries with `Dirichlet' boundary conditions in de Sitter spacetime, extending a previous study of J\"ager. We show that the de Sitter charges so defined are identical to those of Ashtekar, Bonga, and Kesavan (ABK). We then present a comparison of ABK charges with other notions of de Sitter charges. We compare ABK charges with counterterm charges, showing that they differ only by a constant offset, which is determined in terms of the boundary metric alone. We also compare ABK charges with charges defined by Kelly and Marolf at spatial infinity of de Sitter spacetime. When the formalisms can be compared, we show that the two definitions agree. Finally, we express Kerr-de Sitter metrics in four and five dimensions in an appropriate Fefferman-Graham form. | high energy physics theory |
Low-resolution analog-to-digital converters (ADCs) have been considered as a practical and promising solution for reducing cost and power consumption in massive Multiple-Input-Multiple-Output (MIMO) systems. Unfortunately, low-resolution ADCs significantly distort the received signals, and thus make data detection much more challenging. In this paper, we develop a new deep neural network (DNN) framework for efficient and low-complexity data detection in low-resolution massive MIMO systems. Based on reformulated maximum likelihood detection problems, we propose two model-driven DNN-based detectors, namely OBMNet and FBMNet, for one-bit and few-bit massive MIMO systems, respectively. The proposed OBMNet and FBMNet detectors have unique and simple structures designed for low-resolution MIMO receivers and thus can be efficiently trained and implemented. Numerical results also show that OBMNet and FBMNet significantly outperform existing detection methods. | electrical engineering and systems science |
We implement a sample-efficient method for rapid and accurate emulation of semi-analytical galaxy formation models over a wide range of model outputs. We use ensembled deep learning algorithms to produce a fast emulator of an updated version of the GALFORM model from a small number of training examples. We use the emulator to explore the model's parameter space, and apply sensitivity analysis techniques to better understand the relative importance of the model parameters. We uncover key tensions between observational datasets by applying a heuristic weighting scheme in a Markov chain Monte Carlo framework and exploring the effects of requiring improved fits to certain datasets relative to others. Furthermore, we demonstrate that this method can be used to successfully calibrate the model parameters to a comprehensive list of observational constraints. In doing so, we re-discover previous GALFORM fits in an automatic and transparent way, and discover an improved fit by applying a heavier weighting to the fit to the metallicities of early-type galaxies. The deep learning emulator requires a fraction of the model evaluations needed in similar emulation approaches, achieving an out-of-sample mean absolute error at the knee of the K-band luminosity function of 0.06 dex with less than 1000 model evaluations. We demonstrate that this is an extremely efficient, inexpensive and transparent way to explore multi-dimensional parameter spaces, and can be applied more widely beyond semi-analytical galaxy formation models. | astrophysics |
The Ubicomp Digital 2020 -- Time Series Classification Challenge from STABILO is a challenge about multi-variate time series classification. The data were collected from 100 volunteer writers and contain 15 features measured with multiple sensors on a pen. In this paper, we use a neural network to classify the data into 52 classes, that is, the lower and upper cases of Arabic letters. The proposed architecture of the neural network is a CNN-LSTM network. It combines a convolutional neural network (CNN) for short-term context with a long short-term memory (LSTM) layer for long-term dependencies. We reached an accuracy of 68% on our writer-exclusive test set and 64.6% on the blind challenge test set, resulting in second place. | computer science |
The recent discovery of ferromagnetism in two-dimensional (2D) van der Waals (vdW) materials holds promise for novel spintronic devices with exceptional performance. However, in order to utilize 2D vdW magnets for building spintronic nanodevices such as magnetic memories, key challenges remain in terms of effectively switching the magnetization from one state to the other electrically. Here, we devise a bilayer structure of Fe3GeTe2/Pt, in which the magnetization of few-layered Fe3GeTe2 can be effectively switched by the spin-orbit torques (SOTs) originating from the current flowing in the Pt layer. The effective magnetic fields corresponding to the SOTs are further quantitatively characterized using harmonic measurements. Our demonstration of the SOT-driven magnetization switching in a 2D vdW magnet could pave the way for implementing low-dimensional materials in the next-generation spintronic applications. | condensed matter |
In this study, an attempt has been made to propose a way to develop a new distribution. For this purpose, we need only an idea about the distribution function. Some important statistical properties of the new distribution, like moments, cumulants, and the hazard and survival functions, have been derived. The Rényi entropy and Shannon entropy have been obtained. Also, the ML estimate of the parameter of the distribution is obtained, which is not in closed form. Therefore, a numerical technique is used to estimate the parameter. Some real data sets are used to check the suitability of this distribution over some other existing distributions such as Lindley, Garima, Shanker and many more. AIC, BIC, -2 log-likelihood, and the K-S test suggest that the proposed distribution works better than the other distributions considered in this study. | statistics |
The charged and neutral pion mass difference can be attributed to both the QED and QCD contributions. The current quark mass difference ($\Delta m$) is the source of the QCD contribution. Here, in a two flavour non-local NJL model, we try to estimate the QCD contribution. Interestingly, we find that the strength of the $U(1)_A$ symmetry breaking parameter $c$ plays a crucial role for obtaining the pion mass difference while intertwined with the current quark mass difference. To obtain the QCD contribution for the pion mass difference we can scan the parameter space in $\{\Delta m,\; c\}$ and by comparing this with the existing results this parameter space is constrained. Further, using a fitted value of $c$ we can determine the allowed range for the current quark mass difference in the model. | high energy physics phenomenology |
The staggering growth of the number of vehicles worldwide has become a critical challenge, resulting in tragic incidents, environmental pollution, congestion, etc. Therefore, one of the promising approaches is to design a smart vehicular system, as it is beneficial for driving safely. The present vehicular system lacks data reliability, security, and easy deployment. Motivated by these issues, this paper addresses a drone-enabled intelligent vehicular system, which is secure, easy to deploy and reliable in quality. Nevertheless, an increase in the number of operating drones in the communication networks makes them more vulnerable to cyber-attacks, which can completely sabotage the communication infrastructure. To tackle these problems, we propose a blockchain-based registration and authentication system for entities such as drones, smart vehicles (SVs) and roadside units (RSUs). This paper is mainly focused on the blockchain-based secure system design and the optimal placement of drones to improve the spectral efficiency of the overall network. In particular, we investigate the association of RSUs with the drones by considering multiple communication-related factors such as available bandwidth, the maximum number of links a drone can support, and backhaul limitations. We show that the proposed model can easily be overlaid on the current vehicular network, reaping the benefits of secure and reliable communications. | electrical engineering and systems science |
We first propose a general method to construct the complete set of on-shell operator bases involving massive particles with any spins. To incorporate the non-abelian little groups of massive particles, the on-shell scattering amplitude basis should be factorized into two parts: one is charged, and the other one is neutral under little groups of massive particles. The complete set of these two parts can be systematically constructed by choosing some specific Young diagrams of the Lorentz subgroup and global symmetry $U(N)$ respectively ($N$ is the number of external particles), without the equation of motion and integration by parts redundancy. Thus the complete massive amplitude bases without any redundancies can be obtained by combining these two complete sets. Some examples are presented to explicitly demonstrate this method. This method is applicable for constructing amplitude bases involving identical particles, and all the bases can be constructed automatically by computer programs based on it. | high energy physics phenomenology |
We discuss $B_c \to \eta_c$ and $B_c \to J/\psi$ semileptonic decays within the Standard Model (SM) and beyond. The relevant transition form factors, being the main source of theoretical uncertainties, are calculated in the sum rule approach and are provided in a full $q^2$ range. We calculate the semileptonic branching fractions and find for the ratios, $R_{\eta_c}|_{\rm SM}= 0.32 \pm 0.02$, $R_{J/\psi}|_{\rm SM} = 0.23 \pm 0.01$. Both predictions are in agreement with other existing calculations and support the current tension in $R_{J/\psi}$ at 2$\sigma$ level with the experiment. To extend the potential of testing the SM in the semileptonic $B_c$ decays, we consider the forward-backward asymmetry and polarization observables. We also study the 4-fold differential distributions of $B_c \rightarrow J/\psi (J/\psi \to \tilde{\ell}^-\tilde{\ell}^+ ) \ell^- \bar{\nu}_\ell$, where $\tilde{\ell} = e, \mu$, in the presence of different new physics scenarios and find that the new physics effects can significantly modify the angular observables and can also produce effects which do not exist in the SM. Using the constraints on the new physics couplings from the recent combined analysis of BaBar, Belle and LHCb data on semileptonic $B \to D^{(\ast)}$ decays, where the effects of new physics could be visible, we find that these different new physics scenarios are not able to simultaneously explain the current experimental value of $R_{J/\psi}$. | high energy physics phenomenology |
This work aims at modeling how the meaning of gradable adjectives of size (`big', `small') can be learned from visually-grounded contexts. Inspired by cognitive and linguistic evidence showing that the use of these expressions relies on setting a threshold that is dependent on a specific context, we investigate the ability of multi-modal models in assessing whether an object is `big' or `small' in a given visual scene. In contrast with the standard computational approach that simplistically treats gradable adjectives as `fixed' attributes, we pose the problem as relational: to be successful, a model has to consider the full visual context. By means of four main tasks, we show that state-of-the-art models (but not a relatively strong baseline) can learn the function subtending the meaning of size adjectives, though their performance is found to decrease while moving from simple to more complex tasks. Crucially, models fail in developing abstract representations of gradable adjectives that can be used compositionally. | computer science |
Renormalized fermion condensate in $SU(2)$ gauge theory has been calculated in the background of static, stable gauge field configuration using field strength formalism. It is observed that the condensate attains a negative, minimum value at low energy. | high energy physics phenomenology |
Using the ideas of E.I. Gordon we present and further advance an approach, based on nonstandard analysis, to simultaneous approximations of locally compact abelian groups and their duals by (hyper)finite abelian groups, as well as to approximations of various types of Fourier transforms on them by the discrete Fourier transform. Combining some methods of nonstandard analysis and additive combinatorics we prove the three Gordon Conjectures, which had been open since 1991 and are crucial both in the formulations and proofs of the LCA groups and Fourier transform approximation theorems. | mathematics |
The strong decays of $D_1(2420)^0$, $D_2^*(2460)^0$, $D_2^*(2460)^+$, $D_2^*(2460)^-$, $D(2550)^0$, $D_J^*(2600)^0$, $D(2740)^0$, $D_3^*(2750)^0$, $D_3^*(2750)^+$, $D_3^*(2750)^-$, $D_J(3000)^0$, $D{_{J}^*}(3000)^0$ and $D_2^*(3000)^0$ resonance states are analyzed in the heavy quark mass limit of Heavy Quark Effective Theory (HQET). The individual decay rates and the branching ratios among the strong decays determine their spin and parity. From these states, the Regge trajectories are constructed in the $(J, M^2)$ and $(n_r, M^2)$ planes, which are further used to predict the masses of higher excited states (1$^1D_2$, 1$^3D_3$, 3$^1S_0$, 3$^3S_1$, 1$^1F_3$, 1$^3F_4$, 2$^3D_3$, 3$^3P_2$ and 2$^3F_4$) lying on the Regge lines by fixing their slopes and intercepts. Moreover, the strong decay rates and the branching ratios of these higher excited states are also examined, which can help experimentalists to search for these states in their respective decay modes. | high energy physics phenomenology |
As the new-generation precision experiments such as MOLLER and P2 look for physics beyond Standard Model, it is becoming increasingly important to evaluate the higher-order electroweak radiative corrections to a sub-percent level of uncertainty. However, due to propagators with different masses and higher-order tensor Feynman integrals, the two-loop calculations involving thousands of Feynman graphs become a demanding task requiring novel computational approaches. In this paper, we describe our dispersive sub-loop insertion approach and develop two-loop integrals using two-point functions basis which is applicable to wide range of processes. | high energy physics theory |
Two-dimensional (2D) intrinsic half-metallic materials are of great interest to explore the exciting physics and applications of nanoscale spintronic devices, but no such materials have been experimentally realized. Using first-principles calculations based on density-functional theory (DFT), we predicted that single-layer MnAsS$_4$ is a 2D intrinsic ferromagnetic (FM) half-metal. The half-metallic spin gap for single-layer MnAsS$_4$ is about 1.46 eV, and it has a large spin splitting of about 0.49 eV in the conduction band. Monte Carlo simulations predicted the Curie temperature (\emph{T}$_c$) to be about 740 K. Moreover, within the biaxial strain ranging from -5\% to 5\%, the FM half-metallic properties remain unchanged. Its ground state with a 100\% spin-polarization ratio at the Fermi level may make it a promising candidate material for 2D spintronic applications. | condensed matter |
Traditional image compressors, e.g., BPG and H.266, have achieved great image and video compression quality. Recently, convolutional neural networks have been used widely in image compression. We propose an attention-based convolutional neural network for low bit-rate compression to post-process the output of a traditional image compression decoder. Across the experimental results on the validation sets, the post-processing module trained with MAE and MS-SSIM losses yields the highest average PSNR of 32.10 at a bit-rate of 0.15. | electrical engineering and systems science |
Atrial Fibrillation (AF) is an abnormal heart rhythm which can trigger cardiac arrest and sudden death. Nevertheless, its interpretation is mostly done by medical experts due to high error rates of computerized interpretation. One study found that only about 66% of AF were correctly recognized from noisy ECGs. This is in part due to insufficient training data, class skewness, as well as semantical ambiguities caused by noisy segments in an ECG record. In this paper, we propose a K-margin-based Residual-Convolution-Recurrent neural network (K-margin-based RCR-net) for AF detection from noisy ECGs. In detail, a skewness-driven dynamic augmentation method is employed to handle the problems of data inadequacy and class imbalance. A novel RCR-net is proposed to automatically extract both long-term rhythm-level and local heartbeat-level characteristics. Finally, we present a K-margin-based diagnosis model to automatically focus on the most important parts of an ECG record and handle noise by naturally exploiting the expected consistency among the segments associated with each record. The experimental results demonstrate that the proposed method, with an F1NAOP score of 0.8125, outperforms all state-of-the-art deep learning methods for the AF detection task by 6.8%. | electrical engineering and systems science |
We show that the groups of finite energy loops and paths (that is, those of Sobolev class $H^1$) with values in a compact connected Lie group, as well as their central extensions, satisfy an amenability-like property: they admit a left-invariant mean on the space of bounded functions uniformly continuous with regard to a left-invariant metric. Every strongly continuous unitary representation $\pi$ of such a group (which we call skew-amenable) has a conjugation-invariant state on $B({\mathcal H}_{\pi})$. | mathematics |
We study non-equilibrium dynamics of integrable and non-integrable closed quantum systems whose unitary evolution is interrupted with stochastic resets, characterized by a reset rate $r$, that project the system to its initial state. We show that the steady state density matrix of a non-integrable system, averaged over the reset distribution, retains its off-diagonal elements for any finite $r$. Consequently a generic observable $\hat O$, whose expectation value receives contribution from these off-diagonal elements, never thermalizes under such dynamics for any finite $r$. We demonstrate this phenomenon by exact numerical studies of experimentally realizable models of ultracold bosonic atoms in a tilted optical lattice. For integrable Dirac-like fermionic models driven periodically between such resets, the reset-averaged steady state is found to be described by a family of generalized Gibbs ensembles (GGEs) characterized by $r$. We also study the spread of particle density of a non-interacting one-dimensional fermionic chain, starting from an initial state where all fermions occupy the left half of the sample, while the right half is empty. When driven by resetting dynamics, the density profile approaches, at long times, a nonequilibrium stationary profile that we compute exactly. We suggest concrete experiments that can possibly test our theory. | condensed matter |
We present for the first time complete next-to-next-to-leading-order coefficient functions to match flavor non-singlet quark correlation functions in position space, which are calculable in lattice QCD, to parton distribution functions (PDFs). Using PDFs extracted from experimental data and our calculated matching coefficients, we predict valence-quark correlation functions that can be confronted by lattice QCD calculations. The uncertainty of our predictions is greatly reduced with higher order matching coefficients. By performing a Fourier transformation, we also obtain matching coefficients for the corresponding quasi-PDFs and pseudo-PDFs. Our method of calculation can be readily generalized to evaluate the matching coefficients for sea-quark and gluon correlation functions, putting the program of extracting the partonic structure of hadrons from lattice QCD calculations on a footing comparable with, and complementary to, that from experimental measurements. | high energy physics phenomenology |
The formation, properties and lifetime of secondary organic aerosols in the atmosphere are largely determined by gas-particle partitioning coefficients of the participating organic vapours. Since these coefficients are often difficult to measure or compute, we developed a machine learning (ML) model to predict them given molecular structure as input. Our data-driven approach is based on the dataset by Wang et al. (Atmos. Chem. Phys., 17, 7529 (2017)), who computed the partitioning coefficients and saturation vapour pressures of 3414 atmospheric oxidation products from the master chemical mechanism using the COSMOtherm program. We train a kernel ridge regression (KRR) ML model on the saturation vapour pressure ($P_{sat}$), and on two equilibrium partitioning coefficients: between a water-insoluble organic matter phase and the gas phase ($K_{WIOM/G}$), and between an infinitely dilute solution with pure water and the gas phase ($K_{W/G}$). For the input representation of the atomic structure of each organic molecule to the machine, we test different descriptors. Our best ML model predicts $P_{sat}$ and $K_{WIOM/G}$ to within 0.3 and $K_{W/G}$ to within 0.4 logarithmic units of the original COSMOtherm calculations. This is equal or better than the typical accuracy of COSMOtherm predictions compared to experimental data. We then apply our ML model to a dataset of 35,383 molecules that we generated based on a carbon 10 backbone and functionalized with 0 to 6 carboxyl, carbonyl or hydroxyl groups to evaluate its performance for polyfunctional compounds with potentially low $P_{sat}$. The resulting $P_{sat}$ and partitioning coefficient distributions were physico-chemically reasonable, and the volatility predictions for the most highly oxidized compounds were in qualitative agreement with experimentally inferred volatilities of atmospheric oxidation products with similar elemental composition. | physics |
A number of recent experimental studies have shown that solid-state complex organic molecules (COMs) can form under conditions that are relevant to the CO freeze-out stage in dense clouds. In this work, we show that alcohols can be formed well before the CO freeze-out stage (i.e., in the H2O-rich ice phase). This joint experimental and computational investigation shows that the isomers, n- and i-propanol (H3CCH2CH2OH and H3CCHOHCH3) and n- and i-propenol (H3CCHCHOH and H3CCOHCH2), can be formed in radical-addition reactions starting from propyne (H3CCCH) + OH at the low temperature of 10 K, where H3CCCH is one of the simplest representatives of stable carbon chains already identified in the interstellar medium (ISM). The resulting average abundance ratio of 1:1 for n-propanol:i-propanol is aligned with the conclusions from the computational work that the geometric orientation of strongly interacting species is influential to the extent of which 'mechanism' is participating, and that an assortment of geometries leads to an averaged-out effect. Three isomers of propanediol are also tentatively identified in the experiments. It is also shown that propene and propane (H3CCHCH2 and H3CCH2CH3) are formed from the hydrogenation of H3CCCH. Computationally-derived activation barriers give additional insight into what types of reactions and mechanisms are more likely to occur in the laboratory and in the ISM. Our findings not only suggest that the alcohols studied here share common chemical pathways and therefore can show up simultaneously in astronomical surveys, but also that their extended counterparts that derive from polyynes containing H3C(CC)nH structures may exist in the ISM. Such larger species, like fatty alcohols, are the possible constituents of simple lipids that primitive cell membranes on the early Earth are thought to be partially composed of. | astrophysics |
The effective-Lagrangian description of Lorentz-invariance violation provided by the so-called Standard-Model Extension covers all the sectors of the Standard Model, allowing for model-independent studies of high-energy phenomena that might leave traces at relatively-low energies. In this context, the quantification of the large set of parameters characterizing Lorentz-violating effects is well motivated. In the present work, effects from the Lorentz-nonconserving Yukawa sector on the electromagnetic moments of charged leptons are calculated, estimated, and discussed. Following a perturbative approach, explicit expressions of leading contributions are derived and upper bounds on Lorentz violation are estimated from current data on electromagnetic moments. Scenarios regarding the coefficients of Lorentz violation are considered. In a scenario of two-point insertions preserving lepton flavor, the bound on the electron electric dipole moment yields limits as stringent as $10^{-28}$, whereas muon and tau-lepton electromagnetic moments determine bounds as restrictive as $10^{-14}$ and $10^{-6}$, respectively. Another scenario, defined by the assumption that Lorentz-violating Yukawa couplings are Hermitian, leads to less stringent bounds, provided by the muon anomalous magnetic moment, which turn out to be as restrictive as $10^{-14}$. | high energy physics phenomenology |
The penetration of dendrites in ceramic lithium conductors severely constrains the development of solid-state batteries (SSBs), while its nanoscopic origin remains unelucidated. We develop an in-situ nanoscale electrochemical characterization technique to reveal the nanoscopic lithium dendrite growth kinetics and use it as a guiding tool to unlock the design of interfaces for dendrite-proof SSBs. Using Li7La3Zr2O12 (LLZO) as a model system, in-situ nanoscopic dendrite triggering measurements, ex-situ electro-mechanical characterizations, and finite element simulations are carried out, which reveal the dominating role of Li+ flux detouring and nano-mechanical inhomogeneity in dendrite penetration. To mitigate such nano-inhomogeneity, an ionic-conductive homogenizing layer based on poly(propylene carbonate) is designed, which in-situ reacts with lithium to form a highly conformal interphase at mild conditions. A high critical current density of 1.8 mA cm-2 and a low interfacial resistance of 14 {\Omega} cm2 are achieved. Practical SSBs based on LiFePO4 cathodes show great cyclic stability without capacity decay over 300 cycles. Beyond this, highly reversible electrochemical dendrite healing behavior in LLZO is discovered using the nano-electrode, based on which a model memristor with a high on/off ratio of ~10^5 is demonstrated for >200 cycles. This work not only provides a novel tool to investigate and design interfaces in SSBs but also offers new opportunities for solid electrolytes beyond energy applications. | physics |
Generalization is one of the most important issues in machine learning problems. In this study, we consider generalization in restricted Boltzmann machines (RBMs). We propose an RBM with multivalued hidden variables, which is a simple extension of conventional RBMs. We demonstrate that the proposed model is better than the conventional model via numerical experiments for contrastive divergence learning with artificial data and a classification problem with MNIST. | statistics |
In assessing prediction accuracy of multivariable prediction models, optimism corrections are essential for preventing biased results. However, in most published papers of clinical prediction models, the point estimates of the prediction accuracy measures are corrected by adequate bootstrap-based correction methods, but their confidence intervals are not corrected, e.g., DeLong's confidence interval is usually used for assessing the C-statistic. These naive methods do not adjust for the optimism bias and do not account for statistical variability in the estimation of parameters in the prediction models. Therefore, their coverage probabilities of the true value of the prediction accuracy measure can be seriously below the nominal level (e.g., 95%). In this article, we provide two generic bootstrap methods, namely (1) location-shifted bootstrap confidence intervals and (2) two-stage bootstrap confidence intervals, that can be generally applied to the bootstrap-based optimism correction methods, i.e., the Harrell's bias correction, 0.632, and 0.632+ methods. In addition, they can be widely applied to various methods for prediction model development involving modern shrinkage methods such as the ridge and lasso regressions. Through numerical evaluations by simulations, the proposed confidence intervals showed favourable coverage performances. Besides, the current standard practices based on the optimism-uncorrected methods showed serious undercoverage properties. To avoid erroneous results, the optimism-uncorrected confidence intervals should not be used in practice, and the adjusted methods are recommended instead. We also developed the R package predboot for implementing these methods (https://github.com/nomahi/predboot). The effectiveness of the proposed methods is illustrated via applications to the GUSTO-I clinical trial. | statistics |
The gravitational wave event GW170817 and the slowly-rising afterglows of short gamma-ray burst GRB 170817A clearly suggest that the GRB jet has an angular structure. However the actual jet structure remains unclear as different authors give different structures. We formulate a novel method to inversely reconstruct the jet structure from off-axis GRB afterglows, without assuming any functional form of the structure in contrast to the previous studies. The jet structure is uniquely determined from the rising part of a light curve for a given parameter set by integrating an ordinary differential equation, which is derived from the standard theory of GRB afterglows. Applying to GRB 170817A, we discover that a non-trivial hollow-cone jet is consistent with the observed afterglows, as well as Gaussian and power-law jets within errors, which implies the Blandford-Znajek mechanism or an ejecta-jet interaction. The current observations only constrain the jet core, not in principle the outer jet structure around the line of sight. More precise and high-cadence observations with our inversion method will fix the jet structure, providing a clue to the jet formation and propagation. | astrophysics |
Experimental realization of a universal set of quantum logic gates with high fidelity is critical to quantum information processing, and it is always challenged by the inevitable interaction between the quantum system and its environment. Geometric quantum computation is noise immune, and thus offers a robust way to enhance the control fidelity. Here, we experimentally implement the recently proposed extensible nonadiabatic holonomic quantum computation with solid spins in diamond at room temperature, which maintains both flexibility and resilience against decoherence and system control errors. Compared with previous geometric methods, the fidelities of a universal set of holonomic single-qubit and two-qubit quantum logic gates are improved in experiment. Therefore, this work makes an important step towards fault-tolerant scalable geometric quantum computation in realistic systems. | quantum physics |
Superconducting granular aluminum is attracting increasing interest due to its high kinetic inductance and low dissipation, favoring its use in kinetic inductance particle detectors, superconducting resonators or quantum bits. We perform switching current measurements on DC-SQUIDs, obtained by introducing two identical geometric constrictions in granular aluminum rings of various normal-state resistivities in the range from $\rho_\mathrm{n} = 250\,\mu\Omega\mathrm{cm}$ to $5550\,\mu\Omega\mathrm{cm}$. The relative high kinetic inductance of the SQUID loop, in the range of tens of nH, leads to a suppression of the modulation in the measured switching current versus magnetic flux, accompanied by a distortion towards a triangular shape. We observe a change in the temperature dependence of the switching current histograms with increasing normal-state film resistivity. This behavior suggests the onset of a diffusive motion of the superconducting phase across the constrictions in the two-dimensional washboard potential of the SQUIDs, which could be caused by a change of the local electromagnetic environment of films with increasing normal-state resistivities. | condensed matter |
We report quasi-simultaneous GMRT observations of seven extragalactic radio sources at 150, 325, 610 and 1400 MHz, in an attempt to accurately define their radio continuum spectra, particularly at frequencies below the observed spectral turnover. We had previously identified these sources as candidates for a sharply inverted integrated radio spectrum whose slope is close to, or even exceeds $\alpha_c$ = +2.5, the theoretical limit due to synchrotron self-absorption (SSA) in a source of incoherent synchrotron radiation arising from relativistic particles with the canonical (i.e., power-law) energy distribution. We find that four out of the seven candidates have an inverted radio spectrum with a slope close to or exceeding +2.0, while the critical spectral slope $\alpha_c$ is exceeded in at least one case. These sources, together with another one or two reported in very recent literature, may well be the archetypes of an extremely rare class, from the standpoint of violation of the SSA limit in compact extragalactic radio sources. However, the alternative possibility that free-free absorption is responsible for their ultra-sharp spectral turnover cannot yet be discounted. | astrophysics |
Phosphorene, a monolayer of black phosphorus (BP), is an elemental two-dimensional material with interesting physical properties, such as high charge carrier mobility and exotic anisotropic in-plane properties. To fundamentally understand these various physical properties, it is critically important to conduct an atomic-scale structural investigation of phosphorene, particularly regarding various defects and preferred edge configurations. However, it has been challenging to investigate mono- and few-layer phosphorene because of technical difficulties arising in the preparation of a high-quality sample and damages induced during the characterization process. Here, we successfully fabricate high-quality monolayer phosphorene using a controlled thinning process with transmission electron microscopy, and subsequently perform atomic-resolution imaging. Graphene protection suppresses the e-beam-induced damage to multi-layer BP and one-side graphene protection facilitates the layer-by-layer thinning of the samples, rendering high-quality monolayer and bilayer regions. We also observe the formation of atomic-scale crystalline edges predominantly aligned along the zigzag and (101) terminations, which is originated from edge kinetics under e-beam-induced sputtering process. Our study demonstrates a new method to image and precisely manipulate the thickness and edge configurations of air-sensitive two-dimensional materials. | condensed matter |
We aim at developing and improving imbalanced business risk modeling by jointly using proper evaluation criteria, resampling, cross-validation, classifier regularization, and ensembling techniques. The Area Under the Receiver Operating Characteristic Curve (AUC of ROC) is used for model comparison based on 10-fold cross-validation. Two undersampling strategies, including random undersampling (RUS) and cluster centroid undersampling (CCUS), as well as two oversampling methods, including random oversampling (ROS) and the Synthetic Minority Oversampling Technique (SMOTE), are applied. Three highly interpretable classifiers, including logistic regression without regularization (LR), L1-regularized LR (L1LR), and decision tree (DT), are implemented. Two ensembling techniques, including Bagging and Boosting, are applied to the DT classifier for further model improvement. The results show that Boosting on DT using the oversampled data containing 50% positives via SMOTE is the optimal model, achieving an AUC, recall, and F1 score of 0.8633, 0.9260, and 0.8907, respectively. | statistics |
Differential Evolution (DE) is quite powerful for real-parameter single-objective optimization. However, the ability to extend or change the search area after falling into a local optimum still needs to be developed in DE to accommodate extremely complicated fitness landscapes with a huge number of local optima. We propose a new flow of DE, termed DE with individuals redistribution, in which a process of individuals redistribution is invoked when progress on fitness remains low for several generations. In such a process, mutation and crossover are standardized, while all trial vectors are kept in selection. Once diversity exceeds a predetermined threshold, our opposition replacement is executed, and then the algorithm behavior returns to its original mode. In our experiments based on two benchmark test suites, we apply individuals redistribution in ten DE algorithms. Versions of the ten DE algorithms based on individuals redistribution are compared with not only the original versions but also versions based on complete restart, where individuals redistribution and complete restart use the same entry criterion. Experimental results indicate that, for most of the DE algorithms, the version based on individuals redistribution performs better than both the original version and the version based on complete restart. | computer science |
In this work, we analyze the creation of the discharge asymmetry and the concomitant formation of the DC self-bias voltage in capacitively coupled radio frequency plasmas driven by multi-frequency waveforms, as a function of the electrode surface characteristics. For the latter, we consider and vary the coefficients that characterize the elastic reflection of electrons from the surfaces and the ion-induced secondary electron yield. Our investigations are based on Particle-in-Cell/Monte Carlo Collision simulations of the plasma and on a model that aids the understanding of the computational results. Electron reflection from the electrodes is found to affect the discharge asymmetry only slightly in the presence of multi-frequency excitation, whereas secondary electrons cause distinct changes to the asymmetry of the plasma as a function of the phase angle between the harmonics of the driving voltage waveform and as a function of the number of these harmonics. | physics |
Propensity score weighting is an important tool for causal inference and comparative effectiveness research. Besides the inverse probability of treatment weights (IPW), recent development has introduced a general class of balancing weights, corresponding to alternative target populations and estimands. In particular, the overlap weights (OW) lead to optimal covariate balance and estimation efficiency, and a target population of scientific and policy interest. We develop the R package PSweight to provide a comprehensive design and analysis platform for causal inference based on propensity score weighting. PSweight supports (i) a variety of balancing weights, including OW, IPW, matching weights as well as optimal trimming, (ii) binary and multiple treatments, (iii) simple and augmented (double-robust) weighting estimators, (iv) nuisance-adjusted sandwich variances, and (v) ratio estimands for binary and count outcomes. PSweight also provides diagnostic tables and graphs for study design and covariate balance assessment. In addition, PSweight allows for propensity scores and outcome models to be estimated externally by the users. We demonstrate the functionality of the package using a data example from the National Child Development Survey (NCDS), where we evaluate the causal effect of educational attainment on income. | statistics |
The influence of hydrogen on dislocation junctions was analysed by incorporating a hydrogen dependent core force into nodal based discrete dislocation dynamics. Hydrogen reduces the core energy of dislocations, which reduces the magnitude of the dislocation core force. We refer to this as hydrogen core force shielding, as it is analogous to hydrogen elastic shielding but occurs at much lower hydrogen concentrations. The dislocation core energy change due to hydrogen was calibrated at the atomic scale accounting for the nonlinear inter-atomic interactions at the dislocation core, giving the model a sound physical basis. Hydrogen was found to strengthen binary junctions and promote the nucleation of dislocations from triple junctions. Simulations of microcantilever bend tests with hydrogen core force shielding showed an increase in the junction density and subsequent hardening. These simulations were performed at a small hydrogen concentration realistic for bcc iron. | condensed matter |
We present the completion of a data analysis pipeline that self-consistently separates global 21-cm signals from large systematics using a pattern recognition technique. In the first paper of this series, we obtain optimal basis vectors from signal and foreground training sets to linearly fit both components with the minimal number of terms that best extracts the signal given its overlap with the foreground. In this second paper, we utilize the spectral constraints derived in the first paper to calculate the full posterior probability distribution of any signal parameter space of choice. The spectral fit provides the starting point for a Markov Chain Monte Carlo (MCMC) engine that samples the signal without traversing the foreground parameter space. At each MCMC step, we marginalize over the weights of all linear foreground modes and suppress those with unimportant variations by applying priors gleaned from the training set. This method drastically reduces the number of MCMC parameters, augmenting the efficiency of exploration, circumvents the need for selecting a minimal number of foreground modes, and allows the complexity of the foreground model to be greatly increased to simultaneously describe many observed spectra without requiring extra MCMC parameters. Using two nonlinear signal models, one based on EDGES observations and the other on phenomenological frequencies and temperatures of theoretically expected extrema, we demonstrate the success of this methodology by recovering the input parameters from multiple randomly simulated signals at low radio frequencies (10-200 MHz), while rigorously accounting for realistically modeled beam-weighted foregrounds. | astrophysics |
We study online learning for optimal allocation when the resource to be allocated is time. Examples of possible applications include a driver filling a day with rides, a landlord renting an estate, etc. Following our initial motivation, a driver receives ride proposals sequentially according to a Poisson process and can either accept or reject a proposed ride. If she accepts the proposal, she is busy for the duration of the ride and obtains a reward that depends on the ride duration. If she rejects it, she remains on hold until a new ride proposal arrives. We study the regret incurred by the driver first when she knows her reward function but does not know the distribution of the ride duration, and then when she does not know her reward function, either. Faster rates are finally obtained by adding structural assumptions on the distribution of rides or on the reward function. This natural setting bears similarities with contextual (one-armed) bandits, but with the crucial difference that the normalized reward associated to a context depends on the whole distribution of contexts. | statistics |
Quantum computers use quantum resources to carry out computational tasks and may outperform classical computers in solving certain computational problems. Special-purpose quantum computers such as quantum annealers employ quantum adiabatic theorem to solve combinatorial optimization problems. In this paper, we compare classical annealings such as simulated annealing and quantum annealings that are done by the D-Wave machines both theoretically and numerically. We show that if the classical and quantum annealing are characterized by equivalent Ising models, then solving an optimization problem, i.e., finding the minimal energy of each Ising model, by the two annealing procedures, are mathematically identical. For quantum annealing, we also derive the probability lower-bound on successfully solving an optimization problem by measuring the system at the end of the annealing procedure. Moreover, we present the Markov chain Monte Carlo (MCMC) method to realize quantum annealing by classical computers and investigate its statistical properties. In the numerical section, we discuss the discrepancies between the MCMC based annealing approaches and the quantum annealing approach in solving optimization problems. | statistics |
The ability to learn new concepts with small amounts of data is a crucial aspect of intelligence that has proven challenging for deep learning methods. Meta-learning for few-shot learning offers a potential solution to this problem: by learning to learn across data from many previous tasks, few-shot learning algorithms can discover the structure among tasks to enable fast learning of new tasks. However, a critical challenge in few-shot learning is task ambiguity: even when a powerful prior can be meta-learned from a large number of prior tasks, a small dataset for a new task can simply be too ambiguous to acquire a single model for that task. Bayesian meta-learning models can naturally resolve this problem by imposing a sophisticated prior distribution and letting the posterior be well regularized through Bayesian decision theory. However, currently known Bayesian meta-learning procedures such as VERSA suffer from the so-called {\it information preference problem}, that is, the posterior distribution degenerates to one point and is far from the exact one. To address this challenge, we design a novel meta-regularization objective using a {\it cyclical annealing schedule} and the {\it maximum mean discrepancy} (MMD) criterion. The cyclical annealing schedule is quite effective at avoiding such degenerate solutions. This procedure includes a difficult KL-divergence estimation, but we resolve the issue by employing MMD instead of KL-divergence. The experimental results show that our approach substantially outperforms standard meta-learning algorithms. | statistics |
Artificial Intelligence (AI) promises to fundamentally transform society but faces multiple challenges in doing so. In particular, state-of-the-art neuromorphic devices used to implement AI typically lack processes like neuromodulation and neural oscillations that are critical for enabling many advanced cognitive abilities shown by the brain. Here, we utilize smart materials, that adapt their structure and properties in response to external stimuli, to emulate the modulatory behaviour of neurons called neuromodulation. Leveraging these materials, we have designed and simulated the dynamics of a self-adaptive artificial neuron, which comprises five magnetic skyrmions hosted in a bilayer of thulium iron garnet (TmIG) and platinum (Pt). Micromagnetic simulations show that both the amplitudes and frequencies of neuronal dynamics can be modified by reconfiguring the skyrmion lattice, thereby actualizing neuromodulation. Further, we demonstrate that this neuron achieves a significant advancement over state-of-the-art by realizing the advanced cognitive abilities of context-awareness, cross-frequency coupling as well as information fusion, while utilizing ultra-low power and being ultra-compact. Building advanced cognition into AI can fundamentally transform a wide array of fields including personalized medicine, neuro-prosthesis, human-machine interaction and help realize the next-generation of context-aware AI. | physics |
A first step when fitting multilevel models to continuous responses is to explore the degree of clustering in the data. Researchers fit variance-component models and then report the proportion of variation in the response that is due to systematic differences between clusters. Equally they report the response correlation between units within a cluster. These statistics are popularly referred to as variance partition coefficients (VPCs) and intraclass correlation coefficients (ICCs). When fitting multilevel models to categorical (binary, ordinal, or nominal) and count responses, these statistics prove more challenging to calculate. For categorical response models, researchers appeal to their latent response formulations and report VPCs/ICCs in terms of latent continuous responses envisaged to underly the observed categorical responses. For standard count response models, however, there are no corresponding latent response formulations. More generally, there is a paucity of guidance on how to partition the variation. As a result, applied researchers are likely to avoid or inadequately report and discuss the substantive importance of clustering and cluster effects in their studies. A recent article drew attention to a little-known exact algebraic expression for the VPC/ICC for the special case of the two-level random-intercept Poisson model. In this article, we make a substantial new contribution. First, we derive exact VPC/ICC expressions for more flexible negative binomial models that allows for overdispersion, a phenomenon which often occurs in practice. Then we derive exact VPC/ICC expressions for three-level and random-coefficient extensions to these models. We illustrate our work with an application to student absenteeism. | statistics |
Recently a novel hadronic state of mass 6.9 GeV, that decays mainly to a pair of charmonia, was observed in LHCb. The data also reveals a broader structure centered around 6490 MeV and suggests another unconfirmed resonance centered at around 7240 MeV, very near to the threshold of two doubly charmed $\Xi_{cc}$ baryons. We argue in this note that these exotic hadrons are genuine tetraquarks and not molecules of charmonia. It is conjectured that they are V-baryonium tetraquarks, namely, have an inner structure of a baryonic vertex with a $cc$ diquark attached to it, which is connected by a string to an anti-baryonic vertex with a $\bar c \bar c$ anti-diquark. We examine these states as the analogs of the states $\Psi(4360)$ and $Y(4630)$/$\Psi(4660)$ which are charmonium-like tetraquarks. One way to test these claims is by searching for a significant decay of the state at 7.2 GeV into $\Xi_{cc}\overline\Xi_{cc}$. Such a decay would be the analog of the decay of the state $Y(4630)$ into to $\Lambda_c\overline\Lambda_c$. We further argue that there should be trajectories of both orbital and radial excited states of the $X(6900)$. We predict their masses. It is possible that a few of these states have already been seen by LHCb. | high energy physics phenomenology |
The newly discovered 16.35-day period for the repeating FRB 180916.J0158+65 provides an essential clue for understanding the sources and emission mechanism of repeating FRBs. Many models propose that periodically repeating FRBs might be related to binary star systems that contain at least one neutron star (NSC-FRB system). It has been suggested that an NS "combed" by the strong wind from a companion star might provide a solution. Following the "Binary Comb" model, we use the population synthesis method to study in detail the properties of the companion stars and the nature of NSC-FRB systems. Our main findings are: 1) the companion star is most likely to be a B-type star; 2) the 16-day period of FRB 180916 happens to fall in the most probable period range, which may explain why FRB 180916 was the first detected periodically repeating FRB, and we expect to observe more periodically repeating FRBs with periods around 10-30 days; 3) the birth rate of NSC-FRB systems is large enough to fulfill the event rate requirement set by the observation of FRB 180916, which supports the proposal that NSC-FRBs can provide one significant channel for producing periodically repeating FRBs. | astrophysics |
Topological quantum computation based on anyons is a promising approach to achieve fault-tolerant quantum computing. The Majorana zero modes in the Kitaev chain are an example of non-Abelian anyons where braiding operations can be used to perform quantum gates. Here we perform a quantum simulation of topological quantum computing, by teleporting a qubit encoded in the Majorana zero modes of a Kitaev chain. The quantum simulation is performed by mapping the Kitaev chain to its equivalent spin version, and realizing the ground states in a superconducting quantum processor. The teleportation transfers the quantum state encoded in the spin-mapped version of the Majorana zero mode states between two Kitaev chains. The teleportation circuit is realized using only braiding operations, and can be achieved despite being restricted to Clifford gates for the Ising anyons. The Majorana encoding is a quantum error detecting code for phase flip errors, which is used to improve the average fidelity of the teleportation for six distinct states from $70.76 \pm 0.35 \% $ to $84.60 \pm 0.11 \%$, well beyond the classical bound in either case. | quantum physics |
Like all computerized systems, ballot-marking devices (BMDs) can be hacked, misprogrammed, and misconfigured. Several approaches to testing BMDs have been proposed. In _logic and accuracy_ (_L&A_) tests, trusted agents input known test patterns into the BMD and check whether the printout matches. In _parallel_ or _live_ testing, agents use the BMDs on election day, emulating voters. In _passive_ testing, agents monitor the rate at which voters "spoil" ballots and request another opportunity to mark a ballot: an anomalously high rate might result from BMD malfunctions. In practice, none of these methods can protect against outcome-altering problems. L&A testing is ineffective in part because BMDs "know" the time and date of the test and the election. Neither L&A nor parallel testing can probe even a small fraction of the possible voting transactions that could comprise enough votes to change outcomes. Under mild assumptions, to develop a model of voter interactions with BMDs accurate enough to ensure that parallel tests could reliably detect changes to 5% of the votes (which could change margins by 10% or more) would require monitoring the behavior of more than a million voters in each jurisdiction in minute detail---but the median turnout by jurisdiction in the U.S. is under 3000 voters. Given an accurate model of voter behavior, the number of tests required is still larger than the turnout in a typical U.S. jurisdiction. Under optimistic assumptions, passive testing that has a 99% chance of detecting a 1% change to the margin with a 1% false alarm rate is impossible in jurisdictions with fewer than about 1 million voters, even if the "normal" spoiled ballot rate were known exactly and did not vary from election to election and place to place. | statistics |
We give an explicit formula for the arithmetic intersection number of CM cycles on Lubin-Tate spaces for all levels. We prove our formula by formulating the intersection number on the infinite level. Our CM cycles are constructed by choosing two separable quadratic extensions $K_1,K_2/F$ of a non-Archimedean local field $F$. Our formula works in all cases: $K_1$ and $K_2$ can be either the same or different, ramified or unramified. As applications, this formula translates the linear Arithmetic Fundamental Lemma (linear AFL) into a comparison of integrals. This formula can also be used to recover Gross and Keating's result on lifting endomorphisms of formal modules. | mathematics
This article proposes a Bayesian approach to estimating the spectral density of a stationary time series using a prior based on a mixture of P-spline distributions. Our proposal is motivated by the B-spline Dirichlet process prior of Edwards et al. (2019) in combination with Whittle's likelihood and aims at reducing the high computational complexity of its posterior computations. The strength of the B-spline Dirichlet process prior over the Bernstein-Dirichlet process prior of Choudhuri et al. (2004) lies in its ability to estimate spectral densities with sharp peaks and abrupt changes due to the flexibility of B-splines with variable number and location of knots. Here, we suggest to use P-splines of Eilers and Marx (1996) that combine a B-spline basis with a discrete penalty on the basis coefficients. In addition to equidistant knots, a novel strategy for a more expedient placement of knots is proposed that makes use of the information provided by the periodogram about the steepness of the spectral power distribution. We demonstrate in a simulation study and two real case studies that this approach retains the flexibility of the B-splines, achieves similar ability to accurately estimate peaks due to the new data-driven knot allocation scheme but significantly reduces the computational costs. | statistics |
The early third data release (EDR3) of the European Space Agency satellite Gaia provides coordinates, parallaxes, and proper motions for ~1.47 billion sources in our Milky Way, based on 34 months of observations. The combination of Gaia DR2 radial velocities with the more precise and accurate astrometry provided by Gaia EDR3 makes this the best dataset available to search for the fastest nearby stars in our Galaxy. We compute the velocity distribution of ~7 million stars with precise parallaxes, to investigate the high-velocity tail of the velocity distribution of stars in the Milky Way. We release a catalogue with distances, total velocities, and corresponding uncertainties for all the stars considered in our analysis, available at https://sites.google.com/view/tmarchetti/research . By applying quality cuts on the Gaia astrometry and radial velocities, we identify a clean subset of 94 stars with a probability Pub > 50% of being unbound from our Galaxy. 17 of these have Pub > 80% and are our best candidates. We propagate these stars in the Galactic potential to characterize their orbits. We find that 11 stars are consistent with being ejected from the Galactic disk, and are possible hyper-runaway star candidates. The other 6 stars are not consistent with coming from a known star-forming region. We investigate the effect of adopting a parallax zero point correction, which strongly impacts our results: when applying this correction, we identify only 12 stars with Pub > 50%, 3 of these having Pub > 80%. Spectroscopic follow-ups with ground-based telescopes are needed to confirm the candidates identified in this work. | astrophysics
Legged locomotion is a challenging task in the field of robotics but a rather simple one in nature. This motivates the use of biological methodologies as solutions to this problem. Central pattern generators are neural networks that are thought to be responsible for locomotion in humans and some animal species. As for robotics, many attempts were made to reproduce such systems and use them for a similar goal. One interesting design model is based on spiking neural networks. This model is the main focus of this work, as its contribution is not limited to engineering but is also applicable to neuroscience. This paper introduces a new general framework for building central pattern generators that are task-independent, biologically plausible, and rely on learning methods. The abilities and properties of the presented approach are evaluated not only in simulation but also in a robotic experiment. The results are very promising as the used robot was able to perform stable walking at different speeds and to change speed within the same gait cycle. | computer science
Let $G$ be a group and $k$ be a commutative ring. Our aim is to ameliorate the $G$-graded categorical structures considered by Turaev and Virelizier by fitting them into the monoidal bicategory context. We explain how these structures are monoidales in the monoidal centre of the monoidal bicategory of $k$-linear categories on which $G$ acts. This provides a useful example of a higher version of Davydov's full centre of an algebra. | mathematics |
Let n be either 2, or an odd integer greater than 1, and fix a prime p > 2(n + 1). Under standard "adequate image" assumptions, we show that the set of components of n-dimensional p-adic potentially semistable local Galois deformation rings that are seen by potentially automorphic compatible systems of polarizable Galois representations over some CM field is independent of the particular global situation. We also (under the same assumption on n) improve on the main potential automorphy result of [BLGGT14b], replacing "potentially diagonalizable" by "potentially globally realizable". | mathematics |
We consider ride-sharing networks served by human-driven vehicles (HVs) and autonomous vehicles (AVs). We propose a model for ride-sharing in this mixed autonomy setting for a multi-location network in which a ride-sharing platform sets prices for riders, compensations for drivers of HVs, and operates AVs for a fixed price with the goal of maximizing profits. When there are more vehicles than riders at a location, we consider three vehicle-to-rider assignment possibilities: rides are assigned to HVs first; rides are assigned to AVs first; rides are assigned in proportion to the number of available HVs and AVs. Next, for each of these priority possibilities, we establish a nonconvex optimization problem characterizing the optimal profits for a network operating at a steady-state equilibrium. We then provide a convex problem which we show to have the same optimal profits, allowing for efficient computation of equilibria, and we show that all three priority possibilities result in the same maximum profits for the platform. Next, we show that, in some cases, there is a regime for which the platform will choose to mix HVs and AVs in order to maximize its profit, while in other cases, the platform will use only HVs or only AVs, depending on the relative cost of AVs. For a specific class of networks, we fully characterize these thresholds analytically and demonstrate our results on an example. | electrical engineering and systems science |
The absolute separation of a polynomial is the minimum nonzero difference between the absolute values of its roots. In the case of polynomials with integer coefficients, it can be bounded from below in terms of the degree and the height (the maximum absolute value of the coefficients) of the polynomial. We improve the known bounds for this problem and related ones. Then we report on extensive experiments in low degrees, suggesting that the current bounds are still very pessimistic. | mathematics |
In the present work, we investigate the production mechanism of the $Z_b(10610)$ and $Z_b(10650)$ states from the $\Upsilon(5S,6S)$ decays. Two types of bottom-meson loops are discussed. We show that the loop contributions with all intermediate states being the $S$-wave ground state bottom mesons are negligible, while the loops with one bottom meson being the broad $B_0^\ast$ or $B_1^\prime$ resonance could provide the dominant contributions to the $\Upsilon(5S) \to Z_b^{(\prime)} \pi$. It is found that such a mechanism is not suppressed by the large width of the $B_0/B_1'$ resonance. In addition, we also estimate the branching ratios for the $\Upsilon(6S) \to Z_b^{(\prime)} \pi$ which could be tested by future precise measurements at Belle-II. | high energy physics phenomenology |
List of contributions from the Cherenkov Telescope Array (CTA) Consortium presented at the 36th International Cosmic Ray Conference, July 24 - August 1, 2019, Madison, WI, USA. | astrophysics
We connect two key concepts in quantum information: compatibility and divisibility of quantum channels. Two channels are compatible if they can be both obtained via marginalization from a third channel. A channel divides another channel if it reproduces its action by sequential composition with a third channel. (In)compatibility is of central importance for studying the difference between classical and quantum dynamics. The relevance of divisibility stands in its close relationship with the onset of Markovianity. We emphasize the simulability character of compatibility and divisibility, and, despite their structural difference, we find a set of channels -- self-degradable channels -- for which the two notions coincide. We also show that, for degradable channels, compatibility implies divisibility, and that, for anti-degradable channels, divisibility implies compatibility. These results motivate further research on these classes of channels and shed new light on the meaning of these two largely studied notions. | quantum physics |
Monitoring mobility- and industry-relevant events is important in areas such as personal travel planning and supply chain management, but extracting events pertaining to specific companies, transit routes and locations from heterogeneous, high-volume text streams remains a significant challenge. This work describes a corpus of German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. It has also been annotated with a set of 15 traffic- and industry-related n-ary relations and events, such as accidents, traffic jams, acquisitions, and strikes. The corpus consists of newswire texts, Twitter messages, and traffic reports from radio stations, police and railway companies. It allows for training and evaluating both named entity recognition algorithms that aim for fine-grained typing of geo-entities, as well as n-ary relation extraction systems. | computer science |
Novel sparse reconstruction algorithms are proposed for beamspace channel estimation in massive multiple-input multiple-output systems. The proposed algorithms minimize a least-squares objective having a nonconvex regularizer. This regularizer removes the penalties on a few large-magnitude elements from the conventional l1-norm regularizer, and thus it only forces penalties on the remaining elements that are expected to be zeros. Accurate and fast reconstructions can be achieved by performing gradient projection updates within the framework of difference of convex functions (DC) programming. A double-loop algorithm and a single-loop algorithm are proposed via different DC decompositions, and these two algorithms have distinct computation complexities and convergence rates. Then, an extension algorithm is further proposed by designing the step sizes of the single-loop algorithm. The extension algorithm has a faster convergence rate and can achieve approximately the same level of accuracy as the proposed double-loop algorithm. Numerical results show significant advantages of the proposed algorithms over existing reconstruction algorithms in terms of reconstruction accuracies and runtimes. Compared to the benchmark channel estimation techniques, the proposed algorithms also achieve smaller mean squared error and higher achievable spectral efficiency. | computer science |
Ferroelectric topological objects (e.g. vortices, skyrmions) provide a fertile ground for exploring emerging physical properties that could potentially be utilized in future configurable nanoelectronic devices. Here, we demonstrate quasi-one-dimensional metallic high conduction channels along two types of exotic topological defects, i.e. the topological cores of (i) a quadrant vortex domain structure and (ii) a center domain (monopole-like) structure confined in a high-quality BiFeO3 nanoisland array, abbreviated as the vortex core and the center core. We unveil via phase-field simulations that the superfine (< 3 nm) metallic conduction channels along center cores arise from the screening charge carriers confined at the core whereas the high conductance of vortex cores results from a field-induced twisted state. These conducting channels can be repeatedly and reversibly created and deleted by manipulating the two topological states via an electric field, leading to an apparent electroresistance effect with an on/off ratio higher than 10^3. These results open up the possibility of utilizing these functional one-dimensional topological objects in high-density nanoelectronic devices such as ultrahigh density nonvolatile memory. | condensed matter
A charge flow through a magnetic tunnel junction (MTJ) leads to the generation of a spin-polarized current which exerts a spin-transfer torque (STT) on the magnetization. When the density of applied direct current exceeds some critical value, the STT excites high-frequency magnetization precession in the "free" electrode of MTJ. Such precession gives rise to microwave output voltage and, furthermore, can be employed for spin pumping into adjacent normal metal or semiconductor. Here we describe theoretically the spin dynamics and charge transport in the CoFeB/MgO/CoFeB/Au tunneling heterostructure connected to a constant-current source. The magnetization dynamics in the free CoFeB layer with weak perpendicular anisotropy is calculated by numerical integration of the Landau-Lifshitz-Gilbert-Slonczewski equation accounting for both STT and voltage controlled magnetic anisotropy associated with the CoFeB|MgO interface. It is shown that a large-angle magnetization precession, resulting from electrically induced dynamic spin reorientation transition, can be generated in a certain range of relatively low current densities. An oscillating spin current, which is pumped into the Au overlayer owing to such precession, is then evaluated together with the injected spin current. Considering both the driving spin-polarized charge current and the pumped spin current, we also describe the charge transport in the CoFeB/Au bilayer with the account of anomalous and inverse spin Hall effects. An electric potential difference between the lateral sides of the CoFeB/Au bilayer is calculated as a function of distance from the CoFeB|MgO interface. It is found that this transverse voltage signal in Au is large enough for experimental detection, which indicates significant efficiency of the proposed current-driven spin injector. | condensed matter |
We discuss the effects of a gauge freedom in representing quantum information processing devices, and its implications for characterizing these devices. We demonstrate with experimentally relevant examples that there exist equally valid descriptions of the same experiment which distribute errors differently among objects in a gate-set, leading to different error rates. Consequently, it can be misleading to attach a concrete operational meaning to figures of merit for individual gate-set elements. We propose an alternative operational figure of merit for a gate-set, the mean variation error, and a protocol for measuring this figure. | quantum physics
Based on statistical analysis of synchrotron polarization intensity, we study the anisotropic properties of compressible magnetohydrodynamic (MHD) turbulence. The second-order normalized structure function, quadrupole ratio modulus and anisotropic coefficient are synergistically used to characterize the anisotropy of the polarization intensity. On the basis of pre-decomposition data cubes, we first explore the anisotropy of the polarization intensity in different turbulence regimes and find that the most significant anisotropy occurs in the sub-Alfv\'enic regime. Using post-decomposition data cubes in this regime, we then study the anisotropy of the polarization intensity from Alfv\'en, slow and fast modes. Statistics of polarization intensity from Alfv\'en and slow modes demonstrate the significant anisotropy while statistics of polarization intensity from fast modes show isotropic structures, which is consistent with the earlier results provided in Cho & Lazarian (2002). As a result, both quadrupole ratio modulus and anisotropic coefficient for polarization intensities can quantitatively recover the anisotropy of underlying compressible MHD turbulence. The synergistic use of the two methods helps enhance the reliability of the magnetic field measurement. | astrophysics |
In this paper, we revisit the concepts of the reactivity map and the reactivity bands as an alternative to the use of perturbation theory for the determination of the phase space geometry of chemical reactions. We introduce a reformulated metric, called the asymptotic trajectory indicator, and an efficient algorithm to obtain reactivity boundaries. We demonstrate that this method has sufficient accuracy to reproduce phase space structures such as turnstiles for a 1D model of the isomerization of ketene in an external field. The asymptotic trajectory indicator can be applied to higher dimensional systems coupled to Langevin baths as we demonstrate for a 3D model of the isomerization of ketene. | physics |
[Background] Previous research has shown that developers commonly misuse cryptography APIs. [Aim] We have conducted an exploratory study to find out how crypto APIs are used in open-source Java projects, what types of misuses exist, and why developers make such mistakes. [Method] We used a static analysis tool to analyze hundreds of open-source Java projects that rely on Java Cryptography Architecture, and manually inspected half of the analysis results to assess the tool results. We also contacted the maintainers of these projects by creating an issue on the GitHub repository of each project, and discussed the misuses with developers. [Results] We learned that 85% of Cryptography APIs are misused, however, not every misuse has severe consequences. Developer feedback showed that security caveats in the documentation of crypto APIs are rare, developers may overlook misuses that originate in third-party code, and the context where a Crypto API is used should be taken into account. [Conclusion] We conclude that using Crypto APIs is still problematic for developers but blindly blaming them for such misuses may lead to erroneous conclusions. | computer science |
A fundamental pursuit in complexity theory concerns reducing worst-case problems to average-case problems. There exist complexity classes such as PSPACE that admit worst-case to average-case reductions. However, for many other classes such as NP, the evidence so far is typically negative, in the sense that the existence of such reductions would cause collapses of the polynomial hierarchy (PH). Basing cryptographic primitives, e.g., the average-case hardness of inverting one-way permutations, on NP-completeness is a particularly intriguing instance. As there is evidence showing that classical reductions from NP-hard problems to breaking these primitives result in PH collapses, it seems unlikely that cryptographic primitives can be based on NP-hard problems. Nevertheless, these results do not rule out the possibility of quantum reductions. In this work, we initiate a study of the quantum analogues of these questions. Aside from formalizing basic notions of quantum reductions and demonstrating powers of quantum reductions by examples of separations, our main result shows that if NP-complete problems reduce to inverting one-way permutations using certain types of quantum reductions, then coNP $\subseteq$ QIP(2). | quantum physics
We extend results of Denef, Zahidi, Demeyer and the second author to show the following. (1) Rational integers have a single-fold Diophantine definition over the ring of integral functions of any function field of characteristic 0. (2) Every c.e. set of integers has a finite-fold Diophantine definition over the ring of integral functions of any function field of characteristic $0$. (3) All c.e. subsets of polynomial rings over totally real number fields have finite-fold Diophantine definitions. (These are the first examples of infinite rings with this property.) (4) If $k$ is algebraic over $\Q$ and is embeddable into a finite extension of $\Q_p$ for odd $p$, and $K$ is a one-variable function field over $k$, then the valuation ring of any function field valuation of $K$ has a Diophantine definition over $K$. (5) If $k$ is algebraic over $\Q$ and is embeddable into $\R$, and $K$ is a function field over $k$, then "almost" all function field valuations of $K$ have a valuation ring Diophantine over $K$. (6) Let $K$ be a one-variable function field over a number field and let $S$ be a finite set of its primes. Then all c.e. subsets of $O_{K,S}$ are existentially definable. (Here $O_{K,S}$ is the ring of $S$-integers or a ring of integral functions.) | mathematics |
Many dark matter models generically predict invisible and displaced signatures at Belle II, but even striking events may be missed by the currently implemented search programme because of inefficient trigger algorithms. Of particular interest are final states with a single photon accompanied by missing energy and a displaced pair of electrons, muons, or hadrons. We argue that a displaced vertex trigger will be essential to achieve optimal sensitivity at Belle II. To illustrate this point, we study a simple but well-motivated model of thermal inelastic dark matter in which this signature naturally occurs and show that otherwise inaccessible regions of parameter space can be tested with such a search. We also evaluate the sensitivity of single-photon searches at BaBar and Belle II to this model and provide detailed calculations of the relic density target. | high energy physics phenomenology |
Null Hypothesis Significance Testing (NHST) has long been central to the scientific project, guiding theory development and supporting evidence-based intervention and decision-making. Recent years, however, have seen growing awareness of serious problems with NHST as it is typically used, and hence to proposals to limit the use of NHST techniques, to abandon these techniques and move to alternative statistical approaches, or even to ban the use of NHST entirely. These proposals are premature, because the observed problems with NHST all arise as a consequence of a contingent and in many cases incorrect choice: that of NHST testing against point-form nulls. We show that testing against distributional, rather than point-form, nulls is better motivated mathematically and experimentally, and that the use of distributional nulls addresses many problems with the standard point-form NHST approach. We also show that use of distributional nulls allows a form of null hypothesis testing that takes into account both the statistical significance of a given result and the probability of replication of that result in a new experiment. Rather than abandoning NHST, we should use the NHST approach in its more general form, with distributional rather than point-form nulls. | statistics |
Unlike parametric regression, machine learning (ML) methods do not generally require precise knowledge of the true data generating mechanisms. As such, numerous authors have advocated for ML methods to estimate causal effects. Unfortunately, ML algorithms can perform worse than parametric regression. We demonstrate the performance of ML-based single- and double-robust estimators. We use 100 Monte Carlo samples with sample sizes of 200, 1200, and 5000 to investigate bias and confidence interval coverage under several scenarios. In a simple confounding scenario, confounders were related to the treatment and the outcome via parametric models. In a complex confounding scenario, the simple confounders were transformed to induce complicated nonlinear relationships. In the simple scenario, when ML algorithms were used, double-robust estimators were superior to single-robust estimators. In the complex scenario, single-robust estimators with ML algorithms were at least as biased as estimators using misspecified parametric models. Double-robust estimators were less biased, but coverage was well below nominal. The combination of sample splitting, inclusion of confounder interactions, reliance on a richly specified ML algorithm, and use of double-robust estimators was the only explored approach that yielded negligible bias and nominal coverage. Our results suggest that ML-based single-robust methods should be avoided. | statistics
We introduce a two-parameter approximate counter-diabatic term into the Hamiltonian of the transverse-field Ising model for quantum annealing to accelerate convergence to the solution, generalizing an existing single-parameter approach. The protocol is equivalent to unconventional diabatic control of the longitudinal and transverse fields in the transverse-field Ising model and thus makes it more feasible for experimental realization than an introduction of new terms such as non-stoquastic catalysts toward the same goal of performance enhancement. We test the idea for the $p$-spin model with $p=3$, which has a first-order quantum phase transition, and show that our two-parameter approach leads to significantly larger ground-state fidelity and lower residual energy than those by traditional quantum annealing as well as by the single-parameter method. We also find a scaling advantage in terms of the time to solution as a function of the system size in a certain range of parameters as compared to the traditional methods. | quantum physics |
This paper focuses on optimal transmit power allocation to maximize the overall system throughput in a vehicle-to-everything (V2X) communication system. We propose two methods for solving the power allocation problem namely the weighted minimum mean square error (WMMSE) algorithm and the deep learning-based method. In the WMMSE algorithm, we solve the problem using block coordinate descent (BCD) method. Then we adopt supervised learning technique for the deep neural network (DNN) based approach considering the power allocation from the WMMSE algorithm as the target output. We exploit an efficient implementation of the mini-batch gradient descent algorithm for training the DNN. Extensive simulation results demonstrate that the DNN algorithm can provide very good approximation of the iterative WMMSE algorithm reducing the computational overhead significantly. | electrical engineering and systems science |
We theoretically investigated spectrally uncorrelated biphotons generated in a counter-propagating spontaneous parametric downconversion (CP-SPDC) from periodically poled MTiOXO4 (M = K, Rb, Cs; X = P, As) crystals. By numerical calculation, it was found that the five crystals from the KTP family can be used to generate heralded single photons with high spectral purity and wide tunability. Under the type-0 phase-matching condition, the purity at 1550 nm was between 0.91 and 0.92, and the purity can be maintained over 0.90 from 1500 nm to 2000 nm wavelength. Under the type-II phase-matching condition, the purity at 1550 nm was 0.96, 0.97, 0.97, 0.98, and 0.98 for PPKTP, PPRTP, PPKTA, PPRTA, and PPCTA, respectively; furthermore, the purity can be kept over 0.96 for more than 600 nm wavelength range. We also simulated the Hong-Ou-Mandel interference between independent photon sources for PPRTP crystals at 1550 nm, and interference visibility was 92% (97%) under type-0 (type-II) phase-matching condition. This study may provide spectrally pure narrowband single-photon sources for quantum memories and quantum networks at telecom wavelengths. | quantum physics |
With vast mmWave spectrum and narrow beam antenna technology, precise position location is now possible in 5G and future mobile communication systems. In this article, we describe how centimeter-level localization accuracy can be achieved, particularly through the use of map-based techniques. We show how data fusion of parallel information streams, machine learning, and cooperative localization techniques further improve positioning accuracy. | computer science
We studied physical properties of matter in 24,544 filaments ranging from 30 to 100 Mpc in length, identified in the Sloan Digital Sky Survey (SDSS). We stacked the Comptonization y map produced by the Planck Collaboration around the filaments, excluding the resolved galaxy groups and clusters above a mass of ~3*10^13 Msun. We detected the thermal Sunyaev-Zel'dovich signal for the first time at a significance of 4.4 sigma in filamentary structures on such a large scale. We also stacked the Planck cosmic microwave background (CMB) lensing convergence map in the same manner and detected the lensing signal at a significance of 8.1 sigma. To estimate physical properties of the matter, we considered an isothermal cylindrical filament model with a density distribution following a beta-model (beta=2/3). Assuming that the gas distribution follows the dark matter distribution, we estimate that the central gas and matter overdensity and gas temperature are overdensity = (19.0 +27.3 -12.1) and temperature = (1.2 +- 0.4)*10^6 K, which results in a measured baryon fraction of (0.080 +0.116 -0.051) * Omega_b. | astrophysics |
de Sitter vacuum of nonconformal gauge theories is non-equilibrium, manifested by a nonvanishing rate of the comoving entropy production at asymptotically late times. This entropy production rate is related to the entanglement entropy of the de Sitter vacuum of the theory. We use holographic correspondence to compute vacuum entanglement entropy density $s_{ent}$ of mass deformed ${\cal N}=4$ supersymmetric Yang-Mills theory - the ${\cal N}=2^*$ gauge theory - for various values of the masses and the coupling constant to the background space-time curvature. For a particular choice of the curvature coupling, the Euclidean model can be solved exactly using the supersymmetric localization. We show that ${\cal N}=2^*$ de Sitter entanglement entropy is not the thermodynamic entropy of the localization free energy at de Sitter temperature. Neither it is related to the thermal entropy of de Sitter vacuum of pair-produced particles. | high energy physics theory |
Graph states are a unique resource for quantum information processing, such as measurement-based quantum computation. Here, we theoretically investigate using continuous-variable graph states for single-parameter quantum metrology, including both phase and displacement sensing. We identified the optimal graph states for the two sensing modalities and showed that Heisenberg scaling of the accuracy for both phase and displacement sensing can be achieved with local homodyne measurements. | quantum physics |
We study the discovery potential of axion-like particles (ALP), pseudo-scalars weakly coupled to Standard Model fields, at the Large Hadron Collider (LHC). Our focus is on ALPs coupled to the electromagnetic field, which would induce anomalous scattering of light-by-light. This can be directly probed in central exclusive production of photon pairs in ultra-peripheral collisions at the LHC in proton and heavy ion collisions. We consider non-standard collision modes of the LHC, such as argon-argon collisions at $\sqrt{s_{NN}} = 7$ TeV and proton-lead collisions at $\sqrt{s_{NN}} = 8.16$ TeV to access regions in the parameter space complementary to the ones previously considered for lead-lead or proton-proton collisions. In addition, we show that, using laser beam interactions, we can constrain ALPs as resonant deviations in the refractive index, induced by anomalous light-by-light scattering effects. If we combine the aforementioned approaches, ALPs can be probed in a wide range of masses from the eV scale up to the TeV scale. | high energy physics phenomenology |
In this paper we predict a full 3D avatar of a person from a single image. We infer texture and geometry in the UV-space of the SMPL model using an image-to-image translation method. Given partial texture and segmentation layout maps derived from the input view, our model predicts the complete segmentation map, the complete texture map, and a displacement map. The predicted maps can be applied to the SMPL model in order to naturally generalize to novel poses, shapes, and even new clothing. In order to learn our model in a common UV-space, we non-rigidly register the SMPL model to thousands of 3D scans, effectively encoding textures and geometries as images in correspondence. This turns a difficult 3D inference task into a simpler image-to-image translation one. Results on rendered scans of people and images from the DeepFashion dataset demonstrate that our method can reconstruct plausible 3D avatars from a single image. We further use our model to digitally change pose, shape, swap garments between people and edit clothing. To encourage research in this direction, we will make the source code available for research purposes. | computer science
We develop a monitoring procedure to detect changes in a large approximate factor model. Letting $r$ be the number of common factors, we base our statistics on the fact that the $(r+1)$-th eigenvalue of the sample covariance matrix is bounded under the null of no change, whereas it becomes spiked under changes. Given that sample eigenvalues cannot be estimated consistently under the null, we randomise the test statistic, obtaining a sequence of i.i.d. statistics, which are used for the monitoring scheme. Numerical evidence shows a very small probability of false detections, and tight detection times of change-points. | statistics
We use the functional renormalization approach for quantum spin systems developed by Krieg and Kopietz [Phys. Rev. B $\mathbf{99}$, 060403(R) (2019)] to calculate the spin-spin correlation function $G (\boldsymbol{k}, \omega )$ of quantum Heisenberg magnets at infinite temperature. For small wavevectors $\boldsymbol{k} $ and frequencies $\omega$ we find that $G ( \boldsymbol{k}, \omega )$ assumes the diffusive form predicted by hydrodynamics. Our result for the spin-diffusion coefficient ${\cal{D}}$ is somewhat smaller than previous theoretical predictions based on the extrapolation of the short-time expansion, but is still about $30 \%$ larger than the measured high-temperature value of ${\cal{D}}$ in the Heisenberg ferromagnet Rb$_2$CuBr$_4\cdot$2H$_2$O. In reduced dimensions $d \leq 2$ we find superdiffusion characterized by a frequency-dependent complex spin-diffusion coefficient ${\cal{D}} ( \omega )$ which diverges logarithmically in $d=2$, and as a power-law ${\cal{D}} ( \omega ) \propto \omega^{-1/3}$ in $d=1$. Our result in one dimension implies scaling with dynamical exponent $z =3/2$, in agreement with recent calculations for integrable spin chains. Our approach is not restricted to the hydrodynamic regime and allows us to calculate the dynamic structure factor $S ( \boldsymbol{k} , \omega )$ for all wavevectors. We show how the short-wavelength behavior of $S ( \boldsymbol{k}, \omega )$ at high temperatures reflects the relative sign and strength of competing exchange interactions. | condensed matter |
Predicting the evolution of mortality rates plays a central role for life insurance and pension funds. Various stochastic frameworks have been developed to model mortality patterns taking into account the main stylized facts driving these patterns. However, relying on the prediction of one specific model can be too restrictive and lead to some well documented drawbacks including model misspecification, parameter uncertainty and overfitting. To address these issues we first consider mortality modelling in a Bayesian Negative-Binomial framework to account for overdispersion and the uncertainty about the parameter estimates in a natural and coherent way. Model averaging techniques, which consists in combining the predictions of several models, are then considered as a response to model misspecifications. In this paper, we propose two methods based on leave-future-out validation which are compared to the standard Bayesian model averaging (BMA) based on marginal likelihood. Using out-of-sample errors is a well-known workaround for overfitting issues. We show that it also produces better forecasts. An intensive numerical study is carried out over a large range of simulation setups to compare the performances of the proposed methodologies. An illustration is then proposed on real-life mortality datasets which includes a sensitivity analysis to a Covid-type scenario. Overall, we found that both methods based on out-of-sample criterion outperform the standard BMA approach in terms of prediction performance and robustness. | statistics |
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations. While previous work has developed algorithmic approaches to attacking and defending VAEs, there remains a lack of formalization for what it means for a VAE to be robust. To address this, we develop a novel criterion for robustness in probabilistic models: $r$-robustness. We then use this to construct the first theoretical results for the robustness of VAEs, deriving margins in the input space for which we can provide guarantees about the resulting reconstruction. Informally, we are able to define a region within which any perturbation will produce a reconstruction that is similar to the original reconstruction. To support our analysis, we show that VAEs trained using disentangling methods not only score well under our robustness metrics, but that the reasons for this can be interpreted through our theoretical results. | statistics |
The package BeyondBenford compares the goodness of fit of Benford's and Blondeau Da Silva's (BDS's) digit distributions in a dataset. The package is used to check whether the data distribution is consistent with theoretical distributions highlighted by Blondeau Da Silva or not: this ideal theoretical distribution must be at least approximately followed by the data for the use of BDS's model to be well-founded. It also allows to draw histograms of digit distributions, both observed in the dataset and given by the two theoretical approaches. Finally, it proposes to quantify the goodness of fit via Pearson's chi-squared test. | physics |
In this short note we describe an interesting new phenomenon about the $\mathrm{Sp}(4,\mathbb{R})$-character variety. Precisely, we show that the Hitchin component and all Gothen components share the same boundary in our length spectrum compactification. | mathematics |
Compressive sensing can overcome the Nyquist criterion and record images with a fraction of the usual number of measurements required. However, conventional measurement bases are susceptible to diffraction and scattering, prevalent in high-resolution microscopy. Here, we explore the random Morlet basis as an optimal set for compressive multiphoton imaging, based on its ability to minimise the space-frequency uncertainty. We implement this approach for the newly developed method of wide-field multiphoton microscopy with single-pixel detection (TRAFIX), which allows imaging through turbid media without correction. The Morlet basis is well-suited to TRAFIX at depth, and promises a route for rapid acquisition with low photodamage. | physics |
Computer models play a key role in many scientific and engineering problems. One major source of uncertainty in computer model experiment is input parameter uncertainty. Computer model calibration is a formal statistical procedure to infer input parameters by combining information from model runs and observational data. The existing standard calibration framework suffers from inferential issues when the model output and observational data are high-dimensional dependent data such as large time series due to the difficulty in building an emulator and the non-identifiability between effects from input parameters and data-model discrepancy. To overcome these challenges we propose a new calibration framework based on a deep neural network (DNN) with long-short term memory layers that directly emulates the inverse relationship between the model output and input parameters. Adopting the 'learning with noise' idea we train our DNN model to filter out the effects from data model discrepancy on input parameter inference. We also formulate a new way to construct interval predictions for DNN using quantile regression to quantify the uncertainty in input parameter estimates. Through a simulation study and real data application with WRF-hydro model we show that our approach can yield accurate point estimates and well calibrated interval estimates for input parameters. | statistics |
We establish lower-bounds on the number of resource states, also known as magic states, needed to perform various quantum computing tasks, treating stabilizer operations as free. Our bounds apply to adaptive computations using measurements and an arbitrary number of stabilizer ancillas. We consider (1) resource state conversion, (2) single-qubit unitary synthesis, and (3) computational tasks. To prove our resource conversion bounds we introduce two new monotones, the stabilizer nullity and the dyadic monotone, and make use of the already-known stabilizer extent. We consider conversions that borrow resource states, known as catalyst states, and return them at the end of the algorithm. We show that catalysis is necessary for many conversions and introduce new catalytic conversions, some of which are close to optimal. By finding a canonical form for post-selected stabilizer computations, we show that approximating a single-qubit unitary to within diamond-norm precision $\varepsilon$ requires at least $1/7\cdot\log_2(1/\varepsilon) - 4/3$ $T$-states on average. This is the first lower bound that applies to synthesis protocols using fall-back, mixing techniques, and where the number of ancillas used can depend on $\varepsilon$. Up to multiplicative factors, we optimally lower bound the number of $T$ or $CCZ$ states needed to implement the ubiquitous modular adder and multiply-controlled-$Z$ operations. When the probability of Pauli measurement outcomes is 1/2, some of our bounds become tight to within a small additive constant. | quantum physics |
This paper develops a new model reference adaptive control (MRAC) framework using partial-state feedback for solving a multivariable adaptive output tracking problem. The developed MRAC scheme has full capability to deal with plant uncertainties for output tracking and has desired flexibility to combine the advantages of full-state feedback MRAC and output feedback MRAC. With such a new control scheme, the plant-model matching condition is achievable as with an output or state feedback MRAC design. A stable adaptive control scheme is developed based on LDS decomposition of the plant high-frequency gain matrix, which guarantees closed-loop stability and asymptotic output tracking. The proposed partial-state feedback MRAC scheme not only expands the existing family of MRAC, but also provides new features to the adaptive control system, including additional design flexibility and feedback capacity. Based on its additional design flexibility, a minimal-order MRAC scheme is also presented, which reduces the control adaptation complexity and relaxes the feedback information requirement, compared to the existing MRAC schemes. New results are presented for plant-model matching, error model, adaptive law and stability analysis. A simulation study of a linearized aircraft model is conducted to demonstrate the effectiveness and new features of the proposed MRAC control scheme. | electrical engineering and systems science |
We propose a couple of oracle construction methods for quantum pattern matching. We in turn show that one of the constructs can be used with Grover's search algorithm for exact and partial pattern matching, deterministically. The other one also points to the matched indices, but primarily provides a means to generate the Hamming distance between the pattern to be searched and all the possible substrings in the input string, in a probabilistic way. | quantum physics
We look at common problems found in data that is used for predictive modeling tasks, and describe how to address them with the vtreat R package. vtreat prepares real-world data for predictive modeling in a reproducible and statistically sound manner. We describe the theory of preparing variables so that data has fewer exceptional cases, making it easier to safely use models in production. Common problems dealt with include: infinite values, invalid values, NA, too many categorical levels, rare categorical levels, and new categorical levels (levels seen during application, but not during training). Of special interest are techniques needed to avoid needlessly introducing undesirable nested modeling bias (which is a risk when using a data-preprocessor). | statistics |
Construction of tight confidence regions and intervals is central to statistical inference and decision making. This paper develops a new theory for constructing minimum average volume confidence regions for categorical data. More precisely, consider an empirical distribution $\widehat{\boldsymbol{p}}$ generated from $n$ iid realizations of a random variable that takes one of $k$ possible values according to an unknown distribution $\boldsymbol{p}$. This is analogous to a single draw from a multinomial distribution. A confidence region is a subset of the probability simplex that depends on $\widehat{\boldsymbol{p}}$ and contains the unknown $\boldsymbol{p}$ with a specified confidence. This paper shows how one can construct minimum average volume confidence regions, answering a long-standing question. We also show that the optimality of the regions directly translates to optimal confidence intervals of linear functionals such as the mean, implying sample complexity and regret improvements for adaptive machine learning algorithms. | statistics
Generative adversarial networks (GAN) became a hot topic, presenting impressive results in the field of computer vision. However, there are still open problems with the GAN model, such as the training stability and the hand-design of architectures. Neuroevolution is a technique that can be used to provide the automatic design of network architectures even in large search spaces as in deep neural networks. Therefore, this project proposes COEGAN, a model that combines neuroevolution and coevolution in the coordination of the GAN training algorithm. The proposal uses the adversarial characteristic between the generator and discriminator components to design an algorithm using coevolution techniques. Our proposal was evaluated in the MNIST dataset. The results suggest the improvement of the training stability and the automatic discovery of efficient network architectures for GANs. Our model also partially solves the mode collapse problem. | computer science |
In this letter, the secrecy performance in cognitive radio networks (CRNs) over fluctuating two-ray (FTR) channels, which is used to model the millimetre wave channel, is investigated in terms of the secrecy outage probability (SOP). Specifically, we consider the case where a source (S) transmits confidential messages to a destination (D), and an eavesdropper wants to wiretap the information from S to D. In a CRN framework, we assume that the primary user shares its spectrum with S, where S adopts the underlay strategy to control its transmit power without impairing the quality of service of the primary user. After some mathematical manipulations, an exact analytical expression for the SOP is derived. In order to get physical and technical insights into the effect of the channel parameters on the SOP, we derive an asymptotic formula for the SOP in the high signal-to-noise ratio region of the S--D link. We finally show some selected Monte-Carlo simulation results to validate the correctness of our derived analytical expressions. | electrical engineering and systems science |