text | label |
---|---|
In this work, we use the $C^3P_{\,0}$ model to calculate the decay widths of the low lying charmonium $J^{PC}=1^{--}$ states, nominally $J/\psi(1S)$ and $\psi(2S)$, in the following common channels: $\rho\,\pi$, $\omega\,\eta$, $\omega\,\eta^\prime$, $K^{\ast +}\,K^-$, $K^{\ast 0}\,\bar{K}^0$, $\phi\,\eta$, $\phi\,\eta^\prime$. | high energy physics phenomenology |
Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks 1st on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper. | computer science |
The rapidity dependent transverse momentum spectra of heavy quarkonia (J/psi and Upsilon mesons) produced in small collision systems such as proton-proton (pp) and proton-lead (p-Pb) collisions at center-of-mass energy (per nucleon pair) 5-13 TeV are described by a two-component statistical model which is based on the Tsallis statistics and inverse power-law. The experimental data measured by the LHCb Collaboration at the Large Hadron Collider (LHC) are well fitted by the model results. The related parameters are obtained and the dependences of parameters on rapidity are analyzed. | high energy physics phenomenology |
One of the fundamental problems in Bayesian statistics is the approximation of the posterior distribution. The Gibbs sampler and coordinate ascent variational inference are widely used approximation techniques that rely on stochastic and deterministic approximations, respectively. In this paper, we define fundamental sets of densities frequently used in Bayesian inference. We shall be concerned with the clarification of the two schemes from the set-theoretical point of view. This new perspective provides an alternative mechanism for analyzing the two schemes, endowed with pedagogical insights. | mathematics |
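A minimal illustration of the two schemes this abstract contrasts, on a toy bivariate normal target: the Gibbs sampler alternates stochastic draws from full conditionals, while coordinate ascent variational inference (CAVI) alternates deterministic updates of a factorized approximation. The target and update equations are textbook material, not taken from the paper.

```python
import numpy as np

rho, n_iter = 0.8, 1000
rng = np.random.default_rng(0)

# Gibbs sampler: alternate draws from the full conditionals
# x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2).
x, y, samples = 0.0, 0.0, []
for _ in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples.append((x, y))

# CAVI: the optimal factorized q(x)q(y) for this target is Gaussian with
# fixed variance 1-rho^2, so only the means are updated, deterministically.
mu_x, mu_y = 1.0, 1.0
for _ in range(50):
    mu_x = rho * mu_y
    mu_y = rho * mu_x

print(np.mean(samples, axis=0), (mu_x, mu_y))  # both approach (0, 0)
```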
The area of distributed ledgers is a vast and quickly developing landscape. At the heart of most distributed ledgers is their consensus protocol. The consensus protocol describes the way participants in a distributed network interact with each other to obtain and agree on a shared state. While classical Byzantine fault tolerant (BFT) consensus algorithms are designed to work only in closed, size-limited networks, modern distributed ledgers -- and blockchains in particular -- often focus on open, permissionless networks. In this paper, we present a novel blockchain consensus algorithm, called Albatross, inspired by speculative BFT algorithms. Transactions in Albatross benefit from strong probabilistic finality. We describe the technical specification of Albatross in detail and analyse its security and performance. We conclude that the protocol is secure under regular PBFT security assumptions and has a performance close to the theoretical maximum for single-chain Proof-of-Stake consensus algorithms. | computer science |
This paper establishes some of the fundamental barriers in the theory of computation and finally settles the long-standing computational spectral problem: to determine the existence of algorithms that can compute spectra $\mathrm{sp}(A)$ of classes of bounded operators $A = \{a_{ij}\}_{i,j \in \mathbb{N}} \in \mathcal{B}(l^2(\mathbb{N}))$, given the matrix elements $\{a_{ij}\}_{i,j \in \mathbb{N}}$, that are sharp in the sense that they achieve the boundary of what a digital computer can achieve. Similarly, for a Schr\"odinger operator $H = -\Delta+V$, determine the existence of algorithms that can compute the spectrum $\mathrm{sp}(H)$ given point samples of the potential function $V$. In order to solve these problems, we establish the Solvability Complexity Index (SCI) hierarchy and provide a collection of new algorithms that allow for problems that were previously out of reach. The SCI is the smallest number of limits needed in the computation, yielding a classification hierarchy for all types of problems in computational mathematics that determines the boundaries of what computers can achieve in scientific computing. In addition, the SCI hierarchy provides classifications of computational problems that can be used in computer-assisted proofs. The SCI hierarchy captures many key computational issues in the history of mathematics, including the insolvability of the quintic, Smale's problem on the existence of an iterative generally convergent algorithm for polynomial root finding, the computational spectral problem, inverse problems, optimisation, etc. | computer science |
Age of information (AoI) is a recently proposed metric for quantifying data freshness in real-time status monitoring systems where timeliness is of importance. In this paper, we explore data freshness in the Hyperledger Fabric Blockchain-enabled monitoring network (HeMN) by leveraging the AoI metric. In HeMN, status updates from sources are transmitted through an uplink and recorded in a Hyperledger Fabric (HLF) network. To provide a stochastic guarantee of data freshness, we derive a closed-form expression for the AoI violation probability by considering the transmission latency and the consensus latency. Then, we validate our analytic results through the implemented HLF platform. We also investigate the effect of the target successful transmission probability (STP) on the AoI violation probability. | electrical engineering and systems science |
We give a simple classification of the independent $n$-point interaction vertices for bosonic higher-spin gauge fields in $d$-dimensional Minkowski space-times. We first give a characterisation of such vertices for large dimensions, $d \geq 2n - 1$, where one does not have to consider Schouten identities due to over-antisymmetrisation of space-time indices. When the dimension is lowered, such identities have to be considered, but their appearance only leads to equivalences of large-$d$ vertices and does not lead to new types of vertices. We consider the case of low dimensions, $d<n$, in detail, where the large number of Schouten identities leads to strong restrictions on independent vertices. We also comment on the generalisation of our results to the intermediate case $n \leq d \leq 2n - 2$. In all cases, the independent vertices are expressed in terms of elementary manifestly gauge-invariant quantities, suggesting that no deformations of the gauge transformations are induced. | high energy physics theory |
It is an open problem whether a classical client can delegate quantum computing to an efficient remote quantum server in such a way that the correctness of quantum computing is somehow guaranteed. Several protocols for verifiable delegated quantum computing have been proposed, but the client is not completely free from any quantum technology: the client has to generate or measure single-qubit states. In this paper, we show that the client can be completely classical if the server is rational (i.e., economically motivated), following the "rational proofs" framework of Azar and Micali. More precisely, we consider the following protocol. The server first sends the client a message allegedly equal to the solution of the problem that the client wants to solve. The client then gives the server a monetary reward whose amount is calculated in classical probabilistic polynomial-time by using the server's message as an input. The reward function is constructed in such a way that the expectation value of the reward (the expectation over the client's probabilistic computing) is maximum when the server's message is the correct solution to the problem. The rational server who wants to maximize his/her profit therefore has to send the correct solution to the client. | quantum physics |
Simulations of radiation damage in single-molecule imaging using an X-ray free-electron laser use atomic rates calculated in the lowest order. We investigate the difference in ion yield predictions using Hartree-Fock and Hartree-Fock-Slater approximations for light and heavy elements of biological significance. The results show that for the biologically abundant elements of the second and third rows of the periodic table both approximations agree to about 6%. For the heavier elements beyond the fourth row the discrepancy rises to 11% for the range of the pulse parameters covered in this work. The presented analysis can be used for error estimation in a wide range of ab initio simulations of the X-ray pulse interaction with biological molecules. We also discuss other atomic structure effects and show that accounting for them has a considerably smaller effect on the ion yields of the respective elements compared to the choice of the approximation. | physics |
We compute the spectra and total fluxes of quantum mechanically produced particles crossing the black hole and cosmological horizons in Schwarzschild de Sitter (SdS). Particle states are defined with respect to well-behaved Kruskal coordinates near the horizons, and as a consequence we find that these spectra are generally non-thermal. The non-thermal Bogoliubov coefficient for a vacuum fluctuation near the black hole horizon to produce a particle that crosses the cosmological horizon is shown to be equal to the convolution of two thermal coefficients, one at the cosmological temperature and one at the black hole temperature, weighted by the transmission coefficient for wave propagation in static SdS coordinates. In this sense virtual thermal propagation underlies the production process. This representation leads to the useful result that the geometric optics approximation is reliable when used together with a low frequency cut-off determined by the transmission coefficient. The large black hole limit is a quasi-equilibrium situation: as both temperatures approach the common value of zero, the particle spectra become equal, and both emissions are exponentially suppressed. Small black holes radiate as thermal bodies and absorb a tiny flux of cosmological particles. The behavior of the quantum fluctuations on the horizons is seen to be consistent with the Schottky anomaly behavior of classical gravitational fluctuations. | high energy physics theory |
Abusers increasingly use spyware apps, account compromise, and social engineering to surveil their intimate partners, causing substantial harms that can culminate in violence. This form of privacy violation, termed intimate partner surveillance (IPS), is a profoundly challenging problem to address due to the physical access and trust present in the relationship between the target and attacker. While previous research has examined IPS from the perspectives of survivors, we present the first measurement study of online forums in which (potential) attackers discuss IPS strategies and techniques. In domains such as cybercrime, child abuse, and human trafficking, studying the online behaviors of perpetrators has led to better threat intelligence and techniques to combat attacks. We aim to provide similar insights in the context of IPS. We identified five online forums containing discussion of monitoring cellphones and other means of surveilling an intimate partner, including three within the context of investigating relationship infidelity. We perform a mixed-methods analysis of these forums, surfacing the tools and tactics that attackers use to perform surveillance. Via qualitative analysis of forum content, we present a taxonomy of IPS strategies used and recommended by attackers, and synthesize lessons for technologists seeking to curb the spread of IPS. | computer science |
We study quantum walks through chains consisting of two and three star graphs. The first star has a distinguished vertex labelled START and the last has one labelled END. There are multiple paths between these two vertices, and the object is to find these paths. We show that a quantum walk can do this with a quantum speedup. | quantum physics |
Chemically tagging stars back to common formation sites in the Milky Way and establishing a high level of chemical homogeneity in these chemically-tagged birth clusters is important for understanding the chemical and dynamical history of the Galactic disc. We constrain the intrinsic abundance scatter in 17 newly chemically-tagged dissolved birth clusters found in the APOGEE survey by modeling APOGEE spectra as a one-dimensional function of initial stellar mass, performing forward modeling of the observed stellar spectra, and comparing the data and simulations using Approximate Bayesian Computation. We test this method on the well-known open clusters M67, NGC 6819, NGC 7789, and NGC 6791. We study 15 elements in APOGEE and find that, in general, we are able to obtain very strong constraints on the intrinsic abundance scatter of most elements in the chemically-tagged birth clusters, with upper limits of <~ 0.02 dex for C, <~ 0.03 dex for O, Mn, and Fe, <~ 0.04 dex for Si and Ni, and <~ 0.05 dex for N, Mg, and Ca. While we find some evidence for a small amount of chemical inhomogeneity in the remaining elements (i.e. Na, Al, S, K, Ti, and V), we are still able to obtain stronger limits compared to those found for open clusters, consistent with previous findings. By strongly constraining the level of chemical homogeneity within reconstructed birth clusters, we can strengthen the statement that these groups of stars represent birth clusters, with promising implications for future chemical tagging studies. | astrophysics |
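A minimal sketch of the Approximate Bayesian Computation step described above, reduced to a toy rejection sampler: draw an intrinsic scatter from a prior, forward-simulate abundances, and accept draws whose summary statistic is close to the observed one. The forward model, prior range, and tolerance below are placeholders, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cluster(scatter, n_stars=50):
    # Stand-in forward model: abundances scattered around a cluster mean.
    return rng.normal(0.0, scatter, n_stars)

observed = rng.normal(0.0, 0.02, 50)       # pretend APOGEE abundances
obs_stat = observed.std()

accepted = []
for _ in range(10_000):
    s = rng.uniform(0.0, 0.1)              # prior on intrinsic scatter (dex)
    sim = simulate_cluster(s)
    if abs(sim.std() - obs_stat) < 0.002:  # ABC acceptance tolerance
        accepted.append(s)

print(np.percentile(accepted, 95))         # upper limit on the scatter
```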
Using a unique data set containing about 15.06 million truck transportation records in five months, we investigate the highway freight transportation diversity of 338 Chinese cities based on the truck transportation probability $p_{ij}$ from one city to the other. The transportation probabilities are calculated from the radiation model based on the geographic distance and its cost-based version based on the driving distance as the proxy of cost. For each model, we consider both the population and the gross domestic product, and find quantitatively very similar results. We find that the transportation probabilities have nice power-law tails with the tail exponents close to 0.5 for all the models. The two transportation probabilities in each model fall around the diagonal $p_{ij}=p_{ji}$ but are often not the same. In addition, the corresponding transportation probabilities calculated from the raw radiation model and the cost-based radiation model also fluctuate around the diagonal $p_{ij}^{\rm{geo}}=p_{ij}^{\rm{cost}}$. We calculate four sets of highway truck transportation diversity according to the four sets of transportation probabilities that are found to be close to each other for each city pair. Further, it is found that the population, the gross domestic product, the in-flux, and the out-flux scale as power laws with respect to the transportation diversity in the raw and cost-based radiation models. It implies that a more developed city usually has higher diversity in highway truck transportation, which reflects the fact that a more developed city usually has a more diverse economic structure. | physics |
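For reference, a sketch of one common form of the radiation model underlying these transportation probabilities: $p_{ij} \propto m_i m_j / [(m_i + s_{ij})(m_i + m_j + s_{ij})]$, where $s_{ij}$ is the total mass (population or GDP) within distance $d_{ij}$ of city $i$, excluding $i$ and $j$; the cost-based variant swaps in driving distances. The normalization and variable names here are assumptions, not the paper's exact code.

```python
import numpy as np

def radiation_p(mass, dist):
    """p[i, j]: probability that a trip from city i ends in city j."""
    n = len(mass)
    p = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # total mass within radius d_ij of city i, excluding i and j
            inside = dist[i] <= dist[i, j]
            s = mass[inside].sum() - mass[i] - mass[j]
            p[i, j] = (mass[i] * mass[j]
                       / ((mass[i] + s) * (mass[i] + mass[j] + s)))
        p[i] /= p[i].sum()  # normalize outgoing probabilities of city i
    return p

mass = np.array([100.0, 50.0, 80.0])        # population or GDP proxies
dist = np.array([[0.0, 10.0, 20.0],
                 [10.0, 0.0, 15.0],
                 [20.0, 15.0, 0.0]])        # geographic or driving distance
print(radiation_p(mass, dist))
```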
Many matrix completion methods assume that the data follows the uniform distribution. To address the limitation of this assumption, Chen et al. \cite{Chen20152999} propose to recover the matrix where the data follows a specific biased distribution. Unfortunately, in most real-world applications, the observed data matrix is not only incomplete but may also contain corrupted information. This paper considers the recovery of a low-rank matrix, where some observed entries are sampled from a \emph{biased distribution} suitably dependent on the \emph{leverage scores} of the matrix, and some observed entries are uniformly corrupted. Our theoretical findings show that we can provably recover an unknown $n\times n$ matrix of rank $r$ from just about $O(nr\log^2 n)$ entries even when a few observed entries are corrupted with a small amount of noisy information. Empirical studies verify our theoretical results. | computer science |
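An illustrative setup for the sampling model described above (not the paper's exact scheme): observe entries with probability driven by row and column leverage scores of a rank-$r$ matrix, then uniformly corrupt a small fraction of the observations. The constant `c` and corruption rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 5
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r ground truth

U, _, Vt = np.linalg.svd(M, full_matrices=False)
mu = np.sum(U[:, :r] ** 2, axis=1)    # row leverage scores (sum to r)
nu = np.sum(Vt[:r, :] ** 2, axis=0)   # column leverage scores (sum to r)

# Biased sampling: P(observe entry (i, j)) grows with mu_i + nu_j.
c = 2.0 * np.log(n) ** 2 / n
P = np.minimum(c * (mu[:, None] + nu[None, :]), 1.0)
mask = rng.random((n, n)) < P

obs = np.where(mask, M, 0.0)
# Uniformly corrupt 5% of the observed entries with gross noise.
corrupt = mask & (rng.random((n, n)) < 0.05)
obs[corrupt] += rng.normal(scale=5.0, size=(n, n))[corrupt]
print(mask.sum(), corrupt.sum())
```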
This paper studies three-axis attitude control of a satellite by thrusters. The mathematical model of the attitude dynamics and kinematics of the satellite is represented as a switched system with sub-systems. Each sub-system is defined according to the on/off state of the thrusters. A training method based on dynamic programming is utilized which can find the appropriate switching between sub-systems such that a cost function is optimized. Furthermore, to extend the solution to a specific domain of interest, a neural network is used for approximating the cost function with basis functions. The training method is offline and it finds the optimal weights of the basis functions which can be used to find the optimal switching. It is shown that the proposed method can execute a maneuver in a fixed final time and bring the attitude to the desired final condition. Moreover, the proposed method is robust against uncertainties in system modeling. Finally, it is shown that the control scheme can be used to design a low-cost attitude control unit for micro-satellites or as a backup unit. | electrical engineering and systems science |
In the WHO glioma classification guidelines, grade, IDH mutation and 1p19q co-deletion play a central role as they are important markers for prognosis and optimal therapy planning. Therefore, we propose a fully automatic, MRI-based, 3D pipeline for glioma segmentation and classification. The designed segmentation network was a 3D U-Net achieving an average whole-tumor dice score of 90%. After segmentation, the 3D tumor ROI is extracted and fed into the multi-task classification network. The network was trained and evaluated on a large heterogeneous dataset of 628 patients, collected from The Cancer Imaging Archive (TCIA) and BraTS 2019 databases. Additionally, the network was validated on an independent dataset of 110 patients retrospectively acquired at the Ghent University Hospital (GUH). Classification AUC scores are 0.93, 0.94 and 0.82 on the TCIA test data and 0.94, 0.86 and 0.87 on the GUH data for grade, IDH and 1p19q status, respectively. | electrical engineering and systems science |
We present a semi-analytical model of the resonance phenomena occurring in a hybrid system made of a 1D array of periodic subwavelength slits deposited on an insulator/graphene layer. We show that the spectral response of this hybrid system can be fully explained by a simple semi-analytical model based on weak and strong couplings between two elementary sub-systems. The first elementary sub-system consists of a 1D array of periodic subwavelength slits viewed as a homogeneous medium. In this medium lives a metal-insulator-metal lattice mode interacting with surface and cavity plasmon modes. A weak coupling with surface plasmon modes on both faces of the perforated metal film leads to a broadband spectrum, while a strong coupling between this first sub-system and a second one made of a graphene-insulator-metal gap leads to a narrow-band spectrum. We provide a semi-analytical model based on these two interactions allowing efficient access to the full spectrum of the hybrid system. | physics |
A light CP-even Standard Model (SM) gauge-singlet scalar $S$ can be produced abundantly in the supernova core, via the nucleon bremsstrahlung process $N N \to N N S$, due to its mixing with the SM Higgs boson. Including the effective $S$ coupling to both nucleons and the pion mediators, we evaluate the production amplitude for the $S$ particle and point out a key difference with the well-known light CP-odd scalar (axion) and vector boson (dark photon) cases. Taking the subsequent decay and re-absorption of $S$ into account, we present a complete calculation of the energy loss rate for the $S$ particle. We then use the SN1987A luminosity constraints to derive updated supernova limits on the mixing of the scalar $S$ with the SM Higgs boson. We find that the mixing angle $\sin\theta$ with the SM Higgs is excluded only in the narrow range of $3.9 \times 10^{-7}$ to $7.0 \times 10^{-6}$, depending on the scalar mass up to about 147 MeV, beyond which the supernova limit disappears. | high energy physics phenomenology |
This paper presents a low-power 10-bit 130-MS/s successive approximation register (SAR) analog-to-digital converter (ADC) in a 90 nm CMOS process. The proposed asynchronous ADC consists of a comparator, a SAR logic block and two control blocks for the capacitive digital-to-analog converters (DACs). At a 1.2 V supply and 130 MS/s, the ADC achieves an SNDR of 55.2 dB and consumes 860 uW, resulting in a figure of merit (FOM) of 50.9 fJ/MHz. It achieves an ENOB of 8.8 bits with a differential input range of 1570 mV. | electrical engineering and systems science |
The distribution of spin directions of $\sim6.4\cdot10^4$ SDSS spiral galaxies with spectra was examined, and compared to the distribution of $\sim3.3\cdot10^4$ Pan-STARRS galaxies. The analysis shows a statistically significant asymmetry between the number of SDSS galaxies with opposite spin directions, and the magnitude and direction of the asymmetry change with the direction of observation and with the redshift. The redshift dependence shows that the distribution of the spin directions of SDSS galaxies becomes more asymmetric as the redshift gets higher. Fitting the distribution of the galaxy spin directions to a quadrupole alignment yields a fit with statistical significance >5$\sigma$, which grows to >8$\sigma$ when just galaxies with z>0.15 are used. Similar analysis with Pan-STARRS galaxies provides dipole and quadrupole alignments nearly identical to the analysis of SDSS galaxies, showing that the source of the asymmetry is not necessarily a certain unknown flaw in a specific telescope system. While these observations are clearly provocative, there is no known error that could exhibit itself in such a form. The data analysis process is fully automatic, and uses deterministic and symmetric algorithms with defined rules. It does not involve either manual analysis that can lead to human perceptual bias, or machine learning that can capture human biases or other subtle differences that are difficult to identify due to the complex nature of machine learning processes. Also, an error in the galaxy annotation process would be expected to show a consistent bias in all parts of the sky, rather than change with the direction of observation to form a clear and definable pattern. | astrophysics |
In this paper, we formulate a theory of the second-rank antisymmetric (pseudo)tensor field minimally coupled to a spinor, calculate the one-loop effective potential of the (pseudo)tensor field, and explicitly demonstrate that it is positive definite and possesses a continuous set of minima, both in the tensor and pseudotensor cases. Therefore, our model turns out to display dynamical Lorentz symmetry breaking. We also argue that, contrary to the derivative coupling we use here, derivative-free couplings of the antisymmetric tensor field to a spinor do not generate a positive definite potential and thus do not allow for dynamical Lorentz symmetry breaking. | high energy physics theory |
We consider the real time dynamics of $N$ noninteracting fermions in $d=1$. They evolve in a trapping potential $V(x)$, starting from the equilibrium state in a potential $V_0(x)$. We study the time evolution of the Wigner function $W(x,p,t)$ in the phase space $(x,p)$, and the associated kernel which encodes all correlation functions. At $t=0$ the Wigner function for large $N$ is uniform in phase space inside the Fermi volume, and vanishes at the Fermi surface over a scale $e_N$, described by a universal scaling function related to the Airy function. We obtain exact solutions for the Wigner function, the density, and the correlations in the case of harmonic and inverse square potentials, for several $V_0(x)$. In the large $N$ limit, near the edges where the density vanishes, we obtain limiting kernels (of the Airy or Bessel types) that retain the form found in equilibrium, up to a time dependent rescaling. For non-harmonic traps the evolution of the Fermi volume is more complex. Nevertheless we show that, for intermediate times, the Fermi surface is still described by the same equilibrium scaling function, with a non-trivial time and space dependent width which we compute analytically. We discuss the multi-time correlations and obtain their explicit scaling forms valid near the edge for the harmonic oscillator. Finally, we address the large time limit where relaxation to the Generalized Gibbs Ensemble (GGE) was found to occur in the "classical" regime $\hbar \sim 1/N$. Using the diagonal ensemble we compute the Wigner function in the quantum case (large $N$, fixed $\hbar$) and show that it agrees with the GGE. We also obtain the higher order (non-local) correlations in the diagonal ensemble. | condensed matter |
Graph sampling with noise is a fundamental problem in graph signal processing (GSP). Previous works assume an unbiased least square (LS) signal reconstruction scheme and select samples greedily via expensive extreme eigenvector computation. A popular biased scheme using graph Laplacian regularization (GLR) solves a system of linear equations for its reconstruction. Assuming this GLR-based scheme, we propose a reconstruction-cognizant sampling strategy to maximize the numerical stability of the linear system---\textit{i.e.}, minimize the condition number of the coefficient matrix. Specifically, we maximize the eigenvalue lower bounds of the matrix, represented by the left-ends of the Gershgorin discs of the coefficient matrix. To accomplish this efficiently, we propose an iterative algorithm to traverse the graph nodes via Breadth First Search (BFS) and align the left-ends of all corresponding Gershgorin discs at a lower-bound threshold $T$ using two basic operations: disc shifting and scaling. We then perform binary search to maximize $T$ given a sample budget $K$. Experiments on real graph data show that the proposed algorithm can effectively promote large eigenvalue lower bounds, and the reconstruction MSE is the same as or smaller than that of existing sampling methods for different budgets $K$ at much lower complexity. | electrical engineering and systems science |
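The key quantity here is the smallest left-end of the Gershgorin discs of the GLR coefficient matrix, which lower-bounds its smallest eigenvalue and hence controls conditioning. Below is a small numpy sketch on a toy path graph; the coefficient matrix form $B = H^\top H + \mu L$ follows the usual GLR setup, and the graph, sample set, and weight $\mu$ are made up for illustration.

```python
import numpy as np

def gershgorin_lower_bound(B):
    """Smallest Gershgorin disc left-end: a lower bound on min eigenvalue."""
    radii = np.sum(np.abs(B), axis=1) - np.abs(np.diag(B))
    return np.min(np.diag(B) - radii)

# Toy path graph on 4 nodes; sample set {0, 2}; GLR weight mu = 0.5.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
H = np.zeros((2, 4)); H[0, 0] = H[1, 2] = 1.0
B = H.T @ H + 0.5 * L                   # GLR coefficient matrix

print(gershgorin_lower_bound(B), np.linalg.eigvalsh(B).min())
```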
We propose supervised systems for the speech activity detection (SAD) and speaker identification (SID) tasks in the Fearless Steps Challenge Phase-2. The proposed systems for both tasks share a common convolutional neural network (CNN) architecture. The Mel spectrogram is used as the feature representation. For speech activity detection, the spectrogram is divided into smaller overlapping chunks. The network is trained to recognize the chunks. The network architecture and the training steps used for the SID task are similar to those of the SAD task, except that longer spectrogram chunks are used. We propose a two-level identification method for the SID task. First, for each chunk, a set of speakers is hypothesized based on the neural network posterior probabilities. Finally, the speaker identity of the utterance is identified using the chunk-level hypotheses by applying a voting rule. On the SAD task, detection cost function scores of 5.96% and 5.33% are obtained on the dev and eval sets, respectively. Top-5 retrieval accuracies of 82.07% and 82.42% are obtained on the dev and eval sets for the SID task. A brief analysis is made of the results to provide insights into the misclassified cases in both tasks. | electrical engineering and systems science |
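A hedged sketch of the two-level identification rule just described: per chunk, hypothesize the top-$k$ speakers from the CNN posteriors, then pick the utterance-level speaker by voting over the chunk hypotheses. The value of $k$ and the tie-breaking are assumptions, not taken from the paper.

```python
import numpy as np
from collections import Counter

def identify_speaker(posteriors, k=5):
    """posteriors: (n_chunks, n_speakers) per-chunk CNN probabilities."""
    votes = Counter()
    for chunk in posteriors:
        for spk in np.argsort(chunk)[::-1][:k]:  # top-k hypothesis set
            votes[int(spk)] += 1
    return votes.most_common(1)[0][0]            # voting rule

post = np.array([[0.1, 0.6, 0.3],
                 [0.2, 0.5, 0.3],
                 [0.7, 0.2, 0.1]])
print(identify_speaker(post, k=1))  # -> 1 (wins 2 of 3 chunk votes)
```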
Recent observations of global velocity gradients across and along molecular filaments have been interpreted as signs of gas accreting onto and along these filaments, potentially feeding star-forming cores and proto-clusters. The behavior of velocity gradients in filaments, however, has not been studied in detail, particularly on small scales (< 0.1 pc). In this paper, we present MUFASA, an efficient, robust, and automatic method to fit ammonia lines with multiple velocity components, generalizable to other molecular species. We also present CRISPy, a Python package to identify filament spines in 3D images (e.g., position-position-velocity cubes), along with a complementary technique to sort fitted velocity components into velocity-coherent filaments. In NGC 1333, we find a wealth of velocity gradient structures on a beam-resolved scale of ~0.05 pc. Interestingly, these local velocity gradients are not randomly oriented with respect to filament spines and their perpendicular, i.e., radial, component decreases in magnitude towards the spine for many filaments. Together with remarkably constant velocity gradients on larger scales along many filaments, these results suggest a scenario in which gas falling onto filaments is progressively damped and redirected to flow along these filaments. | astrophysics |
This work presents our ongoing research of unsupervised pretraining in neural machine translation (NMT). In our method, we initialize the weights of the encoder and decoder with two language models that are trained with monolingual data and then fine-tune the model on parallel data using Elastic Weight Consolidation (EWC) to avoid forgetting of the original language modeling tasks. We compare the regularization by EWC with the previous work that focuses on regularization by language modeling objectives. The positive result is that using EWC with the decoder achieves BLEU scores similar to the previous work. However, the model converges 2-3 times faster and does not require the original unlabeled training data during the fine-tuning stage. In contrast, the regularization using EWC is less effective if the original and new tasks are not closely related. We show that initializing the bidirectional NMT encoder with a left-to-right language model and forcing the model to remember the original left-to-right language modeling task limits the learning capacity of the encoder for the whole bidirectional context. | computer science |
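The fine-tuning objective described here augments the translation loss with an Elastic Weight Consolidation penalty. Below is a minimal PyTorch sketch of the standard EWC regularizer; the names (`theta_star`, `fisher`, `lambda_ewc`) and the diagonal-Fisher form are assumptions based on the usual EWC formulation, not the paper's code.

```python
import torch

def ewc_penalty(model, theta_star, fisher, lambda_ewc=0.1):
    """EWC regularizer: (lambda/2) * sum_i F_i * (theta_i - theta*_i)^2,
    penalizing drift from the pretrained language-model weights theta*."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - theta_star[name]) ** 2).sum()
    return 0.5 * lambda_ewc * loss

# During fine-tuning on parallel data:
# total_loss = translation_loss + ewc_penalty(model, theta_star, fisher)
```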
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker. | physics |
Erosion, as a key control of landslide dynamics, significantly increases the destructive power by rapidly amplifying its volume, mobility and impact energy. Mobility is directly linked to the threat posed by an erosive landslide. No clear-cut mechanical condition has been presented so far for when, how and how much energy the erosive landslide gains or loses, resulting in enhanced or reduced mobility. We pioneer a mechanical model for the energy budget of an erosive landslide that controls its mobility. A fundamentally new understanding is that the increased inertia due to the increased mass is related to an entrainment velocity. With this, the true inertia of an erosive landslide can be ascertained, making a breakthrough in correctly determining the mobility of the erosive landslide. Outstandingly, erosion velocity regulates the energy budget and decides whether the landslide mobility will be enhanced or reduced. This provides the first-ever explicit mechanical quantification of the state of erosional energy and a precise description of mobility. This addresses the long-standing question of why many erosive landslides generate higher mobility, while others reduce mobility. By introducing three key concepts: erosion-velocity, entrainment-velocity and energy-velocity, we demonstrate that erosion and entrainment are essentially different processes. Landslides gain energy and enhance mobility if the erosion velocity is greater than the entrainment velocity. We introduce two dimensionless numbers, mobility scaling and erosion number, delivering explicit measure of mobility. We establish a mechanism of landslide-propulsion providing the erosion-thrust to the landslide. Analytically obtained velocity indicates that erosion controls the landslide dynamics. We also present a full set of dynamical equations in conservative form which correctly includes the erosion induced net momentum production. | physics |
We investigate aperiodic X-ray flux variability in accreting highly magnetized neutron stars - X-ray pulsars (XRPs). The X-ray variability is largely determined by mass accretion rate fluctuations at the NS surface, which replicate accretion rate fluctuations at the inner radius of the accretion disc. The variability at the inner radius is due to fluctuations arising all over the disc and propagating inwards under the influence of viscous diffusion. The inner radius varies with mean mass accretion rate and can be estimated from the known magnetic field strength and accretion luminosity of XRPs. Observations of transient XRPs covering several orders of magnitude in luminosity give a unique opportunity to study effects arising due to the changes of the inner disc radius. We investigate the process of viscous diffusion in XRP accretion discs and construct new analytical solutions of the diffusion equation applicable for thin accretion discs truncated both from inside and outside. Our solutions are the most general ones derived in the approximation of Newtonian mechanics. We argue that the break observed at high frequencies in the power density spectra of XRPs corresponds to the minimal time scale of the dynamo process, which is responsible for the initial fluctuations. Comparing data from the bright X-ray transient A 0535+26 with our model, we conclude that the time scale of initial variability in the accretion disc is a few times longer than the local Keplerian time scale. | astrophysics |
Various Markov chain Monte Carlo (MCMC) methods are studied to improve upon random walk Metropolis sampling, for simulation from complex distributions. Examples include Metropolis-adjusted Langevin algorithms, Hamiltonian Monte Carlo, and other recent algorithms related to underdamped Langevin dynamics. We propose a broad class of irreversible sampling algorithms, called Hamiltonian assisted Metropolis sampling (HAMS), and develop two specific algorithms with appropriate tuning and preconditioning strategies. Our HAMS algorithms are designed to achieve two distinctive properties, while using an augmented target density with momentum as an auxiliary variable. One is generalized detailed balance, which induces an irreversible exploration of the target. The other is a rejection-free property, which allows our algorithms to perform satisfactorily with relatively large step sizes. Furthermore, we formulate a framework of generalized Metropolis--Hastings sampling, which not only highlights our construction of HAMS at a more abstract level, but also facilitates possible further development of irreversible MCMC algorithms. We present several numerical experiments, where the proposed algorithms are found to consistently yield superior results among existing ones. | statistics |
Spin-wave resonance measurements were performed in the mixed magnetic phase regime of a Pd-doped FeRh epilayer that appears as the first-order ferromagnetic-antiferromagnetic phase transition takes place. It is seen that the measured value of the exchange stiffness is suppressed throughout the measurement range when compared to the expected value of the fully ferromagnetic regime, extracted via the independent means of a measurement of the Curie point, for only slight changes in the ferromagnetic volume fraction. This behavior is attributed to the influence of the antiferromagnetic phase: inspired by previous experiments that show ferromagnetism to be most persistent at the surfaces and interfaces of FeRh thin films, we modelled the antiferromagnetic phase as forming a thin layer in the middle of the epilayer through which the two ferromagnetic layers are coupled up to a certain critical thickness. The development of this exchange stiffness is then consistent with that expected from the development of an exchange coupling across the magnetic phase boundary, as a consequence of a thickness dependent phase transition taking place in the antiferromagnetic regions, and is supported by complementary computer simulations of atomistic spin-dynamics. The development of the Gilbert damping parameter extracted from the ferromagnetic resonance investigations is consistent with this picture. | condensed matter |
Comets in the Oort cloud evolve under the influence of internal and external perturbations, such as giant planets, stellar passages, and the galactic tidal field. We aim to study the dynamical evolution of the comets in the Oort cloud, accounting for external perturbations (passing stars and the galactic tide). We first construct an analytical model of stellar encounters. We find that individual perturbations do not modify the dynamics of the comets in the cloud unless very close (< 0.5pc) encounters occur. Using proper motions, parallaxes, and radial velocities from Gaia DR2, we construct an astrometric catalogue of 14,659 stars that are within 50pc from the Sun. For all these stars we calculate the time and the closest distance to the Sun. We find that the cumulative effect of relatively distant ($\leq1$ pc) passing stars can perturb the comets in the Oort cloud. Finally, we study the dynamical evolution of the comets in the Oort cloud under the influence of multiple stellar encounters within 2.5pc from the Sun and the galactic tidal field over $\pm10$Myr. We considered two models for the Oort cloud, compact (a $\leq$0.25 pc) and extended (a$ \leq0.5$ pc). We find that the cumulative effect of stellar encounters is the major perturber of the Oort cloud for a compact configuration while for the extended, the galactic tide is the major perturber. In both cases, the effect of passing stars and the galactic tide raises the semi-major axis of $\sim1.1$\% of the comets at the edge of the cloud up to interstellar regions ($a >0.5$pc). This leads to the creation of transitional interstellar comets, which might become interstellar objects due to external perturbations. This raises the question about the existence of a cloud of objects in the interstellar space which might overlap with our Oort cloud if we consider that other planetary systems face similar processes for the ejection of comets. | astrophysics |
In this paper we extend the construction of special representations to convex-cocompact isometry groups of CAT(-1) spaces which admit complementary series. We prove that these limits of complementary series have a natural non-vanishing reduced cohomology class [c]. As a by-product, we generalize the Kuhn-Vershik formula and characterize geometrically the CAT(-1) groups that admit complementary series. Investigating dynamical properties of the cohomology class [c], we prove a cocycle equidistribution theorem \`a la Roblin-Margulis and deduce the irreducibility of the associated affine actions. The constructions and results apply beyond the class of rank 1 linear groups, where the irreducibility of the affine actions associated to the canonical class [c], even in the case of uniform lattices in SO(n,1), SU(n,1) or SL2(Qp) with n>1 and p prime, had not been established. | mathematics |
Background: DIS on the polarized deuteron with detection of a proton in the nuclear breakup region (spectator tagging) represents a unique method for extracting the neutron spin structure functions and studying nuclear modifications. The tagged proton momentum controls the nuclear configuration during the DIS process and enables a differential analysis of nuclear effects. Such measurements could be performed with the future electron-ion collider (EIC) and forward proton detectors if deuteron beam polarization could be achieved. Purpose: Develop theoretical framework for polarized deuteron DIS with spectator tagging. Formulate procedures for neutron spin structure extraction. Methods: A covariant spin density matrix formalism is used to describe general deuteron polarization in collider experiments (vector/tensor, pure/mixed). Light-front (LF) quantum mechanics is employed to factorize nuclear and nucleonic structure in the DIS process. A 4-dimensional representation of LF spin structure is used to construct the polarized deuteron LF wave function and efficiently evaluate the spin sums. Free neutron structure is extracted using the impulse approximation and analyticity in the tagged proton momentum (pole extrapolation). Results: General expressions of the polarized tagged DIS observables in collider experiments. Analytic and numerical study of the polarized deuteron LF spectral function and nucleon momentum distributions. Practical procedures for neutron spin structure extraction from the tagged deuteron spin asymmetries. Conclusions: Spectator tagging provides new tools for precise neutron spin structure measurements. D-wave depolarization and nuclear binding effects can be eliminated through the tagged proton momentum dependence. The methods can be extended to tensor-polarized observables, spin-orbit effects, and diffractive processes. | high energy physics phenomenology |
We outline a systematic procedure to obtain horizonless microstate geometries that have the same charges as three-charge five-dimensional black holes with a macroscopically-large horizon area and an arbitrarily-small angular momentum. There are two routes through which such solutions can be constructed: using multi-center Gibbons-Hawking (GH) spaces or using superstratum technology. So far the only solutions corresponding to microstate geometries for black holes with no angular momentum have been obtained via superstrata, and multi-center Gibbons-Hawking spaces have been believed to give rise only to microstate geometries of BMPV black holes with a large angular momentum. We perform a thorough search throughout the parameter space of smooth horizonless solutions with four GH centers and find that these have an angular momentum that is generally larger than 80% of the cosmic censorship bound. However, we find that solutions with three GH centers and one supertube (which are smooth in six-dimensional supergravity) can have an arbitrarily-low angular momentum. Our construction thus gives a recipe to build large classes of microstate geometries for zero-angular-momentum black holes without resorting to superstratum technology. | high energy physics theory |
In this paper, we investigate when system identification is statistically easy or hard, in the finite sample regime. Statistically easy to learn linear system classes have sample complexity that is polynomial with the system dimension. Most prior research in the finite sample regime falls in this category, focusing on systems that are directly excited by process noise. Statistically hard to learn linear system classes have worst-case sample complexity that is at least exponential with the system dimension, regardless of the identification algorithm. Using tools from minimax theory, we show that classes of linear systems can be hard to learn. Such classes include, for example, under-actuated or under-excited systems with weak coupling among the states. Having classified some systems as easy or hard to learn, a natural question arises as to what system properties fundamentally affect the hardness of system identifiability. Towards this direction, we characterize how the controllability index of linear systems affects the sample complexity of identification. More specifically, we show that the sample complexity of robustly controllable linear systems is upper bounded by an exponential function of the controllability index. This implies that identification is easy for classes of linear systems with small controllability index and potentially hard if the controllability index is large. Our analysis is based on recent statistical tools for finite sample analysis of system identification as well as a novel lower bound that relates controllability index with the least singular value of the controllability Gramian. | electrical engineering and systems science |
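The lower bound mentioned at the end relates the controllability index to the least singular value of the controllability Gramian. Below is a small numpy sketch of that quantity under the standard finite-horizon definition $W_k = \sum_{i=0}^{k-1} A^i B B^\top (A^\top)^i$; the weakly coupled example system and function names are illustrative, not taken from the paper.

```python
import numpy as np

def gramian_min_eig(A, B, k):
    """Min eigenvalue of the k-step controllability Gramian
    W_k = sum_{i=0}^{k-1} A^i B B^T (A^T)^i."""
    n = A.shape[0]
    W = np.zeros((n, n))
    M = B.copy()
    for _ in range(k):
        W += M @ M.T
        M = A @ M
    return np.linalg.eigvalsh(W).min()

# Under-actuated chain with weak coupling eps: the Gramian's least
# eigenvalue shrinks rapidly with dimension, illustrating the hardness
# mechanism discussed above.
eps, n = 0.1, 5
A = np.eye(n) * 0.5 + np.diag(eps * np.ones(n - 1), k=-1)
B = np.eye(n)[:, :1]                 # actuation on the first state only
print(gramian_min_eig(A, B, k=n))
```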
We propose a model which explains the baryon asymmetry of the universe and the dark matter relic density at the same time. In this model, the dark matter candidate is a dark baryon composed of dark quarks. A scalar mediator, which couples to the standard model leptons and dark quarks, is introduced to generate the asymmetries of baryons and dark baryons simultaneously. Direct detection and collider detection of this model are studied. We find that current underground direct detection experiments and the LHC can hardly detect this model, but future lepton colliders, such as the CEPC, have great potential to detect a large portion of the model parameter space via the "displaced lepton jet" signal. | high energy physics phenomenology |
There is no denying how machine learning and computer vision have grown in recent years. Their greatest advantages lie in their automation, suitability, and ability to generate astounding results in a matter of seconds in a reproducible manner. This is aided by the ubiquitous advancements in the computing capabilities of current graphical processing units and the highly efficient implementation of such techniques. Hence, in this paper, we survey the key studies published between 2014 and 2020, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature structures. We divide the surveyed studies based on the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting the studies that tackle more than one task simultaneously. Additionally, the machine learning algorithms are classified as either supervised or unsupervised, and further partitioned if the number of works that fall under a certain scheme is significant. Moreover, different datasets and challenges found in the literature and on websites, containing masks of the aforementioned tissues, are thoroughly discussed, highlighting the organizers' original contributions and those of other researchers. Also, the metrics used extensively in the literature are discussed in our review, stressing their relevance to the task at hand. Finally, critical challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on vessel segmentation and why this absence needs to be dealt with in an accelerated manner. | electrical engineering and systems science |
In the present review we discuss different aspects of two-photon exchange (TPE) physics in elastic $ep$ scattering, at high $Q^2$ as well as at low $Q^2$. The imaginary part of the TPE amplitude gives rise to beam and target single-spin asymmetries. Different theoretical approaches to the calculation of these observables are considered. The real part of the TPE amplitude influences the unpolarized cross section and double-spin observables and is, most likely, responsible for the discrepancy between the two methods of proton form factor measurements. We review different methods of calculating the TPE amplitudes in the framework of "hadron" and "quark-gluon" approaches. We discuss the dispersion approach suitable for low and intermediate $Q^2$, which includes elastic and inelastic intermediate hadronic states, as well as the connection of TPE to the proton radius puzzle. The present situation with direct experimental searches for the TPE amplitude in $e^+p/e^-p$ charge asymmetry is also discussed, as well as attempts to extract the TPE amplitudes from existing experimental data obtained by the Rosenbluth and double polarization techniques. TPE physics in other processes, such as elastic $\mu p$, $e$-nucleus and $e\pi$ scattering, is also reviewed. | high energy physics phenomenology |
Extratropical cyclones are large-scale weather systems which are often the source of extreme weather events in Northern Europe, frequently leading to mass infrastructural damage and casualties. Such systems create a local vorticity maximum which tracks across the Atlantic Ocean and from which a climatology for the region can be determined. While there have been considerable advances in developing algorithms for extracting the track and evolution of cyclones from reanalysis datasets, the data record is relatively short. This justifies the need for a statistical model to represent the more extreme characteristics of these weather systems, specifically their intensity and the spatial variability in their tracks. This paper presents a novel simulation-based approach to modelling the lifecycle of extratropical cyclones in terms of both their tracks and vorticity, incorporating various aspects of cyclone evolution and movement. By drawing on methods from extreme value analysis, we can simulate more extreme storms than those observed, representing a useful tool for practitioners concerned with risk assessment for these weather systems. | statistics |
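A hedged illustration of the extreme value building block referenced above: exceedances of storm intensity over a high threshold are commonly modelled with a generalized Pareto distribution, from which storms more extreme than any observed can be simulated. The threshold and parameter values below are invented for illustration.

```python
import numpy as np
from scipy.stats import genpareto

u, xi, sigma = 10.0, 0.2, 2.0   # threshold, shape, scale (invented values)
rng = np.random.default_rng(2)

# Simulate storm intensities exceeding the threshold u.
excesses = genpareto.rvs(c=xi, scale=sigma, size=5, random_state=rng)
print(u + excesses)
```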
These are lecture notes for a minicourse on applications of microlocal analysis in inverse problems, given in Helsinki and Shanghai in June 2019. | mathematics |
The Standard Model Effective Field Theory (SMEFT) provides a systematic and model-independent framework to study neutrino non-standard interactions (NSIs). We study the constraining power of the on-going neutrino oscillation experiments T2K, NO$\nu$A, Daya Bay, Double Chooz and RENO in the SMEFT framework. A full consideration of matching is provided between different effective field theories and the renormalization group running at different scales, filling the gap between the low-energy neutrino oscillation experiments and SMEFT at the UV scale. We first illustrate our method with a top-down approach in a simplified scalar leptoquark model, showing more stringent constraints from the neutrino oscillation experiments compared to collider studies. We then provide a bottom-up study on individual dimension-6 SMEFT operators and find NSIs in neutrino experiments already sensitive to new physics at $\sim$20 TeV when the Wilson coefficients are fixed at unity. We also investigate the correlation among multiple operators at the UV scale and find it could change the constraints on SMEFT operators by several orders of magnitude compared with when only one operator is considered. Furthermore, we find that accelerator and reactor neutrino experiments are sensitive to different SMEFT operators, which highlights the complementarity of the two experiment types. | high energy physics phenomenology |
Neural abstractive summarization systems have achieved promising progress, thanks to the availability of large-scale datasets and models pre-trained with self-supervised methods. However, ensuring the factual consistency of the generated summaries for abstractive summarization systems is a challenge. We propose a post-editing corrector module to address this issue by identifying and correcting factual errors in generated summaries. The neural corrector model is pre-trained on artificial examples that are created by applying a series of heuristic transformations on reference summaries. These transformations are inspired by an error analysis of state-of-the-art summarization model outputs. Experimental results show that our model is able to correct factual errors in summaries generated by other neural summarization models and outperforms previous models on factual consistency evaluation on the CNN/DailyMail dataset. We also find that transferring from artificial error correction to downstream settings is still very challenging. | computer science |
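A hedged sketch of the kind of heuristic transformation described above: corrupt a reference summary (here, by swapping two numbers) to create (corrupted, reference) training pairs for the corrector. The paper's actual transformation set is richer; this swap rule and the helper name are only illustrative.

```python
import random
import re

def corrupt_numbers(summary, rng=random.Random(0)):
    """Swap two numbers in a reference summary to make a negative example."""
    nums = re.findall(r"\d+", summary)
    if len(nums) < 2:
        return summary
    a, b = rng.sample(nums, 2)
    # Swap one occurrence of each, using a placeholder to avoid clobbering.
    return (summary.replace(a, "\0", 1)
                   .replace(b, a, 1)
                   .replace("\0", b, 1))

ref = "Profits rose 12 percent to 45 million dollars."
print(corrupt_numbers(ref))  # e.g. "Profits rose 45 percent to 12 million ..."
```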
Molecular communication via diffusion (MCvD) is considered one of the most feasible communication paradigms for nanonetworks, especially for bio-nanonetworks, which usually operate in water-rich biological environments. Two effects that deteriorate the signal in MCvD are noise and inter-symbol interference (ISI). The expected channel impulse response of MCvD has a long, slowly attenuating tail due to molecular diffusion, which causes ISI and further limits the already low data rate of MCvD. The extent to which ISI and noise are suppressed in an MCvD system determines its effectiveness, especially at a high data rate. Although ISI-suppression approaches have been investigated, most of them are addressed as non-essential parts of other topics, such as signal detection or modulation. Furthermore, most of the state-of-the-art ISI-suppression approaches work by subtracting the estimated ISI from the total signal. In this work, we investigate ISI suppression from the new perspective of filters that filter ISI out without any ISI estimation. The principles for a good design of ISI-suppression filters in MCvD are investigated. Based on these principles, an ISI-suppression filter with good anti-noise capability and an associated signal detection scheme are proposed for MCvD scenarios with both ISI and noise. We compare the proposed scheme with state-of-the-art ISI-suppression approaches. The result demonstrates that the proposed ISI-suppression scheme can recover signals deteriorated severely by both ISI and noise which could not be effectively detected by the state-of-the-art ISI-suppression approaches. | electrical engineering and systems science |
This paper tackles one of the most fundamental goals in functional time series analysis, which is to provide reliable predictions for functions. Existing functional time series methods seek to predict a complete future functional observation based on a set of observed complete trajectories. The problem of interest discussed here is how to advance prediction methodology to cases where partial information on the next trajectory is available, with the aim of improving prediction accuracy. To solve this problem, we propose a new method, "partial functional prediction (PFP)". The proposed method combines "next-interval" prediction and fully functional regression prediction, so that the partially observed part of the trajectory can aid in producing a better prediction for the unobserved part of the future curve. In PFP, we include an automatic selection criterion for tuning parameters based on minimizing the prediction error. Simulations indicate that the proposed method can outperform existing methods with respect to mean-square prediction error, and its practical utility is illustrated in an analysis of environmental and traffic flow data. | statistics |
The effective theory framework based on symmetry has recently gained widespread interest in the field of cosmology. In this paper, we apply the same idea to the genesis of the primordial magnetic field and its evolution throughout the history of the universe. Given that time-diffeomorphism symmetry is broken by the cosmological background, we consider the most general Lagrangian of electromagnetic and metric fluctuations up to second order, which naturally breaks conformal symmetry in the electromagnetic (EM) sector. We also include parity violation in the electromagnetic sector, motivated by its potential observational significance. In such a set-up, we explore the evolution of EM, scalar, and tensor perturbations considering different observational constraints. In our analysis we emphasize the role played by the intermediate reheating phase, which has received limited attention in previous studies. Assuming vanishing electrical conductivity during the entire period of reheating, the well-known Faraday electromagnetic induction is shown to play a crucial role in enhancing the strength of the present-day magnetic field. We show how such physical effects, combined with the PLANCK and large-scale magnetic field observations, make a large class of models viable and severely restrict the reheating equation-of-state parameter to a very narrow range of $0.01 < \omega_\mathrm{eff} < 0.27$, which is nearly independent of the reheating scenarios we have considered. | high energy physics theory
We produce the first numerical predictions of the dynamical diquark model of multiquark exotic hadrons. Using Born-Oppenheimer potentials calculated numerically on the lattice, we solve coupled and uncoupled systems of Schroedinger equations to obtain mass eigenvalues for multiplets of states that are, at this stage, degenerate in spin and isospin. Assuming reasonable values for these fine-structure splittings, we obtain a series of bands of exotic states with a common parity eigenvalue that agree well with the experimentally observed charmoniumlike states, and we predict a number of other unobserved states. In particular, the most suitable fit to known pentaquark states predicts states below the charmonium-plus-nucleon threshold. Finally, we examine the strictest form of Born-Oppenheimer decay selection rules for exotics and, finding them to fail badly, we propose a resolution by relaxing the constraint that exotics must occur as heavy-quark spin-symmetry eigenstates. | high energy physics phenomenology |
The flow taking place in the rear part of the fuselage during an emergency landing on water is investigated experimentally under realistic conditions. To this aim, tests on a double-curvature specimen have been performed at horizontal velocities ranging from 21 m/s to 45 m/s. The test data highlight different cavitation and/or ventilation modalities which are highly dependent on the horizontal velocity, with substantial variations in the flow features occurring for velocity variations of a few meters per second. For the specimen considered here, the inception of cavitation is found at about 30 m/s, confirming that scaled-model tests performed at small horizontal velocities are unable to capture the hydrodynamics correctly. By comparing pressure data, underwater movies and force measurements, it is shown that the transition from the cavitation to the ventilation condition has a significant effect on the longitudinal distribution of the loading which, together with inertia, aerodynamic loads and engine thrust, governs the aircraft dynamics. | physics
We use DNS to study inter-scale and inter-space energy exchanges in the near-field of a turbulent wake of a square prism in terms of the KHMH equation written for a triple decomposition of the velocity field accounting for the quasi-periodic vortex shedding. Orientation-averaged terms of the KHMH are computed on the plane of the mean flow and on the geometric centreline. We consider locations between $2$ and $8$ times the width $d$ of the prism. The mean flow produces kinetic energy which feeds the vortex shedding coherent structures. In turn, these structures transfer energy to the stochastic fluctuations over all length-scales $r$ from the Taylor length $\lambda$ to $d$ and dominate spatial turbulent transport of two-point stochastic turbulent fluctuations. The orientation-averaged non-linear inter-scale transfer rate $\Pi^{a}$, which was found to be approximately independent of $r$ by Alves Portela et al. (2017) in the range $\lambda\le r \le 0.3d$ at a distance $x_{1}=2d$ from the square prism, requires an inter-scale transfer contribution of coherent structures for this approximate constancy. However, the near-constancy of $\Pi^a$ at $x_1=8d$, which was also found by Alves Portela et al. (2017), is mostly due to stochastic fluctuations. Even so, the proximity of $-\Pi^a$ to the turbulence dissipation rate $\varepsilon$ in the range $\lambda\le r\le d$ at $x_1=8d$ requires contributions of the coherent structures. Spatial inhomogeneity also makes a direct and distinct contribution to $\Pi^a$, and the constancy of $-\Pi^a/\varepsilon$ close to 1 would not have been possible without it either in this near-field flow. Finally, the pressure-velocity term is also an important contributor to the KHMH, particularly at scales $r$ larger than about $0.4d$, and appears to correlate with the purely stochastic non-linear inter-scale transfer rate when the orientation average is lifted. | physics
Proton radiation damage is an important failure mechanism for electronic devices in near-Earth orbits, deep space and high energy physics facilities. Protons can cause ionizing damage and atomic displacements, resulting in device degradation and malfunction. Shielding of electronics increases the weight and cost of the systems but does not eliminate destructive single events produced by energetic protons. Modern electronics based on semiconductors - even those specially designed for radiation hardness - remain highly susceptible to proton damage. Here we demonstrate that room temperature (RT) charge-density-wave (CDW) devices with quasi-two-dimensional (2D) 1T-TaS2 channels show remarkable immunity to bombardment with 1.8 MeV protons to a fluence of at least 10^14 H+ cm^-2. Current-voltage (I-V) characteristics of these 2D CDW devices do not change as a result of proton irradiation, in striking contrast to most conventional semiconductor devices or other 2D devices. Only negligible changes are found in the low-frequency noise spectra. The radiation immunity of these "all-metallic" CDW devices can be attributed to their two-terminal design, the quasi-2D nature of the active channel, and the high concentration of charge carriers in the utilized CDW phases. Such devices, capable of operating over a wide temperature range, can constitute a crucial segment of future electronics for space, particle accelerator and other radiation environments. | physics
Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting. While most existing work studies the problem in the context of computer vision or console games, this paper focuses on reinforcement learning in autonomous cyber defence under partial observability. We demonstrate that under the black-box setting, where the attacker has no direct access to the target RL model, causative attacks---attacks that target the training process---can poison RL agents even if the attacker only has partial observability of the environment. In addition, we propose an inversion defence method that aims to apply the opposite perturbation to that which an attacker might use to generate their adversarial samples. Our experimental results illustrate that the countermeasure can effectively reduce the impact of the causative attack, while not significantly affecting the training process in non-attack scenarios. | statistics |
Although time is one of our most intuitive physical concepts, its understanding at the fundamental level is still an open question in physics. For instance, time in quantum mechanics and general relativity are two distinct and incompatible entities. While relativity deals with events (points in spacetime), with time being observer-dependent and dynamical, quantum mechanics describes physical systems by treating time as an independent parameter. To resolve this conflict, in this work, we extend the classical concept of an event to the quantum domain by defining an event as a transfer of information between physical systems. Then, by describing the universe from the perspective of a certain observer, we introduce quantum states of events with space-time-symmetric wave functions that predict the joint probability distribution of a measurement (observation) at $(t, {\vec x})$. Under these circumstances, we propose that a well-defined instant of time, like any other observable, arises from a single event, thus being an observer-dependent property. As a result, a counterfactual asymmetry along a particular sequence of events within a stationary quantum state gives rise to the flow of time as a succession of "snapshots" from the observer's perspective. In this proposal, it is the many distinguishable states in which the observer stores information that make the existence of time possible. | quantum physics
Trend filtering---first introduced into the astronomical literature in Paper I of this series---is a state-of-the-art statistical tool for denoising one-dimensional signals that possess varying degrees of smoothness. In this work, we demonstrate the broad utility of trend filtering to observational astronomy by discussing how it can contribute to a variety of spectroscopic and time-domain studies. The observations we discuss are (1) the Lyman-$\alpha$ forest of quasar spectra; (2) more general spectroscopy of quasars, galaxies, and stars; (3) stellar light curves with planetary transits; (4) eclipsing binary light curves; and (5) supernova light curves. We study the Lyman-$\alpha$ forest in the greatest detail---using trend filtering to map the large-scale structure of the intergalactic medium along quasar-observer lines of sight. The remaining studies share broad themes of: (1) estimating observable parameters of light curves and spectra; and (2) constructing observational spectral/light-curve templates. We also briefly discuss the utility of trend filtering as a tool for one-dimensional data reduction and compression. | astrophysics |
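For readers unfamiliar with the estimator, below is a minimal sketch of the standard convex formulation of $\ell_1$ trend filtering (quadratic loss plus an $\ell_1$ penalty on discrete second differences), which yields piecewise-linear fits; this assumes the textbook formulation and is not the authors' implementation.

```python
# l1 trend filtering sketch: minimize 0.5*||y - beta||^2 + lam*||D beta||_1,
# with D the discrete second-difference operator (piecewise-linear output).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.abs(x - 0.5) + 0.05 * rng.standard_normal(n)  # noisy kinked signal

D = np.diff(np.eye(n), n=2, axis=0)                  # (n-2) x n second differences

beta = cp.Variable(n)
lam = 1.0
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - beta) + lam * cp.norm1(D @ beta)))
problem.solve()
denoised = beta.value                                # trend-filtered estimate
```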
It was shown in arXiv:0906.2527 that in finite-dimensional Hilbert spaces each operator system corresponds to some channel for which this operator system is an operator graph. This work is devoted to finding necessary and sufficient conditions for this property to hold in the infinite-dimensional case. | quantum physics
Quantum decoherence arises due to uncontrollable entanglement between a system and its environment. However, the effects of decoherence are often thought of and modeled through a simpler picture in which the role of the environment is to introduce classical noise in the system's degrees of freedom. Here we establish necessary conditions that classical noise models need to satisfy to quantitatively model decoherence. Specifically, for pure-dephasing processes we identify well-defined statistical properties for the noise that are determined by the quantum many-point time correlation function of the environmental operators that enter into the system-bath interaction. In particular, for the exemplifying spin-boson problem with a Lorentz-Drude spectral density we show that the high-temperature quantum decoherence is quantitatively mimicked by colored Gaussian noise. In turn, for dissipative environments we show that classical noise models cannot describe decoherence effects due to spontaneous emission induced by a dissipative environment. These developments provide a rigorous platform to assess the validity of classical noise models of decoherence. | quantum physics
We show that for some classes of groups $G$, the homotopy fiber $E_{\mathrm{com}} G$ of the inclusion of the classifying space for commutativity $B_{\mathrm{com}} G$ into the classifying space $BG$ is contractible if and only if $G$ is abelian. We show this both for compact connected Lie groups and for discrete groups. To prove these results, we define an interesting map $\mathfrak{c} \colon E_{\mathrm{com}} G \to B[G,G]$ and show it is not nullhomotopic for the non-abelian groups in those classes. Additionally, we show that $\mathfrak{c}$ is 3-connected for $G=O(n)$ when $n \ge 3$. | mathematics
Giant star-forming clumps are a feature prevalent amongst high-redshift star-forming galaxies and play a critical role in shaping their chaotic morphologies, and yet their nature and role in galaxy evolution remain to be fully understood. A majority of the effort to study clumps has been focused at high redshift, and local clump studies have often suffered from small sample sizes. In this work, we present an analysis of clump properties in the local universe, performed for the first time with a statistically significant sample. With the help of the citizen-science-powered Galaxy Zoo: Hubble project, we select a sample of 92 $z<0.06$ clumpy galaxies from the Sloan Digital Sky Survey Stripe 82. Within this sample, we identify 543 clumps using a contrast-based image analysis algorithm and perform photometry as well as estimate their stellar population properties. The overall properties of our $z<0.06$ clump sample are comparable to those of high-redshift clumps. However, contrary to the high-redshift studies, we find no evidence of a gradient in clump ages or masses as a function of their galactocentric distances. Our results challenge the inward-migration scenario for clump evolution in the local universe, potentially suggesting a larger contribution of ex-situ clumps and/or longer clump migration timescales. | astrophysics
This paper contributes a novel realtime multi-person motion capture algorithm using multiview video inputs. Due to the heavy occlusions in each view, joint optimization over the multiview images and multiple temporal frames is indispensable, which brings up the essential challenge of realtime efficiency. To this end, for the first time, we unify per-view parsing, cross-view matching, and temporal tracking into a single optimization framework, i.e., a 4D association graph in which each dimension (image space, viewpoint and time) can be treated equally and simultaneously. To solve the 4D association graph efficiently, we further contribute the idea of 4D limb bundle parsing based on heuristic search, followed by limb bundle assembly using a proposed bundle Kruskal's algorithm. Our method enables a realtime online motion capture system running at 30fps using 5 cameras on a 5-person scene. Benefiting from the unified parsing, matching and tracking constraints, our method is robust to noisy detections and achieves high-quality online pose reconstruction. The proposed method outperforms the state-of-the-art method quantitatively without using high-level appearance information. We also contribute a multiview video dataset synchronized with a marker-based motion capture system for scientific evaluation. | computer science
Density functional theory calculations within the generalized gradient approximation are employed to study the ground state of Co2FeAl. Various magnetic configurations are considered to find its most stable phase. The ferromagnetic ground state of Co2FeAl is found to be energetically favored, with an optimized lattice constant of 5.70 {\AA}. Thereafter, the system was subjected to uniform and non-uniform strains to examine their effects on spin polarization (P) and half-metallicity. The effect of spin-orbit coupling is considered in the present study. Half-metallicity (and 100% P) is retained only under uniform strains from 0 to +4%, and P drops rapidly from 90% to 16% for negative strains from -1% to -6%. We find that the present system is highly sensitive to tetragonal distortions, as half-metallicity (and 100% P) is preserved only for the cubic case. The main reason for the loss of half-metallicity is the shift of the bands with respect to the Fermi level. We also discuss the implications of these results for spintronics devices. | condensed matter
The single-mode bosonic channel is addressed with classical interference in the modulation and with side information at the transmitter. This model can be viewed as the quantum counterpart of the classical random-parameter Gaussian channel. Based on Costa's writing-on-dirty-paper result (1983), the effect of the channel parameter can be canceled even when the decoder has no side information, and regardless of the input power constraint. For both homodyne and heterodyne detection with a coherent-state protocol, the model reduces to a classical channel with either real- or complex-valued Gaussian noise. Thus, by applying Costa's dirty paper coding strategy, we observe that the effect of the classical interference can be canceled for those channels as well. Then, we consider the bosonic channel with joint detection, for which the classical results do not apply, and derive a dirty-paper-coding lower bound. Furthermore, considering the special case of a pure-loss bosonic channel, we demonstrate that the optimal coefficient for dirty paper coding is not necessarily the MMSE estimator coefficient as in the classical setting. | quantum physics
We review the applications of the Quantum Spectral Curve (QSC) method to the Regge (BFKL) limit in N=4 supersymmetric Yang-Mills theory. The QSC, based on the quantum integrability of the AdS$_5$/CFT$_4$ duality, was initially developed as a tool for studying the spectrum of anomalous dimensions of local operators in N=4 SYM in the planar, $N_c\to\infty$ limit. We explain how to apply the QSC to the BFKL limit, which requires a non-trivial analytic continuation in spin $S$ and extends the initial construction to non-local light-ray operators. We give a brief review of high-precision non-perturbative numerical solutions and analytic perturbative data resulting from this approach. As a simple example of the QSC construction, we also describe the leading order of the BFKL limit. We show that the QSC simplifies substantially in this limit and reduces to the Faddeev-Korchemsky Baxter equation for Q-functions. Finally, we review recent results for the Fishnet CFT, which carries a number of similarities with Lipatov's integrable spin chain for interacting reggeized gluons. | high energy physics theory
Partial Differential Equations (PDEs) are fundamental for modeling different phenomena in science and engineering mathematically. Solving them is a crucial step towards a precise knowledge of the behaviour of natural and engineered systems. In general, analytical methods are not enough to solve PDEs that represent real systems to an acceptable degree, and one has to resort to discretization methods. For engineering problems, probably the best known option is the finite element method (FEM). However, powerful alternatives such as mesh-free methods and Isogeometric Analysis (IGA) are also available. The fundamental idea is to approximate the solution of the PDE by means of functions specifically built to have some desirable properties. In this contribution, we explore Deep Neural Networks (DNNs) as an option for approximation. They have shown impressive results in areas such as visual recognition. DNNs are regarded here as function approximation machines. There is great flexibility to define their structure, and important advances in the architecture and in the efficiency of the algorithms to implement them make DNNs a very interesting alternative to approximate the solution of a PDE. We concentrate on applications of interest to Computational Mechanics. Most contributions exploring this possibility have adopted a collocation strategy. In this contribution, we concentrate on mechanical problems and analyze the energetic format of the PDE. The energy of a mechanical system seems to be the natural loss function for a machine learning method to approach a mechanical problem. As proofs of concept, we deal with several problems and explore the capabilities of the method for applications in engineering. | statistics
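A minimal sketch of the energy-as-loss idea follows, for the 1D Poisson problem $-u''=f$ with $u(0)=u(1)=0$, whose energy is $\int_0^1 (\tfrac{1}{2}u'^2 - fu)\,dx$; the network size, boundary penalty weight, and Monte Carlo sampling are arbitrary assumptions, not the authors' setup.

```python
# Energy of the mechanical system as the loss: minimize a Monte Carlo
# estimate of E[u] = \int (0.5*u'^2 - f*u) dx plus a weak boundary penalty.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
f = lambda x: torch.ones_like(x)                 # constant load f = 1
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)   # interior sample points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du**2 - f(x) * u).mean()     # energy functional estimate
    xb = torch.tensor([[0.0], [1.0]])
    penalty = (net(xb) ** 2).mean()              # enforce u(0) = u(1) = 0 weakly
    loss = energy + 100.0 * penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```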
We develop a general stochastic thermodynamics of RLC electrical networks built on top of a graph-theoretical representation of the dynamics commonly used by engineers. The network is open, as it contains resistors and current and voltage sources; nonisothermal, as resistors may be at different temperatures; and driven, as circuit elements may be subjected to external parametric driving. The proper description of the heat dissipated in each resistor requires care within the white noise idealization, as it depends on the network topology. Our theory provides the basis to design circuit-based thermal machines, as we illustrate by designing a refrigerator using a simple driven circuit. We also derive exact results for the low temperature regime in which the quantum nature of the electrical noise must be taken into account. We do so using a semiclassical approach which can be shown to coincide with a fully quantum treatment of linear circuits for which canonical quantization is possible. We use it to generalize the Landauer-B\"{u}ttiker formula for energy currents to arbitrary time-dependent driving protocols. | quantum physics
In this work, we use ML techniques to develop presumed PDF models for large eddy simulations of reacting flows. The joint sub-filter PDF of mixture fraction and progress variable is modeled using various ML algorithms and commonly used analytical models. The ML algorithms evaluated in this work are representative of three major classes of ML techniques: traditional ensemble methods (random forests), deep learning (deep neural networks), and generative learning (variational autoencoders). The first two algorithms are supervised learning algorithms, and the third is an unsupervised learning algorithm. Data from direct numerical simulation of the low-swirl burner (Day et al. 2012) are used to develop training data for sub-filter PDF models. Models are evaluated on predictions of the sub-filter PDFs as well as predictions of the filtered reaction rate of the progress variable, computed through a convolution of the sub-filter PDF and the conditional means of the reaction rate. This a priori modeling study demonstrates that deep learning models for presumed PDF modeling are three times more accurate than analytical beta-beta PDF models. These models are as accurate as random forest models while using five times fewer trainable parameters and being 25 times faster for inference. We illustrate how models generalize to other regions of the flow and develop criteria based on the Jensen-Shannon divergence to quantify the performance of a model on new data. | physics
Theoretically modelling the 21-cm signals caused by Population III stars (Pop III stars) is the key to extracting valuable information on Pop III stars from current and forthcoming 21-cm observations. In this work we develop a new Pop III star module in which the escape fractions of ionizing photons and Lyman-Werner (LW) photons, photo-heating by UV radiation, and LW feedback are consistently incorporated. By implementing the module into a public 21-cm semi-numerical simulation code, 21CMFAST, we demonstrate 21-cm signal calculations and investigate the importance of Pop III star modelling. We find that the contribution from Pop III stars to cosmic reionization depends significantly on the treatment of the escape fraction. With our escape fraction model, Pop III stars hardly contribute to reionization because less massive halos, whose escape fractions are high, cannot host Pop III stars due to LW feedback. On the other hand, Pop III stars contribute substantially to reionization with the conventional constant escape fraction. We also find that UV photo-heating has a non-negligible impact on the 21-cm global signal and the 21-cm power spectrum if the ionization fraction of the Universe is higher than roughly 1 percent. In this case, the strength of the 21-cm global signal depends on the photo-heating efficiency and thus on the Pop III star mass. We conclude that detailed modelling of Pop III stars is imperative to predict 21-cm observables accurately for future observations. | astrophysics
In this article, we construct color-singlet-color-singlet type currents to study the scalar and axialvector $\Xi_{cc}\Sigma_c$ dibaryon states with QCD sum rules in detail, taking into account both the dibaryon states and the two-baryon scattering states at the hadron side, and examine the existence of the $\Xi_{cc}\Sigma_c$ dibaryon states. Our calculations indicate that the two-baryon scattering states cannot saturate the QCD sum rules, and it is necessary to introduce the dibaryon states; the color-singlet-color-singlet type currents couple potentially to the molecular states, not to the two-particle scattering states, and the molecular states begin to receive contributions at order $\mathcal{O}(\alpha_s^0)$, not at order $\mathcal{O}(\alpha_s^2)$. | high energy physics phenomenology
We pursue a low-wavenumber, second-order homogenized solution of the time-harmonic wave equation at both low and high frequency in periodic media with a source term whose frequency resides inside a band gap. Considering the wave motion in an unbounded medium $\mathbb{R}^d$ ($d\geqslant1$), we first use the (Floquet-)Bloch transform to formulate an equivalent variational problem in a bounded domain. By investigating the source term's projection onto certain periodic functions, the second-order model can then be derived via asymptotic expansion of the Bloch eigenfunction and the germane dispersion relationship. We establish the convergence of the second-order homogenized solution, and we include numerical examples to illustrate the convergence result. | mathematics |
This paper presents the design and implementation of a new open-source view-based graph analytics system called Graphsurge. Graphsurge is designed to support applications that analyze multiple snapshots or views of a large-scale graph. Users program Graphsurge through a declarative graph view definition language (GVDL) for creating views over input graphs and a Differential Dataflow-based programming API for writing analytics computations. A key feature of GVDL is the ability to organize views into view collections, which allows Graphsurge to automatically share computation across views, without users writing any incrementalization code, by performing computations differentially. We then introduce two optimization problems that naturally arise in our setting. First is the collection ordering problem, to determine the order of views that leads to the minimum difference across consecutive views. We prove this problem is NP-hard and show a constant-factor approximation algorithm drawn from the literature. Second is the collection splitting problem, to decide which views to run computations on differentially vs from scratch, for which we present an adaptive solution that makes decisions at runtime. We present extensive experiments to demonstrate the benefits of running computations differentially for view collections and of our collection ordering and splitting optimizations. | computer science
We consider the graph $k$-partitioning problem under the min-max objective, termed as Minmax $k$-cut. The input here is a graph $G=(V,E)$ with non-negative edge weights $w:E\rightarrow \mathbb{R}_+$ and an integer $k\geq 2$ and the goal is to partition the vertices into $k$ non-empty parts $V_1, \ldots, V_k$ so as to minimize $\max_{i=1}^k w(\delta(V_i))$. Although minimizing the sum objective $\sum_{i=1}^k w(\delta(V_i))$, termed as Minsum $k$-cut, has been studied extensively in the literature, very little is known about minimizing the max objective. We initiate the study of Minmax $k$-cut by showing that it is NP-hard and W[1]-hard when parameterized by $k$, and design a parameterized approximation scheme when parameterized by $k$. The main ingredient of our parameterized approximation scheme is an exact algorithm for Minmax $k$-cut that runs in time $(\lambda k)^{O(k^2)}n^{O(1)}$, where $\lambda$ is value of the optimum and $n$ is the number of vertices. Our algorithmic technique builds on the technique of Lokshtanov, Saurabh, and Surianarayanan (FOCS, 2020) who showed a similar result for Minsum $k$-cut. Our algorithmic techniques are more general and can be used to obtain parameterized approximation schemes for minimizing $\ell_p$-norm measures of $k$-partitioning for every $p\geq 1$. | computer science |
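The objective itself is straightforward to state in code: the sketch below evaluates $\max_{i=1}^k w(\delta(V_i))$ for a given partition and brute-forces tiny instances; it illustrates the objective only, not the paper's $(\lambda k)^{O(k^2)}n^{O(1)}$ algorithm.

```python
# Minmax k-cut objective and a brute-force baseline for tiny graphs.
from itertools import product

def minmax_cut_value(edges, parts):
    """edges: {(u, v): weight}; parts: list of disjoint vertex sets."""
    boundary = [0.0] * len(parts)
    for (u, v), w in edges.items():
        for i, part in enumerate(parts):
            if (u in part) != (v in part):        # edge crosses delta(V_i)
                boundary[i] += w
    return max(boundary)

def brute_force(vertices, edges, k):
    best, best_parts = float("inf"), None
    for labels in product(range(k), repeat=len(vertices)):
        parts = [{v for v, l in zip(vertices, labels) if l == i} for i in range(k)]
        if any(not p for p in parts):             # all k parts must be non-empty
            continue
        val = minmax_cut_value(edges, parts)
        if val < best:
            best, best_parts = val, parts
    return best, best_parts

V = ["a", "b", "c", "d"]
E = {("a", "b"): 1.0, ("b", "c"): 2.0, ("c", "d"): 1.0, ("a", "d"): 3.0}
print(brute_force(V, E, k=2))
```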
Continuous-time assessments of game outcomes in sports have become increasingly common in the last decade. In American football, only discrete-time estimates of play value were possible, since the most advanced public football datasets were recorded at the play-by-play level. While measures such as expected points and win probability are useful for evaluating football plays and game situations, there has been no research into how these values change throughout the course of a play. In this work, we make two main contributions. First, we introduce a general framework for continuous-time within-play valuation in the National Football League using player-tracking data. Our framework incorporates several modular sub-models, making it easy to incorporate recent work involving player-tracking data in football. Second, we use a long short-term memory recurrent neural network to construct a ball-carrier model that estimates how many yards the ball-carrier is expected to gain from their current position, conditional on the locations and trajectories of the ball-carrier, their teammates and opponents. Additionally, we demonstrate an extension with conditional density estimation so that the expectation of any measure of play value can be calculated in continuous time, which was never before possible at such a granular level. | statistics
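A minimal sketch of an LSTM ball-carrier regressor in the spirit described above is shown below; the feature dimension, hidden size, and single-frame readout are placeholder assumptions rather than the paper's architecture, which additionally supports conditional density estimation.

```python
# LSTM that reads per-frame tracking features (ball-carrier, teammates,
# opponents) and regresses expected yards gained from the current position.
import torch

class BallCarrierLSTM(torch.nn.Module):
    def __init__(self, n_features: int = 46, hidden: int = 64):
        super().__init__()
        self.lstm = torch.nn.LSTM(n_features, hidden, batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, frames):                    # frames: (batch, time, n_features)
        out, _ = self.lstm(frames)
        return self.head(out[:, -1, :]).squeeze(-1)   # prediction at last frame

model = BallCarrierLSTM()
frames = torch.randn(8, 25, 46)                   # 8 plays, 25 tracked frames each
expected_yards = model(frames)                    # shape: (8,)
```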
Acoustic modeling of the raw waveform and learning feature extractors as part of the neural network classifier have been goals of many studies in the area of automatic speech recognition (ASR). Recently, one line of research has focused on frameworks that can be pre-trained on audio-only data in an unsupervised fashion, aiming to improve downstream ASR tasks. In this work, we investigate the usefulness of one of these front-end frameworks, namely wav2vec, for hybrid ASR systems. In addition to deploying a pre-trained feature extractor, we explore how to make use of an existing acoustic model (AM) trained on the same task with different features. Another neural front-end, which is trained only together with the supervised ASR loss, as well as traditional Gammatone features, are applied for comparison. Moreover, it is shown that the AM can be retrofitted with i-vectors for speaker adaptation. Finally, the described features are combined in order to further advance the performance. With the final best system, we obtain a relative improvement of 4% and 6% over our previous best model on the LibriSpeech test-clean and test-other sets. | electrical engineering and systems science
Resistive plate chambers (RPCs) lined with $^{10}$B$_4$C neutron converters are a promising cost-effective technology for position-sensitive thermal neutron detection, capable of outperforming $^{3}$He-based detectors in terms of spatial resolution and timing. However, as for other types of gaseous detectors with a single layer of $^{10}$B$_4$C at normal beam incidence, the detection efficiency for thermal neutrons of a single-gap $^{10}$B-RPC is only about 6%. Aiming to overcome this limitation, we introduce a multi-layer $^{10}$B-RPC detector with a stack of ten double-gap hybrid RPCs. A description of the detector design and the results of its characterization performed at the TREFF neutron beamline at the FRM II neutron facility are presented. The results demonstrate that the detection efficiency exceeds 60% for neutrons with a wavelength of 4.7 \r{A} and that the spatial resolution (FWHM) is about 0.25 mm and 0.35 mm in the X and Y directions, respectively. | physics
Here, we report the evolution of the structural, magnetic and transport behavior in doped SrRu$_{1-x}$Ga$_x$O$_3$ ($x$ $\le$ 0.2). The nonmagnetic dopant Ga$^{3+}$ (3$d^{10}$) not only acts as a magnetic site diluent in SrRuO$_3$ but also modifies the Ru charge state and electronic density. Our studies show that Ga$^{3+}$ substitution does not affect the original orthorhombic \textit{Pbnm} structure of SrRuO$_3$, owing to its ionic radius matching that of Ru$^{4+}$. However, Ga$^{3+}$ has a substantial effect on the magnetic behavior of SrRuO$_3$, decreasing both the magnetic moment and the magnetic transition temperature $T_c$. Further, this dilution induces Griffiths-phase behavior across $T_c$ and cluster-glass behavior at low temperature for higher doping concentrations. The magnetic critical exponent $\beta$ increases with $x$ due to this site-dilution effect. Ga$^{3+}$ induces an insulating state in SrRuO$_3$ for $x$ $>$ 0.05. The charge transport in the paramagnetic as well as the insulating state of the samples can be well described with Mott's modified variable-range-hopping model. The metallic charge transport just below $T_c$ in SrRuO$_3$ obeys Fermi-liquid behavior, which, however, breaks down at low temperature. We further find a correlation between the field-dependent magnetoresistance and the magnetization through power-law behavior over the series. | condensed matter
We analyze the prospects of probing the $CP$-odd $i \tilde \kappa \bar t \gamma^5 t h$ interaction at the LHC and its projected upgrades, the high-luminosity and high-energy LHC, directly using associated on-shell Higgs boson and top quark or top quark pair production. To this end we first construct a $CP$-odd observable based on top quark polarization in $W b \to t h$ scattering with optimal linear sensitivity to $\tilde \kappa$. For the corresponding hadronic process $pp \to t h j$ we present a method of extracting the phase-space dependent weight function that allows to retain close to optimal sensitivity to $\tilde{\kappa}$. We project future sensitivity to the signal in $pp \to t(\to \ell \nu b )h(\to b\bar b) j$. Based on these insights we propose novel $CP$-odd observables for top quark pair production in association with the Higgs, $pp \to t \bar t h$, with semileptonically decaying tops and $h\to \bar b b$, that rely solely on measuring the momenta of leptons and $b$-jets from the decaying tops without having to distinguish the charge of the $b$-jets. Among the many possibilities we single out an observable that can potentially probe $\tilde \kappa \sim 0.5$ at the high-luminosity LHC and $\tilde \kappa \sim 0.1$ at high-energy LHC with $2\sigma$ confidence. | high energy physics phenomenology |
We study the density of polynomials in $H^2(\Omega,e^{-\varphi})$, the space of square integrable holomorphic functions in a bounded domain $\Omega$ in $\mathbb{C}$, where $\varphi$ is a subharmonic function. In particular, we prove that the density holds in Carath\'{e}odory domains for any subharmonic function $\varphi$ in a neighborhood of $\overline{\Omega}$. In non-Carath\'{e}odory domains, we prove that the density depends on the weight function, giving examples. | mathematics |
A three-step procedure is proposed in type IIA string theory to stabilize multiple moduli in a dS vacuum. The first step is to construct a progenitor model with a localized stable supersymmetric Minkowski vacuum, or a discrete set of such vacua. It can be done, for example, using two non-perturbative exponents in the superpotential for each modulus, as in the KL model. A large set of supersymmetric Minkowski vacua with strongly stabilized moduli is protected by a theorem on stability of these vacua in absence of flat directions. The second step involves a parametrically small downshift to a supersymmetric AdS vacuum, which can be achieved by a small change of the superpotential. The third step is an uplift to a dS vacuum with a positive cosmological constant using the $\overline {D6}$-brane contribution. Stability of the resulting dS vacuum is inherited from the stability of the original supersymmetric Minkowski vacuum if the supersymmetry breaking in dS vacuum is parametrically small. | high energy physics theory |
We develop a data-driven methodology based on parametric It\^{o} stochastic differential equations (SDEs) to capture the real asymmetric dynamics of forecast errors. Our SDE framework features time-derivative tracking of the forecast, a time-varying mean-reversion parameter, and an improved state-dependent diffusion term. Proofs of the existence, strong uniqueness, and boundedness of the SDE solutions are given under a principled condition on the time-varying mean-reversion parameter. Inference based on an approximate likelihood, constructed through the moment-matching technique both in the original forecast-error space and in the Lamperti space, is performed through numerical optimization procedures. We propose a further contribution based on the fixed-point likelihood optimization approach in the Lamperti space. All the procedures are agnostic of the forecasting technology, and they enable comparisons between different forecast providers. We apply our SDE framework to model historical Uruguayan normalized wind power production and forecast data between April and December 2019. Sharp empirical confidence bands of future wind power production are obtained for the best-selected model. | statistics
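To make the ingredients concrete (time-derivative tracking of the forecast, a time-varying mean-reversion parameter, a state-dependent diffusion), here is an Euler-Maruyama simulation sketch of a mean-reverting SDE for a normalized signal in $[0,1]$; the specific functional forms below are illustrative assumptions, not the paper's fitted model.

```python
# Euler-Maruyama sketch: dX_t = theta_t*(p_t - X_t) dt + sigma*sqrt(X_t(1-X_t)) dW_t,
# where p_t is the normalized forecast; the diffusion vanishes at 0 and 1.
import numpy as np

rng = np.random.default_rng(1)
T, n = 24.0, 2400
dt = T / n
t = np.linspace(0.0, T, n + 1)
p = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24.0)      # toy normalized forecast
theta = 2.0 + np.cos(2 * np.pi * t / 24.0)        # time-varying mean reversion
sigma = 0.4

x = np.empty(n + 1)
x[0] = p[0]
for k in range(n):
    drift = theta[k] * (p[k] - x[k])
    diff = sigma * np.sqrt(max(x[k] * (1.0 - x[k]), 0.0))
    step = drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    x[k + 1] = np.clip(x[k] + step, 0.0, 1.0)     # keep path in [0, 1]
```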
We consider a general statistical learning problem where an unknown fraction of the training data is corrupted. We develop a robust learning method that only requires specifying an upper bound on the corrupted-data fraction. The method minimizes a risk function defined by a non-parametric distribution with unknown probability weights. We derive and analyse the optimal weights and show how they provide robustness against corrupted data. Furthermore, we give a computationally efficient coordinate descent algorithm to solve the risk minimization problem. We demonstrate the wide-ranging applicability of the method, including regression, classification, unsupervised learning and classic parameter estimation, with state-of-the-art performance. | statistics
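The role of the probability weights can be illustrated with a simple trimming rule: given the upper bound on the corrupted fraction, put zero weight on the largest losses and uniform weight on the rest. This only approximates the spirit of the optimal weights derived in the paper, which are obtained by minimizing the non-parametric risk.

```python
# Toy illustration of robust probability weights via trimming.
import numpy as np

def trimmed_weights(losses, eps):
    """Zero weight on the ceil(eps*n) largest losses, uniform elsewhere."""
    losses = np.asarray(losses)
    n = len(losses)
    n_drop = int(np.ceil(eps * n))
    keep = np.argsort(losses)[: n - n_drop]
    w = np.zeros(n)
    w[keep] = 1.0 / len(keep)
    return w

losses = np.array([0.2, 0.1, 8.0, 0.3, 0.2, 9.5])  # two grossly corrupted points
w = trimmed_weights(losses, eps=0.33)
robust_risk = float(w @ losses)                    # ~0.2, vs ~3.05 for the plain mean
```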
The mechanism behind the $^1$H NMR frequency dependence of $T_1$ and the viscosity dependence of $T_2$ for polydisperse polymers and bitumen remains elusive. We elucidate the matter through NMR relaxation measurements of polydisperse polymers over an extended range of frequencies ($f_0 = 0.01 \leftrightarrow$ 400 MHz) and viscosities ($\eta = 385 \leftrightarrow 102,000$ cP) using $T_{1}$ and $T_2$ in static fields, $T_{1}$ field-cycling relaxometry, and $T_{1\rho}$ in the rotating frame. We account for the anomalous behavior of the log-mean relaxation times $T_{1LM} \propto f_0$ and $T_{2LM} \propto (\eta/T)^{-1/2}$ with a phenomenological model of $^1$H-$^1$H dipole-dipole relaxation which includes a distribution in molecular correlation times and internal motions of the non-rigid polymer branches. We show that the model also accounts for the anomalous $T_{1LM}$ and $T_{2LM}$ in previously reported bitumen measurements. We find that molecular dynamics (MD) simulations of the $T_{1} \propto f_0$ dispersion and $T_2$ of similar polymers simulated over a range of viscosities ($\eta = 1 \leftrightarrow 1,000$ cP) are in good agreement with measurements and the model. The $T_{1} \propto f_0$ dispersion at high viscosities agrees with previously reported MD simulations of heptane confined in a polymer matrix, which suggests a common NMR relaxation mechanism between viscous polydisperse fluids and fluids under confinement, without the need to invoke paramagnetism. | physics |
Recently, Abualrub et al. illustrated the algebraic structure of additive conjucyclic codes over F_4 (Finite Fields Appl. 65 (2020) 101678). In this paper, our main objective is to generalize their theory. Via an isomorphism, we give a canonical bijective correspondence between F_q-linear additive conjucyclic codes of length n over F_{q^2} and q-ary linear cyclic codes of length 2n. By defining an alternating inner product, we also prove that the proposed isomorphism preserves orthogonality. From the factorization of the polynomial x^{2n}-1 over F_q, we obtain the enumeration of F_q-linear additive conjucyclic codes of length n over F_{q^2}. Moreover, we provide the generator and parity-check matrices of these q^2-ary additive conjucyclic codes of length n. | computer science
We prove lower bounds for the minimum distance of algebraic geometry codes over surfaces whose canonical divisor is either nef or anti-strictly nef and over surfaces without irreducible curves of small genus. We sharpen these lower bounds for surfaces whose arithmetic Picard number equals one, surfaces without curves with small self-intersection and fibered surfaces. Finally, we specialize our bounds to the case of surfaces of degree $d\geq 3$ embedded in $\mathbb{P}^3$. | mathematics
We address the emergent quantum critical phenomena for (pseudo)spin-3/2 birefringent fermions, featuring two effective Fermi velocities, when they reside close to itinerant Mott transitions realized through spontaneous symmetry breaking and triggered by strong local or Hubbardlike repulsive interactions. Irrespective of the nature of the mass orderings that produce fully gapped quasiparticle spectra in the ordered phase, which otherwise can be grouped into three classes, the system always possesses a \emph{unique} terminal velocity near the corresponding quantum critical point. The associated critical regime accommodates a relativistic non-Fermi liquid of strongly coupled collective bosonic and spin-1/2 Dirac excitations with vanishing weight of the quasiparticle pole. These conclusions are also operative near superconducting critical points. Therefore, relativistic non-Fermi liquid possibly constitutes a robust superuniversal description for the entire family of strongly correlated arbitrary half-integer spin Dirac materials. | condensed matter |
We investigate the time evolution of operator complexity in the Sachdev-Ye-Kitaev (SYK) model with $N$ Majorana fermions. We follow Nielsen's idea of complexity geometry and the geodesics thereof. We show that it is possible for the bi-invariant complexity geometry to exhibit the conjectured time evolution of the complexity in chaotic systems: i) linear growth until $t\sim e^{N}$, and ii) saturation and small fluctuations thereafter. We also show that Lloyd's bound is realized in this model. Interestingly, these characteristic features appear only if the complexity geometry is the most natural "non-Riemannian" Finsler geometry. This serves as a concrete example showing that the bi-invariant complexity may be a competitive candidate for the complexity in quantum mechanics/field theory (QM/QFT). We provide another argument for the naturalness of bi-invariant complexity in QM/QFT: bi-invariance naturally implies the equivalence of the right-invariant and left-invariant complexity, either of which may correspond to the complexity of a given operator. Without bi-invariance, one needs to answer why only the right (left) invariant complexity corresponds to the "complexity", instead of only the left (right) invariant complexity. | high energy physics theory
Travel time on a route varies substantially by time of day and from day to day. It is critical to understand to what extent this variation is correlated with various factors, such as weather, incidents, events or travel demand level in the context of dynamic networks. This supports better decision making for infrastructure planning and real-time traffic operation. We propose a data-driven approach to understand and predict highway travel time using spatio-temporal features of those factors, all of which are acquired from multiple data sources. The prediction model holistically selects the most related features from a high-dimensional feature space by correlation analysis, principal component analysis and LASSO. We test and compare the performance of several regression models in predicting travel time 30 min in advance via two case studies: (1) a 6-mile highway corridor of I-270N in the D.C. region, and (2) a 2.3-mile corridor of I-376E in the Pittsburgh region. We find that some bottlenecks scattered in the network can imply congestion on those corridors at least 30 minutes in advance, including those on the alternative route to the corridors of study. In addition, real-time travel time is statistically related to incidents at some specific locations, morning/afternoon travel demand, visibility, precipitation, wind speed/gust and the weather type. All this spatio-temporal information together helps improve prediction accuracy, compared to using only speed data. In both case studies, random forest shows the most promise, reaching a root-mean-squared error of 16.6\% and 17.0\%, respectively, in afternoon peak hours for the entire year of 2014. | statistics
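A sketch of such a pipeline, with PCA and LASSO for holistic feature selection and a random forest as the best-performing predictor, is shown below on synthetic stand-in features; all names and hyperparameters are hypothetical.

```python
# Feature reduction (scaling + PCA + LASSO) and a random forest predicting
# travel time 30 min ahead; synthetic data stands in for the real features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 120))   # weather/incident/demand/speed features
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(5000)

# Linear feature-selection stage: LASSO fitted on principal components
selector = make_pipeline(StandardScaler(), PCA(n_components=30), LassoCV(cv=5))
selector.fit(X, y)

# Nonlinear predictor, as in the best-performing model described above
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
pred_30min_ahead = rf.predict(X[:10])
```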
The concept of low-congestion shortcuts was initiated by Ghaffari and Haeupler [SODA2016] for the design of CONGEST algorithms that run fast in restricted network topologies. Specifically, given a graph class $X$, an $f$-round algorithm constructing shortcuts of quality $q$ for any instance in $X$ results in $\tilde{O}(q + f)$-round algorithms for solving several fundamental graph problems on $X$, such as minimum spanning tree and minimum cut. In this paper, we consider the relationship between the quality of low-congestion shortcuts and three major graph parameters: chordality, diameter, and clique-width. The main contribution of the paper is threefold: (1) We show an $O(1)$-round algorithm which constructs a low-congestion shortcut with quality $O(kD)$ for any $k$-chordal graph, and prove that the quality and running time of this construction are nearly optimal up to polylogarithmic factors. (2) We present two algorithms, one constructing a low-congestion shortcut with quality $\tilde{O}(n^{1/4})$ in $\tilde{O}(n^{1/4})$ rounds for graphs of $D=3$, and one with quality $\tilde{O}(n^{1/3})$ in $\tilde{O}(n^{1/3})$ rounds for graphs of $D=4$. These results immediately yield two MST algorithms running in $\tilde{O}(n^{1/4})$ and $\tilde{O}(n^{1/3})$ rounds for $D=3$ and $4$ respectively, which almost close the long-standing complexity gap of MST construction in small-diameter graphs originally posed by Lotker et al. [Distributed Computing 2006]. (3) We show that bounding clique-width does not help the construction of good shortcuts, by presenting a network topology of clique-width six where the construction of MST is as expensive as in the general case. | computer science
All 7 dynamical second-order transport coefficients of the nonconformal fluids corresponding to Dp-branes with one or more world-volume directions compactified are derived via the fluid/gravity correspondence. The cases considered in this paper include the D4-brane with 1, 2 or 3 compact directions, the D3-brane with 1 or 2 compact directions, as well as the D2-brane with 1 direction compactified. The derived second-order transport coefficients satisfy the Haack-Yarom, Romatschke and Kleinert-Probst relations. | high energy physics theory
Drones will have extensive use cases across various commercial, government, and military sectors, ranging from delivery of consumer goods to search and rescue operations. To maintain the safety and security of people and infrastructure, it becomes critically important to quickly and accurately detect non-cooperating drones. In this paper we formulate a received signal strength (RSS) based detector, leveraging the existing wireless infrastructures that might already be serving other devices. Thus the detector can detect the presence of a drone signal buried in radio frequency (RF) interference and thermal noise, in a mixed line-of-sight (LOS) and non-LOS (NLOS) environment. We develop analytical expressions for the probability of false alarm and the probability of detection of a drone, which quantify the impact of aggregate interference and air-to-ground (A2G) propagation characteristics on the detection performance of individual sensors. We also provide analytical expressions for the average network probability of detection, which capture the impact of sensor density on a network's detection coverage. Finally, we find the critical sensor density that maximizes the average network probability of detection for a given requirement of the probability of false alarm. | electrical engineering and systems science |
Solving depth estimation with monocular cameras enables the widespread use of cameras as low-cost depth estimation sensors in applications such as autonomous driving and robotics. However, learning such a scalable depth estimation model would require a lot of labeled data, which is expensive to collect. There are two popular existing approaches which do not require annotated depth maps: (i) using labeled synthetic and unlabeled real data in an adversarial framework to predict more accurate depth, and (ii) unsupervised models which exploit geometric structure across space and time in monocular video frames. Ideally, we would like to leverage features provided by both approaches as they complement each other; however, existing methods do not adequately exploit these additive benefits. We present $S^3$Net, a self-supervised framework which combines these complementary features: we use synthetic and real-world images for training while exploiting geometric, temporal, as well as semantic constraints. Our novel consolidated architecture provides a new state-of-the-art in self-supervised depth estimation using monocular videos. We present a unique way to train this self-supervised framework, and achieve (i) more than $15\%$ improvement over previous synthetic supervised approaches that use domain adaptation and (ii) more than $10\%$ improvement over previous self-supervised approaches which exploit geometric constraints from the real data. | computer science
There are currently no reliable methods to measure transverse velocities of galaxies. This is an important piece of information that could allow us to probe the physics of structure formation as well as to test the underlying theory of gravity. The slingshot effect, a special case of the Integrated Sachs-Wolfe effect, is expected to create dipole signals in the temperature fluctuations of the Cosmic Microwave Background Radiation (CMB). This effect creates a hot spot behind and a cold spot in front of moving massive objects. The dipole signal created by the slingshot effect can be used to measure transverse velocities, but because the signal is expected to be weak, the effect has not been measured yet. The aim is to show that the slingshot effect can be measured by stacking the signals of galaxies falling into a collapsing cluster. We also evaluate whether the effect can probe modified gravity. We use data from a simulated galaxy catalogue (MDPL2) to mimic observations. We identify a massive galaxy cluster and make maps of the slingshot effect around infalling galaxies. We add uncorrelated Gaussian noise to each map. The maps are rotated according to the direction to the cluster centre, such that the dipole signal adds up constructively when stacking. We compare each stack to a dipole stencil and find the probability of a false positive in the absence of the slingshot signal. Each galaxy gives a signal of around $\Delta T/T \approx 10^{-9}$, while the precision of CMB experiments of today is $\Delta T/T \approx 4 \times 10^{-6}$. By stacking around 10 000 galaxies, the slingshot signal can rise above the detection threshold with experiments of today. However, future CMB experiments must be used to be certain of the strength of the observed signal. | astrophysics
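The stacking argument can be checked with a toy computation: after each map is de-rotated so the expected dipoles align, the signal adds coherently over $N$ galaxies while uncorrelated Gaussian noise grows only as $\sqrt{N}$; the amplitudes below are arbitrary and not calibrated to $\Delta T/T$.

```python
# Toy stacking of noisy dipole maps; the aligned dipole survives averaging.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)
npix = 64
yy, xx = np.mgrid[-1:1:npix * 1j, -1:1:npix * 1j]
dipole = xx * np.exp(-(xx**2 + yy**2) / 0.1)   # hot spot behind, cold in front

N, noise_sigma = 1000, 50.0
stack = np.zeros((npix, npix))
for _ in range(N):
    angle = rng.uniform(0.0, 360.0)            # direction to cluster centre
    observed = rotate(dipole, angle, reshape=False) \
        + noise_sigma * rng.standard_normal((npix, npix))
    stack += rotate(observed, -angle, reshape=False)  # de-rotate to align dipoles

stack /= N                                     # noise std now ~ noise_sigma/sqrt(N)
```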
We theoretically generalize a systematic language for describing the phase-imprinting technique, which we use to investigate the dynamical generation of solitons in a one-dimensional Raman-type spin-orbit-coupled Fermi superfluid. We check our method with simulations of the time-dependent Bogoliubov-de Gennes equations, and find that it can not only generate stable dark and even gray solitons in a conventional Fermi superfluid by controlling the transferred phase jump, but is also able to create a stable dark soliton in both the Bardeen-Cooper-Schrieffer and topological states of the spin-orbit-coupled Fermi superfluid. We also discuss the physical implications of our method. | condensed matter
In non-inverted heating scenarios, a lower hybrid (LH) resonance can appear in the plasma edge of tokamaks. This resonance can lead to large edge power deposition when heating in the ion cyclotron resonance frequency (ICRF) range. In this paper, the edge power loss associated with this LH resonance is analytically computed for a cold plasma description using an asymptotic approach and analytic continuation. This power loss can be directly linked to the local radial electric field and is then compared to the corresponding power loss computed with the semi-analytical code ANTITER IV. This method offers the possibility of checking the precision of the numerical integration made in ANTITER IV and gives insight into the physics underlying the edge power absorption. Finally, solutions to minimize this edge power absorption are investigated and applied to the case of ITER's ion cyclotron resonance heating (ICRH) launcher. This study is also of direct relevance to DEMO. | physics
We present solutions to the relativistic thin disc evolutionary equation using a modified description of the mean fluid flow within the disc. The model takes into account the effects of sub-circular velocities in the innermost disc regions, and resolves otherwise unsustainable behaviour present in simple finite ISCO stress disc models. We show that the behaviour of a relativistic thin disc evolving with a finite ISCO stress is comprised of three distinct stages which join the ordinarily distinct finite and vanishing ISCO stress solutions into a fully continuous model parameterisation. The most important prediction of our model is the existence of an intermediate stage of "stalled accretion", controlled by a single dimensionless parameter. The hallmarks of this evolutionary phase appear to have been seen in GRMHD simulations as well as in the late time X-ray observations of tidal disruption events, but dedicated simulations and extended observations are needed for a deeper understanding. | astrophysics |
We consider estimation and control of the cylinder wake at low Reynolds numbers. A particular focus is on the development of efficient numerical algorithms to design optimal linear feedback controllers when there are many inputs (disturbances applied everywhere) and many outputs (perturbations measured everywhere). We propose a resolvent-based iterative algorithm to perform i) optimal estimation of the flow using a limited number of sensors; and ii) optimal control of the flow when the entire flow is known but only a limited number of actuators are available for control. The method uses resolvent analysis to take advantage of the low-rank characteristics of the cylinder wake and solutions are obtained without any model-order reduction. Optimal feedback controllers are also obtained by combining the solutions of the estimation and control problems. We show that the performance of the estimators and controllers converges to the true global optima, indicating that the important physical mechanisms for estimation and control are of low rank. | mathematics |
The aim of this paper is to investigate the use of an entropic projection method for the iterative regularization of linear ill-posed problems. We derive a closed-form solution for the iterates and analyze their convergence behaviour both in the case of reconstructing general nonnegative unknowns and in that of recovering probability distributions. Moreover, we discuss several variants of the algorithm and relations to other methods in the literature. The effectiveness of the approach is studied numerically in several examples. | mathematics
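As an illustration of the entropic flavor of such iterations, the sketch below applies multiplicative (entropic mirror-descent) updates to a nonnegative linear ill-posed problem $Ax=b$, where the entropy acts as the Bregman distance and keeps the iterates positive; this illustrates the general idea only and is not the closed-form iterates derived in the paper.

```python
# Entropic (multiplicative) iteration for nonnegative recovery in A x = b.
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.random((m, n))
x_true = np.zeros(n)
x_true[[10, 45, 80]] = [1.0, 2.0, 0.5]        # nonnegative ground truth
b = A @ x_true

x = np.full(n, 0.1)                           # strictly positive initialization
tau = 1.0 / np.linalg.norm(A, 2) ** 2         # step size from the spectral norm
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = x * np.exp(-tau * grad)               # update preserves x > 0

residual = np.linalg.norm(A @ x - b)          # early stopping acts as regularizer
```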
Using the new results on coherent elastic neutrino-nucleus scattering data in cesium-iodide provided by the COHERENT experiment, we determine a new measurement of the average neutron rms radius of $^{133}\text{Cs}$ and $^{127}\text{I}$. In combination with the atomic parity violation (APV) experimental result, we derive the most precise measurement of the neutron rms radii of $^{133}\text{Cs}$ and $^{127}\text{I}$, disentangling for the first time the contributions of the two nuclei. By exploiting these measurements we determine the corresponding neutron skin values for $^{133}\text{Cs}$ and $^{127}\text{I}$. These results suggest a preference for models which predict large neutron skin values, as corroborated by the only other electroweak measurements of the neutron skin of $^{208}\text{Pb}$ performed by PREX and PREX-II. Moreover, for the first time, we obtain a data-driven APV+COHERENT measurement of the low-energy weak mixing angle with a percent uncertainty, independent of the value of the average neutron rms radius of $^{133}\text{Cs}$ and $^{127}\text{I}$, that is allowed to vary freely in the fit. The value of the low-energy weak mixing angle that we found is slightly larger than the standard model prediction. | high energy physics phenomenology |
We study the metric projection onto the closed convex cone in a real Hilbert space $\mathscr{H}$ generated by a sequence $\mathcal{V} = \{v_n\}_{n=0}^\infty$. The first main result of this paper provides a sufficient condition under which we can identify the closed convex cone generated by $\mathcal{V}$ with the following set: \[ \mathcal{C}[[\mathcal{V}]] := \bigg\{\sum_{n=0}^\infty a_n v_n\,\Big|\,a_n\geq 0,\text{ the series }\sum_{n=0}^\infty a_n v_n\text{ converges in $\mathscr{H}$}\bigg\}. \] Then, by adapting classical results on general convex cones, we give a useful description of the metric projection of a vector onto $\mathcal{C}[[\mathcal{V}]]$. As applications, we obtain the best approximations of many concrete functions in $L^2([-1,1])$ by polynomials with non-negative coefficients. | mathematics |
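As a hedged, concrete instance of the paper's application, the sketch below computes the metric projection of a function onto the cone generated by the monomials in $L^2([-1,1])$, i.e. its best approximation by a polynomial with non-negative coefficients, by reducing the problem to nonnegative least squares via a Cholesky factor of the Gram matrix. The degree and target function are illustrative choices, not examples taken from the paper.

```python
# Hedged sketch: best L^2([-1,1]) approximation of f by a polynomial
# sum_n a_n x^n with a_n >= 0, i.e. the metric projection onto the cone
# generated by the monomials.  Degree and f are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls
from scipy.integrate import quad

N = 6                                   # polynomial degree (assumption)
f = lambda x: np.exp(x)                 # target function (assumption)

# Gram matrix G_{mn} = <x^m, x^n> and moments c_n = <f, x^n> on [-1, 1].
G = np.array([[2.0 / (m + n + 1) if (m + n) % 2 == 0 else 0.0
               for n in range(N + 1)] for m in range(N + 1)])
c = np.array([quad(lambda x, n=n: f(x) * x**n, -1, 1)[0] for n in range(N + 1)])

# Minimising ||f - sum_n a_n x^n||^2 over a >= 0 is equivalent to
# min ||L^T a - L^{-1} c||^2 with G = L L^T (Cholesky).
L = np.linalg.cholesky(G)
a, _ = nnls(L.T, np.linalg.solve(L, c))
print("nonnegative coefficients:", np.round(a, 4))
```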
We investigate the strain-rate-dependent mechanical behavior and deformation mechanisms of a refractory high entropy alloy, Ti$_{29}$Zr$_{24}$Nb$_{23}$Hf$_{24}$ (at.%), with a single-phase body-centered cubic (BCC) structure. High-temperature compression tests were conducted at temperatures from 700 to 1100°C at strain rates ranging from $10^{-3}$ to $10$ s$^{-1}$. A sudden stress drop after the yield point was observed at higher temperatures and lower strain rates, with the Zener-Hollomon parameter, $\ln Z$, in the range of 17.2-20.7. Such softening behavior can be related to the interaction of dislocations with short-range clustering. However, at higher strain rates or lower temperatures ($\ln Z>25.0$), kink bands were activated and were responsible for the continuous strengthening of the alloy, in competition with the softening mechanism. Systematic TEM investigations reveal that dislocation walls formed along {110} planes and dislocation loops were observed at a low strain of 6% at a high strain rate of $1$ s$^{-1}$ and 800°C. Kink-band-induced dynamic recrystallization is evident upon further straining. On the other hand, at a low strain rate of $10^{-3}$ s$^{-1}$ and 800°C, discontinuous recrystallization mechanisms become dominant, with arrays of dislocations forming in front of the bulged boundaries of parent grains. These sub-grain boundaries eventually turn into high-angle grain boundaries. We also investigate the deformation mechanism of the alloy under an extremely high strain rate ($10^3$ s$^{-1}$) at room temperature. The specimen exhibits extensive kink bands with arrays of dislocation walls. Upon further straining, multiple slip systems can be activated, and the interaction of dislocation walls plays a vital role in the strain hardening of the alloy. | condensed matter |
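For readers unfamiliar with it, the Zener-Hollomon parameter quoted above combines strain rate and temperature as $Z = \dot{\varepsilon}\,\exp(Q/RT)$; the short sketch below evaluates $\ln Z$ over the tested conditions. The activation energy used is a placeholder, not the value fitted for this alloy.

```python
# Hedged arithmetic sketch: the Zener-Hollomon parameter
# Z = strain_rate * exp(Q / (R * T)) collapses strain rate and
# temperature into one variable.  Q here is a placeholder,
# not the activation energy fitted in the paper.
import numpy as np

R = 8.314     # gas constant, J/(mol K)
Q = 3.0e5     # assumed activation energy, J/mol (illustrative)

def ln_Z(strain_rate, T_celsius):
    T = T_celsius + 273.15
    return np.log(strain_rate) + Q / (R * T)

# Corners of the tested range: 10^-3 to 10 s^-1, 700 to 1100 C
for rate in (1e-3, 10.0):
    for T in (700.0, 1100.0):
        print(f"strain rate {rate:g} 1/s, {T:g} C -> ln Z = {ln_Z(rate, T):.1f}")
```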
We apply the measurement reduction technique to optimally reconstruct an object image from multiplexed ghost images (GI) while taking into account both GI correlations and object image sparsity. We show that one can reconstruct an image in this way even if the object is illuminated by a small number of photons. We consider frequency GI multiplexing using coupled parametric processes. We reveal that the imaging condition depends on the type of parametric process, namely, whether down- or up-conversion is used. We also study how prior information about sparsity in the discrete cosine transform and Haar transform bases affects reconstruction quality. In addition, in a computer experiment we compare ordinary and ghost images when the detectors are additionally illuminated by noise photons; the comparison demonstrates the increased noise immunity of GI, especially with processing via the proposed technique. | quantum physics |
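As a hedged point of reference, the sketch below reconstructs a ghost image with the standard correlation estimator $G = \langle I S\rangle - \langle I\rangle\langle S\rangle$, which is the conventional baseline that measurement-reduction processing is meant to improve on; the speckle patterns, object, and noise level are synthetic assumptions.

```python
# Hedged sketch of baseline correlational ghost-image reconstruction,
# G(x) = <I_k S_k> - <I_k><S_k>, where S_k is the bucket signal for the
# k-th speckle pattern I_k.  This is the standard estimate, not the
# measurement-reduction technique of the paper; data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
npix, nshots = 32 * 32, 4000

obj = np.zeros(npix)
obj[300:340] = 1.0                                 # simple synthetic object

I = rng.random((nshots, npix))                     # speckle intensity patterns
S = I @ obj + 0.05 * rng.standard_normal(nshots)   # noisy bucket detector

G = (I * S[:, None]).mean(axis=0) - I.mean(axis=0) * S.mean()
corr = np.corrcoef(G, obj)[0, 1]
print(f"correlation with true object: {corr:.3f}")
```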