text (strings of 11 to 9.77k chars) | label (strings of 2 to 104 chars)
---|---|
Eukaryotic cells adhere to the extracellular matrix during the normal development of the organism, forming static adhesions as well as adhering dynamically during cell motility. We study this process by considering a simplified coarse-grained model of a vesicle that has uniform adhesion energy with a flat substrate, mobile curved membrane proteins, and active forces. We find that a high concentration of curved proteins alone increases the spreading of the vesicle, through the self-organization of the curved proteins at the high-curvature vesicle-substrate contact line, thereby reducing the bending-energy penalty at the vesicle rim. This effect is most significant in the regime of low bare vesicle-substrate adhesion. When these curved proteins induce protrusive forces, representing the actin cytoskeleton, we find efficient spreading in the form of sheet-like lamellipodia. Finally, the same mechanism of spreading is found to provide a minimal set of ingredients needed to give rise to motile phenotypes.
|
physics
|
We present an English translation and discussion of an essay that a Japanese physicist, Torahiko Terada, wrote in 1922. In the essay, he described the waiting-time paradox, also called the bus paradox, which is a known mathematical phenomenon in queuing theory, stochastic processes, and modern temporal network analysis. He also observed and analyzed data on Tokyo City trams to verify the relevance of the waiting-time paradox to busy passengers in Tokyo at the time. This essay appears to be one of the earliest sufficiently scientific documentations of the waiting-time paradox.
|
physics
|
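The waiting-time paradox mentioned in the abstract above can be illustrated with a short, self-contained simulation (an illustrative sketch added here, not part of the dataset entry or of Terada's essay): when headways are irregular, a passenger arriving at a random time waits on average $\mathrm{E}[S^2]/(2\,\mathrm{E}[S])$ rather than half the mean headway, because long gaps are more likely to be sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tram headways with mean 10 minutes but large variability (exponential).
headways = rng.exponential(scale=10.0, size=200_000)
arrivals = np.cumsum(headways)

# Passengers show up uniformly in time and wait for the next tram.
passengers = rng.uniform(0.0, arrivals[-1], size=200_000)
waits = arrivals[np.searchsorted(arrivals, passengers)] - passengers

print("mean headway        :", headways.mean())   # ~10
print("mean passenger wait :", waits.mean())      # ~10, not ~5
print("E[S^2] / (2 E[S])   :", (headways**2).mean() / (2 * headways.mean()))
```

With perfectly regular 10-minute headways the same experiment gives a 5-minute average wait, which is the gap that the tram observations in the essay probe.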
Efficient stochastic simulation algorithms are of paramount importance to the study of spreading phenomena on complex networks. Using insights and analytical results from network science, we discuss how the structure of contacts affects the efficiency of current algorithms. We show that algorithms believed to require $\mathcal{O}(\log N)$ or even $\mathcal{O}(1)$ operations per update---where $N$ is the number of nodes---display instead a polynomial scaling for networks that are either dense or sparse and heterogeneous. This significantly affects the required computation time for simulations on large networks. To circumvent the issue, we propose a node-based method combined with a composition and rejection algorithm, a sampling scheme that has an average-case complexity of $\mathcal{O} [\log(\log N)]$ per update for general networks. This systematic approach is first set up for Markovian dynamics, but can also be adapted to a number of non-Markovian processes and can considerably enhance the study of a wide range of dynamics on networks.
|
physics
|
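The composition-and-rejection idea referred to in the abstract above can be sketched in a few lines (a generic illustration, not the authors' implementation; the class and variable names are invented for this example). Rates are grouped into dyadic bins, a bin is drawn in proportion to its total weight, and an element is then accepted by rejection inside the bin, so the acceptance probability is at least one half.

```python
import math
import random
from collections import defaultdict

class CompositionRejection:
    """Draw index i with probability w_i / sum(w): weights are grouped into
    dyadic bins [2^k, 2^(k+1)); a bin is chosen in proportion to its total
    weight (composition), then a member is accepted with probability
    w / 2^(k+1) >= 1/2 (rejection)."""

    def __init__(self, weights):
        self.bins = defaultdict(list)       # k -> list of (index, weight)
        self.totals = defaultdict(float)    # k -> total weight of the bin
        for i, w in enumerate(weights):
            if w > 0:
                k = math.floor(math.log2(w))
                self.bins[k].append((i, w))
                self.totals[k] += w

    def sample(self):
        r = random.uniform(0.0, sum(self.totals.values()))
        for k, t in self.totals.items():    # composition over the few bins
            if r < t:
                break
            r -= t
        while True:                         # rejection within the chosen bin
            i, w = random.choice(self.bins[k])
            if random.uniform(0.0, 2.0 ** (k + 1)) < w:
                return i

# toy usage with heterogeneous rates
sampler = CompositionRejection([0.1, 3.0, 0.5, 8.0, 1.2])
counts = defaultdict(int)
for _ in range(20_000):
    counts[sampler.sample()] += 1
print({i: c / 20_000 for i, c in sorted(counts.items())})  # close to w_i / sum(w)
```

In a full simulation the bin totals are updated incrementally as rates change, which is what yields the $\mathcal{O} [\log(\log N)]$ average cost quoted in the abstract; recomputing the sum as done here is only for brevity.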
We show that the central representation is nontrivial for all one-dimensional central extensions of nilpotent Lie algebras possessing a codimension one abelian ideal.
|
mathematics
|
The Seebeck coefficient and electrical conductivity are two critical quantities to optimize simultaneously in designing thermoelectric materials, and they are determined by the dynamics of carrier scattering. We uncover a new regime where the co-existence at the Fermi level of multiple bands with different effective masses leads to strongly energy-dependent carrier lifetimes due to intrinsic electron-phonon scattering. In this anomalous regime, electrical conductivity decreases with carrier concentration, Seebeck coefficient reverses sign even at high doping, and power factor exhibits an unusual second peak. We discuss the origin and magnitude of this effect using first-principles Boltzmann transport calculations and simplified models. We also identify general design rules for using this paradigm to engineer enhanced performance in thermoelectric materials.
|
condensed matter
|
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering. However, SNMF is mathematically formulated as a non-convex optimization problem, making it sensitive to the initialization of variables. Inspired by ensemble clustering, which aims to seek a better clustering result from a set of clustering results, we propose self-supervised SNMF (S$^3$NMF), which is capable of boosting clustering performance progressively by taking advantage of SNMF's sensitivity to initialization, without relying on any additional information. Specifically, we first perform SNMF repeatedly with a random nonnegative matrix for initialization each time, leading to multiple decomposed matrices. Then, we rank the quality of the resulting matrices with adaptively learned weights, from which a new similarity matrix that is expected to be more discriminative is reconstructed for SNMF again. These two steps are iterated until the stopping criterion/maximum number of iterations is reached. We mathematically formulate S$^3$NMF as a constrained optimization problem, and provide an alternative optimization algorithm to solve it with theoretical convergence guaranteed. Extensive experimental results on $10$ commonly used benchmark datasets demonstrate the significant advantage of our S$^3$NMF over $12$ state-of-the-art methods in terms of $5$ quantitative metrics. The source code is publicly available at https://github.com/jyh-learning/SSSNMF.
|
computer science
|
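The two-step loop described in the abstract above can be mimicked with a toy sketch. Note the hedging: `snmf` below uses the standard multiplicative update for symmetric NMF, and the quality weighting and fusion rule are simple stand-ins chosen for illustration, not the adaptive weighting scheme of the paper.

```python
import numpy as np

def snmf(S, k, iters=300, rng=None):
    """Plain symmetric NMF, min ||S - H H^T||_F^2, via a multiplicative
    update; the result depends strongly on the random initialization."""
    rng = rng or np.random.default_rng()
    H = rng.random((S.shape[0], k))
    for _ in range(iters):
        H *= 0.5 + 0.5 * (S @ H) / (H @ (H.T @ H) + 1e-12)
    return H

def s3nmf_sketch(S, k, restarts=5, rounds=3, seed=0):
    """Toy self-supervised loop: run SNMF from several random starts, weight
    the solutions by reconstruction quality, fuse them into a new similarity
    matrix, and repeat."""
    rng = np.random.default_rng(seed)
    for _ in range(rounds):
        Hs = [snmf(S, k, rng=rng) for _ in range(restarts)]
        errs = np.array([np.linalg.norm(S - H @ H.T) for H in Hs])
        w = np.exp(-(errs - errs.min()))
        w /= w.sum()                                     # better fits weigh more
        S = sum(wi * (H @ H.T) for wi, H in zip(w, Hs))  # fused similarity matrix
    return snmf(S, k, rng=rng)

# two noisy blocks -> two clusters
rng = np.random.default_rng(1)
S = np.kron(np.eye(2), np.ones((5, 5))) + 0.05 * rng.random((10, 10))
S = (S + S.T) / 2
H = s3nmf_sketch(S, k=2)
print(np.argmax(H, axis=1))  # cluster assignments; the two blocks should receive different labels
```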
In quantum error correction (QEC), the description of the noise channel cannot be completely accurate, and fluctuations always appear in the noise channel. We find that when fluctuations of the physical noise channel are considered, the average effective channel depends only on the average of the physical noise channel, and this average physical noise channel plays the role of the independent error model of previous works. One may therefore conjecture that, in the independent error model, the results of previous works remain valid for the average channel when fluctuations exist. In some typical cases, our numerical simulations of the concatenated QEC protocol with the 5-qubit code, the 7-qubit Steane code and the 9-qubit Shor code confirm this conjecture. For the 5-qubit code, the effective channels approach a depolarizing channel as the concatenation level increases. For the Steane code, the effective channels approach one Pauli channel as the concatenation level increases. For the Shor code, the effective channels approach one of the Pauli-$X$ and Pauli-$Z$ channels in each level, and in the next concatenation level, the effective channels approach the other. Meanwhile, for these codes, the numerical results indicate that the degree of approximation increases as the concatenation level increases, and the fluctuation of the noise channel decays exponentially as concatenated QEC is performed. At the error-correction threshold, the attenuation ratio of the standard deviation of the channel fidelity has a roughly stable value. On the other hand, the standard deviations of the off-diagonal elements of the quantum process matrix (Pauli form) decay more quickly than the standard deviations of the diagonal elements.
|
quantum physics
|
The signal measured by an astronomical spectrometer may be due to radiation from a multi-component mixture of plasmas with a range of physical properties (e.g. temperature, Doppler velocity). Confusion between multiple components may be exacerbated if the spectrometer sensor is illuminated by overlapping spectra dispersed from different slits, with each slit being exposed to radiation from a different portion of an extended astrophysical object. We use a compressed sensing method to robustly retrieve the different components. This method can be adopted for a variety of spectrometer configurations, including single-slit, multi-slit (e.g., the proposed MUlti-slit Solar Explorer mission; MUSE) and slot spectrometers (which produce overlappograms).
|
astrophysics
|
The soft X-ray excess - the excess of X-rays below 2 keV with respect to the extrapolation of the hard X-ray spectral continuum model - is a very common feature among type 1 active galactic nuclei (AGN); yet the nature of the soft X-ray excess is still poorly understood and hotly debated. To shed some light on this issue, we have measured in a model-independent way the soft excess strength in a flux-limited sample of broad-line and narrow-line Seyfert 1 galaxies (BLS1s and NLS1s) that are matched in X-ray luminosity but different in terms of the black hole mass and the accretion rate values, with NLS1s being characterized by smaller MBH and larger accretion rate values. Our analysis, in agreement with previous studies carried out with different AGN samples, indicates that: 1) a soft excess is ubiquitously detected in both BLS1s and NLS1s; 2) the strength of the soft excess is significantly larger in the NLS1 sample, compared to the BLS1 sample; 3) combining the two samples, the strength of the soft excess appears to positively correlate with the photon index as well as with the accretion rate, whereas there is no correlation with the black hole mass. Importantly, our work also reveals the lack of an anticorrelation between the soft excess strength and the luminosity of the primary X-ray component, predicted by the absorption and reflection scenarios. Our findings suggest that the soft excess is consistent with being produced by a warm Comptonization component. Larger, more complete samples of NLS1s and BLS1s are needed to confirm these conclusions.
|
astrophysics
|
The Alfv\'en waves are fundamental wave phenomena in magnetized plasmas and the dynamics of Alfv\'en waves are governed by a system of nonlinear partial differential equations called the MHD system. In this paper, we study the rigidity aspect of the scattering problem for the MHD equations: We prove that the Alfv\'en waves must vanish if their scattering fields vanish at infinities. The proof is based on a careful study of the null structure and a family of weighted energy estimates.
|
mathematics
|
Bosonic fields can give rise to self-gravitating structures. These are interesting hypothetical new "dark matter stars" and good descriptions of dark matter haloes if the fields are very light. We study the dynamical response of Newtonian boson stars (NBS) when excited by external matter (stars, planets or black holes) in their vicinities. Our setup can describe the interaction between a massive black hole and the surrounding environment, shortly after the massive body has undergone a "kick", due to the collapse of baryonic matter at the galactic center, or dark matter depletion as a reaction to an inspiralling binary. We perform the first self-consistent calculation of dynamical friction acting on moving bodies in these backgrounds. Binaries close to coalescence "stir" the NBS core, and backreaction affects gravitational waveforms at leading $-6PN$ order with respect to the dominant quadrupolar term; the coefficient is too small to allow detection by next-generation interferometers. We also show that the gravitational collapse to a supermassive black hole at the center of a NBS is accompanied by only a small change in the surrounding core. The NBS eventually gets accreted, but for astrophysical parameters this occurs only after several Hubble times.
|
astrophysics
|
In distributed optimization problems, a technique called gradient coding, which involves replicating data points, has been used to mitigate the effect of straggling machines. Recent work has studied approximate gradient coding, which concerns coding schemes where the replication factor of the data is too low to recover the full gradient exactly. Our work is motivated by the challenge of creating approximate gradient coding schemes that simultaneously work well in both the adversarial and stochastic models. To that end, we introduce novel approximate gradient codes based on expander graphs, in which each machine receives exactly two blocks of data points. We analyze the decoding error both in the random and adversarial straggler setting, when optimal decoding coefficients are used. We show that in the random setting, our schemes achieve an error to the gradient that decays exponentially in the replication factor. In the adversarial setting, the error is nearly a factor of two smaller than any existing code with similar performance in the random setting. We show convergence bounds both in the random and adversarial setting for gradient descent under standard assumptions using our codes. In the random setting, our convergence rate improves upon black-box bounds. In the adversarial setting, we show that gradient descent can converge down to a noise floor that scales linearly with the adversarial error to the gradient. We demonstrate empirically that our schemes achieve near-optimal error in the random setting and converge faster than algorithms which do not use the optimal decoding coefficients.
|
statistics
|
The permeability is one of the most fundamental transport properties in soft matter physics, material engineering, and nanofluidics. Here we report by means of Langevin simulations of ideal penetrants in a nanoscale membrane made of a fixed lattice of attractive interaction sites, how the permeability can be massively tuned, even minimized or maximized, by tailoring the potential energy landscape for the diffusing penetrants, depending on the membrane attraction, topology, and density. Supported by limiting scaling theories we demonstrate that the observed non-monotonic behavior and the occurrence of extreme values of the permeability is far from trivial and triggered by a strong anti-correlation and substantial (orders of magnitude) cancellation between penetrant partitioning and diffusivity, especially within dense and highly attractive membranes.
|
condensed matter
|
The two-level normal hierarchical model has played an important role in statistical theory and applications. In this paper, we first introduce a general adjusted maximum likelihood method for estimating the unknown variance component of the model and the associated empirical best linear unbiased predictor of the random effects. We then discuss a new idea for selecting a prior for the hyperparameters. The prior, called a multi-goal prior, produces Bayesian solutions for hyperparameters and random effects that match (in the higher order asymptotic sense) the corresponding classical solutions in the linear mixed model with respect to several properties. Moreover, we establish for the first time an analytical equivalence of the posterior variances under the proposed multi-goal prior and the corresponding parametric bootstrap second-order mean squared error estimates in the context of a random effects model.
|
statistics
|
With the development of intelligent vehicles, secure and reliable communication between vehicles has become a key problem to be solved in the Internet of Vehicles (IoVs). Blockchain is considered a feasible solution due to its advantages of decentralization, unforgeability and collective maintenance. However, the computing power of nodes in IoVs is limited, while the consensus mechanism of blockchain requires that the miners in the system have strong computing power for mining calculations. The nodes consequently cannot satisfy this requirement, which is a challenge for the application of blockchain in IoVs. In fact, the application of blockchain in IoVs can be implemented by employing edge computing. The key entities of edge computing are the edge servers (ESs). Roadside units (RSUs) can be deployed as ESs of edge computing in IoVs. We study an ES deployment scheme for covering more vehicle nodes in IoVs, and propose a randomized algorithm to calculate approximate solutions. Finally, we simulate the performance of the proposed scheme and compare it with other deployment schemes.
|
electrical engineering and systems science
|
The present work presents our efforts at identifying new mercury-manganese (HgMn/CP3) stars using spectra obtained with the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). Suitable candidates were searched for among pre-selected early-type spectra from LAMOST DR4 using a modified version of the MKCLASS code that probes several Hg II and Mn II features. The spectra of the resulting 332 candidates were visually inspected. Using parallax data and photometry from Gaia DR2, we investigated magnitudes, distances from the Sun, and the evolutionary status of our sample stars. We also searched for variable stars using diverse photometric survey sources. We present 99 bona fide CP3 stars, 19 good CP3 star candidates, and seven candidates. Our sample consists of mostly new discoveries and contains, on average, the faintest CP3 stars known (peak distribution 9.5 < G < 13.5 mag). All stars are contained within the narrow spectral temperature-type range from B6 to B9.5, in excellent agreement with the expectations and the derived mass estimates (2.4 < M(Sun) < 4 for most objects). Our sample stars are between 100 Myr and 500 Myr old and cover the whole age range from zero-age to terminal-age main sequence. They are almost homogeneously distributed at fractional ages on the main sequence < 80%, with an apparent accumulation of objects between fractional ages of 50% to 80%. We find a significant impact of binarity on the mass and age estimates. Eight photometric variables were discovered, most of which show monoperiodic variability in agreement with rotational modulation. Together with the recently published catalogue of APOGEE CP3 stars, our work significantly increases the sample size of known Galactic CP3 stars, paving the way for future in-depth statistical studies.
|
astrophysics
|
Electrostatic interactions make a large contribution to solvation free energy in ionic fluids such as electrolytes and colloidal dispersions. The electrostatic contribution to solvation free energy has been ascribed to the self-energy of a charged particle. Here we apply a variational field theory based on lower bound inequality to the inhomogeneous fluids of one-component charged hard-spheres, thereby verifying that the self-energy is given by the difference between the total correlation function and direct correlation function. Based on the knowledge of the liquid state theory, the self-energy specified in this study not only relates a direct correlation function to the Gaussian smearing of each charged sphere, but also provides the electrostatic contribution to solvation free energy that shows good agreement with simulation results. Furthermore, the Ornstein-Zernike equation leads to a new set of generalized Debye-H\"uckel equations reflecting the Gaussian distributed charges.
|
condensed matter
|
Compressive sensing (CS) has been studied and applied in structural health monitoring for wireless data acquisition and transmission, structural modal identification, and sparse damage identification. The key issue in CS is finding the optimal solution for sparse optimization. In the past years, many algorithms have been proposed in the field of applied mathematics. In this paper, we propose a machine-learning-based approach to solve the CS data-reconstruction problem. By treating a computation process as a data flow, the process of CS-based data reconstruction is formalized into a standard supervised-learning task. The prior knowledge, i.e., the basis matrix and the CS-sampled signals, are used as the input and the target of the network; the basis coefficient matrix is embedded as the parameters of a certain layer; the objective function of conventional compressive sensing is set as the loss function of the network. Regularized by l1-norm, these basis coefficients are optimized to reduce the error between the original CS-sampled signals and the masked reconstructed signals with a common optimization algorithm. Also, the proposed network can handle complex bases, such as a Fourier basis. Benefiting from the nature of a multi-neuron layer, multiple signal channels can be reconstructed simultaneously. Meanwhile, the disassembled use of a large-scale basis makes the method memory-efficient. A numerical example of multiple sinusoidal waves and an example of field-test wireless data from a suspension bridge are carried out to illustrate the data-reconstruction ability of the proposed approach. The results show that high reconstruction accuracy can be obtained by the machine-learning-based approach. Also, the parameters of the network have clear meanings; the inference of the mapping between input and output is fully transparent, making the CS data reconstruction neural network interpretable.
|
electrical engineering and systems science
|
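The l1-regularized objective described in the abstract above is the standard sparse-recovery problem; the minimal NumPy sketch below solves it with plain ISTA instead of the paper's neural-network formulation (the cosine basis, sampling pattern, and parameter values are illustrative assumptions, not taken from the paper).

```python
import numpy as np

def ista(Phi, Psi, y, lam=0.01, iters=500):
    """Minimize 0.5 * ||y - Phi Psi x||_2^2 + lam * ||x||_1 over the basis
    coefficients x by iterative soft-thresholding (proximal gradient)."""
    A = Phi @ Psi
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 256, 64
t = np.arange(n)
Psi = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n)   # cosine basis
x_true = np.zeros(n)
x_true[[5, 40]] = [1.0, -0.7]                # signal sparse in the basis
signal = Psi @ x_true
rows = rng.choice(n, size=m, replace=False)  # random sub-sampling as the CS measurement
Phi = np.eye(n)[rows]
x_hat = ista(Phi, Psi, Phi @ signal)
err = np.linalg.norm(Psi @ x_hat - signal) / np.linalg.norm(signal)
print("relative reconstruction error:", err)
```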
Four-dimensional (4D) left ventricular myocardial velocity mapping (MVM) is a cardiac magnetic resonance (CMR) technique that allows assessment of cardiac motion in three orthogonal directions. Accurate and reproducible delineation of the myocardium is crucial for accurate analysis of peak systolic and diastolic myocardial velocities. In addition to the conventionally available magnitude CMR data, 4D MVM also acquires three velocity-encoded phase datasets which are used to generate velocity maps. These can be used to facilitate and improve myocardial delineation. Based on the success of deep learning in medical image processing, we propose a novel automated framework that improves the standard U-Net based methods on these CMR multi-channel data (magnitude and phase) by cross-channel fusion with an attention module and shape-information-based post-processing to achieve accurate delineation of both epicardium and endocardium contours. To evaluate the results, we employ the widely used Dice scores and the quantification of myocardial longitudinal peak velocities. Our proposed network trained with multi-channel data shows enhanced performance compared to standard U-Net based networks trained with single-channel data. Based on the results, our method provides compelling evidence for the design and application of multi-channel image analysis of the 4D MVM CMR data.
|
electrical engineering and systems science
|
In this paper, we study mathematical functions of relevance to pure gravity in AdS3. Modular covariance places stringent constraints on the space of such functions; modular invariance places even stronger constraints on how they may be combined into physically viable candidate partition functions. We explicitly detail the list of holomorphic and anti-holomorphic functions that serve as candidates for chiral and anti-chiral partition functions and note that modular covariance is only consistent with such functions when the left (resp. right) central charge is an integer multiple of 8, $c\in 8\mathbb{N}$. We then find related constraints on the symmetry group of the corresponding topological, Chern-Simons, theory in the bulk of AdS. The symmetry group of the theory can be one of two choices: either $SO(2; 1) \times SO(2; 1)$ or its three-fold diagonal cover. We introduce the generalized Hecke operators which map modular covariant functions to modular covariant functions. With these mathematical results, we obtain conjectural partition functions for extremal CFT2s, and the corresponding microcanonical entropies, when the chiral central charges are multiples of eight. Finally, we compute subleading corrections to the Bekenstein-Hawking entropy in the bulk gravitational theory with these conjectural partition functions.
|
high energy physics theory
|
Contextuality is a key signature of quantum non-classicality, which has been shown to play a central role in enabling quantum advantage for a wide range of information-processing and computational tasks. We study the logic of contextuality from a structural point of view, in the setting of partial Boolean algebras introduced by Kochen and Specker in their seminal work. These contrast with traditional quantum logic \`a la Birkhoff and von Neumann in that operations such as conjunction and disjunction are partial, only being defined in the domain where they are physically meaningful. We study how this setting relates to current work on contextuality such as the sheaf-theoretic and graph-theoretic approaches. We introduce a general free construction extending the commeasurability relation on a partial Boolean algebra, i.e. the domain of definition of the binary logical operations. This construction has a surprisingly broad range of uses. We apply it in the study of a number of issues, including: - establishing the connection between the abstract measurement scenarios studied in the contextuality literature and the setting of partial Boolean algebras; - formulating various contextuality properties in this setting, including probabilistic contextuality as well as the strong, state-independent notion of contextuality given by Kochen-Specker paradoxes, which are logically contradictory statements validated by partial Boolean algebras, specifically those arising from quantum mechanics; - investigating a Logical Exclusivity Principle, and its relation to the Probabilistic Exclusivity Principle widely studied in recent work on contextuality as a step towards closing in on the set of quantum-realisable correlations; - developing some work towards a logical presentation of the Hilbert space tensor product, using logical exclusivity to capture some of its salient quantum features.
|
quantum physics
|
We give a fast oblivious L2-embedding of $A\in \mathbb{R}^{n \times d}$ to $B\in \mathbb{R}^{r \times d}$ satisfying $(1-\varepsilon)\|A x\|_2^2 \le \|B x\|_2^2 \le (1+\varepsilon) \|Ax\|_2^2$. Our embedding dimension $r$ equals $d$, a constant independent of the distortion $\varepsilon$. We use as a black-box any L2-embedding $\Pi^T A$ and inherit its runtime and accuracy, effectively decoupling the dimension $r$ from runtime and accuracy, allowing downstream machine learning applications to benefit from both a low dimension and high accuracy (in prior embeddings higher accuracy means higher dimension). We give applications of our L2-embedding to regression, PCA and statistical leverage scores. We also give applications to L1: 1.) An oblivious L1-embedding with dimension $d+O(d\ln^{1+\eta} d)$ and distortion $O((d\ln d)/\ln\ln d)$, with application to constructing well-conditioned bases; 2.) Fast approximation of L1-Lewis weights using our L2 embedding to quickly approximate L2-leverage scores.
|
computer science
|
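The "embedding dimension equals $d$" idea in the abstract above can be illustrated with a short sketch: apply any black-box L2-embedding (a dense Gaussian sketch is used here only for simplicity; the point is that a fast embedding can be plugged in), QR-factorize the sketched matrix, and the $d \times d$ triangular factor preserves exactly the same norms as the sketch while having only $d$ rows. This is a toy illustration of the decoupling, not the paper's algorithm.

```python
import numpy as np

def embed_to_d(A, sketch_rows=2000, rng=None):
    """Return a d x d matrix B with ||B x|| = ||Pi A x|| for all x, where Pi
    is a black-box L2-embedding; B therefore inherits Pi's distortion while
    its dimension is just d."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    Pi = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)  # stand-in embedding
    _, R = np.linalg.qr(Pi @ A)   # Pi A = Q R with Q having orthonormal columns
    return R                      # ||R x|| = ||Q R x|| = ||Pi A x||

rng = np.random.default_rng(1)
A = rng.standard_normal((20000, 30))
B = embed_to_d(A, rng=rng)
x = rng.standard_normal(30)
print(B.shape)                                        # (30, 30)
print(np.linalg.norm(B @ x) / np.linalg.norm(A @ x))  # close to 1
```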
Any particular classical system and its quantum version are normally viewed as separate formulations that are strictly distinct. Our goal is to overcome the two separate languages and create a smooth and common procedure that provides a clear and continuous passage between the conventional distinction of either a strictly classical or a strictly quantized state. While path integration, among other procedures, provides an alternative route to connect classical and quantum expressions, it normally involves complicated, model-dependent, integrations. Our alternative procedures involve only model-independent procedures, and use more natural and straightforward integrations that are universal in kind. To introduce the basic procedures our presentation begins with familiar methods that are limited to basic, conventional, canonical quantum mechanical examples. In the final sections we illustrate how alternative quantization procedures, e.g., spin and affine quantizations, can also have smooth paths between classical and quantum stories, and with a few brief remarks, can also lead to similar stories for non-renormalizable covariant scalar fields as well as quantum gravity.
|
quantum physics
|
Clinical researchers often select among and evaluate risk prediction models using standard machine learning metrics based on confusion matrices. However, if these models are used to allocate interventions to patients, standard metrics calculated from retrospective data are only related to model utility (in terms of reductions in outcomes) under certain assumptions. When predictions are delivered repeatedly throughout time (e.g. in a patient encounter), the relationship between standard metrics and utility is further complicated. Several kinds of evaluations have been used in the literature, but it has not been clear what the target of estimation is in each evaluation. We synthesize these approaches, determine what is being estimated in each of them, and discuss under what assumptions those estimates are valid. We demonstrate our insights using simulated data as well as real data used in the design of an early warning system. Our theoretical and empirical results show that evaluations without interventional data either do not estimate meaningful quantities, require strong assumptions, or are limited to estimating best-case scenario bounds.
|
statistics
|
We derive positivity bounds on low energy effective field theories which admit gapped, analytic, unitary, Lorentz invariant, and possibly non-local UV completions, by considering 2 to 2 scatterings of Jaffe fields whose Lehmann-K\"{a}ll\'{e}n spectral density can grow exponentially. Several properties of S-matrix, such as analyticity properties, are assumed in our derivation. Interestingly, we find that some of the positivity bounds obtained in the literature, such as sub-leading order forward-limit bounds, must be satisfied even when UV completions fall into non-localizable theories in Jaffe's language, unless momentum space Wightman functions grow too rapidly at high energy. Under this restriction on the growth rate, such bounds may provide IR obstructions to analytic, unitary, and Lorentz invariant UV completions.
|
high energy physics theory
|
In this paper we study the Kotzinian-Mulders effect of a single hadron production in semi-inclusive deep inelastic scattering (SIDIS) within the framework of transverse momentum dependent (TMD) factorization. The asymmetry is contributed by the convolution of the Kotzinian-Mulders function $g_{1T}$ and the unpolarized fragmentation function $D_1$. As a TMD distribution, the Kotzinian-Mulders function in the coordinate space in the perturbative region can be represented as the convolution of the $C$-coefficients and the corresponding collinear correlation function. The Wandzura-Wilczek approximation is used to obtain this correlation function. We perform a detailed phenomenological numerical analysis of the Kotzinian-Mulders effect in the SIDIS process within TMD factorization at the kinematics of the HERMES and COMPASS measurements. It is found that the obtained $x_B$-, $z_h$- and $P_{h\perp}$-dependent Kotzinian-Mulders effect are basically consistent with the HERMES and COMPASS measurements.
|
high energy physics phenomenology
|
Likelihood-free methods are useful for parameter estimation of complex models with intractable likelihood functions for which it is easy to simulate data. Such models are prevalent in many disciplines including genetics, biology, ecology and cosmology. Likelihood-free methods avoid explicit likelihood evaluation by finding parameter values of the model that generate data close to the observed data. The general consensus has been that it is most efficient to compare datasets on the basis of a low dimensional informative summary statistic, incurring information loss in favour of reduced dimensionality. More recently, researchers have explored various approaches for efficiently comparing empirical distributions in the likelihood-free context in an effort to avoid data summarisation. This article provides a review of these full data distance based approaches, and conducts the first comprehensive comparison of such methods, both qualitatively and empirically. We also conduct a substantive empirical comparison with summary statistic based likelihood-free methods. The discussion and results offer guidance to practitioners considering a likelihood-free approach. Whilst we find the best approach to be problem dependent, we also find that the full data distance based approaches are promising and warrant further development. We discuss some opportunities for future research in this space.
|
statistics
|
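As a concrete and deliberately simple illustration of the full data distance based likelihood-free methods reviewed in the abstract above, the sketch below runs rejection ABC with a one-dimensional Wasserstein distance between simulated and observed samples; the gamma toy model, tolerance, and prior are invented for this example and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, size=200):
    # stand-in for a model that is easy to simulate from
    return rng.gamma(shape=theta, scale=1.0, size=size)

def wasserstein_1d(x, y):
    # 1-D Wasserstein distance between equal-size empirical samples
    return np.abs(np.sort(x) - np.sort(y)).mean()

observed = simulate(3.0)                            # data from the "true" theta = 3

accepted = []
for theta in rng.uniform(0.5, 8.0, size=20_000):    # draws from a flat prior
    if wasserstein_1d(simulate(theta), observed) < 0.3:
        accepted.append(theta)                      # keep parameters whose full data match

print("acceptances:", len(accepted))
print("approximate posterior mean:", np.mean(accepted))   # concentrates near 3
```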
The spectrum of the non-backtracking matrix plays a crucial role in determining various structural and dynamical properties of networked systems, ranging from the threshold in bond percolation and non-recurrent epidemic processes, to community structure, to node importance. Here we calculate the largest eigenvalue of the non-backtracking matrix and the associated non-backtracking centrality for uncorrelated random networks, finding expressions in excellent agreement with numerical results. We show however that the same formulas do not work well for many real-world networks. We identify the mechanism responsible for this violation in the localization of the non-backtracking centrality on network subgraphs whose formation is highly unlikely in uncorrelated networks, but rather common in real-world structures. Exploiting this knowledge we present a heuristic generalized formula for the largest eigenvalue, which is remarkably accurate for all networks of a large empirical dataset. We show that this newly uncovered localization phenomenon allows us to understand the failure of the message-passing prediction for the percolation threshold in many real-world structures.
|
physics
|
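The uncorrelated-network prediction discussed in the abstract above, namely that the leading non-backtracking eigenvalue equals $\langle k^2\rangle/\langle k\rangle - 1$, can be checked numerically through the Ihara-Bass reduction; the script below is an independent sanity check on an Erdos-Renyi graph, not the authors' code.

```python
import numpy as np
import networkx as nx
from scipy import sparse
from scipy.sparse.linalg import eigs

# Uncorrelated random graph with mean degree ~6.
G = nx.erdos_renyi_graph(n=5000, p=6.0 / 5000, seed=0)
A = sparse.csr_matrix(nx.adjacency_matrix(G), dtype=float)
k = np.asarray(A.sum(axis=1)).ravel()
prediction = (k**2).mean() / k.mean() - 1.0

# Ihara-Bass reduction: apart from +-1, the non-backtracking spectrum equals
# the spectrum of the 2N x 2N block matrix [[A, I - D], [I, 0]].
n = A.shape[0]
I = sparse.identity(n, format="csr")
D = sparse.diags(k)
M = sparse.bmat([[A, I - D], [I, None]], format="csr")
leading = eigs(M, k=1, which="LR", return_eigenvectors=False)[0].real

print("prediction <k^2>/<k> - 1 :", prediction)
print("leading NB eigenvalue    :", leading)
```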
Free-electron lasers (FELs) operate at wavelengths from millimeter waves through hard x-rays. At x-ray wavelengths, FELs typically rely on self-amplified spontaneous emission (SASE). Typical SASE emission contains multiple temporal "spikes" which limit the longitudinal coherence of the optical output; hence, alternate schemes that improve on the longitudinal coherence of the SASE emission are of interest. In this paper, we consider electron bunches that are shorter than the SASE spike separation. In such cases, the spontaneously generated radiation consists of a single optical pulse with better longitudinal coherence than is found in typical SASE FELs. To investigate this regime, we use two FEL simulation codes. One (MINERVA) uses the slowly-varying envelope approximation (SVEA), which breaks down for extremely short pulses. The second (PUFFIN) is a particle-in-cell (PiC) simulation code that is considered to be a more complete model of the underlying physics and which is able to simulate very short pulses. We first anchor these codes by showing that there is substantial agreement between the codes in simulation of the SPARC SASE FEL experiment at ENEA Frascati. We then compare the two codes for simulations using electron bunch lengths that are shorter than the SASE slice separation. The comparisons between the two codes for short bunch simulations elucidate the limitations of the SVEA in this regime but indicate that the SVEA can treat short bunches that are comparable to the cooperation length.
|
physics
|
In this paper, we conduct data selection analysis in building an English-Mandarin code-switching (CS) speech recognition (CSSR) system, which is aimed at a real CSSR contest in China. The overall training set has three subsets, i.e., a code-switching data set, an English data set (LibriSpeech) and a Mandarin data set, respectively. The code-switching data are Mandarin dominated. First of all, it is found that using the overall data yields worse results, and hence a data selection study is necessary. Then, to exploit monolingual data, we find that data matching is crucial. Mandarin data is closely matched with the Mandarin part in the code-switching data, while English data is not. However, Mandarin data only helps on those utterances that are significantly Mandarin-dominated. Besides, there is a balance point, over which more monolingual data will divert the CSSR system, degrading results. Finally, we analyze the effectiveness of combining monolingual data to train a CSSR system with the HMM-DNN hybrid framework. The CSSR system can perform within-utterance code-switch recognition, but it still has a performance gap compared with the one trained on code-switching data.
|
electrical engineering and systems science
|
We analyze the optimization landscapes of deep learning with wide networks. We highlight the importance of constraints for such networks and show that constrained -- as well as unconstrained -- empirical-risk minimization over such networks has no confined points, that is, suboptimal parameters that are difficult to escape from. Hence, our theories substantiate the common belief that wide neural networks are not only highly expressive but also comparably easy to optimize.
|
computer science
|
Recent advances in electrostatic gating provide a novel way to modify the carrier concentration in materials via electrostatic means instead of chemical doping, thus minimizing the impurity scattering. Here, we use first-principles Density Functional Theory combined with a tight-binding approach to compare and contrast the effects of electrostatic gating and Co chemical doping on the ferromagnetic transition of FeS$_2$, a transition metal disulfide with the pyrite structure. Using tight-binding parameters obtained from maximally-localized Wannier functions, we calculate the magnetic susceptibility across a wide doping range. We find that electrostatic gating requires a higher electron concentration than the equivalent in Co doping to induce ferromagnetism via a Stoner-like mechanism. We attribute this behavior to the formation of a narrow Co band near the bottom of the conduction band under chemical doping, which is absent in the electrostatic gating case. Our results reveal that the effects of electrostatic gating go beyond a simple rigid band shift, and highlight the importance of the changes in the crystal structure promoted by gating.
|
condensed matter
|
We prove a generalization of the parallel adversary method to multi-valued functions, and apply it to prove that there is no parallel quantum advantage for approximate counting.
|
quantum physics
|
In this paper, we consider the problem of distributed consensus optimization over multi-agent networks with directed network topology. Assuming each agent has a local cost function that is smooth and strongly convex, the global objective is to minimize the average of all the local cost functions. To solve the problem, we introduce a robust gradient tracking method (R-Push-Pull) adapted from the recently proposed Push-Pull/AB algorithm. R-Push-Pull inherits the advantages of Push-Pull and enjoys linear convergence to the optimal solution with exact communication. Under noisy information exchange, R-Push-Pull is more robust than the existing gradient tracking based algorithms; the solutions obtained by each agent reach a neighborhood of the optimum in expectation exponentially fast under a constant stepsize policy. We provide a numerical example that demonstrates the effectiveness of R-Push-Pull.
|
mathematics
|
The band-gaps of CsPbI$_3$ perovskite nanocrystals are measured by absorption spectroscopy at cryogenic temperatures. Anomalous band-gap shifts are observed in CsPbI$_3$ nanocubes and nanoplatelets, which are modeled accurately by band-gap renormalization due to lattice vibrational modes. We find that decreasing dimensionality of the CsPbI$_3$ lattice in nanoplatelets greatly reduces electron-phonon coupling, and dominant out-of-plane quantum confinement results in a homogeneously broadened absorption lineshape down to cryogenic temperatures. An absorption tail forms at low-temperatures in CsPbI$_3$ nanocubes, which we attribute to shallow defect states positioned near the valence band-edge.
|
physics
|
We give rigorous analytical results on the temporal behavior of two-point correlation functions --also known as dynamical response functions or Green's functions-- in closed many-body quantum systems. We show that in a large class of translation-invariant models the correlation functions factorize at late times $\langle A(t) B\rangle_\beta \rightarrow \langle A \rangle_\beta \langle B \rangle_\beta$, thus proving that dissipation emerges out of the unitary dynamics of the system. We also show that for systems with a generic spectrum the fluctuations around this late-time value are bounded by the purity of the thermal ensemble, which generally decays exponentially with system size. For auto-correlation functions we provide an upper bound on the timescale at which they reach the factorized late time value. Remarkably, this bound is only a function of local expectation values, and does not increase with system size. We give numerical examples that show that this bound is a good estimate in non-integrable models, and argue that the timescale that appears can be understood in terms of an emergent fluctuation-dissipation theorem. Our study extends to further classes of two point functions such as the symmetrized ones and the Kubo function that appears in linear response theory, for which we give analogous results.
|
quantum physics
|
We study the strong and radiative decays of the anti-quark-quark ground state $J^{PC} = 3^{--}$ ($n^{2 S + 1} L_J = 1^3 D_3$) nonet {$\rho_{3} (1690)$, $K_{3}^{\ast} (1780)$, $\phi_{3} (1850)$, $\omega_{3} (1670)$} in the framework of an effective quantum field theory approach, based on the $SU_\mathrm{V}(3)$-flavor-symmetry. The effective model is fitted to experimental data listed by the Particle Data Group. We predict numerous experimentally unknown decay widths and branching ratios. An overall agreement of theory (fit and predictions) with experimental data confirms the $\bar{q} q$ nature of the states and qualitatively validates the effective approach. Naturally, experimental clarification as well as advanced theoretical description is needed for trustworthy quantitative predictions, which is observed from some of the decay channels. Besides conventional spin-$3$ mesons, theoretical predictions for ratios of strong and radiative decays of a hypothetical glueball state $G_3 (4200)$ with $J^{PC} = 3^{--}$ are also presented.
|
high energy physics phenomenology
|
The Hartle-Hawking and Tunneling (Vilenkin) wave functions are treated in the Hamiltonian formalism. We find that the leading (i.e. quadratic) terms in the fluctuations around a maximally symmetric background, are indeed Gaussian (rather than inverse Gaussian), for both types of wave function, when properly interpreted. However the suppression of non-Gaussianities and hence the recovery of the Bunch-Davies state is not transparent.
|
high energy physics theory
|
We present Arepo-MCRT, a novel Monte Carlo radiative transfer (MCRT) radiation-hydrodynamics (RHD) solver for the unstructured moving-mesh code Arepo. Our method is designed for general multiple scattering problems in both optically thin and thick conditions. We incorporate numerous efficiency improvements and noise reduction schemes to help overcome efficiency barriers that typically inhibit convergence. These include continuous absorption and energy deposition, photon weighting and luminosity boosting, local packet merging and splitting, path-based statistical estimators, conservative (face-centered) momentum coupling, adaptive convergence between time steps, implicit Monte Carlo algorithms for thermal emission, and discrete-diffusion Monte Carlo techniques for unresolved scattering, including a novel advection scheme. We primarily focus on the unique aspects of our implementation and discussions of the advantages and drawbacks of our methods in various astrophysical contexts. Finally, we consider several test applications including the levitation of an optically thick layer of gas by trapped infrared radiation. We find that the initial acceleration phase and revitalized second wind are connected via self-regulation of the RHD coupling, such that the RHD method accuracy and simulation resolution each leave important imprints on the long-term behavior of the gas.
|
astrophysics
|
The IoT ecosystem suffers from a variety of problems around security, identity, access control, data flow and data storage that introduce friction into interactions between various parties. In many respects, the situation is similar to the early days of the Internet, where, prior to the establishment of Internet Exchanges, routing between different BGP autonomous systems was often point to point. We propose a similar solution, the IoT Exchange, where IoT device owners can register their devices and offer data for sale or can upload data into the IoT services of any of the big hyperscale cloud platforms for further processing. The goal of the IoT Exchange is to break down the silos within which device wireless connectivity types and cloud provider IoT systems constrain users to operate. In addition, if the device owner needs to maintain the data close to the edge to reduce access latency, the MillenniumDB service running in an edge data center with minimal latency to the edge device, provides a database with a variety of schema engines (SQL, noSQL, etc). The IoT exchange uses decentralized identifiers for identity management and verifiable credentials for authorizing software updates and to control access to the devices, to avoid dependence on certificate authorities and other centralized identity and authorization management systems. In addition, verifiable credentials provide a way whereby privacy preserving processing can be applied to traffic between a device and an end data or control customer, if some risk of privacy compromise exists.
|
computer science
|
Lung cancer is one of the most deadly diseases in the world. Detecting such tumors at an early stage can be a tedious task. Existing deep learning architectures for lung nodule identification use complex designs with a large number of parameters. This study developed a cascaded architecture which can accurately segment and classify benign or malignant lung nodules on computed tomography (CT) images. The main contribution of this study is a segmentation network whose first stage, trained on a public data set, can help recognize images that contain a nodule from any data set by means of transfer learning. The segmentation of a nodule then helps the second stage classify the nodules as benign or malignant. The proposed architecture outperformed conventional methods with an area under the curve value of 95.67\%. The experimental results showed that our proposed architecture, with a classification accuracy of 97.96\%, outperformed other simple and complex architectures in classifying lung nodules for lung cancer detection.
|
electrical engineering and systems science
|
Humans can infer approximate interaction force between objects from only vision information because we already have learned it through experiences. Based on this idea, we propose a recurrent convolutional neural network-based method using sequential images for inferring interaction force without using a haptic sensor. For training and validating deep learning methods, we collected a large number of images and corresponding interaction forces through an electronic motor-based device. To concentrate on changing shapes of a target object by the external force in images, we propose a sequential image-based attention module, which learns a salient model from temporal dynamics. The proposed sequential image-based attention module consists of a sequential spatial attention module and a sequential channel attention module, which are extended to exploit multiple sequential images. For gaining better accuracy, we also created a weighted average pooling layer for both spatial and channel attention modules. The extensive experimental results verified that the proposed method successfully infers interaction forces under the various conditions, such as different target materials, illumination changes, and external force directions.
|
computer science
|
The Mars Odyssey Neutron Spectrometer (MONS), designed and built by Los Alamos National Laboratory, has been in excellent health, operating from February 2002 to the present. MONS measures the neutron leakage albedo from galactic cosmic ray bombardment of Mars. These signals can indicate the presence of near-surface water deposits on Mars, and can also be used to study properties of the seasonal polar CO$_2$ ice caps. This work outlines a new analysis of the MONS data that results in new and extended time-series maps of MONS thermal and epithermal neutron data. The new data are compared to previous publications on the MONS instrument. We then present preliminary results studying the inter-annual variability in the polar regions of Mars based on 8 Mars-Years of MONS data from the new dataset.
|
astrophysics
|
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera or illumination source is a challenging task with vital applications including surveillance and robotics. Recent NLOS reconstruction advances have been achieved using time-resolved measurements, which require expensive and specialized detectors and laser sources. In contrast, we propose a data-driven approach for NLOS 3D localization and object identification requiring only a conventional camera and projector. To generalize to complex line-of-sight (LOS) scenes with non-planar surfaces and occlusions, we introduce an adaptive lighting algorithm. This algorithm, based on radiosity, identifies and illuminates scene patches in the LOS which most contribute to the NLOS light paths, and can factor in system power constraints. We achieve an average object identification rate of 87.1% for four classes of objects, and average localization of the NLOS object's centroid with a mean-squared error (MSE) of 1.97 cm in the occluded region for real data taken from a hardware prototype. These results demonstrate the advantage of combining the physics of light transport with active illumination for data-driven NLOS imaging.
|
electrical engineering and systems science
|
In the past decade, the analysis of exoplanet atmospheric spectra has revealed the presence of water vapour in almost all the planets observed, with the exception of a fraction of overcast planets. Indeed, water vapour presents a large absorption signature in the wavelength coverage of the Hubble Space Telescope's (HST) Wide Field Camera 3 (WFC3), which is the main space-based observatory for atmospheric studies of exoplanets, making its detection very robust. However, while carbon-bearing species such as methane, carbon monoxide and carbon dioxide are also predicted from current chemical models, their direct detection and abundance characterisation has remained a challenge. Here we analyse the transmission spectrum of the puffy, clear hot-Jupiter KELT-11 b from the HST WFC3 camera. We find that the spectrum is consistent with the presence of water vapour and an additional absorption at wavelengths longer than 1.5 $\mu$m, which could well be explained by a mix of carbon-bearing molecules. CO2, when included, is systematically detected. One of the main difficulties in constraining the abundance of those molecules is their weak signatures across the HST WFC3 wavelength coverage, particularly when compared to those of water. Through a comprehensive retrieval analysis, we attempt to explain the main degeneracies present in this dataset and explore some of the recurrent challenges that occur in retrieval studies (e.g., the impact of model selection, the use of free vs self-consistent chemistry, and the combination of instrument observations). Our results make this planet an exceptional chemical laboratory in which to test current physical and chemical models of hot-Jupiter atmospheres.
|
astrophysics
|
For an arbitrary prime number $p$, we propose an action for bosonic $p$-adic strings in curved target spacetime, and show that the vacuum Einstein equations of the target are a consequence of worldsheet scaling symmetry of the quantum $p$-adic strings, similar to the ordinary bosonic strings case. It turns out that certain $p$-adic automorphic forms are the plane wave modes of the bosonic fields on $p$-adic strings, and that the regularized normalization of these modes on the $p$-adic worldsheet presents peculiar features which reduce part of the computations to familiar setups in quantum field theory, while also exhibiting some new features that make loop diagrams much simpler. Assuming a certain product relation, we also observe that the adelic spectrum of the bosonic string corresponds to the nontrivial zeros of the Riemann Zeta function.
|
high energy physics theory
|
We suggest that an interplay between microscopic and macroscopic physics can give rise to dark matter (DM) whose interactions with the visible sector fundamentally undulate in time, independent of celestial dynamics. A concrete example is provided by fermionic DM with an electric dipole moment (EDM) sourced by an oscillating axion-like field, resulting in undulations in the scattering rate. The discovery potential of light DM searches can be enhanced by additionally searching for undulating scattering rates, especially in detection regions where background rates are large and difficult to estimate, such as for DM masses in the vicinity of 1 MeV where DM-electron scattering dominantly populates the single electron bin. An undulating signal could also reveal precious dark sector information after discovery. In this regard we emphasise that, if the recent XENON1T excess of events is due to light DM scattering exothermically off electrons, future analyses of the time-dependence of events could offer clues as to the microscopic origins of the putative signal.
|
high energy physics phenomenology
|
A dynamic treatment regimen (DTR) is a pre-specified sequence of decision rules which maps baseline or time-varying measurements on an individual to a recommended intervention or set of interventions. Sequential multiple assignment randomized trials (SMARTs) represent an important data collection tool for informing the construction of effective DTRs. A common primary aim in a SMART is the marginal mean comparison between two or more of the DTRs embedded in the trial. This manuscript develops a mixed effects modeling and estimation approach for these primary aim comparisons based on a continuous, longitudinal outcome. The method is illustrated using data from a SMART in autism research.
|
statistics
|
For fixed integers $b\geq k$, a problem of relevant interest in computer science and combinatorics is that of determining the asymptotic growth, with $n$, of the largest set for which a $(b, k)$-hash family of $n$ functions exists. Equivalently, determining the asymptotic growth of a largest subset of $\{1,2,\ldots,b\}^n$ such that, for any $k$ distinct elements in the set, there is a coordinate where they all differ. An important asymptotic upper bound for general $b, k$, was derived by Fredman and Koml\'os in the '80s and improved for certain $b\neq k$ by K\"orner and Marton and by Arikan. Only very recently better bounds were derived for the general $b,k$ case by Guruswami and Riazanov while stronger results for small values of $b=k$ were obtained by Arikan, by Dalai, Guruswami and Radhakrishnan and by Costa and Dalai. In this paper, we both show how some of the latter results extend to $b\neq k$ and further strengthen the bounds for some specific small values of $b$ and $k$. The method we use, which depends on the reduction of an optimization problem to a finite number of cases, shows that further results might be obtained by refined arguments at the expense of higher complexity which could be reduced by using more sophisticated and optimized algorithmic approaches.
|
mathematics
|
In many network problems, graphs may change by the addition of nodes, or the same problem may need to be solved in multiple similar graphs. This generates inefficiency, as analyses and systems that are not transferable have to be redesigned. To address this, we consider graphons, which are both limit objects of convergent graph sequences and random graph models. We define graphon signals and introduce the Graphon Fourier Transform (WFT), to which the Graph Fourier Transform (GFT) is shown to converge. This result is demonstrated in two numerical experiments where, as expected, the GFT converges, hinting at the possibility of centralizing analysis and design on graphons to leverage transferability.
|
electrical engineering and systems science
|
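A minimal numerical sketch of the convergence claimed in the abstract above, using an invented graphon and signal rather than one of the paper's experiments: sample weighted graphs of growing size from a fixed graphon, compute the GFT of the induced signal, and watch the leading coefficients stabilize, which is the convergence of the GFT towards the WFT.

```python
import numpy as np

W = lambda x, y: np.exp(-3.0 * np.abs(x - y))   # a smooth graphon
X = lambda x: np.cos(2.0 * np.pi * x)           # a graphon signal

for n in (50, 200, 800):
    u = (np.arange(n) + 0.5) / n                # regular sample points in [0, 1]
    A = W(u[:, None], u[None, :]) / n           # graphon-induced weighted adjacency
    x = X(u)
    lam, V = np.linalg.eigh(A)                  # graph frequencies and modes
    gft = V.T @ x / np.sqrt(n)                  # GFT, normalized to match L2([0, 1])
    top = np.argsort(-np.abs(lam))[:3]
    print(n, np.round(lam[top], 4), np.round(np.abs(gft[top]), 4))
```

As n grows, both the eigenvalues and the magnitudes of the leading GFT coefficients approach limiting values, namely the spectrum of the graphon operator and the corresponding WFT coefficients.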
Quantum key distribution can provide unconditionally secure key exchange for remote users in theory. In practice, however, in most quantum key distribution systems, quantum hackers might steal the secure keys by listening to the side channels in the source, such as the photon frequency spectrum, emission time, propagation direction, spatial angular momentum, and so on. It is hard to prevent such kinds of attacks because side channels may exist in any of the encoding spaces, whether or not the designers take care of them. Here we report an experimental realization of a side-channel-free quantum key distribution protocol which is not only measurement-device-independent, but also immune to all side-channel attacks in the source. We achieve a secure key rate of $4.80\times10^{-7}$ per pulse through 50 km of fiber spools.
|
quantum physics
|
Topological states of matter, such as fractional quantum Hall states, are an active field of research due to their exotic excitations. In particular, ultracold atoms in optical lattices provide a highly controllable and adaptable platform to study such new types of quantum matter. However, finding a clear route to realize non-Abelian quantum Hall states in these systems remains challenging. Here we use the density-matrix renormalization-group (DMRG) method to study the Hofstadter-Bose-Hubbard model at filling factor $\nu = 1$ and find strong indications that at $\alpha=1/6$ magnetic flux quanta per plaquette the ground state is a lattice analog of the continuum non-Abelian Pfaffian. We study the on-site correlations of the ground state, which indicate its paired nature at $\nu = 1$, and find an incompressible state characterized by a charge gap in the bulk. We argue that the emergence of a charge density wave on thin cylinders and the behavior of the two- and three-particle correlation functions at short distances provide evidence for the state being closely related to the continuum Pfaffian. The signatures discussed in this letter are accessible in current cold atom experiments and we show that the Pfaffian-like state is readily realizable in few-body systems using adiabatic preparation schemes.
|
condensed matter
|
Traditionally, measuring the center-of-mass (c.m.) velocity of an atomic ensemble relies on measuring the Doppler shift of the absorption spectrum of single atoms in the ensemble. Mapping out the velocity distribution of the ensemble is indispensable when determining the c.m. velocity using this technique. As a result, highly sensitive measurements require preparation of an ensemble with a narrow Doppler width. Here, we use a dispersive measurement of light passing through a moving room-temperature atomic vapor cell to determine the velocity of the cell in a single shot with a short-term sensitivity of 5.5 $\mu$m s$^{-1}$ Hz$^{-1/2}$. The dispersion of the medium is enhanced by creating quantum interference through an auxiliary transition for the probe light under the electromagnetically induced transparency condition. In contrast to measurements of single atoms, this method is based on the collective motion of atoms and can sense the c.m. velocity of an ensemble without knowing its velocity distribution. Our results improve on the previous measurements by 3 orders of magnitude and can be used to design a compact motional sensor based on thermal atoms.
|
physics
|
Wide-area data and algorithms in large power systems are creating new opportunities for the implementation of measurement-based dynamic load modeling techniques. These techniques improve the accuracy of dynamic load models, which are an integral part of transient stability analysis. Measurement-based load modeling techniques commonly assume response error is correlated to system or model accuracy. Response error is the difference between simulation output and phasor measurement unit (PMU) samples. This paper investigates similarity measures, output types, simulation time spans, and disturbance types used to generate response error and the correlation of the response error to system accuracy. This paper aims to address two questions: 1) can response error determine the total system accuracy? and 2) can response error indicate if a dynamic load model being used at a bus is sufficiently accurate? The results of the study show only specific combinations of metrics yield statistically significant correlations, and there is no consistent pattern in which combinations of metrics deliver significant correlations. Less than 20% of all simulated tests in this study resulted in statistically significant correlations. These outcomes highlight concerns with common measurement-based load modeling techniques, raising awareness of the importance of careful selection and validation of similarity measures and response output metrics. Naive or untested selection of metrics can deliver inaccurate and misleading results.
|
electrical engineering and systems science
|
We integrate information-theoretic concepts into the design and analysis of optimistic algorithms and Thompson sampling. By making a connection between information-theoretic quantities and confidence bounds, we obtain results that relate the per-period performance of the agent with its information gain about the environment, thus explicitly characterizing the exploration-exploitation tradeoff. The resulting cumulative regret bound depends on the agent's uncertainty over the environment and quantifies the value of prior information. We show applicability of this approach to several environments, including linear bandits, tabular MDPs, and factored MDPs. These examples demonstrate the potential of a general information-theoretic approach for the design and analysis of reinforcement learning algorithms.
|
statistics
|
We present an analysis of the gas kinematics of the Integral Shaped Filament (ISF) in Orion A using four different molecular lines, $^{12}$CO (1-0), $^{13}$CO (1-0), NH$_3$ (1,1), and N$_2$H$^+$ (1-0). We describe our method to visualize the position-velocity (PV) structure using the intensity-weighted line velocity centroid, which enables us to identify structures that were previously muddled or invisible. We observe a north to south velocity gradient in all tracers that terminates in a velocity peak near the center of the Orion Nebula Cluster (ONC), consistent with the previously reported "wave-like" properties of the ISF. We extract the velocity dispersion profiles and compare the non-thermal line widths to the gas gravitational potential. We find supersonic Mach number profiles, yet the line widths are consistent with the gas being deeply gravitationally bound. We report the presence of two $^{12}$CO velocity components along the northern half of the ISF; if interpreted as circular rotation, the angular velocity is $\omega=1.4\,{\rm Myr}^{-1}$. On small scales we report the detection of N$_2$H$^+$ and NH$_3$ "twisting and turning" structures, with short associated timescales that give the impression of a torsional wave. Neither the nature of these structures nor their relation to the larger scale wave is presently understood.
|
astrophysics
|
Fictitious Play (FP) is a simple and natural dynamic for repeated play in zero-sum games. Proposed by Brown in 1949, FP was shown to converge to a Nash Equilibrium by Robinson in 1951, albeit at a slow rate that may depend on the dimension of the problem. In 1959, Karlin conjectured that FP converges at the more natural rate of $O(1/\sqrt{t})$. However, Daskalakis and Pan disproved a version of this conjecture in 2014, showing that a slow rate can occur, although their result relies on adversarial tie-breaking. In this paper, we show that Karlin's conjecture is indeed correct for the class of diagonal payoff matrices, as long as ties are broken lexicographically. Specifically, we show that FP converges at a $O(1/\sqrt{t})$ rate in the case when the payoff matrix is diagonal. We also prove this bound is tight by showing a matching lower bound in the identity payoff case under the lexicographic tie-breaking assumption.
|
computer science
|
We extract the pion transverse momentum dependent (TMD) parton distribution by fitting the pion-induced Drell-Yan process within the framework of TMD factorization. The analysis is done at next-to-next-to-leading order (NNLO) with the proton TMD distribution and non-perturbative TMD evolution extracted earlier in a global fit. We observe a significant difference in normalization between the transverse momentum differential cross-section measured by the E615 experiment and the theory prediction.
|
high energy physics phenomenology
|
Compiling quantum algorithms for near-term quantum computers (accounting for connectivity and native gate alphabets) is a major challenge that has received significant attention both by industry and academia. Avoiding the exponential overhead of classical simulation of quantum dynamics will allow compilation of larger algorithms, and a strategy for this is to evaluate an algorithm's cost on a quantum computer. To this end, we propose a variational hybrid quantum-classical algorithm called quantum-assisted quantum compiling (QAQC). In QAQC, we use the overlap between a target unitary $U$ and a trainable unitary $V$ as the cost function to be evaluated on the quantum computer. More precisely, to ensure that QAQC scales well with problem size, our cost involves not only the global overlap ${\rm Tr} (V^\dagger U)$ but also the local overlaps with respect to individual qubits. We introduce novel short-depth quantum circuits to quantify the terms in our cost function, and we prove that our cost cannot be efficiently approximated with a classical algorithm under reasonable complexity assumptions. We present both gradient-free and gradient-based approaches to minimizing this cost. As a demonstration of QAQC, we compile various one-qubit gates on IBM's and Rigetti's quantum computers into their respective native gate alphabets. Furthermore, we successfully simulate QAQC up to a problem size of 9 qubits, and these simulations highlight both the scalability of our cost function as well as the noise resilience of QAQC. Future applications of QAQC include algorithm depth compression, black-box compiling, noise mitigation, and benchmarking.
|
quantum physics
|
In the present work we investigate the r-mode instability windows, spindown and spindown rates of sub- and super-Chandrasekhar magnetized white dwarfs in the presence of Landau quantization of the electron gas and magnetic braking. The gravitational wave strain amplitudes due to the r-mode instability are also calculated. The dominant damping mechanism is taken to be the shear viscosity arising due to scattering of the degenerate electrons from the ion liquid. We find that the critical frequencies of Landau quantized magnetized white dwarfs are the lowest, those of non-Landau quantized ones are higher and those of non-magnetized ones are the highest at the same temperature. This implies that magnetic braking and Landau quantization both enhance the r-mode instability. We have also seen that there is rapid spindown of magnetized white dwarfs due to the additional magnetic braking term, but there is no considerable effect of Landau quantization on the spindown and spindown rates for magnetic field strengths relevant for white dwarf interiors. We find that the r-mode gravitational wave strain amplitude for a rapidly rotating super-Chandrasekhar white dwarf at 1 kpc is $\sim 10^{-27}$, making isolated massive rapidly rotating hot magnetized white dwarfs prime candidates for future gravitational wave searches.
|
astrophysics
|
Recovery after stroke is often incomplete, but rehabilitation training may potentiate recovery by engaging endogenous neuroplasticity. In preclinical models of stroke, high doses of rehabilitation training are required to restore functional movement to the affected limbs of animals. In humans, however, the necessary dose of training to potentiate recovery is not known. This ignorance stems from the lack of objective, pragmatic approaches for measuring training doses in rehabilitation activities. Here, to develop a measurement approach, we took the critical first step of automatically identifying functional primitives, the basic building block of activities. Forty-eight individuals with chronic stroke performed a variety of rehabilitation activities while wearing inertial measurement units (IMUs) to capture upper body motion. Primitives were identified by human labelers, who labeled and segmented the associated IMU data. We performed automatic classification of these primitives using machine learning. We designed a convolutional neural network model that outperformed existing methods. The model includes an initial module to compute separate embeddings of different physical quantities in the sensor data. In addition, it replaces batch normalization (which performs normalization based on statistics computed from the training data) with instance normalization (which uses statistics computed from the test data). This increases robustness to possible distributional shifts when applying the method to new patients. With this approach, we attained an average classification accuracy of 70%. Thus, using a combination of IMU-based motion capture and deep learning, we were able to identify primitives automatically. This approach builds towards objectively-measured rehabilitation training, enabling the identification and counting of functional primitives that accrues to a training dose.
|
electrical engineering and systems science
|
Negative solutions are possible for Einstein's Special Relativity equation, as well as Dirac's, Maxwell's, and Schrodinger's equations; and Schrodinger's equation utilizes antitime to calculate quantum probabilities. Also, since no ground state seems to exist for matter, Dirac predicted that negative matter must exist to keep all of matter from disappearing into negative energy states. And Einstein showed that all things are relative except the speed of light. Together, these imply that 4 negative dimensions (Ds) are overlapped by the 4 positive Ds; that positive and negative matters reside in these 2 separate sets of Ds; and that their separateness prevents their mutual annihilation. Light would then exist as an oscillation of the interface between these 2 sets of Ds, with the 2 relative matters varying equally on each side; thus allowing light to remain constant. And since 2 negatives make a positive, the 2 sets would move as one, even though they are opposites. This means that the 2 sets always remain exactly overlapped; and so, the negative Ds remain disguised. Our universe therefore could be a dual universe. Also, 8+2 Ds makes a total of 10 Ds; 4x4=16 interactions, and 10+16 = 26. So our 4 D world could act as a 10/26 D world; which matches the 2 sets within which string theory is self-consistent. And this is why N = 10. Also, an initial matter singularity would be balanced by antimatter being infinitely spread out at the opposite end of time; so a dual universe would allow an initial singularity to exist with both a known position and momentum, without violating quantum mechanics. And since a dual universe having two equal sets of Ds and matter would always be exactly balanced, a dual universe would be extremely stable all through its existence. And this means that our universe is likely to be a dual universe.
|
physics
|
Very recently the new pathogen severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was identified and the coronavirus disease 2019 (COVID-19) declared a pandemic by the World Health Organization. The pandemic has a number of consequences for ongoing clinical trials in non-COVID-19 conditions. Motivated by four currently ongoing clinical trials in a variety of disease areas, we illustrate the challenges posed by the pandemic and sketch out possible solutions, including adaptive designs. Guidance is provided on (i) where blinded adaptations can help; (ii) how to achieve type I error rate control, if required; (iii) how to deal with potential treatment effect heterogeneity; (iv) how to utilize early readouts; and (v) how to utilize Bayesian techniques. In more detail, approaches to resizing a trial affected by the pandemic are developed, including considerations to stop a trial early, the use of group-sequential designs, or sample size adjustment. All methods considered are implemented in a freely available R shiny app. Furthermore, regulatory and operational issues, including the role of data monitoring committees, are discussed.
|
statistics
|
The reconstruction of accurate three-dimensional environment models is one of the most fundamental goals in the field of photogrammetry. Since satellite images provide suitable properties for obtaining large-scale environment reconstructions, there exist a variety of Stereo Matching based methods to reconstruct point clouds for satellite image pairs. Recently, the first Structure from Motion (SfM) based approach has been proposed, which allows to reconstruct point clouds from multiple satellite images. In this work, we propose an extension of this SfM based pipeline that allows us to reconstruct not only point clouds but watertight meshes including texture information. We provide a detailed description of several steps that are mandatory to exploit state-of-the-art mesh reconstruction algorithms in the context of satellite imagery. This includes a decomposition of finite projective camera calibration matrices, a skew correction of corresponding depth maps and input images as well as the recovery of real-world depth maps from reparameterized depth values. The paper presents an extensive quantitative evaluation on multi-date satellite images demonstrating that the proposed pipeline combined with current meshing algorithms outperforms state-of-the-art point cloud reconstruction algorithms in terms of completeness and median error. We make the source code of our pipeline publicly available.
|
computer science
|
We review properties of the null-field solutions of source-free Maxwell equations. We focus on the electric and magnetic field lines, especially on limit cycles, which actually can be knotted and/or linked at every given moment. We analyse the fact that the Poynting vector induces self-consistent time evolution of these lines and demonstrate that the Abelian link invariant is integral of motion. The same is expected to be true also for the non-Abelian invariants (like Jones and HOMFLY-PT polynomials or Vassiliev invariants), and many integrals of motion can imply that the Poynting evolution is actually integrable. We also consider particular examples of the field lines for the particular family of finite energy source-free "knot" solutions, attempting to understand when the field lines are closed -- and can be discussed in terms of knots and links. Based on computer simulations we conjecture that Ranada's solution, where every pair of lines forms a Hopf link, is rather exceptional. In general, only particular lines (a set of measure zero) are limit cycles and represent closed lines forming knots/links, while all the rest are twisting around them and remain unclosed. Still, conservation laws of Poynting evolution and associated integrable structure should persist.
|
high energy physics theory
|
This paper introduces a new stochastic process with values in the set Z of integers with sign. The increments of the process are Poisson differences and the dynamics has an autoregressive structure. We study the properties of the process and exploit the thinning representation to derive stationarity conditions and the stationary distribution of the process. We provide a Bayesian inference method and an efficient posterior approximation procedure based on Monte Carlo. Numerical illustrations on both simulated and real data show the effectiveness of the proposed inference.
|
statistics
|
We consider a wireless sensor network, consisting of N heterogeneous sensors and a fusion center (FC), tasked with detecting a known signal in uncorrelated Gaussian noises. Each sensor can harvest randomly arriving energy and store it in a finite-size battery. Sensors communicate directly with the FC over orthogonal fading channels. Each sensor adapts its transmit symbol power, such that the larger its stored energy and its quantized channel gain are, the higher its transmit symbol power is. To strike a balance between energy harvesting and energy consumption, we formulate two constrained optimization problems (P1) and (P2), where in both problems we seek the jointly optimal local decision thresholds and channel gain quantization thresholds. While in (P1) we maximize the J-divergence of the received signal densities at the FC, in (P2) we minimize the average total transmit power, subject to certain constraints. We solve (P1) and (P2), assuming that the batteries reach their steady state. Our simulation results demonstrate the effectiveness of our optimization in enhancing the detection performance in (P1) and lowering the average total transmit power in (P2). They also reveal how the energy harvesting rate, the battery size, and the sensor observation and communication channel parameters impact the obtained solutions.
|
electrical engineering and systems science
|
Pneumothorax is a critical condition that requires timely communication and immediate action. In order to prevent significant morbidity or patient death, early detection is crucial. For the task of pneumothorax detection, we study the characteristics of three different deep learning techniques: (i) convolutional neural networks, (ii) multiple-instance learning, and (iii) fully convolutional networks. We perform a five-fold cross-validation on a dataset consisting of 1003 chest X-ray images. ROC analysis yields AUCs of 0.96, 0.93, and 0.92 for the three methods, respectively. We review the classification and localization performance of these approaches as well as an ensemble of the three aforementioned techniques.
|
electrical engineering and systems science
|
We consider futuristic, intelligent reflecting surfaces (IRS)-aided communication between a base station (BS) and a user equipment (UE) for two distinct scenarios: a single-input, single-output (SISO) system whereby the BS has a single antenna, and a multi-input, single-output (MISO) system whereby the BS has multiple antennas. For the considered IRS-assisted downlink, we compute the effective capacity (EC), which is a quantitative measure of the statistical quality-of-service (QoS) offered by a communication system experiencing random fading. For our analysis, we consider the two widely-known assumptions on channel state information (CSI) -- i.e., perfect CSI and no CSI, at the BS. Thereafter, we first derive the distribution of the signal-to-noise ratio (SNR) for both SISO and MISO scenarios, and subsequently derive closed-form expressions for the EC under perfect CSI and no CSI cases, for both SISO and MISO scenarios. Furthermore, for the SISO and MISO systems with no CSI, it turns out that the EC could be maximized further by searching for an optimal transmission rate $r^*$, which is computed by exploiting the iterative gradient-descent method. We provide extensive simulation results which investigate the impact of the various system parameters, e.g., QoS exponent, power budget, number of transmit antennas at the BS, number of reflective elements at the IRS etc., on the EC of the system.
|
electrical engineering and systems science
|
We present a novel framework for the reconstruction of 1D composite signals assumed to be a mixture of two additive components, one sparse and the other smooth, given a finite number of linear measurements. We formulate the reconstruction problem as a continuous-domain regularized inverse problem with multiple penalties. We prove that these penalties induce reconstructed signals that indeed take the desired form of the sum of a sparse and a smooth component. We then discretize this problem using Riesz bases, which yields a discrete problem that can be solved by standard algorithms. Our discretization is exact in the sense that we are solving the continuous-domain problem over the search space specified by our bases without any discretization error. We propose a complete algorithmic pipeline and demonstrate its feasibility on simulated data.
|
electrical engineering and systems science
|
ALPS II, the Any Light Particle Search, is a second-generation Light Shining through a Wall experiment that hunts for axion-like particles. The experiment is currently transitioning from the design and construction phase to the commissioning phase, with science runs expected to start in 2021. ALPS II plans to use two different sensing schemes to confirm the potential detection of axion-like particles or to verify an upper limit on their coupling strength to two photons of $g_{a\gamma\gamma}\leq2\times10^{-11}\text{GeV}^{-1}$. This paper discusses a heterodyne sensing scheme (HET) which will be the first scheme deployed to detect the regenerated light. It presents critical details of the optical layout, the length and alignment sensing scheme, design features to minimize spurious signals from stray light, as well as several control and veto channels specific to HET which are needed to commission and operate the instrument and to calibrate the detector sensitivity.
|
physics
|
We prove that the graded quotients of the filtration by ramification groups of any henselian discrete valuation field of residue characteristic $p>0$ are $F_p$-vector spaces. We define an injection of the character group of each graded quotient to a twisted cotangent space defined using a cotangent complex.
|
mathematics
|
We show that statistical criticality, i.e. the occurrence of power law frequency distributions, arises in samples that are maximally informative about the underlying generating process. In order to reach this conclusion, we first identify the frequency with which different outcomes occur in a sample as the variable carrying useful information on the generative process. The entropy of the frequency, which we call relevance, provides an upper bound to the number of informative bits. This differs from the entropy of the data, which we take as a measure of resolution. Samples that maximise relevance at a given resolution - which we call maximally informative samples - exhibit statistical criticality. In particular, Zipf's law arises at the optimal trade-off between resolution (i.e. compression) and relevance. As a byproduct, we derive a bound on the maximal number of parameters that can be estimated from a dataset, in the absence of prior knowledge on the generative model. Furthermore, we relate criticality to the statistical properties of the representation of the data generating process. We show that, as a consequence of the concentration property of the Asymptotic Equipartition Property, representations that are maximally informative about the data generating process are characterised by an exponential distribution of energy levels. This arises from a principle of minimal entropy, which is the conjugate of the maximum entropy principle in statistical mechanics. This explains why statistical criticality requires no parameter fine tuning in maximally informative samples.
|
physics
|
We investigate the properties of global cosmic string networks as a function of the ratio of string tension to Goldstone-field coupling, and as a function of the Hubble damping strength. Our results show unambiguously that the string density is sensitive to this ratio. We also find that existing semi-analytical (one-scale) models must be missing some important aspect of the network dynamics. Our results point the way towards improving such models.
|
high energy physics phenomenology
|
Majorana bound states are interesting candidates for applications in topological quantum computation. Low energy models allowing to grasp their properties are hence conceptually important. The usual scenario in these models is that two relevant gapped phases, separated by a gapless point, exist. In one of the phases, topological boundary states are absent, while the other one supports Majorana bound states. We show that a customary model violates this paradigm. The phase that should not host Majorana fermions supports a fractional soliton exponentially localized at only one end. By varying the parameters of the model, we describe analytically the transition between the fractional soliton and two Majorana fermions. Moreover, we provide a possible physical implementation of the model. We further characterize the symmetry of the superconducting pairing, showing that the odd-frequency component is intimately related to the spatial profile of the Majorana wavefunctions.
|
condensed matter
|
Property inference attacks consider an adversary who has access to the trained model and tries to extract some global statistics of the training data. In this work, we study property inference in scenarios where the adversary can maliciously control part of the training data (poisoning data) with the goal of increasing the leakage. Previous work on poisoning attacks focused on trying to decrease the accuracy of models either on the whole population or on specific sub-populations or instances. Here, for the first time, we study poisoning attacks where the goal of the adversary is to increase the information leakage of the model. Our findings suggest that poisoning attacks can boost the information leakage significantly and should be considered as a stronger threat model in sensitive applications where some of the data sources may be malicious. We describe our \emph{property inference poisoning attack} that allows the adversary to learn the prevalence in the training data of any property it chooses. We theoretically prove that our attack can always succeed as long as the learning algorithm used has good generalization properties. We then verify the effectiveness of our attack by experimentally evaluating it on two datasets: a Census dataset and the Enron email dataset. We were able to achieve above $90\%$ attack accuracy with $9-10\%$ poisoning in all of our experiments.
|
computer science
|
We derive an integral equation for the superconducting gap which takes into account the quantum nature of electron motion in a parallel magnetic field in a quasi-two-dimensional (Q2D) superconductor in the presence of a non-zero perpendicular field component. By comparing our theoretical results with the recent experimental data obtained on NbS$_2$, we show that the orbital effect against superconductivity partially destroys superconductivity in the so-called Ginzburg-Landau area of this Q2D conductor, as expected. Nevertheless, at relatively high magnetic fields, $H \simeq 15 \, \mathrm{T}$, the orbital effect starts to improve the Fulde-Ferrell-Larkin-Ovchinnikov phase in NbS$_2$, due to the quantum nature of electron motion in a parallel magnetic field. In our opinion, this is the clearest demonstration that the orbital effect against superconductivity in a parallel magnetic field has a reversible nature.
|
condensed matter
|
The variational quantum eigensolver (VQE) has emerged as one of the most promising near-term quantum algorithms that can be used to simulate many-body systems such as molecular electronic structures. Serving as an attractive ansatz in the VQE algorithm, unitary coupled cluster (UCC) theory has seen a renewed interest in recent literature. However, unlike the original classical UCC theory, implementation on a quantum computer requires a finite-order Suzuki-Trotter decomposition to separate the exponentials of the large sum of Pauli operators. While previous literature has recognized the non-uniqueness of different orderings of the operators in the Trotterized form of UCC methods, the question of whether or not different orderings matter at the chemical scale has not been addressed. In this letter, we explore the effect of operator ordering on the Trotterized UCCSD ansatz, as well as the much more compact $k$-UpCCGSD ansatz recently proposed by Lee et al. We observe a significant, system-dependent variation in the energies of Trotterizations with different operator orderings. The energy variations occur on a chemical scale, sometimes on the order of hundreds of kcal/mol. This letter establishes the need to define not only the operators present in the ansatz, but also the order in which they appear. This is necessary for adhering to the quantum chemical notion of a ``model chemistry'', in addition to the general importance of scientific reproducibility. As a final note, we suggest a useful strategy to select out of the combinatorial number of possibilities, a single well-defined and effective ordering of the operators.
|
quantum physics
|
Factor-adjusted multiple testing is used for handling strongly correlated tests. Since most previous works control the false discovery rate under sparse alternatives, we develop a two-step method, namely AdaFAT, for any true false proportion. In this paper, the proposed procedure is adjusted by latent factor loadings. Under the existence of explanatory variables, a uniform convergence rate of the estimated factor loadings is given. We also show that the power of AdaFAT goes to one along with the controlled false discovery rate. The performance of the proposed procedure is examined through simulations calibrated by the China A-share market.
|
mathematics
|
The question of association between outcome and feature is generally framed in the context of a model on functional and distributional forms. Our motivating application is that of identifying serum biomarkers of angiogenesis, energy metabolism, apoptosis, and inflammation, predictive of recurrence after lung resection in node-negative non-small cell lung cancer patients with tumor stage T2a or less. We propose an omnibus approach for testing association that is free of assumptions on functional forms and distributions and can be used as a black box method. This proposed maximal permutation test is based on the idea of thresholding, is readily implementable and is computationally efficient. We illustrate that the proposed omnibus tests maintain their levels and have strong power as black box tests for detecting linear, nonlinear and quantile-based associations, even with outlier-prone and heavy-tailed error distributions and under nonparametric setting. We additionally illustrate the use of this approach in model-free feature screening and further examine the level and power of these tests for binary outcome. We compare the performance of the proposed omnibus tests with comparator methods in our motivating application to identify preoperative serum biomarkers associated with non-small cell lung cancer recurrence in early stage patients.
|
statistics
|
One of the most fundamental topics in subspace coding is to explore the maximal possible size ${\bf A}_q(n,d,k)$ of a set of $k$-dimensional subspaces in $\mathbb{F}_q^n$ such that the subspace distance satisfies $\operatorname{d_S}(U,V) = \dim(U+V)-\dim(U\cap V) \geq d$ for any two different $k$-dimensional subspaces $U$ and $V$ in this set. In this paper, we propose a construction for constant dimension subspace codes by inserting a composite structure composed of an MRD code and its sub-codes. Its vast advantage over the previous constructions has been confirmed through extensive examples. At least $49$ new constant dimension subspace codes that exceed the currently best known codes are constructed.
|
computer science
|
Modern programming follows the continuous integration (CI) and continuous deployment (CD) approach rather than the traditional waterfall model. Even the development of modern programming languages uses the CI/CD approach to swiftly provide new language features and to adapt to new development environments. Unlike in the conventional approach, in the modern CI/CD approach, a language specification is no longer the oracle of the language semantics because both the specification and its implementations can co-evolve. In this setting, both the specification and implementations may have bugs, and guaranteeing their correctness is non-trivial. In this paper, we propose a novel N+1-version differential testing approach to resolve the problem. Unlike traditional differential testing, our approach consists of four steps: 1) automatically synthesize programs guided by the syntax and semantics from a given language specification, 2) generate conformance tests by injecting assertions into the synthesized programs to check their final program states, 3) detect bugs in the specification and implementations by executing the conformance tests on multiple implementations, and 4) localize bugs in the specification using statistical information. We actualize our approach for the JavaScript programming language via JEST, which performs N+1-version differential testing for modern JavaScript engines and ECMAScript, the language specification describing the syntax and semantics of JavaScript in a natural language. We evaluated JEST with four JavaScript engines that support all modern JavaScript language features and the latest version of ECMAScript (ES11, 2020). JEST automatically synthesized 1,700 programs that covered 97.78% of the syntax and 87.70% of the semantics of ES11. Using assertion injection, it detected 44 engine bugs in four engines and 27 specification bugs in ES11.
|
computer science
|
We investigate the implications of interference detection for experiments that are pursuing a detection of the redshifted 21-cm signals from the Epoch of Reionization. Interference detection causes samples to be sporadically flagged and rejected. As a necessity to reduce the data volume, flagged samples are typically (implicitly) interpolated during time or frequency averaging or uv-gridding. This so-far unexplored systematic biases the 21-cm power spectrum, and it is important to understand this bias for current 21-cm experiments as well as the upcoming SKA Epoch of Reionization experiment. We analyse simulated data using power spectrum analysis and Gaussian process regression. We find that the combination of flagging and averaging causes tiny spectral fluctuations, resulting in `flagging excess power'. This excess power does not substantially average down over time and, without extra mitigation techniques, can exceed the power of realistic models of the 21-cm reionization signals in LOFAR observations. We mitigate the bias by i) implementing a novel way to average data using a Gaussian-weighted interpolation scheme; ii) using unitary instead of inverse-variance weighting of visibilities; and iii) using low-resolution forward modelling of the data. After these modifications, which have been integrated in the LOFAR EoR processing pipeline, the excess power reduces by approximately three orders of magnitude, and is no longer preventing a detection of the 21-cm signals.
|
astrophysics
|
In this work we establish universal ensemble independent bounds on the mean and variance of the mutual information and channel capacity for imaging through a complex medium. Both upper and lower bounds are derived and are solely dependent on the mean transmittance of the medium and the number of degrees of freedom $N$. In the asymptotic limit of large $N$, upper bounds on the channel capacity are shown to be well approximated by that of a bimodal channel with independent identically Bernoulli distributed transmission eigenvalues. Reflection based imaging modalities are also considered and permitted regions in the transmission-reflection information plane defined. Numerical examples drawn from the circular and DMPK random matrix ensembles are used to illustrate the validity of the derived bounds. Finally, although the mutual information and channel capacity are shown to be non-linear statistics of the transmission eigenvalues, the existence of central limit theorems is demonstrated and discussed.
|
physics
|
Let $F$ be a finite field of characteristic $p>0$ with $q = p^{n}$ elements. In this paper, a complete characterization of the unit groups $U(FG)$ of the group algebras $FG$ of the abelian groups of order $32$ over a finite field of characteristic $p>0$ has been obtained.
|
mathematics
|
In this review I present an overview of QCD biased by personal experience. The review covers topics such as Improved Transverse Momentum Dependent Factorization, $k_T$-dependent splitting functions, the Transverse Momentum Dependent parton shower, and the non-Gaussian broadening of a jet traversing quark-gluon plasma.
|
high energy physics phenomenology
|
Light, weakly-coupled dark sectors may be naturally decoupled in the early universe and enter equilibrium with the Standard Model bath during the epoch of primordial nucleosynthesis. The equilibration and eventual decoupling of dark sector states modifies the expansion rate of the universe, which alters the predicted abundances of the light elements. This effect can be encompassed in a time-varying contribution to $N_{\mathrm{eff}}$, the effective number of neutrino species, such that $N_{\mathrm{eff}}$ during nucleosynthesis differs from its measured value at the time of recombination. We investigate the impact of such variations on the light element abundances with model-independent templates for the time-dependence of $N_{\mathrm{eff}}$ as well as in specific models where a dark sector equilibrates with neutrinos or photons. We find that significant modifications of the expansion rate are consistent with the measured abundances of light nuclei, provided that they occur during specific periods of nucleosynthesis. In constraining concrete models, the relative importance of the cosmic microwave background and primordial nucleosynthesis is highly model-dependent.
|
high energy physics phenomenology
|
Jellyfish - majestic, energy efficient, and one of the oldest species that inhabits the oceans. It is perhaps the second item, their efficiency, that has captivated scientists for decades into investigating their locomotive behavior. Yet, no one has specifically explored the role that their tentacles and oral arms may have on their potential swimming performance, arguably the very features that give jellyfish their beauty while instilling fear into their prey (and beach-goers). We perform comparative in silico experiments to study how tentacle/oral arm number, length, placement, and density affect forward swimming speeds, cost of transport, and fluid mixing. An open source implementation of the immersed boundary method (IB2d) was used to solve the fully coupled fluid-structure interaction problem of an idealized flexible jellyfish bell with poroelastic tentacles/oral arms in a viscous, incompressible fluid. Overall, tentacles/oral arms inhibit forward swimming speeds by appearing to suppress vortex formation. Non-linear relationships between length and fluid scale (Reynolds Number) as well as tentacle/oral arm number, density, and placement are observed, illustrating that small changes in morphology could result in significant decreases in swimming speeds, in some cases by downwards of 400% between cases with and without tentacles/oral arms.
|
physics
|
The possibility of operating massive mechanical resonators in the quantum regime has become central in fundamental sciences, in particular to test the boundaries of quantum mechanics. Optomechanics, where photons (e.g. optical, microwave) are coupled to mechanical motion, provides the tools to control mechanical motion near the fundamental quantum limits. Reaching single-photon strong coupling would allow the preparation of the mechanical resonator in non-Gaussian quantum states. Yet, this regime remains challenging to achieve with massive resonators due to the small optomechanical couplings. Here we demonstrate a novel approach where a massive mechanical resonator is magnetically coupled to a microwave cavity. By improving the coupling by one order of magnitude over current microwave optomechanical systems, we achieve single-photon strong cooperativity, an important intermediate step towards single-photon strong coupling. Such strong interaction allows for cooling the mechanical resonator with on average a single photon in the microwave cavity. Beyond tests of quantum foundations, our approach is also well suited as a quantum sensor or a microwave-to-optical transducer.
|
quantum physics
|
The formation of light nuclei can be described as the coalescence of clusters of nucleons into nuclei. In the case of small interacting systems, such as dark matter and $e^+e^-$ annihilations or $pp$ collisions, the coalescence condition is often imposed only in momentum space and hence the size of the interaction region is neglected. On the other hand, in most coalescence models used for heavy ion collisions, the coalescence probability is controlled mainly by the size of the interaction region, while two-nucleon momentum correlations are either neglected or treated as collective flow. Recent experimental data from $pp$ collisions at LHC have been interpreted as evidence for such collective behaviour, even in small interacting systems. We argue that these data are naturally explained in the framework of conventional QCD inspired event generators when both two-nucleon momentum correlations and the size of the hadronic emission volume are taken into account. To include both effects, we employ a per-event coalescence model based on the Wigner function representation of the produced nuclei states. This model reproduces well the source size for baryon emission and the coalescence factor $B_2$ measured recently by the ALICE collaboration in $pp$ collisions.
|
high energy physics phenomenology
|
Gauge theory is the framework of the Standard Model of particle physics and is also important in condensed matter physics. As its major non-perturbative approach, lattice gauge theory is traditionally implemented using Monte Carlo simulation; consequently, it usually suffers from such problems as the fermion sign problem and the lack of real-time dynamics. These problems can hopefully be avoided by using quantum simulation, which simulates quantum systems by using controllable true quantum processes. The field of quantum simulation is under rapid development. Here we present a circuit-based digital scheme of quantum simulation of quantum $\mathbb{Z}_2$ lattice gauge theory in $2+1$ and $3+1$ dimensions, using quantum adiabatic algorithms implemented in terms of universal quantum gates. Our algorithm generalizes the Trotter and symmetric decompositions to the case in which the Hamiltonian varies at each step in the decomposition. Furthermore, we carry through a complete demonstration of this scheme in a classical GPU simulator, and obtain key features of quantum $\mathbb{Z}_2$ lattice gauge theory, including quantum phase transitions, topological properties, gauge invariance and duality. Hereby dubbed pseudoquantum simulation, classical demonstration of quantum simulation in state-of-the-art fast computers not only facilitates the development of schemes and algorithms of real quantum simulation, but also represents a new approach to practical computation.
|
quantum physics
|
This paper is concerned with the asymptotic stability of the initial-boundary value problem of a singular PDE-ODE hybrid chemotaxis system in the half space $\mathbb{R}_+=[0, \infty)$. We show that when the non-zero flux boundary condition at $x=0$ is prescribed and the initial data are suitably chosen, the solution of the initial-boundary value problem converges, as time tends to infinity, to a shifted traveling wavefront restricted to the half space $[0,\infty)$, where the wave profile and speed are uniquely selected by the boundary flux data. The results are proved by a Cole-Hopf type transformation and weighted energy estimates, along with the technique of taking the anti-derivative.
|
mathematics
|
Inspired by the allure of additive fabrication, we pose the problem of origami design from a new perspective: how can we grow a folded surface in three dimensions from a seed so that it is guaranteed to be isometric to the plane? We solve this problem in two steps: by first identifying the geometric conditions for the compatible completion of two separate folds into a single developable four-fold vertex, and then showing how this foundation allows us to grow a geometrically compatible front at the boundary of a given folded seed. This yields a complete marching, or additive, algorithm for the inverse design of the complete space of developable quad origami patterns that can be folded from flat sheets. We illustrate the flexibility of our approach by growing ordered, disordered, straight and curved folded origami and fitting surfaces of given curvature with folded approximants. Overall, our simple shift in perspective from a global search to a local rule has the potential to transform origami-based meta-structure design.
|
condensed matter
|
Motivated by objects such as electric fields or fluid streams, we study the problem of learning stochastic fields, i.e. stochastic processes whose samples are fields like those occurring in physics and engineering. Considering general transformations such as rotations and reflections, we show that spatial invariance of stochastic fields requires an inference model to be equivariant. Leveraging recent advances from the equivariance literature, we study equivariance in two classes of models. Firstly, we fully characterise equivariant Gaussian processes. Secondly, we introduce Steerable Conditional Neural Processes (SteerCNPs), a new, fully equivariant member of the Neural Process family. In experiments with Gaussian process vector fields, images, and real-world weather data, we observe that SteerCNPs significantly improve the performance of previous models and equivariance leads to improvements in transfer learning tasks.
|
computer science
|
Determining design principles that boost the robustness of interdependent networks is a fundamental question of engineering, economics, and biology. It is known that maximizing the degree correlation between replicas of the same node leads to optimal robustness. Here we show that increased robustness might also come at the expense of introducing multiple phase transitions. These results reveal yet another possible source of fragility of multiplex networks that has to be taken into account during network optimisation and design.
|
physics
|
In this work we analyze the role of $\alpha'$-corrections to type IIB orientifold compactifications in K\"ahler moduli stabilization and inflation. In particular, we propose a model-independent scenario to achieve non-supersymmetric Minkowski and de Sitter vacua for geometric backgrounds with positive Euler characteristic and a generic number of K\"ahler moduli. The vacua are obtained by a tuning of the flux superpotential. Moreover, in the one-modulus case we argue for a mechanism to achieve model-independent slow-roll.
|
high energy physics theory
|
A significant challenge in seasonal climate prediction is whether a prediction can beat climatology. We present results from two data-driven models - a convolutional (CNN) and a recurrent (RNN) neural network - that predict 2 m temperature out to 52 weeks for six geographically diverse locations. The motivation for testing the two classes of ML models is to allow the CNN to leverage information related to teleconnections and the RNN to leverage long-term historical temporal signals. The ML models show improved accuracy of long-range temperature forecasts up to a lead time of 30 weeks for PCC and up to 52 weeks for RMSESS, though only for select locations. Further iteration is required to ensure the ML models have value beyond regions where the climatology has a noticeably reduced correlation skill, namely the tropics.
|
physics
|
Electroweak instantons are a prediction of the Standard Model and have been studied in great detail in the past although they have not been observed. Earlier calculations of the instanton production cross section at colliders revealed that it was exponentially suppressed at low energies, but may grow large at energies (much) above the sphaleron mass. Such calculations faced difficulty in the breakdown of the instanton perturbation theory in the high-energy regime. In this paper we review the calculation for the electroweak instanton cross section using the optical theorem, including quantum effects arising from interactions in the initial state and show that this leads to an exponential suppression of the cross section at all energies, rendering the process unobservable.
|
high energy physics phenomenology
|
An attractor decomposition meta-algorithm for solving parity games is given that generalizes the classic McNaughton-Zielonka algorithm and its recent quasi-polynomial variants due to Parys (2019), and to Lehtinen, Schewe, and Wojtczak (2019). The central concepts studied and exploited are attractor decompositions of dominia in parity games and the ordered trees that describe the inductive structure of attractor decompositions. The main technical results include the embeddable decomposition theorem and the dominion separation theorem that together help establish a precise structural condition for the correctness of the universal algorithm: it suffices that the two ordered trees given to the algorithm as inputs embed the trees of some attractor decompositions of the largest dominia for each of the two players, respectively. The universal algorithm yields McNaughton-Zielonka, Parys's, and Lehtinen-Schewe-Wojtczak algorithms as special cases when suitable universal trees are given to it as inputs. The main technical results provide a unified proof of correctness and deep structural insights into those algorithms. A symbolic implementation of the universal algorithm is also given that improves the symbolic space complexity of solving parity games in quasi-polynomial time from $O(d \lg n)$---achieved by Chatterjee, Dvo\v{r}\'{a}k, Henzinger, and Svozil (2018)---down to $O(\lg d)$, where $n$ is the number of vertices and $d$ is the number of distinct priorities in a parity game. This not only exponentially improves the dependence on $d$, but it also entirely removes the dependence on $n$.
|
computer science
|
The efficiency of flow-based networking mechanisms strongly depends on traffic characteristics and should thus be assessed using accurate flow models. For example, in the case of algorithms based on the distinction between elephant and mice flows, it is extremely important to ensure realistic flow length and size distributions. Credible models or data are not available in the literature. Numerous works contain only plots roughly presenting the empirical distributions of selected flow parameters, without providing distribution mixture models or any reusable numerical data. This paper aims to fill that gap and provide reusable models of flow length and size derived from real traffic traces. Traces were collected at the Internet-facing interface of the university campus network and comprise four billion layer-4 flows (275 TB). These models can be used to assess a variety of flow-oriented solutions under realistic conditions. Additionally, this paper provides a tutorial on constructing network flow models from traffic traces. The proposed methodology is universal and can be applied to traffic traces gathered in any network. We also provide an open source software framework to analyze flow traces and fit general mixture models to them.
|
computer science
|