Columns: text (string, lengths 11 to 9.77k), label (string, lengths 2 to 104)
The extraction and proper utilization of convolutional neural network (CNN) features have a significant impact on the performance of image super-resolution (SR). Although CNN features contain both spatial and channel information, current deep SR techniques often fail to maximize performance because they use either the spatial or the channel information. Moreover, they integrate such information within a deep or wide network rather than exploiting all the available features, eventually resulting in high computational complexity. To address these issues, we present a binarized feature fusion (BFF) structure that utilizes the extracted features from residual groups (RG) in an effective way. Each residual group (RG) consists of multiple hybrid residual attention blocks (HRAB) that effectively integrate the multiscale feature extraction module and the channel attention mechanism in a single block. Furthermore, we use dilated convolutions with different dilation factors to extract multiscale features. We also propose to adopt global, short and long skip connections and a residual group (RG) structure to ease the flow of information without losing important feature details. We call this overall network architecture the hybrid residual attention network (HRAN). In experiments, we observe the efficacy of our method against state-of-the-art methods in both quantitative and qualitative comparisons.
computer science
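As an illustration of the block design described in the abstract above, here is a minimal PyTorch sketch of a hybrid residual attention block that combines parallel dilated convolutions (multiscale feature extraction) with channel attention; the channel count, reduction ratio, and dilation factors are assumptions made for the sketch, not the authors' implementation.

import torch
import torch.nn as nn

class HRAB(nn.Module):
    # Sketch of a hybrid residual attention block: multiscale dilated
    # convolutions fused by a 1x1 convolution, followed by channel
    # attention and a short skip connection (all sizes assumed).
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 3)  # assumed dilation factors
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.attention = nn.Sequential(  # squeeze-and-excite style attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        fused = self.fuse(feats)
        return x + fused * self.attention(fused)  # short skip connection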
The work presents a thermomechanical model for polycrystalline NiTi-based shape memory alloys developed within the framework of generalized standard solids, which is able to cover loading-mode dependent localization of the martensitic transformation. The key point is the introduction of a novel austenite-martensite interaction term responsible for strain-softening of the material. Mathematical properties of the model are analyzed, and a suitable regularization and a time-discrete approximation for numerical implementation in the finite-element method are proposed. Model performance is illustrated on two numerical simulations: tension of a superelastic NiTi ribbon and bending of a superelastic NiTi tube.
condensed matter
We study critical dynamics through the time evolution of quantum field theories driven to a Lifshitz-like fixed point, with $z>1$, under relevant deformations. The deformations we consider are fast smooth quantum quenches, namely those where the quench scale $\delta t^{-z}$ is large compared to the deformation scale. We show that in holographic models the response of the system depends only on the scaling dimension of the quenched operator, as $\delta\lambda\cdot\delta t^{d-2\Delta+z-1}$, where $\delta\lambda$ is the deformation amplitude. This scaling behavior is enhanced logarithmically in certain cases. We also study the free Lifshitz scalar theory deformed by the mass operator and show that the universal scaling of the response completely matches the holographic analysis. We argue that this scaling behavior is universal for any relevant deformation around Lifshitz-like UV fixed points.
high energy physics theory
Steady-state simulations of magnetized electron fluid equations with strong anisotropic diffusion, based on the first-order hyperbolic approach, are carried out using cell-centered higher-order upwind schemes, linear and weighted essentially non-oscillatory (WENO). Along with the magnetized electrons, the diffusion equation is also simulated to demonstrate the implementation and the design order of accuracy of the approach, owing to their similar upwind structure. We show the adequacy of linear upwind schemes for the diffusion equation, and that the use of a shock-capturing scheme like WENO does not have any adverse effect on the solution, unlike total-variation diminishing (TVD) methods. We further extend the approach to the advection-diffusion equation and, with appropriate boundary conditions, obtain a consistent design accuracy of third and fifth order. We implement the WENO approach for the advection-diffusion equation using the split hyperbolic method to demonstrate the advantage of non-oscillatory schemes in capturing sharp gradients in boundary-layer type problems without spurious oscillations. Finally, numerical results for magnetized electron simulations indicate that, with increasing strength of magnetic confinement, it is possible to capture sharp gradients without oscillations by the WENO scheme.
physics
Optomechanical couplings involve both beam-splitter and two-mode-squeezing types of interactions. While the former underlies the utility of many applications, the latter creates unwanted excitations and is usually detrimental. In this work, we propose a simple but powerful method based on cavity parametric driving to suppress the unwanted excitation that does not require working with a deeply sideband-resolved cavity. Our approach is based on a simple observation: as both the optomechanical two-mode-squeezing interaction and the cavity parametric drive induce squeezing transformations of the relevant photonic bath modes, they can be made to cancel one another. We illustrate how our method can cool a mechanical oscillator below the quantum back-action limit, and significantly suppress the output noise of a sideband-unresolved optomechanical transducer.
quantum physics
When a Monte Carlo algorithm is used to evaluate a physical observable $A$, it is possible to slightly modify the algorithm so that it evaluates simultaneously $A$ and the derivatives $\partial_\varsigma A$ of $A$ with respect to each problem-parameter $\varsigma$. The principle is the following: Monte Carlo considers $A$ as the expectation of a random variable, this expectation is an integral, this integral can be differentiated with respect to the problem-parameter to give a new integral, and this new integral can in turn be evaluated using Monte Carlo. The two Monte Carlo computations (of $A$ and $\partial_\varsigma A$) are simultaneous when they make use of the same random samples, i.e. when the two integrals have the exact same structure. It was proven theoretically that this is always possible, but nothing ensures that the two estimators have the same convergence properties: even when a large enough sample size is used so that $A$ is evaluated very accurately, the evaluation of $\partial_\varsigma A$ using the same sample can remain inaccurate. We discuss here such a pathological example: null-collision algorithms are very successful when dealing with radiative transfer in heterogeneous media, but they are sources of convergence difficulties as soon as sensitivity evaluations are considered. We analyse these convergence difficulties theoretically and propose an alternative solution.
physics
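The principle stated in the abstract above can be made concrete with a toy example. The following Python sketch (an illustration only, not the paper's null-collision algorithm) estimates $A$ and $\partial_\varsigma A$ from the very same samples, for the integrand f(x; s) = exp(-s x) with X uniform on [0, 1]:

import numpy as np

rng = np.random.default_rng(0)
s = 2.0                              # problem-parameter (varsigma)
x = rng.uniform(0.0, 1.0, 100_000)   # shared random samples

f  = np.exp(-s * x)        # integrand for A = E[f(X; s)]
df = -x * np.exp(-s * x)   # integrand for dA/ds, same samples

A_hat, dA_hat = f.mean(), df.mean()

# Exact values for this toy integrand, for comparison:
A_exact  = (1 - np.exp(-s)) / s
dA_exact = (np.exp(-s) * (s + 1) - 1) / s**2

Here both estimators happen to converge well; the abstract's point is that for null-collision algorithms the derivative estimator built this way can converge much more slowly than the estimator of $A$ itself.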
In this article, we tentatively assign the $P_{c}(4312)$ to be a $\bar{D}\Sigma_{c}$ molecular state with quantum number $J^{P}=\frac{1}{2}^{-}$, and calculate its magnetic moment using the QCD sum rule method in an external weak electromagnetic field. Starting with the two-point correlation function in the external electromagnetic field and expanding it in powers of the electromagnetic interaction Hamiltonian, we extract the magnetic moment from the linear response to the external electromagnetic field. The numerical value is $\mu_{P_{c}}=0.59^{+0.10}_{-0.20}$.
high energy physics phenomenology
We describe the ADAPT system for the 2020 IWPT Shared Task on parsing enhanced Universal Dependencies in 17 languages. We implement a pipeline approach using UDPipe and UDPipe-future to provide initial levels of annotation. The enhanced dependency graph is either produced by a graph-based semantic dependency parser or is built from the basic tree using a small set of heuristics. Our results show that, for the majority of languages, a semantic dependency parser can be successfully applied to the task of parsing enhanced dependencies. Unfortunately, we did not ensure a connected graph as part of our pipeline approach and our competition submission relied on a last-minute fix to pass the validation script which harmed our official evaluation scores significantly. Our submission ranked eighth in the official evaluation with a macro-averaged coarse ELAS F1 of 67.23 and a treebank average of 67.49. We later implemented our own graph-connecting fix which resulted in a score of 79.53 (language average) or 79.76 (treebank average), which would have placed fourth in the competition evaluation.
computer science
A novel thermally degenerate plasma model (based on a system containing relativistically and thermally degenerate inertia-less electron species, non-relativistically and thermally degenerate inertial light nucleus species, and stationary heavy nucleus species) is considered. The basic features of planar and nonplanar solitary structures associated with the thermally degenerate pressure driven nucleus-acoustic waves propagating in such a thermally degenerate plasma system have been investigated. The reductive perturbation method, which is valid for small amplitude solitary waves, is used. It is found that the effects of nonplanar cylindrical and spherical geometries, non- and ultra-relativistically degenerate electron species, thermal and degenerate pressures of electron and light nucleus species, and number densities of light and heavy nucleus species significantly modify the basic features (viz. speed, amplitude, and width) of the solitary potential structures associated with thermally degenerate pressure driven nucleus-acoustic waves. The degenerate plasma model under consideration is so general and realistic that it is applicable not only to astrophysical compact objects like hot white dwarfs, but also to space plasma systems like mesospheres containing positively charged heavy particles in addition to electron and ion plasma species.
physics
We present the supersymmetric extension of the unified model for inflation and Dark Matter studied in Ref. arXiv:1811.02302. The scenario is based on the incomplete decay of the inflaton field into right-handed (s)neutrino pairs. By imposing a discrete interchange symmetry on the inflaton and the right-handed (s)neutrinos, one can ensure the stability of the inflaton field at the global minimum today, while still allowing it to partially decay and reheat the Universe after inflation. Compatibility of the inflationary predictions, BBN bounds and obtaining the right DM abundance for the inflaton Dark Matter candidate typically requires large values of its coupling to the neutrino sector, and we use supersymmetry to protect the inflaton from potentially dangerous large radiative corrections which may spoil the required flatness of its potential. In addition, the inflaton now decays predominantly into sneutrinos during reheating, which in turn give rise both to the thermal bath made of Standard Model particles and to inflaton particles. We have performed a thorough analysis of the reheating process, following the evolution of all the partners involved and identifying the different regimes in the parameter space for the final Dark Matter candidate. As usual, this can be a WIMP-like inflaton particle or an oscillating condensate, but we also find a novel regime for a FIMP-like candidate.
high energy physics phenomenology
Utilizing a 3D mean-field lattice-gas model, we analyze the effect of confinement on the nature of the capillary phase transition in granular aggregates with varying disorder and their inverse porous structures obtained by interchanging particles and pores. Surprisingly, the confinement effects are found to be much less pronounced in granular aggregates as opposed to porous structures. We show that this discrepancy can be understood in terms of the surface-surface correlation length with a connected path through the fluid domain, suggesting that this length captures the true degree of confinement. We also find that the liquid-gas phase transition in these porous materials is of second-order nature near the capillary critical temperature, which is shown to represent a true critical temperature, i.e. independent of the degree of disorder and the nature of the solid matrix, discrete or continuous. The critical exponents estimated here from finite-size scaling analysis suggest that this transition belongs to the 3D random field Ising model universality class, as hypothesized by P.G. de Gennes, with the underlying random fields induced by local disorder in fluid-solid interactions.
condensed matter
During aqueous corrosion, atoms in the solid react chemically with oxygen, leading either to the formation of an oxide film or to the dissolution of the host material. Commonly, the first step in corrosion involves an oxygen atom from the dissociated water that reacts with the surface atoms and breaks near-surface bonds. In contrast, hydrogen on the surface often functions as a passivating species. Here, we discovered that the roles of O and H are reversed in the early corrosion stages on a Si-terminated SiC surface. O forms stable species on the surface, and chemical attack occurs by H that breaks the Si-C bonds. This so-called hydrogen scission reaction is enabled by a newly discovered metastable bridging hydroxyl group that can form during water dissociation. The Si atom that is displaced from the surface during water attack subsequently forms H2SiO3, which is a known precursor to the formation of silica and silicic acid. This study suggests that the roles of H and O in oxidation need to be reconsidered.
condensed matter
We introduce partial duality of hypermaps, which includes the classical Euler-Poincar\'e duality as a particular case. Combinatorially, hypermaps may be described in one of three ways: as three involutions on the set of flags (bi-rotation system or $\tau$-model), as three permutations on the set of half-edges (rotation system or $\sigma$-model in the orientable case), or as edge 3-coloured graphs. We express partial duality in each of these models. We give a formula for the genus change under partial duality.
mathematics
Biological processes underlying the basic functions of a cell involve complex interactions between genes. From a technical point of view, these interactions can be represented through a graph where genes and their connections are, respectively, nodes and edges. The main objective of this paper is to develop a statistical framework for modelling the interactions between genes when the activity of genes is measured on a discrete scale. In detail, we define a new algorithm for learning the structure of undirected graphs, PC-LPGM, proving its theoretical consistency in the limit of infinite observations. The proposed algorithm shows promising results when applied to simulated data as well as to real data.
statistics
This paper proposes a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time. In each round, a user, randomly picked from a population of $m$ users, requests a recommendation. The decision-maker observes the user and selects an item from a catalogue of $n$ items. Importantly, an item cannot be recommended twice to the same user. The probabilities that a user likes each item are unknown. The performance of the recommendation algorithm is captured through its regret, considering as a reference an Oracle algorithm aware of these probabilities. We investigate various structural assumptions on these probabilities: for each structure, we derive regret lower bounds and devise algorithms achieving these limits. Interestingly, our analysis reveals the relative weights of the different components of regret: the component due to the constraint of not presenting the same item twice to the same user, that due to learning the probabilities that users like items, and finally that arising when learning the underlying structure.
statistics
We compare spectra of the zonal harmonics of the large-scale magnetic field of the Sun using observation results and solar dynamo models. The main solar activity cycle as recorded in these tracers is a much more complicated phenomenon than the eigensolution of solar dynamo equations with the growth saturated by a back-reaction of the dynamo-driven magnetic field on solar hydrodynamics. The nominal 11(22)-year cycle as recorded in each mode has a specific phase shift varying from cycle to cycle; the actual length of the cycle varies from one cycle to another and from tracer to tracer. Both the observations and the dynamo model show an exceptional role of the axisymmetric $\ell_{5}$ mode. Its origin seems to be readily connected with the formation and evolution of sunspots on the solar surface. The results of observations and dynamo models show a good agreement for the low $\ell_{1}$ and $\ell_{3}$ modes. The results for these modes do not differ significantly for the axisymmetric and nonaxisymmetric models. Our findings support the idea that the sources of the solar dynamo arise as a result of both the distributed dynamo processes in the bulk of the convection zone and the surface magnetic activity.
astrophysics
We propose a quantum classifier that can classify data under the supervised learning scheme using a quantum feature space. The input feature vectors are encoded in a single qu$N$it (an $N$-level quantum system), as opposed to the more commonly used entangled multi-qubit systems. For training we use the widely used variational quantum algorithm -- a hybrid quantum-classical algorithm -- in which the forward part of the computation is performed on quantum hardware whereas the feedback part is carried out on a classical computer. We introduce "single shot training" in our scheme, with all input samples belonging to the same class being used to train the classifier simultaneously. This significantly speeds up the training procedure and provides an advantage over classical machine learning classifiers. We demonstrate successful classification of popular benchmark datasets with our quantum classifier and compare its performance with some classical machine learning classifiers. We also show that the number of training parameters in our classifier is significantly smaller than in the classical classifiers.
quantum physics
Mn-Zn ferrite nanoparticles have been the subject of increasing research due to their desirable properties for a wide range of applications. These properties include nanometer particle size control, tunable magnetic properties and low toxicity, providing these ferrites with the necessary requirements for cancer treatment via magnetic hyperthermia. In this master's thesis, powders of Mn1-xZnxFe2O4 (x=0; 0.5; 0.8; 1) were synthesized via the sol-gel autocombustion and hydrothermal methods, aiming to optimize their structural and magnetic properties for further application in a ferrofluid. Samples were characterized by XRD, SQUID, SEM, TEM and magnetic induction heating (MIH) techniques. The XRD diffractograms of hydrothermally produced samples present spinel crystal structure with a high single-phase percentage (>88%). Rietveld refinement and Williamson-Hall analysis reveal a decrease of the lattice constant and crystallite size with increasing Zn/Mn ratio. TEM images reveal narrow particle size distributions and a decrease of the mean particle size with increasing Zn/Mn. SQUID results show that the increase of Zn results in a decrease of saturation magnetization and remnant magnetization. More noticeably, the M(T) curves present a shift of the samples' magnetic ordering temperature towards lower temperatures with increasing Zn content, from ~556 to ~284 K. The MIH experiments also unveil a decrease in the heating rate with increasing Zn. Nanocrystals of Mn-Zn ferrite produced by the hydrothermal method present better crystallinity and magnetic properties than the sol-gel auto-combustion samples. The hydrothermally synthesized samples revealed a dependence of their structural and magnetic properties on the Mn/Zn ratio. The magnetic ordering temperature of these ferrites can be used as a self-controlled heating mechanism, raising these ferrites to the class of smart materials.
condensed matter
Schwinger pair production is analyzed in a BPST instanton background and in its SL$(2,\mathbb{C})$ complex extension for complex scalar particles. A non-Abelian extension of the worldline instanton method is utilized, wherein Wong's equations in a coherent state picture adopted for SL$(2,\mathbb{C})$ are solved in Euclidean spacetime. While pair production is not predicted in the BPST instanton, a complex extension of the BPST instanton, existing as parallel fields in Minkowski spacetime, is shown to decay via the Schwinger effect.
high energy physics theory
The geographically weighted regression (GWR) is a well-known statistical approach to explore spatial non-stationarity of the regression relationship in spatial data analysis. In this paper, we discuss a Bayesian recourse of GWR. Bayesian variable selection based on a spike-and-slab prior, bandwidth selection based on a range prior, and model assessment using a modified deviance information criterion and a modified logarithm of pseudo-marginal likelihood are fully discussed in this paper. Usage of the graph distance in modeling areal data is also introduced. Extensive simulation studies are carried out to examine the empirical performance of the proposed methods with both small and large numbers of locations, and comparison with the classical frequentist GWR is made. The performance of variable selection and estimation of the proposed methodology under different circumstances is satisfactory. We further apply the proposed methodology to the analysis of province-level macroeconomic data from 30 selected provinces in China. The estimation and variable selection results reveal insights about China's economy that are convincing and agree with previous studies and facts.
statistics
Pull request (PR) based development, which is a norm for social coding platforms, entails the challenge of evaluating the contributions of, often unfamiliar, developers from across the open source ecosystem and, conversely, submitting a contribution to a project with unfamiliar maintainers. Previous studies suggest that the decision of accepting or rejecting a PR may be influenced by a diverging set of technical and social factors, but often focus on relatively few projects, do not consider ecosystem-wide measures, or ignore the possible non-monotonic relationships between the predictors and PR acceptance probability. We aim to shed light on this important decision-making process by testing which measures significantly affect the probability of PR acceptance on a significant fraction of a large ecosystem, ranking them by their relative importance in predicting PR acceptance, and determining the shape of the functions that map each predictor to PR acceptance. We proposed seven hypotheses regarding which technical and social factors might affect PR acceptance and created 17 measures based on them. Our dataset consisted of 470,925 PRs from 3349 popular NPM packages and the 79,128 GitHub users who created them. We tested which of the measures affect PR acceptance and ranked the significant measures by their importance in a predictive model. Our predictive model had an AUC of 0.94, and 15 of the 17 measures were found to matter, including five novel ecosystem-wide measures. Measures describing the number of PRs submitted to a repository and what fraction of those get accepted, and signals about the PR review phase, were most significant. We also discovered that only four predictors have a linear influence on the PR acceptance probability while the others show a more complicated response.
computer science
Existing survival analysis techniques heavily rely on strong modelling assumptions and are, therefore, prone to model misspecification errors. In this paper, we develop an inferential method based on ideas from conformal prediction, which can wrap around any survival prediction algorithm to produce calibrated, covariate-dependent lower predictive bounds on survival times. In the Type I right-censoring setting, when the censoring times are completely exogenous, the lower predictive bounds have guaranteed coverage in finite samples without any assumptions other than that of operating on independent and identically distributed data points. Under a more general conditionally independent censoring assumption, the bounds satisfy a doubly robust property which states the following: marginal coverage is approximately guaranteed if either the censoring mechanism or the conditional survival function is estimated well. Further, we demonstrate that the lower predictive bounds remain valid and informative for other types of censoring. The validity and efficiency of our procedure are demonstrated on synthetic data and real COVID-19 data from the UK Biobank.
statistics
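As a toy illustration of the conformal wrapper idea in the abstract above, the following Python sketch computes covariate-dependent lower predictive bounds by split-conformal calibration. It assumes fully observed (uncensored) survival times on i.i.d. data; handling censoring, as the abstract explains, is precisely the paper's contribution and is not reproduced here.

import numpy as np

def conformal_lower_bounds(predict, X_cal, t_cal, X_test, alpha=0.1):
    # `predict` wraps any survival prediction algorithm (X -> predicted time).
    scores = predict(X_cal) - t_cal          # signed residuals on calibration set
    n = len(scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))  # finite-sample-corrected rank
    q = np.sort(scores)[min(k, n) - 1]
    # Lower bounds with marginal coverage P(T >= bound) >= 1 - alpha:
    return predict(X_test) - q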
We study the low energy dynamics of a single Dp-brane carrying a sufficiently large number of D0-brane charges in type IIA theory. We assume the D-brane topology to be $R \times \mathcal{M}_{2n}$, where $\mathcal{M}_{2n}$ is a closed manifold admitting a symplectic structure. We propose a new gauge fixing condition which eliminates the spatial gauge fluctuations on the Dp-brane. Using a conventional regularization method, one finds that the dynamics is characterized by the D0-brane matrix description when the density of D0-branes is large enough. We also calculate the leading order interactions between two D2-branes carrying both electric and magnetic fluxes in matrix theory.
high energy physics theory
We examine the vacuum structure of 4D effective theories of moduli fields in spacetime compactifications with quantized background fluxes. Imposing the no-scale structure for the volume deformations, we numerically investigate the distributions of flux vacua of the effective potential in the complex structure moduli and axio-dilaton directions for two explicit examples in Type IIB string theory and F-theory compactifications. It turns out that distributions of non-supersymmetric flux vacua exhibit a non-increasing functional behavior of several on-shell quantities with respect to the string coupling. We point out that this phenomenon can be deeply connected with a previously reported possible correspondence between the flux vacua in the moduli stabilization problem and the attractor mechanism in supergravity, and our explicit demonstration implies that such a correspondence generically exists even in the framework of F-theory. In particular, we confirm that the solutions of the effective potential we explicitly evaluated in Type IIB and F-theory flux compactifications indeed satisfy the generalized form of the attractor equations simultaneously.
high energy physics theory
Efficient energy transfer from electromagnetic waves to ions has been in demand for controlling laboratory plasmas for various applications and could be useful for understanding the nature of space and astrophysical plasmas. However, there exists a severe unsolved problem: most of the wave energy is converted quickly to electrons, but not to ions. Here, an energy conversion process to ions in overdense plasmas associated with whistler waves is investigated by numerical simulations and a theoretical model. Whistler waves propagating along a magnetic field in space and laboratories often form standing waves through the collision of counter-propagating waves or through reflection. We find that ions in the standing whistler waves acquire a large amount of energy directly from the waves on a short timescale comparable to the wave oscillation period. The thermalized ion temperature increases in proportion to the square of the wave amplitude and becomes much higher than the electron temperature over a wide range of wave-plasma conditions. This efficient ion-heating mechanism applies to various plasma phenomena in space physics and fusion energy sciences.
physics
Nonlinear mixed effects models have received a great deal of attention in the statistical literature in recent years because of their flexibility in handling longitudinal studies, including human immunodeficiency virus viral dynamics, pharmacokinetic analyses, and studies of growth and decay. A standard assumption in nonlinear mixed effects models for continuous responses is that the random effects and the within-subject errors are normally distributed, making the model sensitive to outliers. We present a novel class of asymmetric nonlinear mixed effects models that provides efficient parameter estimation in the analysis of longitudinal data. We assume that, marginally, the random effects follow a multivariate scale mixture of skew-normal distributions and that the random errors follow a symmetric scale mixture of normal distributions, providing an appealing robust alternative to the usual normal distribution. We propose an approximate method for maximum likelihood estimation based on an EM-type algorithm that produces approximate maximum likelihood estimates and significantly reduces the numerical difficulties associated with exact maximum likelihood estimation. Techniques for prediction of future responses under this class of distributions are also briefly discussed. The methodology is illustrated through an application to Theophylline kinetics data and through some simulation studies.
statistics
We apply the recently developed positivity bounds for particles with spin, applied away from the forward limit, to the low energy effective theories of massive spin-1 and spin-2 theories. For spin-1 theories, we consider the generic Proca EFT which arises at low energies from a heavy Higgs mechanism, and the special case of a charged Galileon for which the EFT is reorganized by the Galileon symmetry. For spin-2, we consider generic $\Lambda_5$ massive gravity theories and the special `ghost-free' $\Lambda_3$ theories. Remarkably we find that at the level of 2-2 scattering, the positivity bounds applied to $\Lambda_5$ massive gravity theories impose the special tunings which generate the $\Lambda_3$ structure. For $\Lambda_3$ massive gravity theories, the island of positivity derived in the forward limit appears relatively stable against further bounds.
high energy physics theory
We briefly review our earlier novel string field theory \cite{self2,self8}, stressing the interesting property that it becomes expressed in terms of particle-like constituents, called by us "objects", which in our formalism do not develop in time at all. So, although the picture is supposed to reproduce string theory with an arbitrary number of strings present -- in this sense a string field theory -- there is in fact no time! This strange absence of time in the formalism gives rise to slight speculations about the philosophy of the concept of time. There is of course then also no need for a Hamiltonian, but we construct, or rather attempt to construct, a fake or phantasy Hamiltonian.
high energy physics theory
For the first time, we present a Bayesian time-resolved spectral study of the X-ray afterglow datasets of GW170817/GRB170817A observed by the Chandra X-ray Observatory. These include all 12 public datasets, from the earliest observation taken at $\rm t \sim 9~d$ to the newest observation at $\rm \sim 359~d$ post-merger. While our results are consistent within uncertainty with other works using the Cash statistic, the Bayesian analysis we performed in this work has yielded Gaussian-like parameter distributions. We also obtained the parameter uncertainties directly from their posterior probability distributions. We are able to confirm that the power-law photon index has remained constant at $\Gamma \sim 1.6$ throughout the entire year-long observing period, except for the first dataset observed at $\rm t = 8.9~d$, when $\Gamma =1.04\pm0.44$ is marginally harder. We also found that the unabsorbed X-ray flux peaked at $\rm t \sim 155~d$, temporally consistent with the X-ray flare model suggested recently by Piro et al. (2018). The X-ray flux has been fading since $\sim160$ days after the merger and has returned to the level at which it was first discovered one year earlier. Our result shows that the X-ray spectrum of GW170817/GRB170817A is well described by a simple power law originating from non-thermal slow-cooling synchrotron radiation.
astrophysics
Strongly interacting electrons in solid-state systems often display a tendency towards multiple broken symmetries in the ground state. The complex interplay between different order parameters can give rise to a rich phase diagram. Here, we report on the identification of intertwined phases with broken rotational symmetry in magic-angle twisted bilayer graphene (TBG). Using transverse resistance measurements, we find a strongly anisotropic phase located in a 'wedge' above the underdoped region of the superconducting dome. Upon crossing the superconducting dome, a reduction of the critical temperature is observed, similar to the behavior of certain cuprate superconductors. Furthermore, the superconducting state exhibits an anisotropic response to a direction-dependent in-plane magnetic field, revealing a nematic pairing state across the entire superconducting dome. These results indicate that nematic fluctuations might play an important role in the low-temperature phases of magic-angle TBG, and pave the way for using highly tunable moir\'{e} superlattices to investigate intertwined phases in quantum materials.
condensed matter
Across many areas, from neural tracking to database entity resolution, manual assessment of clusters by human experts presents a bottleneck in the rapid development of scalable and specialized clustering methods. To solve this problem we develop C-FAR, a novel method for Fast, Automated and Reproducible assessment of multiple hierarchical clustering algorithms simultaneously. Our algorithm takes any number of hierarchical clustering trees as input, then strategically queries pairs for human feedback, and outputs an optimal clustering among those nominated by these trees. While it is applicable to large datasets in any domain that utilizes pairwise comparisons for assessment, our flagship application is the cluster aggregation step in spike-sorting, the task of assigning waveforms (spikes) in recordings to neurons. On simulated data of 96 neurons under adverse conditions, including drifting and 25\% blackout, our algorithm produces near-perfect tracking relative to the ground truth. Our runtime scales linearly in the number of input trees, making it a competitive computational tool. These results indicate that C-FAR is highly suitable as a model selection and assessment tool in clustering tasks.
statistics
For the purpose of analyzing observed phenomena, it has been convenient, and thus far sufficient, to regard gravity as subject to the deterministic principles of classical physics, with the gravitational field obeying Newton's law or Einstein's equations. Here we treat the gravitational field as a quantum field and determine the implications of such treatment for experimental observables. We find that falling bodies in gravity are subject to random fluctuations ("noise") whose characteristics depend on the quantum state of the gravitational field. We derive a stochastic equation for the separation of two falling particles. Detection of this fundamental noise, which may be measurable at gravitational wave detectors, would vindicate the quantization of gravity, and reveal important properties of its sources.
high energy physics theory
Over the past few years, the use of swarms of Unmanned Aerial Vehicles (UAVs) in monitoring and remote area surveillance applications has become widespread thanks to the price reduction and the increased capabilities of drones. The drones in the swarm need to cooperatively explore an unknown area, in order to identify and monitor interesting targets, while minimizing their movements. In this work, we propose a distributed Reinforcement Learning (RL) approach that scales to larger swarms without modifications. The proposed framework relies on the possibility for the UAVs to exchange some information through a communication channel, in order to achieve context-awareness and implicitly coordinate the swarm's actions. Our experiments show that the proposed method can yield effective strategies, which are robust to communication channel impairments, and that can easily deal with non-uniform distributions of targets and obstacles. Moreover, when agents are trained in a specific scenario, they can adapt to a new one with minimal additional training. We also show that our approach achieves better performance compared to a computationally intensive look-ahead heuristic.
computer science
Young massive stars are usually found embedded in dense and massive molecular clumps and are known for being highly obscured and distant. During their formation process, deuteration is regarded as a potentially good indicator of the formation stage. Therefore, proper observations of such deuterated molecules are crucial, but still hard to perform. In this work, we test the observability of the transition o-H$_2$D$^+(1_{10}$-$1_{11})$, using a synthetic source, to understand how the physical characteristics are reflected in observations through interferometers and single-dish telescopes. In order to perform such tests, we post-processed a magneto-hydrodynamic simulation of a collapsing magnetized core using the radiative transfer code POLARIS. Using the resulting intensity distributions as input, we performed single-dish (APEX) and interferometric (ALMA) synthetic observations at different evolutionary times, always mimicking realistic configurations. Finally, column densities were derived to compare our simulations with real observations previously performed. Our derivations for o-H$_2$D$^+$ are in agreement with values reported in the literature, in the ranges of 10$^{10-11}$ cm$^{-2}$ and 10$^{12-13}$ cm$^{-2}$ for single-dish and interferometric measurements, respectively.
astrophysics
Reconfigurable Intelligent Surfaces (RIS) are an emerging technology that can be used to reconfigure the propagation environment to improve cellular communication link rates. RIS, which are thin metasurfaces composed of discrete elements, passively manipulate incident electromagnetic waves through controlled reflective phase tuning. In this paper, we investigate codesign of the multiantenna basestation beamforming vector and multielement RIS phase shifts. The downlink narrowband transmission uses sub-6 GHz frequency bands, and the user equipment has a single antenna. Subject to the non-convex constraints due to the RIS phase shifts, we maximize the spectral efficiency, or the equivalent channel power as a proxy. Our contributions to improving RIS-aided links include (1) the design of gradient ascent codesign algorithms, and (2) a comparison of seven codesign algorithms in spectral efficiency vs. computational complexity. In simulations, the best spectral efficiency vs. computational complexity tradeoffs are shown by two of our proposed gradient ascent algorithms.
electrical engineering and systems science
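To illustrate the flavor of such gradient-based codesign, here is a Python sketch under strong simplifying assumptions (single-antenna basestation, perfectly known channels; not the paper's algorithms): gradient ascent on the equivalent channel power over the RIS phase shifts, with a projection back onto the unit-modulus constraint after each step.

import numpy as np

rng = np.random.default_rng(1)
N = 64                                                         # RIS elements (assumed)
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / 2**0.5    # BS -> RIS channel
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / 2**0.5    # RIS -> user channel
h_d = (rng.normal() + 1j * rng.normal()) / 2**0.5              # direct BS -> user path

a = np.conj(g) * h                                 # per-element cascaded channel
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))    # initial unit-modulus phases

for _ in range(200):
    e = h_d + a @ phi              # equivalent scalar channel
    grad = e * np.conj(a)          # Wirtinger gradient of |e|^2 w.r.t. conj(phi)
    phi = phi + 0.05 * grad        # ascent step
    phi = phi / np.abs(phi)        # project onto |phi_i| = 1

# |e|^2 approaches the phase-aligned optimum (|h_d| + sum_i |a_i|)**2.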
This work considers multirate generalized-structure additively partitioned Runge-Kutta (MrGARK) methods for solving stiff systems of ordinary differential equations (ODEs) with multiple time scales. These methods treat different partitions of the system with different timesteps for a more targeted and efficient solution compared to monolithic single rate approaches. With implicit methods used across all partitions, methods must find a balance between stability and the cost of solving nonlinear equations for the stages. In order to characterize this important trade-off, we explore multirate coupling strategies, problems for assessing linear stability, and techniques to efficiently implement Newton iterations for stage equations. Unlike much of the existing multirate stability analysis which is limited in scope to particular methods, we present general statements on stability and describe fundamental limitations for certain types of multirate schemes. New implicit multirate methods up to fourth order are derived, and their accuracy and efficiency properties are verified with numerical tests.
mathematics
Collagen fibrils are the main structural component of load-bearing tissues such as tendons, ligaments, skin, the cornea of the eye, and the heart. The D-band of collagen fibrils is a periodic axial density modulation that can be easily characterized by tissue-level X-ray scattering. During mechanical testing, D-band strain is often used as a proxy for fibril strain. However, this approach ignores the coupling between strain and molecular tilt. We examine the validity of this approximation using an elastomeric collagen fibril model that includes both the D-band and a molecular tilt field. We show that the D-band strain substantially underestimates fibril strain for strongly twisted collagen fibrils -- such as fibrils from skin or corneal tissue.
physics
We propose a gauged $U(1)_{B-L}$ extension of the standard model (SM) to explain simultaneously the light neutrino masses and dark matter (DM). The generation of neutrino masses occurs through a variant of the type-II seesaw mechanism in which one of the scalar triplets lies at the TeV scale yet has a large dilepton coupling, which paves a path for probing this model at colliders. The gauging of $U(1)_{B-L}$ symmetry in a type-II seesaw framework introduces $B-L$ anomalies. Therefore we invoke three right-handed neutrinos $\nu_{R_{i}}$ (i=1,2,3) with $B-L$ charges -4,-4,+5 to cancel the anomalies. We further show that the lightest of the three right-handed neutrinos can be a viable DM candidate. The stability of DM can be attributed to a remnant $Z_2$ symmetry under which the right-handed neutrinos are odd while all other particles are even. We then discuss the constraints on the model parameters from the observed DM abundance and the searches at direct detection experiments.
high energy physics phenomenology
Given a finitely generated algebra $A$, it is a fundamental question whether $A$ has a full rank discrete (Krull) valuation $\mathfrak{v}$ with finitely generated value semigroup. We give a necessary and sufficient condition for this, in terms of tropical geometry of $A$. In the course of this we introduce the notion of a Khovanskii basis for $(A, \mathfrak{v})$ which provides a framework for far extending Gr\"obner theory on polynomial algebras to general finitely generated algebras. In particular, this makes a direct connection between the theory of Newton-Okounkov bodies and tropical geometry, and toric degenerations arising in both contexts. We also construct an associated compactification of $Spec(A)$. Our approach includes many familiar examples such as the Gel'fand-Zetlin degenerations of coordinate rings of flag varieties as well as wonderful compactifications of reductive groups. We expect that many examples coming from cluster algebras naturally fit into our framework.
mathematics
We explore here a scenario for massive black hole formation driven by stellar collisions in galactic nuclei, proposing a new formation regime of global instability in nuclear stellar clusters triggered by runaway stellar collisions. Using order-of-magnitude estimates, we show that observed nuclear stellar clusters avoid the regime where stellar collisions are dynamically relevant over the whole system, while resolved detections of massive black holes are well into such a collision-dominated regime. We interpret this result in terms of massive black holes and nuclear stellar clusters being different evolutionary paths of a common formation mechanism, unified under the standard terminology of both being central massive objects. We propose a formation scenario in which central massive objects more massive than $\rm \sim 10^8 \, Msun$, which also have relaxation times longer than their collision times, will be too dense (in virial equilibrium) to be globally stable against stellar collisions, and most of their mass will collapse towards the formation of a massive black hole. By contrast, this will only be the case at the core of less dense central massive objects, leading to the formation of black holes with much lower black hole efficiencies $\rm \epsilon_{BH} = \frac{M_{BH}}{M_{CMO}}$, with these efficiencies $\rm \epsilon_{BH}$ growing drastically for central massive objects more massive than $\rm \sim 10^7 \, Msun$ and approaching unity around $\rm M_{CMO} \sim 10^8 \, Msun$. We show that the proposed scenario successfully explains the relative trends observed in the masses, efficiencies, and scaling relations between massive black holes and nuclear stellar clusters.
astrophysics
We calculate the complete set of two-loop leading-colour QCD helicity amplitudes for $\gamma \gamma j$-production at hadron colliders. Our results are presented in a compact, fully-analytical form.
high energy physics phenomenology
In this work, we systematically investigate the molecular states from the $\Sigma^{(*)}_c\bar{D}^{(*)}-\Lambda_c\bar{D}^{(*)}$ interaction with the help of Lagrangians with heavy quark and chiral symmetries in a quasipotential Bethe-Salpeter equation (qBSE) approach. Molecular states are produced from the isodoublet (I=1/2) $\Sigma_c\bar{D}$ interaction with spin parity $J^P=1/2^-$ and the $\Sigma_c\bar{D}^*$ interaction with $1/2^-$ and $3/2^-$. Their masses and widths are consistent with the $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ observed at LHCb. The states $\Sigma_c^*\bar{D}^*(1/2^-)$, $\Sigma_c^*\bar{D}^*(3/2^-)$ and $\Sigma^*_c\bar{D}(3/2^-)$ are also produced with the same parameters. The isodoublet $\Sigma_c^*\bar{D}^*$ interaction with $5/2^-$, as well as the isoquartet (I=3/2) $\Sigma_c\bar{D}^*$ interactions with $1/2^-$ and $3/2^-$ and the $\Sigma_c^*\bar{D}^*$ interaction with $3/2^-$ and $5/2^-$, are also attractive, but a very large cutoff is required to produce a molecular state. We also investigate the origin of the widths of these molecular states in the same qBSE framework. The $\Lambda_c\bar{D}^*$ channel is dominant in the decays of the states $\Sigma_c\bar{D}^*(1/2^-)$, $\Sigma_c\bar{D}^*(3/2^-)$, $\Sigma_c^*\bar{D}(3/2^-)$, and $\Sigma_c\bar{D}(1/2^-)$. The $\Sigma^*_c\bar{D}^*(1/2^-)$ state has a large coupling to the $\Sigma_c\bar{D}$ channel, while the $\Sigma_c\bar{D}^*$, $\Sigma^*_c\bar{D}$ and $\Lambda_c\bar{D}^*$ channels provide similar contributions to the width of the $\Sigma^*_c\bar{D}^*(3/2^-)$ state. These results will be helpful for understanding the current LHCb experimental results, and the three predicted states and the decay pattern of these hidden-charm molecular pentaquarks can be checked in future experiments.
high energy physics phenomenology
Isolated massive elliptical galaxies, or those present at the center of cool-core clusters, are believed to be powered by hot gas accretion directly from their surrounding hot X-ray emitting gaseous medium. This leads to a giant Bondi-type spherical/quasi-spherical accretion flow onto their host SMBHs, with the accretion flow region extending well beyond the Bondi radius. In this work, we present a detailed study of Bondi-type spherical flow in the context of these massive ellipticals by incorporating the effect of the entire gravitational potential of the host galaxy in the presence of a cosmological constant $\Lambda$, considering a five-component galactic system (SMBH + stellar + dark matter + hot gas + $\Lambda$). The current work is an extension of Ghosh \& Banik (2015), who studied only the cosmological aspect of the problem. The galactic contribution to the potential renders the (adiabatic) spherical flow {\it multi-transonic} in nature, with the flow topology and flow structure deviating significantly from the classical Bondi solution. More notably, for moderate to higher values of the galactic mass-to-light ratio, we obtain Rankine-Hugoniot shocks in spherical wind flows. The galactic potential enhances the Bondi accretion rate. Our study reveals that there is a strict lower limit of the ambient temperature below which no Bondi accretion can be triggered, which is as high as $\sim 9 \times 10^6 \, K$ for flows from the hot ISM phase, indicating that the hot phase tightly regulates the fueling of the host nucleus. Our findings may have wider implications, particularly in the context of outflow/jet dynamics and radio-AGN feedback associated with these massive galaxies in the contemporary Universe.
astrophysics
In the last few years it was realized that every fermionic theory in 1+1 dimensions is a generalized Jordan-Wigner transform of a bosonic theory with a non-anomalous $\mathbb{Z}_2$ symmetry. In this note we determine how the boundary states are mapped under this correspondence. We also interpret this mapping as the fusion of the original boundary with the fermionization interface.
high energy physics theory
We present a novel method to model galactic-scale star formation and the resulting emission from star clusters and the multi-phase interstellar medium. We combine global parameters, such as the SFR and CMF, with {\sc warpfield}, which provides a description of the feedback-driven evolution of individual star-forming regions. Our approach includes stellar evolution, stellar winds, radiation pressure and supernovae, all of which couple to the dynamical evolution of the parental cloud in a highly non-linear fashion. The heating of diffuse gas and dust is calculated self-consistently with the age, mass and density dependent escape fractions of the local star-forming regions. From this we construct the interstellar radiation field at any point in the galaxy, and we employ the multi-frequency Monte Carlo radiative transfer code {\sc polaris} to produce synthetic emission maps for one-to-one comparison with observational data. We demonstrate the capabilities of our approach by applying the method to a Milky Way like galaxy built up in a high-resolution cosmological MHD simulation. We give three examples. First, we compute the multi-scale distribution of the electron density $n_{e^-}$ and temperature $T_{e^{-}}$ and synthesize the MW all-sky H$\alpha$ emission. We use a multipole expansion method to show that the resulting maps are consistent with observations. Second, we predict the expected [SIII] 9530~\AA\ emission. This line is a key target of several planned large survey programs. It suffers less extinction than other diagnostic lines and provides information about star formation in very dense environments that are otherwise observationally not readily accessible. Third, we explore the effects of differential extinction as seen by an extra-galactic observer, and discuss the consequences for the correct interpretation of H$\alpha$ emission as a star-formation rate tracer at different viewing angles. (abridged)
astrophysics
We address perfect discrimination of two separable states. When available states are restricted to separable states, we can theoretically consider a larger class of measurements than the class of measurements allowed in quantum theory. The framework composed of the class of separable states and the above extended class of measurements is a typical example of general probabilistic theories. In this framework, we give a necessary and sufficient condition to discriminate two separable pure states perfectly. In particular, we derive measurements explicitly to discriminate two separable pure states perfectly, and find that some non-orthogonal states are perfectly distinguishable. However, the above framework does not improve the capacity, namely, the maximum number of states that are simultaneously and perfectly distinguishable.
quantum physics
By performing three-dimensional radiation hydrodynamics simulations, we study the formation of young massive star clusters (YMCs, $M_{*}>10^4~M_{\odot}$) in clouds with surface densities ranging from $\Sigma_{\rm cl} = 80$ to $3200~M_{\odot}\;{\rm pc^{-2}}$. We find that photoionization feedback suppresses star formation significantly in clouds with low surface density. Once the initial surface density exceeds $\sim 100~M_{\odot}\;{\rm pc^{-2}}$ for clouds with $M_{\rm cl}=10^{6}~M_{\odot}$ and $Z= Z_{\odot}$, most of the gas is converted into stars because the photoionization feedback is inefficient in the deep gravitational potential. In this case, the star clusters are massive and gravitationally bound, as YMCs. The transition surface density increases as metallicity decreases, and it is $\sim 350~M_{\odot}\;{\rm pc^{-2}}$ for $Z=10^{-2}~Z_{\odot}$. We show that a star-formation efficiency (SFE) of more than 10 percent is needed to keep a star cluster gravitationally bound even after the disruption of its cloud. Also, we develop a semi-analytical model that reproduces the SFEs obtained in our simulations. We find that the SFEs are fit by a power-law function with the dependence $\propto \Sigma_{\rm cl}^{1/2}$ at low surface densities, rising rapidly at the transition surface densities. The conditions on the surface density and the metallicity match recent observations of giant molecular clouds forming YMCs in nearby galaxies.
astrophysics
Researchers rely on the distance function to model multi-product production using multiple inputs. A stochastic directional distance function (SDDF) allows for noise in potentially all input and output variables. Yet, when estimated, the direction selected will affect the functional estimates because deviations from the estimated function are minimized in the specified direction. The set of identified parameters of a parametric SDDF can be narrowed via data-driven approaches to restrict the directions considered. We demonstrate a similar narrowing of the identified parameter set for a shape-constrained nonparametric method, where the shape constraints impose standard features of a cost function such as monotonicity and convexity. Our Monte Carlo simulation studies reveal significant improvements, as measured by out-of-sample radial mean squared error, in functional estimates when we use a directional distance function with an appropriately selected direction. From our Monte Carlo simulations we conclude that selecting a direction that is approximately orthogonal to the estimated function in the central region of the data gives significantly better estimates relative to the directions commonly used in the literature. For practitioners, our results imply that selecting a direction vector that has non-zero components for all variables that may have measurement error provides a significant improvement in the estimator's performance. We illustrate these results using cost and production data from samples of approximately 500 US hospitals per year operating in 2007, 2008, and 2009, respectively, and find that the shape-constrained nonparametric methods provide a significant increase in flexibility over second-order local approximation parametric methods.
statistics
Resource reservation is an essential step to enable wireless data networks to support a wide range of user demands. In this paper, we consider the problem of joint resource reservation in the backhaul and Radio Access Network (RAN) based on the statistics of user demands and channel states, as well as network availability. The goal is to maximize the sum of expected traffic flow rates, subject to link and access point budget constraints, while minimizing the expected outage of downlinks. The formulated problem turns out to be non-convex and difficult to solve to global optimality. We propose an efficient Block Coordinate Descent (BCD) algorithm to approximately solve the problem. The proposed BCD algorithm optimizes the link capacity reservation in the backhaul using a novel multipath routing algorithm that decomposes the problem down to the link level and parallelizes the computation across backhaul links, while the reservation of transmission resources in the RAN is carried out via a novel scalable and distributed algorithm based on Block Successive Upper-bound Minimization (BSUM). We prove that the proposed BCD algorithm converges to a Karush-Kuhn-Tucker solution. Simulation results verify the efficiency and the efficacy of our BCD approach against two heuristic algorithms.
electrical engineering and systems science
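As a schematic illustration of block coordinate descent as used above (a toy example, not the paper's reservation algorithm), the Python sketch below alternately minimizes a strongly convex objective over two blocks, solving each block exactly, in the same spirit in which the backhaul and RAN variables are updated in turn.

# f(x, y) = (x - 1)**2 + (y + 2)**2 + x*y, strongly convex in (x, y).
x, y = 0.0, 0.0
for _ in range(50):
    x = (2.0 - y) / 2.0    # exact argmin over x: df/dx = 2(x - 1) + y = 0
    y = -(4.0 + x) / 2.0   # exact argmin over y: df/dy = 2(y + 2) + x = 0
# (x, y) converges to the unique stationary (KKT) point, here (8/3, -10/3).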
Nuclear parton distribution functions (nuclear PDFs) are non-perturbative objects that encode the partonic behaviour of bound nucleons. To avoid potential higher-twist contributions, the data probing the high-$x$ end of nuclear PDFs are sometimes left out of the global extractions despite their potential to constrain the fit parameters. In the present work we focus on the kinematic corner covered by the new high-$x$ data measured by the CLAS/JLab collaboration. By using the Hessian re-weighting technique, we are able to quantitatively test the compatibility of these data with globally analyzed nuclear PDFs and explore the expected impact on the valence-quark distributions at high $x$. We find that the data are in good agreement with the EPPS16 and nCTEQ15 nuclear PDFs, whereas they disagree with TuJu19. The implications for flavour separation, higher-twist contributions and models of the EMC effect are discussed.
high energy physics phenomenology
Whitney proved in 1931 that 4-connected planar triangulations are Hamiltonian. Hakimi, Schmeichel, and Thomassen conjectured in 1979 that if $G$ is a 4-connected planar triangulation with $n$ vertices then $G$ contains at least $2(n-2)(n-4)$ Hamiltonian cycles, with equality if and only if $G$ is a double wheel. On the other hand, a recent result of Alahmadi, Aldred, and Thomassen states that there are exponentially many Hamiltonian cycles in 5-connected planar triangulations. In this paper, we consider 4-connected planar $n$-vertex triangulations $G$ that do not have too many separating 4-cycles or have minimum degree 5. We show that if $G$ has $O(n/{\log}_2 n)$ separating 4-cycles then $G$ has $\Omega(n^2)$ Hamiltonian cycles, and if $\delta(G)\ge 5$ then $G$ has $2^{\Omega(n^{1/4})}$ Hamiltonian cycles. Both results improve previous work. Moreover, the proofs involve a "double wheel" structure, providing further evidence to the above conjecture.
mathematics
We introduce a new kind of foliated quantum field theory (FQFT) of gapped fracton orders in the continuum. FQFT is defined on a manifold with a layered structure given by one or more foliations, each of which decomposes spacetime into a stack of layers. FQFT involves a new kind of gauge field, a foliated gauge field, which behaves similarly to a collection of independent gauge fields on this stack of layers. Gauge-invariant operators (and their analogous particle mobilities) are constrained to the intersection of one or more layers from different foliations. The level coefficients are quantized and exhibit a duality that spatially transforms the coefficients. This duality occurs because the FQFT is a foliated fracton order. That is, the duality can decouple 2+1D gauge theories from the FQFT through a process we dub exfoliation.
high energy physics theory
The model proposed in this study considers a single supplier of electrical energy and is used in distribution systems. The objective of this study is to optimize the distribution of active and reactive energy in the supply of sub-distribution or the market. The model proposed in this paper accounts for the effects associated with switching capacitors on distribution and dispatching centers. Using capacitors in the power grid affects bus voltage and consequently the amount of active and reactive energy demanded by the consumer. The optimized supply is determined through the solution of a mathematical operation model. In this study, the effect of voltage changes on consumer demand for residential, commercial, and industrial power is quantified and embodied in an analytical model, which is then solved using General Algebraic Modeling System (GAMS) software.
electrical engineering and systems science
MASnI$_3$, an organometallic halide, has great potential in the field of lead-free perovskite solar cells. Ultraviolet photons have been shown to generate deep trapping electronic defects in mesoporous TiO$_2$-based perovskite, affecting its performance and stability. In this study, the structure, electronic properties, and optical properties of the cubic, tetragonal, and hexagonal phases of MASnI$_3$ were studied using first-principles calculations. The results indicate that the hexagonal phase of MASnI$_3$ possesses a larger indirect band gap and larger carrier effective mass along the \emph{c}-axis compared with the cubic and tetragonal phases. These findings were attributed to the enhanced electronic coupling and localization in the hexagonal phase. Moreover, the hexagonal phase exhibited high absorption of ultraviolet photons and high transmission of visible photons, particularly along the \emph{c}-axis. These characteristics demonstrate the potential of hexagonal MASnI$_3$ for application in multijunction perovskite tandem solar cells or as coatings in mesoporous TiO$_2$-based perovskite solar cells to enhance ultraviolet stability and photon utilization.
condensed matter
Due to the sensitive nature of diabetes-related data, which prevents them from being shared between studies, progress in the field of glucose prediction is hard to assess. To address this issue, we present GLYFE (GLYcemia Forecasting Evaluation), a benchmark of machine-learning-based glucose-predictive models. To ensure the reproducibility of the results and the usability of the benchmark in the future, we provide extensive details about the data flow. Two datasets are used, the first comprising 10 in-silico adults from the UVA/Padova Type 1 Diabetes Metabolic Simulator (T1DMS) and the second made of 6 real type 1 diabetic patients from the OhioT1DM dataset. The predictive models are personalized to the patient and evaluated on 3 different prediction horizons (30, 60, and 120 minutes) with metrics assessing their accuracy and clinical acceptability. The results of nine different models from the glucose-prediction literature are presented. First, they show that standard autoregressive linear models are outclassed by kernel-based non-linear ones and neural networks. In particular, the support vector regression model stands out, being at the same time one of the most accurate and clinically acceptable models. Finally, the relative performances of the models are the same for both datasets. This shows that, even though data simulated by T1DMS are not fully representative of real-world data, they can be used to assess the forecasting ability of glucose-predictive models. These results serve as a basis of comparison for future studies. In a field where data are hard to obtain, and where the comparison of results from different studies is often irrelevant, GLYFE gives the opportunity of gathering researchers around a standardized common environment.
electrical engineering and systems science
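As a rough sketch of the kind of personalized model benchmarked here, the snippet below fits a support vector regression on lagged glucose readings to predict 30 minutes ahead (6 steps at 5-minute sampling). The synthetic trace, lag length, and hyperparameters are our own choices and do not reproduce the GLYFE pipeline.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical single-patient CGM trace (mg/dL), 5-minute sampling.
rng = np.random.default_rng(1)
glucose = 120 + 30 * np.sin(np.arange(2000) / 40) + rng.normal(0, 5, 2000)

hist, horizon = 12, 6   # one hour of history, 30-minute prediction horizon
idx = range(len(glucose) - hist - horizon + 1)
X = np.array([glucose[i:i + hist] for i in idx])
y = np.array([glucose[i + hist + horizon - 1] for i in idx])

split = int(0.8 * len(X))
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"RMSE at 30-minute horizon: {rmse:.1f} mg/dL")
```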
We present a joint 3D pose and focal length estimation approach for object categories in the wild. In contrast to previous methods that predict 3D poses independently of the focal length or assume a constant focal length, we explicitly estimate and integrate the focal length into the 3D pose estimation. For this purpose, we combine deep learning techniques and geometric algorithms in a two-stage approach: First, we estimate an initial focal length and establish 2D-3D correspondences from a single RGB image using a deep network. Second, we recover 3D poses and refine the focal length by minimizing the reprojection error of the predicted correspondences. In this way, we exploit the geometric prior given by the focal length for 3D pose estimation. This results in two advantages: First, we achieve significantly improved 3D translation and 3D pose accuracy compared to existing methods. Second, our approach finds a geometric consensus between the individual projection parameters, which is required for precise 2D-3D alignment. We evaluate our proposed approach on three challenging real-world datasets (Pix3D, Comp, and Stanford) with different object categories and significantly outperform the state-of-the-art by up to 20% absolute in multiple different metrics.
computer science
In this paper we address the problem of tracking control of nonlinear systems via contraction analysis. Necessary conditions for systems to achieve universal asymptotic tracking are studied in several different cases. We show the links to the well-developed control contraction metric, as well as its invariance under dynamic extension. In terms of these conditions, we identify a differentially detectable output, based on which a simple differential controller for trajectory tracking is designed via damping injection. As an illustration, we apply the design to electrostatic microactuators.
electrical engineering and systems science
Turbulent flows are fundamental in engineering and the environment, but their chaotic and three-dimensional (3-D) nature makes them computationally expensive to simulate. In this work, a dimensionality reduction technique is investigated to exploit flows presenting a homogeneous direction, such as wake flows of extruded two-dimensional (2-D) geometries. First, we examine the effect of the homogeneous direction span on the wake turbulence dynamics of incompressible flow past a circular cylinder at $Re=10^4$. It is found that the presence of a solid wall induces 3-D structures even in highly constricted domains. The 3-D structures are rapidly two-dimensionalised by the large-scale K\'{a}rm\'{a}n vortices if the cylinder span is 50\% of the diameter or less, as a result of the span being shorter than the natural wake Mode B instability wavelength[...]. With this physical understanding, a 2-D data-driven model that incorporates 3-D effects, as found in the 3-D wake flow, is presented. The 2-D model is derived from a novel flow decomposition based on a local spanwise average of the flow, yielding the spanwise-averaged Navier-Stokes (SANS) equations[...]. A machine-learning (ML) model is employed to provide closure to the SANS equations. In the a-priori framework, the ML model yields accurate predictions of the SSR terms, in contrast to a standard eddy-viscosity model which completely fails to capture the closure term structures[...]. In the a-posteriori analysis, while we find evidence of known stability issues with long-time ML predictions for dynamical systems, the closed SANS equations are still capable of predicting wake metrics and induced forces with errors of 1-10%. This amounts to approximately an order-of-magnitude improvement over standard 2-D simulations while reducing the computational cost of 3-D simulations by 99.5%.
physics
Consider the set $\mathcal{S}=\lbrace\rho_{SE}\rbrace$ of possible initial states of the system-environment, steered from a tripartite reference state $\omega_{RSE}$. Buscemi [F. Buscemi, Phys. Rev. Lett. 113, 140502 (2014)] showed that the reduced dynamics of the system, for each $\rho_{S}\in \mathrm{Tr}_{E}\mathcal{S}$, is always completely positive if and only if $\omega_{RSE}$ is a Markov state. There, during the proof, it was assumed that the dimensions of the system and the environment can vary through the evolution. Here, we show that this assumption is necessary: we give an example for which, though $\omega_{RSE}$ is not a Markov state, the reduced dynamics of the system is completely positive, for any evolution of the system-environment during which the dimensions of the system and the environment remain unchanged. As our next result, we show that the result of Muller-Hermes and Reeb [A. Muller-Hermes and D. Reeb, Ann. Henri Poincare 18, 1777 (2017)], on monotonicity of the quantum relative entropy under positive maps, cannot be generalized to Hermitian maps, even within their physical domains.
quantum physics
The largest theoretical contribution to neural networks comes from the VC dimension, which characterizes the sample complexity of a classification model in a probabilistic view and is widely used to study the generalization error. So far in the literature, the VC dimension has only been used to approximate generalization error bounds for different neural network architectures. It has not yet been used, implicitly or explicitly, to fix the network size, which is important because the wrong configuration can lead to high computational effort in training and to overfitting. There is thus a need to bound the number of units so that the task can be computed with only a sufficient number of parameters. For binary classification tasks, shallow networks are used since they have the universal approximation property, and it suffices to size the hidden-layer width for such networks. This paper gives a theoretical justification of the required attribute size and the corresponding hidden-layer dimension, for a given sample set, that yields optimal binary classification results with minimum training complexity in a single-layer feedforward network framework. The paper also establishes a proof of the existence of bounds on the width of the hidden layer and its range, subject to certain conditions. The findings are experimentally analyzed on three different datasets using MATLAB 2018b.
computer science
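For context, a standard (not paper-specific) statement of how the VC dimension mediates between network size and sample complexity is the following: for feedforward threshold networks with $W$ weights,
$$\mathrm{VCdim}(\mathcal{F}) = O(W\log W), \qquad m(\varepsilon,\delta) = O\!\left(\frac{1}{\varepsilon}\left(\mathrm{VCdim}(\mathcal{F})\log\frac{1}{\varepsilon}+\log\frac{1}{\delta}\right)\right),$$
so bounding the hidden-layer width directly bounds $W$ and hence the sample size $m$ needed to reach error $\varepsilon$ with confidence $1-\delta$.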
Nonadiabatic holonomic quantum computation (NHQC) provides a method to implement error-resilient gates and has attracted considerable attention recently. Since it was proposed, three-level $\Lambda$ systems have become the typical building block for NHQC, and a number of NHQC schemes have been developed based on such systems. In this paper, we investigate the realization of NHQC beyond the standard three-level setting. The central idea of our proposal is to improve NHQC by enlarging the Hilbert space of the building-block system and letting it have a bipartite graph structure in order to ensure purely holonomic evolution. Our proposal not only improves conventional qubit-based NHQC by efficiently reducing its duration, but also provides implementations of qudit-based NHQC. Therefore, our proposal provides a further development of NHQC that can contribute significantly to the physical realization of efficient quantum information processors.
quantum physics
A large class of 5d superconformal field theories (SCFTs) can be constructed by integrating out BPS particles from 6d SCFTs compactified on a circle. We describe a general method for extracting the flavor symmetry of any 5d SCFT lying in this class. For this purpose, we utilize the geometric engineering of 5d N=1 theories in M-theory, where the flavor symmetry is encoded in a collection of non-compact surfaces.
high energy physics theory
The determination of the Higgs self-coupling is one of the key ingredients for understanding the mechanism behind electroweak symmetry breaking. An indirect method for constraining the Higgs trilinear self-coupling via single Higgs production at next-to-leading order (NLO) has been proposed in order to avoid the drawbacks of studies with double Higgs production. In this paper we study the Higgs self-interaction through the vector boson fusion (VBF) process $e^{-} p \to \nu_{e} h j$ at the future LHeC. At NLO, we compute analytically the scattering amplitudes for the relevant processes, in particular those induced by the Higgs self-interaction. A Monte Carlo simulation and a statistical analysis utilizing the analytic results are then carried out for Higgs production through VBF and decay to $b\bar{b}$, which yield for the trilinear Higgs self-coupling rescaling parameter $\kappa_{\lambda}$ the limit [-0.57, 2.98] with $2~\text{ab}^{-1}$ integrated luminosity. If we assume that about 10% of the signal survives the event selection cuts, and include all the background, the constraint broadens to [-2.11, 4.63].
high energy physics phenomenology
In this paper, we design erasure-correcting codes for channels with burst and random erasures, when a strict decoding delay constraint is in place. We consider the sliding-window-based packet erasure model proposed by Badr et al., where any time-window of width $w$ contains either up to $a$ random erasures or an erasure burst of length at most $b$. One needs to recover any erased packet, where erasures are as per the channel model, with a strict decoding delay deadline of $\tau$ time slots. Presently existing rate-optimal constructions in the literature require, in general, a field size which grows exponentially in $\tau$, for a constant $\frac{a}{\tau}$. In this work, we present a new rate-optimal code construction covering all channel and delay parameters, which requires an $O(\tau^2)$ field size. As a special case, when $(b-a)=1$, we have a field size linear in $\tau$. We also present three other constructions having linear field size, under certain constraints on channel and decoding delay parameters. As a corollary, we obtain low-field-size, rate-optimal convolutional codes for any given column distance and column span. Simulations indicate that the newly proposed streaming code constructions offer lower packet-loss probabilities compared to existing schemes, for selected instances of Gilbert-Elliott and Fritchman channels.
computer science
Panoramic X-ray (PX) provides a 2D picture of the patient's mouth in a panoramic view to help dentists observe invisible disease inside the gum. However, it provides limited 2D information compared with cone-beam computed tomography (CBCT), another dental imaging method that generates a 3D picture of the oral cavity but at a higher radiation dose and price. Consequently, it is of great interest to reconstruct the 3D structure from a 2D X-ray image, which can greatly broaden the application of X-ray imaging in dental surgery. In this paper, we propose a framework, named Oral-3D, to reconstruct the 3D oral cavity from a single PX image and prior information about the dental arch. Specifically, we first train a generative model to learn the cross-dimension transformation from 2D to 3D. Then we restore the shape of the oral cavity with a deformation module using the dental arch curve, which can be obtained simply by taking a photo of the patient's mouth. Notably, Oral-3D can restore both the density of bony tissues and the curved mandible surface. Experimental results show that Oral-3D can efficiently and effectively reconstruct the 3D oral structure and show critical information in clinical applications, e.g., tooth pulling and dental implants. To the best of our knowledge, we are the first to explore this domain transformation problem between these two imaging methods.
electrical engineering and systems science
The advent of Generative Adversarial Network (GAN) architectures has given anyone the ability to generate incredibly realistic synthetic imagery. The malicious diffusion of GAN-generated images may lead to serious social and political consequences (e.g., fake news spreading, opinion formation, etc.). It is therefore important to regulate the widespread distribution of synthetic imagery by developing solutions able to detect such images. In this paper, we study the possibility of using Benford's law to discriminate GAN-generated images from natural photographs. Benford's law describes the distribution of the most significant digit, here applied to quantized Discrete Cosine Transform (DCT) coefficients. Extending and generalizing this property, we show that it is possible to extract a compact feature vector from an image. This feature vector can be fed to an extremely simple classifier for GAN-generated image detection.
computer science
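A minimal sketch of the feature described here, assuming a global 2D DCT with a single quantization step q (the paper's block sizes, quantization tables, and classifier are not reproduced):

```python
import numpy as np
from scipy.fft import dctn

def first_digit_freqs(image, q=16):
    """First-significant-digit frequencies of quantized 2D DCT coefficients."""
    coeffs = np.abs(np.round(dctn(image.astype(float), norm="ortho") / q))
    coeffs = coeffs[coeffs >= 1]                        # drop zeroed coefficients
    digits = (coeffs // 10 ** np.floor(np.log10(coeffs))).astype(int)
    return np.array([(digits == d).mean() for d in range(1, 10)])

benford = np.log10(1.0 + 1.0 / np.arange(1, 10))        # Benford reference P(d)
img = np.random.default_rng(2).integers(0, 256, (256, 256))  # stand-in for a photo
freqs = first_digit_freqs(img)
kl = np.sum(freqs * np.log((freqs + 1e-12) / benford))  # divergence from Benford
print("digit frequencies:", freqs.round(3), " KL from Benford:", kl)
```

The nine digit frequencies (or their divergences from the Benford reference at several quantization steps) form the compact feature vector fed to the classifier.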
We offer a brief response to the criticisms put forward by Cusin et al in arXiv:1811.03582 about our work arXiv:1810.13435 and arXiv:1806.01718, emphasising that none of these criticisms are relevant to our main results.
astrophysics
We make use of Hecke operators and arithmetic of imaginary quadratic fields to derive an explicit version of a special case of Siegel's mass formula.
mathematics
Following the idea that dissipation in turbulence at high Reynolds number occurs through events singular in space-time and described by solutions of the inviscid Euler equations, we draw the conclusion that in such flows scaling laws should depend only on quantities appearing in the Euler equations. This excludes viscosity or a turbulent length as scaling parameters and constrains drastically the possible analytical pictures of this limit. We focus on Newton's law of drag for a projectile moving quickly in a fluid at rest. Inspired by Newton's drag force law (proportional to the square of the speed of the moving object in the limit of large Reynolds numbers), which is well verified in experiments when the location of the detachment of the boundary layer is well defined, we propose an explicit relationship between the Reynolds stress in the turbulent wake and quantities depending on the velocity field (averaged in time but depending on space), in the form of an integro-differential equation for the velocity, which is solved for a Poiseuille flow in a circular pipe.
physics
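For reference, the drag law in question reads, in the large-Reynolds-number limit,
$$F_D = \tfrac{1}{2}\, C_D\, \rho\, A\, U^2,$$
with a drag coefficient $C_D$ independent of viscosity, so that only quantities appearing in the Euler equations (the fluid density $\rho$, the speed $U$, and the frontal area $A$) enter the scaling.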
Models for double parton distributions that are realistic and consistent with theoretical constraints are crucial for a reliable description of double parton scattering. We show how an ansatz that has the correct behaviour in the limit of small transverse distance between the partons can be improved step by step, such as to fulfil the sum rules for double parton distributions with an accuracy around 10%.
high energy physics phenomenology
We initiate the study of decorated character stacks and their quantizations using the framework of stratified factorization homology. We thereby extend the construction by Fock and Goncharov of (quantum) decorated character varieties to encompass also the stacky points, in a way that is both compatible with cutting and gluing and equivariant with respect to canonical actions of the modular group of the surface. In the cases $G=SL_2,PGL_2$ we construct a system of categorical charts and flips on the quantum decorated character stacks which generalize the well--known cluster structures on the Fock--Goncharov moduli spaces.
mathematics
We consider a class of non-homogeneous Markov chains, that contains many natural examples. Next, using martingale methods, we establish some deviation and moment inequalities for separately Lipschitz functions of such a chain, under moment conditions on some dominating random variables.
mathematics
We study the problem of maximizing opinion diversity in a social network that includes opinion leaders with binary opposing opinions. The members of the network who are not leaders form their opinions using the French-DeGroot model of opinion dynamics. To quantify the diversity of such a system, we adapt two diversity measures from ecology to our setting, the Simpson Diversity Index and the Shannon Index. Using these two measures, we formalize the problem of how to place a single leader with opinion 1, given a network with a leader with opinion 0, so as to maximize the opinion diversity. We give analytical solutions to these problems for paths, cycles, and trees, and we highlight our results through a numerical example.
mathematics
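A minimal sketch of the two ecology-inspired diversity measures, assuming equilibrium French-DeGroot opinions are binned into discrete classes (the binning and the example opinion vector are our own illustration, not the paper's exact adaptation):

```python
import numpy as np

def simpson_index(p):
    """Simpson Diversity Index: chance two random members fall in different bins."""
    return 1.0 - np.sum(p ** 2)

def shannon_index(p):
    """Shannon Index: entropy of the binned opinion distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical equilibrium opinions of the non-leader nodes on [0, 1].
opinions = np.array([0.1, 0.2, 0.25, 0.5, 0.55, 0.8, 0.9, 1.0])
counts, _ = np.histogram(opinions, bins=10, range=(0, 1))
p = counts / counts.sum()
print(f"Simpson: {simpson_index(p):.3f}, Shannon: {shannon_index(p):.3f}")
```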
We discuss how high energy neutrino communications could be synchronized to large-scale astrophysical events either in addition to or instead of electromagnetic signals.
astrophysics
We extract the proton magnetic radius from the high-precision electron-proton elastic scattering cross section data. Our theoretical framework combines dispersion analysis and chiral effective field theory and implements the dynamics governing the shape of the low-$Q^2$ form factors. It allows us to use data up to $Q^2\sim$ 0.5 GeV$^2$ for constraining the radii and overcomes the difficulties of empirical fits and $Q^2 \rightarrow 0$ extrapolation. We obtain a magnetic radius $r_M^p$ = 0.850 $\pm$0.001 (fit 68%) $\pm$0.010 (theory full range) fm, significantly different from earlier results obtained from the same data, and close to the extracted electric radius $r_E^p$ = 0.842 $\pm$0.002 (fit) $\pm$0.010 (theory) fm.
high energy physics phenomenology
A probabilistic model describes a system in its observational state. In many situations, however, we are interested in the system's response under interventions. The class of structural causal models provides a language that allows us to model the behaviour under interventions. It can be taken as a starting point to answer a plethora of causal questions, including the identification of causal effects and causal structure learning. In this chapter, we provide a natural and straightforward extension of this concept to dynamical systems, focusing on continuous-time models. In particular, we introduce two types of causal kinetic models that differ in how the randomness enters into the model: it may either be considered as observational noise or as systematic driving noise. In both cases, we define interventions and thereby provide a possible starting point for causal inference. In this sense, the book chapter provides more questions than answers. The focus of the proposed causal kinetic models lies on the dynamics themselves rather than on corresponding stationary distributions, for example. We believe that this is beneficial when the aim is to model the full time evolution of the system and data are measured at different time points. Under this focus, it is natural to consider interventions in the differential equations themselves.
statistics
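A small sketch of the distinction between observing and intervening in a kinetic model, using hypothetical Lotka-Volterra dynamics: the intervention do(y := 2) replaces the y-equation while leaving the x-equation intact.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Observational system: Lotka-Volterra kinetics dz/dt = f(z).
def f(t, z, a=1.0, b=0.4, c=0.4, d=1.0):
    x, y = z
    return [a * x - b * x * y, c * x * y - d * y]

# Intervention do(y := 2): clamp the second variable, keep the x-equation.
def f_do(t, z):
    x, _ = z
    y = 2.0
    return [1.0 * x - 0.4 * x * y, 0.0]

obs = solve_ivp(f, (0, 10), [1.0, 1.0])
intv = solve_ivp(f_do, (0, 10), [1.0, 2.0])
print("x(10) observational:", obs.y[0, -1], " under do(y=2):", intv.y[0, -1])
```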
Preferential attachment networks are a type of random network where new nodes are connected to existing ones at random, and are more likely to connect to those that already have many connections. We investigate further a family of models introduced by Antunovi\'{c}, Mossel and R\'{a}cz where each vertex in a preferential attachment graph is assigned a type, based on the types of its neighbours. Instances of this type of process where the proportions of each type present do not converge over time seem to be rare. Previous work found that a "rock-paper-scissors" setup where each new node's type was determined by a rock-paper-scissors contest between its two neighbours does not converge. Here, two cases similar to that are considered, one which is like the above but with an arbitrarily small chance of picking a random type and one where there are four neighbours which perform a knockout tournament to determine the new type. These two new setups, despite seeming very similar to the rock-paper-scissors model, do in fact converge, perhaps surprisingly.
mathematics
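As an illustrative (not paper-exact) simulation of the first setup: each new node attaches to two endpoints chosen proportionally to degree and adopts the winner of a rock-paper-scissors contest between its neighbours' types, except with a small probability eps of a uniformly random type. All parameters below are our own choices, and multi-edges are allowed for simplicity.

```python
import random

BEATS = {0: 2, 1: 0, 2: 1}   # type k beats type BEATS[k]
eps = 0.05                   # small chance of a uniformly random type
random.seed(3)

degree_list = [0, 1, 2, 0, 1, 2]   # seed triangle: each node has degree 2
types = {0: 0, 1: 1, 2: 2}
counts = [1, 1, 1]

for v in range(3, 50_000):
    # two neighbours chosen proportionally to degree (preferential attachment)
    u, w = random.choice(degree_list), random.choice(degree_list)
    if random.random() < eps:
        t = random.randrange(3)
    else:
        tu, tw = types[u], types[w]
        t = tu if tu == tw or BEATS[tu] == tw else tw
    types[v] = t
    counts[t] += 1
    degree_list += [u, w, v, v]    # each new edge raises both endpoint degrees

print([c / sum(counts) for c in counts])   # type proportions after growth
```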
Relations for the optimum well width, barrier width, and spacer-layer width to achieve the highest PVCR, on the basis of effective mass and barrier height in RTDs, are proposed. The optimum spacer layer is found to be half the de Broglie wavelength associated with the bound state of the corresponding finite quantum well. The proposed relations for the optimum parameters can be used to design an RTD based on any two appropriate materials to attain the highest PVCR. The effect of doping concentrations on PVCR and peak current was studied. As a case study, we consider GaAs/Ga$_{0.7}$Al$_{0.3}$As and GaN/Ga$_{0.7}$Al$_{0.3}$N RTDs. The current density obtained using the tunneling coefficient based on the transfer-matrix approach takes into account the variation of the electric field in the well and barrier regions on account of the variation in the dielectric constant of the material.
condensed matter
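A quick numerical sketch of the proposed spacer rule, half the de Broglie wavelength of the bound state; the effective mass and bound-state energy below are illustrative GaAs-like numbers, not values from the paper.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m0 = 9.1093837015e-31    # electron rest mass, kg
eV = 1.602176634e-19     # J

def optimal_spacer_nm(E_bound_eV, m_eff_ratio):
    """Half the de Broglie wavelength of the bound state, in nanometres."""
    p = np.sqrt(2 * m_eff_ratio * m0 * E_bound_eV * eV)  # bound-state momentum
    lam = 2 * np.pi * hbar / p                           # de Broglie wavelength
    return lam / 2 * 1e9

# Illustrative numbers: m* = 0.067 m0, bound-state energy ~0.1 eV.
print(f"optimal spacer ~ {optimal_spacer_nm(0.1, 0.067):.1f} nm")
```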
A statistical analysis of an electoral quick count based on the total count of votes in the election of the State of Mexico's governor in 2017 is performed in order to verify precision, confidence level of interval estimations, possible bias and derived conclusions therein, with the main purpose of checking compliance with the objectives of such statistical procedure.
statistics
Non-Hermitian quantum many-body systems are a fascinating subject to be explored. Using the generalized density matrix renormalisation group method and complementary exact diagonalization, we elucidate the many-body ground states and dynamics of a 1D interacting non-Hermitian Aubry-Andre-Harper model for bosons. We find stable ground states in the superfluid and Mott insulating regimes under a wide range of conditions in this model. We reveal a skin superfluid state induced by the non-Hermiticity arising from the nonreciprocal hopping. We investigate the topology of the Mott insulating phase and find it to be independent of the non-Hermiticity. The topological Mott insulators in this non-Hermitian system are characterized by four equal Chern numbers and a quantized shift of biorthogonal many-body polarizations. Furthermore, we show generic asymmetric expansion and correlation dynamics in the system.
condensed matter
We discuss the spin-$\frac12$ Heisenberg antiferromagnet on the simple square lattice in a magnetic field $H$ using the recently proposed bond-operator technique. It is well known that magnetically ordered phases of quantum magnets are described, at least qualitatively, by the conventional spin-wave theory, which only introduces quantum corrections into the classical solution of the problem. We observe that quantum fluctuations change drastically the dynamical properties of the considered model at $H$ close to its saturation value: the dynamical structure factor shows anomalies corresponding to Green's function poles which have no counterparts in the spin-wave theory. That is, quantum fluctuations produce multiple short-wavelength magnon modes without changing qualitatively the long-wavelength spin dynamics. Our results are in agreement with previous quantum Monte Carlo simulations and exact diagonalization of finite clusters.
condensed matter
Product reviews summarization is a type of Multi-Document Summarization (MDS) task in which the summarized document sets are often far larger than in traditional MDS (up to tens of thousands of reviews). We highlight this difference and coin the term "Massive Multi-Document Summarization" (MMDS) to denote an MDS task that involves hundreds of documents or more. Prior work on product reviews summarization considered small samples of the reviews, mainly due to the difficulty of handling massive document sets. We show that summarizing small samples can result in loss of important information and provide misleading evaluation results. We propose a schema for summarizing a massive set of reviews on top of a standard summarization algorithm. Since writing large volumes of reference summaries needed for advanced neural network models is impractical, our solution relies on weak supervision. Finally, we propose an evaluation scheme that is based on multiple crowdsourced reference summaries and aims to capture the massive review collection. We show that an initial implementation of our schema significantly improves over several baselines in ROUGE scores, and exhibits strong coherence in a manual linguistic quality assessment.
computer science
We investigate the collision of a new class of topological defects that tend to become compact as a control parameter increases to larger and larger values. These new compactlike defects have, in general, more than one internal discrete mode depending on the value of the control parameter and, as usual, there is a critical velocity above which the defects escape after the collision. We notice that below the critical velocity there are windows of escape presenting a fractal structure. An interesting novelty is the appearance of metastable structures in which the compactlike defects maintain a fixed distance from each other. Another new feature is the formation of boosted localized distributions of the scalar field, which we call moving oscillons. These oscillons carry away almost all of the scalar field energy, producing a complete disruption of the compactlike defects. The pattern of the moving oscillons depends on the control parameter and becomes more complex as we increase its value. We conjecture that the new effects may be connected with the presence of more than one vibrational mode in the spectrum of the stability potential of the model under investigation.
high energy physics theory
We examine the range of applicability of Taylor's hypothesis as used in observations of magnetic turbulence in the solar wind. We do not refer to turbulence theory. We simply ask whether, in a turbulent magnetohydrodynamic flow, the observed magnetic frequency spectrum can be interpreted as a mapping of the wavenumber turbulence into the stationary spacecraft frame. In addition to the known restrictions on the angle of propagation with respect to the fluctuation spectrum and the question of the wavenumber dependence of the frequency in turbulence, which we briefly review, we show that another restriction concerns the inclusion or exclusion of turbulent fluctuations in the velocity field. Taylor's hypothesis, in its application to magnetic (MHD) turbulence, encounters its strongest barriers here: it is applicable to magnetic turbulence only when the turbulent velocity fluctuations can practically be completely neglected against the bulk flow speed. For low flow speeds the transformation becomes rather involved. This account makes no use even of the additional scale dependence of the turbulent frequency, viz. the existence of a "turbulent dispersion relation".
physics
With the relentless rise of computer power, there is a widespread expectation that computers can solve the most pressing problems of science, and even more besides. We explore the limits of computational modelling and conclude that, in the domains of science and engineering that are relatively simple and firmly grounded in theory, these methods are indeed powerful. Even so, the availability of code, data and documentation, along with a range of techniques for validation, verification and uncertainty quantification, are essential for building trust in computer generated findings. When it comes to complex systems in domains of science that are less firmly grounded in theory, notably biology and medicine, to say nothing of the social sciences and humanities, computers can create the illusion of objectivity, not least because the rise of big data and machine learning pose new challenges to reproducibility, while lacking true explanatory power. We also discuss important aspects of the natural world which cannot be solved by digital means. In the long-term, renewed emphasis on analogue methods will be necessary to temper the excessive faith currently placed in digital computation.
computer science
With the large-scale deployment of industrial internet of things (IIoT) devices, the number of vulnerabilities that threaten IIoT security is also growing dramatically, including a mass of undisclosed IIoT vulnerabilities that lack mitigation measures. Coordinated Vulnerability Disclosure (CVD) is one of the most popular vulnerability-sharing solutions, in which security workers (SWs) can develop patches for undisclosed vulnerabilities together. However, CVD assumes that the sharing participants (SWs) are all honest, and thus offers chances for dishonest SWs to leak undisclosed IIoT vulnerabilities. To combat such threats, we propose an Undisclosed IIoT Vulnerabilities Trusted Sharing Protection (UIV-TSP) scheme with dynamic tokens. In this article, a dynamic token is an implicit access credential for an SW to acquire undisclosed vulnerability information; it is held only by the system and constantly updated as the SW accesses the information. Meanwhile, the latest updated token is stealthily sneaked into the acquired information as a traceability token. Once the undisclosed vulnerability information leaves the SW host, an embedded self-destruct program is automatically triggered to prevent leaks, since the destination MAC address in the traceability token has changed. To quickly distinguish dishonest SWs, a trust mechanism is adopted to evaluate the trust value of SWs. Moreover, we design a blockchain-assisted continuous log storage method to achieve tamper-proofing of the dynamic tokens and transparency of undisclosed IIoT vulnerability sharing. The simulation results indicate that our proposed scheme effectively suppresses dishonest SWs and protects undisclosed IIoT vulnerabilities.
computer science
Manipulation of spin-polarized electronic states of two-dimensional (2D) materials under ambient conditions is necessary for developing new quantum devices with small physical dimensions. Here, we explore spin-dependent electronic structures of ultra-thin films of recently introduced 2D synthetic materials MSi$_2$Z$_4$ (M = Mo or W and Z = N or As) using first-principles modeling. Stacking of MSi$_2$Z$_4$ monolayers is found to generate dynamically stable bilayer and bulk materials with thickness-dependent properties. When spin-orbit coupling (SOC) is included in the computations, MSi$_2$P$_4$ monolayers display indirect bandgaps and large spin-split states at the $K$ and $K'$ symmetry points at the corners of the Brillouin zone with nearly 100% spin-polarization. The spins are locked in opposite directions along an out-of-the-plane direction at $K$ and $K'$, leading to spin-valley coupling effects. As expected, spin-polarization due to the presence of inversion symmetry in the pristine bilayers is absent, but it can be induced via an external out-of-plane electric field much like the case of Mo(W)S$_2$ bilayers. A transition from indirect to direct bandgap can be driven by replacing N by As in MSi$_2$(N, As)$_4$ monolayers. Our study indicates that the MSi$_2$Z$_4$ materials can provide a viable alternative to the MoS$_2$ class of 2D materials for valleytronic and optoelectronic applications.
condensed matter
Motility-induced wall aggregation of active Brownian particles (ABPs) is a well-studied phenomenon. Here, we study the aggregation of ABPs on porous walls, which allow the particles to penetrate through at large motility. We show that the active aggregates undergo a morphological transition from a connected dense phase to disconnected droplets with an increase in wall porosity and particle self-motility, similar to wetting-dewetting transitions in equilibrium fluids. We show that both morphologically distinct states are stable and independent of initial conditions, at least in some parameter regions. Our analysis reveals that changes in wall porosity affect the intrinsic properties of the aggregates and change the effective wall-aggregate interfacial tension, consistent with the appearance of the morphological transition. Accordingly, a close analysis of the density, as well as the orientational distribution, indicates that the underlying reason for such morphological transitions is not necessarily specific to systems with porous walls, and that it may be possible to observe them in a larger class of confined active systems by tuning the properties of the confining walls.
condensed matter
Network theory is a useful framework for studying interconnected systems of interacting entities. Many networked systems evolve continuously in time, but most existing methods for the analysis of time-dependent networks rely on discrete or discretized time. In this paper, we propose an approach for studying networks that evolve in continuous time by distinguishing between \emph{interactions}, which we model as discrete contacts, and \emph{ties}, which encode the strengths of relationships as functions of time. To illustrate our tie-decay network formalism, we adapt the well-known PageRank centrality score to our tie-decay framework in a mathematically tractable and computationally efficient way. We apply this framework to a synthetic example and then use it to study a network of retweets during the 2012 National Health Service controversy in the United Kingdom. Our work also provides guidance for similar generalizations of other tools from network theory to continuous-time networks with tie decay, including for applications to streaming data.
physics
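A compact sketch of the tie-decay idea with a PageRank computed on top of it: tie strengths decay exponentially between discrete contacts, and the centrality is run on the resulting weighted matrix. The decay rate, damping factor, and contact lists below are invented; the paper's exact tie-decay PageRank formulation may differ.

```python
import numpy as np

n, alpha, damping = 5, 0.1, 0.85   # alpha: tie-decay rate (our choice)

def update_ties(B, contacts, dt):
    """Decay all tie strengths over dt, then add the new discrete contacts."""
    B = B * np.exp(-alpha * dt)
    for i, j in contacts:
        B[i, j] += 1.0
    return B

def pagerank(B, tol=1e-10):
    """Standard PageRank power iteration on the tie-strength matrix B."""
    out = B.sum(axis=1, keepdims=True)
    P = np.where(out > 0, B / np.where(out > 0, out, 1), 1.0 / B.shape[0])
    r = np.full(B.shape[0], 1.0 / B.shape[0])
    while True:
        r_new = damping * r @ P + (1 - damping) / B.shape[0]
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

B = np.zeros((n, n))
B = update_ties(B, [(0, 1), (1, 2)], dt=0.0)
B = update_ties(B, [(2, 3), (3, 4)], dt=1.5)   # the earlier ties have decayed
print(pagerank(B))
```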
We describe a curious dynamical system that results in sequences of real numbers in $[0,1]$ with seemingly remarkable properties. Let the function $f:\mathbb{T} \rightarrow \mathbb{R}$ satisfy $\hat{f}(k) \geq c|k|^{-2}$ and define a sequence via $$ x_n = \arg\min_x \sum_{k=1}^{n-1}{f(x-x_k)}.$$ Such sequences $(x_n)_{n=1}^{\infty}$ seem to be astonishingly regularly distributed in various ways (satisfying favorable exponential sum estimates; every interval $J \subset [0,1]$ contains $\sim |J|n$ elements). We prove $$ W_2\left( \frac{1}{n} \sum_{k=1}^{n}{\delta_{x_k}}, dx\right) \leq \frac{c}{\sqrt{n}},$$ where $W_2$ is the 2-Wasserstein distance. Much stronger results seem to be true and it seems like an interesting problem to understand this dynamical system better. We obtain optimal results in dimension $d \geq 3$: using $G(x,y)$ to denote the Green's function of the Laplacian on a compact manifold, we show that $$ x_n = \arg\min_{x \in M} \sum_{k=1}^{n-1}{G(x,x_k)} \quad \mbox{satisfies} \quad W_2\left( \frac{1}{n} \sum_{k=1}^{n}{\delta_{x_k}}, dx\right) \lesssim \frac{1}{n^{1/d}}.$$
mathematics
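The greedy construction is easy to experiment with numerically. The sketch below uses $f(x) = \{x\}^2 - \{x\} + 1/6$ (the periodized Bernoulli polynomial $B_2$), whose Fourier coefficients $1/(2\pi^2 k^2)$ satisfy the hypothesis, and approximates each argmin on a fine grid; the grid resolution and number of points are arbitrary choices.

```python
import numpy as np

def f(x):
    # periodized B_2: Fourier coefficients 1/(2 pi^2 k^2) > 0 for all k != 0
    x = np.mod(x, 1.0)
    return x ** 2 - x + 1.0 / 6.0

grid = np.linspace(0, 1, 4096, endpoint=False)
xs = [0.0]                         # arbitrary starting point
energy = f(grid - xs[0])           # running sum_{k<n} f(x - x_k) on the grid
for n in range(1, 200):
    x_new = grid[np.argmin(energy)]   # x_n = argmin of the accumulated energy
    xs.append(x_new)
    energy += f(grid - x_new)

# Rough equidistribution check: the fraction in [0, 1/4) should approach 1/4.
print(np.mean(np.array(xs) < 0.25))
```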
While solving Partial Differential Equations (PDEs) with finite element methods (FEM), serendipity elements allow us to obtain the same order of accuracy as rectangular tensor-product elements with many fewer degrees of freedom (DOFs). To realize the possible computational savings, we develop some additive Schwarz methods (ASM) based on solving local patch problems. Adapting arguments from Pavarino for the tensor-product case, we prove that patch smoothers give conditioning estimates independent of the polynomial degree for a model problem. We also combine this with a low-order global operator to give an optimal two-grid method, with conditioning estimates independent of the mesh size and polynomial degree. The theory holds for serendipity elements in two and three dimensions, and can be extended to full multigrid algorithms. Numerical experiments using Firedrake and PETSc confirm this theory and demonstrate efficiency relative to standard elements.
mathematics
The uncertainty principle imposes a fundamental limit on predicting the measurement outcomes of incompatible observables even if complete classical information of the system state is known. The situation is different if one can build a quantum memory entangled with the system. Minimum uncertainty states are peculiar quantum states that can eliminate uncertainties of incompatible von Neumann observables once assisted by suitable measurements on the memory. Here we determine all minimum uncertainty states of any given set of observables and determine the minimum entanglement required. It turns out all minimum uncertainty states are maximally entangled in a generic case, and vice versa, even if these observables are only weakly incompatible. Our work establishes a precise connection between minimum uncertainty and maximum entanglement, which is of interest to foundational studies and practical applications, including quantum certification and verification.
quantum physics
Observations of galactic white dwarfs with Gaia have allowed for unprecedented modeling of white dwarf cooling, resolving core crystallization and sedimentary heating from neutron rich nuclei. These cooling sequences are sensitive to the diffusion coefficients of nuclei in Coulomb plasmas which have order 10\% uncertainty and are often not valid across coupling regimes. Using large scale molecular dynamics simulations we calculate diffusion coefficients at high resolution in the regime relevant for white dwarf modeling. We present a physically motivated law for diffusion with a semi-empirical correction which is accurate at the percent level. Implemented along with linear mixing in stellar evolution codes, this law should reduce the error from diffusion coefficients by an order of magnitude.
astrophysics
Modern software relies on libraries and uses them via application programming interfaces (APIs). Correct API usage as well as many software engineering tasks are enabled when APIs have formal specifications. In this work, we analyze the implementation of each method in an API to infer a formal postcondition. Conventional wisdom is that, if one has preconditions, then one can use the strongest postcondition predicate transformer (SP) to infer postconditions. However, SP yields postconditions that are exponentially large, which makes them difficult to use, either by humans or by tools. Our key idea is an algorithm that converts such exponentially large specifications into a form that is more concise and thus more usable. This is done by leveraging the structure of the specifications that result from the use of SP. We applied our technique to infer postconditions for over 2,300 methods in seven popular Java libraries. Our technique was able to infer specifications for 75.7% of these methods, each of which was verified using an Extended Static Checker. We also found that 84.6% of resulting specifications were less than 1/4 page (20 lines) in length. Our technique was able to reduce the length of SMT proofs needed for verifying implementations by 76.7% and reduced prover execution time by 26.7%.
computer science
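To illustrate the blowup this addresses, consider a toy absolute-value method (our example, not from the paper): the strongest postcondition is a disjunction over execution paths, which doubles with every branch, while a logically equivalent concise form collapses it. The sketch below writes both forms as runnable assertions.

```python
def abs_val(x: int) -> int:
    r = x if x >= 0 else -x
    # Strongest-postcondition form: one disjunct per path through the method.
    # With k sequential branches this grows to 2**k disjuncts.
    assert (x >= 0 and r == x) or (x < 0 and r == -x)
    # Concise, logically equivalent postcondition after simplification.
    assert r == max(x, -x)
    return r

print(abs_val(-7))  # 7
```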
Hrube\v{s} and Wigderson [HW14] initiated the study of noncommutative arithmetic circuits with division computing a noncommutative rational function in the free skew field, and raised the question of rational identity testing. It is now known that the problem can be solved in deterministic polynomial time in the white-box model for noncommutative formulas with inverses, and in randomized polynomial time in the black-box model [GGOW16, IQS18, DM18], where the running time is polynomial in the size of the formula. The complexity of identity testing of noncommutative rational functions remains open in general (when the formula size is not polynomially bounded). We solve the problem for a natural special case. We consider polynomial expressions in the free group algebra $\mathbb{F}\langle X, X^{-1}\rangle$ where $X=\{x_1, x_2, \ldots, x_n\}$, a subclass of rational expressions of inversion height one. Our main results are the following. 1. Given a degree $d$ expression $f$ in $\mathbb{F}\langle X, X^{-1}\rangle$ as a black-box, we obtain a randomized $\text{poly}(n,d)$ algorithm to check whether $f$ is an identically zero expression or not. We obtain this by generalizing the Amitsur-Levitzki theorem [AL50] to $\mathbb{F}\langle X, X^{-1}\rangle$. This also yields a deterministic identity testing algorithm (and even an expression reconstruction algorithm) that is polynomial time in the sparsity of the input expression. 2. Given an expression $f$ in $\mathbb{F}\langle X, X^{-1}\rangle$ of degree at most $D$, and sparsity $s$, as black-box, we can check whether $f$ is identically zero or not in randomized $\text{poly}(n,\log s, \log D)$ time.
computer science
Testing for association or dependence between pairs of random variables is a fundamental problem in statistics. In some applications, data are subject to selection bias that causes dependence between observations even when it is absent from the population. An important example is truncation models, in which observed pairs are restricted to a specific subset of the X-Y plane. Standard tests for independence are not suitable in such cases, and alternative tests that take the selection bias into account are required. To deal with this issue, we generalize the notion of quasi-independence with respect to the sampling mechanism, and study the problem of detecting any deviations from it. We develop two test statistics motivated by the classic Hoeffding's statistic, and use two approaches to compute their distribution under the null: (i) a bootstrap-based approach, and (ii) a permutation-test with non-uniform probability of permutations, sampled using either MCMC or importance sampling with various proposal distributions. We show that our tests can tackle cases where the biased sampling mechanism is estimated from the data, with an important application to the case of censoring with truncation. We prove the validity of the tests, and show, using simulations, that they perform well for important special cases of the problem and improve power compared to competing methods. The tests are applied to four datasets, two that are subject to truncation, with and without censoring, and two to positive bias mechanisms related to length bias.
statistics
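A rough sketch of such a test under left truncation (pairs observed only when $t \le x$): a Metropolis chain over permissible permutations, where random transpositions are accepted only when the permuted pairs stay inside the truncation region, has the uniform distribution over permissible permutations as its stationary law. Kendall's tau stands in for the paper's Hoeffding-type statistics, and all tuning constants are our own.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)

# Left-truncated pairs: (t, x) is observed only when t <= x.
t_all, x_all = rng.exponential(1.0, 2000), rng.exponential(2.0, 2000)
keep = t_all <= x_all
t, x = t_all[keep][:150], x_all[keep][:150]
n = len(t)

tau_obs, _ = kendalltau(t, x)

# MCMC over permissible permutations: propose a transposition, accept iff
# both affected pairs remain inside the truncation region.
perm = np.arange(n)
null_stats = []
for step in range(200_000):
    i, j = rng.integers(n, size=2)
    if t[perm[j]] <= x[i] and t[perm[i]] <= x[j]:
        perm[i], perm[j] = perm[j], perm[i]
    if step % 1000 == 0 and step > 10_000:   # thinned draws after burn-in
        tau, _ = kendalltau(t[perm], x)
        null_stats.append(tau)

p_value = np.mean(np.abs(null_stats) >= abs(tau_obs))
print("p-value under quasi-independence:", p_value)
```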
We present a simple argument which seems to favor, when applied to a large class of strongly-coupled chiral gauge theories, a dynamical-Higgs-phase scenario, characterized by certain bifermion condensates. Flavor-symmetric confining vacua, described in the infrared by a set of baryonlike massless composite fermions saturating the conventional 't Hooft anomaly matching equations, appear instead disfavored. Our basic criterion is that it should be possible to write a strong-anomaly effective action, analogous to the one used in QCD to describe the solution of the $U(1)_A$ problem in the low-energy effective action, by using the low-energy degrees of freedom in the hypothesized infrared theory. We also comment on some well-known ideas such as complementarity and large-$N$ planar dominance in the context of these chiral gauge theories. Some striking analogies and contrasts between massless QCD and chiral gauge theories seem to emerge from this discussion.
high energy physics theory
The security of private information is becoming the bedrock of an increasingly digitized society. While the users are flooded with passwords and PINs, these gold-standard explicit authentications are becoming less popular and valuable. Recent biometric-based authentication methods, such as facial or finger recognition, are getting popular due to their higher accuracy. However, these hard-biometric-based systems require dedicated devices with powerful sensors and authentication models, which are often limited to most of the market wearables. Still, market wearables are collecting various private information of a user and are becoming an integral part of life: accessing cars, bank accounts, etc. Therefore, time demands a burden-free implicit authentication mechanism for wearables using the less-informative soft-biometric data that are easily obtainable from modern market wearables. In this work, we present a context-dependent soft-biometric-based authentication system for wearables devices using heart rate, gait, and breathing audio signals. From our detailed analysis using the "leave-one-out" validation, we find that a lighter $k$-Nearest Neighbor ($k$-NN) model with $k = 2$ can obtain an average accuracy of $0.93 \pm 0.06$, $F_1$ score $0.93 \pm 0.03$, and {\em false positive rate} (FPR) below $0.08$ at 50\% level of confidence, which shows the promise of this work.
electrical engineering and systems science
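A toy version of the classification setup, with synthetic heart-rate/gait/breathing features and a leave-one-session-out evaluation; the features, user separability, and validation protocol are only rough stand-ins for the paper's.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical soft-biometric feature table: rows = sessions, columns =
# [mean heart rate, gait cadence, breathing-audio spectral centroid].
rng = np.random.default_rng(5)
n_users, sessions = 10, 30
X = np.vstack([rng.normal(loc=[70 + 3 * u, 1.8 + 0.05 * u, 500 + 20 * u],
                          scale=[2.0, 0.05, 15.0], size=(sessions, 3))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), sessions)
groups = np.tile(np.arange(sessions), n_users)   # one fold per session index

clf = KNeighborsClassifier(n_neighbors=2)        # the lighter k-NN with k = 2
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```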
With the downscaling of electronic circuits, components based on semiconductor quantum dots are assuming increasing relevance for future technologies. Their response under external stimuli intrinsically depends on their quantum properties. Here we investigate single-electron tunneling in hard-wall InAs/InP nanowires in the presence of an off-resonant microwave drive. Our heterostructured nanowires include InAs quantum dots (QDs) and exhibit different tunnel-current regimes. In particular, for source-drain biases up to a few mV, Coulomb diamonds spread with increasing contrast as a function of microwave power and present multiple current polarity reversals. This behavior can be modelled in terms of voltage fluctuations induced by the microwave field and presents features that depend on the interplay of the discrete energy levels that contribute to the tunneling process.
condensed matter
In the leading order of the large-$N$ approximation, we study the renormalon ambiguity in the gluon (or, more appropriately, photon) condensate in the 2D supersymmetric $\mathbb{C}P^{N-1}$ model on~$\mathbb{R}\times S^1$ with the $\mathbb{Z}_N$ twisted boundary conditions. In our large~$N$ limit, the combination $\Lambda R$, where $\Lambda$ is the dynamical scale and $R$~is the $S^1$ radius, is kept fixed (we set $\Lambda R\ll1$ so that the perturbative expansion with respect to the coupling constant at the mass scale~$1/R$ is meaningful). We extract the perturbative part from the large-$N$ expression of the gluon condensate and obtain the corresponding Borel transform~$B(u)$. For~$\mathbb{R}\times S^1$, we find that the Borel singularity at~$u=2$, which exists in the system on the uncompactified~$\mathbb{R}^2$ and corresponds to twice the minimal bion action, disappears. Instead, an unfamiliar renormalon singularity \emph{emerges\/} at~$u=3/2$ for the compactified space~$\mathbb{R}\times S^1$. The semi-classical interpretation of this peculiar singularity is not clear because $u=3/2$ is not dividable by the minimal bion action. It appears that our observation for the system on~$\mathbb{R}\times S^1$ prompts reconsideration on the semi-classical bion picture of the infrared renormalon.
high energy physics theory