text (string, length 11-9.77k) | label (string, length 2-104)
Continuous-time quantum walks can be used to solve the spatial search problem, an essential component of many quantum algorithms that run quadratically faster than their classical counterparts, in $\mathcal O(\sqrt n)$ time for $n$ entries. However, the capability of models found in nature is largely unexplored - e.g., in one dimension only nearest-neighbour Hamiltonians have been considered so far, for which the quadratic speedup does not exist. Here, we prove that optimal spatial search, namely with $\mathcal O(\sqrt n)$ run time and large fidelity, is possible in one-dimensional spin chains with long-range interactions that decay as $1/r^\alpha$ with distance $r$. In particular, near-unit fidelity is achieved for $\alpha\approx 1$ and, in the limit $n\to\infty$, we find a continuous transition from a region where optimal spatial search does exist ($\alpha<1.5$) to one where it does not ($\alpha>1.5$). Numerically, we show that spatial search is robust to dephasing noise and that, under realistic conditions, $\alpha \lesssim 1.2$ should be sufficient to demonstrate optimal spatial search experimentally with near-unit fidelity.
quantum physics
Deep generative models have recently demonstrated the ability to synthesize photorealistic images of human faces with novel identities. A key challenge to the wide applicability of such techniques is to provide independent control over semantically meaningful parameters: appearance, head pose, face shape, and facial expressions. In this paper, we propose VariTex - to the best of our knowledge the first method that learns a variational latent feature space of neural face textures, which allows sampling of novel identities. We combine this generative model with a parametric face model and gain explicit control over head pose and facial expressions. To generate images of complete human heads, we propose an additive decoder that generates plausible additional details such as hair. A novel training scheme enforces a pose independent latent space and in consequence, allows learning of a one-to-many mapping between latent codes and pose-conditioned exterior regions. The resulting method can generate geometrically consistent images of novel identities allowing fine-grained control over head pose, face shape, and facial expressions, facilitating a broad range of downstream tasks, like sampling novel identities, re-posing, expression transfer, and more.
computer science
The variational autoencoder (VAE) is a popular and well-investigated generative model widely used in neural learning research. To leverage VAEs in practical tasks dealing with massive, high-dimensional datasets, one must confront the difficulty of building low-variance evidence lower bounds (ELBOs). Markov chain Monte Carlo (MCMC) is one effective approach to tightening the ELBO when approximating the posterior distribution. The Hamiltonian Variational Autoencoder (HVAE) is an effective MCMC-inspired approach for constructing a low-variance ELBO that is also amenable to the reparameterization trick. In this work, we propose a quasi-symplectic Langevin variational autoencoder (Langevin-VAE) that incorporates gradient information into the inference process through Langevin dynamics. We show the effectiveness of the proposed approach on toy and real-world examples.
statistics
Gravitational waves (GWs) from strong first-order phase transitions (SFOPTs) in the early Universe are a prime target for upcoming GW experiments. In this paper, I construct novel peak-integrated sensitivity curves (PISCs) for these experiments, which faithfully represent their projected sensitivities to the GW signal from a cosmological SFOPT by explicitly taking into account the expected shape of the signal. Designed to be a handy tool for phenomenologists and model builders, PISCs allow for a quick and systematic comparison of theoretical predictions with experimental sensitivities, as I illustrate by a large range of examples. PISCs also offer several advantages over the conventional power-law-integrated sensitivity curves (PLISCs); in particular, they directly encode information on the expected signal-to-noise ratio for the GW signal from a SFOPT. I provide semianalytical fit functions for the exact numerical PISCs of LISA, DECIGO, and BBO. In an appendix, I moreover present a detailed review of the strain noise power spectra of a large number of GW experiments. The numerical results for all PISCs, PLISCs, and strain noise power spectra presented in this paper can be downloaded from the Zenodo online repository [https://doi.org/10.5281/zenodo.3689582]. In a companion paper [1909.11356], the concept of PISCs is used to perform an in-depth study of the GW signal from the cosmological phase transition in the real-scalar-singlet extension of the standard model. The PISCs presented in this paper will need to be updated whenever new theoretical results on the expected shape of the signal become available. The PISC approach is therefore suited to be used as a bookkeeping tool to keep track of the theoretical progress in the field.
high energy physics phenomenology
Let $A$ be a non-commutative, non-unital $\mathrm{C}^\ast$-algebra. Given a set of commuting positive elements in the corona algebra $Q(A)$, we study some obstructions to the existence of a commutative lifting of such set to the multiplier algebra $M(A)$. Our focus is on the obstructions caused by the size of the collection we want to lift. It is known that no obstacles show up when lifting a countable family of commuting projections, or of pairwise orthogonal positive elements. However, this is not the case for larger collections. We prove in fact that for every primitive, non-unital, $\sigma$-unital $\mathrm{C}^\ast$-algebra $A$, there exists an uncountable set of pairwise orthogonal positive elements in $Q(A)$ such that no uncountable subset of it can be lifted to a set of commuting elements of $M(A)$. Moreover, the positive elements in $Q(A)$ can be chosen to be projections if $A$ has real rank zero.
mathematics
The discovery of habitable zone (HZ) planets around low-mass stars has highlighted the need for a comprehensive understanding of the radiation environments in which such planets reside. Of particular importance is knowledge of the far-ultraviolet (FUV) radiation, as low-mass stars are typically much more active than solar-type stars and the proximity of their HZs can be one tenth the distance. The vast majority of the flux emitted by low-mass stars at FUV wavelengths occurs in the Lyman-$\alpha$ line at 1216 Angstroms. However, measuring a low-mass star's Lyman-$\alpha$ emission directly is almost always impossible because of the contaminating effects of interstellar hydrogen and geocoronal airglow. We observed Ross 825 (K3) and Ross 1044 (M0), two stars with exceptional radial velocities, with the STIS spectrograph aboard the Hubble Space Telescope (HST). Their radial velocities resulted in significant line shifts, allowing for a more complete view of their Lyman-$\alpha$ line profiles. We provide an updated relation between effective temperature and Lyman-$\alpha$ flux using Gaia DR2 astrometry as well as updated, model-independent relationships between Lyman-$\alpha$ flux and UV flux measurements from the Galaxy Evolution Explorer (GALEX) for low-mass stars. These new relations, in combination with GALEX's considerable spatial coverage, provide substantial predictive power for the Lyman-$\alpha$ environments for thousands of nearby, low-mass stars.
astrophysics
This paper extends the concepts of informative selection, population distribution and sample distribution to a spatial process context. These notions were first defined in a context where the output of the random process of interest consists of independent and identically distributed realisations for each individual of a population. It has been shown that informative selection induces a stochastic dependence among the realisations on the selected units. In the context of spatial processes, the "population" is a continuous space and realisations for two different elements of the population are not independent. We show how informative selection may induce a different dependence among selected units and how the sample distribution differs from the population distribution.
mathematics
Topologically non-trivial spin textures, such as skyrmions and dislocations, display emergent electrodynamics and can be moved by spin currents over macroscopic distances. These unique properties and their nanoscale size make them excellent candidates for the development of next-generation logic gates, race-track memory, and artificial synapses for neuromorphic computing. A major challenge for these applications - and the investigation of nanoscale magnetic structures in general - is the realization of adequate detection schemes that provide the required resolution and sensitivity. Here, the local magnetic properties of topological defects in FeGe are studied, revealing a pronounced non-linear response that distinguishes the individual spin textures from the helimagnetic background. Combining magnetic force microscopy and micromagnetic simulations, the non-linear response is linked to the local magnetic susceptibility, representing an innovative approach for detecting topologically non-trivial spin textures and domain walls. Based on the findings, a read-out scheme is proposed using planar micro-coils compatible with semiconductor fabrication methods, facilitating the transfer to spintronics devices.
condensed matter
We revise the treatment of fermionic dark matter interacting with photons via dimension-5 and -6 effective operators. We show how the application of the effective operators beyond their validity introduces unphysical, gauge-violating effects that are relevant for current experimental searches. Restoring gauge invariance by coupling dark matter to the hypercharge gauge field has implications for the parameter space above and below the electroweak scale. We review the phenomenology of these hypercharge form factors at the LHC as well as for direct and indirect detection experiments. We highlight where the electromagnetic and hypercharge descriptions lead to wildly different conclusions about the viable parameter space and the relative sensitivity of various probes. These include a drastic weakening of vector-boson fusion versus mono-jet searches at the LHC, and the incorrect impression that indirect searches could lead to better constraints than direct detection for larger dark matter masses. We find that the dimension-5 operators are strongly constrained by direct detection bounds, while for dimension-6 operators LHC mono-jet searches are competitive with or outperform the other probes we consider.
high energy physics phenomenology
This paper presents a formulation of snapshot positioning as a mixed-integer least-squares problem. In snapshot positioning, one estimates a position from code-phase and possibly Doppler observations of Global Navigation Satellite System (GNSS) signals without knowing the time of departure (time stamp) of the codes. Solving the problem allows a receiver to determine a fix from short radio-frequency snapshots missing the time-stamp information embedded in the GNSS data stream. This can be used to reduce the time to first fix in some receivers, and it is used in certain wildlife trackers. This paper presents two new formulations of the problem and an algorithm that solves the resulting mixed-integer least-squares problems. We also show that the new formulations can produce fixes even with huge initial errors, much larger than permitted in Van Diggelen's widely-cited coarse-time navigation method.
electrical engineering and systems science
We revisit scalar leptoquark pair production at hadron colliders and significantly improve the level of precision of the cross section calculations. Apart from QCD contributions, we include lepton t-channel exchange diagrams that turn out to be relevant in the light of the recent B-anomalies. We evaluate all contributions at next-to-leading-order accuracy in QCD and resum, in the threshold regime, soft-gluon radiation at next-to-next-to-leading logarithmic accuracy. Our predictions hence constitute the most precise leptoquark cross section calculations available to date, and are necessary for the best exploitation of leptoquark searches at the LHC.
high energy physics phenomenology
We describe the energy distribution of hard gluons travelling through a dense quark-gluon plasma whose temperature increases linearly with time, within a probabilistic perturbative approach. We apply the results to the thermalization problem in heavy-ion collisions, estimating the initial time of the third stage of the bottom-up scenario. We then look at the entropy density and average temperature of the soft thermal bath as the system approaches (local) thermal equilibrium.
high energy physics phenomenology
Active matter is ubiquitous in biology and becomes increasingly more important in materials science. While numerous active systems have been investigated in detail both experimentally and theoretically, general design principles for functional active materials are still lacking. Building on a recently developed linear response optimization (LRO) framework, we here demonstrate that the spectra of nonlinear active mechanical and electric circuits can be designed similarly to those of linear passive networks.
physics
We construct a 1-parameter family of $\mathrm{SL}_2(\mathbf{R})$ representations of the pretzel knot $P(-2,3,7)$. As a consequence, we conclude that Dehn surgeries on this knot are left-orderable for all rational surgery slopes less than 6. Furthermore, we discuss a family of knots and exhibit similar orderability results for a few other examples.
mathematics
We present computer simulations of the spatial and temporal evolution of a 1-MeV proton microbeam transmitted through an insulating macrocapillary with a length of 45 mm and an inner diameter of 800 {\mu}m. The axis of the capillary was tilted to 1{\deg} relative to the axis of the incident beam, which ensured geometrical nontransparency. The simulation is based on the combination of stochastic (Monte Carlo) and deterministic methods. It involves (1) random sampling of the initial conditions, according to distributions generated by the widely used and freely available computer software packages SRIM and WINTRAX, (2) the numerical solution of the governing equations for following the classical trajectories of the projectiles, and (3) the description of the field-driven charge migration on the surface and in the bulk of the insulator material. We found that our simulation reasonably describes all of our previous experimental observations, indicating the functionality and reliability of the applied model. In addition, we found that at different phases of the beam transmission, different atomic processes drive the evolution of the beam distribution. First, in a scattering phase, multiple small-angle atomic scattering dominates the beam transmission, resulting in an outgoing beam spread over a wide angular range and a wide energy window. Later, in a mixed phase, scattering and guiding happen simultaneously, with a continuously increasing contribution from guiding. Finally, in the phase of stabilized, guided transmission, a quadrupole-like focusing effect is observed, i.e., the transmitted beam is concentrated into a small spot, and the transmitted protons keep their initial kinetic energy.
physics
Clinical target volume (CTV) delineation from radiotherapy computed tomography (RTCT) images is used to define the treatment areas containing the gross tumor volume (GTV) and/or sub-clinical malignant disease for radiotherapy (RT). High intra- and inter-user variability makes this a particularly difficult task for esophageal cancer. This motivates automated solutions, which is the aim of our work. Because CTV delineation is highly context-dependent--it must encompass the GTV and regional lymph nodes (LNs) while also avoiding excessive exposure to the organs at risk (OARs)--we formulate it as a deep contextual appearance-based problem using encoded spatial contexts of these anatomical structures. This allows the deep network to better learn from and emulate the margin- and appearance-based delineation performed by human physicians. Additionally, we develop domain-specific data augmentation to inject robustness to our system. Finally, we show that a simple 3D progressive holistically nested network (PHNN), which avoids computationally heavy decoding paths while still aggregating features at different levels of context, can outperform more complicated networks. Cross-validated experiments on a dataset of 135 esophageal cancer patients demonstrate that our encoded spatial context approach can produce concrete performance improvements, with an average Dice score of 83.9% and an average surface distance of 4.2 mm, representing improvements of 3.8% and 2.4 mm, respectively, over the state-of-the-art approach.
electrical engineering and systems science
We derive the Hamiltonian for trilayer moir\'e systems with the Coulomb interaction projected onto the bands near the charge neutrality point. Motivated by the latest experimental results, we focus on the twisted symmetric trilayer graphene (TSTG) with a mirror-symmetry with respect to the middle layer. We provide a full symmetry analysis of the non-interacting Hamiltonian with a perpendicular displacement field coupling the band structure made otherwise of the twisted bilayer graphene (TBG) and the high velocity Dirac fermions, and we identify a hidden non-local symmetry of the problem. In the presence of this displacement field, we construct an approximate single-particle model, akin to the tripod model for TBG, capturing the essence of non-interacting TSTG. We also derive more quantitative perturbation schemes for the low-energy physics of TSTG with displacement field, obtaining the corresponding eigenstates. This allows us to obtain the Coulomb interaction Hamiltonian projected in the active band TSTG wavefunctions and derive the full many-body Hamiltonian of the system. We also provide an efficient parameterization of the interacting Hamiltonian. Finally, we show that the discrete symmetries at the single-particle level promote the $\mathrm{U} \left( 2 \right) \times \mathrm{U} \left( 2 \right)$ spin-valley symmetry to enlarged symmetry groups of the interacting problem under different limits. The interacting part of the Hamiltonian exhibits a large $\mathrm{U} \left( 4 \right) \times \mathrm{U} \left( 4 \right) \times \mathrm{U} \left( 4 \right) \times \mathrm{U} \left( 4 \right)$ symmetry in the chiral limit. Moreover, by identifying a new symmetry which we dub spatial many-body charge conjugation, we show that the physics of TSTG is symmetric around charge neutrality.
condensed matter
In the limit of a large number of colors (N), both Yang-Mills and quantum chromodynamics are expected to have a first-order phase transition separating a confined hadronic phase and a deconfined plasma phase. One aspect of this separation is that at large N, one can unambiguously identify a plasma regime that is strongly coupled. The existence of a first-order transition suggests that the hadronic phase can be superheated and the plasma phase supercooled. The supercooled deconfined plasma present at large N, if it exists, has the remarkable property that it has negative absolute pressure -- i.e. a pressure below that of the vacuum. For energy densities of order unity in a 1/N expansion but beyond the endpoint of the hadronic superheated phase, a description of homogeneous matter composed of ordinary hadrons with masses of order unity in a 1/N expansion can exist, and it acts as though it has a temperature $T_H$ of order unity. However, the connection between the canonical and microcanonical descriptions breaks down and the system cannot fully equilibrate as $N \rightarrow \infty$. Rather, in a hadronic description, energy is pushed to hadrons with masses that are arbitrarily large. The thermodynamic limit of large volumes becomes subtle for such systems: the energy density is no longer intensive. These conclusions follow provided that standard large N scaling rules hold, the system at large N undergoes a generic first-order phase transition between the hadronic and plasma phases and that the mesons and glueballs follow a Hagedorn-type spectrum.
high energy physics phenomenology
In this work, we study the development of a coplanar waveguide (CPW) resonator and its use in an electron spin resonance (ESR) spectrometer. The CPW resonator is designed to operate in S-band. It has a short circuit configuration which leads to miniaturization. It is constructed such that it has a characteristic impedance of 50 ohms. Detailed electromagnetic simulation with a particular emphasis on the excitation of the structure has been performed for this resonator owing to its uniplanar nature. The design parameters and the electromagnetic field distribution are obtained from the simulation. The resonator is fabricated using optical lithography with a rapid prototyping technique. The characteristic response of the resonator is measured by coupling it to a Vector Network Analyzer (VNA). The ESR absorption spectrum of the free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH) is captured by using this resonator in reflection geometry. The microwave magnetic field distribution at the sample position is investigated. The measured g-factor value is found to be consistent with that reported in the literature. The quality factor of this resonator is found to be low, which makes it suitable for use in a pulsed ESR spectrometer.
physics
We analyse prior risk factors for severe, critical or fatal courses of Covid-19 based on a retrospective cohort using claims data of the AOK Bayern. As our main methodological contribution, we avoid prior grouping and pre-selection of candidate risk factors. Instead, fine-grained hierarchical information from medical classification systems for diagnoses, pharmaceuticals and procedures is used, resulting in more than 33,000 covariates. Our approach has better predictive ability than well-specified morbidity groups but does not need prior subject-matter knowledge. The methodology and estimated coefficients are made available to decision makers to prioritize protective measures for vulnerable subpopulations and to researchers who wish to adjust for a large set of confounders in studies of individual risk factors.
statistics
We prove a sub-convex estimate for the sup-norm of $L^2$-normalized holomorphic modular forms of weight $k$ on the upper half plane, with respect to the unit group of a quaternion division algebra over $\mathbf{Q}$. More precisely, we show that when the $L^2$ norm of an eigenfunction $f$ is one, $$\| f \|_\infty \ll k^{1/2 - 12/131 + \varepsilon},$$ for any $\varepsilon>0$ and for all $k$ sufficiently large.
mathematics
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.
computer science
Our interest is in the differences in behavior among black holes with three different horizon topologies: spherical, hyperbolic and toroidal. In this paper we study the behavior of probe D5-branes in these nontrivial black hole spacetimes, and we seek the solution that describes the embedding of the probe D5-brane. This system realizes an "interface" solution, a kind of non-local operator, in the boundary gauge theories. These operators are important for deepening our understanding of the AdS/CFT correspondence.
high energy physics theory
In this paper, we propose a new orthogonal matching pursuit algorithm, called the quasi-orthogonal matching pursuit (QOMP) algorithm, which greatly enhances the performance of the classical orthogonal matching pursuit (OMP) algorithm at some cost in computational complexity. We show that under certain sufficient conditions on the mutual coherence of the sensing matrix, the QOMP algorithm succeeds in recovering the $s$-sparse signal vector $x$ within $s$ iterations, where a total of $2s$ columns are selected, in both noiseless and noisy settings. In addition, we show that for a Gaussian sensing matrix, the norm of the residual at each iteration goes to zero at a linear rate that depends on the size of the matrix, with high probability. Numerical experiments demonstrate the effectiveness of the QOMP algorithm in recovering sparse solutions, outperforming the classic OMP and GOMP algorithms.
mathematics
We present predictions for the double parton scattering (DPS) four-jet production cross sections in $pA$ collisions at the LHC. Relying on the experimental capabilities to correlate centrality with impact parameter $B$ of the proton-nucleus collision, we discuss a strategy to extract the double parton scattering contributions in $pA$ collisions, which gives direct access to double parton distribution in the nucleon. We show that the production cross sections via DPS of four jets, out of which two may be light- or heavy-quark jets, are large enough to allow the method to be used already with data accumulated in 2016 $pA$ run.
high energy physics phenomenology
Large galaxies grow through the accumulation of dwarf galaxies. In principle it is possible to trace this growth history using the properties of a galaxy's stellar halo. Previous investigations of the galaxy M31 (Andromeda) have shown that outside a radius of 25 kpc the population of halo globular clusters is rotating in alignment with the stellar disk, as are more centrally located clusters. The M31 halo also contains coherent stellar substructures, along with a smoothly distributed stellar component. Many of the globular clusters outside 25 kpc are associated with the most prominent substructures, while others are part of the smooth halo. Here we report a new analysis of the kinematics of these globular clusters. We find that the two distinct populations are rotating with perpendicular orientations. The rotation axis for the population associated with the smooth halo is aligned with the rotation axis for the plane of dwarf galaxies that encircles M31. We interpret these separate cluster populations as arising from two major accretion epochs, likely separated by billions of years. Stellar substructures from the first epoch are gone, but those from the more recent second epoch still remain.
astrophysics
The problem of cell association is considered for cellular users present in the field. This has become a challenging problem with the deployment of 5G networks, which will share the sub-6 GHz bands with the legacy 4G networks. Instead of taking a network-controlled approach, which may not be scalable with the number of users and may introduce extra delays into the system, we propose a scalable solution in the physical layer by utilizing data that can be collected by a large number of spectrum sensors deployed in the field. More specifically, we model the cell association problem as a nonlinear boundary detection problem and focus on solving it using randomized shallow networks that determine the boundaries of the regions of users associated with each cell. We exploit the power of data-driven modeling to reduce the computational cost of training in the proposed solution for the cell association problem. This is equivalent to choosing the right basis functions in the shallow architecture such that the detection is done with minimal error. Our experiments demonstrate the superiority of this method compared to its data-independent counterparts as well as its computational advantage over kernel methods.
electrical engineering and systems science
We describe an improved technique for using the backscattered phase from meteor radar echo measurements just prior to the specular point ($t_{0}$) to calculate meteor speeds and their uncertainty. Our method, which builds on earlier work of Cervera et al (1997), scans possible speeds in the Fresnel distance - time domain with a dynamic, sliding window and derives a best-speed estimate from the resultant speed distribution. We test the performance of our method, called pre-$t_{0}$ speeds by sliding-slopes technique (PSSST), on transverse scattered meteor echoes observed by the Middle Atmosphere Alomar Radar System (MAARSY) and the Canadian Meteor Orbit Radar (CMOR), and compare the results to time-of-flight and Fresnel transform speed estimates. Our novel technique is shown to produce good results when compared to both model and speed measurements using other techniques. We show that our speed precision is $\pm$5$\%$ at speeds less than 40 km/s and we find that more than 90$\%$ of all CMOR multi-station echoes have PSSST solutions. For CMOR data, PSSST is robust against the selection of critical phase value and poor phase unwrapping. Pick errors of up to $\pm$6 pulses for meteor speeds less than about 50 km/s produce errors of less than $\pm$5$\%$ of the meteoroid speed. In addition, the width of the PSSST speed Kernel density estimate (KDE) is used as a natural measure of uncertainty that captures both noise and $t_0$ pick uncertainties.
astrophysics
The flow in a Hele-Shaw cell with a time-increasing gap poses a unique shrinking interface problem. When the upper plate of the cell is lifted perpendicularly at a prescribed speed, the exterior less viscous fluid penetrates the interior more viscous fluid, which generates complex, time-dependent interfacial patterns through the Saffman-Taylor instability. The pattern formation process sensitively depends on the lifting speed and is still not fully understood. For some lifting speeds, such as linear or exponential speed, the instability is transient and the interface eventually shrinks as a circle. However, linear stability analysis suggests there exist shape invariant shrinking patterns if the gap $b(t)$ is increased more rapidly: $\displaystyle b(t)=\left(1-\frac{7}{2}\tau \mathcal{C} t\right)^{-{2}/{7}}$, where $\tau$ is the surface tension and $\mathcal{C}$ is a function of the interface perturbation mode $k$. Here, we use a spectrally accurate boundary integral method together with an efficient time adaptive rescaling scheme, which for the first time makes it possible to explore the nonlinear limiting dynamical behavior of a vanishing interface. When the gap is increased at a constant rate, our numerical results quantitatively agree with experimental observations (Nase et al., Phys. Fluids, vol. 23, 2011, pp. 123101). When we use the shape invariant gap $b(t)$, our nonlinear results reveal the existence of $k$-fold dominant, one-dimensional, web-like networks, where the fractal dimension is reduced to almost one at late times. We conclude by constructing a morphology diagram for pattern selection that relates the dominant mode $k$ of the vanishing interface and the control parameter $\mathcal{C}$.
physics
The increasing integration of intermittent renewable generation in power networks calls for novel planning and control methodologies, which hinge on detailed knowledge of the grid. However, reliable information concerning the system topology and parameters may be missing or outdated for temporally varying AC networks. This paper proposes an online learning procedure to estimate the admittance matrix of an AC network capturing topological information and line parameters. We start off by providing a recursive identification algorithm that exploits phasor measurements of voltages and currents. With the goal of accelerating convergence, we subsequently complement our base algorithm with a design-of-experiment procedure, which maximizes the information content of data at each step by computing optimal voltage excitations. Our approach improves on existing techniques and its effectiveness is substantiated by numerical studies on a 6-bus AC network.
electrical engineering and systems science
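The recursive identification step described in the abstract above can be illustrated with a plain recursive least-squares update; this is a generic sketch (the parameter vector, forgetting factor `lam`, and toy data are illustrative assumptions, not the paper's algorithm):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step for y ~ theta.x (illustrative).

    theta: current estimate (list), P: inverse-covariance-like matrix
    (list of lists), lam: forgetting factor for time-varying parameters.
    Pure-Python sketch of the kind of online recursive identification
    the abstract describes, not the paper's implementation.
    """
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [pxi / denom for pxi in Px]          # gain vector
    err = y - sum(theta[i] * x[i] for i in range(n))
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P

# Recover y = 2*x0 - 1*x1 from noiseless streaming data
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for x, y in [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]:
    theta, P = rls_update(theta, P, [float(a) for a in x], float(y))
```

With a large initial `P` the estimate converges to the true coefficients after a handful of noiseless samples; the design-of-experiment idea in the abstract amounts to choosing the excitations `x` to make this convergence as fast as possible.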
The construction of a building inevitably changes the microclimate in its vicinity. Many city authorities request comprehensive wind studies, which can be obtained by Computational Fluid Dynamics (CFD) simulations, before granting a building permit. When performing wind simulations, the quality of the geometry model is essential. Still, no available studies examine the impact of different geometry inputs on the wind flow through an urban environment. This study investigates the influence of the building geometry acquisition method on the simulated wind field in an urban area, focusing on the application of pedestrian wind comfort. A suburban area on the west coast of Norway was chosen as a case study. Four building model types were produced and used in the simulations for comparison. The simulations using a building model produced from data stored in the national general feature catalog (FKB) in Norway showed minor differences from the simulations using more detailed and accurate models based on remote sensing measurements. Prominent variations were seen using a model based on the extrusion of the building footprint. A greater understanding of the geometry acquisition method's influence may enable more efficient pedestrian wind comfort studies that recognize the uncertainty of different geometric model use in urban wind simulations.
physics
Black holes of sufficiently large initial radius are expected to be well described by a semiclassical analysis at least until half of their initial mass has evaporated away. For a small number of spacetime dimensions, this holds as long as the black hole is parametrically larger than the Planck length. In that case, curvatures are small and backreaction onto geometry is expected to be well described by a time-dependent classical metric. We point out that at large $D$, small curvature is insufficient to guarantee a valid semiclassical description of black holes. Instead, the strongest bounds come from demanding that the rate of change of the geometry is small and that black holes scramble information faster than they evaporate. This is a consequence of the enormous power of Hawking radiation in $D$-dimensions due to the large available phase space and the resulting minuscule evaporation times. Asymptotically, only black holes with entropies $S \geq D^{D+3} \log D$ are semiclassical. We comment on implications for realistic quantum gravity models in $D \leq 26$ as well as relations to bounds on theories with a large number of gravitationally interacting light species.
high energy physics theory
Ferroelectricity, the electrostatic counterpart to ferromagnetism, has long been thought to be incompatible with metallicity due to screening of electric dipoles and external electric fields by itinerant charges. Recent measurements, however, demonstrated signatures of ferroelectric switching in the electrical conductance of bilayers and trilayers of WTe$_2$, a semimetallic transition metal dichalcogenide with broken inversion symmetry. An especially promising aspect of this system is that the density of electrons and holes can be continuously tuned by an external gate voltage. This degree of freedom enables investigation of the interplay between ferroelectricity and free carriers, a previously unexplored regime. Here, we employ capacitive sensing in dual-gated mesoscopic devices of bilayer WTe$_2$ to directly measure the spontaneous polarization in the metallic state and quantify the effect of free carriers on the polarization in the conduction and valence bands, separately. We compare our results to a low-energy model for the electronic bands and identify the layer-polarized states that contribute to transport and polarization simultaneously. Bilayer WTe$_2$ is thus shown to be a canonical example of a ferroelectric metal and an ideal platform for exploring polar ordering, ferroelectric transitions, and applications in the presence of free carriers.
condensed matter
It has been recently discovered that the $\text{T}\bar{\text{T}}$ deformation is closely-related to Jackiw-Teitelboim gravity. At classical level, the introduction of this perturbation induces an interaction between the stress-energy tensor and space-time and the deformed EoMs can be mapped, through a field-dependent change of coordinates, onto the corresponding undeformed ones. The effect of this perturbation on the quantum spectrum is non-perturbatively described by an inhomogeneous Burgers equation. In this paper, we point out that there exist infinite families of models where the geometry couples instead to generic combinations of local conserved currents labelled by the Lorentz spin. In spirit, these generalisations are similar to the $\text{J}\bar{\text{T}}$ model as the resulting theories and the corresponding scattering phase factors are not Lorentz invariant. The link with the $\text{J}\bar{\text{T}}$ model is discussed in detail. While the classical setup described here is very general, we shall use the sine-Gordon model and its CFT limit as explanatory quantum examples. Most of the final equations and considerations are, however, of broader validity or easily generalisable to more complicated systems.
high energy physics theory
Energy dissipation by fast crystalline defects takes place mainly through the resonant interaction of their cores with periodic lattice. We show that the resultant effective friction can be reduced to zero by appropriately tuned acoustic sources located on the boundary of the body. To illustrate the general idea, we consider three prototypical models describing the main types of strongly discrete defects: dislocations, cracks and domain walls. The obtained control protocols, ensuring dissipation-free mobility of topological defects, can be also used in the design of meta-material systems aimed at transmitting mechanical information.
condensed matter
Cooling quantum systems is arguably one of the most important thermodynamic tasks connected to modern quantum technologies and an interesting question from a foundational perspective. It is thus of no surprise that many different theoretical cooling schemes have been proposed, differing in the assumed control paradigm and complexity, and operating either in a single cycle or in steady state limits. Working out bounds on quantum cooling has since been a highly context dependent task with multiple answers, with no general result that holds independent of assumptions. In this letter we derive a universal bound for cooling quantum systems in the limit of infinite cycles (or steady state regimes) that is valid for any control paradigm and machine size. The bound only depends on a single parameter of the refrigerator and is theoretically attainable in all control paradigms. For qubit targets we prove that this bound is achievable in a single cycle and by autonomous machines.
quantum physics
Thermal collapse of an isolated skyrmion on a two-dimensional spin lattice has been investigated. The method is based upon solution of the system of stochastic Landau-Lifshitz-Gilbert equations for up to $10^4$ spins. A recently developed pulse-noise algorithm has been used for the stochastic component of the equations. The collapse rate follows the Arrhenius law. Analytical formulas derived within a continuous spin-field model support the numerically obtained values of the energy barrier and the pre-exponential factor, and their dependence on the magnetic field. Our findings agree with experiments, as well as with recent numerical results obtained by other methods.
condensed matter
Using the Very Long Baseline Array and the European Very Long Baseline Interferometry Network we have made a precise measurement of the radio parallax of the black hole X-ray binary MAXI J1820+070, providing a model-independent distance to the source. Our parallax measurement of ($0.348\pm0.033$) mas for MAXI J1820+070 translates to a distance of ($2.96\pm0.33$) kpc. This distance implies that the source reached ($15\pm3)\%$ of the Eddington luminosity at the peak of its outburst. Further, we use this distance to refine previous estimates of the jet inclination angle, jet velocity and the mass of the black hole in MAXI J1820+070 to be ($63\pm3)^{\circ}$, ($0.89\pm0.09)c$ and ($9.2\pm1.3) M_{\odot}$, respectively.
astrophysics
We report an experimental investigation of the role of measurement in quantum metrology when the states of the probes are mixed. In particular, we investigated optimized local measurements and general global projective measurements, involving entangling operations, on noisy Werner states of polarization entangled photons. We demonstrate experimentally that global measurement presents an advantage in parameter estimation with respect to the optimized local strategy. Moreover, the global strategy provides unambiguous information about the parameter of interest even when the amount of noise is not well characterized. This shows that the coherence in quantum operations, such as the Bell-state projection device used in our protocol, can be used to further boost the quantum advantage in metrology and play a fundamental role in the design of future quantum measurement devices.
quantum physics
We apply causal forests to a dataset derived from the National Study of Learning Mindsets, and consider resulting practical and conceptual challenges. In particular, we discuss how causal forests use estimated propensity scores to be more robust to confounding, and how they handle data with clustered errors.
statistics
The online homework system WeBWorK has been successfully used at several hundred colleges and universities. Despite its popularity, the WeBWorK system does not provide detailed metrics of student performance to instructors. In this article, we illustrate how an analysis of the log files of the WeBWorK system can provide information such as the amount of time students spend on WeBWorK assignments and how long they persist on problems. We estimate the time spent on an assignment by combining log file events into sessions of student activity. The validity of this method is confirmed by cross referencing with another time estimate obtained from a learning management system. As an application of these performance metrics, we contrast the behaviour of students with WeBWorK scores less than 50% with the remainder of the class in a first year Calculus course. This reveals that on average, the students who fail their homework start their homework later, have shorter activity sessions, and are less persistent when solving problems. We conclude by discussing the implications of WeBWorK analytics for instructional practices and for the future of learning analytics in undergraduate mathematics education.
mathematics
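The sessionization idea in the abstract above — combining log file events into sessions of student activity — can be sketched as follows; the 900-second inactivity threshold and the function name are illustrative assumptions, not the authors' implementation:

```python
def sessionize(timestamps, gap=900):
    """Group event timestamps (in seconds) into activity sessions.

    A new session starts whenever the gap between consecutive events
    exceeds `gap` seconds (an assumed inactivity threshold).
    Returns the total active time: the sum of each session's span.
    """
    if not timestamps:
        return 0
    ts = sorted(timestamps)
    total = 0
    start = prev = ts[0]
    for t in ts[1:]:
        if t - prev > gap:
            total += prev - start   # close the current session
            start = t
        prev = t
    total += prev - start           # close the final session
    return total

# Events at 0s, 60s, 120s, then a long break, then 5000s, 5060s:
print(sessionize([0, 60, 120, 5000, 5060]))  # -> 180
```

Summing session spans per assignment gives the per-student time-on-task estimate that the abstract cross-validates against the learning management system.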
Decision trees are flexible models that are well suited for many statistical regression problems. In a Bayesian framework for regression trees, Markov Chain Monte Carlo (MCMC) search algorithms are required to generate samples of tree models according to their posterior probabilities. The critical component of such an MCMC algorithm is to construct good Metropolis-Hastings steps for updating the tree topology. However, such algorithms frequently suffer from local mode stickiness and poor mixing. As a result, the algorithms are slow to converge. Hitherto, authors have primarily used discrete-time birth/death mechanisms for Bayesian (sums of) regression tree models to explore the model space. These algorithms are efficient only if the acceptance rate is high, which is not always the case. Here we overcome this issue by developing a new search algorithm which is based on a continuous-time birth-death Markov process. This search algorithm explores the model space by jumping between parameter spaces corresponding to different tree structures. In the proposed algorithm, the moves between models are always accepted, which can dramatically improve the convergence and mixing properties of the MCMC algorithm. We provide theoretical support of the algorithm for Bayesian regression tree models and demonstrate its performance.
statistics
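As a rough illustration of the contrast with discrete-time birth/death proposals, here is a toy continuous-time birth-death jump process in which every jump is taken and only the exponential waiting times depend on the rates; the integer state is a stand-in for tree size, and everything here is an assumption for illustration, not the paper's sampler:

```python
import random

def ctbd_sample(birth_rate, death_rate, t_max, state=1, seed=0):
    """Toy continuous-time birth-death process on a positive integer state.

    Waiting times are exponential in the total jump rate; every jump is
    'accepted', unlike a discrete-time Metropolis-Hastings birth/death
    move, which may be rejected. Returns the trajectory as (time, state).
    """
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        b = birth_rate
        d = death_rate if state > 1 else 0.0  # never shrink below the root
        rate = b + d
        t += rng.expovariate(rate)            # exponential holding time
        if t > t_max:
            break
        state = state + 1 if rng.random() < b / rate else state - 1
        path.append((t, state))
    return path

path = ctbd_sample(1.0, 1.5, t_max=10.0)
```

Because no move is ever rejected, mixing is governed entirely by the jump rates, which is the mechanism the abstract credits for the improved convergence.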
We consider a time-dependent problem in a perturbative approach for a nonrelativistic inviscid spherically symmetric accretion model, where the effect of the gravity of the medium is treated in the Newtonian gravity framework. We consider spherically symmetric nonrelativistic accretion, i.e. Bondi accretion with self-gravity. We introduce a linear perturbation over the existing steady-state solution of the system and work to linear order in the perturbation throughout. The analysis has two notable features: first, the linear perturbation of the mass accretion rate in such an irrotational inviscid accretion model gives rise to emergent gravity; second, we gain significant insight into instabilities in the flow caused by the gravity of the medium, instabilities that are absent without self-gravity.
astrophysics
Fog computing can support IoT services with fast response time and low bandwidth usage by moving computation from the cloud to edge devices. However, existing fog computing frameworks have limited flexibility to support dynamic service composition with a data-oriented approach. Function-as-a-Service (FaaS) is a promising programming model for fog computing to enhance flexibility, but the current event- or topic-based design of function triggering and the separation of data management and function execution result in inefficiency for data-intensive IoT services. To achieve both flexibility and efficiency, we propose a data-centric programming model called Fog Function and also introduce its underlying orchestration mechanism that leverages three types of contexts: data context, system context, and usage context. Moreover, we showcase a concrete use case for smart parking where Fog Function allows service developers to easily model their service logic with reduced learning efforts compared to a static service topology. Our performance evaluation results show that the Fog Function can be scaled to hundreds of fog nodes. Fog Function can improve system efficiency by saving 95% of the internal data traffic over cloud function and it can reduce service latency by 30% over edge function.
computer science
Image-Text Matching is one major task in cross-modal information processing. The main challenge is to learn the unified visual and textual representations. Previous methods that perform well on this task primarily focus not only on the alignment between region features in images and the corresponding words in sentences, but also on the alignment between relations of regions and relational words. However, the lack of joint learning of regional and global features causes the regional features to lose contact with the global context, leading to mismatches with non-object words that carry global meaning in some sentences. In this work, to alleviate this issue, we enhance the relations between regions and the relations between regional and global concepts, obtaining a more accurate visual representation that correlates better with the corresponding text. Thus, a novel multi-level semantic relations enhancement approach named Dual Semantic Relations Attention Network (DSRAN) is proposed, which mainly consists of two modules: the separate semantic relations module and the joint semantic relations module. DSRAN performs graph attention in both modules for region-level relations enhancement and regional-global relations enhancement, respectively. With these two modules, different hierarchies of semantic relations are learned simultaneously, promoting the image-text matching process by providing more information for the final visual representation. Quantitative experiments have been performed on MS-COCO and Flickr30K, and our method outperforms previous approaches by a large margin due to the effectiveness of the dual semantic relations learning scheme. Codes are available at https://github.com/kywen1119/DSRAN.
computer science
Exploring the graph approach, we restate the extended definition of noncontextuality provided by the contextuality-by-default framework. This extended definition avoids the assumption of nondisturbance, which states that whenever two contexts overlap, the marginal distribution obtained for the intersection must be the same. We show how standard tools for characterizing contextuality can also be used in this extended framework for any set of measurements and, in addition, we also provide several conditions that can be tested directly in any contextuality experiment. Our conditions reduce to traditional ones for noncontextuality if the nondisturbance assumption is satisfied.
quantum physics
Identifying causal relationships is a challenging yet a crucial problem in many fields of science like epidemiology, climatology, ecology, genomics, economics and neuroscience, to mention only a few. Recent studies have demonstrated that ordinal partition transition networks (OPTNs) allow to infer the coupling direction between two dynamical systems. In this work, we generalize this concept to the interaction between multiple dynamical systems and propose a new method to detect causality in multivariate observational data. We demonstrate that our approach can reliably identify the direction of interaction and the corresponding delays with numerical simulations using linear stochastic systems as well as nonlinear dynamical systems such as a network of neural mass models. Finally, we apply our method to real-world observational microelectrode array data from rodent brain slices to study the causal effect networks underlying epileptic activity. Our results from simulations as well as real-world data suggest that OPTNs can provide a complementary approach to reliably infer causal effect networks from multivariate observational data.
statistics
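The symbolization underlying an OPTN can be sketched in a few lines: each window of d consecutive values is replaced by the permutation that sorts it (the Bandt-Pompe encoding), and transitions between consecutive patterns form the weighted edges of the network. A minimal sketch, not the authors' code:

```python
from collections import Counter

def ordinal_patterns(x, d=3):
    """Map a time series to its sequence of ordinal patterns of order d.

    Each window of d consecutive values is encoded by the permutation
    of indices that sorts it (Bandt-Pompe symbolization), the basis of
    an ordinal partition transition network (OPTN).
    """
    return [tuple(sorted(range(d), key=lambda i: x[t + i]))
            for t in range(len(x) - d + 1)]

def transition_counts(patterns):
    """Count transitions between consecutive ordinal patterns,
    i.e. the weighted directed edges of the OPTN."""
    return Counter(zip(patterns, patterns[1:]))

pats = ordinal_patterns([4, 7, 9, 10, 6, 11, 3], d=3)
print(pats[0])  # -> (0, 1, 2): the first window 4, 7, 9 is increasing
```

Coupling directions and delays are then inferred by comparing such transition statistics across lagged copies of the component time series, as the abstract describes.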
We study the global, i.e. radially averaged, high Reynolds number (asymptotic) scaling of streamwise turbulence intensity squared defined as ${I^2=\overline{u^2}/U^2}$, where $u$ and $U$ are the fluctuating and mean velocities, respectively (overbar is time averaging). The investigation is based on the mathematical abstraction that the logarithmic region in wall turbulence extends across the entire inner and outer layers. Results are matched to spatially integrated Princeton Superpipe measurements [Hultmark M, Vallikivi M, Bailey SCC and Smits AJ. Logarithmic scaling of turbulence in smooth- and rough-wall pipe flow. J. Fluid Mech. Vol. 728, 376-395 (2013)]. Scaling expressions are derived both for log-law and power-law functions of radius. A transition to asymptotic scaling is found at a friction Reynolds number $Re_{\tau} \sim 11000$.
physics
We present the characterization of a novel balanced homodyne detector operating in the mid-infrared. The challenging task of revealing non-classicality in mid-infrared light, e.g. in quantum cascade laser emission, requires a high-performance detection system. Through the intensity noise power spectral density analysis of the differential signal coming from the incident radiation, we show that our setup is shot-noise limited. We discuss the experimental results with a view to possible applications to quantum technologies, such as free-space quantum communication.
quantum physics
We present joint Suzaku and Chandra observations of MKW4. With a global temperature of 1.6 keV, MKW4 is one of the smallest galaxy groups that have been mapped in X-rays out to the virial radius. We measure its gas properties from its center to the virial radius in the north, east, and northeast directions. Its entropy profile follows a power-law of $\propto r^{1.1}$ between R$_{500}$ and R$_{200}$ in all directions, as expected from the purely gravitational structure formation model. The well-behaved entropy profiles at the outskirts of MKW4 disfavor the presence of gas clumping or thermal non-equilibrium between ions and electrons in this system. We measure an enclosed baryon fraction of 11% at R$_{200}$, remarkably smaller than the cosmic baryon fraction of 15%. We note that the enclosed gas fractions at R$_{200}$ are systematically smaller for groups than for clusters from existing studies in the literature. The low baryon fraction of galaxy groups, such as MKW4, suggests that their shallower gravitational potential well may make them more vulnerable to baryon losses due to AGN feedback or galactic winds. We find that the azimuthal scatter of various gas properties at the outskirts of MKW4 is significantly lower than in other systems, suggesting that MKW4 is a spherically symmetric and highly relaxed system.
astrophysics
Laser-based angle-resolved photoemission spectroscopy (ARPES) and two-photon photoemission spectroscopy (2PPES) are employed to study the valence electronic structure of the Weyl semimetal candidate Td-WTe$_2$ along two high symmetry directions and for binding energies between $\approx$ -1 eV and 5 eV. The experimental data show a good agreement with band structure calculations. Polarization dependent measurements provide furthermore information on initial and intermediate state symmetry properties with respect to the mirror plane of the Td structure of WTe$_2$.
condensed matter
We consider the regression problem of estimating functions on $\mathbb{R}^D$ but supported on a $d$-dimensional manifold $ \mathcal{M} \subset \mathbb{R}^D $ with $ d \ll D $. Drawing ideas from multi-resolution analysis and nonlinear approximation, we construct low-dimensional coordinates on $\mathcal{M}$ at multiple scales, and perform multiscale regression by local polynomial fitting. We propose a data-driven wavelet thresholding scheme that automatically adapts to the unknown regularity of the function, allowing for efficient estimation of functions exhibiting nonuniform regularity at different locations and scales. We analyze the generalization error of our method by proving finite sample bounds in high probability on rich classes of priors. Our estimator attains optimal learning rates (up to logarithmic factors) as if the function was defined on a known Euclidean domain of dimension $d$, instead of an unknown manifold embedded in $\mathbb{R}^D$. The implemented algorithm has quasilinear complexity in the sample size, with constants linear in $D$ and exponential in $d$. Our work therefore establishes a new framework for regression on low-dimensional sets embedded in high dimensions, with fast implementation and strong theoretical guarantees.
statistics
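The local polynomial fitting step can be illustrated in one dimension with a weighted local linear fit solved from the 2x2 weighted normal equations; the Gaussian kernel and bandwidth below are illustrative assumptions. A useful sanity check is that a local linear fit reproduces exactly linear data exactly, whatever the weights:

```python
import math

def local_linear_fit(xs, ys, x0, h=0.5):
    """Weighted local linear fit at x0 with a Gaussian kernel of bandwidth h.

    Solves the 2x2 weighted normal equations for intercept and slope in
    the centered variable (x - x0); returns the fitted value at x0, i.e.
    the intercept. A one-dimensional sketch of local polynomial fitting;
    the kernel and bandwidth are illustrative choices.
    """
    w = [math.exp(-((x - x0) / h) ** 2) for x in xs]
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * x + 1 for x in xs]        # exactly linear data
fit = local_linear_fit(xs, ys, 1.0)
```

In the paper's setting the same fit is performed in the learned low-dimensional coordinates at each scale, with wavelet thresholding selecting the scale adaptively.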
For a category $\mathcal E$ with finite limits and well-behaved countable coproducts, we construct a model structure, called the effective model structure, on the category of simplicial objects in $\mathcal E$, generalising the Kan--Quillen model structure on simplicial sets. We then prove that the effective model structure is left and right proper and satisfies descent in the sense of Rezk. As a consequence, we obtain that the associated $\infty$-category has finite limits, colimits satisfying descent, and is locally Cartesian closed when $\mathcal E$ is, but is not a higher topos in general. We also characterise the $\infty$-category presented by the effective model structure, showing that it is the full sub-category of presheaves on $\mathcal E$ spanned by Kan complexes in $\mathcal E$, a result that suggests a close analogy with the theory of exact completions.
mathematics
We present an energy conserving lattice Boltzmann model based on a crystallographic lattice for simulation of weakly compressible flows. The theoretical requirements and the methodology to construct such a model are discussed. We demonstrate that the model recovers the isentropic sound speed in addition to the effects of viscous heating and heat flux dynamics. Several test cases for acoustics, thermal and thermoacoustic flows are simulated to show the accuracy of the proposed model.
physics
We provide a derivation of several classes of boundary conditions for fluids of Korteweg-type using a simple and transparent thermodynamic approach that automatically guarantees that the derived boundary conditions are compatible with the second law of thermodynamics. The starting assumption of our approach is to describe the boundary of the domain as the membrane separating two different continua, one inside the domain, and the other outside the domain. With this viewpoint one may employ the framework of continuum thermodynamics involving singular surfaces. This approach allows us to identify, for various classes of surface Helmholtz free energies, the corresponding surface entropy production mechanisms. By establishing the constitutive relations that guarantee that the surface entropy production is non-negative, we identify a new class of boundary conditions, which on one hand generalizes in a nontrivial manner the Navier's slip boundary conditions, and on the other hand describes dynamic and static contact angle conditions. We explore the general model in detail for a particular case of Korteweg fluid where the Helmholtz free energy in the bulk is that of a van der Waals fluid. We perform a series of numerical experiments to document the basic qualitative features of the novel boundary conditions and their practical applicability to model phenomena such as the contact angle hysteresis.
physics
We propose protocols to prepare highly excited energy eigenstates of a trapped ion in a harmonic trap which do not require laser pulses to induce transitions among internal levels. Instead the protocols rely on smoothly deforming the trapping potential between single and double well configurations. The speed of the changes is set to minimize non-adiabatic transitions by keeping the adiabaticity parameter constant. High fidelities are found for times more than two orders of magnitude smaller than with linear ramps of the control parameter. Deformation protocols are also devised to prepare superpositions to optimize interferometric sensitivity, combining the ground state and a highly excited state.
quantum physics
The recent experimental data on the weak charges of Cesium and the proton are analyzed in the framework of the models based on the $\mbox{SU}(3)_C\times \mbox{SU}(3)_L \times \mbox{U}(1)_X$ (3-3-1) gauge group, including the 3-3-1 model with CKS mechanism (3-3-1CKS) and the general 3-3-1 models with arbitrary $\beta$ (3-3-1$\beta$) with three Higgs triplets. We show that at the TeV scale the mixing among neutral gauge bosons has a significant effect. Within the present values of the weak charges of Cesium and the proton, we obtain a lower mass bound of 1.27 TeV for the extra heavy neutral gauge boson. The results derived from the weak-charge data, the perturbative limit of the Yukawa coupling of the top quark, and the relevant Landau poles favor the models with $\beta =\pm 1/\sqrt{3}$ and $\beta = 0$ while ruling out the ones with $\beta= \pm \sqrt{3}$. In addition, there are some hints that in the 3-3-1 models the third quark family should be treated differently from the first two.
high energy physics phenomenology
Statistical analysis of massive datasets very often implies expensive linear algebra operations with large dense matrices. Typical tasks are the estimation of unknown parameters of the underlying statistical model and the prediction of missing values. We developed the H-MLE procedure, which solves these tasks. The unknown parameters can be estimated by maximizing the joint Gaussian log-likelihood function, which depends on a covariance matrix. To decrease the high computational cost, we approximate the covariance matrix in the hierarchical (H-) matrix format. The H-matrix technique allows us to work with inhomogeneous covariance matrices and almost arbitrary locations. In particular, H-matrices can be applied in cases when the matrices under consideration are dense and unstructured. For validation purposes, we implemented three machine learning methods: k-nearest neighbors (kNN), random forest, and a deep neural network. The best results (for the given datasets) were obtained by the kNN method with three or seven neighbors, depending on the dataset. The results computed with the H-MLE method were compared with the results obtained by the kNN method. The developed H-matrix code and all datasets are freely available online.
statistics
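The kNN baseline mentioned in the abstract above can be sketched in a few lines; a minimal illustration (Euclidean distance, unweighted mean of the neighbors' targets), not the paper's H-MLE code:

```python
import math

def knn_predict(train_xy, query, k=3):
    """Predict at `query` as the mean target of the k nearest training points.

    `train_xy` is a list of (point, value) pairs; points are tuples and
    distance is Euclidean. Minimal sketch of the kNN regression baseline.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    nearest = sorted(train_xy, key=lambda pv: dist(pv[0], query))[:k]
    return sum(v for _, v in nearest) / k

train = [((0.0,), 1.0), ((1.0,), 3.0), ((2.0,), 5.0), ((10.0,), 100.0)]
print(knn_predict(train, (1.5,), k=3))  # -> 3.0 (mean of 1, 3 and 5)
```

The choice of three or seven neighbors reported in the abstract corresponds to the `k` parameter here.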
Computed tomography (CT) and chest X-ray (CXR) have been the two dominant imaging modalities deployed for improved management of Coronavirus disease 2019 (COVID-19). Due to faster imaging, less radiation exposure, and lower cost, CXR is preferred over CT. However, the interpretation of CXR images, compared to CT, is more challenging due to low image resolution and COVID-19 image features being similar to regular pneumonia. Computer-aided diagnosis via deep learning has been investigated to help mitigate these problems and support clinicians during the decision-making process. The requirement for a large amount of labeled data is one of the major problems of deep learning methods when deployed in the medical domain. To provide a solution to this, in this work we propose a semi-supervised learning (SSL) approach using minimal data for training. We integrate local-phase CXR image features into a multi-feature convolutional neural network architecture, where the SSL method is trained with a teacher/student paradigm. Quantitative evaluation is performed on 8,851 normal (healthy), 6,045 pneumonia, and 3,795 COVID-19 CXR scans. By using only 7.06% labeled and 16.48% unlabeled data for training, and 5.53% for validation, our method achieves 93.61% mean accuracy on large-scale (70.93%) test data. We provide comparison results against fully supervised and SSL methods. Code: https://github.com/endiqq/Multi-Feature-Semi-Supervised-Learning-for-COVID-19-CXR-Images
electrical engineering and systems science
In this work, we study the line and loop operators in three-dimensional ${\mathcal N}=2$ fishnet theories in detail. We construct the straight line and circular loop operators which are at least classically half-BPS. We develop a new regularization scheme at frame $-1$ which is suitable for the study of the fermionic BPS loops in general super-Chern-Simons-matter theories. We initialize the perturbative computation for the vacuum expectation values of the circular BPS loop operators based on this scheme. We construct the cusped line operators as well, and compute the vacuum expectation values of these cusped line operators up to two-loop order. We find that the universal cusp anomalous dimension vanishes, if we put aside the fact that the generalized potential has a double pole in the $1/\epsilon$ expansion.
high energy physics theory
Parametric amplifiers are known to squeeze the vacuum state of the electromagnetic field, which results in predictable statistics of the photocounts at their output. However, recent theoretical work arXiv:1112.4159 predicts a very different statistical distribution for an amplifier based on a Josephson junction. We test the hypothesis experimentally and recover the expected squeezed vacuum statistics. We explain this discrepancy by showing theoretically how the photocount statistics is dictated by the detection process, from single mode (our experiment) to multimode, fully resolved in frequency (as in arXiv:1112.4159).
quantum physics
In this paper, we study the spin excitation properties of the frustrated triangular-lattice antiferromagnet Yb(BaBO$_3$)$_3$ with nuclear magnetic resonance. From the spectral analysis, neither magnetic ordering nor spin freezing is observed with temperature down to $T=0.26$ K, far below its Curie-Weiss temperature $|\theta_w|\sim2.3$ K. From the nuclear relaxation measurement, precise temperature-independent spin-lattice relaxation rates are observed at low temperatures under a weak magnetic field, indicating the gapless spin excitations. Further increasing the field intensity, we observe a spin excitation gap with the gap size proportional to the field intensity. These phenomena suggest a very unusual strongly correlated quantum disordered phase, and the implications for the quantum spin liquid state are further discussed.
condensed matter
We use the Beyond Ultra-deep Frontier Fields and Legacy Observations (BUFFALO) strong lensing image catalog of the merging galaxy cluster Abell 370 to obtain a mass model using the free-form lens inversion algorithm Grale. The improved strong lensing data quality results in a lens plane rms of only 0.45 arcsec, about a factor of two lower than that of our existing HFF v4 reconstruction. Since the number of images is nearly the same in both, we attribute this improvement to the higher data quality. In our reconstructed mass model, we found indications of three distinct mass features in Abell 370: (i) a $\sim\!35$ kpc offset between the northern BCG and the nearest mass peak, (ii) a $\sim\!100$ kpc mass concentration of roughly critical density $\sim\!250$ kpc east of the main cluster, and (iii) a probable filament-like structure passing N-S through the cluster. While (i) is present in some form in most publicly available reconstructions spanning the range of modeling techniques (parametric, hybrid, and free-form), (ii) and (iii) are recovered by only about half of the reconstructions. We tested our hypothesis on the presence of the filament-like structure by creating a synthetic cluster, Irtysh IIIc, mimicking the situation of a cluster with external mass. We also computed the source plane magnification distributions. Using them, we estimated the probabilities of magnifications in the source plane and scrutinized their redshift dependence. Finally, we explored the lensing effects of Abell 370 on the luminosity functions of sources at $z_s=9.0$, finding them consistent with published results.
astrophysics
The solar magnetic activity cycle has an amplitude that varies within a wide but limited range of values. This implies that there are nonlinear mechanisms that prevent runaway solutions. The purpose of this paper is to propose observable nonlinear mechanisms in the framework of the Babcock-Leighton-type dynamo. Sunspot emergence shows systematic properties: strong cycles tend to have higher mean latitudes and lower tilt-angle coefficients. We use the surface flux transport model to investigate the effect of these systematic properties on the expected final total dipolar moment, i.e. the cancellation plus generation of dipole moment by a whole solar cycle. We demonstrate that the systematic change in latitude has a similar nonlinear feedback on the solar cycle (latitudinal quenching) as tilt does (tilt quenching). Both forms of quenching lead to the expected final total dipolar moment being enhanced for weak cycles and saturated to a nearly constant value for normal and strong cycles. This explains the observed long-term solar cycle variability, e.g., the Gnevyshev-Ohl rule, which, in turn, justifies the nonlinear mechanisms inherent in the Babcock-Leighton-type dynamo.
astrophysics
The unit-Lindley distribution was recently introduced in the literature as a viable alternative to the Beta and the Kumaraswamy distributions with support in (0, 1). This distribution enjoys several advantageous properties over the named distributions. In this article, we address the issue of parameter estimation from a Bayesian perspective and study the relative performance of different estimators through extensive simulation studies. Significant emphasis is given to the estimation of stress-strength reliability, employing both classical and Bayesian approaches. A non-trivial and useful application in the public health domain is presented, proposing a simple discrepancy metric.
statistics
We consider the four-point function of the lowest scalar in the stress-energy tensor multiplet in $\mathcal{N}=8$ ABJ(M) theory \cite{Aharony:2008ug, Aharony:2008gk}. At large central charge $c_T\sim N^{3/2}$, this correlator is given by the corresponding holographic correlation function in 11d supergravity on $AdS_4\times S^7$. We use Mellin space techniques to compute the leading $1/c_T$ correction to anomalous dimensions and OPE coefficients of operators that appear in this holographic correlator. For half and quarter-BPS operators, we find exact agreement with previously computed localization results. For the other BPS and non-BPS operators, our results match the $\mathcal{N}=8$ numerical bootstrap for ABJ(M) at large $c_T$, which provides a precise check of unprotected observables in AdS/CFT.
high energy physics theory
Distributionally robust optimization (DRO) is increasingly seen as a viable method to train machine learning models for improved model generalization. Its min-max formulations, however, are more difficult to solve. We therefore provide a new stochastic gradient descent algorithm to efficiently solve this DRO formulation. Our approach applies gradient descent to the outer minimization formulation and estimates the gradient of the inner maximization based on a sample average approximation. The latter uses a subset of the data in each iteration, progressively increasing the subset size to ensure convergence. Theoretical results include establishing the optimal manner for growing the support size to balance a fundamental tradeoff between stochastic error and computational effort. Empirical results demonstrate the significant benefits of our approach over previous work, and also illustrate how learning with DRO can improve generalization.
statistics
We study the problem of approximating the Ising model partition function with complex parameters on bounded degree graphs. We establish a deterministic polynomial-time approximation scheme for the partition function when the interactions and external fields are absolutely bounded close to zero. Furthermore, we prove that for this class of Ising models the partition function does not vanish. Our algorithm is based on an approach due to Barvinok for approximating evaluations of a polynomial based on the location of the complex zeros and a technique due to Patel and Regts for efficiently computing the leading coefficients of graph polynomials on bounded degree graphs. Finally, we show how our algorithm can be extended to approximate certain output probability amplitudes of quantum circuits.
quantum physics
The cosmic thermal history, quantified by the evolution of the mean thermal energy density in the universe, is driven by the growth of structures as baryons get shock heated in collapsing dark matter halos. This process can be probed by redshift-dependent amplitudes of the thermal Sunyaev-Zeldovich (SZ) effect background. To do so, we cross-correlate eight sky intensity maps in the $\it{Planck}$ and Infrared Astronomical Satellite missions with two million spectroscopic redshift references in the Sloan Digital Sky Surveys. This delivers snapshot spectra for the far-infrared to microwave background light as a function of redshift up to $z\sim3$. We decompose them into the SZ and thermal dust components. Our SZ measurements directly constrain $\langle bP_{\rm e} \rangle$, the halo bias-weighted mean electron pressure, up to $z\sim 1$. This is the highest redshift achieved to date, with uncorrelated redshift bins thanks to the spectroscopic references. We detect a threefold increase in the density-weighted mean electron temperature $\bar{T}_{\rm{e}}$ from $7\times 10^5~{\rm K}$ at $z=1$ to $2\times 10^6~{\rm K}$ today. Over $z=1$-$0$, we witness the build-up of nearly $70\%$ of the present-day mean thermal energy density $\rho_{\rm{th}}$, with the corresponding density parameter $\Omega_{\rm th}$ reaching $1.5 \times10^{-8}$. We find the mass bias parameter of $\it{Planck}$'s universal pressure profile of $B=1.27$ (or $1-b=1/B=0.79$), consistent with the magnitude of non-thermal pressure in gas motion and turbulence from mass assembly. We estimate the redshift-integrated mean Compton parameter $y\sim1.2\times10^{-6}$, which will be tested by future spectral distortion experiments; more than half of it originates from the large-scale structure at $z<1$, which we detect directly.
astrophysics
Goodness-of-fit (GoF) testing is ubiquitous in statistics, with direct ties to model selection, confidence interval construction, conditional independence testing, and multiple testing, just to name a few applications. While testing the GoF of a simple (point) null hypothesis provides an analyst great flexibility in the choice of test statistic while still ensuring validity, most GoF tests for composite null hypotheses are far more constrained, as the test statistic must have a tractable distribution over the entire null model space. A notable exception is co-sufficient sampling (CSS): resampling the data conditional on a sufficient statistic for the null model guarantees valid GoF testing using any test statistic the analyst chooses. But CSS testing requires the null model to have a compact (in an information-theoretic sense) sufficient statistic, which only holds for a very limited class of models; even for a null model as simple as logistic regression, CSS testing is powerless. In this paper, we leverage the concept of approximate sufficiency to generalize CSS testing to essentially any parametric model with an asymptotically-efficient estimator; we call our extension "approximate CSS" (aCSS) testing. We quantify the finite-sample Type I error inflation of aCSS testing and show that it is vanishing under standard maximum likelihood asymptotics, for any choice of test statistic. We apply our proposed procedure both theoretically and in simulation to a number of models of interest to demonstrate its finite-sample Type I error and power.
statistics
In this paper, we study a problem of detecting the source of diffused information by querying individuals, given a sample snapshot of the information diffusion graph, where two queries are asked: {\em (i)} whether the respondent is the source or not, and {\em (ii)} if not, which neighbor spreads the information to the respondent. We consider the case when respondents may not always be truthful and some cost is taken for each query. Our goal is to quantify the necessary and sufficient budgets to achieve the detection probability $1-\delta$ for any given $0<\delta<1$. To this end, we study two types of algorithms: adaptive and non-adaptive ones, each of which corresponds to whether we adaptively select the next respondents based on the answers of the previous respondents or not. We first provide the information theoretic lower bounds for the necessary budgets in both algorithm types. In terms of the sufficient budgets, we propose two practical estimation algorithms, one of non-adaptive and one of adaptive type, and for each algorithm, we quantitatively analyze the budget which ensures $1-\delta$ detection accuracy. This theoretical analysis not only quantifies the budgets needed by practical estimation algorithms achieving a given target detection accuracy in finding the diffusion source, but also enables us to quantitatively characterize the amount of extra budget required by the non-adaptive type of estimation, referred to as the {\em adaptivity gap}. We validate our theoretical findings over synthetic and real-world social network topologies.
computer science
This paper presents a new method for shadow removal using unpaired data, enabling us to avoid tedious annotations and obtain more diverse training samples. However, directly employing adversarial learning and cycle-consistency constraints is insufficient to learn the underlying relationship between the shadow and shadow-free domains, since the mapping between shadow and shadow-free images is not simply one-to-one. To address the problem, we formulate Mask-ShadowGAN, a new deep framework that automatically learns to produce a shadow mask from the input shadow image and then takes the mask to guide the shadow generation via re-formulated cycle-consistency constraints. Particularly, the framework simultaneously learns to produce shadow masks and learns to remove shadows, to maximize the overall performance. Also, we prepared an unpaired dataset for shadow removal and demonstrated the effectiveness of Mask-ShadowGAN in various experiments, even though it was trained on unpaired data.
computer science
We calculate the partition function and correlation functions in A-twisted 2d $\mathcal{N}=(2,2)$ theories and topologically twisted 3d $\mathcal{N}=2$ theories containing an adjoint chiral multiplet with particular choices of $R$-charges and magnetic fluxes for flavor symmetries. According to the Gauge-Bethe correspondence, they correspond to Heisenberg XXX and XXZ spin chain models. We identify the partition function as the inverse of the norm of the Bethe eigenstates. Correlation functions are identified as the coefficients of the expectation value of Baxter $Q$-operators. In addition, we consider correlation functions of the 2d $\mathcal{N}=(2,2)^*$ theory and their relation to equivariant quantum cohomology and equivariant integration of the cotangent bundle of Grassmann manifolds. Also, we study the ring relations of supersymmetric Wilson loops in the 3d $\mathcal{N}=2^*$ theory and the Bethe subalgebra of the XXZ spin chain model.
high energy physics theory
Device-to-device communication (D2D) is a key enabler for connecting devices together to form the Internet of Things (IoT). A growing issue with IoT networks is the increasing number of IoT devices congesting the spectral resources of the cellular bands. Operating D2D in the unlicensed bands alleviates this issue by offloading network traffic from the licensed bands, while also reducing the associated licensing costs. To this end, we present a new low-cost radio access technology (RAT) protocol, called Sidelink Communications on Unlicensed BAnds (SCUBA), which can be implemented on cellular devices such that it coexists with the legacy cellular protocol by operating as a secondary RAT in a time division duplex manner using the existing radio hardware. SCUBA is compatible with different types of cellular devices, including the low-complexity half-duplex frequency division duplex machine type communication (MTC) user equipment. SCUBA provides a flexible sidelink (SL) latency and battery life tradeoff using a discontinuous reception procedure, which ensures that it is applicable across a wide range of use cases. We demonstrate the effectiveness of our protocol with analyses and simulation results of the medium access control layer of SCUBA using different types of MTC traffic for both SL and the underlying cellular communication.
electrical engineering and systems science
The discovery of magnetic skyrmion bubbles in centrosymmetric magnets has been receiving increasing interest from the research community, due to the fascinating physics of topological spin textures and its possible applications to spintronics. However, key challenges remain, such as how to manipulate the nucleation of skyrmion bubbles to exclude the trivial bubbles or metastable skyrmion bubbles that usually coexist with skyrmion bubbles in centrosymmetric magnets. Here, we report having successfully performed this task by applying spatially geometric confinement to a centrosymmetric frustrated Fe3Sn2 magnet. We demonstrate that the spatially geometric confinement can indeed stabilize the skyrmion bubbles, by effectively suppressing the formation of trivial bubbles and metastable skyrmion bubbles. We also show that the critical magnetic field for the nucleation of the skyrmion bubbles in the confined Fe3Sn2 nanostripes is drastically less, by an order of magnitude, than what is required in a thin plate without geometrical confinement. By analyzing how the width and thickness of the nanostripes affect the spin textures of skyrmion bubbles, we infer that the topological transition of skyrmion bubbles is closely related to the dipole-dipole interaction, which we find is consistent with theoretical simulations. The results presented here represent an important step forward in manipulating the topological spin textures of skyrmion bubbles, bringing us closer to the fabrication of skyrmion-based racetrack memory devices.
condensed matter
In a Dirac material, we investigated the confining properties of massive and massless particles subjected to a potential well generated by a purely electrical potential, that is, an electric quantum dot. To achieve this in the most exhaustive way, we have worked on the aforementioned problem for charged particles with and without mass, restricted to moving on a plane and whose dynamics are governed by the Dirac equation. The bound states are studied first and then the resonances, the latter by means of the Wigner time delay of the scattering states as well as through the complex eigenvalues of the outgoing states, in order to obtain a complete picture of the confinement. One of the main results obtained and described in detail is that electric quantum dots for massless charges seem to act as sinks (or sources in the opposite direction) of resonances, while for massive particles the resonances and bound states are conserved, with positions that vary with the depth of the well.
quantum physics
The entanglement of two different quantum orders is of interest in modern condensed matter physics. One example is dynamical multiferroicity, where fluctuations of electric dipoles lead to magnetization. We investigate this effect at finite temperature and demonstrate an elevated magnetic response of a ferroelectric near the ferroelectric quantum critical point (FE QCP). We calculate the magnetic susceptibility of a bulk sample on the paraelectric side of the FE QCP at finite temperature and find enhanced magnetic susceptibility near the FE QCP. We propose the quantum paraelectric strontium titanate (STO) as a candidate material in which to search for dynamical multiferroicity. We estimate the magnitude of the magnetic susceptibility for this material and find that it is experimentally detectable.
condensed matter
Volt-VAR control (VVC) is a critical application in active distribution network management systems to reduce network losses and improve the voltage profile. To remove the dependency on inaccurate and incomplete network models and to enhance resiliency against communication or controller failure, we propose a consensus multi-agent deep reinforcement learning algorithm to solve the VVC problem. The VVC problem is formulated as a networked multi-agent Markov decision process, which is solved using the maximum entropy reinforcement learning framework and a novel communication-efficient consensus strategy. The proposed algorithm allows individual agents to learn a group control policy using local rewards. Numerical studies on IEEE distribution test feeders show that our proposed algorithm matches the performance of a single-agent reinforcement learning benchmark. In addition, the proposed algorithm is shown to be communication-efficient and resilient.
electrical engineering and systems science
The hydrogen-terminated silicon(100)-2x1 surface (H-Si(100)-2x1) provides a promising platform for the development of atom-scale devices, with recent work showing their creation through precise desorption of surface hydrogen atoms. While samples with relatively large areas of the hydrogen-terminated 2x1 surface are routinely created using an in-situ methodology, surface defects are inevitably formed as well, reducing the area available for patterning. Here, we present a catalog of several commonly found defects of the H-Si(100)-2x1 surface. By using a combination of scanning tunneling microscopy (STM) and non-contact atomic force microscopy (nc-AFM), we are able to extract useful information regarding the atomic and electronic structure of these defects. This allowed for the confirmation of literature assignments of several commonly found defects, as well as a proposed classification of previously unreported and unassigned defects. By better understanding the structure and origin of these defects, we make the first steps toward enabling the creation of superior surfaces, ultimately leading to more consistent and reliable fabrication of atom-scale devices.
condensed matter
We present an improved Standard-Model (SM) prediction for the dilepton decay of the neutral pion. The loop amplitude is determined by the pion transition form factor for $\pi^0\to\gamma^*\gamma^*$, for which we employ a dispersive representation that incorporates both space-like and time-like data as well as short-distance constraints. The resulting SM branching fraction, $ \text{BR}(\pi^0\to e^+e^-)=6.25(3)\times 10^{-8}$ , sharpens constraints on physics beyond the SM, including pseudoscalar and axial-vector mediators.
high energy physics phenomenology
Short-term load forecasting is essential for the reliable, economical and efficient operation of the power system. Most of the existing forecasting approaches utilize fixed statistical models trained on large historical data. However, due to the recent integration of large distributed generation, the nature of load demand has become dynamic. Because of this dynamic nature, the performance of these models may deteriorate over time. To accommodate the dynamic nature of the load demands, we propose a sliding-window regression-based dynamic model to predict the load demands of a multi-area power system. The proposed algorithm is tested on five zones of the New York ISO. Results from our proposed algorithm are compared with four existing techniques to validate its superior performance.
electrical engineering and systems science
We present a path-integral hadronization for doubly heavy baryons. The two heavy quarks in the baryon are approximated as a scalar or axial-vector diquark described by a heavy diquark effective theory. The gluon dynamics are represented by an NJL-model interaction for the heavy diquarks and light quarks, which leads to an effective action of the baryon fields after the quark and diquark fields are integrated out. This effective action for the doubly heavy baryon includes the electromagnetic and electroweak interactions, as well as the interaction with light mesons. We also verify the Ward-Takahashi identity at the baryon level, obtain the Isgur-Wise function for weak transitions, and calculate the strong coupling constant of the doubly heavy baryon and pion. Numerical studies are also performed.
high energy physics phenomenology
We give detailed results of $\mathcal{O}(\alpha)$ QED corrections (both real emission and virtual corrections) to $B\to K\ell^+\ell^-$ modes. Requiring the real emission to be gauge invariant, the structure of the contact term(s) is fixed. The calculation is done with a fictitious photon mass as the IR regulator and the results are shown to be independent of it, establishing the cancellation of the soft divergences. Results are presented for a choice of photon energy, $k_{max}$, and photon angle $\theta_{cut}$. The QED effects are negative, thereby reducing the rate compared to that without QED effects. Electron channels are shown to receive large corrections ($\mathcal{O}(20\%)$). The impact on the lepton flavour universality ratio $R^{\mu e}_K$ is also discussed.
high energy physics phenomenology
Recent progress in high-dispersion spectroscopy has revealed the presence of vaporized heavy metals and ions in the atmospheres of hot Jupiters whose dayside temperature is higher than 2000 K, categorized as ultra hot Jupiters (UHJs). Using the archival data of high resolution transmission spectroscopy obtained with the Subaru telescope, we searched for neutral metals in HD149026b, a hot Jupiter cooler than UHJs. By removing stellar and telluric absorption and using a cross-correlation technique, we report a tentative detection of neutral titanium at 4.4 sigma and a marginal signal of neutral iron at 2.8 sigma in the atmosphere. This is the first detection of neutral titanium in an exoplanetary atmosphere. In this temperature range, titanium tends to form titanium oxide (TiO). The fact that we did not detect any signal from TiO suggests that the C/O ratio in the atmosphere is higher than the solar value. The detection of metals in the atmospheres of hot Jupiters cooler than UHJs will be useful for understanding the atmospheric structure and formation history of hot Jupiters.
astrophysics
Estimating the causal effects of an intervention from high-dimensional observational data is difficult due to the presence of confounding. The task is often complicated by the fact that we may have systematic missingness in our data at test time. Our approach uses the information bottleneck to perform a low-dimensional compression of covariates by explicitly considering the relevance of information. Based on the sufficiently reduced covariate, we transfer the relevant information to cases where data is missing at test time, allowing us to reliably and accurately estimate the effects of an intervention, even where data is incomplete. Our results on causal inference benchmarks and a real application for treating sepsis show that our method achieves state-of-the-art performance, without sacrificing interpretability.
statistics
When engaging in argumentative discourse, skilled human debaters tailor claims to the beliefs of the audience, to construct effective arguments. Recently, the field of computational argumentation witnessed extensive effort to address the automatic generation of arguments. However, existing approaches do not perform any audience-specific adaptation. In this work, we aim to bridge this gap by studying the task of belief-based claim generation: Given a controversial topic and a set of beliefs, generate an argumentative claim tailored to the beliefs. To tackle this task, we model the people's prior beliefs through their stances on controversial topics and extend state-of-the-art text generation models to generate claims conditioned on the beliefs. Our automatic evaluation confirms the ability of our approach to adapt claims to a set of given beliefs. In a manual study, we additionally evaluate the generated claims in terms of informativeness and their likelihood to be uttered by someone with a respective belief. Our results reveal the limitations of modeling users' beliefs based on their stances, but demonstrate the potential of encoding beliefs into argumentative texts, laying the ground for future exploration of audience reach.
computer science
Traditional origami structures can be continuously deformed back to a flat sheet of paper, while traditional kirigami requires glue or seams in order to maintain its rigidity. In the former, non-trivial geometry can be created through overfolding paper while, in the latter, the paper topology is modified. Here we propose a hybrid approach that relies upon overlapped flaps that create in-plane compression resulting in the formation of "virtual" elastic shells. Not only are these structures self-supporting, but they have colossal load-to-weight ratios of order 10000.
condensed matter
The classical Bondi model is adopted to study accretion onto the finite luminous region around the central massive black hole (MBH) in an elliptical galaxy. Unlike Bondi (1952), we define the boundary conditions at a certain finite radius ($r_f$) instead of at infinity and examine the variation of the solutions for a simple case. We then consider the special case of an MBH at the center of a Hernquist galaxy and include the gravity and luminosity of its own galaxy. Our results in the first part show that the kinetic energy at the final radius is negligible even for radii not so far away from the center. Moreover, the mass accretion rate will be approximately equal to its Bondi value if the final radius ($r_f$) becomes about 2-3 orders of magnitude larger than the semi-Bondi radius, i.e. $GM/c_{sf}^2$ (where $M$ and $c_{sf}$ are the mass of the central object and the sound speed at $r_f$). In the second part, adding the two extra forces of gravity and radiation to the momentum equation shows that the maximum possible accretion rate increases with a greater characteristic linear density of the galaxy and lower radiation.
astrophysics
Acute lower respiratory infections (ALRI) caused by respiratory viruses are common and persistent infectious diseases worldwide and in China, and they have pronounced seasonal patterns. Meteorological factors have important roles in the seasonality of some major viruses. Our aim was to identify the dominant meteorological factors and to model their effects on common respiratory viruses in different regions of China. We analysed monthly virus data on patients from 81 sentinel hospitals in 22 provinces in mainland China from 2009 to 2013. The geographical detector method was used to quantify the explanatory power of each meteorological factor, individually and interacting in pairs. 28,369 hospitalised patients with ALRI were tested, and 10,387 were positive for at least one virus, including RSV, influenza virus, PIV, ADV, hBoV, hCoV and hMPV. RSV and influenza virus had annual peaks in the north and biannual peaks in the south. PIV and hBoV had higher positive rates in the spring-summer months. hMPV had an annual peak in winter-spring, especially in the north. ADV and hCoV exhibited no clear annual seasonality. Temperature, atmospheric pressure, vapour pressure, and rainfall had the most explanatory power for most respiratory viruses in each region. Relative humidity was only dominant in the north, but had no significant explanatory power for most viruses in the south. Hours of sunlight had significant explanatory power for RSV and influenza virus in the north, and for most viruses in the south. Wind speed was the only factor with significant explanatory power for human coronavirus in the south. For all viruses, interactions between any two of the paired factors resulted in enhanced explanatory power, either bivariately or non-linearly.
statistics
Axion-like particles (ALPs), predicted in theories beyond the Standard Model, can have observational effects on the transparency of the Universe to $\gamma$ rays in the presence of magnetic fields. In this work, we search for effects compatible with the existence of ALPs with 80 months of data from the ${\it Fermi}$ Large Area Telescope, by comparing the distributions of observed highest energy photons from sources beyond redshifts of z $\geq$ 0.1 with theoretical predictions in the presence of ALPs. We find no evidence for an increased $\gamma$-ray transparency due to ALPs and therefore we set limits on the ALPs parameters assuming a value of the intergalactic magnetic field strength of 1 nG. Photon-ALP couplings above $10^{-11}$ GeV$^{-1}$ are excluded for ALP masses $m_{a}$ $\lesssim3.0$ neV. These limits exclude a region of the parameter space not covered by other $\gamma$-ray telescopes and are compatible with constraints imposed by other experiments.
astrophysics
Magnesium selenide (MgSe) is a wide-bandgap semiconductor with applications in optoelectronics and energy conversion technologies. Understanding the thermal conductivity (k) of MgSe is critical for the optimal design of thermal transport in these applications. In this work, we report the temperature- and length-dependent lattice thermal conductivity of MgSe in different crystallographic phases, namely zincblende, rocksalt, wurtzite and nickel arsenide (NiAs), using first-principles computations. The computations reveal significant differences in the thermal conductivity (k) of MgSe for the different phases. The observed trend in thermal conductivities is: kNiAs < krocksalt < kwurtzite < kzincblende. Our first-principles calculations show a low room-temperature k of 4.5 Wm-1K-1 for the NiAs phase and a high k of 20.4 Wm-1K-1 for the wurtzite phase. These differences are explained in terms of a phonon band gap in the vibrational spectra of the zincblende and wurtzite phases, which suppresses the scattering of acoustic phonons, leading to high phonon lifetimes.
condensed matter
We derive cosmological soft theorems for solids coupled to gravity. To this end, we first derive all cosmological adiabatic modes for solids, which display the interesting novelty of non-vanishing anisotropic stresses on large scales. Then, from the corresponding symmetries of the action of perturbations we compute the leading order related soft theorems using the operator product expansion. For the scalar bispectrum, we re-derive the result that Maldacena's consistency relation is recovered only upon angular averaging over the long mode direction. In addition, we find theorems for soft tensor and vector perturbations. In passing, we also clarify the derivation of these soft theorems in gauges where no residual diffeomorphisms exist.
high energy physics theory
Extracting effective deep features to represent content and style information is the key to universal style transfer. Most existing algorithms use VGG19 as the feature extractor, which incurs a high computational cost and impedes real-time style transfer on high-resolution images. In this work, we propose a lightweight alternative architecture, ArtNet, which is based on GoogLeNet and subsequently pruned by a novel channel pruning method, Zero-channel Pruning, specially designed for style transfer approaches. In addition, we propose a theoretically sound sandwich swap transform (S2) module to transfer deep features, which can create a pleasing holistic appearance and good local textures with improved content preservation. By using ArtNet and S2, our method is 2.3 to 107.4 times faster than state-of-the-art approaches. Comprehensive experiments demonstrate that ArtNet can simultaneously achieve universal, real-time, and high-quality style transfer on high-resolution images (68.03 FPS on 512$\times$512 images).
computer science
Abstraction is a powerful idea widely used in science to model, reason about, and explain the behavior of systems in a more tractable search space by omitting irrelevant details. While notions of abstraction have matured for deterministic systems, the case of abstracting probabilistic models is not yet fully understood. In this paper, we provide a semantical framework for analyzing such abstractions from first principles. We develop the framework in a general way, allowing for expressive languages, including logic-based ones that admit relational and hierarchical constructs with stochastic primitives. We motivate a definition of consistency between a high-level model and its low-level counterpart, but also treat the case when the high-level model is missing critical information present in the low-level model. We prove properties of abstractions, both at the level of the parameters and of the structure of the models. We conclude with some observations about how abstractions can be derived automatically.
computer science
In this paper, we present a method for factor analysis of discrete data. This is accomplished by fitting a dependent Poisson model with a factor structure. To be able to analyze ordinal data, we also consider a truncated Poisson distribution. We seek the model with the lowest AIC by employing a forward selection procedure. The probability of finding the correct model is investigated in a simulation study. Moreover, we heuristically derive the corresponding asymptotic probabilities. An empirical study is also included.
statistics
Nearly 30 years ago, two-photon interference was observed, marking the beginning of a new quantum era. Indeed, two-photon interference has no classical analogue, giving it a distinct advantage for a range of applications. The peculiarities of quantum physics may now be used to our advantage to outperform classical computations, securely communicate information, simulate highly complex physical systems, and increase the sensitivity of precision measurements. This separation between classical and quantum physics has motivated physicists to study two-particle interference for both fermionic and bosonic quantum objects. So far, two-particle interference has been observed with massive particles such as electrons and atoms, as well as with plasmons, demonstrating the extension of this effect to larger and more complex quantum systems. A wide array of novel applications of this quantum effect is to be expected in the future. This review thus covers the progress and applications of two-photon (two-particle) interference over the last three decades.
quantum physics
In recent years, neural architecture search (NAS) has received intensive scientific and industrial interest due to its capability of finding a neural architecture with high accuracy for various artificial intelligence tasks such as image classification or object detection. In particular, gradient-based NAS approaches have become among the most popular thanks to their computational efficiency during the search. However, these methods often experience a mode collapse, where the quality of the found architectures is poor because the algorithm resorts to choosing a single operation type for the entire network, or stagnates at a local minimum for various datasets or search spaces. To address these defects, we present a differentiable variational inference-based NAS method for searching sparse convolutional neural networks. Our approach finds the optimal neural architecture by dropping out candidate operations in an over-parameterised supergraph using variational dropout with an automatic relevance determination prior, which makes the algorithm gradually remove unnecessary operations and connections without risking mode collapse. The evaluation is conducted by searching two types of convolutional cells that shape the neural network for classifying different image datasets. Our method finds diverse network cells, while achieving state-of-the-art accuracy with up to almost 2 times fewer non-zero parameters.
computer science
Recent deep-learning-based approaches to small defect segmentation are trained in specific settings and tend to be limited by a fixed context. Throughout training, the network inevitably learns the representation of the background of the training data before figuring out the defect. Such approaches underperform at inference time once the context changes, which can only be remedied by retraining in every new setting. This eventually leads to limitations in practical robotic applications, where contexts keep varying. To cope with this, instead of training a network context by context and hoping it generalizes, why not stop misleading it with any limited context and start training it with pure simulation? In this paper, we propose the network SSDS, which learns to distinguish small defects between two images regardless of context, so that the network can be trained once for all settings. A small defect detection layer utilizing the pose sensitivity of phase correlation between images is introduced, followed by an outlier masking layer. The network is trained on randomly generated simulated data with simple shapes and generalizes to the real world. Finally, SSDS is validated on real-world collected data, demonstrating that even when trained on cheap simulation, SSDS can still find small defects in the real world, showing its effectiveness and potential for practical applications.
computer science
The photo-triggered unbinding of the intrinsically disordered S-peptide from the RNase S complex is studied with the help of transient IR spectroscopy, covering a wide range of time scales from 100 ps to 10 ms. To that end, an azobenzene moiety has been linked to the S-peptide in a way that its helicity is disrupted by light, thereby initiating its complete unbinding. The full sequence of events is observed, starting from unfolding of the helical structure of the S-peptide on a 20 ns timescale while still being in the binding pocket of the S-protein, S-peptide unbinding after 300 microseconds, and the structural response of the S-protein after 3 ms. With regard to the S-peptide dynamics, the binding mechanism can be classified as an induced fit, while the structural response of the S-protein is better described as conformational selection.
physics
We discuss the potential of the eXTP X-ray telescope, in particular its Spectroscopic Focusing Array (SFA), Large Area Detector (LAD) and Wide Field Monitor (WFM), for the detection of a signal from keV-scale decaying dark matter. We show that the sensitivity of the eXTP is sufficient to improve existing constraints on the mixing angle of the neutrino Minimal extension of the Standard Model ($\nu$MSM) by a factor of 5-10 within the dark matter mass range 2-50 keV, assuming a 1% level of systematic uncertainty. We assert that the eXTP will be able to probe a previously inaccessible range of $\nu$MSM parameters and serve as a precursor for the Athena mission in decaying dark matter searches.
astrophysics