Columns: text (string, lengths 11 to 9.77k) and label (string, lengths 2 to 104).
Visualization of the in vivo spatial distribution of superparamagnetic iron oxide nanoparticles (SPIONs) is crucial to biomedicine. Magnetic particle imaging (MPI) is one of the most promising approaches for direct measurement of the SPION distribution. In this paper, we systematically investigate a single-harmonic-based narrowband MPI approach, in which only the 3rd harmonic at 15 kHz of the SPION signal, induced by an excitation magnetic field at 5 kHz, is measured via a narrowband detection system while a field-free point is scanned across the field of view. Experiments on spot and line phantoms are performed to evaluate the imaging performance by assessing the full width at half maximum and the modulation transfer function at excitation magnetic fields from 4 to 10 mT. Experimental results demonstrate that reconstructed images have a spatial resolution of 1.6 and 1.5 mm for gradient fields of 2.2 T/m and 4.4 T/m in the x- and z-directions, respectively, at an excitation magnetic field of 4 mT. In terms of line gap, two lines separated by 0.5 mm are resolved. When the excitation magnetic field is increased to 10 mT, the spatial resolution degrades to 2.4 and 2.0 mm in the x- and z-directions, respectively. Moreover, the custom-built MPI scanner achieves a limit of detection of 53 $\mu$g(Fe)/mL (500 ng of Fe) using perimag SPIONs. In addition, the excellent performance of the system is further demonstrated by imaging experiments on an "emg" logo phantom. We believe that the proposed narrowband MPI approach is promising for SPION imaging.
physics
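As a rough, hedged illustration of the single-harmonic detection idea in the abstract above: a Langevin-type magnetization driven at 5 kHz generates odd harmonics, and the 3rd harmonic at 15 kHz can be read off the spectrum of the induced coil signal. All parameters and the response model below are hypothetical stand-ins, not the authors' scanner model.

```python
import numpy as np

def langevin(x):
    # Langevin function L(x) = coth(x) - 1/x with a safe small-x limit
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(np.abs(x) < 1e-8, x / 3.0, 1.0 / np.tanh(x) - 1.0 / x)

f0, fs = 5e3, 1e6                        # drive frequency, sampling rate (Hz)
t = np.arange(0, 4e-3, 1.0 / fs)
H = 5e-3 * np.sin(2 * np.pi * f0 * t)    # sinusoidal excitation field (T)
M = langevin(600.0 * H)                  # normalized SPION magnetization (toy)
s = np.gradient(M, t)                    # receive-coil signal ~ dM/dt

S = np.abs(np.fft.rfft(s * np.hanning(s.size)))
f = np.fft.rfftfreq(s.size, 1.0 / fs)
k3 = np.argmin(np.abs(f - 3 * f0))       # narrowband pick-off at 15 kHz
print(f"3rd harmonic at {f[k3]:.0f} Hz, relative amplitude {S[k3]:.3g}")
```

Repeating this pick-off while the field-free point scans the field of view yields one image value per position, which is the essence of the narrowband approach.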
We present BubbleProfiler, a C++ software package for finding field profiles in bubble walls and calculating the bounce action during phase transitions involving multiple scalar fields. Our code uses a recently proposed perturbative method for potentials with multiple fields and a shooting method for single-field cases. BubbleProfiler is constructed with modularity, flexibility and practicality in mind. These principles extend from the input of an arbitrary potential with multiple scalar fields in various forms, through the code structure, to the testing suite. After reviewing the physics context, we describe how the methods are implemented in BubbleProfiler, provide an overview of the code structure and detail usage scenarios. We present a number of examples that serve as test cases for BubbleProfiler and as comparisons to existing public codes with similar functionality. We also show a physics application of BubbleProfiler in the scalar singlet extension of the Standard Model of particle physics by calculating the action as a function of model parameters during the electroweak phase transition. BubbleProfiler completes an important link in the toolchain for studying the thermal phase transition driving baryogenesis and the properties of gravitational waves in models with multiple scalar fields. The code can be obtained from: https://github.com/bubbleprofiler/bubbleprofiler
high energy physics phenomenology
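For the single-field case, the shooting method mentioned above reduces to the classic overshoot/undershoot bisection for the bounce equation $\phi'' + \frac{\alpha}{\rho}\phi' = V'(\phi)$ with $\phi'(0)=0$ and $\phi(\infty)$ at the false vacuum. A minimal sketch with an illustrative quartic potential and $\alpha=2$ (thermal bounce), not BubbleProfiler's C++ API:

```python
import numpy as np
from scipy.integrate import solve_ivp

delta = 0.1
dV = lambda p: p**3 - p - delta          # V(p) = p^4/4 - p^2/2 - delta*p
phi_false, phi_top, phi_true = np.sort(np.roots([1.0, 0.0, -1.0, -delta]).real)

def overshoots(phi0, rho_max=80.0):
    rhs = lambda r, y: [y[1], dV(y[0]) - 2.0 * y[1] / r]
    turn = lambda r, y: y[1]               # undershoot: velocity reverses
    turn.terminal, turn.direction = True, 1.0
    cross = lambda r, y: y[0] - phi_false  # overshoot: passes false vacuum
    cross.terminal, cross.direction = True, -1.0
    r0 = 1e-4                              # small-rho series start values
    sol = solve_ivp(rhs, (r0, rho_max), [phi0, dV(phi0) * r0 / 3.0],
                    events=[turn, cross], rtol=1e-10, atol=1e-12)
    return sol.t_events[1].size > 0

lo, hi = phi_top, phi_true               # undershooting / overshooting ends
for _ in range(60):                      # bisect on the release value phi(0)
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if overshoots(mid) else (mid, hi)
print(f"bubble-center field value phi(0) ~ {0.5 * (lo + hi):.10f}")
```

The bounce action then follows by integrating $\rho^{\alpha}\left[\tfrac12\phi'^2 + V(\phi) - V(\phi_{\rm false})\right]$ over the converged profile, up to the solid-angle prefactor.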
We initiate the classification of unitary superconformal defects in unitary superconformal field theories (SCFTs) of diverse spacetime dimensions $3\leq d \leq 6$. Our method explores general constraints from the defect superconformal symmetry, unitarity, as well as consistency conditions of local bulk-defect couplings. Such features are common to all superconformal defects, regardless of any Lagrangian description. In particular, modified Ward identities of conserved currents in the presence of the defect induce a distinguished set of conformal primary operators on the defect worldvolume, which includes the universal displacement operator associated with broken translations transverse to the defect. Consistency with the preserved superconformal symmetry and unitarity requires that such operators arrange into unitary multiplets of the defect superconformal algebra, which in turn leads to nontrivial constraints on what kinds of defects are admissible in a given SCFT. We carry out the analysis explicitly for one-dimensional defects, namely superconformal lines, and leave the study of higher-dimensional defects to forthcoming work. We also comment on the implications of our results for the deformations of superconformal lines and one-form symmetries in SCFTs.
high energy physics theory
Solid-state spins such as the nitrogen-vacancy (NV) center are promising platforms for large-scale quantum networks. Despite the optical interface of the NV-center system, however, the significant attenuation of its zero-phonon-line photons in optical fiber prevents the network from being extended over long distances. A telecom-wavelength photon interface is therefore essential to reduce photon loss when transporting quantum information. Here we propose an efficient scheme for coupling telecom photons to NV-center ensembles mediated by a rare-earth-doped crystal. Specifically, we propose protocols for high-fidelity quantum state transfer and entanglement generation with parameters within reach of current technologies. Such an interface would bring new insights into future implementations of long-range quantum networks with NV centers in diamond acting as quantum nodes.
quantum physics
It is generally accepted that the effective magnetic field acting on a magnetic moment is given by the gradient of the energy with respect to the magnetization. However, in ab initio spin dynamics within the adiabatic approximation, the effective field is also known to be exactly the negative of the constraining field, which acts as a Lagrange multiplier to stabilize an out-of-equilibrium, non-collinear magnetic configuration. We show that for Hamiltonians without mean-field parameters both of these fields are exactly equivalent, while there can be a finite difference for mean-field Hamiltonians. For density-functional theory (DFT) calculations the constraining field obtained from the auxiliary Kohn-Sham Hamiltonian is not exactly equivalent to the DFT energy gradient. This inequality is highly relevant for both ab initio spin dynamics and the ab initio calculation of exchange constants and effective magnetic Hamiltonians. We argue that the effective magnetic field and exchange constants have the highest accuracy in DFT when calculated from the energy gradient and not from the constraining field.
condensed matter
This review aims to draw attention to two issues of concern when we set out to make machine learning work in the chemical and materials domain, i.e., statistical loss function metrics for the validation and benchmarking of data-derived models, and the uncertainty quantification of predictions made by them. They are often overlooked or underappreciated topics as chemists typically only have limited training in statistics. Aside from helping to assess the quality, reliability, and applicability of a given model, these metrics are also key to comparing the performance of different models and thus for developing guidelines and best practices for the successful application of machine learning in chemistry.
physics
We complete the analytic evaluation of the master integrals for the two-loop non-planar box diagrams contributing to the top-pair production in the quark-initiated channel, at next-to-next-to-leading order in QCD. The integrals are determined from their differential equations, which are cast into a canonical form using the Magnus exponential. The analytic expressions of the Laurent series coefficients of the integrals are expressed as combinations of generalized polylogarithms, which we validate with several numerical checks. We discuss the analytic continuation of the planar and the non-planar master integrals, which contribute to $q {\bar q} \to t {\bar t}$ in QCD, as well as to the companion QED scattering processes $ e e \to \mu \mu$ and $e \mu \to e \mu$.
high energy physics phenomenology
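Schematically (generic notation, not the paper's explicit matrices), the Magnus-exponential step above rotates an $\epsilon$-linear system of master integrals into canonical form: starting from \begin{equation*} \partial_x \vec{f} = \left[\mathbb{A}_0(x) + \epsilon\,\mathbb{A}_1(x)\right]\vec{f}, \end{equation*} the change of basis $\vec{f}=B(x)\,\vec{g}$, with $B$ the Magnus exponential solving $\partial_x B=\mathbb{A}_0 B$, yields \begin{equation*} \partial_x \vec{g} = \epsilon\,\hat{\mathbb{A}}_1(x)\,\vec{g}, \qquad \vec{g}(x;\epsilon)=\mathbb{P}\exp\Big(\epsilon\int_{x_0}^{x}\hat{\mathbb{A}}_1(t)\,dt\Big)\,\vec{g}(x_0;\epsilon), \end{equation*} whose Laurent coefficients in $\epsilon$ are iterated integrals that evaluate to generalized polylogarithms.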
The He I 10830 \AA\ triplet is a very informative indicator of chromospheric activity, as helium is the second most abundant element in the solar atmosphere. Taking advantage of the high resolution of the 1.6 m Goode Solar Telescope (GST) at Big Bear Solar Observatory (BBSO), previous observations have shown clear evidence of enhanced absorption, instead of the typically observed emission, for two M-class flares. In this study, we analyze the evolution of the He I 10830 \AA\ emission in numerical models and compare it with observations. The models are RADYN simulation results obtained from the F-CHROMA database. We consider in detail the models with injected electron spectra parameters close to the observational estimates for the 2013-August-17 flare event ($\delta=8$, $E_c = \{15,20\}$ keV, $F=\{1\times 10^{11}, 3\times{}10^{11}\}$ erg cm$^{-2}$), as well as other available models. The modeling results agree well with observations, in terms of both the maximum intensity decrease (-17.1%, compared to the observed value of -13.7%) and the trend of temporal variation (an initial absorption phase followed by emission). All models demonstrate increased number densities and a decreased ratio of the upper- and lower-level populations of the He I 10830 \AA\ transition in the initial phase, which enhances the opacity and forms an absorption feature. The models suggest that the temperatures and free electron densities at heights of 1.3-1.5 Mm should exceed thresholds of $\sim 10^4$ K and $6\times 10^{11}$ cm$^{-3}$, respectively, for the line to start being in emission.
astrophysics
Deep-water circulation and mixing processes in deep lakes are largely unknown, although they are responsible for the transport of matter, nutrients and pollutants. This gap cannot be reliably filled by numerical hydrodynamic modelling studies alone, because the detailed observations needed to validate them are typically not available. To overcome some of these deficiencies, a dedicated yearlong mooring comprising 100 high-resolution temperature sensors and a single current meter was deployed in the deeper half of the water column at the 344 m deepest point of the subalpine Lake Garda (Italy). The observations show peaks and calms of turbulent exchange, besides ubiquitous internal wave activity. In late winter, northerly winds activate episodic deep convective overturning, the dense water subsequently being advected along the lake floor. Besides deep convection, such winds also set up seiches and inertial waves that are associated with turbulence dissipation rates about 100 times larger than those of the semidiurnal internal wave breaking observed in summer. In the lower 60 m above the lake floor, however, the average turbulence dissipation rate is approximately constant year-round, being about 10 times larger than open-ocean values, except during deep convection episodes.
physics
In this paper, using the complexity-action (CA) duality, we study the complexity growth of dyonic RN-type black holes with quartic field-strength corrections ($F^4$ corrections) to the matter action in general $D\geq4$ dimensions. We find that the behavior of the action growth, as in the case of normal RN black holes, differs between electric and magnetic black holes, which is unexpected since it violates the electromagnetic duality. To restore this duality, we add the Maxwell boundary term (at 3-derivative order) to the action and discuss the outcomes of this addition. We have also used another method, which introduces a UV-finite cutoff at the AdS boundary, to evaluate the complexity growth rate of dyonic black holes with and without $F^4$ corrections. In this method, without adding a surface term, the late-time growth rate of the complexity exhibits the expected behavior.
high energy physics theory
In the present study, the zeroth- and first-order radiative corrections to the Casimir energy for massive and massless scalar fields, confined with mixed boundary conditions (Dirichlet-Neumann) between two parallel plates in $\phi^4$ theory, were computed. Two points are essential in performing the calculations of this work. First, to renormalize the bare parameters of the problem, a systematic method was used that allows all influences from the boundary conditions to be imported into all elements of the renormalization program; this idea causes the counterterms appearing in the renormalization program to be position-dependent. The use of the box subtraction scheme as a regularization technique is the other noteworthy point of the calculation. In this scheme, by subtracting the vacuum energies of two similar configurations from each other, the regularization of divergent expressions and their removal were significantly facilitated. All the answers obtained for the Casimir energy with the mixed boundary condition were consistent with well-known physical grounds. We also compared the Casimir energies for the massive scalar field confined with four types of boundary conditions (Dirichlet, Neumann, a mix of them, and periodic) in 3+1 dimensions, and the signs and magnitudes of their values were discussed.
high energy physics theory
Reversible hydrogen uptake and the metal/dielectric transition make the Mg/MgH$_2$ system a prime candidate for solid-state hydrogen storage and dynamic plasmonics. However, high dehydrogenation temperatures and slow dehydrogenation kinetics hamper broad applicability. One promising strategy to improve dehydrogenation is the formation of metastable $\gamma$-MgH$_2$. A nanoparticle (NP) design in which $\gamma$-MgH$_2$ forms intrinsically during hydrogenation is presented, and a formation mechanism based on transmission electron microscopy results is proposed. Volume expansion during hydrogenation causes compressive stress within the confined, anisotropic NPs, leading to plastic deformation of $\beta$-MgH$_2$ via (301) $\beta$ twinning. It is proposed that these twins nucleate $\gamma$-MgH$_2$ nanolamellas, which are stabilized by residual compressive stress. Understanding this mechanism is a crucial step toward cycle-stable, Mg-based dynamic plasmonic and hydrogen-storage materials with improved dehydrogenation. It is envisioned that a more general design of confined NPs could utilize the inherent volume expansion to reform $\gamma$-MgH$_2$ during each rehydrogenation.
condensed matter
In view of the great contribution of neutrino-electron scattering to the deep understanding of electroweak interactions, we focus in this paper on the elastic scattering of a muon neutrino by an electron $(e^{-}\nu_{\mu}\rightarrow e^{-}\nu_{\mu})$ in the presence of a circularly polarized electromagnetic field. We perform our theoretical calculation within the framework of Fermi theory using the exact wave functions of charged particles in an electromagnetic field. The expression of the differential cross section (DCS) for this process is obtained analytically in the absence and presence of the laser field. The effect of the field strength and frequency on the exchange of photons as well as on the DCS is presented and analyzed. Massive-neutrino effects are also included and discussed. This study, added to previous ones, will significantly enrich our knowledge of fundamental physics.
high energy physics phenomenology
We perform an analysis of the Cosmic Web as a complex network built on a $\Lambda$CDM cosmological simulation. For each node, which in this case is a dark matter halo formed in the simulation, we compute 10 network metrics that characterize the role and position of the node in the network. The relation of these metrics to the topological affiliation of the halo, i.e. to the type of large-scale structure to which it belongs, is then investigated. In particular, the correlation coefficients between the network metrics and the topology classes are computed. We have applied different machine learning methods to test the predictive power of the obtained network metrics and to check whether one could use network analysis as a tool for establishing the topology of the large-scale structure of the Universe. The results of such predictions, combined in the confusion matrix, show that it is not possible to give a good prediction of the topology of the Cosmic Web (the average score is $\approx 70\%$) based only on the coordinates and velocities of the nodes (halos), yet the network metrics can give a hint about the topological landscape of the matter distribution.
astrophysics
To navigate safely in urban environments, an autonomous vehicle (ego vehicle) must understand and anticipate its surroundings, in particular the behavior and intents of other road users (neighbors). Most of the time, multiple decision choices are acceptable for all road users (e.g., turn right or left, or different ways of avoiding an obstacle), leading to a highly uncertain and multi-modal decision space. We focus here on predicting multiple feasible future trajectories for both the ego vehicle and its neighbors through a probabilistic framework. We rely on a conditional imitation learning algorithm, conditioned by a navigation command for the ego vehicle (e.g., "turn right"). Our model processes the ego vehicle's front-facing camera images and a bird's-eye-view grid, computed from Lidar point clouds, with detections of past and present objects, in order to generate multiple trajectories for both the ego vehicle and its neighbors. Our approach is computationally efficient and relies only on on-board sensors. We evaluate our method offline on the publicly available dataset nuScenes, achieving state-of-the-art performance, investigate the impact of our architecture choices in online simulated experiments, and show preliminary insights for real-vehicle control.
computer science
Developing machine-learning-enabled smart manufacturing is promising for the assembly of composite structures. To improve the production quality and efficiency of the assembly process, accurate predictive analysis of the dimensional deviations and residual stress of the composite structures is required. The novel composite structures assembly involves two challenges: (i) the highly nonlinear and anisotropic properties of composite materials; and (ii) inevitable uncertainty in the assembly process. To overcome these problems, we propose a neural network Gaussian process model considering input uncertainty (NNGPIU) for composite structures assembly. The deep architecture of our model allows us to better approximate a complex process, and the consideration of input uncertainty enables robust modeling with complete incorporation of the process uncertainty. Based on a simulation and a case study, NNGPIU can outperform other benchmark methods when the response function is nonsmooth and nonlinear. Although we use composite structure assembly as an example, the proposed methodology is applicable to other engineering systems with intrinsic uncertainties.
statistics
Enhanced room-temperature electromechanical coupling in the lead-free ferroelectric system $(1-x)$BaZr$_{0.2}$Ti$_{0.8}$O$_{3}$ - $x$Ba$_{0.7}$Ca$_{0.3}$TiO$_{3}$ (abbreviated as BZCT) at $x=0.5$ is attributed to the existence of a morphotropic phase region (MPR) containing an intermediate orthorhombic ($O$) phase between the terminal rhombohedral ($R$) BZT and tetragonal ($T$) BCT phases. However, there is ambiguity regarding the morphotropic phase transition in BZCT at room temperature: while some experiments suggest a single $O$ phase within the MPR, others indicate the coexistence of three polar phases ($T+R+O$). Therefore, to understand the thermodynamic stability of the polar phases and its relation to electromechanical switching during the morphotropic phase transition in BZCT, we develop a Landau potential based on the theory of polar anisotropy. Since the intrinsic electrostrictive anisotropy changes as a function of electromechanical processing, we establish a correlation between the parameters of our potential and the coefficients of electrostriction. We also conduct phase-field simulations based on this potential to demonstrate changes in the domain configuration from single-phase $O$ to three-phase $T+R+O$ at the equimolar composition as the electrostrictive anisotropy increases. Diffusionless phase diagrams and the corresponding piezoelectric coefficients obtained from our model compare well with the experimental findings. An increase in electrostrictive anisotropy increases the degeneracy of the free energy at ambient temperature and pressure, leading to decreasing polar anisotropy, although there is an accompanying increase in the electromechanical anisotropy, manifested by an increase in the difference between the effective longitudinal and transverse piezo-coefficients, $d_{33}$ and $d_{31}$.
condensed matter
Using a general parameterization of the two-body scattering amplitude, we systematically analyze the corresponding data on the $X(3872)$, more explicitly, the CDF data on inclusive $p\bar{p}$ scattering to $J/\psi \pi^+\pi^-$, and the Belle and BaBar data on $B$ decays to $K\, J/\psi \pi^+\pi^-$ and $K D\bar{D}^{*0}$ around the $D^0\bar{D}^{*0}$ threshold. We achieve a good reproduction of the data and find that the $X(3872)$ can be interpreted as a bound and/or virtual state, or even a higher-order (double or triple) virtual-state pole. The latter point had not been realized previously in the literature. As a result, the compositeness of the $X(3872)$ can vary largely from nearly 0 to 1. More higher-precision data are needed to discriminate its pole structure and nature.
high energy physics phenomenology
Non-supersymmetric string models are plagued with tadpoles for dynamical fields, which signal uncanceled forces sourced by the vacuum. We argue that in certain cases, uncanceled dynamical tadpoles can lead to inconsistencies with quantum gravity, via violation of swampland constraints. We describe an explicit realization in a supersymmetric toroidal ${\textbf{Z}}_2\times{\textbf{Z}}_2$ orientifold with D7-branes, where the dynamical tadpole generated by displacing the D7-branes off their minimum leads to a violation of the axion Weak Gravity Conjecture. In these examples, the cancellation of dynamical tadpoles provides consistency conditions for the configuration that are of a dynamical nature (as opposed to topological conditions, such as RR tadpole cancellation in compact spaces). We show that this approach provides a re-derivation of the Z-minimization criterion for AdS vacua giving the gravitational dual of a-maximization in 4d $\mathcal{N}=1$ toric quiver SCFTs.
high energy physics theory
We clarify the undecided case $c_2 = 3$ of a theorem of Ein, Hartshorne and Vogelaar [Math. Ann. 259 (1982), 541--569] about the restriction of a stable rank 3 vector bundle with $c_1 = 0$ on the projective 3-space to a general plane. It turns out that there are more exceptions to the stable restriction property than those conjectured by the three authors. One of them is a Schwarzenberger bundle (twisted by $-1$); it has $c_3 = 6$. There are also some exceptions with $c_3 = 2$ (plus, of course, their duals). We also prove, for completeness, the basic properties of the corresponding moduli spaces; they are all nonsingular and connected, of dimension 28.
mathematics
This article develops design-based ratio estimators for clustered, blocked randomized controlled trials (RCTs), with an application to a federally funded, school-based RCT testing the effects of behavioral health interventions. We consider finite population weighted least squares estimators for average treatment effects (ATEs), allowing for general weighting schemes and covariates. We consider models with block-by-treatment status interactions as well as restricted models with block indicators only. We prove new finite population central limit theorems for each block specification. We also discuss simple variance estimators that share features with commonly used cluster-robust standard error estimators. Simulations show that the design-based ATE estimator yields nominal rejection rates with standard errors near true ones, even with few clusters.
statistics
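A toy sketch of the estimator family analyzed above: finite-population weighted least squares for the ATE with block indicators and cluster-robust standard errors. The data-generating choices, weighting scheme, and statsmodels usage below are illustrative, not the paper's code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
C = 40                                       # clusters (e.g., schools)
sizes = rng.integers(10, 40, size=C)         # unequal cluster sizes
g = np.repeat(np.arange(C), sizes)           # cluster id per unit
block = (np.arange(C) % 4)[g]                # 4 blocks of clusters (toy)
z = rng.binomial(1, 0.5, size=C)[g].astype(float)  # cluster-level treatment
y = 1.0 + 0.3 * z + rng.normal(size=C)[g] + rng.normal(size=g.size)

# Restricted specification: treatment + block indicators; these weights give
# each cluster equal influence (one of many admissible weighting schemes).
X = sm.add_constant(np.column_stack([z, np.eye(4)[block][:, 1:]]))
w = 1.0 / np.bincount(g)[g]
fit = sm.WLS(y, X, weights=w).fit(cov_type="cluster", cov_kwds={"groups": g})
print(f"ATE ~ {fit.params[1]:.3f}, cluster-robust SE {fit.bse[1]:.3f}")
```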
In this paper, we consider the problem of the scattering of in-plane waves at an interface between a homogeneous medium and a metamaterial. The relevant eigenmodes in the two regions are calculated by solving a recently described non self-adjoint eigenvalue problem particularly suited to scattering studies. The method efficiently produces all propagating and evanescent modes consistent with the application of Snell's law and is applicable to very general scattering problems. In a model composite, we elucidate the emergence of a rich spectrum of eigenvalue degeneracies. These degeneracies appear in both the complex and real domains of the wave-vector. However, since this problem is non self-adjoint, these degeneracies generally represent a coalescing of both the eigenvalues and eigenvectors (exceptional points). Through explicit calculations of the Poynting vector, we point out an intriguing phenomenon: there always appears to be an abrupt change in the sign of the refraction angle of the wave on two sides of an exceptional point. Furthermore, the presence of these degeneracies, in some cases, hints at fast changes in the scattered field as the incident angle is changed by small amounts. We calculate these scattered fields through a novel application of the Betti-Rayleigh reciprocity theorem. We present several numerical examples showing a rich scattering spectrum. In one particularly intriguing example, we point out wave behavior which may be related to the phenomenon of resonance trapping. We also show that there exists a deep connection between energy flux conservation and the biorthogonality relationship of the non self-adjoint problem. The proof applies to the general class of scattering problems involving elastic waves (under self-adjoint or non self-adjoint operators).
physics
We perform an in-depth analysis of the transformation rules under duality for couplings of theories containing multiple scalars, $p$-form gauge fields, linearized gravitons or $(p,1)$ mixed symmetry tensors. Following a similar reasoning to the derivation of the Buscher rules for string background fields under T-duality, we show that the couplings for all classes of aforementioned multi-field theories transform according to one of two sets of duality rules. These sets comprise the ordinary Buscher rules and their higher counterpart; this is a generic feature of multi-field theories in spacetime dimensions where the field strength and its dual are of the same degree. Our analysis takes into account topological theta terms and generalized $B$-fields, whose behavior under duality is carefully tracked. For a 1-form or a graviton in 4D, this reduces to the inversion of the complexified coupling or generalized metric under electric/magnetic duality. Moreover, we write down an action for linearized gravity in the presence of $\theta$-term from which we obtain previously suggested on-shell duality and double duality relations. This also provides an explanation for the origin of theta in the gravitational duality relations as a specific additional sector of the linearized gravity action.
high energy physics theory
Federated edge learning (FEEL) is a popular framework for model training at an edge server using data distributed at edge devices (e.g., smartphones and sensors) without compromising their privacy. In the FEEL framework, edge devices periodically transmit high-dimensional stochastic gradients to the edge server, where these gradients are aggregated and used to update a global model. When the edge devices share the same communication medium, the multiple access channel (MAC) from the devices to the edge server induces a communication bottleneck. To overcome this bottleneck, an efficient broadband analog transmission scheme has been recently proposed, featuring the aggregation of analog-modulated gradients (or local models) via the waveform-superposition property of the wireless medium. However, the assumed linear analog modulation makes it difficult to deploy this technique in modern wireless systems that exclusively use digital modulation. To address this issue, we propose in this work a novel digital version of broadband over-the-air aggregation, called one-bit broadband digital aggregation (OBDA). The new scheme features one-bit gradient quantization followed by digital quadrature amplitude modulation (QAM) at the edge devices and over-the-air majority-voting-based decoding at the edge server. We provide a comprehensive analysis of the effects of wireless channel hostilities (channel noise, fading, and channel estimation errors) on the convergence rate of the proposed FEEL scheme. The analysis shows that the hostilities slow down the convergence of the learning process by introducing a scaling factor and a bias term into the gradient norm. However, we show that all the negative effects vanish as the number of participating devices grows, but at a different rate for each type of channel hostility.
computer science
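A minimal sketch of the OBDA signal chain described above, with additive Gaussian noise standing in for fading and channel-estimation errors (all sizes and scales are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 20, 10_000                          # edge devices, model dimension
g_local = rng.normal(size=(K, d)) + 0.3    # toy local stochastic gradients

bits = np.sign(g_local)                    # one-bit quantization per device
rx = bits.sum(axis=0) + rng.normal(scale=3.0, size=d)  # superposition + noise
vote = np.sign(rx)                         # majority-vote decoding at server

agree = (vote == np.sign(g_local.mean(axis=0))).mean()
print(f"vote matches the noiseless majority on {agree:.1%} of coordinates")
# the global update would then be a signSGD-style step: w -= lr * vote
```

In the actual scheme the sign bits modulate QAM symbols and the summation happens physically in the multiple access channel; the array sum above only mimics that superposition.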
Solid-state quantum registers are exceptional for storing quantum information at room temperature with long coherence times. Nevertheless, practical applications toward quantum supremacy require even longer coherence times to allow for more complex algorithms. In this work we propose a quantum register that lies in a decoherence-free subspace, to be implemented with color centers in diamond. The quantum information is encoded in two logical states composed of two nearby nuclear spins, while an electron spin is used as an ancilla for initialization and control. Moreover, by tuning an off-axis magnetic field we enable non-nuclear-spin-preserving transitions that we use for preparing the register through stimulated Raman adiabatic passage (STIRAP). Furthermore, we use this sequence to manipulate the quantum register and an individual nuclear spin.
quantum physics
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which causes intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained using frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate that our frequency-feature-based classification models achieve accuracies greater than 99% in the absence of attacks.
electrical engineering and systems science
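A minimal sketch of the frequency-domain feature map underlying the defense (function name and array layout are hypothetical):

```python
import numpy as np

def to_freq_features(iq):
    """Map raw IQ samples, shape (N, 2, T), to frequency-domain features:
    FFT of the complex baseband signal, re-stacked as (real, imag) so the
    same network architectures can consume either domain."""
    z = iq[:, 0, :] + 1j * iq[:, 1, :]          # complex baseband signal
    Z = np.fft.fftshift(np.fft.fft(z, axis=-1), axes=-1) / iq.shape[-1]
    return np.stack([Z.real, Z.imag], axis=1)   # shape (N, 2, T)
```

A perturbation crafted to flip a time-domain classifier is spread across all frequency bins by the transform, which is one intuition for the reduced transferability reported above.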
Muons and neutrinos from cosmic ray interactions in the atmosphere originate from decays of mesons in air-showers. Sibyll-2.3c aims to give a precise description of hadronic interactions in the relevant phase space for conventional and prompt leptons in light of new accelerator data, including that from the LHC. Sibyll is designed primarily as an event generator for use in simulation of extensive air showers. Because it has been tuned for forward physics as well as the central region, it can also be used to calculate inclusive fluxes. The purpose of this paper is to describe the use of Sibyll-2.3c for calculation of fluxes of atmospheric leptons.
high energy physics phenomenology
Interacting Fermi gas provides an ideal model system to understand unconventional pairing and intertwined orders relevant to a large class of quantum materials. Rydberg-dressed Fermi gas is a recent experimental system where the sign, strength, and range of the interaction can be controlled. The interaction in momentum space has a negative minimum at $q_c$ inversely proportional to the characteristic length-scale in real space, the soft-core radius $r_c$. We show theoretically that single-component (spinless) Rydberg-dressed Fermi gas in two dimensions has a rich phase diagram with novel superfluid and density wave orders due to the interplay of the Fermi momentum $p_F$, interaction range $r_c$, and interaction strength $u_0$. For repulsive bare interactions $u_0>0$, the dominant instability is $f$-wave superfluid for $p_Fr_c\lesssim 2$, and density wave for $p_Fr_c\gtrsim 4$. The $f$-wave pairing in this repulsive Fermi gas is reminiscent of the conventional Kohn-Luttinger mechanism, but has a much higher $T_c$. For attractive bare interactions $u_0<0$, the leading instability is $p$-wave pairing. The phase diagram is obtained from functional renormalization group that treats all competing many-body instabilities in the particle-particle and particle-hole channels on equal footing.
condensed matter
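For orientation, the Rydberg-dressed interaction referred to above is commonly modeled by a soft-core potential (schematic form; conventions vary between papers): \begin{equation*} U(r)=\frac{u_0}{1+(r/r_c)^6}, \end{equation*} whose Fourier transform oscillates in sign and attains a negative minimum at a momentum $q_c\sim 1/r_c$, the scale that competes with $p_F$ in the phase diagram.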
We present a 142-ks Chandra observation of the enigmatic combination supernova remnant G310.6-1.6, consisting of a bright pulsar-wind nebula driven by an energetic pulsar, surrounded by a highly circular, very faint shell with a featureless, probably synchrotron, spectrum. Comparison with an observation 6 years earlier shows no measurable expansion of the shell, though some features in the pulsar-wind nebula have moved. We find an expansion age of at least 2500 yr, implying a current shock velocity less than about 1000 km/s. We place severe upper limits on thermal emission from the shell; if the shell locates the blast wave, a Sedov interpretation would require the remnant to be very young, about 1000 yr, and to have resulted from a dramatically sub-energetic supernova, ejecting $\ll 0.02\,M_\odot$ with energy $E < 3\times10^{47}$ erg. Even a merger-induced collapse of a white dwarf to a neutron star, with a low-energy explosion, is unlikely to produce such an event. Other explanations seem equally unlikely.
astrophysics
Abelian T-duality in Gauged Linear Sigma Models (GLSMs) forms the basis of the physical understanding of Mirror Symmetry as presented by Hori and Vafa. We consider an alternative formulation of Abelian T-duality on GLSMs as a gauging of a global $U(1)$ symmetry with the addition of appropriate Lagrange multipliers. For GLSMs with Abelian gauge groups and without superpotential, we reproduce the dual models introduced by Hori and Vafa. We extend the construction to formulate non-Abelian T-duality on GLSMs with global non-Abelian symmetries. The equations of motion that lead to the dual model are obtained for a general group; they depend in general on semi-chiral superfields, while for cases such as $SU(2)$ they depend on twisted chiral superfields. We solve the equations of motion for an $SU(2)$ gauged group with a choice of a particular Lie algebra direction of the vector superfield. This direction covers a non-Abelian sector that can be described by a family of Abelian dualities. The dual model Lagrangian depends on twisted chiral superfields and a twisted superpotential is generated. We explore some non-perturbative aspects by making an Ansatz for the instanton corrections in the dual theories. We verify that the effective potential for the $U(1)$ field strength in a fixed configuration on the original theory matches that of the dual theory. Imposing restrictions on the vector superfield, more general non-Abelian dual models are obtained. We analyze the dual models via the geometry of their SUSY vacua.
high energy physics theory
We report small angle neutron scattering (SANS) measurements of the skyrmion lattice in (Cu$_{0.976}$Zn$_{0.024}$)$_2$OSeO$_3$ under the application of an electric field. These measurements show an expansion of the skyrmion lattice stability region with electric field similar to that seen in pristine Cu$_2$OSeO$_3$. Furthermore, using time-resolved SANS, we observe the slow formation of skyrmions after an electric or magnetic field is applied, which has not been observed in pristine Cu$_2$OSeO$_3$ crystals. The measured formation times are dramatically longer than the corresponding skyrmion destruction times after the external field is removed, and increase exponentially from 100~s at 52.5~K to 10,000~s at 51.5~K. This thermally activated behaviour indicates an energy barrier for skyrmion formation of 1.57(2)~eV, the size of which demonstrates the huge cost for creating these complex chiral objects.
condensed matter
We develop graphlet analysis for multiplex networks and discuss how this analysis can be extended to multilayer and multilevel networks, as well as to graphs with node and/or link categorical attributes. The analysis has been adapted for two typical examples of multiplexes: economic trade data represented as a 957-plex network, and 75 social networks each represented as a 12-plex network. We show that wedges (open triads) occur more often in economic trade networks than in social networks, indicating the tendency of a country to produce/trade a product within local triad structures that are not closed. Moreover, our analysis provides evidence that countries with small diversity tend to form correlated triangles. Wedges also appear in the social networks; however, the dominant graphlets in social networks are triangles (closed triads). If a multiplex structure indicates a strong tie, the graphlet analysis provides further evidence for the concepts of strong/weak ties and structural holes. In contrast to Granovetter's seminal work on the strength of weak ties, in which it was documented that wedges with only strong ties are absent, here we show that for the analyzed 75 social networks, wedges with only strong ties are not only present but also significantly correlated.
computer science
Tuning of model-based boosting algorithms relies mainly on the number of iterations, while the step-length is fixed at a predefined value. For complex models with several predictors, such as Generalized Additive Models for Location, Scale and Shape (GAMLSS), imbalanced updates of predictors, where some distribution parameters are updated more frequently than others, can be a problem that prevents some submodels from being appropriately fitted within a limited number of boosting iterations. We propose an approach using adaptive step-length (ASL) determination within a non-cyclical boosting algorithm for GAMLSS to prevent such imbalance. Moreover, for the important special case of the Gaussian distribution, we discuss properties of the ASL and derive a semi-analytical form of the ASL that avoids manual selection of the search interval and numerical optimization to find the optimal step-length, and consequently improves computational efficiency. We show competitive behavior of the proposed approaches compared to penalized maximum likelihood and boosting with a fixed step-length for GAMLSS models in two simulations and two applications, in particular for cases of large variance and/or more variables than observations. In addition, the idea of the ASL is also applicable to other models with more than one predictor, such as zero-inflated count models, and provides insights into the choice of reasonable defaults for the step-length in the simpler special case of (Gaussian) additive models.
statistics
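As a toy illustration of the adaptive step-length idea (a per-iteration line search inside plain componentwise L2-boosting, not the paper's GAMLSS machinery or its semi-analytical Gaussian formula):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def boost_asl(X, y, n_iter=200, shrink=0.1):
    pred = np.zeros_like(y, dtype=float)
    for _ in range(n_iter):
        resid = y - pred                        # negative gradient (L2 loss)
        j = np.argmax(np.abs(X.T @ resid))      # best single covariate
        h = X[:, j] * (X[:, j] @ resid) / (X[:, j] @ X[:, j])
        nu = minimize_scalar(                   # adaptive step-length search
            lambda a: np.sum((resid - a * h) ** 2),
            bounds=(0.0, 10.0), method="bounded").x
        pred += shrink * nu * h                 # shrunken adaptive step
    return pred
```

For the L2 loss the optimal step has a closed form, so the numerical search is redundant here; it stands in for the general case (e.g., non-Gaussian GAMLSS losses) where no closed form exists, which is exactly where a semi-analytical or adaptive step-length pays off.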
Systems of interacting charges and fields are ubiquitous in physics. Recently, it has been shown that Hamiltonians derived using different gauges can yield different physical results when matter degrees of freedom are truncated to a few low-lying energy eigenstates. This effect is particularly prominent in the ultra-strong coupling regime. Such ambiguities arise because transformations reshuffle the partition between light and matter degrees of freedom and so level truncation is a gauge dependent approximation. To avoid this gauge ambiguity, we redefine the electromagnetic fields in terms of potentials for which the resulting canonical momenta and Hamiltonian are explicitly unchanged by the gauge choice of this theory. Instead the light/matter partition is assigned by the intuitive choice of separating an electric field between displacement and polarisation contributions. This approach is an attractive choice in typical cavity quantum electrodynamics situations.
quantum physics
We generalize soft theorems of the nonlinear sigma model beyond the $\mathcal{O} (p^2)$ amplitudes and the coset of $\text{SU} (N) \times \text{SU} (N) / \text{SU} (N) $. We first discuss the flavor ordering of the amplitudes for the Nambu-Goldstone bosons of a general symmetry group representation, so that we can reinterpret the known $\mathcal{O} (p^2)$ single soft theorem for $\text{SU} (N) \times \text{SU} (N) / \text{SU} (N) $ in the context of a general group representation. We then investigate the special case of the fundamental representation of $\text{SO} (N)$, where a special flavor ordering of the "pair basis" is available. We provide novel amplitude relations and a Cachazo-He-Yuan formula for such a basis, and derive the corresponding single soft theorem. Next, we extend the single soft theorem for a general group representation to $\mathcal{O} (p^4)$, where for at least two specific choices of the $\mathcal{O} (p^4)$ operators, the leading non-vanishing pieces can be interpreted as new extended theory amplitudes involving bi-adjoint scalars, and the corresponding soft factors are the same as at $\mathcal{O} (p^2)$. Finally, we compute the general formula for the double soft theorem, valid to all derivative orders, where the leading part in the soft momenta is fixed by the $\mathcal{O}(p^2)$ Lagrangian, while any possible corrections to the subleading part are determined by the $\mathcal{O}(p^4)$ Lagrangian alone. Higher order terms in the derivative expansion do not contribute any new corrections to the double soft theorem.
high energy physics theory
Stokes flow, discussed by G.G. Stokes in 1851, describes many microscopic biological flow phenomena, including cilia-driven transport and flagellar motility; the need to quantify and understand these flows has motivated decades of mathematical and computational research. Regularized stokeslet methods, which have been used and refined over the past twenty years, offer significant advantages in simplicity of implementation, with a recent modification based on nearest-neighbour interpolation providing significant improvements in efficiency and accuracy. Moreover, this method can be implemented with the majority of the computation taking place through built-in linear algebra, entailing that state-of-the-art hardware and software developments in the latter, in particular multicore and GPU computing, can be exploited through minimal modifications ('passive parallelism') to existing MATLAB computer code. Hence, and with widely-available GPU hardware, significant improvements in the efficiency of the regularized stokeslet method can be obtained. The approach is demonstrated through computational experiments on three model biological flows: undulatory propulsion of multiple C. elegans, simulation of progression and transport by multiple sperm in a geometrically confined region, and left-right symmetry-breaking particle transport in the ventral node of the mouse embryo. In general, an order-of-magnitude improvement in efficiency is observed. This development further widens the complexity of biological flow systems that are accessible without the need for extensive code development or specialist facilities.
physics
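A minimal sketch of the basic regularized-stokeslet velocity evaluation (the blob kernel of Cortez; the nearest-neighbour-interpolation refinement and GPU offload discussed above are omitted, and all names are ours):

```python
import numpy as np

def reg_stokeslet_velocity(x, y, f, eps, mu=1.0):
    """Velocity at M field points x (M,3) induced by N regularized point
    forces f (N,3) at locations y (N,3), with regularization radius eps."""
    r = x[:, None, :] - y[None, :, :]            # (M, N, 3) displacements
    r2 = np.sum(r * r, axis=-1)                  # (M, N) squared distances
    den = (r2 + eps**2) ** 1.5                   # regularized denominator
    fdotr = np.einsum("mnj,nj->mn", r, f)        # f . r for every pair
    u = (f[None, :, :] * ((r2 + 2.0 * eps**2) / den)[..., None]
         + r * (fdotr / den)[..., None])
    return u.sum(axis=1) / (8.0 * np.pi * mu)
```

Because the evaluation is a dense (M,3)-by-(N,3) linear-algebra kernel, it maps directly onto multicore/GPU backends, which is the 'passive parallelism' the abstract exploits.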
We analyze the adaptive first-order algorithm AMSGrad for solving a constrained stochastic optimization problem with a weakly convex objective. We prove the $\mathcal{\tilde O}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope, which is the standard stationarity measure for this class of problems. It matches the known rates that adaptive algorithms enjoy for the specific case of unconstrained smooth stochastic optimization. Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains. Finally, we illustrate the applications and extensions of our results to specific problems and algorithms.
statistics
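For reference, one AMSGrad step in the constant-moment-parameter, projected setting analyzed above (a generic sketch of the algorithm, not code from the paper):

```python
import numpy as np

def amsgrad_step(x, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
                 proj=None):
    m, v, vhat = state
    m = b1 * m + (1.0 - b1) * grad          # first-moment estimate
    v = b2 * v + (1.0 - b2) * grad * grad   # second-moment estimate
    vhat = np.maximum(vhat, v)              # non-decreasing max correction
    x = x - lr * m / (np.sqrt(vhat) + eps)  # adaptive coordinate-wise step
    if proj is not None:                    # constrained case: project back
        x = proj(x)
    return x, (m, v, vhat)

# state is initialized as (zeros, zeros, zeros); with mini-batch size 1,
# grad is a single-sample stochastic gradient of the weakly convex objective.
```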
Preventing periodontal diseases (PD) and maintaining the structure and function of teeth are important goals for personal oral care. To understand the heterogeneity in patients with diverse PD patterns, we develop BAREB, a Bayesian repulsive biclustering method that can simultaneously cluster PD patients and their tooth sites after taking the patient- and site-level covariates into consideration. BAREB uses the determinantal point process (DPP) prior to induce diversity among different biclusters, to facilitate parsimony and interpretability. Since PD progression is hypothesized to be spatially referenced, BAREB factors in the spatial dependence among tooth sites. In addition, since PD is the leading cause of tooth loss, the missing data mechanism is non-ignorable. Such nonrandom missingness is incorporated into BAREB. For posterior inference, we design an efficient reversible jump Markov chain Monte Carlo sampler. Simulation studies show that BAREB is able to accurately estimate the biclusters, and compares favorably to alternatives. For a real-world application, we apply BAREB to a dataset from a clinical PD study and obtain desirable and interpretable results. A major contribution of this paper is the Rcpp implementation of BAREB, available at https://github.com/YanxunXu/BAREB.
statistics
We introduce new analysis methods for studying the star cluster formation processes in Orion A, especially examining the scenario of a cloud-cloud collision. We utilize the CARMA-NRO Orion survey $^{13}$CO (1-0) data to compare the molecular gas to the properties of YSOs from the SDSS III IN-SYNC survey. We show that the increase of $v_{\rm 13CO} - v_{\rm YSO}$ and the $\Sigma$ scatter of older YSOs can be signals of a cloud-cloud collision. SOFIA-upGREAT 158 $\mu$m [CII] archival data toward the northern part of Orion A are also compared to the $^{13}$CO data to test whether the position and velocity offsets between the emission from these two transitions resemble those predicted by a cloud-cloud collision model. We find that the northern part of Orion A, including the regions ONC-OMC-1, OMC-2, OMC-3 and OMC-4, shows qualitative agreement with the cloud-cloud collision scenario, while in one of the southern regions, NGC1999, there is no indication of such a process causing the birth of new stars. On the other hand, another southern cluster, L1641N, shows slight indications of a cloud-cloud collision. Overall, our results support the cloud-cloud collision process as an important mechanism for star cluster formation in Orion A.
astrophysics
The use of the full potential of stellar seismology is made difficult by the improper modeling of the uppermost layers of solar-like stars and of their influence on the modeled frequencies. Our knowledge of these \emph{surface effects} has improved thanks to the use of 3D hydrodynamical simulations, but the calculation of eigenfrequencies relies on empirical models for the description of the Lagrangian perturbation of turbulent pressure: the reduced-$\Gamma_1$ model (RGM) and the gas-$\Gamma_1$ model (GGM). Starting from the fully compressible turbulence equations, we derive both the GGM and RGM models using a closure to model the flux of turbulent kinetic energy. It is found that both models originate from two terms: the source of turbulent pressure due to compression produced by the oscillations, and the divergence of the flux of turbulent pressure. It is also demonstrated that both are compatible with the adiabatic approximation but imply a number of questionable assumptions, mainly regarding mode physics. Among other hypotheses, one has to neglect the Lagrangian perturbation of the dissipation of turbulent kinetic energy into heat and the Lagrangian perturbation of the buoyancy work.
astrophysics
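Schematically, in the notation commonly used for these closures (not necessarily this paper's), the two models prescribe \begin{equation*} \text{GGM:}\quad \frac{\delta p_{\rm turb}}{p_{\rm turb}}=\Gamma_1\,\frac{\delta\rho}{\rho}, \qquad \text{RGM:}\quad \frac{\delta p}{p}=\frac{p_{\rm gas}}{p}\,\Gamma_1\,\frac{\delta\rho}{\rho}, \end{equation*} i.e. the turbulent pressure is either perturbed like the gas pressure (GGM) or absorbed into a reduced adiabatic exponent $\Gamma_1\,p_{\rm gas}/p$ (RGM).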
The widely-used locally constant field approximation (LCFA) can be utilized in order to derive a simple closed-form expression for the total number of particles produced in the presence of a strong electromagnetic field of a general spatio-temporal configuration. A usual justification for this approximate approach is the requirement that the external field vary slowly in space and time. In this investigation, we examine the validity of the LCFA by comparing its predictions to the results obtained by means of exact nonperturbative numerical techniques. To benchmark the LCFA in the regime of small field amplitudes and low frequencies, we employ a semiclassical approach. As a reference, we consider a standing electromagnetic wave oscillating both in time and space as well as two spatially uniform field configurations: Sauter pulse and oscillating electric field. Performing a thorough numerical analysis, we identify the domain of the field parameters where the approximation is well justified. In particular, it is demonstrated that the Keldysh parameter is not a relevant quantity governing the accuracy of the LCFA.
high energy physics phenomenology
We present an operational and model-independent framework to investigate the concept of no-backwards-in-time signaling. We define no-backwards-in-time signaling conditions, closely related to the spatial no-signaling conditions. These allow for theoretical possibilities in which the future affects the past, nevertheless without signaling backwards in time. This is analogous to non-local but no-signaling spatial correlations. Furthermore, our results shed new light on situations with indefinite causal structure and their connection to quantum theory.
quantum physics
We investigate the structure of nodal solutions for coupled nonlinear Schr\"{o}dinger equations in the repulsive coupling regime. Among other results, for the following coupled system of $N$ equations, we prove the existence of infinitely many nodal solutions which share the same componentwise-prescribed nodal numbers \begin{equation}\label{ab} \left\{ \begin{array}{lr} -\Delta u_{j}+\lambda u_{j}=\mu u^{3}_{j}+\sum_{i\neq j}\beta u_{j}u_{i}^{2} & \text{in } \Omega, \\ u_{j}\in H_{0,r}^{1}(\Omega), & j=1,\dots,N, \end{array} \right. \end{equation} where $\Omega$ is a radial domain in $\mathbb{R}^n$ for $n\leq 3$, $\lambda>0$, $\mu>0$, and $\beta<0$. More precisely, let $p$ be a prime factor of $N$ and write $N=pB$. Suppose $\beta\leq-\frac{\mu}{p-1}$. Then for any given non-negative integers $P_{1},P_{2},\dots,P_{B}$, (\ref{ab}) has infinitely many solutions $(u_{1},\dots,u_{N})$ such that each of these solutions satisfies the same property: for $b=1,\dots,B$, $u_{pb-p+i}$ changes sign precisely $P_b$ times for $i=1,\dots,p$. The result reveals the complex nature of the solution structure in the repulsive coupling regime due to componentwise segregation of solutions. Our method is to combine a heat flow approach as deformation with a minimax construction of the symmetric mountain pass theorem using a $\mathbb{Z}_p$ group action index. Our method is robust, also allowing us to obtain the existence of one solution without assuming any symmetry of the coupling.
mathematics
Since the introduction of Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), the literature on generative modelling has witnessed an overwhelming resurgence. The impressive, yet elusive, empirical performance of GANs has led to the rise of many GAN-VAE hybrids, with the hope of GAN-level performance combined with the additional benefits of VAEs, such as an encoder for feature reduction, which is not offered by GANs. Recently, the Wasserstein Autoencoder (WAE) was proposed, achieving performance similar to that of GANs, yet it is still unclear whether the two are fundamentally different or can be further improved into a unified model. In this work, we study the $f$-GAN and WAE models and make two main discoveries. First, we find that the $f$-GAN and WAE objectives partake in a primal-dual relationship and are equivalent under some assumptions, which then allows us to explicate the success of WAE. Second, the equivalence result allows us to, for the first time, prove generalization bounds for Autoencoder models, which is a pertinent problem when it comes to theoretical analyses of generative models. Furthermore, we show that the WAE objective is related to other statistical quantities such as the $f$-divergence and, in particular, is upper bounded by the Wasserstein distance, which then allows us to tap into existing efficient (regularized) optimal transport solvers. Our findings thus present the first primal-dual relationship between GANs and Autoencoder models, comment on generalization abilities, and take a step towards unifying these models.
statistics
We propose a general framework for studying adaptive regret bounds in the online learning setting, including model selection bounds and data-dependent bounds. Given a data- or model-dependent bound, we ask, "Does there exist some algorithm achieving this bound?" We show that modifications to recently introduced sequential complexity measures can be used to answer this question by providing sufficient conditions under which adaptive rates can be achieved. In particular, each adaptive rate induces a set of so-called offset complexity measures, and obtaining small upper bounds on these quantities is sufficient to demonstrate achievability. A cornerstone of our analysis technique is the use of one-sided tail inequalities to bound suprema of offset random processes. Our framework recovers and improves a wide variety of adaptive bounds including quantile bounds, second-order data-dependent bounds, and small-loss bounds. In addition, we derive a new type of adaptive bound for online linear optimization based on the spectral norm, as well as a new online PAC-Bayes theorem that holds for countably infinite sets.
computer science
Skeleton data, which consist of only the 2D/3D coordinates of the human joints, have been widely studied for human action recognition. Existing methods take the semantics as prior knowledge to group human joints and draw correlations according to their spatial locations, which we call the semantic perspective for skeleton modeling. In this paper, in contrast to previous approaches, we propose to model skeletons from a novel spatial perspective, from which the model takes the spatial location as prior knowledge to group human joints and mines the discriminative patterns of local areas in a hierarchical manner. The two perspectives are orthogonal and complementary to each other, and by fusing them in a unified framework our method achieves a more comprehensive understanding of the skeleton data. In addition, we customize two networks for the two perspectives. From the semantic perspective, we propose a Transformer-like network that excels at modeling joint correlations, and present three effective techniques to adapt it for skeleton data. From the spatial perspective, we transform the skeleton data into a sparse format for efficient feature extraction and present two types of sparse convolutional networks for sparse skeleton modeling. Extensive experiments are conducted on three challenging datasets for skeleton-based human action/gesture recognition, namely NTU-60, NTU-120 and SHREC, where our method achieves state-of-the-art performance.
computer science
We present a statistical analysis of near-relativistic (NR) solar energetic electron event spectra near 1 au. We use measurements of the STEREO Solar Electron and Proton Telescope (SEPT) in the energy range of 45-425 keV and utilize the SEPT electron event list containing all electron events observed by STEREO A and STEREO B from 2007 through 2018. We select 781 events with significant signal-to-noise ratios for our analysis and fit the spectra with single or broken power-law functions of energy. We find 437 (344) events showing broken (single) power laws in the energy range of SEPT. The events with broken power laws show a mean break energy of about 120 keV. We analyze the dependence of the spectral index on the rise times and peak intensities of the events as well as on the presence of relativistic electrons. The results show a relation between the power-law spectral index and the rise times of the events, with softer spectra belonging to rather impulsive events. Long rise-time events are associated with hard spectra as well as with the presence of higher-energy (>0.7 MeV) electrons. This group of events cannot be explained by a pure flare scenario but suggests an additional acceleration mechanism, involving a prolonged acceleration and/or injection of the particles. A dependence of the spectral index on the longitudinal separation from the parent solar source region was not found. A statistical analysis of the spectral indices during impulsively rising events (rise times < 20 minutes) is also shown.
astrophysics
We construct high-order semi-discrete-in-time and fully discrete (with Fourier-Galerkin in space) schemes for the incompressible Navier-Stokes equations with periodic boundary conditions, and carry out the corresponding error analysis. The schemes are of implicit-explicit type based on a scalar auxiliary variable (SAV) approach. It is shown that the numerical solutions of these schemes are uniformly bounded without any restriction on the time step size. These uniform bounds enable us to carry out a rigorous error analysis for the schemes up to fifth order in a unified form, and to derive global error estimates in $l^\infty(0,T;H^1)\cap l^2(0,T;H^2)$ in the two-dimensional case as well as local error estimates in $l^\infty(0,T;H^1)\cap l^2(0,T;H^2)$ in the three-dimensional case. We also present numerical results confirming our theoretical convergence rates and demonstrating the advantages of higher-order schemes for flows with complex structures in the double shear layer problem.
mathematics
Metallicity is known to significantly affect the radial expansion of a massive star: the lower the metallicity, the more compact the star, especially during its post-MS evolution. We study this effect in the context of binary evolution. Using the stellar-evolution code MESA, we computed evolutionary tracks of stars at different metallicities, exploring variations of factors known to affect the radial expansion (e.g. semiconvection, overshooting, rotation). We find observational support for an evolution in which, already at metallicity $0.2Z_{\odot}$, massive stars remain relatively compact during the Hertzsprung-gap (HG) phase and most of their expansion occurs during core-helium burning (CHeB). Consequently, we show that metallicity has a strong influence on the type of mass transfer evolution in binary systems. At solar metallicity, case-B mass transfer is initiated shortly after the end of the MS, and a giant donor is almost always a rapidly expanding HG star. At lower metallicity, the parameter space for mass transfer from a more evolved CHeB star increases dramatically. This means that envelope stripping and the formation of helium stars in low-metallicity environments occur later in the evolution of the donor, implying a much shorter duration of the Wolf-Rayet phase (even by an order of magnitude) and higher final core masses. This metallicity effect is independent of the impact of metallicity-dependent stellar winds. At very low metallicities, a significant fraction of massive stars in binaries engage in their first episode of mass transfer very late in their evolution, when they already have a well-developed CO core. The remaining lifetime ($< 10^4$ yr) is unlikely to be enough to strip the entire H-rich envelope. We also briefly discuss the extremely small parameter space for mass transfer from massive convective-envelope donors in the context of binary black hole merger formation.
astrophysics
We consider the quantum theory of paraxial non-relativistic electron beams in non-uniform magnetic fields, such as the Glaser field. We find the wave function of an electron from such a beam and show that it is a joint eigenstate of two ($z$-dependent) commuting gauge-independent operators. This generalized Laguerre-Gaussian vortex beam has a phase that is shown to consist of two parts, each being proportional to the eigenvalue of one of the two conserved operators and each having different symmetries. We also describe the dynamics of the angular momentum and cross-sectional area of any mode and how a varying magnetic field can split a mode into a superposition of modes. By a suitable change in frame of reference all of our analysis also applies to an electron in a quantum Hall system with a time-dependent magnetic field.
quantum physics
Based on the strongly lensed gravitational waves (GWs) from compact binary coalescence, we propose a new strategy to examine the fluid shear viscosity of dark matter (DM) in the gravitational wave domain, i.e., whether a GW experiences the damping effect when it propagates in DM fluid with nonzero shear viscosity. By assuming that the dark matter self-scatterings are efficient enough for the hydrodynamic description to be valid, our results demonstrate that future ground-based Einstein Telescope (ET) and satellite GW observatory (Big Bang Observer; BBO) may succeed in detecting any dark matter self-interactions at the scales of galaxies and clusters.
astrophysics
Thermodynamic properties of an interacting system of bosons are considered at finite temperatures and zero chemical potential within a Skyrme-like mean-field model. An interplay between attractive and repulsive interactions is investigated. As a particular example, an equilibrium system of pions is discussed. Several modifications of thermodynamic properties in the considered system are found with increasing strength of the attractive forces. Different types of the first order phase transition are classified. Some of these transitions exist also in the Boltzmann approximation. However, effects of Bose statistics introduce notable additional changes in the thermodynamic quantities due to the possibility of Bose-Einstein condensation.
high energy physics phenomenology
Gauge fields play important roles in condensed matter, explaining for example nonreciprocal and topological transport phenomena. Establishing gauge potentials for phonon transport in nanomechanical systems would bring quantum Hall physics to a new domain, which offers broad applications in sensing and signal processing, and is naturally associated with strong nonlinearities and thermodynamics. In this work, we demonstrate a magnetic gauge field for nanomechanical vibrations in a scalable, on-chip optomechanical system. We exploit multimode optomechanical interactions, which provide a useful resource for the necessary breaking of time-reversal symmetry. In a dynamically modulated nanophotonic system, we observe how radiation pressure forces mediate phonon transport between resonators of different frequencies, with a high rate and a characteristic nonreciprocal phase mimicking the Aharonov-Bohm effect. We show that the introduced scheme does not require high-quality cavities, such that it can be straightforwardly extended to explore topological acoustic phases in many-mode systems resilient to realistic disorder.
physics
This letter presents an analytical path loss model for air-ground (AG) propagation between unmanned aerial vehicles (UAVs) and ground-based vehicles. We consider built-up areas, such as the ones defined by ITU-R. The three-dimensional (3D) path loss model is based on propagation conditions, and essential parameters are derived by using geometric methods. Owing to its generality, the analytical model can handle arbitrary deployments of buildings, such as suburban, urban and dense urban. The analytical model is evaluated numerically, and validations conducted by ray-tracing simulations show the high accuracy of the proposed model. The closed-form analytical formulas provide a useful tool for quick and accurate prediction of UAV-to-vehicle propagation channels.
electrical engineering and systems science
We explore possible signatures for charged lepton flavour violation (LFV), sparticle discovery at the LHC and dark matter (DM) searches in grand unified theories (GUTs) based on SU(5), flipped SU(5) (FSU(5)) and SU(4)$_c \times $SU(2)$_L \times $SU(2)$_R$ (4-2-2). We assume that soft supersymmetry-breaking terms preserve the group symmetry at some high input scale, and focus on the non-universal effects on different matter representations generated by gauge interactions at lower scales, as well as the charged LFV induced in Type-1 see-saw models of neutrino masses. We identify the different mechanisms that control the relic DM density in the various GUT models, and contrast their LFV and LHC signatures. The SU(5) and 4-2-2 models offer good detection prospects both at the LHC and in LFV searches, though with different LSP compositions, and the SU(5) and FSU(5) models offer LFV within the current reach. The 4-2-2 model allows chargino and gluino coannihilations with neutralinos, and the former offer good detection prospects for both the LHC and LFV, while gluino coannihilations lead to lower LFV rates. Our results indicate that LFV is a powerful tool that complements LHC and DM searches, providing significant insights into the sparticle spectra and neutrino mass parameters in different models.
high energy physics phenomenology
This paper introduces cosmoDC2, a large synthetic galaxy catalog designed to support precision dark energy science with the Large Synoptic Survey Telescope (LSST). CosmoDC2 is the starting point for the second data challenge (DC2) carried out by the LSST Dark Energy Science Collaboration (LSST DESC). The catalog is based on a trillion-particle, (4.225 Gpc)^3 box cosmological N-body simulation, the `Outer Rim' run. It covers 440 deg^2 of sky area to a redshift of z=3 and is complete to a magnitude depth of 28 in the r-band. Each galaxy is characterized by a multitude of properties including stellar mass, morphology, spectral energy distributions, broadband filter magnitudes, host halo information and weak lensing shear. The size and complexity of cosmoDC2 requires an efficient catalog generation methodology; our approach is based on a new hybrid technique that combines data-driven empirical approaches with semi-analytic galaxy modeling. A wide range of observation-based validation tests has been implemented to ensure that cosmoDC2 enables the science goals of the planned LSST DESC DC2 analyses. This paper also represents the official release of the cosmoDC2 data set, including an efficient reader that facilitates interaction with the data.
astrophysics
We develop an exactly solvable framework for Markov decision processes with a finite horizon and continuous state and action spaces. We first review the exact solution of conventional linear quadratic regulation with linear transitions and Gaussian noise, whose optimal policy does not depend on the Gaussian noise, an undesirable feature in the presence of significant noise. This motivates us to investigate exact solutions that do depend on the noise. To do so, we generalize the reward accumulation to be a general binary commutative and associative operation. With a new multiplicative accumulation, we obtain an exact solution of the optimization, again assuming linear transitions with Gaussian noise, whose optimal policy is noise-dependent, in contrast to the additive accumulation. Furthermore, we show that the multiplicative scheme is a general framework that covers the additive one to arbitrary precision, which is a model-independent principle.
computer science
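To make the linear-quadratic baseline concrete, the sketch below implements the standard backward Riccati recursion for finite-horizon LQR with additive cost accumulation; the system matrices, cost weights, and horizon are illustrative assumptions, not values from the paper. It exhibits the property noted above: the computed feedback gains contain no reference to the noise covariance.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, T):
    """Backward Riccati recursion for finite-horizon LQR:
    x_{t+1} = A x_t + B u_t + noise, cost = sum of x'Qx + u'Ru.
    Returns time-indexed gains K_t with u_t = -K_t x_t; note that the
    Gaussian noise never enters the recursion (the feature criticized
    in the abstract)."""
    P = Q.copy()                                   # terminal cost-to-go
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)              # Riccati update
        gains.append(K)
    return gains[::-1]                             # ordered t = 0 .. T-1

# toy double-integrator example (illustrative values)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K0 = finite_horizon_lqr(A, B, np.eye(2), np.eye(1), T=20)[0]
```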
Many studies have already examined the damage behaviour of dual-phase steels. It is a topic of high interest, since understanding the mechanisms of damage during forming processes enables the production of steels with improved properties and damage tolerance. However, the focus has rarely been on the comparison between representatives of this steel class, and numerical simulation has not been thoroughly used for the quantification of damage states. Therefore, this study compares the damage initiation and accumulation of two dual-phase steels (DP800 and DP1000), which are used in the automotive industry. Additionally, parameter sets of a phenomenological damage mechanics model with coupled damage evolution are calibrated for each material. The combined analysis reveals an earlier initiation of damage for the DP800, where the damage accumulation phase is prolonged. For DP1000, the damage nucleates only shortly before material failure. The material model is able to correctly predict this behaviour, while experimental analysis confirms the prediction via light optical and SEM metallography.
condensed matter
Using the quantum chromodynamics (QCD) equation of state (EoS) from lattice calculations we investigate effects from QCD on primordial gravitational waves (PGWs) produced during the inflationary era. We also consider different cases for vanishing and nonvanishing lepton asymmetry where the latter one is constrained by cosmic microwave background experiments. Our results show that there is up to a few percent deviation in the predicted gravitational wave background in the frequency range around the QCD transition ($10^{-10}$-$10^{-7}$ Hz) for different lattice QCD EoSs, or at larger frequencies for nonvanishing lepton asymmetry using perturbative QCD. Future gravitational wave experiments with high enough sensitivity in the measurement of the amplitude of PGWs like SKA, EPTA, DECIGO and LISA can probe these differences and can shed light on the real nature of the cosmic QCD transition and the existence of a nonvanishing lepton asymmetry in the early universe.
high energy physics phenomenology
We show that accordiohedra furnish polytopes which encode amplitudes for all massive scalar field theories with generic interactions. This is done by deriving integral formulae for the Feynman diagrams at tree level and integrands at one loop level in the planar limit using the twisted intersection theory of convex realizations of the accordiohedron polytopes.
high energy physics theory
The evidence of Dark Matter (DM) is one of the strongest observational arguments in favour of physics beyond the Standard Model. Despite expectations, similar evidence has been lacking so far in collider searches, with the possible exception of $B$-physics discrepancies, a coherent set of persistent deviations in a homogeneous dataset consisting of $b \to c$ and $b \to s$ semi-leptonic transitions. We explore the question whether DM and the $B$ discrepancies may have a common origin. We do so in the context of the so-called 4321 gauge model, a UV-complete and calculable setup that yields a $U_1$ leptoquark, by far the most successful single mediator able to explain the $B$ anomalies, along with other new gauge bosons, including a $Z^\prime$. Adding to this setup a 'minimal' DM fermionic multiplet, consisting of a ${\bf 4}$ under the 4321's $SU(4)$, we find the resulting model in natural agreement with the relic-density observation and with the most severe direct-detection bounds, in the sense that the parameter space selected by $B$ physics is also the one favoured by DM phenomenology. The DM candidate is a particle with a mass in the WIMP range, the freeze-out dynamics includes a co-annihilator (the 'rest' of the ${\bf 4}$ multiplet), and the most important gauge mediator in the DM sector is the $Z^\prime$.
high energy physics phenomenology
The manifold Helmholtzian (1-Laplacian) operator $\Delta_1$ elegantly generalizes the Laplace-Beltrami operator to vector fields on a manifold $\mathcal M$. In this work, we propose the estimation of the manifold Helmholtzian from point cloud data by a weighted 1-Laplacian $\mathbf{\mathcal L}_1$. While higher order Laplacians have been introduced and studied, this work is the first to present a graph Helmholtzian constructed from a simplicial complex as an estimator for the continuous operator in a non-parametric setting. Equipped with the geometric and topological information about $\mathcal M$, the Helmholtzian is a useful tool for the analysis of flows and vector fields on $\mathcal M$ via the Helmholtz-Hodge theorem. In addition, the $\mathbf{\mathcal L}_1$ allows the smoothing, prediction, and feature extraction of the flows. We demonstrate these possibilities on substantial sets of synthetic and real point cloud datasets with non-trivial topological structures, and provide theoretical results on the limit of $\mathbf{\mathcal L}_1$ to $\Delta_1$.
statistics
At present, we have almost as many theories to explain Fast Radio Bursts as we have Fast Radio Bursts observed. This landscape will be changing rapidly with CHIME/FRB, recently commissioned in Canada, and HIRAX, under construction in South Africa. This is an opportune time to review existing theories and their observational consequences, allowing us to efficiently curtail viable astrophysical models as more data becomes available. In this article we provide a currently up to date catalogue of the numerous and varied theories proposed for Fast Radio Bursts so far. We also launch an online evolving repository for the use and benefit of the community to dynamically update our theoretical knowledge and discuss constraints and uses of Fast Radio Bursts.
astrophysics
We investigate the constraints on the reionization history of the Universe from a joint analysis of cosmic microwave background and neutral hydrogen fraction data. We apply both the $\tanh$ parametrization and a principal component analysis (PCA) to the reionization history. The commonly used $\tanh$ parametrization proves overly simplistic once the neutral hydrogen fraction data are taken into account. Using the PCA method, the reconstructed reionization history is consistent with the neutral hydrogen fraction data. With the PCA method, we reconstruct the neutral hydrogen fraction at $z=9.75$ as $x_{\text{HI}}=0.69^{+0.30}_{-0.32}$ for the $6<z<20$ range reconstruction, and $x_{\text{HI}}=0.76^{+0.22}_{-0.27}$ for the $6<z<30$ range reconstruction. These results suggest that the Universe began to reionize at a redshift no later than $z=10$ at the $95\%$ confidence level.
astrophysics
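For reference, the sketch below evaluates the commonly used $\tanh$ parametrization of the ionized fraction, following the CAMB-style convention in which the transition is smoothed in $y=(1+z)^{3/2}$; the reionization redshift, width, and normalization are illustrative assumptions rather than the paper's fitted values.

```python
import numpy as np

def xe_tanh(z, z_re=8.0, delta_z=0.5, f=1.0):
    """tanh parametrization of the ionized fraction x_e(z).
    z_re is the redshift at which x_e = f/2; delta_z controls the
    transition width.  Values here are illustrative, not fits."""
    y, y_re = (1.0 + z) ** 1.5, (1.0 + z_re) ** 1.5
    delta_y = 1.5 * np.sqrt(1.0 + z_re) * delta_z
    return 0.5 * f * (1.0 + np.tanh((y_re - y) / delta_y))

z = np.linspace(0.0, 20.0, 201)
x_HI = 1.0 - xe_tanh(z)   # neutral hydrogen fraction, complementary to x_e
```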
We have performed a stratigraphic and mineralogical analysis of a vertical transect across a ridge located at the distal end of a system of eroded alluvial deposits in the northern Atacama Desert of Chile. The ridge, which is interpreted to be an inverted channel, exhibits a history of sedimentary, evaporitic, and diagenetic origin that includes groundwater mobilization and precipitation of anhydrite cements throughout the volume of the ridge. The ridge consists of two units: a lower one exhibiting a sedimentary and diagenetic history, and an upper one exhibiting an evaporitic history. Interbedded in the section are also anhydritic and gypsic paleosols. Two mechanisms that contribute to channel preservation and inversion are identified in this case. The first mechanism is the cementation of the volume by anhydrite cements during early diagenesis, and the second, newly identified mechanism is the armoring of the lateral slopes of the ridge by halite-rich cement. The slope-conforming armor formed by this second mechanism developed subsequent to the formation of the ridge as a consequence of the remobilization of soluble salts. Finally, we identify a series of Ca-sulfate-rich plates on the surface of the ridge, which we interpret to have formed by fracturing and subsequent erosion of an evaporitic deposit. The plates exhibit a reticulated surface texture, which we interpret as the result of periodic deliquescence and reprecipitation of a thin surface film of the evaporite deposits in response to the thick morning fogs that occur in this part of the Atacama. The cross section of the plates exhibits a thin layer of biological material, which we ascribe to bacterial mats that take advantage of the deliquescence of the substrate to obtain their water. This last observation has important implications for the search for extant or extinct life on Mars.
astrophysics
The non-zero value of Planck constant $\hbar$ underlies the emergence of several inequalities that must be satisfied in the quantum realm, the most prominent one being the Heisenberg Uncertainty Principle. Among these inequalities, the Bekenstein bound provides a universal limit on the entropy that can be contained in a localized quantum system of given size and total energy. In this letter, we explore how the Bekenstein bound is affected when the Heisenberg uncertainty relation is deformed so as to accommodate gravitational effects at the Planck scale (Generalized Uncertainty Principle). By resorting to very general arguments, we derive in this way a "generalized Bekenstein bound". Physical implications of this result are discussed for both cases of positive and negative values of the deformation parameter.
high energy physics theory
Recent technological advancements are making the use of compact, low-cost, low-power mm-wave radars viable for providing environmental awareness in a number of applications, ranging from automotive to indoor mapping and radio resource optimisation. These emerging use-cases pave the road towards networks in which a large number of radar and broadband communications devices coexist, sharing a common spectrum band in a possibly uncoordinated fashion. Although a clear understanding of how mutual interference influences radar and communications performance is key to proper system design, the core tradeoffs that arise in such scenarios are still largely unexplored. In this paper, we provide results that help bridge this gap, obtained by means of an analytical model and extensive simulations. To capture the fundamental interactions between the two systems, we study mm-wave networks where pulsed radars coexist with communications devices that access the channel following an ALOHA policy. We investigate the effect of key parameters on the performance of the coexisting systems, including the network density, fraction of radar and communication nodes in the network, antenna directivity, and packet length. We quantify the effect of mutual interference in the coexistence scenario on radar detection and communication network throughput, highlighting some non-trivial interplays and deriving useful design tradeoffs.
computer science
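As a minimal illustration of the communications-side access model, the sketch below Monte Carlo-estimates slotted-ALOHA throughput with no capture effect and no radar interference; it is a deliberate simplification of the paper's coexistence scenario, and all parameters are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def aloha_throughput(n_nodes, p_tx, n_slots=100_000):
    """Monte Carlo estimate of slotted-ALOHA throughput: a slot succeeds
    iff exactly one node transmits (no capture, no radar interferers --
    a simplification of the coexistence setup in the paper)."""
    tx = rng.random((n_slots, n_nodes)) < p_tx
    return np.mean(tx.sum(axis=1) == 1)

# throughput peaks near p_tx = 1/n_nodes, at roughly 1/e
print(aloha_throughput(n_nodes=20, p_tx=1 / 20))   # ~0.377
```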
Future prospects for solar spectroscopy missions operating in the extreme ultraviolet (EUV) and soft X-ray (SXR) wavelength ranges, 1.2-1600 Angstroms, are discussed. NASA is the major funder of Solar Physics missions, and brief summaries of the opportunities for mission development under NASA are given. Upcoming major solar missions from other nations are also described. The methods of observing the Sun in the two wavelength ranges are summarized with a discussion of spectrometer types, imaging techniques and detector options. The major spectral features in the EUV and SXR regions are identified, and then the upcoming instruments and concepts are summarized. The instruments range from large spectrometers on dedicated missions, to tiny, low-cost CubeSats launched through rideshare opportunities.
astrophysics
We discuss 3d $\mathcal{N}=1$ supersymmetric SU(N) and U(N) Chern-Simons-matter theories, with $N_f$ matter superfields in the fundamental representation of SU(N) or U(N). In the large N 't Hooft limit with fixed 't Hooft coupling $\lambda$ these theories have one (for $N_f=1$) or two (for $N_f > 1$) exactly marginal deformations in the superpotential. At finite N these couplings acquire a beta function. We compute the beta function exactly for $\lambda=0$, at leading order in 1/N. For $N_f=1$ we find four fixed points, one of which is triply-degenerate. We show that at large N there are at most six fixed points for any $\lambda$, and conjecture that there are exactly six, with three of them stable (including a point with enhanced $\mathcal{N}=2$ supersymmetry). The strong-weak coupling dualities of $\mathcal{N}=1$ Chern-Simons-matter theories map each of these fixed points to a dual one. We show that at large N the phase structure near each of the three stable fixed points is different. For $N_f>1$ we analyze the fixed points at weak coupling, and we work out the action of the strong-weak coupling duality on the marginal and relevant superpotential couplings at large N (which was previously known only for $N_f=1$). In addition, we compute in these theories the 2-point and 3-point functions of the lowest gauge-invariant singlet superfield at large N, for all values of $\lambda$ and of the superpotential couplings, and use them to test the large N dualities. This computation is one of the ingredients needed for a computation of the beta function at order 1/N for all $\lambda$, which we leave for future work. We also discuss Chern-Simons-matter theories with extra Hubbard-Stratonovich type singlet fields, and suggest dualities between them.
high energy physics theory
This paper considers an Internet-of-Things (IoT) scenario in which devices sporadically transmit short packets with few pilot symbols over a fading channel. Devices are characterized by unique transmission non-idealities, such as I/Q imbalance. The number of pilots is generally insufficient to obtain an accurate estimate of the end-to-end channel, which includes the effects of fading and of the transmission-side distortion. This paper proposes to tackle this problem by using meta-learning. Accordingly, pilots from previous IoT transmissions are used as meta-training data in order to train a demodulator that is able to quickly adapt to new end-to-end channel conditions from few pilots. Various state-of-the-art meta-learning schemes are adapted to the problem at hand and evaluated, including Model-Agnostic Meta-Learning (MAML), First-Order MAML (FOMAML), REPTILE, and fast Context Adaptation VIA meta-learning (CAVIA). Both offline and online solutions are developed. In the latter case, an integrated online meta-learning and adaptive pilot number selection scheme is proposed. Numerical results validate the advantages of meta-learning as compared to training schemes that either do not leverage prior transmissions or apply standard joint learning algorithms to previously received data.
electrical engineering and systems science
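A minimal sketch of the first-order MAML (FOMAML) meta-update on a toy linear "channel" model is given below; the per-task slope standing in for a per-device end-to-end channel, the learning rates, and the task sampler are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(w, X, y):
    """Gradient of mean-squared error for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fomaml_step(w, tasks, inner_lr=0.05, outer_lr=0.01):
    """One FOMAML meta-update: adapt on each task's support set, then
    move the shared initialization along the query-set gradient taken
    at the adapted weights (the first-order approximation to MAML)."""
    meta_grad = np.zeros_like(w)
    for X_s, y_s, X_q, y_q in tasks:
        w_adapted = w - inner_lr * mse_grad(w, X_s, y_s)   # inner loop
        meta_grad += mse_grad(w_adapted, X_q, y_q)         # query loss
    return w - outer_lr * meta_grad / len(tasks)

def sample_task(n=10):
    """Toy 'channel': y = a*x with a random per-task slope a, standing
    in for a per-device end-to-end channel (purely illustrative)."""
    a = rng.normal()
    X = rng.normal(size=(2 * n, 1))
    y = a * X[:, 0]
    return X[:n], y[:n], X[n:], y[n:]

w = np.zeros(1)
for _ in range(500):
    w = fomaml_step(w, [sample_task() for _ in range(8)])
```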
The dynamic response of a one-dimensional viscoelastic rod of finite length, with one end fixed and the other subject to prescribed displacement or stress, is analyzed by the analytical means of the Laplace transform, yielding the displacement and stress at an arbitrary point of the rod as a convolution of the boundary forcing and a solution kernel. Thermodynamically consistent Burgers models are adopted as the constitutive equations describing the mechanical properties of the rod. Short-time asymptotics implies a finite wave propagation speed for models of the second class, in contrast to models of the first class. Moreover, Burgers models of the first class yield quite classical shapes of displacement and stress time profiles resulting from boundary forcing assumed as the Heaviside function, while models of the second class yield responses that resemble a sequence of excitation and relaxation processes.
physics
We explore constraints imposed by shear-driven instabilities on the acceleration of energetic particles in relativistic shearing flows. We show that shearing layers in large-scale AGN jets are likely to encompass a sizeable fraction ($\geq 0.1$) of the jet radius, requiring seed injection of GeV electrons for efficient acceleration. While the diffusion process may depend on pre-developed turbulence if injection occurs at higher energies, electron acceleration to PeV and proton acceleration to EeV energies appears possible within the constraints imposed by jet stability.
astrophysics
The worldwide outbreak of COVID-19 has posed a dire threat to the public. Human mobility has changed in various ways over the course of the pandemic. Despite current studies on common mobility metrics, research specifically on state-to-state mobility is very limited. By leveraging mobile phone location data from over 100 million anonymous devices, we estimate the population flow between all states in the United States. We first analyze the temporal pattern and spatial differences of between-state flow from January 1, 2020 to May 15, 2020. Then, with repeated measures ANOVA and post-hoc analysis, we discern different time-course patterns of between-state population flow across pandemic severity groups. A further analysis shows a moderate to high correlation between the flow reduction and the pandemic severity, the strength of which varies with different policies. These results show promise for predicting imported cases.
physics
According to the Standard Model (SM), we expect to find a proton for each decaying neutron. However, the experiments counting the number of decayed neutrons and produced protons disagree. This discrepancy suggests that neutrons might have an exotic decay to a Dark Sector (DS). In this paper, we explore a scenario where neutrons decay to a dark Dirac fermion $\chi$ and a non-abelian dark gauge boson $W'$. We discuss the cosmological implications of this scenario assuming DS particles are produced via freeze-in. In our proposed scenario, the DS has three portals with the SM sector: (1) the fermion portal coming from the mixing of the neutron with $\chi$, (2) a scalar portal, and (3) a non-renormalizable kinetic mixing between the photon and dark gauge bosons which induces a vector portal between the two sectors. We show that neither the fermion portal nor the scalar portal should contribute to the production of the particles in the early universe. Specifically, we argue that the maximum temperature of the universe must be low enough to prevent the production of $\chi$ in the early universe. In this paper, we rely on the vector portal to connect the two sectors, and we discuss the phenomenological bounds on the model. The main constraints come from ensuring the right relic abundance of dark matter and evading the neutron star bounds. When the dark gauge boson is very light, measurements of Big Bang Nucleosynthesis impose a stringent constraint as well.
high energy physics phenomenology
We present IQu, a quantum programming language that extends Reynolds' Idealized Algol, the paradigmatic core of Algol-like languages. IQu combines imperative programming with higher-order features, mediated by a simple type theory. IQu mildly merges its quantum features with the classical programming style that we can experiment with through Idealized Algol, the aim being to ease a transition towards the quantum programming world. The proposed extension is made along two main directions. First, IQu provides access to quantum co-processors by means of quantum stores. Second, IQu includes some support for the direct manipulation of quantum circuits, in accordance with recent trends in the development of quantum programming languages. Finally, we show that IQu is quite effective in expressing well-known quantum algorithms.
computer science
The Functional Linear Model with Functional Response (FLMFR) is one of the most fundamental models for assessing the relation between two functional random variables. In this paper, we propose a novel goodness-of-fit test for the FLMFR against a general, unspecified, alternative. The test statistic is formulated in terms of a Cram\'er-von Mises norm over a doubly-projected empirical process which, using geometrical arguments, yields an easy-to-compute weighted quadratic norm. A resampling procedure calibrates the test through a wild bootstrap on the residuals and the use of convenient computational procedures. As a side contribution, and since the statistic requires a reliable estimator of the FLMFR, we discuss and compare several regularized estimators, providing a new one specifically suited to our test. The finite sample behavior of the test is illustrated via a simulation study. Also, the new proposal is compared with previous significance tests. Two novel real datasets illustrate the application of the new test.
statistics
Factorial survival designs with right-censored observations are commonly inferred by Cox regression and explained by means of hazard ratios. However, in the case of non-proportional hazards, their interpretation can become cumbersome, especially for clinicians. We therefore offer an alternative: median survival times are used to estimate treatment and interaction effects, and null hypotheses are formulated in contrasts of their population versions. Permutation-based tests and confidence regions are proposed and shown to be asymptotically valid. Their type-1 error control and power behavior are investigated in extensive simulations, showing the new methods' wide applicability. The simulations are complemented by an illustrative data analysis.
statistics
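The sketch below shows the bare permutation-test idea on a difference of sample medians; for simplicity it ignores right-censoring and studentization, both of which the paper's methods handle, so it should be read as a conceptual illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_test_median(x, y, n_perm=10_000):
    """Two-sample permutation test on the difference of sample medians.
    This sketch ignores right-censoring, which the paper's estimators
    explicitly account for; it only illustrates the resampling logic."""
    obs = np.median(x) - np.median(y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = np.median(perm[:len(x)]) - np.median(perm[len(x):])
        count += abs(stat) >= abs(obs)
    return (count + 1) / (n_perm + 1)   # permutation p-value

p = perm_test_median(rng.exponential(1.0, 40), rng.exponential(1.4, 40))
```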
Suppose $k \ge 2$ is an integer. Let $Y_k$ be the poset with elements $x_1, x_2, y_1, y_2, \ldots, y_{k-1}$ such that $y_1 < y_2 < \cdots < y_{k-1} < x_1, x_2$ and let $Y_k'$ be the same poset but all relations reversed. We say that a family of subsets of $[n]$ contains a copy of $Y_k$ on consecutive levels if it contains $k+1$ subsets $F_1, F_2, G_1, G_2, \ldots, G_{k-1}$ such that $G_1\subset G_2 \subset \cdots \subset G_{k-1} \subset F_1, F_2$ and $|F_1| = |F_2| = |G_{k-1}|+1 =|G_{k-2}|+ 2= \cdots = |G_{1}|+k-1$. If both $Y_k$ and $Y'_k$ on consecutive levels are forbidden, the size of the largest such family is denoted by $\mathrm{La}_{\mathrm{c}}(n, Y_k, Y'_k)$. In this paper, we will determine the exact value of $\mathrm{La}_{\mathrm{c}}(n, Y_k, Y'_k)$.
mathematics
The research community has been actively working on the realization of quantum computers. Although large-scale commercial quantum computers are not yet a reality, the quantum computing field has grown richer by the day with the advent of new algorithms and avenues of application in multiple domains. The availability of efficient quantum simulators will enable researchers to quickly verify their results and concepts in order to establish a working proof of correctness. One important algorithm that has become a basic ingredient for building other algorithms and models is Grover's search algorithm, which is known to be among the most compute-intensive to simulate. Our approach highlights design principles for the fast simulation of Grover's search that can be implemented on a general-purpose personal computer. The performance obtained is encouraging when compared to existing simulators.
quantum physics
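A minimal dense statevector simulation of Grover's search is sketched below; it is not the paper's implementation, but it illustrates one standard design principle for fast simulation on a personal computer: applying the oracle and diffusion operators as O(2^n) vector updates instead of explicit 2^n x 2^n matrices.

```python
import numpy as np

def grover(n_qubits, marked, n_iters=None):
    """Dense statevector simulation of Grover's search for a single
    marked item.  The oracle is a phase flip on the marked amplitude
    and the diffusion operator is a reflection about the mean, both
    O(2^n) vector operations."""
    N = 2 ** n_qubits
    if n_iters is None:
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))  # optimal count
    psi = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    for _ in range(n_iters):
        psi[marked] *= -1                   # oracle: phase flip
        psi = 2 * psi.mean() - psi          # diffusion: reflect about mean
    return np.abs(psi) ** 2                 # measurement probabilities

probs = grover(n_qubits=10, marked=123)
print(probs[123])                           # close to 1
```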
Defects induced by liquid-phase exfoliation of graphite using sonication were studied. It was shown that localized impacts by cavitation shock waves can produce bulk ripplocations and various types of dislocations in graphite nanoplatelets. Formation of ripples is more pronounced in platelets or nanobelts with a large aspect (length/width) ratio. Quasi-periodic ripple systems were observed in many nanobelts after sonication. A mechanism for the formation of ripples and dislocations during sonication is proposed. Surprisingly, fast high-temperature processing was found to anneal most of the defects. This is consistent with our observations that defects associated with ripplocations are strongly localized and thus can be annealed quickly.
condensed matter
We consider a situation where an $N$-level system (NLS) is coupled to a heat bath without being necessarily thermalized. For this situation we derive general Jarzynski-type equations and conclude that heat and entropy flow from the hot bath to the cold NLS and, vice versa, from the hot NLS to the cold bath. The Clausius relation between the increase of entropy and the transfer of heat divided by a suitable temperature assumes the form of two inequalities which have already been considered in the literature. Our approach is illustrated by an analytical example.
quantum physics
We propose an iterative scheme to factor numbers based on the quantum dynamics of an ensemble of interacting bosonic atoms stored in a trap where the single-particle energy spectrum depends logarithmically on the quantum number. When excited by a time-dependent interaction these atoms perform Rabi oscillations between the ground state and an energy state characteristic of the factors. The number to be factored is encoded into the frequency of the sinusoidally modulated interaction. We show that a measurement of the energy of the atoms at a time chosen at random yields the factors with probability one half. We conclude by discussing a protocol to obtain the desired prime factors employing a logarithmic energy spectrum which consists of prime numbers only.
quantum physics
Conditional image generation (CIG) is a widely studied problem in computer vision and machine learning. Given a class, CIG takes the name of this class as input and generates a set of images that belong to this class. In existing CIG works, for different classes, their corresponding images are generated independently, without considering the relationship among classes. In real-world applications, the classes are organized into a hierarchy and their hierarchical relationships are informative for generating high-fidelity images. In this paper, we aim to leverage the class hierarchy for conditional image generation. We propose two ways of incorporating class hierarchy: prior control and post constraint. In prior control, we first encode the class hierarchy, then feed it as a prior into the conditional generator to generate images. In post constraint, after the images are generated, we measure their consistency with the class hierarchy and use the consistency score to guide the training of the generator. Based on these two ideas, we propose a TreeGAN model which consists of three modules: (1) a class hierarchy encoder (CHE) which takes the hierarchical structure of classes and their textual names as inputs and learns an embedding for each class; the embedding captures the hierarchical relationship among classes; (2) a conditional image generator (CIG) which takes the CHE-generated embedding of a class as input and generates a set of images belonging to this class; (3) a consistency checker which performs hierarchical classification on the generated images and checks whether the generated images are compatible with the class hierarchy; the consistency score is used to guide the CIG to generate hierarchy-compatible images. Experiments on various datasets demonstrate the effectiveness of our method.
computer science
It is widely considered that the classical Higgs branch of 4d $\mathcal{N}=2$ SQCD is a well understood object. However, there is no satisfactory understanding of its structure. There are two complications: (1) the Higgs branch chiral ring contains nilpotent elements, as can easily be checked in the case of $\mathrm{SU}(N)$ with 1 flavour; (2) the Higgs branch as a geometric space can in general be decomposed into two cones with nontrivial intersection, the baryonic and mesonic branches. To study the second point in detail we use the recently developed tool of magnetic quivers for five-brane webs, using the fact that the classical Higgs branch for theories with 8 supercharges does not change through dimensional reduction. We compare this approach with the computation of the hyper-K\"ahler quotient using Hilbert series techniques, finding perfect agreement if nilpotent operators are eliminated by the computation of a so-called radical. We study the nature of the nilpotent operators and give conjectures for the Hilbert series of the full Higgs branch, giving new insights into the vacuum structure of 4d $\mathcal{N}=2$ SQCD. In addition we demonstrate the power of the magnetic quiver technique, as it allows us to identify the decomposition into cones, and provides us with the global symmetries of the theory, as a simple alternative to the techniques that have been used to date.
high energy physics theory
We consider the non-linear classical field theory which results from adding to the Maxwell's Lagrangian the contributions from the weak-field Euler-Heisenberg Lagrangian and a non-uniform part which involves derivatives of the electric and magnetic fields. We focus on the electrostatic case where the magnetic field is set to zero, and we derive the modified Gauss law, resulting in a higher order differential equation. This equation gives the electric field produced by stationary charges in the higher order non-linear electrodynamics. Specializing for the case of a point charge, we investigate the solutions of the modified Gauss law and calculate the correction to the Coulomb law.
high energy physics phenomenology
With the advent of gravitational wave astronomy, techniques to extend the reach of gravitational wave detectors are desired. In addition to the stellar-mass black hole and neutron star mergers already detected, many more are below the surface of the noise, available for detection if the noise is reduced enough. Our method (DeepClean) applies machine learning algorithms to gravitational wave detector data and data from on-site sensors monitoring the instrument to reduce the noise in the time series due to instrumental artifacts and environmental contamination. This framework is generic enough to subtract linear, non-linear, and non-stationary coupling mechanisms. It may also provide handles for learning about coupling mechanisms which are not currently understood to be limiting detector sensitivities. The robustness of the noise reduction technique, in its ability to efficiently remove noise with no unintended effects on gravitational-wave signals, is also addressed through software signal injection and parameter estimation of the recovered signal. It is shown that the optimal SNR of the injected signal is enhanced by $\sim 21.6\%$ and the recovered parameters are consistent with the injected set. We present the performance of this algorithm on linear and non-linear noise sources and discuss its impact on astrophysical searches by gravitational wave detectors.
astrophysics
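As a conceptual stand-in for DeepClean's neural networks, the sketch below performs least-squares subtraction of witness-channel couplings from a synthetic strain series; the witness channels, couplings, and data are invented for illustration, and a linear fit cannot capture the non-linear, non-stationary couplings that the actual method targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_clean(strain, witnesses):
    """Least-squares subtraction of witness-channel couplings from the
    strain time series.  DeepClean itself uses neural networks to model
    nonlinear and nonstationary couplings; this linear stand-in only
    illustrates the witness-based subtraction idea."""
    W = np.column_stack([witnesses.T, np.ones(strain.size)])
    coeffs, *_ = np.linalg.lstsq(W, strain, rcond=None)
    return strain - W @ coeffs

# toy data: a sinusoidal 'signal' plus witness-coupled noise (synthetic)
t = np.linspace(0, 1, 4096)
witnesses = rng.normal(size=(2, t.size))
strain = np.sin(2 * np.pi * 60 * t) + 0.5 * witnesses[0] - 0.3 * witnesses[1]
residual = linear_clean(strain, witnesses)   # noise largely removed
```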
One of the main reasons for the success of Evolutionary Algorithms (EAs) is their general-purposeness, i.e., the fact that they can be applied straightforwardly to a broad range of optimization problems, without any specific prior knowledge. On the other hand, it has been shown that incorporating a priori knowledge, such as expert knowledge or empirical findings, can significantly improve the performance of an EA. However, integrating knowledge in EAs poses numerous challenges. It is often the case that the features of the search space are unknown, hence any knowledge associated with the search space properties can hardly be used. In addition, a priori knowledge is typically problem-specific and hard to generalize. In this paper, we propose a framework, called Knowledge Integrated Evolutionary Algorithm (KIEA), which facilitates the integration of existing knowledge into EAs. Notably, the KIEA framework is EA-agnostic (i.e., it works with any evolutionary algorithm), problem-independent (i.e., it is not dedicated to a specific type of problem), and expandable (i.e., its knowledge base can grow over time). Furthermore, the framework integrates knowledge while the EA is running, thus optimizing the use of the needed computational power. In the preliminary experiments shown here, we observe that the KIEA framework produces, in the worst case, an 80% improvement in convergence time with respect to the corresponding "knowledge-free" EA counterpart.
computer science
The Coronavirus Disease 2019 (COVID-19) pandemic has caused a tremendous number of deaths and a devastating impact on economic development all over the world. Thus, it is paramount to control its further transmission, for which purpose it is necessary to find the mechanism of its transmission process and evaluate the effect of different control strategies. To deal with these issues, we describe the transmission of COVID-19 as an explosive Markov process with four parameters. The state transitions of the proposed Markov process can clearly disclose the terrible explosiveness and complex heterogeneity of COVID-19. Based on this, we further propose a simulation approach with heterogeneous infections. Experiments show that our approach can closely track the real transmission process of COVID-19, disclose its transmission mechanism, and forecast the transmission under different non-drug intervention strategies. More importantly, our approach can help develop effective strategies for controlling COVID-19 and appropriately compare their control effect across different countries/cities.
statistics
We study the late time dynamics of a single active Brownian particle in two dimensions with speed $v_0$ and rotation diffusion constant $D_R$. We show that at late times $t\gg D_R^{-1}$, while the position probability distribution $P(x,y,t)$ in the $x$-$y$ plane approaches a Gaussian form near its peak describing the typical diffusive fluctuations, it has non-Gaussian tails describing atypical rare fluctuations when $\sqrt{x^2+y^2}\sim v_0 t$. In this regime, the distribution admits a large deviation form, $P(x,y,t) \sim \exp\left[-t\, D_R\, \Phi\left(\sqrt{x^2+y^2}/(v_0 t)\right)\right]$, where we compute the rate function $\Phi(z)$ analytically and also numerically using an importance sampling method. We show that the rate function $\Phi(z)$, encoding the rare fluctuations, still carries the trace of activity even at late times. Another way of detecting activity at late times is to subject the active particle to an external harmonic potential. In this case we show that the stationary distribution $P_\text{stat}(x,y)$ depends explicitly on the activity parameter $D_R^{-1}$ and undergoes a crossover, as $D_R$ increases, from a ring shape in the strongly active limit ($D_R\to 0$) to a Gaussian shape in the strongly passive limit $(D_R\to \infty)$.
condensed matter
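The dynamics described above is easy to reproduce numerically; the sketch below integrates a 2D active Brownian particle with Euler-Maruyama steps (translational noise omitted for clarity), with parameter values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_abp(v0=1.0, D_R=1.0, dt=1e-3, n_steps=100_000):
    """Euler-Maruyama integration of a 2D active Brownian particle:
    constant speed v0 along a heading theta that undergoes rotational
    diffusion with constant D_R (translational noise omitted)."""
    theta = rng.normal(0.0, np.sqrt(2 * D_R * dt), size=n_steps).cumsum()
    x = v0 * dt * np.cos(theta).cumsum()
    y = v0 * dt * np.sin(theta).cumsum()
    return x, y

x, y = simulate_abp()
# at times t >> 1/D_R the typical displacement is diffusive,
# with effective diffusion constant D_eff = v0**2 / (2 * D_R)
```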
Researchers and managers model ecological communities to infer the biotic and abiotic variables that shape species' ranges, habitat use, and co-occurrence, which, in turn, are used to support management decisions and test ecological theories. Recently, species distribution models were developed for and applied to data from ecological communities. Model development and selection for ecological community data is difficult because a high level of complexity is desired and achieved by including numerous parameters, which can degrade predictive accuracy and be challenging to interpret and communicate. Like other statistical models, multi-species distribution models can be overparameterized. Regularization is a technique that optimizes predictive accuracy by shrinking or eliminating model parameters. For Bayesian models, the prior distribution automatically regularizes parameters. We propose a tree shrinkage prior for Bayesian multi-species distribution models that performs regularization and reduces the number of regression coefficients associated with predictor variables. Using this prior, the number of regression coefficients in multi-species distribution models is reduced by estimating unique regression coefficients for a smaller number of guilds rather than a larger number of species. We demonstrate our tree shrinkage prior using examples of presence-absence data for six species of aquatic vegetation and relative abundance data for 15 species of fish. Our results show that the tree shrinkage prior can increase the predictive accuracy of multi-species distribution models and enable researchers to infer the number and species composition of guilds from ecological community data.
statistics
Transfer learning is widely used for transferring knowledge from a source domain to a target domain where labeled data is scarce. Recently, deep transfer learning has achieved remarkable progress in various applications. However, the source and target datasets usually belong to two different organizations in many real-world scenarios, which poses potential privacy issues for deep transfer learning. In this study, to thoroughly analyze the potential privacy leakage in deep transfer learning, we first divide previous methods into three categories. Based on that, we demonstrate specific threats that lead to unintentional privacy leakage in each category. Additionally, we also provide some solutions to prevent these threats. To the best of our knowledge, our study is the first to provide a thorough analysis of the information leakage issues in deep transfer learning methods and to propose potential solutions to the issue. Extensive experiments on two public datasets and an industry dataset are conducted to show the privacy leakage under different deep transfer learning settings and the effectiveness of the defense solutions.
computer science
Accurate detection of signal components is a frequently encountered challenge in statistical applications with low signal-to-noise ratio. This problem is particularly challenging in settings with heteroscedastic noise. In certain signal-plus-noise models of data, such as the classical spiked covariance model and its variants, there are closed formulas for the spectral signal detection threshold (the largest sample eigenvalue attributable solely to noise) in the isotropic noise setting. However, existing methods for numerically evaluating the threshold for more general noise models remain unsatisfactory. In this work, we introduce a rapid algorithm for evaluating the spectral signal detection threshold. We consider noise matrices with a separable variance profile, as these arise often in applications. The solution is based on nested applications of Newton's method. We also devise a new algorithm for evaluating the Stieltjes transform of the spectral distribution at real values exceeding the threshold. The Stieltjes transform on this domain is known to be a key quantity in parameter estimation for spectral denoising methods. The correctness of both algorithms is proven from a detailed analysis of the master equations characterizing the Stieltjes transform, and their performance is demonstrated in numerical experiments.
statistics
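The paper's master equations for separable variance profiles are not reproduced here, so the sketch below illustrates the basic ingredient, Newton's method applied to a Stieltjes-transform self-consistent equation, on the isotropic (Marchenko-Pastur) special case evaluated at real z above the bulk edge.

```python
import numpy as np

def mp_stieltjes(z, gamma, sigma2=1.0, tol=1e-12, max_iter=100):
    """Newton's method for the Marchenko-Pastur self-consistent equation
    m * (sigma2*(1 - gamma - gamma*z*m) - z) = 1 at real z above the
    bulk edge.  This is the isotropic special case; the paper nests
    Newton solves to handle general separable variance profiles."""
    m = -1.0 / z                       # large-z asymptotics as initial guess
    for _ in range(max_iter):
        f = m * (sigma2 * (1 - gamma - gamma * z * m) - z) - 1.0
        fp = sigma2 * (1 - gamma) - 2 * sigma2 * gamma * z * m - z
        step = f / fp
        m -= step                      # Newton update
        if abs(step) < tol:
            break
    return m

edge = (1 + np.sqrt(0.5)) ** 2         # bulk edge for gamma = 0.5, sigma2 = 1
print(mp_stieltjes(z=4.0, gamma=0.5))  # ~ -0.3596, since z = 4 > edge
```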
Efficient production of high-charge electron bunches in laser wakefield acceleration using a mid-infrared (MIR) laser pulse is investigated by two-dimensional particle-in-cell simulations. A 2.5 TW (100 mJ, 40 fs) MIR laser pulse with a wavelength of 1.5 um alone can produce a quasi-monoenergetic electron (QME) bunch with a peak energy of 65 MeV in the blow-out regime. The simulation results are compared with those for the experiment showing the generation of 40-60 MeV QME bunches with more than 10^8 electrons using an 8 TW laser pulse with a wavelength of 0.8 um. The number of electrons in the QME bunch produced by the 1.5 um laser is 10 times higher than that produced by the 0.8 um laser, and is equivalent to 10^9. The conversion efficiency from the laser energy to the QME bunch energy is more than 10%. Laser wakefield acceleration by a TW-class MIR laser pulse opens the door to generating a high-charge QME bunch with high efficiency.
physics
A Universe with finite age also has a finite causal scale. Larger scales cannot affect our local measurements or modeling, but far away locations could have different cosmological parameters. The size of our causal Universe depends on the details of inflation and is usually assumed to be larger than our observable Universe today. To account for causality, we propose a new boundary condition, which can be fulfilled by fixing the cosmological constant (a free geometric parameter of gravity). This forces a cancellation of the vacuum energy with the cosmological constant. As a consequence, the measured cosmic acceleration cannot be explained by a simple cosmological constant or constant vacuum energy. We would need some additional odd properties, such as the existence of evolving dark energy (DE) with an energy density fine-tuned to be twice that of dark matter today. We show here that we can instead explain cosmic acceleration without DE (or modified gravity) by assuming that the causal scale is smaller than the observable Universe today. Such a scale corresponds to half the sky at z=1 and 60 degrees at z=1100, which is consistent with the anomalous lack of correlations observed in the CMB. Late-time cosmic acceleration could then be interpreted as the smoking gun of primordial inflation.
physics
In this paper we use the isogeometric method to solve the Helmholtz equation with a nonhomogeneous Dirichlet boundary condition over a bounded physical domain. Starting from the variational formulation of the problem, we show in detail how to apply the isogeometric approach to obtain an approximation of the solution using biquadratic B-spline functions. To illustrate the power of the method we solve several difficult problems, which are particular cases of the Helmholtz equation, where the solution has a discontinuous gradient at some points, or is highly oscillatory. For these problems we explain how to select the knots of quadratic B-spline functions and how to insert new knots in order to obtain good approximations of the exact solution on regions with irregular boundaries. The results, obtained with our Julia implementation of the method, show that the isogeometric approach produces approximations with a relatively small error and computational cost.
mathematics
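A minimal sketch of the Cox-de Boor recursion for the B-spline basis functions underlying the method is given below; the knot vector, with a repeated interior knot of the kind used near points of irregular behaviour, is an illustrative choice, and the sketch is in Python even though the paper's implementation is in Julia.

```python
import numpy as np

def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p on the given knot vector, evaluated at t.  Uses the
    half-open interval convention, so the right endpoint of the domain
    needs special-casing in production code.  Quadratic functions
    (p = 2) are the ones used in the paper's discretization."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, t))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, t))
    return left + right

# clamped knot vector with a repeated interior knot at 0.5, the kind of
# knot placement used near points where the gradient is irregular
knots = [0, 0, 0, 0.25, 0.5, 0.5, 0.75, 1, 1, 1]
vals = [bspline_basis(2, 2, knots, t) for t in np.linspace(0.0, 0.99, 12)]
```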
Massive black holes (BHs) in dwarf galaxies can provide strong constraints on BH seeds, however reliably detecting them is notoriously difficult. High resolution radio observations were recently used to identify accreting massive BHs in nearby dwarf galaxies, with a significant fraction found to be non-nuclear. Here we present the first results of our optical follow-up of these radio-selected active galactic nuclei (AGNs) in dwarf galaxies using integral field unit (IFU) data from Gemini-North. We focus on the dwarf galaxy J1220+3020, which shows no clear optical AGN signatures in its nuclear SDSS spectrum covering the radio source. With our new IFU data, we confirm the presence of an active BH via the AGN coronal line [Fe X] and enhanced [O I] emission coincident with the radio source. Furthermore, we detect broad H$\alpha$ emission and estimate a BH mass of $M_{\rm BH}=10^{4.9}M_\odot$. We compare the narrow emission line ratios to standard BPT diagnostics and shock models. Spatially-resolved BPT diagrams show some AGN signatures, particularly in [O I]/H$\alpha$, but overall do not unambiguously identify the AGN. A comparison of our data to shock models clearly indicates shocked emission surrounding the AGN. The physical model most consistent with the data is an active BH with a radiatively inefficient accretion flow (RIAF) that both photoionizes and shock-excites the surrounding gas. We conclude that feedback is important in radio-selected BHs in dwarf galaxies, and that radio surveys may probe a population of low accretion-rate BHs in dwarf galaxies that cannot be detected through optical surveys alone.
astrophysics
Custom voice, a specific text to speech (TTS) service in commercial speech platforms, aims to adapt a source TTS model to synthesize a personal voice for a target speaker using limited speech data. Custom voice presents two unique challenges for TTS adaptation: 1) to support diverse customers, the adaptation model needs to handle diverse acoustic conditions that could be very different from the source speech data, and 2) to support a large number of customers, the adaptation parameters need to be small enough for each target speaker to reduce memory usage while maintaining high voice quality. In this work, we propose AdaSpeech, an adaptive TTS system for high-quality and efficient customization of new voices. We design several techniques in AdaSpeech to address the two challenges in custom voice: 1) To handle different acoustic conditions, we use two acoustic encoders to extract an utterance-level vector and a sequence of phoneme-level vectors from the target speech during training; during inference, we extract the utterance-level vector from a reference speech and use an acoustic predictor to predict the phoneme-level vectors. 2) To better trade off the adaptation parameters and voice quality, we introduce conditional layer normalization in the mel-spectrogram decoder of AdaSpeech, and fine-tune this part in addition to the speaker embedding for adaptation. We pre-train the source TTS model on LibriTTS datasets and fine-tune it on VCTK and LJSpeech datasets (with different acoustic conditions from LibriTTS) with little adaptation data, e.g., 20 sentences, about 1 minute of speech. Experiment results show that AdaSpeech achieves much better adaptation quality than baseline methods, with only about 5K specific parameters for each speaker, which demonstrates its effectiveness for custom voice. Audio samples are available at https://speechresearch.github.io/adaspeech/.
electrical engineering and systems science
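The sketch below illustrates the conditional layer normalization idea in isolation: hidden states are normalized and then modulated by a scale and bias predicted from the speaker embedding by two small linear maps, which are the only pieces (besides the embedding itself) that need per-speaker fine-tuning. Shapes, initialization, and names are illustrative assumptions, not AdaSpeech's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_layer_norm(h, spk_emb, W_g, W_b, eps=1e-5):
    """Conditional layer normalization: normalize the hidden states h
    (frames x channels), then apply a scale and bias *predicted from
    the speaker embedding* by two small linear layers (W_g, W_b).
    Only these small matrices plus the embedding would be fine-tuned
    per speaker in an AdaSpeech-style setup."""
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    h_norm = (h - mu) / np.sqrt(var + eps)
    gamma = spk_emb @ W_g      # per-speaker scale, shape (channels,)
    beta = spk_emb @ W_b       # per-speaker bias, shape (channels,)
    return gamma * h_norm + beta

h = rng.normal(size=(100, 256))          # decoder hidden states (frames x dim)
spk = rng.normal(size=64)                # speaker embedding (illustrative)
W_g = rng.normal(size=(64, 256)) * 0.01  # illustrative initialization
W_b = rng.normal(size=(64, 256)) * 0.01
out = conditional_layer_norm(h, spk, W_g, W_b)
```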
I use a mean-field approximation to show the rapid collapse of heteropolymers from random walk size to a much smaller, molten globule state due to hydrophobic interactions, to be followed by a slower annealing process in which there is little change in overall size.
physics
This survey is devoted to Martin Hairer's Reconstruction Theorem, which is one of the cornerstones of his theory of Regularity Structures. Our aim is to give a new self-contained and elementary proof of this Theorem, together with some applications, including a characterization, based on a single arbitrary test function, of negative H\"older spaces. We present the Reconstruction Theorem as a general result in the theory of distributions that can be understood without any knowledge of Regularity Structures themselves, which we do not even need to define.
mathematics
In this paper we study the representations of loop affine-Virasoro algebras. As they have a canonical triangular decomposition, we define Verma modules and their irreducible quotients. We give a necessary and sufficient condition for an irreducible highest weight module to have finite dimensional weight spaces. We prove that an irreducible integrable module is either a highest weight module or a lowest weight module whenever the canonical central element acts non-trivially. At the end we construct affine central operators for each integer, and they commute with the action of the affine Lie algebra.
mathematics