A left-right symmetric mirror model that restores parity at a high scale, in such a way that the mirror fermions and the mirror gauge sector could simultaneously exist at the TeV scale, is discussed. We also provide an ultraviolet completion of the model with vector-like fermions, and discuss some theoretical and phenomenological implications of this model.
high energy physics phenomenology
Laser-based ranging (LiDAR) - already used ubiquitously in robotics, industrial monitoring, and geodesy - is a key sensor technology for future autonomous driving, and has been employed in nearly all successful implementations of autonomous vehicles to date. Coherent laser ranging allows long-range detection, operates eye-safe, is immune to crosstalk and yields simultaneous velocity and distance information. Yet for actual deployment in vehicles, video frame-rate requirements for object detection, classification and sensor fusion mandate megapixel-per-second measurement speeds. Such pixel rates are not attainable with current coherent single laser-detector architectures for high-definition range imaging, and make parallelization essential. A megapixel-class coherent LiDAR has not been demonstrated, and is still impeded by the arduous requirement of large banks of detectors and digitizers on the receiver side that would need to be integrated on chip. Here we report hardware-efficient coherent laser ranging at megapixel-per-second imaging rates. This is achieved using a novel concept for massively parallel coherent laser ranging that requires only a single laser and a single photoreceiver, yet achieves simultaneous recording of more than 64 channels, each with distance and velocity measurements - attaining an unprecedented 5 megapixel-per-second rate. Heterodyning two offset chirped soliton microcombs on a single coherent receiver yields an interferogram containing the distance and velocity information of all channels, thereby alleviating the need to individually separate, detect and digitize distinct channels. The reported LiDAR implementation is hardware-efficient, compatible with photonic integration, and demonstrates the significant acquisition-speed, complexity and cost advantages afforded by the convergence of optical telecommunication and metrology technologies.
physics
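The coherent chirped (FMCW-style) ranging described above recovers distance from the delay-induced beat frequency and velocity from the Doppler shift. A minimal numeric sketch of these standard relations, with assumed chirp slope, carrier wavelength and beat frequencies (illustrative values, not the paper's):

```python
# Standard FMCW relations: R = c*f_beat/(2*S), v = lambda*f_Doppler/2.
# All numbers below are assumptions for illustration.
c = 3.0e8               # speed of light, m/s
wavelength = 1.55e-6    # telecom-band carrier, m (assumed)
S = 1.0e15              # chirp slope, Hz/s (assumed: 10 GHz sweep in 10 us)

f_up, f_down = 3.34e8, 3.32e8   # beat frequencies on up/down chirp (assumed)
f_range = 0.5 * (f_up + f_down)     # delay-induced part of the beat
f_doppler = 0.5 * (f_up - f_down)   # motion-induced part

distance = c * f_range / (2 * S)
velocity = wavelength * f_doppler / 2
print(f"distance ~ {distance:.1f} m, radial velocity ~ {velocity:.2f} m/s")
```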
We investigate our ability to determine the mean position, or centroid, of a linear array of equally-bright incoherent point sources of light, whose continuum limit is the problem of estimating the center of a uniformly-radiating object. We consider two receivers: an image-plane ideal direct-detection imager and a receiver that employs Hermite-Gaussian (HG) Spatial-mode Demultiplexing (SPADE) in the image plane, prior to shot-noise-limited photon detection. We compare the Fisher Information (FI) for estimating the centroid achieved by these two receivers, which quantifies the information-accrual rate per photon, and compare those with the Quantum Fisher Information (QFI): the maximum attainable FI by any choice of measurement on the collected light allowed by physics. We find that focal-plane direct imaging is strictly sub-optimal, although not by a large margin. We also find that the HG mode sorter, which is the optimal measurement for estimating the separation between point sources (or the length of a line object) is not only suboptimal, but it performs worse than direct imaging. We study the scaling behavior of the QFI and direct imaging's FI for a continuous, uniformly-bright object in terms of its length, and find that both are inversely proportional to the object's length when it is sufficiently larger than the Rayleigh length. Finally, we propose a two-stage adaptive modal receiver design that attains the QFI for centroid estimation.
quantum physics
We prove some sharp systolic inequalities for compact $3$-manifolds with boundary. They relate the (relative) homological systoles of the manifold to its scalar curvature and mean curvature of the boundary. In the equality case, the universal cover of the manifold is isometric to a cylinder over a disk of nonnegative constant curvature.
mathematics
Spin squeezing can improve atomic precision measurements beyond the standard quantum limit (SQL), and unitary spin squeezing is essential for improving atomic clocks. We report substantial and nearly unitary spin squeezing in $^{171}$Yb, an optical lattice clock atom. The collective nuclear spin of $\sim 10^3$ atoms is squeezed by cavity feedback, using light detuned from the system's resonances to attain unitarity. The observed precision gain over the SQL is limited by state readout to 6.5(4) dB, while the generated states offer a gain of 12.9(6) dB, limited by the curvature of the Bloch sphere. Using a squeezed state within 30% of unitarity, we demonstrate an interferometer that improves the averaging time over the SQL by a factor of 3.7(2). In the future, the squeezing can be simply transferred onto the optical clock transition of $^{171}$Yb.
physics
In this paper, we introduce neural teleportation, a simple operation one can use to initialize the weights of a neural network and gain faster convergence. Neural teleportation is the consequence of applying isomorphisms of quiver representations to neural networks. This process "teleports" a network to a new position in the weight space while leaving its input-to-output function unchanged. The concept of neural teleportation generalizes to any neural network architecture, activation function and task. We run several experiments that validate our hypothesis: teleporting a network at initialization speeds up convergence. Finally, we discuss several mathematical and empirical findings concerning teleportation.
computer science
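A minimal numpy sketch of one such isomorphism for a ReLU network (a toy stand-in for the general quiver-representation construction, not the authors' code): rescaling a hidden unit's incoming weights and bias by c > 0 and its outgoing weights by 1/c moves the network in weight space but leaves its function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def f(x, W1, b1, W2, b2):
    # one-hidden-layer ReLU network
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

c = rng.uniform(0.1, 10.0, size=8)        # one positive scale per hidden unit
W1t, b1t = W1 * c[:, None], b1 * c        # "teleported" weights
W2t = W2 / c[None, :]

x = rng.normal(size=4)
print(np.allclose(f(x, W1, b1, W2, b2), f(x, W1t, b1t, W2t, b2)))  # True
```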
The extended object Poisson multi-Bernoulli mixture(PMBM) filter provides a closed-form solution for multiple extended object filtering with standard models. This paper considers computationally lighter alternatives to the extended object PMBM filter by propagating a Poisson multi-Bernoulli (PMB) density through the filtering recursion. A new local hypothesis representation is presented where each measurement creates a new Bernoulli component. This facilitates the developments of methods for efficiently approximating the PMBM posterior density after the update step as a PMB. Based on the new hypothesis representation, two approximation methods are presented: one is based on the track-oriented multi-Bernoulli (MB) approximation, and the other is based on the variational MB approximation via Kullback-Leibler divergence minimisation. The performance of the proposed PMB filters with gamma Gaussian inverse-Wishart implementations are evaluated in a simulation study.
electrical engineering and systems science
The origin of the extended soft X-ray emission around nearby highly inclined disk galaxies (often called the X-ray corona) remains uncertain. The emission could arise from volume-filling hot gas and/or its interaction with cool gas. Morphological properties of the X-ray emission can provide additional information to distinguish these different origins. We define model-independent parameters H50, H75, and H95 - the vertical scales that enclose 50%, 75%, and 95% of the total flux of the emission, respectively. We study the correlation of these parameters with galaxy properties inferred from infrared observations of a sample of nearby highly inclined disk galaxies with high-quality Chandra data. We find weak negative correlations between H50 or H75 and the surface star formation rate (ISFR), and no correlation for H95. However, we detect strong negative correlations of the vertical concentration of the emission, defined as H50/H95 or H75/H95, with ISFR. Our findings suggest that the X-ray emission around disk galaxies is likely comprised of two components: the extended, weak emission, characterized by H95, is influenced by the outflowing hot gas entrained in star formation driven winds, whereas the strong emission close to the disk, which is often rich in cool gas and is characterized by H50 or H75, is largely impacted by cool-hot gas interaction.
astrophysics
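The flux-enclosing scales defined above are straightforward to compute from a vertical emission profile; the sketch below uses an assumed exponential profile (hypothetical scale height, not the paper's data):

```python
import numpy as np

z = np.linspace(0.0, 10.0, 1000)   # height above the disk plane, kpc (assumed)
profile = np.exp(-z / 1.5)         # assumed exponential emission profile

cum = np.cumsum(profile)
cum /= cum[-1]                     # normalized enclosed flux

H50, H75, H95 = (z[np.searchsorted(cum, q)] for q in (0.50, 0.75, 0.95))
print(f"H50={H50:.2f}, H75={H75:.2f}, H95={H95:.2f} kpc")
print(f"vertical concentration H50/H95 = {H50 / H95:.2f}")
```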
The scheme of optical imaging using a scattering lens can provide a resolution beyond the classical optical diffraction limit with a coherent-state input. Nevertheless, due to the shot noise of the coherent state, the corresponding signal-to-noise ratio and resolution are both still shot-noise-limited. To circumvent this problem, we theoretically propose an alternative scheme in which a squeezed state (with sub-shot noise) is used as input, so that the quantum noise is suppressed below the shot-noise level. Consequently, compared with the previous imaging scheme (using a combination of a coherent state and a scattering lens), our proposal achieves an enhanced signal-to-noise ratio for a given scattering lens. Meanwhile, it is demonstrated that the resolution is also improved. We believe that this method may afford a new way of using squeezed states and enable higher performance than that of the coherent-state scheme with a scattering lens.
quantum physics
In this paper, we numerically investigate the propulsive performance of three-dimensional pitching flexible plates with varying flexibility and trailing edge shapes. To eliminate the effect of other geometric parameters, only the trailing edge angle is varied from 45{\deg} (concave), 90{\deg} (rectangular) to 135{\deg} (convex) while maintaining the constant area of the flexible plate. We examine the impact of the frequency ratio f* defined as the ratio of the natural frequency of the flexible plate to the actuated pitching frequency. Through our numerical simulations, we find that the global maximum mean thrust occurs near f*=1 corresponding to the resonance condition. However, the optimal propulsive efficiency is achieved around f*=1.54 instead of the resonance condition. While the convex plate with low and high bending stiffness values shows the best performance, the rectangular plate with moderate bending stiffness is the most efficient propulsion configuration. Through dynamic mode decomposition, we find that the passive deformation can help in redistributing the pressure gradient thus improving the efficiency and thrust production. A momentum-based thrust evaluation approach is adopted to link the instantaneous vortical structures with the time-dependent thrust. When the vortices detach from the trailing edge, the instantaneous thrust shows the largest values due to the strong momentum change and convection process. Moderate flexibility and convex shape help transfer momentum to the fluid, thereby improving thrust generation and promoting the transition from drag to thrust. The increase of the trailing edge angle can broaden the range of flexibility that produces positive mean thrust.
physics
Entangled resources enable quantum sensing that achieves Heisenberg scaling, a quadratic improvement on the standard quantum limit, but preparing large scale entangled states is challenging in the presence of decoherence. We present a quantum control strategy using highly nonlinear geometric phase gates for preparing entangled states on spin ensembles which can be used for practical precision metrology. The method uses a dispersive coupling of $N$ spins to a common bosonic mode and does not require addressability, special detunings, or interactions between the spins. Using a control sequence that executes Grover's algorithm on a subspace of permutationally symmetric states, a target entangled resource state can be prepared using $O(N^{5/4})$ geometric phase gates. The geometrically closed path of the control operations ensures the gates are insensitive to the initial state of the mode and the sequence has built-in dynamical decoupling providing resilience to dephasing errors.
quantum physics
A novel transmission scheme is introduced for efficient data transmission by conveying additional information bits through jointly changing the indices and number of active subcarriers within each orthogonal frequency division multiplexing (OFDM) subblock. The proposed scheme differs from the conventional OFDM-subcarrier number modulation (OFDM-SNM) and OFDM-index modulation (OFDM-IM), in which data bits are transmitted using either the number or the indices of active subcarriers. The proposed modulation technique offers superior spectral and energy efficiency compared to its counterparts OFDM-SNM and OFDM-IM, especially at low modulation orders such as binary phase shift keying (BPSK), making it suitable for Internet of Things (IoT) applications that require better spectral and energy efficiency while enjoying high reliability and low complexity. A bit error rate (BER) performance analysis is provided for the proposed scheme, and Monte Carlo simulations are presented to confirm the consistency of the simulated BER with the analytical one.
electrical engineering and systems science
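A back-of-the-envelope count shows why jointly varying the number and the indices of active subcarriers conveys more index bits per subblock than fixing either one; the subblock size and mapping below are illustrative assumptions, not the paper's exact scheme:

```python
from math import comb, floor, log2

N = 8          # subcarriers per subblock (assumed)
k_fixed = 2    # active subcarriers in the fixed-number (OFDM-IM) baseline

patterns_im = comb(N, k_fixed)                              # indices only
patterns_joint = sum(comb(N, k) for k in range(1, N + 1))   # number + indices

print(f"OFDM-IM index bits per subblock:      {floor(log2(patterns_im))}")     # 4
print(f"joint number+index bits per subblock: {floor(log2(patterns_joint))}")  # 7
# Each active subcarrier additionally carries log2(M) bits of M-ary data,
# e.g. 1 bit per active subcarrier for BPSK.
```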
The modeling of phenomenological structure is a crucial aspect in inverse imaging problems. One emerging modeling tool in computational imaging is the optimal transport framework. Its ability to model geometric displacements across an image's support gives it attractive qualities similar to those of optical flow methods which are effective at capturing visual motion, but are restricted to operate in significantly smaller state-spaces. Despite this advantage, two major drawbacks make it unsuitable for general deployment: (i) it suffers from exorbitant computational costs due to a quadratic optimization-variable complexity, and (ii) it has a mass-balancing assumption that limits applications with natural images. We tackle these issues simultaneously by proposing a novel formulation for an unbalanced optimal transport regularizer that has linear optimization-variable complexity. In addition, we present a general parallelizable proximal method for this regularizer, and demonstrate superior empirical performance on novel dynamical tracking applications in synthetic and real video.
electrical engineering and systems science
We formulate a new class of tensor gauge field theories in any dimension that is a hybrid class between symmetric higher-rank tensor gauge theory (i.e., higher-spin gauge theory) and anti-symmetric tensor topological field theory. Our theory describes a mixed unitary phase interplaying between gapless and gapped topological order phases (which can live with or without Euclidean, Poincar\'e or anisotropic symmetry, at least in ultraviolet high or intermediate energy field theory, but not yet to a lattice cutoff scale). The "gauge structure" can be compact, continuous, abelian or non-abelian. Our theory sits outside the paradigm of Maxwell electromagnetic theory in 1865 and Yang-Mills isospin/color theory in 1954. We discuss its local gauge transformation in terms of the ungauged vector-like or tensor-like higher-moment global symmetry. The non-abelian gauge structure is caused by gauging the non-commutative symmetries: a higher-moment symmetry and a charge conjugation (particle-hole) symmetry. Vector global symmetries along time direction may exhibit time crystals. We explore the relation of these long-range entangled matters to a non-abelian generalization of Fracton order in condensed matter, a field theory formulation of foliation, the spacetime embedding and Embeddon that we newly introduce, and possible fundamental physics applications to dark matter or dark energy.
high energy physics theory
The LHCb Collaboration has recently reported the discovery of direct CP violation in combined $D^0 \to K^+ K^-$ and $D^0 \to \pi^+ \pi^-$ decay modes at the $5.3 \sigma$ level. Assuming U-spin symmetry (i.e., $d \leftrightarrow s$ interchange symmetry) for the strong-interaction parts of these two channels, we find that their corresponding direct CP-violating asymmetries are ${\cal A}^{}_{\rm CP} (K^+ K^-) \simeq \left(-7.7 \pm 1.5\right) \times 10^{-4}$ and ${\cal A}^{}_{\rm CP} (\pi^+ \pi^-) \simeq \left(7.7 \pm 1.5\right) \times 10^{-4}$. The CP-forbidden transition $e^+ e^- \to D^0\bar{D}^0 \to \left(K^+ K^-\right)^{}_D \left(\pi^+ \pi^-\right)^{}_D$ on the $\psi(3770)$ resonance is therefore expected to have a rate of ${\cal O}(10^{-10})$ or smaller under U-spin symmetry, and it can be observed at a high-luminosity super-$\tau$-charm factory if at least $10^{10}$ pairs of coherent $D^0$ and $\bar{D}^0$ events are accumulated.
high energy physics phenomenology
Optically active solid-state spin registers have demonstrated their unique potential in quantum computing, communication and sensing. Realizing scalability and increasing application complexity requires entangling multiple individual systems, e.g. via photon interference in an optical network. However, most solid-state emitters show relatively broad spectral distributions, which hinders optical interference experiments. Here, we demonstrate that silicon vacancy centres in semiconductor silicon carbide (SiC) provide a remarkably small natural distribution of their optical absorption/emission lines despite an elevated defect concentration of $\approx 0.43\,\rm \mu m^{-3}$. In particular, without any external tuning mechanism, we show that only 13 defects have to be investigated until at least two optical lines overlap within the lifetime-limited linewidth. Moreover, we identify emitters with overlapping emission profiles within diffraction limited excitation spots, for which we introduce simplified schemes for generation of computationally-relevant Greenberger-Horne-Zeilinger (GHZ) and cluster states. Our results underline the potential of the CMOS-compatible SiC platform toward realizing networked quantum technology applications.
quantum physics
This paper presents a multi-agent control architecture and an online optimization method based on dynamic average consensus to coordinate the power consumption of a large population of Thermostatically Controlled Loads (TCLs). Our objective is to penalize peaks of power demand, smooth the load profile and enable Demand Side Management (DSM). The proposed architecture and methods exploit only local measurements of power consumption via Smart Power Sockets (SPSs) with no access to their internal temperature. No centralized aggregator of information is exploited and agents preserve their privacy by cooperating anonymously only through consensus-based distributed estimation, robust to node/link failure. The interactions among devices are designed to occur through an unstructured peer-to-peer (P2P) network over the internet. The architecture includes novel methods for parameter identification, state estimation and mixed logical modelling of TCLs and SPSs. It is designed from a multi-agent and plug-and-play perspective in which existing household appliances can interact with each other in an urban environment. Finally, a novel low cost testbed is proposed along with numerical tests and an experimental validation.
electrical engineering and systems science
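The consensus mechanism at the core of such architectures can be sketched in a few lines: with doubly stochastic mixing weights on a peer-to-peer graph, each agent's repeated local averaging converges to the network-wide mean without any central aggregator. The ring topology and power readings below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
power = rng.uniform(50, 500, size=n)   # local SPS power readings, W (assumed)

# Ring topology with Metropolis weights: symmetric and doubly stochastic,
# hence average-preserving.
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1 / 3
    W[i, i] = 1 - W[i].sum()

x = power.copy()
for _ in range(200):
    x = W @ x                          # one consensus iteration per agent

print(np.allclose(x, power.mean()))    # True: all agents agree on the mean
```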
The Riemannian barycentre is one of the most widely used statistical descriptors for probability distributions on Riemannian manifolds. At present, existing algorithms are able to compute the Riemannian barycentre of a probability distribution, only if i.i.d. samples of this distribution are readily available. However, there are many cases where i.i.d. samples are quite difficult to obtain, and have to be replaced with non-independent samples, generated by a Markov chain Monte Carlo method. To overcome this difficulty, the present paper proposes a new Markov chain Monte Carlo algorithm for computing the Riemannian barycentre of a probability distribution on a Hadamard manifold (a simply connected, complete Riemannian manifold with non-positive curvature). This algorithm relies on two original propositions, proved in the paper. The first proposition states that the recursive barycentre of samples generated from a geometrically ergodic Markov chain converges in the mean-square to the Riemannian barycentre of the stationary distribution of this chain. The second proposition provides verifiable conditions which ensure a Metropolis-Hastings Markov chain, with its values in a symmetric Hadamard manifold, is geometrically ergodic. This latter result yields a partial solution, in the context of Riemannian manifolds, to the problem of geometric ergodicity of Metropolis-Hastings chains, which has previously attracted extensive attention when considered in Euclidean space. In addition to these two propositions, the new Markov chain Monte Carlo algorithm, proposed in this paper, is applied to a problem of Bayesian inference, arising from computer vision.
mathematics
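The recursive barycentre of the first proposition can be made concrete on a symmetric Hadamard manifold such as the SPD matrices with the affine-invariant metric: each update moves a fraction 1/(k+1) along the geodesic toward the newest sample. A hedged sketch (with i.i.d. samples for brevity, rather than a Markov chain):

```python
import numpy as np

def spd_power(a, t):
    """Matrix power a**t of a symmetric positive-definite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * w**t) @ v.T

def geodesic_point(x, y, t):
    """Point at parameter t on the affine-invariant geodesic from x to y."""
    xh, xhi = spd_power(x, 0.5), spd_power(x, -0.5)
    return xh @ spd_power(xhi @ y @ xhi, t) @ xh

rng = np.random.default_rng(2)
def sample_spd(d=3):
    a = rng.normal(size=(d, d))
    return a @ a.T + np.eye(d)

x = sample_spd()
for k in range(1, 500):
    x = geodesic_point(x, sample_spd(), 1.0 / (k + 1))  # recursive barycentre
print(np.round(x, 2))
```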
Studies of excess infrared radiation around white dwarfs provide important constraints on the evolution of planetary systems and low-mass stars beyond the main sequence stage. In this paper series, we focus on identifying and characterizing bright white dwarfs with an infrared excess. Here, we present 188 infrared excess candidates from Gaia and unWISE, 147 of which are new discoveries. Further characterization of this sample can significantly increase the current list of white dwarf debris disks and white dwarfs with low-mass companions.
astrophysics
We consider a model with vector-like fermions which also provide multi-charged particles. We search for allowed parameter region explaining muon anomalous magnetic moment (muon $g-2$) and $b \to s \ell^+ \ell^-$ anomalies, satisfying constraints from the lepton flavor violations and $Z$ boson decays, and discuss collider physics, in a framework of multi-charged particles. While carrying out numerical analysis, we explore the typical size of the muon $g-2$ and Wilson coefficients to explain $b \to s \ell^+ \ell^-$ anomalies when all other experimental constraints are satisfied. Furthermore, we discuss a possible extension of our model introducing $U(1)_{\mu - \tau}$ gauge symmetry and investigate possible collider signatures at LHC.
high energy physics phenomenology
There is an increased emphasis on visualizing neuroimaging results in more intuitive ways. Common statistical tools for dissemination, such as bar charts, lack the spatial dimension that is inherent in neuroimaging data. Here we present two packages for the statistical software R, ggseg and ggseg3d, that integrate this spatial component. The ggseg and ggseg3d packages visualize pre-defined brain segmentations as 2D polygons and 3D meshes, respectively. Both packages are integrated with other well-established R packages, allowing great flexibility. In this tutorial, we present the main data and functions in the ggseg and ggseg3d packages for brain atlas visualization. The main highlighted functions can display brain segmentation plots in R. Further, the accompanying ggsegExtra package includes a wider collection of atlases, and is intended for community-based efforts to develop more atlases compatible with ggseg and ggseg3d. Overall, the ggseg packages facilitate parcellation-based visualizations in R, improve and ease the dissemination of results, and increase the efficiency of workflows.
statistics
Model quantization helps to reduce the model size and latency of deep neural networks. Mixed precision quantization is favorable with customized hardware supporting arithmetic operations at multiple bit-widths to achieve maximum efficiency. We propose a novel learning-based algorithm to derive mixed precision models end-to-end under target computation constraints and model sizes. During the optimization, the bit-width of each layer / kernel in the model takes a fractional value between two consecutive bit-widths, which can be adjusted gradually. With a differentiable regularization term, the resource constraints can be met during quantization-aware training, resulting in an optimized mixed precision model. Further, our method can be naturally combined with channel pruning for better computation cost allocation. Our final models achieve comparable or better performance than previous mixed precision quantization methods on MobilenetV1/V2 and ResNet18 under different resource constraints on the ImageNet dataset.
computer science
We classify all possible texture zeros in the light neutrino mass matrix in the diagonal charged lepton basis, considering the Dirac nature of light neutrinos. For a Hermitian neutrino mass matrix, the number of possible texture zeros remains the same as for Majorana texture zeros, but with fewer free parameters due to the absence of additional Majorana CP phases. Relaxing the Hermitian nature of the neutrino mass matrix leads to many more possibilities and freedom, due to additional CP phases and mixing angles not constrained by neutrino data. While we find that none of the texture zeros in the Hermitian case are allowed, in the non-Hermitian case some textures even with four zeros are allowed. While most of the one-zero, two-zero and three-zero textures in this case are allowed, four-zero textures are tightly constrained, with only 6 allowed out of 126 possibilities. The allowed textures also give interesting correlations between light neutrino parameters, with sharp distinctions between normal and inverted mass ordering. Many of these allowed textures also saturate the Planck 2018 upper bound on the sum of absolute neutrino masses.
high energy physics phenomenology
In this work, we study the extended viscous dark energy models in the context of matter perturbations. To do this, we assume an alternative interpretation of the flat Friedmann-Lema\^itre-Robertson-Walker Universe, through the nonadditive entropy and the viscous dark energy. We implement the relativistic equations to obtain the growth of matter fluctuations for a smooth version of dark energy. As a result, we show that the matter density contrast evolves similarly to the $\Lambda$CDM model at high redshift; at late times, it is slightly different from the standard model. Using the latest geometrical and growth rate observational data, we carry out a Bayesian analysis to constrain parameters and compare models. We see that our viscous models are compatible with cosmological probes, and the $\Lambda$CDM model is recovered within a $1\sigma$ confidence level. The viscous dark energy models relieve the $H_0$ tension at the $2-3\sigma$ level. Moreover, some models can also alleviate the $\sigma_8$ tension. In the model selection framework, the data disfavor the extended viscous dark energy models.
astrophysics
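The baseline these viscous models are compared against is the standard $\Lambda$CDM linear growth of matter fluctuations; a minimal integration sketch of that baseline (assumed density parameters, and without the viscous modification itself) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# LambdaCDM linear growth factor D(a):
#   D'' + (3/a + E'/E) D' - 1.5*Om/(a^5 E^2) D = 0,  E^2 = Om/a^3 + OL
Om, OL = 0.3, 0.7   # assumed density parameters

def E2(a):
    return Om / a**3 + OL

def rhs(a, y):
    D, Dp = y
    dlnE_da = -1.5 * Om / (a**4 * E2(a))   # d ln E / da
    return [Dp, -(3.0 / a + dlnE_da) * Dp + 1.5 * Om * D / (a**5 * E2(a))]

a = np.linspace(1e-3, 1.0, 500)
sol = solve_ivp(rhs, (a[0], a[-1]), [a[0], 1.0], t_eval=a, rtol=1e-8)
D = sol.y[0] / sol.y[0, -1]               # normalize so that D(a=1) = 1
print(f"D(a=0.5) = {D[np.searchsorted(a, 0.5)]:.3f}")
```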
Resonant elastic X-ray scattering has been widely employed for exploring complex electronic ordering phenomena, such as charge, spin, and orbital order, in particular in strongly correlated electronic systems. In addition, recent developments in pump-probe X-ray scattering allow us to expand the investigation to the temporal dynamics of such orders. Here, we introduce a new time-resolved Resonant Soft X-ray Scattering (tr-RSXS) endstation developed at the Pohang Accelerator Laboratory X-ray Free Electron Laser (PAL-XFEL). This endstation has an optical laser (wavelength of 800 nm plus harmonics) as the pump source. Based on the commissioning results, the tr-RSXS at PAL-XFEL can deliver a soft X-ray probe (400-1300 eV) with a time resolution of ~100 fs without jitter correction. As an example, the temporal dynamics of a charge density wave in a high-temperature cuprate superconductor are demonstrated.
physics
We study a Dark Matter (DM) model in which the dominant coupling to the standard model occurs through a neutrino-DM-scalar coupling. The new singlet scalar will generically have couplings to nuclei/electrons arising from renormalizable Higgs portal interactions. As a result, the DM particle $X$ can convert into a neutrino via scattering on a target nucleus $\mathcal{N}$: $ X + \mathcal{N} \rightarrow \nu + \mathcal{N}$, leading to striking signatures at direct detection experiments. Similarly, DM can be produced in neutrino scattering events at neutrino experiments: $ \nu + \mathcal{N} \rightarrow X + \mathcal{N}$, predicting spectral distortions at experiments such as COHERENT. Furthermore, the model allows for late kinetic decoupling of dark matter, with implications for small-scale structure. At low masses, we find that COHERENT and late kinetic decoupling produce the strongest constraints on the model, while at high masses the leading constraints come from DM down-scattering at XENON1T and Borexino. Future improvement will come from CE$\nu$NS data, ultra-low threshold direct detection, and rare kaon decays.
high energy physics phenomenology
In light of the importance of gravitational waves in astronomy, it is essential to perform \textit{data analysis} independent of the approach used by the LIGO team. To address this, we develop a general data-driven, template-free {\it noise suppression method} for extraction of the event waveform. Using the developed method, we obtain waveforms of all events reported by LIGO. In addition, using the instantaneous frequencies (derived via the Hilbert transform) of the extracted waveforms, we provide the time delays between the arrivals of the gravitational waves at the detectors.
astrophysics
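The instantaneous-frequency step is standard; a minimal sketch using the analytic signal from the Hilbert transform, on a toy chirp standing in for an extracted event waveform:

```python
import numpy as np
from scipy.signal import hilbert

fs = 4096.0                                           # sampling rate, Hz (typical)
t = np.arange(0, 1.0, 1 / fs)
waveform = np.sin(2 * np.pi * (50 * t + 40 * t**2))   # toy chirp: 50 -> 130 Hz

analytic = hilbert(waveform)                          # analytic signal
phase = np.unwrap(np.angle(analytic))
inst_freq = np.gradient(phase, 1 / fs) / (2 * np.pi)

# Sample away from the edges, where the Hilbert transform has artefacts.
print(f"f(start) ~ {inst_freq[100]:.0f} Hz, f(end) ~ {inst_freq[-100]:.0f} Hz")
```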
Motivated by the recent galactic center gamma-ray excess identified in the Fermi-LAT data, we perform a detailed study of QCD fragmentation uncertainties in the modeling of the energy spectra of gamma-rays from Dark-Matter (DM) annihilation. When Dark-Matter particles annihilate to coloured final states, either directly or via decays such as $W^{(*)}\to q\bar{q}'$, photons are produced from a complex sequence of shower, hadronisation and hadron decays. In phenomenological studies, their energy spectra are typically computed using Monte Carlo event generators. These results, however, have intrinsic uncertainties due to the specific model used and the choice of model parameters, which are difficult to assess and are typically neglected. We derive a new set of hadronisation parameters (tunes) for the \textsc{Pythia~8.2} Monte Carlo generator from a fit to LEP and SLD data at the $Z$ peak. For the first time, we also derive a conservative set of uncertainties on the shower and hadronisation model parameters. Their impact on the gamma-ray energy spectra is evaluated and discussed for a range of DM masses and annihilation channels. The spectra and their uncertainties are also provided in tabulated form for future use. The fragmentation-parameter uncertainties may be useful for collider studies as well.
high energy physics phenomenology
In this article, first we address the regularity of weak solution for a class of $p$-fractional Choquard equations: \begin{equation*} \;\;\; \left.\begin{array}{rl} (-\Delta)_p^su&=\left(\displaystyle\int_\Omega\frac{F(y,u)}{|x-y|^{\mu}}dy\right)f(x,u),\hspace{5mm}x\in \Omega,\\ u&=0,\hspace{35mm}x\in \mathbb R^N\setminus \Omega, \end{array} \right\} \end{equation*} where $\Omega\subset\mathbb R^N$ is a smooth bounded domain, $1<p<\infty$ and $0<s<1$ such that $sp<N,$ $0<\mu<\min\{N,2sp\}$ and $f:\Omega\times\mathbb R\to\mathbb R$ is a continuous function with at most critical growth condition (in the sense of Hardy-Littlewood-Sobolev inequality) and $F$ is its primitive. Next, for $p\geq2,$ we discuss the Sobolev versus H\"{o}lder minimizers of the energy functional $J$ associated to the above problem, and using that we establish the existence of the local minimizer of $J$ in the fractional Sobolev space $W_0^{s,p}(\Omega).$ Moreover, we discuss the aforementioned results by adding a local perturbation term (at most critical in the sense of Sobolev inequality) in the right-hand side in the above equation.
mathematics
Holography is a cornerstone characterisation and imaging technique that can be applied across the full electromagnetic spectrum, from X-rays to radio waves, or even to particles such as neutrons. The key property in all these holographic approaches is coherence, which is required to extract the phase information through interference with a reference beam - without it, holography is not possible. Here we introduce a holographic imaging approach that operates on intrinsically incoherent and unpolarised beams, so that no phase information can be extracted from a classical interference measurement. Instead, the holographic information is encoded in the second-order coherence of entangled states of light. Using spatial-polarisation hyper-entangled photon pairs, we remotely reconstruct phase images of complex objects. Information is encoded in the polarisation degree of the entangled state, allowing us to image through dynamic phase disorder and even in the presence of strong classical noise, with enhanced spatial resolution compared to classical coherent holographic systems. Beyond imaging, quantum holography quantifies hyper-entanglement distributed over 10^4 modes via a spatially resolved Clauser-Horne-Shimony-Holt inequality measurement, with applications in quantum state characterisation.
quantum physics
In the framework of perturbative QCD, the radiative decays $J/\psi\rightarrow\gamma\eta^{(\prime)}$ are revisited in detail, where the involved one-loop integrals are evaluated analytically with the light quark masses kept. We find that the sum of the loop integrals is insensitive to the light quark masses and that the branching ratios $\mathcal{B}(J/\psi\rightarrow\gamma\eta^{(\prime)})$ barely depend on the shapes of the $\eta^{(\prime)}$ distribution amplitudes. With the parameters of $\eta-\eta^{\prime}$ mixing extracted from low energy processes and $J/\psi\rightarrow\gamma\eta^{(\prime)}$ by means of the nonperturbative matrix elements $\langle0|G_{\mu\nu}^a\tilde{G}^{a,\mu\nu}|\eta^{(\prime)}\rangle$, based on the $U_{A}(1)$ anomaly dominance argument, we could not reproduce the ratio $R_{J/\psi}$ in agreement with the experimental result. However, using the parameters, especially the mixing angle $\phi=33.5^{\circ}\pm0.9^{\circ}$, extracted from the $\gamma^{\ast}\gamma-\eta^{\prime}$ transition form factor measured at $q^{2}=112~\mathrm{GeV}^{2}$ by the BaBar collaboration, we obtain $R_{J/\psi}=4.70$, in good agreement with $R_{J/\psi}^{exp}=4.65\pm0.21$. As a cross-check, with $\Gamma^{exp}(\eta^{(\prime)}\rightarrow\gamma\gamma)$ and our results for $J/\psi\rightarrow\gamma\eta^{(\prime)}$, we get $\phi=33.9^{\circ}\pm0.6^{\circ}$. The difference between the determinations of $\phi$ is briefly discussed.
high energy physics phenomenology
The mass spectra of singly charmed and bottom baryons, $\Lambda_{c/b}(1/2^\pm,3/2^-)$ and $\Xi_{c/b}(1/2^\pm,3/2^-)$, are investigated using a nonrelativistic potential model with a heavy quark and a light diquark. The masses of the scalar and pseudoscalar diquarks are taken from a chiral effective theory. The effect of $U_A(1)$ anomaly induces an inverse hierarchy between the masses of strange and non-strange pseudoscalar diquarks, which leads to a similar inverse mass ordering in $\rho$-mode excitations of singly heavy baryons.
high energy physics phenomenology
Artificial Intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and Intelligence Community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action. Developing an end-to-end artificial intelligence system involves parallel development of different pieces that must work together in order to provide capabilities that can be used by decision makers, warfighters and analysts. These pieces include data collection, data conditioning, algorithms, computing, robust artificial intelligence, and human-machine teaming. While much of the popular press today surrounds advances in algorithms and computing, most modern AI systems leverage advances across numerous different fields. Further, while certain components may not be as visible to end-users as others, our experience has shown that each of these interrelated components play a major role in the success or failure of an AI system. This article is meant to highlight many of these technologies that are involved in an end-to-end AI system. The goal of this article is to provide readers with an overview of terminology, technical details and recent highlights from academia, industry and government. Where possible, we indicate relevant resources that can be used for further reading and understanding.
computer science
In this work we examine kink-antikink collisions in two distinct hyperbolic models. The models depend on a deformation parameter which controls two main characteristics of the potential with two degenerate minima: the height of the barrier and the values of the minima. In particular, the rest mass of the kinks decreases monotonically as the deformation parameter increases, and we identify a gradual suppression of two-bounce windows in the kink scattering and the production of long-lived oscillons. Both effects are reported in connection with the presence of more than one vibrational state in the stability potential.
high energy physics theory
At the LHC, a TeV-scale leptoquark (LQ) that decays dominantly to a top quark ($t$) and a light charged lepton ($\ell=e,\mu$) would form a resonance system of \emph{boosted-$t$ $+$ high-$p_{\rm T}$-$\ell$}. We consider all possible vector LQ models within the Buchm\"{u}ller-R\"{u}ckl-Wyler classifications with the desired decay. We propose simple phenomenological Lagrangians that are suitable for bottom-up/experimental studies and, at the same time, can cover the relevant parameter spaces of these models. In this simplified framework, we study the pair and single production channels of vector LQs at the LHC. Interestingly, we find that, like the pair production, the cross sections of some single production processes also depend on the parameter $\kappa$ that appears in the gluon-vector LQ coupling. We adopt a search strategy of selecting events with at least one boosted hadronic top quark and exactly two high-$p_{\rm T}$ leptons of the same flavor and opposite sign. This combines events from the pair and single production processes and, therefore, can enhance the discovery potential beyond that of the pair-production-only searches. For $5\sigma$ discovery we find that vector LQs can be probed up to $2.55$ TeV for $100\%$ branching ratio in the $t\ell $ decay mode and $\mathcal{O}(1)$ new couplings at the $14$ TeV LHC with $3$ ab$^{-1}$ of integrated luminosity.
high energy physics phenomenology
We explain how to translate several recent results in derived algebraic geometry to derived differential geometry. These concern shifted Poisson structures on NQ-manifolds, Lie groupoids, smooth stacks and derived generalisations, and include existence and classification of various deformation quantisations.
mathematics
In coastal areas, raw wastewater can be discharged directly into the sea during extreme rain events or failures of the sewage network. The highly urbanized and touristic area of Nice is directly exposed to this risk. In order to protect the bathing population and to comply with European regulation (DCE, 2000), the Nice municipality has decided to develop a forecasting tool for the transport and fate of fecal bacterial indicators such as Escherichia coli. This paper deals with the first step of this approach, which consists in developing a three-dimensional hydrodynamic simulation coupled with water quality parameters. The model is based on the Reynolds-Averaged Navier-Stokes equations for the calculation of the velocities and water depth (TELEMAC3D), and on the wave energy balance equation to account for the wave-induced current (TOMAWAC). Two field measurement campaigns were conducted on a real wastewater discharge from a sewage pipe located at 38 m depth. The first campaign validated the vertical diffusion of the effluent through the water column by means of variable-depth samples and stressed the hourly variability of the sewage bacterial quality. Due to the lack of data regarding this last point, the model could not be fully validated. The second campaign was designed to overcome this issue and to gather all the data needed to validate our hydro-ecological simulation. The comparison of the field measurements and the simulations successfully demonstrated the accuracy of the model. However, the uncertainties associated with the laboratory analysis (MPN) are high, making the calibration and validation processes a complex task. Nevertheless, further research is needed on the variability of microbial quality in order to develop an operational tool for forecasting bacterial pollution. The next step of this study will be to establish the hourly variation of microbial quality in the wastewater during dry and rainy periods.
physics
We provide a linearised superfield description of the exotic non-metric $N=(4,0)$ supergravity in $D=6$, by using a pure spinor superfield formalism. The basic field $\Psi$ is a ghost number 2 scalar, transforming in the same R-symmetry module as the tensor fields. Partial results for the $N=(3,1)$ model are presented.
high energy physics theory
We employ Gaussian process (GP) regression to adjust for systematic errors in D3-type dispersion corrections, introducing the associated, statistically improved model D3-GP. We generated a data set containing interaction energies for 1,248 molecular dimers, which resemble the dispersion-dominated systems contained in the S66 data set. Our systems represent not only equilibrium structures, but also dimers with various relative orientations and conformations at both shorter and longer distances. To train our D3-GP model, we engineered two different vectorial representations of (supra-)molecular systems, both derived from the matrix of atom-pairwise D3(BJ) interaction terms: (a) a distance-resolved interaction energy histogram and (b) the eigenvalues of the interaction matrix ordered by decreasing absolute value. We harness the prediction variance obtained from GP regression to select optimal training sets in an automated fashion. The larger the variance, the more information the corresponding data point may add to the training set. For a given set of molecular systems, variance-based sampling can approximately determine the smallest subset to be subjected to reference calculations such that all dispersion corrections for the remaining systems fall below a predefined accuracy threshold. Our refined learning algorithm selects multiple (instead of single) systems which can be subjected to reference calculations simultaneously. We refer to the underlying selection strategy as batch-wise variance-based sampling (BVS). BVS-guided active learning is an essential component of our D3-GP workflow, which is implemented in a black-box fashion. Overall, this approach leads to a self-improving model (D3-GP) that predicts system-focused and GP-refined D3-type dispersion corrections for any given system of reference data.
physics
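The batch-wise variance-based sampling loop can be sketched with an off-the-shelf GP; the 1-D toy data, batch size and accuracy threshold below are assumptions standing in for the molecular descriptors and reference calculations:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
X_pool = np.sort(rng.uniform(-5, 5, size=(200, 1)), axis=0)
y_pool = np.sin(X_pool).ravel() + 0.05 * rng.normal(size=200)  # assumed target

train = list(rng.choice(200, size=5, replace=False))   # small seed set
batch_size, threshold = 4, 0.05

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-4)
    gp.fit(X_pool[train], y_pool[train])
    _, std = gp.predict(X_pool, return_std=True)
    std[train] = 0.0                       # never re-select training points
    if std.max() < threshold:              # all predictions accurate enough
        break
    train.extend(np.argsort(std)[-batch_size:])   # most informative batch

print(f"reference calculations needed: {len(train)} of {len(X_pool)}")
```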
Following the theory of information measures based on the cumulative distribution function, we propose the fractional generalized cumulative entropy and its dynamic version. These entropies are particularly suitable for dealing with distributions satisfying the proportional reversed hazard model. We study the connection with fractional integrals, and some bounds and comparisons based on stochastic orderings, which allow us to show that the proposed measure is actually a variability measure. The investigation also involves various notions of reliability theory, since the considered dynamic measure is a suitable extension of the mean inactivity time. We also introduce the empirical generalized fractional cumulative entropy as a non-parametric estimator of the new measure. It is shown that the empirical measure converges to the proposed notion almost surely. A central limit theorem is also established under the exponential distribution. The stability of the empirical measure is addressed, too. An example of application to real data is finally provided.
mathematics
Automated segmentation of medical imaging is of broad interest to clinicians and machine learning researchers alike. The goal of segmentation is to increase the efficiency and simplicity of visualization and quantification of regions of interest within a medical image. Image segmentation is a difficult task because of multiparametric heterogeneity within the images, an obstacle that has proven especially challenging in efforts to automate the segmentation of brain lesions from non-contrast head computed tomography (CT). In this research, we have experimented with multiple available deep learning architectures to segment different phenotypes of hemorrhagic lesions found after moderate to severe traumatic brain injury (TBI). These include: intraparenchymal hemorrhage (IPH), subdural hematoma (SDH), epidural hematoma (EDH), and traumatic contusions. We achieved an optimal Dice coefficient score of 0.94 using the UNet++ 2D architecture with the Focal Tversky loss function, an increase from 0.85 using UNet 2D with the binary cross-entropy loss function, in intraparenchymal hemorrhage (IPH) cases. Furthermore, using the same setting, we achieved Dice coefficient scores of 0.90 and 0.86 in cases of extra-axial bleeds and traumatic contusions, respectively.
electrical engineering and systems science
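The Dice coefficient used as the evaluation metric above is twice the overlap of prediction and ground truth divided by the sum of their sizes; a small self-contained check on toy masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice = 2|A&B| / (|A|+|B|); eps guards against two empty masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return (2.0 * np.logical_and(pred, truth).sum() + eps) / (
        pred.sum() + truth.sum() + eps)

truth = np.zeros((64, 64), dtype=int)
truth[20:40, 20:40] = 1                 # assumed ground-truth lesion mask
pred = np.zeros((64, 64), dtype=int)
pred[24:44, 22:42] = 1                  # assumed model prediction

print(f"Dice = {dice(pred, truth):.2f}")   # 0.72 for these toy masks
```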
A freely propagating optical field having a periodic transverse spatial profile undergoes periodic axial revivals - a phenomenon known as the Talbot effect or self-imaging. We show here that introducing tight spatio-temporal spectral correlations into an ultrafast pulsed optical field with a periodic transverse spatial profile eliminates all axial dynamics in physical space while revealing a novel space-time Talbot effect that can be observed only when carrying out time-resolved measurements. Indeed, 'time-diffraction' is observed, whereupon the temporal profile of the field envelope at a fixed axial plane corresponds to a segment of the spatial propagation profile of a monochromatic field sharing the initial spatial profile and observed at the same axial plane. Time-averaging, which is intrinsic to observing the intensity, altogether veils this effect.
physics
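For scale, the classical self-imaging period referenced above is the Talbot length $z_T = 2d^2/\lambda$; a quick numeric check with assumed values:

```python
# Talbot length z_T = 2 d^2 / lambda (values are assumptions for illustration).
wavelength = 800e-9    # m, Ti:sapphire-like carrier
d = 100e-6             # m, transverse spatial period

z_T = 2 * d**2 / wavelength
print(f"Talbot length z_T = {z_T * 100:.1f} cm")   # 2.5 cm
```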
We demonstrate that Kerr/CFT duality can be extended to a rotating charged black hole solution surrounded by quintessence in Rastall gravity. Since Rastall gravity can be considered as an $F(R,T) = R + \beta T$ theory with the addition of a matter term, the gravitational Lagrangian resembles that of Einstein's general relativity. In fact, the resulting central term coincides with the central term in general relativity, since $T$ is only the trace of the matter term. We then show that the CFT entropy of this black hole is equivalent to the Bekenstein-Hawking entropy, with no correction related to the Rastall coupling constant. Moreover, when the Rastall coupling constant $\kappa\lambda$ is switched off and $a$ approaches zero, we may extend the solution to the 5D black hole solution. It is found that this extremal solution admits twofold CFT duals, which we call the $Q$-picture and the $P$-picture. In both pictures, the CFT entropy reproduces exactly the Bekenstein-Hawking entropy for a Reissner-Nordstr\"om black hole surrounded by quintessence. These results establish that, for all black holes surrounded by quintessence, the microscopic entropies of the dual CFTs agree with the Bekenstein-Hawking entropies.
high energy physics theory
The European Strategy for Particle Physics (ESPP) submitted in 2013 a deliberation document to the CERN council explaining that a lepton collider with "energies of 500\,GeV or higher could explore the Higgs properties further, for example the [Yukawa] coupling to the top quark, the [trilinear] self-coupling and the total width." In view of the forthcoming ESPP update in 2020, variations on this qualitative theme have been applied, inaccurately, to the case of the ILC, to argue that an upgrade to 500\,GeV would allow the measurement of the Higgs potential and would increase the potential for new particle searches. As a consequence, the strategic question was raised again whether the FCC-ee design study ought to consider a 500 GeV energy upgrade. In this note, we revisit the ESPP 2013 statement quantitatively and find [...] that 500 GeV is not a particularly useful energy for the lepton colliders under consideration, especially for the FCC-ee. A 5 sigma demonstration of the existence of the Higgs self-coupling is within reach at the energies foreseen for the FCC-ee, with a moderate change of configuration, which certainly deserves consideration.
high energy physics phenomenology
Four-dimensional scanning transmission electron microscopy (4D-STEM) of local atomic diffraction patterns is emerging as a powerful technique for probing intricate details of atomic structure and atomic electric fields. However, efficient processing and interpretation of large volumes of data remain challenging, especially for two-dimensional or light materials, because the diffraction signal recorded on the pixelated arrays is weak. Here we employ data-driven manifold learning approaches for straightforward visualization and exploratory analysis of 4D-STEM datasets, distilling real-space neighboring effects on atomically resolved deflection patterns from single-layer graphene with single dopant atoms, as recorded on a pixelated detector. The extracted patterns relate to both individual atom sites and sublattice structures, effectively discriminating single dopant anomalies via multi-mode views. We believe manifold learning analysis will accelerate physics discoveries that couple data-rich imaging mechanisms with materials such as ferroelectrics, topological spin systems, and van der Waals heterostructures.
electrical engineering and systems science
We describe a case of interplay between human and computer proving that played a role in the discovery of an interesting mathematical result. The unusual feature of the use of computers here was that a computer-generated but human-readable proof was read, understood, generalized and abstracted by mathematicians to obtain the key lemma of the result.
computer science
Understanding the magnetic properties of the various Mn doping configurations that can be encountered in a $2H$-MoS$_2$ monolayer could be beneficial for its use in spintronics. Using the density functional theory plus Hubbard U (DFT$+$U) approach, we study how single isolated, double- and triple-substitution configurations of Mn atoms within a MoS$_2$ monolayer contribute to its total magnetization. We find that the doping configuration plays a critical role in stabilizing a ferromagnetic state in a Mn-doped MoS$_2$ monolayer. Indeed, the Mn-Mn magnetic interaction is found to be ferromagnetic and strong for Mn in equidistant substitution positions with an average separation in the range of 6-11 {\AA}. The strongest ferromagnetic interaction is found when the substitutions are at second-nearest-neighbor Mo sites of the armchair chain. Clustering is energetically favorable, but it strongly reduces the ferromagnetic exchange energies. Our results suggest that ordering the Mn dopants on the MoS$_2$ monolayer is needed to increase its potential ferromagnetism.
condensed matter
We present a study of far-ultraviolet (FUV) bright horizontal branch (HB) stars to understand the peculiarities seen in the HB sequence of the globular cluster NGC 1851, using ground- and space-based multi-wavelength data. Optical and UV color-magnitude diagrams are used to classify HB stars and establish their membership from HST and Gaia DR2 data. The spectral energy distributions (SEDs) of the hot HB stars located from the core to the tidal radius are constructed. The SEDs reveal that the HB stars near the G-jump show a decrease in the FUV flux when fitted with atmospheric models of cluster metallicity, but a better fit is found with higher-metallicity models, as expected from atmospheric diffusion. We report on four particularly interesting extreme HB (EHB) stars, two each in the inner and outer regions. We detect a sub-luminous EHB candidate and a "blue-hook" candidate with temperatures Teff ~ 25,000 K and 31,000 K, respectively. We also find an EHB star (Teff ~ 17,000 K) whose radius lies between those of the BHB and normal EHB stars. The most peculiar of our EHB stars (Teff ~ 28,000 K) is found to be a photometric binary with a Blue Straggler star (BSS) (Teff ~ 7,000 K), an important target for spectroscopic study. This discovery of a candidate EHB+BSS binary system could help to explain the mass loss in the RGB phase leading to the formation of EHB stars.
astrophysics
AI Hya has been known as an eclipsing binary containing a monoperiodic $\delta$ Sct pulsator. We present the results of its {\it TESS} photometry observed during Sector 7. Including our five minimum epochs, the eclipse timing diagram displays apsidal motion with a rate of $\dot{\omega}$ = 0.075$\pm$0.031 deg year$^{-1}$, which corresponds to an apsidal period of U = 4800$\pm$2000 years. The binary star model indicates that the smaller, less massive primary component is 427 K hotter than the pulsating secondary, and our distance of 612$\pm$36 pc is in good agreement with the $Gaia$ distance of 644$\pm$26 pc. We subtracted the binary effects from the observed {\it TESS} data and applied a multifrequency analysis to the residuals. The result reveals that AI Hya is multiperiodic in its pulsation. Of the 14 signals detected, four ($f_1$, $f_2$, $f_3$, $f_6$) may be considered independent pulsation frequencies. The period ratios of $P_{\rm pul}/P_{\rm orb}$ = 0.012$-$0.021 and the pulsation constants of $Q$ = 0.30$-$0.52 days correspond to $\delta$ Sct pulsations in binaries. We find that the secondary component of AI Hya pulsates in both radial fundamental $F$ modes ($f_2$ and $f_3$) and non-radial $g_1$ modes with a low degree of $\ell$ = 2 ($f_1$ and $f_6$).
astrophysics
We compute, at one loop in perturbation theory, the probability density function of the total magnetization $M$ of the Ising model on the 4-torus and the 4-sphere. We develop a single perturbative expansion that is valid in the symmetric phase as well as the broken symmetry phase, provided that the correlation length is large compared to the system size $L$. We find that, at the critical point, for large system size in lattice units, the PDF approaches $p(M)\sim \exp(-f(L) M^4)$. Consequently, the critical value of the Binder cumulant of the total magnetization is $U = 1 - \frac{4\,\Gamma(5/4)^2}{3\,\Gamma(3/4)^2}$. We validate our results by comparison with Monte Carlo simulation.
high energy physics theory
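The quoted critical value follows from the $p(M)\sim \exp(-f(L) M^4)$ form together with $\Gamma(1/4) = 4\,\Gamma(5/4)$, and is easy to evaluate numerically:

```python
from math import gamma

# U = 1 - <M^4> / (3 <M^2>^2) for p(M) ~ exp(-a M^4), where the moment ratio
# reduces to Gamma(5/4)*Gamma(1/4)/Gamma(3/4)^2 = 4*Gamma(5/4)^2/Gamma(3/4)^2.
U = 1 - 4 * gamma(5 / 4)**2 / (3 * gamma(3 / 4)**2)
print(f"critical Binder cumulant U = {U:.6f}")   # ~ 0.2705
```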
Driver assistance systems as well as autonomous cars have to rely on sensors to perceive their environment. A heterogeneous set of sensors is used to perform this task robustly. Among them, radar sensors are indispensable because of their range resolution and the possibility to directly measure velocity. Since more and more radar sensors are deployed on the streets, mutual interference must be dealt with. In the so-far unregulated automotive radar frequency band, a sensor must be capable of detecting, or even mitigating, the harmful effects of interference, which include a decreased detection sensitivity. In this paper, we address this issue with Convolutional Neural Networks (CNNs), which are state-of-the-art machine learning tools. We show that the ability of CNNs to find structured information in data while preserving local information enables superior denoising performance. To achieve this, CNN parameters are found using training with simulated data and integrated into the automotive radar signal processing chain. The presented method is compared with the state of the art, highlighting its promising performance. Hence, CNNs can be employed for interference mitigation as an alternative to conventional signal processing methods. Code and pre-trained models are available at https://github.com/johanna-rock/imRICnn.
electrical engineering and systems science
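A minimal sketch of the CNN-denoising idea (the architecture below is an illustrative assumption; the authors' actual models and training code are at the linked repository):

```python
import torch
import torch.nn as nn

class RadarDenoiser(nn.Module):
    """Toy 1-D CNN mapping an interference-corrupted signal to a clean one."""
    def __init__(self, channels=2):        # real + imaginary parts
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, channels, kernel_size=7, padding=3),
        )

    def forward(self, x):                  # x: (batch, channels, n_samples)
        return self.net(x)

model = RadarDenoiser()
clean = torch.randn(8, 2, 1024)                 # simulated clean signals
noisy = clean + 0.3 * torch.randn_like(clean)   # interference stand-in
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()                                 # gradients for one training step
print(f"initial MSE loss: {loss.item():.3f}")
```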
We study Einstein-Podolsky-Rosen (EPR) steering and present steerability criteria for arbitrary qubit-qudit (qudit-qubit) systems based on mutually unbiased measurements (MUMs) and general symmetric informationally complete measurements (general SIC-POVMs). Avoiding the usual complicated steering inequalities, these criteria are more operational than some existing criteria and can be implemented experimentally. Detailed examples are given to illustrate the efficiency of the criteria in both computation and experimental implementation.
quantum physics
We propose a new method to determine the spatially (impact-parameter) dependent nuclear parton distribution functions (nPDFs) using double parton scattering (DPS) processes in high-energy heavy-ion (proton-nucleus and nucleus-nucleus) collisions. We derive a simple generic DPS formula for nuclear collisions by accommodating both the nuclear collision geometry and the spatially dependent nuclear modification effect, under the assumption that the impact-parameter dependence of the nPDFs is related only to the nuclear thickness function. While the geometric effect is widely adopted, the impact of the spatially dependent nuclear modification on DPS cross sections has so far been overlooked; it can, however, be significant when the initial nuclear modification is large. In turn, DPS cross sections in heavy-ion collisions can provide useful information on the spatial dependence of nPDFs. They can, in general, be obtained in minimum-bias nuclear collisions, with the virtue of being independent of Glauber modeling.
high energy physics phenomenology
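For orientation on the abstract above: the "simple generic DPS formula" it derives generalizes the standard proton-proton pocket formula, a commonly quoted form of which is reproduced below (standard background material, not the paper's nuclear expression):

```latex
% pp double parton scattering "pocket formula"; m = 1 (2) for
% indistinguishable (distinguishable) hard subprocesses. The nuclear
% generalization folds in the thickness function T_A(b) and the
% impact-parameter dependence of the nPDFs.
\sigma^{pp}_{\mathrm{DPS}} \simeq \frac{m}{2}\,
  \frac{\sigma^{pp}_{1}\,\sigma^{pp}_{2}}{\sigma_{\mathrm{eff}}}
```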
The supersymmetry preserving mu parameter in SUSY theories is naively expected to be of order the Planck scale while phenomenology requires it to be of order the weak scale. This is the famous SUSY mu problem. Its solution involves two steps: 1. first forbid mu, perhaps via some symmetry, and then 2. re-generate it of order the scale of soft SUSY breaking terms. However, present LHC limits suggest the soft breaking scale m_{soft} lies in the multi-TeV regime whilst naturalness requires mu ~ m_{W,Z,h} ~ 100 GeV, so that a Little Hierarchy (LH) appears with mu << m_{soft}. We review twenty previously devised solutions to the SUSY mu problem and re-evaluate them in light of whether they are apt to support the LH. We organize the twenty solutions according to: 1. solutions from supergravity/superstring constructions, 2. extended MSSM solutions, 3. solutions from an extra local U(1)' and 4. solutions involving Peccei-Quinn (PQ) symmetry and axions. Early solutions would invoke a global Peccei-Quinn symmetry to forbid the mu term while relating the mu solution to solving the strong CP problem via the axion. We discuss the gravity-safety issue pertaining to global symmetries and the movement instead toward local gauge symmetries or R-symmetries, either continuous or discrete. At present, discrete R-symmetries of order M (Z_M^R) which emerge as remnants of Lorentz symmetry of compact dimensions seem favored. Even so, a wide variety of regenerative mechanisms are possible, some of which relate to other issues such as the strong CP problem or the generation of neutrino masses. We also discuss the issue of experimental verification or falsifiability of various solutions to the mu problem. Almost all solutions seem able to accommodate the LH.
high energy physics phenomenology
We study the cosmology of a dark sector consisting of (ultra) light scalars. Since the scalar mass is radiatively unstable, a special explanation is required to make the mass much smaller than the UV scale. There are two well-known mechanisms for the origin of a scalar mass. The scalar can be identified as a pseudo-Goldstone boson, whose shift symmetry is explicitly broken by non-perturbative corrections, like the axion. Alternatively, it can be identified as a composite particle like the glueball, whose mass is limited by the confinement scale of the theory. In both cases, the scalar can be naturally light, but the interaction behavior is quite different: the lighter the axion (glueball), the more weakly (strongly) it interacts. We consider a dark axion whose shift symmetry is anomalously broken by a hidden non-abelian gauge symmetry. After the confinement of the gauge group, the dark axion and the dark glueball get masses and both form multicomponent dark matter. We carefully consider the effects of energy flow from the dark gluons to the dark axions and derive the full equations of motion for the background and the perturbed variables. The effect of the dark axion-dark gluon coupling on the evolution of the entropy and the isocurvature perturbations is also clarified. Finally, we discuss the gravo-thermal collapse of the glueball subcomponent dark matter after halos form, in order to explore its potential to contribute to the formation of seeds for the supermassive black holes observed at high redshifts. Under simplified assumptions, the glueball subcomponent dark matter with a mass of $0.01-0.1 {\rm MeV}$, together with the axion main dark matter component with a decay constant $f_a={\cal O}(10^{15}-10^{16}){\rm GeV}$ and a mass of ${\cal O}(10^{-14}-10^{-18})\,{\rm eV}$, can provide a hint about the origin of the supermassive black holes at high redshifts.
high energy physics phenomenology
For many important problems the quantity of interest is an unknown function of the parameters, which form a random vector with known statistics. Since the dependence of the output on this random vector is unknown, the challenge is to identify its statistics using the minimum number of function evaluations. This problem can be seen in the context of active learning or optimal experimental design. We employ Bayesian regression to represent the derived model uncertainty due to the finite and small number of input-output pairs. In this context we evaluate existing methods for optimal sample selection, such as model error minimization and mutual information maximization. We show that for the case of known output variance, the commonly employed criteria in the literature do not take into account the output values of the existing input-output pairs, while for the case of unknown output variance this dependence can be very weak. We introduce a criterion that takes into account the values of the output for the existing samples and adaptively selects inputs from regions of the parameter space which have an important contribution to the output. The new method allows for application to high-dimensional inputs, paving the way for optimal experimental design in high dimensions.
statistics
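The contrast drawn in the abstract above, between criteria that ignore the observed outputs and one that uses them, can be sketched with a toy Bayesian linear regression surrogate. All names and modeling choices below are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # Polynomial feature map for a toy Bayesian linear regression.
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

# Small set of existing input-output pairs.
X = rng.uniform(-2, 2, 8)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(8)

Phi = features(X)
A = Phi.T @ Phi + 1e-2 * np.eye(4)       # posterior precision (unit noise)
w = np.linalg.solve(A, Phi.T @ y)        # posterior mean weights

cand = np.linspace(-2, 2, 200)           # candidate inputs to query next
Pc = features(cand)
var = np.einsum('ij,jk,ik->i', Pc, np.linalg.inv(A), Pc)  # predictive variance
mu = Pc @ w                              # predicted outputs

print(cand[np.argmax(var)])              # variance-only criterion: ignores y
print(cand[np.argmax(var * mu**2)])      # output-weighted: favors large outputs
```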
This paper provides a closed-form expression for the pairwise score vector of the multivariate ordered probit model. This result has several implications for likelihood-based inference. It is used both to speed up gradient-based optimization routines for point estimation, and as a building block to compute standard errors and confidence intervals by means of the Godambe matrix.
statistics
Experimental advances in the fabrication and characterization of few-layer materials stacked at a relative twist of small angle have recently shown the emergence of flat energy bands. As a consequence, electron interactions become relevant, providing inroads into the physics of strongly correlated two-dimensional systems. Here, we demonstrate, by combining large-scale ab initio simulations with numerically exact strong-correlation approaches, that an effective one-dimensional system emerges upon stacking two twisted sheets of GeSe, in marked contrast to all Moir\'e systems studied so far. This not only allows one to study the necessarily collective nature of excitations in one dimension, but can also serve as a promising platform to scrutinize the crossover from two to one dimension in a controlled setup by varying the twist angle, which provides a valuable benchmark with respect to theory. We thus establish twisted bilayer GeSe as an intriguing inroad into the strongly correlated physics of low-dimensional systems.
condensed matter
Wireless-connected Virtual Reality (VR) provides an immersive experience for VR users from anywhere at any time. However, providing wireless VR users with seamless connectivity and real-time, high-quality VR video is challenging due to the requirements of high Quality of Experience (QoE) and low VR interaction latency under the limited computation capability of VR devices. To address these issues, we propose an MEC-enabled wireless VR network, where the field of view (FoV) of each VR user can be predicted in real time using a Recurrent Neural Network (RNN), and the rendering of VR content is moved from the VR device to the MEC server with rendering-model migration capability. Taking into account the geographical and FoV request correlations, we propose centralized and distributed decoupled Deep Reinforcement Learning (DRL) strategies to maximize the long-term QoE of VR users under the VR interaction latency constraint. Simulation results show that our proposed MEC rendering schemes and DRL algorithms substantially improve the long-term QoE of VR users and reduce the VR interaction latency compared to rendering at VR devices.
electrical engineering and systems science
We consider an axion cloud around a black hole with background magnetic fields. We calculate the decay rate of the axion cloud due to the axion-photon conversion associated with the axion-photon coupling. For simplicity, we consider the situation where the axion configuration is dominated by a solution for the eigenvalue equation equivalent to that for the Hydrogen atom, and the coupling term can be evaluated by a successive perturbation method. For the monopole background, we find the decay rate of the axion cloud is given by $\sim q^2\kappa^2(GM)^5\mu^8$, where $\mu$, $M$, $G$, $\kappa$ and $q$ are the axion mass, black hole mass, gravitational constant, coupling constant of the axion-photon coupling and monopole charge, respectively. For the uniform background magnetic field, we obtain the decay rate of the axion cloud $\sim B_0^2\kappa^2 (GM)^7\mu^6$, where $B_0$ is the magnetic field strength. Applying our formula to the central black hole in our galaxy, we find that the value of the decay rate for the case of the uniform magnetic field is comparable to the growth rate of the superradiant instability with $\kappa\sim 10^{-12}{\rm GeV^{-1}}$, $B_0\sim 10^3{\rm G}$ and $\mu\sim 10^{-18}{\rm eV}$. The ratio is $10^5$ times larger for the monopole magnetic field with the same values of the parameters.
high energy physics phenomenology
We characterize noncommutative symmetric Banach spaces for which every bounded sequence admits either a convergent subsequence, or a $2$-co-lacunary subsequence. This extends the classical characterization, due to R\"abiger.
mathematics
Input constraints as well as parametric uncertainties must be accounted for in the design of safe control systems. This paper presents an adaptive controller for multiple-input multiple-output (MIMO) plants with input magnitude and rate saturation in the presence of parametric uncertainties. A filter is introduced in the control path to accommodate the presence of rate limits. An output feedback adaptive controller is designed to stabilize the closed-loop system even in the presence of this filter. The overall control architecture includes adaptive laws that are modified to account for the magnitude and rate limits. Analytical guarantees of bounded solutions and satisfactory tracking are provided. Three flight control simulations with nonlinear models of the aircraft dynamics are provided to demonstrate the efficacy of the proposed adaptive controller for open-loop stable and unstable systems in the presence of uncertainties in the dynamics as well as input magnitude and rate saturation.
mathematics
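The magnitude and rate saturation discussed in the abstract above can be pictured with a minimal discrete-time limiter; this generic sketch is not the paper's filter or controller design:

```python
import numpy as np

def saturate(u_cmd, u_prev, u_max, rate_max, dt):
    """Apply magnitude limit |u| <= u_max and rate limit |du/dt| <= rate_max."""
    u = np.clip(u_cmd, -u_max, u_max)                    # magnitude saturation
    du = np.clip(u - u_prev, -rate_max * dt, rate_max * dt)
    return u_prev + du                                   # rate saturation

# Example: a step command is slewed at the rate limit and clipped at 1.5.
dt, u = 0.01, 0.0
for _ in range(5):
    u = saturate(u_cmd=2.0, u_prev=u, u_max=1.5, rate_max=10.0, dt=dt)
    print(round(u, 3))  # 0.1, 0.2, 0.3, ... ramps toward the clipped value
```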
A non-iterative method of reconstruction is proposed from data of an MRI system and of a harmonic electromagnetic field at the Larmor frequency. The method is based on an exact analytic formula for the contrast source function. A geometric method for acquisition of the full inductive field is discussed.
mathematics
Accretion states, which are universally observed in stellar-mass black holes in X-ray binaries, are also anticipated in active galactic nuclei (AGN). This is the case at low luminosities, when the jet-corona coupling dominates the energy output in both populations. Previous attempts to extend this framework to a wider AGN population have been extremely challenging due to heavy hydrogen absorption of the accretion disc continuum and starlight contamination from the host galaxies. We present the luminosity-excitation diagram (LED), based on the [OIV]$_{25.9\mu m}$ and [NeII]$_{12.8\mu m}$ mid-infrared nebular line fluxes. This tool enables to probe the accretion disc contribution to the ionising continuum. When applied to a sample of 167 nearby AGN, the LED recovers the characteristic q-shaped morphology outlined by individual X-ray binaries during a typical accretion episode, allowing us to tentatively identify the main accretion states. The soft state would include broad-line Seyferts and about half of the Seyfert 2 population, showing highly excited gas and radio-quiet cores consistent with disc-dominated nuclei. The hard state mostly includes low-luminosity AGN ($\leq 10^{-3}\, \rm{L_{Edd}}$) characterised by low-excitation radio-loud nuclei and a negligible disc contribution. The remaining half of Seyfert 2 nuclei and the bright LINERs show low excitation at high accretion luminosities, and could be identified with the bright-hard and intermediate states. Their hosts show ongoing star formation in the central kiloparsecs. We discuss the above scenario, its potential links with the galaxy evolution picture and the possible presence of accretion state transitions in AGN, as suggested by the growing population of changing-look quasars.
astrophysics
We calculate the Sivers asymmetry in the photoproduction of an almost back-to-back $J/\psi$-jet pair in the process $ep^\uparrow \to J/\psi+\mathrm{jet}+X$, which will be possible at the future planned electron-ion collider (EIC). We use the framework of the generalized parton model (GPM), and NRQCD for calculating the $J/\psi$ production rate. We include contributions from both color-singlet and color-octet states in the asymmetry. We obtain a sizable Sivers asymmetry that can be promising for determining the gluon Sivers function. We also investigate the effect of TMD evolution on the asymmetry.
high energy physics phenomenology
We introduce a new model to explain the modulation of the orbital period observed in close stellar binary systems based on an angular momentum exchange between the spin of the active component and the orbital motion. This spin-orbit coupling is not due to tides, but is produced by a non-axisymmetric component of the gravitational quadrupole moment of the active star due to a persistent non-axisymmetric internal magnetic field. The proposed mechanism easily satisfies all the energy constraints having an energy budget about 100-1000 times smaller than those of previously proposed models and is supported by the observations of persistent active longitudes in the active components of close binary systems. We present preliminary applications to three well-studied binary systems to illustrate the model. The case of stars with hot Jupiters is also discussed showing that no significant orbital period modulation is generally expected on the basis of the proposed model.
astrophysics
We report measurements of resonant thermal capillary oscillations of a hemispherical liquid-gas interface obtained using a half bubble deposited on a solid substrate. The thermal motion of the hemispherical interface is investigated using an atomic force microscope cantilever that probes the amplitude of vibrations of this interface versus frequency. The spectrum of such nanoscale thermal oscillations of the bubble surface presents several resonance peaks and reveals that the contact line of the hemispherical bubble is pinned on the substrate. The analysis of these peaks allows us to measure the surface viscosity of the bubble interface. Minute amounts of impurities are responsible for altering the rheology of the pure water surface.
condensed matter
Basic quantum processes (such as particle creation, reflection, and transmission on the corresponding Klein steps) caused by inverse-square electric fields are calculated. These results represent a new example of exact nonperturbative calculations in the framework of QED. The inverse-square electric field is time-independent, inhomogeneous in the $x$-direction, and inversely proportional to $x$ squared. We find exact solutions of the Dirac and Klein-Gordon equations with such a field and construct corresponding in- and out-states. With the help of these states, and using the techniques developed in the framework of QED with $x$-electric potential steps, we calculate characteristics of the vacuum instability, such as differential and total mean numbers of particles created from the vacuum and vacuum-to-vacuum transition probabilities. We study the vacuum instability for two particular backgrounds: fields that stretch widely over the $x$-axis (small-gradient configuration) and fields that are sharply concentrated near the origin $x=0$ (sharp-gradient configuration). We compare exact results with ones calculated numerically. Finally, we consider an electric field configuration composed of inverse-square fields and an $x$-independent electric field between them, to study the role of growing and decaying processes in the vacuum instability.
high energy physics theory
Integrating datasets from different disciplines is hard because the data are often qualitatively different in meaning, scale, and reliability. When two datasets describe the same entities, many scientific questions can be phrased around whether the similarities between entities are conserved. Our method, CLARITY, quantifies consistency across datasets, identifies where inconsistencies arise, and aids in their interpretation. We explore three diverse comparisons: Gene Methylation vs Gene Expression, evolution of language sounds vs word use, and country-level economic metrics vs cultural beliefs. The non-parametric approach is robust to noise and differences in scaling, and makes only weak assumptions about how the data were generated. It operates by decomposing similarities into two components: the `structural' component analogous to a clustering, and an underlying `relationship' between those structures. This allows a `structural comparison' between two similarity matrices using their predictability from `structure'. The software, CLARITY, is available as an R package from https://github.com/danjlawson/CLARITY.
statistics
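A rough sketch of the decomposition idea in the abstract above, in Python rather than the released R package: approximate one similarity matrix by a low-rank "structure", fit the best "relationship" that maps that structure onto the second matrix, and read large residuals as inconsistencies. Everything here is an illustrative reading of the abstract:

```python
import numpy as np

def structure(S, k):
    """Top-k eigenvector basis: the 'structural' part of a similarity matrix."""
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def predict(U, T):
    """Best reconstruction of T from structure U via a fitted relationship."""
    R = U.T @ T @ U                 # least-squares 'relationship' matrix
    return U @ R @ U.T

rng = np.random.default_rng(1)
Z = rng.standard_normal((30, 3))
S1 = Z @ Z.T                        # similarities from dataset 1
S2 = Z @ Z.T + 0.1 * rng.standard_normal((30, 30))
S2 = (S2 + S2.T) / 2                # dataset 2: same structure, perturbed

U = structure(S1, k=3)
resid = S2 - predict(U, S2)         # entries S1's structure cannot explain
print(np.abs(resid).sum(axis=1).round(2))  # large rows flag inconsistent entities
```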
Topological insulators in the AIII symmetry class lack experimental realization. Moreover, fractionalization in one-dimensional topological insulators has not yet been directly observed. Our work might open possibilities for addressing both challenges. We propose a one-dimensional model in the AIII symmetry class which can be realized in current experiments with ultracold atomic gases. We further report on a distinctive property of topological edge modes in the AIII class: in contrast to those in the well-studied BDI class, they have non-zero momentum. Exploiting this feature, we propose a path for the detection of fractionalization. A fermion added to an AIII system splits into two halves localized at opposite momenta, which can be detected by imaging the momentum distribution.
condensed matter
Microlenses with typical stellar masses (a few ${\rm M}_{\odot}$) have traditionally been disregarded as potential sources of gravitational lensing effects at LIGO/Virgo frequencies, since the time delays are often much smaller than the inverse of the frequencies probed by LIGO/Virgo, resulting in negligible interference effects at LIGO/Virgo frequencies. While this is true for isolated microlenses in this mass regime, we show how, under certain circumstances and for realistic scenarios, a population of microlenses (for instance, stars and remnants from a galaxy halo or from the intracluster medium) embedded in a macromodel potential (galaxy or cluster) can conspire to produce time delays of order one millisecond, which would produce significant interference distortions in the observed strains. At sufficiently large magnification factors (of several hundred), microlensing effects should be common in gravitationally lensed gravitational waves. We explore the regime where the predicted signal falls in the frequency range probed by LIGO/Virgo. We find that stellar-mass microlenses permeating the lens plane near critical curves can introduce interference distortions in strongly lensed gravitational waves. Lensed events with negative parity (saddle points, never studied before in the context of gravitational waves) that take place near caustics of macromodels are more likely to produce measurable interference effects at LIGO/Virgo frequencies. This is the first study that explores the effect of a realistic population of microlenses, plus a macromodel, on strongly lensed gravitational waves.
astrophysics
Purpose: To develop an Artificial Intelligence (AI) agent for fully automated, rapid head and neck (H&N) IMRT plan generation without time-consuming inverse planning. Methods: This AI agent was trained using a conditional Generative Adversarial Network architecture. The generator, PyraNet, is a novel Deep Learning network that implements 28 classic ResNet blocks in pyramid-like concatenations. The discriminator is a customized 4-layer DenseNet. The AI agent first generates customized 2D projections at 9 template beam angles from the 3D CT volume and structures of a patient. These projections are then stacked as 4D inputs of PyraNet, from which 9 radiation fluence maps are generated simultaneously. Finally, the predicted fluence maps are imported into a commercial treatment planning system (TPS) for plan integrity checks. The AI agent was built and tested upon 231 oropharyngeal plans from a TPS plan library. Only the primary plans in the sequential boost regime were studied. A customized Haar wavelet loss was adopted for fluence map comparison. Isodose distributions in test AI plans and TPS plans were qualitatively evaluated. Key dosimetric metrics were statistically compared. Results: All test AI plans were successfully generated. Isodose gradients outside of the PTV in AI plans were comparable with TPS plans. After PTV coverage normalization, $D_{mean}$ of parotids and oral cavity in AI plans and TPS plans were comparable without statistical significance. AI plans achieved comparable $D_{max}$ at 0.01cc of brainstem and cord+5mm without clinically relevant differences, but body $D_{max}$ was higher than the TPS plan results. The AI agent needs ~3s per case to predict fluence maps. Conclusions: The developed AI agent can generate H&N IMRT plans with satisfactory dosimetric quality. With rapid and fully automated implementation, it holds great potential for clinical applications.
physics
Purpose: Quantitative magnetization transfer (qMT) imaging can be used to detect the signal of protons attached to relatively immobile macromolecules. Here, we show that the original qMT balanced steady-state free precession (bSSFP) model is biased due to over-simplistic assumptions made in its derivation. Theory and Methods: We present an improved model for qMT bSSFP, which incorporates finite radio-frequency (RF) pulse effects as well as simultaneous exchange and relaxation. Further, a correction to finite RF pulse effects for sinc-shaped excitations is derived. The new model is compared to the original one in numerical simulations of the Bloch-McConnell equations and in previously acquired in-vivo data. Results: Our numerical simulations show that the original signal equation is significantly biased in typical brain tissue structures (by 7-20 %) whereas the new signal equation outperforms the original one with minimal bias (< 1%). It is further shown that the bias of the original model strongly affects the acquired qMT parameters in human brain structures, with differences in the clinically relevant parameter of pool-size-ratio of up to 31 %. Particularly high biases of the original signal equation are expected in an MS lesion within diseased brain tissue (due to a low T2/T1-ratio), demanding a more accurate model for clinical applications. Conclusion: The improved model for qMT bSSFP is recommended for accurate qMT parameter mapping in healthy and diseased brain tissue structures.
physics
We study the state-dependent (SD) wiretap channel (WTC) with non-causal channel state information (CSI) at the encoder. This model subsumes all other instances of CSI availability as special cases, and calls for an efficient utilization of the state sequence for both reliability and security purposes. A lower bound on the secrecy-capacity, that improves upon the previously best known result published by Prabhakaran et al., is derived based on a novel superposition coding scheme. Our achievability gives rise to the exact secrecy-capacity characterization of a class of SD-WTCs that decompose into a product of two WTCs, where one is independent of the state and the other one depends only on the state. The results are derived under the strict semantic-security metric that requires negligible information leakage for all message distributions.
computer science
In this paper we obtain several results concerning the optimization of higher Steklov eigenvalues, both in the two-dimensional and higher-dimensional cases. We first show that the normalized (by boundary length) $k$-th Steklov eigenvalue on the disk is not maximized by a smooth metric on the disk for $k\geq 3$. For $k=1$ the classical result of [W] shows that $\sigma_1$ is maximized by the standard metric on the round disk. For $k=2$ it was shown in [GP1] that $\sigma_2$ is not maximized by a smooth metric. We also prove a local rigidity result for the critical catenoid and the critical M\"obius band as free boundary minimal surfaces in a ball under $C^2$ deformations. We next show that the first $k$ Steklov eigenvalues are continuous under certain degenerations of Riemannian manifolds in any dimension. Finally we show that for $k\geq 2$ the supremum of the $k$-th Steklov eigenvalue on the annulus over all metrics is strictly larger than that over $S^1$-invariant metrics. We prove this same result for metrics on the M\"obius band.
mathematics
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised learning on graph-based datasets. For sparse graphs, linear and polynomial filter functions have yielded impressive results. For large non-sparse graphs, however, network training and evaluation become prohibitively expensive. By introducing low-rank filters, we gain significant runtime acceleration and simultaneously improved accuracy. We further propose an architecture change mimicking techniques from Model Order Reduction in what we call a reduced-order GCN. Moreover, we present how our method can also be applied to hypergraph datasets and how hypergraph convolution can be implemented efficiently.
computer science
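To make the low-rank filtering idea in the abstract above concrete, here is a numpy sketch of a single graph-convolution layer whose spectral filter lives on only the top-$r$ Laplacian eigenpairs; it is a generic illustration, not the paper's exact architecture:

```python
import numpy as np

def normalized_laplacian(A):
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(A)) - Dinv @ A @ Dinv

def low_rank_gcn_layer(A, X, theta, r):
    """Filter node features X with g(L) ~= U_r diag(theta) U_r^T (rank r)."""
    vals, vecs = np.linalg.eigh(normalized_laplacian(A))
    U = vecs[:, :r]                           # r smoothest eigenvectors
    return U @ (theta[:, None] * (U.T @ X))   # cost O(n r f), not O(n^2 f)

A = np.diag(np.ones(4), 1); A = A + A.T      # 5-node path graph
X = np.random.default_rng(0).standard_normal((5, 2))
out = low_rank_gcn_layer(A, X, theta=np.array([1.0, 0.5, 0.25]), r=3)
print(out.shape)  # (5, 2)
```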
A well-known axiom for proportional representation is Proportionality of Solid Coalitions (PSC). We characterize committees satisfying PSC as possible outcomes of the Minimal Demand rule, which generalizes an approach pioneered by Michael Dummett.
computer science
When a muon bound in an atom decays, there is a small probability that the daughter electron remains bound. That probability is evaluated. Surprisingly, a significant part of the rate is contributed by the negative energy component of the wave function, neglected in a previous study. A simple integral representation of the rate is presented. In the limit of close muon and electron masses, an analytic formula is derived.
high energy physics phenomenology
We compute the vacuum fermion current in the $(2+1)$-dimensional Jackiw-Rossi model by using the $1/m$ expansion. The current is expressed through a weighted $\eta$-function with a matrix weight. In the presence of such a weight, the usual proof of the topological nature of $\eta(0)$ is no longer applicable. Direct computations confirm the following surprising result: the fermion number induced by vortices in the Jackiw-Rossi model is \textit{not} topological.
high energy physics theory
The bottomonium spectrum is far from being established. The structures of the higher vector states, including the $\Upsilon(10580)$, $\Upsilon(10860)$, and $\Upsilon(11020)$ states, are still in dispute. In addition, whether the $\Upsilon(10750)$ signal recently observed by the Belle Collaboration is a normal $b\bar{b}$ state or not should be examined. Faced with this situation, we carried out a systematic investigation of the bottomonium spectrum in the scheme of the relativistic flux tube (RFT) model. A Chew-Frautschi-like formula was derived analytically for the spin-averaged mass of bottomonium states. We further incorporated the spin-dependent interactions and obtained a complete bottomonium spectrum. We find that most of the established bottomonium states can be explained in the RFT scheme. The $\Upsilon(10750)$, $\Upsilon(10860)$, and $\Upsilon(11020)$ could be predominantly the $3^3D_1$, $5^3S_1$, and $4^3D_1$ states, respectively. Our predicted masses of the $1F$ and $1G$ $b\bar{b}$ states are in agreement with the results given by lattice QCD, which can be tested by future experiments. We also compared the RFT model with the quark potential model in detail. The differences between these two kinds of models are discussed.
high energy physics phenomenology
In four spacetime dimensions, all ${\cal N} =1$ supergravity-matter systems can be formulated in the so-called $\mathsf{U}(1)$ superspace proposed by Howe in 1981. This paper is devoted to the study of those geometric structures which characterise a background $\mathsf{U}(1)$ superspace and are important in the context of supersymmetric field theory in curved space. We introduce (conformal) Killing tensor superfields $\ell_{(\alpha_1 \dots \alpha_m) ({\dot \alpha}_1 \dots {\dot \alpha}_n)}$, with $m$ and $n$ non-negative integers, $m+n>0$, and elaborate on their significance in the following cases: (i) $m=n=1$; (ii) $m-1=n=0$; and (iii) $m=n>1$. The (conformal) Killing vector superfields $\ell_{\alpha \dot \alpha}$ generate the (conformal) isometries of curved superspace, which are symmetries of every (conformal) supersymmetric field theory. The (conformal) Killing spinor superfields $\ell_{\alpha }$ generate extended (conformal) supersymmetry transformations. The (conformal) Killing tensor superfields with $m=n>1$ prove to generate all higher symmetries of the (massless) massive Wess-Zumino operator.
high energy physics theory
Survival analysis of right-censored data arises often in many areas of research, including medical research. The effect of covariates (and their interactions) on the survival distribution can be studied through existing methods, which require pre-specifying the functional form of the covariates, including their interactions. Survival trees offer a relatively flexible approach when the form of the covariates' effects is unknown. Most of the currently available survival tree construction techniques are not based on a formal test of significance; however, the recently proposed ctree algorithm (Hothorn et al., 2006) uses a permutation test for the splitting decision that may be conservative at times. We consider a parameter instability test of statistical significance of heterogeneity to guard against spurious findings of variation in covariates' effects without being overly conservative. We propose the SurvCART algorithm to construct survival trees under the conditional inference framework (Hothorn et al., 2006); it selects the splitting variable via the parameter instability test and subsequently finds the optimal split based on some maximally chosen statistic. Notably, unlike the existing algorithms, which focus only on heterogeneity in the event time distribution, the proposed SurvCART algorithm can base splitting decisions on the censoring distribution as well, along with heterogeneity in the event time distribution. The operating characteristics of the parameter instability test and a comparative assessment of the SurvCART algorithm were carried out via simulation. Finally, the SurvCART algorithm was applied to a real data setting. The proposed method is fully implemented in the R package LongCART available on CRAN.
statistics
In 1933, Borsuk proposed the following problem: Can every bounded set in $\mathbb{E}^n$ be divided into $n+1$ subsets of smaller diameter? This problem has been studied by many authors, and many partial results have been discovered. In particular, Kahn and Kalai's counterexamples surprised the mathematical community in 1993. Nevertheless, the problem is still far from being completely resolved. This paper presents a broad review of related subjects and, based on a novel reformulation, introduces a computer proof program to deal with this well-known problem.
mathematics
Training classification models on imbalanced data tends to result in bias towards the majority class. In this paper, we demonstrate how variable discretization and cost-sensitive logistic regression help mitigate this bias on an imbalanced credit scoring dataset, and further show the application of the variable discretization technique on data from other domains, demonstrating its potential as a generic technique for classifying imbalanced data beyond credit scoring. The performance measurements include ROC curves, Area under ROC Curve (AUC), Type I Error, Type II Error, accuracy, and F1 score. The results show that proper variable discretization and cost-sensitive logistic regression with the best class weights can reduce the model bias and/or variance. From the perspective of the algorithm, cost-sensitive logistic regression is beneficial for increasing the value of predictors even if they are not in their optimized forms while maintaining monotonicity. From the perspective of predictors, the variable discretization performs better than cost-sensitive logistic regression, provides more reasonable coefficient estimates for predictors which have nonlinear relationships against their empirical logit, and is robust to penalty weights on misclassifications of events and non-events determined by their a priori proportions.
statistics
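The two ingredients in the abstract above map directly onto standard scikit-learn components. A minimal sketch (generic, not the paper's exact pipeline or class weights):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Imbalanced toy data: ~5% positives, as in credit-default settings.
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.95], random_state=0)

# Variable discretization (quantile bins, one-hot encoded) followed by
# cost-sensitive logistic regression via class weights.
clf = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="onehot", strategy="quantile"),
    LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000),
)
clf.fit(X, y)
print(clf.score(X, y))
```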
Based on the theory of independently scattered random measures, we introduce a natural generalisation of Gaussian space-time white noise to a L\'evy-type setting, which we call L\'evy-valued random measures. We determine the subclass of cylindrical L\'evy processes which correspond to L\'evy-valued random measures, and describe the elements of this subclass uniquely by their characteristic function. We embed the L\'evy-valued random measure, or the corresponding cylindrical L\'evy process, in the space of general and tempered distributions. For the latter case, we show that this embedding is possible if and only if a certain integrability condition is satisfied. Similar to existing definitions, we introduce L\'evy-valued additive sheets, and show that integrating a L\'evy-valued random measure in space defines a L\'evy-valued additive sheet. This relation is manifested by the result that a L\'evy-valued random measure can be viewed as the weak derivative of a L\'evy-valued additive sheet in the space of distributions.
mathematics
Hyperspectral imaging (HSI) unlocks huge potential for a wide variety of applications that rely on high-precision pathology image segmentation, such as computational pathology and precision medicine. Since hyperspectral pathology images benefit from rich and detailed spectral information even beyond the visible spectrum, the key to achieving high-precision hyperspectral pathology image segmentation is to felicitously model the context along the high-dimensional spectral bands. Inspired by the strong context-modeling ability of transformers, we hereby, for the first time, formulate contextual feature learning across spectral bands for hyperspectral pathology image segmentation as a sequence-to-sequence prediction procedure by transformers. To assist the spectral context learning procedure, we introduce two important strategies: (1) a sparsity scheme enforces the learned contextual relationship to be sparse, so as to eliminate distraction from redundant bands; (2) a spectral normalization, a separate group normalization for each spectral band, mitigates the nuisance caused by heterogeneous underlying distributions of bands. We name our method Spectral Transformer (SpecTr), which enjoys two benefits: (1) it has a strong ability to model long-range dependency among spectral bands, and (2) it jointly explores the spatial-spectral features of HSI. Experiments show that SpecTr outperforms other competing methods in a hyperspectral pathology image segmentation benchmark without the need of pre-training. Code is available at https://github.com/hfut-xc-yun/SpecTr.
electrical engineering and systems science
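Of the two strategies in the abstract above, the per-band spectral normalization is easy to sketch in PyTorch: give each spectral band its own normalization. This is an illustrative reading of the abstract, not the released SpecTr code:

```python
import torch
import torch.nn as nn

class SpectralNorm2d(nn.Module):
    """Normalize each spectral band separately (one norm per band)."""
    def __init__(self, n_bands, channels):
        super().__init__()
        # Independent statistics and affine parameters per band.
        self.norms = nn.ModuleList(
            nn.GroupNorm(num_groups=1, num_channels=channels)
            for _ in range(n_bands)
        )

    def forward(self, x):            # x: (batch, bands, channels, H, W)
        return torch.stack(
            [norm(x[:, b]) for b, norm in enumerate(self.norms)], dim=1
        )

x = torch.randn(2, 8, 16, 32, 32)    # 8 spectral bands, 16 channels
print(SpectralNorm2d(n_bands=8, channels=16)(x).shape)
```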
We present a large-scale method to produce few-layer graphene (FLG) based on the mechanical exfoliation of graphite and compare the obtained FLG with the one reported earlier arising from pencil lead ablation. Several things are modified and improved in the new approach. The purification and the ablation set-up are simplified, and the morphology of the FLG is modified and improved in view of some applications. The morphology-dependent properties of the FLGs, lead FLG and graphite FLG, are investigated as conductive layers and in nanocomposites. The newly obtained FLG has a higher aspect ratio (high lateral size vs. thickness, i.e., a more pronounced 2D aspect), which is reflected in enhanced transparency-conductivity features of the layer (film) and in the elongation-at-break behavior of the polymer composites. On the contrary, the nanocomposite containing lead FLG shows, for instance, excellent gas barrier properties due to the multistep structure of the lead FLG flakes. Such a structure exhibits less 2D and more 3D character, which can be highly suitable for applications where the presence of active, reactive edges is beneficial (in catalysis or supercapacitor electrodes). Nuclear reaction analysis is employed to investigate the morphology of the graphite FLG film.
physics
We introduce a novel approach to estimation problems in settings with missing data. Our proposal -- the Correlation-Assisted Missing data (CAM) estimator -- works by exploiting the relationship between the observations with missing features and those without missing features in order to obtain improved prediction accuracy. In particular, our theoretical results elucidate general conditions under which the proposed CAM estimator has lower mean squared error than the widely used complete-case approach in a range of estimation problems. We showcase in detail how the CAM estimator can be applied to $U$-Statistics to obtain an unbiased, asymptotically Gaussian estimator that has lower variance than the complete-case $U$-Statistic. Further, in nonparametric density estimation and regression problems, we construct our CAM estimator using kernel functions, and show it has lower asymptotic mean squared error than the corresponding complete-case kernel estimator. We also include practical demonstrations throughout the paper using simulated data and the Terneuzen birth cohort and Brandsma datasets available from CRAN.
statistics
We show that domain walls, or kinks, can be constructed in simple scalar theories where the scalar has no potential. These theories belong to a class of k-essence where the Lagrangian vanishes identically when one lets the derivatives of the scalar vanish. The domain walls we construct have positive energy and stable quadratic perturbations. As particular cases, we find families of theories with domain walls and their quadratic perturbations identical to the ones of the canonical Mexican hat or sine-Gordon scalar theories. We show that canonical and non canonical cases are nevertheless distinguishable via higher order perturbations or a careful examination of the energies. In particular, in contrast to the usual case, our walls are local minima of the energy among the field configuration having some fixed topological charge, but not global minima.
high energy physics theory
We describe an updated calibration and diagnostic framework, Balrog, used to directly sample the selection and photometric biases of Dark Energy Survey's (DES) Year 3 (Y3) dataset. We systematically inject onto the single-epoch images of a random 20% subset of the DES footprint an ensemble of nearly 30 million realistic galaxy models derived from DES Deep Field observations. These augmented images are analyzed in parallel with the original data to automatically inherit measurement systematics that are often too difficult to capture with traditional generative models. The resulting object catalog is a Monte Carlo sampling of the DES transfer function and is used as a powerful diagnostic and calibration tool for a variety of DES Y3 science, particularly for the calibration of the photometric redshifts of distant "source" galaxies and magnification biases of nearer "lens" galaxies. The recovered Balrog injections are shown to closely match the photometric property distributions of the Y3 GOLD catalog, particularly in color, and capture the number density fluctuations from observing conditions of the real data within 1% for a typical galaxy sample. We find that Y3 colors are extremely well calibrated, typically within ~1-8 millimagnitudes, but for a small subset of objects we detect significant magnitude biases correlated with large overestimates of the injected object size due to proximity effects and blending. We discuss approaches to extend the current methodology to capture more aspects of the transfer function and reach full coverage of the survey footprint for future analyses.
astrophysics
In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement-cycles and encodes them into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as the input of a shallow CNN for gesture recognition to reduce the computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate the detection of gestures in the real-time case. The proposed HAD can capture the time-stamp at which a gesture finishes and feed the hand profile of all the relevant measurement-cycles before this time-stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it can be deployed on an edge-computing platform for real-time applications, whose performance is notably inferior to that of a state-of-the-art personal computer. The experimental results show that the proposed framework is capable of classifying 12 gestures in real time with a high F1-score.
electrical engineering and systems science
Residual networks (ResNet) and densely connected networks (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs), mainly for object classification tasks. In this paper, we propose an efficient network architecture that combines the advantages of both networks. The proposed method is integrated into an encoder-decoder DCNN model for medical image segmentation. Our method adds additional skip connections compared to ResNet but uses significantly fewer model parameters than DenseNet. We evaluate the proposed method on a public dataset (ISIC 2018 grand-challenge) for skin lesion segmentation and a local brain MRI dataset. In comparison with ResNet-based, DenseNet-based and attention network (AttnNet) based methods within the same encoder-decoder network structure, our method achieves significantly higher segmentation accuracy with fewer model parameters than DenseNet and AttnNet. The code is available on GitHub (GitHub link: https://github.com/MinaJf/DRU-net).
electrical engineering and systems science
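A minimal PyTorch sketch of the design point in the abstract above: a block that both adds a residual skip and concatenates its input DenseNet-style, while a 1x1 convolution keeps the parameter count modest. Illustrative only; see the authors' GitHub for the actual DRU-net:

```python
import torch
import torch.nn as nn

class ResDenseBlock(nn.Module):
    """Residual sum plus one dense-style concatenation of the block input."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Fuse [input, input + conv(input)] back down to `channels` maps.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        res = x + self.conv(x)                     # ResNet-style skip
        return self.fuse(torch.cat([x, res], 1))   # DenseNet-style concat

x = torch.randn(1, 32, 64, 64)
print(ResDenseBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```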
The indefinite integral $$ \int x^\alpha e^{\eta x^\beta}\,_pF_q (a_1, a_2, \cdot\cdot\cdot a_p; b_1, b_2, \cdot\cdot\cdot, b_q; \lambda x^{\gamma})dx, $$ where $\alpha, \eta, \beta, \lambda, \gamma\ne0$ are real or complex constants and $_pF_q$ is the generalized hypergeometric function, is evaluated in terms of an infinite series involving the generalized hypergeometric function. Related integrals in which the exponential function $e^{\eta x^\beta}$ is either replaced by the hyperbolic function $\cosh\left(\eta x^\beta\right)$ or $\sinh\left(\eta x^\beta\right)$, or the sinusoidal function $\cos\left(\eta x^\beta\right)$ or $\sin\left(\eta x^\beta\right)$, are also evaluated in terms of infinite series involving the generalized hypergeometric function $_pF_q$. Some application examples from applied analysis, in which some new Fourier and Laplace integrals (or transforms) are evaluated, are given. The analytical solution of the Orr-Sommerfeld equation (with a linear mean flow background) in the short-wave limit is expressed in terms of some infinite series involving the hypergeometric series $_2F_3$. Making use of the hyperbolic and Euler identities, some interesting series identities involving exponential, hyperbolic, trigonometric functions and the generalized hypergeometric function are also derived.
mathematics
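Integrals of the type treated in the abstract above are easy to sanity-check numerically in special cases. For instance, $_1F_1(1;2;x)=(e^x-1)/x$, so with $\alpha=1$, $\eta=-1$, $\beta=\gamma=\lambda=1$ the integrand collapses to $1-e^{-x}$, whose antiderivative is $x+e^{-x}-1$. The mpmath check below is ours, not the paper's series:

```python
from mpmath import mp, exp, hyper, mpf, quad

mp.dps = 30

# Integrand x * e^{-x} * 1F1(1; 2; x); since 1F1(1;2;x) = (e^x - 1)/x,
# this equals 1 - e^{-x}, with antiderivative x + e^{-x} - 1.
f = lambda x: x * exp(-x) * hyper([1], [2], x)

t = mpf(2)
print(quad(f, [0, t]))   # numerical quadrature
print(t + exp(-t) - 1)   # closed form; agrees to ~30 digits
```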
Describing partially-condensed Bose gases poses a long-standing theoretical challenge. We present exact stochastic Ehrenfest relations for the stochastic projected Gross-Pitaevskii equation, including both number and energy damping mechanisms, and all projector terms that arise from the energy cutoff separating system from reservoir. We test the theory by applying it to the centre of mass fluctuations of a harmonically trapped prolate system, finding close agreement between c-field simulations and analytical results. The formalism lays the foundation to analytically explore experimentally accessible hot Bose-Einstein condensates.
condensed matter
The lack of radiotherapy linear accelerators (LINACs) in Low- and Middle- Income Countries (LMICs) has been recognised as a major barrier to providing quality cancer care in these regions, along with a shortfall in the number of highly qualified personnel. It is expected that additional challenges will be faced in operating precise, high tech radiotherapy equipment in these environments, and anecdotal evidence suggests that LINACs have greater downtime and higher failure rates of components than their counterparts in High-Income Countries. To guide future developments such as the design of a LINAC tailored for use in LMIC environments, it is important to take a data-driven approach to any re-engineering of the technology. However, no detailed statistical data on LINAC downtime and failure modes has been previously collected or presented in the literature. This work presents the first known comparative analysis of failure modes and downtime of current generation LINACs in radiotherapy centres, with the aim of determining any correlations between LINAC environment and performance. Logbooks kept by radiotherapy personnel on the operation of their LINAC were obtained and analysed from centres in Oxford (UK), Abuja, Benin, Enugu, Lagos, Sokoto (Nigeria) and Gaborone (Botswana). By deconstructing the LINAC into 12 different subsystems, it is found that the vacuum subsystem only fails in the LMIC centres and the failure rate in an LMIC environment is more than twice as large in 6 of the 12 subsystems compared to the High Income Country (HIC). Additionally, it is shown that despite accounting for only 3.4% of total number of faults, the LINAC faults which take more than an hour to repair account for 74.6% of the total downtime. The results of this study inform future attempts to mitigate the problems affecting LINACs in LMIC environments.
physics
The holographic entanglement entropy functional for higher-curvature gravities involves a weighted sum whose evaluation, beyond quadratic order, requires a complicated theory-dependent splitting of the Riemann tensor components. Using the splittings of general relativity one can obtain unambiguous formulas perturbatively valid for general higher-curvature gravities. Within this setup, we perform a novel rewriting of the functional which gets rid of the weighted sum. The formula is particularly neat for general cubic and quartic theories, and we use it to explicitly evaluate the corresponding functionals. In the case of Lovelock theories, we find that the anomaly term can be written in terms of the exponential of a differential operator. We also show that order-$n$ densities involving $n_R$ Riemann tensors (combined with $n-n_R$ Ricci's) give rise to terms with up to $2n_R-2$ extrinsic curvatures. In particular, densities built from arbitrary Ricci curvatures combined with zero or one Riemann tensors have no anomaly term in their functionals. Finally, we apply our results for cubic gravities to the evaluation of universal terms coming from various symmetric regions in general dimensions. In particular, we show that the universal function characteristic of corner regions in $d=3$ gets modified in its functional dependence on the opening angle with respect to the Einstein gravity result.
high energy physics theory
The requirement for solving nonlinear algebraic equations is ubiquitous in the field of electric power system simulations. While Newton-based methods have been used to advantage, they sometimes do not converge, leaving the user wondering whether a solution exists. In addition to improved robustness, one advantage of holomorphic embedding methods (HEM) is that, even when they do not converge, root plots of the Pad\'e approximants (PAs) to the functions in the inverse-$\alpha$ plane can be used to determine whether a solution exists. The convergence factor (CF) of the near-diagonal PAs applied to functions expanded about the origin is determined by the logarithmic capacity of the associated branch cut (BC) and the distance of the evaluation point from the origin. However, the underlying mechanism governing this rate has been obscure. We prove that the ultimate distribution of the PA roots on the BC in the complex plane converges weakly to the equilibrium distribution of electrostatic charges on a 2-D conductor system with the same topology in a physical setting. This, along with properties of the Maclaurin series, can be used to explain the structure of the CF equation. We demonstrate the theoretical convergence behavior with numerical experiments.
electrical engineering and systems science
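The link between Padé denominator roots and branch cuts described in the abstract above is easy to reproduce for a textbook function: $f(z)=(1-z)^{-1/2}$ has its cut on $[1,\infty)$, and the poles of a near-diagonal Padé approximant line up on that cut. A small mpmath/numpy sketch (illustrative, not the paper's power-flow functions):

```python
import numpy as np
from mpmath import binomial, mp, mpf, pade

mp.dps = 40

# Taylor coefficients of f(z) = (1-z)^(-1/2): c_n = binom(2n, n) / 4^n.
N = 24
c = [binomial(2 * n, n) / mpf(4) ** n for n in range(N + 1)]

p, q = pade(c, N // 2, N // 2)     # near-diagonal [12/12] approximant

# Poles of the approximant = roots of the denominator polynomial q.
roots = np.roots([float(x) for x in q[::-1]])
print(np.sort(roots.real))         # all real, >= 1: they populate the cut
```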
Error models for quantum computing processors describe their deviation from ideal behavior and predict the consequences in applications. But those processors' experimental behavior -- the observed outcome statistics of quantum circuits -- is rarely consistent with error models, even in characterization experiments like randomized benchmarking (RB) or gate set tomography (GST), where the error model was specifically extracted from the data in question. We show how to resolve these inconsistencies, and quantify the rate of unmodeled errors, by augmenting error models with a parameterized wildcard error model. Adding wildcard error to an error model relaxes and weakens its predictions in a controlled way. The amount of wildcard error required to restore consistency with data quantifies how much unmodeled error was observed, in a way that facilitates direct comparison to standard gate error rates. Using both simulated and experimental data, we show how to use wildcard error to reconcile error models derived from RB and GST experiments with inconsistent data, to capture non-Markovianity, and to quantify all of a processor's observed error.
quantum physics
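The bookkeeping behind the abstract above can be sketched simply: per circuit, compare model-predicted outcome probabilities with observed frequencies and record how much total-variation slack ("wildcard") is needed for consistency. This toy version ignores the per-gate parameterization and the statistical error bars used in practice:

```python
import numpy as np

def tvd(p, q):
    """Total variation distance between two outcome distributions."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

# Model predictions vs. observed frequencies for three circuits.
model = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.8, 0.2])]
obs   = [np.array([0.85, 0.15]), np.array([0.55, 0.45]), np.array([0.6, 0.4])]

# Minimal per-circuit wildcard = the TVD the error model fails to explain.
wildcard = [tvd(p, q) for p, q in zip(model, obs)]
print(wildcard)  # the third circuit carries the most unmodeled error
```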
We propose a projection-based class of uniformity tests on the hypersphere using statistics that integrate, along all possible directions, the weighted quadratic discrepancy between the empirical cumulative distribution function of the projected data and the projected uniform distribution. Simple expressions for several test statistics are obtained for the circle and sphere, and relatively tractable forms for higher dimensions. Despite its different origin, the proposed class is shown to be related to the well-studied Sobolev class of uniformity tests. Our new class proves advantageous by allowing the derivation of new tests for hyperspherical data that neatly extend the circular tests by Watson, Ajne, and Rothman, and by introducing the first instance of an Anderson-Darling-like test for such data. The asymptotic distributions and the local optimality against certain alternatives of the new tests are obtained. A simulation study evaluates the theoretical findings and shows that, for certain scenarios, the new tests are competitive against previous proposals. The new tests are employed in three astronomical applications.
statistics
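For the circle, the recipe in the abstract above is short: project the angles onto each direction, then compare the empirical CDF of the projections with the arcsine CDF induced by the uniform law, via a Cramér-von Mises discrepancy averaged over directions. A generic Monte Carlo sketch, not the authors' statistics:

```python
import numpy as np

def projected_cvm(theta, n_dirs=200, seed=0):
    """Average Cramer-von Mises discrepancy over random projection directions."""
    rng = np.random.default_rng(seed)
    n, stats = len(theta), []
    for phi in rng.uniform(0, 2 * np.pi, n_dirs):
        x = np.sort(np.cos(theta - phi))         # projections onto direction phi
        u = 1 - np.arccos(np.clip(x, -1, 1)) / np.pi  # arcsine CDF (uniform case)
        i = np.arange(1, n + 1)
        stats.append(1 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2))
    return np.mean(stats)

rng = np.random.default_rng(1)
uniform = rng.uniform(0, 2 * np.pi, 100)
clumped = rng.normal(0, 0.3, 100) % (2 * np.pi)
print(projected_cvm(uniform), projected_cvm(clumped))  # clumped is far larger
```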
Although neural network approaches achieve remarkable success on a variety of NLP tasks, many of them struggle to answer questions that require commonsense knowledge. We believe the main reason is the lack of commonsense connections between concepts. To remedy this, we provide a simple and effective method that leverages an external commonsense knowledge base such as ConceptNet. We pre-train direct and indirect relational functions between concepts, and show that these pre-trained functions can easily be added to existing neural network models. Results show that incorporating commonsense-based functions improves the baseline on three question answering tasks that require commonsense reasoning. Further analysis shows that our system discovers and leverages useful evidence from an external commonsense knowledge base, which is missing in existing neural network models, and helps derive the correct answer.
computer science