In this talk, after a short overview of the history of the discovery of tetra-quarks and penta-quarks, we will discuss a possible interpretation of such states in the framework of a 40-year-old "string junction" picture that allows a unified QCD description of ordinary mesons and baryons as well as multi-quark resonances.
high energy physics phenomenology
We use Ru $L_3$-edge (2838.5 eV) resonant inelastic x-ray scattering (RIXS) to quantify the electronic structure of Ca$_2$RuO$_4$, a layered $4d$-electron compound that exhibits a correlation-driven metal-insulator transition and unconventional antiferromagnetism. We observe a series of Ru intra-ionic transitions whose energies and intensities are well described by model calculations. In particular, we find a $\rm{J}=0\rightarrow 2$ spin-orbit excitation at 320 meV, as well as Hund's-rule driven $\rm{S}=1\rightarrow 0$ spin-state transitions at 750 and 1000 meV. The energy of these three features uniquely determines the spin-orbit coupling, tetragonal crystal-field energy, and Hund's rule interaction. The parameters inferred from the RIXS spectra are in excellent agreement with the picture of excitonic magnetism that has been devised to explain the collective modes of the antiferromagnetic state. $L_3$-edge RIXS of Ru compounds and other $4d$-electron materials thus enables direct measurements of interaction parameters that are essential for realistic model calculations.
condensed matter
At millimeter wave (mmWave) frequencies, the higher cost and power consumption of hardware components in multiple-input multiple-output (MIMO) systems do not allow beamforming entirely at the baseband with a separate radio frequency (RF) chain for each antenna. In such scenarios, to enable spatial multiplexing, hybrid beamforming, which uses phase shifters to connect a small number of RF chains to a large number of antennas, is a cost-effective and energy-saving alternative. This paper describes our research on fully adaptive transceivers that adapt their behaviour on a frame-by-frame basis, so that a mmWave hybrid MIMO system always operates in the most energy-efficient manner. An exhaustive-search brute-force approach is computationally intensive, so we study fractional programming as a low-cost alternative for solving the energy-efficiency maximization problem. The performance results indicate that the resulting mmWave hybrid MIMO transceiver achieves significantly improved energy efficiency compared to the baseline cases involving analogue-only or digital-only signal processing solutions, and shows performance trade-offs with respect to the brute-force approach.
electrical engineering and systems science
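The abstract above mentions fractional programming as a low-cost route to the energy-efficiency optimum but gives no algorithmic detail. A minimal sketch of Dinkelbach's method, the standard fractional-programming tool for maximizing a rate/power ratio, is given below; the rate and power functions, the inner solver and all parameter values are illustrative placeholders, not the paper's actual hybrid-beamforming design.

import numpy as np

def dinkelbach(rate, power, solve_inner, lam0=0.0, tol=1e-6, max_iter=50):
    """Generic Dinkelbach iteration for maximizing rate(x)/power(x).

    `solve_inner(lam)` must return an x maximizing rate(x) - lam * power(x);
    here it is a caller-supplied placeholder (in the paper's setting it would
    be a hybrid-beamformer design routine).
    """
    lam = lam0
    x = solve_inner(lam)
    for _ in range(max_iter):
        x = solve_inner(lam)
        f = rate(x) - lam * power(x)
        if abs(f) < tol:               # converged: lam equals the optimal ratio
            break
        lam = rate(x) / power(x)       # Dinkelbach update
    return x, lam

# Toy example: maximize log2(1 + p*g) / (p + Pc) over transmit power p in (0, Pmax]
g, Pc, Pmax = 2.0, 1.0, 10.0
rate = lambda p: np.log2(1.0 + p * g)
power = lambda p: p + Pc
grid = np.linspace(1e-6, Pmax, 10000)
solve_inner = lambda lam: grid[np.argmax(rate(grid) - lam * power(grid))]

p_opt, ee_opt = dinkelbach(rate, power, solve_inner)
print(f"optimal power {p_opt:.3f}, energy efficiency {ee_opt:.3f} bit/Joule per Hz")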
The Nelson-Seiberg theorem relates R-symmetries to F-term supersymmetry breaking and provides a guiding rule for new physics model building beyond the Standard Model. A revision of the theorem gives a necessary and sufficient condition for supersymmetry breaking in models with polynomial superpotentials. This work revisits the theorem to include models with nonpolynomial superpotentials. With a generic R-symmetric superpotential, a singularity at the origin of field space implies both R-symmetry breaking and supersymmetry breaking. We give a generalized necessary and sufficient condition for supersymmetry breaking which applies to both perturbative and nonperturbative models.
high energy physics theory
We present a case study for the doubly charged Higgs bosons $H^{\pm\pm}$ pair production in $e^+e^-$ and $pp$ colliders with their subsequent decays to four charged leptons. We consider the Higgs Triplet Model (HTM) not restricted by the custodial symmetry and the Minimal Left-Right Symmetric Model (MLRSM). The models include scalar triplets with different complexity of scalar potentials and, due to experimental restrictions, completely different scales of non-standard triplet vacuum expectation values. In both models, a doubly charged Higgs boson $H^{\pm\pm}$ can acquire a mass of hundreds of gigaelectronvolts, which can be probed at HL-LHC, future $e^+e^-$, and hadron colliders. We take into account a comprehensive set of constraints on the parameters of both models coming from neutrino oscillations, LHC, $e^+e^-$ and low-energy lepton flavour violating data, and assume the same mass of $H^{\pm\pm}$. Our finding is that the $H^{\pm\pm}$ pair production in lepton and hadron colliders is comparable in both models, though more pronounced in MLRSM. We show that the decay branching ratios can be different within both models, leading to distinguishable four-lepton signals, and that the strongest are the $4\mu$ events yielded by MLRSM. Typically we find that MLRSM signals are one order of magnitude larger than in HTM. For example, the $pp \to 4\mu$ MLRSM signal for 1 TeV $H^{\pm \pm}$ mass results in a clearly detectable significance of $S \simeq 11$ for HL-LHC and $S \simeq 290$ for FCC-hh. Finally, we provide quantitative predictions for the dilepton invariant mass distributions and lepton separations which help to identify non-standard signals.
high energy physics phenomenology
In order to monitor the state of large-scale infrastructures, image acquisition by autonomous flying drones is efficient for obtaining stable-angle, high-quality images. Supervised learning requires a large data set consisting of images and annotation labels, and it takes a long time to accumulate images and to identify the damaged regions of interest (ROIs). In recent years, unsupervised deep learning approaches to anomaly detection, such as generative adversarial networks (GANs), have progressed. When a damaged image is fed to the generator, the generated image tends to revert from the damaged state to the healthy state. Using the distance between the real damaged image and the generated reverse-aging healthy-state fake image, it is possible to detect concrete damage automatically with unsupervised learning. This paper proposes an anomaly detection method using unpaired image-to-image translation that maps damaged images to reverse-aging fakes approximating healthy conditions. We apply our method to field studies and examine its usefulness for health monitoring of concrete damage.
electrical engineering and systems science
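As a loose illustration of the scoring step described above (distance between a damaged image and its generated "healthy" counterpart), here is a framework-agnostic sketch; the generator, the optional feature extractor, the weights and the decision threshold are all assumed placeholders rather than the paper's trained components.

import numpy as np

def anomaly_score(image, generator, weight_pixel=0.9, weight_feature=0.1, feature_fn=None):
    """Score an image by how far it lies from its generated healthy counterpart.

    `generator` maps a (possibly damaged) image to an approximately healthy fake;
    `feature_fn` optionally maps images to a feature space (e.g. a discriminator
    layer). Both are assumed to be pre-trained and are placeholders here.
    """
    fake_healthy = generator(image)
    pixel_dist = np.mean(np.abs(image - fake_healthy))        # pixel-wise L1 distance
    if feature_fn is not None:
        feat_dist = np.mean(np.abs(feature_fn(image) - feature_fn(fake_healthy)))
    else:
        feat_dist = 0.0
    return weight_pixel * pixel_dist + weight_feature * feat_dist

# Usage sketch: flag an image as damaged when its score exceeds a threshold
# chosen on held-out healthy images (generator/feature_fn are hypothetical names).
# score = anomaly_score(img, generator=trained_G, feature_fn=discriminator_features)
# is_damaged = score > threshold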
We present the discovery that ASASSN-14ko is a periodically flaring AGN at the center of the galaxy ESO 253-G003. At the time of its discovery by the All-Sky Automated Survey for Supernovae (ASAS-SN), it was classified as a supernova close to the nucleus. The subsequent six years of V- and g-band ASAS-SN observations reveal that ASASSN-14ko has nuclear flares occurring at regular intervals. The seventeen observed outbursts show evidence of a decreasing period over time, with a mean period of $P_0 = 114.2 \pm 0.4$ days and a period derivative of $\dot{P} = -0.0017\pm0.0003$. The most recent outburst in May 2020, which took place as predicted, exhibited spectroscopic changes during the rise and had a UV-bright, blackbody spectral energy distribution similar to tidal disruption events (TDEs). The X-ray flux decreased by a factor of 4 at the beginning of the outburst and then returned to its quiescent flux after ~8 days. TESS observed an outburst during Sectors 4-6, revealing a rise time of $5.60 \pm 0.05$ days in the optical and a decline that is best fit with an exponential model. We discuss several possible scenarios to explain ASASSN-14ko's periodic outbursts, but currently favor a repeated partial TDE. The next outbursts should peak in the optical on UT 2020-09-7.4$ \pm $1.1 and UT 2020-12-26.5$ \pm $1.4.
astrophysics
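The outburst ephemeris implied by the quoted mean period and period derivative can be propagated forward with a standard quadratic timing model; the sketch below only illustrates that arithmetic, and the reference epoch t0 is a placeholder since the abstract does not state it.

import numpy as np

def outburst_times(t0, P0=114.2, Pdot=-0.0017, n_max=5):
    """Quadratic ephemeris for a constant period derivative:
    t_n = t0 + P0*n + 0.5*P0*Pdot*n**2, with times and P0 in days.
    t0 is a placeholder reference epoch (e.g. the peak of a known outburst
    expressed in MJD); it is not quoted in the abstract.
    """
    n = np.arange(1, n_max + 1)
    return t0 + P0 * n + 0.5 * P0 * Pdot * n ** 2

# Example: epochs of the next few outbursts relative to a reference peak at t0 = 0,
# roughly [114.1, 228.0, 341.7, 455.2, 568.6] days.
print(outburst_times(t0=0.0))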
The development of dual-functional radar-communication (DFRC) systems, where vehicle localization and tracking can be combined with vehicular communication, will lead to more efficient future vehicular networks. In this paper, we develop a predictive beamforming scheme in the context of DFRC systems. We consider a system model where the road-side unit estimates and predicts the motion parameters of vehicles based on the echoes of the DFRC signal. Compared to conventional feedback-based beam tracking approaches, the proposed method can reduce the signaling overhead and improve the accuracy. To accurately estimate the motion parameters of vehicles in real time, we propose a novel message passing algorithm based on factor graphs, which yields a near-optimal solution to the maximum a posteriori estimation. The beamformers are then designed based on the predicted angles for establishing the communication links. With the employment of appropriate approximations, all messages on the factor graph can be derived in closed form, thus reducing the complexity. Simulation results show that the proposed DFRC-based beamforming scheme is superior to the feedback-based approach in terms of both estimation and communication performance. Moreover, the proposed message passing algorithm achieves a performance similar to that of high-complexity particle-based methods.
electrical engineering and systems science
We apply recently constructed functional bases to the numerical conformal bootstrap for 1D CFTs. We argue and show that numerical results in this basis converge much faster than in the traditional derivative basis. In particular, truncations of the crossing equation with even a handful of components can lead to extremely accurate results, as opposed to the hundreds of components needed in the usual approach. We explain how this is a consequence of the functional basis correctly capturing the asymptotics of bound-saturating extremal solutions to crossing. We discuss how these methods can and should be implemented in higher dimensional applications.
high energy physics theory
A linear system is a pair $(P,\mathcal{L})$ where $\mathcal{L}$ is a family of subsets of a finite ground set $P$ such that $|l\cap l^\prime|\leq 1$ for every $l,l^\prime \in \mathcal{L}$. If every element of $\mathcal{L}$ of a linear system $(P,\mathcal{L})$ has exactly $r$ points, then the linear system is called an $r$-uniform linear system. The transversal number of a linear system $(P,\mathcal{L})$, $\tau(P,\mathcal{L})$, is the minimum cardinality of a subset $\hat{P}\subseteq P$ satisfying $l\cap\hat{P}\neq\emptyset$ for every $l\in\mathcal{L}$. The 2-packing number of a linear system $(P,\mathcal{L})$, $\nu_2(P,\mathcal{L})$, is the maximum cardinality of a subset $R\subseteq\mathcal{L}$ such that no three elements of $R$ have a common point (they are triplewise disjoint), that is, if any three elements are chosen in $R$, then they are not incident to a common point. For $r\geq2$, let $(P,\mathcal{L})$ be an $r$-uniform linear system. In "{\sc M. A. Henning and A. Yeo:} {\it Hypergraphs with large transversal number,} Discrete Math. {\bf 313} (2013), no. 8, 959--966," Henning and Yeo posed the following question: Is it true that if $(P,\mathcal{L})$ is an $r$-uniform linear system then $\tau(P,\mathcal{L})\leq\displaystyle\frac{|P|+|\mathcal{L}|}{r+1}$ holds for all $r\geq2$? In this note, we give some results on $r$-uniform linear systems with fixed 2-packing number that satisfy the inequality.
mathematics
Dense 3D visual mapping estimates as many pixel depths as possible for each image. This results in very dense point clouds that often contain redundant and noisy information, especially for surfaces that are roughly planar, for instance, the ground or the walls in the scene. In this paper we leverage semantic image segmentation to discriminate which regions of the scene require simplification and which should be kept at a high level of detail. We propose four different point cloud simplification methods which decimate the perceived point cloud by relying on class-specific local and global statistics, while maintaining more points in the proximity of class boundaries to preserve the inter-class edges and discontinuities. The dense 3D model is obtained by fusing the point clouds in a 3D Delaunay triangulation to deal with variable point cloud density. In the experimental evaluation we show that, by leveraging semantics, it is possible to simplify the model and diminish the noise affecting the point clouds.
computer science
Due to its high spectral efficiency and power efficiency, the continuous phase modulation (CPM) technique with constant envelope is widely used in aeronautical telemetry for strategic weapons and rockets, which are essential for national defence and aeronautic applications. How to improve the bit error rate (BER) performance of CPM while keeping a reasonable complexity is key for the entire telemetry system and has been the focus of research and engineering design. In this paper, a low-complexity noncoherent maximum likelihood sequence detection (MLSD) scheme for CPM is proposed. In the proposed method, the criterion of noncoherent MLSD for CPM is derived for the case of unknown carrier phase, and a novel Viterbi algorithm (VA) with a modified state vector is then designed to simplify the implementation of noncoherent MLSD. Both analysis and experimental results show that the proposed approach has lower computational complexity and does not need accurate carrier phase recovery, which overcomes the shortcomings of the traditional MLSD method. Moreover, compared to the traditional MLSD method, the proposed method achieves almost the same detection performance.
electrical engineering and systems science
A polynomial-in-time growth bound is established for global Sobolev $H^s(\mathbb T)$ solutions to the derivative nonlinear Schr\"odinger equation on the circle with $s>1$. These bounds are derived as a consequence of a nonlinear smoothing effect for an appropriate gauge-transformed version of the periodic Cauchy problem, according to which a solution with its linear part removed possesses higher spatial regularity than the initial datum associated with that solution.
mathematics
Complex oxide heterointerfaces and van der Waals heterostructures present two versatile but intrinsically different platforms for exploring emergent quantum phenomena and designing new functionalities. The rich opportunity offered by the synergy between these two classes of materials, however, is yet to be charted. Here, we report an unconventional nonlinear optical filtering effect resulting from the interfacial polar alignment between monolayer MoS$_2$ and a neighboring ferroelectric oxide thin film. The second harmonic generation response at the heterointerface is either substantially enhanced or almost entirely quenched by an underlying ferroelectric domain wall depending on its chirality, and can be further tailored by the polar domains. Unlike the extensively studied coupling mechanisms driven by charge, spin, and lattice, the interfacial tailoring effect is solely mediated by the polar symmetry, as well explained via our density functional theory calculations, pointing to a new material strategy for the functional design of nanoscale reconfigurable optical applications.
condensed matter
Diffractive lenses have recently been applied to the domain of multispectral imaging in the X-ray and UV regimes where they can achieve very high resolution as compared to reflective and refractive optics. Conventionally, spectral components are reconstructed by taking measurements at the focal planes. However, the reconstruction quality can be improved by optimizing the measurement configuration. In this work, we adapt a sequential backward selection algorithm to search for a configuration which minimizes expected reconstruction error. By approximating the forward system as a circular convolution and making assumptions on the source and noise, we greatly reduce the complexity of the algorithm. Numerical results show that the configuration found by the algorithm significantly improves the reconstruction performance compared to a standard configuration.
electrical engineering and systems science
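The abstract adapts a sequential backward selection algorithm to choose the measurement configuration; below is a generic greedy-elimination sketch of that idea, with the expected-error metric left as a caller-supplied placeholder instead of the paper's specific reconstruction-error model and circular-convolution approximation.

def sequential_backward_selection(candidates, k, expected_error):
    """Greedily remove one candidate measurement at a time until only k remain.

    `candidates` is a list of candidate measurement positions and
    `expected_error(config)` returns the expected reconstruction error of a
    configuration; both are placeholders standing in for the paper's model.
    """
    config = list(candidates)
    while len(config) > k:
        # drop the element whose removal hurts the expected error the least
        best = min(range(len(config)),
                   key=lambda i: expected_error(config[:i] + config[i + 1:]))
        config.pop(best)
    return config

# Toy usage: pick 3 out of 10 planes with a made-up error function.
planes = list(range(10))
err = lambda cfg: sum((p - 4.5) ** 2 for p in cfg) / max(len(cfg), 1)
print(sequential_backward_selection(planes, 3, err))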
A phase transition is often accompanied by the appearance of an order parameter and symmetry breaking. Certain magnetic materials exhibit exotic hidden-order phases, in which the order parameters are not directly accessible to conventional magnetic measurements. Thus, experimental identification and theoretical understanding of a hidden order are difficult. Here we combine neutron scattering and thermodynamic probes to study the newly discovered rare-earth triangular-lattice magnet TmMgGaO$_4$. Clear magnetic Bragg peaks at K points are observed in the elastic neutron diffraction measurements. More interesting, however, is the observation of sharp and highly dispersive spin excitations that cannot be explained by a magnetic dipolar order, but instead is the direct consequence of the underlying multipolar order that is "hidden" in the neutron diffraction experiments. We demonstrate that the observed unusual spin correlations and thermodynamics can be accurately described by a transverse field Ising model on the triangular lattice with an intertwined dipolar and ferro-multipolar order.
condensed matter
We present a simple nonlinear digital pre-distortion (DPD) of optical transmitter components, which consists of concatenated blocks of a finite impulse response (FIR) filter, a memoryless nonlinear function, and another FIR filter. The model is a Wiener-Hammerstein (WH) model and has essentially the same structure as neural networks or multilayer perceptrons. This awareness enables one to achieve complexity-efficient DPD owing to the model-aware structure and to exploit the well-developed optimization schemes from the machine learning field. The effectiveness of the method is assessed by electrical and optical back-to-back (B2B) experiments, and the results show that the WH DPD offers a 0.52-dB gain in signal-to-noise ratio (SNR) and a 6.0-dB gain in optical modulator output power at a fixed SNR over linear-only DPD.
electrical engineering and systems science
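A minimal numpy sketch of the FIR - memoryless nonlinearity - FIR cascade (Wiener-Hammerstein model) described above; the tap values and the odd-order polynomial nonlinearity are illustrative placeholders, and in practice all blocks would be fitted jointly, e.g. with gradient-based optimization as the abstract suggests.

import numpy as np

def wiener_hammerstein(x, h1, h2, poly_coeffs):
    """FIR -> memoryless nonlinearity -> FIR cascade (a Wiener-Hammerstein model).

    x           : complex baseband input samples
    h1, h2      : taps of the input and output FIR filters
    poly_coeffs : odd-order polynomial coefficients of the memoryless nonlinearity,
                  applied as sum_k c_k * v * |v|**(2k)
    """
    v = np.convolve(x, h1, mode="same")                      # first FIR block
    w = sum(c * v * np.abs(v) ** (2 * k)                     # memoryless nonlinearity
            for k, c in enumerate(poly_coeffs))
    y = np.convolve(w, h2, mode="same")                      # second FIR block
    return y

# Toy usage with placeholder parameters (not fitted values from the paper)
x = np.exp(1j * 2 * np.pi * 0.01 * np.arange(256))
y = wiener_hammerstein(x, h1=[0.95, 0.05], h2=[1.0, -0.1], poly_coeffs=[1.0, -0.05])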
The energy landscape for Low-Voltage (LV) networks is beginning to change, driven by the increasing penetration of renewables and the predicted increase of electric vehicles charging at home. The previously passive `fit-and-forget' approach to LV network management will be insufficient to ensure effective operation. A more adaptive approach is required that includes the prediction of risk and capacity of the circuits. Many of the proposed methods require full observability of the networks, motivating the installation of smart meters and advanced metering infrastructure in many countries. However, the expectation of `perfect data' is unrealistic in operational reality. Smart meter (SM) roll-out has its own issues, which may result in a low likelihood of full SM coverage for all LV networks. This, together with privacy requirements that limit the availability of high-granularity demand power data, has resulted in the low uptake of many of the presented methods. To address this issue, a Deep Learning Neural Network is proposed to predict the voltage distribution with partial SM coverage. The results show that SM measurements from key locations are sufficient for effective prediction of the voltage distribution.
computer science
A new parton shower algorithm has been presented with the claim of providing soft-gluon resummation at `full colour' (arXiv:2001.11492). In this paper we show that the algorithm does not succeed in this goal. We show that full colour accuracy requires the Sudakov factors to be defined at amplitude level and that the simple parton-shower unitarity argument employed in arXiv:2001.11492 is not sufficient.
high energy physics phenomenology
We use the local load sharing fiber bundle model to demonstrate a shielding effect where strong fibers protect weaker ones. This effect exists due to the local stress enhancement around broken fibers in the local load sharing model, and it is therefore not present in the equal load sharing model. The shielding effect is prominent only after the initial disorder-driven part of the fracture process has finished, and if the fiber bundle has not reached catastrophic failure by this point, then the shielding increases the critical damage of the system, compared to equal load sharing. In this sense, the local stress enhancement may make the fracture process more stable, but at the cost of reduced critical force.
condensed matter
Quantum Fourier transform (QFT) is a key ingredient of many quantum algorithms. In typical applications such as phase estimation, a considerable number of ancilla qubits and gates are used to form a Hilbert space large enough for high-precision results. Qubit recycling reduces the number of ancilla qubits to one, but it is only applicable to semi-classical QFT and requires repeated measurements and feedforward within the coherence time of the qubits. In this work, we explore a novel approach based on resonators that forms a high-dimensional Hilbert space for the realization of QFT. By employing the perfect state-transfer method, we map an unknown multi-qubit state to a single resonator, and obtain the QFT state in the second oscillator through cross-Kerr interaction and projective measurement. A quantitative analysis shows that our method allows for high-dimensional and fully-quantum QFT employing state-of-the-art superconducting quantum circuits. This paves the way for implementing various QFT related quantum algorithms.
quantum physics
The uncertainty principle is one of the most comprehensive and fundamental concepts in quantum theory. This principle states that it is not possible to simultaneously measure two incompatible observables with high accuracy. The uncertainty principle has been formulated in various forms. The most famous type of uncertainty relation is expressed in terms of the standard deviation of observables. In quantum information theory the uncertainty principle can be formulated using Shannon and von Neumann entropy. The entropic uncertainty relation in the presence of quantum memory is one of the most useful entropic uncertainty relations. Due to their importance and scalability, solid-state systems have received considerable attention nowadays. In this work we consider a quantum dot system as a solid-state system. We study the quantum correlation and the quantum-memory-assisted entropic uncertainty in this type of system. We show that the temperature of the quantum dot system can affect the quantum correlation and the entropic uncertainty bound. It is observed that the entropic uncertainty bound decreases with decreasing temperature and that quantum correlations decrease with increasing temperature.
quantum physics
The ANAIS (Annual modulation with NaI(Tl) Scintillators) experiment aims at the confirmation or refutation of the DAMA/LIBRA positive annual modulation signal in the low-energy detection rate, using the same target and technique, at the Canfranc Underground Laboratory (LSC) in Spain. ANAIS-112, consisting of nine 12.5 kg NaI(Tl) modules produced by Alpha Spectra Inc. in a 3x3 matrix configuration, has been taking data smoothly in "dark matter search" mode since August 2017, after a commissioning phase and operation of the first detectors during the last years in various setups. A large effort has been carried out within ANAIS to characterize the background of the sodium iodide detectors, before unblinding the data and performing the first annual modulation analysis. Here, the background models developed for all nine ANAIS-112 detectors are presented. Measured spectra from threshold to high energy in different conditions are well described by the models, which are based on quantified activities independently estimated following several approaches. In the region from 1 to 6 keVee the measured, efficiency-corrected background level is 3.58+-0.02 keV-1 kg-1 day-1; NaI crystal bulk contamination is the dominant background source, with 210Pb, 40K, 22Na and 3H contributions being the most relevant ones. This background level, together with the achieved 1 keVee analysis threshold (thanks to the outstanding light collection and robust filtering procedures developed), allows ANAIS-112 to be sensitive to the modulation amplitude measured by DAMA/LIBRA, and able to explore at the three sigma level in 5 years the WIMP parameter region singled out by this experiment.
astrophysics
High-temperature alloy design requires a concurrent consideration of multiple mechanisms at different length scales. We propose a workflow that couples highly relevant physics into machine learning (ML) to predict properties of complex high-temperature alloys, with the yield strength of 9-12 wt.% Cr steels as an example. We have incorporated synthetic alloy features that capture microstructure and phase transformations into the dataset. The high-impact features affecting the yield strength of 9Cr steels identified from correlation analysis agree well with generally accepted strengthening mechanisms. As part of the verification process, the consistency of sub-datasets has been extensively evaluated with respect to temperature and then refined for the boundary conditions of the trained ML models. The predicted yield strength of 9Cr steels using the ML models is in excellent agreement with experiments. The current approach introduces physically meaningful constraints in interrogating the trained ML models to predict properties of hypothetical alloys when applied to data-driven materials design.
condensed matter
We propose a Bayesian nonparametric matrix clustering approach to analyze the latent heterogeneity structure in the shot selection data collected from professional basketball players in the National Basketball Association (NBA). The proposed method adopts a mixture of finite mixtures framework and fully utilizes the spatial information via a mixture of matrix normal distribution representation. We propose an efficient Markov chain Monte Carlo algorithm for posterior sampling that allows simultaneous inference on both the number of clusters and the cluster configurations. We also establish large sample convergence properties for the posterior distribution. The excellent empirical performance of the proposed method is demonstrated via simulation studies and an application to shot chart data from selected players in the 2017-18 NBA regular season.
statistics
In three dimensions, it is known that field theories possessing extended $(p,q)$ anti-de Sitter (AdS) supersymmetry with ${\cal N}=p+q \geq 3$ can be realised in (2,0) AdS superspace. Here we present a formalism to reduce every field theory with (2,0) AdS supersymmetry to ${\cal N}=1$ AdS superspace. As nontrivial examples, we consider supersymmetric nonlinear sigma models formulated in terms of ${\cal N}=2$ chiral and linear supermultiplets. The $(2,0) \to (1,0)$ AdS reduction technique is then applied to the off-shell massless higher-spin supermultiplets in (2,0) AdS superspace constructed in [1]. As a result, for each superspin value $\hat s$, integer $(\hat s= s)$ or half-integer $(\hat s= s+1/2)$, with $s=1,2,\dots $, we obtain two off-shell formulations for a massless ${\cal N}=1$ superspin-$\hat s$ multiplet in AdS${}_3$. These models prove to be related to each other by a superfield Legendre transformation in the flat superspace limit, but the duality is not lifted to the AdS case. Two out of the four series of ${\cal N}=1$ supersymmetric higher-spin models thus derived are new. The constructed massless ${\cal N}=1$ supersymmetric higher-spin actions in AdS${}_3$ are used to formulate (i) higher-spin supercurrent multiplets in ${\cal N}=1$ AdS superspace; and (ii) new topologically massive higher-spin off-shell supermultiplets. Examples of ${\cal N}=1$ higher-spin supercurrents are given for models of a complex scalar supermultiplet. We also present two new off-shell formulations for a massive ${\cal N}=1$ gravitino supermultiplet in AdS${}_3$.
high energy physics theory
Thermal emission is the radiation of electromagnetic waves from hot objects. The promise of thermal-emission engineering for applications in energy harvesting, radiative cooling, and thermal camouflage has recently led to renewed research interest in this topic. There is a substantial need for accurate and precise measurement of thermal emission in a laboratory setting, which can be challenging in part due to the presence of background emission from the surrounding environment and the measurement instrument itself. This is especially true for measurements of emitters at temperatures close to that of the environment, where the impact of background emission is relatively large. In this paper, we describe, recommend, and demonstrate general procedures for thermal-emission measurements that are applicable to most experimental conditions, including less-common and more-challenging cases that include thermal emitters with temperature-dependent emissivity and emitters that are not in thermal equilibrium.
physics
X-ray Computed Tomography (CT) is widely used in clinical applications such as diagnosis and image-guided interventions. In this paper, we propose a new deep learning based model for CT image reconstruction, with the backbone network architecture built by unrolling an iterative algorithm. However, unlike the existing strategy of including as many data-adaptive components in the unrolled dynamics model as possible, we find that it is enough to learn only the parts where traditional designs mostly rely on intuition and experience. More specifically, we propose to learn an initializer for the conjugate gradient (CG) algorithm involved in one of the subproblems of the backbone model. Other components, such as image priors and hyperparameters, are kept as in the original design. Since a hypernetwork is introduced to infer the initialization of the CG module, the proposed model is a certain meta-learning model. Therefore, we call the proposed model the meta-inversion network (MetaInv-Net). The proposed MetaInv-Net can be designed with far fewer trainable parameters while still preserving superior image reconstruction performance compared to some state-of-the-art deep models in CT imaging. In simulated and real data experiments, MetaInv-Net performs very well and can be generalized beyond the training setting, i.e., to other scanning settings, noise levels, and data sets.
electrical engineering and systems science
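The central idea above is to learn the initialization of the conjugate-gradient (CG) subproblem solver while keeping the rest of the unrolled scheme fixed. The plain CG routine below, which accepts an arbitrary initial guess, shows where such a predicted initialization would enter; the hypernetwork that would produce x0 in MetaInv-Net is not reproduced here, and the toy linear system is purely illustrative.

import numpy as np

def conjugate_gradient(A, b, x0, n_iter=20, tol=1e-8):
    """Solve A x = b (A symmetric positive definite) starting from x0.

    In a MetaInv-Net-style unrolled scheme, x0 would be predicted by a small
    learned network (a hypernetwork) instead of being the zero vector;
    here x0 is simply an argument.
    """
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# A good initializer reaches a given accuracy in fewer iterations than a cold start.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)); A = M @ M.T + 50 * np.eye(50)
x_true = rng.standard_normal(50); b = A @ x_true
x_cold = conjugate_gradient(A, b, np.zeros(50), n_iter=5)
x_warm = conjugate_gradient(A, b, x_true + 0.01 * rng.standard_normal(50), n_iter=5)
print(np.linalg.norm(x_cold - x_true), np.linalg.norm(x_warm - x_true))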
The non-relativistic quantum mechanics with the generalized uncertainty principle (GUP) is examined when the potential is one-dimensional $\delta-$function. It is shown that unlike usual quantum mechanics, the Schr\"{o}dinger and Feynman's path-integral approaches are inequivalent at the first order of GUP parameter.
quantum physics
Metal nanoparticle (NP) surface coatings are known to significantly enhance the ultra-violet luminescence intensity of ZnO. Although there is general agreement that resonantly excited Localized Surface Plasmons (LSPs) in metal NPs can directly couple to excitons in the semiconductor increasing their spontaneous emission rate, the exact mechanisms involved in this phenomenon are currently not fully understood. In this work, LSP-exciton coupling in a ZnO single crystal and ZnO nanorods coated with a 2 nm Al layer has been investigated using correlative photoluminescence and depth-resolved cathodoluminescence and time-resolved photoluminescence spectroscopy. Temperature-resolved cathodoluminescence and photoluminescence measurements from 10 K to 250 K show enhancement factors up to 12 times of the free exciton (FX) emission at 80 K. The FX couple more efficiently to the LSPs in Al compared to the localized donor-bound excitons. Furthermore, a strong polarization dependence of the LSPs with respect to the FX was observed with higher enhanced FX transitions polarized in the same direction as the electric field of the incident excitation. These results indicate that selective enhancement of the ultra-violet excitonic PL in ZnO can be achieved by careful alignment of the crystallographic axes of the ZnO relative to the electric vector of the excitation source.
physics
Generating natural language under complex constraints is a principled formulation towards controllable text generation. We present a framework to allow specification of combinatorial constraints for sentence generation. We propose TSMH, an efficient method to generate high likelihood sentences with respect to a pre-trained language model while satisfying the constraints. Our approach is highly flexible, requires no task-specific training, and leverages efficient constraint satisfaction solving techniques. To better handle the combinatorial constraints, a tree search algorithm is embedded into the proposal process of the Markov chain Monte Carlo (MCMC) to explore candidates that satisfy more constraints. Compared to existing MCMC approaches, our sampling approach has a better mixing performance. Experiments show that TSMH achieves consistent and significant improvement on multiple language generation tasks.
computer science
The eigensolutions of many-body quantum systems are always difficult to compute. The envelope theory is a method to easily obtain approximate, but reliable, solutions in the case of identical particles. It is extended here to treat systems with different particles (bosons or fermions). The accuracy is tested for several systems composed of identical particles plus a different one.
quantum physics
We propose ConVoice, a neural network for zero-shot voice conversion (VC) without any parallel or transcribed data. Our approach uses pre-trained models for automatic speech recognition (ASR) and speaker embedding, the latter obtained from a speaker verification task. Our model is fully convolutional and non-autoregressive except for a small pre-trained recurrent neural network for speaker encoding. ConVoice can convert speech of any length without compromising quality due to its convolutional architecture. Our model has quality comparable to similar state-of-the-art models while being extremely fast.
electrical engineering and systems science
Given the importance of the dynamic contact angle, its numerous applications, and the complexity and difficulty of use of available approaches, here we present new simple semi-empirical models for estimating the dependence of the dynamic contact angle on the wetting line velocity. These models should be applicable to any geometry, a very large range of capillary numbers and static contact angles, and all solid-liquid-gas systems, without requiring further experiments. Two simple equations are intuitively derived from the most promising theoretical dynamic contact angle models, the hydrodynamic and the molecular-kinetic models. Then these models, along with the basic form of the Hoffman model, are fitted to a large pool of data. The data are extracted from numerous studies and cover over 5 decades of capillary number, include static contact angles up to the superhydrophobic region, and comprise various geometries. The resulting models are compared to each other, and the hydrodynamic model's predictions are found to be superior to the other two models by all statistical measures. Then, noting that the molecular forces become dominant at lower capillary numbers, we separate the data into low and high capillary regions and repeat the process. There was only a minuscule difference between the results obtained for the general models and the high-capillary models ($\textrm{Ca}>10^{-4}$), but the empirical approach resulted in the most accurate model at low capillary numbers ($\textrm{Ca}<10^{-4}$).
physics
Mixture modelling using elliptical distributions promises enhanced robustness, flexibility and stability over the widely employed Gaussian mixture model (GMM). However, existing studies based on the elliptical mixture model (EMM) are restricted to several specific types of elliptical probability density functions, which are not supported by general solutions or systematic analysis frameworks; this significantly limits the rigour and the power of EMMs in applications. To this end, we propose a novel general framework for estimating and analysing the EMMs, achieved through Riemannian manifold optimisation. First, we investigate the relationships between Riemannian manifolds and elliptical distributions, and the so established connection between the original manifold and a reformulated one indicates a mismatch between those manifolds, the major cause of failure of the existing optimisation for solving general EMMs. We next propose a universal solver which is based on the optimisation of a re-designed cost and prove the existence of the same optimum as in the original problem; this is achieved in a simple, fast and stable way. We further calculate the influence functions of the EMM as theoretical bounds to quantify robustness to outliers. Comprehensive numerical results demonstrate the ability of the proposed framework to accommodate EMMs with different properties of individual functions in a stable way and with fast convergence speed. Finally, the enhanced robustness and flexibility of the proposed framework over the standard GMM are demonstrated both analytically and through comprehensive simulations.
computer science
We illustrate how our recent light-front approach simplifies relativistic electrodynamics with an electromagnetic (EM) field $F^{\mu\nu}$ that is the sum of a (even very intense) plane travelling wave $F_t^{\mu\nu}(ct\!-\!z)$ and a static part $F_s^{\mu\nu}(x,y,z)$; it adopts the light-like coordinate $\xi=ct\!-\!z$ instead of time $t$ as an independent variable. This can be applied to several cases of extreme acceleration, both in vacuum and in a cold diluted plasma hit by a very short and intense laser pulse (slingshot effect, plasma wave-breaking and laser wake-field acceleration, etc.)
physics
In this work we study how entanglement of purification (EoP) and the new quantity of "complexity of purification" are related to each other using the $E_P=E_W$ conjecture. First, we consider two strips in the same side of a boundary and study the relationships between the entanglement of purification of this mixed state and the parameters of the system such as dimension, temperature, length of the strips and the distance between them. Next, using the same setup, we introduce two definitions for the complexity of mixed states, complexity of purification (CoP) and the interval volume (VI). We study their connections to other parameters similar to the EoP case. Then, we extend our study to more general examples of BTZ black holes solution in massive gravity, charged black holes and multipartite systems. Finally, we give various interpretations of our results using resource theories such as LOCC and also bit thread picture.
high energy physics theory
A finite sequence of equidistant samples (a sample train) of a periodic signal can be identified with a point in a multi-dimensional space. Such a point depends on the sampled signal, the sampling period, and the starting time of the sequence. If the starting time varies, then the corresponding point moves along a closed curve. We prove that such a curve, i.e., the set of all sample trains of a given length, determines the period of the sampled signal, provided that the sampling period is known. This is true even if the trains are short, and if the samples comprising trains are taken at a sub-Nyquist rate. The presented result is proved with a help of the theory of rotation numbers developed by Poincar\'e. We also prove that the curve of sample trains determines the sampled signal up to a time shift, provided that the ratio of the sampling period to the period of the signal is irrational. Eventually, we give an example which shows that the assumption on incommensurability of the periods cannot be dropped.
electrical engineering and systems science
Artificial atom qubits in diamond have emerged as leading candidates for a range of solid-state quantum systems, from quantum sensors to repeater nodes in memory-enhanced quantum communication. Inversion-symmetric group IV vacancy centers, comprised of Si, Ge, Sn and Pb dopants, hold particular promise as their neutrally charged electronic configuration results in a ground-state spin triplet, enabling long spin coherence above cryogenic temperatures. However, despite the tremendous interest in these defects, a theoretical understanding of the electronic and spin structure of these centers remains elusive. In this context, we predict the ground- and excited-state properties of the neutral group IV color centers from first principles. We capture the product Jahn-Teller effect found in the excited state manifold to second order in electron-phonon coupling, and present a non-perturbative treatment of the effect of spin-orbit coupling. Importantly, we find that spin-orbit splitting is strongly quenched due to the dominant Jahn-Teller effect, with the lowest optically-active $^3E_u$ state weakly split into $m_s$-resolved states. The predicted complex vibronic spectra of the neutral group IV color centers are essential for their experimental identification and have key implications for use of these systems in quantum information science.
quantum physics
Waste heat management becomes very important with increasing energy demand and limited fossil resources. Here, we demonstrate the thermoelectric performance of allotropic TeSe2. Based on first-principles calculations, we confirm the energetic and kinetic stability of five TeSe2 allotropes. We predict $\delta$-TeSe2 to be a new direct band gap semiconductor with a 1.60 eV direct band gap. All the TeSe2 allotropes exhibit band gaps in the UV-Vis region. The structural phases are clearly distinguished using simulated scanning tunneling microscopy. The room-temperature Seebeck coefficient reaches a maximum of 4 V/K for $\delta$-TeSe2. We show that the room-temperature thermoelectric figure of merit (ZT) can reach up to 3.1 with p-type doping in $\delta$-TeSe2. Moreover, temperature and chemical potential tuning extends the thermoelectric performance of the TeSe2 allotropes. We strongly believe that our study is compelling from an experimental perspective and holds a key towards the fabrication of thermoelectric devices based on TeSe2.
condensed matter
Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to the automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings; and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results expose that the proposed approach is able to generate high-quality medical reports through integrating the structure information.
computer science
We calculate the decay width of the photon splitting into three photons in a model of quantum electrodynamics with broken Lorentz invariance. We show that this process can lead to a cut-off in the very-high-energy part of the photon spectra of astrophysical sources. We obtain the 95\% CL bound on the Lorentz violating mass scale for photons from the analysis of the very-high-energy part of the Crab Nebula spectrum obtained by HEGRA. This bound improves previous constraints by more than an order of magnitude.
high energy physics phenomenology
Dynamic scene deblurring is a challenging problem in computer vision. It is difficult to accurately estimate the spatially varying blur kernel with traditional methods. Data-driven methods usually employ kernel-free end-to-end mapping schemes, which are apt to overlook kernel estimation. To address this issue, we propose a blur-attention module to dynamically capture the spatially varying features of non-uniformly blurred images. The module consists of a DenseBlock unit and a spatial attention unit with multi-pooling feature fusion, which can effectively extract complex spatially varying blur features. We design a multi-level residual connection structure to connect multiple blur-attention modules to form a blur-attention network. By introducing the blur-attention network into a conditional generative adversarial framework, we propose an end-to-end blind motion deblurring method, namely Blur-Attention-GAN (BAG), for a single image. Our method can adaptively select the weights of the extracted features according to the spatially varying blur features and dynamically restore the images. Experimental results show that the deblurring capability of our method achieves outstanding objective performance in terms of PSNR, SSIM, and subjective visual quality. Furthermore, by visualizing the features extracted by the blur-attention module, comprehensive discussions of its effectiveness are provided.
electrical engineering and systems science
We present a large scale data set, OpenEDS: Open Eye Dataset, of eye images captured using a virtual-reality (VR) head-mounted display mounted with two synchronized eye-facing cameras at a frame rate of 200 Hz under controlled illumination. This dataset is compiled from video capture of the eye region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye regions: iris, pupil and sclera; (ii) 252,690 unlabelled eye images; (iii) 91,200 frames from randomly selected video sequences of 1.5 seconds in duration; and (iv) 143 pairs of left and right point cloud data compiled from corneal topography of eye regions collected from a subset, 143 out of 152, of participants in the study. A baseline experiment has been evaluated on OpenEDS for the task of semantic segmentation of pupil, iris, sclera and background, with a mean intersection-over-union (mIoU) of 98.3%. We anticipate that OpenEDS will create opportunities for researchers in the eye tracking community and the broader machine learning and computer vision community to advance the state of eye tracking for VR applications. The dataset is available for download upon request at https://research.fb.com/programs/openeds-challenge
computer science
Speaker recognition is the process of identifying a speaker based on the voice. The technology has attracted more attention with the recent increase in popularity of smart voice assistants, such as Amazon Alexa. In the past few years, various convolutional neural network (CNN) based speaker recognition algorithms have been proposed and have achieved satisfactory performance. However, convolutional operations are building blocks that typically operate on a local neighborhood at a time and thus fail to capture global, long-range interactions at the feature level, which are critical for understanding the pattern in a speaker's voice. In this work, we propose to apply Non-local Convolutional Neural Networks (NLCNN) to improve the capability of capturing long-range dependencies at the feature level, thereby improving speaker recognition performance. Specifically, we introduce non-local blocks where the output response of a position is computed as a weighted sum of the input features at all positions. Combining non-local blocks with pre-defined CNN networks, we investigate the effectiveness of NLCNN models. Without extensive tuning, the proposed NLCNN models outperform state-of-the-art speaker recognition algorithms on the public Voxceleb dataset. Moreover, we investigate different types of non-local operations applied to the frequency-time domain, time domain, frequency domain and frame level, respectively. Among them, the time domain is the most effective one for speaker recognition applications.
computer science
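The non-local block described above (output at each position computed as a weighted sum over all positions) can be written compactly; here is a minimal numpy sketch of an embedded-Gaussian non-local operation on a feature sequence, with random placeholder projection matrices standing in for weights a trained model would learn.

import numpy as np

def nonlocal_block(x, W_theta, W_phi, W_g, W_z):
    """Embedded-Gaussian non-local operation on a (T, C) feature matrix:
    y_i = sum_j softmax_j(theta(x_i) . phi(x_j)) g(x_j), then z = x + y W_z.
    """
    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g        # (T, C') embeddings
    scores = theta @ phi.T                                  # similarity of every pair of positions
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                 # softmax over all positions j
    y = attn @ g                                            # weighted sum over the whole sequence
    return x + y @ W_z                                      # residual connection

# Toy usage with random placeholder weights (a trained model would learn these)
rng = np.random.default_rng(0)
T, C, Cp = 100, 64, 32
x = rng.standard_normal((T, C))
z = nonlocal_block(x,
                   rng.standard_normal((C, Cp)) / np.sqrt(C),
                   rng.standard_normal((C, Cp)) / np.sqrt(C),
                   rng.standard_normal((C, Cp)) / np.sqrt(C),
                   rng.standard_normal((Cp, C)) / np.sqrt(Cp))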
In 1955, it was first suggested that the Hall effect can be employed for amplification purposes by using semiconductor material with very high mobility. While this idea had limited applicability at that time, it was not entirely discarded, in expectation of eventual progress. We revisit this idea and discuss it in the light of the current literature. This manuscript rekindles this amazing 65-year-old idea and views it with modern understanding, which will aid in realizing Hall amplifiers.
physics
The Mann-Kendall test for trend has gained a lot of attention in a range of disciplines, especially in the environmental sciences. One of the drawbacks of the Mann-Kendall test when applied to real data is that no distinction can be made between meaningful and non-meaningful differences in subsequent observations. We introduce the concept of partial ties, which allows inferences while accounting for (non)meaningful differences. We introduce a modified statistic that accounts for this concept and derive its variance estimator. We also present analytical results for the behavior of the test in a class of contiguous alternatives. Simulation results which illustrate the added value of the test are presented. We apply our extended version of the test to real data concerning blood donation in Europe.
statistics
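For context, the classical Mann-Kendall statistic that the partial-ties modification builds on is easy to state; the sketch below implements only the standard test (no ties correction and no partial-ties extension, which are the paper's contribution).

import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Classical Mann-Kendall trend test (no ties correction, no partial ties)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S under H0, no ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
    return s, z, p

# A monotone series with mild noise gives a strongly significant trend
print(mann_kendall(np.arange(20) + np.random.default_rng(1).normal(0, 0.5, 20)))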
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels. When the training and testing data are under similar distributions, their dominant features are similar, which usually facilitates decent performance on the testing data. This performance is nonetheless not maintained when the model is tested on samples from different distributions, which leads to the challenges of cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels. This process appears to activate feature representations applicable to out-of-domain data without prior knowledge of the new domain and without learning extra network parameters. We present theoretical properties and conditions of RSC for improving cross-domain generalization. The experiments endorse the simple, effective and architecture-agnostic nature of our RSC method.
computer science
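The core RSC step, muting the most dominant (most label-correlated) feature responses during training, can be illustrated in a framework-agnostic way; the numpy sketch below masks the top percentile of gradient-weighted activations and is meant to convey the spirit of the heuristic, not to replicate the authors' exact implementation.

import numpy as np

def rsc_mask(features, grad_wrt_features, drop_percentile=95.0):
    """Zero out the most dominant feature responses of a batch.

    `features` are penultimate-layer activations with shape (batch, units) and
    `grad_wrt_features` is the gradient of the classification loss with respect
    to them; both would come from the training framework's forward/backward pass
    (placeholders here). Units whose gradient-weighted activation exceeds the
    given percentile are muted, forcing the network to rely on the rest.
    """
    saliency = features * grad_wrt_features                  # contribution of each unit
    threshold = np.percentile(saliency, drop_percentile, axis=1, keepdims=True)
    mask = (saliency < threshold).astype(features.dtype)     # 0 for dominant units
    return features * mask

# Usage sketch inside a training step (framework-specific parts omitted, names hypothetical):
# feats, grads = forward_and_backward(batch)
# challenged = rsc_mask(feats, grads)
# recompute the loss on `challenged` features and update the parameters with its gradient.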
The lepton angular distributions of the Drell-Yan process in fixed-target experiments are investigated by NLO and NNLO perturbative QCD. We present the calculated angular parameters $\lambda$, $\mu$, $\nu$ and the degree of violation of the Lam-Tung relation, $1-\lambda-2\nu$, for the NA10, E615 and E866 experiments. Predictions for the ongoing COMPASS and SeaQuest experiments are also presented. The transverse momentum ($q_T$) distributions of $\lambda$ and $\nu$ show a clear dependence on the dimuon mass ($Q$) while those of $\mu$ have a strong rapidity ($x_F$) dependence. Furthermore, $\lambda$ and $\nu$ are found to scale with $q_T/Q$. These salient features could be qualitatively understood by a geometric approach where the lepton angular distribution parameters are expressed in terms of the polar and azimuthal angles of the "natural axis" in the dilepton rest frame.
high energy physics phenomenology
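For readers unfamiliar with the notation, $\lambda$, $\mu$ and $\nu$ above are the coefficients of the conventional decomposition of the Drell-Yan lepton angular distribution in terms of the lepton polar and azimuthal angles in the dilepton rest frame, $\frac{d\sigma}{d\Omega} \propto 1 + \lambda\,\cos^2\theta + \mu\,\sin 2\theta\,\cos\phi + \frac{\nu}{2}\,\sin^2\theta\,\cos 2\phi$; the Lam-Tung relation corresponds to $1-\lambda-2\nu=0$, so the quantity quoted in the abstract measures the degree of its violation.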
In this paper, we show that $W^{1,p}$ $(1\leq p<\infty)$ weak solutions to divergence form elliptic systems are Lipschitz and piecewise $C^{1}$ provided that the leading coefficients and data are of piecewise Dini mean oscillation, the lower order coefficients are bounded, and interfacial boundaries are $C^{1,\text{Dini}}$. This extends a result of Li and Nirenberg (\textit{Comm. Pure Appl. Math.} \textbf{56} (2003), 892-925). Moreover, under a stronger assumption on the piecewise $L^{1}$-mean oscillation of the leading coefficients, we derive a global weak type-(1,1) estimate with respect to $A_{1}$ Muckenhoupt weights for the elliptic systems without lower order terms.
mathematics
We calculate cross sections for the production of $D$ mesons and $\Lambda_c$ baryons in proton-proton collisions at the LHC. The cross section for the production of $c \bar c$ pairs is calculated within the $k_T$-factorization approach with the Kimber-Martin-Ryskin unintegrated gluon distributions. We show that our approach describes the $D^0$, $D^+$ and $D_s$ experimental data well. We try to understand the recent ALICE and LHCb data for $\Lambda_c$ production with the $c \to \Lambda_c$ independent parton fragmentation approach. The Peterson fragmentation functions are used. The $f_{c \to \Lambda_c}$ fragmentation fraction and the $\varepsilon_{c}^{\Lambda}$ parameter for $c \to \Lambda_c$ are varied. Although one can reproduce the ALICE data using a standard estimation of model uncertainties, one cannot describe the ALICE and the LHCb data simultaneously with the same set of parameters. The fraction $f_{c \to \Lambda_c}$ necessary to describe the ALICE data is much larger than the average value obtained from $e^+ e^-$ or $e p$ experiments. It seems very difficult, if not impossible, to understand the ALICE data within the considered independent parton fragmentation scheme.
high energy physics phenomenology
Trapped matter-wave interferometers (TMIs) present a platform for precision sensing within a compact apparatus, extending coherence time by repeated traversal of a confining potential. However, imperfections in this potential can introduce unwanted systematic effects, particularly when combined with errors in the associated beamsplitter operations. This can affect both the interferometer phase and visibility, and can make the performance more sensitive to other experimental imperfections. I examine the character and degree of these systematic effects, in particular within the context of 2D TMIs applicable for rotation sensing. I show that current experimental control can enable these interferometers to operate in a regime robust against experimental imperfections.
physics
We study the confining/deconfining phase transition in the mass deformed Yang-Mills matrix model which is obtained by the dimensional reduction of the bosonic sector of the four-dimensional maximally supersymmetric Yang-Mills theory compactified on the three sphere, i.e. the bosonic BMN model. The $1/D$ (with $D$ the number of matrices) expansion suggests that the model may have two closely separated transitions. However, using a second order lattice formulation of the model we find that for the small value of the mass parameter, $\mu=2$, those two apparent critical temperatures merge at large $N$, leaving only a single weakly first-order phase transition, in agreement with recent numerical results for $\mu=0$ (the bosonic BFSS model).
high energy physics theory
We give proofs of G\"odel's incompleteness theorems after A. Joyal. The proof uses internal category theory in an arithmetic universe, a predicative generalisation of topoi. Applications to L\"ob's Theorem are discussed.
mathematics
Motivated by the embedding problem of canonical models in small codimension, we extend Severi's double point formula to the case of surfaces with rational double points, and we give more general double point formulae for varieties with isolated singularities. A concrete application is for surfaces with geometric genus $p_g=5$: the canonical model is embedded in $\mathbb{P}^4$ if and only if we have a complete intersection of type $(2,4)$ or $(3,3)$.
mathematics
Out-of-time-order correlators (OTOCs) are used to study the quantum phase transitions (QPTs) between the normal phase and the superradiant phase in the Rabi and few-body Dicke models with a large ratio of the atomic level splitting to the frequency of the single-mode electromagnetic radiation field. The focus is on the OTOC thermally averaged at infinite temperature, which is an experimentally feasible quantity. It is shown that the critical points can be identified by long-time averaging of the OTOC via observing its local minimum behavior. More importantly, the scaling laws of the OTOC for QPTs are revealed by studying the experimentally accessible conditions with finite frequency ratio and finite number of atoms in the studied models. The critical exponents extracted from the scaling laws of the OTOC indicate that the QPTs in the Rabi and Dicke models belong to the same universality class.
quantum physics
Nanoscale communication systems operating in the Terahertz (THz) band are anticipated to revolutionise the healthcare systems of the future. Global wireless data traffic is undergoing a rapid growth. However, wireless systems, due to their broadcasting nature, are vulnerable to malicious security breaches. In addition, advances in quantum computing pose a risk to existing crypto-based information security. It is of the utmost importance to make THz systems resilient to potential active and passive attacks which may lead to devastating consequences, especially when handling sensitive patient data in healthcare systems. New strategies are needed to analyse these malicious attacks and to propose viable countermeasures. In this manuscript, we present a new authentication mechanism for nanoscale communication systems operating in the THz band at the physical layer. We assessed an impersonation attack on a THz system. We propose using path loss as a fingerprint to conduct authentication via two-step hypothesis testing for a transmission device. We used the hidden Markov model (HMM) Viterbi algorithm to enhance the output of the hypothesis testing. We also conducted transmitter identification using maximum likelihood and Gaussian mixture model (GMM) expectation maximization algorithms. Our simulations showed that the error probabilities are decreasing functions of SNR. At 10 dB with a 0.2 false alarm rate, the detection probability was almost one. We further observed that the HMM outperforms hypothesis testing in the low SNR regime (a 10% increase in accuracy is recorded at SNR = 5 dB), whereas the GMM is useful when ground truths are noisy. Our work addresses major security gaps faced by communication systems, whether through malicious breaches or quantum computing, enabling new applications of nanoscale systems for Industry 4.0.
electrical engineering and systems science
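A minimal sketch of the idea behind the two-step physical-layer authentication described above: treat the measured path loss as a fingerprint and run a binary hypothesis test whose threshold is fixed by the desired false-alarm rate. The Gaussian noise model, the numerical values, and the function names are assumptions for illustration; the THz path-loss model, the HMM/Viterbi refinement, and the GMM-based transmitter identification of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def authenticate(pl_measured, pl_legit, sigma, p_fa=0.2):
    """Binary hypothesis test on a path-loss fingerprint.

    H0: the measurement comes from the legitimate transmitter (path loss
    pl_legit plus zero-mean Gaussian estimation noise with std sigma).
    H1: it comes from an impersonator. The threshold is chosen so that the
    false-alarm probability under H0 equals p_fa (two-sided test)."""
    threshold = sigma * norm.ppf(1 - p_fa / 2)
    return np.abs(pl_measured - pl_legit) <= threshold    # True -> accept

# Monte Carlo check of false-alarm and detection rates (hypothetical numbers)
rng = np.random.default_rng(0)
sigma, pl_legit, pl_attacker = 1.0, 80.0, 84.0            # dB, illustrative
legit_obs = pl_legit + sigma * rng.standard_normal(10000)
attack_obs = pl_attacker + sigma * rng.standard_normal(10000)
print("false alarm:", 1 - authenticate(legit_obs, pl_legit, sigma).mean())
print("detection  :", 1 - authenticate(attack_obs, pl_legit, sigma).mean())
```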
Several ways have been proposed in the literature to define a coherence measure based on Tsallis relative entropy. One of them is defined as a distance between a state and a set of incoherent states, with Tsallis relative entropy taken as the distance measure. Unfortunately, this measure does not satisfy the required strong monotonicity, but a modification of this coherence measure has been proposed that does. We introduce three new Tsallis coherence measures coming from a more general definition that also satisfy strong monotonicity, and compare all five definitions with each other. Using three of the coherence measures that we discuss, one can also define a discord. Two of these have been used in the literature, and another one is new. We also discuss two correlation measures based on Tsallis relative entropy. We provide explicit expressions for all three discord and both correlation measures on pure states. Lastly, we provide tight upper and lower bounds on two of the discord and correlation measures for any quantum state, together with the condition for equality.
quantum physics
We report on the transition between two regimes of heat transport in a radiatively driven convection experiment, where a fluid gets heated up within a tunable heating length $\ell$ in the vicinity of the bottom of the tank. The first regime is similar to the one observed in standard Rayleigh-B\'enard experiments, the Nusselt number $Nu$ being related to the Rayleigh number $Ra$ through the power-law $Nu \sim Ra^{1/3}$. The second regime corresponds to the "ultimate" or mixing-length scaling regime of thermal convection, where $Nu$ varies as the square-root of $Ra$. Evidence for these two scaling regimes has been reported in Lepot et al. (Proc. Natl. Acad. Sci. USA, {\bf 115}, 36, 2018), and we now study in detail how the system transitions from one to the other. We propose a simple model describing radiatively driven convection in the mixing-length regime. It leads to the scaling relation $Nu \sim \frac{\ell}{H} Pr^{1/2} Ra^{1/2}$, where $H$ is the height of the cell, thereby allowing us to deduce the values of $Ra$ and $Nu$ at which the system transitions from one regime to the other. These predictions are confirmed by the experimental data gathered at various $Ra$ and $\ell$. We conclude by showing that boundary layer corrections can persistently modify the Prandtl number dependence of $Nu$ at large $Ra$, for $Pr \gtrsim 1$.
physics
Presented is a new algorithm for estimating the frequency of a single-tone noisy signal using linear least squares (LLS). Frequency estimation is a nonlinear problem, and typically, methods such as nonlinear least squares (NLS) (batch) or a digital phase-locked loop (DPLL) (online) are employed for such an estimate. However, with the linearization approach presented here, one can harness the efficiency of LLS to obtain very good estimates while incurring little penalty for linearizing. In this paper, the mathematical basis of this algorithm is described, and the bias and variance are analyzed analytically and numerically. With the batch version of this algorithm, it will be demonstrated that the estimator is just as good as NLS; but because LLS is non-recursive, it produces the estimate much more efficiently than NLS. When the proposed algorithm is implemented online, it will be demonstrated that performance is comparable to a digital phase-locked loop, with some stability and tracking-range advantages.
electrical engineering and systems science
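The abstract above does not spell out the particular linearization, so the sketch below illustrates the general idea with one standard way of casting single-tone frequency estimation as a linear least-squares problem: a noiseless sinusoid satisfies the linear-prediction identity $x[n] = 2\cos(\omega)\,x[n-1] - x[n-2]$, so $2\cos(\omega)$ can be fit by ordinary least squares. This is an assumed stand-in, not necessarily the algorithm of the paper.

```python
import numpy as np

def lls_frequency_estimate(x):
    """Estimate the angular frequency (rad/sample) of a single real tone.

    Uses the linear-prediction identity x[n] + x[n-2] = 2*cos(w)*x[n-1],
    which holds exactly for a noiseless sinusoid, so the unknown
    c = 2*cos(w) can be fit by ordinary (1-D) least squares."""
    a = x[1:-1]                      # regressor: x[n-1]
    b = x[2:] + x[:-2]               # target:    x[n] + x[n-2]
    c = np.dot(a, b) / np.dot(a, a)  # closed-form LLS solution
    return np.arccos(np.clip(c / 2.0, -1.0, 1.0))

# Quick check on a noisy tone
rng = np.random.default_rng(0)
w_true = 0.3
n = np.arange(2000)
x = np.cos(w_true * n + 0.7) + 0.05 * rng.standard_normal(n.size)
print(lls_frequency_estimate(x))     # close to 0.3
```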
COVID-19, due to its accelerated spread, has brought in the need to use assistive tools for faster diagnosis in addition to typical lab swab testing. Chest X-rays for COVID cases tend to show changes in the lungs, such as ground glass opacities and peripheral consolidations, which can be detected by deep neural networks. However, traditional convolutional networks use point estimates for predictions and fail to capture uncertainty, which makes them less reliable for adoption. There have been several works so far on predicting COVID-positive cases from chest X-rays. However, not much has been explored on quantifying the uncertainty of these predictions, interpreting uncertainty, and decomposing it into model or data uncertainty. To address these needs, we develop a visualization framework to address interpretability of uncertainty and its components, with uncertainty in predictions computed with a Bayesian convolutional neural network. This framework aims to understand the contribution of individual features in the chest X-ray images to predictive uncertainty. Providing this as an assistive tool can help the radiologist understand why the model came up with a prediction and whether the regions of interest captured by the model for the specific prediction are of significance in diagnosis. We demonstrate the usefulness of the tool in chest X-ray interpretation through several test cases from a benchmark dataset.
electrical engineering and systems science
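The framework above rests on a Bayesian CNN; as a lightweight, assumed stand-in for obtaining per-image predictive uncertainty, the sketch below uses Monte Carlo dropout (keeping dropout active at test time and averaging repeated stochastic forward passes). The network architecture and all names are hypothetical and far simpler than a clinically useful model.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny CNN with dropout so that repeated stochastic forward passes
    (Monte Carlo dropout) yield a distribution over predictions."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout2d(0.25),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(nn.Dropout(0.5), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Keep dropout stochastic at test time and average softmax outputs;
    the spread across samples is a simple proxy for predictive uncertainty."""
    model.train()                      # leaves dropout layers active
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)

model = SmallCNN()
mean, var = mc_dropout_predict(model, torch.randn(4, 1, 224, 224))
print(mean.shape, var.shape)           # torch.Size([4, 2]) each
```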
Theoretical predictions for the Lamb shift in helium are limited by unknown quantum electrodynamic effects of the order $\alpha^7m$, where $\alpha$ is the fine-structure constant and $m$ is the electron mass. We make an important step towards the complete calculation of these effects by deriving the most challenging part, which is induced by the virtual photon exchange between all three particles in the helium atom: the two electrons and the nucleus. The complete calculation of the $\alpha^7m$ effect, including the radiative corrections, will allow a comparison of the nuclear charge radii determined from electronic and muonic helium atoms and thus provide a stringent test of the Standard Model of fundamental interactions.
high energy physics phenomenology
We confirm the planetary nature of a warm Jupiter transiting the early M dwarf TOI-1899, using a combination of available TESS photometry; high-precision, near-infrared spectroscopy with the Habitable-zone Planet Finder; and speckle and adaptive optics imaging. The data reveal a transiting companion on an $\sim29$-day orbit with a mass and radius of $0.66\pm0.07\ \mathrm{M_{J}}$ and $1.15_{-0.05}^{+0.04}\ \mathrm{R_{J}}$, respectively. The star TOI-1899 is the lowest-mass star known to host a transiting warm Jupiter, and we discuss the follow-up opportunities afforded by a warm ($\mathrm{T_{eq}}\sim362$ K) gas giant orbiting an M0 star. Our observations reveal that TOI-1899.01 is a puffy warm Jupiter, and we suggest additional transit observations to both refine the orbit and constrain the true dilution observed in TESS.
astrophysics
The deepening penetration of variable energy resources creates unprecedented challenges for system operators (SOs). An issue that merits special attention is the precipitous net load ramps, which require SOs to have flexible capacity at their disposal so as to maintain the supply-demand balance at all times. In the judicious procurement and deployment of flexible capacity, a tool that forecasts net load ramps may be of great assistance to SOs. To this end, we propose a methodology to forecast the magnitude and start time of daily primary three-hour net load ramps. We perform an extensive analysis so as to identify the factors that influence net load and draw on the identified factors to develop a forecasting methodology that harnesses the long short-term memory model. We demonstrate the effectiveness of the proposed methodology on the CAISO system using comparative assessments with selected benchmarks based on various evaluation metrics.
electrical engineering and systems science
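The abstract above does not specify the exact network or input features, so the following is only a minimal sketch of an LSTM-based forecaster of the kind described: a window of hourly features is mapped to the magnitude and start hour of the daily primary three-hour net-load ramp. The architecture, feature count, and output encoding are assumptions; training (e.g. with an MSE loss over historical CAISO data) is omitted.

```python
import torch
import torch.nn as nn

class RampForecaster(nn.Module):
    """Toy LSTM mapping a window of hourly features (net load, hour of day,
    weather, etc.) to the magnitude and start hour of the daily primary
    three-hour net-load ramp."""
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)    # [ramp magnitude, start hour]

    def forward(self, x):                         # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])                 # last layer's hidden state

model = RampForecaster(n_features=8)
window = torch.randn(32, 24, 8)                   # 32 days of 24 hourly steps
print(model(window).shape)                        # torch.Size([32, 2])
```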
Medicine is an important application area for deep learning models. Research in this field is a combination of medical expertise and data science knowledge. In this paper, instead of 2D medical images, we introduce an open-access 3D intracranial aneurysm dataset, IntrA, that makes the application of points-based and mesh-based classification and segmentation models available. Our dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction. We provide a large-scale benchmark of classification and part segmentation by testing state-of-the-art networks. We also discuss the performance of each method and demonstrate the challenges of our dataset. The published dataset can be accessed here: https://github.com/intra3d2019/IntrA.
electrical engineering and systems science
Differential privacy is a leading protection setting, focused by design on individual privacy. Many applications, in medical/pharmaceutical domains or social networks, rather posit privacy at a group level, a setting we call integral privacy. We aim for the strongest form of privacy: the group size is in particular not known in advance. We study a problem with related applications in the domains cited above that has recently received substantial press: sampling. Keeping correct utility levels in such a strong model of statistical indistinguishability looks difficult to achieve with the usual differential privacy toolbox, because it would typically scale the sensitivity by the sample size in the worst case, and so the noise variance by up to its square. We introduce a trick specific to sampling that bypasses the sensitivity analysis. Privacy enforces an information-theoretic barrier on approximation, and we show how to reach this barrier with guarantees on the approximation of the target non-private density. We do so using a recent approach to non-private density estimation relying on the original boosting theory, learning the sufficient statistics of an exponential family with classifiers. The approximation guarantees cover the mode capture problem. In the context of learning, the sampling problem is particularly important: because integral privacy enjoys the same closure under post-processing as differential privacy does, any algorithm using integrally privately sampled data would produce an output that is equally integrally private. We also show that this brings fairness guarantees on post-processing that would eventually elude classical differential privacy: any decision process has bounded data-dependent bias when the data is integrally privately sampled. Experimental results against private kernel density estimation and private GANs display the quality of our results.
statistics
Solving multi-objective optimization problems is important in various applications where users are interested in obtaining optimal policies subject to multiple, yet often conflicting, objectives. A typical approach to obtain optimal policies is to first construct a loss function that is based on the scalarization of individual objectives, and then find the optimal policy that minimizes the loss. However, optimizing the scalarized (and weighted) loss does not necessarily provide a guarantee of high performance on each possibly conflicting objective, because it is challenging to assign the right weights without knowing the relationship among these objectives. Moreover, the effectiveness of these gradient descent algorithms is limited by the agent's ability to explain their decisions and actions to human users. The purpose of this study is two-fold. First, we propose a vector value function based multi-objective reinforcement learning (V2f-MORL) approach that seeks to quantify the inter-objective relationship via reinforcement learning (RL) when the impact of one objective on others is unknown a priori. In particular, we construct one actor and multiple critics that can co-learn the policy and the inter-objective relationship matrix (IORM), quantifying the impact of objectives on each other, in an iterative way. Second, we provide a semantic representation that can uncover the trade-off of decision policies made by users to reconcile conflicting objectives based on the proposed V2f-MORL approach, for the explainability of the generated behaviors subject to given optimization objectives. We demonstrate the effectiveness of the proposed approach via a MuJoCo based robotics case study.
electrical engineering and systems science
Sandstone mechanical stability is of key concern in projects involving injections of CO2 in sandstone geological reservoirs, for the purpose of long-term storage. We developed a method to measure nanometer-scale deformations of sandstones in real time. We demonstrate that Berea sandstone, when hydrated, changes dimensions with a relative deformation of the order of 10^(-4). If the moisture content increases, sandstone samples exhibit an extension and if the moisture content decreases then the samples shrink. We also discover that, immediately after exposure to water, the sandstone temporarily shrinks, just for a few seconds, after which a slow extension begins, and continues until about half of the fluid evaporates. Such shrinkage followed by an extension has been observed also when the sample was exposed to acetone, Mineral Spirits or Vacuum Oil. The results are obtained using a high-resolution nano-positioner technique and, in independent experiments, confirmed using the technique of coda wave interferometry.
physics
A new methodological approach for the study of topology for shapes made of arrangements of lines, planes or solids is presented. Topologies for shapes are traditionally built on the classical theory of point-sets. In this paper, topologies are built with shapes, which are formalized as unanalyzed objects without points, and with structures defined from their parts. An interpretative, aesthetic dimension is introduced according to which the topological structure of a shape is not inherited from an ambient space but is induced based on how its appearance is interpreted into parts. The proposed approach provides a more natural, spatial framework for studies on the mathematical structure of design objects, in art and design. More generally, it shows how mathematical constructs (here, topology) can be built directly in terms of objects of art and design, as opposed to a more common opposite approach, where objects of art and design are subjugated to canonical mathematical constructs.
mathematics
Local quantum uncertainty and interferometric power have been introduced by Girolami et al. in [1,2] as geometric quantifiers of quantum correlations. The aim of the present paper is to discuss their properties in a unified manner by means of the metric-adjusted skew information defined by Hansen in [3].
quantum physics
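For orientation, the prototypical member of this family is the Wigner-Yanase skew information, from which the local quantum uncertainty is obtained by minimising over local observables; the standard definitions are recalled below only as background (Hansen's metric-adjusted generalisation replaces the square root by a general regular operator monotone function). These formulae are textbook material, not results of the paper.

```latex
% Wigner-Yanase skew information of a state rho with respect to an observable K
I(\rho, K) = -\tfrac{1}{2}\,\mathrm{Tr}\!\left([\sqrt{\rho},\,K]^{2}\right)
% Local quantum uncertainty of a bipartite state rho_AB: minimum skew
% information over local observables K_A with nondegenerate spectrum
\mathcal{U}(\rho_{AB}) = \min_{K_A} I\!\left(\rho_{AB},\, K_A \otimes \mathbb{1}_B\right)
```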
The stable radius of a cylindrical space due to the additional repulsion caused by the noncommutativity of two-component field values is found.
physics
In this paper, we analyze the vertices of doubly heavy spin-3/2 and spin-1/2 baryons with light mesons within the method of light-cone QCD sum rules. These vertices are parametrized in terms of one (three) coupling constant(s) for the pseudoscalar (vector) mesons. The said coupling constants are calculated for all possible transitions. The results presented here can serve as useful input for experimental as well as theoretical studies of the properties of doubly heavy baryons.
high energy physics phenomenology
Ni-doped MoS$_2$ is a layered material with useful tribological, optoelectronic, and catalytic properties. Experiment and theory on doped MoS$_2$ has focused mostly on monolayers or finite particles: theoretical studies of bulk Ni-doped MoS$_2$ are lacking and the mechanisms by which Ni alters bulk properties are largely unsettled. We use density functional theory calculations to determine the structure, mechanical properties, electronic properties, and formation energies of bulk Ni-doped 2H-MoS$_2$ as a function of doping concentration. We find four meta-stable structures: Mo or S substitution, and tetrahedral (t-) or octahedral (o-) intercalation. We compute phase diagrams as a function of chemical potential to guide experimental synthesis. A convex hull analysis shows that t-intercalation (favored over o-intercalation) is quite stable against phase segregation and in comparison with other compounds containing Ni, Mo, and S; the doping formation energy is around 0.1 meV/atom. Intercalation forms strong interlayer covalent bonds and does not increase the $c$-parameter. Ni-doping creates new states in the electronic density of states in MoS$_2$ and shifts the Fermi level, which are of interest for tuning the electronic and optical properties. We calculate the infrared and Raman spectra and find new peaks and shifts in existing peaks that are unique to each dopant site, and therefore may be used to identify the site experimentally, which has been a challenge to do conclusively.
condensed matter
We study the moduli space of frozen planet orbits in the Helium atom for an interpolation between instantaneous and mean interactions and show that this moduli space is compact.
mathematics
We study the Taylor expansion for the solutions of differential equations driven by $p$-rough paths with $p>2$. We prove a general theorem concerning the convergence of the Taylor expansion on a nonempty interval, provided that the vector fields are analytic on a ball centered at the initial point. We also derive criteria that enable us to study the rate of convergence of the Taylor expansion. Finally, and this is the main and most original part of this paper, we prove Castell expansions and tail estimates with exponential decay for the remainder terms of the solutions of stochastic differential equations driven by continuous centered Gaussian processes with finite two-dimensional $\rho$-variation and by fractional Brownian motion with Hurst parameter $H>1/4$.
mathematics
Dynamic inference problems in autoregressive (AR/ARMA/ARIMA), exponential smoothing, and navigation are often formulated and solved using state-space models (SSM), which allow a range of statistical distributions to inform innovations and errors. In many applications the main goal is to identify not only the hidden state, but also additional unknown model parameters (e.g. AR coefficients or unknown dynamics). We show how to efficiently optimize over model parameters in SSM that use smooth process and measurement losses. Our approach is to project out state variables, obtaining a value function that only depends on the parameters of interest, and derive analytical formulas for first and second derivatives that can be used by many types of optimization methods. The approach can be used with smooth robust penalties such as Hybrid and the Student's t, in addition to classic least squares. We use the approach to estimate robust AR models and long-run unemployment rates with sudden changes.
mathematics
We present a table-top beamline providing a soft X-ray supercontinuum extending up to 350 eV from high-order harmonic generation with sub-13 fs 1300 nm driving pulses and simultaneous production of sub-5 fs pulses centered at 800 nm. Optimization of the high harmonic generation in a long and dense gas medium yields a photon flux of ~2 x 10^7 photons/s/1% bandwidth at 300 eV. The temporal resolution of X-ray transient absorption experiments with this beamline is measured to be 11 fs for 800 nm excitation. This dual-wavelength approach, combined with high flux and high spectral and temporal resolution soft X-ray absorption spectroscopy, is a new route to the study of ultrafast electronic dynamics in carbon-containing molecules and materials at the carbon K-edge.
physics
We introduce the factor complex of a neural code, and show how intervals and maximal codewords are captured by the combinatorics of factor complexes. We use these results to obtain algebraic and combinatorial characterizations of max-intersection-complete codes, as well as a new combinatorial characterization of intersection-complete codes.
mathematics
In this paper, we focus on unsupervised representation learning for skeleton-based action recognition. Existing approaches usually learn action representations by sequential prediction but they suffer from the inability to fully learn semantic information. To address this limitation, we propose a novel framework named Prototypical Contrast and Reverse Prediction (PCRP), which not only creates reverse sequential prediction to learn low-level information (e.g., body posture at every frame) and high-level pattern (e.g., motion order), but also devises action prototypes to implicitly encode semantic similarity shared among sequences. In general, we regard action prototypes as latent variables and formulate PCRP as an expectation-maximization task. Specifically, PCRP iteratively runs (1) E-step as determining the distribution of prototypes by clustering action encoding from the encoder, and (2) M-step as optimizing the encoder by minimizing the proposed ProtoMAE loss, which helps simultaneously pull the action encoding closer to its assigned prototype and perform reverse prediction task. Extensive experiments on N-UCLA, NTU 60, and NTU 120 dataset present that PCRP outperforms state-of-the-art unsupervised methods and even achieves superior performance over some of supervised methods. Codes are available at https://github.com/Mikexu007/PCRP.
computer science
The resistance between two nodes in some resistor networks has been studied extensively by mathematicians and physicists. Let $L_n$ be a linear hexagonal chain with $n$ 6-cycles. Then identifying the opposite lateral edges of $L_n$ in an ordered way yields the linear hexagonal cylinder chain, written as $R_n$. We obtain explicit formulae for the resistance distance $r_{L_n}(i, j)$ (resp. $r_{R_n}(i,j)$) between any two vertices $i$ and $j$ of $L_n$ (resp. $R_n$). To the best of our knowledge, $\{L_n\}_{n=1}^{\infty}$ and $\{R_n\}_{n=1}^{\infty}$ are two nontrivial families with diameter going to $\infty$ for which all resistance distances have been explicitly calculated. We determine the maximum and the minimum resistance distances in $L_n$ (resp. $R_n$). The monotonicity and some asymptotic properties of resistance distances in $L_n$ and $R_n$ are given. We also give formulae for the Kirchhoff indices of $L_n$ and $R_n$.
mathematics
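The paper above derives closed-form formulae; a standard numerical cross-check for such results on small cases goes through the Moore-Penrose pseudoinverse $\Gamma$ of the graph Laplacian, since $r(i,j)=\Gamma_{ii}+\Gamma_{jj}-2\Gamma_{ij}$ for unit resistors. The sketch below is that generic computation (function names are illustrative), verified on a single hexagon, where adjacent vertices have resistance distance $5/6$.

```python
import numpy as np

def resistance_distance(adj):
    """Resistance distance matrix of a connected graph with unit resistors,
    computed from the Moore-Penrose pseudoinverse G of the graph Laplacian:
    r(i, j) = G[i, i] + G[j, j] - 2 * G[i, j]."""
    lap = np.diag(adj.sum(axis=1)) - adj
    g = np.linalg.pinv(lap)
    d = np.diag(g)
    return d[:, None] + d[None, :] - 2 * g

# A single hexagon (6-cycle): adjacent vertices give 1*(6-1)/6 = 0.8333...
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
print(resistance_distance(adj)[0, 1])
```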
After the discovery of the double-charm baryon $\Xi_{cc}^{++}$ by LHCb, one of the most important topics is to search for the bottom-charm baryons which contain a $b$ quark, a $c$ quark and a light quark. In this work, we study the two-body non-leptonic weak decays of a bottom-charm baryon into a spin-$1/2$ bottomed baryon and a light pseudoscalar meson with the short-distance contributions calculated under the factorization hypothesis and the long-distance contributions considering the final-state-interaction effects. The branching fractions of all fifty-seven decay channels are estimated. The results indicate that $\Xi_{bc}^+\to\Xi_b^0\pi^+$, $\Xi_{bc}^{0}\to\Xi_{b}^{-}\pi^+$ and $\Omega_{bc}^0\to\Omega_b^-\pi^+$ decay modes have relatively large decay rates and thus could be used to experimentally search for the bottom-charm baryons. The topological diagrams and the SU(3) symmetry of bottom-charm baryon decays are discussed.
high energy physics phenomenology
The reionization of hydrogen is closely linked to the first structures in the universe, so understanding the timeline of reionization promises to shed light on the nature of these early objects. In particular, transmission of Lyman alpha (Ly$\alpha$) from galaxies through the intergalactic medium (IGM) is sensitive to neutral hydrogen in the IGM, so can be used to probe the reionization timeline. In this work, we implement an improved model of the galaxy UV luminosity to dark matter halo mass relation to infer the volume-averaged fraction of neutral hydrogen in the IGM from Ly$\alpha$ observations. Many models assume that UV-bright galaxies are hosted by massive dark matter haloes in overdense regions of the IGM, so reside in relatively large ionized regions. However, observations and N-body simulations indicate that scatter in the UV luminosity-halo mass relation is expected. Here, we model the scatter (though we assume the IGM topology is unaffected) and assess the impact on Ly$\alpha$ visibility during reionization. We show that UV luminosity-halo mass scatter reduces Ly$\alpha$ visibility compared to models without scatter, and that this is most significant for UV-bright galaxies. We then use our model with scatter to infer the neutral fraction, $\overline{x}_{\mathrm{HI}}$, at $z \sim 7$ using a sample of Lyman-break galaxies in legacy fields. We infer $\overline{x}_{\mathrm{HI}} = 0.55_{-0.13}^{+0.11}$ with scatter, compared to $\overline{x}_{\mathrm{HI}} = 0.59_{-0.14}^{+0.12}$ without scatter, a very slight decrease and consistent within the uncertainties. Finally, we place our results in the context of other constraints on the reionization timeline and discuss implications for future high-redshift galaxy studies.
astrophysics
The strange quark matter under strong magnetic fields and finite temperatures is studied in the framework of the MIT Bag model. Matter under such conditions is believed to be present in the core of dense astrophysical objects, like Neutron Stars and more exotic compact objects like Quark Stars. In this study, the anisotropy of the pressure due to the presence of a strong magnetic field is taken into account and a temperature-dependent equation of state is obtained. In the strong field regime, the behavior of the transversal pressure suggests a transversal collapse of the quark and electron gases for magnetic fields above $\sim10^{19}$ G, even at finite temperature, which can further enhance the collapse. The corresponding behavior of the energy per baryon and the mass-radius relation for Quark Stars at different temperatures and fixed magnetic field, taking into account baryon number conservation, $\beta$-equilibrium and charge neutrality, are reported as well.
astrophysics
We explore the low energy neutrinos from stopped cosmic ray muons in the Earth. Based on the muon intensity at the sea level and the muon energy loss rate, the depth distributions of stopped muons in the rock and sea water can be derived. Then we estimate the $\mu^-$ decay and nuclear capture probabilities in the rock. Finally, we calculate the low energy neutrino fluxes and find that they depend heavily on the detector depth $d$. For $d = 1000$ m, the $\nu_e$, $\bar{\nu}_e$, $\nu_\mu$ and $\bar{\nu}_\mu$ fluxes in the range of 13 MeV $ \leq E_\nu \leq$ 53 MeV are on average $10.8 \%$, $6.3\%$, $3.7 \%$ and $6.2 \%$ of the corresponding atmospheric neutrino fluxes, respectively. The above results will be increased by a factor of 1.4 if the detector depth $d < 30$ m. In addition, we find that most neutrinos come from the region within 200 km and the near horizontal direction, and that the $\bar{\nu}_e$ flux depends on the local rock and water distributions.
high energy physics phenomenology
Feature descriptors of point clouds are used in several applications, such as registration and part segmentation of 3D point clouds. Learning discriminative representations of local geometric features is unquestionably the most important task for accurate point cloud analyses. However, it is challenging to develop rotation- or scale-invariant descriptors. Most previous studies have either ignored rotations or empirically studied optimal scale parameters, which hinders the applicability of the methods to real-world datasets. In this paper, we present a new local feature description method that is robust to rotation, density, and scale variations. Moreover, to improve representations of the local descriptors, we propose a global aggregation method. First, we place kernels aligned around each point in the normal direction. To avoid the sign problem of the normal vector, we use a symmetric kernel point distribution in the tangential plane. From each kernel point, we first project the points from the spatial space to the feature space, which is robust to multiple scales and rotation, based on angles and distances. Subsequently, we perform graph convolutions by considering local kernel point structures and long-range global context, obtained by a global aggregation method. We experimented with our proposed descriptors on benchmark datasets (i.e., ModelNet40 and ShapeNetPart) to evaluate the performance of registration, classification, and part segmentation on 3D point clouds. Our method showed superior performance compared to the state-of-the-art methods, reducing the rotation and translation errors in the registration task by 70$\%$. Our method also showed comparable performance in the classification and part-segmentation tasks with simple and low-dimensional architectures.
computer science
We predict that antiferromagnetic bilayers formed from van der Waals (vdW) materials, like bilayer CrI$_3$, have a strong magnetoelectric response that can be detected by measuring the gate voltage dependence of Faraday or Kerr rotation signals, total magnetization, or anomalous Hall conductivity. Strong effects are possible in single-gate geometries, and in dual-gate geometries that allow internal electric fields and total carrier densities to be varied independently. We comment on the reliability of density-functional-theory estimates of interlayer magnetic interactions in van der Waals bilayers, and on the sensitivity of magnetic interactions to pressure that alters the spatial separation between layers.
condensed matter
The study of the Unruh effect naturally raises interest in a deeper understanding of the analogy between temperature and acceleration. A recurring question is whether an accelerated frame can be distinguished from an inertial thermal bath in purely thermodynamic experiments; this problem has been approached in the literature and a consensus is yet to be fully reached. In the present work we use the open quantum system formalism to investigate the case where both acceleration and a background temperature are present. We find the asymptotic state density and entanglement generation from the Markovian evolution of accelerated qubits interacting with a thermal state of the external scalar field. Our results suggest that there is a very small asymmetry between the effects of the Unruh and background temperatures. Addressing the nonzero background temperature case is of both theoretical and phenomenological interest; thus the authors hope to enrich the existing discussions on the topic.
high energy physics theory
This study examines a nonparametric inference on a stationary L\'evy-driven Ornstein-Uhlenbeck (OU) process $X = (X_{t})_{t \geq 0}$ with a compound Poisson subordinator. We propose a new spectral estimator for the L\'evy measure of the L\'evy-driven OU process $X$ under macroscopic observations. We also derive, for the estimator, multivariate central limit theorems over a finite number of design points, and high-dimensional central limit theorems in the case wherein the number of design points increases with an increase in the sample size. Built on these asymptotic results, we develop methods to construct confidence bands for the L\'evy measure and propose a practical method for bandwidth selection.
statistics
Convolutional neural network (CNN) inference on mobile devices demands efficient hardware acceleration of low-precision (INT8) general matrix multiplication (GEMM). The systolic array (SA) is a pipelined 2D array of processing elements (PEs), with very efficient local data movement, well suited to accelerating GEMM, and widely deployed in industry. In this work, we describe two significant improvements to the traditional SA architecture, to specifically optimize for CNN inference. Firstly, we generalize the traditional scalar PE, into a Tensor-PE, which gives rise to a family of new Systolic Tensor Array (STA) microarchitectures. The STA family increases intra-PE operand reuse and datapath efficiency, resulting in circuit area and power dissipation reduction of as much as 2.08x and 1.36x respectively, compared to the conventional SA at iso-throughput with INT8 operands. Secondly, we extend this design to support a novel block-sparse data format called density-bound block (DBB). This variant (STA-DBB) achieves a 3.14x and 1.97x improvement over the SA baseline at iso-throughput in area and power respectively, when processing specially-trained DBB-sparse models, while remaining fully backwards compatible with dense models.
computer science
In cancer translational research, increasing effort is devoted to the study of the combined effect of two drugs when they are administered simultaneously. In this paper, we introduce a new approach to estimate the part of the effect of the two drugs that is due to the interaction of the compounds, i.e. due to synergistic or antagonistic effects of the two drugs, compared to a reference value representing the condition where the combined compounds do not interact, called zero-interaction. We describe an in-vitro cell viability experiment as a random experiment, by interpreting cell viability as the probability of a cell in the experiment to be viable after treatment, and including information related to different exposure conditions. We propose a flexible Bayesian spline regression framework for modelling the viability surface of two drugs combined as a function of their concentrations. Since the proposed approach is based on a statistical model, it allows us to include replicates of the experiments, to evaluate the uncertainty of the estimates, and to perform prediction. We test the model fit and prediction performance in a simulation study and on an ovarian cancer cell dataset. Posterior estimates of the zero-interaction level and of the synergy term, obtained via adaptive MCMC algorithms, are used to compute interpretable measures of efficacy of the combined experiment, including relative volume under the surface (rVUS) measures to summarise the zero-interaction and synergy terms, and a bivariate alternative to the well-known EC50 measure.
statistics
In this paper, the zero-forcing and regularized zero-forcing schemes operating in crowded extra-large MIMO (XL-MIMO) scenarios with a fixed number of subarrays are emulated using the randomized Kaczmarz algorithm (rKA). For that, non-stationary properties are incorporated through the concept of visibility regions, considering two different power normalization methods for non-stationary channels. We address the randomness design of rKA based on the exploitation of spatial non-stationary properties. Numerical results show that, in general, the proposed rKA-based combiner applicable to XL-MIMO systems can considerably decrease the computational complexity of the signal detector at the cost of small performance losses.
electrical engineering and systems science
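For readers unfamiliar with the core iteration the combiner above builds on, here is a generic randomized Kaczmarz solver with the usual Strohmer-Vershynin row sampling (rows picked with probability proportional to their squared norm). It is a plain sketch of rKA on an arbitrary consistent linear system; the XL-MIMO-specific randomness design and subarray structure of the paper are not reproduced.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz: each step projects the iterate onto the
    hyperplane <a_i, x> = b_i, with row i sampled with probability
    proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    row_norms_sq = np.sum(A * A, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Overdetermined consistent system as a sanity check
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))   # small residual
```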
In $B_c^- \to J/\psi (\to \mu^+ \mu^-)\tau^-\bar{\nu}_\tau$ decay, the three-momentum $\boldsymbol{p}_{\tau^-}$ cannot be determined accurately due to the decay products of $\tau^-$ inevitably include an undetected $\nu_{\tau}$. As a consequence, the angular distribution of this decay cannot be measured. In this work, we construct a {\it measurable} angular distribution by considering the subsequent decay $\tau^- \to \pi^- \nu_\tau$. The full cascade decay is $B_c^- \to J/\psi (\to \mu^+ \mu^-)\tau^- (\to \pi^- \nu_\tau)\bar{\nu}_\tau$, in which the three-momenta $\boldsymbol{p}_{\mu^+}$, $\boldsymbol{p}_{\mu^-}$, and $\boldsymbol{p}_{\pi^-}$ can be measured. The five-fold differential angular distribution containing all Lorentz structures of the new physics (NP) effective operators can be written in terms of twelve angular observables $\mathcal{I}_i (q^2, E_\pi)$. Integrating over the energy of pion $E_\pi$, we construct twelve normalized angular observables $\widehat{\mathcal{I}}_i(q^2)$ and two lepton-flavor-universality ratios $R(P_{L,T}^{J/\psi})(q^2)$. Based on the $B_c \to J/\psi$ form factors calculated by the latest lattice QCD and sum rule, we predict the $q^2$ distribution of all $\widehat{\mathcal{I}}_i$ and $R(P_{L,T}^{J/\psi})$ both within the Standard Model and in eight NP benchmark points. We find that the benchmark BP2 (corresponding to the hypothesis of tensor operator) has the greatest effect on all $\widehat{\mathcal{I}}_{i}$ and $R(P_{L,T}^{J/\psi})$, except $\widehat{\mathcal{I}}_{5}$. The ratios $R(P_{L,T}^{J/\psi})$ are more sensitive to the NP with pseudo-scalar operators than the $\widehat{\mathcal{I}}_{i}$. Finally, we discuss the symmetries in the angular observables and present a model-independent method to determine the existence of tensor operators.
high energy physics phenomenology
The goal of metagenomics is to study the composition of microbial communities, typically using high-throughput shotgun sequencing. In the metagenomic binning problem, we observe random substrings (called contigs) from a mixture of genomes and want to cluster them according to their genome of origin. Based on the empirical observation that genomes of different bacterial species can be distinguished based on their tetranucleotide frequencies, we model this task as the problem of clustering N sequences generated by M distinct Markov processes, where M<<N. Utilizing the large-deviation principle for Markov processes, we establish the information-theoretic limit for perfect binning. Specifically, we show that the length of the contigs must scale with the inverse of the Chernoff Information between the two most similar species. Our result also implies that contigs should be binned using the conditional relative entropy as a measure of distance, as opposed to the Euclidean distance often used in practice.
computer science
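The contribution of the paper above is information-theoretic (contig lengths must scale with the inverse Chernoff information between the most similar species, and binning should use the conditional relative entropy); the sketch below only illustrates the empirical observation that motivates the model, namely that contigs can be grouped by their tetranucleotide frequency profiles. The helper names are hypothetical, and the k-means/Euclidean clustering step is a deliberate simplification of what the paper recommends.

```python
import numpy as np
from itertools import product
from sklearn.cluster import KMeans

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
KMER_INDEX = {k: i for i, k in enumerate(KMERS)}

def tetranucleotide_profile(contig):
    """Normalized 4-mer frequency vector of a DNA string (A/C/G/T only)."""
    counts = np.zeros(len(KMERS))
    for i in range(len(contig) - 3):
        kmer = contig[i:i + 4]
        if kmer in KMER_INDEX:
            counts[KMER_INDEX[kmer]] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts

def bin_contigs(contigs, n_bins):
    """Cluster contigs into n_bins by their tetranucleotide profiles."""
    features = np.array([tetranucleotide_profile(c) for c in contigs])
    return KMeans(n_clusters=n_bins, n_init=10, random_state=0).fit_predict(features)
```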
We discuss how dissipative effects and the presence of a thermal radiation bath, which are inherent characteristics of the warm inflation dynamics, can evade the recently proposed Swampland conjectures. Different forms of dissipation terms, motivated by both microphysical quantum field theory and phenomenological models, are discussed and their viability to overcome the assumed Swampland constraints is analyzed.
astrophysics
Feedback from massive stars plays a key role in molecular cloud evolution. After the onset of star formation, the young stellar population is exposed by photoionization, winds, supernovae, and radiation pressure from massive stars. Recent observations of nearby galaxies have provided the evolutionary timeline between molecular clouds and exposed young stars, but the duration of the embedded phase of massive star formation is still ill-constrained. We measure how long massive stellar populations remain embedded within their natal cloud, by applying a statistical method to six nearby galaxies at 20-100 pc resolution, using CO, Spitzer 24$\rm\,\mu m$, and H$\alpha$ emission as tracers of molecular clouds, embedded star formation, and exposed star formation, respectively. We find that the embedded phase (with CO and 24$\rm\,\mu m$ emission) lasts for $2{-}7$ Myr and constitutes $17{-}47\%$ of the cloud lifetime. During approximately the first half of this phase, the region is invisible in H$\alpha$, making it heavily obscured. For the second half of this phase, the region also emits in H$\alpha$ and is partially exposed. Once the cloud has been dispersed by feedback, 24$\rm\,\mu m$ emission no longer traces ongoing star formation, but remains detectable for another $2{-}9$ Myr through the emission from ambient CO-dark gas, tracing star formation that recently ended. The short duration of massive star formation suggests that pre-supernova feedback (photoionization and winds) is important in disrupting molecular clouds. The measured timescales do not show significant correlations with environmental properties (e.g. metallicity). Future JWST observations will enable these measurements routinely across the nearby galaxy population.
astrophysics
Quantum correlations of identical particles are important for quantum-enhanced technologies. The recently introduced non-standard approach to treat identical particles [G. Compagno et al., Phil. Trans. R. Soc. A 376, 20170317 (2018)] is here exploited to show the effect of particle indistinguishability on the characterization of entanglement of three identical qubits. We show that, by spatially localized measurements in separated regions, three independently prepared separated qubits in a pure elementary state behave as distinguishable ones, as expected. On the other hand, delocalized measurements give rise to a measurement-induced entanglement. We then find that three independently prepared boson qubits under complete spatial overlap exhibit genuine three-partite entanglement. These results evidence the effect of spatial overlap on identical particle entanglement and show that the latter depends on both the quantum state and the type of measurement.
quantum physics
We present a coarse-grained model for stochastic transport of noninteracting chemical signals inside neuronal dendrites and show how first-passage properties depend on the key structural factors affected by neurodegenerative disorders or aging: the extent of the tree, the topological bias induced by segmental decrease of dendrite diameter, and the trapping probabilities in biochemical cages and growth cones. We derive an exact expression for the distribution of first-passage times, which follows a universal exponential decay in the long-time limit. The asymptotic mean first-passage time exhibits a crossover from power-law to exponential scaling upon reducing the topological bias. We calibrate the coarse-grained model parameters and obtain the variation range of the mean first-passage time when the geometrical characteristics of the dendritic structure evolve during the course of aging or neurodegenerative disease progression (A few disorders are chosen and studied for which clear trends for the pathological changes of dendritic structure have been reported in the literature). We prove the validity of our analytical approach under realistic fluctuations of structural parameters, by comparing to the results of Monte Carlo simulations. Moreover, by constructing local structural irregularities, we analyze the resulting influence on transport of chemical signals and formation of heterogeneous density patterns. Since neural functions rely on chemical signal transmission to a large extent, our results open the possibility to establish a direct link between the disease progression and neural functions.
physics
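A toy numerical counterpart to the coarse-grained transport model discussed above: a biased random walk along a linear chain of dendritic segments with an occasional trapping delay, whose first-passage times to the distal end are estimated by Monte Carlo. The chain geometry, bias, trapping probability, and all names are hypothetical; the analytical distribution, the tree topology, and the disease-specific calibration of the paper are not reproduced.

```python
import numpy as np

def first_passage_times(n_sites=50, p_forward=0.6, p_trap=0.02,
                        trap_duration=20, n_walkers=5000, seed=0):
    """Monte Carlo first-passage times of a biased random walk on a chain.

    A signal starts at site 0 and must reach site n_sites-1 (absorbing).
    At each step it moves forward with probability p_forward, backward
    otherwise (reflected at the origin); with probability p_trap it is
    additionally held in place for trap_duration steps, a crude model of
    trapping in biochemical cages or growth cones."""
    rng = np.random.default_rng(seed)
    times = np.empty(n_walkers)
    for w in range(n_walkers):
        pos, t = 0, 0
        while pos < n_sites - 1:
            if rng.random() < p_trap:
                t += trap_duration
            pos += 1 if rng.random() < p_forward else -1
            pos = max(pos, 0)
            t += 1
        times[w] = t
    return times

fpt = first_passage_times()
print("mean FPT:", fpt.mean(), " 95th percentile:", np.percentile(fpt, 95))
```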
The data on differential cross sections and photon-beam asymmetries for the $\gamma p \to K^+\Lambda(1520)$ reaction have been analyzed within a tree-level effective Lagrangian approach. In addition to the $t$-channel $K$ and $K^\ast$ exchanges, the $u$-channel $\Lambda$ exchange, the $s$-channel nucleon exchange, and the interaction current, a minimal number of nucleon resonances in the $s$ channel are introduced in constructing the reaction amplitudes to describe the data. The results show that the experimental data can be well reproduced by including either the $N(2060)5/2^-$ or the $N(2120)3/2^-$ resonance. In both cases, the contact term and the $K$ exchange are found to make significant contributions, while the contributions from the $K^\ast$ and $\Lambda$ exchanges are negligible in the former case and considerable in the latter case. Measurements of the data on target asymmetries are called on to further pin down the resonance contents and to clarify the roles of the $K^\ast$ and $\Lambda$ exchanges in this reaction.
high energy physics phenomenology
While the success of semi-supervised learning (SSL) is still not fully understood, Sch\"olkopf et al. (2012) have established a link to the principle of independent causal mechanisms. They conclude that SSL should be impossible when predicting a target variable from its causes, but possible when predicting it from its effects. Since both these cases are somewhat restrictive, we extend their work by considering classification using cause and effect features at the same time, such as predicting disease from both risk factors and symptoms. While standard SSL exploits information contained in the marginal distribution of all inputs (to improve the estimate of the conditional distribution of the target given inputs), we argue that in our more general setting we should use information in the conditional distribution of effect features given causal features. We explore how this insight generalises the previous understanding, and how it relates to and can be exploited algorithmically for SSL.
statistics
Relaxing the conventional assumption of a minimal coupling between the dark matter (DM) and dark energy (DE) fields introduces significant changes in the predicted evolution of the Universe. Therefore, testing such a possibility constitutes an essential task not only for cosmology but also for fundamental physics. In a previous communication [Phys. Rev. D99, 043521, 2019], we proposed a new null test for the $\Lambda$CDM model based on the time dependence of the ratio between the DM and DE energy densities which is also able to detect potential signatures of interaction between the dark components. In this work, we extend that analysis avoiding the $ \Lambda$CDM assumption and reconstruct the interaction in the dark sector in a fully model-independent way using data from type Ia supernovae, cosmic chronometers and baryonic acoustic oscillations. According to our analysis, the $\Lambda$CDM model is consistent with our model-independent approach at least at $3\sigma$ CL over the entire range of redshift studied. On the other hand, our analysis shows that the current background data do not allow us to rule out the existence of an interaction in the dark sector. Finally, we present a forecast for next-generation LSS surveys. In particular, we show that Euclid and SKA will be able to distinguish interacting models with about 4\% of precision at $z\approx 1$.
astrophysics