text | label |
---|---|
Lattice mismatch can substantially impact the spatial organization of heterogeneous materials. We examine a simple model for lattice-mismatched solids over a broad range of temperature and composition, revealing both uniform and spatially modulated phases. Scenarios for coexistence among them are unconventional due to the extensive mechanical cost of segregation. Together with an adapted Maxwell construction for elastic phase separation, mean field theory predicts a phase diagram that captures key low-temperature features of Monte Carlo simulations. | condensed matter |
We present an optimal probabilistic protocol to distill quantum coherence. Inspired by a specific entanglement distillation protocol, our main result yields a strictly incoherent operation that produces one of a family of maximally coherent states of variable dimension from any pure quantum state. We also expand this protocol to the case where it is possible, for some initial states, to avert any waste of resources as far as the output states are concerned, by exploiting an additional transformation into a suitable intermediate state. These results provide practical schemes for efficient quantum resource manipulation. | quantum physics |
Surgical tool segmentation in endoscopic images is the first step towards pose estimation and (sub-)task automation in challenging minimally invasive surgical operations. While many approaches in the literature have shown great results using modern machine learning methods such as convolutional neural networks, the main bottleneck lies in the acquisition of a large number of manually-annotated images for efficient learning. This is especially true in surgical context, where patient-to-patient differences impede the overall generalizability. In order to cope with this lack of annotated data, we propose a self-supervised approach in a robot-assisted context. To our knowledge, the proposed approach is the first to make use of the kinematic model of the robot in order to generate training labels. The core contribution of the paper is to propose an optimization method to obtain good labels for training despite an unknown hand-eye calibration and an imprecise kinematic model. The labels can subsequently be used for fine-tuning a fully-convolutional neural network for pixel-wise classification. As a result, the tool can be segmented in the endoscopic images without needing a single manually-annotated image. Experimental results on phantom and in vivo datasets obtained using a flexible robotized endoscopy system are very promising. | computer science |
A classical nova outburst has been suggested for a number of extragalactic symbiotic stars, but in none of these systems has it been proven. In this work we study the nature of one of these systems, LMC S154. We gathered archival photometric observations in order to determine the timescales and nature of variability in this system. Additionally we carried out photometric and spectroscopic monitoring of the system and fitted synthetic spectra to the observations. The carbon abundance in the photosphere of the red giant is significantly higher than that derived for the nebula, which confirms pollution of the circumbinary material by ejecta from a nova outburst. The photometric and spectroscopic data show that the system reached quiescence in 2009, which means that for the first time all of the phases of a nova outburst were observed in an extragalactic symbiotic star. The data indicate that most probably there were three outbursts observed in LMC S154, which would make this system a member of a rare class of symbiotic recurrent novae. The recurrent nature of the system is supported by the discovery of coronal lines in the spectra, which are observed only in symbiotic stars with massive white dwarfs and with short-recurrence-time outbursts. The gathered evidence is sufficient to classify LMC S154 as the first bona fide extragalactic symbiotic nova, and likely a recurrent nova. It is also the first nova with a carbon-rich donor. | astrophysics |
We introduce a method for calculating the divided differences of the exponential function by means of addition and removal of items from the input list to the function. Our technique exploits a new identity related to divided differences recently derived by F. Zivcovich [Dolomites Research Notes on Approximation 12, 28-42 (2019)]. We show that upon adding an item to or removing an item from the input list of an already evaluated exponential, the re-evaluation of the divided differences can be done with only $O(s n)$ floating point operations and $O(s n)$ bytes of memory, where $[z_0,\dots,z_n]$ are the inputs and $s \propto \max_{i,j} |z_i - z_j|$. We demonstrate our algorithm's ability to deal with input lists that are orders-of-magnitude longer than the maximal capacities of the current state-of-the-art. We discuss in detail one practical application of our method: the efficient calculation of weights in the off-diagonal series expansion quantum Monte Carlo algorithm. | physics |
We propose a 2D Encoder-Decoder based deep learning architecture for semantic segmentation that incorporates anatomical priors by imitating the encoder component of an autoencoder in latent space. The autoencoder is additionally enhanced by means of hierarchical features, extracted by a U-Net module. Our suggested architecture is trained in an end-to-end manner and is evaluated on the example of pelvic bone segmentation in MRI. A comparison to the standard U-Net architecture shows promising improvements. | computer science |
We consider the time evolution of a state in an isolated quantum spin lattice system with energy cumulants proportional to the number of the sites $L^d$. We compute the distribution of the eigenvalues of the time averaged state over a time window $[t_0,t_0+t]$ in the limit of large $L$. This allows us to infer the size of a subspace that captures time evolution in $[t_0,t_0+t]$ with an accuracy $1-\epsilon$. We estimate the size to be $ \frac{\sqrt{2\mathfrak{e}_2}}{\pi}\mathrm{erf}^{-1}(1-\epsilon) L^{\frac{d}{2}}t$, where $\mathfrak{e}_2$ is the energy variance per site, and $\mathrm{erf}^{-1}$ is the inverse error function. | quantum physics |
We examine the constraints on sub-GeV dark sector particles set by the proto-neutron star cooling associated with the core-collapse supernova event SN1987a. Considering explicitly a dark photon portal dark sector model, we compute the relevant interaction rates of dark photon ($A'$) and dark fermion ($\chi$) with the Standard Model particles as well as their self-interaction inside the dark sector. We find that even with a small dark sector fine structure constant $\alpha_D\ll 1$, dark sector self-interactions can easily lead to their own self-trapping. This effect strongly limits the energy luminosity carried away by dark sector particles from the supernova core and thus drastically affects the parameter space that can be constrained by SN1987a. We consider specifically two mass ratios, $m_{A'}=3m_\chi$ and $3m_{A'}=m_\chi$, which represent scenarios where the decay of $A'$ to $\chi\bar\chi$ is allowed or forbidden, respectively. We show that SN1987a can only place bounds on the dark sector when $\alpha_D\lesssim 10^{-15}$ ($10^{-7}$) for the former (latter) for $m_\chi\lesssim 20$ MeV. Furthermore, this evades the supernova bounds on the widely-examined dark photon parameter space completely if $\alpha_D\lesssim 10^{-7}$ for the former, while lifting the bounds when $\alpha_D\lesssim 10^{-7}$ if $m_\chi\lesssim 100$ MeV. Our findings thus imply that the existing supernova bounds on light dark particles can be generally evaded by a similar self-trapping mechanism. This also implies that non-standard, strongly self-interacting neutrinos are not consistent with the SN1987a observation. The same effects can also take place for other known stellar bounds on dark sector particles. | high energy physics phenomenology |
In this paper, we investigate theoretically the back-action evading measurement of the collective mode of an interacting atomic Bose-Einstein condensate (BEC) trapped in an optical cavity which is driven coherently by a pump laser with a modulated amplitude. It is shown that for a specified kind of amplitude modulation of the driving laser, one can measure a generalized quadrature of the collective mode of the BEC indirectly through the output cavity field with negligible back-action noise in the good-cavity limit. Nevertheless, the on-resonance added noise of measurement is suppressed below the standard quantum limit (SQL) even in the bad-cavity limit. Moreover, the measurement precision can be controlled through the s-wave scattering frequency of atomic collisions. | quantum physics |
Risk prediction capitalizing on emerging human genome findings holds great promise for new prediction and prevention strategies. While the large amounts of genetic data generated from high-throughput technologies offer us a unique opportunity to study a deep catalog of genetic variants for risk prediction, the high-dimensionality of genetic data and complex relationships between genetic variants and disease outcomes bring tremendous challenges to risk prediction analysis. To address these rising challenges, we propose a kernel-based neural network (KNN) method. KNN inherits features from both linear mixed models (LMM) and classical neural networks and is designed for high-dimensional risk prediction analysis. To deal with datasets with millions of variants, KNN summarizes genetic data into kernel matrices and uses the kernel matrices as inputs. Based on the kernel matrices, KNN builds a single-layer feedforward neural network, which makes it feasible to consider complex relationships between genetic variants and disease outcomes. The parameter estimation in KNN is based on MINQUE, and we show that, under certain conditions, the average prediction error of KNN can be smaller than that of LMM. Simulation studies also confirm the results. | statistics |
We report the results of AstroSat observations of GRS 1915$+$105 obtained using 100 ks guaranteed-time (GT) during the soft state. The Color-Color Diagram (CCD) indicates a variability class of $\delta$ with the detection of High Frequency QPO (HFQPO) in the power density spectra (PDS). The HFQPO is seen to vary in the frequency range of $67.96 - 70.62$ Hz with percentage rms $\sim 0.83 - 1.90$ % and significance varying from $1.63 - 7.75$. The energy dependent power spectra show that the HFQPO features are dominant only in $6 - 25$ keV energy band. The broadband energy spectra ($0.7 - 50$ keV) of SXT (Soft X-ray Telescope) and LAXPC (Large Area X-ray Proportional Counter) modelled with nthComp and powerlaw imply that the source has an extended corona in addition to a compact 'Comptonizing corona' that produces high energy emission and exhibits HFQPOs. The broadband spectral modelling indicates that the source spectra are well described by thermal Comptonization with electron temperature (kT$_{\rm e}$) of $2.07 - 2.43$ keV and photon-index ($\Gamma_{\rm nth}$) between $1.73-2.45$ with an additional powerlaw component of photon-index ($\Gamma_{\rm PL}$) between $2.94 - 3.28$. The norm of nthComp component is high ($\sim 8$) during the presence of strong HFQPO and low ($\sim 3$) during the absence of HFQPO. Further, we model the energy spectra with the kerrbb model to estimate the accretion rate, mass and spin of the source. Our findings indicate that the source accretes at super-Eddington rate of $1.17-1.31~ \dot{M}_{\rm Edd}$. Moreover, we find the mass and spin of the source as $12.44 - 13.09~M_{\odot}$ and $0.990-0.997$ with $90\%$ confidence suggesting that GRS 1915$+$105 is a maximally rotating stellar mass X-ray binary black hole source. | astrophysics |
In this paper, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one of three types of computational tasks. Since the data size of each computational task is different, as the requested computational task varies, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes to effectively serve the users. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multi-stack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users' information in its multiple stacks to avoid learning the same resource allocation scheme and users' states, thus improving the convergence speed and learning efficiency. Simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm. | electrical engineering and systems science |
In this paper, the boundary element method is combined with the Chebyshev operational matrix technique to solve two-dimensional multi-order time-fractional partial differential equations that are nonlinear and linear with respect to the spatial and temporal variables, respectively. Fractional derivatives are taken in the Caputo sense. The boundary element method is used to convert the main problem into a system of multi-order fractional ordinary differential equations. The resulting system is then approximated by the Chebyshev operational matrix technique, and its condition number is analyzed. The accuracy and efficiency of the proposed hybrid scheme are demonstrated by numerically solving three different types of two-dimensional time-fractional convection-diffusion equations. Convergence rates are calculated for different meshes within the boundary element technique. Numerical results are given in graphs and tables for the solutions and different types of error norms. | mathematics |
We present an experimental demonstration as well as a theoretical model of an integrated circuit designed for the manipulation of a microwave field down to the single-photon level. The device is made of a superconducting resonator coupled to a transmission line via a second frequency-tunable resonator. The tunable resonator can be used as a tunable coupler between the fixed resonator and the transmission line. Moreover, the manipulation of the microwave field between the two resonators is possible. In particular, we demonstrate the swapping of the field from one resonator to the other by pulsing the frequency detuning between the two resonators. The behavior of the system, which determines how the device can be operated, is analyzed as a function of one key parameter of the system, the damping ratio of the coupled resonators. We show a good agreement between experiments and simulations, realized by solving a set of coupled differential equations. | condensed matter |
We discuss the approximation of the value function for infinite-horizon discounted Markov Reward Processes (MRP) with nonlinear functions trained with the Temporal-Difference (TD) learning algorithm. We first consider this problem under a certain scaling of the approximating function, leading to a regime called lazy training. In this regime, the parameters of the model vary only slightly during the learning process, a feature that has recently been observed in the training of neural networks, where the scaling we study arises naturally, implicit in the initialization of their parameters. Both in the under- and over-parametrized frameworks, we prove exponential convergence to local, respectively global minimizers of the above algorithm in the lazy training regime. We then compare this scaling of the parameters to the mean-field regime, where the approximately linear behavior of the model is lost. Under this alternative scaling we prove that all fixed points of the dynamics in parameter space are global minimizers. We finally give examples of our convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks. | computer science |
We study the excitation function of the low-lying charmonium state $\Psi$(3686) in $\bar p$ Au collisions, taking into account its in-medium propagation. The time evolution of the spectral function of the charmonium state is studied with a BUU type transport model. We calculate the excitation function of $\Psi$(3686) production and show that it is strongly affected by the medium. The relevant energy regime will be accessible to the PANDA experiment. | high energy physics phenomenology |
Although the first results of the KATRIN neutrino mass experiment are consistent with a new improved upper limit of 1.1 eV for the effective mass of the electron neutrino, surprisingly they are also consistent with an exotic model of the neutrino masses put forward in 2013 that includes one tachyonic mass state doublet having $m^2\sim - 0.2$ keV$^2$. A definitive conclusion on the validity of the model should be possible after less than one year of KATRIN data-taking. | high energy physics phenomenology |
We study the $D_s^+\to \pi^+(a_0(980)^0\to)\pi^0\eta$, $\pi^0(a_0(980)^+\to)\pi^+\eta$ decays, which have been recently measured by the BESIII collaboration. We propose that $D_s^+\to \pi^{+(0)}(a_0(980)^{0(+)}\to)\pi^{0(+)}\eta$ receives the contributions from the triangle rescattering processes, where $M^0$ and $\rho^+$ in $D_s^+\to M^0 \rho^+$, by exchanging $\pi^{0(+)}$, are formed as $a_0(980)^{0(+)}$ and $\pi^{+(0)}$, respectively, with $M^0=(\eta,\eta')$. Accordingly, we calculate that ${\cal B}(D_s^+\to a_0(980)^{0(+)}\pi^{+(0)})=(1.7\pm 0.2\pm 0.1)\times 10^{-2}$ and ${\cal B}(D_s^+\to \pi^{+(0)}(a_0(980)^{0(+)}\to)\pi^{0(+)}\eta)=(1.4\pm 0.1\pm 0.1)\times 10^{-2}$, being consistent with the data. | high energy physics phenomenology |
In this paper we introduce new modules over the ring of ponderation functions, thereby recovering old results in harmonic analysis from the perspective of ring theory. Moreover, we prove that the Laplace transform, the Fourier transform, and the Hankel transform generate certain kinds of modules over the ring of ponderation functions. | mathematics |
We study the super instanton solution in the gauge theory with U$(n_{+}| n_{-})$ gauge group. Based on the ADHM construction generalized to the supergroup theory, we derive the instanton partition function from the super instanton moduli space through the equivariant localization. We derive the Seiberg-Witten geometry and its quantization for the supergroup gauge theory from the instanton partition function, and study the connection with classical and quantum integrable systems. We also discuss the brane realization of the supergroup quiver gauge theory, and a possible connection to non-supergroup quiver gauge theories. | high energy physics theory |
Carroll's group is presented as a group of transformations in a 5-dimensional space ($\mathcal{C}$) obtained by embedding the Euclidean space into a (4; 1)-de Sitter space. Three of the five dimensions of $\mathcal{C}$ are related to $\mathcal{R}^3$, and the other two to mass and time. A covariant formulation of Carroll's group, analogous to that introduced by Takahashi for Galilei's group, is deduced. Unitary representations are studied. | high energy physics theory |
We study the quantization of the corner symmetry algebra of 3d gravity associated with 1d spatial boundaries. We first recall that in the continuum, this symmetry algebra is given by the central extension of the Poincar\'e loop algebra. At the quantum level, we construct a discrete current algebra with a $\mathcal{D}\mathrm{SU}(2)$ quantum symmetry group that depends on an integer $N$. This algebra satisfies two fundamental properties: first, it is compatible with the quantum space-time picture given by the Ponzano-Regge state-sum model, which provides path integral amplitudes for 3d loop quantum gravity; second, we recover the central extension of the Poincar\'e current algebra in the $N\rightarrow\infty$ limit. The number of boundary edges defines a discreteness parameter $N$ which counts the number of flux lines attached to the boundary. We analyse the refinement, coarse-graining and fusion processes as $N$ changes. Identifying a discrete current algebra on quantum boundaries is an important step towards understanding how conformal field theories arise on spatial boundaries in loop quantum gravity. It also shows how an asymptotic BMS symmetry group could appear from the continuum limit of 3d quantum gravity. | high energy physics theory |
We employ the quark part of the symmetric energy-momentum tensor current to calculate the gravitational form factors of the $N(1535) \rightarrow N$ transition by means of the light cone QCD sum rule formalism. In the numerical analysis, we use two different sets of shape parameters in the distribution amplitudes of the $ N(1535) $ baryon and the general form of the nucleon's interpolating current. It is seen that the momentum-squared dependence of the gravitational form factors can be well described by a p-pole fit function. The results obtained with the two parameter sets are found to be quite different from each other, and the $N(1535) \rightarrow N$ transition gravitational form factors depend strongly on the shape parameters of the distribution amplitudes of the $N(1535)$ state, which parametrize the relative orbital angular momentum of the constituent quarks. | high energy physics phenomenology |
We compare the predictions of the fundamentally motivated minimal coupling ($\hat{\boldsymbol{p}}\cdot\hat{\boldsymbol{A}}$) and the ubiquitous dipole coupling ($\hat{\boldsymbol{x}}\cdot\hat{\boldsymbol{E}}$) in the light-matter interaction. By studying the light-matter interaction for hydrogen-like atoms we find that the dipole approximation cannot be justified a priori to analyze the physics of vacuum excitations (a very important phenomenon in relativistic quantum information) since a dominant wavelength is absent in those problems, no matter how small (as compared to any frequency scale) the atom is. Remarkably, we show that the dipole approximation in those regimes can still be valid as long as the interaction time is longer than the light-crossing time of the atoms, which is a very reasonable assumption. We also highlight some of the subtleties that one has to be careful with when working with the explicitly non-gauge-invariant nature of the minimal coupling, and we compare it with the explicitly gauge-invariant dipole coupling. | quantum physics |
It has been shown that improper Gaussian signaling (IGS) can improve the performance of wireless interference-limited systems when perfect channel state information (CSI) is available. In this paper, we investigate the robustness of IGS against imperfect CSI on the transmitter side in a two-user single-input single-output (SISO) interference channel (IC) as well as in a SISO Z-IC, when interference is treated as noise. We assume that the true channel coefficients belong to a known region around the channel estimates, which we call the uncertainty region. Following a worst-case robustness approach, we study the rate-region boundary of the IC for the worst channel in the uncertainty region. For the two-user IC, we derive a robust design in closed-form, which is independent of the phase of the channels by allowing only one of the users to transmit IGS. For the Z-IC, we provide a closed-form design for the transmission parameters by considering an enlarged uncertainty region and allowing both users to employ IGS. In both cases, the IGS-based designs are ensured to perform no worse than proper Gaussian signaling. Furthermore, we show, through numerical examples, that the proposed robust designs significantly outperform non-robust solutions. | electrical engineering and systems science |
In differential privacy (DP), we want to query a database about n users, in a way that "leaks at most eps about any individual user," even conditioned on any outcome of the query. Meanwhile, in gentle measurement, we want to measure n quantum states, in a way that "damages the states by at most alpha," even conditioned on any outcome of the measurement. In both cases, we can achieve the goal by techniques like deliberately adding noise to the outcome before returning it. This paper proves a new and general connection between the two subjects. Specifically, we show that on products of n quantum states, any measurement that is alpha-gentle for small alpha is also O(alpha)-DP, and any product measurement that is eps-DP is also O(eps*sqrt(n))-gentle. Illustrating the power of this connection, we apply it to the recently studied problem of shadow tomography. Given an unknown d-dimensional quantum state rho, as well as known two-outcome measurements E_1,...,E_m, shadow tomography asks us to estimate Pr[E_i accepts rho], for every i in [m], by measuring few copies of rho. Using our connection theorem, together with a quantum analog of the so-called private multiplicative weights algorithm of Hardt and Rothblum, we give a protocol to solve this problem using O((log m)^2 (log d)^2) copies of rho, compared to Aaronson's previous bound of ~O((log m)^4 (log d)). Our protocol has the advantages of being online (that is, the E_i's are processed one at a time), gentle, and conceptually simple. Other applications of our connection include new lower bounds for shadow tomography from lower bounds on DP, and a result on the safe use of estimation algorithms as subroutines inside larger quantum algorithms. | quantum physics |
In 2017, the BESIII Collaboration announced the observation of a charged charmonium-like structure in the $\psi(3686)\pi^\pm$ invariant mass spectrum of the $e^+ e^- \to \psi(3686) \pi^+ \pi^-$ process at different energy points, which enables us to perform a precise study of this process based on the initial single pion emission (ISPE) mechanism. In this work, we perform a combined fit to the experimental data of the cross section of $e^+ e^- \to \psi(3686) \pi^+ \pi^-$, and the corresponding $\psi(3686)\pi^\pm$ and dipion invariant mass spectra. Our result shows that the observed charged charmonium-like structure in $e^+ e^- \to \psi(3686) \pi^+ \pi^-$ can be well reproduced based on the ISPE mechanism, and that the corresponding dipion invariant mass spectrum and cross section can be depicted with the same parameters. In fact, it provides strong evidence that the ISPE mechanism can be an underlying mechanism resulting in such a novel phenomenon. | high energy physics phenomenology |
In a pair of correlated quantum systems, a measurement on one corresponds to a change in the state of the other, and information is lost in the process. Determining the set of projectors along which a measurement entails the minimum loss of information content is the optimization problem of quantum discord; it is an important aspect of the classical-to-quantum transition because it asks us to look for the most classical states. This optimization problem is known to be NP-complete and, because discord is defined through it, poses a major obstacle to every computation. The standard zero-discord condition helps us move to a stronger measure that addresses the correlated observables; in this context we show that discord is minimized in the diagonal basis of the reduced density matrices and present an analytical expression for the measure. | quantum physics |
We present the first images of the nebula around eta Carinae obtained with HST/WFC3, including a UV image in the F280N filter that traces MgII emission, plus contemporaneous imaging in the F336W, F658N, and F126N filters that trace near-UV continuum, [NII], and [FeII], respectively. The F336W and F658N images are consistent with previous images in these filters, and F126N shows that for the most part, [FeII] 12567 traces clumpy shocked gas seen in [NII]. The F280N image, however, reveals MgII emission from structures that have not been seen in any previous line or continuum images of eta Carinae. This image shows diffuse MgII emission immediately outside the bipolar Homunculus nebula in all directions, but with the strongest emission concentrated over the poles. The diffuse structure with prominent radial streaks, plus an anticorrelation with ionized tracers of clumpy shocked gas, leads us to suggest that this is primarily MgII resonant scattering from unshocked, neutral atomic gas. We discuss the implied structure and geometry of the MgII emission, and its relation to the Homunculus lobes and various other complex nebular structures. An order of magnitude estimate of the neutral gas mass traced by MgII is 0.02Msun, with a corresponding kinetic energy around 1e47erg. This may provide important constraints on polar mass loss in the early phases of the Great Eruption. We argue that the MgII line may be an excellent tracer of significant reservoirs of freely expanding, unshocked, and otherwise invisible neutral atomic gas in a variety of stellar outflows. | astrophysics |
The LHCb Collaboration has reported resonant activity in the channel $D^+ K^-$, identifying two components: $X_0(2900)$ with $J^P = 0^+$ at $2866 {\pm} 7$ MeV, $\Gamma_0=57{\pm} 13$ MeV and $X_1(2900)$ with $J^P = 1^-$ at $2904 {\pm} 7$ MeV, $\Gamma_1=110{\pm} 12$ MeV. We interpret the $X_0(2900)$ component as a $cs \bar u\bar d$ isosinglet compact tetraquark, calculating its mass to be $2863 {\pm} 12$ MeV. This is the first exotic hadron with open heavy flavor. The analogous $bs\bar u\bar d$ tetraquark is predicted at $6213 {\pm} 12$ MeV. We discuss possible interpretations of the heavier and wider $X_1(2900)$ state and examine potential implications for other systems with two heavy quarks. | high energy physics phenomenology |
This article analyses differential equations that represent damped and fractional oscillators. First, it is shown that physical quantities must be made dimensionless before they are used in fractional calculus. Approximate expressions relating the parameters of the two equations are then presented for the case in which the fractional order is close to an integer. Next, a numerical regression is performed using a power-series expansion, and it is concluded, also from fractional calculus, that the two equations cannot be equivalent. Finally, using the numerical-regression data, the approximate analytical expressions relating the two equations' parameters are refined. | physics |
Probability is an important question in the ontological interpretation of quantum mechanics. It has been discussed in some trajectory interpretations such as Bohmian mechanics and stochastic mechanics. New questions arise when the probability domain extends to the complex space, including the generation of complex trajectories, the definition of the complex probability, and the relation of the complex probability to the quantum probability. The complex treatment proposed in this article applies the optimal quantum guidance law to derive the stochastic differential equation governing a particle's random motion in the complex plane. The probability distribution of the particle position over the complex plane is formed by an ensemble of complex quantum random trajectories, which are solved from the complex stochastic differential equation. Meanwhile, this probability distribution is verified by the solution of the complex Fokker-Planck equation. It is shown that quantum probability and classical probability can be integrated under the framework of complex probability, such that they can both be derived from the same probability distribution by different statistical ways of collecting spatial points. | quantum physics |
We consider exact algorithms for Bayesian inference with model selection priors (including spike-and-slab priors) in the sparse normal sequence model. Because the best existing exact algorithm becomes numerically unstable for sample sizes over n=500, much attention has been given to alternative approaches such as approximate algorithms (Gibbs sampling, variational Bayes, etc.), shrinkage priors (e.g. the Horseshoe prior and the Spike-and-Slab LASSO), and empirical Bayesian methods. However, by introducing algorithmic ideas from online sequential prediction, we show that exact calculations are feasible for much larger sample sizes: for general model selection priors we reach n=25000, and for certain spike-and-slab priors we can easily reach n=100000. We further prove a de Finetti-like result for finite sample sizes that characterizes exactly which model selection priors can be expressed as spike-and-slab priors. The computational speed and numerical accuracy of the proposed methods are demonstrated in experiments on simulated data, on a differential gene expression data set, and in comparing the effect of multiple hyper-parameter settings in the beta-binomial prior. In our experimental evaluation we compute guaranteed bounds on the numerical accuracy of all new algorithms, which shows that the proposed methods are numerically reliable whereas an alternative based on long division is not. | statistics
Given the strong dependence of material structure and properties on the length and strength of constituent bonds and the fact that surface adsorption and chemical reactions are initiated by the formation of bonds between two systems, bonding parameters are of key importance for material design and industrial processes. In this study, a machine learning (ML)-based model is used to accurately predict bonding properties from information pertaining to isolated systems before bonding. This model employs the density of states (DOS) before bond formation as the ML descriptor and accurately predicts binding energy, bond distance, covalent electron amount, and Fermi energy even when only 20% of the whole dataset is used for training. The results show that the DOS of isolated systems before bonding is a powerful descriptor for the accurate prediction of bonding and adsorption properties. | condensed matter |
In the context of 4d effective gravity theories with 8 supersymmetries, we propose to unify, strengthen, and refine the several swampland conjectures into a single statement: the structural criterion, modelled on the structure theorem in Hodge theory. In its most abstract form the new swampland criterion applies to all 4d $\mathcal{N}=2$ effective theories (having a quantum-consistent UV completion) whether supersymmetry is \emph{local} or rigid: indeed it may be regarded as the more general version of Seiberg-Witten geometry which holds both in the rigid and local cases. As a first application of the new swampland criterion we show that a quantum-consistent $\mathcal{N}=2$ supergravity with a cubic pre-potential is necessarily a truncation of a higher-$\mathcal{N}$ \textsc{sugra}. More precisely: its moduli space is a Shimura variety of `magic' type. In all other cases a quantum-consistent special K\"ahler geometry is either an arithmetic quotient of the complex hyperbolic space $SU(1,m)/U(m)$ or has no \emph{local} Killing vector. Applied to Calabi-Yau 3-folds this result implies (assuming mirror symmetry) the validity of the Oguiso-Sakurai conjecture in Algebraic Geometry: all Calabi-Yau 3-folds $X$ without rational curves have Picard number $\rho=2,3$; in fact they are finite quotients of Abelian varieties. More generally: the K\"ahler moduli of $X$ do not receive quantum corrections if and only if $X$ has infinite fundamental group. In all other cases the K\"ahler moduli have instanton corrections in (essentially) all possible degrees. | high energy physics theory
In this thesis, I describe a number of recent important developments in neutrino cosmology on three fronts. Firstly, focusing on Large-Scale Structure (LSS) data, I will show that current cosmological probes contain a wealth of information on the sum of the neutrino masses. I report on the analysis leading to the currently best upper limit on the sum of the neutrino masses of $0.12\,{\rm eV}$. I show how cosmological data exhibits a weak preference for the normal neutrino mass ordering because of parameter space volume effects, and propose a simple method to quantify this preference. Secondly, I will discuss how galaxy bias represents a severe limitation towards fully capitalizing on the neutrino information hidden in LSS data. I propose a method for calibrating the scale-dependent galaxy bias using CMB lensing-galaxy cross-correlations. Moreover, in the presence of massive neutrinos, the usual definition of bias becomes inadequate, as it leads to a scale-dependence on large scales which has never been accounted for. I show that failure to define the bias appropriately will be a problem for future LSS surveys, and propose a simple recipe to account for the effect of massive neutrinos on galaxy bias. Finally, I discuss implications of correlations between neutrino parameters and other cosmological parameters. In non-phantom dynamical dark energy models, the upper limit on the sum of the neutrino masses becomes tighter than the $\Lambda$CDM limit. Therefore, such models exhibit an even stronger preference for the normal ordering, and their viability could be jeopardized should near-future laboratory experiments determine that the mass ordering is inverted. I then discuss correlations between neutrino and inflationary parameters. I find that our determination of inflationary parameters is stable against assumptions about the neutrino sector. (abridged) | astrophysics |
In a previous work (MGV18), we showed numerically that the turbulent cascade generated by quasi-2D structures (with wave vectors mostly perpendicular to the mean magnetic field) is able to generate a temperature profile close to the one observed in the solar wind ($\simeq 1/R$) in the range 0.2 $\le R \le$ 1 au. Theory, observations, and numerical simulations point to another robust structure, the radial slab, with dominant wave vectors along the radial direction: we study here the efficiency of the radial-slab cascade in building the $1/R$ temperature profile. As in MGV18, we solve the three-dimensional MHD equations including expansion to simulate the turbulent evolution. We find that an isotropic distribution of wave vectors with large cross helicity at 0.2 au, along with a large wind expansion rate, again leads to a temperature decay rate close to $1/R$ but with a radial-slab anisotropy at 1 au. Surprisingly, the turbulent cascade concentrates in the plane transverse to the radial direction, displaying 1D spectra with scalings close to $k^{-5/3}$ in this plane. This supports both the idea of turbulent heating of the solar wind and the existence of two different turbulent cascades, quasi-2D and radial slab, at the origin of the heating. We conclude that sampling the radial spectrum in the solar wind may give only poor information on the real cascade regime and rate when the radial slab is a non-negligible part of the turbulence. | astrophysics
Saturn's moon Titan is the only extraterrestrial body known to host stable lakes and a hydrological cycle. Titan's lakes predominantly contain liquid methane, ethane, and nitrogen, with methane evaporation driving its hydrological cycle. Molecular interactions between these three species lead to non-ideal behavior that causes Titan's lakes to behave differently than Earth's lakes. Here, we numerically investigate how methane evaporation and non-ideal interactions affect the physical properties, structure, dynamics, and evolution of shallow lakes on Titan. We find that, under certain temperature regimes, methane-rich mixtures are denser than relatively ethane-rich mixtures. This allows methane evaporation to stratify Titan's lakes into ethane-rich upper layers and methane-rich lower layers, separated by a strong compositional gradient. At temperatures above 86K, lakes remain well-mixed and unstratified. Between 84 and 86K, lakes can stratify episodically. Below 84K, lakes permanently stratify, and develop very methane-depleted epilimnia. Despite small seasonal and diurnal deviations (<5K) from typical surface temperatures, Titan's rain-filled ephemeral lakes and "phantom lakes" may nevertheless experience significantly larger temperature fluctuations, resulting in polymictic or even meromictic stratification, which may trigger ethane ice precipitation. | astrophysics |
Scalar singlet dark matter in anomaly-free composite Higgs models is accompanied by exotic particles to which the dark matter annihilates. The dark matter can therefore freeze out even in the absence of couplings to the Standard Model. In this regime, both current and future direct detection constraints can be avoided. Moreover, due to the different decay modes of the extra particles, the dark matter candidate can even escape indirect detection constraints. Assessing this issue requires dedicated simulations of the gamma-ray spectrum, which we provide in the present article in the context of $SO(7)/SO(6)$. For the parameter space region that evades constraints from dark matter experiments, we develop new analyses to be performed at a future 100 TeV collider based on the search for the new particles produced in the decay of heavy vector-like quarks. | high energy physics phenomenology
We propose a method to assist fault mitigation in quantum computation through the use of sensors co-located near physical qubits. Specifically, we consider using transition edge sensors co-located on silicon substrates hosting superconducting qubits to monitor for energy injection from ionizing radiation, which has been demonstrated to increase decoherence in transmon qubits. We generalize from these two physical device concepts and explore the potential advantages of co-located sensors to assist fault mitigation in quantum computation. In the simplest scheme, co-located sensors beneficially assist rejection of calculations potentially affected by environmental disturbances. Investigating the potential computational advantage further required development of an extension to the standard formulation of quantum error correction. In a specific case of the standard three-qubit, bit-flip quantum error correction code, we show that given a 20% overall error probability per qubit, approximately 90% of repeated calculation attempts are correctable. However, when sensor-detectable errors account for 45% of overall error probability, the use of co-located sensors uniquely associated with independent qubits boosts the fraction of correct final-state calculations to 96%, at the cost of rejecting 7% of repeated calculation attempts. | quantum physics |
In this chapter, we present a brief and non-exhaustive review of the developments of theoretical models for accretion flows around neutron stars. A somewhat chronological summary of crucial observations and modelling of timing and spectral properties is given in sections 2 and 3. In section 4, we argue why and how the Two-Component Advective Flow (TCAF) solution can be applied to the cases of neutron stars when suitable modifications are made for the NSs. We showcase some of our findings from Monte Carlo and Smoothed Particle Hydrodynamic simulations which further strengthen the points raised in section 4. In summary, we remark on the possibility of future works using TCAF for both weakly magnetic and magnetic Neutron Stars. | astrophysics
When a well-localized photon is incident on a spatially superposed absorber but is not absorbed, the photon can still deliver energy to the absorber. It is shown that when the transferred energy is small relative to the energy uncertainty of the photon, this constitutes an unusual type of weak measurement of the absorber's energy, where the energy distribution of the unabsorbed photon acts as the measurement device, and the strongly disturbed state of the absorber becomes the effective pre-selection. Treating the final state of the absorber as the post-selection, it is shown that the absorber's energy increase is the weak value of its translational Hamiltonian, and the energy distribution of the photon shifts by the opposite amount. The basic case of non-scattering is examined, followed by the case of interaction-free energy transfer. Details and interpretations of the results are discussed. | quantum physics |
Starting with Maxwell's equations, we derive the fundamental results of the Huygens-Fresnel-Kirchhoff and Rayleigh-Sommerfeld theories of scalar diffraction and scattering. These results are then extended to cover the case of vector electromagnetic fields. The famous Sommerfeld solution to the problem of diffraction from a perfectly conducting half-plane is elaborated. Far-field scattering of plane waves from obstacles is treated in some detail, and the well-known optical cross-section theorem, which relates the scattering cross-section of an obstacle to its forward scattering amplitude, is derived. Also examined is the case of scattering from mild inhomogeneities within an otherwise homogeneous medium, where, in the first Born approximation, a fairly simple formula is found to relate the far-field scattering amplitude to the host medium's optical properties. The related problem of neutron scattering from ferromagnetic materials is treated in the final section of the paper. | physics |
High-skill labour is an important factor underpinning the competitive advantage of modern economies. Therefore, attracting and retaining scientists has become a major concern for migration policy. In this work, we study the migration of scientists on a global scale, by combining two large data sets covering the publications of 3.5 million scientists over 60 years. We analyse the geographical distances they moved for a new affiliation and their age when moving, this way reconstructing their geographical "career paths". These paths are used to derive the world network of scientists' mobility between cities and to analyse its topological properties. We further develop and calibrate an agent-based model, such that it reproduces the empirical findings both at the level of scientists and of the global network. Our model takes into account that the academic hiring process is largely demand-driven and demonstrates that the probability of scientists relocating decreases both with age and with distance. Our results allow interpreting the model assumptions as micro-based decision rules that can explain the observed mobility patterns of scientists. | physics
Generative Adversarial Networks (GANs) based semi-supervised learning (SSL) approaches are shown to improve classification performance by utilizing a large number of unlabeled samples in conjunction with limited labeled samples. However, their performance still lags behind the state-of-the-art non-GAN based SSL approaches. We identify that the main reason for this is the lack of consistency in class probability predictions on the same image under local perturbations. Following the general literature, we address this issue via label consistency regularization, which enforces the class probability predictions for an input image to be unchanged under various semantic-preserving perturbations. In this work, we introduce consistency regularization into the vanilla semi-GAN to address this critical limitation. In particular, we present a new composite consistency regularization method which, in spirit, leverages both local consistency and interpolation consistency. We demonstrate the efficacy of our approach on two SSL image classification benchmark datasets, SVHN and CIFAR-10. Our experiments show that this new composite consistency regularization based semi-GAN significantly improves its performance and achieves new state-of-the-art performance among GAN-based SSL approaches. | computer science |
We explore when it is legal to differentiate a polynomial evaluated at a root of unity using modular arithmetic. | mathematics |
The very bright and compact massive young cluster, NGC 3603, has been cited as an example of a starburst in the Milky Way and compared with the much-studied R136/30 Doradus region in the Large Magellanic Cloud. Here we build on the discovery by Mohr-Smith et al. (2017) of a large number of reddened O stars around this cluster. We construct a list of 288 candidate O stars with proper motions, in a region of sky spanning 1.5x1.5 square degrees centered on NGC 3603, by cross-matching the Mohr-Smith et al. (2017) catalogue with Gaia DR2 (Gaia Collaboration et al. 2018). This provides the basis for a first comprehensive examination of the proper motions of these massive stars in the halo of NGC 3603, relative to the much better studied central region. We identify up to 11 likely O star ejections -- 8 of which would have been ejected between 0.60 and 0.95 Myr ago (supporting the age of c.1 Myr that has been attributed to the bright cluster centre). Seven candidate ejections are arranged in a partial ring to the south of the cluster core spanning radii of 9-18 arcmin (18-36 pc if the cluster is 7 kpc away). We also show that the cluster has a halo of a further 100 O stars extending to a radius of at least 5 arcmin, adding to the picture of NGC 3603 as a scaled down version of the R136/30 Dor region. | astrophysics |
In this paper, we develop a neural attentive interpretable recommendation system, named NAIRS. A self-attention network, as a key component of the system, is designed to assign attention weights to interacted items of a user. This attention mechanism can distinguish the importance of the various interacted items in contributing to a user profile. Based on the user profiles obtained by the self-attention network, NAIRS offers personalized high-quality recommendation. Moreover, it develops visual cues to interpret recommendations. This demo application with the implementation of NAIRS enables users to interact with a recommendation system, and it persistently collects training data to improve the system. The demonstration and experimental results show the effectiveness of NAIRS. | computer science |
Magnetic refrigeration exploits the magnetocaloric effect which is the entropy change upon application and removal of magnetic fields in materials, providing an alternate path for refrigeration other than the conventional gas cycles. While intensive research has uncovered a vast number of magnetic materials which exhibits large magnetocaloric effect, these properties for a large number of compounds still remain unknown. To explore new functional materials in this unknown space, machine learning is used as a guide for selecting materials which could exhibit large magnetocaloric effect. By this approach, HoB$_{2}$ is singled out, synthesized and its magnetocaloric properties are evaluated, leading to the experimental discovery of gigantic magnetic entropy change 40.1 J kg$^{-1}$ K$^{-1}$ (0.35 J cm$^{-3}$ K$^{-1}$) for a field change of 5 T in the vicinity of a ferromagnetic second-order phase transition with a Curie temperature of 15 K. This is the highest value reported so far, to our knowledge, near the hydrogen liquefaction temperature thus it is a highly suitable material for hydrogen liquefaction and low temperature magnetic cooling applications. | condensed matter |
Models of stellar structure and evolution can be constrained using accurate measurements of the parameters of eclipsing binary members of open clusters. Multiple binary stars provide the means to tighten the constraints and, in turn, to improve the precision and accuracy of the age estimate of the host cluster. In the previous two papers of this series, we have demonstrated the use of measurements of multiple eclipsing binaries in the old open cluster NGC 6791 to set tighter constraints on the properties of stellar models than was previously possible, thereby improving both the accuracy and precision of the cluster age. We identify and measure the properties of a non-eclipsing cluster member, V56, in NGC 6791 and demonstrate how this provides additional model constraints that support and strengthen our previous findings. We analyse multi-epoch spectra of V56 from FLAMES in conjunction with the existing photometry and measurements of eclipsing binaries in NGC 6791. The parameters of the V56 components are found to be $M_{\rm p}=1.103\pm 0.008 M_{\odot}$ and $M_{\rm s}=0.974\pm 0.007 M_{\odot}$, $R_{\rm p}=1.764\pm0.099 R_{\odot}$ and $R_{\rm s}=1.045\pm0.057 R_{\odot}$, $T_{\rm eff,p}=5447\pm125$ K and $T_{\rm eff,s}=5552\pm125$ K, and surface [Fe/H]=$+0.29\pm0.06$ assuming that they have the same abundance. The derived properties strengthen our previous best estimate of the cluster age of $8.3\pm0.3$ Gyr and the mass of stars on the lower red giant branch (RGB), which is $M_{\rm RGB} = 1.15\pm0.02M_{\odot}$ for NGC 6791. These numbers therefore continue to serve as verification points for other methods of age and mass measurement, such as asteroseismology. | astrophysics
With a significant increase in area throughput, Massive MIMO has become an enabling technology for fifth generation (5G) wireless mobile communication systems. Although prototypes have been built, an openly available dataset of channel impulse responses for verifying assumptions, e.g. regarding channel sparsity, is not yet available. In this paper, we introduce a novel channel sounder architecture, capable of measuring multi-antenna and multi-subcarrier channel state information (CSI) at different frequency bands, antenna geometries and propagation environments. The channel sounder has been verified by evaluation of channel data from first measurements. Such datasets can be used to study various deep-learning (DL) techniques in different applications, e.g., for indoor user positioning in three dimensions, as is done in this paper. Not only do we achieve an accuracy better than 75 cm for line of sight (LoS), comparable to state-of-the-art conventional positioning techniques, but we also obtain similar precision for the more challenging case of non-line of sight (NLoS). Further extensive indoor/outdoor measurement campaigns will provide a more comprehensive open CSI dataset, tagged with positions, for the scientific community to further test various algorithms. | electrical engineering and systems science
We compute the cross section of inclusive dijet photoproduction in ultraperipheral Pb-Pb collisions at the LHC using next-to-leading order perturbative QCD. We demonstrate that our theoretical calculations provide a good description of various kinematic distributions measured by the ATLAS collaboration. We find that the calculated dijet photoproduction cross section is sensitive to nuclear modifications of parton distribution functions (PDFs) at the level of 10 to 20%. Hence, this process can be used to reduce uncertainties in the determination of these nuclear PDFs, whose current magnitude is comparable to the size of the calculated nuclear modifications of the dijet photoproduction cross section. | high energy physics phenomenology |
A near-optimal reconstruction of the radiance of a High Dynamic Range scene from an exposure stack can be obtained by modeling the camera noise distribution. The latent radiance is then estimated using Maximum Likelihood Estimation. But this requires a well-calibrated noise model of the camera, which is difficult to obtain in practice. We show that an unbiased estimation of comparable variance can be obtained with a simpler Poisson noise estimator, which does not require the knowledge of camera-specific noise parameters. We demonstrate this empirically for four different cameras, ranging from a smartphone camera to a full-frame mirrorless camera. Our experimental results are consistent for simulated as well as real images, and across different camera settings. | electrical engineering and systems science |
Higher-order topological phases host robust boundary states at the boundary of the boundary, which can be interpreted from their boundary topology. In this work, considering the interplay between superconductors and magnetic fields to gap the surface states of three-dimensional weak topological insulators, we show that second-order topological superconductors (TSCs) featuring helical or chiral Majorana hinge modes and third-order TSCs featuring Majorana corner modes can be realized. Remarkably, the higher-order TSCs in our models can be attributed to certain of their boundaries, surfaces, or hinges, which naturally behave as first-order TSCs in the DIII or D symmetry class. Correspondingly, these higher-order TSCs can be characterized by boundary first-order topological invariants, such as surface Chern numbers or surface $Z_2$ topological invariants for surface TSCs. Our models can effectively capture the topology of iron-based superconductors with desired inverted band structures and superconducting pairings. | condensed matter
The synchrotron cooling of relativistic electrons is one of the most effective radiation mechanisms in astrophysics. It not only accompanies the process of particle acceleration but also has feedback on the formation of the energy distribution of the parent electrons. The radiative cooling time of electrons decreases with energy as $t_{\rm syn} \propto 1/E$; correspondingly the overall radiation efficiency increases with energy. On the other hand, this effect strictly limits the maximum energy of individual photons. Even in the so-called extreme accelerators, where the acceleration proceeds at the highest possible rate, $t_{\rm acc}^{-1} = eBc/E$, allowed in an ideal magnetohydrodynamic plasma, the synchrotron radiation cannot extend well beyond the characteristic energy determined by the electron mass and the fine-structure constant: $h \nu^{\rm max} \sim m_e c^2/\alpha \sim 70 \rm\,MeV$. In this paper, we propose a model in which the formation of synchrotron radiation takes place in compact magnetic blobs located inside the particle accelerator and develop a formalism for calculations of synchrotron radiation emerging from such systems. We demonstrate that for certain combinations of parameters characterizing the accelerator and the magnetic blobs, the synchrotron radiation can extend beyond this limit by several orders of magnitude. This scenario requires a weak magnetization of the particle accelerator and an efficient conversion of gas internal energy into magnetic energy in sufficiently small blobs. The required size of the blobs is constrained by the magnetic mirroring effect, which can prevent particle penetration into the regions of strong magnetic field under certain conditions. | astrophysics
We compute the three-loop corrections to the quark axial vector form factor in massless QCD, focusing on the pure-singlet contributions where the axial vector current couples to a closed quark loop. Employing the Larin prescription for $\gamma_5$, we discuss the UV renormalization of the form factor. The infrared singularity structure of the resulting singlet axial-vector form factor is explained from infrared factorization, defining a finite remainder function. | high energy physics phenomenology |
We study simple stochastic scenarios, based on birth-and-death Markovian processes, that describe populations with Allee effect, to account for the role of demographic stochasticity. In the mean-field deterministic limit we recover well-known deterministic evolution equations widely employed in population ecology. The mean time to extinction is in general obtained by the Wentzel-Kramers-Brillouin (WKB) approximation for populations with strong and weak Allee effects. An exact solution for the mean time to extinction can be found via a recursive equation for special cases of the stochastic dynamics. We study the conditions for the validity of the WKB solution and analyze the boundary between the weak and strong Allee effect by comparing exact solutions with numerical simulations. | condensed matter
In this work, we report the first account of monolithically fabricated vertical cavity surface emitting lasers (VCSELs) of densely packed, orientation-controlled, atomically flat colloidal quantum wells (CQWs) using a self-assembly method and demonstrate single-mode lasing from a record thin colloidal gain medium with a film thickness of 7 nm under femtosecond optical excitation. We used specially engineered CQWs to demonstrate these hybrid CQW-VCSELs consisting of only a few layers to a single monolayer of CQWs and achieved lasing from these thin gain media by thoroughly modeling and implementing a vertical cavity consisting of distributed Bragg reflectors with an additional dielectric layer for mode tuning. Accurate spectral and spatial alignment of the cavity mode with the CQW films was secured with the help of full electromagnetic computations. While overcoming the long-pending problem of limited electrical conductivity in thicker colloidal films, such ultra-thin colloidal gain media can help enable fully electrically-driven colloidal lasers. | physics
There has been important progress in understanding the process by which a hypersonic dust impact makes an electrical signal on a spacecraft sensor, leading to a fuller understanding of the physics. Zaslavsky (2015) showed that the most important signal comes from the charging of the spacecraft, less from charging of an antenna. The present work is an extension of the work of Zaslavsky. An analytical treatment of the physics of a hypersonic dust impact and the mechanism for generating an electrical signal in a sensor, an antenna, is presented. The treatment is compared with observations from STEREO and Parker Solar Probe. A full treatment of this process by simulations seems beyond present computer capabilities; some parts of the treatment must depend on simulations, but other features can be better understood through analytical treatment. Evidence for a somewhat larger contribution from the antenna part of the signal than in previous publications is presented. The importance of electrostatic forces in forming the exiting plasma cloud is emphasized. Electrostatic forces lead to a rapid expansion of the escaping cloud, so that it expands more rapidly than it escapes, and frequently surrounds one or more antennas. This accounts for the ability of dipole antennas to detect dust impacts. Some progress toward an understanding of the occasional negative charging of an antenna is presented, together with direct evidence of such charging. The use of laboratory measurements of charge to estimate the size of spacecraft impacts is shown to be unreliable without further calibration work. | physics
Planck data robustly exclude the simple $\lambda\phi^4$ scenario for inflation. This is also the case for models of Axion Inflation in which the inflaton field is the radial part of the Peccei-Quinn complex scalar field. In this letter we show that for the KSVZ model it is possible to match the data taking into account radiative corrections to the tree level potential. After writing down the 1-loop Coleman-Weinberg potential, we show that a radiative plateau is easily generated thanks to the fact that the heavy quarks are charged under $SU(3)_c$ in order to solve the strong CP problem. We also give a numerical example for which the inflationary observables are computed and the heavy quarks are predicted to have a mass $m_Q \gtrsim 10^3$ TeV. | high energy physics phenomenology
The Convolutional Neural Network (CNN) is a state-of-the-art architecture for a wide range of deep learning problems, the quintessential example of which is computer vision. CNNs principally employ the convolution operation, which can be accelerated using the Fourier transform. In this paper, we present an optical hardware accelerator that combines silicon photonics and free-space optics, leveraging the use of the optical Fourier transform within several CNN architectures. The hardware presented is a proof of concept, demonstrating that this technology can be applied to artificial intelligence problems with a large efficiency boost with respect to canonical methods. | electrical engineering and systems science |
We consider the associated production of a Higgs boson and a photon in weak boson fusion in the Standard Model (SM) and the Standard Model Effective Field Theory (SMEFT), with the Higgs boson decaying to a pair of bottom quarks. Analysing events in a cut-based analysis and with multivariate techniques, we determine the sensitivity of this process to the bottom-Yukawa coupling in the SM and to possible CP violation mediated by dimension-6 operators in the SMEFT. | high energy physics phenomenology
The recent candidate detection of 20 ppb of phosphine in the middle atmosphere of Venus is so unexpected that it requires an exhaustive search for explanations of its origin. Phosphorus-containing species have not previously been modelled for the Venusian atmosphere, and our work represents the first attempt to do so. We thoroughly explore the potential pathways of formation of phosphine in a Venusian environment, including in the planet's atmosphere, cloud and haze layers, surface, and subsurface. We investigate gas reactions, geochemical reactions, photochemistry, and other non-equilibrium processes. None of these potential phosphine production pathways is sufficient to explain the presence of ppb levels of phosphine on Venus. The presence of PH3, therefore, must be the result of a process not previously considered plausible for Venusian conditions. The process could be unknown geochemistry, photochemistry, or even aerial microbial life, given that on Earth phosphine is exclusively associated with anthropogenic and biological sources. The detection of phosphine adds to the complexity of chemical processes in the Venusian environment and motivates in situ follow-up sampling missions to Venus. | astrophysics
A gas-liquid type of phase transition is found based on particle dynamics on a radius-$R$ circle, in which the coordinate appears as the angle variable of the 1D XY model. Due to the specific way the compact-space radius (volume) appears in the present interpretation of the XY model, the ground state develops a minimum at some critical radius, leading to a multi-valued Gibbs energy similar to that of systems with a first-order phase transition. | condensed matter
Lexicographically minimal string rotation (LMSR) is the problem of finding the minimal rotation of a string in the lexicographical order; it is widely used in equality checking of graphs, polygons, automata and chemical structures. In this paper, we propose an $O(n^{3/4})$ quantum query algorithm for LMSR. In particular, the algorithm has average-case query complexity $O(\sqrt n \log n)$, which is shown to be asymptotically optimal up to a polylogarithmic factor, compared with its $\Omega\left(\sqrt{n/\log n}\right)$ lower bound. Furthermore, we show that our quantum algorithm outperforms any (classical) randomized algorithm in both worst-case and average-case query complexity by proving that every (classical) randomized algorithm for LMSR has worst-case query complexity $\Omega(n)$ and average-case query complexity $\Omega(n/\log n)$. Our quantum algorithm for LMSR is developed in a framework of nested quantum algorithms, based on two new results: (i) an $O(\sqrt{n})$ (optimal) quantum minimum finding on bounded-error quantum oracles; and (ii) its $O\left(\sqrt{n \log(1/\varepsilon)}\right)$ (optimal) error reduction. As a byproduct, we obtain improved upper bounds of independent interest: (i) $O(\sqrt{N})$ (optimal) for constant-depth MIN-MAX trees on $N$ variables; and (ii) $O(\sqrt{n \log m})$ for pattern matching, which removes $\operatorname{polylog}(n)$ factors. | quantum physics
This paper presents a method for modelling interfacial mass transfer in Interface Capturing simulations of two-phase flow with phase change. The model enables mechanistic prediction of the local rate of phase change at the vapour-liquid interface on arbitrary computational meshes and is applicable to realistic cases involving two-phase mixtures with large density ratios. The simulation methodology is based on the Volume Of Fluid (VOF) representation of the flow, whereby an interfacial region in which mass transfer occurs is implicitly identified by a phase indicator, in this case the volume fraction of liquid, which varies from the value pertaining to the "bulk" liquid to the value of the bulk vapour. The novel methodology proposed here has been implemented using the Finite Volume framework and solution methods typical of "industrial" CFD practice. The proposed methodology for capturing mass transfer is applicable to arbitrary meshes without the need to introduce elaborate but artificial smearing of the mass transfer term as is often done in other techniques. The method has been validated via comparison with analytical solutions for planar interface evaporation and bubble growth test cases, and against experimental observations of steam bubble growth. | physics |
Recent modeling of Neutron Star Interior Composition Explorer (NICER) observations of the millisecond pulsar PSR J0030+0451 suggests that the magnetic field of the pulsar is non-dipolar. We construct a magnetic field configuration in which the foot points of the open field lines closely resemble the hotspot configuration from NICER observations. Using this magnetic field as input, we perform force-free simulations of the magnetosphere of PSR J0030+0451, showing the three-dimensional structure of its plasma-filled magnetosphere. Making simple and physically motivated assumptions about the emitting regions, we are able to construct multi-wavelength lightcurves that qualitatively agree with the corresponding observations. The agreement suggests that multipole magnetic structures are the key to modeling this type of pulsar, and can be used to constrain the magnetic inclination angle and the location of radio emission. | astrophysics
Loss of mobility or balance resulting from neural trauma is a critical consideration in public health. Robotic exoskeletons hold great potential for rehabilitation and assisted movement, yet optimal assist-as-needed (AAN) control remains unresolved given pathological variance among patients. We introduce a model predictive control (MPC) architecture for lower limb exoskeletons centred around a fuzzy logic algorithm (FLA) that identifies modes of assistance based on human involvement. The assistance modes are: 1) passive, for a relaxed human and a dominant robot; 2) active-assist, for human cooperation with the task; and 3) safety, in the case of human resistance to the robot. Human torque is estimated from electromyography (EMG) signals prior to joint motions, enabling advanced prediction of torque by the MPC and selection of the assistance mode by the FLA. The controller is demonstrated in hardware with three subjects on a 1-DOF knee exoskeleton tracking a sinusoidal trajectory with the human relaxed, assistive, and resistive. Experimental results show quick and appropriate transfers among the assistance modes and satisfactory assistive performance in each mode. The results illustrate an objective approach to lower limb robotic assistance through on-the-fly transition between modes of movement, providing a new level of human-robot synergy for mobility assistance and rehabilitation. | computer science
The Dirichlet forms related to various infinite systems of interacting Brownian motions are studied. For a given random point field $ \mu $, there exist two natural infinite-volume Dirichlet forms $ (\mathcal{E}^{\mathsf{upr}},\mathcal{D}^{\mathsf{upr}})$ and $(\mathcal{E}^{\mathsf{lwr}},\mathcal{D}^{\mathsf{lwr}})$ on $ L^2(\mathsf{S} ,\mu ) $ describing interacting Brownian motions, each with unlabeled equilibrium state $ \mu $. The former is a decreasing limit of a scheme of such finite-volume Dirichlet forms, and the latter is an increasing limit of another scheme of such finite-volume Dirichlet forms. Furthermore, the latter is an extension of the former. We present a sufficient condition for these two Dirichlet forms to be the same. In the first main theorem (Theorem 3.1), the Markovian semi-group given by $(\mathcal{E}^{\mathsf{lwr}},\mathcal{D}^{\mathsf{lwr}})$ is associated with a natural infinite-dimensional stochastic differential equation (ISDE). In the second main theorem (Theorem 3.2), we prove that these Dirichlet forms coincide with each other by using the uniqueness of weak solutions of the ISDE. We apply Theorem 3.1 to stochastic dynamics arising from random matrix theory such as the sine, Bessel, and Ginibre interacting Brownian motions and interacting Brownian motions with Ruelle's class interaction potentials, and Theorem 3.2 to the sine$ _2$ interacting Brownian motion and interacting Brownian motions with Ruelle's class interaction potentials of $ C_0^3 $-class. | mathematics
Let $X$ be a set and let $S$ be an inverse semigroup of partial bijections of $X$. Thus, an element of $S$ is a bijection between two subsets of $X$, and the set $S$ is required to be closed under the operations of taking inverses and compositions of functions. We define $\Gamma_{S}$ to be the set of self-bijections of $X$ in which each $\gamma \in \Gamma_{S}$ is expressible as a union of finitely many members of $S$. This set is a group with respect to composition. The groups $\Gamma_{S}$ form a class containing numerous widely studied groups, such as Thompson's group $V$, the Nekrashevych-R\"{o}ver groups, Houghton's groups, and the Brin-Thompson groups $nV$, among many others. We offer a unified construction of geometric models for $\Gamma_{S}$ and a general framework for studying the finiteness properties of these groups. | mathematics |
We present time-lapse spectroscopy of a classical nova explosion commencing 9 days after discovery. These data reveal the appearance of a transient feature in Fe II and [O I]. We explore different models for this feature and conclude that it is best explained by a circumbinary disc shock-heated following the classical nova event. Circumbinary discs may play an important role in novae in accounting for the absorption systems known as THEA, the transfer of angular momentum, and the possible triggering of the nova event itself. | astrophysics |
We show that the attraction-repulsion chemotaxis system \begin{equation*} \begin{cases} u_t = \Delta u - \chi\nabla\cdot(u\nabla v_1) + \xi\nabla\cdot(u\nabla v_2)\\ \partial_t v_1 = \Delta v_1 - \beta v_1 + \alpha u \\ \partial_t v_2 = \Delta v_2 - \delta v_2 + \gamma u, \end{cases} \end{equation*} posed with homogeneous Neumann boundary conditions in bounded domains $\Omega=B_R \subset \mathbb{R}^3$, $R>0$, admits radially symmetric solutions which blow-up in finite time if it is attraction-dominated in the sense that $\chi\alpha-\xi\gamma>0$. | mathematics |
We study the generation of single-photon pulses with a tailored temporal shape via nonlocal spectral filtering. A shaped photon is heralded from a time-energy entangled photon pair upon spectral filtering and time-resolved detection of its entangled counterpart. We show that the temporal shape of the heralded photon is defined by the time-inverted impulse response of the spectral filter and does not depend on the heralding instant. Thus one can avoid post-selection of particular heralding instants and achieve a substantially higher heralding rate of shaped photons compared to the generation of photons via nonlocal temporal modulation. Furthermore, the method can be used to generate shaped photons with a coherence time in the ns-$\mu$s range and is particularly suitable for producing photons with the exponentially rising temporal shape required for efficient interfacing with a single quantum emitter in free space. | quantum physics
Phase field theory for fracture is developed at large strains with an emphasis on a correct introduction of surface stresses. This is achieved by multiplying the cohesion and gradient energies by the local ratio of the crack surface areas in the deformed and undeformed configurations, and by expressing the gradient energy in terms of the gradient of the order parameter in the reference configuration. This results in an expression for the surface stresses which is consistent with the sharp surface approach. Namely, the structural part of the Cauchy surface stress represents an isotropic biaxial tension, with the magnitude of a force per unit length equal to the surface energy. The surface stresses are a result of the geometric nonlinearities, even when strains are infinitesimal. They make multiple contributions to the Ginzburg-Landau equation for damage evolution, both in the deformed and undeformed configurations. Important connections between material parameters are obtained using an analytical solution for two separating surfaces, as well as an analysis of the stress-strain curves for homogeneous tension for different degradation and interpolation functions. A complete system of equations is presented in the undeformed and deformed configurations. All the phase field parameters are obtained utilizing the existing first principle simulations for the uniaxial tension of Si crystal in the [100] and [111] directions. | physics
We investigate flows interacting with a square plate and a fractal-shaped, multi-scale plate in the compressible regime for Mach numbers at subsonic and supersonic upstream conditions using large-eddy simulations (LES). We also aim to identify similarities and differences between these interactions and the corresponding interactions in the canonical incompressible flow problem. To account for the geometrical complexity associated with the fractal structures, we apply an immersed boundary method to model the no-slip boundary condition at the solid surfaces, with adequate mesh resolution in the vicinity of the small fractal features. We validate the numerical results through extensive comparisons with experimental wind tunnel measurements at a low Mach number. Similar to the incompressible flow case, we find a break-up of the flow structures by the fractal plate and an increase in turbulent mixing in the downstream direction. As the Mach number increases, we observe noticeable wake meandering and a higher spread rate of the wake in the lateral direction perpendicular to the streamwise-spanwise plane. Although the differences are not significant, we quantify them for the square and the fractal plates using two-point velocity correlations across the Mach number range. The wakes generated by the fractal plate in the compressible regime showed lower turbulent kinetic energy (TKE) and energy spectra levels compared to those of the square case. Moreover, results in terms of the near-field pressure spectra seem to indicate that the fractal plate has the potential to reduce aerodynamic noise. | physics
T-duality of string theory can be extended to the Poisson-Lie T-duality when the target space has a generalized isometry group given by a Drinfel'd double. In M-theory, T-duality is understood as a subgroup of U-duality, but the non-Abelian extension of U-duality is still a mystery. In this paper, we study membrane theory on a curved background with a generalized isometry group given by the $\mathcal{E}_n$ algebra. This provides a natural setup to study non-Abelian U-duality because the $\mathcal{E}_n$ algebra has been proposed as a U-duality extension of the Drinfel'd double. We show that the standard treatment of Abelian U-duality can be extended to the non-Abelian setup. However, a famous issue in Abelian U-duality still exists in the non-Abelian extension. | high energy physics theory |
In this paper, we consider the optimal design of networked estimators to minimize the communication/measurement cost under the networked observability constraint. This problem is known as the minimum-cost networked estimation problem, which is generally claimed to be NP-hard. The main contribution of this work is to provide a polynomial-order solution for this problem under the constraint that the underlying dynamical system is self-damped. Using structural analysis, we subdivide the main problem into two NP-hard subproblems known as (i) optimal sensor selection, and (ii) minimum-cost communication network. For self-damped dynamical systems, we provide a polynomial-order solution for subproblem (i). Further, we show that subproblem (ii) is of polynomial-order complexity if the links in the communication network are bidirectional. We provide an illustrative example to explain the methodologies. | electrical engineering and systems science
In this paper, we show that the equation $\varphi(|x^{m}-y^{m}|)=|x^{n}-y^{n}|$ has no nontrivial solutions in integers $x,y,m,n$ with $xy\neq0, m>0, n>0$ except for the solutions $(x,y,m,n)=((2^{t-1}\pm1),-(2^{t-1}\mp1),2,1), (-(2^{t-1}\pm1),(2^{t-1}\mp1),2,1),$ where $t$ is an integer with $t\geq 2.$ The equation $\varphi(|\frac{x^{m}-y^{m}}{x-y}|)=|\frac{x^{n}-y^{n}}{x-y}|$ has no nontrivial solutions in integers $x,y,m,n$ with $xy\neq0, m>0, n>0$ except for the solutions $(x,y,m,n)=(a\pm1, -a, 1, 2), (a\pm i, -a, 2, 1),$ where $a$ is an integer and $i=1,2.$ | mathematics
We show that quantum absorption refrigerators, which have traditionally been studied as consisting of three qubits, each connected to a thermal reservoir, can also be constructed by using three qubits and two thermal baths, where two of the qubits, including the qubit to be cooled, are connected to a common bath. With a careful choice of the system, bath, and qubit-bath interaction parameters within the Born-Markov and rotating wave approximations, one of the qubits attached to the common bath achieves cooling in the steady state. We observe that the proposed refrigerator may also operate in a parameter regime where no or negligible steady-state cooling is achieved, but there is considerable transient cooling. The steady-state temperature can be lowered significantly by an increase in the strength of the few-body interaction terms existing due to the use of the common bath in the refrigerator setup, proving the importance of the two-bath setup over the conventional three-bath construction. The proposed refrigerator built with three qubits and two baths is shown to provide steady-state cooling both for Markovian interactions between the qubits and canonical bosonic thermal reservoirs, and for a simpler reset model of the qubit-bath interactions. | quantum physics
These lectures provide an introduction to the low-energy dynamics of Nambu-Goldstone fields, associated with some spontaneous (or dynamical) symmetry breaking, using the powerful methods of effective field theory. The generic symmetry properties of these massless modes are described in detail and two very relevant phenomenological applications are worked out: chiral perturbation theory, the low-energy effective theory of QCD, and the (non-linear) electroweak effective theory. The similarities and differences between these two effective theories are emphasized, and their current status is reviewed. Special attention is given to the short-distance dynamical information encoded in the low-energy couplings of the effective Lagrangians. The successful methods developed in QCD could help us to uncover fingerprints of new physics scales from future measurements of the electroweak effective theory couplings. | high energy physics phenomenology |
The varying cortical geometry of the brain creates numerous challenges for its analysis. Recent developments have enabled learning surface data directly across multiple brain surfaces via graph convolutions on cortical data. However, current graph learning algorithms fail when brain surface data are misaligned across subjects, limiting their ability to deal with data from multiple domains. Adversarial training is widely used for domain adaptation to improve segmentation performance across domains. In this paper, adversarial training is exploited to learn surface data across inconsistent graph alignments. This novel approach comprises a segmentator that uses a set of graph convolution layers to enable parcellation directly across brain surfaces in a source domain, and a discriminator that predicts a graph domain from segmentations. More precisely, the proposed adversarial network learns to generalize a parcellation across both source and target domains. We demonstrate an 8% mean improvement in performance over a non-adversarial training strategy applied on multiple target domains extracted from MindBoggle, the largest publicly available manually-labeled brain surface dataset. | electrical engineering and systems science
Different from the conventional Rydberg antiblockade (RAB) regime, which either requires a weak Rydberg-Rydberg interaction (RRI) or compensates the RRI-induced energy shift by introducing dispersive interactions, we show that an RAB regime can be achieved by resonantly driving the transitions between the ground state and the Rydberg state under strong RRI. The Rabi frequencies are small-amplitude, time-dependent harmonic oscillations, which play a critical role in the presented RAB. The proposed unconventional RAB regime is used to construct high-fidelity controlled-Z (CZ) and controlled-not (CNOT) gates in one step. Each atom requires a single external driving field, and atomic addressability is not required for the presented unconventional RAB, which would simplify the experimental complexity and reduce resource consumption. | quantum physics
Deep learning-based single image super-resolution enables very fast and high-visual-quality reconstruction. Recently, an enhanced super-resolution method based on a generative adversarial network (ESRGAN) has achieved excellent performance in terms of both qualitative and quantitative quality of the reconstructed high-resolution image. In this paper, we propose to add one more shortcut between two dense blocks, as well as shortcuts between two convolution layers inside a dense block. This simple strategy of adding more shortcuts in the proposed network enables a faster learning process, as the gradient information can be back-propagated more easily. Based on the improved ESRGAN, a dual reconstruction is proposed to learn different aspects of the super-resolved image for judiciously enhancing the quality of the reconstructed image. In practice, the super-resolution model is pre-trained solely based on pixel distance, followed by fine-tuning the parameters in the model based on adversarial loss and perceptual loss. Finally, we fuse two different models by weighted-summing their parameters to obtain the final super-resolution model. Experimental results demonstrate that the proposed method achieves excellent performance in the real-world image super-resolution challenge. We have also verified that the proposed dual reconstruction further improves the quality of the reconstructed image in terms of both PSNR and SSIM. | electrical engineering and systems science
We investigate the application of the Shapley value to quantifying the contribution of a tuple to a query answer. The Shapley value is a widely known numerical measure in cooperative game theory and in many applications of game theory for assessing the contribution of a player to a coalition game. It has been established already in the 1950s, and is theoretically justified by being the very single wealth-distribution measure that satisfies some natural axioms. While this value has been investigated in several areas, it received little attention in data management. We study this measure in the context of conjunctive and aggregate queries by defining corresponding coalition games. We provide algorithmic and complexity-theoretic results on the computation of Shapley-based contributions to query answers; and for the hard cases we present approximation algorithms. | computer science |
Using a group-theoretical approach, we find a family of four nine-parameter quantum states for the two-spin-1/2 Heisenberg system in an external magnetic field and with multiple components of Dzyaloshinsky-Moriya (DM) and Kaplan-Shekhtman-Entin-Wohlman-Aharony (KSEA) interactions. Exact analytical formulas are derived for the entanglement of formation of the quantum states found. The influence of the DM and KSEA interactions on the behavior of entanglement and on the shape of the disentangled region is studied. A connection between the two-qubit quantum states and the reduced density matrices of many-particle systems is discussed. | quantum physics
The next generation of axion direct detection experiments may rule out or confirm axions as the dominant source of dark matter. We develop a general likelihood-based framework for studying the time-series data at such experiments, with a focus on the role of dark-matter astrophysics, to search for signatures of the QCD axion or axion like particles. We illustrate how in the event of a detection the likelihood framework may be used to extract measures of the local dark matter phase-space distribution, accounting for effects such as annual modulation and gravitational focusing, which is the perturbation to the dark matter phase-space distribution by the gravitational field of the Sun. Moreover, we show how potential dark matter substructure, such as cold dark matter streams or a thick dark disk, could impact the signal. For example, we find that when the bulk dark matter halo is detected at 5$\sigma$ global significance, the unique time-dependent features imprinted by the dark matter component of the Sagittarius stream, even if only a few percent of the local dark matter density, may be detectable at $\sim$2$\sigma$ significance. A co-rotating dark disk, with lag speed $\sim$50 km$/$s, that is $\sim$20$\%$ of the local DM density could dominate the signal, while colder but as-of-yet unknown substructure may be even more important. Our likelihood formalism, and the results derived with it, are generally applicable to any time-series based approach to axion direct detection. | astrophysics |
We assign a relational structure to any finite algebra in a canonical way, using solution sets of equations, and we prove that this relational structure is polymorphism-homogeneous if and only if the algebra itself is polymorphism-homogeneous. We show that polymorphism-homogeneity is also equivalent to the property that algebraic sets (i.e., solution sets of systems of equations) are exactly those sets of tuples that are closed under the centralizer clone of the algebra. Furthermore, we prove that the aforementioned properties hold if and only if the algebra is injective in the category of its finite subpowers. We also consider two additional conditions: a stronger variant for polymorphism-homogeneity and for injectivity, and we describe explicitly the finite semilattices, lattices, Abelian groups and monounary algebras satisfying any one of these three conditions. | mathematics |
By means of simple dynamical experiments we study the combined effect of gravitational and gas dynamics in the evolution of an initially out-of-equilibrium, uniform, rotating massive over-density treated as isolated. The rapid variation of the system's mean-field potential makes the point-like particles (PPs), which interact only via Newtonian gravity, form a quasistationary thick disk dominated by rotational motions, surrounded by far out-of-equilibrium spiral arms. On the other side, the gas component is subjected to compression shocks and radiative cooling, so as to develop a much flatter disk, where rotational motions are coherent and the velocity dispersion is smaller than that of the PPs. Around such a gaseous disk long-lived, but nonstationary, spiral arms form: these are made of gaseous particles that move coherently because they have acquired a specific phase-space correlation during the gravitational collapse phase. Such a phase-space correlation represents a signature of the violent origin of the arms and implies both the motion of matter and the transfer of energy. On larger scales, where the radial velocity component is significantly larger than the rotational one, the gas follows the same out-of-equilibrium spiral arms traced by the PPs. We finally outline the astrophysical and cosmological implications of our results. | astrophysics
To achieve reliable communication with short data blocks, we propose a novel decoding strategy for Kronecker-structured constant modulus signals that provides low bit error ratios (BERs), especially at low energy per bit to noise power spectral density ratios $(E_b/N_0)$. The encoder exploits the fact that any M-PSK constellation can be factorized as Kronecker products of lower- or equal-order PSK constellation sets. A construction of two types of schemes is first derived. For such Kronecker-structured schemes, a conceptually simple decoding algorithm is proposed, referred to as Kronecker-RoD (rank-one detector). The decoder is based on a rank-one approximation of the "tensorized" received data block, has a built-in noise rejection capability, and has a smaller implementation complexity than state-of-the-art detectors. Compared with convolutional codes with hard and soft Viterbi decoding, Kronecker-RoD outperforms the latter in BER performance at the same spectral efficiency. | electrical engineering and systems science
Existing computer vision technologies in artwork recognition focus mainly on instance retrieval or coarse-grained attribute classification. In this work, we present a novel dataset for fine-grained artwork attribute recognition. The images in the dataset are professional photographs of classic artworks from the Metropolitan Museum of Art, and annotations are curated and verified by world-class museum experts. In addition, we also present the iMet Collection 2019 Challenge as part of the FGVC6 workshop. Through the competition, we aim to spur the enthusiasm of the fine-grained visual recognition research community and advance the state-of-the-art in digital curation of museum collections. | computer science |
We present the first complete implementation of a randomness and privacy amplification protocol based on Bell tests. This allows the building of device-independent random number generators that output provably unbiased and private numbers, even when using an uncharacterised quantum device potentially built by an adversary. Our generation rates are linear in the runtime of the quantum device, and the classical randomness post-processing has quasi-linear complexity -- making it efficient on a standard personal laptop. The statistical analysis is tailored for real-world quantum devices, making the protocol usable as a quantum technology today. We then showcase our protocol on the quantum computers from the IBM-Q experience. Although not purposely built for the task, we show that quantum computers can run faithful Bell tests by adding minimal assumptions. At a high level, these amount to trusting that the quantum device was not purposely built to trick the user, but otherwise remains mostly uncharacterised. In this semi-device-independent manner, our protocol generates provably private and unbiased random numbers on today's quantum computers. | quantum physics
This paper presents a model for quasi two-dimensional MHD flows between two planes with small magnetic Reynolds number and a constant transverse magnetic field orthogonal to the planes. A method is presented that allows 3D effects to be taken into account in a 2D equation of motion thanks to a model for the transverse velocity profile. The latter is obtained by using a double perturbation asymptotic expansion, both in the core flow and in the Hartmann layers arising along the planes. A new model is thus built that describes inertial effects in these two regions. Two separate classes of phenomena are thus pointed out: the one related to inertial effects in the Hartmann layer gives a model for recirculating flows, and the other introduces the possibility of a transverse dependence of the velocity profile in the core flow. The "recirculating" velocity profile is then introduced in the transversally averaged equation of motion in order to provide an effective 2D equation of motion. Analytical solutions of this model are obtained for two experimental configurations: isolated vortices driven by a point electrode, and axisymmetric parallel layers occurring in the MATUR (MAgneticTURbulence) experiment. The theory is found to give satisfactory agreement with the experiments, so that it can be concluded that recirculating flows are actually responsible for both the spreading of vortex cores and the excessively dissipative behavior of the axisymmetric side-wall layers. | physics
In radiotherapy, a trade-off exists between computational workload/speed and dose calculation accuracy. Calculation methods like pencil-beam convolution can be much faster than Monte-Carlo methods, but less accurate. The dose difference, mostly caused by inhomogeneities and electronic disequilibrium, is highly correlated with the dose distribution and the underlying anatomical tissue density. We hypothesize that a conversion scheme can be established to boost low-accuracy doses to high accuracy, using intensity information obtained from computed tomography (CT) images. A deep learning-driven framework was developed to test the hypothesis by converting between two commercially-available dose calculation methods: AAA (anisotropic-analytic-algorithm) and AXB (Acuros XB). A hierarchically-dense U-Net model was developed to boost the accuracy of AAA dose towards the AXB level. The network contained multiple layers of varying feature sizes to learn their dose differences, in relationship to CT, both locally and globally. AAA and AXB doses were calculated in pairs for 120 lung radiotherapy plans covering various treatment techniques, beam energies, tumor locations, and dose levels. | physics |
We study cluster adjacency conjectures for amplitudes in maximally supersymmetric Yang-Mills theory. We show that the n-point one-loop NMHV ratio function satisfies Steinmann cluster adjacency. We also show that the one-loop BDS-like normalized NMHV amplitude satisfies cluster adjacency between Yangian invariants and final symbol entries up to 9-points. We present conjectures for cluster adjacency properties of Pl\"ucker coordinates, quadratic cluster variables, and NMHV Yangian invariants that generalize the notion of weak separation. | high energy physics theory |
The disorder-induced magnetoresistance (MR) effect is quadratic at low perpendicular magnetic fields and linear at high fields. This effect is technologically appealing, especially in two-dimensional (2D) materials such as graphene, since it offers potential applications in magnetic sensors with nanoscale spatial resolution. However, it is a great challenge to realize a graphene magnetic sensor based on this effect because of the difficulty in controlling the spatial distribution of disorder and enhancing the MR sensitivity in the single-layer regime. Here, we report a room-temperature colossal MR of up to 5,000% at 9 T in terraced single-layer graphene. By laminating single-layer graphene on a terraced substrate, such as TiO2-terminated SrTiO3, we demonstrate a universal one-order-of-magnitude enhancement in the MR compared to conventional single-layer graphene devices. Strikingly, a colossal MR of >1,000% was also achieved in the terraced graphene even at a high carrier density of ~10^12 cm^-2. Systematic studies of the MR of single-layer graphene on various oxide- and non-oxide-based terraced surfaces demonstrate that the terraced structure is the dominant factor driving the MR enhancement. Our results open a new route for tailoring the physical properties of 2D materials by engineering the strain through a terraced substrate. | condensed matter |
In some athletic races, such as cycling and types of speed skating races, athletes have to complete a relatively long distance at a high speed in the presence of direct opponents. To win such a race, athletes are motivated to hide behind others to suppress energy consumption before a final moment of the race. This situation seems to produce a social dilemma: players want to hide behind others, whereas if a group of players attempts to do so, they may all lose to other players that overtake them. To support that speed skaters are involved in such a social dilemma, we analyzed video footage data for 14 mass start skating races to find that skaters that hid behind others to avoid air resistance for a long time before the final lap tended to win. Furthermore, the finish rank of the skaters in mass start races was independent of the record of the same skaters in time-trial races measured in the absence of direct opponents. The results suggest that how to strategically cope with a skater's dilemma may be a key determinant for winning long-distance and high-speed races with direct opponents. | statistics |
We consider both "bottom-up" and "top-down" approaches to the origin of gauge kinetic mixing. We focus on the possibilities for obtaining kinetic mixings $\epsilon$ which are consistent with experimental constraints and are much smaller than the naive estimates ($\epsilon \sim 10^{-2} - 10^{-1}$) at the one-loop level. In the bottom-up approach, we consider the possible suppression from multi-loop processes. Indeed we argue that kinetic mixing through gravity alone, requires at least six loops and could be as large as $\sim 10^{-13}$. In the top-down approach we consider embedding the Standard Model and a $U(1)_X$ in a single grand-unified gauge group as well as the mixing between Abelian and non-Abelian gauge sectors. | high energy physics phenomenology |
An analytical solution of the impulsive impact of a cylindrical body submerged below a calm water surface is obtained by solving a free boundary problem. The shape of the cross section of the body is arbitrary. The integral hodograph method is applied to derive the complex velocity potential defined in a parameter plane. The boundary-value problem is reduced to a Fredholm integral equation of the first kind in the velocity magnitude on the free surface. The velocity field, the impulsive pressure on the body surface, and the added mass are determined in a wide range of depths of submergence for various cross-sectional shapes, such as a flat plate, a circular cylinder, and a rectangle. | physics |
I prove that the open unit cube can be symplectically embedded into a longer polydisc in such a way that the area of each section satisfies a sharp bound and the complement of each section is path-connected. This answers a variant of a question by F. Schlenk. | mathematics |
In this note we describe a method to calculate the action of a particular Fourier-Mukai transformation on a basis of brane charges on elliptically fibered Calabi-Yau threefolds with and without a section. The Fourier-Mukai kernel is the ideal sheaf of the relative diagonal and for fibrations that admit a section this is essentially the Poincar\'e sheaf. We find that in this case it induces an action of the modular group on the charges of 2-branes. | high energy physics theory |