text (string, lengths 11 to 9.77k) | label (string, lengths 2 to 104)
---|---|
A new mesoscale mechanical model describing elastic interactions in carbon nanotubes (CNTs) and other nanofilaments is proposed. The functional form of the model is based on the enhanced vector model (EVM), which describes the basic types of bond deformation: tension, torsion, bending and shear. The bond stiffnesses are calibrated by adjusting the EVM parameters to reproduce both the deformation energies and the shapes of CNTs observed in full-atomistic simulations. The resulting parameters are compared with those obtained from Euler-Bernoulli beam theory. It is found that beyond a certain critical length of the tested CNT specimen, the stiffness parameters become length-independent and can therefore be used in mesoscale simulations of CNTs of arbitrary length.
|
condensed matter
|
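As a rough illustration of the Euler-Bernoulli comparison mentioned in the abstract above, the following sketch computes beam-theory stiffness constants for a single mesoscale bond of length L, treating the CNT as a thin-walled elastic cylinder. All numerical values (modulus, radius, wall thickness, bond length) and the particular stiffness conventions are illustrative assumptions, not parameters taken from the paper.

```python
import math

# Assumed illustrative parameters for a (10,10)-like CNT treated as a
# thin-walled elastic cylinder (not values from the paper).
E = 1.0e12      # Young's modulus, Pa (~1 TPa, a common literature value)
G = 0.45e12     # shear modulus, Pa
r = 0.678e-9    # mean tube radius, m
t = 0.34e-9     # effective wall thickness, m (graphite interlayer spacing)
L = 10.0e-9     # mesoscale bond (segment) length, m

A = 2.0 * math.pi * r * t                 # cross-sectional area of the thin wall
I = math.pi * r**3 * t                    # area moment of inertia (thin-wall limit)
J = 2.0 * I                               # polar moment of inertia

# Euler-Bernoulli beam-theory stiffnesses of one bond of length L
k_tension = E * A / L                     # N/m, axial stretching
k_bending = E * I / L                     # N*m/rad, bending (one common convention)
k_torsion = G * J / L                     # N*m/rad, twisting
k_shear   = 12.0 * E * I / L**3           # N/m, transverse shear (clamped-guided beam)

for name, k in [("tension", k_tension), ("bending", k_bending),
                ("torsion", k_torsion), ("shear", k_shear)]:
    print(f"{name:8s}: {k:.3e}")
```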
Random numbers are a fundamental ingredient for many applications including simulation, modelling and cryptography. Sound random numbers should be independent and uniformly distributed; for cryptographic applications they should also be unpredictable. We demonstrate a real-time, self-testing, source-independent quantum random number generator (QRNG) that uses squeezed light as its source. We generate secure random numbers by measuring the quadratures of the electromagnetic field without making any assumptions about the source; only the detection device is trusted. We use homodyne detection to alternately measure the conjugate quadratures Q and P of our source. Using the entropic uncertainty relation, measurements of P allow us to estimate a bound on the min-entropy of Q conditioned on any classical or quantum side information that a malicious eavesdropper may hold. This bound gives the minimum number of secure bits we can extract from the Q measurement. We discuss the performance of different estimators for this bound. We operate this QRNG with a squeezed state and compare its performance with a QRNG using thermal states. The real-time bit rate was 8.2 kb/s with the squeezed source and between 5.2 and 7.2 kb/s with the thermal-state source.
|
quantum physics
|
This article proposes a new safety concept: backup plan safety, defined as the ability to complete one of a set of alternative missions if the primary mission is aborted. To incorporate this concept into control problems, we formulate a feasibility maximization problem that adopts additional (virtual) input horizons toward the alternative missions on top of the input horizon toward the primary mission. Cost functions for the primary and alternative missions form multiple objectives, which are evaluated over the multi-horizon inputs. To address the feasibility maximization problem, we develop a multi-horizon multi-objective model predictive path integral control (3M) algorithm. Model predictive path integral control (MPPI) is a sampling-based scheme that allows the proposed algorithm to handle nonlinear dynamic systems and to achieve computational efficiency through parallel computation. Simulations of aerial-vehicle and ground-vehicle control problems demonstrate the new concept of backup plan safety and the performance of the proposed algorithm.
|
electrical engineering and systems science
|
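The 3M algorithm above builds on model predictive path integral (MPPI) control. The sketch below shows the plain single-horizon MPPI update it relies on (not the multi-horizon, multi-objective 3M scheme itself), applied to a toy double-integrator point mass; the dynamics, cost, and hyperparameters are assumptions chosen only for illustration.

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, horizon=20, samples=256, sigma=0.5, lam=1.0):
    """One vanilla MPPI update: sample noisy control sequences, roll them out,
    and average the noise with exponential weights based on trajectory cost."""
    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(samples, horizon, u_nom.shape[1]))
    costs = np.zeros(samples)
    for k in range(samples):
        x = x0.copy()
        for t in range(horizon):
            u = u_nom[t] + noise[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + np.tensordot(w, noise, axes=1)   # cost-weighted noise average

# Toy double-integrator point mass steered toward the origin (illustrative only).
dt = 0.05
dynamics = lambda x, u: np.concatenate([x[:2] + dt * x[2:], x[2:] + dt * u])
cost = lambda x, u: np.sum(x[:2] ** 2) + 0.1 * np.sum(x[2:] ** 2) + 0.01 * np.sum(u ** 2)

x = np.array([2.0, -1.0, 0.0, 0.0])                 # [position, velocity]
u_plan = np.zeros((20, 2))
for step in range(50):                               # receding-horizon loop
    u_plan = mppi_step(x, u_plan, dynamics, cost)
    x = dynamics(x, u_plan[0])                       # apply the first control only
    u_plan = np.roll(u_plan, -1, axis=0); u_plan[-1] = 0.0
print("final position:", x[:2])
```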
Diffusion-weighted MRI (DW-MRI) has recently seen a rising interest in planar, spherical and general B-tensor encodings. Some of these sequences have aided traditional linear encoding in the estimation of white matter microstructural features, generally by making DW-MRI less sensitive to the orientation of axon fascicles in a voxel. However, less is known about their potential to make the signal more sensitive to fascicle orientation, especially in crossing-fascicle voxels. Although planar encoding has been commended for the resemblance of its signal with the voxel's orientation distribution function (ODF), linear encoding remains the near undisputed method of choice for orientation estimation. This paper presents a theoretical framework to gauge the sensitivity of axisymmetric B-tensors to fascicle orientations. A signal peak separation index (SPSI) is proposed, motivated by theoretical considerations on a simple multi-tensor model of fascicle crossing. Theory and simulations confirm the intuition that linear encoding, because it maximizes B-tensor anisotropy, possesses an intrinsic advantage over all other axisymmetric B-tensors. At identical SPSI however, oblate B-tensors yield higher signal and may be more robust to acquisition noise than their prolate counterparts. The proposed index relates the properties of the B-tensor to those of the tissue microstructure in a straightforward way and can thus guide the design of diffusion sequences for improved orientation estimation and tractography.
|
physics
|
In April 2019, Aycock et al. published "Sexual harassment reported by undergraduate female physicists" in Phys. Rev. PER. Their main finding is that 3/4 of undergraduate women in physics in the U.S. report experiencing sexual harassment. Gender minorities also experience high rates of harassment. Many physics departments will want to discuss these findings with the ultimate goal of making our field harassment-free. However, there are major challenges inherent in having these discussions, since many participants will have experienced harassment themselves. We suggest questions for discussion organizers to reflect on, and ideas for how departments can approach these discussions.
|
physics
|
This paper presents new machine learning approaches to approximate the solution of optimal stopping problems. The key idea of these methods is to approximate the continuation value using neural networks in which the hidden layers are generated randomly and only the last layer is trained. Our approaches are applicable to high-dimensional problems where existing approaches become increasingly impractical. In addition, since our approaches can be optimized using a simple linear regression, they are very easy to implement and theoretical guarantees can be provided. Our randomized reinforcement learning approach in Markovian examples, and our randomized recurrent neural network approach in non-Markovian examples, outperform the state of the art and other relevant machine learning approaches.
|
statistics
|
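To make the idea in the optimal-stopping abstract above concrete, here is a minimal sketch in which the continuation value is approximated by a network with a fixed random hidden layer and a last layer fitted by linear regression, used inside standard backward induction for a Bermudan put on simulated paths. The market model and all parameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 50_000, 50, 1.0
S0, K, r, sigma = 100.0, 100.0, 0.05, 0.2
dt = T / n_steps

# Simulate geometric Brownian motion paths
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

payoff = lambda s: np.maximum(K - s, 0.0)

def random_features(x, n_hidden=64, seed=2):
    """Random hidden layer with fixed weights; only the output layer is fitted."""
    g = np.random.default_rng(seed)
    W, b = g.normal(size=(1, n_hidden)), g.normal(size=n_hidden)
    return np.maximum(x[:, None] * W + b, 0.0)      # ReLU random features

# Backward induction: regress the discounted future value on random features of S_t
value = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    value *= np.exp(-r * dt)
    itm = payoff(S[:, t]) > 0                        # regress on in-the-money paths only
    Phi = random_features(S[itm, t] / K)
    beta, *_ = np.linalg.lstsq(Phi, value[itm], rcond=None)  # simple linear regression
    continuation = Phi @ beta
    exercise = payoff(S[itm, t]) > continuation
    value[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t])[exercise]

price = np.exp(-r * dt) * value.mean()
print(f"Bermudan put estimate: {price:.3f}")
```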
We revisit the existence and stability of the critical front in the extended Fisher-KPP equation, refining earlier results of Rottschäfer and Wayne [28] which establish stability of fronts without identifying a precise decay rate. We verify that the front is marginally spectrally stable: while the essential spectrum touches the imaginary axis at the origin, there are no unstable eigenvalues and no eigenvalue (or resonance) embedded in the essential spectrum at the origin. Together with the recent work of Avery and Scheel [3], this implies nonlinear stability of the critical front with the sharp $t^{-3/2}$ decay rate, as previously obtained in the classical Fisher-KPP equation. The main challenges are to regularize the singular perturbation in the extended Fisher-KPP equation and to track eigenvalues near the essential spectrum, and we overcome these difficulties with functional analytic methods.
|
mathematics
|
In recent years, Open Educational Resources (OERs) have been earmarked as critical for mitigating the increasing global demand for education. OERs have high potential to serve learners in many different circumstances, as they are available in a wide range of contexts. However, the generally low quality of OER metadata is one of the main reasons behind the lack of personalised services such as search and recommendation, and as a result the applicability of OERs remains limited. At the same time, OER metadata about the covered topics (subjects) is essential for learners to build effective learning pathways towards their individual learning objectives. Therefore, in this paper, we report on a work-in-progress project proposing an OER topic extraction approach that applies text mining techniques to generate high-quality OER metadata about topic distribution. This is done by: 1) collecting 123 lectures from Coursera and Khan Academy in the area of data science related skills, 2) applying Latent Dirichlet Allocation (LDA) to the collected resources in order to extract the topics related to these skills, and 3) defining the topic distribution covered by a particular OER. To evaluate our model, we used a dataset of educational resources from YouTube and compared our topic distribution results with their manually defined target topics, with the help of 3 experts in the area of data science. As a result, our model extracted topics with an F1-score of 79%.
|
computer science
|
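A minimal sketch of the LDA-based topic-extraction step described above, using scikit-learn on a toy stand-in corpus; the documents, the number of topics, and the preprocessing are assumptions for illustration, not the paper's 123-lecture dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in corpus (the paper uses 123 Coursera/Khan Academy lectures)
docs = [
    "linear regression gradient descent loss function",
    "neural network backpropagation deep learning layers",
    "sql query join table database index",
    "bayesian inference prior posterior likelihood sampling",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)          # per-resource topic distribution

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
print("topic distribution of doc 0:", doc_topics[0].round(2))
```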
We consider the $n$-body problem defined on surfaces of constant positive curvature. For the 5- and 7-body problems in a collinear symmetric configuration, we obtain initial positions that lead to relative equilibria, and we give the values of the masses explicitly in terms of the initial positions. For positions for which relative equilibria exist, there are infinitely many values of the masses that generate such solutions. For both the 5- and 7-body problems, the set of parameters (masses and positions) leading to relative equilibria has positive Lebesgue measure.
|
mathematics
|
We study associated Higgs production with a photon at electron-positron colliders, $e^+e^-\to h\gamma$, in various extended Higgs models, such as the inert doublet model (IDM), the inert triplet model (ITM) and the two Higgs doublet model (THDM). The cross section in the standard model (SM) is maximal around $\sqrt{s}=$250 GeV, and we show how, and by how much, new physics can enhance or reduce the production rate. We also discuss the correlation with the $h\to\gamma\gamma$ and $h\to Z\gamma$ decay rates. We find that, with a sizable coupling to the SM-like Higgs boson, charged scalars can give considerable contributions to both the production and the decay if their masses are around 100 GeV. Under the theoretical constraints from vacuum stability and perturbative unitarity, as well as the current constraints from the Higgs measurements at the LHC, the production rate can be enhanced over the SM prediction by at most a factor of two in the IDM. In the ITM, we additionally find a particular parameter region where the $h\gamma$ production rate increases significantly, by a factor of about six to eight, while the $h\to\gamma\gamma$ decay still remains as in the SM. In the THDM, possible deviations from the SM prediction are minor in the viable parameter space.
|
high energy physics phenomenology
|
Neutrinos produced during a supernova explosion induce reactions on abundant nuclei in the outer stellar shells and in this way contribute to the synthesis of the elements in the Universe. This neutrino nucleosynthesis process has been identified as an important contributor to the origin of $^7$Li, $^{11}$B, $^{19}$F, $^{138}$La, and $^{180}$Ta, as well as of the long-lived radionuclides $^{22}$Na and $^{26}$Al, which are both key isotopes for $\gamma$-ray astronomy. This manuscript summarizes the recent progress achieved in simulations of neutrino nucleosynthesis.
|
astrophysics
|
It is possible, and for several reasons attractive, to explain a collection of recent anomalies involving $b\rightarrow s\mu\mu$ processes with a $Z^{\prime}$ gauge boson coupled only to the third family in the weak eigenbasis. From this premise, requiring cancellation of all gauge anomalies (including mixed and gravitational anomalies) fixes a unique charge assignment for the third family Standard Model fermions, which is simply proportional to hypercharge. After a brief discussion of some general features of anomaly cancellation in $Z^\prime$ theories, we discuss the phenomenology of such a `Third Family Hypercharge Model', which is subject to a trio of important constraints: (i) $B_s-\bar{B}_s$ mixing, (ii) lepton universality of the $Z$ boson couplings, and (iii) constraints from direct searches for the $Z^\prime$ boson at the LHC. Finally, in gauging third family hypercharge, this model forbids all Yukawa couplings (at the renormalisable level) save those of the third family, leading to a possible explanation of the heaviness of the third family.
|
high energy physics phenomenology
|
Leptogenesis is a viable alternative mechanism to account for the baryon asymmetry of the universe. In this context, we consider a scenario in which the standard model is extended with an $S_3$ and a $Z_2$ symmetry, together with two scalar triplets, two scalar doublets and three right-handed neutrinos. The presence of scalar triplets and right-handed neutrinos in a combined type-I and type-II seesaw framework provides an alternative leptogenesis option and can help us understand the matter-antimatter asymmetry with a simple $S_3$ symmetry. We discuss neutrino phenomenology and leptogenesis at both a high energy scale ($O(10^{10})$ GeV) and a low energy scale ($O(2)$ TeV) by constraining the Yukawa couplings. Moreover, we consider the constraints on the model parameters from neutrino oscillation data and leptogenesis to explain rare lepton-flavor-violating decays and the muon g-2 anomaly.
|
high energy physics phenomenology
|
We study the Hawking flux from a black hole with soft hair using the anomaly cancellation method proposed by Robinson and Wilczek. Unlike earlier studies, which consider black holes with linear supertranslation hair, our study takes the supertranslation hair into account to quadratic order, which yields an angle-dependent horizon. As a result, highly nontrivial kinetic mixings appear among the spherical Kaluza-Klein modes of the (1+1)d near-horizon reduced theory, which obscures the traditional derivation of the Hawking flux. However, after a series of field redefinitions, we can disentangle the mode mixings into canonical normal modes, although the reduced metrics for these normal modes are mode-dependent. Despite this, the resulting Hawking flux turns out to be mode-independent and remains the same as that of the Schwarzschild black hole. Thus, one cannot distinguish black holes with nonlinear supertranslation hair from the Schwarzschild black hole by examining the Hawking flux, so the nonlinear soft hair can be thought of as microstates.
|
high energy physics theory
|
In this paper an approximation of the image of the closed ball of the space $L_p$ $(p>1)$ centered at the origin with radius $r$ under a Hilbert-Schmidt integral operator $F(\cdot):L_p\rightarrow L_q$ $\displaystyle \left(\frac{1}{p}+\frac{1}{q}=1\right)$ is presented. An error estimate for the given approximation is obtained.
|
mathematics
|
This paper investigates waveform estimation (tracking) of a time-varying force in a two-level optomechanical system with backaction noise by means of Kalman filtering. The backaction and measurement noises are assumed to be Gaussian and white. By discretizing the continuous-time optomechanical system, the state of the resulting system can be estimated by unbiased minimum-variance Kalman filtering, and an estimator of the time-varying force is obtained, provided that the external force is also given in discrete time. Furthermore, the accuracy of the force estimation, described by the mean squared error, is derived theoretically. Finally, the feasibility of the proposed algorithm is illustrated by comparing the theoretical accuracy with the numerical accuracy in a numerical example.
|
quantum physics
|
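As a rough illustration of the discrete-time Kalman filtering idea above, the sketch below appends the unknown force to the state as a random walk and runs a standard Kalman filter on a toy one-dimensional oscillator; this is a generic stand-in, not the specific unbiased minimum-variance estimator analyzed in the paper, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 2000
omega, gamma = 2.0 * np.pi, 0.5            # toy oscillator frequency and damping

# State x = [position, velocity, force]; force modeled as a slow random walk.
# A is an Euler discretization of xdd = -omega^2 x - gamma xd + F.
A = np.array([[1.0, dt, 0.0],
              [-omega**2 * dt, 1.0 - gamma * dt, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])            # only the position is measured
Q = np.diag([1e-8, 1e-6, 1e-4])            # process (backaction-like) noise covariance
R = np.array([[1e-4]])                     # measurement noise covariance

# Simulate the "true" system driven by a sinusoidal force
x_true, xs, ys = np.zeros(3), [], []
for k in range(n):
    x_true[2] = np.sin(0.5 * 2 * np.pi * k * dt)     # true time-varying force
    x_true[:2] = A[:2] @ x_true + rng.multivariate_normal(np.zeros(2), Q[:2, :2])
    xs.append(x_true.copy())
    ys.append(H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0])))

# Standard Kalman filter
x_hat, P, f_est = np.zeros(3), np.eye(3), []
for y in ys:
    x_hat, P = A @ x_hat, A @ P @ A.T + Q            # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x_hat = x_hat + (K @ (y - H @ x_hat)).ravel()    # update
    P = (np.eye(3) - K @ H) @ P
    f_est.append(x_hat[2])

err = np.mean((np.array(f_est)[500:] - np.array(xs)[500:, 2]) ** 2)
print(f"mean squared error of force estimate: {err:.3e}")
```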
Our ability to understand and tailor metal-organic interfaces is essential for functionalizing organic complexes for next-generation electronic and spintronic devices. For magnetic data storage applications, metal-carrying organic molecules, so-called single-molecule magnets (SMMs), are of particular interest, as they offer the possibility of storing information at the molecular scale. In this work, we focus on the adsorption properties of the prototypical SMM Sc3N@C80 grown as a monolayer film on the Ag(111) substrate. We provide clear evidence of a pyramidal distortion of the otherwise planar Sc3N core inside the carbon cage upon adsorption on the Ag(111) surface. This adsorption-induced structural change of the Sc3N@C80 molecule can be correlated with a charge transfer from the substrate into the lowest unoccupied molecular orbital of Sc3N@C80, which significantly alters the charge density of the fullerene core. Our comprehensive characterization of the Sc3N@C80-Ag(111) interface hence reveals an indirect coupling mechanism between the Sc3N core of the fullerene molecule and the noble metal surface, mediated via an interfacial charge transfer. Our work shows that such an indirect coupling between the encapsulated metal centers of SMMs and metal surfaces can strongly affect the geometric structure of the metallic centers and thereby potentially also alter the magnetic properties of SMMs on surfaces.
|
condensed matter
|
We consider graphical models based on a recursive system of linear structural equations. This implies that there is an ordering, $\sigma$, of the variables such that each observed variable $Y_v$ is a linear function of a variable specific error term and the other observed variables $Y_u$ with $\sigma(u) < \sigma (v)$. The causal relationships, i.e., which other variables the linear functions depend on, can be described using a directed graph. It has been previously shown that when the variable specific error terms are non-Gaussian, the exact causal graph, as opposed to a Markov equivalence class, can be consistently estimated from observational data. We propose an algorithm that yields consistent estimates of the graph also in high-dimensional settings in which the number of variables may grow at a faster rate than the number of observations, but in which the underlying causal structure features suitable sparsity; specifically, the maximum in-degree of the graph is controlled. Our theoretical analysis is couched in the setting of log-concave error distributions.
|
statistics
|
In this paper we define a new object, the momentum amplituhedron, which is the long sought-after positive geometry for tree-level scattering amplitudes in $\mathcal{N}=4$ super Yang-Mills theory in spinor helicity space. Inspired by the construction of the ordinary amplituhedron, we introduce bosonized spinor helicity variables to represent our external kinematical data, and restrict them to a particular positive region. The momentum amplituhedron $\mathcal{M}_{n,k}$ is then the image of the positive Grassmannian via a map determined by such kinematics. The scattering amplitudes are extracted from the canonical form with logarithmic singularities on the boundaries of this geometry.
|
high energy physics theory
|
Within Music Information Retrieval (MIR), prominent tasks -- including pitch-tracking, source-separation, super-resolution, and synthesis -- typically call for specialised methods, despite their similarities. Conditional Generative Adversarial Networks (cGANs) have been shown to be highly versatile in learning general image-to-image translations, but have not yet been adapted across MIR. In this work, we present an end-to-end supervisable architecture to perform all aforementioned audio tasks, consisting of a WaveNet synthesiser conditioned on the output of a jointly-trained cGAN spectrogram translator. In doing so, we demonstrate the potential of such flexible techniques to unify MIR tasks, promote efficient transfer learning, and converge research to the improvement of powerful, general methods. Finally, to the best of our knowledge, we present the first application of GANs to guided instrument synthesis.
|
computer science
|
This paper reviews recent progress in the synthesis of near-infrared (NIR) lead chalcogenide (PbX; PbX = PbS, PbSe, PbTe) quantum dots (QDs) and their applications in NIR QD-based light-emitting diodes (NIR-QLEDs). It summarizes strategies for synthesizing high-efficiency PbX QDs and for realizing high-performance PbX-based NIR-QLEDs.
|
physics
|
A high-order, well-balanced, positivity-preserving quasi-Lagrange moving mesh DG method is presented for the shallow water equations with non-flat bottom topography. The well-balance property is crucial to the ability of a scheme to simulate perturbation waves over the lake-at-rest steady state, such as waves on a lake or tsunami waves in the deep ocean. The method combines a quasi-Lagrange moving mesh DG method, a hydrostatic reconstruction technique, and a change of unknown variables. The strategies in the use of slope limiting, positivity-preservation limiting, and the change of variables to ensure the well-balance and positivity-preserving properties are discussed. Compared to rezoning-type methods, the current method treats mesh movement continuously in time and has the advantages that it does not need to interpolate flow variables from the old mesh to the new one and places no constraint on the choice of an update scheme for the bottom topography on the new mesh. A selection of one- and two-dimensional examples is presented to demonstrate the well-balance property, positivity preservation, and high-order accuracy of the method, as well as its ability to adapt the mesh according to features in the flow and bottom topography.
|
mathematics
|
Alpha-Fe2O3 powders have been prepared by a reduction reaction method with NaBH4 as the reducing agent, followed by a conventional sintering process. The XRD pattern with Rietveld refinement reveals that the prepared Fe2O3 has the corundum structure (hematite). The VSM loop exhibits obvious room-temperature weak ferromagnetism; the pinched hysteresis loop may be introduced by the shape anisotropy effect. The simultaneous ferroelectric behavior of α-Fe2O3, with "five-fold" ferroelectric hysteresis loops, indicates that Fe2O3 in this structure can be regarded as a novel multiferroic material.
|
condensed matter
|
In this paper, we study the application of the recently proposed soft gluon factorization (SGF) to exclusive quarkonium production or decay. We find that the nonrelativistic QCD factorization framework involves too many nonperturbative parameters. Thanks to the factorization of kinematical physics from dynamical physics, the SGF significantly reduces the number of nonperturbative parameters and can therefore improve our predictive power for exclusive quarkonium production or decay. Applying the SGF to $\eta_c+\gamma$ production at B-factories, we find that our result is the closest to data among all theoretical calculations.
|
high energy physics phenomenology
|
We explore task-free continual learning (CL), in which a model is trained to avoid catastrophic forgetting, but without being provided any explicit task boundaries or identities. However, since CL models are continually updated, the utility of stored seen examples may diminish over time. Here, we propose Gradient based Memory EDiting (GMED), a framework for editing stored examples in continuous input space via gradient updates, in order to create a wide range of more ``challenging" examples for replay. GMED-edited examples remain similar to their unedited forms, but can yield increased loss in the upcoming model updates, thereby making the future replays more effective in overcoming catastrophic forgetting. By construction, GMED can be seamlessly applied in conjunction with other memory-based CL algorithms to bring further improvement. Experiments on six datasets validate that GMED is effective, and our single best method significantly outperforms existing approaches on three datasets. Code and data can be found at https://github.com/INK-USC/GMED.
|
statistics
|
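A condensed sketch of the editing step described in the GMED abstract above: a stored example is nudged along the gradient of the current loss with respect to the input, making it more "challenging" before replay. The model, step size, and clamping range here are illustrative assumptions; the authors' implementation is in the repository linked in the abstract.

```python
import torch
import torch.nn.functional as F

def gmed_edit(model, x_mem, y_mem, alpha=0.1):
    """Edit replayed examples in input space by one gradient-ascent step on the loss."""
    x_edit = x_mem.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_edit), y_mem)
    loss.backward()
    with torch.no_grad():
        x_edit = x_edit + alpha * x_edit.grad        # increase loss -> more "challenging"
        x_edit = x_edit.clamp(0.0, 1.0)              # keep the edit in a valid input range
    return x_edit.detach()

# Toy usage with a hypothetical linear classifier on 784-dim inputs
model = torch.nn.Linear(784, 10)
x_mem = torch.rand(8, 784)                            # batch drawn from the replay memory
y_mem = torch.randint(0, 10, (8,))
x_replay = gmed_edit(model, x_mem, y_mem)

# The edited examples are then used in the replay update of the model
opt = torch.optim.SGD(model.parameters(), lr=0.05)
opt.zero_grad()
F.cross_entropy(model(x_replay), y_mem).backward()
opt.step()
```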
We have developed a software library that simulates noisy quantum logic circuits. We represent quantum states by their density matrices, and incorporate possible errors in initialisation, logic gates, memory and measurement using simple models. Our quantum simulator is implemented as a new backend on IBM's open-source Qiskit platform. In this document, we provide its description, and illustrate it with some simple examples.
|
quantum physics
|
Measuring thermoremanent magnetization (TRM) decays on a single-crystal CuMn(6$\%$) spin glass sample, we have systematically mapped the rapid decrease of the characteristic timescale $t_w^{eff}$ near $T_g$. Using $t_w^{eff}$ to determine the length scale of the growth of correlations during the waiting time, $\xi_{TRM}$ (observed in both numerical studies and experiment), we observe growth of $\xi_{TRM}$ in the spin glass phase followed by a rapid reduction very close to $T_g$. We interpret this reduction in $\xi_{TRM}$, for all waiting times, as being governed by the critical correlation length scale $\xi_{crit}=a(T-T_c)^{-\nu}$.
|
condensed matter
|
A general problem in quantum mechanics is the reconstruction of eigenstate wave functions from measured data. In the case of molecular aggregates, information about excitonic eigenstates is vitally important to understand their optical and transport properties. Here we show that from spatially resolved near field spectra it is possible to reconstruct the underlying delocalized aggregate eigenfunctions. Although this high-dimensional nonlinear problem defies standard numerical or analytical approaches, we have found that it can be solved using a convolutional neural network. For both one-dimensional and two-dimensional aggregates we find that the reconstruction is robust to various types of disorder and noise.
|
quantum physics
|
The design and conduct of platform trials have become increasingly popular for drug development programs, attracting interest from statisticians, clinicians and regulatory agencies. Many statistical questions related to designing platform trials - such as the impact of decision rules, sharing of information across cohorts, and allocation ratios on operating characteristics and error rates - remain unanswered. In many platform trials, the definition of error rates is not straightforward as classical error rate concepts are not applicable. In particular, the strict control of the family-wise Type I error rate often seems unreasonably rigid. For an open-entry, exploratory platform trial design comparing combination therapies to the respective monotherapies and standard-of-care, we define a set of error rates and operating characteristics and then use these to compare a set of design parameters under a range of simulation assumptions. When setting up the simulations, we aimed for realistic trial trajectories, e.g. in case one compound is found to be superior to standard-of-care, it could become the new standard-of-care in future cohorts. Our results indicate that the method of data sharing, exact specification of decision rules and quality of the biomarker used to make interim decisions all strongly contribute to the operating characteristics of the platform trial. Together with the potential flexibility and complexity of a platform trial, which also impact the achieved operating characteristics, this implies that utmost care needs to be given to evaluation of different assumptions and design parameters at the design stage.
|
statistics
|
We combine the method of exchangeable pairs with Stein's method for functional approximation. As a result, we give a general linearity condition under which an abstract Gaussian approximation theorem for stochastic processes holds. We apply this approach to estimate the distance of a sum of random variables, chosen from an array according to a random permutation, from a Gaussian mixture process. This result lets us prove a functional combinatorial central limit theorem. We also consider a graph-valued process and bound the speed of convergence of the distribution of its rescaled edge counts to a continuous Gaussian process.
|
mathematics
|
We compare the star forming main sequence (SFMS) -- both integrated and resolved on 1kpc scales -- between the high-resolution TNG50 simulation of IllustrisTNG and observations from the 3D-HST slitless spectroscopic survey at z~1. Contrasting integrated star formation rates (SFRs), we find that the slope and normalization of the star-forming main sequence in TNG50 are quantitatively consistent with values derived by fitting observations from 3D-HST with the Prospector Bayesian inference framework. The previous offsets of 0.2-1dex between observed and simulated main sequence normalizations are resolved when using the updated masses and SFRs from Prospector. The scatter is generically smaller in TNG50 than in 3D-HST for more massive galaxies with M_*>10^10Msun, even after accounting for observational uncertainties. When comparing resolved star formation, we also find good agreement between TNG50 and 3D-HST: average specific star formation rate (sSFR) radial profiles of galaxies at all masses and radii below, on, and above the SFMS are similar in both normalization and shape. Most noteworthy, massive galaxies with M_*>10^10.5Msun, which have fallen below the SFMS due to ongoing quenching, exhibit a clear central SFR suppression, in both TNG50 and 3D-HST. In TNG this inside-out quenching is due to the supermassive black hole (SMBH) feedback model operating at low accretion rates. In contrast, the original Illustris simulation, without this same physical SMBH mechanism, does not reproduce the central SFR profile suppression seen in data. The observed sSFR profiles provide support for the TNG quenching mechanism and how it affects gas on kiloparsec scales in the centers of galaxies.
|
astrophysics
|
Recent water line observations toward several low-mass protostars suggest low water gas fractional abundances in the inner warm envelopes. Water destruction by X-rays has been proposed to influence the water abundances in these regions, but the detailed chemistry, including the nature of alternative oxygen carriers, is not yet understood. In this study, we aim to understand the impact of X-rays on the composition of low-mass protostellar envelopes, focusing specifically on water and related oxygen bearing species. We compute the chemical composition of two low-mass protostellar envelopes using a 1D gas-grain chemical reaction network, under various X-ray field strengths. According to our calculations, outside the water snowline, the water gas abundance increases with $L_{\mathrm{X}}$. Inside the water snowline, water maintains a high abundance of $\sim 10^{-4}$ for small $L_{\mathrm{X}}$, with water and CO being the dominant oxygen carriers. For large $L_{\mathrm{X}}$, the water gas abundances significantly decrease just inside the water snowline (down to $\sim10^{-8}-10^{-7}$) and in the innermost regions ($\sim10^{-6}$). For these cases, the O$_{2}$ and O gas abundances reach $\sim 10^{-4}$ within the water snowline, and they become the dominant oxygen carriers. The HCO$^{+}$ and CH$_{3}$OH abundances, which have been used as tracers of the water snowline, significantly increase/decrease within the water snowline, respectively, as the X-ray fluxes become larger. The abundances of some other dominant molecules, such as CO$_{2}$, OH, CH$_{4}$, HCN, and NH$_{3}$, are also affected by strong X-ray fields, especially within their own snowlines. These X-ray effects are larger in lower density envelope models. Future observations of water and related molecules (using e.g., ALMA and ngVLA) will access the regions around protostars where such X-ray induced chemistry is effective.
|
astrophysics
|
Nontrivial topology in bulk matter has been linked with the existence of topologically protected interfacial states. We show that a gaseous plasmon polariton (GPP), an electromagnetic surface wave existing at the boundary of a magnetized plasma and vacuum, has a topological origin arising from the nontrivial topology of the magnetized plasma. Because a gaseous plasma cannot sustain a sharp interface with discontinuous density, one must consider a gradual density falloff with a scale length comparable to or longer than the wavelength of the wave. We show that the GPP may be found within a gapped spectrum in present-day laboratory devices, suggesting that platforms are currently available for experimental investigation of topological wave physics in plasmas.
|
physics
|
Statistical modeling of rainfall is an important challenge in meteorology, particularly from the perspective of rainfed agriculture, where a proper assessment of the future availability of rainwater is necessary. The probability models most commonly used for this purpose are the exponential, gamma, Weibull and lognormal distributions, whose unknown parameters are routinely estimated using the maximum likelihood estimator (MLE). However, outliers or extreme observations are quite common in rainfall data, and the MLE, being highly sensitive to them, often leads to spurious inference. In this paper, we discuss a robust parameter estimation approach based on the minimum density power divergence estimator (MDPDE), which provides a class of estimators indexed by a tuning parameter and includes the MLE as a special case. The tuning parameter controls the trade-off between efficiency and robustness of the resulting inference; we also discuss a procedure for data-driven optimal selection of this tuning parameter, as well as robust selection of an appropriate model that provides the best fit to a specific rainfall dataset. We fit the above four parametric models to the areally-weighted monthly rainfall data from the 36 meteorological subdivisions of India for the years 1951-2014 and compare the fits based on the MLE and the proposed optimum MDPDE; the superior performance of the MDPDE-based approach is illustrated for several cases. For all month-subdivision combinations, the best-fit models and the estimated median rainfall amounts are provided. Software (written in R) for calculating the MDPDE and its standard error, optimal tuning parameter selection, and model selection is also provided.
|
statistics
|
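A minimal sketch of minimum density power divergence estimation as described above, fitting an exponential model to toy rainfall-like data containing one gross outlier; the exponential model, the data, and the tuning-parameter values are illustrative assumptions (the paper fits exponential, gamma, Weibull and lognormal models, with software in R).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
data = np.concatenate([rng.exponential(scale=100.0, size=200), [5000.0]])  # one gross outlier

def dpd_objective(lam, x, alpha):
    """Density power divergence objective (Basu et al., 1998) for an Exp(rate=lam) model.
    As alpha -> 0 the minimizer approaches the MLE."""
    f = lam * np.exp(-lam * x)                      # model density at the data points
    integral = lam**alpha / (1.0 + alpha)           # closed form of the integral of f^(1+alpha)
    return integral - (1.0 + 1.0 / alpha) * np.mean(f**alpha)

def mdpde_mean(x, alpha):
    res = minimize_scalar(dpd_objective, bounds=(1e-6, 1.0), args=(x, alpha),
                          method="bounded")
    return 1.0 / res.x                              # report the mean parameter 1/lambda

print("MLE mean (sensitive to the outlier):", round(data.mean(), 1))
for a in (0.1, 0.3, 0.5):
    print(f"MDPDE mean, alpha={a}:", round(mdpde_mean(data, a), 1))
```

Larger values of the tuning parameter downweight the outlier's contribution through the $f^\alpha$ term, pulling the estimate back toward the bulk of the data, which is the efficiency-robustness trade-off the abstract refers to.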
Using detrended fluctuation analysis (DFA) and rescaled range (R/S) analysis, we investigate the scaling properties of EUV intensity fluctuations of low-latitude coronal holes (CHs) and neighboring quiet-Sun (QS) regions in signals obtained with the Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) instrument. Contemporaneous line-of-sight SDO/Helioseismic and Magnetic Imager (HMI) magnetic fields provide context for the physical environment. We find that the intensity fluctuations in the time series of EUV images exhibit, at each spatial point, a scaling symmetry over the range from $\sim 20$ min to $\sim 1$ hour. We are thus able to calculate a generalized Hurst exponent and produce image maps, not of physical quantities like intensity or temperature, but of a single dynamical parameter that sums up the statistical nature of the intensity fluctuations at each pixel. In quiet-Sun regions and in coronal holes with magnetic bipoles, the scaling exponent ($1.0 < \alpha \leq 1.5$) corresponds to anti-correlated turbulent-like processes. In coronal holes, and in quiet-Sun regions primarily associated with (open) magnetic field of dominant polarity, the generalized exponent ($0.5 < \alpha < 1$) corresponds to positively-correlated (persistent) processes. We identify a tendency for $\alpha \sim 1$ near coronal hole boundaries and in other regions in which open and closed magnetic fields are in proximity. This is a signature of an underlying $1/f$-type process that is characteristic of self-organized criticality and shot-noise models.
|
astrophysics
|
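A compact sketch of the detrended fluctuation analysis used in the abstract above, estimating the scaling exponent alpha of a one-dimensional signal; the window sizes and linear (DFA-1) detrending are common defaults assumed here, not necessarily the paper's exact pipeline.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Standard DFA-1: integrate the signal, detrend linearly in windows of size s,
    and fit the slope of log F(s) versus log s."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated (profile) signal
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 15).astype(int))
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        msq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)             # local linear trend
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))              # fluctuation function F(s)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(5)
print("white noise alpha ~", round(dfa_exponent(rng.standard_normal(4096)), 2))
print("random walk alpha ~", round(dfa_exponent(np.cumsum(rng.standard_normal(4096))), 2))
```

White noise gives an exponent near 0.5 and its running sum gives one near 1.5, two reference values that bracket the ranges reported in the abstract.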
Wasserstein distributionally robust optimization (DRO) aims to find robust and generalizable solutions by hedging against data perturbations in Wasserstein distance. Despite its recent empirical success in operations research and machine learning, existing performance guarantees for generic loss functions are either overly conservative due to the curse of dimensionality, or plausible only in large sample asymptotics. In this paper, we develop a non-asymptotic framework for analyzing the out-of-sample performance for Wasserstein robust learning and the generalization bound for its related Lipschitz and gradient regularization problems. To the best of our knowledge, this gives the first finite-sample guarantee for generic Wasserstein DRO problems without suffering from the curse of dimensionality. Our results highlight the bias-variation trade-off intrinsic in the Wasserstein DRO, which balances between the empirical mean of the loss and the variation of the loss, measured by the Lipschitz norm or the gradient norm of the loss. Our analysis is based on two novel methodological developments that are of independent interest: 1) a new concentration inequality controlling the decay rate of large deviation probabilities by the variation of the loss and, 2) a localized Rademacher complexity theory based on the variation of the loss.
|
computer science
|
We investigate the degradation of quantum entanglement in the Schwarzschild-de Sitter black hole spacetime by studying the mutual information and the logarithmic negativity for maximally entangled, bipartite initial states of massless minimal scalar fields. This spacetime is endowed with a black hole horizon as well as a cosmological event horizon, giving rise to particle creation at two different temperatures. We consider two independent descriptions of thermodynamics and particle creation in this background. The first involves thermal equilibrium of an observer with the individual Hawking temperature of either of the horizons. We show that, as for asymptotically flat/anti-de Sitter black holes, the entanglement or correlation degrades with increasing Hawking temperature. The second treats both horizons together to define a total entropy and an effective equilibrium temperature. We present a field theoretic derivation of this effective temperature and argue that, unlike in the usual cases, the particle creation here does not occur in causally disconnected spacetime wedges but in a single region. Using these states, we then show that in this scenario the entanglement never degrades but instead increases with increasing black hole temperature, and that this holds no matter how hot the black hole becomes or how small the cosmological constant is. We argue that this phenomenon can have no analogue in asymptotically flat/anti-de Sitter black hole spacetimes.
|
high energy physics theory
|
We present a complete fabrication study of an efficiently-coupled microring optical circuit tailored for cavity quantum electrodynamics (QED) with trapped atoms. The microring structures are fabricated on a transparent membrane with high in-vacuum fiber edge-coupling efficiency in a broad frequency band. In addition, a bus waveguide pulley coupler realizes critical coupling to the microrings at both of the cesium D-line frequencies, while high coupling efficiency is achieved at the cesium 'magic' wavelengths for creating a lattice of two-color evanescent field traps above a microring. The presented platform holds promises for realizing a robust atom-nanophotonics hybrid quantum device.
|
quantum physics
|
Let $H$ be a Hopf algebra and $\mathcal{LR}(H)$ the category of Yetter-Drinfel'd-Long bimodules over $H$. We first give necessary and sufficient conditions for $\mathcal{LR}(H)$ to be symmetric and pseudosymmetric, respectively. We then introduce the definition of the $u$-condition in $\mathcal{LR}(H)$ and discuss the relation between the $u$-condition and the symmetry of $\mathcal{LR}(H)$. Finally, we show that $\mathcal{LR}(H)$ over a triangular (resp. cotriangular) Hopf algebra contains a rich symmetric subcategory.
|
mathematics
|
We report trapping and propagation of a photonic Dirac mode in a helically twisted hollow-core photonic crystal fiber (HC-PCF), in which the light trapped in the hollow (air) defect can preserve its orbital angular momentum (OAM). We show that a photonic Dirac point can emerge even in a twisted system for a suitable choice of curvilinear coordinates, and that the waveguide defect modes defined in the new basis can preserve the associated OAM during axial translation. The effects of twist rate, defect geometry and crystal dimension on the propagation of OAM-carrying trapped Dirac modes are critically analyzed. The results derived by FEM simulation are verified with an analytical theory based on the dynamics of Bloch modes in twisted photonic crystals, and the two are in good agreement. The proposed HC-PCF can play an important role in the excitation and guiding of OAM-carrying modes, which can aid particle trapping and quantum communication.
|
physics
|
Single-photon avalanche diode (SPAD) arrays are solid-state detectors offering imaging capabilities at the level of individual photons, with unparalleled photon counting and time-resolved performance. This fascinating technology has progressed at very high pace in the past 15~years, since its inception in standard CMOS technology in 2003. A host of architectures has been explored, ranging from simpler implementations, based solely on off-chip data processing, to progressively ``smarter" sensors including on-chip, or even pixel-level, timestamping and processing capabilities. As the technology matured, a range of biophotonics applications has been explored, including (endoscopic) FLIM, (multi-beam multiphoton) FLIM-FRET, SPIM-FCS, super-resolution microscopy, time-resolved Raman, NIROT, and PET. We will review some representative sensors and their corresponding applications, including the most relevant challenges faced by chip designers and end-users. Finally, we will provide an outlook on the future of this fascinating technology.
|
physics
|
Motivated by the recent theoretical study by Okubo et al. [Phys. Rev. Lett. ${\bf 108}$, 017206 (2012)] on the possible realization of the frustration-induced symmetric skyrmion-lattice state in the $J_1$-$J_2$ (or $J_1$-$J_3$) triangular-lattice Heisenberg model without the Dzyaloshinskii-Moriya interaction, we investigate the ordering of the classical $J_1$-$J_2$ honeycomb-lattice Heisenberg antiferromagnet under magnetic fields by means of a Monte Carlo simulation, a mean-field analysis and a low-temperature expansion. The model has been known to have an infinite ring-like degeneracy in wavevector space in its ground state for $1/6<J_2/J_1<0.5$, in distinction to the triangular-lattice model. As reported by Okumura et al. [J. Phys. Soc. Jpn. ${\bf 79}$, 114705 (2010)], such a ring-like degeneracy gives rise to exotic spin liquid states in zero field, e.g., the "ring-liquid" state and the "pancake-liquid" state. In this paper, we study the in-field ordering properties of the model, paying attention to the possible appearance of exotic multiple-$q$ states. The main focus is on the $J_2/J_1=0.3$ case, where we observe a rich variety of multiple-$q$ states including single-$q$, double-$q$ and triple-$q$ states. While the skyrmion-lattice triple-$q$ state observed in the triangular-lattice model is not realized, we instead observe an exotic double-$q$ state consisting of meron/antimeron lattice textures.
|
condensed matter
|
We propose a new scalable multi-class Gaussian process classification approach building on a novel modified softmax likelihood function. The new likelihood has two benefits: it leads to well-calibrated uncertainty estimates and allows for an efficient latent variable augmentation. The augmented model has the advantage that it is conditionally conjugate, leading to a fast variational inference method via block coordinate ascent updates. Previous approaches suffered from a trade-off between uncertainty calibration and speed. Our experiments show that our method leads to well-calibrated uncertainty estimates and competitive predictive performance while being up to two orders of magnitude faster than the state of the art.
|
statistics
|
Beyond future applications, quantum networks open interesting fundamental perspectives, notably novel forms of quantum correlations. In this work we discuss quantum correlations in networks from the perspective of the underlying quantum states and their entanglement. We address the question of which states can be prepared in the so-called triangle network, consisting of three nodes connected pairwise by three sources. We derive necessary criteria for a state to be preparable in such a network, considering both the case where the sources are statistically independent and the case where they are classically correlated. This shows that the network structure imposes strong and non-trivial constraints on the set of preparable states, fundamentally different from the standard characterization of multipartite quantum entanglement.
|
quantum physics
|
Coupled models of mantle thermal evolution, volcanism, outgassing, weathering, and climate evolution for Earth-like (in terms of size and composition) stagnant lid planets are used to assess their prospects for habitability. The results indicate that planetary CO$_2$ budgets ranging from $\approx 3$ orders of magnitude lower than Earth's to $\approx 1$ order of magnitude larger, and radiogenic heating budgets as large as or larger than Earth's, allow for habitable climates lasting 1-5 Gyr. The ability of stagnant lid planets to recover from potential snowball states is also explored; recovery is found to depend on whether atmosphere-ocean chemical exchange is possible. For a "hard" snowball with no exchange, recovery is unlikely, as most CO$_2$ outgassing takes place via metamorphic decarbonation of the crust, which occurs below the ice layer. However, for a "soft" snowball where there is exchange between atmosphere and ocean, planets can readily recover. For both hard and soft snowball states, there is a minimum CO$_2$ budget needed for recovery; below this limit any snowball state would be permanent. Thus there is the possibility of hysteresis in stagnant lid planet climate evolution, where planets with low CO$_2$ budgets that start off in a snowball climate will be permanently stuck in this state, while otherwise identical planets that start with a temperate climate will be capable of maintaining this climate for 1 Gyr or more. Finally, the model results have important implications for future exoplanet missions, as they can guide observations to the planets most likely to possess habitable climates.
|
astrophysics
|
Humans are capable of learning a new behavior by observing others perform the skill, and robots can do the same through imitation learning. Furthermore, with external guidance, humans can master a new behavior more efficiently. How can robots achieve this? To address the issue, we present Federated Imitation Learning (FIL) in this paper. Firstly, a knowledge fusion algorithm, deployed on the cloud, for fusing knowledge from local robots is presented. Then, effective transfer learning methods in FIL are introduced. With FIL, a robot is capable of utilizing knowledge from other robots to improve its imitation learning. FIL considers information privacy and data heterogeneity when robots share knowledge, making it suitable for deployment in cloud robotic systems. Finally, we conduct experiments on a simplified self-driving task for robots (cars). The experimental results demonstrate that FIL can improve the imitation learning of local robots in cloud robotic systems.
|
computer science
|
The DisCoCat model of natural language meaning assigns meaning to a sentence given: (i) the meanings of its words, and (ii) its grammatical structure. The recently introduced DisCoCirc model extends this to text consisting of multiple sentences. While in DisCoCat all meanings are fixed, in DisCoCirc each sentence updates the meanings of words. In this paper we explore different update mechanisms for DisCoCirc in the case where meaning is encoded in density matrices, which come with several advantages as compared to vectors. Our starting point is a pair of non-commutative update mechanisms, one of them borrowed from quantum foundations research, due to Leifer and Spekkens. Unfortunately, neither of these satisfies any desirable algebraic properties, nor is either internal to the meaning category. By passing to double density matrices we do obtain an elegant internal diagrammatic update mechanism. We also show that (commutative) spiders can be cast as an instance of the Leifer-Spekkens update mechanism. This result is of interest to quantum foundations, as it bridges the work in Categorical Quantum Mechanics (CQM) with that on conditional quantum states. Our work also underpins the implementation of text-level natural language processing on quantum hardware (a.k.a. QNLP), for which exponential space gain and quadratic speed-up have previously been identified.
|
quantum physics
|
We discuss the realization of a universal set of ultrafast single- and two-qubit operations with superconducting quantum circuits and investigate the most relevant physical and technical limitations that arise when pushing for faster and faster gates. With the help of numerical optimization techniques, we establish a fundamental bound on the minimal gate time, which is determined independently of the qubit design solely by its nonlinearity. In addition, important practical restrictions arise from the finite qubit transition frequency and the limited bandwidth of the control pulses. We show that for highly anharmonic flux qubits and commercially available control electronics, elementary single- and two-qubit operations can be implemented in about 100 picoseconds with residual gate errors below $10^{-4}$. Under the same conditions, we simulate the complete execution of a compressed version of Shor's algorithm for factoring the number 15 in about one nanosecond. These results demonstrate that compared to state-of-the-art implementations with transmon qubits, a hundredfold increase in the speed of gate operations with superconducting circuits is still feasible.
|
quantum physics
|
The sign problem is a key challenge in computational physics, encapsulating our inability to properly understand many important quantum many-body phenomena in physics, chemistry and the material sciences. Despite its centrality, the circumstances under which the problem arises or can be resolved as well as its interplay with the related notion of `non-stoquasticity' are often not very well understood. In this study, we make an attempt to elucidate the circumstances under which the sign problem emerges and to clear up some of the confusion surrounding this intricate computational phenomenon. To that aim, we make use of the recently introduced off-diagonal series expansion quantum Monte Carlo scheme with which we analyze in detail a number of examples that capture the essence of our results.
|
quantum physics
|
This paper deals with a batch self-organizing map algorithm (DBSOM) for data described by distributional-valued variables. Such variables take as values one-dimensional probability or frequency distributions on a numeric support. The objective function optimized in the algorithm depends on the choice of the distance measure. Given the nature of the data, the $L_2$ Wasserstein distance is proposed as one of the most suitable metrics for comparing distributions, and it is widely used in the analysis of distributional data. Conventional batch SOM algorithms consider all variables to be equally important for training the SOM. However, it is well known that some variables are less relevant than others for this task. In order to take into account the different contributions of the variables, we propose an adaptive version of the DBSOM algorithm that tackles this problem with an additional step: a relevance weight is automatically learned for each distributional-valued variable. Moreover, since the $L_2$ Wasserstein distance admits a decomposition into two components, one related to the means and one related to the size and shape of the distributions, relevance weights are also automatically learned for each of these components, to emphasize the importance of the different estimated parameters of the distributions. Examples on real and synthetic distributional datasets illustrate the usefulness of the proposed DBSOM algorithms.
|
statistics
|
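A short sketch of the $L_2$ Wasserstein distance between two one-dimensional samples that underlies the DBSOM above, computed from empirical quantile functions, together with its decomposition into a mean component and a centered size/shape component; the sample data and quantile grid are illustrative assumptions.

```python
import numpy as np

def l2_wasserstein_sq(x, y, n_quantiles=200):
    """Squared L2 Wasserstein distance between two 1D samples via quantile functions,
    decomposed into a mean part and a centered (size/shape) part."""
    t = (np.arange(n_quantiles) + 0.5) / n_quantiles
    qx, qy = np.quantile(x, t), np.quantile(y, t)
    total = np.mean((qx - qy) ** 2)
    mean_part = (x.mean() - y.mean()) ** 2
    shape_part = np.mean(((qx - x.mean()) - (qy - y.mean())) ** 2)
    return total, mean_part, shape_part

rng = np.random.default_rng(6)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(2.0, 1.5, 5000)
total, mean_part, shape_part = l2_wasserstein_sq(a, b)
print(f"W2^2 = {total:.3f}  (mean part {mean_part:.3f} + size/shape part {shape_part:.3f})")
```

For the two Gaussian samples above, the mean and size/shape parts approximately recover the closed form $(\mu_1-\mu_2)^2+(\sigma_1-\sigma_2)^2$, which is the decomposition the abstract exploits when learning component-wise relevance weights.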
Hydrogen deuteride (HD) is prevalent in a wide variety of astrophysical environments, and measuring its large-scale distribution at different epochs can in principle provide information about the properties of these environments. In this paper, we explore the prospects for accessing this distribution using line intensity mapping of emission from the lowest rotational transition in HD, focusing on observations of the epoch of reionization ($z\sim6-10$) and earlier. We find the signal from the epoch of reionization to be the strongest and most promising, accessible through cross-correlations within existing [CII] intensity mapping surveys. While the signal we predict is out of reach for current-generation projects, planned future improvements should be able to detect reionization-era HD without any additional observations, and would help to constrain the properties of the star-forming galaxies thought to play a key role in reionization. We also investigate several avenues for measuring HD during "cosmic dawn" ($z\sim10-30$), a period in which HD could provide one of the only complementary observables to 21$\,$cm intensity maps. We conclude that existing and planned facilities are poorly matched to the specifications desirable for a significant detection, though such a measurement may be achievable with sustained future effort. Finally, we explain why HD intensity mapping of the intergalactic medium during the cosmic dark ages ($z\gtrsim 30$) appears to be out of reach of any conceivable experiment.
|
astrophysics
|
The recently developed quadrature by expansion (QBX) technique accurately evaluates the layer potentials with singular, weakly or nearly singular, or even hyper singular kernels in the integral equation reformulations of partial differential equations. The idea is to form a local complex polynomial or partial wave expansion centered at a point away from the boundary to avoid the singularity in the integrand, and then extrapolate the expansion at points near or even exactly on the boundary. In this paper, in addition to the local complex Taylor polynomial expansion, we derive new representations of the Laplace layer potentials using both the local complex polynomial and plane wave expansions. Unlike in the QBX, the local complex polynomial expansion in the new quadrature by two expansions (QB2X) method only collects the far-field contributions and its number of expansion terms can be analyzed using tools from the classical fast multipole method. The plane wave type expansion in the QB2X method better captures the layer potential features near the boundary. It is derived by applying the Fourier extension technique to the density and boundary geometry functions and then analytically utilizing the Residue Theorem for complex contour integrals. The internal connections of the layer potential with its density function and curvature on the boundary are explicitly revealed in the plane wave expansion and its error is bounded by the Fourier extension errors. We present preliminary numerical results to demonstrate the accuracy of the QB2X representations and to validate our analysis.
|
mathematics
|
Form factors are quantities that involve both asymptotic on-shell states and gauge invariant operators. They provide a natural bridge between on-shell amplitudes and off-shell correlation functions of operators, thus allowing us to use modern on-shell amplitude techniques to probe the off-shell side of quantum field theory. In particular, form factors have been successfully used in computing the cusp (soft) anomalous dimensions and the anomalous dimensions of general local operators. This review is intended to provide a pedagogical introduction to some of these developments. We first review some amplitude background using four-point amplitudes as the main examples. We then generalize these techniques to form factors, including (1) tree-level form factors, (2) the Sudakov form factor and infrared singularities, and (3) form factors of general operators and their anomalous dimensions. Although most examples we consider are in N=4 super-Yang-Mills theory, the on-shell methods are universal and are expected to be applicable to general gauge theories.
|
high energy physics theory
|
Modern methods for studying wetting in nanodisperse systems, based on the use of electron microscopy, are presented, together with experimental results on size effects in the contact interaction of highly dispersed phases. Intended for students of physics and physics-and-technology specialties at higher education institutions.
|
condensed matter
|
The "de Sitter constraint" on the space of effective scalar field theories consistent with superstring theory provides a lower bound on the slope of the potential of a scalar field which dominates the evolution of the Universe, e.g., a hypothetical inflaton field. Whereas models of single scalar field inflation with a canonically normalized field do not obey this constraint, it has been claimed recently in the literature that models of warm inflation can be made compatible with it in the case of large dissipation. The de Sitter constraint is known to be derived from entropy considerations. Since warm inflation necessary involves entropy production, it becomes necessary to determine how this entropy production will affect the constraints imposed by the swampland conditions. Here, we generalize these entropy considerations to the case of warm inflation and show that the condition on the slope of the potential remains essentially unchanged and is, hence, robust even in the warm inflation dynamics. We are then able to conclude that models of warm inflation indeed can be made consistent with the "swampland" criteria.
|
high energy physics theory
|
A quantum integrability index was proposed in \cite{KMS}. It systematizes Goldschmidt and Witten's operator counting argument \cite{GW} by using the conformal symmetry. In this work we compute the quantum integrability indexes for the symmetric coset models ${SU(N)}/{SO(N)}$ and $SO(2N)/{SO(N)\times SO(N)}$. The indexes of these theories are all non-positive except for the case of ${SO(4)}/{SO(2)\times SO(2)}$. Moreover, we extend the analysis to theories with fermions and consider a concrete theory: the $\mathbb{CP}^N$ model coupled with a massless Dirac fermion. We find that the indexes for this class of models are non-positive as well.
|
high energy physics theory
|
Finding the important nodes in complex networks by topological structure is of great significance to network invulnerability. Several centrality measures have been proposed recently to evaluate the performance of nodes based on their correlation, showing that the interaction between nodes has an influence on the importance of nodes. In this paper, a novel method based on node distribution and global influence in complex networks is proposed. Our main idea is that the importance of a node is linked not only to its relative position in the network but also to its correlations with other nodes. The nodes in the complex networks are classified according to the distance matrix, then the correlation coefficient between pairs of nodes is calculated. From a whole-network perspective, the global similarity centrality (GSC) is proposed based on the relevance and shortest distance between any two nodes. The efficiency, accuracy and monotonicity of the proposed method are analyzed on two artificial datasets and eight real datasets of different sizes. Experimental results show that the GSC method outperforms current state-of-the-art algorithms.
|
computer science
|
We use a reformulation of compositional game theory to reunite game theory with game semantics, by viewing an open game as the System and its choice of contexts as the Environment. Specifically, the system is jointly controlled by $n \geq 0$ noncooperative players, each independently optimising a real-valued payoff. The goal of the system is to play a Nash equilibrium, and the goal of the environment is to prevent it. The key to this is the realisation that lenses (from functional programming) form a dialectica category, which have an existing game-semantic interpretation. In the second half of this paper, we apply these ideas to build a compact closed category of `computable open games' by replacing the underlying dialectica category with a wave-style geometry of interaction category, specifically the Int-construction applied to the cartesian monoidal category of directed-complete partial orders.
|
computer science
|
We study quantum corrections in four-dimensional theories with $N=1$ supersymmetry in the context of Quantum Gravity Conjectures. According to the Emergent String Conjecture, infinite distance limits in quantum gravity either lead to decompactification of the theory or result in a weakly coupled string theory. We verify this conjecture in the framework of $N=1$ supersymmetric F-theory compactifications to four dimensions including perturbative $\alpha'$ as well as non-perturbative corrections. After proving uniqueness of the emergent critical string at the classical level, we show that quantum corrections obstruct precisely those limits in which the scale of the emergent critical string would lie parametrically below the Kaluza-Klein scale. Limits in which the tension of the asymptotically tensionless string sits at the Kaluza-Klein scale, by contrast, are not obstructed. In the second part of the paper we study the effect of quantum corrections for the Weak Gravity Conjecture away from the strict weak coupling limit. We propose that gauge threshold corrections and mass renormalisation effects modify the super-extremality bound in four dimensions. For the infinite distance limits in F-theory the classical super-extremality bound is generically satisfied by a sublattice of states in the tower of excitations of an emergent heterotic string. By matching the F-theory $\alpha'$ corrections to gauge threshold corrections of the dual heterotic theory we predict how the masses of this tower must be renormalised in order for the Weak Gravity Conjecture to hold at the quantum level.
|
high energy physics theory
|
The Tangent Works team participated in GEFCom 2017 to test its automatic model building strategy for time series known as Tangent Information Modeller (TIM). Model building using TIM combined with historical temperature shuffling resulted in winning the competition. This strategy involved one remaining degree of freedom, a decision on using a trend variable. This paper describes our modelling efforts in the competition, and furthermore outlines a fully automated scenario where the decision on using the trend variable is handled by TIM. The results show that such a setup would also win the competition.
|
statistics
|
The collision dynamics of hard spheres and cylindrical pores is solved exactly, which is the minimal model for a regularly porous membrane. Nonequilibrium event-driven molecular dynamics simulations are used to show that the permeability $P$ of hard spheres of size $\sigma$ through cylindrical pores of size $d$ follows the hindered diffusion mechanism due to size exclusion as $P \propto (1-\sigma/d)^2$. Under this law, the separation of binary mixtures of large and small particles exhibits a linear relationship between $\alpha^{-1/2}$ and $P^{-1/2}$, where $\alpha$ and $P$ are the selectivity and permeability of the smaller particle, respectively. The mean permeability through polydisperse pores is the sum of permeabilities of individual pores, weighted by the fraction of the single pore area over the total pore area.
|
condensed matter
|
The free energy of ABJM theory has previously been computed in the strong and weak coupling limits. In this note, we report on results for the computation of the first non-vanishing quantum correction to the free energy, from the field theory side. The correction can be expressed in terms of a thermal mass for the scalar fields. This mass vanishes to 1-loop order, but there is a non-vanishing result to 2-loop order. Hence, the leading correction to the free energy is non-analytic in the 't Hooft coupling constant lambda. The reason is that the infrared divergences necessitate a resummation of ring diagrams and a related reorganization of perturbation theory, in which already the leading correction receives contributions from all orders in lambda. These results suggest that the free energy interpolates smoothly between weak and strong coupling.
|
high energy physics theory
|
We prove a Serre relation in the $K$-theoretic Hall algebra of surfaces constructed by Kapranov-Vasserot and the second author.
|
mathematics
|
As the world's largest professional network, LinkedIn wants to create economic opportunity for everyone in the global workforce. One of its most critical missions is matching jobs with professionals. Improving job targeting accuracy and hire efficiency align with LinkedIn's Member First Motto. To achieve those goals, we need to understand unstructured job postings with noisy information. We applied deep transfer learning to create domain-specific job understanding models. After this, jobs are represented by professional entities, including titles, skills, companies, and assessment questions. To continuously improve LinkedIn's job understanding ability, we designed an expert feedback loop where we integrated job understanding models into LinkedIn's products to collect job posters' feedback. In this demonstration, we present LinkedIn's job posting flow and demonstrate how the integrated deep job understanding work improves job posters' satisfaction and provides significant metric lifts in LinkedIn's job recommendation system.
|
computer science
|
An `open' or $(\mu,P,T)$-ensemble describes equilibrium systems whose control parameters are chemical potential $\mu$, pressure $P$ and temperature $T$. Such an unconstrained ensemble is seldom used for applications to standard thermodynamic systems due to the fact that the corresponding free energy identically vanishes as a result of the Euler relation. However, an open ensemble is perfectly regular for the case of black holes, as the entropy is a quasi-homogeneous function of extensive thermodynamic variables with scaling dictated by the Smarr formula. Following a brief discussion on thermodynamics in the open ensemble, we compute the general form of logarithmic corrections to the entropy of a typical system, due to fluctuations in energy, thermodynamic volume and a generic charge $N$. This is then used to obtain the exact analytic form of the logarithmically corrected black hole entropy for charged and rotating black holes in anti-de Sitter spacetimes.
|
high energy physics theory
|
We study, within the classical fields approximation, a two-dimensional weakly interacting uniform Bose gas of a finite number of atoms. By using a grand canonical ensemble formalism we show that such systems exhibit, in addition to the Berezinskii-Kosterlitz-Thouless (BKT) and thermal phases, the intermediate region. This region is characterized by a decay of current-current correlations at low momenta and by an algebraic decay of the first-order correlations with an exponent being larger than the critical value predicted by the BKT theory. The density of the superfluid fraction at the temperature which separates the BKT phase from the intermediate region approaches the one found by Nelson and Kosterlitz for two-dimensional superfluids while the number of atoms is increased.
|
condensed matter
|
Electron transport in branched semiconductor nanostructures provides many possibilities for creating fundamentally new devices. We solve the problem of its calculation using a quantum network model. The proposed scheme consists of three computational parts: S-matrix of the network junction, S-matrix of the network in terms of its junctions' S-matrices, electric currents through the network based on its S-matrix. To calculate the S-matrix of the network junction, we propose scattering boundary conditions in a clear integro-differential form. As an alternative, we also consider the Dirichlet-to-Neumann and Neumann-to-Dirichlet map methods. To calculate the S-matrix of the network in terms of its junctions' S-matrices, we obtain a network combining formula. We find electrical currents through the network in the framework of the Landauer$-$B\"uttiker formalism. Everywhere for calculations, we use extended scattering matrices, which allows taking into account correctly the contribution of tunnel effects between junctions. We demonstrate the proposed calculation scheme by modeling nanostructure based on two-dimensional electron gas. For this purpose we offer a model of a network formed by smooth junctions with one, two and three adjacent branches. We calculate the electrical properties of such a network (by the example of GaAs), formed by four junctions, depending on the temperature.
|
condensed matter
|
Axion is a popular candidate for dark matter particles. Axionic dark matter may form Bose-Einstein condensate and may be gravitationally bound to form axion clumps. Under the presence of electromagnetic waves with frequency $\omega=m_{a}/2$, where $m_{a}$ is the axion mass, a resonant enhancement may occur, causing an instability of the axion clumps. With analytical and numerical approaches, we study the resonant instability of axionic dark matter clumps with infinite homogeneous mass distribution, as well as distribution with a finite boundary. After taking realistic astrophysical environments into consideration, including gravitational redshift and plasma effects, we obtain an instability region in the axion density-clump size parameter space with given mass and coupling of axions. In particular, we show that, for axion clumps formed by the QCD axions in equilibrium, no resonant instability will occur.
|
high energy physics phenomenology
|
We report an optically gated transistor composed of CdSe nanocrystals (NCs), sensitized with the dye Zinc beta-tetraaminophthalocyanine for operation in the first telecom window. This device shows a high ON/OFF ratio of six orders of magnitude in the red spectral region and an unprecedented 4.5 orders of magnitude at 847 nm. By transient absorption spectroscopy, we reveal that this unexpected infrared sensitivity is due to electron transfer from the dye to the CdSe NCs within 5 ps. We show by time-resolved photocurrent measurements that this enables fast rise times during near-infrared optical gating of 74 ns. Electronic coupling and accelerated non-radiative recombination of charge carriers at the interface between the dye and the CdSe NCs are further corroborated by steady-state and time-resolved photoluminescence measurements. Field-effect transistor measurements indicate that the increase in photocurrent upon laser illumination is mainly due to the increase in carrier concentration while the mobility remains unchanged. Our results illustrate that organic dyes as ligands for NCs invoke new optoelectronic functionalities, such as fast optical gating at sub-bandgap optical excitation energies.
|
condensed matter
|
Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain. Such domain adaptation is typically done using one stage of fine-tuning. We demonstrate that gradually fine-tuning in a multi-stage process can yield substantial further gains and can be applied without modifying the model or learning objective.
|
computer science
|
Reanalysis data are widely used for simulating renewable energy and in particular wind power generation. While MERRA-2 has been a de-facto standard in many studies, the newer ERA5 reanalysis has recently gained importance. Here, we use these two datasets to simulate wind power generation and evaluate the respective quality in terms of correlations and errors when validated against historical wind power generation. However, due to their coarse spatial resolution, reanalyses fail to adequately represent local climatic conditions. We therefore additionally apply mean bias correction with two versions of the Global Wind Atlas (GWA) and assess the respective quality of the resulting simulations. Potential users of the dataset can also benefit from our analysis of the impact of spatial and temporal aggregation on simulation quality indicators. While similar studies have been conducted, they mainly cover limited areas in Europe. In contrast, we look into regions around the globe that differ significantly in terms of the prevailing climate: the US, Brazil, South Africa, and New Zealand. Our principal findings are that (i) ERA5 outperforms MERRA-2, (ii) no major improvements can be expected by using bias-correction with GWA2, while GWA3 even reduces simulation quality, and (iii) temporal aggregation increases correlations and reduces errors, while spatial aggregation does so only consistently when comparing very low and very high aggregation levels.
|
statistics
|
We apply a "color" tripole ansatz for describing the $D$-term of the proton. By fitting the experimental data of the vector meson J/$\psi$ and $\phi$ photoproductions near the thresholds, we firstly obtained the gluonic $D$-term of the proton. $D_g(0)$ is estimated to be $-2.16\pm0.42$ for J/$\psi$ and $-1.31\pm0.48$ for $\phi$, and the mechanical root mean square radius of proton is estimated to be $0.61\pm0.29$ fm for $\phi$ and $0.42\pm0.11$ fm for J/$\psi$.
|
high energy physics phenomenology
|
We refine and prove the central conjecture of our first paper for annuli with at least two marked intervals on each boundary component by computing the derived Hall algebras of their Fukaya categories.
|
mathematics
|
This letter aims at extending the Constrained Semiparametric Cramer-Rao Bound (CSCRB) for the joint estimation of mean vector and scatter matrix of Real Elliptically Symmetric (RES) distributions to Complex Elliptically Symmetric (CES) distributions. A closed form expression for the complex CSCRB (CCSCRB) is derived by exploiting the so-called \textit{Wirtinger} or $\mathbb{C}\mathbb{R}$-\textit{calculus}. Finally, the CCSCRB for the estimation of the complex mean vector and scatter matrix of a set of complex $t$-distributed random vectors is provided as an example of application.
|
electrical engineering and systems science
|
We expand the relativistic precession model to include nonequatorial and eccentric trajectories and apply it to quasi-periodic oscillations (QPOs) in black hole X-ray binaries (BHXRBs) and associate their frequencies with the fundamental frequencies of the general case of nonequatorial (with Carter's constant, $Q\neq 0$) and eccentric ($e\neq 0$) particle trajectories, around a Kerr black hole. We study cases with either two or three simultaneous QPOs and extract the parameters \{$e$, $r_p$, $a$, $Q$\}, where $r_p$ is the periastron distance of the orbit, and $a$ is the spin of the black hole. We find that the orbits with $\left[Q=0-4\right]$ should have $e\lesssim 0.5$ and $r_p \sim 2-20$ for the observed range of QPO frequencies, where $a \in [0,1]$, and that the spherical trajectories \{$e=0$, $Q \neq0$\} with $Q=2-4$ should have $r_s \sim 3-20$. We find nonequatorial eccentric solutions for both M82 X-1 and GROJ 1655-40. We see that these trajectories, when taken together, span a torus region and give rise to a strong QPO signal. For two simultaneous QPO cases, we found equatorial eccentric orbit solutions for XTEJ 1550-564, 4U 1630-47, and GRS 1915+105, and spherical orbit solutions for BHXRBs M82 X-1 and XTEJ 1550-564. We also show that the eccentric orbit solution fits the Psaltis-Belloni-Klis correlation observed in BHXRB GROJ 1655-40. Our analysis of the fluid flow in the relativistic disk edge suggests that instabilities cause QPOs to originate in the torus region. We also present some useful formulae for trajectories and frequencies of spherical and equatorial eccentric orbits.
|
astrophysics
|
De novo peptide sequencing algorithms have been widely used in proteomics to analyse tandem mass spectra (MS/MS) and assign them to peptides, but quality-control methods to evaluate the confidence of de novo peptide sequencing are lagging behind. A fundamental part of a quality-control method is the scoring function used to evaluate the quality of peptide-spectrum matches (PSMs). Here, we propose a genetic programming (GP) based method, called GP-PSM, to learn a PSM scoring function for improving the rate of confident peptide identification from MS/MS data. The GP method learns from thousands of MS/MS spectra. Important characteristics about goodness of the matches are extracted from the learning set and incorporated into the GP scoring functions. We compare GP-PSM with two methods including Support Vector Regression (SVR) and Random Forest (RF). The GP method along with RF and SVR, each is used for post-processing the results of peptide identification by PEAKS, a commonly used de novo sequencing method. The results show that GP-PSM outperforms RF and SVR and discriminates accurately between correct and incorrect PSMs. It correctly assigns peptides to 10% more spectra on an evaluation dataset containing 120 MS/MS spectra and decreases the false positive rate (FPR) of peptide identification.
|
computer science
|
In this note, we study the contributions from the S-wave resonances, $f_{0}(980)$ and $f_{0}(1500)$, to the $B^{0}_{s}\rightarrow \psi(3770)\pi^{+}\pi^{-}$ decay by introducing the S-wave $\pi\pi$ distribution amplitudes within the framework of the perturbative QCD approach. Both resonant and nonresonant contributions are contained in the scalar form factor in the S-wave distribution amplitude $\Phi^S_{\pi\pi}$. Since the vector charmonium meson $\psi(3770)$ is an S-D wave mixed state, we calculate the branching ratios of the S-wave and D-wave components respectively; the results indicate that $f_{0}(980)$ provides the main contribution to the considered decay, and the branching ratio of the $\psi(2S)$ mode is in good agreement with the experimental data. We also take the S-D mixing effect into account in the $B^{0}_{s}\rightarrow \psi(3686)\pi^{+}\pi^{-}$ decay. Our calculations show that the branching ratio of $B^{0}_{s}\rightarrow \psi(3770)(\psi(3686))\pi^{+}\pi^{-}$ can be at the order of $10^{-5}$, which can be tested by the running LHCb experiment.
|
high energy physics phenomenology
|
In this paper, we study the so-called intermediate disorder regime for a directed polymer in a random environment with heavy-tail. Consider a simple symmetric random walk $(S_n)_{n\geq 0}$ on $\mathbb{Z}^d$, with $d\geq 1$, and modify its law using Gibbs weights in the product form $\prod_{n=1}^{N} (1+\beta\eta_{n,S_n})$, where $(\eta_{n,x})_{n\ge 0, x\in \mathbb{Z}^d}$ is a field of i.i.d. random variables whose distribution satisfies $\mathbb{P}(\eta>z) \sim z^{-\alpha}$ as $z\to\infty$, for some $\alpha\in(0,2)$. We prove that if $\alpha< \min(1+\frac{2}{d},2)$, when sending $N$ to infinity and rescaling the disorder intensity by taking $\beta=\beta_N \sim N^{-\gamma}$ with $\gamma =\frac{d}{2\alpha}(1+\frac{2}{d}-\alpha)$, the distribution of the trajectory under diffusive scaling converges in law towards a random limit, which is the continuum polymer with L\'evy $\alpha$-stable noise constructed in the companion paper arXiv:2007.06484.
|
mathematics
|
In this paper, we will apply the Goldstone equivalence gauge to calculate the $1 \leftrightarrow 2$ processes of a sterile neutrino in the thermal plasma below the standard model (SM) critical temperature $T_c \approx 160 \text{ GeV}$. The sterile neutrino's mass is around the electroweak scale $50 \text{ GeV} \leq m_N \leq 200 \text{ GeV}$, and the acquired thermal averaged effective width $\bar{\Gamma}_{\text{tot}}$ is continuous around the cross-over. We will also apply our results to perform a preliminary calculation of the leptogenesis.
|
high energy physics phenomenology
|
We present a new video compression framework (ViSTRA2) which exploits adaptation of spatial resolution and effective bit depth, down-sampling these parameters at the encoder based on perceptual criteria, and up-sampling at the decoder using a deep convolutional neural network. ViSTRA2 has been integrated with the reference software of both HEVC (HM 16.20) and VVC (VTM 4.01), and evaluated under the Joint Video Exploration Team Common Test Conditions using the Random Access configuration. Our results show consistent and significant compression gains against HM and VVC based on Bj{\o}ntegaard Delta measurements, with average BD-rate savings of 12.6% (PSNR) and 19.5% (VMAF) over HM and 5.5% (PSNR) and 8.6% (VMAF) over VTM.
|
electrical engineering and systems science
|
Recently, the investigation of metasurfaces has been extended to wave control through exploiting nonlinearity. Among all of the ways to achieve tunable metasurfaces with multiplexed performances, nonlinearity is one of the promising choices. Although several proposals have been reported to obtain nonlinear architectures at visible frequencies, the incorporation of nonlinearity in the form of passive designs for microwave metasurfaces remains open for investigation. In this paper, a passive wideband nonlinear metasurface is manifested, which is composed of embedded L-shape and {\Gamma}-shape meta-atoms with PIN-diode elements. The proposed self-biased nonlinear metasurface has two operational states: at low power intensities, it acts as a Quarter Wave Plate (QWP) in the frequency range from 13.24 GHz to 16.38 GHz, with an Axial Ratio (AR) bandwidth of 21.2%. In contrast, at high power intensities, by using the polarization conversion property of the proposed PIN-diode based meta-atoms, the metasurface can act as a digital metasurface. This means that by arranging the meta-atoms with a certain coding pattern, the metasurface can manipulate the scattered beams and synthesize well-known patterns such as a diffusion-like pattern over an ultra-wide frequency range from 8.12 GHz to 19.27 GHz (BW=81.4%). Full-wave and nonlinear simulations are carried out to justify the performance of the wideband nonlinear metasurface. We expect that the proposed self-biased nonlinear metasurface at microwave frequencies will reveal excellent opportunities to design limiter metasurfaces and compact reconfigurable imaging systems.
|
physics
|
We show that the mixed volumes of arbitrary convex bodies are equal to mixed multiplicities of graded families of monomial ideals, and to normalized limits of mixed multiplicities of monomial ideals. This result evinces the close relation between the theories of mixed volumes from convex geometry and mixed multiplicities from commutative algebra.
|
mathematics
|
We devise a method based on the tensor-network formalism to calculate genuine multisite entanglement in ground states of infinite spin chains containing spin-1/2 or spin-1 quantum particles. The ground state is obtained by employing an infinite time-evolving block decimation method acting upon an initial matrix product state for the infinite spin system. We explicitly show how such infinite matrix product states with translational invariance provide a natural framework to derive the generalized geometric measure, a computable measure of genuine multisite entanglement, in the thermodynamic limit of quantum many-body systems with both spin-1/2 and higher-spin particles.
|
quantum physics
|
Motivated by Stanley's $\mathbf{(3+1)}$-free conjecture on chromatic symmetric functions, Foley, Ho\`{a}ng and Merkel introduced the concept of strong $e$-positivity and conjectured that a graph is strongly $e$-positive if and only if it is (claw, net)-free. In order to study strongly $e$-positive graphs, they further introduced the twinning operation on a graph $G$ with respect to a vertex $v$, which adds a vertex $v'$ to $G$ such that $v$ and $v'$ are adjacent and any other vertex is adjacent to both of them or neither of them. Foley, Ho\`{a}ng and Merkel conjectured that if $G$ is $e$-positive, then so is the resulting twin graph $G_v$ for any vertex $v$. Based on the theory of chromatic symmetric functions in non-commuting variables developed by Gebhard and Sagan, we establish the $e$-positivity of a class of graphs called tadpole graphs. By considering the twinning operation on a subclass of these graphs with respect to certain vertices we disprove the latter conjecture of Foley, Ho\`{a}ng and Merkel. We further show that if $G$ is $e$-positive, the twin graph $G_v$ and more generally the clan graphs $G^{(k)}_v$ ($k \ge 1$) may not even be $s$-positive, where $G^{(k)}_v$ is obtained from $G$ by applying $k$ twinning operations to $v$.
|
mathematics
|
Surface bound catalytic chemical reactions self-propel chemically active Janus particles. In the vicinity of boundaries, these particles exhibit rich behavior, such as the occurrence of wall-bound steady states of "sliding". Most active particles tend to sediment as they are density mismatched with the solution. Moreover Janus spheres, which consist of an inert core material decorated with a cap-like, thin layer of a catalyst, are gyrotactic ("bottom-heavy"). Occurrence of sliding states near the horizontal walls depends on the interplay between the active motion and the gravity-driven sedimentation and alignment. It is thus important to understand and quantify the influence of these gravity-induced effects on the behavior of model chemically active particles moving near walls. For model gyrotactic, self-phoretic Janus particles, here we study theoretically the occurrence of sliding states at horizontal planar walls that are either below ("floor") or above ("ceiling") the particle. We construct "state diagrams" characterizing the occurrence of such states as a function of the sedimentation velocity and of the gyrotactic response of the particle, as well as of the phoretic mobility of the particle. We show that in certain cases sliding states may emerge simultaneously at both the ceiling and the floor, while the larger part of the experimentally relevant parameter space corresponds to particles that would exhibit sliding states only either at the floor or at the ceiling or there are no sliding states at all. These predictions are critically compared with the results of previous experimental studies and our experiments conducted on Pt-coated polystyrene and silica-core particles suspended in aqueous hydrogen peroxide solutions.
|
condensed matter
|
Computer experiments are becoming increasingly important in scientific investigations. In the presence of uncertainty, analysts employ probabilistic sensitivity methods to identify the key-drivers of change in the quantities of interest. Simulation complexity, large dimensionality and long running times may force analysts to make statistical inference at small sample sizes. Methods designed to estimate probabilistic sensitivity measures at relatively low computational costs are attracting increasing interest. We propose a fully Bayesian approach to the estimation of probabilistic sensitivity measures based on a one-sample design. We discuss, first, new estimators based on placing piecewise constant priors on the conditional distributions of the output given each input, by partitioning the input space. We then present two alternatives, based on Bayesian non-parametric density estimation, which bypass the need for predefined partitions. In all cases, the Bayesian paradigm guarantees the quantification of uncertainty in the estimation process through the posterior distribution over the sensitivity measures, without requiring additional simulator evaluations. The performance of the proposed methods is compared to that of traditional point estimators in a series of numerical experiments comprising synthetic but challenging simulators, as well as a realistic application. $\textit{An Updated Version of the Manuscript is Forthcoming in Statistics and Computing.}$
|
statistics
|
In April 2016, Daniela Frauchiger and Renato Renner published an article online in which they introduce a Gedankenexperiment that led them to conclude that a single-world interpretation of quantum theory cannot be self-consistent. In a new version of the paper, published in September 2018, the authors moderate their original claim by concluding that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner. The purpose of this article is to offer a careful reconstruction of the F-R argument, which allows us to show that: (i) the argument can be more clearly formulated with no reference to what subjects know or see, but rather only in terms of quantum propositions, (ii) in contrast to what some commentators suppose, the argument does not require the hypothesis of collapse to arrive at its conclusion, and (iii) the contradiction resulting from the F-R argument is inferred by making classical conjunctions between different and incompatible contexts. On the basis of this clarification, we will finally argue that the conclusion of the F-R argument is not as novel and original as its great impact might lead us to suppose.
|
quantum physics
|
This paper shows that the eccentric debris rings seen around the stars Fomalhaut and HD 202628 are narrower than expected in the standard eccentric planet perturbation scenario (sometimes referred to as "pericenter glow"). The standard scenario posits an initially circular and narrow belt of planetesimals at semi-major axis $a$, whose eccentricity is increased to $e_f$ after the gas disc has dispersed by secular perturbations from an eccentric planet, resulting in a belt of width $2ae_f$. In a minor modification of this scenario, narrower belts can arise if the planetesimals are initially eccentric, which could result from earlier planet perturbations during the gas-rich protoplanetary disc phase. However, a primordial eccentricity could alternatively be caused by instabilities that increase the disc eccentricity, without the need for any planets. Whether these scenarios produce detectable eccentric rings within protoplanetary discs is unclear, but they nevertheless predict that narrow eccentric planetesimal rings should exist before the gas in protoplanetary discs is dispersed. PDS 70 is noted as a system hosting an asymmetric protoplanetary disc that may be a progenitor of eccentric debris ring systems.
|
astrophysics
|
Extended scalar and fermion sectors offer new opportunities for generating the observed strong hierarchies in the fermion mass and mixing patterns of the Standard Model (SM). In this work, we elaborate on the prospects of a particular extension of the Inert Higgs doublet model where the SM hierarchies are generated sequentially by radiative virtual corrections in a fully renormalisable way, i.e. without adding any non-renormalisable Yukawa terms or soft-breaking operators to the scalar potential. Our model has the potential to explain the recently observed $R_{K}$ and $R_{K^{\ast }}$ anomalies, thanks to the non-universal $U(1)_X$ assignments of the fermionic fields that yield non-universal $Z^{\prime}$ couplings to fermions. We explicitly demonstrate the power of this model for generating the realistic quark, lepton and neutrino mass spectra. In particular, we show that due to the presence of both continuous and discrete family symmetries in the considered framework, the top quark acquires a tree-level mass, lighter quarks and leptons get their masses at one- and two-loop order, while neutrino masses are generated at three-loop level. The minimal field content, particle spectra and scalar potential of this model are discussed in detail.
|
high energy physics phenomenology
|
In this paper, we investigate sizeable interference effects between a heavy charged Higgs boson signal produced via $gg\to t\bar b H^-$ (+ c.c.) followed by the decay $H^-\to b\bar t$ (+ c.c.) and the irreducible background given by $gg\to t\bar t b \bar b$ topologies at the Large Hadron Collider (LHC). We show how such effects could spoil current $H^\pm$ searches where signal and background are normally treated separately. The reason for this is that a heavy charged Higgs boson can have a large total width, in turn enabling such interferences, altogether leading to very significant alterations, both at the inclusive and exclusive level, of the yield induced by the signal alone. This therefore implies that currently established LHC searches for such wide charged Higgs bosons require modifications. We show such effects quantitatively using two different benchmark configurations of the minimal realisation of Supersymmetry, wherein such $H^\pm$ states naturally exist.
|
high energy physics phenomenology
|
(Un)conscious bias affects every aspect of the astronomical profession, from scientific activities (e.g., invitations to join collaborations, proposal selections, grant allocations, publication review processes, and invitations to attend and speak at conferences) to activities more strictly related to career advancement (e.g., reference letters, fellowships, hiring, promotion, and tenure). For many, (un)conscious bias is still the main hurdle to achieving excellence, as the most diverse talents encounter bigger challenges and difficulties to reach the same milestones than their more privileged colleagues. Over the past few years, the Space Telescope Science Institute (STScI) has constructed tools to raise awareness of (un)conscious bias and has designed guidelines and goals to increase diversity representation and outcome in its scientific activities, including career-related matters and STScI sponsored fellowships, conferences, workshops, and colloquia. STScI has also addressed (un)conscious bias in the peer-review process by anonymizing submission and evaluation of Hubble Space Telescope (and soon to be James Webb Space Telescope) observing proposals. In this white paper we present a plan to standardize these methods with the expectation that these universal recommendations will truly increase diversity, inclusiveness and fairness in Astronomy if applied consistently throughout all the scientific activities of the Astronomical community.
|
astrophysics
|
Algebraic statistics uses tools from algebra (especially from multilinear algebra, commutative algebra and computational algebra), geometry and combinatorics to provide insight into knotty problems in mathematical statistics. In this survey we illustrate this on three problems related to networks, namely network models for relational data, causal structure discovery and phylogenetics. For each problem we give an overview of recent results in algebraic statistics with emphasis on the statistical achievements made possible by these tools and their practical relevance for applications to other scientific disciplines.
|
mathematics
|
The production of counterfeit money has a long history. It refers to the creation of imitation currency that is produced without the legal sanction of government. With the growth of the cryptocurrency ecosystem, there is expanding evidence that counterfeit cryptocurrency has also appeared. In this paper, we empirically explore the presence of counterfeit cryptocurrencies on Ethereum and measure their impact. By analyzing over 190K ERC-20 tokens (or cryptocurrencies) on Ethereum, we have identified 2,117 counterfeit tokens that target 94 of the 100 most popular cryptocurrencies. We perform an end-to-end characterization of the counterfeit token ecosystem, including their popularity, creators and holders, fraudulent behaviors and advertising channels. Through this, we have identified two types of scams related to counterfeit tokens and devised techniques to identify such scams. We observe that over 7,104 victims were deceived in these scams, and the overall financial loss sums to a minimum of $17 million (74,271.7 ETH). Our findings demonstrate the urgency to identify counterfeit cryptocurrencies and mitigate this threat.
|
computer science
|
We suggest a deep learning based sensor signal processing method to remove chemical, kinetic and electrical artifacts from the measured values of ion-selective electrodes (ISEs). An ISE is used to investigate the concentration of a specific ion in an aqueous solution by measuring the Nernst potential along its glass membrane. However, applying an ISE to a mixture of multiple ions poses several problems. The first problem is a chemical artifact known as the ion interference effect: electrically charged particles interact with each other and flow through the glass membranes of different ISEs. The second problem is a kinetic artifact caused by the movement of the liquid, where water molecules collide with the glass membrane and cause abnormal voltage peaks. The last artifact is the mutual interference of ISEs: when multiple ISEs are dipped into the same solution, one electrode's signal emission interferes with the voltage measurements of the other electrodes. For these reasons, an ISE is usually recommended to be applied only to a single-ion solution, without any other sensors operating at the same time. A deep learning approach can remove all three artifacts simultaneously. The proposed method uses five layers of artificial neural networks to regress the correct signal, removing the complex artifacts in a one-shot calculation. Its MAPE was less than 1.8% and the R2 of the regression was 0.997. A randomly chosen value of AI-processed data has a MAPE of less than 5% (p-value 0.016).
|
computer science
|
We explore the prospects for direct detection of dark energy by current and upcoming terrestrial dark matter direct detection experiments. If dark energy is driven by a new light degree of freedom coupled to matter and photons then dark energy quanta are predicted to be produced in the Sun. These quanta free-stream towards Earth where they can interact with Standard Model particles in the detection chambers of direct detection experiments, presenting the possibility that these experiments could be used to test dark energy. Screening mechanisms, which suppress fifth forces associated with new light particles, and are a necessary feature of many dark energy models, prevent production processes from occurring in the core of the Sun, and similarly, in the cores of red giant, horizontal branch, and white dwarf stars. Instead, the coupling of dark energy to photons leads to production in the strong magnetic field of the solar tachocline via a mechanism analogous to the Primakoff process. This then allows for detectable signals on Earth while evading the strong constraints that would typically result from stellar probes of new light particles. As an example, we examine whether the electron recoil excess recently reported by the XENON1T collaboration can be explained by chameleon-screened dark energy, and find that such a model is preferred over the background-only hypothesis at the $2.0\sigma$ level, in a large range of parameter space not excluded by stellar (or other) probes. This raises the tantalizing possibility that XENON1T may have achieved the first direct detection of dark energy. Finally, we study the prospects for confirming this scenario using planned future detectors such as XENONnT, PandaX-4T, and LUX-ZEPLIN.
|
high energy physics phenomenology
|
As the plug-in electric vehicle (PEV) market expands worldwide, PEV penetration has out-paced public PEV charging accessibility. In addition to charging infrastructure deployment, charging station operation is another key factor for improving charging service accessibility. In this paper, we propose a mathematical framework to optimally operate a PEV charging station, whose service capability is constrained by the number of available chargers. This mathematical framework specifically exploits human behavioral modeling to alleviate the "overstaying" issue that occurs when a vehicle is fully charged. Our behavioral model effectively captures human decision-making when humans are exposed to multiple charging product options, which differ in both price and quality-of-service. We reformulate the associated non-convex problem to a multi-convex problem via the Young-Fenchel transform. We then apply the Block Coordinate Descent algorithm to efficiently solve the optimization problem. Numerical experiments illustrate the performance of the proposed method. Simulation results show that a station operator who leverages optimally priced charging options could realize benefits in three ways: (i) net profits gains, (ii) overstay reduction, and (iii) increased quality-of-service.
|
electrical engineering and systems science
|
We present CosmoHub (https://cosmohub.pic.es), a web application based on Hadoop to perform interactive exploration and distribution of massive cosmological datasets. Recent Cosmology seeks to unveil the nature of both dark matter and dark energy by mapping the large-scale structure of the Universe, through the analysis of massive amounts of astronomical data, progressively increasing during the last (and future) decades with the digitization and automation of the experimental techniques. CosmoHub, hosted and developed at the Port d'Informaci\'o Cient\'ifica (PIC), provides support to a worldwide community of scientists, without requiring the end user to know any Structured Query Language (SQL). It is serving data of several large international collaborations such as the Euclid space mission, the Dark Energy Survey (DES), the Physics of the Accelerating Universe Survey (PAUS) and the Marenostrum Institut de Ci\`encies de l'Espai (MICE) numerical simulations. While originally developed as a PostgreSQL relational database web frontend, this work describes the current version of CosmoHub, built on top of Apache Hive, which facilitates scalable reading, writing and managing of huge datasets. As CosmoHub's datasets are seldom modified, Hive is a better fit. Over 60 TiB of catalogued information and $50 \times 10^9$ astronomical objects can be interactively explored using an integrated visualization tool which includes 1D histogram and 2D heatmap plots. In our current implementation, online exploration of datasets of $10^9$ objects can be done on a timescale of tens of seconds. Users can also download customized subsets of data in standard formats generated in a few minutes.
|
astrophysics
|
In this paper, we study the learning of safe policies in the setting of reinforcement learning problems. That is, we aim to control a Markov Decision Process (MDP) of which we do not know the transition probabilities, but we have access to sample trajectories through experience. We define safety as the agent remaining in a desired safe set with high probability during the operation time. We therefore consider a constrained MDP where the constraints are probabilistic. Since there is no straightforward way to optimize the policy with respect to the probabilistic constraint in a reinforcement learning framework, we propose an ergodic relaxation of the problem. The advantages of the proposed relaxation are threefold. (i) The safety guarantees are maintained in the case of episodic tasks and they are kept up to a given time horizon for continuing tasks. (ii) The constrained optimization problem, despite its non-convexity, has an arbitrarily small duality gap if the parametrization of the policy is rich enough. (iii) The gradients of the Lagrangian associated with the safe-learning problem can be easily computed using standard policy gradient results and stochastic approximation tools. Leveraging these advantages, we establish that primal-dual algorithms are able to find policies that are safe and optimal. We test the proposed approach in a navigation task in a continuous domain. The numerical results show that our algorithm is capable of dynamically adapting the policy to the environment and the required safety levels.
|
electrical engineering and systems science
|
A minimal-coupling quantum hydrodynamic model of spin-1/2 fermions at full spin polarization, corresponding to a nonlinear Schrodinger equation, is considered. The nonlinearity is primarily caused by the Fermi pressure, which provides an effective repulsion between fermions. However, there is an additional contribution from the short-range interaction appearing in the third order by the interaction radius, which modifies the pressure contribution. Solitons are considered in an infinite medium with no restriction on the amplitude of the wave. The Fermi pressure leads to a soliton in the form of an area of decreased concentration. However, the center of the solution, corresponding to the area of minimal concentration, has a nonzero concentration. Therefore, a grey soliton is found. The soliton exists if the speed of its propagation is below the Fermi velocity.
|
condensed matter
|
In tone reservation (TR) based OFDM systems, the peak to average power ratio (PAPR) reduction performance mainly depends on the selection of the peak reduction tone (PRT) set and the optimal target clipping level. Finding the optimal PRT set requires an exhaustive search of all combinations of possible PRT sets, which is a nondeterministic polynomial-time (NP-hard) problem, and this search is infeasible for the number of tones used in practical systems. The existing selection methods, such as the consecutive PRT set, equally spaced PRT set and random PRT set, perform poorly compared to the optimal PRT set or incur high computational complexity. In this paper, an efficient scheme based on a genetic algorithm (GA) with lower computational complexity is proposed for searching for a nearly optimal PRT set. While TR-based clipping is simple and attractive for practical implementation, determining the optimal target clipping level is difficult. To overcome this problem, we propose an adaptive clipping control algorithm. Simulation results show that our proposed algorithms efficiently obtain a nearly optimal PRT set and good PAPR reductions.
|
electrical engineering and systems science
|