The observation of cosmic neutrinos up to 2 PeV is used to put bounds on the energy scale of Lorentz invariance violation through the loss of energy due to the production of $e^+e^-$ pairs in the propagation of superluminal neutrinos. A model to study this effect, which makes it possible to understand the results of numerical simulations qualitatively, is presented.
high energy physics phenomenology
A new relaxion mechanism is proposed in which a small electroweak scale is preferentially selected earlier than a larger one due to a potential instability; this differs from previously proposed stopping mechanisms based either on Hubble friction from an increasing periodic barrier or on thermal friction from gauge boson production. A sub-Planckian field excursion of the axion can be achieved without violating the bound on the $e$-folding number from a quantum gravity perspective, and our relaxion can be identified as a QCD axion, preserving the Peccei-Quinn solution to the strong $CP$ problem as well as making up all the cold dark matter in our current Universe.
high energy physics phenomenology
Resonant radiofrequency cavities enable exquisite time-energy control of electron beams when synchronized with laser-driven photoemission. We present a lossless monochromator design that exploits this fine control in the one-electron-per-pulse regime. The theoretically achievable maximum beam current on target is orders of magnitude greater than that of state-of-the-art monochromators for the same spatial and energy resolution. This improvement is the result of monochromating in the time domain, unconstrained by the transverse brightness of the electron source. We show analytically and confirm numerically that cavity parameters chosen to minimize energy spread perform the additional function of undoing the appreciable effect of chromatic aberration in the upstream optics. We argue that our design has significant applications in both ultrafast and non-time-resolved microscopy, provided photoelectron sources of sufficiently small size and laser sources of sufficiently high repetition rate. In simulations, our design achieves more than two orders of magnitude reduction in beam energy spread, down to single-digit meV. By overcoming the minimum probe-size limit that chromatic aberration imposes, our design clears a path for high-current, high-resolution electron beam applications at primary energies from single to hundreds of keV.
physics
We prove a global existence result of a unique strong solution in $\dot H^{5/2} \cap \dot H^{3/2}$ with small $\dot H^{3/2}$ semi-norm for the 2D Muskat problem, hence allowing the interface to have arbitrarily large finite slopes and finite energy (thanks to the $L^{2}$ maximum principle). The proof is based on the use of a new formulation of the Muskat equation that involves oscillatory terms. Then, a careful use of interpolation inequalities in homogeneous Besov spaces allows us to close the {\emph{a priori}} estimates.
mathematics
The successful employment of high-dimensional quantum correlations and their integration into telecommunication infrastructures is vital in cutting-edge quantum technologies for increasing robustness and key generation rate. Position-momentum Einstein-Podolsky-Rosen (EPR) entanglement of photon pairs is a promising resource of such high-dimensional quantum correlations. Here, we experimentally certify EPR correlations of photon pairs generated by spontaneous parametric down-conversion (SPDC) in a nonlinear crystal with type-0 phase-matching at telecom wavelength for the first time. To experimentally observe EPR entanglement, we perform scanning measurements in the near- and far-field planes of the signal and idler modes. We certify EPR correlations with high statistical significance of up to 45 standard deviations. Furthermore, we determine the entanglement of formation of our source to be greater than one, which gives evidence for the high-dimensional entanglement between the photons. Operating at telecom wavelengths around 1550 nm, our source is compatible with today's deployed telecommunication infrastructure, thus paving the way for integrating sources of high-dimensional entanglement into quantum-communication infrastructures.
quantum physics
A nonlinear small-strain elastic theory is constructed from a systematic expansion in Biot strains, truncated at quadratic order. The primary motivation is the need for a clean separation between stretching and bending energies for shells, which appears to arise only from reduction of a bulk energy of this type. An approximation of isotropic invariants, bypassing the solution of a quartic equation or computation of tensor square roots, allows stretches, rotations, stresses, and balance laws to be written in terms of derivatives of position. Two-field formulations are also presented. Extensions to anisotropic theories are briefly discussed.
condensed matter
We develop a framework of coupled transport equations for open heavy flavor and quarkonium states, in order to describe their transport inside the quark-gluon plasma. Our framework is capable of studying simultaneously both open and hidden heavy flavor observables in heavy-ion collision experiments and can account for both uncorrelated and correlated recombination. Our recombination implementation depends on real-time open heavy quark and antiquark distributions. We carry out consistency tests to show how the interplay among open heavy flavor transport, quarkonium dissociation and recombination drives the system to equilibrium. We then apply our framework to study bottomonium production in heavy-ion collisions. We include $\Upsilon(1S)$, $\Upsilon(2S)$, $\Upsilon(3S)$, $\chi_b(1P)$ and $\chi_b(2P)$ in the framework and take feed-down contributions during the hadronic gas stage into account. Cold nuclear matter effects are included by using nuclear parton distribution functions for the initial primordial heavy flavor production. A calibrated $2+1$ dimensional viscous hydrodynamics is used to describe the bulk QCD medium. We calculate both the nuclear modification factor $R_{\mathrm{AA}}$ of all bottomonium states and the azimuthal angular anisotropy coefficient $v_2$ of the $\Upsilon(1S)$ state and find that our results agree reasonably well with experimental measurements. Our calculations indicate that correlated cross-talk recombination is an important production mechanism for bottomonium in current heavy-ion experiments. The importance of correlated recombination can be tested experimentally by measuring the ratio of $R_{\mathrm{AA}}(\chi_b(1P))$ and $R_{\mathrm{AA}}(\Upsilon(2S))$.
high energy physics phenomenology
Microscopic images from multiple modalities can produce plentiful experimental information. In practice, however, biological or physical constraints under a given observation period may prevent researchers from acquiring enough microscopic scans. Recent studies demonstrate that image synthesis is one of the popular approaches to relax such constraints. Nonetheless, most existing synthesis approaches only translate images from the source domain to the target domain without solid geometric associations. To address this challenge, we propose an innovative model architecture, BANIS, to synthesize diversified microscopic images from multi-source domains with distinct geometric features. The experimental outcomes indicate that BANIS successfully synthesizes favorable image pairs on C. elegans microscopy embryonic images. To the best of our knowledge, BANIS is the first application to synthesize microscopic images that associate distinct spatial geometric features from multi-source domains.
electrical engineering and systems science
XEFT is a low-energy effective field theory for charm mesons and pions that provides a systematically improvable description of the $X(3872)$ resonance. To simplify calculations beyond leading order, we introduce a new formulation of XEFT with a dynamical field for a pair of charm mesons in the resonant channel. We simplify the renormalization of XEFT by introducing a new renormalization scheme that involves the subtraction of amplitudes at the complex $D^{*0} \bar D^0$ threshold. The new formulation and the new renormalization scheme are illustrated by calculating the complex pole energy of $X$ and the $D^{*0} \bar D^0$ scattering amplitude to next-to-leading order using Galilean-invariant XEFT.
high energy physics phenomenology
We describe a theory of the photo-thermoelectric properties of a semiconductor, which include photo-conductivity, the photo-Seebeck coefficient, and the photo-Hall effect. We demonstrate that these properties provide a powerful tool for the study of carrier transport in semiconductors. Photo-carrier generation is a complicated process that often prohibits quantitative analysis, since the species or numbers of the generated carriers are not known; using bulk samples seems even less promising, as the photo-carriers only affect a thin layer. Our method allows researchers to bypass these difficulties and to determine, from measured properties of a bulk sample alone, both the electron and hole mobilities as well as the ratio between electrons and holes. We conclude by providing initial experimental verification of our theory using two distinctly different semiconductors.
condensed matter
We consider and resolve the gap problem for almost quaternion-Hermitian structures, i.e. we determine the maximal and submaximal symmetry dimensions, both for Lie algebras and Lie groups, in the class of almost quaternion-Hermitian manifolds. We classify all structures with such symmetry dimensions. Geometric properties of the submaximally symmetric spaces are studied, in particular we identify locally conformally quaternion-K\"ahler structures as well as quaternion-K\"ahler with torsion.
mathematics
Although gait recognition and person re-identification research has made considerable progress, identification accuracy remains insufficient in some specific situations, for example, when people carry bags or change coats. To alleviate these situations, we propose a simple but effective Consecutive Horizontal Dropout (CHD) method, applied to human feature extraction in a deep learning network to avoid overfitting. With CHD, we strengthen the robustness of the deep learning network for cross-view gait recognition and person re-identification. The experiments illustrate that rank-1 accuracy on the cross-view gait recognition task increases by about 10%, from 68.0% to 78.201%, and by 8%, from 83.545% to 91.364%, on the person re-identification task under the coat- or jacket-wearing condition. In addition, 100% accuracy on the NM condition was obtained for the first time with CHD. These accuracies are state-of-the-art on the CASIA-B benchmark.
computer science
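The abstract does not spell out how CHD is implemented. Purely as an illustrative guess from the name, a dropout variant that zeroes a band of consecutive horizontal rows of a (C, H, W) feature map might look like the following sketch; the function name, band width, and drop probability are all hypothetical:

```python
import numpy as np

def consecutive_horizontal_dropout(feat, band=4, p=0.5, rng=None):
    """Zero a randomly placed band of consecutive horizontal rows
    in a (C, H, W) feature map with probability p (illustrative only)."""
    rng = np.random.default_rng(rng)
    if rng.random() < p:
        h = feat.shape[1]
        start = int(rng.integers(0, max(h - band, 1)))
        feat = feat.copy()
        feat[:, start:start + band, :] = 0.0
    return feat

# Example: always drop a 4-row band from a random 64x16x8 feature map.
x = np.random.rand(64, 16, 8)
out = consecutive_horizontal_dropout(x, p=1.0, rng=0)
print(np.count_nonzero(out == 0.0))  # 64 * 4 * 8 = 2048 zeroed entries
```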
We prove that if $\tau$ is a large positive number, then the atomic Goldberg-type space $\mathfrak{h}^1(N)$ and the space $\mathfrak{h}_{\mathcal R_\tau}^1(N)$ of all integrable functions on $N$ whose local Riesz transform $\mathcal R_\tau$ is integrable are the same space on any complete noncompact Riemannian manifold $N$ with Ricci curvature bounded from below and positive injectivity radius. We also relate $\mathfrak{h}^1(N)$ to a space of harmonic functions on the slice $N\times (0,\delta)$ for $\delta>0$ small enough.
mathematics
We study the gas kinematics traced by the 21-cm emission of a sample of six HI$-$rich low surface brightness galaxies classified as ultra-diffuse galaxies (UDGs). Using the 3D kinematic modelling code $\mathrm{^{3D}}$Barolo we derive robust circular velocities, revealing a startling feature: HI$-$rich UDGs are clear outliers from the baryonic Tully-Fisher relation, with circular velocities much lower than galaxies with similar baryonic mass. Notably, the baryon fraction of our UDG sample is consistent with the cosmological value: these UDGs are compatible with having no "missing baryons" within their virial radii. Moreover, the gravitational potential provided by the baryons is sufficient to account for the amplitude of the rotation curve out to the outermost measured point, contrary to other galaxies with similar circular velocities. We speculate that any formation scenario for these objects will require very inefficient feedback and a broad diversity in their inner dark matter content.
astrophysics
It has long been known that the covariant formulation of quantum electrodynamics conflicts with the local description of states in the charged sector. Some of the solutions to this problem amount to modifications of the subsidiary conditions below some arbitrarily low photon frequency. Such infrared modified theories have been shown to be equivalent to standard Maxwell electrodynamics with an additional classical electromagnetic current induced by the quantum charges. The induced current only has support for very small frequencies and cancels the effects of the physical charges on large scales. In this work we explore the possibility that this de-electrification effect could allow for the existence of isotropic charged cosmologies, thus evading the stringent limits on the electric charge asymmetry of the universe. We consider a simple model of infrared-modified scalar electrodynamics in the cosmological context and find that the charged sector generates a new contribution to the energy-momentum tensor whose dominant contribution at late times is a cosmological constant-like term. If the charge asymmetry was generated during inflation, the limits on the asymmetry parameter in order not to produce a too-large cosmological constant are very stringent $\eta_Q <10^{-131}- 10^{-144}$ for a number of e-folds $N=50-60$ in typical models. However if the charge imbalance is produced after inflation, the limits are relaxed in such a way that $\eta_Q<10^{-43}(100 \,\mbox{GeV}/T_Q)$, with $T_Q$ the temperature at which the asymmetry was generated. If the charge asymmetry has ever existed and the associated electromagnetic fields vanish in the asymptotic future, the limit can be further reduced to $\eta_Q<10^{-28}$.
high energy physics phenomenology
Dielectron production in reactions $\pi^- p \rightarrow n e^+e^-$ and $\pi^- p \rightarrow n e^+e^- \gamma$ at energies less than 1 GeV is studied assuming electron-positron pair production to occur in the virtual time-like photon splitting process. Theoretical predictions of the effective mass distribution of dielectrons and their angular dependence are presented. Extraction of the electromagnetic form factor of baryon transition in the time-like region from future experiments of the HADES Collaboration is discussed.
high energy physics phenomenology
We have studied complexes of gold atoms and imidazole (C$_3$N$_2$H$_4$, abbreviated Im) produced in helium nanodroplets. Following the ionization of the doped droplets we detect a broad range of different Au$_m$Im$_n^+$ complexes; however, we find that for specific values of $m$ certain $n$ are "magic" and thus particularly abundant. Our density functional theory calculations indicate that these abundant cluster sizes are partially the result of particularly stable complexes, e.g. AuIm$_2^+$, and partially due to a transition in fragmentation patterns from the loss of neutral imidazole molecules for large systems to the loss of neutral gold atoms for smaller systems.
physics
We describe an implementation of a subtraction scheme in the nonrelativistic-QCD treatment of heavy-quarkonium production at next-to-leading-order in the strong-coupling constant, covering $S$- and $P$-wave bound states. It is based on the dipole subtraction in the massless version by Catani and Seymour and its extension to massive quarks by Phaf and Weinzierl. Important additions include the treatment of heavy-quark bound states, in particular due to the more complicated infrared-divergence structure in the case of $P$-wave states.
high energy physics phenomenology
Introduction: estimation of confidence intervals (CIs) of binomial proportions has been reviewed more than once, but the directional interpretation, distinguishing overestimation from underestimation, has been neglected, while sample-size and theoretical-proportion variances from experiment to experiment have not been formally taken into account. Herein, we define and apply new evaluation criteria, then give recommendations for the practical use of these CIs. Materials & methods: Google Scholar was used for bibliographic research. Evaluation criteria were (i) one-sided conditional errors, (ii) one-sided local average errors assuming a random theoretical proportion and (iii) expected half-widths of CIs. Results: Wald's CI did not control any of the risks, even when the expected number of successes reached 32. The likelihood ratio CI had a better balance than the logistic Wald CI. The Clopper-Pearson mid-P CI controlled one-sided local average errors well, whereas the simple Clopper-Pearson CI was strictly conservative on both one-sided conditional errors. The percentile and basic bootstrap CIs had the same bias order as Wald's CI, whereas the studentized CIs and BCa, modified for discrete bootstrap distributions, were less biased but not as efficient as the parametric methods. The half-widths of CIs mirrored local average errors. Conclusion: we recommend using the Clopper-Pearson mid-P CI for the estimation of a proportion, except for observed-theoretical proportion comparison under controlled experimental conditions, in which case the Clopper-Pearson CI may be better.
statistics
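For readers who want to reproduce the two recommended intervals, here is a minimal sketch (not the authors' code) of the Clopper-Pearson and mid-P intervals using SciPy; the mid-P limits are obtained by root-finding on the standard mid-P tail equations:

```python
import numpy as np
from scipy.stats import beta, binom
from scipy.optimize import brentq

def clopper_pearson(k, n, alpha=0.05):
    """Exact (strictly conservative) Clopper-Pearson interval."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def midp_ci(k, n, alpha=0.05):
    """Mid-P interval: tail probabilities count only half of P(X = k)."""
    def upper_eq(p):  # P(X < k) + 0.5*P(X = k) = alpha/2 at the upper limit
        return binom.cdf(k - 1, n, p) + 0.5 * binom.pmf(k, n, p) - alpha / 2
    def lower_eq(p):  # P(X > k) + 0.5*P(X = k) = alpha/2 at the lower limit
        return binom.sf(k, n, p) + 0.5 * binom.pmf(k, n, p) - alpha / 2
    lo = 0.0 if k == 0 else brentq(lower_eq, 1e-12, 1 - 1e-12)
    hi = 1.0 if k == n else brentq(upper_eq, 1e-12, 1 - 1e-12)
    return lo, hi

print(clopper_pearson(3, 20))  # conservative interval
print(midp_ci(3, 20))          # slightly narrower mid-P interval
```

As expected, the mid-P interval comes out slightly narrower than the strictly conservative Clopper-Pearson interval.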
A comprehensive magnetotransport study including resistivity ($\rho_{xx}$) at various fields, isothermal magnetoresistance and Hall resistivity ($\rho_{xy}$) has been carried out at different temperatures on the Co$_{2}$TiAl Heusler alloy. The Co$_{2}$TiAl alloy shows a paramagnetic (PM) to ferromagnetic (FM) transition below the Curie temperature (T$_{C}$) $\sim$ 125 K. In the FM region, the resistivity and magnetoresistance reveal spin-flip electron-magnon scattering, and the Hall resistivity unveils an anomalous Hall resistivity ($\rho_{xy}^{AH}$). Scaling of the anomalous Hall resistivity with resistivity establishes that an extrinsic scattering process is responsible for the anomalous Hall resistivity; skew scattering, however, is the dominant mechanism compared to the side-jump contribution. A one-to-one correspondence between magnetoresistance and the side-jump contribution to the anomalous Hall resistivity verifies that electron-magnon scattering is the source of the side-jump contribution to the anomalous Hall resistivity.
condensed matter
A multiple conjugation quandle is an algebra whose axioms are motivated from handlebody-knot theory. Any linear extension of a multiple conjugation quandle can be described by using a pair of maps called an MCQ Alexander pair. In this paper, we show that any affine extension of a multiple conjugation quandle can be described by using a quadruple of maps, called an augmented MCQ Alexander pair.
mathematics
We focus on BPS solutions of the gauged O(3) Sigma model, due to Schroers, and use these ideas to study the geometry of the moduli space. The model has an asymmetry parameter $\tau$ breaking the symmetry of vortices and antivortices in the field equations. It is shown that the moduli space is incomplete both on the Euclidean plane and on a compact surface. On the Euclidean plane, the L2 metric on the moduli space is approximated for well separated cores, and results consistent with similar approximations for the Ginzburg-Landau functional are found. The scattering angle of approaching vortex-antivortex pairs of different effective mass is computed numerically and is shown to be different from the well known scattering of approaching Ginzburg-Landau vortices. The volume of the moduli space for general $\tau$ is computed for the case of the round sphere and flat tori. The model on a compact surface is deformed by introducing a neutral field and a Chern-Simons term. A lower bound for the Chern-Simons constant $\kappa$ such that the extended model admits a solution is shown to exist, and if the total numbers of vortices and antivortices are different, the existence of an upper bound is also shown. Existence of multiple solutions to the governing elliptic problem is established on a compact surface, as well as the existence of two limiting behaviours as $\kappa \to 0$. A localization formula for the deformation is found for both Ginzburg-Landau and O(3) Sigma model vortices, and it is shown that it can be extended to the coalescence set. This rules out the possibility that this is Kim-Lee's term in the case of Ginzburg-Landau vortices; moreover, the deformation term is compared on the plane with the Ricci form of the surface and shown to be different, hence also ruling out that this is the term proposed by Collie-Tong to model vortex dynamics with Chern-Simons interaction.
high energy physics theory
This article presents an algorithm for reducing measurement uncertainty of one quantity when given measurements of two quantities with correlated noise. The algorithm assumes that the measurement uncertainty in both physical quantities follows a Gaussian distribution and provides concrete justification for these assumptions. When applied to temperature-compensated humidity sensors, it provides reduced uncertainty in humidity estimates from correlated temperature and humidity measurements. In an experimental evaluation, the algorithm achieves an average uncertainty reduction of 10.3%. The algorithm incurs an execution time overhead of 5.3% when compared to the minimum algorithm required to measure and calculate the uncertainty. Detailed instruction-level emulation of a C-language implementation compiled to the RISC-V architecture shows that the uncertainty reduction program required 0.05% more instructions per iteration than the minimum operations required to calculate the uncertainty.
electrical engineering and systems science
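The paper's exact algorithm is not given in the abstract; the following is a minimal sketch of the textbook bivariate-Gaussian conditioning that underlies this kind of uncertainty reduction. All variable names and values are illustrative:

```python
import numpy as np

def conditioned_humidity(h, sigma_h, t, sigma_t, t_obs, rho):
    """Update a Gaussian humidity estimate (h, sigma_h) using a
    correlated temperature observation t_obs, given the correlation
    coefficient rho between the two noise processes."""
    h_post = h + rho * (sigma_h / sigma_t) * (t_obs - t)
    sigma_post = sigma_h * np.sqrt(1.0 - rho**2)
    return h_post, sigma_post

# With |rho| ~ 0.45 the standard deviation shrinks by roughly 10%,
# the same order as the 10.3% average reduction quoted above.
print(conditioned_humidity(40.0, 2.0, 25.0, 0.5, 25.3, 0.45))
```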
Drift pairs are an unusual type of fine structure sometimes observed in dynamic spectra of solar radio emission. They appear as two identical short narrowband drifting stripes separated in time; both positive and negative frequency drifts are observed. Using the Low Frequency Array (LOFAR), we report unique observations of a cluster of drift pair bursts in the frequency range of 30-70 MHz made on 12 July 2017. Spectral imaging capabilities of the instrument have allowed us for the first time to resolve the temporal and frequency evolution of the source locations and sizes at a fixed frequency and along the drifting pair components. Sources of the two components of a drift pair have been imaged and found to propagate in the same direction along nearly the same trajectories. Motion of the second component source is delayed in time with respect to that of the first one. The source trajectories can be complicated and non-radial; positive and negative frequency drifts correspond to opposite propagation directions. The drift pair bursts with positive and negative frequency drifts, as well as the associated broadband type-III-like bursts, are produced in the same regions. The visible source velocities vary from zero to a few $10^4$ (up to ${\sim 10^5}$) km/s, which often exceeds the velocities inferred from the drift rate ($\sim 10^4$ km/s). The visible source sizes are of about $10'-18'$; they are more compact than typical type III sources at the same frequencies. The existing models of drift pair bursts cannot adequately explain the observed features. We discuss the key issues that need to be addressed, in particular the anisotropic scattering of the radio waves. The broadband bursts observed simultaneously with the drift pairs differ in some aspects from common type III bursts and may represent a separate type of emission.
astrophysics
Similarity-driven multi-view linear reconstruction (SiMLR) is an algorithm that exploits inter-modality relationships to transform large scientific datasets into smaller, better-powered and more interpretable low-dimensional spaces. SiMLR contributes a novel objective function for identifying joint signal, regularization based on sparse matrices representing prior within-modality relationships, and an implementation that permits application to joint reduction of large data matrices, each of which may have millions of entries. We demonstrate that SiMLR outperforms closely related methods on supervised learning problems in simulation data, a multi-omics cancer survival prediction dataset and multiple modality neuroimaging datasets. Taken together, this collection of results shows that SiMLR may be applied with default parameters to joint signal estimation from disparate modalities and may yield practically useful results in a variety of application domains.
statistics
In recent years, as some of the issues encountered at muon colliders have been clarified, muon colliders have become more feasible for the high-energy physics community. For this reason, we study single production of the excited muon at muon colliders via contact interactions. We assume that the excited muon is produced via contact interactions and decays to a photon and a muon through the gauge interaction. Signal and background analyses are performed at muon anti-muon collider options with 6 TeV, 14 TeV, and 100 TeV center-of-mass energies. Attainable mass and compositeness scale limits are calculated for the excited muon at these colliders. From these calculations, we conclude that muon anti-muon colliders would be an excellent option for excited muon searches.
high energy physics phenomenology
We formulate a model of noncommutative four-dimensional gravity on a covariant fuzzy space based on SO(1,4), that is, the fuzzy version of $\text{dS}_4$. The latter requires the employment of a wider symmetry group, SO(1,5), for reasons of covariance. Following the lines of formulating four-dimensional gravity as a gauge theory of the Poincar\'e group, spontaneously broken to the Lorentz group, we attempt to construct a four-dimensional gravitational model on the fuzzy de Sitter spacetime. In turn, we first consider the SO(1,4) subgroup of the SO(1,5) algebra, to which we are led because we want to gauge the isometry part of the full symmetry. Then, the construction of a gauge theory on such a noncommutative space directs us to use an extension of the gauge group, SO(1,5)$\times$U(1), and fix its representation. Moreover, a 2-form dynamic gauge field is included in the theory for reasons of covariance of the transformation of the field strength tensor. Finally, the gauge theory is considered to be spontaneously broken to the Lorentz group extended by a U(1), i.e. SO(1,3)$\times$U(1). The latter defines the four-dimensional noncommutative gravity action, which can lead to equations of motion, whereas the breaking induces the imposition of constraints that lead to expressions relating the gauge fields. It should be noted that we use the Euclidean signature for the formulation of the above programme.
high energy physics theory
We show that the data on $p_{T}$ spectra of $\Omega^{-}$ and $\phi$ at midrapidity in inelastic events in $pp$ collisions at $\sqrt{s}=$ 13 TeV exhibit a constituent quark number scaling property, which is a clear signal of the quark combination mechanism at hadronization. We use a quark combination model under the equal velocity combination approximation to systematically study the production of identified hadrons in $pp$ collisions at $\sqrt{s}$= 13 TeV. The midrapidity data on $p_{T}$ spectra of proton, $\Lambda$, $\Xi^{-}$, $\Omega^{-}$, $\phi$ and $K^{*}$ in inelastic events are simultaneously well fitted by the model. The data on the multiplicity dependence of the yields of these hadrons are also well understood. The strong $p_{T}$ dependence of the $p/\phi$ ratio data is well explained by the model, which further suggests that the production of two hadrons with similar masses is determined by their quark contents at hadronization. The $p_{T}$ spectra of strange hadrons at midrapidity in different multiplicity classes in $pp$ collisions at $\sqrt{s}=$ 13 TeV are predicted to further test the model in the future. The midrapidity $p_{T}$ spectra of soft ($p_T<2$ GeV/c) strange quarks and up/down quarks at hadronization in $pp$ collisions at $\sqrt{s}=$ 13 TeV are extracted.
high energy physics phenomenology
HTTP/2 video streaming has attracted a lot of attention in the development of multimedia technologies over the last few years. In HTTP/2, the server push mechanism allows the server to deliver more video segments to the client within a single request in order to deal with the request explosion problem. As a result, recent research efforts have focused on utilizing this feature to enhance the streaming experience while reducing request-related overhead. However, current works only optimize the performance of a single client, without necessary concern for possible influences on other clients in the same network. When multiple streaming clients compete for shared bandwidth in HTTP/1.1, they are likely to suffer from unfairness, defined as inequality in their bitrate selections. For HTTP/1.1, existing works have proven that network-assisted solutions are effective in solving the unfairness problem; however, the feasibility of utilizing such an approach for the HTTP/2 server push has not been investigated. Therefore, in this paper, a novel proxy-based framework is proposed to overcome the unfairness problem in adaptive streaming over HTTP/2 with server push. Experimental results confirm that the proposed framework outperforms alternatives in ensuring fairness, helping the clients avoid rebuffering events and lowering the amplitude of bitrate degradation, while maintaining the server push mechanism.
computer science
It has been shown that the spectrum of quantum gravity contains at least two new modes in addition to the massless graviton: a massive spin-0 and a massive spin-2. We calculate their power spectrum during inflation and we argue that they could leave an imprint in the cosmic microwave background should their masses be below the inflationary scale.
high energy physics theory
Demand response (DR) programs are an interesting way to attract consumer participation in order to improve electricity consumption patterns. DR programs motivate customers to change their consumption patterns in response to price changes. This can be done by paying incentives or imposing penalties either when wholesale market prices are high or when network reliability is at risk. The overall purpose of implementing DR programs is to improve network reliability and reduce costs. Successful implementation of these programs requires prerequisites without which there is no guarantee of success. Different sciences have proposed various solutions for creating optimal power consumption behavior in customers, such as solutions based on technical and economic aspects. Although each of these solutions may be efficient and effective, none of them covers all aspects of the problem. The results of studies conducted by many researchers show that in addition to technical and economic issues, social, cultural, and behavioral factors are also very important. Therefore, in this paper, cultural, social, and behavioral aspects are investigated and analyzed as one of the vital requirements for better implementation of DR.
physics
Gudder, in a recent paper, defined a candidate entanglement measure called the entanglement number. The entanglement number is first defined on pure states and then extended to mixed states by the convex roof construction. In Gudder's article it was left as an open problem to show that Optimal Pure State Ensembles (OPSE) exist for the convex roof extension of the entanglement number from pure to mixed states. We answer Gudder's question in the affirmative, and therefore we obtain that the entanglement number vanishes only on the separable states. More generally, we show that OPSE exist for the convex roof extension of any function that is norm continuous on the pure states of a finite dimensional Hilbert space. Further, we prove that the entanglement number is an LOCC monotone (and thus an entanglement measure) by using a criterion that was developed by Vidal in 2000. We present a simplified proof of Vidal's result, which moreover uses an interesting point of view based on tree representations of LOCC communications. Lastly, we generalize Gudder's entanglement number by producing a monotonic family of entanglement measures which converge in a natural way to the entropy of entanglement.
quantum physics
This paper deals with some analytic aspects of the GG system introduced by I. M. Gelfand and M. I. Graev: we compute the dimension of the solution space of the GG system over the field of functions meromorphic and periodic with respect to a lattice. We describe the monodromy invariant subspace of the solution space. We give a connection formula between a pair of bases consisting of $\Gamma$-series solutions of the GG system associated to a pair of regular triangulations adjacent to each other in the secondary fan.
mathematics
Solving Bayesian inference problems approximately with variational approaches can provide fast and accurate results. Capturing correlation within the approximation requires an explicit parametrization, which intrinsically limits the approach either to moderately dimensional problems or to the strongly simplifying mean-field approximation. We propose Metric Gaussian Variational Inference (MGVI) as a method that goes beyond mean-field. Here correlations between all model parameters are taken into account, while still scaling linearly in computational time and memory. With this method we achieve higher accuracy and in many cases a significant speedup compared to traditional methods. MGVI is an iterative method that performs a series of Gaussian approximations to the posterior. We alternate between approximating the covariance with the inverse Fisher information metric evaluated at an intermediate mean estimate and optimizing the KL-divergence for the given covariance with respect to the mean. This procedure is iterated until the uncertainty estimate is self-consistent with the mean parameter. We achieve linear scaling by never storing the covariance explicitly. Instead, we draw samples from the approximating distribution relying on an implicit representation and numerical schemes to approximately solve linear equations. Those samples are used to approximate the KL-divergence and its gradient. The use of natural gradient descent allows for rapid convergence. Formulating the Bayesian model in standardized coordinates makes MGVI applicable to any inference problem with continuous parameters. We demonstrate the high accuracy of MGVI by comparing it to HMC and its fast convergence relative to other established methods in several examples. We investigate real-data applications, as well as synthetic examples of varying size and complexity and up to a million model parameters.
statistics
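As a rough illustration of the alternating scheme described above (not the authors' implementation, which never forms dense matrices), consider a toy standardized linear-Gaussian model where the Fisher metric and the natural-gradient step can be written explicitly:

```python
import numpy as np

# Toy MGVI-like loop: data d = R x + n, prior x ~ N(0, I),
# noise n ~ N(0, sigma_n^2 I). All names are illustrative.
rng = np.random.default_rng(0)
dim, ndata, sigma_n = 20, 50, 0.5
R = rng.normal(size=(ndata, dim))
d = R @ rng.normal(size=dim) + sigma_n * rng.normal(size=ndata)

mean = np.zeros(dim)
for _ in range(10):
    # Fisher metric at the current mean: prior piece + likelihood piece.
    M = np.eye(dim) + R.T @ R / sigma_n**2
    # Samples from the Gaussian approximation N(mean, M^{-1});
    # real MGVI draws them via implicit solves instead of inverting M.
    L = np.linalg.cholesky(np.linalg.inv(M))
    samples = mean + (L @ rng.normal(size=(dim, 8))).T
    # Sample-averaged gradient of the KL w.r.t. the mean
    # (here: the gradient of the negative log-posterior).
    grad = np.mean([x - R.T @ (d - R @ x) / sigma_n**2 for x in samples], axis=0)
    mean = mean - np.linalg.solve(M, grad)  # natural-gradient step

exact = np.linalg.solve(M, R.T @ d / sigma_n**2)  # analytic posterior mean
print("posterior-mean error:", np.linalg.norm(mean - exact))
```

For this linear model the loop converges (up to sampling noise) to the exact Gaussian posterior in essentially one natural-gradient step; the nonlinear case is where the iteration earns its keep.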
We describe a class of holographic models that may describe the physics of certain four-dimensional big-bang / big-crunch cosmologies. The construction involves a pair of 3D Euclidean holographic CFTs each on a homogeneous and isotropic space $M$ coupled at either end of an interval ${\cal I}$ to a Euclidean 4D CFT on $M \times {\cal I}$ with many fewer local degrees of freedom. We argue that in some cases, when the size of $M$ is much greater than the length of ${\cal I}$, the theory flows to a gapped / confining three-dimensional field theory on $M$ in the infrared, and this is reflected in the dual description by the asymptotically AdS spacetimes dual to the two 3D CFTs joining up in the IR to give a Euclidean wormhole. The Euclidean construction can be reinterpreted as generating a state of Lorentzian 4D CFT on $M \times {\rm time}$ whose dual includes the physics of a big-bang / big-crunch cosmology. When $M$ is $\mathbb{R}^3$, we can alternatively analytically continue one of the $\mathbb{R}^3$ directions to get an eternally traversable four-dimensional planar wormhole. We suggest explicit microscopic examples where the 4D CFT is ${\cal N}=4$ SYM theory and the 3D CFTs are superconformal field theories with opposite orientation. In this case, the two geometries dual to the pair of 3D SCFTs can be understood as a geometrical version of a brane-antibrane pair, and the tendency of the geometries to connect up is related to the standard instability of brane-antibrane systems.
high energy physics theory
The explosion of IoT and wearable devices has led to rising attention towards energy harvesting as a source for powering these systems. In this context, many applications cannot afford the presence of a battery because of size, weight and cost issues. Therefore, due to the intermittent nature of ambient energy sources, these systems must be able to save and restore their state in order to guarantee progress across power interruptions. In this work, we propose a specialized backup/restore controller that dynamically tracks the memory accesses during the execution of the program. The controller then commits the changes to a snapshot in a Non-Volatile Memory (NVM) when a power failure is detected. Our approach does not require complex hybrid memories and can be implemented with standard components. Results on a set of benchmarks show an average $8\times$ reduction in backup size. Thanks to our dedicated controller, the backup time is further reduced by more than $100\times$, with an area and power overhead of only 0.4\% and 0.8\%, respectively, w.r.t. a low-end IoT node.
computer science
The extensive computer-aided search applied in [arXiv:2010.10519] to find the minimal charge sourced by the fluxes that stabilize all the (flux-stabilizable) moduli of a smooth K3xK3 compactification uses differential evolutionary algorithms supplemented by local searches. We present these algorithms in detail and show that they can also solve our minimization problem for other lattices. Our results support the Tadpole Conjecture: the minimal charge grows linearly with the dimension of the lattice and, for K3xK3, this charge is larger than allowed by tadpole cancellation. Even if we are faced with an NP-hard lattice-reduction problem at every step in the minimization process, we find that differential evolution is a good technique for identifying the regions of the landscape where the fluxes with the lowest tadpole can be found. We then design a "Spider Algorithm," which is very efficient at exploring these regions and producing large numbers of minimal-tadpole configurations.
high energy physics theory
We show that two popular selective inference procedures, namely data carving (Fithian et al., 2017) and selection with a randomized response (Tian et al., 2018b), when combined with the polyhedral method (Lee et al., 2016), result in confidence intervals whose length is bounded. This contrasts results for confidence intervals based on the polyhedral method alone, whose expected length is typically infinite (Kivaranovic and Leeb, 2020). Moreover, we show that these two procedures always dominate corresponding sample-splitting methods in terms of interval length.
statistics
The fast forward scheme of adiabatic quantum dynamics is applied to finite regular spin clusters with various geometries, and the nature of the driving interactions is elucidated. The fast forward is the quasi-adiabatic dynamics guaranteed by regularization terms added to the reference Hamiltonian, followed by a rescaling of time with use of a large scaling factor. With the help of the regularization terms consisting of pair-wise and 3-body interactions, we apply the proposed formula (Phys. Rev. A 96, 052106 (2017)) to the regular triangle and open linear chain for N = 3 spin systems, and to the triangular pyramid, square, primary star graph and open linear chain for N = 4 spin systems. The geometry-induced symmetry greatly decreases the rank of the coefficient matrix of the linear algebraic equation for the regularization terms. Choosing a transverse Ising Hamiltonian as a reference, we find: (1) for N = 3 spin clusters, the driving interaction consists of only the geometry-dependent pair-wise interactions and there is no need for the 3-body interaction; (2) for N = 4 spin clusters, the geometry-dependent pair-wise interactions again constitute the major part of the driving interaction, whereas the universal 3-body interaction free from the geometry is necessary but plays a subsidiary role. Our scheme predicts the practical driving interaction in accelerating the adiabatic quantum dynamics of structured regular spin clusters.
quantum physics
The TeV $\gamma$-ray halo around the Geminga pulsar is an important indicator of cosmic-ray (CR) propagation in the local zone of the Galaxy, as it reveals the spatial distribution of the electrons and positrons escaping from the pulsar. Considering the intricate magnetic field in the interstellar medium (ISM), it has been proposed that a superdiffusion model could describe the CR propagation more realistically than the commonly used normal diffusion model. In this work, we test the superdiffusion model in the ISM around the Geminga pulsar by fitting to the surface brightness profile of the Geminga halo measured by HAWC. Our results show that the chi-square statistic increases monotonically as $\alpha$ decreases from 2 to 1, where $\alpha$ is the characteristic index of superdiffusion describing the degree of fractality of the ISM and $\alpha=2$ corresponds to the normal diffusion model. We find that models with $\alpha<1.32$ (or $<1.4$, depending on the data used in the fit) are disfavored at the 95\% confidence level. A superdiffusion model with $\alpha$ close to 2 can well explain the morphology of the Geminga halo, while it predicts a much higher positron flux on the Earth than the normal diffusion model. This has important implications for the interpretation of the CR positron excess.
astrophysics
The K-means algorithm is a widely used clustering algorithm that offers simplicity and efficiency. However, the traditional K-means algorithm uses a random method to determine the initial cluster centers, which makes clustering results prone to local optima and can result in worse clustering performance. Many initialization methods have been proposed, but none of them can dynamically adapt to datasets with various characteristics. In our previous research, an initialization method for K-means based on a hybrid distance was proposed, and this algorithm can adapt to datasets with different characteristics. However, it has the following drawbacks: (a) when calculating density, the threshold cannot be uniquely determined, resulting in unstable results; (b) it depends heavily on parameter tuning, as the parameter must be adjusted five times to obtain better clustering results; (c) the time complexity of the algorithm is quadratic, which makes it difficult to apply to large datasets. In the current paper, we propose an adaptive initialization method for the K-means algorithm (AIMK) that improves on our previous work. AIMK can not only adapt to datasets with various characteristics but also obtain better clustering results within two iterations. In addition, we leverage random sampling in AIMK, yielding a variant named AIMK-RS, to reduce the time complexity. AIMK-RS is easily applied to large and high-dimensional datasets. We compared AIMK and AIMK-RS with 10 different algorithms on 16 normal and 6 extra-large datasets. The experimental results show that AIMK and AIMK-RS outperform current initialization methods and several well-known clustering algorithms. Furthermore, AIMK-RS can significantly reduce the complexity of applying the method to extra-large datasets with high dimensions; the time complexity of AIMK-RS is O(n).
computer science
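The hybrid-distance details of AIMK are beyond the abstract. For context, here is a minimal sketch of the classical k-means++ seeding (Arthur & Vassilvitskii, 2007), the standard baseline that adaptive initializers like AIMK aim to improve on:

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """Classical k-means++ seeding: pick each new center with
    probability proportional to its squared distance from the
    nearest center chosen so far."""
    rng = np.random.default_rng(rng)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

# Example: seed 3 centers on a random 2-D point cloud.
X = np.random.default_rng(0).normal(size=(200, 2))
print(kmeans_pp_init(X, k=3, rng=0))
```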
Recent precise measurement of the electron anomalous magnetic moment (AMM) adds to the longstanding tension in the muon AMM, and together they strongly point towards physics beyond the Standard Model (BSM). In this work, we propose a solution to both anomalies in an economical fashion via a light scalar that emerges from a second Higgs doublet and resides in the $\mathcal{O}(10)$-MeV to $\mathcal{O}(1)$-GeV mass range, yielding the right sizes and signs for these deviations due to one-loop and two-loop dominance for the muon and the electron, respectively. A scalar of this type is subject to a number of experimental constraints; however, as we show, it can remain sufficiently light while evading all experimental bounds and has great potential to be discovered in near-future low-energy experiments. The analysis provided here is equally applicable to any BSM scenario in which a light scalar is allowed to have sizable flavor-diagonal couplings to the charged leptons. In addition to the light scalar, our theory predicts the existence of a nearly degenerate charged scalar and a pseudoscalar, which have masses of the order of the electroweak scale. We analyze possible ways to probe new-physics signals at colliders and find that this scenario can be tested at the LHC by looking at the novel process $pp \to H^\pm H^\pm jj \to l^\pm l^\pm j j + {E\!\!\!\!/}_{T}$ via same-sign pair production of charged Higgs bosons.
high energy physics phenomenology
We extend non-Hermitian topological quantum walks on a Su-Schrieffer-Heeger (SSH) lattice [M. S. Rudner and L. Levitov, Phys. Rev. Lett. 102, 065703 (2009)] to the case of non-Markovian evolution. This non-Markovian model is established by coupling each unit cell in the SSH lattice to a reservoir formed by a quasi-continuum of levels. We find a topological transition in this model even in the case of non-Markovian evolution, where the walker may visit the reservoir and return to the SSH lattice at a later time. The existence of a topological transition does, however, depend on the low-frequency properties of the reservoir, characterized by a spectral density $J(\epsilon)\propto |\epsilon|^\alpha$. In particular, we find a robust topological transition for a sub-Ohmic ($\alpha<1$) and Ohmic ($\alpha=1$) reservoir, but no topological transition for a super-Ohmic ($\alpha>1$) reservoir. This behavior is directly related to the well-known localization transition for the spin-boson model. We confirm the presence of non-Markovian dynamics by explicitly evaluating a measure of Markovianity for this model.
quantum physics
We examine proton decay mediated by color-triplet Higgsinos in the minimal supersymmetric $SU(5)$ grand unified theory in light of the discovery of the Higgs boson and the absence of SUSY signals at the LHC. We pay special attention to various threshold effects arising from Planck-suppressed operators that affect the color-triplet Higgsino mass and also correct the wrong mass relations for the light fermions. Our analysis allows for a non-universal SUSY spectrum with the third family scalars having a separate mass compared to the first two families. We identify the allowed parameter space of the model and show that the SUSY scalar masses are constrained by current limits on the proton lifetime to be above 5 TeV, while the gluino, Wino and the Higgsinos may be within reach of the LHC. When the SUSY scalar masses are required to be $\leq 20$ TeV, so that they are within reach of next-generation collider experiments, we find that the proton lifetime for the decay $p \rightarrow \overline{\nu} K^+$ is bounded by $\tau(p \rightarrow \overline{\nu} K^+) \leq 1.1 \times 10^{35}$ yrs.
high energy physics phenomenology
There is an extensive history of scholarship into what constitutes a "basic" color term, as well as a broadly attested acquisition sequence of basic color terms across many languages, as articulated in the seminal work of Berlin and Kay (1969). This paper employs a set of diverse measures on massively cross-linguistic data to operationalize and critique the Berlin and Kay color term hypotheses. Collectively, the 14 empirically-grounded computational linguistic metrics we design---as well as their aggregation---correlate strongly with both the Berlin and Kay basic/secondary color term partition (gamma=0.96) and their hypothesized universal acquisition sequence. The measures and result provide further empirical evidence from computational linguistics in support of their claims, as well as additional nuance: they suggest treating the partition as a spectrum instead of a dichotomy.
computer science
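The quoted gamma=0.96 is presumably a rank statistic of the Goodman-Kruskal type; assuming that is the gamma intended, a minimal sketch of computing it over all pairs looks like this:

```python
def goodman_kruskal_gamma(x, y):
    """Goodman-Kruskal gamma: (concordant - discordant) /
    (concordant + discordant) over all pairs; tied pairs are ignored."""
    conc = disc = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (conc + disc)

# Toy check: a metric that ranks every basic term above every
# secondary term agrees perfectly with the binary partition.
metric = [0.9, 0.8, 0.7, 0.4, 0.3]
is_basic = [1, 1, 1, 0, 0]
print(goodman_kruskal_gamma(metric, is_basic))  # 1.0
```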
Social media has been transforming numerous activities of everyday life, also impacting healthcare. However, few studies investigate the medical use of social media by patients and medical practitioners, especially in the Arabian Gulf region and Kuwait. To understand the behavior of patients and medical practitioners in social media toward healthcare and medical purposes, we conducted user studies. Through an online survey, we identified a decrease in patients' and medical practitioners' use of social media for medical purposes. Patients reported being more aware than practitioners concerning health education, health-related network support, and communication activities, while practitioners use social media mostly as a source of medical information, for clinician marketing and for professional development. The findings highlight the need to design a social media platform that supports healthcare online campaigns, professional career identity, medical repositories, and social privacy settings to increase user engagement with medical purposes.
computer science
Since its introduction, television has been the main channel of investment for advertisements intended to influence customers' purchase behavior. Many have attributed the mere exposure effect as the source of influence on purchase intention and purchase decision; however, most studies of television advertisement effects are not only outdated, but their sample sizes are questionable and their environments do not reflect reality. With the advent of the internet, social media and new information technologies, many recent studies focus on the effects of online advertisement; meanwhile, investment in television advertisement still has not declined. In response to this, we applied the machine learning algorithms SVM and XGBoost, as well as logistic regression, to construct a number of prediction models based on at-home advertisement exposure time and demographic data, examining the predictability of Actual Purchase and Purchase Intention behaviors of 3000 customers across 36 different products during a span of 3 months. If models based on exposure time had unreliable predictability in contrast to models based on demographic data, doubts would surface about the effectiveness of the heavy investment in television advertising. Based on our results, we found that models based on advert exposure time were consistently low in their predictability in comparison with models based on demographic data only and with models based on both demographic and exposure-time data. We also found no statistically significant difference between these last two kinds of models. This suggests that advert exposure time has little to no short-term effect in increasing positive actual purchase behavior.
computer science
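To make the comparison concrete, here is a hedged sketch of how predictability from exposure-time versus demographic features can be compared under cross-validation. The data here is synthetic, and scikit-learn's GradientBoostingClassifier stands in for XGBoost:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for the study's data: exposure-time features,
# demographic features, and binary purchase labels.
rng = np.random.default_rng(0)
n = 500
X_expo = rng.exponential(1.0, size=(n, 5))
X_demo = rng.normal(size=(n, 8))
y = rng.integers(0, 2, size=n)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "gboost": GradientBoostingClassifier(),  # XGBoost stand-in
}
feature_sets = {
    "exposure": X_expo,
    "demographic": X_demo,
    "both": np.hstack([X_expo, X_demo]),
}
for mname, model in models.items():
    for fname, X in feature_sets.items():
        pipe = make_pipeline(StandardScaler(), model)
        score = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{mname:>7s} on {fname:<12s}: {score:.3f}")
```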
Linguistics has unique characteristics of generality, stability, and nationality, which affect the formulation of extraction strategies and should be incorporated into relation extraction. Chinese open relation extraction is not well established, because the complexity of Chinese linguistics makes it harder to operate and the methods designed for English are not compatible with Chinese. The differences between Chinese and English linguistics are mainly reflected in morphology and syntax.
computer science
Systematic classification of Z2xZ2 orbifold compactifications of the heterotic string was pursued by using its free fermion formulation. The method entails random generation of string vacua and analysis of their entire spectra, and led to the discovery of spinor-vector duality and three-generation exophobic string vacua. The classification was performed for string vacua with unbroken SO(10) GUT symmetry, and progressively extended to models in which the SO(10) symmetry is broken to the SO(6)xSO(4), SU(5)xU(1), SU(3)xSU(2)xU(1)^2 and SU(3)xU(1)xSU(2)^2 subgroups. Obtaining a sizeable number of phenomenologically viable vacua in the last two cases requires the identification of fertility conditions. Adaptation of machine learning tools to identify the fertility conditions will be useful when the frequency of viable models becomes exceedingly small in the total space of vacua.
high energy physics theory
We report the direct characterization of energy-time entanglement of narrowband biphotons produced from spontaneous four-wave mixing in cold atoms. The Stokes and anti-Stokes two-photon temporal correlation is measured by single-photon counters with nanosecond temporal resolution, and their joint spectrum is determined by using a narrow-linewidth optical cavity. The energy-time entanglement is verified by the joint frequency-time uncertainty product of 0.063 +/- 0.0044, which not only violates the separability criterion but also satisfies the continuous-variable Einstein-Podolsky-Rosen steering inequality.
quantum physics
When the Dark Matter mass is below the eV scale, its cosmological occupation number exceeds those of photons from the cosmic microwave background as well as of relic neutrinos. If such Dark Matter decays to pairs of neutrinos, experiments that seek the detection of the cosmic neutrino background may also be sensitive to this additional form of "dark radiation". Here we study the prospects for detection taking into account various options for the forecasted performance of the future PTOLEMY experiment. From a detailed profile likelihood analysis we find that Dark Matter decays with lifetimes as large as $10^4$ Gyr, or a sub-% Dark Matter fraction decaying today, can be discovered. The prospects are facilitated by the distinct spectral event shape that is introduced by the galactic and cosmological neutrino dark radiation fluxes. In the process we also clarify the importance of Pauli blocking in the Dark Matter decay. The scenarios presented in this work can be considered early physics targets in the development of these instruments, with relaxed demands on performance and energy resolution.
astrophysics
Use of continuous shrinkage priors -- with a "spike" near zero and heavy-tails towards infinity -- is an increasingly popular approach to induce sparsity in parameter estimates. When the parameters are only weakly identified by the likelihood, however, the posterior may end up with tails as heavy as the prior, jeopardizing robustness of inference. A natural solution is to "shrink the shoulders" of a shrinkage prior by lightening up its tails beyond a reasonable parameter range, yielding the regularized version of the prior. We develop a regularization approach which, unlike previous proposals, preserves computationally attractive structures of original shrinkage priors. We study theoretical properties of the Gibbs sampler on resulting posterior distributions, with emphasis on convergence rates of the P{\'o}lya-Gamma Gibbs sampler for sparse logistic regression. Our analysis shows that the proposed regularization leads to geometric ergodicity under a broad range of global-local shrinkage priors. Essentially, the only requirement is for the prior $\pi_{\rm local}$ on the local scale $\lambda$ to satisfy $\pi_{\rm local}(0) < \infty$. If $\pi_{\rm local}(\cdot)$ further satisfies $\lim_{\lambda \to 0} \pi_{\rm local}(\lambda) / \lambda^a < \infty$ for $a > 0$, as in the case of Bayesian bridge priors, we show the sampler to be uniformly ergodic.
statistics
Artificial light-at-night (ALAN), emitted from the ground and visible from space, marks human presence on Earth. Since the launch of the Suomi National Polar Partnership satellite with the Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS/DNB) onboard, global nighttime images have significantly improved; however, they have remained panchromatic. Although multispectral images are also available, they are either commercial or free of charge but sporadic. In this paper, we use several machine learning techniques, such as linear, kernel and random forest regressions, and the elastic map approach, to transform panchromatic VIIRS/DNB imagery into Red Green Blue (RGB) images. To validate the proposed approach, we analyze RGB images for eight urban areas worldwide. We link RGB values, obtained from ISS photographs, to panchromatic ALAN intensities, their pixel-wise differences, and several land-use type proxies. Each dataset is used for model training, while the other datasets are used for model validation. The analysis shows that model-estimated RGB images demonstrate a high degree of correspondence with the original RGB images from the ISS database. Yet, estimates based on linear, kernel and random forest regressions provide better correlations, contrast similarity and lower WMSE levels, while RGB images generated using the elastic map approach provide higher consistency of predictions.
statistics
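A sketch of the regression pipeline the abstract describes: a multi-output random forest mapping panchromatic-intensity features to RGB values. The feature construction and the synthetic data below are placeholders for the ISS/VIIRS inputs used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical features: panchromatic ALAN intensity, a pixel-wise
# difference proxy, and a categorical land-use proxy.
rng = np.random.default_rng(1)
n = 5000
pan = rng.uniform(0, 60, n)                       # panchromatic radiance (stand-in)
landuse = rng.integers(0, 4, n)                   # land-use type proxy
X = np.column_stack([pan, np.gradient(pan), landuse])
# Stand-in "true" RGB with a simple dependence on the panchromatic signal.
rgb = np.column_stack([0.7 * pan, 0.5 * pan, 0.3 * pan]) + rng.normal(0, 2, (n, 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, rgb, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)                             # multi-output regression to R, G, B
print("R^2 on held-out pixels:", model.score(X_te, y_te))
```

In the paper's setting, training and validation would use different cities rather than a random pixel split, which is a stricter test of transferability.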
Self consistent solution of electromagnetic (EM)-circuit systems is of significant interest for a number of applications. This has resulted in exhaustive research on means to couple them. In the time domain, this typically involves a tight integration of field and non-linear circuit solvers. This is in stark contrast to coupled analysis of linear/weakly non-linear circuits and EM systems in the frequency domain. There, one typically extracts equivalent port parameters that are then fed into the circuit solver. Such an approach has several advantages: (a) the number of ports is typically smaller than the number of degrees of freedom, resulting in cost savings; and (b) it is circuit agnostic. A port representation is tantamount to an impulse response of the linear EM system. In the time domain, the deconvolution required to effect this is unstable. Recently, a novel approach was developed for time domain integral equations to overcome this bottleneck. We extend this approach to the time domain finite element method, and demonstrate its utility via a number of examples; significantly, we demonstrate that the coupled and port parameter solutions are identical to the desired precision for non-linear circuit systems.
electrical engineering and systems science
In this paper, based on the embedded homology groups of hypergraphs defined in \cite{h1}, we define the product of hypergraphs and prove the corresponding K\"{u}nneth formula of hypergraphs which can be generalized to the K\"{u}nneth formula for the embedded homology of graded subsets of chain complexes with coefficients in a principal ideal domain.
mathematics
The holographic entanglement of purification (EoP) in AdS$_{4}$ and AdS-RN black hole backgrounds is studied. We develop an algorithm to compute the EoP for bipartite configurations of infinitely long strips. The temperature behavior of the EoP is revealed for small, intermediate and large configurations: for small configurations the EoP monotonically increases with temperature; for intermediate configurations its behavior is configuration-dependent; and for large configurations the EoP vanishes. Our numerical results verify some important inequalities of the EoP, which we also prove geometrically in Poincar\'e coordinates.
high energy physics theory
To produce a fermionic model exhibiting an entanglement entropy volume law, we propose a particular version of nonlocality in which the energy-momentum dispersion relation is effectively randomized at the shortest length scales while preserving translation invariance. In contrast to the ground state of local fermions, which exhibits an entanglement entropy area law with logarithmic corrections, the entropy of nonlocal fermions is extensive, scaling as the volume of the subregion and crossing over to the anomalous fermion area law at scales larger than the locality scale, $\alpha$. In the 1-d case, we are able to show that the central charge appearing in the universal entropy expressions for large subregions is simply related to the locality scale. These results are demonstrated by exact diagonalizations of the corresponding discrete lattice fermion models. Within the Ryu-Takayanagi holographic picture, the relation between the central charge and the locality scale suggests a dual spacetime in which the size of the flat UV portion and the radius of AdS in the IR are both proportional to the locality scale, $\alpha$.
quantum physics
Unlike standard linear regression, quantile regression captures the relationship between covariates and the conditional response distribution as a whole, rather than only the relationship between covariates and the expected value of the conditional response. However, while there are well-established quantile regression methods for continuous variables and some forms of discrete data, there is no widely accepted method for ordinal variables, despite their importance in many medical contexts. In this work, we describe two existing ordinal quantile regression methods and demonstrate their weaknesses. We then propose a new method, Bayesian ordinal quantile regression with a partially collapsed Gibbs sampler (BORPS). We show superior results using BORPS versus existing methods on an extensive set of simulations. We further illustrate the benefits of our method by applying BORPS to the Fragile Families and Child Wellbeing Study data to tease apart associations with early puberty among both genders. Software is available at: GitHub.com/igrabski/borps.
statistics
This paper is concerned with the approximation of linear and nonlinear initial-boundary-value problems for pseudo-parabolic equations with Dirichlet boundary conditions. They are discretized in space by spectral Galerkin and collocation methods based on Legendre and Chebyshev polynomials. The time integration is carried out with robust schemes that attend to qualitative features such as stiffness and preservation of strong stability, so that nonregular problems are simulated more accurately. The corresponding semidiscrete and fully discrete schemes are described and the performance of the methods is analyzed computationally.
mathematics
We consider an optomechanical system composed of a mechanical and an optical mode interacting through linear and quadratic dispersive optomechanical couplings. The system is operated in the unresolved-sideband limit with a high-quality-factor mechanical resonator. Such a system then acts as a parametrically driven oscillator, giving access to an intensity-assisted tunability of the spring constant. This enables the operation of the optomechanical system in its 'soft mode', wherein the mechanical spring softens and responds with a lower resonance frequency. We show that this soft mode can be exploited to non-linearize the backaction noise, which yields a force sensitivity beyond the conventional standard quantum limit.
quantum physics
Because of the relatively rigid coupling between the upper dentition and the skull, instrumented mouthguards have been shown to be a viable way of measuring head impact kinematics, assisting in understanding the underlying biomechanics of concussions. This has led various companies and institutions to further develop instrumented mouthguards. However, their use as a research tool for understanding concussive impacts makes quantification of their accuracy critical, especially given the conflicting results from various recent studies. Here we present a study that uses a pneumatic impactor to deliver impacts characteristic of football to a Hybrid III headform, in order to validate and compare five of the most commonly used instrumented mouthguards. We found that all tested mouthguards gave accurate measurements for the peak angular acceleration (mean relative error, MRE < 13%), the peak angular velocity (MRE < 8%), brain injury criteria values (MRE < 13%) and brain deformation (described as maximum principal strain and fiber strain, calculated by a convolutional neural network based brain model, MRE < 9%). Finally, we found that measurement accuracy varies with impact location but is largely insensitive to impact velocity.
physics
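For concreteness, the MRE figures quoted above can be computed as below. This assumes the standard definition of relative error against the reference headform measurement; the paper's exact protocol (e.g., per-impact vs. pooled averaging) may differ in detail, and the sample values are hypothetical.

```python
import numpy as np

def mean_relative_error(measured, reference):
    """Mean relative error of device readings against reference values
    (e.g., Hybrid III headform sensors)."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    return np.mean(np.abs(measured - reference) / np.abs(reference))

# Hypothetical peak angular velocities (rad/s): mouthguard vs. reference
ref = np.array([18.2, 25.4, 31.0, 22.7])
mg  = np.array([17.5, 26.8, 29.6, 23.3])
print(f"MRE = {100 * mean_relative_error(mg, ref):.1f}%")  # < 8% meets the paper's bar
```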
We introduce a new variational inference (VI) framework, called energetic variational inference (EVI). It minimizes the VI objective function based on a prescribed energy-dissipation law. Using the EVI framework, we can derive many existing Particle-based Variational Inference (ParVI) methods, including the popular Stein Variational Gradient Descent (SVGD) approach. More importantly, many new ParVI schemes can be created under this framework. For illustration, we propose a new particle-based EVI scheme, which performs the particle-based approximation of the density first and then uses the approximated density in the variational procedure, or "Approximation-then-Variation" for short. Thanks to this order of approximation and variation, the new scheme can maintain the variational structure at the particle level, and can significantly decrease the KL-divergence in each iteration. Numerical experiments show the proposed method outperforms some existing ParVI methods in terms of fidelity to the target distribution.
statistics
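As a reference point for the ParVI family discussed above, here is a textbook implementation of the SVGD update, one of the schemes the EVI framework recovers. This is the generic SVGD algorithm, not the paper's proposed "Approximation-then-Variation" scheme.

```python
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.1):
    """One SVGD update for particles x of shape (n, d), RBF kernel."""
    diff = x[:, None, :] - x[None, :, :]               # pairwise differences
    k = np.exp(-(diff**2).sum(-1) / (2 * h**2))        # RBF kernel matrix
    grad_k = -diff / h**2 * k[..., None]               # grad wrt first argument
    # driving term (weighted scores) + repulsive term (kernel gradients)
    phi = (k @ grad_logp(x) + grad_k.sum(0)) / len(x)
    return x + eps * phi

# Target: standard 2-D Gaussian, so grad log p(x) = -x
x = np.random.default_rng(2).normal(3.0, 1.0, (200, 2))
for _ in range(500):
    x = svgd_step(x, lambda x: -x)
print("particle mean (should be near 0):", x.mean(0))
```

The driving term transports particles toward high density while the repulsive term keeps them spread out; the EVI view derives such dynamics from an energy-dissipation law rather than positing them directly.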
Understanding nanoscale force transmission in the piconewton-to-nanonewton range is important in polymer physics. While physical approaches are limited in analyzing local force distributions in condensed environments, chemical doping of mechano-probes is promising. However, probing local forces without structural damage imposes demanding requirements: the relevant force range lies below covalent bond scission (nanonewtons) and above thermal fluctuations (piconewtons). Here we report a conformationally flexible dual-fluorescent mechano-probe with a 100-piconewton threshold that enables ratiometric analysis of the nanoscale force distribution in a stretched polymer chain network. Without changing the original polymer properties, the force distribution has been reversibly monitored in real time. Chemical control of the probe location demonstrated that local stress concentration is twice as biased at crosslinkers as at main chains, particularly in the strain-hardening region. Owing to its sensitive response, the proportion of stressed force probes was estimated to be more than 1000 times the activation ratio of conventional mechanophores.
condensed matter
Undoubtedly, Raman spectroscopy is one of the most highly developed spectroscopic tools in materials science, chemistry, medicine and optics. However, when it comes to the analysis of nanostructured specimens, accessing the Raman spectra resulting from an exciting electric field component oriented perpendicularly to the substrate plane is a difficult task, conventionally achievable only by mechanically tilting the sample or by sophisticated sample preparation. Here, we propose a novel experimental method based on the utilization of polarization-tailored light for Raman spectroscopy of individual nanostructures. As a proof of principle, we create three-dimensional electromagnetic field distributions at the nanoscale using tightly focused cylindrical vector beams impinging normally onto the specimen, hence keeping the conventional beam path of commercial Raman systems. Using this excitation scheme, we experimentally show that the recorded Raman spectra of individual gallium-nitride nanostructures of sub-wavelength diameter, used as a test platform, depend sensitively on their location relative to the focal vector field. The observed Raman spectra can be attributed to the interaction with transverse or longitudinal electric field components. This novel technique may pave the way towards a characterization of Raman-active nanosystems using the full information of all Raman modes.
physics
We present the complete two-loop corrections in massless QCD for the production of two photons and a jet, taking into account all color structures. In particular, we analytically compute all two-loop helicity amplitudes for the quark-antiquark, quark-gluon, and antiquark-gluon channel, and check them with an independent calculation of the polarization-summed interference with the tree amplitude. This is the first time that two-loop QCD corrections to a five-point scattering process have been computed beyond the leading-color approximation for all helicity configurations.
high energy physics phenomenology
We introduce a new general modeling approach for multivariate discrete event data with categorical interacting marks, which we refer to as marked Bernoulli processes. In the proposed model, the probability of an event of a specific category to occur in a location may be influenced by past events at this and other locations. We do not restrict interactions to be positive or decaying over time as it is commonly adopted, allowing us to capture an arbitrary shape of influence from historical events, locations, and events of different categories. In our modeling, prior knowledge is incorporated by allowing general convex constraints on model parameters. We develop two parameter estimation procedures utilizing the constrained Least Squares (LS) and Maximum Likelihood (ML) estimation, which are solved using variational inequalities with monotone operators. We discuss different applications of our approach and illustrate the performance of proposed recovery routines on synthetic examples and a real-world police dataset.
mathematics
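A toy version of the constrained least-squares estimation in the abstract above: the event probability at one location is linear in lagged events, interaction weights may be negative or positive, and the convex constraint is a simple box. All model dimensions and parameters are illustrative; the paper's variational-inequality machinery handles far more general convex constraints and multivariate marks.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Binary events y[t] with probability linear in the previous d lags.
rng = np.random.default_rng(3)
d, T = 5, 2000
theta_true = np.array([0.3, -0.1, 0.05, 0.0, 0.1])   # signed influences
y = np.zeros(T)
for t in range(d, T):
    p = np.clip(0.2 + y[t-d:t][::-1] @ theta_true, 0, 1)
    y[t] = rng.random() < p

# Design matrix: intercept plus d lags (most recent lag first).
X = np.column_stack([y[t-d:t][::-1] for t in range(d, T)]).T
X = np.hstack([np.ones((T - d, 1)), X])
res = lsq_linear(X, y[d:], bounds=(-1, 1))           # convex (box) constraint
print("estimated [intercept, lags]:", np.round(res.x, 2))
```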
Automatic image captioning has improved significantly in the last few years, but the problem is far from being solved. Furthermore, while the standard automatic metrics, such as CIDEr and SPICE~\cite{cider,spice}, can be used for model selection, they cannot be used at inference-time given a previously unseen image since they require ground-truth references. In this paper, we focus on the related problem called Quality Estimation (QE) of image-captions. In contrast to automatic metrics, QE attempts to model caption quality without relying on ground-truth references. It can thus be applied as a second-pass model (after caption generation) to estimate the quality of captions even for previously unseen images. We conduct a large-scale human evaluation experiment, in which we collect a new dataset of more than 600k ratings of image-caption pairs. Using this dataset, we design and experiment with several QE modeling approaches and provide an analysis of their performance. Our results show that QE is feasible for image captioning.
computer science
Confining hidden sectors are an attractive possibility for physics beyond the Standard Model (SM). They are especially motivated by neutral naturalness theories, which reconcile the lightness of the Higgs with the strong constraints on colored top partners. We study hidden QCD with one light quark flavor, coupled to the SM via effective operators suppressed by the mass $M$ of new electroweak-charged particles. This effective field theory is inspired by a new tripled top model of supersymmetric neutral naturalness. The hidden sector is accessed primarily via the $Z$ and Higgs portals, which also mediate the decays of the hidden mesons back to SM particles. We find that exotic $Z$ decays at the LHC and future $Z$ factories provide the strongest sensitivity to this scenario, and we outline a wide array of searches. For a larger hidden confinement scale $\Lambda\sim O(10)\;\mathrm{GeV}$, the exotic $Z$ decays dominantly produce final states with two hidden mesons. ATLAS and CMS can probe their prompt decays up to $M\sim 3\;\mathrm{TeV}$ at the high luminosity phase, while a TeraZ factory would extend the reach up to $M\sim 20\;\mathrm{TeV}$ through a combination of searches for prompt and displaced signals. For smaller $\Lambda \sim O(1)\;\mathrm{GeV}$, the $Z$ decays to the hidden sector produce jets of hidden mesons, which are long-lived. LHCb will be a powerful probe of these emerging jets. Furthermore, the light hidden vector meson could be detected by proposed dark photon searches.
high energy physics phenomenology
In a so-called overpopulated world, sustainable consumption is of existential importance. However, the expanding spectrum of product choices and their production complexity challenge consumers to make informed and value-sensitive decisions. Recent approaches based on (personalized) psychological manipulation are often intransparent, potentially privacy-invasive and inconsistent with (informational) self-determination. In contrast, responsible consumption based on informed choices currently requires reasoning to an extent that tends to overwhelm human cognitive capacity. As a result, a collective shift towards sustainable consumption remains a grand challenge. Here we demonstrate a novel personal shopping assistant implemented as a smartphone app that supports a value-sensitive design and leverages sustainability awareness, using experts' knowledge and the "wisdom of the crowd" for transparent product information and explainable product ratings. Real-world field experiments in two supermarkets confirm higher sustainability awareness and a bottom-up behavioral shift towards more sustainable consumption. These results encourage novel business models for retailers and producers, ethically aligned with consumer preferences and with higher sustainability.
computer science
This is a new version of our previous work. In this version, we fill a gap in the original proof of Theorem 1.1 in our previous paper, entitled "An iterative method for Kirchhoff type equations and its applications".
mathematics
We introduce Pathogen.jl for simulation and inference of transmission network individual level models (TN-ILMs) of infectious disease spread. TN-ILMs can be used to jointly infer transmission networks, event times, and model parameters within a Bayesian framework via Markov chain Monte Carlo (MCMC). We detail our specific strategies for conducting MCMC for TN-ILMs, and our implementation of these strategies in the Julia package Pathogen.jl, which leverages key features of the Julia language. We provide an example using Pathogen.jl to simulate an epidemic following a susceptible-infectious-removed (SIR) TN-ILM, and then perform inference using observations that were generated from that epidemic. We also demonstrate the functionality of Pathogen.jl with an application of TN-ILMs to data from a measles outbreak that occurred in Hagelloch, Germany in 1861 (Pfeilsticker 1863; Oesterle 1992).
statistics
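To illustrate the kind of epidemic the package simulates, here is a minimal continuous-time SIR individual-level simulation with a spatial transmission kernel. This is a conceptual Python stand-in only; Pathogen.jl is written in Julia, and its API and rate specification differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
pos = rng.uniform(0, 10, (n, 2))              # individual locations
state = np.zeros(n, dtype=int)                # 0 = S, 1 = I, 2 = R
state[0] = 1                                  # index case
beta, alpha, gamma, t = 0.5, 2.0, 0.2, 0.0    # illustrative rate parameters

while (state == 1).any():
    S, I = np.where(state == 0)[0], np.where(state == 1)[0]
    # spatial transmission kernel: rate_i = beta * sum_j d(i, j)^(-alpha)
    d = np.linalg.norm(pos[S][:, None] - pos[I][None], axis=-1)
    rates = np.concatenate([beta * (d ** -alpha).sum(1),
                            np.full(len(I), gamma)])    # infections, removals
    t += rng.exponential(1.0 / rates.sum())             # Gillespie time step
    k = rng.choice(len(rates), p=rates / rates.sum())   # which event fires
    if k < len(S):
        state[S[k]] = 1                                 # transmission event
    else:
        state[I[k - len(S)]] = 2                        # removal event
print(f"epidemic ended at t = {t:.1f}, final size = {(state == 2).sum()}")
```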
We discovered a so-called high-temperature blackbody (HBB) component, found in the 15 -- 40 keV range, in the broad-band X-ray energy spectra of black hole (BH) candidate sources. A detailed study of this spectral feature is presented using data from five Galactic BH binaries, Cyg X-1, GX 339-4, GRS 1915+105, SS 433 and V4641~Sgr, in the low/hard, intermediate, high/soft and very soft spectral states (LHS, IS, HSS and VSS, respectively) and during spectral transitions between them, using {\it RXTE}, INTEGRAL and BeppoSAX data. In order to fit the broad-band energy spectra of these sources we used an additive XSPEC model composed of a Comptonization component and a Gaussian line component. In particular, we reveal that the IS spectra contain the HBB component, whose color temperature $kT_{\rm HBB}$ is in the range of 4.5 -- 5.9 keV. This HBB feature has been detected in spectra of these five sources only in the IS (for photon index $\Gamma>1.9$) using different X-ray telescopes. We also demonstrate that the timescale of the HBB feature is orders of magnitude shorter than that of the iron line and its edge, which leads us to conclude that these spectral features are formed in geometrically different parts of the source that are not connected to each other. Laurent & Titarchuk (2018) demonstrated the presence of a gravitationally redshifted annihilation line in BH emission using Monte-Carlo simulations, and therefore the observed HBB hump leads us to suggest that this feature is a gravitationally redshifted annihilation line observed in these black holes.
astrophysics
This paper concerns a distributed optimal control problem for a tumor growth model of Cahn-Hilliard type including chemotaxis with possibly singular potentials, where the control and state variables are nonlinearly coupled. First, we discuss the weak well-posedness of the system under very general assumptions for the potentials, which may be singular and nonsmooth. Then, we establish the strong well-posedness of the system in a reduced setting, which however admits the logarithmic potential: this analysis will lay the foundation for the study of the corresponding optimal control problem. Concerning the optimization problem, we address the existence of minimizers and establish both first-order necessary and second-order sufficient conditions for optimality. The mathematically challenging second-order analysis is completely performed here, after showing that the solution mapping is twice continuously differentiable between suitable Banach spaces via the implicit function theorem. Then, we completely identify the second-order Fr\'echet derivative of the control-to-state operator and carry out a thorough and detailed investigation about the related properties.
mathematics
Recently it was found that quantum gravity theories may involve constructing a quantum theory on non-Cauchy hypersurfaces. However, this is problematic since the ordinary Poisson brackets are not causal in this case. We suggest a method to identify classical brackets that are causal on 2+1 non-Cauchy hypersurfaces and use it to show that the evolution of scalar and vector fields in the third spatial direction can be constructed by a Hamilton-like procedure. Finally, we discuss the relevance of this result to quantum gravity.
high energy physics theory
The purpose of this paper is to generalize some results on $n$-Lie algebras and $n$-Hom-Lie algebras to $n$-Hom-Lie color algebras. Then we introduce and give some constructions of $n$-Hom-Lie color algebras.
mathematics
The dissipative dynamics of strongly interacting systems are often characterised by the timescale set by the inverse temperature $\tau_P\sim\hbar/(k_BT)$. We show that near a class of strongly interacting quantum critical points that arise in the infra-red limit of translationally invariant holographic theories, there is a collective excitation (a quasinormal mode of the dual black hole spacetime) whose lifetime $\tau_{eq}$ is parametrically longer than $\tau_P$: $\tau_{eq}\gg T^{-1}$. The lifetime is enhanced due to its dependence on a dangerously irrelevant coupling that breaks the particle-hole symmetry and the invariance under Lorentz boosts of the quantum critical point. The thermal diffusivity (in units of the butterfly velocity) is anomalously large near the quantum critical point and is governed by $\tau_{eq}$ rather than $\tau_P$. We conjecture that there exists a long-lived, propagating collective mode with velocity $v_s$, and in this case the relation $D=v_s^2\tau_{eq}$ holds exactly in the limit $T\tau_{eq}\gg1$. While scale invariance is broken, a generalised scaling theory still holds provided that the dependence of observables on the dangerously irrelevant coupling is incorporated. Our work further underlines the connection between dangerously irrelevant deformations and slow equilibration.
high energy physics theory
The thermal expansion coefficient $\alpha$ and the Gr\"{u}neisen parameter $\Gamma$ near the magnetic quantum critical point (QCP) are derived on the basis of the self-consistent renormalization (SCR) theory of spin fluctuation. From the SCR entropy, the specific heat $C_{V}$, $\alpha$, and $\Gamma$ are shown to be expressed in a simple form as $C_{V}=C_{\rm a}-C_{\rm b}$, $\alpha=\alpha_{\rm a}+\alpha_{\rm b}$, and $\Gamma=\Gamma_{\rm a}+\Gamma_{\rm b}$, respectively, where $C_{\rm i}$, $\alpha_{\rm i}$, and $\Gamma_{\rm i}$ $({\rm i}={\rm a}, {\rm b})$ are related with each other. As the temperature $T$ decreases, $C_{\rm a}$, $\alpha_{\rm b}$, and $\Gamma_{\rm b}$ become dominant in $C_{V}$, $\alpha$, and $\Gamma$, respectively. The inverse susceptibility of spin fluctuation coupled to the volume $V$ in $\Gamma_{\rm b}$ is found to give rise to the divergence of $\Gamma$ at the QCP for each class of ferromagnetism and antiferromagnetism (AFM) in spatial dimensions $d=3$ and $2$. This $V$-dependent inverse susceptibility in $\alpha_{b}$ and $\Gamma_{\rm b}$ contributes to the $T$ dependences of $\alpha$ and $\Gamma$, and even affects their criticality in the case of the AFM QCP in $d=2$. $\Gamma_{\rm a}$ is expressed as $\Gamma_{\rm a}(T=0)=-\frac{V}{T_0}\left(\frac{\partial T_0}{\partial V}\right)_{T=0}$ with $T_0$ being the characteristic temperature of spin fluctuation, which has an enhanced value in heavy electron systems.
condensed matter
Taking into account the increasing interest in measuring high-energy gamma-ray polarization, Boldyshev et al. \cite{[Boldy]} published an extensive and very comprehensive work on the possibility of using the recoil electrons in pair production on electrons. However, that work is based on using only 2 of the 8 Feynman diagrams of the process. This eliminates the difficulty of distinguishing, in the theory, which is the recoil electron and which is the created one. In this work we have analyzed the eight Feynman diagrams and have shown that for energies below $\sim 1000mc^2$ this assumption is not a good approximation, so we propose a different way of working \cite{Marcos}: we classify the electrons into the less energetic and the more energetic ones, without taking into account their origin. Under these conditions (lower or higher energy value), we have calculated the contributions of the different diagrams to the distribution (we compare their sum with that obtained by Haug \cite{Haug_e+}\cite{Haug_e-}), and how these distributions are modified by introducing a threshold on the detected electron momentum. For the study of polarization we present results on the angular distribution of particles for high-energy gamma rays (where only the Borsellino diagrams predominate). Our results on the azimuthal distribution show that it is strongly influenced by the orientation (in the plane perpendicular to the direction of the photon), prior to the interaction, of the polarization vector relative to the position of the electron in whose field the pair is generated.
high energy physics phenomenology
We investigate the noncommutative gauge theories arising on the worldvolumes of D-branes in non-geometric backgrounds obtained by T-duality from twisted tori. We revisit the low-energy effective description of D-branes on three-dimensional T-folds, examining both cases of parabolic and elliptic twists in detail. We give a detailed description of the decoupling limits and explore various physical consequences of the open string non-geometry. The T-duality monodromies of the non-geometric backgrounds lead to Morita duality monodromies of the noncommutative Yang-Mills theories induced on the D-branes. While the parabolic twists recover the well-known examples of noncommutative principal torus bundles from topological T-duality, the elliptic twists give new examples of noncommutative fibrations with non-geometric torus fibres. We extend these considerations to D-branes in backgrounds with R-flux, using the doubled geometry formulation, finding that both the non-geometric background and the D-brane gauge theory necessarily have explicit dependence on the dual coordinates, and so have no conventional formulation in spacetime.
high energy physics theory
We show that the electron recoil excess around 2 keV claimed by the Xenon collaboration can be fitted by DM or DM-like particles having a fast component with velocity of order $\sim 0.1$. Those particles cannot be part of the cold DM halo of our Galaxy, so we speculate about their possible nature and origin, such as fast moving DM sub-haloes, semi-annihilations of DM and relativistic axions produced by a nearby axion star. Feasible new physics scenarios must accommodate exotic DM dynamics and unusual DM properties.
high energy physics phenomenology
A complete one-loop matching calculation for real singlet scalar extensions of the Standard Model to the Standard Model effective field theory (SMEFT) of dimension-six operators is presented. We compare our analytic results obtained by using Feynman diagrams to the expressions derived in the literature by a combination of the universal one-loop effective action (UOLEA) approach and Feynman calculus. After identifying contributions that have been overlooked in the existing calculations, we find that the pure diagrammatic approach and the mixed method lead to identical results. We highlight some of the subtleties involved in computing one-loop matching corrections in SMEFT.
high energy physics phenomenology
Current literature on posterior approximation for Bayesian inference offers many alternative methods. Does our chosen approximation scheme work well on the observed data? The best existing generic diagnostic tools address this kind of question by looking at performance averaged over data space, or otherwise lack diagnostic detail. However, if the approximation is bad for most data, but good at the observed data, then we may discard a useful approximation. We give graphical diagnostics for posterior approximation at the observed data. We estimate a "distortion map" that acts on univariate marginals of the approximate posterior to move them closer to the exact posterior, without recourse to the exact posterior.
statistics
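The following sketches one concrete realization of a univariate distortion map: a monotone quantile-matching transform from the approximate marginal toward a reference sample. The paper's estimator notably avoids sampling the exact posterior, so the reference draws below are a stand-in for whatever calibration sample the method actually provides.

```python
import numpy as np

def distortion_map(approx_draws, reference_draws):
    """Monotone map sending quantiles of the approximate posterior
    marginal onto quantiles of a reference sample."""
    srt = np.sort(approx_draws)
    def g(x):
        u = np.searchsorted(srt, x) / len(srt)   # empirical CDF of approx
        return np.quantile(reference_draws, np.clip(u, 0.0, 1.0))
    return g

rng = np.random.default_rng(11)
approx = rng.normal(0.5, 1.0, 5000)       # biased approximate marginal
reference = rng.normal(0.0, 1.0, 5000)    # stand-in calibration sample
g = distortion_map(approx, reference)
print("mean before / after:", approx.mean(), g(approx).mean())
```

Plotting `g` against the identity is the graphical diagnostic: departures from the diagonal show where, and by how much, the approximation distorts that marginal.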
We introduce a new method to establish time-quantitative density in flat dynamical systems. First we give a shorter and different proof of our earlier result that a half-infinite geodesic on an arbitrary finite polysquare surface P is superdense on P if the slope of the geodesic is a badly approximable number. We then adapt our method to study time-quantitative density of half-infinite geodesics on algebraic polyrectangle surfaces.
mathematics
We construct a mass dimension one fermionic field associated with flag-dipole spinors. These spinors are related to Elko (flag-pole spinors) by a one-parameter matrix transformation $\mathcal{Z}(z)$, where $z$ is a complex number. The theory is non-local and non-covariant. While it is possible to obtain a Lorentz-invariant theory via $\tau$-deformation, we choose to study the effects of non-locality and non-covariance; our motivation for doing so is explained. We show that fermionic fields with $|z|\neq1$ and $|z|=1$ are physically equivalent. But for fermionic fields with more than one value of $z$, their interactions are $z$-dependent, thus introducing an additional fermionic degeneracy that is absent in the Lorentz-invariant theory. We study the fermionic self-interaction and the local $U(1)$ interaction. In the process, we obtain non-local contributions to the fermionic self-interaction that have previously been neglected. For the local $U(1)$ theory, the interactions contain time derivatives that render the interaction density non-commutative at space-like separation. We show that this problem can be resolved by working in the temporal gauge. This issue is also discussed in the context of gravity.
high energy physics theory
Dynamical systems describe the changes in processes that arise naturally from their underlying physical principles, such as the laws of motion or the conservation of mass, energy or momentum. These models facilitate a causal explanation for the drivers and impediments of the processes. But do they describe the behaviour of the observed data? And how can we quantify the models' parameters that cannot be measured directly? This paper addresses these two questions by providing a methodology for estimating the solution and the parameters of linear dynamical systems from incomplete and noisy observations of the processes. The proposed procedure builds on the parameter cascading approach, where a linear combination of basis functions approximates the implicitly defined solution of the dynamical system. The system's parameters are then estimated so that this approximating solution adheres to the data. By taking advantage of the linearity of the system, we simplify the parameter cascading estimation procedure, and by developing a new iterative scheme, we achieve fast and stable computation. We illustrate our approach by obtaining a linear differential equation that represents real data from biomechanics. Compared with popular methods for estimating the parameters of linear dynamical systems, namely the non-linear least-squares approach, simulated annealing, parameter cascading and smooth functional tempering, our approach shows a considerable reduction in computation and improved bias and sampling variance.
statistics
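A one-pass sketch of the basis-expansion idea: smooth the noisy observations with a basis-function fit, then estimate the rate parameter of a linear ODE by least squares on the smoothed derivative. The actual parameter cascading procedure profiles the basis coefficients against an ODE-fidelity penalty and iterates; this shows only the core gradient-matching step, with assumed toy data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy linear ODE dx/dt = a * x observed with noise.
rng = np.random.default_rng(5)
a_true = -0.7
t = np.linspace(0, 5, 80)
x_obs = np.exp(a_true * t) + rng.normal(0, 0.02, t.size)

spl = UnivariateSpline(t, x_obs, s=len(t) * 0.02**2)  # basis-function smooth
xs, dxs = spl(t), spl.derivative()(t)
a_hat = (xs @ dxs) / (xs @ xs)                        # closed-form LS for a
print(f"a_true = {a_true}, a_hat = {a_hat:.3f}")
```

The closed-form step is where the linearity of the system pays off: for a linear ODE the inner estimation reduces to ordinary least squares, which is the simplification the paper exploits.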
The subgrid-scale modelling of a low Mach number strongly anisothermal turbulent flow is investigated using direct numerical simulations. The study is based on the filtering of the low Mach number equations, suited to low Mach number flows with highly variable fluid properties. The results are relevant to formulations of the filtered low Mach number equations established with the classical filter or the Favre filter. The two most significant subgrid terms of the filtered low Mach number equations are considered. They are associated with the momentum convection and the density-velocity correlation. We focus on eddy-viscosity and eddy-diffusivity models. Subgrid-scale models from the literature are analysed and two new models are proposed. The subgrid-scale models are compared to the exact subgrid term using the instantaneous flow field of the direct numerical simulation of a strongly anisothermal fully developed turbulent channel flow. There are no significant differences between the use of the classical filter and the Favre filter regarding the performance of the models. We suggest that the models should take into account the asymptotic near-wall behaviour of the filter length. Eddy-viscosity and eddy-diffusivity models are able to represent the energetic contribution of the subgrid term but not its effect in the flow governing equations. The AMD and scalar AMD models are found to be in better agreement with the exact subgrid terms than the other investigated models in the a priori tests.
physics
This paper proposes a self-supervised low light image enhancement method based on deep learning, which can improve image contrast and reduce noise at the same time, avoiding the blur caused by pre-/post-denoising. The method contains two deep sub-networks, an Image Contrast Enhancement Network (ICE-Net) and a Re-Enhancement and Denoising Network (RED-Net). The ICE-Net takes the low light image as input and produces a contrast-enhanced image. The RED-Net takes the result of ICE-Net and the low light image as input, and can re-enhance the low light image and denoise at the same time. Both networks can be trained with low light images only, which is achieved by a Maximum Entropy based Retinex (ME-Retinex) model and an assumption that noises are independently distributed. In the ME-Retinex model, a new constraint on the reflectance image is introduced: the maximum channel of the reflectance image conforms to the maximum channel of the low light image and its entropy should be the largest. This converts the decomposition of reflectance and illumination in the Retinex model into a non-ill-conditioned problem and allows the ICE-Net to be trained in a self-supervised way. The loss functions of RED-Net are carefully formulated to separate noises and details during training, based on the idea that, if noises are independently distributed, then after the processing of smoothing filters (\eg the mean filter), the gradient of the noise part should be smaller than the gradient of the detail part. Experiments demonstrate both qualitatively and quantitatively that the proposed method is effective.
computer science
We show that charge-quantization of the M-theory C-field in J-twisted Cohomotopy implies emergence of a higher Sp(1)-gauge field on single heterotic M5-branes, which exhibits worldvolume twisted String structure.
high energy physics theory
Noncommutativity of the spacetime coordinates has been explored in several contexts, mostly associated with phenomena at the Planck length scale. However, approaching this question through deformation theory and the principle of stability of physical theories, one concludes that the scales of noncommutativity of the coordinates and noncommutativity of the generators of translations are independent. This suggests that the scale of spacetime coordinate noncommutativity could be larger than the Planck length. This paper attempts to explore the experimental perspectives to settle this question, either in the lab or through measurements of phenomena of cosmological origin.
high energy physics theory
Van der Waals forces between atoms and molecules are universally assumed to act along the line separating them. Inspired by recent works on effects which can propel atoms parallel to a macroscopic surface via the Casimir--Polder force, we predict a lateral van der Waals force between two atoms, one of which is in an excited state with non-zero angular momentum while the other is isotropic and in its ground state. The resulting force acts in the same way as a planetary gear, in contrast to the rack-and-pinion motion predicted in works on the lateral Casimir--Polder force in the analogous case, for which the force predicted here is the microscopic origin. We illustrate the effect by predicting the trajectories of an excited caesium atom in the vicinity of a ground-state rubidium atom, finding behaviour qualitatively different from that obtained if lateral forces are ignored.
quantum physics
Nonlinear control methodologies have successfully realized stable human-like walking on powered prostheses. However, these methods are typically restricted to model-independent controllers because of the unknown human dynamics acting on the prosthesis. This paper overcomes this restriction by introducing the notion of a separable subsystem control law, independent of the full system dynamics. By constructing an equivalent subsystem, we calculate the control law with local information. We build a subsystem model of a general open-chain manipulator to demonstrate the control method's applicability. Employing these methods for an amputee-prosthesis model, we develop a model-dependent prosthesis controller that relies solely on measurable states and inputs but is equivalent to a controller developed with knowledge of the human dynamics and states.
computer science
A single-photon CMOS image sensor design based on pinned photodiode (PPD) with multiple charge transfers and sampling is described. In the proposed pixel architecture, the photogenerated signal is sampled non-destructively multiple times and the results are averaged. Each signal measurement is statistically independent and by averaging the electronic readout noise is reduced to a level where single photons can be distinguished reliably. A pixel design using this method has been simulated in TCAD and several layouts have been generated for a 180 nm CMOS image sensor process. Using simulations, the noise performance of the pixel has been determined as a function of the number of samples, sense node capacitance, sampling rate, and transistor characteristics. The strengths and the limitations of the proposed design are discussed in detail, including the trade-off between noise performance and readout rate and the impact of charge transfer inefficiency. The projected performance of our first prototype device indicates that single-photon imaging is within reach and could enable ground-breaking performance in many scientific and industrial imaging applications.
physics
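The core noise argument above can be checked in a few lines: averaging N statistically independent non-destructive reads reduces the effective read noise roughly as 1/sqrt(N). The numbers below are illustrative, and the model is idealized (it ignores charge-transfer inefficiency and correlated noise, which the paper discusses as real limitations).

```python
import numpy as np

rng = np.random.default_rng(6)
signal_e, read_noise_e, N = 1.0, 2.5, 400   # 1 photoelectron, 2.5 e- rms read noise
reads = signal_e + rng.normal(0, read_noise_e, size=(10_000, N))
estimate = reads.mean(axis=1)               # average the N independent samples
print(f"effective read noise: {estimate.std():.3f} e- "
      f"(expected ~ {read_noise_e / np.sqrt(N):.3f} e-)")
```

At ~0.125 e- rms, a one-electron signal sits eight standard deviations above zero, which is what makes reliable single-photon discrimination plausible.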
The analysis of the LHCb data on $X(6900)$, found in the di-$\jpsi$ system, is performed using a momentum-dependent Flatt\`{e}-like parameterization. The pole counting rule and the spectral density function sum rule give the consistent conclusion that both confining states and molecular states are possible, i.e., the nature of $X(6900)$ cannot be distinguished, if only the di-$\jpsi$ experimental data with current statistics are available. We suggest that experiments measure other channels, such as $\jpsi \psi(3770)$, $\jpsi \psi_2(3823)$, $\jpsi \psi_3(3842)$, and $\chi_{c0} \chi_{c1}$, to better understand the nature of $X(6900)$, and $\jpsi \psi(4160)$, $\chi_{c0}\chi_{c1}(3872)$, etc., to confirm the $X(7200)$ state.
high energy physics phenomenology
Starting from the operator algebra of the (1+1)D Ising model on a spatial lattice, this paper explicitly constructs a subalgebra of smooth operators that are natural candidates for continuum fields in the scaling limit. At the critical value of the transverse field, these smooth operators are analytically shown to reproduce the operator product expansions found in the Ising conformal field theory.
high energy physics theory
Current front-ends for robust automatic speech recognition (ASR) include masking- and mapping-based deep learning approaches to speech enhancement. A recently proposed deep learning approach to a priori SNR estimation, called Deep Xi, was able to produce enhanced speech at a higher quality and intelligibility than current masking- and mapping-based approaches. Motivated by this, we investigate Deep Xi as a front-end for robust ASR. Deep Xi is evaluated using real-world non-stationary and coloured noise sources at multiple SNR levels. Our experimental investigation shows that Deep Xi as a front-end is able to produce a lower word error rate than recent masking- and mapping-based deep learning front-ends. The results presented in this work show that Deep Xi is a viable front-end, able to significantly increase the robustness of an ASR system. Availability: Deep Xi is available at: https://github.com/anicolson/DeepXi
electrical engineering and systems science
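To show where an a priori SNR estimate slots into an enhancement front-end, here is a standard Wiener gain applied per spectral bin. The constant `xi_hat` is a stand-in for the network's output; the actual Deep Xi pipeline has its own analysis/synthesis chain, and other gain rules (e.g., MMSE-based estimators) are common in this role.

```python
import numpy as np

def wiener_gain(xi):
    """Spectral gain from an a priori SNR estimate xi (per bin)."""
    return xi / (1.0 + xi)

rng = np.random.default_rng(7)
noisy_mag = np.abs(np.fft.rfft(rng.normal(size=512)))  # magnitude spectrum
xi_hat = np.full_like(noisy_mag, 3.0)   # stand-in for the network's output
enhanced_mag = wiener_gain(xi_hat) * noisy_mag         # attenuate noisy bins
print("gain at xi = 3:", wiener_gain(3.0))             # 0.75
```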
Edge and fog computing have grown popular as IoT deployments become widespread. While application composition and scheduling on such resources are being explored, there remains a gap in distributed data storage services at the edge and fog layers, which instead depend solely on the cloud for data persistence. Such a service should reliably store and manage data on fog and edge devices, even in the presence of failures, and offer transparent discovery and access to data for use by edge computing applications. Here, we present ElfStore, a first-of-its-kind edge-local federated store for streams of data blocks. It uses reliable fog devices as a super-peer overlay to monitor the edge resources, offers federated metadata indexing using Bloom filters, locates data within two hops, and maintains approximate global statistics about the reliability and storage capacity of edges. Edges host the actual data blocks, and we use a unique differential replication scheme to select the edges on which to replicate blocks, to guarantee a minimum reliability and to balance storage utilization. Our experiments on two virtual IoT deployments with 20 and 272 devices show that ElfStore has low overheads, is bound only by the network bandwidth, has scalable performance, and offers tunable resilience.
computer science
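A tiny Bloom filter of the sort a fog super-peer could use for federated metadata indexing: membership queries never give false negatives, only occasional false positives, which is what makes 2-hop data location cheap. This is illustrative only; ElfStore's actual index layout and partitioning are more involved, and the key format below is hypothetical.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: m-bit array, k hash functions."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _hashes(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, key):
        for h in self._hashes(key):
            self.bits |= 1 << h          # set the bit for each hash
    def might_contain(self, key):
        return all(self.bits >> h & 1 for h in self._hashes(key))

idx = BloomFilter()
idx.add("stream42/block7")               # hypothetical stream/block key
print(idx.might_contain("stream42/block7"),   # True
      idx.might_contain("stream9/block1"))    # almost surely False
```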
We present a novel stagewise strategy for improving greedy algorithms for sparse recovery. We demonstrate its efficiency for both synthesis and analysis sparse priors, in both cases showing computational efficiency and competitive reconstruction accuracy. In the synthesis case, we also provide theoretical guarantees for signal recovery that are on par with the existing perfect reconstruction bounds for relaxation-based solvers and other sophisticated greedy algorithms.
mathematics
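A StOMP-flavoured sketch of a stagewise greedy solver for the synthesis case: each stage admits all atoms whose residual correlation clears a threshold, then re-fits on the support by least squares. The thresholding rule and constants are assumptions for illustration; the paper's stagewise strategy and its guarantees may differ in detail.

```python
import numpy as np

def stagewise_greedy(A, y, n_stages=10, thresh=2.0):
    """Stagewise greedy sparse recovery with per-stage thresholding
    and least-squares re-fitting on the accumulated support."""
    m, n = A.shape
    support, r = set(), y.copy()
    for _ in range(n_stages):
        c = A.T @ r                                  # residual correlations
        sigma = np.linalg.norm(r) / np.sqrt(m)       # noise-level proxy
        new = set(np.where(np.abs(c) > thresh * sigma)[0]) - support
        if not new:
            break
        support |= new
        S = sorted(support)
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ x_S                        # update residual
    x = np.zeros(n)
    if support:
        x[sorted(support)] = x_S
    return x

rng = np.random.default_rng(8)
A = rng.normal(size=(80, 200)) / np.sqrt(80)
x0 = np.zeros(200); x0[[3, 50, 120]] = [1.5, -2.0, 1.0]
x_hat = stagewise_greedy(A, A @ x0)
print("support found:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

Admitting several atoms per stage is what cuts the iteration count relative to one-atom-at-a-time OMP, at the price of occasionally admitting spurious atoms that the least-squares re-fit then zeroes out.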
The outstanding transport properties expected at the edge of two-dimensional time-reversal invariant topological insulators have proven to be challenging to realize experimentally, and have so far only been demonstrated in very short devices. In search for an explanation to this puzzling observation, we here report a full first-principles calculation of topologically protected transport at the edge of novel quantum spin Hall insulators - specifically, Bismuth and Antimony halides - based on the non-equilibrium Green's functions formalism. Our calculations unravel two different scattering mechanisms that may affect two-dimensional topological insulators, namely time-reversal symmetry breaking at vacancy defects and inter-edge scattering mediated by multiple co-operating impurities, possibly non-magnetic. We discuss their drastic consequences for typical non-local transport measurements as well as strategies to mitigate their negative impact. Finally, we provide an instructive comparison of the transport properties of topologically protected edge states to those of the trivial edge states in MoS$_2$ ribbons. Although we focus on a few specific cases (in terms of materials and defect types) our results should be representative for the general case and thus have significance beyond the systems studied here.
condensed matter
This paper proposes a new minimum description length procedure to detect multiple changepoints in time series data when some times are a priori thought more likely to be changepoints. This scenario arises with temperature time series homogenization pursuits, our focus here. Our Bayesian procedure constructs a natural prior distribution for the situation, and is shown to estimate the changepoint locations consistently, with an optimal convergence rate. Our methods substantially improve changepoint detection power when prior information is available. The methods are also tailored to bivariate data, allowing changes to occur in one or both component series.
statistics
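One way to encode "some times are a priori more likely to be changepoints" is a location-dependent penalty inside an optimal-partitioning dynamic program, sketched below for mean shifts in Gaussian noise. The paper's MDL/Bayesian criterion is a refined version of this idea; the penalty values here are arbitrary illustrations (e.g., a documented station move at time 60).

```python
import numpy as np

def penalized_changepoints(y, penalty):
    """O(n^2) DP minimizing segment SSE plus penalty[i] per changepoint
    at time i; a priori likely times get smaller penalties."""
    n = len(y)
    cs = np.insert(np.cumsum(y), 0, 0)
    cs2 = np.insert(np.cumsum(y**2), 0, 0)
    def sse(i, j):                       # within-segment sum of squares, y[i:j]
        s, s2, m = cs[j] - cs[i], cs2[j] - cs2[i], j - i
        return s2 - s * s / m
    F = np.full(n + 1, np.inf); F[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = F[i] + sse(i, j) + (penalty[i] if i > 0 else 0.0)
            if c < F[j]:
                F[j], back[j] = c, i
    cps, j = [], n                       # backtrack the optimal partition
    while back[j] > 0:
        cps.append(back[j]); j = back[j]
    return sorted(cps)

rng = np.random.default_rng(9)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 60)])
pen = np.full(len(y), 12.0); pen[60] = 4.0   # metadata favours time 60
print(penalized_changepoints(y, pen))
```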
In the past few years, supervised and adversarial learning have been widely adopted in various complex computer vision tasks. It seems natural to wonder whether another branch of artificial intelligence, commonly known as Reinforcement Learning (RL), can benefit such complex vision tasks. In this study, we explore the plausible use of RL in the super resolution of remote sensing imagery. Guided by recent advances in super resolution, we propose a theoretical framework that leverages the benefits of supervised and reinforcement learning. We argue that a straightforward implementation of RL is not adequate to address ill-posed super resolution, as the action variables are not fully known. To tackle this issue, we propose to parameterize action variables by matrices, and to train our policy network using Monte-Carlo sampling. We study the implications of a parametric action space in a model-free environment from theoretical and empirical perspectives. Furthermore, we analyze the quantitative and qualitative results on both remote sensing and non-remote sensing datasets. Based on our experiments, we report considerable improvement over state-of-the-art methods by encapsulating supervised models in a reinforcement learning framework.
computer science
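A toy of the matrix-parameterized action idea: the policy is a Gaussian over an upsampling matrix, trained by Monte-Carlo (REINFORCE) sampling against a 1-D degradation model. Everything here (signal size, degradation, learning rates) is an assumption to make the mechanism concrete, and convergence is expectedly noisy; the paper's policy is a deep network acting on images.

```python
import numpy as np

rng = np.random.default_rng(10)
x = rng.normal(size=4)              # ground-truth "high-res" signal
y = x.reshape(2, 2).mean(1)         # toy low-res observation (2x downsampling)

mu = np.zeros((4, 2))               # policy mean over matrix-valued actions
sigma, lr, baseline = 0.1, 0.01, None
for step in range(5000):
    K = mu + sigma * rng.normal(size=mu.shape)    # sample a matrix action
    reward = -np.sum((K @ y - x) ** 2)            # negative reconstruction error
    baseline = reward if baseline is None else 0.99 * baseline + 0.01 * reward
    score = (K - mu) / sigma**2                   # Gaussian score function
    mu += lr * (reward - baseline) * score        # REINFORCE ascent step
print("reconstruction error at the policy mean:", np.sum((mu @ y - x) ** 2))
```

Sampling whole matrices sidesteps the problem that the "correct" action variables are not fully known in ill-posed super resolution: the policy gradient only needs rewards, not per-variable supervision.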