text (stringlengths 11 to 9.77k) | label (stringlengths 2 to 104) |
---|---|
Feedback-based online optimization algorithms have gained traction in recent years because of their simple implementation, their ability to reject disturbances in real time, and their increased robustness to model mismatch. While the robustness properties have been observed both in simulation and experimental results, the theoretical analysis in the literature is mostly limited to nominal conditions. In this work, we propose a framework to systematically assess the robust stability of feedback-based online optimization algorithms. We leverage tools from monotone operator theory, variational inequalities and classical robust control to obtain tractable numerical tests that guarantee robust convergence properties of online algorithms in feedback with a physical system, even in the presence of disturbances and model uncertainty. The results are illustrated via an academic example and a case study of a power distribution system. | mathematics |
Inertial measurement units are commonly used in a growing number of application fields to track or capture motions of kinematic chains, such as human limbs, exoskeletons or robotic actuators. A major challenge is the presence of magnetic disturbances that result in unreliable magnetometer readings. Recent research revealed that this problem can be overcome by exploitation of kinematic constraints. While typically each segment of the kinematic chain is equipped with an IMU, a novel approach called sparse inertial motion tracking aims at inferring the complete motion states from measurements of a reduced set of sensors. In the present contribution, we combine the magnetometer-free and the sparse approach for real-time motion tracking of double-hinge joint systems with non-parallel joint axes. Analyzing the observability of the system, we find a condition which assures that the relative orientations between all segments are uniquely determined by a kinematic constraint, which contains only the gyroscope readings. Furthermore, we propose a moving-horizon estimator and validate it in a simulation study of three movements with different degrees of excitation. The results of this study confirm all theoretical conjectures and demonstrate that magnetometer-free sparse inertial real-time motion tracking is feasible under precise and simple excitation conditions. | electrical engineering and systems science |
We study self-interacting dark matter (SIDM) scenarios in which the $s$-wave self-scattering cross section almost saturates the unitarity bound. Such self-scattering cross sections are parameterized solely by the dark matter mass and exhibit strong velocity dependence over a wide range of velocities. They may be indicated by observations of dark matter halos over a wide range of masses, from the Milky Way's dwarf spheroidal galaxies to galaxy clusters. We pin down the model parameters that saturate the unitarity bound in well-motivated SIDM models: the gauged $L_{\mu} - L_{\tau}$ model and the composite asymmetric dark matter model. We discuss the implications and predictions of such model parameters for cosmology, such as the $H_{0}$ tension, for dark-matter direct-detection experiments, and for particle phenomenology, such as beam-dump experiments. | high energy physics phenomenology |
Some generalizations of the Sachdev-Ye-Kitaev (SYK) model and different patterns of their reparametrization symmetry breaking are discussed. The analysis of such (pseudo)holographic systems relates their generalized one-dimensional Schwarzian dynamics to (quasi) two-dimensional Liouvillian quantum mechanics. As compared to the original SYK case, the latter might be dissipative or have discrete states in its spectrum, either of which properties alters thermodynamics and correlations while preserving the underlying $SL(2,R)$ symmetry. | high energy physics theory |
We review the present knowledge of $\alpha_s$, the fundamental coupling underlying the interactions of quarks and gluons in QCD. The dependence of $\alpha_s(Q^2)$ on momentum transfer $Q$ encodes the underlying dynamics of hadron physics, from color confinement in the infrared domain to asymptotic freedom at short distances. We review constraints on $\alpha_s(Q^2)$ at high $Q^2$, as predicted by perturbative QCD, and its analytic behavior at small $Q^2$, based on models of nonperturbative dynamics. In the introductory part of this review, we explain the phenomenological meaning of $\alpha_s$, the reason for its running, and the challenges facing a complete understanding of its analytic behavior in the infrared domain. In the second, more technical, part of the review, we discuss the behavior of $\alpha_s(Q^2)$ in the high-$Q^2$ domain of QCD. We review how $\alpha_s$ is defined, including its renormalization scheme dependence, the definition of its renormalization scale, the utility of effective charges, as well as Commensurate Scale Relations, which connect the various definitions of $\alpha_s$ without renormalization-scale ambiguity. We also report recent measurements and theoretical analyses which have led to precise QCD predictions at high energy. In the last part of the review, we discuss the challenge of understanding the analytic behavior of $\alpha_s(Q^2)$ in the infrared domain. We also review important methods for computing $\alpha_s$, including lattice QCD, the Schwinger-Dyson equations, the Gribov-Zwanziger analysis and light-front holographic QCD. After describing these approaches and enumerating their conflicting predictions, we discuss the origin of these discrepancies and how to remedy them. Our aim is not only to review the advances in this difficult area, but also to suggest what could be an optimal definition of $\alpha_s$ in order to bring better unity to the subject. | high energy physics phenomenology |
We compute the next-to-leading order virtual corrections to the partonic cross-section of the process $gg\to ZH$, in the high-energy and large-$m_t$ limits. We use Pad\'e approximants to increase the radius of convergence of the high-energy expansion in $m_t^2/s$, $m_t^2/t$ and $m_t^2/u$ and show that precise results can be obtained down to energies which are fairly close to the top quark pair threshold. We present results both for the form factors and the next-to-leading order virtual cross-section. | high energy physics phenomenology |
Recent progress in studies of holographic dualities, originally motivated by insights from string theory, has led to a confluence with concepts and techniques from quantum information theory. A particularly successful approach has involved capturing holographic properties by means of tensor networks which not only give rise to physically meaningful correlations of holographic boundary states, but also reproduce and refine features of quantum error correction in holography. This topical review provides an overview of recent successful realizations of such models. It does so by building on an introduction to the theoretical foundations of AdS/CFT and necessary quantum information concepts, many of which have themselves developed into independent, rapidly evolving research fields. | quantum physics |
One of the challenges in the NLP field is training large classification models, a task that is both difficult and tedious. It is even harder when GPU hardware is unavailable. The increased availability of pre-trained and off-the-shelf word embeddings, models, and modules aims at easing the process of training large models and achieving a competitive performance. We explore the use of off-the-shelf BERT models, share the results of our experiments, and compare them to those of LSTM networks and simpler baselines. We show that the complexity and computational cost of BERT are not a guarantee of enhanced predictive performance in the classification tasks at hand. | computer science |
This letter focuses on power allocation schemes for a basic multicast cell with wireless regenerative network coding (RNC). In RNC, mixed signals received from the two sources are jointly decoded by the relay, where the decoded symbols are superposed in either the complex field (RCNC) or the Galois field (RGNC) before being retransmitted. We deduce the optimal power allocation based on statistical channel state information (CSI) and give a comparison between the two RNCs. When instantaneous CSI is available at each transmitter, we propose a suboptimal power allocation for RCNC, which achieves better performance. | computer science |
Orthogonal frequency-division multiplexing (OFDM) with subcarrier number modulation (OFDM-SNM) offers high spectral efficiency (SE) and low-complexity signal estimation. To exploit the spatial gain of OFDM-SNM, we propose a relay-assisted OFDM-SNM scheme for multi-hop cooperative systems in this letter. It is stipulated that a relay operating under decode-and-forward (DF) and half-duplex (HD) protocols exists in each hop. We analyze the outage performance of the relay-assisted OFDM-SNM system. The average outage probability is approximated in closed form. Moreover, to reveal the diversity and coding gains of relay-assisted OFDM-SNM, we explore the asymptotic outage performance at high signal-to-noise ratio (SNR) by power series expansion. To verify the improvement in outage performance and energy efficiency, we compare different multi-hop systems with traditional OFDM-SNM, fixing the distance from source to destination. Simulation results corroborate the derived outage probabilities and provide insight into the proposed system. | electrical engineering and systems science |
Using the available data on deeply virtual Compton scattering (DVCS) off protons and utilizing neural networks enhanced by the dispersion relation constraint, we determine six out of eight leading Compton form factors in the valence quark kinematic region. Furthermore, adding recent data on DVCS off neutrons, we separate contributions of up and down quarks to the dominant form factor, thus paving the way towards a three-dimensional picture of the nucleon. | high energy physics phenomenology |
WhatsApp is the most popular messaging app in the world. Due to its popularity, WhatsApp has become a powerful and cheap tool for political campaigning and was widely used during the 2019 Indian general election to connect with voters on a large scale. Along with the campaigning, there have been reports that WhatsApp has also become a breeding ground for harmful speech against various protected groups and religious minorities. Many such messages attempt to instil fear among the population about a specific (minority) community. According to research on inter-group conflict, such `fear speech' messages could have a lasting impact and might lead to real offline violence. In this paper, we perform the first large-scale study on fear speech across thousands of public WhatsApp groups discussing politics in India. We curate a new dataset and try to characterize fear speech from this dataset. We observe that users writing fear speech messages use various events and symbols to create the illusion of fear in the reader about a target community. We build models to classify fear speech and observe that current state-of-the-art NLP models do not perform well at this task. Fear speech messages tend to spread faster and could potentially go undetected by classifiers built to detect traditional toxic speech due to their low toxicity. Finally, using a novel methodology to target users with Facebook ads, we conduct a survey among the users of these WhatsApp groups to understand the types of users who consume and share fear speech. We believe that this work opens up new research questions that are very different from tackling hate speech, which the research community has traditionally been involved in. | computer science |
We solve for the elementary excitations in infinite quasi-1D quantum lattices by extending the recently developed infinite quasi-1D entanglement perturbation theory. The wave function of an excited state is variationally determined by optimizing a superposition of cluster operations, each composed of simultaneous on-site operations inside a block of lattice sites, applied to the ground state in the form of a plane wave. The excitation energy with respect to the wave number gives the spectrum of an elementary excitation. Our method is free of artificial broadening and is adaptable to various quasi-particle pictures. Using the triplet spectrum, the application to $\infty$-by-$N$ antiferromagnetic spin-$\frac{1}{2}$ ladders for $N=2, 4, 6, 8$, and $10$ confirms a previous report that there is a quantum dimensional transition, namely, the lattice transitions from quasi-1D to 2D at a finite critical value $N_c=10$. The massless triplet dispersion at $\left( \pi, \pi \right)$ shows a vanishing gap. Our results detect the anomaly at $\left(\pi,0\right)$ in the triplet spectrum, agreeing well with the inelastic neutron scattering measurement of a macroscopic sample. Surprisingly, our results also reveal a gapless and massive 1D singlet dispersion channel that lies much lower than the triplet excitation. We note, however, that the dimensional transition is determined by the massless triplet dispersion. | condensed matter |
Modern statistical software and machine learning libraries are enabling semi-automated statistical inference. Within this context, it appears easier and easier to try and fit many models to the data at hand, thereby reversing the Fisherian way of conducting science by collecting data after the scientific hypothesis (and hence the model) has been determined. The renewed goal of the statistician becomes to help the practitioner choose among such large and heterogeneous families of models, a task known as model selection. The Bayesian paradigm offers a systematized way of assessing this problem. This approach, launched by Harold Jeffreys in his 1935 book Theory of Probability, has witnessed a remarkable evolution over the last decades, which has brought about several new theoretical and methodological advances. Some of these recent developments are the focus of this survey, which tries to present a unifying perspective on work carried out by different communities. In particular, we focus on non-asymptotic out-of-sample performance of Bayesian model selection and averaging techniques, and draw connections with penalized maximum likelihood. We also describe recent extensions to wider classes of probabilistic frameworks including high-dimensional, unidentifiable, or likelihood-free models. | statistics |
We propose the factorized action variational autoencoder (FAVAE), a state-of-the-art generative model for learning disentangled and interpretable representations from sequential data via the information bottleneck without supervision. The purpose of disentangled representation learning is to obtain interpretable and transferable representations from data. We focus on the disentangled representation of sequential data since there is a wide range of potential applications if disentangled representation learning is extended to sequential data such as video, speech, and stock market data. Sequential data are characterized by dynamic and static factors: dynamic factors are time dependent, and static factors are independent of time. Previous models disentangle static and dynamic factors by explicitly modeling the priors of latent variables to distinguish between these factors. However, these models cannot disentangle representations between dynamic factors, such as disentangling "picking up" and "throwing" in robotic tasks. FAVAE can disentangle multiple dynamic factors. Since it does not require modeling priors, it can disentangle "between" dynamic factors. We conducted experiments to show that FAVAE can extract disentangled dynamic factors. | statistics |
This paper is concerned with the design of a non-intrusive model order reduction (MOR) for the system of parametric time-domain Maxwell equations. A time- and parameter-independent reduced basis (RB) is constructed by using a two-step proper orthogonal decomposition (POD) technique from a collection of full-order electromagnetic field solutions, which are generated via a discontinuous Galerkin time-domain (DGTD) solver. The mapping between the time/parameter values and the projection coefficients onto the RB space is approximated by a Gaussian process regression (GPR). Based on the data characteristics of electromagnetic field solutions, the singular value decomposition (SVD) is applied to extract the principal components of the training data of each projection coefficient, and the GPR models are trained for time- and parameter-modes respectively, by which the final global regression function can be represented as a linear combination of these time- and parameter-Gaussian processes. The extraction of the RB and the training of GPR surrogate models are both completed in the offline stage. Then the field solution at any new input time/parameter point can be directly recovered in the online stage as a linear combination of the RB with the regression outputs as the coefficients. In virtue of its non-intrusive nature, the proposed POD-GPR framework, which is equation-free, decouples the offline and online stages completely, and hence can predict the electromagnetic solution fields at unseen parameter locations quickly and effectively. The performance of our method is illustrated by a scattering problem of a multi-layer dielectric cylinder. | mathematics |
We calculate the complete tree and one-loop matching of the dimension 6 Standard Model Effective Field Theory (SMEFT) with unbroken $U(3)^5$ flavour symmetry to the operators of the Weak Effective Theory (WET) which are responsible for flavour changing neutral current effects among down-type quarks. We also explicitly calculate the effects of SMEFT corrections to input observables on the WET Wilson coefficients, a necessary step on the way to a well-defined, complete prediction. These results will enable high-precision flavour data to be incorporated into global fits of the SMEFT at high energies, where the flavour symmetry assumption is widespread. | high energy physics phenomenology |
Understanding the total flux and polarization signals of Earth-like planets and their spectral and temporal variability is essential for the future characterization of such exoplanets. We provide computed total (F) and linearly (Q and U) and circularly (V) polarized fluxes, and the degree of polarization P of sunlight that is reflected by a model Earth, to be used for instrument designs, optimizing observational strategies, and/or developing retrieval algorithms. We modeled a realistic Earth-like planet using one year of daily Earth-observation data: cloud parameters (distribution, optical thickness, top pressure, and particle effective radius), and surface parameters (distribution, surface type, and albedo). The Stokes vector of the disk-averaged reflected sunlight was computed for phase angles alpha from 0 to 180 degrees, and for wavelengths lambda from 350 to 865 nm. The total flux F is one order of magnitude higher than the polarized flux Q, and Q is two and four orders of magnitude higher than U and V, respectively. Without clouds, the peak-to-peak daily variations due to the planetary rotation increase with increasing lambda for F, Q, and P, while they decrease for U and V. Clouds modify but do not completely suppress the variations that are due to rotating surface features. With clouds, the variation in F increases with increasing lambda, while in Q, it decreases with increasing lambda, except at the largest phase angles. In earlier work, it was shown that with oceans, Q changes color from blue through white to red. The alpha where the color changes increases with increasing cloud coverage. Here, we show that this unique color change in Q also occurs when the oceans are partly replaced by continents, with or without clouds. The degree of polarization P shows a similar color change. Our computed fluxes and degree of polarization will be made publicly available. | astrophysics |
The recently introduced 4D 64-ary polarisation-ring-switching format is investigated in dispersion-managed systems. Numerical simulations show a reach increase of $25\%$ with respect to PM-8QAM. This gain is achieved from the nonlinear tolerance of the format and a 4D demapper using correlated noise assumptions. | electrical engineering and systems science |
This paper is a continuation of [17] where we introduced the basic framework of nonstandard large-scale topology. In the present paper, we apply our framework to various topics in large-scale topology: spaces interacting with both small-scale and large-scale structures, large-scale structures on nonstandard extensions, size properties of subsets of coarse spaces, and coarse hyperspaces. | mathematics |
Background: Natural or quasi experiments are appealing for public health research because they enable the evaluation of events or interventions that are difficult or impossible to manipulate experimentally, such as many policy and health system reforms. However, there remains ambiguity in the literature about their definition and how they differ from randomised controlled experiments and from other observational designs. Methods: We conceptualise natural experiments in the context of public health evaluations, align the study design to the Target Trial Framework, and provide recommendations for improving their design and reporting. Results: Natural experiment studies combine features of experiments and non-experiments. They differ from RCTs in that exposure allocation is not controlled by researchers, while they differ from other observational designs in that they evaluate the impact of event or exposure changes. As a result, they are, in theory, less susceptible to bias than other observational study designs. Importantly, the strength of causal inferences relies on the plausibility that the exposure allocation can be considered "as-if randomised". The Target Trial Framework provides a systematic basis for assessing the plausibility of such claims, and enables a structured method for assessing other design elements. Conclusions: Natural experiment studies should be considered a distinct study design rather than a set of tools for analyses of non-randomised interventions. Alignment of natural experiments to the Target Trial Framework will clarify the strength of evidence underpinning claims about the effectiveness of public health interventions. | statistics |
We consider universal approximations of symmetric and anti-symmetric functions, which are important for applications in quantum physics, as well as other scientific and engineering computations. We give constructive approximations with explicit bounds on the number of parameters with respect to the dimension and the target accuracy $\epsilon$. While the approximation still suffers from the curse of dimensionality, to the best of our knowledge, these are the first results in the literature with explicit error bounds. Moreover, we also discuss neural network architectures that can be suitable for approximating symmetric and anti-symmetric functions. | mathematics |
Current trends suggest that significant gender disparities exist within Science, Technology, Engineering, and Mathematics (STEM) education at university, with female students being underrepresented in physics but more equally represented in the life sciences (e.g., biology, medicine). To understand these trends, it is important to consider the context in which students make decisions about which university courses to enrol in. The current study seeks to investigate gender differences in STEM through a unique approach that combines network analysis of student enrolment data with an interpretive lens based on the sociological theory of Pierre Bourdieu. We generate a network of courses taken by around 9000 undergraduate physics students (from 2009 to 2014) to quantify Bourdieu's concept of field. We explore the properties of this network to investigate gender differences in transverse movements (between different academic fields) and vertical movements (changes in students' achievement rankings within a field). Our findings indicate that female students are more likely to make transverse movements into life science fields. We also find that university physics does a poor job of attracting high-achieving students, and especially high-achieving female students. Of the students who do choose to study physics, low-achieving female students are less likely to continue than their male counterparts. The results and implications are discussed in the context of Bourdieu's theory and previous research. We argue that in order to remove constraints on female students' study choices, the field of physics needs to provide a culture in which all students feel like they belong. | physics |
The recent association between IC-170922A and the blazar TXS0506+056 highlights the importance of real-time observations for identifying possible astrophysical neutrino sources. Thanks to its near-100\% duty cycle, 4$\pi$ steradian field of view, and excellent sensitivity over many decades of energy, IceCube is well suited both to generate alerts for follow-up by other instruments and to rapidly follow up alerts generated by other instruments. Detection of neutrinos in coincidence with transient astrophysical phenomena serves as a smoking gun for hadronic processes and supplies essential information about the identities and mechanisms of cosmic-ray accelerators. In 2016, the IceCube Neutrino Observatory established a pipeline to rapidly search for neutrinos from astrophysical transients on timescales ranging from a fraction of a second to multiple weeks. Since then, 67 dedicated analyses have been performed searching for associations between IceCube neutrinos and astrophysical transients reported by radio, optical, X-ray, and gamma-ray instruments in addition to searching for lower energy neutrino signals in association with IceCube's own high-energy alerts. We present the event selection, maximum likelihood analysis method, and sensitivity of the IceCube real-time pipeline. We also summarize the results of all follow-up analyses to date. | astrophysics |
Time series forecasting is an active research topic in academia as well as industry. Although we see an increasing amount of adoptions of machine learning methods in solving some of those forecasting challenges, statistical methods remain powerful while dealing with low granularity data. This paper introduces a refined Bayesian exponential smoothing model with the help of probabilistic programming languages including Stan. Our model refinements include additional global trend, transformation for multiplicative form, noise distribution and choice of priors. A benchmark study is conducted on a rich set of time-series data sets for our models along with other well-known time series models. | statistics |
Non-local correlations that obey the no-signalling principle contain intrinsic randomness. In particular, for a specific Bell experiment, one can derive relations between the amount of randomness produced, as quantified by the min-entropy of the output data, and its associated violation of a Bell inequality. In practice, due to finite sampling, certifying randomness requires the development of statistical tools to lower-bound the min-entropy of the data as a function of the estimated Bell violation. The quality of such bounds relies on the choice of certificate, i.e., the Bell inequality whose violation is estimated. In this work, we propose a method for choosing efficiently such a certificate. It requires sacrificing a part of the output data in order to estimate the underlying correlations. Regularising this estimate then allows one to find a Bell inequality that is well suited for certifying practical randomness from these specific correlations. We then study the effects of various parameters on the obtained min-entropy bound and explain how to tune them in a favourable way. Lastly, we carry out several numerical simulations of a Bell experiment to show the efficiency of our method: we nearly always obtain higher min-entropy rates than when we use a pre-established Bell inequality, namely the Clauser-Horne-Shimony-Holt inequality. | quantum physics |
Neural networks (NNs) usually hinder any insight into the reasoning behind their predictions. We demonstrate how influence functions can unravel the black box of a NN trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling. The results provide strong evidence that the NN correctly learns an order parameter describing the quantum transition in this model. We demonstrate that influence functions allow one to check that the network, trained to recognize known quantum phases, can predict new unknown ones within the data set. Moreover, we show that they can guide physicists in understanding patterns responsible for the phase transition. This method requires no a priori knowledge of the order parameter, has no dependence on the NN's architecture or the underlying physical model, and is therefore applicable to a broad class of physical models or experimental data. | quantum physics |
Graphite and diamond are two well-known allotropes of carbon with distinct physical properties due to different atomic connectivity. Graphite has a layered structure in which the honeycomb carbon sheets can easily glide, while atoms in diamond are strongly bonded in all three dimensions. The transition from graphite to diamond has been a central subject in physical science. One way to turn graphite into diamond is to apply high-pressure and high-temperature (HPHT) conditions. However, the atomistic mechanism of this transition is still under debate. From a series of large-scale molecular dynamics (MD) simulations, we report a mechanism in which the diamond nuclei originate at the graphite grain boundaries and propagate in two preferred directions. In addition to the widely accepted [001] direction, we found that the growth along the [120] direction of graphite is even faster. In this scenario, cubic diamond (CD) is the kinetically favorable product, while hexagonal diamond (HD) would appear as minor amounts of twinning structures in the two main directions. Following the crystallographic orientation relationship, the coherent interface t-(100)gr//(11-1)cd + [010]gr//[1-10]cd was also confirmed by a high-resolution transmission electron microscopy (HR-TEM) experiment. The proposed phase transition mechanism not only reconciles the longstanding debate regarding the role of HD in the graphite-diamond transition, but also yields atomistic insight into microstructure engineering via controlled solid phase transitions. | condensed matter |
Computational modeling of the properties of crystalline materials has become an increasingly important aspect of materials research, consuming hundreds of millions of CPU-hours at scientific computing centres around the world each year, if not more. A routine operation in such calculations is the evaluation of integrals over the Brillouin zone. We have previously demonstrated that performing such integrals using generalized Monkhorst-Pack k-point grids can roughly double the speed of these calculations relative to the widely-used traditional Monkhorst-Pack grids, and such grids can be rapidly generated by querying a free, internet-accessible database of pre-generated grids. To facilitate the widespread use of generalized k-point grids, we present new algorithms that allow rapid generation of optimized generalized Monkhorst-Pack grids on the fly, an open-source library to facilitate their integration into external software packages, and an open-source implementation of the database tool that can be used offline. We also present benchmarks of the speed of our algorithms on structures randomly selected from the Inorganic Crystal Structure Database. For grids that correspond to a real-space supercell with at least 50 angstroms between lattice points, which is sufficient to converge density functional theory calculations within 1 meV/atom for nearly all materials, our algorithm finds optimized grids in an average of 0.19 seconds on a single processing core. For 100 angstroms between real-space lattice points, our algorithm finds optimal grids in less than 5 seconds on average. | physics |
The Zeeman effect is of limited utility for probing the magnetism of the quiet solar chromosphere. The Hanle effect in some spectral lines is sensitive to such magnetism, but the interpretation of the scattering polarization signals requires taking into account that the chromospheric plasma is highly inhomogeneous and dynamic (i.e., that the magnetic field is not the only cause of symmetry breaking). Here we investigate the reliability of a well-known formula for mapping the azimuth of chromospheric magnetic fields directly from the scattering polarization observed in the \ion{Ca}{2}~8542~\AA\, line, which is typically in the saturation regime of the Hanle effect. To this end, we use the Stokes profiles of the \ion{Ca}{2}~8542~\AA\, line computed with the PORTA radiative transfer code in a three-dimensional (3D) model of the solar chromosphere, degrading them to mimic spectropolarimetric observations for a range of telescope apertures and noise levels. The simulated observations are used to obtain the magnetic field azimuth at each point of the field of view, which we compare with the actual values within the 3D model. We show that, apart from intrinsic ambiguities, the method provides solid results. Their accuracy depends more on the noise level than on the telescope diameter. Large-aperture solar telescopes, like DKIST and EST, are needed to achieve the required noise-to-signal ratios using reasonable exposure times. | astrophysics |
Recent analysis suggests that the faint optical point source observed around Fomalhaut from 2004-2014 (Fomalhaut b) is gradually fading and expanding, supporting the case that it may be a dispersing dust cloud resulting from the sudden disruption of a planetesimal. These types of disruptions may arise from catastrophic collisions of planetesimals, which are perturbed from their original orbits in the Fomalhaut dust ring by nearby giant planets. However, disruptions can also occur when the planetesimals pass within the tidal disruption field of the planet(s) that perturbed them in the first place, similar to the Shoemaker-Levy event observed in the Solar System. Given that a gravitationally focusing giant planet has a much larger interaction cross-section than a planetesimal, tidal disruption events can match or outnumber planetesimal collision events in realistic regions of parameter space. Intriguingly, the Fomalhaut dust cloud offers an opportunity to directly distinguish between these scenarios. A tidal disruption scenario leads to a very specific prediction of ephemerides for the planet causing the event. At a most probable mass of 66 Mearth, a semi-major axis of 117 AU, and a system age of 400-500 Myr, this planet would be readily detectable with the James Webb Space Telescope. The presence or absence of this planet at the specific, predicted position is therefore a distinctive indicator of whether the dispersing cloud originated from a collision of two planetesimals or from the disruption of a planetesimal in the tidal field of a giant planet. | astrophysics |
The Clifford group plays a central role in quantum randomized benchmarking, quantum tomography, and error correction protocols. Here we study the structural properties of this group. We show that any Clifford operator can be uniquely written in the canonical form $F_1HSF_2$, where $H$ is a layer of Hadamard gates, $S$ is a permutation of qubits, and $F_i$ are parameterized Hadamard-free circuits chosen from suitable subgroups of the Clifford group. Our canonical form provides a one-to-one correspondence between Clifford operators and layered quantum circuits. We report a polynomial-time algorithm for computing the canonical form. We employ this canonical form to generate a random uniformly distributed $n$-qubit Clifford operator in runtime $O(n^2)$. The number of random bits consumed by the algorithm matches the information-theoretic lower bound. A surprising connection is highlighted between random uniform Clifford operators and the Mallows distribution on the symmetric group. Variants of the canonical form are also discussed: one with a short Hadamard-free part, and one allowing a depth-$9n$ circuit implementation of arbitrary Clifford unitaries in the Linear Nearest Neighbor architecture. Finally, we study computational quantum advantage, where a classical reversible linear circuit can be implemented more efficiently using Clifford gates, and show an explicit example where such an advantage takes place. | quantum physics |
Although quality indicators play a crucial role in benchmarking evolutionary multi-objective optimization algorithms, their properties are still unclear. One promising approach for understanding quality indicators is the use of the optimal distribution of objective vectors that optimizes each quality indicator. However, it is difficult to obtain the optimal distribution for each quality indicator, especially when its theoretical property is unknown. Thus, optimal distributions for most quality indicators have not been well investigated. To address these issues, first, we propose a problem formulation of finding the optimal distribution for each quality indicator on an arbitrary Pareto front. Then, we approximate the optimal distributions for nine quality indicators using the proposed problem formulation. We analyze the nine quality indicators using their approximated optimal distributions on eight types of Pareto fronts of three-objective problems. Our analysis demonstrates that uniformly-distributed objective vectors over the entire Pareto front are not optimal in many cases. Each quality indicator has its own optimal distribution for each Pareto front. We also examine the consistency among the nine quality indicators. | computer science |
Fraud acts as a major deterrent to a company's growth if uncontrolled. It challenges the fundamental value of trust in the insurance business. COVID-19 brought the additional challenge of increased potential fraud to the health insurance business. This work describes the implementation of existing and enhanced fraud detection methods in the pre-COVID-19 and COVID-19 environments. For this purpose, we have developed an innovative enhanced fraud detection framework using actuarial and data science techniques. Triggers specific to COVID-19 are identified in addition to the existing triggers. We have also explored the relationship between insurance fraud and COVID-19. To determine this, we calculated the Pearson correlation coefficient and fitted a logarithmic regression model between fraud in health insurance and COVID-19 cases. This work uses two datasets: a health insurance dataset and a Kaggle dataset on COVID-19 cases for the same selected geographical location in India. Our experimental results show a Pearson correlation coefficient of 0.86, which implies that the month-on-month rate of fraudulent cases is highly correlated with the month-on-month rate of COVID-19 cases. The logarithmic regression performed on the data gave an r-squared value of 0.91, which indicates that the model is a good fit. This work aims to provide much-needed tools and techniques for the health insurance business to counter fraud. | statistics |
The traditional cycle of industrial products has been linear since its inception. Raw resources are acquired, processed, distributed, used and ultimately disposed of. This linearity has led to a dangerously low degree of efficiency in resource use, and has brought forth serious concerns for the viability of our natural ecosystem. The circular economy introduces a circular workflow for the lifetime of products. It generalizes the disposal phase, reconnecting it to manufacturing, distribution and end-use, thus limiting true deposition to the environment. This process has so far not been extended to software. Nonetheless, the development of software follows the same phases, and also entails the use, and waste, of considerable resources. These include human effort, as well as human and infrastructure sustenance products such as food, traveling and energy. This paper introduces circular economy principles to software development, and particularly to network management logic and security. It employs a recently proposed concept, the Socket Store, which is an online store distributing end-user network logic in modular form. The Store modules act as mediators between the end-user network logic and the network resources. It is shown that the Socket Store can implement all circular economy principles in the software life-cycle, with considerable reductions in resource waste. | computer science |
We present the design, bench-top setup, and experimental results of a compact heterodyne interferometer that achieves picometer-level displacement sensitivities in air over frequencies above 100 mHz. The optical configuration with spatially separated beams prevents frequency and polarization mixing, and therefore eliminates periodic errors. The interferometer is designed to maximize common-mode optical laser beam paths to obtain high rejection of environmental disturbances, such as temperature fluctuations and acoustics. The results of our experiments demonstrate the short- and long-term stabilities of the system during stationary and dynamic measurements. In addition, we provide measurements that compare our interferometer prototype with a commercial system, verifying our higher sensitivity of 3\,pm, higher thermal stability by a factor of two, and periodic-error-free performance. | physics |
In this work we obtain analytical expressions for the boundaries of the charged current quasi-elastic double differential cross section in terms of dimensionless energy and momentum transfers, for the Relativistic Fermi Gas (RFG) and the Super-Scaling approach with relativistic effective mass (SuSAM*) models, within the scaling formalism. In addition, this new double differential cross section in the scaling formalism has very good properties for implementation in Monte Carlo (MC) neutrino event generators, particularly because its peak is almost flat with the (anti)neutrino energy. This makes it especially well suited for event generation by the acceptance-rejection method usually used in neutrino generators. Finally, we analyze the total charged current quasi-elastic (CCQE) cross section $\sigma(E_{\nu})$ for both models and attribute the enhancement observed in the SuSAM* total cross section to the high-momentum components which are present, in a phenomenological way, in its scaling function, while these are absent in the RFG model. | high energy physics phenomenology |
We present a new bound on the ultra-light axion (ULA) dark matter mass $m_\text{a}$, using the Lyman-alpha forest to look for suppressed cosmic structure growth: a 95% lower limit $m_\text{a} > 2 \times 10^{-20}\,\text{eV}$. This strongly disfavors ($> 99.7\%$ credibility) the canonical ULA with $10^{-22}\,\text{eV} < m_\text{a} < 10^{-21}\,\text{eV}$, motivated by the string axiverse and solutions to possible tensions in the cold dark matter model. We strengthen previous equivalent bounds by about an order of magnitude. We demonstrate the robustness of our results using an optimized emulator of improved hydrodynamical simulations. | astrophysics |
We present the results of a large search for intrinsic HI 21 cm and OH 18 cm absorption in 145 compact radio sources in the redshift range 0.02< z <3.8 with the Green Bank Telescope. We re-detect HI 21 cm absorption toward six known absorption systems but detect no new HI or OH absorption in 102 interference-free sources. 79 sources have not previously been observed for HI 21 cm absorption. We recover a mean optical depth limit of $\tau_{3\sigma}<0.023$ for all the non-detections in the survey. Our results do not support the high intrinsic absorption rates found by previous studies in compact radio sources at low redshift. Our results do, however, support the hypothesis proposed by Curran et al. (2008) that high ultraviolet (UV) luminosity active galactic nuclei (AGN) do not show intrinsic HI 21 cm absorption, confirming a threshold of $L_{\rm UV} = 10^{23}$ W Hz$^{-1}$, above which our intrinsic absorption fraction is zero (54 sources). The exact nature of the UV luminosity effect on HI absorption systems remains ambiguous. We additionally find no statistical correlation between the 1.4 GHz radio luminosity or the source size and the 21~cm absorption detection rate. We attribute the lack of intrinsic absorption in our survey to the UV luminosity effect caused by an optical selection bias and a decreased column density sensitivity with increasing redshift due to lower radio continuum flux densities, high radio frequency interference, and higher telescope system temperatures at low frequencies. | astrophysics |
Estimation of permutation entropy (PE) using Bayesian statistical methods is presented for systems where the ordinal pattern sampling follows an independent, multinomial distribution. It is demonstrated that the PE posterior distribution is closely approximated by a standard Beta distribution, whose hyperparameters can be estimated directly from moments computed analytically from observed ordinal pattern counts. Equivalence with expressions derived previously using frequentist methods is also demonstrated. Because Bayesian estimation of PE naturally incorporates uncertainty and prior information, the orthodox requirement that $N \gg D!$ is effectively circumvented, allowing PE to be estimated even for very short time series. Self-similarity tests on PE posterior distributions computed for a semiconductor laser with optical feedback (SLWOF) system show its PE to vary periodically over time. | physics |
We prove an analog of the classical Zero-One Law for both homogeneous and nonhomogeneous Markov chains (MC). Its almost precise formulation is simple: given any event $A$ from the tail $\sigma$-algebra of MC $(Z_n)$, for large $n$, with probability near one, the trajectories of the MC are in states $i$, where $P(A|Z_n=i)$ is either near $0$ or near $1$. A similar statement holds for the entrance $\sigma$-algebra, when $n$ tends to $-\infty$. To formulate this second result, we give detailed results on the existence of nonhomogeneous Markov chains indexed by $\mathbb Z_-$ or $\mathbb Z$ in both the finite and countable cases. This extends a well-known result due to Kolmogorov. Further, in our discussion, we note an interesting dichotomy between two commonly used definitions of MCs. | mathematics |
We compute the collisional energy loss of an energetic massive fermion crossing a chiral plasma at finite temperature characterized by an imbalance between the populations of left-handed and right-handed fermions. We find a new contribution to the energy loss which is proportional to the helicity of the test fermion and depends on the amount of chiral imbalance in the plasma. We then compute the difference between the energy loss of a fermion with the two opposite helicities, to assess whether this could be used to quantify the chiral imbalance in the plasma. We find that the leading contribution to these helicity-dependent energy loss contributions comes from the exchange of hard photons (or gluons for QCD) with the medium constituents, and in some scenarios can become comparable to the leading-order result for a plasma without any chiral imbalance. We also evaluate the contribution arising from soft photon exchange, which is a subleading effect and requires regularization. We illustrate how dimensional regularization is a well suited prescription to be applied to these energy loss computations. | high energy physics phenomenology |
Frequency entangled photon sources are in high demand in a variety of optical quantum technologies, including quantum key distribution, cluster state quantum computation and quantum metrology. Over the past decade, chip-scale entangled photon sources have been developed using silicon platforms, offering robustness, large scalability and CMOS technology compatibility. Here, we report the generation of frequency correlated photon pairs using a 150-GHz silicon nitride ring cavity. First, the device is characterized to study the phase-matching condition during spontaneous four-wave mixing. Next, we evaluate the joint spectral intensity of the generated photons and confirm photon pair generation in a total of 42 correlated frequency mode pairs, corresponding to a bandwidth of 51.25 nm. Finally, the experimental results are analyzed and the joint spectral intensity is quantified in terms of the phase-matching condition. | quantum physics |
In this paper, we present the analysis of new radio and optical observations of the narrow-line Seyfert 1 galaxy Mrk 783. 1.6 GHz observations performed with the e-MERLIN interferometer confirm the presence of the diffuse emission previously observed. The Very Long Baseline Array (VLBA) also detects the nuclear source both at 1.6 GHz (L-band) and 5 GHz (C-band). While the L-band image shows only an unresolved core, the C-band image shows the presence of a partially resolved structure, at a position angle of 60{\deg}. The brightness temperature of the emission in both bands ($>10^6$ K) suggests that it is a pc-scale jet produced by the AGN. The relatively steep VLBA spectral index ($\alpha_{VLBA} = 0.63\pm0.03$) is consistent with the presence of optically thin emission on milliarcsecond scales. Finally, we investigated two possible scenarios that can result in the misalignment between the kpc- and pc-scale radio structures detected in the galaxy. We also analysed the optical morphology of the galaxy, which suggests that Mrk 783 underwent a merger in relatively recent times. | astrophysics |
This paper proposes a novel approach to nonlinear state-feedback control design that has three main advantages: (i) it ensures exponential stability and $\mathcal{L}_2$-gain performance with respect to a user-defined set of reference trajectories, (ii) it provides constructive conditions based on convex optimization and a path-integral-based control realization, and (iii) it is less restrictive than previous similar approaches. In the proposed approach, first a virtual representation of the nonlinear dynamics is constructed for which a behavioral (parameter-varying) embedding is generated. Then, by introducing a virtual control contraction metric, a convex control synthesis formulation is derived. Finally, a control realization with a virtual reference generator is computed, which is guaranteed to achieve exponential stability and $\mathcal{L}_2$-gain performance for all trajectories of the targeted reference behavior. Connections with linear parameter-varying (LPV) theory are also explored, showing that the proposed methodology is a generalization of LPV state-feedback control in two aspects. First, it is a unified generalization of the two distinct categories of LPV control approaches: global and local methods. Second, it provides rigorous stability and performance guarantees when applied to the true nonlinear system, while such properties are not guaranteed for tracking control using LPV approaches. | electrical engineering and systems science |
We study an apparently new question about the behaviour of Weyl sums on a subset $\mathcal{X}\subseteq [0,1)^d$ with a natural measure $\mu$ on $\mathcal{X}$. For certain measure spaces $(\mathcal{X}, \mu)$ we obtain non-trivial bounds for the mean values of the Weyl sums, and for $\mu$-almost all points of $\mathcal{X}$ the Weyl sums satisfy the square root cancellation law. Moreover we characterise the size of the exceptional sets in terms of Hausdorff dimension. Finally, we derive variants of the Vinogradov mean value theorem averaging over measure spaces $(\mathcal{X}, \mu)$. We obtain general results, which we refine for some special spaces $\mathcal{X}$ such as spheres, moment curves and line segments. | mathematics |
In general, the typical approach to discriminating the antibunching, bunching or superbunching categories relies on calculating the second-order coherence function $g^{(2)}(\tau)$ of light. Although classical light sources correspond to specific degrees of second-order coherence $g^{(2)}(0)$, this quantity alone does not constitute a distinguishing metric to characterize and determine light sources. Here we propose a new mechanism to directly classify and generate antibunching, bunching or superbunching categories of light, as well as classical light sources such as thermal and coherent light, by Gamma fitting according to only one characteristic parameter $\alpha$ or $\beta$. Experimental verification using beams from a four-wave mixing process agrees with the presented mechanism, and the influence of temperature $T$ and laser detuning $\Delta$ on the measured results is investigated. The proposal demonstrates the potential of classifying and identifying light with different natures and, most importantly, provides a convenient and simple method to generate light sources meeting various application requirements according to the presented rules. Most notably, bunching and superbunching are distinguishable in super-Poissonian statistics using our mechanism. | physics |
Dimension reduction techniques are often used when the high-dimensional tensor has relatively low intrinsic rank compared to the ambient dimension of the tensor. The CANDECOMP/PARAFAC (CP) tensor completion is a widely used approach to find a low-rank approximation for a given tensor. In the tensor model, an $\ell_1$ regularized optimization problem is formulated with an appropriate choice of the regularization parameter. The choice of the regularization parameter is important for the approximation accuracy. However, the emergence of large amounts of data poses an onerous computational burden for computing the regularization parameter via classical approaches such as the weighted generalized cross validation (WGCV), the unbiased predictive risk estimator, and the discrepancy principle. In order to improve the efficiency of choosing the regularization parameter and leverage the accuracy of the CP tensor, we propose a new algorithm for tensor completion by embedding the flexible hybrid method into the framework of the CP tensor. The main benefits of this method include automatic and efficient regularization, improved reconstruction, and algorithmic robustness. Numerical examples from image reconstruction and model order reduction demonstrate the performance of the proposed algorithm. | mathematics |
Any four-dimensional $\mathcal{N} \geqslant 2$ superconformal field theory possesses a protected subsector isomorphic to a two-dimensional chiral algebra \cite{Beem:2013sza}. The goal of these lectures is to provide an introduction to the subject, covering the construction of the chiral algebras, the consequences for four-dimensional physics, as well as a brief summary of recent progress. This is the write-up of the lectures given at the Winter School "YRISW 2020", to appear in a special issue of JPhysA. | high energy physics theory |
In this paper we investigate neutrino oscillations with altered dispersion relations in the presence of sterile neutrinos. Modified dispersion relations represent an agnostic way to parameterize new physics. Models of this type have been suggested to explain global neutrino oscillation data, including deviations from the standard three-neutrino paradigm as observed by a few experiments. We show that, unfortunately, in these types of models new tensions arise, rendering them incompatible with global data. | high energy physics phenomenology |
We present a method to extract, in the leading and next-to-leading order approximations, the longitudinal deep-inelastic scattering structure function FL(x,Q2) from experimental data by relying on a Froissart-bounded parametrization of the transversal structure function F2(x,Q2) and, partially, on the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi equations. Particular attention is paid to the kinematics of low and ultra-low values of the Bjorken variable x. Analytical expressions for FL(x,Q2) in terms of the effective parameters of the parametrization of F2(x,Q2) are presented explicitly. We argue that the obtained structure functions FL(x,Q2), within both the leading and next-to-leading order approximations, manifestly obey the Froissart boundary conditions. Numerical calculations and comparison with available data from the ZEUS and H1 Collaborations at HERA demonstrate that the suggested method provides reliable structure functions FL(x,Q2) at low x in a wide range of the momentum transfer (1 GeV2 < Q2 < 3000 GeV2) and can be applied as well in analyses of ultra-high energy processes with cosmic neutrinos. | high energy physics phenomenology |
The significance of new physics appearing in the loop-induced decays of neutral Higgs bosons into the diboson pairs $\gamma\gamma$ and $Z\gamma$ is discussed in the framework of the 3-3-1 models, based on a recent work~\cite{Okada:2016whh}, where the Higgs sector becomes effectively the same as that in the two Higgs doublet models (2HDM) after the first symmetry breaking from the $SU(3)_L$ scale into the electroweak scale. For a large $SU(3)_L$ scale $v_3\simeq10$ TeV, dominant one-loop contributions to the two decay amplitudes arise only from the single charged Higgs boson predicted by the 2HDM, so that the experimental constraint on the signal strength $\mu^{331}_{\gamma\gamma}$ of the Standard Model-like Higgs boson decay $h\rightarrow \gamma\gamma$ results in a strict upper bound on the signal strength $\mu^{331}_{Z\gamma}$ of the decay $h\rightarrow\, Z\gamma$. For a particular model with lower $v_3$ around 3 TeV, contributions from heavy charged gauge and Higgs bosons may be of the same order and therefore may give strong destructive or constructive correlations. As a by-product, a deviation from the SM prediction $|\mu^{331}_{\gamma\gamma}-1| \le 0.04$ still allows $|\mu^{331}_{Z\gamma}-1|$ to reach values near 0.1. We also show that there exists a $CP$-even neutral Higgs boson $h^0_3$, predicted by the 3-3-1 models but beyond the 2HDM, with the interesting property that the branching ratio Br$(h^0_3\rightarrow \gamma\gamma)$ is very sensitive to the parameter $\beta$ used to distinguish different 3-3-1 models. | high energy physics phenomenology
Classical deep learning algorithms have aroused great interest in both academia and industry for their utility in image recognition, language translation, decision-making problems and more. In this work, we provide a quantum deep learning scheme based on multi-qubit entanglement states, including computation and training of a neural network in a fully quantum process. In the course of training, efficient calculation of the distance between an unknown unit vector and a known unit vector has been realized by proper measurement based on the Greenberger-Horne-Zeilinger entanglement states. An exponential speedup over classical algorithms has been demonstrated. In the process of computation, a quantum scheme corresponding to a multi-layer feedforward neural network has been provided. We have shown the utility of our scheme using the Iris dataset. The extensibility of the present scheme to different types of model has also been analyzed. | quantum physics
We study the relations between quantum coherence and quantum nonlocality, genuine quantum entanglement and genuine quantum nonlocality. We show that the coherence of a qubit state can be converted to the nonlocality of two-qubit states via incoherent operations. The results are also generalized to the qudit case. Furthermore, rigorous relations between the quantum coherence of a single-partite state and the genuine multipartite quantum entanglement, as well as the genuine three-qubit quantum nonlocality, are established. | quantum physics
We present the distribution of unpolarized quarks in a transversely polarized proton in three-dimensional momentum space. Our results are based on consistent extractions of the unpolarized and Sivers transverse momentum dependent parton distributions (TMDs). | high energy physics phenomenology |
Filon-Simpson quadrature rules are derived for integrals of the type \int_a^b dx f(x) sin(xy)/(xy) and \int_a^b dx f(x) 4 sin^2(xy/2)/(xy)^2 which are needed in applications of the worldline variational approach to Quantum Field Theory. These new integration rules reduce to the standard Simpson rule for y = 0 and are exact for y \to \infty when a = 0 and f(0) \ne 0. The subleading term in the asymptotic expansion is also reproduced more and more precisely when the number of integration points is increased. Tests show that the numerical results are indeed stable over a wide range of y-values, whereas usual Gauss-Legendre quadrature rules are more precise at low y but fail completely for large values of y. The associated Filon-Simpson weights are given in terms of sine and cosine integrals and have to be evaluated for each value of y. A Fortran program to calculate them in a fast and accurate manner is available. A detailed comparison with the double exponential method of Ooura and Mori is made. | high energy physics phenomenology
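To illustrate why specialized rules are needed for such oscillatory integrands (this sketch is only motivation; it does not implement the Filon-Simpson weights themselves), the following Python snippet compares a fixed-node composite Simpson rule with an adaptive reference for the first integral type at small and large y. The test function f(x) = exp(-x) and the node count are arbitrary choices for illustration.

```python
import numpy as np
from scipy.integrate import simpson, quad

def sinc(z):
    """sin(z)/z with the removable singularity at z = 0 handled (scalar or array)."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    out = np.ones_like(z)
    nz = np.abs(z) > 1e-12
    out[nz] = np.sin(z[nz]) / z[nz]
    return out if out.size > 1 else float(out[0])

def f(x):
    # Arbitrary smooth test function, chosen only for illustration.
    return np.exp(-x)

a, b = 0.0, 5.0
for y in (0.5, 50.0):
    x = np.linspace(a, b, 201)                      # fixed-node composite Simpson rule
    simpson_val = simpson(f(x) * sinc(x * y), x=x)
    ref, _ = quad(lambda t: float(f(t)) * sinc(t * y), a, b, limit=500)
    print(f"y = {y:5.1f}: Simpson = {simpson_val:+.6e}, adaptive reference = {ref:+.6e}")
```

For small y the two values agree closely, while for large y the fixed-node rule loses accuracy because the integrand oscillates many times between nodes, which is exactly the regime the Filon-type rules are designed for.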
The concept of a $\lambda$-lattice was introduced by V. Sn\'a\v sel in order to generalize some lattice concepts for directed posets whose elements need not have suprema or infima. We extend the concept of semimodularity from lattices to $\lambda$-lattices and show connections to the lower covering condition and its generalizations. We further show that, contrary to the case of lattices, for $\lambda$-lattices semimodularity and the (weak) lower covering condition are independent properties. However, under some additional conditions semimodularity implies the (weak) lower covering condition. Examples of corresponding $\lambda$-lattices are presented. | mathematics |
A naive application of the heavy quark expansion (HQE) yields theory estimates for the decay rate of neutral $D$ mesons that are four orders of magnitude below the experimental determination. It is well known that this huge suppression results from severe GIM cancellations. We find that this mismatch can be solved by individually choosing the renormalisation scale of the different internal quark contributions. For $b$ and $c$ hadron lifetimes, as well as for the decay rate difference of neutral $B$ mesons the effect of our scale setting procedure lies within the previously quoted theory uncertainties, while we get enlarged theory uncertainties for the semi-leptonic CP asymmetries in the $B$ system. | high energy physics phenomenology |
Estimating conditional dependence graphs and precision matrices is one of the most common problems in modern statistics and machine learning. When data are fully observed, penalized maximum likelihood-type estimators have become standard tools for estimating graphical models under sparsity conditions. Extensions of these methods to more complex settings where data are contaminated with additive or multiplicative noise have been developed in recent years. In these settings, however, the relative performance of different methods is not well understood and algorithmic gaps still exist. In particular, in high-dimensional settings these methods require using non-positive semidefinite matrices as inputs, presenting novel optimization challenges. We develop an alternating direction method of multipliers (ADMM) algorithm for these problems, providing a feasible algorithm to estimate precision matrices with indefinite input and potentially nonconvex penalties. We compare this method with existing alternative solutions and empirically characterize the tradeoffs between them. Finally, we use this method to explore the networks among US senators estimated from voting records data. | statistics
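For readers unfamiliar with ADMM for precision-matrix estimation, here is a generic graphical-lasso-style ADMM sketch (not the authors' algorithm, which additionally treats nonconvex penalties); note that the eigenvalue-based Theta-update below remains well defined even when the input matrix S is symmetric but indefinite. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise soft-thresholding (proximal map of the l1 norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def admm_precision(S, lam, rho=1.0, n_iter=200):
    """Generic ADMM for min_Theta -logdet(Theta) + tr(S Theta) + lam*||Theta||_1.

    S may be any symmetric matrix (e.g. a noisy, possibly indefinite covariance
    surrogate); the Theta-update is still well defined in that case.
    """
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(n_iter):
        # Theta-update via eigen-decomposition of rho*(Z - U) - S
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (w + np.sqrt(w ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = (Q * theta_eigs) @ Q.T
        # Z-update: elementwise soft-thresholding
        Z = soft_threshold(Theta + U, lam / rho)
        # dual update
        U = U + Theta - Z
    return Z

# Tiny synthetic example (illustration only)
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
S = np.cov(X, rowvar=False)
print(np.round(admm_precision(S, lam=0.2), 3))
```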
We study the large $N$ expansion of twisted partition functions of 3d $\mathcal{N}=2$ superconformal field theories arising from $N$ M5-branes wrapped on a hyperbolic 3-manifold, $M_3$. Via the 3d-3d correspondence, the partition functions of these 3d ${\cal N}=2$ superconformal field theories are related to simple topological invariants on the 3-manifold. The partition functions can be expressed using only classical and one-loop perturbative invariants of $PSL(N,\mathbb{C})$ Chern-Simons theory around irreducible flat connections on $M_3$. Using mathematical results on the asymptotics of the invariants, we compute the twisted partition functions in the large $N$ limit including perturbative corrections to all orders in $1/N$. Surprisingly, the perturbative expansion terminates at finite order. The leading part of the partition function is of order $N^3$ and agrees with the Bekenstein-Hawking entropy of the dual black holes. The subleading part, in particular the $\log N$-terms in the field theory partition function is found to precisely match the one-loop quantum corrections in the dual eleven dimensional supergravity. The field theory results of other terms in $1/N$ provide a stringent prediction for higher order corrections in the holographic dual, which is M-theory. | high energy physics theory |
This paper introduces FastVC, an end-to-end model for fast Voice Conversion (VC). The proposed model can convert speech of arbitrary length from multiple source speakers to multiple target speakers. FastVC is based on a conditional AutoEncoder (AE) trained on non-parallel data and requires no annotations at all. This model's latent representation is shown to be speaker-independent and similar to phonemes, which is a desirable feature for VC systems. While current VC systems primarily focus on achieving the highest overall speech quality, this paper also takes into account the resources needed to run the systems. Despite the simple structure of the proposed model, it outperforms the VC Challenge 2020 baselines on the cross-lingual task in terms of naturalness. | electrical engineering and systems science
In this paper, we extend the collinear superspace formalism to include the full range of $\mathcal{N} = 1$ supersymmetric interactions. Building on the effective field theory rules developed in a companion paper - "Navigating Collinear Superspace" - we construct collinear superspace Lagrangians for theories with non-trivial $F$- and $D$-term auxiliary fields. For (massless) Wess-Zumino models, the key ingredient is a novel type of Grassmann-valued supermultiplet whose lowest component is a (non-propagating) fermionic degree of freedom. For gauge theories coupled to charged chiral matter, the key ingredient is a novel type of vector superfield whose lowest component is a non-propagating gauge potential. This unique vector superfield is used to construct a gauge-covariant derivative; while such an object does not appear in the standard full superspace formalism, it is crucial for modeling gauge interactions when the theory is expressed on a collinear slice. This brings us full circle, by showing that all types of $\mathcal{N} = 1$ theories in four dimensions can be constructed in collinear superspace from purely infrared considerations. We speculate that supersymmetric theories with $\mathcal{N} > 1$ could also be implemented using similar collinear superspace constructions. | high energy physics theory |
It is commonly admitted that non-reversible Markov chain Monte Carlo (MCMC) algorithms usually yield more accurate MCMC estimators than their reversible counterparts. In this note, we show that in addition to their variance reduction effect, some non-reversible MCMC algorithms have also the undesirable property to slow down the convergence of the Markov chain. This point, which has been overlooked by the literature, has obvious practical implications. We illustrate this phenomenon for different non-reversible versions of the Metropolis-Hastings algorithm on several discrete state space examples and discuss ways to mitigate the risk of a small asymptotic variance/slow convergence scenario. | statistics |
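To make the reversible/non-reversible contrast concrete, here is a textbook-style sketch on a discrete state space: a reversible random-walk Metropolis chain versus the classic "lifted" (guided-walk) non-reversible variant that carries a direction variable and flips it only on rejection. This is an illustrative example only, not necessarily one of the algorithms studied in the note; the target distribution and chain lengths are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 50
pi = np.exp(-0.5 * ((np.arange(K) - K / 2) / 8.0) ** 2)   # unnormalized target on {0,...,K-1}
pi /= pi.sum()

def reversible_mh(n):
    """Random-walk Metropolis on a ring: propose +/-1 uniformly."""
    x, samples = 0, np.empty(n, dtype=int)
    for t in range(n):
        y = (x + rng.choice([-1, 1])) % K
        if rng.random() < min(1.0, pi[y] / pi[x]):
            x = y
        samples[t] = x
    return samples

def lifted_mh(n):
    """Non-reversible 'lifted' walk: keep a direction, flip it only on rejection."""
    x, d, samples = 0, 1, np.empty(n, dtype=int)
    for t in range(n):
        y = (x + d) % K
        if rng.random() < min(1.0, pi[y] / pi[x]):
            x = y
        else:
            d = -d
        samples[t] = x
    return samples

n = 100_000
truth = float(np.dot(np.arange(K), pi))
for name, chain in [("reversible", reversible_mh(n)), ("lifted", lifted_mh(n))]:
    print(f"{name:10s}: estimate of E[X] = {chain.mean():.3f} (truth = {truth:.3f})")
```

Both chains target the same distribution; the point of the comparison is how quickly each estimator settles, which is exactly the variance-versus-convergence trade-off discussed above.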
The finite remainder function for planar, color-ordered, maximally helicity violating scattering processes in N=4 super Yang-Mills theory possesses a non-vanishing multi-Regge limit that depends on the choice of a Mandelstam region. We analyze the combined multi-Regge collinear limit in all Mandelstam regions through an analytic continuation of the Wilson loop OPE. At leading order, the former is determined by the gluon excitation of the Gubser-Klebanov-Polyakov string. We illustrate the general procedure at the example of the heptagon remainder function at two loops. In this case, the continuation of the leading order terms in the Wilson loop OPE suffices to determine the two-loop multi-Regge heptagon functions in all Mandelstam regions from their symbols. The expressions we obtain are fully consistent with recent results by Del Duca et al. | high energy physics theory |
Exact ground truth invariant polynomial systems can be written for arbitrarily correlated binary classifiers. Their solutions give estimates for sample statistics that require knowledge of the ground truth of the correct labels in the sample. Of these polynomial systems, only a few have been solved in closed form. Here we discuss the exact solution for independent binary classifiers - resolving an outstanding problem that has been presented at this conference and others. Its practical applicability is hampered by its sole remaining assumption - the classifiers need to be independent in their sample errors. We discuss how to use the closed form solution to create a self-consistent test that can validate the independence assumption itself absent the correct labels ground truth. It can be cast as an algebraic geometry conjecture for binary classifiers that remains unsolved. A similar conjecture for the ground truth invariant algebraic system for scalar regressors is solvable, and we present the solution here. We also discuss experiments on the Penn ML Benchmark classification tasks that provide further evidence that the conjecture may be true for the polynomial system of binary classifiers. | statistics |
We explore Fourier transforms of the reciprocal of the Riemann zeta function that have connections to the RH. A partial answer to a recently posed problem is explored by exploiting the fact that $\zeta(s)\neq0$ when $\Re(s)=1.$ | mathematics |
Closed Timelike Curves are relativistically valid objects allowing time travel to the past. Treating them as computational objects opens the door to a wide range of results which cannot be achieved using non relativistic quantum mechanics. Recently, research in classical and quantum computation has focused on effectively harnessing the power of these curves. In particular, Brun (Found. Phys. Lett., 2003) has shown that CTCs can be utilized to efficiently solve problems like factoring and QSAT (Quantified Satisfiability Problem). In this paper, we find a flaw in Brun's algorithm and propose a modified algorithm to circumvent the flaw. | quantum physics |
Thermoresponsive poly(N-isopropylacrylamide) (PNIPAM) particles of different sizes are synthesized by varying the concentration of sodium dodecyl sulphate (SDS) in a one-pot method. The sizes, size polydispersities and the thermoresponsivity of the PNIPAM particles are characterized by using dynamic light scattering and scanning electron microscopy. It is observed that the sizes of these particles decrease with increase in SDS concentration. Swelling ratios of PNIPAM particles measured from the thermoresponsive curves are observed to increase with decrease in particle size. This observation is understood by minimizing the Helmholtz free energy of the system with respect to the swelling ratio of the particles. Finally, the dynamics of these particles in jammed aqueous suspensions are investigated by performing rheological measurements. | condensed matter |
We present a new approach for the calculation of the valence quark distributions in the nucleon based on the scenario in which the spectrum of the valence quarks at x>0.05 is generated through three main mechanisms: interaction of valence quarks with the mean field generated by the residual nucleon system, and two- and three-quark short-range interactions through gluon exchanges. In the current report we present the first phase of the project, in which we develop a non-perturbative model for valence quark interaction in the mean field of the nucleonic interior to describe their distribution in the moderate x region (0.05 < x < 0.4). The short-range quark-quark interaction effects in our approach generate the high x tail of valence quark distributions. The presented non-perturbative model is based on the picture in which three relativistic valence quarks occupy the nucleon core at distances of $\le 0.5$ fm while interacting in the mean field generated by the residual nucleon system. The calculations are based on the assumption of a factorization of the internal interaction of the three short-range valence quarks from the long-range interaction of these quarks with the residual system. The theoretical framework is based on an effective light-front diagrammatic approach which allows us to introduce the valence quark and residual system wave functions in a consistent way. The parameters of these wave functions are fixed by the position of the peak of the xf_q(x) distribution of valence quarks at Q_0 corresponding to the charm-quark mass. With a few parameters we achieve a very reasonable description of the up and down valence quark distributions in the moderate x region (x < 0.4), where one expects the mean field dynamics to dominate. The model, however, systematically underestimates the high $x$ region where enhanced contributions from partonic short-range correlations are expected. | high energy physics phenomenology
In this paper we investigate some Korovkin type approximation properties of the q-Meyer-K\"onig and Zeller operators and Durrmeyer variant of the q-Meyer-K\"onig and Zeller operators via Abel summability method which is a sequence-to-function transformation and which extends the ordinary convergence. We show that the approximation results obtained in this paper are more general than some previous results. Finally, we obtain the rate of Abel convergence for the corresponding operators. | mathematics |
We examine the L^2-gradient flow of Euler's elastic energy for closed curves in hyperbolic space and prove convergence to the global minimizer for initial curves with elastic energy bounded by 16. We show the sharpness of this bound by constructing a class of curves whose lengths blow up in infinite time. The convergence results follow from a constrained sharp Reilly-type inequality. | mathematics |
We propose a low-complexity near-optimal wavelength allocation technique for quantum key distribution access networks that rely on wavelength division multiple access. Such networks would allow users to send quantum and classical signals simultaneously on the same optical fiber infrastructure. Users can be connected to the access network via optical wireless or wired links. We account for the background noise present in the environment, as well as the Raman noise generated by classical channels, and calculate the secret key generation rate for quantum channels in the finite-key setting. This allows us to examine the feasibility of such systems in realistic scenarios when the secret key exchange needs to be achieved in a limited time scale. Our numerical results show that, by proper choice of system parameters for this noisy system, it is possible to exchange a secret key in tens of seconds. Moreover, our proposed algorithm can enhance the key rate of quantum channels, especially in high noise and/or high loss regimes of operation. | quantum physics |
We present a method for descattering and quantitative density reconstruction in polyenergetic X-ray computed tomography (CT) based on fitting local models of scatter. X-ray CT is widely used in medical and industrial applications. If not accounted for during reconstruction, X-ray scatter creates a loss of contrast and introduces severe image artifacts including cupping, shading, and streaks. Even when these qualitative artifacts are not apparent, scatter poses a major obstacle in obtaining accurate quantitative radiographic reconstructions. Our approach to estimating scatter is to generate a training set of radiographs with and without scatter using particle transport simulation software. We then learn a locally adaptive model, i.e., one comprised of many models, each fit to a local neighborhood of the training data. We use this scatter model inside an iterative descattering algorithm and reconstruct densities from the corrected data. Our experiments on monoenergetic and polyenergetic data show that, when applied locally, even simple, linear models are highly-effective at estimating scatter. Further, descattering approaches based on these local models can reduce the effect of scatter on density reconstruction by more than half. | electrical engineering and systems science |
Logical connectives and their implications on the meaning of a natural language sentence are a fundamental aspect of understanding. In this paper, we investigate whether visual question answering (VQA) systems trained to answer a question about an image, are able to answer the logical composition of multiple such questions. When put under this \textit{Lens of Logic}, state-of-the-art VQA models have difficulty in correctly answering these logically composed questions. We construct an augmentation of the VQA dataset as a benchmark, with questions containing logical compositions and linguistic transformations (negation, disjunction, conjunction, and antonyms). We propose our {Lens of Logic (LOL)} model which uses question-attention and logic-attention to understand logical connectives in the question, and a novel Fr\'echet-Compatibility Loss, which ensures that the answers of the component questions and the composed question are consistent with the inferred logical operation. Our model shows substantial improvement in learning logical compositions while retaining performance on VQA. We suggest this work as a move towards robustness by embedding logical connectives in visual understanding. | computer science |
Several recent studies have shown that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models that fail to generalize to out-of-domain datasets and are likely to perform poorly in real-world scenarios. We propose two learning strategies to train neural models, which are more robust to such biases and transfer better to out-of-domain datasets. The biases are specified in terms of one or more bias-only models, which learn to leverage the dataset biases. During training, the bias-only models' predictions are used to adjust the loss of the base model to reduce its reliance on biases by down-weighting the biased examples and focusing the training on the hard examples. We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data. Results show that our debiasing methods greatly improve robustness in all settings and better transfer to other textual entailment datasets. Our code and data are publicly available in \url{https://github.com/rabeehk/robust-nli}. | computer science |
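As a minimal illustration of the down-weighting idea (a simplified sketch, not the exact product-of-experts or example-reweighting losses used in the paper), each example's cross-entropy loss can be scaled by how poorly a bias-only model predicts the gold label. The toy probabilities below are synthetic and for illustration only.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Per-example cross-entropy for class-probability rows `probs`."""
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def debiased_loss(main_probs, bias_probs, labels):
    """Down-weight examples the bias-only model already gets right.

    weight_i = 1 - p_bias(gold label): examples fully solvable from the bias
    contribute little, hard examples dominate the training signal.
    This is one simple reweighting variant, stated here as an assumption.
    """
    weights = 1.0 - bias_probs[np.arange(len(labels)), labels]
    return np.mean(weights * cross_entropy(main_probs, labels))

# Toy example: 3 classes, 4 examples (synthetic numbers)
labels = np.array([0, 1, 2, 0])
main_probs = np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.5, 0.2],
                       [0.2, 0.2, 0.6],
                       [0.4, 0.4, 0.2]])
bias_probs = np.array([[0.9, 0.05, 0.05],   # bias-only model confident and right -> low weight
                       [0.1, 0.8, 0.1],
                       [0.4, 0.3, 0.3],     # bias-only model unsure -> higher weight
                       [0.2, 0.5, 0.3]])
print("plain loss   :", np.mean(cross_entropy(main_probs, labels)))
print("debiased loss:", debiased_loss(main_probs, bias_probs, labels))
```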
Normal operations of electrochemical devices such as solid oxide fuel cells (SOFC), solid oxide electrolyzer cells (SOEC) and lithium ion batteries (LIB) sometimes fail because of unexpected formation of internal phases. These phases include oxygen bubbles at grain boundaries inside the zirconia electrolyte of SOEC, isolated Li metal islands inside the (garnet type) Li7La3Zr2O12 electrolyte of all-solid-state LIB, and similar Na metal islands inside the Na-beta-alumina and NASICON electrolytes of Na-S batteries. Remarkably, although the devices can operate in both polarities, the propensity for failure depends on the polarity. Here we explain these and other phenomena in nominally ionic solid electrolytes and mixed-conducting electrodes in simple thermodynamic and kinetic terms: the unexpected internal phases are caused by a large potential jump that is needed to push a constant ion or electron flow through its internal transport bottleneck. Definite rules for internal phase formation including its polarity dependence are formulated to help predict and mitigate it, which leads to microstructural instability, efficiency deterioration and breakdown. | condensed matter |
In this paper, we investigate the problem of estimating the phase of a coherent state in the presence of unavoidable noisy quantum states. These unwarranted quantum states are represented by outlier quantum states in this study. We first present a statistical framework of robust statistics in a quantum system to handle outlier quantum states. We then apply the method of M-estimators to suppress untrusted measurement outcomes due to outlier quantum states. Our proposal has the advantage over the classical methods in being systematic, easy to implement, and robust against occurrence of noisy states. | quantum physics |
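To convey the flavor of the M-estimation step (a purely classical-statistics sketch; the actual proposal operates on quantum measurement outcomes of coherent states), the Huber loss can be minimized over noisy phase-like measurements that contain outliers. The noise levels, outlier fraction and Huber parameter below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def huber(r, delta=0.5):
    """Huber rho-function: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

rng = np.random.default_rng(42)
true_phase = 0.3
data = true_phase + 0.05 * rng.standard_normal(200)                                # trusted outcomes
data = np.concatenate([data, true_phase + 2.0 + 0.1 * rng.standard_normal(20)])    # outlier outcomes

mean_est = data.mean()
m_est = minimize_scalar(lambda th: huber(data - th).sum(),
                        bounds=(-np.pi, np.pi), method="bounded").x

print(f"sample mean : {mean_est:.3f}")   # pulled toward the outliers
print(f"M-estimate  : {m_est:.3f}")      # stays near the true value 0.3
```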
Cavity optomechanics is an ideal platform for the generation of non-Gaussian quantum states due to the anharmonic interaction between the light field and the mechanical oscillator; but exactly this interaction also impedes the preparation of the light field in pure states. In this paper we derive a driving protocol that helps to exploit the anharmonic interaction for state preparation, and that ensures that the state of the light field remains close to pure. This shall enable the deterministic preparation of photon Fock states or coherent superpositions thereof. | quantum physics
Performance indicators that contributed to success at the group stage and play-off stages of the 2019 Rugby World Cup were analysed using publicly available data obtained from the official tournament website, using both a non-parametric statistical technique, Wilcoxon's signed rank test, and a decision rules technique from machine learning called RIPPER. Our statistical results found that ball carry effectiveness (percentage of ball carries that penetrated the opposition gain-line) and total metres gained (kick metres plus carry metres) were found to contribute to success at both stages of the tournament, and that indicators that contributed to success during the group stages (dominating possession, making more ball carries, making more passes, winning more rucks, and making fewer tackles) did not contribute to success at the play-off stage. Our results using RIPPER found that low ball carries and a low lineout success percentage jointly contributed to losing at the group stage, while winning a low number of rucks and carrying over the gain-line a sufficient number of times contributed to winning at the play-off stage of the tournament. The results emphasise the need for teams to adapt their playing strategies from the group stage to the play-off stage of a tournament in order to be successful. | statistics
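As a sketch of the statistical side of such an analysis (with synthetic numbers, since the tournament data are not reproduced here), a paired Wilcoxon signed-rank test compares a performance indicator between the winning and losing team of each match:

```python
import numpy as np
from scipy.stats import wilcoxon

# Made-up metres-gained figures for 10 matches (winner vs. loser per match);
# these numbers are purely illustrative, not the 2019 Rugby World Cup data.
winners = np.array([612, 540, 487, 655, 598, 503, 571, 630, 522, 588])
losers  = np.array([455, 512, 470, 520, 489, 515, 460, 498, 505, 530])

stat, p_value = wilcoxon(winners, losers)
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p-value = {p_value:.4f}")
# A small p-value would suggest the indicator differs systematically between winners and losers.
```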
In spite of its fundamental importance in quantum science and technology, the experimental certification of nonclassicality is still a challenging task, especially in realistic scenarios where losses and noise pervade the system. Here, we present the first experimental implementation of the recently introduced phase-space inequalities for nonclassicality certification, which conceptually unite phase-space representations with correlation conditions. We demonstrate the practicality and sensitivity of this approach by studying nonclassicality of a family of noisy and lossy quantum states of light. To this end, we experimentally generate single-photon-added thermal states with various thermal mean photon numbers and detect them at different loss levels. Based on the reconstructed Wigner and Husimi Q functions, the inequality conditions detect nonclassicality despite the fact that the involved distributions are nonnegative, including cases of high losses (93%) and cases where other established methods do not reveal nonclassicality. We show the advantages of the implemented approach and discuss possible extensions that assure a wide applicability for quantum science and technologies. | quantum physics
We show that color-breaking vacua may develop at high temperature in the Mini-Split SUSY scenario. This can lead to a nontrivial cosmological history of the universe, including strong first order phase transitions and domain wall production. Given the typical PeV energy scale associated with Mini-Split SUSY models, a stochastic gravitational wave background at frequencies around 100 Hz is expected. We study the potential for detection of such a signal in future gravitational wave experiments. | high energy physics phenomenology |
We study symmetries of open bosonic systems in the presence of laser pumping. Non-Hermitian Hamiltonians describing these systems can be parity-time (${\cal{PT}}$) symmetric in special cases only. Systems exhibiting this symmetry are characterised by real-valued energy spectra and can display exceptional points, where a symmetry-breaking transition occurs. We demonstrate that there is a more general type of symmetry, i.e., rotation-time (${\cal{RT}}$) symmetry. We observe that ${\cal{RT}}$-symmetric non-Hermitian Hamiltonians exhibit real-valued energy spectra which can be made singular by symmetry breaking. To calculate the spectra of the studied bosonic non-diagonalisable Hamiltonians, we apply diagonalisation methods based on bosonic algebra. Finally, we list a versatile set of rules allowing one to immediately identify or construct ${\cal{RT}}$-symmetric Hamiltonians. We believe that our results on the ${\cal{RT}}$-symmetric class of bosonic systems and their spectral singularities can lead to new applications inspired by those of the ${\cal{PT}}$-symmetric systems. | quantum physics
We consider local (or perturbative) gauge anomalies in models which extend the rank of the Standard Model (SM) gauge group and the chiral fermion content only by $n$ SM singlets. We give a general solution to the anomaly cancellation conditions (ACCs) of an additional $U(1)$ subgroup for the ACCs that involve only SM fermions and we examine whether a corresponding solution exists for the remaining ACCs. We show that a solution to the remaining ACCs always exists for $n \geq 5$ in the family non-universal case or $n \geq 3$ in the family-universal case. In the special case where only a single family carries non-vanishing charges, we find a general solution to all ACCs, for any value of $n$. | high energy physics theory |
A famous and wide-open problem, going back to at least the early 1970's, concerns the classification of chromatic polynomials of graphs. Toward this classification problem, one may ask for necessary inequalities among the coefficients of a chromatic polynomial, and we contribute such inequalities when a chromatic polynomial $\chi_G(n) = \chi^*_0 \binom {n+d} d + \chi^*_1 \binom {n+d-1} d + \dots + \chi^*_d \binom n d$ is written in terms of a binomial-coefficient basis. For example, we show that $\chi^*_{ j } \le \chi^*_{ d-j }$, for $0 \le j \le \frac{ d }{ 2 }$. Similar results hold for flow and tension polynomials enumerating either modular or integral nowhere-zero flows/tensions of a graph. Our theorems follow from connections among chromatic, flow, tension, and order polynomials, as well as Ehrhart polynomials of lattice polytopes that admit unimodular triangulations. Our results use Ehrhart inequalities due to Athanasiadis and Stapledon and are related to recent work by Hersh--Swartz and Breuer--Dall, where inequalities similar to some of ours were derived using algebraic-combinatorial methods. | mathematics |
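A small worked example of the binomial-coefficient expansion may help: the sketch below computes the coefficients $\chi^*_j$ for the 4-cycle $C_4$, whose chromatic polynomial is $(n-1)^4+(n-1)$, and checks the stated inequality. The helper function and expected values are illustrative; only the $C_4$ polynomial itself is a standard fact.

```python
import sympy as sp

n = sp.symbols('n')
d = 4
chi = (n - 1)**4 + (n - 1)              # chromatic polynomial of the 4-cycle C_4

def binom_poly(expr, k):
    """binomial(expr, k) written out as an explicit polynomial."""
    out = sp.Integer(1)
    for i in range(k):
        out *= (expr - i)
    return sp.expand(out / sp.factorial(k))

# Solve for chi*_j in chi(n) = sum_j chi*_j * binomial(n + d - j, d)
c = sp.symbols(f'c0:{d + 1}')
expansion = sum(c[j] * binom_poly(n + d - j, d) for j in range(d + 1))
eqs = sp.Poly(sp.expand(expansion - chi), n).all_coeffs()
sol = sp.solve(eqs, c, dict=True)[0]

chi_star = [sol[c[j]] for j in range(d + 1)]
print("chi* coefficients:", chi_star)    # for C_4 this gives [0, 0, 2, 8, 14]
for j in range(d // 2 + 1):
    print(f"chi*_{j} <= chi*_{d-j}:", bool(chi_star[j] <= chi_star[d - j]))
```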
The Tunka Radio Extension (Tunka-Rex) is a digital antenna array operating in the frequency band of 30-80 MHz, measuring the radio emission of air showers induced by ultra-high energy cosmic rays. Tunka-Rex is co-located with the TAIGA experiment in Siberia and consists of 63 antennas, 57 of them in a densely instrumented area of about 1 km$^2$. The signals from the air showers are short pulses, which have a duration of tens of nanoseconds and are recorded in traces of about 5 $\mu$s length. The Tunka-Rex analysis of cosmic-ray events is based on the reconstruction of these signals, in particular, their positions in the traces and amplitudes. This reconstruction suffers at low signal-to-noise ratios, i.e. when the recorded traces are dominated by background. To lower the threshold of the detection and increase the efficiency, we apply advanced methods of signal reconstruction, namely matched filtering and deep neural networks with autoencoder architecture. In the present work we show the comparison between the signal reconstructions obtained with these techniques, and give an example of the first reconstruction of the Tunka-Rex signals obtained with a deep neural network. | astrophysics
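For readers unfamiliar with the matched-filter step, here is a minimal sketch (with a synthetic damped-oscillation template and Gaussian noise standing in for measured antenna traces) of how cross-correlating a trace with a known template localizes a weak pulse; all waveform parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples, true_pos = 2500, 1300                # trace length and true pulse start (samples)

# Synthetic pulse template: damped oscillation, loosely mimicking a short radio pulse.
t = np.arange(80)
template = np.exp(-t / 20.0) * np.sin(2 * np.pi * t / 16.0)

trace = rng.standard_normal(n_samples)                          # background noise
trace[true_pos:true_pos + template.size] += 3.5 * template      # weak pulse added to the noise

# Matched filter: correlate the trace with the normalized template
# (equivalent to convolving with the time-reversed template).
mf_output = np.correlate(trace, template / np.linalg.norm(template), mode="valid")
estimated_pos = int(np.argmax(np.abs(mf_output)))

print(f"true position: {true_pos}, matched-filter estimate: {estimated_pos}")
```

The correlation peak should land at or very near the true pulse position even though individual trace samples are noisy.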
In recent years, the mean field theory has been applied to the study of neural networks and has achieved a great deal of success. The theory has been applied to various neural network structures, including CNNs, RNNs, Residual networks, and Batch normalization. Inevitably, recent work has also covered the use of dropout. The mean field theory shows the existence of depth scales that limit the maximum depth of signal propagation and gradient backpropagation. However, the gradient backpropagation is derived under the gradient independence assumption that weights used during feed forward are drawn independently from the ones used in backpropagation. This is not how neural networks are trained in a real setting. Instead, the same weights used in a feed-forward step need to be carried over to its corresponding backpropagation. Using this realistic condition, we perform theoretical computations on linear dropout networks and a series of experiments on dropout networks. Our empirical results reveal an interesting phenomenon: the lengths over which gradients can backpropagate for a single input and for a pair of inputs are governed by the same depth scale. Besides, we study the relationship between the variance and mean of statistical metrics of the gradient and show an emergence of universality. Finally, we investigate the maximum trainable length for deep dropout networks through a series of experiments using MNIST and CIFAR10 and provide a more precise empirical formula that describes the trainable length than the original work. | computer science
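The core mean-field quantity is easy to simulate: propagate the pre-activation variance of an input through a wide random network with dropout and watch it approach a fixed point. The sketch below uses tanh activations and illustrative weight/bias variances and dropout rate; these choices, and the placement of dropout on the activations, are assumptions for demonstration rather than the settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 1000, 20
sigma_w2, sigma_b2, keep_prob = 1.5, 0.05, 0.9   # illustrative hyperparameters (assumptions)

h = rng.standard_normal(width)                   # pre-activations of the input layer
q = []
for layer in range(depth):
    W = rng.standard_normal((width, width)) * np.sqrt(sigma_w2 / width)
    b = rng.standard_normal(width) * np.sqrt(sigma_b2)
    mask = (rng.random(width) < keep_prob) / keep_prob     # inverted-dropout mask
    h = W @ (np.tanh(h) * mask) + b                        # next layer's pre-activations
    q.append(float(np.mean(h ** 2)))                       # empirical variance q^l

print("pre-activation variance per layer (settles toward a fixed point):")
print(np.round(q, 3))
```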
This paper reports certain ambiguities in the calculation of the ensemble average $\left<T_\mu{}_\nu\right>$ of the stress-energy-momentum tensor of an arbitrarily coupled massless scalar field in one-dimensional boxes in flat spacetime. The study addresses a box with periodic boundary condition (a circle) and boxes with reflecting edges (with Dirichlet's or Neumann's boundary conditions at the endpoints). The expressions for $\left<T^\mu{}^\nu\right>$ are obtained from finite-temperature Green functions. In an appendix, in order to control divergences typical of two dimensions, these Green functions are calculated for related backgrounds with arbitrary number of dimensions and for scalar fields of arbitrary mass, and specialized in the text to two dimensions and for massless fields. The ambiguities arise due to the presence in $\left<T^\mu{}^\nu\right>$ of double series that are not absolutely convergent. The order in which the two associated summations are evaluated matters, leading to two different thermodynamics for each type of box. In the case of a circle, it is shown that the ambiguity corresponds to the classic controversy in the literature whether or not zero mode contributions should be taken into account in computations of partition functions. In the case of boxes with reflecting edges, it results that one of the thermodynamics corresponds to a total energy (obtained by integrating the non homogeneous energy density over space) that does not depend on the curvature coupling parameter $\xi$ as expected; whereas the other thermodynamics curiously corresponds to a total energy that does depend on $\xi$. Thermodynamic requirements (such as local and global stability) and their restrictions to the values of $\xi$ are considered. | high energy physics theory |
Open systems may be perturbed out of equilibrium states either by subjecting them to nonconservative forces or by injecting external currents. For small perturbations, the linear response is quantified by two different matrices. In the framework of network thermodynamics, under very broad assumptions, we show that the two matrices are connected by a supersymmetry, predicting that they have the same spectrum - up to the degeneracy of the ground state. Our approach brings into the mathematics of supersymmetry a new ingredient, namely oblique projection operators. | condensed matter |
Difficult it is to formulate achievable sensitivity bounds for quantum multiparameter estimation. Consider a special case, one parameter from many: many parameters of a process are unknown; estimate a specific linear combination of these parameters without having the ability to control any of the parameters. Superficially similar to single-parameter estimation, the problem retains genuinely multiparameter aspects. Geometric reasoning demonstrates the conditions, necessary and sufficient, for saturating the fundamental and attainable quantum-process bound in this context. | quantum physics |
Resonant electron-positron pair production by a high-energy gamma quantum in the field of a nucleus and a quasi-monochromatic laser wave was theoretically studied. Under the resonant condition an intermediate virtual electron (positron) in the laser field becomes a real particle. Due to that fact, the initial process of the second order in the fine structure constant in a laser field effectively reduces to two successive processes of the first order: the laser-stimulated Breit-Wheeler process and the laser-assisted process of an intermediate electron (positron) scattering by a nucleus. It is shown that there is a threshold energy for the initial gamma quantum, which significantly depends on the number of absorbed photons of a wave. In the resonant condition the electron-positron pair energy is determined by the outgoing angle of a positron (for the channel A) or an electron (for the channel B) relative to the initial gamma quantum momentum. The differential cross sections for the first few resonances with simultaneous registration of the energy and the outgoing angle of a positron or an electron were obtained. For the initial gamma quantum energy ${\omega_i} = 125\;{\rm{GeV}}$ the resonant energies of an electron-positron pair for the case of the first three resonances can be measured with a very high magnitude of the differential cross section: from $ \sim {10^{13}}$ for the first resonance to $ \sim {10^8}$ (in the units of $\alpha {Z^2}r_e^2$) for the third resonance. | physics
We examine the theoretical motivations for long-lived particle (LLP) signals at the LHC in a comprehensive survey of Standard Model (SM) extensions. LLPs are a common prediction of a wide range of theories that address unsolved fundamental mysteries such as naturalness, dark matter, baryogenesis and neutrino masses, and represent a natural and generic possibility for physics beyond the SM (BSM). In most cases the LLP lifetime can be treated as a free parameter from the $\mu$m scale up to the Big Bang Nucleosynthesis limit of $\sim 10^7$m. Neutral LLPs with lifetimes above $\sim$ 100m are particularly difficult to probe, as the sensitivity of the LHC main detectors is limited by challenging backgrounds, triggers, and small acceptances. MATHUSLA is a proposal for a minimally instrumented, large-volume surface detector near ATLAS or CMS. It would search for neutral LLPs produced in HL-LHC collisions by reconstructing displaced vertices (DVs) in a low-background environment, extending the sensitivity of the main detectors by orders of magnitude in the long-lifetime regime. In this white paper we study the LLP physics opportunities afforded by a MATHUSLA-like detector at the HL-LHC. We develop a model-independent approach to describe the sensitivity of MATHUSLA to BSM LLP signals, and compare it to DV and missing energy searches at ATLAS or CMS. We then explore the BSM motivations for LLPs in considerable detail, presenting a large number of new sensitivity studies. While our discussion is especially oriented towards the long-lifetime regime at MATHUSLA, this survey underlines the importance of a varied LLP search program at the LHC in general. By synthesizing these results into a general discussion of the top-down and bottom-up motivations for LLP searches, it is our aim to demonstrate the exceptional strength and breadth of the physics case for the construction of the MATHUSLA detector. | high energy physics phenomenology |
Preterm neonates are highly likely to suffer from ventriculomegaly, a dilation of the Cerebral Ventricular System (CVS). This condition can develop into life-threatening hydrocephalus and is correlated with future neuro-developmental impairments. Consequently, it must be detected and monitored by physicians. In clinical routine, manual 2D measurements are performed on 2D ultrasound (US) images to estimate the CVS volume, but this practice is imprecise due to the unavailability of 3D information. A way to tackle this problem would be to develop automatic CVS segmentation algorithms for 3D US data. In this paper, we investigate the potential of 2D and 3D Convolutional Neural Networks (CNN) to solve this complex task and propose to use a Compositional Pattern Producing Network (CPPN) to enable the CNNs to learn CVS location. Our database was composed of 25 3D US volumes collected on 21 preterm neonates at the age of $35.8 \pm 1.6$ gestational weeks. We found that the CPPN enables the encoding of CVS location, which increases the accuracy of the CNNs when they have few layers. Accuracy of the 2D and 3D CNNs reached intraobserver variability (IOV) in the case of dilated ventricles with Dice of $0.893 \pm 0.008$ and $0.886 \pm 0.004$ respectively (IOV = $0.898 \pm 0.008$) and with volume errors of $0.45 \pm 0.42$ cm$^3$ and $0.36 \pm 0.24$ cm$^3$ respectively (IOV = $0.41 \pm 0.05$ cm$^3$). 3D CNNs were more accurate than 2D CNNs in the case of normal ventricles with Dice of $0.797 \pm 0.041$ against $0.776 \pm 0.038$ (IOV = $0.816 \pm 0.009$) and volume errors of $0.35 \pm 0.29$ cm$^3$ against $0.35 \pm 0.24$ cm$^3$ (IOV = $0.2 \pm 0.11$ cm$^3$). The best segmentation time of volumes of size $320 \times 320 \times 320$ was obtained by a 2D CNN in $3.5 \pm 0.2$ s. | electrical engineering and systems science
We propose a generalized model of electronic structure modification in HTSC cuprates and ferropnictides under doping. In this model the role of doping consists in only a local change in the electronic structures of the parent phases of cuprates and ferropnictides due to the formation of trion complexes comprising a doped carrier localized in unit cell and charge transfer (CT) excitons around it. These CT excitons emerge in CuO4 or AsFe4 plaquettes in the CuO2 or FeAs basal planes (CT plaquettes) under the influence of doped carrier, restricting its itinerancy. As the dopant concentration is increased, CT plaquettes combine into clusters of the so called CT phase. It is this CT phase that is related in the model to the HTSC phase. In support of this assumption, we determined the ranges of dopant concentrations conforming to the existence of percolation clusters of the CT phase; these ranges were shown to coincide with the positions of the superconducting domes on the phase diagrams of these compounds. The model also perfectly describes subtle features of the phase diagrams of various cuprates and ferropnictides including the 1/8 anomaly, narrow peaks in the dependences of the London penetration depth on the concentration of the dopant, and other specific features. The mechanism of the generation of free carriers in the CT phase, provided by intrinsic self-doping, was considered. The mechanism is not directly related to external doping, but is due to the interaction of band electrons with so called Heitler-London (HL) centres inherently existing in the percolation cluster of CT phase and representing pairs of adjacent CuO4 or AsFe4 CT plaquettes in the CuO2 or FeAs basal planes. Material in CT phase was shown to represent a medium, in which the mechanism of excitonic superconductivity, specified by the interaction of band electrons with HL centres, can be realized. | condensed matter |
The general charge-conserving effective scalar field theory incorporating violations of Lorentz symmetry is presented. The dispersion relation is used to infer the effects of spin-independent Lorentz violation on point-particle motion. A large class of associated Finsler spaces is derived, and the properties of these spaces are explored. | high energy physics phenomenology
We consider the reduction of an elliptic curve defined over the rational numbers modulo primes in a given arithmetic progression and investigate how often the subgroup of rational points of this reduced curve is cyclic as a special case of Serre's Cyclicity Conjecture. | mathematics |
Cosmology is well suited to study the effects of long range interactions due to the large densities in the early Universe. In this article, we explore how the energy density and equation of state of a fermion system diverge from the commonly assumed ideal gas form under the presence of scalar long range interactions with a range much smaller than cosmological scales. In this scenario, "small"-scale physics can impact our largest-scale observations. As a benchmark, we apply the formalism to self-interacting neutrinos, performing an analysis to present and future cosmological data. Our results show that the current cosmological neutrino mass bound is fully avoided in the presence of a long range interaction, opening the possibility for a laboratory neutrino mass detection in the near future. We also demonstrate an interesting complementarity between neutrino laboratory experiments and the future EUCLID survey. | high energy physics phenomenology |
One of the most pressing questions in climate science is that of the effect of anthropogenic aerosol on the Earth's energy balance. Aerosols provide the `seeds' on which cloud droplets form, and changes in the amount of aerosol available to a cloud can change its brightness and other physical properties such as optical thickness and spatial extent. Clouds play a critical role in moderating global temperatures and small perturbations can lead to significant amounts of cooling or warming. Uncertainty in this effect is so large it is not currently known if it is negligible, or provides a large enough cooling to largely negate present-day warming by CO2. This work uses deep convolutional neural networks to look for two particular perturbations in clouds due to anthropogenic aerosol and assess their properties and prevalence, providing valuable insights into their climatic effects. | physics |
Two-dimensional (2D) semiconductors are widely recognized as attractive channel materials for low-power electronics. However, an unresolved challenge is the integration of high-quality, ultrathin high-$\kappa$ dielectrics that fully meet the roadmap requirements for low-power applications. With a dangling-bond free surface, the deposition of dielectrics by atomic layer deposition (ALD) on 2D materials is usually characterized by non-uniform nucleation and island formation, producing a highly porous dielectric layer with serious leakage particularly at the small equivalent oxide thickness (EOT) limit. Here, we report the robust ALD of highly uniform high-$\kappa$ dielectric on 2D semiconductors by using a ~0.3 nm-thick, exclusively monolayer molecular crystal as the seeding layer. Ultrathin dielectrics down to 1 nm EOT are realized on graphene, MoS2 and WSe2, with considerably reduced roughness, density of interface states, leakage current and improved breakdown field compared to prior methods. Taking advantage of the reduced EOT, we demonstrate graphene RF transistors operating at 60 GHz, as well as MoS2 and WSe2 complementary metal-oxide-semiconductor (CMOS) transistors with $V_{dd}$ = 0.8 V and ideal subthreshold swing (SS) of 60 mV/dec, and 20 nm-channel-length MoS2 transistors with on/off ratio over $10^7$. These studies highlight that our dielectric integration method is generally applicable for different 2D materials, and compatible with top-down fabrication processes on large-area chemical vapor deposited films. | physics
Motivated by the goal of having a building block in the direct design of data-driven controllers for nonlinear systems, we show how, for an unknown discrete-time bilinear system, the data collected in an offline open-loop experiment enable us to design a feedback controller and provide a guaranteed under-approximation of its basin of attraction. Both can be obtained by solving a linear matrix inequality for a fixed scalar parameter, and possibly iterating on different values of that parameter. The results of this data-based approach are compared with the ideal case when the model is known perfectly. | electrical engineering and systems science |
Renewable energy has achieved high penetration rates in many areas, leading to curtailment, especially if existing network infrastructure is insufficient and energy generated cannot be exported. In this context, Distribution Network Operators (DNOs) face a significant knowledge gap about how to implement curtailment rules that achieve desired operational objectives, but at the same time minimise disruption and economic losses for renewable generators. In this work, we study the properties of several curtailment rules widely used in UK renewable energy projects, and their effect on the viability of renewable generation investment. Moreover, we propose a new curtailment rule which guarantees fair allocation of curtailment amongst all generators with minimal disruption. Another key knowledge gap faced by DNOs is how to incentivise private network upgrades, especially in settings where several generators can use the same line against the payment of a transmission fee. In this work, we provide a solution to this problem by using tools from algorithmic game theory. Specifically, this setting can be modelled as a Stackelberg game between the private transmission line investor and local renewable generators, who are required to pay a transmission fee to access the line. We provide a method for computing the empirical equilibrium of this game, using a model that captures the stochastic nature of renewable energy generation and demand. Finally, we use the practical setting of a grid reinforcement project from the UK and a large dataset of wind speed measurements and demand to validate our model. We show that charging a transmission fee as a proportion of the feed-in tariff price between 15% and 75% would allow both investors to implement their projects and achieve a desirable distribution of the profit. | electrical engineering and systems science