text | label |
---|---|
Stability of recurrent models is closely linked with trainability, generalizability and in some applications, safety. Methods that train stable recurrent neural networks, however, do so at a significant cost to expressibility. We propose an implicit model structure that allows for a convex parametrization of stable models using contraction analysis of non-linear systems. Using these stability conditions we propose a new approach to model initialization and then provide a number of empirical results comparing the performance of our proposed model set to previous stable RNNs and vanilla RNNs. By carefully controlling stability in the model, we observe a significant increase in the speed of training and model performance. | computer science |
We report, for the first time, on the cascade weight shedding phenomenon in deep neural networks, wherein, in response to pruning a small percentage of a network's weights, a large percentage of the remaining weights is shed over a few epochs during the ensuing fine-tuning phase. We show that cascade weight shedding, when present, can significantly improve the performance of an otherwise sub-optimal scheme such as random pruning. This explains why some pruning methods may perform well under certain circumstances, but poorly under others, e.g., ResNet50 vs. MobileNetV3. We provide insight into why global magnitude-based pruning, i.e., GMP, despite its simplicity, provides competitive performance for a wide range of scenarios. We also demonstrate cascade weight shedding's potential for improving GMP's accuracy and reducing its computational complexity. In doing so, we highlight the importance of pruning and learning-rate schedules. We shed light on the weight and learning-rate rewinding methods of re-training, showing their possible connections to cascade weight shedding and the reason for their advantage over fine-tuning. We also investigate cascade weight shedding's effect on the set of kept weights, and its implications for semi-structured pruning. Finally, we give directions for future research. | computer science |
Ultrafast physical random bit generation at hundreds of Gb/s rates, with verified randomness, is a crucial ingredient in secure communication and has recently emerged using optics-based physical systems. Here we examine the inverse problem and measure the ratio of information bits that can be systematically embedded in a random bit sequence without degrading its certified randomness. These ratios exceed 0.01 in experimentally obtained long random bit sequences. Based on these findings we propose a high-capacity private-key cryptosystem with a finite key length, where the existence as well as the content of the communication is concealed in the random sequence. Our results call for a rethinking of the current quantitative definition of practical classical randomness as well as the measure of randomness generated by quantum methods, which have to include bounds derived using the proposed inverse information embedding method. | electrical engineering and systems science |
We study the first-order in $\alpha'$ corrections to non-extremal 4-dimensional dyonic Reissner-Nordstr\"om (RN) black holes with equal electric and magnetic charges in the context of Heterotic Superstring effective field theory (HST) compactified on a $T^{6}$. The particular embedding of the dyonic RN black hole in HST considered here is not supersymmetric in the extremal limit. We show that, at first order in $\alpha'$, consistency with the equations of motion of the HST demands that additional scalar and vector fields become active, and we provide explicit expressions for all of them. We determine analytically the position of the event horizon of the black hole, as well as the corrections to the extremality bound, to the temperature and to the entropy, checking that they are related by the first law of black-hole thermodynamics, so that $\partial S/\partial M=1/T$. We discuss the implications of our results in the context of the Weak Gravity Conjecture, clarifying that entropy corrections for fixed mass and charge at extremality do not necessarily imply corrections to the extremal charge-to-mass ratio. | high energy physics theory |
Sentiment classification typically relies on a large amount of labeled data. In practice, the availability of labels is highly imbalanced among different languages, e.g., more English texts are labeled than texts in any other language, which creates a considerable inequality in the quality of related information services received by users speaking different languages. To tackle this problem, cross-lingual sentiment classification approaches aim to transfer knowledge learned from one language that has abundant labeled examples (i.e., the source language, usually English) to another language with fewer labels (i.e., the target language). The source and the target languages are usually bridged through off-the-shelf machine translation tools. Through such a channel, cross-language sentiment patterns can be successfully learned from English and transferred into the target languages. This approach, however, often fails to capture sentiment knowledge specific to the target language, and thus compromises the accuracy of the downstream classification task. In this paper, we employ emojis, which are widely available in many languages, as a new channel to learn both the cross-language and the language-specific sentiment patterns. We propose a novel representation learning method that uses emoji prediction as an instrument to learn respective sentiment-aware representations for each language. The learned representations are then integrated to facilitate cross-lingual sentiment classification. The proposed method demonstrates state-of-the-art performance on benchmark datasets, which is sustained even when sentiment labels are scarce. | computer science |
Here we present a novel microlocal analysis of generalized Radon transforms which describe the integrals of $L^2$ functions of compact support over surfaces of revolution of $C^{\infty}$ curves $q$. We show that the Radon transforms are elliptic Fourier Integral Operators (FIO) and provide an analysis of the left projections $\Pi_L$. Our main theorem shows that $\Pi_L$ satisfies the semi-global Bolker assumption if and only if $g=q'/q$ is an immersion. An analysis of the visible singularities is presented, after which we derive novel Sobolev smoothness estimates for the Radon FIO. Our theory has specific applications of interest in Compton Scattering Tomography (CST) and Bragg Scattering Tomography (BST). We show that the CST and BST integration curves satisfy the Bolker assumption and provide simulated reconstructions from CST and BST data. Additionally, we give example "sinusoidal" integration curves which do not satisfy the Bolker assumption and provide simulations of the resulting image artefacts. The observed artefacts in reconstruction are shown to align exactly with our predictions. | mathematics |
Lung ultrasound imaging is attracting growing interest from the scientific community. On the one hand, thanks to its harmlessness and high descriptive power, this kind of diagnostic imaging has been widely adopted in sensitive applications, like the diagnosis and follow-up of preterm newborns in neonatal intensive care units. On the other hand, state-of-the-art image analysis and pattern recognition approaches have recently proven their ability to fully exploit the rich information contained in these data, making them attractive for the research community. In this work, we present a thorough analysis of recent deep learning networks and training strategies carried out on a vast and challenging multicenter dataset comprising 87 patients with different diseases and gestational ages. These approaches are employed to assess the lung respiratory status from ultrasound images and are evaluated against a reference marker. The conducted analysis sheds some light on this problem by showing the critical points that can mislead the training procedure and proposes some adaptations to the specific data and task. The achieved results significantly outperform those obtained by a previous work based on textural features, and narrow the gap with the visual scores predicted by human experts. | electrical engineering and systems science |
Defect centers are promising candidates for waveguide-integrated silicon light sources. We demonstrate microresonator- and waveguide-coupled photoluminescence from silicon W centers. Microphotoluminescence measurements indicate that wavelengths on resonance with resonator modes are preferentially coupled to an adjacent waveguide. Quality factors of at least 5,300 are measured, and free spectral ranges closely match expectations. The W center phonon sideband can be used as a spectral diagnostic for a broader range of waveguide-based devices on cryogenic silicon photonic platforms. | physics |
We study the dynamical properties of a driven-dissipative Bose-Hubbard model in the strongly interacting regime through a quantum trajectory approach with a cluster-Gutzwiller Ansatz for the wave function. This allows us to take classical and quantum correlations into account. By studying the dynamical hysteresis surface that arises by sweeping through the coherent driving strength we show that the phase diagram for this system is in qualitative correspondence with the Gutzwiller mean-field result. However, quantitative differences are present and the inclusion of classical and quantum correlations causes a significant shift of the critical parameters. Additionally, we show that approximation techniques relying on a unimodal distribution such as the mean field and $1/z$ expansion drastically underestimate the particle number fluctuations. Finally, we show that a proposed mapping of the driven-dissipative many-body Bose-Hubbard model onto a single driven-dissipative Kerr model is not accurate for parameters in the hysteresis regime. | quantum physics |
The left-right symmetric model (LRSM) is a well-motivated framework to restore parity and implement seesaw mechanisms for the tiny neutrino masses at or above the TeV-scale, and has a very rich phenomenology at both the high-energy and high-precision frontiers. In this paper we examine the phase transition and the resultant gravitational waves (GWs) in the minimal version of the LRSM. Taking into account all the theoretical and experimental constraints on the LRSM, we identify the parameter regions with a strong first-order phase transition and detectable GWs in future experiments. It turns out that in a sizeable region of the parameter space, GWs can be generated in the phase transition with a strength of $10^{-17}$ to $10^{-12}$ at frequencies of 0.1 to 10 Hz, which can be detected by BBO and DECIGO. Furthermore, GWs in the LRSM favor a relatively light $SU(2)_R$-breaking scalar $H_3^0$, which is largely complementary to the direct searches for a long-lived neutral scalar at high-energy colliders. It is found that the other heavy scalars and the right-handed neutrinos in the LRSM also play an important role in GW signal production in the phase transition. | high energy physics phenomenology |
In traffic forecasting, graph convolutional networks (GCNs), which model traffic flows as spatio-temporal graphs, have achieved remarkable performance. However, existing GCN-based methods heuristically define the graph structure as the physical topology of the road network, ignoring the potential dependence of the graph structure on the traffic data. In addition, the defined graph structure is deterministic, leaving its uncertainty uninvestigated. In this paper, we propose a Bayesian Spatio-Temporal Graph Convolutional Network (BSTGCN) for traffic prediction. The graph structure in our network is learned from the physical topology of the road network and traffic data in an end-to-end manner, which discovers a more accurate description of the relationships among traffic flows. Moreover, a parametric generative model is proposed to represent the graph structure, which enhances the generalization capability of GCNs. We verify the effectiveness of our method on two real-world datasets, and the experimental results demonstrate that BSTGCN attains superior performance compared with state-of-the-art methods. | computer science |
One of the open issues in evaluations of the contribution from hadronic light-by-light scattering to the anomalous magnetic moment of the muon $(g-2)_\mu$ concerns the role of heavier scalar, axial-vector, and tensor-meson intermediate states. The coupling of axial vectors to virtual photons is suppressed for small virtualities by the Landau-Yang theorem, but otherwise there are few rigorous constraints on the corresponding form factors. In this paper, we first derive the Lorentz decomposition of the two-photon matrix elements into scalar functions following the general recipe by Bardeen, Tung, and Tarrach. Based on this decomposition, we then calculate the asymptotic behavior of the meson transition form factors from a light-cone expansion in analogy to the asymptotic limits for the pseudoscalar transition form factor derived by Brodsky and Lepage. Finally, we compare our results to existing data as well as previous models employed in the literature. | high energy physics phenomenology |
GW190521 is a merger of two black holes (BHs), at least one of which lies in the so-called pair-instability (PI) mass gap, and should be hard to form due to the effects of PI supernovae (SNe) and pulsational PI (PPI). We study the formation of GW190521-like BH-BHs in Population (Pop.) III environments by binary population synthesis (BPS) calculations. We reveal that convective overshooting in stellar evolution strongly affects the formation of GW190521-like BH-BHs. A model with a small overshoot parameter (similar to GENEC) can form GW190521-like BH-BHs. The derived merger rate is $4 \times 10^{-2}$ yr$^{-1}$ Gpc$^{-3}$ at a redshift of $\sim 0.82$, comparable to the merger rate of GW190521-like BH-BHs inferred from gravitational wave observations. In this model, a $\sim 90 M_\odot$ star can collapse to a $\sim 90 M_\odot$ BH, avoiding PPI and PISN, even if it is a member of a binary star. This is because it expands up to only $\sim 10^2 R_\odot$, and loses little mass through binary evolution. However, a model with a large overshoot parameter (similar to Stern) cannot form GW190521-like BH-BHs at all. We cannot conclude that a Pop. III binary is the origin of GW190521, since the correct overshoot parameter is highly uncertain. If a Pop. III binary is the origin of GW190521, the merger rate of BH-BHs including a $100-135 M_\odot$ BH is much smaller than that of GW190521-like BH-BHs. This will be assessed by gravitational wave observations in the near future. | astrophysics |
A major difficulty in the analysis of the propagation of the coronavirus is that many infected individuals show no symptoms of Covid-19. This implies a lack of information on the total counts of infected individuals and of recovered and immunized individuals. In this paper, we consider parametric time-varying Markov processes of coronavirus propagation and show how to estimate the model parameters and approximate the unobserved counts from daily numbers of infected and detected individuals and total daily death counts. This model-based approach is illustrated in an application to French data. | statistics |
Descriptive titles provide crucial context for interpreting tables that are extracted from web pages and are a key component of table-based web applications. Prior approaches have attempted to produce titles by selecting existing text snippets associated with the table. These approaches, however, are limited by their dependence on suitable titles existing a priori. In our user study, we observe that the relevant information for the title tends to be scattered across the page, and often (more than 80% of the time) does not appear verbatim anywhere in the page. We propose instead the application of a sequence-to-sequence neural network model as a more generalizable means of generating high-quality titles. This is accomplished by extracting many text snippets that are potentially relevant to the table, encoding them into an input sequence, and using both copy and generation mechanisms in the decoder to balance relevance and readability of the generated title. We validate this approach with human evaluation on sample web tables and report that while sequence models with only a copy mechanism or only a generation mechanism are easily outperformed by simple selection-based baselines, the model with both capabilities outperforms them all, approaching the quality of crowdsourced titles while training on fewer than ten thousand examples. To the best of our knowledge, the proposed technique is the first to consider text generation methods for table titles and establishes a new state of the art. | computer science |
With the increasing penetration of electronic loads and distributed energy resources (DERs), conventional load models cannot capture their dynamics. Therefore, a new comprehensive composite load model has been developed by the Western Electricity Coordinating Council (WECC). However, this model is a complex high-order nonlinear system with a multi-time-scale property, which poses challenges for stability analysis and imposes a computational burden in large-scale simulations. In order to reduce the computational burden while preserving the accuracy of the original model, this paper proposes a generic high-fidelity order reduction approach and then applies it to the WECC composite load model. First, we develop a large-signal order reduction (LSOR) method using singular perturbation theory. In this method, the fast dynamics are integrated into the slow dynamics to preserve the transient characteristics of the fast dynamics. Then, we propose necessary conditions for accurate order reduction and embed them into the LSOR to improve and guarantee the accuracy of the reduced-order model. Finally, we develop the reduced-order WECC composite load model using the proposed algorithm. Simulation results show that the reduced-order large-signal model significantly alleviates the computational burden while maintaining dynamic responses similar to those of the original composite load model. | electrical engineering and systems science |
We propose a method for improving adversarial robustness by the addition of a new bounded function just before softmax. Recent studies hypothesize that small logits (inputs of softmax) obtained by logit regularization can improve the adversarial robustness of deep learning models. Following this hypothesis, we analyze the norms of logit vectors at the optimal point under the assumption of universal approximation and explore new methods for constraining logits by the addition of a bounded function before softmax. We theoretically and empirically reveal that small logits obtained by the addition of a common activation function, e.g., hyperbolic tangent, do not improve adversarial robustness since input vectors of the function (pre-logit vectors) can have large norms. From the theoretical findings, we develop the new bounded function. The addition of our function improves adversarial robustness because it makes logit and pre-logit vectors have small norms. Since our method only adds one activation function before softmax, it is easy to combine with adversarial training. Our experiments demonstrate that our method is comparable to logit regularization methods in terms of accuracies on adversarially perturbed datasets without adversarial training. Furthermore, it is superior or comparable to logit regularization methods and a recent defense method (TRADES) when using adversarial training. | statistics |
We find that the best RVB state of the spin-$\frac{1}{2}$ Kagome antiferromagnetic Heisenberg model (spin-$\frac{1}{2}$ KAFH) is described by a $Z_{2}$ gapped mean field ansatz, which hosts a mean field spinon dispersion very different from that of the widely studied $U(1)$ Dirac spin liquid state. However, we find that the physical spin fluctuation spectrum calculated from the Gutzwiller projected RPA (GRPA) theory above such an RVB state is actually gapless and is almost identical to that above the $U(1)$ Dirac spin liquid state. We find that such a peculiar behavior can be attributed to the unique flat band physics on the Kagome lattice, which makes the mapping between the mean field ansatz and the RVB state non-injective. We find that the spin fluctuation spectrum of the spin-$\frac{1}{2}$ KAFH is not at all featureless, but is characterized by a prominent spectral peak at about $0.25J$ around the $\mathbf{M}$ point, which is immersed in a broad continuum extending to $2.7J$. Based on these results, we argue that the spectral peak below 2 meV in the inelastic neutron scattering (INS) spectrum of Herbertsmithite ZnCu$_{3}$(OH)$_{6}$Cl$_{2}$, which has been attributed to the contribution of Cu$^{2+}$ impurity spins occupying the Zn$^{2+}$ site, should rather be understood as the intrinsic contribution from the Kagome layer. We propose to verify such a picture by measuring the Knight shift on the Cu site, rather than the O site, which is almost blind to the spin fluctuation at the $\mathbf{M}$ point as a result of the strong antiferromagnetic correlation between nearest neighboring spins on the Kagome lattice. | condensed matter |
We present in detail and validate an effective Monte Carlo approach for the calculation of nuclear vibrational densities via integration of molecular eigenfunctions, which we previously employed to calculate the densities of the ground and excited OH-stretch vibrational states in the protonated glycine molecule [C. Aieta et al., Nat. Commun. 11, 4348 (2020)]. Here, we first validate and discuss in detail the features of the method on a benchmark water molecule. Then, we apply it to calculate on-the-fly the ab initio anharmonic nuclear densities at the fundamental transitions of the NH and CH stretches in protonated glycine. We show how we can gain both qualitative and quantitative physical insight by inspection of different one-nucleus densities and assign a character to spectroscopic absorption peaks using the expansion of vibrational states in terms of harmonic basis functions. The visualization of the nuclear vibrations in a purely quantum picture allows us to observe and quantify the effects of anharmonicity on the molecular structure, and to explore the effect of IR excitations on specific bonds or functional groups, beyond the harmonic approximation. We also calculate the quantum probability distributions of bond lengths, angles and dihedrals of the molecule. Notably, we observe that in the case of one type of fundamental NH stretching the typical harmonic nodal pattern is absent in the anharmonic distribution. | physics |
In this paper, we use sieve methods to give a good upper bound for the exceptional real zeros. | mathematics |
ZX-calculus is a graphical language for quantum computing which usually focuses on qubits. In this paper, we generalise qubit ZX-calculus to qudit ZX-calculus in any finite dimension by introducing suitable generators, especially a carefully chosen triangle node. As a consequence, we obtain a set of rewriting rules which can be seen as a direct generalisation of the qubit rules, and a normal form for any qudit vectors. Based on the qudit ZX-calculi, we propose a graphical formalism called qufinite ZX-calculus as a unified framework for all qudit ZX-calculi, which is universal for finite quantum theory due to a normal form for matrices of any finite size. As a result, it would be interesting to give a fine-grained version of the diagrammatic reconstruction of finite quantum theory [Selby2021reconstructing] within the framework of qufinite ZX-calculus. | quantum physics |
Time has emerged as a new degree of freedom for metamaterials, promising new pathways in wave control. However, electromagnetism suffers from limitations in the modulation speed of material parameters. Here we argue that these limitations can be circumvented by introducing a traveling-wave refractive index modulation, with the same phase velocity of the waves. We show how the concept of "luminal grating" can yield giant nonreciprocity, achieve efficient one-way amplification, pulse compression and frequency up-conversion, proposing a realistic implementation in double-layer graphene. | condensed matter |
We analyzed the archival data of the continuum emission at six wavelengths from 3 to 0.4 mm and the 13CO and C18O (1-0, 2-1, and 3-2) lines in the protoplanetary disk around HD 142527 obtained with the Atacama Large Millimeter/submillimeter Array. We performed fitting to the spectral energy distributions obtained at the six wavelengths with gray-body slab models to estimate the distributions of the dust surface density and the spectral index of the dust absorption coefficient, beta. We also estimated the distribution of the gas column density by fitting the C18O spectra and measured the disk rotation by fitting Keplerian disk models to the C18O data. We found super- and sub-Keplerian rotation inside and outside the dust ring in the northwest of the HD 142527 disk, suggestive of the presence of a local pressure bump. In comparison with our estimated dust and gas distributions, the location of the pressure bump is coincident with the region showing a three times higher dust density and a three times lower gas-to-dust mass ratio than the mean values in the disk, suggesting dust trapping in the pressure bump. Nevertheless, there is no correlation between our derived beta distribution and the location of the pressure bump. In addition, we found that the width of the dust ring is comparable to or larger than the width of the pressure bump, which could suggest that dust feedback is significant in the pressure bump. | astrophysics |
In 2005 DARPA labeled the realization of viable autonomous vehicles (AVs) a grand challenge; a short time later the idea became a moonshot that could change the automotive industry. Today, the question of safety is what stands between this vision and a solved problem. Given the right platform, the CPS community is poised to offer unique insights. However, testing the limits of safety and performance on real vehicles is costly and hazardous. The use of such vehicles is also outside the reach of most researchers and students. In this paper, we present F1/10: an open-source, affordable, and high-performance 1/10 scale autonomous vehicle testbed. The F1/10 testbed carries a full suite of sensors, and perception, planning, control, and networking software stacks that are similar to full-scale solutions. We demonstrate key examples of the research enabled by the F1/10 testbed, and how the platform can be used to augment research and education in autonomous systems, making autonomy more accessible. | computer science |
We introduce hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y such that a quantum computer has superposition access to Y but not X. These algorithms use data reduction techniques to construct a weighted subset of X called a coreset that yields approximately the same loss for each model. The coreset can be constructed by the classical computer alone, or via an interactive protocol in which the outputs of the quantum computer are used to help decide which elements of X to use. By using the quantum computer to perform Grover search or rejection sampling, this yields quantum speedups for maximum likelihood estimation, Bayesian inference and saddle-point optimization. Concrete applications include k-means clustering, logistic regression, zero-sum games and boosting. | quantum physics |
We study the online problem of reading articles that are listed in an aggregated form in a dynamic stream, e.g., in news feeds, as abbreviated social media posts, or in the daily update of new articles on arXiv. In such a context, the brief information on an article in the listing only hints at its content. We consider readers who want to maximize their information gain within a limited time budget, hence either discarding an article right away based on the hint or accessing it for reading. The reader can decide at any point whether to continue with the current article or skip the remaining part irrevocably. In this regard, Reading Articles Online (RAO) differs substantially from the Online Knapsack Problem, but the two also share similarities. Under mild assumptions, we show that any $\alpha$-competitive algorithm for the Online Knapsack Problem in the random order model can be used as a black box to obtain an $(\mathrm{e} + \alpha)C$-competitive algorithm for RAO, where $C$ measures the accuracy of the hints with respect to the information profiles of the articles. Specifically, with the current best algorithm for Online Knapsack, which is $6.65<2.45\mathrm{e}$-competitive, we obtain an upper bound of $3.45\mathrm{e} C$ on the competitive ratio of RAO. Furthermore, we study a natural algorithm that decides whether or not to read an article based on a single threshold value, which can serve as a model of human readers. We show that this algorithmic technique is $O(C)$-competitive. Hence, our algorithms are constant-competitive whenever the accuracy $C$ is a constant. | computer science |
Generalized Bayes posterior distributions are formed by putting a fractional power on the likelihood before combining with the prior via Bayes's formula. This fractional power, which is often viewed as a remedy for potential model misspecification bias, is called the learning rate, and a number of data-driven learning rate selection methods have been proposed in the recent literature. Each of these proposals has a different focus, a different target they aim to achieve, which makes them difficult to compare. In this paper, we provide a direct head-to-head comparison of these learning rate selection methods in various misspecified model scenarios, in terms of several relevant metrics, in particular, coverage probability of the generalized Bayes credible regions. In some examples all the methods perform well, while in others the misspecification is too severe to be overcome, but we find that the so-called generalized posterior calibration algorithm tends to outperform the others in terms of credible region coverage probability. | statistics |
Tang derived the exact power formulae for t tests and analysis of covariance (ANCOVA) in superiority, noninferiority and equivalence trials. The power calculation in equivalence trials can be simplified by using Owen's Q function, which is available in standard statistical software. We extend the exact power determination method for ANCOVA to unstratified and stratified multi-arm randomized trials. The method is applied to the design of multi-arm trials and gold standard noninferiority trials. | statistics |
We study possible new physics interactions in the $ZZH$ vertex contributing to the Higgsstrahlung process $e^+ e^- \to ZH$ at proposed future $e^+e^-$ colliders using the polarization of the $Z$ as a probe. We calculate the spin density matrix of the $Z$ for the process and determine the eight independent polarization parameters of the $Z$ boson which have the potential to constrain the anomalous couplings. We study angular asymmetries using the decay leptons from the $Z$ boson which are simply related to the polarization observables. We also estimate the limits that can be placed on the anomalous couplings using measurements of these angular asymmetries at centre of mass energies of 250 GeV and 500 GeV and various combinations of polarized $e^+$ and $e^-$ beams. | high energy physics phenomenology |
We present a quantum kernel method for high-dimensional data analysis using Google's universal quantum processor, Sycamore. This method is successfully applied to the cosmological benchmark of supernova classification using real spectral features with no dimensionality reduction and without vanishing kernel elements. Instead of using a synthetic dataset of low dimension or pre-processing the data with a classical machine learning algorithm to reduce the data dimension, this experiment demonstrates that machine learning with real, high-dimensional data is possible using a quantum processor; but it requires careful attention to shot statistics and mean kernel element size when constructing a circuit ansatz. Our experiment utilizes 17 qubits to classify 67-dimensional data (significantly higher dimensionality than in the largest prior quantum kernel experiments), resulting in classification accuracy that is competitive with noiseless simulation and comparable classical techniques. | quantum physics |
In this work, we propose an iterative reconstruction scheme (ALONE - Adaptive Learning Of NEtworks) for 2D radial cine MRI based on ground truth-free unsupervised learning of shallow convolutional neural networks. The network is trained to approximate patches of the current estimate of the solution during the reconstruction. By imposing a shallow network topology and constraining the $L_2$-norm of the learned filters, the network's representation power is limited so that it cannot recover noise. Therefore, the network can be interpreted as performing a low-dimensional approximation of the patches to stabilize the inversion process. We compare the proposed reconstruction scheme to two ground truth-free reconstruction methods, namely the well-known Total Variation (TV) minimization and an unsupervised adaptive Dictionary Learning (DIC) method. The proposed method outperforms both methods with respect to all reported quantitative measures. Further, in contrast to DIC, where the sparse approximation of the patches involves the solution of a complex optimization problem, ALONE only requires a forward pass of all patches through the shallow network and therefore significantly accelerates the reconstruction. | electrical engineering and systems science |
In this short note, we point out that the warm inflationary scenario remains favoured over its cold counterpart by the recently proposed conjectures, which aim to rule out de Sitter-like constructions in String Landscapes. On the other hand, the canonical cold (single-field slow-roll) inflationary models, which were in tension with the previously proposed de Sitter conjecture, have now become even more unlikely to be realised in String Landscapes, as these scenarios fail to cope with both conjectures, de Sitter and trans-Planckian censorship, at one go. | high energy physics theory |
We show that the canonical seesaw mechanism implemented by the $U(1)_{B-L}$ gauge symmetry provides two-component dark matter naturally. The seesaw scale that breaks $B-L$ defines a residual gauge symmetry to be $Z_6=Z_2\otimes Z_3$, where $Z_2$ leads to the usual matter parity, while $Z_3$ is newly recognized, transforming quark fields nontrivially. The dark matter components -- that transform nontrivially under the matter parity and $Z_3$, respectively -- can gain arbitrary masses, despite the fact that the $Z_3$ dark matter may be heavier than the light quarks $u,d$. This dark matter setup can address the XENON1T anomaly recently observed and other observables, given that the dark matter masses are nearly degenerate, heavier than the electron and the $B-L$ gauge boson $Z'$, as well as the fast-moving $Z_3$ dark matter has a large $B-L$ charge, while the $Z'$ is viably below the beam dump experiment sensitive regime. | high energy physics phenomenology |
Linear polarization measurements of gamma-ray burst (GRB) afterglows in the optical band show polarization degrees of a few percent at late times. Recently, polarization at the sub-percent level was also detected in the radio band by ALMA, opening the window for multi-wavelength polarimetry and stressing the importance of properly modeling polarization in GRB afterglows across the EM spectrum. We introduce a numerical tool that can calculate the polarization from relativistically moving surfaces by discretizing them into small patches of uniform magnetic field, calculating the polarized emission from each cell assuming synchrotron radiation, and summing it to obtain the total degree of polarization. We apply this tool to afterglow shocks with random magnetic fields confined to the shock plane, considering electron radiative cooling. We analyze the observed polarization curves at several wavelengths above the cooling frequency and below the minimal synchrotron frequency and point to the characteristic differences between them. We present a method to constrain the jet opening angle and the viewing angle within the context of our model. Applying it to GRB 021004 we obtain angles of 10 and 8 degrees, respectively, and conclude that a non-negligible component of radial magnetic field is required to explain the 1% polarization level observed 3.5 days after the burst. | astrophysics |
As a modern musician and cultural icon, Taylor Swift has earned worldwide acclaim via pieces which predominantly draw upon the complex dynamics of personal and interpersonal experiences. Here we show, for the first time, how Swift's lyrical and melodic structures have evolved in their representation of emotions over a timescale of $\tau\sim14$ yr. Previous progress on this topic has been challenging owing to the sheer volume of the relevant discography, and because uniquely identifying a song that optimally describes a hypothetical emotional state is a complex, multi-dimensional task. To quantify the emotional state of a song, we separate the criteria into the level of optimism ($H$) and the strength of commitment to a relationship ($R$), based on lyrics and chordal tones. We apply these criteria to a set of 149 pieces spanning almost the entire repertoire. We find an overall trend toward positive emotions in stronger relationships, with a best-fit linear relationship of $R=0.642^{+0.086}_{-0.053}H-1.74^{+0.39}_{-0.29}$. We find no significant trends in mean happiness ($H$) within individual albums over time. The mean relationship score ($R$) shows trends which we speculate may be due to age and the global pandemic. We provide tentative indications that partners with blue eyes and/or bad reputations may lead to overall less positive emotions, while those with green or indigo-colored eyes may produce more positive emotions and stronger relationships. However, we stress that these trends are based on small sample sizes, and more data are necessary to validate them. Finally, we present the taylorswift python package which can be used to optimize song selection according to a specific mood. | physics |
Polymeric films with greater impact and ballistic resistance are highly desired for numerous applications, but molecular configurations that best address this need remain subject to debate. We study the resistance to ballistic impact of thin polymer films using coarse-grained molecular dynamics simulations, investigating melts of linear polymer chains and star polymers with a varying number $2 \leq f \leq 16$ and degree of polymerization $10 \leq M \leq 50$ of the arms. We show that increasing the number of arms $f$ or the length of the arms $M$ both result in greater specific penetration energy within the parameter ranges studied. Greater interpenetration of chains in stars with larger $f$ allows energy to be dissipated predominantly through internal rearrangement of the stars, rather than chain sliding. During film deformation, stars with large $f$ show higher energy absorption rates soon after contact with the projectile, whereas stars with larger $M$ have a delayed response where dissipation arises primarily from chain sliding, which results in significant back-face deformation. Our results suggest that stars may be advantageous for tuning the energy dissipation mechanisms of ultra-thin films. These findings set the stage for a topology-based strategy for the design of impact-resistant polymer films. | condensed matter |
We study the one-dimensional branching random walk in the case when the step size distribution has a stretched exponential tail, and, in particular, no finite exponential moments. The tail of the step size $X$ decays as $\mathbb{P}[X \geq t] \sim a \exp(-\lambda t^r)$ for some constants $a, \lambda > 0$ where $r \in (0,1)$. We give a detailed description of the asymptotic behaviour of the position of the rightmost particle, proving almost-sure limit theorems, convergence in law and some integral tests. The limit theorems reveal interesting differences between the two regimes $ r \in (0, 2/3)$ and $ r \in (2/3, 1)$, with yet different limits in the boundary case $r = 2/3$. | mathematics |
Nonadiabatic geometric quantum computation is dedicated to the realization of high-fidelity and robust quantum gates, which are a necessity for fault-tolerant quantum computation. However, it is limited by cyclic and mutative evolution paths, which usually require longer gate times and abrupt pulse control, and thus weaken the gate performance. Here, we propose a scheme to realize geometric quantum gates with noncyclic and nonadiabatic evolutions, which effectively shortens the gate duration, and universal quantum gates can be realized in one step without path mutation. Our numerical simulation shows that, compared with conventional dynamical gates, the constructed geometric gates are more resistant not only to noise but also to the decoherence effect. In addition, our scheme can also be implemented on a superconducting circuit platform, with the fidelities of single-qubit and two-qubit gates higher than 99.97$\%$ and 99.84$\%$, respectively. Therefore, our scheme provides a promising way to realize high-fidelity fault-tolerant quantum gates for scalable quantum computation. | quantum physics |
We consider a context-based dynamic pricing problem for online products which have low sales. Sales data from Alibaba, a major global online retailer, illustrate the prevalence of low-sale products. For these products, existing single-product dynamic pricing algorithms do not work well due to insufficient data samples. To address this challenge, we propose pricing policies that concurrently perform clustering over products and set individual pricing decisions on the fly. By clustering data and identifying products that have similar demand patterns, we utilize sales data from products within the same cluster to improve demand estimation and allow for better pricing decisions. We evaluate the algorithms using regret, and the results show that when product demand functions come from multiple clusters, our algorithms significantly outperform traditional single-product pricing policies. Numerical experiments using a real dataset from Alibaba demonstrate that the proposed policies, compared with several benchmark policies, increase the revenue. The results show that online clustering is an effective approach to tackling dynamic pricing problems associated with low-sale products. Our algorithms were further implemented in a field study at Alibaba with 40 products for 30 consecutive days, and compared to products using Alibaba's business-as-usual pricing policy. The results from the field experiment show that the overall revenue increased by 10.14%. | statistics |
We study the dynamics of a particle tunneling between the ground states of a symmetric double potential well system, in the presence of simultaneous diagonal and non-diagonal couplings to an Ohmic oscillator bath. We use the non-interacting blip approximation to investigate coherence effects across a wide range of system-environment coupling strengths at physiological temperatures. We show how the presence of a non-diagonal coupling mechanism significantly alters the dynamics of the tunneling particle, producing coherent oscillations despite strong thermal fluctuations in the environment. Fluctuations in both the particle's polarization and tunneling energies lead to competing influences on the coherent nature of the tunneling particle dynamics, with non-diagonal couplings introducing a relatively long-lived oscillatory mode in an otherwise incoherent setting. | quantum physics |
We prove that all R\'enyi entanglement entropies of spin chains described by generic (gapped), translationally invariant matrix product states (MPS) are extensive for disconnected sub-systems: all R\'enyi entanglement entropy densities of the sub-system consisting of every k-th spin are non-zero in the thermodynamic limit if and only if the state does not converge to a product state in the thermodynamic limit. Furthermore, we provide explicit lower bounds on the entanglement entropy in terms of the expansion coefficient of the transfer operator of the MPS and spectral properties of its fixed point in canonical form. As a side result, we obtain a lower bound for the expansion coefficient and singular value distribution of a primitive quantum channel in terms of its Kraus rank and entropic properties of its fixed point. For unital quantum channels this yields a very simple lower bound on the distribution of singular values and the expansion coefficient in terms of the Kraus rank. Physically, our results are motivated by questions about equilibration in many-body localized systems, which we review. | quantum physics |
We generalize Rado's extension theorem to complex spaces. | mathematics |
Relation extraction (RE) aims to identify the semantic relations between named entities in text. Recent years have seen it raised to the document level, which requires complex reasoning over entities and mentions throughout an entire document. In this paper, we propose a novel model for document-level RE, encoding the document information in terms of entity global and local representations as well as context relation representations. Entity global representations model the semantic information of all entities in the document, entity local representations aggregate the contextual information of multiple mentions of specific entities, and context relation representations encode the topic information of other relations. Experimental results demonstrate that our model achieves superior performance on two public datasets for document-level RE. It is particularly effective in extracting relations between entities that are far apart and have multiple mentions. | computer science |
Appropriately representing elements in a database so that queries may be accurately matched is a central task in information retrieval. This has recently been achieved by embedding the graphical structure of the database into a manifold so that the hierarchy is preserved. Persistent homology provides a rigorous characterization of the database topology in terms of both its hierarchy and connectivity structure. We compute persistent homology on a variety of datasets and show that some commonly used embeddings fail to preserve the connectivity. Moreover, we show that embeddings which successfully retain the database topology coincide in persistent homology. We introduce the dilation-invariant bottleneck distance to capture this effect, which addresses metric distortion on manifolds. We use it to show that distances between topology-preserving embeddings of databases are small. | statistics |
The flow regime of micro flow varies from the collisionless regime to the hydrodynamic regime according to the Knudsen number. On the kinetic scale, the dynamics of micro flow can be described by the linearized kinetic equation. In the continuum regime, hydrodynamic equations such as the linearized Navier-Stokes equations and Euler equations can be derived from the linearized kinetic equation by the Chapman-Enskog asymptotic analysis. In this paper, based on the linearized kinetic equation we propose a unified gas kinetic scheme (UGKS) for micro flow simulation, which is an effective multiscale scheme in the whole micro flow regime. The important methodology of UGKS is the following. Firstly, the evolution of the microscopic distribution function is coupled with the evolution of the macroscopic flow quantities. Secondly, the numerical flux of UGKS is constructed based on the integral solution of the kinetic equation, which provides a genuinely multiscale and multidimensional numerical flux. The UGKS recovers the linear kinetic solution in the rarefied regime, and converges to the linear hydrodynamic solution in the continuum regime. An outstanding feature of UGKS is its capability of capturing the accurate viscous solution even when the cell size is much larger than the kinetic length scale, such as capturing the viscous boundary layer with a cell size ten times larger than the particle mean free path. Such a multiscale property is called unified preserving (UP), which has been studied in \cite{guo2019unified}. In this paper, we also give a mathematical proof of the UP property of UGKS. | physics |
Digital mosaics have usually used regular tiles, simulating the historical "tessellated" mosaics. In this paper, we present a method for synthesizing pebble mosaics, a historical mosaic style in which the tiles are rounded pebbles. We address both the tiling problem, where pebbles are distributed over the image plane so as to approximate the input image content, and the problem of geometry, creating a smooth rounded shape for each pebble. We adapt SLIC, simple linear iterative clustering, to obtain elongated tiles conforming to image content, and smooth the resulting irregular shapes into shapes resembling pebble cross-sections. Then, we create an interior and exterior contour for each pebble and solve a Laplace equation over the region between them to obtain height-field geometry. The resulting pebble set approximates the input image while presenting full geometry that can be rendered and textured for a highly detailed representation of a pebble mosaic. | computer science |
For crystalline materials with long-range order, the phonon modes involved in the phonon-assisted radiation process generally involve one or several phonons with specific vibration frequencies [1-4]. In some glassy materials, the phonon modes broaden over a short range [5-7]. However, the locally distinct chemical environments or mass disorder in high-entropy systems can induce anharmonic phonon-phonon coupling and composition disorder, which leads to significant phonon broadening [8,9]. The terminology of high-entropy comes from a high configuration entropy larger than 1.5R (R is the gas constant), which results from multiple nearly equal components randomly distributed in a crystal lattice [10,11]. Inspired by the high-entropy strategy, we deployed a high-entropy glass system (HEGS) doped with neodymium ions, which exhibits a complex structure with tetrahedral voids filled by different ions, including Li+, Zn2+, Si4+, P5+, S6+, etc. Phonon spectral broadening up to thousands of wavenumbers in the HEGS allows strong wide-band absorption in both the near-infrared and mid-infrared ranges and assists the system radiation, i.e., broadened phonon-assisted wide-band radiation (BPAWR). The subsequent low-threshold self-absorption coherence modulation (SACM) was also observed in the HEGS, modulated by changing excitation wavelengths, sample size, and doping concentrations. The time delay of the BPAWR signal is up to 1.66 ns relative to the zero-delay signal, while the time delay of the Raman process is typically of the order of fs to ps, rarely up to 10 ps [12-15]. The BPAWR-SACM can be applied to realize signal amplification of the centered non-absorption band when dual-wavelength lasers pump the HEGS sample, and the signal amplification can be up to 26.02 dB. The spectral characteristics of the BPAWR and the dynamics of the energy distribution of the excited species are investigated in detail. | physics |
With the advent of deep learning, research on noise-robust automatic speech recognition (ASR) has progressed rapidly. However, the ASR performance of single-channel systems in noisy conditions remains unsatisfactory. Indeed, most single-channel speech enhancement (SE) methods (denoising) have brought only limited performance gains over state-of-the-art ASR back-ends trained on multi-condition training data. Recently, there has been much research on neural network-based SE methods working in the time domain, showing levels of performance never attained before. However, it has not been established whether the high enhancement performance achieved by such time-domain approaches could be translated into ASR improvements. In this paper, we show that a single-channel time-domain denoising approach can significantly improve ASR performance, providing a more than 30% relative word error rate reduction over a strong ASR back-end on the real evaluation data of the single-channel track of the CHiME-4 dataset. These positive results demonstrate that single-channel noise reduction can still improve ASR performance, which should open the door to more research in that direction. | electrical engineering and systems science |
Fair representation learning aims to encode representations that are invariant with respect to a protected attribute, such as gender or age. In this paper, we design a Fairness-aware Disentangling Variational AutoEncoder (FD-VAE) for fair representation learning. This network disentangles the latent space into three subspaces with a decorrelation loss that encourages each subspace to contain independent information: 1) target attribute information, 2) protected attribute information, 3) mutual attribute information. After representation learning, this disentangled representation is leveraged for fairer downstream classification by excluding the subspace with the protected attribute information. We demonstrate the effectiveness of our model through extensive experiments on the CelebA and UTK Face datasets. Our method outperforms the previous state-of-the-art method by large margins in terms of equal opportunity and equalized odds. | computer science |
This work develops entropy-stable positivity-preserving DG methods as a computational scheme for Boltzmann-Poisson systems modeling the pdf of electronic transport along energy bands in semiconductor crystal lattices. We pose, using spherical or energy-angular variables as momentum coordinates, the corresponding Vlasov-Boltzmann equation with a linear collision operator with a singular measure modeling the scattering as functions of the energy band. We show stability results for semi-discrete DG schemes under an entropy norm for 1D-position 2D-momentum and 2D-position 3D-momentum, using the dissipative properties of the collisional operator given its entropy inequality, which depends on the whole Hamiltonian rather than only the kinetic energy. For the 1D problem, knowledge of the analytic solution to Poisson and of the convergence to a constant current is crucial to obtain full stability. For the 2D problem, specular reflection boundary conditions are considered in addition to periodicity in the estimate for stability under an entropy norm. Regarding positivity preservation (1D position), we treat the collision operator as a source term and find convex combinations of the transport and collision terms which guarantee the positivity of the cell average of our numerical pdf at the next time step. The positivity of the numerical pdf in the whole domain is guaranteed by applying the natural limiters that preserve the cell average but modify the slope of the piecewise linear solutions in order to make the function non-negative. The use of a spherical coordinate system $\vec{p}(|\vec{p}|,\mu=\cos\theta,\varphi)$ is slightly different from the choice in previous DG solvers for BP, since the proposed DG formulation gives simpler integrals involving just piecewise polynomial functions for both transport and collision terms, which is more adequate for Gaussian quadrature than previous approaches. | mathematics |
Small-scale density fluctuations can significantly affect reionization but are typically modelled quite crudely. Unresolved fluctuations in numerical simulations and analytical calculations are included using a gas clumping factor, typically assumed to be independent of the local environment. In Paper I, we presented an improved, local density-dependent model for the sub-grid gas clumping. Here we extend this using an empirical stochastic model based on the results from high-resolution numerical simulations which fully resolve all relevant fluctuations. Our model reproduces well both the mean density-clumping relation and its scatter. We applied our stochastic model, along with the mean clumping one and the Paper I deterministic model, to create a large-volume realisation of the clumping field, and used these in radiative transfer simulations of cosmic reionization. Our results show that the simplistic mean clumping model delays reionization compared to local density-dependent models, despite producing fewer recombinations overall. This is due to the very different spatial distribution of clumping, resulting in much higher photoionization rates in the latter cases. The mean clumping model produces smaller HII regions throughout most of the reionization, but those percolate faster at late times. It also causes a significant delay in the 21-cm fluctuations peak and yields lower non-Gaussianity and many fewer bright pixels in the PDF distribution. The stochastic density-dependent model shows relatively minor differences from the deterministic one, mostly concentrated around overlap, where it significantly suppresses the 21-cm fluctuations, and at the bright tail of the 21-cm PDFs, where it produces noticeably more bright pixels. | astrophysics |
This report summarises the work and results produced at the 146th European Study Group with Industry/co-creation event with society on the challenge \textit{Breaking barriers for women in Science}. The aim of this challenge, proposed by the Cyprus-based non-profit AIPFE Cyprus-Women of Europe, was to quantify the barriers that women face in science so that eventually policy changes can take place in Cyprus and elsewhere. Two distinct but related challenges were considered. The first challenge was to quantify the wage gap between men and women in 28 European countries. In this connection, we analysed Eurostat data and developed a mathematical model quantifying how probable it is for countries to decrease their wage gap. Secondly, we analysed data provided by the University of Cyprus and determined the percentage of women and men in STEM (Science, Technology, Engineering and Mathematics) departments as they move up the academic ladder, starting from the undergraduate level. Studying the latter challenge is a first step in studying the wage gap in all Cypriot universities and in other universities abroad. This work was supported financially by the EU project SciShops.eu, the EU Mathematics for Industry Network (MI-NET) and several other organisations. | physics |
The James Webb Space Telescope (JWST) will devote significant observing time to the study of exoplanets. It will not be serviceable as was the Hubble Space Telescope, and therefore the spacecraft/instruments will have a relatively limited life. It is important to get as much science as possible out of this limited observing time. We provide an analysis framework (including publicly released computational tools) that can be used to optimize lists of exoplanet targets for atmospheric characterization. Our tools take catalogs of planet detections, either simulated, or actual; categorize the targets by planet radius and equilibrium temperature; estimate planet masses; generate model spectra and simulated instrument spectra; perform a statistical analysis to determine if the instrument spectra can confirm an atmospheric detection; and finally, rank the targets within each category by observation time required. For a catalog of simulated Transiting Exoplanet Survey Satellite planet detections, we determine an optimal target ranking for the observing time available. Our results are generally consistent with other recent studies of JWST exoplanet target optimization. We show that assumptions about target planet atmospheric metallicity, instrument performance (especially the noise floor), and statistical detection threshold, can have a significant effect on target ranking. Over its full 10-year (fuel-limited) mission, JWST has the potential to increase the number of atmospheres characterized by transmission spectroscopy by an order of magnitude (from about 50 currently to between 400 and 500). | astrophysics |
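The categorize-and-rank step described above is easy to picture with a toy table. The sketch below is a minimal pandas illustration; all column names, bin edges, and numbers are invented for the example and are not the released tools' actual interface.

```python
# Toy sketch of the "categorize, then rank by observation time" step.
import pandas as pd

catalog = pd.DataFrame({
    "name": ["p1", "p2", "p3", "p4"],
    "radius_re": [1.2, 2.5, 9.0, 1.7],      # planet radius [Earth radii]
    "teq_k": [450.0, 700.0, 1400.0, 300.0], # equilibrium temperature [K]
    "t_obs_hr": [30.0, 12.0, 5.0, 80.0],    # time to confirm an atmosphere
})

# Bin targets by radius and equilibrium temperature (illustrative edges).
catalog["r_bin"] = pd.cut(catalog["radius_re"], bins=[0, 1.5, 4.0, 25.0],
                          labels=["terrestrial", "sub-Neptune", "giant"])
catalog["t_bin"] = pd.cut(catalog["teq_k"], bins=[0, 500, 1000, 3000],
                          labels=["cool", "warm", "hot"])

# Within each category, the cheapest targets (shortest required
# observation time) rank first.
ranked = (catalog.sort_values("t_obs_hr")
                 .groupby(["r_bin", "t_bin"], observed=True)
                 .head(10))
print(ranked[["name", "r_bin", "t_bin", "t_obs_hr"]])
```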
In this paper, the power flow in electrical systems is modelled in the time domain by using Geometric Algebra and the Hilbert Transform. The use of this mathematical framework overcomes some of the limitations shown by the existing methodologies under distorted supply or unbalanced load. In such cases, the derived instantaneous active current may not be the lowest RMS current in all circuit conditions and could contain higher levels of harmonic distortion than the supply voltage. Moreover, they cannot be applied to single phase systems. The proposed method can be used for sinusoidal and non-sinusoidal power supplies, non-linear loads, single- and multi-phase systems, and it provides meaningful engineering results with a compact formulation. Several examples have been included to prove the validity of the proposed theory. | electrical engineering and systems science |
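As a minimal illustration of the Hilbert-transform ingredient, the sketch below builds the analytic signal of a distorted single-phase waveform with SciPy. The waveforms and the quadrature pairing shown are illustrative assumptions, not the paper's worked examples.

```python
# Build the analytic signal of a distorted voltage/current pair, a first
# step toward a time-domain power decomposition.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000, 50.0                      # sampling rate [Hz], mains [Hz]
t = np.arange(0, 0.1, 1 / fs)

u = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)        # supply voltage
i = 10 * np.sin(2 * np.pi * f0 * t - 0.5) \
    + 2 * np.sin(2 * np.pi * 3 * f0 * t)                 # distorted current

# hilbert() returns the analytic signal x + j*H{x}; its imaginary part is
# the quadrature (90-degree shifted) waveform used to separate in-phase
# and quadrature current components.
u_a, i_a = hilbert(u), hilbert(i)
p_inst = u * i                              # instantaneous power
q_inst = np.imag(u_a) * i                   # one common quadrature pairing

print(f"mean active power P ~ {p_inst.mean():.1f} W")
```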
The relationship of two dimensional quantum field theory and isomonodromic deformations of Fuchsian systems has a long history. Recently four-dimensional $\mathcal{N}=2$ gauge theories joined the party in a multitude of roles. In this paper we study the vacuum expectation values of intersecting half-BPS surface defects in $SU(2)$ theory with $N_f=4$ fundamental hypermultiplets. We show they form a horizontal section of a Fuchsian system on a sphere with $5$ regular singularities, calculate the monodromy, and define the associated isomonodromic tau-function. Using the blowup formula in the presence of half-BPS surface defects, initiated in the companion paper, we obtain the GIL formula, establishing an unexpected relation of the topological string/free fermion regime of supersymmetric gauge theory to classical integrability. | high energy physics theory |
We develop a non-perturbative method for calculating partition functions of strongly coupled quantum mechanical systems with interactions between subsystems described by a path integral of a dual system. The dual path integral is derived starting from non-interacting subsystems at zeroth order and then by introducing couplings of increasing complexity at each order of an iterative procedure. These orders of interactions play the role of a dual time and the full quantum partition function is expressed as a transition amplitude in the dual system. More precisely, it is expressed as a path integral from a deformation-operators dependent initial state at zero time/order to the inverse-temperature dependent final state at later time/order. We provide three examples of strongly coupled systems with first-order, second-order and higher-order interactions and discuss a possible emergence of space-time, quantum field theories and general relativity in context of the dual path integral. | high energy physics theory |
This work reports on the application of a novel electric field-ionization setup for high-resolution laser spectroscopy measurements on bunched fast atomic beams in a collinear geometry. In combination with multi-step resonant excitation to Rydberg states using pulsed lasers, the field ionization technique demonstrates increased sensitivity for isotope separation and measurement of atomic parameters over non-resonant laser ionization methods. The setup was tested at the Collinear Resonance Ionization Spectroscopy experiment at ISOLDE-CERN to perform high-resolution measurements of transitions in the indium atom from the 5s$^2$5d~$^2$D$_{5/2}$ and 5s$^2$5d~$^2$D$_{3/2}$ states to 5s$^2$($n$)p~$^2$P and 5s$^2$($n$)f~$^2$F Rydberg states, up to a principal quantum number of $n$ = 72. The extracted Rydberg level energies were used to re-evaluate the ionization potential of the indium atom to be 46670.1055(21) cm$^{-1}$. The nuclear magnetic dipole and nuclear electric quadrupole hyperfine structure constants and level isotope shifts of the 5s$^2$5d~$^2$D$_{5/2}$ and 5s$^2$5d~$^2$D$_{3/2}$ states were determined for $^{113,115}$In. The results are compared to calculations using relativistic coupled-cluster theory. A good agreement is found with the ionization potential and isotope shifts, while disagreement of hyperfine structure constants indicates an increased importance of electron correlations in these excited atomic states. With the aim of further increasing the detection sensitivity for measurements on exotic isotopes, a systematic study of the field-ionization arrangement implemented in the work was performed and an improved design was simulated and is presented. The improved design offers increased background suppression independent of the distance from field ionization to ion detection. | physics |
Thermal vorticity in non-central Au+Au collisions at energies $7.7 \leq \sqrt{s} \leq 62.4$ GeV is calculated within the UrQMD transport model. Tracing the $\Lambda$ and $\bar{\Lambda}$ hyperons back to their last interaction point, we were able to obtain the temperature and the chemical potentials at the time of emission by fitting the extracted bulk characteristics of the hot and dense medium to a statistical model of an ideal hadron gas. Then the polarization of both hyperons was calculated. The polarization of $\Lambda$ and $\bar{\Lambda}$ increases with decreasing energy of the nuclear collisions. The stronger polarization of $\bar{\Lambda}$ is explained by the different space-time distributions of $\Lambda$ and $\bar{\Lambda}$ and by the different freeze-out conditions of the two hyperons. | high energy physics phenomenology
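For orientation, the quantity computed above is conventionally defined as follows; this is the standard definition used in hyperon-polarization studies, with $\beta^\mu = u^\mu/T$ the inverse-temperature four-velocity, and the leading-order hyperon spin polarization is proportional to $\varpi$ at the last interaction point.

```latex
\varpi_{\mu\nu} = \tfrac{1}{2}\,\big(\partial_\nu \beta_\mu - \partial_\mu \beta_\nu\big),
\qquad \beta^\mu = \frac{u^\mu}{T}.
```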
We frame beta-minus decay rate perturbations in the context of charged-current (CC) nonstandard neutrino interactions (NSI). In particular, we first outline one NSI parameterization for modeling the CC NSI. Then, we demonstrate that the strength of the NSI constrained by beta-minus decay data is comparable to previously reported bounds on general CC NSI at $\mathcal{O}(10^{-4})$ to $\mathcal{O}(10^{-2})$. After discussing possible parameters involved in beta-minus decay NSI, we establish a working framework to probe potentially new physics in nuclear decay rates. Finally, we determine that current nuclear reactor technology could be used for experiments that are sensitive to these NSI parameters. These would include NSI contributions from two types of parameters: (i) the relative NSI effects from the three neutrino flavors and (ii) the change in flux of electron neutrinos through a decaying sample. | high energy physics phenomenology |
In the present paper we address the real-time detection problem of a change-point in the coefficients of a linear model, allowing for the possibility that the model errors are asymmetrical and that the number of explanatory variables is large. We build test statistics based on the cumulative sum (CUSUM) of the expectile function derivatives calculated on the residuals obtained by the expectile and adaptive LASSO expectile estimation methods. The asymptotic distribution of these statistics is obtained under the hypothesis that the model does not change. Moreover, we prove that they diverge when the model changes at an unknown observation. The asymptotic study of the test statistics under these two hypotheses allows us to find the asymptotic critical region and the stopping time, that is, the observation at which the model changes. The empirical performance is investigated in a comparative simulation study with other statistics of CUSUM type. Two examples on real data are also presented to demonstrate the method's practical interest. | statistics
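A minimal numerical sketch of the monitoring idea, assuming a toy residual stream: the CUSUM of the expectile-loss derivative is tracked after a historical training window, and an alarm is raised when it crosses a threshold. The normalization and the critical value below are placeholders, not the paper's asymptotic ones.

```python
import numpy as np

def expectile_score(res, tau=0.5):
    # Derivative in r of the asymmetric squared loss |tau - 1{r<0}| r^2.
    w = np.where(res < 0, 1.0 - tau, tau)
    return 2.0 * w * res

rng = np.random.default_rng(0)
n_hist, n_mon = 200, 200
res = rng.standard_normal(n_hist + n_mon)
res[n_hist + 100:] += 1.5            # a change after monitoring starts

s = expectile_score(res, tau=0.7)
sigma = s[:n_hist].std()             # scale estimated on historical data
cusum = np.abs(np.cumsum(s[n_hist:])) / (sigma * np.sqrt(n_hist))

threshold = 3.0                      # placeholder critical value
hits = np.nonzero(cusum > threshold)[0]
print("alarm at monitoring step:", hits[0] if hits.size else "none")
```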
Saving lives or the economy is a dilemma for epidemic control in most cities, while smart-tracing technology raises privacy concerns. In this paper, we propose a solution to the life-or-economy dilemma that does not require private data. We bypass the private-data requirement by suppressing epidemic transmission through dynamic control of inter-regional mobility that relies only on Origin-Destination (OD) data. We develop the DUal-objective Reinforcement-Learning Epidemic Control Agent (DURLECA) to search for mobility-control policies that simultaneously minimize infection spread and maximally retain mobility. DURLECA employs a novel graph neural network, namely Flow-GNN, to estimate the virus-transmission risk induced by urban mobility. The estimated risk is used to support a reinforcement learning agent that generates mobility-control actions. The training of DURLECA is guided by a well-constructed reward function, which captures the natural trade-off between epidemic control and mobility retention. Besides, we design two exploration strategies to improve the agent's search efficiency and help it escape local optima. Extensive experimental results on a real-world OD dataset show that DURLECA is able to suppress infections at an extremely low level while retaining 76\% of the mobility in the city. Our implementation is available at https://github.com/anyleopeace/DURLECA/. | computer science
An excess of $\sim$10-20 GeV cosmic-ray antiprotons has been identified in the spectrum reported by the AMS-02 Collaboration. The systematic uncertainties associated with this signal, however, have made it difficult to interpret these results. In this paper, we revisit the uncertainties associated with the time, charge and energy-dependent effects of solar modulation, the antiproton production cross section, and interstellar cosmic-ray propagation. After accounting for these uncertainties, we confirm the presence of a 4.7$\sigma$ antiproton excess, consistent with that arising from a $m_{\chi} \approx 64-88$ GeV dark matter particle annihilating to $b\bar{b}$ with a cross section of $\sigma v \simeq (0.8-5.2) \times 10^{-26}$ cm$^{3}$/s. If we allow for the stochastic acceleration of secondary antiprotons in supernova remnants, the data continues to favor a similar range of dark matter models ($m_{\chi}\approx 46-94$ GeV, $\sigma v \approx (0.7-3.8)\times 10^{-26}$ cm$^3/$s) with a significance of 3.3$\sigma$. The same range of dark matter models that are favored to explain the antiproton excess can also accommodate the excess of GeV-scale gamma rays observed from the Galactic Center. | astrophysics |
The effect of strain on the magnetic order and band structure of single-layer CrAsS$_4$ has been investigated by first-principles calculations based on density functional theory. We found that single-layer CrAsS$_4$ is an antiferromagnetic (AFM) semiconductor and undergoes a phase transition from the AFM state to a ferromagnetic (FM) state upon applying a uniaxial tensile strain of 2.99\% along the y-direction or a compressive strain of 1.76\% along the x-direction. The underlying physical mechanism of the strain-dependent magnetic stability is further elucidated as the result of the competition between the direct exchange and indirect superexchange interactions. Moreover, the band gap exhibits an abrupt change across the magnetic phase transition. Our study provides an intuitive approach to designing strain-modulated spintronic devices. | condensed matter
Despite the undeniable success of the Standard Model of particle physics (SM), there are some phenomena that the SM cannot explain. These phenomena indicate that the SM has to be modified. One possible way to extend the SM is to introduce heavy neutral leptons (HNLs). To search for HNLs in intensity-frontier experiments, one has to consider HNL production both in 2-body and 3-body decays of some mesons. We verified the possibility of using the PYTHIA approach for the calculation of HNL production in 3-body decays. We conclude that the PYTHIA approach is quite suitable for estimating the sensitivity region of intensity-frontier experiments. | high energy physics phenomenology
The Knowledge Base (KB) used for real-world applications, such as booking a movie or restaurant reservation, keeps changing over time. End-to-end neural networks trained for these task-oriented dialogs are expected to be immune to any changes in the KB. However, existing approaches break down when asked to handle such changes. We propose an encoder-decoder architecture (BoSsNet) with a novel Bag-of-Sequences (BoSs) memory, which facilitates the disentangled learning of the response's language model and its knowledge incorporation. Consequently, the KB can be modified with new knowledge without a drop in interpretability. We find that BoSsNet outperforms state-of-the-art models, with considerable improvements (> 10\%) on bAbI OOV test sets and other human-human datasets. We also systematically modify existing datasets to measure disentanglement and show BoSsNet to be robust to KB modifications. | computer science
Infrared imaging is a crucial technique in a multitude of applications, including night vision, autonomous vehicle navigation, optical tomography, and food quality control. Conventional infrared imaging technologies, however, require the use of materials such as narrow-bandgap semiconductors, which are sensitive to thermal noise and often require cryogenic cooling. Here, we demonstrate a compact all-optical alternative that performs infrared imaging in a metasurface composed of GaAs semiconductor nanoantennas, using a nonlinear wave-mixing process. We experimentally show the up-conversion of short-wave infrared wavelengths via the coherent parametric process of sum-frequency generation. In this process, an infrared image of a target is mixed inside the metasurface with a strong pump beam, translating the image from the infrared to the visible in a nanoscale ultra-thin imaging device. Our results open up new opportunities for the development of compact infrared imaging devices with applications in infrared vision and the life sciences. | physics
We present a novel setup to measure the transverse magneto-optical Kerr effect in the extreme ultraviolet spectral range at exceptionally high repetition rates based on a fiber laser amplifier system. This affords a very high and stable flux of extreme ultraviolet light, which we use to measure element-resolved demagnetization dynamics with unprecedented depth of information. Furthermore, the setup is equipped with a strong electromagnet and a cryostat, allowing measurements between 10 and 420 K using magnetic fields up to 0.86 T. The performance of our setup is demonstrated by a set of temperature- and time-dependent magnetization measurements showing distinct element-dependent behavior. | physics |
This work studies the learning abilities of agents sharing partial beliefs over social networks. The agents observe data that could have arisen from one of several hypotheses and interact locally to decide whether the observations they are receiving have arisen from a particular hypothesis of interest. To do so, we establish the conditions under which it is sufficient to share partial information about the agents' belief in relation to the hypothesis of interest. Some interesting convergence regimes arise. | electrical engineering and systems science
We study how the mechanical conformations of graphene sheets located on flat substrates depend on the density of unilateral (one-sided) attachment of hydrogen, fluorine or chlorine atoms to them. It is shown that a chemically modified graphene sheet can take four main forms on a flat substrate: a flat sheet lying parallel to the surface of the substrate, a convex sheet partially detached from the substrate with bent edges adjacent to the substrate, and single and double rolls on the substrate. On the surface of crystalline graphite, the flat form of the sheet is lowest in energy for hydrogenation densities p < 0.21, fluorination densities p < 0.20 and chlorination densities p < 0.16. The surface of crystalline nickel has a higher adsorption energy for a graphene monolayer, and the flat form of the chemically modified sheet on such a substrate is lowest in energy for hydrogenation densities p < 0.47, fluorination densities p < 0.30 and chlorination densities p < 0.21. The flat shape of the graphene sheet also remains the lowest-energy form on a substrate when molecular groups CH3, CH2-CH3 or rings C6H5 are attached on one side to its outer surface. At the attachment density p = 1/6 (one group per 6 sheet atoms) the sheet becomes a nanocarpet, whose base is formed by the graphene sheet and whose pile is formed by the attached molecular groups arranged in a tightly packed regular lattice. The addition of hydroxyl groups OH at attachment density p = 1/4 leads to the formation of hexagonal lattices of hydroxyl groups on the outer surface of the graphene sheet on a substrate. In this lattice, the groups can form various configurations of hydrogen bonds, which turns the chemically modified sheet into a multistable system. | condensed matter
In multi-agent reinforcement learning, discovering successful collective behaviors is challenging as it requires exploring a joint action space that grows exponentially with the number of agents. While the tractability of independent agent-wise exploration is appealing, this approach fails on tasks that require elaborate group strategies. We argue that coordinating the agents' policies can guide their exploration and we investigate techniques to promote such an inductive bias. We propose two policy regularization methods: TeamReg, which is based on inter-agent action predictability and CoachReg that relies on synchronized behavior selection. We evaluate each approach on four challenging continuous control tasks with sparse rewards that require varying levels of coordination as well as on the discrete action Google Research Football environment. Our experiments show improved performance across many cooperative multi-agent problems. Finally, we analyze the effects of our proposed methods on the policies that our agents learn and show that our methods successfully enforce the qualities that we propose as proxies for coordinated behaviors. | computer science |
Solution-processable two-dimensional (2D) materials have provided an ideal platform for both fundamental studies and wearable electronic applications. Apart from graphene and 2D dichalcogenides, IVA-VI monochalcogenides (MMCs) have emerged recently as promising candidates for next-generation electronic applications. However, their dispersion behavior, which is crucial for the quality, solubility and stability of MMCs, has remained largely unexplored. Here, the exfoliation and dispersion behavior of germanium(II) monosulfide (GeS) and tin(II) monosulfide (SnS) nanosheets has been investigated in a wide range of organic solvents. Nine different organic solvents were examined and analyzed, considering the solvent polarity, surface tension, and Hansen solubility parameters. Significant yields of isolated GeS and SnS flakes, namely ~16.4 and ~23.08 {\mu}g/ml in 2-propanol and N-Methyl-2-pyrrolidone respectively, were attained. The isolated flakes are few-layer nanosheets with lateral sizes of over a few hundred nanometers. The MMC colloids exhibit long-term stability, suggesting the applicability of MMCs to scalable solution-processable printed electronic devices. | condensed matter
We investigate how much randomness can be extracted from a generic partially entangled pure state of two qubits in a device-independent setting, where a Bell test is used to certify the correct functioning of the apparatus. For any such state, we first show that two bits of randomness are always attainable both if projective measurements are used to generate the randomness globally or if a nonprojective measurement is used to generate the randomness locally. We then prove that the maximum amount of randomness that can be generated using nonprojective measurements globally is restricted to between approximately 3.58 and 3.96 bits. The upper limit rules out that a bound of four bits potentially obtainable with extremal qubit measurements can be attained. We point out this is a consequence of the fact that nonprojective qubit measurements with four outcomes can only be self-tested to a limited degree in a Bell experiment. | quantum physics |
We present a method for finding, in principle, all asymptotic gravitational charges. The basic idea is that one must consider all possible contributions to the action that do not affect the equations of motion for the theory of interest; such terms include topological terms. As a result we observe that the first order formalism is best suited to an analysis of asymptotic charges. In particular, this method can be used to provide a Hamiltonian derivation of recently found dual charges. | high energy physics theory |
The performance of a Thick-COBRA (THCOBRA) gaseous detector is studied using an optical readout technique. The operating principle of this device is described, highlighting its operation in a gas mixture of Ar/CF4 (80/20%) for visible scintillation-light emission. The contributions to the total gain from the holes and the anode strips as a function of the applied bias voltage were visualized. The preservation of spatial information from the initial ionizations was demonstrated by analyzing the light emission from 5.9 keV X-rays of an 55Fe source. The observed non-uniformity of the scintillation light from the holes supports the claim of a spatial localization accuracy better than the pitch of the holes. The acquired images were used to identify weak points and sources of instabilities in view of the development of new optimized structures. | physics
A cornerstone of the theory of lambda-calculus is that intersection types characterise termination properties. They are a flexible tool that can be adapted to various notions of termination, and that also induces adequate denotational models. Since the seminal work of de Carvalho in 2007, it is known that multi types (i.e. non-idempotent intersection types) refine intersection types with quantitative information and a strong connection to linear logic. Typically, type derivations provide bounds for evaluation lengths, and minimal type derivations provide exact bounds. De Carvalho studied call-by-name evaluation, and Kesner used his system to show the termination equivalence of call-by-need and call-by-name. De Carvalho's system, however, cannot provide exact bounds on call-by-need evaluation lengths. In this paper we develop a new multi type system for call-by-need. Our system produces exact bounds and induces a denotational model of call-by-need, providing the first tight quantitative semantics of call-by-need. | computer science |
We consider constrained multi-Hamiltonian formulation for the extended Chern-Simons theory with higher derivatives of arbitrary finite order. The order $n$ extension of the theory admits $(n-1)$-parametric series of conserved tensors. The $00$-component of any representative of the series can be chosen as Hamiltonian. The theory admits a series of Hamiltonian formulations, including the canonical Ostrogradski formulation. The Hamiltonian formulations with different Hamiltonians are not connected by canonical transformations. Also, we demonstrate the inclusion of stable interactions with charged scalar field that preserves one specified Hamiltonian from the series. | high energy physics theory |
The dynamics of open quantum systems (i.e., of quantum systems interacting with an uncontrolled environment) forms the basis of numerous active areas of research from quantum thermodynamics to quantum computing. One approach to modeling open quantum systems is via a Collision Model. For instance, one could model the environment as being composed of many small quantum systems (ancillas) which interact with the target system sequentially, in a series of "collisions". In this thesis I will discuss a novel method for constructing a continuous-time master equation from the discrete-time dynamics given by any such collision model. This new approach works for any interaction duration, $\delta t$, by interpolating the dynamics between the time-points $t = n\,\delta t$. I will contrast this with previous methods which only work in the continuum limit (as $\delta t\to 0$). Moreover, I will show that any continuum-limit-based approach will always yield unitary dynamics unless it is fine-tuned in some way. For instance, it is common to find non-unitary dynamics in the continuum limit by taking an (I will argue unphysical) divergence in the interaction strengths, $g$, such that $g^2 \delta t$ is constant as $\delta t \to 0$. | quantum physics |
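A minimal numerical sketch of such a collision model, assuming an exchange (partial-swap) interaction: a system qubit collides with a sequence of fresh, identically prepared ancillas and relaxes toward the ancilla state. All parameters are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g, dt = 1.0, 0.3                       # coupling strength, collision time
H_int = g * (kron(sx, sx) + kron(sy, sy) + kron(sz, sz)) / 2
U = expm(-1j * H_int * dt)             # one collision unitary (partial swap)

rho_s = np.array([[1, 0], [0, 0]], dtype=complex)   # system starts in |0>
p = 0.8                                             # ancilla population
rho_a = np.diag([1 - p, p]).astype(complex)         # fresh ancilla state

def ptrace_ancilla(rho):               # trace out the second qubit
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

for n in range(40):                    # 40 collisions with fresh ancillas
    rho = U @ kron(rho_s, rho_a) @ U.conj().T
    rho_s = ptrace_ancilla(rho)

print("final excited population:", rho_s[1, 1].real)   # approaches p
```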
Markov chain Monte Carlo (MCMC) is one of the most useful approaches to scientific computing because of its flexible construction, ease of use and generality. Indeed, MCMC is indispensable for performing Bayesian analysis. Two critical questions that MCMC practitioners need to address are where to start and when to stop the simulation. Although a great amount of research has gone into establishing convergence criteria and stopping rules with sound theoretical foundation, in practice, MCMC users often decide convergence by applying empirical diagnostic tools. This review article discusses the most widely used MCMC convergence diagnostic tools. Some recently proposed stopping rules with firm theoretical footing are also presented. The convergence diagnostics and stopping rules are illustrated using three detailed examples. | statistics |
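As a concrete example of one widely used diagnostic discussed in such reviews, the sketch below implements the Gelman-Rubin potential scale reduction factor and applies it to toy random-walk Metropolis chains; values near 1 suggest the chains have mixed.

```python
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m, n) -- m chains, n draws each."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)              # R-hat

rng = np.random.default_rng(1)

def rw_metropolis(x0, n=2000):
    # Random-walk Metropolis targeting N(0, 1).
    x, out = x0, np.empty(n)
    for i in range(n):
        prop = x + rng.normal(scale=1.0)
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        out[i] = x
    return out

# Four chains started from dispersed points.
chains = np.stack([rw_metropolis(x0) for x0 in (-5.0, -1.0, 1.0, 5.0)])
print("R-hat:", gelman_rubin(chains[:, 1000:]))   # discard burn-in
```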
The automatic detection of hypernymy relationships represents a challenging problem in NLP. The successful application of state-of-the-art supervised approaches using distributed representations has generally been impeded by the limited availability of high quality training data. We have developed two novel data augmentation techniques which generate new training examples from existing ones. First, we combine the linguistic principles of hypernym transitivity and intersective modifier-noun composition to generate additional pairs of vectors, such as "small dog - dog" or "small dog - animal", for which a hypernymy relationship can be assumed. Second, we use generative adversarial networks (GANs) to generate pairs of vectors for which the hypernymy relation can also be assumed. We furthermore present two complementary strategies for extending an existing dataset by leveraging linguistic resources such as WordNet. Using an evaluation across 3 different datasets for hypernymy detection and 2 different vector spaces, we demonstrate that both of the proposed automatic data augmentation and dataset extension strategies substantially improve classifier performance. | computer science |
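The first augmentation idea is easy to sketch. The toy version below works with surface strings rather than the distributed representations actually composed in the paper, and the seed lists are invented for illustration.

```python
# From an attested (hyponym, hypernym) pair and an intersective modifier,
# generate new positive pairs via modifier-noun composition and
# hypernym transitivity.
seed_pairs = [("dog", "animal"), ("car", "vehicle")]
modifiers = ["small", "old"]

augmented = []
for hypo, hyper in seed_pairs:
    for mod in modifiers:
        compound = f"{mod} {hypo}"
        augmented.append((compound, hypo))    # "small dog" -> "dog"
        augmented.append((compound, hyper))   # transitivity: -> "animal"

for pair in augmented:
    print(pair)
```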
We report $^{77}$Se NMR data in the normal and superconducting states of a single crystal of FeSe for several different field orientations. The Knight shift is suppressed in the superconducting state for in-plane fields, but does not vanish at zero temperature. For fields oriented out of the plane, little or no reduction is observed below $T_c$. These results reflect spin-singlet pairing emerging from a nematic state with large orbital susceptibility and spin-orbit coupling. The spectra and spin-relaxation rate data reveal electronic inhomogeneity that is enhanced in the superconducting state, possibly arising from enhanced density of states in the vortex cores. Despite the spin polarization of these states, there is no evidence for antiferromagnetic fluctuations. | condensed matter |
Android applications are executed on smartphones equipped with a variety of resources that must be properly accessed and controlled, otherwise the correctness of the executions and the stability of the entire environment might be negatively affected. For example, apps must properly acquire, use, and release microphones, cameras, and other multimedia devices otherwise the behavior of the apps that use the same resources might be compromised. Unfortunately, several apps do not use resources correctly, for instance due to faults and inaccurate design decisions. By interacting with these apps users may experience unexpected behaviors, which in turn may cause instability and sporadic failures, especially when resources are accessed. In this paper, we present an approach that lets users protect their environment from the apps that use resources improperly by enforcing the correct usage protocol. This is achieved by using software enforcers that can observe executions and change them when necessary. For instance, enforcers can detect that a resource has been acquired but not released, and automatically perform the release operation, thus giving the possibility to use that same resource to the other apps. The main idea is that software libraries, in particular the ones controlling access to resources, can be augmented with enforcers that can be activated and deactivated on demand by users to protect their environment from unwanted app behaviors. We call the software libraries augmented with one or more enforcers proactive libraries because the activation of the enforcer decorates the library with proactive behaviors that can guarantee the correctness of the execution despite the invocation of the operations implemented by the library. | computer science |
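The enforcer idea can be sketched language-agnostically. The toy Python proxy below repairs a missing release on an invented resource class; the class and method names are illustrative only, since the paper itself targets Android libraries such as the media APIs.

```python
class Microphone:                      # stand-in for a managed resource
    def __init__(self):
        self.held = False
    def acquire(self):
        if self.held:
            raise RuntimeError("already held")
        self.held = True
    def release(self):
        self.held = False

class ReleaseEnforcer:
    """Proxy that observes the usage protocol and repairs violations."""
    def __init__(self, resource):
        self._r = resource
    def __enter__(self):
        self._r.acquire()
        return self._r
    def __exit__(self, *exc):
        if self._r.held:               # the app forgot to release:
            self._r.release()          # the enforcer performs it
        return False

mic = Microphone()
with ReleaseEnforcer(mic):
    pass                               # app "forgets" to release
print("held after use:", mic.held)     # False: enforcer released it
```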
The off-diagonal parton-scattering channels $g+\gamma^*$ and $q+\phi^*$ in deep-inelastic scattering are power-suppressed near threshold $x\to 1$. We address the next-to-leading power (NLP) resummation of large double logarithms of $1-x$ to all orders in the strong coupling, which are present even in the off-diagonal DGLAP splitting kernels. The appearance of divergent convolutions prevents the application of factorization methods known from leading power resummation. Employing $d$-dimensional consistency relations from requiring $1/\epsilon$ pole cancellations in dimensional regularization between momentum regions, we show that the resummation of the off-diagonal parton-scattering channels at the leading logarithmic order can be bootstrapped from the recently conjectured exponentiation of NLP soft-quark Sudakov logarithms. In particular, we derive a result for the DGLAP kernel in terms of the series of Bernoulli numbers found previously by Vogt directly from algebraic all-order expressions. We identify the off-diagonal DGLAP splitting functions and soft-quark Sudakov logarithms as inherent two-scale quantities in the large-$x$ limit. We use a refactorization of these scales and renormalization group methods inspired by soft-collinear effective theory to derive the conjectured soft-quark Sudakov exponentiation formula. | high energy physics phenomenology |
In this work we analyse the forward-backward asymmetry of the $h\to Vff'$ decay in the Aligned two-Higgs Doublet Model. The Standard Model prediction for this asymmetry is small for $V=W$, as it suffers from Yukawa suppression, and it vanishes for $V=Z$. This does not necessarily hold true in the Aligned model, where these contributions can in principle be re-enhanced through the independent alignment factors $\varsigma_f$. In this analysis we conclude that, due to the additional contributions of the Aligned two-Higgs Doublet Model, together with extra sources of CP-violation for the $V=Z$ channel, the Standard Model predictions can be significantly modified in a large region of the parameter space. These deviations, which could potentially be measured at the High-Luminosity LHC or at future Higgs factories, would be a clear signal of new physics, and would shed new light on possible extensions of the Standard Model and new sources of CP-violation. | high energy physics phenomenology
The mean field games system is a coupled pair of nonlinear partial differential equations arising in differential game theory, as a limit as the number of agents tends to infinity. We prove existence and uniqueness of classical solutions for time-dependent mean field games with Sobolev data. Many works in the literature assume additive separability of the Hamiltonian, as well as further structure such as convexity and monotonicity of the resulting components. Problems arising in practice, however, may not have this separable structure; we therefore consider the non-separable problem. For our existence and uniqueness results, we introduce new smallness constraints which simultaneously consider the size of the time horizon, the size of the data, and the strength of the coupling in the system. | mathematics |
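For orientation, the prototypical time-dependent system has the following standard form, a backward Hamilton-Jacobi-Bellman equation coupled to a forward Fokker-Planck equation; in the non-separable setting considered here the Hamiltonian $H$ depends jointly on $Du$ and the density $m$.

```latex
% Backward HJB for the value function u, forward Fokker-Planck for the
% density m, on a time horizon [0, T]:
\begin{aligned}
  -\partial_t u - \Delta u + H(x, Du, m) &= 0, \\
  \partial_t m - \Delta m - \operatorname{div}\!\big( m \, D_p H(x, Du, m) \big) &= 0,
\end{aligned}
\qquad u(T,\cdot) = u_T, \quad m(0,\cdot) = m_0 .
```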
Extremal compact hyperbolic surfaces contain a packing of discs of the largest possible radius permitted by the topology of the surface. It is well known that arithmetic conditions on the uniformizing group are necessary for the existence of a second extremal packing in the same surface, but constructing explicit examples of this phenomenon is a complicated task. We present a brute force computational procedure that can be used to produce examples in all cases. | mathematics |
In this work, we present the integrated structure-control design of a 2-DOF underactuated mechanical system, aiming to achieve a periodic motion of the end-effector. The desired behavior is generated via input-output linearization, followed by structural optimization of the zero dynamics. Inspired by recent works on the control-oriented design of multibody systems, we define an optimization problem based on the simulation of the system's response. In particular, relevant model parameters are used to match the reference with a specific orbit of the zero dynamics, while also penalizing the input energy. For the considered application, the selected parameters are related to the mechanism's elasticities and mass distribution. Notably, we show that it is possible to reach a desirable trade-off between mass reduction and periodic motion accuracy. With an optimal zero dynamics response available, the control scheme can be completed with established orbital stabilization techniques, ensuring a robust oscillating behavior. | electrical engineering and systems science |
We investigate the stellar kinematics of the bulge and disk components in 826 galaxies with a wide range of morphology from the Sydney-AAO Multi-object Integral-field spectroscopy (SAMI) Galaxy Survey. The spatially-resolved rotation velocity (V) and velocity dispersion ($\sigma$) of bulge and disk components have been simultaneously estimated using the penalized pixel fitting (pPXF) method with photometrically defined weights for the two components. We introduce a new subroutine of pPXF for dealing with degeneracy in the solutions. We show that the V and $\sigma$ distributions in each galaxy can be reconstructed using the kinematics and weights of the bulge and disk components. The combination of two distinct components provides a consistent description of the major kinematic features of galaxies over a wide range of morphological types. We present Tully-Fisher and Faber-Jackson relations showing that the galaxy stellar mass scales with both V and $\sigma$ for both components of all galaxy types. We find a tight Faber-Jackson relation even for the disk component. We show that the bulge and disk components are kinematically distinct: (1) the two components show scaling relations with similar slopes, but different intercepts; (2) the spin parameter $\lambda_R$ indicates bulges are pressure-dominated systems and disks are supported by rotation; (3) the bulge and disk components have, respectively, low and high values in intrinsic ellipticity. Our findings suggest that the relative contributions of the two components explain, at least to first order, the complex kinematic behaviour of galaxies. | astrophysics |
We solve the problem of salient object detection by investigating how to expand the role of pooling in convolutional neural networks. Based on the U-shape architecture, we first build a global guidance module (GGM) upon the bottom-up pathway, aiming at providing layers at different feature levels the location information of potential salient objects. We further design a feature aggregation module (FAM) to make the coarse-level semantic information well fused with the fine-level features from the top-down pathway. By adding FAMs after the fusion operations in the top-down pathway, coarse-level features from the GGM can be seamlessly merged with features at various scales. These two pooling-based modules allow the high-level semantic features to be progressively refined, yielding detail enriched saliency maps. Experiment results show that our proposed approach can more accurately locate the salient objects with sharpened details and hence substantially improve the performance compared to the previous state-of-the-arts. Our approach is fast as well and can run at a speed of more than 30 FPS when processing a $300 \times 400$ image. Code can be found at http://mmcheng.net/poolnet/. | computer science |
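A sketch in the spirit of the feature aggregation module described above: pool the fused map at several scales, upsample, merge, and refine with a convolution. Scales, channel sizes, and layer arrangement are loose assumptions; the authors' exact design is in the released code at http://mmcheng.net/poolnet/.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregation(nn.Module):
    def __init__(self, channels, scales=(2, 4, 8)):
        super().__init__()
        self.scales = scales
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = x
        for s in self.scales:
            # Pool at a coarser scale, then upsample back and merge.
            pooled = F.avg_pool2d(x, kernel_size=s, stride=s)
            out = out + F.interpolate(pooled, size=x.shape[2:],
                                      mode="bilinear", align_corners=False)
        return self.refine(out)

fam = FeatureAggregation(64)
feat = torch.randn(1, 64, 40, 53)    # a fused top-down feature map
print(fam(feat).shape)               # torch.Size([1, 64, 40, 53])
```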
We construct and study quantum trimer models and resonating SU(3)-singlet models on the kagome lattice, which generalize quantum dimer models and the Resonating Valence Bond wavefunctions to a trimer and SU(3) setting. We demonstrate that these models carry a Z_3 symmetry which originates in the structure of trimers and the SU(3) representation theory, and which becomes the only symmetry under renormalization. Based on this, we construct simple and exact parent Hamiltonians for the model which exhibit a topological 9-fold degenerate ground space. A combination of analytical reasoning and numerical analysis reveals that the quantum order ultimately displayed by the model depends on the relative weight assigned to different types of trimers -- it can display either Z_3 topological order or form a symmetry-broken trimer crystal, and in addition possesses a point with an enhanced U(1) symmetry and critical behavior. Our results accordingly hold for the SU(3) model, where the two natural choices for trimer weights give rise to either a topological spin liquid or a system with symmetry-broken order, respectively. Our work thus demonstrates the suitability of resonating trimer and SU(3)-singlet ansatzes to model SU(3) topological spin liquids on the kagome lattice. | condensed matter |
Using our HST/ACS observations of the recently found isolated dwarf spheroidal galaxies, we homogeneously measured their star formation histories. We determined the star formation rate as a function of time, as well as the age and metallicity of the stellar populations. All objects demonstrate complex star formation histories, with a significant portion of stars formed 10-13 Gyr ago. Nevertheless, stars of intermediate ages (1-8 Gyr) are present. In order to understand how the star formation parameters influence the evolution of dSphs, we also studied a sample of the nearest dSphs in different environments: isolated (d < 2 Mpc); beyond the Local Group virial radius (but within the Local Group zero-velocity sphere); and satellites of M 31 located within the virial zone (300 kpc). Using archival HST/ACS observations, we measured their star formation histories. A comparative analysis of the obtained parameters allows us to distinguish a possible effect of spatial segregation on the dSph evolution scenario. | astrophysics
Recently, there has been significant progress in computing scattering amplitudes in the high-energy limit using rapidity evolution equations. We describe the state-of-the-art and demonstrate the interplay between exponentiation of high-energy logarithms and that of infrared singularities. The focus in this talk is the imaginary part of 2 to 2 partonic amplitudes, which can be determined by solving the BFKL equation. We demonstrate that the wavefunction is infrared finite, and that its evolution closes in the soft approximation. Within this approximation we derive a closed-form solution for the amplitude in dimensional regularization, which fixes the soft anomalous dimension to all orders at NLL accuracy. We then turn to finite contributions of the amplitude and show that the remaining hard contributions can be determined algorithmically, by iteratively solving the BFKL equation in exactly two dimensions within the class of single-valued harmonic polylogarithms. To conclude we present numerical results and analyse large-order behaviour of the amplitude. | high energy physics phenomenology |
Increased adoption of artificial intelligence (AI) systems into scientific workflows will result in an increasing technical debt as the distance between the data scientists and engineers who develop AI system components and scientists, researchers and other users grows. This could quickly become problematic, particularly where guidance or regulations change and once-acceptable best practice becomes outdated, or where data sources are later discredited as biased or inaccurate. This paper presents a novel method for deriving a quantifiable metric capable of ranking the overall transparency of the process pipelines used to generate AI systems, such that users, auditors and other stakeholders can gain confidence that they will be able to validate and trust the data sources and contributors in the AI systems that they rely on. The methodology for calculating the metric, and the type of criteria that could be used to make judgements on the visibility of contributions to systems are evaluated through models published at ModelHub and PyTorch Hub, popular archives for sharing science resources, and is found to be helpful in driving consideration of the contributions made to generating AI systems and approaches towards effective documentation and improving transparency in machine learning assets shared within scientific communities. | computer science |
The framework of block-encodings is designed to develop a variety of quantum algorithms by encoding a matrix as a block of a unitary. To harness the potential advantages of block-encoding, it is essential to implement the block-encodings with preferred parameters for various matrices. In this paper, we present a new approach to implement the block-encodings of $n\times n$ dense matrices by decomposing the matrices into linear combinations of displacement matrices. It is shown that our approach will generate a $(\chi; 2\log n; \epsilon)$-block-encoding if the displacement of the matrix has been stored in a quantum-accessible data structure, where $\chi$ is the $l_{1}$-norm of the displacement of the matrix and $\epsilon$ is a parameter related to precision. As applications, we also introduce quantum algorithms for solving linear systems with displacement structures, which are exponentially faster than their classical counterparts. | quantum physics
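The decomposition at the heart of the construction can be checked numerically. The sketch below uses one common convention for displacement matrices, $D_{jk}=X^jZ^k$ with $X$ the cyclic shift and $Z$ the diagonal phase matrix (an assumption about the paper's exact convention), expands a random matrix in this basis via the Hilbert-Schmidt inner product, and reports the $l_1$-norm of the coefficients, which plays the role of $\chi$ above.

```python
import numpy as np

n = 4
omega = np.exp(2j * np.pi / n)
X = np.roll(np.eye(n), 1, axis=0)                 # cyclic shift
Z = np.diag(omega ** np.arange(n))                # phase matrix

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

coeffs = np.empty((n, n), dtype=complex)
recon = np.zeros_like(A)
for j in range(n):
    for k in range(n):
        D = np.linalg.matrix_power(X, j) @ np.linalg.matrix_power(Z, k)
        # Displacement basis is orthogonal: Tr(D† D') = n * delta.
        coeffs[j, k] = np.trace(D.conj().T @ A) / n
        recon += coeffs[j, k] * D

print("reconstruction error:", np.abs(A - recon).max())
print("l1-norm of coefficients (chi):", np.abs(coeffs).sum())
```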
As an extension of our recent work (Results in Physics 12, 147-152 (2019)), we show, both theoretically and numerically, that the breakdown of agreement between the non-relativistic and relativistic quantum dynamical predictions in the non-relativistic regime also occurs for an electron subjected to a time-dependent force. | quantum physics |
We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency. By pretraining on the resulting corpora we obtain significant improvements on SQuAD2 and NQ, establishing a new state-of-the-art on the latter. Our synthetic data generation models, for both question generation and answer extraction, can be fully reproduced by finetuning a publicly available BERT model on the extractive subsets of SQuAD2 and NQ. We also describe a more powerful variant that does full sequence-to-sequence pretraining for question generation, obtaining exact match and F1 at less than 0.1% and 0.4% from human performance on SQuAD2. | computer science |
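The filtering step reduces to a simple loop. The sketch below is pseudocode-style Python in which the three model calls are placeholders for the finetuned models described above, and an exact string match stands in for whatever answer-equivalence test is used.

```python
def roundtrip_filter(passages, extract_answer, generate_question,
                     answer_question):
    """Keep a synthetic (question, answer) pair only if the answer
    recovered from the generated question matches the answer the
    question was generated from."""
    kept = []
    for passage in passages:
        answer = extract_answer(passage)                 # answer extraction
        question = generate_question(passage, answer)    # question generation
        roundtrip = answer_question(passage, question)   # re-answer
        if roundtrip == answer:                          # roundtrip consistency
            kept.append({"passage": passage,
                         "question": question,
                         "answer": answer})
    return kept
```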
In this work, a covariant formulation of the gluon self-energy in the presence of ellipsoidal anisotropy is considered. It is shown that the general structure of the gluon self-energy can be written in terms of six linearly independent projection tensors. Similar to the case of spheroidal anisotropy, mass scales can be introduced for each of the collective modes by considering the static limits. With a simplified ellipsoidal generalization of the Romatschke-Strickland form, the angular dependencies of the mass scales are studied. It is observed that, compared to the spheroidal case, an additional unstable mode may appear in the presence of ellipsoidal anisotropy depending upon the choice of the parameters. | high energy physics phenomenology
An unusual regime for liquid-state nuclear magnetic resonance (NMR) where the magnetic field strength is so low that the $J$-coupling (intramolecular spin-spin) interactions dominate the spin Hamiltonian opens a new paradigm with applications in spectroscopy, quantum control, and in fundamental-physics experiments, including searches for well-motivated dark-matter candidates. An interesting possibility is to bring this kind of "extreme NMR" together with another one---single nuclear spin detected with a single-spin quantum sensor. This would enable single-molecule $J$-spectroscopy. | physics |
Electrification of transport is a key strategy in reducing carbon emissions. Many countries have adopted policies of complete but gradual transformation to electric vehicles (EVs). However, mass EV adoption also means a spike in load (kW), which in turn can disrupt existing electricity infrastructure. Smart or controlled charging is widely seen as a potential solution to alleviate this stress on existing networks. Learning from the recent EV trials in the UK and elsewhere, we take into account two key aspects which are largely ignored in current research: the number of EVs actually charging at any given time and the wide range of EV types, especially in terms of battery capacity. Taking a minimalistic scenario-based approach, we study forecasting models for the mean number of active chargers and the mean EV consumption under distinct scenarios. Focusing on residential charging, the models we consider range from simple regression models to more advanced machine and deep learning models such as XGBoost and LSTMs. We then use these models to evaluate the impacts of different levels of future EV penetration on a specimen distribution transformer that captures typical real-world scenarios. In doing so, we also initiate the study of different types of controlled charging when fully controlled charging is not possible. This aligns with the outcomes of recent trials, which show that a sizeable proportion of EV owners may not prefer fully controlled centralized charging. We study two possible control regimes and show that one is more beneficial from the load-on-transformer point of view, while the other may be preferred for other objectives. We show that a minimum of 60% control is required to ensure that transformers are not overloaded during peak hours. | electrical engineering and systems science
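A minimal sketch of the "mean number of active chargers" forecasting task on synthetic data, using a gradient-boosting regressor on calendar features; the data-generating process and feature choices are invented stand-ins for the trial data described above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 90)                       # 90 days, hourly
hour_of_day = hours % 24
day_of_week = (hours // 24) % 7

# Synthetic "active chargers": an evening residential peak plus noise.
active = (5 + 8 * np.exp(-0.5 * ((hour_of_day - 19) / 2.5) ** 2)
          + rng.normal(scale=1.0, size=hours.size))

X = np.column_stack([hour_of_day, day_of_week])
model = GradientBoostingRegressor().fit(X[:-24 * 7], active[:-24 * 7])
pred = model.predict(X[-24 * 7:])                # hold out the last week

mae = np.abs(pred - active[-24 * 7:]).mean()
print(f"holdout MAE: {mae:.2f} chargers")
```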
The approximation of functions in Orlicz space by multivariate operators on a simplex is considered. The convergence rate is given in terms of the modulus of smoothness. | mathematics
We use a sample of 532 star-forming galaxies at redshifts $z\sim 1.4-2.6$ with deep rest-frame optical spectra from the MOSFIRE Deep Evolution Field (MOSDEF) survey to place the first constraints on the nebular attenuation curve at high redshift. Based on the first five low-order Balmer emission lines detected in the composite spectra of these galaxies (${\rm H\alpha}$ through ${\rm H\epsilon}$), we derive a nebular attenuation curve that is similar in shape to that of the Galactic extinction curve, suggesting that the dust covering fraction and absorption/scattering properties along the lines-of-sight to massive stars at high redshift are similar to those of the average Milky Way sightline. The curve derived here implies nebular reddening values that are on average systematically larger than those derived for the stellar continuum. In the context of stellar population synthesis models that include the effects of stellar multiplicity, the difference in reddening of the nebular lines and stellar continuum may imply molecular cloud crossing timescales that are a factor of $\gtrsim 3\times$ longer than those inferred for local molecular clouds, star-formation rates that are constant or increasing with time such that newly-formed and dustier OB associations always dominate the ionizing flux, and/or that the dust responsible for reddening the nebular emission may be associated with non-molecular (i.e., ionized and neutral) phases of the ISM. Our analysis points to a variety of investigations of the nebular attenuation curve that will be enabled with the next generation of ground- and space-based facilities. | astrophysics |