text
label
Stochastic inverse problems (SIP) address the behavior of a set of objects of the same kind but with variable properties, such as a population of cells. Using a population of mechanistic models from a single parametric family, SIP explains population variability by transferring real-world observations into the latent space of model parameters. Previous research in SIP focused on solving the parameter inference problem for a single population using Markov chain Monte Carlo methods. Here we extend SIP to address multiple related populations simultaneously. Specifically, we simulate control and treatment populations in experimental protocols by discovering two related latent spaces of model parameters. Instead of taking a Bayesian approach, we reformulate two-population SIP as the constrained-optimization problem of finding distributions of model parameters. To minimize the divergence between distributions of experimental observations and model outputs, we developed novel deep learning models based on generative adversarial networks (GANs) which mirror the structure of our underlying constrained-optimization problem. The flexibility of GANs allowed us to build computationally scalable solutions and tackle complex model input parameter inference scenarios, which appear routinely in physics, biophysics, economics and other areas, and which cannot be handled with existing methods. Specifically, we demonstrate two scenarios of parameter inference over a control population and a treatment population whose treatment either selectively affects only a subset of model parameters with some uncertainty or has a deterministic effect on all model parameters.
statistics
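The abstract above describes matching the distribution of mechanistic-model outputs to observed data by training GANs over the latent space of model parameters. As a rough single-population illustration of that idea (not the authors' architecture), the following sketch trains a generator to propose parameters for a toy exponential-decay "mechanistic model" so that simulated outputs become indistinguishable from synthetic observations; the model, network sizes and hyperparameters are all assumptions for illustration, and the two-population control/treatment structure is not shown.

```python
# Minimal sketch (not the paper's architecture): a generator proposes model
# parameters, a fixed "mechanistic model" maps them to outputs, and a
# discriminator pushes the simulated output distribution toward the observed one.
import torch
import torch.nn as nn

torch.manual_seed(0)

def mechanistic_model(theta, t):
    # Toy stand-in for a mechanistic simulator: y(t) = a * exp(-b * t)
    a, b = theta[:, :1], theta[:, 1:2]
    return a * torch.exp(-b * t)

t = torch.linspace(0.0, 1.0, 20).unsqueeze(0)                    # observation grid
true_theta = torch.stack([1.0 + 0.1 * torch.randn(512),
                          2.0 + 0.3 * torch.randn(512)], dim=1)
observed = mechanistic_model(true_theta, t)                       # synthetic "data"

gen = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
disc = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    z = torch.randn(256, 4)
    fake = mechanistic_model(gen(z), t)
    real = observed[torch.randint(0, observed.shape[0], (256,))]

    # Discriminator: distinguish observed outputs from simulated ones.
    d_loss = bce(disc(real), torch.ones(256, 1)) + \
             bce(disc(fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make simulated outputs indistinguishable from observations,
    # implicitly learning a distribution over the latent model parameters.
    g_loss = bce(disc(mechanistic_model(gen(torch.randn(256, 4)), t)),
                 torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(gen(torch.randn(1000, 4)).mean(dim=0))  # mean of the inferred parameter distribution
```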
Sunquakes are helioseismic power enhancements initiated by solar flares, but not all flares generate sunquakes, and it remains unclear why some flares cause sunquakes while others do not. Here we propose a hypothesis to explain the disproportionate occurrence of sunquakes: during a flare's impulsive phase, when the flare's impulse acts upon the photosphere, delivered by shock waves, energetic particles from the higher atmosphere, or a downward Lorentz force, a sunquake tends to occur if the background oscillation at the flare footpoint happens to be directed downward, in the same direction as the impulse from above. To verify this hypothesis, we select 60 strong flares in Solar Cycle 24 and examine the background oscillatory velocity at the sunquake sources during the flares' impulsive phases. Since the Doppler velocity observations at sunquake sources are usually corrupted during the flares, we reconstruct the oscillatory velocity at the flare sites using the helioseismic holography method with an observation-based Green's function. A total of 24 flares are found to be sunquake active, giving a total of 41 sunquakes. We also find that in the 3-5 mHz frequency band, 25 out of 31 sunquakes show net downward oscillatory velocities during the flares' impulsive phases, and in the 5-7 mHz frequency band, 33 out of 38 sunquakes show net downward velocities. These results support the hypothesis that a sunquake is more likely to occur when a flare impacts a photospheric area with a downward background oscillation.
astrophysics
In this work, we present a neuromorphic system that combines for the first time a neural recording headstage with a signal-to-spike conversion circuit and a multi-core spiking neural network (SNN) architecture on the same die for recording, processing, and detecting High Frequency Oscillations (HFO), which are biomarkers for the epileptogenic zone. The device was fabricated using a standard 0.18$\mu$m CMOS technology node and has a total area of 99mm$^{2}$. We demonstrate its application to HFO detection in the iEEG recorded from 9 patients with temporal lobe epilepsy who subsequently underwent epilepsy surgery. The total average power consumption of the chip during the detection task was 614.3$\mu$W. We show how the neuromorphic system can reliably detect HFOs: the system predicts postsurgical seizure outcome with state-of-the-art accuracy, specificity and sensitivity (78%, 100%, and 33% respectively). This is the first feasibility study towards identifying relevant features in intracranial human data in real-time, on-chip, using event-based processors and spiking neural networks. By providing "neuromorphic intelligence" to neural recording circuits, the proposed approach will pave the way for the development of systems that can detect HFO areas directly in the operating room and improve the seizure outcome of epilepsy surgery.
electrical engineering and systems science
Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank.
computer science
Let $G$ be a simple graph on $n$ vertices and let $\mathcal{I}_G$ denote the parity binomial edge ideal of $G$ in the polynomial ring $S = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_n].$ We obtain a lower bound for the regularity of parity binomial edge ideals of graphs. We then classify all graphs whose parity binomial edge ideals have regularity $3$. We also classify graphs whose parity binomial edge ideals have a pure resolution.
mathematics
Recently, the MiniBooNE experiment at Fermilab has updated the results with increased data and reported an excess of $560.6 \pm 119.6$ electron-like events ($4.7\sigma$) in the neutrino operation mode. In this paper, we propose a scenario to account for the excess where a Dirac-type sterile neutrino, produced by a charged kaon decay through the neutrino mixing, decays into a leptophilic axion-like particle ($\ell$ALP) and a muon neutrino. The electron-positron pairs produced from the $\ell$ALP decays can be interpreted as electron-like events provided that their opening angle is sufficiently small. In our framework, we consider the $\ell$ALP with a mass $m^{}_a = 20\,\text{MeV}$ and an inverse decay constant $c^{}_e/f^{}_a = 10^{-2}\,\text{GeV}^{-1}$, allowed by the astrophysical and experimental constraints. Then, after integrating the predicted angular or visible energy spectra of the $\ell$ALP to obtain the total excess event number, we find that our scenario with sterile neutrino masses within $150\,\text{MeV}\lesssim m^{}_N \lesssim 380 \,\text{MeV}$ ($150\,\text{MeV}\lesssim m^{}_N \lesssim 180 \,\text{MeV}$) and neutrino mixing parameters between $10^{-10} \lesssim |U_{\mu 4}|^2 \lesssim 10^{-8}$ ($3\times 10^{-7} \lesssim |U_{\mu 4}|^2 \lesssim 8 \times10^{-7}$) can explain the MiniBooNE data.
high energy physics phenomenology
The variational quantum eigensolver (VQE) typically minimizes the energy with a hybrid quantum-classical optimization, which aims to find the ground state. Here, we propose a VQE that minimizes the energy variance, which we call the variance-VQE (VVQE). The VVQE can be viewed as a self-verifying eigensolver for an arbitrary eigenstate by design, since an eigenstate of a Hamiltonian has zero energy variance. We demonstrate the properties and advantages of the VVQE for solving a set of excited states in quantum chemistry problems. Remarkably, we show that optimizing a combination of energy and variance may be more efficient for finding low-energy excited states than minimizing energy or variance alone. We further reveal that the optimization can be boosted with stochastic gradient descent by Hamiltonian sampling, which uses only a few terms of the Hamiltonian and thus significantly reduces the quantum resources needed for evaluating the variance and its gradients.
quantum physics
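The cost function at the heart of the variance-VQE is the energy variance $\langle H^2\rangle - \langle H\rangle^2$, which vanishes exactly on any eigenstate. Below is a minimal sketch of that idea using exact state-vector simulation with NumPy/SciPy rather than a quantum device; the toy single-qubit Hamiltonian and ansatz are assumptions for illustration.

```python
# Minimal sketch of the variance-as-cost idea: for an eigenstate |psi>,
# Var(H) = <H^2> - <H>^2 = 0, so minimizing the variance targets *some*
# eigenstate, not necessarily the ground state.
import numpy as np
from scipy.optimize import minimize

# Toy single-qubit Hamiltonian H = 0.5*Z + 0.3*X (an assumption for illustration).
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * Z + 0.3 * X
H2 = H @ H

def ansatz(params):
    theta, phi = params
    # Simple single-qubit ansatz state.
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def variance(params):
    psi = ansatz(params)
    e = np.real(psi.conj() @ H @ psi)
    e2 = np.real(psi.conj() @ H2 @ psi)
    return e2 - e ** 2

res = minimize(variance, x0=[0.3, 0.1], method="Nelder-Mead")
psi = ansatz(res.x)
energy = np.real(psi.conj() @ H @ psi)
print("converged variance:", res.fun, " energy:", energy)
print("exact eigenvalues :", np.linalg.eigvalsh(H))
```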
Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with ground-truth target instead of the simplified output from teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from speech waveform and directly take them as conditional inputs in training and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.
electrical engineering and systems science
We describe the geometry of bend distortions in twist-bend nematic liquid crystals in terms of their fundamental degeneracies, which we call $\beta$ lines. These represent a new class of line-like topological defect. We use them to construct and characterise novel structures, including grain boundary and focal conic smectic-like defects, Skyrmions, and knotted merons. We analyse their local geometry and global structure, showing that their intersection with any surface is twice the Skyrmion number. Finally, we demonstrate how arbitrary knots and links can be created and describe them in terms of merons, giving a new geometric perspective on the fractionalisation of Skyrmions.
condensed matter
Systematic benchmark calculations for elemental bulk solids are presented to validate the accuracy of density functional theory for superconductors. We developed a method to treat the spin-orbit interaction (SOI) together with the spin fluctuation (SF) and examine their effect on the superconducting transition temperature. We found the following results from the benchmark calculations: (1) the calculations including SOI and SF reproduce the experimental superconducting transition temperature ($T_c$) quantitatively; (2) the effect of SOI is small except for a few elements such as Pb, Tl, and Re; (3) SF reduces the $T_c$s, especially for the transition metals, although this reduction is too weak to reproduce the $T_c$s of Zn and Cd; (4) we reproduced the absence of superconductivity for alkaline (earth) and noble metals. These calculations confirm that our method can be applied to a wide range of materials and suggest a direction for further improvement of the methodology.
condensed matter
The dimerized Kane-Mele model with and without strong interactions is studied using analytical methods. The boundary of the topological phase transition of the model without strong interactions is obtained. Our results show that the occurrence of the transition depends only on the dimerization parameter. From the one-particle spectrum, we obtain the complete phase diagram, including the quantum spin Hall (QSH) state and the topologically trivial insulator. Then, using different mean-field methods, we investigate the Mott transition and the magnetic transition of the strongly correlated dimerized Kane-Mele model. In the region between the two transitions, the topological Mott insulator (TMI), with characters of both Mott insulators and topological phases, may be the most interesting phase. In this work, the effects of the hopping anisotropy and the Hubbard interaction U on the boundaries of the two transitions are examined in detail. The complete phase diagram of the dimerized Kane-Mele-Hubbard model is also obtained. Quantum fluctuations have extremely important influences on a quantum system; however, the present investigation is carried out within a mean-field treatment, and the effects of fluctuations in this model will be discussed in future work.
condensed matter
A topological order is a new quantum phase that is beyond Landau's symmetry-breaking paradigm. Its defining features include robust degenerate ground states, long-range entanglement and anyons. It was known that $R$- and $F$-matrices, which characterize the fusion-braiding properties of anyons, can be used to uniquely identify topological order. In this article, we explore an essential question: how can the $R$- and $F$-matrices be experimentally measured? By using quantum simulations based on a toric code model with boundaries and state-of-the-art technology, we show that the braidings, i.e. the $R$-matrices, can be completely determined by the half braidings of boundary excitations due to the boundary-bulk duality and the anyon condensation. The $F$-matrices can also be measured in a scattering quantum circuit involving the fusion of three anyons in two different orders. Thus we provide an experimental protocol for measuring the unique identifiers of topological order.
quantum physics
AIM: Large-amplitude narrowband obliquely propagating whistler-mode waves at frequencies of ~0.2 fce (electron cyclotron frequency) are commonly observed at 1 AU, and are most consistent with the whistler heat flux fan instability. We want to determine whether similar whistler-mode waves occur inside 0.2 AU, and how their properties compare to those at 1 AU. METHODS: We utilize the waveform capture data from the Parker Solar Probe Fields instrument to develop a database of narrowband whistler waves. The SWEAP instrument, in conjunction with the quasi-thermal noise measurement from Fields, provides the electron heat flux, beta, and other electron parameters. RESULTS: Parker Solar Probe observations inside ~0.3 AU show that the waves are more intermittent than at 1 AU, and are often interspersed with electrostatic whistler/Bernstein waves at higher frequencies. This is likely due to the more variable solar wind observed closer to the Sun. The whistlers usually occur in regions where the magnetic field is more variable and often with small increases in the solar wind speed. The near-Sun whistler-mode waves are also narrowband and large amplitude, and are associated with beta greater than 1. Wave angles are sometimes highly oblique (near the resonance cone), but angles have been determined for only a small fraction of the events. The association with heat flux and beta is generally consistent with the whistler fan instability, although there are intervals where the heat flux is significantly lower than the instability limit. Strong scattering of strahl-energy electrons is seen in association with the waves, providing evidence that the waves regulate the electron heat flux.
physics
We propose a new approach to the half-liberation question, for the compact groups $T_N\subset G_N\subset U_N$, where $T_N=\mathbb Z_2^N$. Indeed, we can construct a quantum group $T_N^*\subset G_N^*\subset U_N^*$, simply by setting $G_N^*=\langle G_N,T_N^*\rangle$. We explain here how this construction fits into the known general theory of half-liberation, and we discuss as well some potential generalizations, with $T_N^*$ being replaced by more complicated objects.
mathematics
Classifier calibration does not always go hand in hand with the classifier's ability to separate the classes. There are applications where good classifier calibration, i.e. the ability to produce accurate probability estimates, is more important than class separation. When the amount of data for training is limited, the traditional approach to improve calibration starts to crumble. In this article we show how generating more data for calibration is able to improve calibration algorithm performance in many cases where a classifier is not naturally producing well-calibrated outputs and the traditional approach fails. The proposed approach adds computational cost but considering that the main use case is with small data sets this extra computational cost stays insignificant and is comparable to other methods in prediction time. From the tested classifiers the largest improvement was detected with the random forest and naive Bayes classifiers. Therefore, the proposed approach can be recommended at least for those classifiers when the amount of data available for training is limited and good calibration is essential.
computer science
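The abstract above does not spell out how the additional calibration data are generated, so the sketch below only illustrates the surrounding pipeline: a random forest (one of the classifiers highlighted) is calibrated post hoc with isotonic regression on a small held-out split, and the "generate more data" step is replaced by a simple bootstrap-resampling placeholder, which is an assumption rather than the authors' method.

```python
# Minimal sketch of post-hoc calibration on a small calibration split.
# The resampling below is only a placeholder for the paper's data-generation
# step, which is not described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.6, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
p_cal = clf.predict_proba(X_cal)[:, 1]
p_test = clf.predict_proba(X_test)[:, 1]

# Placeholder "data generation": bootstrap the small calibration set.
idx = np.random.default_rng(0).integers(0, len(p_cal), size=5 * len(p_cal))
iso = IsotonicRegression(out_of_bounds="clip").fit(p_cal[idx], y_cal[idx])

print("Brier score, raw       :", brier_score_loss(y_test, p_test))
print("Brier score, calibrated:", brier_score_loss(y_test, iso.predict(p_test)))
```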
Physical-layer security (PLS) has the potential to strongly enhance the overall system security as an alternative to or in combination with conventional cryptographic primitives usually implemented at higher network layers. Secret-key generation relying on wireless channel reciprocity is an interesting solution as it can be efficiently implemented at the physical layer of emerging wireless communication networks, while providing information-theoretic security guarantees. In this paper, we investigate and compare the secret-key capacity based on the sampling of the entire complex channel state information (CSI) or only its envelope, the received signal strength (RSS). Moreover, as opposed to previous works, we take into account the fact that the eavesdropper's observations might be correlated and we consider the high signal-to-noise ratio (SNR) regime where we can find simple analytical expressions for the secret-key capacity. As already found in previous works, we find that RSS-based secret-key generation is heavily penalized as compared to CSI-based systems. At high SNR, we are able to precisely and simply quantify this penalty: a halved pre-log factor and a constant penalty of about 0.69 bit, which disappears as Eve's channel gets highly correlated.
electrical engineering and systems science
Markov chain Monte Carlo (MCMC) algorithms are generally regarded as the gold standard technique for Bayesian inference. They are theoretically well-understood and conceptually simple to apply in practice. The drawback of MCMC is that in general performing exact inference requires all of the data to be processed at each iteration of the algorithm. For large data sets, the computational cost of MCMC can be prohibitive, which has led to recent developments in scalable Monte Carlo algorithms that have a significantly lower computational cost than standard MCMC. In this paper, we focus on a particular class of scalable Monte Carlo algorithms, stochastic gradient Markov chain Monte Carlo (SGMCMC) which utilises data subsampling techniques to reduce the per-iteration cost of MCMC. We provide an introduction to some popular SGMCMC algorithms and review the supporting theoretical results, as well as comparing the efficiency of SGMCMC algorithms against MCMC on benchmark examples. The supporting R code is available online.
statistics
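A minimal sketch of stochastic gradient Langevin dynamics (SGLD), the prototypical SGMCMC algorithm discussed in such reviews: each update combines a minibatch estimate of the log-posterior gradient with injected Gaussian noise scaled to the step size. The toy Gaussian-mean model and step size are assumptions for illustration and are unrelated to the paper's own R code.

```python
# Minimal SGLD sketch: theta_{t+1} = theta_t + (step/2) * grad log posterior
# (estimated from a minibatch) + N(0, step) noise.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)   # toy data, known unit variance
N, batch = len(data), 100
prior_var = 10.0

def grad_log_post(theta, minibatch):
    # d/dtheta [ log N(theta; 0, prior_var) + (N/|B|) * sum_i log N(x_i; theta, 1) ]
    return -theta / prior_var + (N / len(minibatch)) * np.sum(minibatch - theta)

theta, step = 0.0, 1e-5
samples = []
for t in range(5_000):
    mb = data[rng.integers(0, N, size=batch)]
    theta += 0.5 * step * grad_log_post(theta, mb) + rng.normal(0.0, np.sqrt(step))
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1_000:]))  # should be close to 2.0
```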
Federated Learning (FL) is currently the most widely adopted framework for collaborative training of (deep) machine learning models under privacy constraints. Despite its popularity, it has been observed that Federated Learning yields suboptimal results if the local clients' data distributions diverge. To address this issue, we present Clustered Federated Learning (CFL), a novel Federated Multi-Task Learning (FMTL) framework, which exploits geometric properties of the FL loss surface to group the client population into clusters with jointly trainable data distributions. In contrast to existing FMTL approaches, CFL does not require any modifications to the FL communication protocol, is applicable to general non-convex objectives (in particular deep neural networks) and comes with strong mathematical guarantees on the clustering quality. CFL is flexible enough to handle client populations that vary over time and can be implemented in a privacy-preserving way. As clustering is only performed after Federated Learning has converged to a stationary point, CFL can be viewed as a post-processing method that will always achieve greater or equal performance than conventional FL by allowing clients to arrive at more specialized models. We verify our theoretical analysis in experiments with deep convolutional and recurrent neural networks on commonly used Federated Learning datasets.
computer science
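One natural way to exploit "geometric properties" of client updates, and the criterion commonly used in CFL-style methods, is the pairwise cosine similarity between the clients' parameter updates after convergence. The sketch below bipartitions toy client updates by that similarity with standard hierarchical clustering; the synthetic updates and the specific clustering routine are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of the clustering idea behind CFL-style methods: clients whose
# local updates point in similar directions are grouped together.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Toy client updates: two latent data distributions -> two update directions.
d = 50
dir_a, dir_b = rng.normal(size=d), rng.normal(size=d)
updates = np.vstack([dir_a + 0.3 * rng.normal(size=d) for _ in range(10)] +
                    [dir_b + 0.3 * rng.normal(size=d) for _ in range(10)])

# Pairwise cosine similarity of the client updates.
norm = updates / np.linalg.norm(updates, axis=1, keepdims=True)
sim = norm @ norm.T

# Bipartition clients using cosine distance (1 - similarity).
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster assignments:", labels)
```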
Vibrational spectroscopy, comprising infrared absorption and Raman scattering spectroscopy, is widely used for label-free optical sensing and imaging in various scientific and industrial fields. Group theory states that the two molecular spectroscopy methods are sensitive to vibrations categorized in different point groups and provide complementary vibrational spectra. Therefore, complete vibrational information cannot be acquired by a single spectroscopic device, which has impeded the full potential of vibrational spectroscopy. Here, we demonstrate simultaneous infrared absorption and Raman scattering spectroscopy that allows us to measure the complete broadband vibrational spectra in the molecular fingerprint region with a single instrument based on an ultrashort pulsed laser. The system is based on dual-modal Fourier-transform spectroscopy enabled by efficient use of nonlinear optical effects. Our proof-of-concept experiment demonstrates rapid, broadband and high-spectral-resolution measurements of complementary spectra of organic liquids for precise and accurate molecular analysis.
physics
Scalable quantum information processing will require quantum networks of qubits with the ability to coherently transfer quantum states between the desired sender and receiver nodes. Here we propose a scheme to implement a quantum router that can direct quantum states from an input qubit to a preselected output qubit. The path taken by the transferred quantum state is controlled by the state of one or more ancilla qubits. This enables both directed transport between a sender and a number of receiver nodes, and generation of distributed entanglement in the network. We demonstrate the general idea using a two-output setup and discuss how the quantum routing may be expanded to several outputs. We also present a possible realization of our ideas with superconducting circuits.
quantum physics
Fog computing is transforming the network edge into an intelligent platform by bringing storage, computing, control, and networking functions closer to end-users, things, and sensors. How to allocate multiple resource types (e.g., CPU, memory, bandwidth) of capacity-limited heterogeneous fog nodes to competing services with diverse requirements and preferences in a fair and efficient manner is a challenging task. To this end, we propose a novel market-based resource allocation framework in which the services act as buyers and fog resources act as divisible goods in the market. The proposed framework aims to compute a market equilibrium (ME) solution at which every service obtains its favorite resource bundle under the budget constraint while the system achieves high resource utilization. This work extends the General Equilibrium literature by considering a practical case of satiated utility functions. Also, we introduce the notions of non-wastefulness and frugality for equilibrium selection, and rigorously demonstrate that all the non-wasteful and frugal ME are the optimal solutions to a convex program. Furthermore, the proposed equilibrium is shown to possess salient fairness properties including envy-freeness, sharing-incentive, and proportionality. Another major contribution of this work is to develop a privacy-preserving distributed algorithm, which is of independent interest, for computing an ME while allowing market participants to obfuscate their private information. Finally, extensive performance evaluation is conducted to verify our theoretical analyses.
computer science
Unraveling the 3D physical structure, i.e. the temperature and density distribution, of protoplanetary discs is an essential step if we are to confront simulations of embedded planets or dynamical instabilities. In this paper we focus on Submillimeter Array observations of the edge-on source Gomez's Hamburger, believed to host an over-density, GoHam b, hypothesised to be a product of gravitational instability in the disc. We demonstrate that, by leveraging the well characterised rotation of a Keplerian disc to deproject observations of molecular lines in position-position-velocity space into disc-centric coordinates, we are able to map out the emission distribution in the (r, z) plane and in (x, y, z) space. We show that 12CO traces an elevated layer of $z\,/\,r \sim 0.3$, while 13CO traces deeper in the disc at $z\,/\,r \lesssim 0.2$. We localize the emission associated with GoHam b, finding it at a deprojected radius of approximately 500 au and at a polar angle of $\pm$30 degrees from the disc major axis. At the spatial resolution of $\sim 1.5^{\prime\prime}$, GoHam b is spatially unresolved, with an upper limit on its radius of $<190$~au.
astrophysics
In this paper, five virtual inertia control structures are implemented and tested on a variable speed hydropower (VSHP) plant. The results show that all five can deliver fast power reserves to maintain grid stability after a disturbance. The VSHP is well suited for this purpose since its output power can be changed almost instantaneously by utilizing the rotational energy of the turbine and generator. This causes the turbine rotational speed to deviate temporarily from its optimal value. The governor control then restores the turbine rotational speed by controlling the guide vane opening and thereby the turbine flow and mechanical power. With that, the VSHP output power can be changed permanently to contribute with primary frequency reserves. Dynamic and eigenvalue analyses are performed to compare five different versions of the basic VSG and VSM control structures: VSG, power-frequency PID-controller with permanent droop (VSG-PID), VSM, VSM with power-frequency PD-controller (VSM-PD), and VSM with power-frequency PID-controller and permanent droop (VSM-PID). They are evaluated by two main criteria: their ability to deliver instantaneous power (inertia) to reduce the rate of change of frequency (ROCOF), and their contribution to frequency containment control (steady-state frequency droop response).
electrical engineering and systems science
Discrete minimal surface algebras and Yang-Mills algebras may be related to (generalized) Kac-Moody algebras, just as Membrane (matrix) models and the IKKT model - including a novel construction technique for minimal surfaces.
high energy physics theory
Regularized kernel-based methods such as support vector machines (SVMs) typically depend on the underlying probability measure $\mathrm{P}$ (respectively an empirical measure $\mathrm{D}_n$ in applications) as well as on the regularization parameter $\lambda$ and the kernel $k$. Whereas classical statistical robustness only considers the effect of small perturbations in $\mathrm{P}$, the present paper investigates the influence of simultaneous slight variations in the whole triple $(\mathrm{P},\lambda,k)$, respectively $(\mathrm{D}_n,\lambda_n,k)$, on the resulting predictor. Existing results from the literature are considerably generalized and improved. In order to also make them applicable to big data, where regular SVMs suffer from their super-linear computational requirements, we show how our results can be transferred to the context of localized learning. Here, the effect of slight variations in the applied regionalization, which might for example stem from changes in $\mathrm{P}$ respectively $\mathrm{D}_n$, is considered as well.
statistics
Epistemic logic programs constitute an extension of the stable models semantics to deal with new constructs called subjective literals. Informally speaking, a subjective literal allows checking whether some regular literal is true in all stable models or in some stable model. As it can be imagined, the associated semantics has proved to be non-trivial, as the truth of the subjective literal may interfere with the set of stable models it is supposed to query. As a consequence, no clear agreement has been reached and different semantic proposals have been made in the literature. Unfortunately, comparison among these proposals has been limited to a study of their effect on individual examples, rather than identifying general properties to be checked. In this paper, we propose an extension of the well-known splitting property for logic programs to the epistemic case. To this aim, we formally define when an arbitrary semantics satisfies the epistemic splitting property and examine some of the consequences that can be derived from that, including its relation to conformant planning and to epistemic constraints. Interestingly, we prove (through counterexamples) that most of the existing proposals fail to fulfill the epistemic splitting property, except the original semantics proposed by Gelfond in 1991.
computer science
Our earlier proposed model of dark matter, consisting of cm-size pearls with ordinary matter inside under high pressure and with a mass of order $1.4 \times 10^8$ kg, is used to explain the mysterious 3.5 keV X-ray line from the galactic center and various galaxies and galaxy clusters. The pearls are bubbles of a new type of vacuum and are thus surrounded by a surface tension providing the high pressure. We have two rather successful order-of-magnitude numerical results: 1) the X-ray energy of 3.5 keV comes out as the homolumo gap, or rather as the energy due to screening of electrons in the high-pressure ordinary matter inside the pearls, and is well fitted; 2) using the fitting of Cline and Frey for dark matter radiation arising from collisions or annihilations of dark matter particles, we fit the overall intensity of the radiation in our pearl model. We find that a pearl of the minimal size required just by stability, as used in our previous work \cite{Tunguska}, is inconsistent with the observed frequency and intensity of the 3.5 keV line. However, the predictions of our model are very sensitive to the radius of the pearls, and an excellent fit to both experimental quantities is obtained for a pearl radius of 2.8 cm.
high energy physics phenomenology
Flaws in the process of modulation, or encoding of key bits in the quadratures of the electromagnetic light field, can make continuous-variable quantum key distribution systems susceptible to leakage of secret information. Here, we report such a modulation leakage vulnerability in a system that uses an optical in-phase and quadrature modulator to implement a single sideband encoding scheme. The leakage arises from the limited suppression of a quantum-information-carrying sideband during modulation. Based on the results from a proof-of-concept experiment, we theoretically analyse the impact of this vulnerability. Our results indicate that the leakage reduces the range over which a positive secret key can be obtained, and can even lead to a security breach if not properly taken into account. We also study the effectiveness of additional trusted noise as a countermeasure to this vulnerability.
quantum physics
Positronium (Ps) formation on the surface of clean polycrystalline copper (Cu), highly oriented pyrolytic graphite (HOPG) and multilayer graphene (MLG) grown on a polycrystalline copper substrate has been investigated as a function of incident positron kinetic energy (1.5 eV to 1 keV). Measurements on Cu indicate that as the kinetic energy of the incident positrons increases from 1.5 eV to 900 eV, the fraction of positrons that form Ps ($f_{Ps}$) decreases from ~0.5 to ~0.3. However, in HOPG and MLG, instead of a monotonic decrease of $f_{Ps}$ with positron kinetic energy, a sharp peak is observed at ~5 eV and, above ~200 eV, $f_{Ps}$ remains nearly constant in HOPG and MLG. We propose that in HOPG and MLG, at low incident positron energies, Ps formation is dominated either by a surface-plasmon-assisted electron pick-up process or by an energy-dependent backscattering process. Both of these processes can explain the observed peak, and the present data can help to augment the understanding of Ps formation from layered materials.
condensed matter
Several recent papers have shown a close relationship between entanglement wedge reconstruction and the unitarity of black hole evaporation in AdS/CFT. The analysis of these papers however has a rather puzzling feature: all calculations are done using bulk dynamics which are essentially those Hawking used to predict information loss, but applying ideas from entanglement wedge reconstruction seems to suggest a Page curve which is consistent with information conservation. Why should two different calculations in the same model give different answers for the Page curve? In this note we present a new pair of models which clarify this situation. Our first model gives a holographic illustration of unitary black hole evaporation, in which the analogue of the Hawking radiation purifies itself as expected, and this purification is reproduced by the entanglement wedge analysis. Moreover a smooth black hole interior persists until the last stages of the evaporation process. Our second model gives an alternative holographic interpretation of the situation where the bulk evolution leads to information loss: unlike in the models proposed so far, this bulk information loss is correctly reproduced by the entanglement wedge analysis. This serves as an illustration that quantum extremal surfaces are in some sense kinematic: the time-dependence of the entropy they compute depends on the choice of bulk dynamics. In both models no bulk quantum corrections need to be considered: classical extremal surfaces are enough to do the job. We argue that our first model is the one which gives the right analogy for what actually happens to evaporating black holes, but we also emphasize that any complete resolution of the information problem will require an understanding of non-perturbative bulk dynamics.
high energy physics theory
We obtain analytically closed forms of the benchmark quantum dynamics of the collapse and revival (CR), the reduced density matrix, the von Neumann entropy, and the fidelity for the XXZ central spin problem. These quantities characterize the quantum decoherence and entanglement of the system with few to many bath spins, and for a short to infinitely long time evolution. For the homogeneous central spin problem, the effective magnetic field $B$, coupling constant $A$ and longitudinal interaction $\Delta$ significantly influence the time scales of the quantum dynamics of the central spin and the bath, providing a tunable resource for quantum metrology. Under the resonance condition $B=\Delta=A$, the location of the $m$-th revival peak in time obeys the simple relation $t_{r} \simeq\frac{\pi N}{A} m$ for large $N$. For $\Delta =0$, $N\to \infty$ and a small polarization in the initial spin coherent state, our analytical result for the CR recovers the known expression found in the Jaynes-Cummings model, thus building up an exact dynamical connection between the central spin problems and the light-matter interacting systems in quantum nonlinear optics. In addition, the CR dynamics is robust to a moderate inhomogeneity of the coupling amplitudes, while disappearing at strong inhomogeneity.
quantum physics
We consider imperfect two-mode bosonic quantum transducers that cannot completely transfer an initial source-system quantum state due to insufficient coupling strength or other non-idealities. We show that such transducers can generically be made perfect by using interference and phase-sensitive amplification. Our approach is based on the realization that a particular kind of imperfect transducer (one which implements a swapped quantum non-demolition (QND) gate) can be made into a perfect one-way transducer using feed-forward and/or injected squeezing. We show that a generic imperfect transducer can be reduced to this case by repeating the imperfect transduction operation twice, interspersed with amplification. Crucially, our scheme only requires the ability to implement squeezing operations and/or homodyne measurement on one of the two modes involved. It is thus ideally suited to schemes where there is an asymmetry in the ability to control the two coupled systems (e.g. microwave-to-optics quantum state transfer). We also discuss a correction protocol that requires no injected squeezing and/or feed-forward operation.
quantum physics
The Weizsaecker-Williams transverse momentum dependent (TMD) gluon distributions can be probed in the production of a hard dijet in semi-inclusive DIS. This process is sensitive not only to the conventional but also to the linearly polarized gluon distribution. The latter gives rise to an azimuthal dependence of the dijet cross section and therefore can be distinguished from the former. A feasibility study of a measurement of these TMDs through dijet production at a future electron-ion collider shows that the extraction of the distribution of linearly polarized gluons with a statistical accuracy of 5% will require an estimated luminosity of 20 fb$^{-1}$/A.
high energy physics phenomenology
In this note we study the conversion of nucleons into deltas induced by a strong magnetic field in ultraperipheral relativistic heavy ion collisions. The interaction Hamiltonian couples the magnetic field to the spin operator, which, acting on the spin part of the wave function, converts a spin 1/2 into a spin 3/2 state. We estimate this transition probability and calculate the cross section for delta production. This process can in principle be measured, since the delta moves close to the beam and decays almost exclusively into pions. Forward pions may be detected by forward calorimeters.
high energy physics phenomenology
The basic idea of this analysis is to achieve a two-component dark matter (DM) framework composed of a scalar and a fermion, with non-negligible DM-DM interaction contributing to thermal freeze-out (and hence relic density) while hiding them from direct detection bounds. We therefore augment the Standard Model (SM) with a scalar singlet ($S$) and three vectorlike fermions: two singlets ($\chi_1,\chi_2$) and a doublet ($N$). Stability of the two DM components is achieved by a discrete $\mathcal{Z}_2 \times {\mathcal{Z}^\prime}_2$ symmetry, under which the additional fields transform suitably. Fermion fields having the same $\mathcal{Z}_2 \times {\mathcal{Z}^\prime}_2$ charge ($N,\chi_1$ in the model) mix after electroweak symmetry breaking (EWSB), and the lightest component becomes one of the DM candidates, while the scalar singlet $S$ is the other DM component, connected to the visible sector by a Higgs portal coupling. The heavy fermion ($\chi_2$) plays the role of mediator connecting the two DM candidates through a Yukawa interaction. This opens up a large parameter space for the heavier DM component through DM-DM conversion. A hadronically quiet dilepton signature, arising from the fermion dark sector, can be observed at the Large Hadron Collider (LHC), aided by the presence of a lighter scalar DM component satisfying relic density and direct search bounds through DM-DM conversion.
high energy physics phenomenology
This is a perspective paper inspired by the study of the Turing Test proposed by A.M. Turing (23 June 1912 - 7 June 1954) in 1950. Following one important implication of the Turing Test for enabling a machine with a human-like behavior or performance, we define human-like robustness (HLR) for AI machines. The new definition aims to enforce HLR in AI machines and to evaluate them in terms of HLR. A specific task, object identification, is discussed because it is the most common task for every person in daily life. Similar to the perspective, or design, position taken by Turing, we provide a solution for how to achieve HLR AI machines without constructing them and conducting real experiments. The solution should consist of three important features in the machines. The first feature of HLR machines is to utilize common sense from humans for realizing causal inference. The second feature is to make decisions from a semantic space so that the decisions have interpretations. The third feature is to include a "human-in-the-loop" setting for advancing HLR machines. We show an "identification game" using the proposed design of HLR machines. The present paper is an attempt to learn and explore further from the Turing Test towards the design of human-like AI machines.
computer science
This paper concerns the statistical analysis of a weighted graph through spectral embedding. Under a latent position model in which the expected adjacency matrix has low rank, we prove uniform consistency and a central limit theorem for the embedded nodes, treated as latent position estimates. In the special case of a weighted stochastic block model, this result implies that the embedding follows a Gaussian mixture model with each component representing a community. We exploit this to formally evaluate different weight representations of the graph using Chernoff information. For example, in a network anomaly detection problem where we observe a p-value on each edge, we recommend against directly embedding the matrix of p-values, and instead using threshold or log p-values, depending on network sparsity and signal strength.
statistics
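A minimal sketch of the pipeline the abstract analyzes: embed a weighted graph by the scaled leading eigenvectors of its adjacency matrix (adjacency spectral embedding) and fit a Gaussian mixture to the embedded nodes, mirroring the result that under a weighted stochastic block model each mixture component corresponds to a community. The toy two-block graph with exponentially distributed edge weights is an assumption for illustration.

```python
# Minimal sketch: adjacency spectral embedding of a weighted graph, followed by
# a Gaussian mixture fit on the embedded nodes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n, d = 200, 2
z = np.repeat([0, 1], n // 2)                       # two communities
means = np.array([[2.0, 0.5], [0.5, 1.5]])          # block-dependent mean edge weights

A = rng.exponential(scale=means[z][:, z])           # weighted adjacency (n x n)
A = np.triu(A, 1)
A = A + A.T                                          # symmetrize, zero diagonal

# Adjacency spectral embedding: scaled eigenvectors of the d largest-magnitude eigenvalues.
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
X_hat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

gmm = GaussianMixture(n_components=2, random_state=0).fit(X_hat)
labels = gmm.predict(X_hat)
accuracy = max(np.mean(labels == z), np.mean(labels != z))  # up to label swap
print("community recovery accuracy:", accuracy)
```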
We consider a simple extension of the standard model, which could give a solution for its $CP$ issues such as the origin of both CKM and PMNS phases and the strong $CP$ problem. The model is extended with singlet scalars which could cause spontaneous $CP$ violation to result in these phases at low energy regions. The singlet scalars could give a good inflaton candidate if they have a suitable nonminimal coupling with the Ricci scalar. $CP$ issues and inflation could be closely related through these singlet scalars in a natural way. In a case where inflaton is a mixture of the singlet scalars, we study reheating and leptogenesis as notable phenomena affected by the fields introduced in this extension.
high energy physics phenomenology
A general quantum channel consisting of a decohering and a filtering element carries one qubit of an entangled photon pair. As we apply a local filter to the other qubit, some mutual quantum information between the two qubits is restored depending on the properties of the noise mixed into the signal. We demonstrate a drastic difference between channels with bit-flip and phase-flip noise and further suggest a scheme for maximal recovery of the quantum information.
quantum physics
We present a collection of optimizers tuned for usage on Noisy Intermediate-Scale Quantum (NISQ) devices. Optimizers have a range of applications in quantum computing, including the Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization (QAOA) algorithms. They are also used for calibration tasks, hyperparameter tuning in machine learning, etc. We analyze the efficiency and effectiveness of different optimizers in a VQE case study. VQE is a hybrid algorithm, with a classical minimizer step driving the next evaluation on the quantum processor. While most results to date concentrated on tuning the quantum VQE circuit, we show that, in the presence of quantum noise, the classical minimizer step needs to be carefully chosen to obtain correct results. We explore state-of-the-art gradient-free optimizers capable of handling noisy, black-box cost functions and stress-test them using a quantum circuit simulation environment with noise injection capabilities on individual gates. Our results indicate that specifically tuned optimizers are crucial to obtaining valid science results on NISQ hardware, and will likely remain necessary even for future fault-tolerant circuits.
quantum physics
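A minimal sketch of the kind of stress test described above: several gradient-free optimizers from SciPy are run on a noisy black-box cost, where a toy two-parameter landscape with additive Gaussian "shot noise" stands in for a simulated VQE expectation value; the landscape, noise level and optimizer settings are assumptions for illustration, not the paper's benchmark.

```python
# Minimal sketch of comparing gradient-free optimizers on a noisy, black-box cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_cost(x, sigma=0.05):
    clean = np.cos(x[0]) * np.sin(x[1]) + 0.1 * np.sum(x ** 2)  # toy landscape
    return clean + rng.normal(0.0, sigma)                        # additive "shot noise"

x0 = np.array([1.0, -1.0])
for method in ["Nelder-Mead", "COBYLA", "Powell"]:
    res = minimize(noisy_cost, x0, method=method, options={"maxiter": 500})
    # Re-evaluate the *noiseless* landscape at the returned point for a fair score.
    final = np.cos(res.x[0]) * np.sin(res.x[1]) + 0.1 * np.sum(res.x ** 2)
    print(f"{method:12s} evaluations={res.nfev:4d} noiseless cost={final:+.3f}")
```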
We consider methods for transporting a prediction model and assessing its performance for use in a new target population, when outcome and covariate information for model development is available from a simple random sample from the source population, but only covariate information is available on a simple random sample from the target population. We discuss how to tailor the prediction model for use in the target population, how to assess model performance in the target population (e.g., by estimating the target population mean squared error), and how to perform model and tuning parameter selection in the context of the target population. We provide identifiability results for the target population mean squared error of a potentially misspecified prediction model under a sampling design where the source study and the target population samples are obtained separately. We also introduce the concept of prediction error modifiers that can be used to reason about the need for tailoring measures of model performance to the target population and provide an illustration of the methods using simulated data.
statistics
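When outcomes are observed only in the source sample, one standard identification strategy for the target-population mean squared error is to weight source-sample squared errors by the estimated odds of belonging to the target population, obtained from a source-versus-target membership model fit on the pooled covariates. The sketch below illustrates that weighting idea on simulated covariate shift; it is not the paper's exact estimator, and the simulated data and models are assumptions.

```python
# Minimal sketch: estimate target-population MSE of a source-trained model by
# weighting source-sample squared errors with odds of target membership.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n_src, n_tgt = 2000, 2000

X_src = rng.normal(0.0, 1.0, size=(n_src, 1))                     # source covariates
X_tgt = rng.normal(1.0, 1.0, size=(n_tgt, 1))                     # shifted target covariates
y_src = 1.0 + 2.0 * X_src[:, 0] ** 2 + rng.normal(0, 1, n_src)    # outcomes only in source

model = LinearRegression().fit(X_src, y_src)                      # (misspecified) prediction model
err2 = (y_src - model.predict(X_src)) ** 2

# Membership model: P(target | x), fit on the pooled covariate data.
X_pool = np.vstack([X_src, X_tgt])
s = np.concatenate([np.zeros(n_src), np.ones(n_tgt)])             # 1 = target sample
p_tgt = LogisticRegression().fit(X_pool, s).predict_proba(X_src)[:, 1]
w = p_tgt / (1.0 - p_tgt)                                         # odds of target membership

print("source-sample MSE              :", round(err2.mean(), 2))
print("estimated target-population MSE:", round(np.sum(w * err2) / np.sum(w), 2))
```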
In this work, we consider the relativistic Duffin-Kemmer-Petiau equation for spin-one particles with a nonminimal vector interaction in the presence of minimal uncertainty in momentum. By using the position space representation, we exactly determine the bound-state spectrum and the corresponding eigenfunctions. We discuss the effects of the deformation and nonminimal vector coupling parameters on the energy spectrum analytically and numerically.
quantum physics
We consider the algebraic effective couplings for open superstring massless modes in the framework of the $A_\infty$ theory in the small Hilbert space. Focusing on quartic algebraic couplings, we reduce the effective action of the $A_\infty$ theory to the Berkovits one where we have already shown that such couplings are fully computed from contributions at the boundary of moduli space, when the massless fields under consideration are appropriately charged under an ${\cal N}\!=\!2$ $R$-symmetry. Here we offer a proof of localization which is in the small Hilbert space. We also discuss the flat directions of the obtained quartic potentials and give evidence for the existence of exactly marginal deformations in the $D3/D(-1)$ system in the framework of string field theory.
high energy physics theory
We experimentally demonstrate the generation of a three-photon discrete-energy-entangled W state using multi-photon-pair generation by spontaneous four-wave mixing in an optical fiber. We show that by making use of prior information on the photon source we can verify the state produced by this source without resorting to frequency conversion.
quantum physics
Speech enhancement is an essential task of improving speech quality in noisy scenarios. Several state-of-the-art approaches have introduced visual information for speech enhancement, since the visual aspect of speech is essentially unaffected by the acoustic environment. This paper proposes a novel framework that involves visual information for speech enhancement by incorporating a Generative Adversarial Network (GAN). In particular, the proposed visual speech enhancement GAN consists of two networks trained in an adversarial manner: i) a generator that adopts a multi-layer feature fusion convolution network to enhance input noisy speech, and ii) a discriminator that attempts to minimize the discrepancy between the distributions of the clean speech signal and the enhanced speech signal. Experimental results demonstrate the superior performance of the proposed model against several state-of-the-art approaches.
electrical engineering and systems science
Aims: We aim to determine whether Jupiter's obliquity is bound to remain exceptionally small in the Solar System, or if it could grow in the future and reach values comparable to those of the other giant planets. Methods: The spin axis of Jupiter is subject to the gravitational torques from its regular satellites and from the Sun. These torques evolve over time due to the long-term variations of its orbit and to the migration of its satellites. With numerical simulations, we explore the future evolution of Jupiter's spin axis for different values of its moment of inertia and for different migration rates of its satellites. Analytical formulas show the location and properties of all relevant resonances. Results: Because of the migration of the Galilean satellites, Jupiter's obliquity is currently increasing, as it adiabatically follows the drift of a secular spin-orbit resonance with the nodal precession mode of Uranus. Using the current estimates of the migration rate of the satellites, the obliquity of Jupiter can reach values ranging from 6{\deg} to 37{\deg} after 5 Gyrs from now, according to the precise value of its polar moment of inertia. A faster migration for the satellites would produce a larger increase in obliquity, as long as the drift remains adiabatic. Conclusions: Despite its peculiarly small current value, the obliquity of Jupiter is no different from other obliquities in the Solar System: It is equally sensitive to secular spin-orbit resonances and it will probably reach comparable values in the future.
astrophysics
We can construct passing networks by regarding a player as a node and a pass as a link in football games, and can then analyze the networks using tools developed in network science. Among the various metrics characterizing a network, centrality metrics have often been used to identify key players in a passing network. However, the tolerance of a passing network to players being marked or passes being blocked, namely the robustness of the network, has been poorly understood so far. Because the robustness of a passing network can be connected to increased ball possession, it should be deeply related to the outcome of a game. Here, we developed position-dependent passing networks of 45 matches by 18 teams belonging to the Japan Professional Football League. Nodes or links were then continuously removed from the passing networks by two removal methods so that we could evaluate the robustness of these networks against the removals. The results show that these passing networks commonly contain hubs (key players making passes). We then analyzed the most robust networks in detail and found that their full-backs increase the robustness by often invoking a heavier emphasis on attack. Moreover, we showed that the robustness of the passing networks and the team performance have a positive correlation.
physics
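A minimal sketch of a robustness evaluation of a passing network: build a directed, weighted graph from (passer, receiver) events, then remove nodes one at a time (here in order of decreasing betweenness, a stand-in for key players being marked) and track how the largest connected group of players shrinks. The toy pass list and the removal rule are assumptions for illustration, not the paper's position-dependent networks or removal methods.

```python
# Minimal sketch of passing-network robustness against node removal.
import networkx as nx

passes = [("GK", "CB1"), ("CB1", "CB2"), ("CB2", "FB1"), ("FB1", "CM1"),
          ("CM1", "CM2"), ("CM2", "FB2"), ("FB2", "W1"), ("CM1", "W2"),
          ("W1", "ST"), ("W2", "ST"), ("CM2", "ST"), ("CB1", "CM1"),
          ("FB1", "W2"), ("FB2", "CM2"), ("CM1", "FB1")]

G = nx.DiGraph()
for passer, receiver in passes:
    w = G.get_edge_data(passer, receiver, default={"weight": 0})["weight"]
    G.add_edge(passer, receiver, weight=w + 1)

bc = nx.betweenness_centrality(G)                       # "key player" score
order = sorted(G.nodes, key=bc.get, reverse=True)

H = G.copy()
for i, player in enumerate(order[:-1]):                 # remove most central players first
    H.remove_node(player)
    giant = max((len(c) for c in nx.weakly_connected_components(H)), default=0)
    print(f"removed {i + 1:2d} players -> largest connected group: {giant}")
```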
In this work we study statistical properties of graph-based clustering algorithms that rely on the optimization of balanced graph cuts, the main example being the optimization of Cheeger cuts. We consider proximity graphs built from data sampled from an underlying distribution supported on a generic smooth compact manifold $M$. In this setting, we obtain high probability convergence rates for both the Cheeger constant and the associated Cheeger cuts towards their continuum counterparts. The key technical tools are careful estimates of interpolation operators which lift empirical Cheeger cuts to the continuum, as well as continuum stability estimates for isoperimetric problems. To our knowledge the quantitative estimates obtained here are the first of their kind.
mathematics
The prospect of using semiconductor quantum dots as an experimental tool to distinguish Majorana zero modes (MZMs) from other zero-energy excitations such as Kondo resonances has brought up the fundamental question of whether topological superconductivity and the Kondo effect can coexist in these systems. Here, we study the Kondo effect in a quantum dot coupled to a metallic contact and to a pair of MZMs. We consider a situation in which the MZMs are spin-polarized in opposite directions. By using numerical renormalization-group calculations and scaling analysis of the renormalization-group equations, we show that the Kondo effect takes place at low temperatures, regardless of the coupling to the MZMs. Interestingly, we find that the Kondo singlet essentially decouples from the MZMs such that the residual impurity entropy can show local non-Fermi-liquid properties characteristic of single Majorana excitations. This offers the possibility of tuning between Fermi-liquid and non-Fermi-liquid regimes simply by changing the quantum dot-MZM couplings.
condensed matter
Many real-life problems are represented as a black-box, i.e., the internal workings are inaccessible or a closed-form mathematical expression of the likelihood function cannot be defined. For continuous random variables likelihood-free inference problems can be solved by a group of methods under the name of Approximate Bayesian Computation (ABC). However, a similar approach for discrete random variables is yet to be formulated. Here, we aim to fill this research gap. We propose to use a population-based MCMC ABC framework. Further, we present a valid Markov kernel, and propose a new kernel that is inspired by Differential Evolution. We assess the proposed approach on a problem with the known likelihood function, namely, discovering the underlying diseases based on a QMR-DT Network, and three likelihood-free inference problems: (i) the QMR-DT Network with the unknown likelihood function, (ii) learning binary neural network, and (iii) Neural Architecture Search. The obtained results indicate the high potential of the proposed framework and the superiority of the new Markov kernel.
statistics
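For intuition about likelihood-free inference with a discrete parameter, the sketch below runs the simplest rejection form of ABC on a binary parameter vector with a toy simulator; the paper's population-based MCMC-ABC framework and its differential-evolution-inspired kernel are deliberately not reproduced here, and the simulator, tolerance and summary statistic are assumptions for illustration.

```python
# Minimal rejection-ABC sketch for a discrete (binary-vector) parameter:
# draw theta from the prior, simulate data, and keep theta only if the
# simulated summary is within a tolerance of the observed one.
import numpy as np

rng = np.random.default_rng(0)
d, n_trials, eps, n_draws = 8, 50, 20, 50_000

def simulate(theta):
    # Each "finding" fires with prob 0.8 if its bit is on, 0.1 otherwise.
    p = np.where(theta == 1, 0.8, 0.1)
    return rng.binomial(n_trials, p)          # summary: counts per finding

theta_true = rng.integers(0, 2, size=d)
x_obs = simulate(theta_true)

accepted = []
for _ in range(n_draws):
    theta = rng.integers(0, 2, size=d)                  # draw from the uniform prior
    if np.abs(simulate(theta) - x_obs).sum() <= eps:    # keep if summaries are close
        accepted.append(theta)

post = np.mean(accepted, axis=0)
print("accepted draws      :", len(accepted))
print("true bits           :", theta_true)
print("posterior P(bit = 1):", np.round(post, 2))
```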
Standard Model extensions that use strictly off-shell degrees of freedom - the fakeons - allow for new measurable interactions at energy scales usually precluded by the constraints that target the on-shell propagation of new particles. Here we employ the interactions between a new fake scalar doublet and the muon to explain the recent Fermilab measurement of its anomalous magnetic moment. Remarkably, unlike in the case of usual particles, the experimental result can be matched for fakeon masses below the electroweak scale without contradicting the stringent precision data and collider bounds on new light degrees of freedom. Our analysis, therefore, demonstrates that the fakeon approach offers unexpected viable possibilities to model new physics naturally at low scales.
high energy physics phenomenology
Fairness has several interpretations in sports, one of them being that the rules should guarantee incentive compatibility, namely, a team cannot be worse off due to better results in any feasible scenario. The current seeding regime of the most prestigious annual European club football tournament, the UEFA (Union of European Football Associations) Champions League, is shown to violate this requirement since the 2015/16 season. In particular, if the titleholder qualifies for the first pot by being a champion in a high-ranked league, its slot is given to a team from a lower-ranked association, which can harm a top club from the domestic championship of the titleholder. However, filling all vacancies through the national leagues excludes the presence of perverse incentives. UEFA is encouraged to introduce this policy from the 2021-24 cycle onwards.
physics
We present a new algorithm for approximating the number of triangles in a graph $G$ whose edges arrive as an arbitrary order stream. If $m$ is the number of edges in $G$, $T$ the number of triangles, $\Delta_E$ the maximum number of triangles which share a single edge, and $\Delta_V$ the maximum number of triangles which share a single vertex, then our algorithm requires space: \[ \widetilde{O}\left(\frac{m}{T}\cdot \left(\Delta_E + \sqrt{\Delta_V}\right)\right) \] Taken with the $\Omega\left(\frac{m \Delta_E}{T}\right)$ lower bound of Braverman, Ostrovsky, and Vilenchik (ICALP 2013), and the $\Omega\left( \frac{m \sqrt{\Delta_V}}{T}\right)$ lower bound of Kallaugher and Price (SODA 2017), our algorithm is optimal up to log factors, resolving the complexity of a classic problem in graph streaming.
computer science
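For readers unfamiliar with sublinear-space triangle estimation, the sketch below shows the classic edge-sampling (DOULION-style) baseline: keep each streamed edge with probability p and rescale the triangle count of the sparsified graph by 1/p^3, which is unbiased. This is only a simple baseline to convey the idea; it is not the algorithm of the abstract and does not achieve the stated space bound.

```python
# Illustrative one-pass triangle-count estimator via edge sampling
# (keep each edge with probability p, count triangles, rescale by 1/p^3).
import itertools
import random
from collections import defaultdict

def exact_triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    # each triangle is counted once per edge, hence the division by 3
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

def sampled_triangles(edge_stream, p=0.3, seed=1):
    random.seed(seed)
    kept = [e for e in edge_stream if random.random() < p]  # keep each edge w.p. p
    return exact_triangles(kept) / p ** 3                   # unbiased estimate of T

# toy stream: a random graph on 60 vertices
random.seed(0)
edges = [(u, v) for u, v in itertools.combinations(range(60), 2) if random.random() < 0.2]
print("exact  T =", exact_triangles(edges))
print("approx T =", round(sampled_triangles(edges)))
```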
If their number density is high enough, clusters of axions can decay to photons via stimulated emission. We study both the special and general relativistic corrections to lasing in such dense axion clusters as they evolve in static spacetimes. Our main results are rate equations for the evolution of axion and photon number densities that include these corrections. We use Schwarzschild spacetime as a detailed example.
high energy physics phenomenology
We show that the Planck mass scale can be generated from conformal gravity in the Weyl conformal geometry via the Coleman-Weinberg mechanism of dimensional transmutation, where quantum corrections stemming from the gravitational field and the Weyl gauge field trigger the breakdown of the local Weyl symmetry. It is also shown that the vacuum expectation value of a scalar field is transmitted to a sector of the Standard Model through a potential involving the scale invariant part and the contribution from the Coleman-Weinberg mechanism, thereby generating the electroweak scale.
high energy physics theory
We investigate higher-derivative extensions of Einstein-Maxwell theory that are invariant under electromagnetic duality rotations, allowing for non-minimal couplings between gravity and the gauge field. Working in a derivative expansion of the action, we characterize the Lagrangians giving rise to duality-invariant theories up to the eight-derivative level, providing the complete list of operators that one needs to include in the action. We also characterize the set of duality-invariant theories whose action is quadratic in the Maxwell field strength but which are non-minimally coupled to the curvature. Then we explore the effect of field redefinitions and we show that, to six derivatives, the most general duality-preserving theory can be mapped to Maxwell theory minimally coupled to a higher-derivative gravity containing only four non-topological higher-order operators. We conjecture that this is a general phenomenon at all orders, i.e., that any duality-invariant extension of Einstein-Maxwell theory is perturbatively equivalent to a higher-derivative gravity minimally coupled to Maxwell theory. Finally, we study charged black hole solutions in the six-derivative theory and we investigate additional constraints on the couplings motivated by the weak gravity conjecture.
high energy physics theory
For the task of open domain Knowledge Based Question Answering in CCKS2019, we propose a method combining information retrieval and semantic parsing. This multi-module system extracts the topic entity and the most related relation predicate from a question and transforms it into a SPARQL query statement. Our method obtained an F1 score of 70.45% on the test data.
computer science
This paper studies the problem of finding the exact ranking from noisy comparisons. A comparison over a set of $m$ items produces a noisy outcome about the most preferred item, and reveals some information about the ranking. By repeatedly and adaptively choosing items to compare, we want to fully rank the items with a certain confidence, and use as few comparisons as possible. Different from most previous works, in this paper, we have three main novelties: (i) compared to prior works, our upper bounds (algorithms) and lower bounds on the sample complexity (aka number of comparisons) require the minimal assumptions on the instances, and are not restricted to specific models; (ii) we give lower bounds and upper bounds on instances with unequal noise levels; and (iii) this paper aims at the exact ranking without knowledge on the instances, while most of the previous works either focus on approximate rankings or study exact ranking but require prior knowledge. We first derive lower bounds for pairwise ranking (i.e., compare two items each time), and then propose (nearly) optimal pairwise ranking algorithms. We further make extensions to listwise ranking (i.e., comparing multiple items each time). Numerical results also show our improvements against the state of the art.
computer science
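A minimal illustration of adaptive exact ranking from noisy pairwise comparisons: binary-insertion sort in which each pairwise order is settled by repeating the comparison until an anytime Hoeffding-style confidence bound separates the empirical win rate from 1/2. The noise model and confidence schedule are illustrative assumptions, and this baseline is far from the (nearly) optimal algorithms of the abstract.

```python
# Adaptive exact ranking from noisy pairwise comparisons (toy baseline).
import math
import random

random.seed(0)
true_rank = list(range(10))          # item i beats item j iff i < j (smaller = better)
NOISE = 0.25                         # each single comparison errs with this probability
comparisons = 0

def noisy_compare(i, j):
    """Return True if i is observed to beat j (noisy single comparison)."""
    global comparisons
    comparisons += 1
    truth = true_rank.index(i) < true_rank.index(j)
    return truth if random.random() > NOISE else not truth

def confident_compare(i, j, delta=0.01):
    """Repeat comparisons until the empirical win rate is confidently != 1/2."""
    wins, n = 0, 0
    while True:
        n += 1
        wins += noisy_compare(i, j)
        radius = math.sqrt(math.log(2 * n * (n + 1) / delta) / (2 * n))
        if abs(wins / n - 0.5) > radius:     # anytime Hoeffding-style test
            return wins / n > 0.5

def rank(items):
    ordered = []
    for x in items:                          # binary insertion with confident comparisons
        lo, hi = 0, len(ordered)
        while lo < hi:
            mid = (lo + hi) // 2
            if confident_compare(x, ordered[mid]):
                hi = mid
            else:
                lo = mid + 1
        ordered.insert(lo, x)
    return ordered

items = list(range(10))
random.shuffle(items)
print("recovered ranking:", rank(items))
print("total noisy comparisons used:", comparisons)
```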
Chemotaxis of enzymes in response to gradients in the concentration of their substrate has been widely reported in recent experiments, but a basic understanding of the process is still lacking. Here, we develop a microscopic theory for chemotaxis, valid for enzymes and other small molecules. Our theory includes both non-specific interactions between enzyme and substrate, as well as complex formation through specific binding between the enzyme and the substrate. We find that two distinct mechanisms contribute to enzyme chemotaxis: a diffusiophoretic mechanism due to the non-specific interactions, and a new type of mechanism due to binding-induced changes in the diffusion coefficient of the enzyme. The latter chemotactic mechanism points towards lower substrate concentration if the substrate enhances enzyme diffusion, and towards higher substrate concentration if the substrate inhibits enzyme diffusion. For a typical enzyme, attractive phoresis and binding-induced enhanced diffusion will compete against each other. We find that phoresis dominates above a critical substrate concentration, whereas binding-induced enhanced diffusion dominates for low substrate concentration. Our results resolve an apparent contradiction regarding the direction of urease chemotaxis observed in experiments, and in general clarify the relation between enhanced diffusion and chemotaxis of enzymes. Finally, we show that the competition between the two distinct chemotactic mechanisms may be used to engineer nanomachines that move towards or away from regions with a specific substrate concentration.
physics
Certain forms of Lorentz violation in the photon sector are difficult to bound directly, since they are "vacuum orthogonal," meaning they do not change the solutions of the equations of motion in vacuum. However, these very same terms have a unique tendency to contribute large radiative corrections to effects in other sectors. Making use of this, we set bounds on four previously unconstrained $d=5$ photon operators at the $10^{-25}$-$10^{-31}$ GeV$^{-1}$ levels.
high energy physics phenomenology
We reproduce the two-loop seven-point remainder function in planar, maximally supersymmetric Yang-Mills theory by direct integration of conformally-regulated chiral integrands. The remainder function is obtained as part of the two-loop logarithm of the MHV amplitude, the regularized form of which we compute directly in this scheme. We compare the scheme-dependent anomalous dimensions and related quantities in the conformal regulator with those found for the Higgs regulator.
high energy physics theory
We explored a sample of 545 local galaxies using data from the 3XMM-DR7 and SDSS-DR8 surveys. We carried out all analyses up to $z \sim 0.2$, and we studied the relation between the X/O flux ratio and the accretion rate for different classes of active galaxies such as LINERs and Seyfert 2 galaxies. We obtained a slight correlation between the two parameters when the whole sample of AGN is used. However, LINERs and Sy2 galaxies show different behaviour, a slight correlation and a slight anti-correlation, respectively. This could confirm that LINERs and Sy2 galaxies have different accretion efficiencies and maybe different accretion disc properties, as has been suggested previously. Keywords: galaxies - active; AGN - accretion rate; AGN - black hole masses; AGN - X-ray properties; AGN - optical properties
astrophysics
In the last decade, the approximate vanishing ideal and its basis construction algorithms have been extensively studied in computer algebra and machine learning as a general model to reconstruct the algebraic variety on which noisy data approximately lie. In particular, the basis construction algorithms developed in machine learning are widely used in applications across many fields because of their monomial-order-free property; however, they lose many of the theoretical properties of computer-algebraic algorithms. In this paper, we propose general methods that equip monomial-order-free algorithms with several advantageous theoretical properties. Specifically, we exploit the gradient to (i) sidestep the spurious vanishing problem in polynomial time to remove symbolically trivial redundant bases, (ii) achieve consistent output with respect to the translation and scaling of input, and (iii) remove nontrivially redundant bases. The proposed methods work in a fully numerical manner, whereas existing algorithms require the awkward monomial order or exponentially costly (and mostly symbolic) computation to realize properties (i) and (iii). To our knowledge, property (ii) has not been achieved by any existing basis construction algorithm of the approximate vanishing ideal.
statistics
An absorption dip in the spectrum of the cosmic microwave background observed by the EDGES experiment suggests an unexplained reduction of the hydrogen spin temperature at cosmic redshift z ~ 17. The mass of dark-matter axions could correspond to the hyperfine splitting of 5.9 micro-eV between the triplet (H1) and singlet (H0) states. We calculate the rate for a + H0 <-> H1 in two ways, and find that it is orders of magnitude smaller than the CMB-mediated transition rate, and therefore irrelevant. As a result, this process cannot be used to rule in or out dark-matter axions with mass equal to the hyperfine splitting. The axion rate nonetheless has interesting features: for example, on balance it heats the spin temperature, and the axion couplings to protons and electrons contribute on an equal footing.
high energy physics phenomenology
With the upcoming launch of large constellations of satellites in the low-Earth orbit (LEO) region it will become important to organize the physical space occupied by the different operating satellites in order to minimize critical conjunctions and avoid collisions. Here, we introduce the definition of space occupancy as the domain occupied by an individual satellite as it moves along its nominal orbit under the effects of environmental perturbations throughout a given interval of time. After showing that space occupancy for the zonal problem is intimately linked to the concept of frozen orbits and proper eccentricity, we provide frozen-orbit initial conditions in osculating element space and obtain the frozen-orbit polar equation to describe the space occupancy region in closed analytical form. We then analyze the problem of minimizing space occupancy in a realistic model including tesseral harmonics, third-body perturbations, solar radiation pressure, and drag. The corresponding initial conditions, leading to what we call minimum space occupancy (MiSO) orbits, are obtained numerically for a set of representative configurations in LEO. The implications for the use of MiSO orbits to optimize the design of mega-constellations are discussed.
astrophysics
Variational regularization techniques are dominant in the field of mathematical imaging. A drawback of these techniques is that they are dependent on a number of parameters which have to be set by the user. A by now common strategy to resolve this issue is to learn these parameters from data. While mathematically appealing, this strategy leads to a nested optimization problem (known as bilevel optimization) which is computationally very difficult to handle. It is common when solving the upper-level problem to assume access to exact solutions of the lower-level problem, which is practically infeasible. In this work we propose to solve these problems using inexact derivative-free optimization algorithms which never require exact lower-level problem solutions, but instead assume access to approximate solutions with controllable accuracy, which is achievable in practice. We prove global convergence and a worst-case complexity bound for our approach. We test our proposed framework on ROF denoising and learning MRI sampling patterns. Dynamically adjusting the lower-level accuracy yields learned parameters with reconstruction quality similar to that of high-accuracy evaluations but with dramatic reductions in computational work (up to 100 times faster in some cases).
mathematics
V CVn is a semiregular variable star with a V-band amplitude of $\approx2$ mag. This star has an unusually high amplitude of polarimetric variability: up to 6 per cent. It also exhibits a prominent inverse correlation between the flux and the fraction of polarization and a substantial constancy of the angle of polarization. To figure out the nature of these features, we observed the object using the Differential Speckle Polarimetry at three bands centered on 550, 625 and 880 nm using the 2.5-m telescope of Sternberg Astronomical Institute. The observations were conducted on 20 dates distributed over three cycles of pulsation. We detected an asymmetric reflection nebula consisting of three regions and surrounding the star at the typical distance of 35 mas. The components of the nebula change their brightness with the same period as the star, but with significant and different phase shifts. We discuss several hypotheses that could explain this behavior.
astrophysics
We obtain several fundamental results on finite index ideals and additive subgroups of rings as well as on model-theoretic connected components of rings, which concern generating in finitely many steps inside additive groups of rings. Let $R$ be any ring equipped with an arbitrary additional first order structure, and $A$ a set of parameters. We show that whenever $H$ is an $A$-definable, finite index subgroup of $(R,+)$, then $H+RH$ contains an $A$-definable, two-sided ideal of finite index. As a corollary, we positively answer Question 3.9 of [Bohr compactifications of groups and rings, J. Gismatullin, G. Jagiella and K. Krupi\'nski]: if $R$ is unital, then $(\bar R,+)^{00}_A + \bar R \cdot (\bar R,+)^{00}_A + \bar R \cdot (\bar R,+)^{00}_A = \bar R^{00}_A$, where $\bar R \succ R$ is a sufficiently saturated elementary extension of $R$, and $(\bar R,+)^{00}_A$ [resp. $\bar R^{00}_A$] is the smallest $A$-type-definable, bounded index additive subgroup [resp. ideal] of $\bar R$. This implies that $\bar R^{00}_A=\bar R^{000}_A$, where $\bar R^{000}_A$ is the smallest invariant over $A$, bounded index ideal of $\bar R$. If $R$ is of finite characteristic (not necessarily unital), we get a sharper result: $(\bar R,+)^{00}_A + \bar R \cdot (\bar R,+)^{00}_A = \bar R^{00}_A$. We obtain similar results for finitely generated (not necessarily unital) rings and for topological rings. The above results imply that the simplified descriptions of the definable (so also classical) Bohr compactifications of triangular groups over unital rings obtained in Corollary 3.5 of the aforementioned paper are valid for all unital rings. We analyze many examples, where we compute the number of steps needed to generate a group by $(\bar R \cup \{1\}) \cdot (\bar R,+)^{00}_A$ and study related aspects, showing "optimality" of some of our main results and answering some natural questions.
mathematics
Towards realising larger scale quantum algorithms, the ability to prepare sizeable multi-qubit entangled states with full qubit control is used as a benchmark for quantum technologies. We investigate the extent to which entanglement is found within a prepared graph state on the 20-qubit superconducting quantum computer, IBM Q Poughkeepsie. We prepared a graph state along a path consisting of all twenty qubits within Poughkeepsie and performed full quantum state tomography on all groups of four connected qubits along this path. We determined that each pair of connected qubits was inseparable and hence the prepared state was entangled. Additionally, a genuine multipartite entanglement witness was measured on all qubit subpaths of the graph state and we found genuine multipartite entanglement on chains of up to three qubits.
quantum physics
Deriving a simple, analytic galaxy star formation history (SFH) using observational data is a complex task without the proper tool to hand. We therefore present SNITCH, an open-source code written in Python, developed to quickly (~2 minutes) infer the parameters describing an analytic SFH model from the emission and absorption features of a galaxy spectrum dominated by star formation gas ionisation. SNITCH uses the Flexible Stellar Population Synthesis models of Conroy et al. (2009), the MaNGA Data Analysis Pipeline and a Markov Chain Monte Carlo method in order to infer three parameters (time of quenching, rate of quenching and model metallicity) which best describe an exponentially declining quenching history. This code was written for use on the MaNGA spectral data cubes but is customisable by a user so that it can be used for any scenario where a galaxy spectrum has been obtained, and adapted to infer a user-defined analytic SFH model for specific science cases. Herein we outline the rigorous testing applied to SNITCH and show that it is both accurate and precise at deriving the SFH of a galaxy spectrum. The tests suggest that SNITCH is sensitive to the most recent epoch of star formation but can also trace the quenching of star formation even if the true decline does not occur at an exponential rate. With the use of both an analytical SFH and only five spectral features, we advocate that this code be used as a comparative tool across a large population of spectra, either for integral field unit data cubes or across a population of galaxy spectra.
astrophysics
While regression tasks aim at interpolating a relation on the entire input space, they often have to be solved with a limited amount of training data. Still, if the hypothesis functions can be sketched well with the data, one can hope to identify a generalizing model. In this work, we introduce the Neural Restricted Isometry Property (NeuRIP), a uniform concentration event in which all shallow $\mathrm{ReLU}$ networks are sketched with the same quality. To derive the sample complexity for achieving NeuRIP, we bound the covering numbers of the networks in the sub-Gaussian metric and apply chaining techniques. In the case of the NeuRIP event, we then provide bounds on the expected risk, which hold for networks in any sublevel set of the empirical risk. We conclude that all networks with sufficiently small empirical risk generalize uniformly.
statistics
Coupling phase-stable single-cycle terahertz (THz) pulses to scanning tunneling microscope (STM) junctions enables spatio-temporal imaging with femtosecond temporal and \r{A}ngstrom spatial resolution. The time resolution achieved in such THz-gated STM is ultimately limited by the sub-cycle temporal variation of the tip-enhanced THz field acting as an ultrafast voltage pulse, and hence by the ability to feed high-frequency, broadband THz pulses into the junction. Here, we report on the coupling of ultrabroadband (1-30 THz) single-cycle THz pulses from a spintronic THz emitter (STE) into a metallic STM junction. We demonstrate broadband phase-resolved detection of the THz voltage transient directly in the STM junction via THz-field-induced modulation of ultrafast photocurrents. Comparison to the unperturbed far-field THz waveform reveals the antenna response of the STM tip. Despite tip-induced low-pass filtering, frequencies up to 15 THz can be detected in the tip-enhanced near-field, resulting in THz transients with a half-cycle period of 115 fs. We further demonstrate simple polarity control of the THz bias via the STE magnetization, and show that up to 2 V THz bias at 1 MHz repetition rate can be achieved in the current setup. Finally, we find a nearly constant THz voltage and waveform over a wide range of tip-sample distances, which by comparison to numerical simulations confirms the quasi-static nature of the THz pulses. Our results demonstrate the suitability of spintronic THz emitters for ultrafast THz-STM with unprecedented bandwidth of the THz bias, and provide insight into the femtosecond response of defined nanoscale junctions.
physics
This is the second paper of a series of three. We construct effective open-closed superstring couplings by classically integrating out massive fields from open superstring field theories coupled to an elementary gauge invariant tadpole proportional to an on-shell closed string state in both large and small Hilbert spaces, in the NS sector. This source term is well known in the WZW formulation and by explicitly performing a novel large Hilbert space perturbation theory we are able to characterize the first orders of the vacuum shift solution, its obstructions and the non-trivial open-closed effective couplings in closed form. With the aim of getting all order results, we also construct a new observable in the $A_\infty$ theory in the small Hilbert space which correctly provides a gauge invariant coupling to physical closed strings and which descends from the WZW open-closed coupling upon partial gauge fixing and field redefinition. Armed with this new $A_\infty$ observable we use tensor co-algebra techniques to efficiently package the whole perturbation theory necessary for computing the effective action and we give all order results for the open-closed effective couplings in the small Hilbert space.
high energy physics theory
We consider a finite two-dimensional Heisenberg triangular spin lattice at different degrees of anisotropy coupled to a dissipative Lindblad environment obeying the Born-Markovian constraint at finite temperature. We show how applying an inhomogeneous magnetic field to the system may significantly affect the entanglement distribution and properties among the spins in the asymptotic steady state of the system. Particularly, applying an inhomogeneous field with an inward (growing) gradient toward the central spin is found to considerably enhance the nearest neighbor entanglement and its robustness to the thermal dissipative decay effect in the completely anisotropic (Ising) system, whereas all the beyond nearest neighbor entanglements vanish entirely. Applying the same field to a partially anisotropic (XYZ) system enhances not only the nearest neighbor entanglements and their robustness but also all the beyond nearest neighbor ones. Nevertheless, the inhomogeneity of the field shows no effect on the asymptotic behavior of the entanglement in the isotropic (XXX) system, which vanishes under any system configuration. Moreover, the same inhomogeneous field has the strongest impact, compared with the other field configurations, on the spin dynamics as well. Although in the isotropic system the spins relax to a separable (disentangled) steady state with all the spins reaching a common spin state regardless of the field inhomogeneity, the spins in the steady state of the completely anisotropic system reach different distinguished spin states depending on their positions in the lattice. However, in the XYZ system, though the anisotropy is lower, the spin states become even more distinguished, accompanied by long-range quantum correlations across the system, which is a sign of critical behavior taking place at this combination of system anisotropy and field inhomogeneity.
quantum physics
We search for escaping helium from the hot super-Earth 55 Cnc e by taking high-resolution spectra of the 1083 nm line during two transits using Keck/NIRSPEC. We detect no helium absorption down to a 90% upper limit of 250 ppm in excess absorption or 0.27 milli-Angstrom in equivalent width. This corresponds to a mass loss rate of less than $\sim10^9$ g/s assuming a Parker wind model with a plausible exosphere temperature of 5000-6000 K, although the precise constraint is heavily dependent on model assumptions. We consider both hydrogen- and helium-dominated atmospheric compositions, and find similar bounds on the mass loss rate in both scenarios. Our hydrodynamical models indicate that if a lightweight atmosphere exists on 55 Cnc e, our observations would have easily detected it. Together with the non-detection of Lyman $\alpha$ absorption by Ehrenreich et al. (2012), our helium non-detection indicates that 55 Cnc e either never accreted a primordial atmosphere in the first place, or lost its primordial atmosphere shortly after the dissipation of the gas disk.
astrophysics
Soon after Einstein's calculation of the effect of the Sun's gravitational field on the propagation of light in 1911, astronomers around the world tried to measure and verify the value. If the first attempts in Brazil in 1912 or Imperial Russia in 1914 had been successful, they would have proven Einstein wrong.
physics
Following Craven and Rouquier's computational method to tackle Brou\'e's abelian defect group conjecture, we present two algorithms implementing that procedure in the case of principal blocks of defect $D \cong C_{\ell} \times C_{\ell}$ for a prime $\ell$; the first describes a stable equivalence between $B_0(G)$ and $B_0(N_G(D))$, and the second tries to lift a such stable equivalence to a perverse derived equivalence. As an application, we show that Brou\'e's conjecture holds for the principal $5$-block of the simple group $\Omega^{+}_8(2)$.
mathematics
We introduce and analyze a novel approach to the problem of speaker identification in multi-party recorded meetings. Given a speech segment and a set of available candidate profiles, we propose a novel data-driven way to model the distance relations between them, aiming at identifying the speaker label corresponding to that segment. To achieve this, we employ a recurrent, memory-based architecture, since this class of neural networks has been shown to yield advanced performance in problems requiring relational reasoning. The proposed encoding of distance relations is shown to outperform traditional distance metrics, such as the cosine distance. Additional improvements are reported when the temporal continuity of the audio signals and the speaker changes are incorporated into the model. In this paper, we evaluate our method on two different tasks, i.e., scripted and real-world business meeting scenarios, where we report a relative reduction in speaker error rate of 39.28% and 51.84%, respectively, compared to the baseline.
electrical engineering and systems science
We use the background field method to systematically derive CFT data for the critical $\phi^6$ vector model in three dimensions, and the Gross-Neveu model in dimensions $2\leq d \leq 4$. Specifically, we calculate the OPE coefficients and anomalous dimensions of various operators, up to next-to-leading order in the $1/N$ expansion.
high energy physics theory
The multi-output Gaussian process ($\mathcal{MGP}$) is based on the assumption that outputs share commonalities; however, if this assumption does not hold, negative transfer will lead to decreased performance relative to learning outputs independently or in subsets. In this article, we first define negative transfer in the context of an $\mathcal{MGP}$ and then derive necessary conditions for an $\mathcal{MGP}$ model to avoid negative transfer. Specifically, under the convolution construction, we show that avoiding negative transfer is mainly dependent on having a sufficient number of latent functions $Q$ regardless of the flexibility of the kernel or inference procedure used. However, a slight increase in $Q$ leads to a large increase in the number of parameters to be estimated. To this end, we propose two latent structures that scale to arbitrarily large datasets, can avoid negative transfer and allow any kernel or sparse approximations to be used within. These structures also allow regularization, which can provide consistent and automatic selection of related outputs.
statistics
Storage is vital to power systems as it provides urgently needed flexibility to the system. Moreover, it can contribute more than flexibility. In this paper, we study the possibility of utilizing storage systems for carbon emission reduction. The opportunity arises due to the pending implementation of carbon taxes throughout the world. Without the right incentive, most system operators have to dispatch generators according to the merit order of fuel costs, without any control of carbon emissions. However, we submit that storage may provide the necessary flexibility for carbon emission reduction even without a carbon tax. We identify the non-convex structure of the storage-control problem for this task and propose an easy-to-implement dynamic programming algorithm to investigate the value of storage in carbon emission reduction.
electrical engineering and systems science
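To make the storage-for-emissions idea concrete, the sketch below solves a toy version by dynamic programming over a discretized state of charge, charging in hours with low marginal grid emissions and discharging in high-emission hours. All numbers are made up, and losses, degradation, and the non-convex structure discussed in the abstract are ignored; this is not the paper's formulation.

```python
# Toy dynamic program for emission-aware storage dispatch over a discretized
# state of charge (SOC): minimize total emissions of the grid energy served.
import numpy as np

demand = np.array([3, 3, 4, 5, 7, 9, 8, 6])                        # MWh per hour (toy)
marginal_co2 = np.array([0.3, 0.3, 0.4, 0.6, 0.9, 1.0, 0.9, 0.5])  # tCO2/MWh (toy)

cap, power, levels = 4, 2, 9                   # MWh capacity, MW limit, SOC grid points
soc_grid = np.linspace(0, cap, levels)

INF = float("inf")
cost = np.full(levels, INF); cost[0] = 0.0     # start with an empty battery
parent = np.zeros((len(demand), levels), dtype=int)

for t in range(len(demand)):
    new_cost = np.full(levels, INF)
    for j, soc_next in enumerate(soc_grid):
        for i, soc in enumerate(soc_grid):
            charge = soc_next - soc            # >0 charging, <0 discharging
            if abs(charge) > power or demand[t] + charge < 0 or cost[i] == INF:
                continue
            c = cost[i] + marginal_co2[t] * (demand[t] + charge)
            if c < new_cost[j]:
                new_cost[j], parent[t, j] = c, i
    cost = new_cost

# backtrack the optimal hourly schedule
j = int(np.argmin(cost))
plan = []
for t in range(len(demand) - 1, -1, -1):
    i = parent[t, j]
    plan.append(soc_grid[j] - soc_grid[i])
    j = i
plan = plan[::-1]
print("hourly charge(+)/discharge(-) MWh:", np.round(plan, 1))
print("emissions with storage: %.2f tCO2, without: %.2f tCO2"
      % (cost.min(), float(marginal_co2 @ demand)))
```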
We discuss interacting, closed, bosonic and superstrings in thermal equilibrium at temperatures close to the Hagedorn temperature in flat space. We calculate S-matrix elements of the strings at the Hagedorn temperature and use them to construct a low-energy effective action for interacting strings near the Hagedorn temperature. We show, in particular, that the four-point amplitude of massless winding modes leads to a positive quartic interaction. Furthermore, the effective field theory has a generalized conformal structure, namely, it is conformally invariant when the temperature is assigned an appropriate scaling dimension. Then, we show that the equations of motion resulting from the effective action possess a winding-mode-condensate background solution above the Hagedorn temperature and present a worldsheet conformal field theory, similar to a Sine-Gordon theory, that corresponds to this solution. We find that the Hagedorn phase transition in our setup is second order, in contrast to a first-order transition that was found previously in different setups.
high energy physics theory
We calculate the Wigner function for massive spin-1/2 particles in an inhomogeneous electromagnetic field to leading order in the Planck constant $\hbar$. Going beyond leading order in $\hbar$ we then derive a generalized Boltzmann equation in which the force exerted by an inhomogeneous electromagnetic field on the particle dipole moment arises naturally. Carefully taking the massless limit we find agreement with previous results. The case of global equilibrium with rotation is also studied. Finally, we outline the derivation of fluid-dynamical equations from the components of the Wigner function. The conservation of total angular momentum is promoted as an additional fluid-dynamical equation of motion. Our framework can be used to study polarization effects induced by vorticity and magnetic field in relativistic heavy-ion collisions.
high energy physics phenomenology
Holographic complexity, in the guise of the Complexity = Volume prescription, comes equipped with a natural correspondence between its rate of growth and the average infall momentum of matter in the bulk. This Momentum/Complexity correspondence can be related to an integrated version of the momentum constraint of general relativity. In this paper we propose a generalization, using the full Codazzi equations as a starting point, which successfully accounts for purely gravitational contributions to infall momentum. The proposed formula is explicitly checked in an exact pp-wave solution of the vacuum Einstein equations.
high energy physics theory
We search for host galaxy candidates of nearby fast radio bursts (FRBs), FRB 180729.J1316+55, FRB 171020, FRB 171213, FRB 180810.J1159+83, and FRB 180814.J0422+73 (the second repeating FRB). We compare the absolute magnitudes and the expected host dispersion measure $\rm DM_{host}$ of these candidates with that of the first repeating FRB, FRB 121102, as well as those of long gamma ray bursts (LGRBs) and superluminous supernovae (SLSNe), the proposed progenitor systems of FRB 121102. We find that while the FRB 121102 host is consistent with those of LGRBs and SLSNe, the nearby FRB host candidates, at least for FRB 180729.J1316+55, FRB 171020, and FRB 180814.J0422+73, either have a smaller $\rm DM_{host}$ or are fainter than the FRB 121102 host, as well as the hosts of LGRBs and SLSNe. In order to avoid the uncertainty in estimating $\rm DM_{host}$ due to the line-of-sight effect, we propose a galaxy-group-based method to estimate the electron density in the inter-galactic regions, and hence, $\rm DM_{IGM}$. The result strengthens our conclusion. We conclude that the host galaxy of FRB 121102 is atypical, and LGRBs and SLSNe are likely not the progenitor systems of at least most nearby FRB sources. The two recently reported FRB hosts differ from the host of FRB 121102 and also from the host candidates suggested in this paper. This is consistent with the conclusion of our paper and suggests that FRB hosts are very diverse.
astrophysics
This paper presents a generalizable methodology for data-driven identification of nonlinear dynamics that bounds the model error in terms of the prediction horizon and the magnitude of the derivatives of the system states. Using higher-order derivatives of general nonlinear dynamics that need not be known, we construct a Koopman operator-based linear representation and utilize Taylor series accuracy analysis to derive an error bound. The resulting error formula is used to choose the order of derivatives in the basis functions and obtain a data-driven Koopman model using a closed-form expression that can be computed in real time. Using the inverted pendulum system, we illustrate the robustness of the error bounds given noisy measurements of unknown dynamics, where the derivatives are estimated numerically. When combined with control, the Koopman representation of the nonlinear system has marginally better performance than competing nonlinear modeling methods, such as SINDy and NARX. In addition, as a linear model, the Koopman approach lends itself readily to efficient control design tools, such as LQR, whereas the other modeling approaches require nonlinear control methods. The efficacy of the approach is further demonstrated with simulation and experimental results on the control of a tail-actuated robotic fish. Experimental results show that the proposed data-driven control approach outperforms a tuned PID (Proportional Integral Derivative) controller and that updating the data-driven model online significantly improves performance in the presence of unmodeled fluid disturbance. This paper is complemented with a video: https://youtu.be/9_wx0tdDta0.
statistics
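A minimal sketch of the data-driven Koopman idea from the abstract above: lift the measured pendulum state with a numerically estimated derivative, fit a linear operator on the lifted coordinates by closed-form least squares, and roll the linear model forward. The pendulum parameters, observable choice, and horizon are illustrative assumptions, and no error bound is computed; this is not the paper's exact construction.

```python
# EDMD-style sketch: lift the state with a numerically estimated derivative
# and fit a linear Koopman matrix K so that z_{t+1} ≈ z_t @ K.
import numpy as np

def pendulum_step(x, dt=0.01, g=9.81, L=1.0, b=0.2):
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-(g / L) * np.sin(theta) - b * omega)])

# simulate a trajectory of the (assumed unknown) nonlinear system
dt, T = 0.01, 4000
traj = np.zeros((T, 2))
traj[0] = [0.8, 0.0]
for t in range(T - 1):
    traj[t + 1] = pendulum_step(traj[t], dt)

# lifted state: [theta, omega, numerically estimated angular acceleration, 1]
alpha = np.gradient(traj[:, 1], dt)                   # derivative estimated from data
Z = np.column_stack([traj[:, 0], traj[:, 1], alpha, np.ones(T)])

K, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)    # closed-form least squares

# roll the linear lifted model forward from the initial condition
z, pred = Z[0], []
for _ in range(500):
    z = z @ K
    pred.append(z[0])
err = np.max(np.abs(np.array(pred) - traj[1:501, 0]))
print("max |theta error| over a 5 s rollout:", round(float(err), 4))
```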
Wave-particle duality is one of the basic features of quantum mechanics, giving rise to the use of complex numbers in describing states of quantum systems, their dynamics, and interaction. Since the inception of quantum theory, it has been debated whether complex numbers are actually essential, or whether an alternative consistent formulation is possible using real numbers only. Here, we attack this long-standing problem both theoretically and experimentally, using the powerful tools of quantum resource theories. We show that - under reasonable assumptions - quantum states are easier to create and manipulate if they only have real elements. This gives an operational meaning to the resource theory of imaginarity. We identify and answer several important questions which include the state-conversion problem for all qubit states and all pure states of any dimension, and the approximate imaginarity distillation for all quantum states. As an application, we show that imaginarity plays a crucial role for state discrimination: there exist real quantum states which can be perfectly distinguished via local operations and classical communication, but which cannot be distinguished with any nonzero probability if one of the parties has no access to imaginarity. We confirm this phenomenon experimentally with linear optics, performing discrimination of different two-photon quantum states by local projective measurements. These results prove that complex numbers are an indispensable part of quantum mechanics.
quantum physics
Full-duplex self-backhauling is promising to provide cost-effective and flexible backhaul connectivity for ultra-dense wireless networks, but also poses a great challenge to resource management between the access and backhaul links. In this paper, we propose a user-centric joint access-backhaul transmission framework for full-duplex self-backhauled wireless networks. In the access link, user-centric clustering is adopted so that each user is cooperatively served by multiple small base stations (SBSs). In the backhaul link, user-centric multicast transmission is proposed so that each user's message is treated as a common message and multicast to its serving SBS cluster. We first formulate an optimization problem to maximize the network weighted sum rate through joint access-backhaul beamforming and SBS clustering when global channel state information (CSI) is available. This problem is efficiently solved via the successive lower-bound maximization approach with a novel approximate objective function and the iterative link removal technique. We then extend the study to the stochastic joint access-backhaul beamforming optimization with partial CSI. Simulation results demonstrate the effectiveness of the proposed algorithms for both full CSI and partial CSI scenarios. They also show that the transmission design with partial CSI can greatly reduce the CSI overhead with little performance degradation.
electrical engineering and systems science
We examine the efficiency of Recurrent Neural Networks in forecasting the spatiotemporal dynamics of high dimensional and reduced order complex systems using Reservoir Computing (RC) and Backpropagation through time (BPTT) for gated network architectures. We highlight advantages and limitations of each method and discuss their implementation for parallel computing architectures. We quantify the relative prediction accuracy of these algorithms for the long-term forecasting of chaotic systems using as benchmarks the Lorenz-96 and the Kuramoto-Sivashinsky (KS) equations. We find that, when the full state dynamics are available for training, RC outperforms BPTT approaches in terms of predictive performance and in capturing the long-term statistics, while at the same time requiring much less training time. However, in the case of reduced order data, large scale RC models can be unstable and more likely than the BPTT algorithms to diverge. In contrast, RNNs trained via BPTT show superior forecasting abilities and capture well the dynamics of reduced order systems. Furthermore, the present study quantifies for the first time the Lyapunov Spectrum of the KS equation with BPTT, achieving similar accuracy to RC. This study establishes that RNNs are a potent computational framework for the learning and forecasting of complex spatiotemporal systems.
electrical engineering and systems science
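As a concrete example of the Reservoir Computing side of the comparison, the sketch below trains a minimal echo state network (a random recurrent reservoir plus a ridge-regression readout) for one-step forecasting of a chaotic logistic-map series. The hyperparameters and benchmark series are arbitrary illustrative choices, not those used in the study.

```python
# Minimal echo state network (reservoir computing) for one-step forecasting.
import numpy as np

rng = np.random.default_rng(0)

# data: logistic map in the chaotic regime
T = 3000
x = np.empty(T)
x[0] = 0.4
for t in range(T - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

# random reservoir, rescaled to spectral radius 0.9
N = 300
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

states = np.zeros((T, N))
for t in range(1, T):
    states[t] = np.tanh(W @ states[t - 1] + W_in * x[t - 1])

# ridge-regression readout trained on the first 2000 steps (after washout)
washout, split, lam = 100, 2000, 1e-6
S, y = states[washout:split], x[washout:split]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)

pred = states[split:] @ W_out                    # one-step-ahead predictions
print("test RMSE:", float(np.sqrt(np.mean((pred - x[split:]) ** 2))))
```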
Massive MIMO communication systems have a huge potential both in terms of data rate and energy efficiency, although channel estimation becomes challenging for a large number of antennas. Using a physical model allows to ease the problem by injecting a priori information based on the physics of propagation. However, such a model rests on simplifying assumptions and requires to know precisely the configuration of the system, which is unrealistic in practice. In this paper we present mpNet, an unfolded neural network specifically designed for massive MIMO channel estimation. It is trained online in an unsupervised way. Moreover, mpNet is computationally efficient and automatically adapts its depth to the SNR. The method we propose adds flexibility to physical channel models by allowing a base station to automatically correct its channel estimation algorithm based on incoming data, without the need for a separate offline training phase. It is applied to realistic millimeter wave channels and shows great performance, achieving a channel estimation error almost as low as one would get with a perfectly calibrated system. It also allows incident detection and automatic correction, making the base station resilient and able to automatically adapt to changes in its environment.
electrical engineering and systems science
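mpNet unfolds a sparse-recovery iteration; for orientation, the sketch below implements plain matching pursuit over a dictionary of array steering vectors for a toy sparse multipath channel. The array geometry, dictionary size, and SNR are hypothetical, and the sketch has none of the learned, unfolded, or online-adaptation aspects of mpNet.

```python
# Classical matching pursuit for toy sparse channel estimation over a
# dictionary of array steering vectors.
import numpy as np

rng = np.random.default_rng(0)
N, G, paths, snr_db = 64, 256, 3, 10          # antennas, dictionary size, #paths, SNR

angles = np.linspace(-np.pi / 2, np.pi / 2, G)
D = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(angles))) / np.sqrt(N)

# ground-truth sparse channel and noisy observation
idx = rng.choice(G, paths, replace=False)
gains = (rng.normal(size=paths) + 1j * rng.normal(size=paths)) / np.sqrt(2)
h = D[:, idx] @ gains
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
y = h + noise * np.linalg.norm(h) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))

def matching_pursuit(y, D, n_iter):
    residual, h_hat = y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        corr = D.conj().T @ residual          # correlate residual with every atom
        k = int(np.argmax(np.abs(corr)))      # pick the best-matching steering vector
        h_hat += corr[k] * D[:, k]            # add its contribution to the estimate
        residual -= corr[k] * D[:, k]
    return h_hat

h_hat = matching_pursuit(y, D, n_iter=paths)
print("relative error:", float(np.linalg.norm(h_hat - h) / np.linalg.norm(h)))
```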
This paper proposes a discretionary lane selection algorithm. In particular, highway driving is considered as a targeted scenario, where each lane has a different level of traffic flow. When lane-changing is discretionary, it is advised not to change lanes unless highly beneficial, e.g., reducing travel time significantly or securing higher safety. Evaluating such "benefit" is a challenge, along with multiple surrounding vehicles in dynamic speed and heading with uncertainty. We propose a real-time lane-selection algorithm with careful cost considerations and with modularity in design. The algorithm is a search-based optimization method that evaluates uncertain dynamic positions of other vehicles under a continuous time and space domain. For demonstration, we incorporate a state-of-the-art motion planner framework (Neural Networks integrated Model Predictive Control) under a CARLA simulation environment.
computer science
In the early days of gene expression data, researchers have focused on gene-level analysis, and particularly on finding differentially expressed genes. This usually involved making a simplifying assumption that genes are independent, which made likelihood derivations feasible and allowed for relatively simple implementations. In recent years, the scope has expanded to include pathway and `gene set' analysis in an attempt to understand the relationships between genes. We develop a method to recover a gene network's structure from co-expression data, measured in terms of normalized Pearson's correlation coefficients between gene pairs. We treat these co-expression measurements as weights in the complete graph in which nodes correspond to genes. To decide which edges exist in the gene network, we fit a three-component mixture model such that the observed weights of `null edges' follow a normal distribution with mean 0, and the non-null edges follow a mixture of two lognormal distributions, one for positively- and one for negatively-correlated pairs. We show that this so-called L2N mixture model outperforms other methods in terms of power to detect edges, and it allows us to control the false discovery rate. Importantly, the method makes no assumptions about the true network structure.
statistics
Understanding the temporal dynamics of COVID-19 patient phenotypes is necessary to derive fine-grained resolution of pathophysiology. Here we use state-of-the-art deep neural networks over an institution-wide machine intelligence platform for the augmented curation of 15.8 million clinical notes from 30,494 patients subjected to COVID-19 PCR diagnostic testing. By contrasting the Electronic Health Record (EHR)-derived clinical phenotypes of COVID-19-positive (COVIDpos, n=635) versus COVID-19-negative (COVIDneg, n=29,859) patients over each day of the week preceding the PCR testing date, we identify anosmia/dysgeusia (37.4-fold), myalgia/arthralgia (2.6-fold), diarrhea (2.2-fold), fever/chills (2.1-fold), respiratory difficulty (1.9-fold), and cough (1.8-fold) as significantly amplified in COVIDpos over COVIDneg patients. The specific combination of cough and diarrhea has a 3.2-fold amplification in COVIDpos patients during the week prior to PCR testing, and along with anosmia/dysgeusia, constitutes the earliest EHR-derived signature of COVID-19 (4-7 days prior to typical PCR testing date). This study introduces an Augmented Intelligence platform for the real-time synthesis of institutional knowledge captured in EHRs. The platform holds tremendous potential for scaling up curation throughput, with minimal need for retraining underlying neural networks, thus promising EHR-powered early diagnosis for a broad spectrum of diseases.
computer science
In the context of Hecke algebras of complex reflection groups, we prove that the generalized Hecke algebras of normalizers of parabolic subgroups are semidirect products, under suitable conditions on the parameters involved in their definition.
mathematics
We study the radiative decays $h_{c}\rightarrow\gamma\eta^{(\prime)}$ in the framework of perturbative QCD and evaluate analytically the one-loop integrals with the light quark masses kept. Interestingly, the branching ratios $\mathcal{B}(h_{c}\rightarrow\gamma\eta^{(\prime)})$ are insensitive to both the light quark masses and the shapes of $\eta^{(\prime)}$ distribution amplitudes. It is also noticed that the contribution of the gluonic content of $\eta^{(\prime)}$ is almost equal to that of the quark-antiquark content of $\eta^{(\prime)}$ in the radiative decays $h_{c} \rightarrow \gamma\eta^{(\prime)}$. By employing the ratio $R_{h_{c}}=\mathcal{B}(h_{c}\rightarrow\gamma\eta)/\mathcal{B}(h_{c}\rightarrow\gamma\eta^{\prime})$, we extract the mixing angle $\phi=33.8^{\circ}\pm2.5^{\circ}$, which is in clear disagreement with the Feldmann-Kroll-Stech result $\phi=39.0^{\circ}\pm1.6^{\circ}$ extracted from the ratio $R_{J/\psi}$ with nonperturbative matrix elements $\langle 0\mid G^{a}_{\mu\nu}\tilde{G}^{a,\mu\nu}\mid\eta^{(\prime)}\rangle$, but consistent with $\phi=33.5^{\circ}\pm0.9^{\circ}$ extracted from the asymptotic limit of the $\gamma^{\ast}\gamma-\eta^{\prime}$ transition form factor and $\phi=33.9^{\circ}\pm0.6^{\circ}$ extracted from $R_{J/\psi}$ in perturbative QCD. We also briefly discuss possible reasons for the difference in the determinations of the mixing angle.
high energy physics phenomenology
After a decade of intensive theoretical and experimental exploration, it remains in dispute whether interfacial spin-orbit coupling (ISOC) at metallic magnetic interfaces can effectively generate a spin current. Here, utilizing Ti/FeCoB bilayers, which are unique for their negligible bulk spin Hall effect and strong, tunable ISOC, we establish that there is no significant charge-to-spin conversion at magnetic interfaces via the spin-orbit filtering effect or the Rashba-Edelstein(-like) effect, even when the ISOC is stronger than that of a typical Pt/ferromagnet interface and can promote strong perpendicular magnetic anisotropy. We also find a minimal orbital Hall effect in 3d Ti.
condensed matter
The aim of the present note is to compare the recent LHC data at $\sqrt{s} =13 \,TeV$ with our previous theoretical proposal that the true Higgs boson $H_T$ should be a broad heavy resonance with mass around $750 \, GeV$. We focus on the so-called golden channel $H_T \rightarrow ZZ$ where the pair of Z bosons decay leptonically to $\ell^+ \ell^- \ell^+ \ell^-$, $\ell$ being either an electron or a muon. We use the data collected by the ATLAS and CMS Collaborations at $\sqrt{s} =13 \,TeV$ with an integrated luminosity of $36.1 \, fb^{-1}$ and $77.4 \, fb^{-1}$, respectively. We find that the experimental data from both the LHC Collaborations do display in the golden channel a rather broad resonance structure around $700 \, GeV$ with a sizeable statistical significance. Our theoretical expectations seem to be in fairly good agreement with the experimental observations. Combining the data from both the ATLAS and CMS Collaborations, we obtain evidence for the heavy Higgs boson in this channel with an estimated statistical significance of more than five standard deviations.
high energy physics phenomenology
Revenue potential from offshore wind and energy storage systems for a Long Island node in the New York ISO (NYISO) is examined using advanced lithium-ion battery representations. These advanced mixed-integer-linear battery models account for the dynamic performance, as well as the degradation behavior of the batteries, which are usually not accounted for in power systems models. Multiple hybrid offshore wind and battery system designs are investigated to examine the impact of locating the battery offshore versus locating it onshore. For the examined systems, we explore different battery usable state-of-charge (SOC) windows, and corresponding dispatch of the battery to maximize energy- and capacity-market revenues. The impacts of variability of offshore wind output along with energy- and capacity-market prices are evaluated using publicly available data from 2010 to 2013. Locating the battery onshore resulted in higher revenues. For 2013, results highlight that without accurate battery representations, models can overestimate battery revenues by up to 155%, resulting primarily from degradation-related costs. Using advanced algorithms, net revenue can be increased by 29%. Results also indicate that wider useable SOC windows could lead to higher net revenues from the energy market, due to higher arbitrage opportunities that compensate for any additional degradation-tied costs in higher DODs. The added value of a MWh of energy storage varies from $2 to $3.5 per MWh of wind energy, which leads to a breakeven cost range of $50-$95 per kWh for the battery systems studied. As such, energy- and capacity-market revenues were found to be insufficient in recovering the investment costs of current battery systems for the applications considered in this analysis.
electrical engineering and systems science
The emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancers (PCa), which is the most prevalent malignancy in males in the western world, enabling a better selection of patients for confirmation biopsy. However, analyzing these images is complex even for experts, hence opening an opportunity for computer-aided diagnosis systems to seize. This paper proposes a fully automatic system based on Deep Learning that takes a prostate mpMRI from a PCa-suspect patient and, by leveraging the Retina U-Net detection framework, locates PCa lesions, segments them, and predicts their most likely Gleason grade group (GGG). It uses 490 mpMRIs for training/validation, and 75 patients for testing from two different datasets: ProstateX and IVO (Valencia Oncology Institute Foundation). In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG$\geq$2 significance criterion of 0.96/1.00/0.79 for the ProstateX dataset, and 0.95/1.00/0.80 for the IVO dataset. Evaluated at a patient level, the results are 0.87/1.00/0.375 in ProstateX, and 0.91/1.00/0.762 in IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying up with the original winner of the challenge). For expert comparison, IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at a lesion level, and 0.85/0.58 at a patient level. Additional subsystems for automatic prostate zonal segmentation and mpMRI non-rigid sequence registration were also employed to produce the final fully automated system. The code for the ProstateX-trained system has been made openly available at https://github.com/OscarPellicer/prostate_lesion_detection. We hope that this will represent a landmark for future research to use, compare and improve upon.
physics
Several software defect prediction techniques have been developed over the past decades. These techniques predict defects at the granularity of typical software assets, such as components and files. In this paper, we investigate feature-oriented defect prediction: predicting defects at the granularity of features -- domain-entities that represent software functionality and often cross-cut software assets. Feature-oriented defect prediction can be beneficial since: (i) some features might be more error-prone than others, (ii) characteristics of defective features might be useful to predict other error-prone features, and (iii) feature-specific code might be prone to faults arising from feature interactions. We explore the feasibility and solution space for feature-oriented defect prediction. Our study relies on 12 software projects from which we analyzed 13,685 bug-introducing and corrective commits, and systematically generated 62,868 training and test datasets to evaluate classifiers, metrics, and scenarios. The datasets were generated based on the 13,685 commits, 81 releases, and 24,532 permutations of our 12 projects depending on the scenario addressed. We covered scenarios such as just-in-time (JIT) and cross-project defect prediction. Our results confirm the feasibility of feature-oriented defect prediction. We found the best performance (i.e., precision and robustness) when using the Random Forest classifier, with process and structure metrics. Surprisingly, single-project JIT and release-level predictions had median AUC-ROC values greater than 95% and 90%, respectively, contrary to studies that assert poor performance due to insufficient training data. We also found that a model trained on release-level data from one of the twelve projects could predict defect-proneness of features in the other eleven projects with median AUC-ROC of 82%, without retraining.
computer science
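To show what a feature-level defect-prediction pipeline of the kind evaluated above can look like, the sketch below trains a Random Forest on a synthetic table of process and structure metrics and reports AUC-ROC, mirroring the abstract's best-performing setup. All column names and data are made up for illustration; they are not the study's metrics or results.

```python
# Illustrative feature-level defect prediction with a Random Forest and
# AUC-ROC evaluation on a synthetic metrics table (hypothetical columns).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000  # one row per (feature, release)

df = pd.DataFrame({
    "loc": rng.lognormal(5, 1, n),        # structure metric: feature size
    "scattering": rng.poisson(4, n),      # structure: #assets the feature touches
    "commits": rng.poisson(6, n),         # process metric: churn
    "authors": rng.integers(1, 8, n),     # process: #distinct developers
})
# synthetic ground truth: defect-proneness grows with churn and scattering
logit = 0.25 * df["commits"] + 0.3 * df["scattering"] + rng.normal(0, 1, n) - 4
df["defective"] = (logit > 0).astype(int)

X, y = df.drop(columns="defective"), df["defective"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC-ROC on held-out features: {auc:.2f}")
```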