text | label |
---|---|
This paper aims to develop distributed feedback control algorithms that allow cooperative locomotion of quadrupedal robots which are coupled to each other by holonomic constraints. These constraints can arise from collaborative manipulation of objects during locomotion. In addressing this problem, the complex hybrid dynamical models that describe collaborative legged locomotion are studied. The complex periodic orbits (i.e., gaits) of these sophisticated and high-dimensional hybrid systems are investigated. We consider a set of virtual constraints that stabilizes locomotion of a single agent. The paper then generates modified and local virtual constraints for each agent that allow stable collaborative locomotion. Optimal distributed feedback controllers, based on nonlinear control and quadratic programming, are developed to impose the local virtual constraints. To demonstrate the power of the analytical foundation, an extensive numerical simulation for cooperative locomotion of two quadrupedal robots with robotic manipulators is presented. The numerical complex hybrid model has 64 continuous-time domains, 192 discrete-time transitions, 96 state variables, and 36 control inputs. | mathematics |
Charge conjugation, parity, and time reversal together (CPT) constitute a fundamental symmetry of nature. If CPT symmetry were violated, it would pose a serious challenge to many well-established theories. Although no definitive signal of CPT violation has been observed so far, there are many reasons to undertake careful investigations of various low-energy phenomena, which can at least provide stringent limits on CPT violation. In this regard, neutrinos are considered one of the most accurate tools to test this symmetry. In this work, we therefore investigate the effect of CPT violation through neutrino oscillations. If the fundamental CPT symmetry is assumed to be conserved, both neutrino and antineutrino oscillations can be parametrised using the same set of oscillation parameters. Any small deviation in the neutrino properties during propagation due to CPT violation can be studied explicitly in long-baseline experiments. We discuss the potential of various long-baseline experiments to place bounds on the CPT-violating parameters $\Delta(\delta_{CP})$, $\Delta(m^2_{31})$ and $\Delta(\sin^2 \theta_{23})$, which characterize the differences between neutrino and antineutrino oscillation parameters. | high energy physics phenomenology |
The power and possibilities offered by computation graphs have steered most of the available modeling techniques toward re-implementing and utilizing them to capture the complex nature of Systems Biology (SB). To model the dynamics of cellular populations, we need to study a plethora of scenarios, ranging from cell differentiation to tumor growth. Testing and verifying a model in research means running the model multiple times with different, or in some cases identical, parameters to see how the model behaves and whether some of the outputs change under different parameters. In this paper, we describe the development and implementation of a new agent-based model in Python. The model can be executed using a development environment (based on Mesa, and extremely simplified for convenience) with different parameters. The result is a large collection of data, which allows an in-depth analysis of the tumor microenvironment by means of network analysis. | computer science |
First generation quantum repeater networks require independent quantum memories capable of storing and retrieving indistinguishable photons to perform quantum-interference-mediated high-repetition entanglement swapping operations. The ability to perform these coherent operations at room temperature is of prime importance in order to realize large scalable quantum networks. Here we address these significant challenges by observing Hong-Ou-Mandel (HOM) interference between indistinguishable photons carrying polarization qubits retrieved from two independent room-temperature quantum memories. Our elementary quantum network configuration includes: (i) two independent sources generating polarization-encoded qubits; (ii) two atomic-vapor dual-rail quantum memories; and (iii) a HOM interference node. We obtained interference visibilities after quantum memory retrieval of $\rm \boldsymbol{V=(41.9\pm2.0)\%}$ for few-photon level inputs and $\rm \boldsymbol{V=(25.9\pm2.5)\%}$ for single-photon level inputs. Our prototype network lays the groundwork for future large-scale memory-assisted quantum cryptography and distributed quantum networks. | quantum physics |
We consider some critical claims concerning our above paper, and reply to these claims. | mathematics |
New physics scenarios beyond the Standard Model (SM) for neutrino mass mechanism often necessitate the existence of a neutral scalar $H$ and/or doubly-charged scalar $H^{\pm\pm}$, which couple to the SM charged leptons in a flavor violating way, while evading all existing constraints. Such scalars could be effectively produced at future lepton colliders like CEPC, ILC, FCC-ee and CLIC, either on-shell or off-shell, and induce striking charged lepton flavor violating (LFV) signals. We find that a large parameter space of the scalar masses and the LFV couplings can be probed at lepton colliders, well beyond the current low-energy constraints in the lepton sector. The neutral scalar explanation of the muon $g-2$ anomaly could also be directly tested. | high energy physics phenomenology |
The target tracking problem has many practical applications in real life. In submarines, target tracking is preferably done using passive sensors. These sensors measure only the bearing angles between the observed target and the ownship. Therefore, this problem is generally referred to as bearings-only target tracking or target motion analysis (TMA). The classical approach is to use a state-observer-based filter, i.e., an Extended Kalman Filter, to estimate the range, course and speed of the target using only the bearings. In recent studies, the problem is solved as a global optimization problem by applying evolutionary algorithms to some objective functions. In this study, we investigate the effect of commonly used cost functions on the performance of TMA algorithms. In particular, we investigate cost functions based on bearing differences and on equidistant line segments. The simulation results show that the latter gives a sub-optimal solution to the target motion analysis problem compared to the former. | electrical engineering and systems science |
Due to urbanization and the increase in individual mobility, congestion and inefficient traffic management occur in most metropolitan areas around the world. Much-needed intelligent traffic control systems, which are able to reduce congestion, rely on measurements of traffic situations in urban road networks and freeways. Unfortunately, the instrumentation for accurate traffic measurement is expensive and not widely implemented. This thesis addresses this problem: relatively inexpensive and easy-to-install loop detectors are used by a geometric deep learning algorithm, which uses loop-detector data in the spatial context of a road network, to estimate queue length in front of signalized intersections, which can then be used for subsequent traffic control tasks. Therefore, in the first part of this work a conventional queue-length estimation method (which does not use machine learning techniques) based on second-by-second loop-detector data is implemented; it uses detected shockwaves in queues to estimate the length and the time of the maximum queue. The method is later used as a reference, but also as additional input information for the geometric deep learning approach. In the second part the geometric deep learning algorithm is developed; it exploits spatial correlations in the road network as well as temporal correlations in detector data time sequences through new attention mechanisms, to overcome the limitations of conventional methods such as excess traffic demand, lane changing and stop-and-go traffic. To this end, the topology of the road network is abstracted into a graph. Both approaches are compared regarding their performance, reliability and limitations, and are validated using the traffic simulation software SUMO (Simulation of Urban MObility). Finally, the results are discussed in the conclusions and further investigations are suggested. | computer science |
We continue the program of extending the scattering equation framework by Cachazo, He and Yuan to a double-cover prescription. We discuss how to apply the double-cover formalism to effective field theories, with a special focus on the non-linear sigma model. A defining characteristic of the double-cover formulation is the emergence of new factorization relations. We present several factorization relations, along with a novel recursion relation. Using the recursion relation and a new prescription for the integrand, any non-linear sigma model amplitude can be expressed in terms of off-shell three-point amplitudes. The resulting expression is purely algebraic, and we do not have to solve any scattering equation. We also discuss soft limits, boundary terms in BCFW recursion, and application of the double-cover prescription to other effective field theories, like the special Galileon theory. | high energy physics theory |
We study probability density functions that are log-concave. Despite the space of all such densities being infinite-dimensional, the maximum likelihood estimate is the exponential of a piecewise linear function determined by finitely many quantities, namely the function values, or heights, at the data points. We explore in what sense exact solutions to this problem are possible. First, we show that the heights given by the maximum likelihood estimate are generically transcendental. For a cell in one dimension, the maximum likelihood estimator is expressed in closed form using the generalized Lambert W function. Even more, we show that finding the log-concave maximum likelihood estimate is equivalent to solving a collection of polynomial-exponential systems of a special form. Even in the case of two equations, very little is known about solutions to these systems. As an alternative, we use Smale's alpha-theory to refine approximate numerical solutions and to certify solutions to log-concave density estimation. | mathematics |
On surfaces with many motile cilia, beats of the individual cilia coordinate to form metachronal waves. We present a theoretical framework that connects the dynamics of an individual cilium to the collective dynamics of a ciliary carpet via systematic coarse-graining. We uncover the criteria that control the selection of frequency and wavevector of stable metachronal waves of the cilia and examine how they depend on the geometric and dynamical characteristics of single cilia, as well as the geometric properties of the array. We perform agent-based numerical simulations of arrays of cilia with hydrodynamic interactions and find quantitative agreement with the predictions of the analytical framework. Our work sheds light on the question of how the collective properties of beating cilia can be determined using information about the individual units, and as such exemplifies a bottom-up study of a rich active matter system. | condensed matter |
Scanned receipt OCR and key information extraction (SROIE) refer to the processes of recognizing text from scanned receipts, extracting key texts from them, and saving the extracted texts to structured documents. SROIE plays a critical role in many document analysis applications and holds great commercial potential, but very few research works and advances have been published in this area. In recognition of the technical challenges, importance and huge commercial potential of SROIE, we organized the ICDAR 2019 competition on SROIE. In this competition, we set up three tasks, namely Scanned Receipt Text Localisation (Task 1), Scanned Receipt OCR (Task 2) and Key Information Extraction from Scanned Receipts (Task 3). A new dataset with 1000 whole scanned receipt images and annotations was created for the competition. In this report we present the motivation, competition datasets, task definitions, evaluation protocol, submission statistics, performance of submitted methods and results analysis. | computer science |
In recent years, there have been many practical applications of anomaly detection such as in predictive maintenance, detection of credit fraud, network intrusion, and system failure. The goal of anomaly detection is to identify in the test data anomalous behaviors that are either rare or unseen in the training data. This is a common goal in predictive maintenance, which aims to forecast the imminent faults of an appliance given abundant samples of normal behaviors. Local outlier factor (LOF) is one of the state-of-the-art models used for anomaly detection, but the predictive performance of LOF depends greatly on the selection of hyperparameters. In this paper, we propose a novel, heuristic methodology to tune the hyperparameters in LOF. A tuned LOF model that uses the proposed method shows good predictive performance in both simulations and real data sets. | statistics |
Most research on dialogue has focused either on dialogue generation for open-ended chit-chat or on state tracking for goal-directed dialogue. In this work, we explore a hybrid approach to goal-oriented dialogue generation that combines retrieval from past history with a hierarchical, neural encoder-decoder architecture. We evaluate this approach in the customer support domain using the Multiwoz dataset (Budzianowski et al., 2018). We show that adding this retrieval step to a hierarchical, neural encoder-decoder architecture leads to significant improvements, including responses that are rated more appropriate and fluent by human evaluators. Finally, we compare our retrieval-based model to various semantically conditioned models explicitly using past dialog act information, and find that our proposed model is competitive with the current state of the art (Chen et al., 2019), while not requiring explicit labels about past machine acts. | computer science |
We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$ unconditional ImageNet. | statistics |
We propose a unified description of two important phenomena: color confinement in large-$N$ gauge theory, and Bose-Einstein condensation (BEC). We focus on the confinement/deconfinement transition characterized by the increase of the entropy from $N^0$ to $N^2$, which persists in the weak coupling region. Indistinguishability associated with the symmetry group -- SU($N$) or O($N$) in gauge theory, and S$_N$ permutations in the system of identical bosons -- is crucial for the formation of the condensed (confined) phase. We relate standard criteria, based on off-diagonal long range order (ODLRO) for BEC and the Polyakov loop for gauge theory. The constant offset of the distribution of the phases of the Polyakov loop corresponds to ODLRO, and gives the order parameter for the partially-(de)confined phase at finite coupling. This viewpoint may have implications for confinement at finite $N$, and for quantum gravity via gauge/gravity duality. | high energy physics theory |
We study bond percolation on several four-dimensional (4D) lattices, including the simple (hyper) cubic (SC), the SC with combinations of nearest neighbors and second nearest neighbors (SC-NN+2NN), the body-centered cubic (BCC), and the face-centered cubic (FCC) lattices, using an efficient single-cluster growth algorithm. For the SC lattice, we find $p_c = 0.1601312(2)$, which confirms previous results (based on other methods), and find a new value $p_c=0.035827(1)$ for the SC-NN+2NN lattice, which was not studied previously for bond percolation. For the 4D BCC and FCC lattices, we obtain $p_c=0.074212(1)$ and 0.049517(1), which are substantially more precise than previous values. We also find critical exponents $\tau = 2.3135(5)$ and $\Omega = 0.40(3)$, consistent with previous numerical results and the recent four-loop series result of Gracey [Phys. Rev. D 92, 025012, (2015)]. | condensed matter |
Neural transducer-based systems such as RNN Transducers (RNN-T) for automatic speech recognition (ASR) blend the individual components of a traditional hybrid ASR system (acoustic model, language model, punctuation model, inverse text normalization) into one single model. This greatly simplifies training and inference and hence makes RNN-T a desirable choice for ASR systems. In this work, we investigate the use of RNN-T in applications that require a tunable latency budget during inference time. We also improve the decoding speed of the originally proposed RNN-T beam search algorithm. We evaluate our proposed system on an English video ASR dataset and show that neural RNN-T models can achieve comparable WER and better computational efficiency than a well-tuned hybrid ASR baseline. | computer science |
We study the Quantum Zeno Effect (QZE) induced by continuous partial measurement in the presence of short-correlated noise in the system Hamiltonian. We study the survival probability and the onset of the QZE as a function of the measurement strength, and find that, depending on the noise parameters, the quantum Zeno effect can be enhanced or suppressed by the noise in different regions of the parameter space. Notably, the conditions for the enhancement of the QZE are different when determined by the short-time or long-time behavior of the survival probability, or by the measurement strength marking the onset of the quantum Zeno regime. | quantum physics |
We analyze the model of a self-interacting $\phi^4_{\star}$ scalar field theory in Snyder-de Sitter space. After analytically computing the one-loop beta functions in the small noncommutativity and curvature limit, we solve numerically the corresponding system of differential equations, showing that in this limit the model possesses at least one regime in which the theory is asymptotically free. Moreover, in a given region of the parameter space we also observe a peculiar running of the parameter associated to the curvature, which changes its sign and can therefore be interpreted as a transition from an IR de Sitter space to a UV anti-de Sitter one. | high energy physics theory |
Thompson sampling (TS) is a class of algorithms for sequential decision-making, which requires maintaining a posterior distribution over a model. However, calculating exact posterior distributions is intractable for all but the simplest models. Consequently, efficient computation of an approximate posterior distribution is a crucial problem for scalable TS with complex models, such as neural networks. In this paper, we use distribution optimization techniques to approximate the posterior distribution, solved via Wasserstein gradient flows. Based on the framework, a principled particle-optimization algorithm is developed for TS to approximate the posterior efficiently. Our approach is scalable and does not make explicit distribution assumptions on posterior approximations. Extensive experiments on both synthetic data and real large-scale data demonstrate the superior performance of the proposed methods. | statistics |
Let $G$ be a bipartite graph where every node has a strict ranking of its neighbors. For every node, its preferences over neighbors extend naturally to preferences over matchings. Matching $N$ is more popular than matching $M$ if the number of nodes that prefer $N$ to $M$ is more than the number that prefer $M$ to $N$. A maximum matching $M$ in $G$ is a "popular max-matching" if there is no maximum matching in $G$ that is more popular than $M$. Such matchings are relevant in applications where the set of admissible solutions is the set of maximum matchings and we wish to find a best maximum matching as per node preferences. It is known that a popular max-matching always exists in $G$. Here we show a compact extended formulation for the popular max-matching polytope. So when there are edge costs, a min-cost popular max-matching in $G$ can be computed in polynomial time. This is in contrast to the min-cost popular matching problem which is known to be NP-hard. We also consider Pareto-optimality, which is a relaxation of popularity, and show that computing a min-cost Pareto-optimal matching/max-matching is NP-hard. | computer science |
Telementoring surgeons as they perform surgery can be essential in the treatment of patients when in situ expertise is not available. Nonetheless, expert mentors are often unavailable to provide trainees with real-time medical guidance. When mentors are unavailable, a fallback autonomous mechanism should provide medical practitioners with the required guidance. However, AI/autonomous mentoring in medicine has been limited by the availability of generalizable prediction models, and of surgical procedure datasets to train those models with. This work presents the initial steps towards the development of an intelligent artificial system for autonomous medical mentoring. Specifically, we present the first Database for AI Surgical Instruction (DAISI). DAISI leverages images and instructions to provide step-by-step demonstrations of how to perform procedures from various medical disciplines. The dataset was acquired from real surgical procedures and data from academic textbooks. We used DAISI to train an encoder-decoder neural network capable of predicting medical instructions given a current view of the surgery. Afterwards, the instructions predicted by the network were evaluated using cumulative BLEU scores and input from expert physicians. According to the BLEU scores, the predicted and ground truth instructions were up to 67% similar. Additionally, expert physicians subjectively assessed the algorithm using a Likert scale, and considered that the predicted descriptions were related to the images. This work provides a baseline for AI algorithms to assist in autonomous medical mentoring. | computer science |
We develop the spectroscopy of $c\bar c c\bar c$ and other all-heavy tetraquark states in the dynamical diquark model. In the most minimal form of the model (e.g., each diquark appears only in the color-triplet combination; the non-orbital spin couplings connect only quarks within each diquark), the spectroscopy is extremely simple. Namely, the $S$-wave multiplets contain precisely 3 degenerate states ($0^{++}$, $1^{+-}$, $2^{++}$) and the 7 $P$-wave states satisfy an equal-spacing rule when the tensor coupling is negligible. When comparing numerically to the recent LHCb results, we find the best interpretation is assigning $X(6900)$ to the $2S$ multiplet, while a lower state suggested at about $6740$ MeV fits well with the members of the $1P$ multiplet. We also predict the location of other multiplets ($1S$, $1D$, etc.) and discuss the significance of the $cc$ open-flavor threshold. | high energy physics phenomenology |
Drmota and Stufler proved recently that the expected number of pattern occurrences of a given map is asymptotically linear when the number of edges goes to infinity. In this paper we improve their result by means of a different method. Our method allows us to develop a systematic way for computing the explicit constant of the linear (main) term and shows that it is a positive rational number. Moreover, by extending our method, we also solve the corresponding problem of submap occurrences. | mathematics |
Spontaneous collapse models and Bohmian mechanics are two different solutions to the measurement problem plaguing orthodox quantum mechanics. They have a priori nothing in common. At a formal level, collapse models add a non-linear noise term to the Schr\"odinger equation, and extract definite measurement outcomes either from the wave function (e.g. mass density ontology) or the noise itself (flash ontology). Bohmian mechanics keeps the Schr\"odinger equation intact but uses the wave function to guide particles (or fields), which comprise the primitive ontology. Collapse models modify the predictions of orthodox quantum mechanics, whilst Bohmian mechanics can be argued to reproduce them. However, it turns out that collapse models and their primitive ontology can be exactly recast as Bohmian theories. More precisely, considering (i) a system described by a non-Markovian collapse model, and (ii) an extended system where a carefully tailored bath is added and described by Bohmian mechanics, the stochastic wave-function of the collapse model is exactly the wave-function of the original system conditioned on the Bohmian particle positions of the bath. Further, the noise driving the collapse model is a linear functional of the Bohmian positions. The randomness that seems progressively revealed in the collapse models lies entirely in the initial conditions in the Bohmian-like theory. Our construction of the appropriate bath is not trivial and exploits an old result from the theory of open quantum systems. This reformulation of collapse models as Bohmian theories brings to the fore the question of whether there exist `unromantic' realist interpretations of quantum theory that cannot ultimately be rewritten this way, with some guiding law. It also points to important foundational differences between `true' (Markovian) collapse models and non-Markovian models. | quantum physics |
Recently, Hamiltonian Monte Carlo (HMC) has become widespread as one of the more reliable approaches to efficient sample generation. However, HMC has difficulty sampling from multimodal posterior distributions because the HMC chain cannot cross energy barriers between modes due to the energy conservation property. In this paper, we propose a Stochastic Approximate Hamiltonian Monte Carlo (SAHMC) algorithm for generating samples from multimodal densities under the HMC framework. SAHMC can adaptively lower the energy barrier so that the Hamiltonian trajectory moves between modes more frequently and more easily. Our simulation studies show the potential for SAHMC to explore a multimodal target distribution more efficiently than HMC-based implementations. | statistics |
During migration cells exhibit a rich variety of seemingly random migration patterns, which makes unraveling the underlying mechanisms that control cell migration a daunting challenge. For efficient migration cells require a mechanism for polarization, so that traction forces are produced in the direction of motion, while adhesion is released to allow forward migration. To simplify the study of this process cells have been studied when placed along one-dimensional tracks, where single cells exhibit both smooth and stick-slip migration modes. The stick-slip motility mode is characterized by protrusive motion at the cell front, coupled with slow cell elongation, which is followed by rapid retractions of the cell back. In this study, we explore a minimal physical model that couples the force applied on the adhesion bonds to the length variations of the cell and the traction forces applied by the polarized actin retrograde flow. We show that the rich spectrum of cell migration patterns emerges from this model as different \emph{deterministic} dynamical phases. This result suggests a source for the large cell-to-cell variability (CCV) in cell migration patterns observed in single cells over time and within cell populations. The large heterogeneity can arise from small fluctuations in the cellular components that are greatly amplified due to moving the cells' internal state across the dynamical phase transition lines. Temporal noise is shown to drive random changes in the cellular polarization direction, which is enhanced during the stick-slip migration mode. These results offer a new framework to explain experimental observations of migrating cells, resulting from noisy switching between underlying deterministic migration modes. | physics |
The preparation of nonclassical states of mechanical motion conclusively proves that control over such motion has reached the quantum level. We investigate ways to achieve nonclassical states of macroscopic mechanical oscillators, particularly levitated nanoparticles. We analyze the possibility of the conditional squeezing of the levitated particle induced by the homodyne detection of light in a pulsed optomechanical setup within the resolved sideband regime. We focus on the regimes that are experimentally relevant for the levitated systems where the ground-state cooling is not achievable and the optomechanical coupling is comparable with the cavity linewidth. The analysis is thereby performed beyond the adiabatic regime routinely used for the bulk optomechanical pulsed systems. The results show that the quantum state of a levitated particle could be squeezed below the ground state variance within a wide range of temperatures. This opens a path to test for the first time nonclassical control of levitating nanoparticles beyond the ground state. | quantum physics |
The problem of compiling general quantum algorithms for implementation on near-term quantum processors has been introduced to the AI community. Previous work demonstrated that temporal planning is an attractive approach for part of this compilation task, specifically, the routing of circuits that implement the Quantum Alternating Operator Ansatz (QAOA) applied to the MaxCut problem on a quantum processor architecture. In this paper, we extend the earlier work to route circuits that implement QAOA for Graph Coloring problems. QAOA for coloring requires execution of more, and more complex, operations on the chip, which makes routing a more challenging problem. We evaluate the approach on state-of-the-art hardware architectures from leading quantum computing companies. Additionally, we apply a planning approach to qubit initialization. Our empirical evaluation shows that temporal planning compares well to reasonable analytic upper bounds, and that solving qubit initialization with a classical planner generally helps temporal planners in finding shorter-makespan compilations for QAOA for Graph Coloring. These advances suggest that temporal planning can be an effective approach for more complex quantum computing algorithms and architectures. | quantum physics |
Multistage stochastic programming provides a modeling framework for sequential decision-making problems that involve uncertainty. One typically overlooked aspect of this methodology is how uncertainty is incorporated into modeling. Traditionally, statistical forecasting techniques with simple forms, e.g., (first-order) autoregressive time-series models, are used to extract scenarios to be added to optimization models to represent the uncertain future. However, oftentimes, the performance of these forecasting models is not thoroughly assessed. Motivated by the advances in probabilistic forecasting, we incorporate a deep learning-based global time-series forecasting method into the multistage stochastic programming framework, and compare it with the cases where a traditional forecasting method is employed to model the uncertainty. We assess the impact of more accurate forecasts on the quality of two commonly used look-ahead policies, a deterministic one and a two-stage one, in a rolling-horizon framework on a practical problem. Our results illustrate that more accurate forecasts contribute substantially to the model performance, and enable obtaining high-quality solutions even from computationally cheap heuristics. They also show that the probabilistic forecasting capabilities of deep learning-based methods can be especially beneficial when used as a (conditional) sampling tool for scenario-based models, and to predict the worst-case scenario for risk-averse models. | mathematics |
The outbreak and proliferation of the COVID-19 pandemic, together with the subsequent social distancing measures, have raised massive challenges in almost all domains of public and private life around the globe. The stay-at-home movement has pushed news audiences into social networks, which, in turn, have become the most prolific field for receiving and sharing news updates, as well as for public expression of opinions, concerns and feelings about the pandemic. Public opinion is a critical aspect in analysing how information and events impact people's lives, and research has shown that social media data may be promising for understanding how people respond to health risks and social crises, which feelings they tend to share, and how they are adapting to unforeseen circumstances that threaten almost all societal spheres. This paper presents results from a social media analysis of 61532 news headlines posted on Facebook by the major daily news outlet in Portugal, Sic Noticias, from January to December 2020, focusing on the issue-attention cycle and the audience's emotional response to the COVID news outburst. This work adds to the emergent body of studies examining public response to the coronavirus pandemic using social media data. | computer science |
Radio pulsar signals are significantly perturbed by their propagation through the ionized interstellar medium. In addition to the frequency-dependent pulse times of arrival due to dispersion, pulse shapes are also distorted and shifted, having been scattered by the inhomogeneous interstellar plasma, affecting pulse arrival times. Understanding the degree to which scattering affects pulsar timing is important for gravitational wave detection with pulsar timing arrays (PTAs), which depend on the reliability of pulsars as stable clocks with an uncertainty of ~100ns or less over ~10yr or more. Scattering can be described as a convolution of the intrinsic pulse shape with an impulse response function representing the effects of multipath propagation. In previous studies, the technique of cyclic spectroscopy has been applied to pulsar signals to deconvolve the effects of scattering from the original emitted signals. We present an analysis of simulated data to test the quality of deconvolution using cyclic spectroscopy over a range of parameters characterizing interstellar scattering and pulsar signal-to-noise ratio. We show that cyclic spectroscopy is most effective for high-S/N and/or highly scattered pulsars. We conclude that cyclic spectroscopy could play an important role in scattering correction to distant populations of highly scattered pulsars not currently included in PTAs. For future telescopes and for current instruments such as the Green Bank Telescope upgraded with the ultrawide bandwidth (UWB) receiver, cyclic spectroscopy could potentially double the number of PTA-quality pulsars. | astrophysics |
Without any mechanism to protect its mass, the self-energy of the Higgs boson diverges quadratically, leading to the hierarchy or fine-tuning problem. One bottom-up solution is to postulate some yet-to-be-discovered symmetry which forces the sum of the quadratic divergences to be zero, or almost negligible; this is known as the Veltman condition. Even if one assumes the existence of some new physics at a high scale, the fine-tuning problem is not eradicated, although it is softer than what it would have been with a Planck scale momentum cut-off. We study such divergences in an effective theory framework, and construct the Veltman condition with dimension-six operators. We show that there are two classes of diagrams, the one-loop and the two-loop ones, that contribute to quadratic divergences, but the contribution of the latter is suppressed by a loop factor of $1/16\pi^2$. There are only six dimension-six operators that contribute to the one-loop category, and the Wilson coefficients of these operators play an important role towards softening the fine-tuning problem. We find the parameter space for the Wilson coefficients that satisfies the extended Veltman condition, and also discuss why one need not bother about the $d>6$ operators. The parameter space is consistent with the theoretical and experimental bounds of the Wilson coefficients, and should act as a guide to the model builders. | high energy physics phenomenology |
We study a space-based gravity gradiometer based on cold atom interferometry and its potential for the Earth's gravitational field mapping. The instrument architecture has been proposed in [Carraz et al., Microgravity Science and Technology 26, 139 (2014)] and enables high-sensitivity measurements of gravity gradients by using atom interferometers in a differential accelerometer configuration. We present the design of the instrument including its subsystems and analyze the mission scenario, for which we derive the expected instrument performances, the requirements on the sensor and its key subsystems, and the expected impact on the recovery of the Earth gravity field. | physics |
In the absence of a concrete discovery of new physics at the LHC, global analyses of the standard model effective field theory (SMEFT) are important to find and describe the impact of new physics beyond the energy reach of the LHC. Among the SMEFT operators that can be constrained via various measurements, the dimension six triple gluon operator involves neither the Higgs boson nor the top quark, yet its variation can have measurable effects on top and Higgs production. Without independent constraints on its impact, the sensitivity of measurements in the top and Higgs sectors to new physics is reduced. We show that the dijet angular distribution is a powerful observable for probing the triple gluon operator. We set the most stringent limit on the triple gluon effective coupling by reinterpreting the results of a search for new phenomena in dijet events using 35.9 fb$^{-1}$ of pp collision data collected at $\sqrt{s}$ = 13 TeV performed by the CMS collaboration. The obtained limit on the strength of the triple gluon operator is far below the sensitivity of that which can be derived from top quark and Higgs measurements and thus this operator can be neglected in global SMEFT analyses. | high energy physics phenomenology |
Quantum key distribution (QKD) offers a reliable solution to communication problems that require long-term data security. For its widespread use, however, the rate and reach of QKD systems must be improved. Twin-field (TF) QKD is a step forward toward this direction, with early demonstrations suggesting it can beat the current rate-versus-distance records. A recently introduced variant of TF-QKD is particularly suited for experimental implementation, and has been shown to offer a higher key rate than other variants in the asymptotic regime where users exchange an infinite number of signals. Here, we extend the security of this protocol to the finite-key regime, showing that it can overcome the fundamental bounds on point-to-point QKD with around $10^{10}$ transmitted signals. Within distance regimes of interest, our analysis offers higher key rates than those of alternative variants. Moreover, some of the techniques we develop are applicable to the finite-key analysis of other QKD protocols. | quantum physics |
We analyze the relationship between qubit-environment entanglement that can be created during the pure dephasing of the qubit and the effectiveness of the spin echo protocol. We focus here on mixed states of the environment. We show that while the echo protocol can obviously counteract classical environmental noise, it can also undo dephasing associated with qubit-environment entanglement, and there is no obvious difference in its efficiency in these two cases. Additionally, we show that qubit-environment entanglement can be generated at the end of the echo protocol even when it is absent at the time of application of the local operation on the qubit (the $\pi$ pulse). We prove that this can occur only at isolated points in time, after fine-tuning of the echo protocol duration. Finally, we discuss the conditions under which the observation of specific features of the echo signal can serve as a witness of the entangling nature of the joint qubit-environment evolution. | quantum physics |
In this work we present the design of a new test geometry inspired by the Tapered Double Cantilever Beam (TDCB) specimen that is shown to provide an improved characterization of the fracture properties of brittle solids. First, we show that our new design results in an exponential increase of the specimen compliance with crack length, leading to an extremely stable crack growth during the test. We determine an analytical description of this behavior, which provides a simple procedure to extract the fracture energy without depending on finite element calculations. Validation tests are done on polymethylmethacrylate (PMMA) specimens. We use both finite element simulations and our analytical model to interpret the data. We find a very good agreement between the toughness determined by both methods. The stable nature of crack growth in our improved TDCB specimens results in a precise control of the crack speed. This feature is employed to go one step further and characterize the variations of toughness with crack speed. We propose an original optimization procedure for the determination of the material parameters characterizing the kinetic law describing the toughness rate dependency. Overall, the approach proposed together with the newly designed test geometry offer unprecedented possibilities for the full and accurate characterization of the fracture behavior of brittle materials such as rocks, sandstone, mortar etc. | condensed matter |
Deep underground environments are ideal for low background searches due to the attenuation of cosmic rays by passage through the earth. However, they are affected by backgrounds from $\gamma$-rays emitted by $^{40}$K and the $^{238}$U and $^{232}$Th decay chains in the surrounding rock. The LUX-ZEPLIN (LZ) experiment will search for dark matter particle interactions with a liquid xenon TPC located within the Davis campus at the Sanford Underground Research Facility, Lead, South Dakota, at the 4,850-foot level. In order to characterise the cavern background, in-situ $\gamma$-ray measurements were taken with a sodium iodide detector in various locations and with lead shielding. The integral count rates (0--3300~keV) varied from 596~Hz to 1355~Hz for unshielded measurements, corresponding to a total flux in the cavern of $1.9\pm0.4$~$\gamma~$cm$^{-2}$s$^{-1}$. The resulting activity in the walls of the cavern can be characterised as $220\pm60$~Bq/kg of $^{40}$K, $29\pm15$~Bq/kg of $^{238}$U, and $13\pm3$~Bq/kg of $^{232}$Th. | physics |
We propose a noncommutative (NC) version of a global O(2) scalar field theory, whose damping feature is introduced into the scalar field theory through the NC parameter. In this context, we investigate how noncommutativity drives the spontaneous symmetry breaking (SSB) and Higgs-Kibble mechanisms and how the damping feature works out. Indeed, we show that the noncommutativity plays an important role in such mechanisms, i.e., the Higgs mass and VEV depend on the NC parameter. After that, we explore the consequences of the noncommutativity dependence of the Higgs mass and VEV: for the first, it is shown that there are mass-degenerate Higgs bosons near 126.5 GeV, parametrized by the noncommutativity; for the second, the gauge fields gain masses that carry a noncommutativity contribution. | high energy physics phenomenology |
We calculate analytically and numerically the axial orbital and spin torques of guided light on a two-level atom near an optical nanofiber. We show that the generation of these torques is governed by the angular momentum conservation law in the Minkowski formulation. The orbital torque on the atom near the fiber has a contribution from the average recoil of spontaneously emitted photons. Photon angular momentum and atomic spin angular momentum can be converted into atomic orbital angular momentum. The orbital and spin angular momenta of the guided field are not transferred separately to the orbital and spin angular momenta of the atom. | quantum physics |
We present an experimental realization of a measurement-based adaptation protocol with quantum reinforcement learning in a Rigetti cloud quantum computer. The experiment in this few-qubit superconducting chip faithfully reproduces the theoretical proposal, setting the first steps towards a semiautonomous quantum agent. This experiment paves the way towards quantum reinforcement learning with superconducting circuits. | quantum physics |
Percutaneous coronary intervention (PCI) is typically performed with image guidance using X-ray angiograms in which coronary arteries are opacified with X-ray opaque contrast agents. Interventional cardiologists typically navigate instruments using non-contrast-enhanced fluoroscopic images, since higher use of contrast agents increases the risk of kidney failure. When using fluoroscopic images, the interventional cardiologist needs to rely on a mental anatomical reconstruction. This paper reports on the development of a novel dynamic coronary roadmapping approach for improving visual feedback and reducing contrast use during PCI. The approach compensates cardiac and respiratory induced vessel motion by ECG alignment and catheter tip tracking in X-ray fluoroscopy, respectively. In particular, for accurate and robust tracking of the catheter tip, we proposed a new deep learning based Bayesian filtering method that integrates the detection outcome of a convolutional neural network and the motion estimation between frames using a particle filtering framework. The proposed roadmapping and tracking approaches were validated on clinical X-ray images, achieving accurate performance on both catheter tip tracking and dynamic coronary roadmapping experiments. In addition, our approach runs in real-time on a computer with a single GPU and has the potential to be integrated into the clinical workflow of PCI procedures, providing cardiologists with visual guidance during interventions without the need of extra use of contrast agent. | electrical engineering and systems science |
The thermonuclear explosion of massive white dwarfs is believed to explain at least a fraction of Type Ia supernovae (SNIa). After thermal runaway, electron captures on the ashes left behind by the burning front determine a loss of pressure, which impacts the dynamics of the explosion and the neutron excess of matter. Indeed, overproduction of neutron-rich species such as $^{54}$Cr has been deemed a problem of Chandrasekhar-mass models of SNIa for a long time. I present the results of a sensitivity study of SNIa models to the rates of weak interactions, which have been incorporated directly into the hydrodynamic explosion code. The weak rates have been scaled up/down by a factor ten, either globally for a common bibliographical source, or individually for selected isotopes. In line with previous works, the impact of weak rates uncertainties on sub-Chandrasekhar models of SNIa is almost negligible. The impact on the dynamics of Chandrasekhar-mass models and on the yield of $^{56}$Ni is also scarce. The strongest effect is found on the nucleosynthesis of neutron-rich nuclei, such as $^{48}$Ca, $^{54}$Cr, $^{58}$Fe, and $^{64}$Ni. The species with the highest influence on nucleosynthesis do not coincide with the isotopes that contribute most to the neutronization of matter. Among the last ones, there are protons, $^{54,55}$Fe, $^{55}$Co, and $^{56}$Ni, while the main influencers are $^{54,55}$Mn and $^{55-57}$Fe, in disagreement with Parikh et al (2013), who found that SNIa nucleosynthesis is most sensitive to the $\beta^+$-decay rates of $^{28}$Si, $^{32}$S, and $^{36}$Ar. An eventual increase in all weak rates on pf-shell nuclei would affect the dynamical evolution of hot bubbles, running away at the beginning of the explosion, and the yields of SNIa. | astrophysics |
In this paper, we propose a new end-to-end methodology to optimize the energy performance as well as comfort and air quality in large buildings without any renovation work. We introduce a metamodel based on recurrent neural networks and trained to predict the behavior of a general class of buildings using a database sampled from a simulation program. This metamodel is then deployed in different frameworks and its parameters are calibrated using the specific data of two real buildings. Parameters are estimated by comparing the predictions of the metamodel with real data obtained from sensors using the CMA-ES algorithm, a derivative free optimization procedure. Then, energy consumptions are optimized while maintaining a target thermal comfort and air quality, using the NSGA-II multi-objective optimization procedure. The numerical experiments illustrate how this metamodel ensures a significant gain in energy efficiency, up to almost 10%, while being computationally much more appealing than numerical models and flexible enough to be adapted to several types of buildings. | electrical engineering and systems science |
In this paper, we consider a class of nonlinear regression problems without the assumption that the data are independent and identically distributed. We propose a corresponding mini-max problem for nonlinear regression and give a numerical algorithm. Such an algorithm can be applied to regression and machine learning problems, and yields better results than traditional least squares and machine learning methods. | statistics |
Modeling sequential data has become more and more important in practice. Some applications are autonomous driving, virtual sensors and weather forecasting. To model such systems, so-called recurrent models are frequently used. In this paper we introduce several new Deep recurrent Gaussian process (DRGP) models based on the Sparse Spectrum Gaussian process (SSGP) and the improved version, called variational Sparse Spectrum Gaussian process (VSSGP). We follow the recurrent structure given by an existing DRGP based on a specific variational sparse Nystr\"om approximation, the recurrent Gaussian process (RGP). Similar to previous work, we also variationally integrate out the input-space and hence can propagate uncertainty through the Gaussian process (GP) layers. Our approach can deal with a larger class of covariance functions than the RGP, because its spectral nature allows variational integration in all stationary cases. Furthermore, we combine the (variational) Sparse Spectrum ((V)SS) approximations with a well-known inducing-input regularization framework. We improve over current state-of-the-art methods in prediction accuracy for experimental data-sets used for their evaluation and introduce a new data-set for engine control, named Emission. | statistics |
A novel special class of active periodic metamaterials is designed, suitable for achieving high-performance tunable acoustic filters. The metamaterial is made up of a phononic crystal coupled to local resonators. Such local resonators consist of masses enclosed into piezoelectric rings, shunted by a non-dissipative electrical circuit. The use of passive bipoles, with variable impedance/admittance, makes it possible to fully tune the constitutive properties of the shunting piezoelectric material. This feature paves the way for unconventional behaviours, well beyond the capabilities achievable with classical materials. It follows that the acoustic properties of the periodic metamaterial can be actively modified, in turn opening new possibilities for the control of pass and stop bands. By exploiting a generalization of the Floquet-Bloch theory, the free wave propagation in the active metamaterial is investigated, by varying a tuning parameter, to show the efficiency of the proposed shunting piezoelectric system as a passive wave-propagation control device. Particular attention is devoted to the determination of the in-plane constitutive equations of the shunting piezoelectric phase in the transformed Laplace space. Finally, broad design directions for active acoustic filters that adapt to changing performance requirements in real time are also provided. | physics |
The Erd\H{o}s-Hajnal conjecture states that for every undirected graph $H$ there exists $ \epsilon(H) > 0 $ such that every undirected graph on $ n $ vertices that does not contain $H$ as an induced subgraph contains a clique or a stable set of size at least $ n^{\epsilon(H)} $. This conjecture has an equivalent directed version stating that for every tournament $H$ there exists $ \epsilon(H) > 0 $ such that every $H$-free $n$-vertex tournament $T$ contains a transitive subtournament of order at least $ n^{\epsilon(H)} $. This conjecture has been proved when $H$ is a galaxy or a constellation, for all five-vertex tournaments, and for all six-vertex tournaments except one. In this paper we prove the correctness of the conjecture for any flotilla-galaxy tournament. This generalizes some previous results. | mathematics |
Cosmic backreaction as an additional source of the expansion of the universe has been a debate topic since the discovery of cosmic acceleration. The major concern is whether the self interaction of small-scale nonlinear structures would source gravity on very large scales. Gregory Ryskin argued against the additional inclusion of gravitational interaction energy of astronomical objects, whose masses are mostly inferred from gravitational effects and hence should already contain all sources with long-range gravity forces. Ryskin proposed that the backreaction contribution to the energy momentum tensor is instead from the rest of the universe beyond the observable patch. Ryskin's model elegantly solves the fine-tuning problem and is in good agreement with the Hubble diagram of Type Ia supernovae. In this article we revisit Ryskin's model and show that it is {\it inconsistent} with at least one of the following statements: (i) the universe is matter-dominated at low redshift ($z\lesssim 2$); (ii) the universe is radiation-dominated at sufficiently high redshift; (iii) matter density fluctuations are tiny ($\lesssim 10^{-4}$) at the recombination epoch. | astrophysics |
There is much confusion in the literature over Hurst exponent (H). The purpose of this paper is to illustrate the difference between fractional Brownian motion (fBm) on the one hand and Gaussian Markov processes where H is different to 1/2 on the other. The difference lies in the increments, which are stationary and correlated in one case and nonstationary and uncorrelated in the other. The two- and one-point densities of fBm are constructed explicitly. The two-point density does not scale. The one-point density for a semi-infinite time interval is identical to that for a scaling Gaussian Markov process with H different to 1/2 over a finite time interval. We conclude that both Hurst exponents and one-point densities are inadequate for deducing the underlying dynamics from empirical data. We apply these conclusions in the end to make a focused statement about nonlinear diffusion. | electrical engineering and systems science |
This paper considers a network where a node wishes to transmit a source message to a legitimate receiver in the presence of an eavesdropper. The transmitter secures its transmissions employing a sparse implementation of Random Linear Network Coding (RLNC). A tight approximation to the probability of the eavesdropper recovering the source message is provided. The proposed approximation applies to both the cases where transmissions occur without feedback or where the reliability of the feedback channel is impaired by an eavesdropper jamming the feedback channel. An optimization framework for minimizing the intercept probability by optimizing the sparsity of the RLNC is also presented. Results validate the proposed approximation and quantify the gain provided by our optimization over solutions where non-sparse RLNC is used. | computer science |
The usual Su-Schrieffer-Heeger model with an even number of lattice sites possesses two degenerate zero energy modes. The degeneracy of the zero energy modes leads to the mixing between the topological left and right edge states, which makes it difficult to implement the state transfer via a topological edge channel. Here, enlightened by the Rice-Mele topological pumping, we find that the staggered periodic next-nearest neighbor hoppings can also separate the initial mixed edge states, which ensures the state transfer between topological left and right edge states. Significantly, we construct a unique topological state transfer channel by introducing the staggered periodic on-site potentials and the periodic next-nearest neighbor hoppings added only on the odd sites simultaneously, and find that the state initially prepared at the last site can be transferred to the first two sites with the same probability distribution. This special topological state transfer channel is expected to realize a topological beam splitter, whose function is to make the initial photon at one position appear at two different positions with the same probability. Further, we demonstrate the feasibility of implementing the topological beam splitter based on the circuit quantum electrodynamic lattice. Our scheme opens up a new way for the realization of topological quantum information processing and provides a new path towards the engineering of a new type of quantum optical device. | quantum physics
Broadband X-ray spectroscopy of the X-ray emission produced in the coronae of active galactic nuclei (AGN) can provide important insights into the physical conditions very close to their central supermassive black holes. The temperature of the Comptonizing plasma that forms the corona is manifested through a high-energy cutoff that has been difficult to directly constrain even in the brightest AGN because it requires high-quality data at energies above 10 keV. In this paper we present a large collection of coronal cutoff constraints for obscured AGN based on a sample of 130 AGN selected in the hard X-ray band with Swift/BAT and observed nearly simultaneously with NuSTAR and Swift/XRT. We find that under a reasonable set of assumptions regarding partial constraints the median cutoff is well constrained to 290$\pm$20 keV, where the uncertainty is statistical and given at the 68% confidence level. We investigate the sensitivity of this result to our assumptions and find that consideration of various known systematic uncertainties robustly places the median cutoff between 240 keV and 340 keV. The central 68% of the intrinsic cutoff distribution is found to be between about 140 keV and 500 keV, with estimated uncertainties of 20 keV and 100 keV, respectively. In comparison with the literature, we find no clear evidence that the cutoffs in obscured and unobscured AGN are substantially different. Our analysis highlights the importance of carefully considering partial and potentially degenerate constraints on the coronal high-energy cutoff in AGN. | astrophysics |
We study the non-perturbative corrections generated by exotic instantons in U(N) gauge theories in eight and four dimensions. As was shown previously, the eight-dimensional prepotential can be resummed using a plethystic formula, showing a dependence only on the center of mass and on a U(1) gauge factor. On the contrary, chiral correlators in eight and four dimensions display a non-trivial dependence on the full gauge group. Furthermore, the resolvent, the generating function for the eight- and four-dimensional correlators, can be written in a compact form in both the eight- and four-dimensional cases. | high energy physics theory
Underactuated systems like sea vessels have degrees of motion that are insufficiently matched by a set of independent actuation forces. In addition, the underlying trajectory-tracking control problems grow in complexity in order to decide the optimal rudder and thrust control signals. This enforces several difficult-to-solve constraints that are associated with the error dynamical equations using classical optimal tracking and adaptive control approaches. An online machine learning mechanism based on integral reinforcement learning is proposed to find a solution for a class of nonlinear tracking problems with partial prior knowledge of the system dynamics. The actuation forces are decided using innovative forms of temporal difference equations relevant to the vessel's surge and angular velocities. The solution is implemented using an online value iteration process which is realized by employing adaptive critics and gradient descent approaches. The adaptive learning mechanism exhibited well-functioning and interactive features in reacting to different desired reference-tracking scenarios. | electrical engineering and systems science
In this paper we study the adaptive learnability of decision trees of depth at most $d$ from membership queries. This has many applications in automated scientific discovery, such as drug development and the software update problem. Feldman solves the problem with a randomized polynomial time algorithm that asks $\tilde O(2^{2d})\log n$ queries, and Kushilevitz-Mansour with a deterministic polynomial time algorithm that asks $2^{18d+o(d)}\log n$ queries. We improve the query complexity of both algorithms. We give a randomized polynomial time algorithm that asks $\tilde O(2^{2d}) + 2^{d}\log n$ queries and a deterministic polynomial time algorithm that asks $2^{5.83d}+2^{2d+o(d)}\log n$ queries. | computer science
This paper presents a generalized robust stability analysis for bearing-based formation control and network localization systems. For an undirected network, we provide a robust stability analysis in the presence of time-varying exogenous disturbances in arbitrary dimensional space. In addition, we compute the explicit upper-bound set of the bearing formation and network localization errors, which provides valuable information for a system design. | electrical engineering and systems science |
Using the Born-Oppenheimer approximation, we show that exotic resonances, X and Z, may emerge as QCD molecular objects made of colored two-quark lumps, states with heavy-light diquarks spatially separated from antidiquarks. With the same method we confirm that doubly heavy tetraquarks are stable against strong decays. Tetraquarks described here provide a new picture of exotic hadrons, as formed by the QCD analog of the hydrogen bond of molecular physics. | high energy physics phenomenology |
The $\gamma$-ray observation of interstellar gas provides a unique way to probe the cosmic rays (CRs) outside the solar system. In this work, we use an updated version of Fermi-LAT data and recent multi-wavelength tracers of interstellar gas to re-analyze a mid-latitude region in the third Galactic quadrant and estimate the local CR proton spectrum. Two $\gamma$-ray production cross section models for $pp$ interaction, the commonly used one from Kamae et al. (2006) and the up-to-date one from Kafexhiu et al. (2014), are adopted separately in the analysis. Both of them can well fit the emissivity and the derived proton spectra roughly resemble the direct measurements from AMS-02 and Voyager 1, but rather different spectral parameters are indicated. A break at $4\pm1~{\rm GeV}\;c^{-1}$ is shown if the cross section model by Kamae et al. (2006) is adopted. The resulting spectrum is $\lesssim 20\%$ larger than the AMS-02 observation above $15~\rm GeV$ and consistent with the de-modulated spectrum within $2\%$. The proton spectrum based on the cross section model of Kafexhiu et al. (2014) is about $1.4-1.8$ times that of AMS-02 at $2-100~\rm GeV$, however the difference decreases to $20\%$ below $10~\rm GeV$ with respect to the de-modulated spectrum. A spectral break at $20\pm11~{\rm GeV}\;c^{-1}$ is required in this model. An extrapolation down to $300~\rm MeV$ is performed to compare with the observation of Voyager 1, and we find a deviation of $\lesssim 2.5\sigma$ for both the models. In general, an approximately consistent CR spectrum can be obtained using $\gamma$-ray observation nowadays, but we still need a better $\gamma$-ray production cross section model to derive the parameters accurately. | astrophysics |
One of the important and widely used classes of models for non-Gaussian time series is the generalized autoregressive moving average (GARMA) model, which specifies an ARMA structure for the conditional mean process of the underlying time series. However, in many applications one often encounters conditional heteroskedasticity. In this paper we propose a new class of models, referred to as GARMA-GARCH models, that jointly specify both the conditional mean and conditional variance processes of a general non-Gaussian time series. Under the general modeling framework, we propose three specific models, as examples, for proportional time series, nonnegative time series, and skewed and heavy-tailed financial time series. Maximum likelihood estimator (MLE) and quasi Gaussian MLE (GMLE) are used to estimate the parameters. Simulation studies and three applications are used to demonstrate the properties of the models and the estimation procedures. | statistics
Transfer learning and joint learning approaches are extensively used to improve the performance of Convolutional Neural Networks (CNNs). In medical imaging applications in which the target dataset is typically very small, transfer learning improves feature learning while joint learning has shown effectiveness in improving the network's generalization and robustness. In this work, we study the combination of these two approaches for the problem of liver lesion segmentation and classification. For this purpose, 332 abdominal CT slices containing lesion segmentation and classification of three lesion types are evaluated. For feature learning, the dataset of MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge is used. Joint learning shows improvement in both segmentation and classification results. We show that a simple joint framework outperforms the commonly used multi-task architecture (Y-Net), achieving an improvement of 10% in classification accuracy, compared to a 3% improvement with Y-Net. | electrical engineering and systems science |
We present the analysis of XMM-Newton observations of two X-ray luminous cool core clusters, RXCJ1504.1-0248 and Abell 1664. The Reflection Grating Spectrometer reveals a radiative cooling rate of $180\pm 40\, \rm M_{\odot}\rm\,yr^{-1}$ and $34\pm 6\, \rm M_{\odot}\rm\,yr^{-1}$ in RXCJ1504.1-0248 and Abell 1664 for gas above 0.7 keV, respectively. These cooling rates are higher than the star formation rates observed in the clusters, and support simultaneous star formation and molecular gas mass growth on a timescale of 3$\times 10^8$ yr or longer. At these rates, the energy of the X-ray cooling gas is inadequate to power the observed UV/optical line-emitting nebulae, which suggests additional strong heating. No significant residual cooling is detected below 0.7 keV in RXCJ1504.1-0248. By simultaneously fitting the first and second order spectra, we place an upper limit on turbulent velocity of 300 km$\rm s^{-1}$ at 90 per cent confidence level for the soft X-ray emitting gas in both clusters. The turbulent energy density is considered to be less than 8.9 and 27 per cent of the thermal energy density in RXCJ1504.1-0248 and Abell 1664, respectively. This means it is insufficient for AGN heating to fully propagate throughout the cool core via turbulence. We find the cool X-ray component of Abell 1664 ($\sim$0.8 keV) is blueshifted from the systemic velocity by 750$^{+800}_{-280}$ km$\rm s^{-1}$. This is consistent with one component of the molecular gas in the core and suggests a similar dynamical structure for the two phases. We find that an intrinsic absorption model allows the cooling rate to increase to $520\pm 30\, \rm M_{\odot}\rm\,yr^{-1}$ in RXCJ1504.1-0248. | astrophysics |
The low-mass star GJ 1151 has been reported to display variable low-frequency radio emission, which has been interpreted as a signpost of coronal star-planet interactions with an unseen exoplanet. Here we report the first X-ray detection of GJ 1151's corona based on XMM-Newton data. We find that the star displays a small flare during the X-ray observation. Averaged over the observation, we detect the star with a low coronal temperature of 1.6~MK and an X-ray luminosity of $L_X = 5.5\times 10^{26}$\,erg/s. During the quiescent time periods excluding the flare, the star remains undetected with an upper limit of $L_{X,\,qui} \leq 3.7\times 10^{26}$\,erg/s. This is compatible with the coronal assumptions used in a recently published model for a star-planet interaction origin of the observed radio signals from this star. | astrophysics |
The dilepton production in diffractive and exclusive processes at forward rapidities considering ultraperipheral $PbPb$ collisions at the LHC is investigated. Predictions for the $e^+ e^-$, $\mu^+ \mu^-$ and $\tau^+ \tau^-$ cross sections are presented taking into account realistic cuts that can be implemented by the LHCb Collaboration in a future experimental analysis. Our results indicate that the background associated with the diffractive production can be strongly suppressed and the exclusive processes can be cleanly separated. For the $\tau^+ \tau^-$ production, the semi- and purely leptonic decay channels are considered. Our results indicate that a future experimental analysis of the dilepton production at the LHCb is feasible and can be useful to search for BSM physics. | high energy physics phenomenology
Analysis of late O-type stars observed in the Large Magellanic Cloud (LMC) by the VLT-FLAMES Tarantula Survey (VFTS) revealed a discrepancy between the physical properties estimated from model-atmosphere analysis and those expected from their morphological classifications. Here we revisit the analysis of 32 of these puzzling objects using new hydrogen-helium-silicon FASTWIND models and a different fitting approach to re-evaluate their physical properties. Our new analysis confirms that these stars indeed have properties that are typical of late O-type dwarfs. We also present the first estimates of silicon abundances for O-type stars in the 30 Dor clusters NGC 2060 and NGC 2070, with a weighted mean abundance for our sample of 7.05 +/- 0.03. Our values are about 0.20 dex lower than those previously derived for B-type stars in the LMC clusters N 11 and NGC 2004 using TLUSTY models. Various possibilities (e.g. differences in the analysis methods, effects of microturbulence, and real differences between stars in different clusters) were considered to account for these results. We also used our grid of FASTWIND models to reassess the impact of using the Galactic classification criteria for late O-type stars in the LMC by scrutinising their sensitivity to different stellar properties. At the cool edge of the O star regime the HeII 4686/HeI 4713 ratio used to assign luminosity class for Galactic stars can mimic giants or bright giants in the LMC, even for objects with high gravities (log_g > 4.0 dex). We argue that this line ratio is not a reliable luminosity diagnostic for late O-type stars in the LMC, and that the SiIV 4989/HeI4026 ratio is more robust for these types. | astrophysics |
We study the parity-odd sector of 3-point functions comprising scalar operators and conserved currents in conformal field theories in momentum space. We use momentum space conformal Ward identities as well as spin-raising and weight-shifting operators to fix the form of these correlators. We discuss in detail the regularisation of divergences and their renormalisation using specific counter-terms. | high energy physics theory
Constraining quantum gravity from observations is a challenge. We expand on the idea that the interplay of quantum gravity with matter could be key to meeting this challenge. Thus, we set out to confront different potential candidates for quantum gravity -- unimodular asymptotic safety, Weyl-squared gravity and asymptotically safe gravity -- with constraints arising from demanding an ultraviolet complete Standard Model. Specifically, we show that within approximations, demanding that quantum gravity solves the Landau-pole problems in Abelian gauge couplings and Yukawa couplings strongly constrains the viable gravitational parameter space. In the case of Weyl-squared gravity with a dimensionless gravitational coupling, we also investigate whether the gravitational contribution to beta functions in the matter sector calculated from functional Renormalization Group techniques is universal, by studying the dependence on the regulator, metric field parameterization and choice of gauge. | high energy physics theory |
Several bandwise total variation (TV) regularized low-rank (LR)-based models have been proposed to remove mixed noise in hyperspectral images (HSIs). Conventionally, the rank of the LR matrix is approximated using the nuclear norm (NN). The NN is defined by adding all singular values together, which is essentially an $L_1$-norm of the singular values. It results in non-negligible approximation errors and thus the resulting matrix estimator can be significantly biased. Moreover, these bandwise TV-based methods exploit the spatial information in a separate manner. To cope with these problems, we propose a spatial-spectral TV (SSTV) regularized non-convex local LR matrix approximation (NonLLRTV) method to remove mixed noise in HSIs. From one aspect, the local LR of HSIs is formulated using a non-convex $L_{\gamma}$-norm, which provides a closer approximation to the matrix rank than the traditional NN. From another aspect, HSIs are assumed to be piecewise smooth in the global spatial domain. The TV regularization is effective in preserving the smoothness and removing Gaussian noise. These facts inspire the integration of the NonLLR with TV regularization. To address the limitations of bandwise TV, we use the SSTV regularization to simultaneously consider the global spatial structure and the spectral correlation of neighboring bands. Experimental results indicate that the use of the local non-convex penalty and global SSTV can boost the preservation of spatial piecewise smoothness and overall structural information. | computer science
We extend the classical Euler-Maclaurin expansion to sums over multidimensional lattices that involve functions with algebraic singularities. This offers a tool for the precise quantification of the effect of microscopic discreteness on macroscopic properties of a system. First, the Euler-Maclaurin summation formula is generalised to lattices in higher dimensions, assuming a sufficiently regular summand function. We then develop this new expansion further and construct the singular Euler-Maclaurin (SEM) expansion in higher dimensions, an extension of our previous work in one dimension, which remains applicable and useful even if the summand function includes a singular function factor. We connect our method to analytical number theory and show that all operator coefficients can be efficiently computed from derivatives of the Epstein zeta function. Finally we demonstrate the numerical performance of the expansion and efficiently compute singular lattice sums in infinite two-dimensional lattices, which are of high relevance in solid state and quantum physics. An implementation in Mathematica is provided online along with this article. | mathematics |
Most of the visible matter in the Universe is ionized, so that cosmic magnetic fields are quite easy to generate and, due to the lack of magnetic monopoles, hard to destroy. Magnetic fields have been measured in or around practically all celestial objects, either by in-situ measurements from spacecraft or by the electromagnetic radiation of embedded cosmic rays, gas, or dust. The Earth, the Sun, solar planets, stars, pulsars, the Milky Way, nearby galaxies, more distant (radio) galaxies, quasars, and even intergalactic space in clusters of galaxies have significant magnetic fields, and even larger volumes of the Universe may be permeated by 'dark' magnetic fields. Information on cosmic magnetic fields has increased enormously as the result of the rapid development of observational methods, especially in radio astronomy. In the Milky Way, a wealth of magnetic phenomena has been discovered that are only partly related to objects visible in other spectral ranges. The large-scale structure of the Milky Way's magnetic field is still under debate. The available data for external galaxies can well be explained by field amplification and ordering via the dynamo mechanism. The measured field strengths and the similarity of field patterns and flow patterns of the diffuse ionized gas give a strong indication that galactic magnetic fields are dynamically important. They may affect the formation of spiral arms, outflows, and the general evolution of galaxies. In spite of our increasing knowledge of magnetic fields, many important questions on the origin and evolution of magnetic fields, like their first occurrence in young galaxies, or the existence of large-scale intergalactic fields, remain unanswered. 'Cosmic magnetism' is a key science project for several existing and planned radio telescopes, like LOFAR and the SKA. | astrophysics
In this paper we propose a hybrid quantum-classical algorithm for dynamic portfolio optimization with minimal holding period. Our algorithm is based on sampling the near-optimal portfolios at each trading step using a quantum processor, and efficiently post-selecting to meet the minimal holding constraint. We found the optimal investment trajectory in a dataset of 50 assets spanning a one year trading period using the D-Wave 2000Q processor. Our method is remarkably efficient, and produces results much closer to the efficient frontier than typical portfolios. Moreover, we also show how our approach can easily produce trajectories adapted to different risk profiles, as typically offered in financial products. Our results are a clear example of how the combination of quantum and classical techniques can offer novel valuable tools to deal with real-life problems, beyond simple toy models, in current NISQ quantum processors. | quantum physics |
We explore the physics of a new neutral gauge boson, ($Z^\prime$), coupling to only third-generation particles with a mass near the electroweak gauge boson mass poles. A $Z^\prime$ boson produced by top quarks and decaying to tau leptons is considered. With a simple search strategy inspired by existing analyses of the standard model gauge boson production in association with top quarks, we show that the Large Hadron Collider has good exclusionary power over the model parameter space of the $Z^\prime$ boson even at the advent of the high-luminosity era. It is shown that the $t\bar{t}Z^\prime$ process allows one to place limits on right-handed top couplings with a $Z^\prime$ boson that preferentially couples to third generation fermions, which are at present very weakly constrained. | high energy physics phenomenology |
Protoplanetary disks are thought to evolve viscously, where the disk mass - the reservoir available for planet formation - decreases over time as material is accreted onto the central star. Observations show a correlation between dust mass and the stellar accretion rate, as expected from viscous theory. However, the gas mass inferred from 13CO and C18O line fluxes, which should be a more direct measure, shows no such correlation. Using thermochemical DALI models, we investigate how 13CO and C18O J=3-2 line fluxes change over time in a viscously evolving disk. We also investigate if the chemical conversion of CO through grain-surface chemistry combined with viscous evolution can explain the observations of disks in Lupus. The 13CO and C18O 3-2 line fluxes increase over time due to their optically thick emitting regions growing in size as the disk expands viscously. The C18O 3-2 emission is optically thin throughout the disk for only a subset of our models (Mdisk (t = 1 Myr) < 1e-3 Msun). For these disks the integrated C18O flux decreases with time, similar to the disk mass. The C18O 3-2 fluxes for the bulk of the disks in Lupus (with Mdust < 5e-5 Msun) can be reproduced to within a factor of ~2 with viscously evolving disks in which CO is converted into other species through grain-surface chemistry driven by a cosmic-ray ionization rate zeta_cr ~ 5e-17 - 1e-16 s^-1. However, explaining the stacked C18O upper limits requires a lower average abundance than our models can produce and they cannot explain the observed 13CO fluxes, which, for most disks, are more than an order of magnitude fainter than what our models predict. Reconciling the 13CO fluxes of viscously evolving disks with the observations requires either a combination of efficient vertical mixing and a high zeta_cr or low mass disks (Mdust < 3e-5 Msun) being much thinner and/or smaller than their more massive counterparts. | astrophysics |
Page placement is a critical problem for memory-intensive applications running on a shared-memory multiprocessor with a non-uniform memory access (NUMA) architecture. State-of-the-art page placement mechanisms interleave pages evenly across NUMA nodes. However, this approach fails to maximize memory throughput in modern NUMA systems, characterised by asymmetric bandwidths and latencies, and sensitive to memory contention and interconnect congestion phenomena. We propose BWAP, a novel page placement mechanism based on asymmetric weighted page interleaving. BWAP combines an analytical performance model of the target NUMA system with on-line iterative tuning of page distribution for a given memory-intensive application. Our experimental evaluation with representative memory-intensive workloads shows that BWAP performs up to 66% better than state-of-the-art techniques. These gains are particularly relevant when multiple co-located applications run in disjoint partitions of a large NUMA machine or when applications do not scale up to the total number of cores. | computer science
This paper proposes a speech emotion recognition method based on speech features and speech transcriptions (text). Speech features such as Spectrogram and Mel-frequency Cepstral Coefficients (MFCC) help retain emotion-related low-level characteristics in speech whereas text helps capture semantic meaning, both of which help in different aspects of emotion detection. We experimented with several Deep Neural Network (DNN) architectures, which take in different combinations of speech features and text as inputs. The proposed network architectures achieve higher accuracies when compared to state-of-the-art methods on a benchmark dataset. The combined MFCC-Text Convolutional Neural Network (CNN) model proved to be the most accurate in recognizing emotions in IEMOCAP data. | electrical engineering and systems science |
Dust is a crucial component of the interstellar medium of galaxies. The presence of dust strongly affects the light produced by stars within a galaxy. As these photons are our main information vector to explore the stellar mass assembly and therefore understand a galaxy's evolution, modeling the luminous properties of galaxies and taking into account the impact of the dust is a fundamental challenge for semi-analytical models. We present the complete prescription of dust attenuation implemented in the new semi-analytical model: G.A.S. This model is based on a two-phase medium originating from a physically motivated turbulent model of gas structuring (G.A.S. I paper). Dust impact is treated by taking into account three dust components: Polycyclic Aromatic Hydrocarbons, Very Small Grains, and Big Grains. All three components evolve in both a diffuse and a fragmented/dense gas phase. Each phase has its own stars, dust content and geometry. Dust content evolves according to the metallicity of its associated phase. The G.A.S. model is used to predict both the UV and the IR luminosity functions from $z=9.0$ to $z=0.1$. Our two-phase ISM prescription captures the evolution of the UV and IR luminosity functions very well. We note a small overproduction of the IR luminosity at low redshift ($z<0.5$). We also focus on the Infrared-Excess (IRX) and explore its dependence on the stellar mass, UV slope, stellar age, metallicity and slope of the attenuation curves. Our model predicts large scatters for relations based on IRX, especially for the IRX-$\beta$ relation. Our analysis reveals that the slope of the attenuation curve is more driven by absolute attenuation in the FUV band than by disk inclination. We confirm that the age of the stellar population and the slope of the attenuation curve can both shift galaxies below the fiducial star-birth relation in the IRX-$\beta$ diagram. | astrophysics
We report on the possibility that the Dark Matter particle is a stable, neutral, as-yet-undiscovered hadron in the standard model. The existence of a compact color-flavor-spin singlet sexaquark (S, uuddss) with mass ~2m_p, is compatible with current knowledge. The S interacts with baryons primarily via a Yukawa interaction of coupling strength alpha_SN, mediated by omega and phi vector mesons having mass ~1 GeV. If it exists, the S is a very attractive DM candidate. The relic abundance of S Dark Matter (SDM) is established when the Universe transitions from the quark-gluon plasma to the hadronic phase at ~150 MeV and is in remarkable agreement with the observed Omega_DM/Omega_b = 5.3+-0.1; this is a no-free-parameters result because the relevant parameters are known from QCD. Survival of this relic abundance to low temperature requires the breakup amplitude gtilde <~ 2 10^-6, comfortably compatible with theory expectations and observational bounds because the breakup amplitude is dynamically suppressed and many orders of magnitude smaller, as we show. The scattering cross section can differ by orders of magnitude from Born approximation, depending on alpha_SN, requiring reanalysis of observational limits. We use direct detection experiments and cosmological constraints to determine the allowed region of alpha_SN. For a range of allowed values, we predict exotic nuclear isotopes at a detectable level with mass offset ~2 amu. The most promising approaches for detecting the sexaquark in accelerator experiments are to search for a long-interaction-length neutral particle component in the central region of relativistic heavy ion collisions or using a beam-dump setup, and to search for evidence of missing particle production characterized by unbalanced baryon number and strangeness using Belle-II or possibly GLUEX at J-Lab. | high energy physics phenomenology |
The main aim of this paper is to investigate the nature of the invariance of a rectifying curve under conformal transformations and to obtain a sufficient condition under which such a curve remains conformally invariant. It is shown that the normal component and the geodesic curvature of the rectifying curve are homothetic invariants. | mathematics
We consider the problem of approximating the partition function of a classical Hamiltonian using simulated annealing. This requires the computation of a cooling schedule, and the ability to estimate the mean of the Gibbs distributions at the corresponding inverse temperatures. We propose classical and quantum algorithms for these two tasks, achieving two goals: (i) we simplify the seminal work of \v{S}tefankovi\v{c}, Vempala and Vigoda (\emph{J.~ACM}, 56(3), 2009), improving their running time and almost matching that of the current classical state of the art; (ii) we quantize our new simple algorithm, improving upon the best known algorithm for computing partition functions of many problems, due to Harrow and Wei (SODA 2020). A key ingredient of our method is the paired-product estimator of Huber (\emph{Ann.\ Appl.\ Probab.}, 25(2),~2015). The proposed quantum algorithm has two advantages over the classical algorithm: it has quadratically faster dependence on the spectral gap of the Markov chains as well as the precision, and it computes a shorter cooling schedule, which matches the length conjectured to be optimal by \v{S}tefankovi\v{c}, Vempala and Vigoda. | quantum physics |
This paper is devoted to studying quantitative homogenization problems for nonlinear elliptic operators in perforated domains. We obtain a sharp error estimate $O(\varepsilon)$ when the problem is anchored in the reference domain $\varepsilon\omega$. For a bounded perforated domain, boundary layers have an adverse influence, which leads to a loss in the convergence rate of $O(\varepsilon^{1/2})$. Equipped with the error estimates, we develop both interior and boundary Lipschitz estimates at large scales. As an application, we obtain the so-called quenched Calder\'on-Zygmund estimates by Shen's real-variable arguments. To overcome some difficulties, we improve the extension theory from (\cite[Theorem 4.3]{OSY}) to $L^p$-versions with $\frac{2d}{d+1}-\epsilon<p<\frac{2d}{d-1}+\epsilon$ and $0<\epsilon\ll1$. Appealing to this, we establish Poincar\'e-Sobolev inequalities of local type on perforated domains. Some of the results in this paper are new even for related linear elliptic models. | mathematics
This document contains extended mathematical derivations for the communication- and model-free loss minimization algorithm. The algorithm is applied in distribution grids and exploits the capability of the inverters to control their reactive power output. | electrical engineering and systems science
In this paper we utilize a survival analysis methodology incorporating Bayesian additive regression trees to account for nonlinear and additive covariate effects. We compare the performance of Bayesian additive regression trees, Cox proportional hazards and random survival forests models for censored survival data, using simulation studies and survival analysis for breast cancer with the U.S. SEER database for the year 2005. In simulation studies, we compare the three models across varying sample sizes and censoring rates on the basis of bias and prediction accuracy. In survival analysis for breast cancer, we retrospectively analyze a subset of 1500 patients having invasive ductal carcinoma, a common form of breast cancer mostly affecting older women. The predictive potential of the three models is then compared using some widely used performance assessment measures in the survival literature. | statistics
In the absence of external rewards, agents can still learn useful behaviors by identifying and mastering a set of diverse skills within their environment. Existing skill learning methods use mutual information objectives to incentivize each skill to be diverse and distinguishable from the rest. However, if care is not taken to constrain the ways in which the skills are diverse, trivially diverse skill sets can arise. To ensure useful skill diversity, we propose a novel skill learning objective, Relative Variational Intrinsic Control (RVIC), which incentivizes learning skills that are distinguishable in how they change the agent's relationship to its environment. The resulting set of skills tiles the space of affordances available to the agent. We qualitatively analyze skill behaviors on multiple environments and show how RVIC skills are more useful than skills discovered by existing methods when used in hierarchical reinforcement learning. | computer science |
Given a random sample extracted from a Multivariate Bernoulli Variable (MBV), we consider the problem of estimating the structure of the undirected graph for which the distribution is pairwise Markov and the parameters' vector of its exponential form. We propose a simple method that provides a closed form estimator of the parameters' vector and, through its support, also provides an estimate of the undirected graph associated to the MBV distribution. The estimator is proved to be consistent but it is feasible only in low-dimensional regimes. Synthetic examples illustrate its performance compared with another method that represents the state of the art in the literature. Finally, the proposed procedure is used for the analysis of a real data set in the pediatric allergology area, showing its practical efficiency. | statistics
In this paper we study the weak Roman domination number and the secure domination number of a graph. In particular, we obtain general bounds on these two parameters and, as a consequence of the study, we derive new inequalities of Nordhaus-Gaddum type involving secure domination and weak Roman domination. Furthermore, the particular case of Cartesian product graphs is considered. | mathematics |
In this paper we analyze the extension of the classical smallest enclosing disk problem to the case of the location of a polyellipsoid to fully cover a set of demand points in $\mathbb{R}^d$. We prove that the problem is polynomially solvable in fixed dimension and analyze mathematical programming formulations for it. We also consider some geometric approaches for the problem in case the foci of the polyellipsoids are known. Extensions of the classical algorithm by Elzinga-Hearn are also derived for this new problem. Moreover, we address several extensions of the problem, such as the case where the foci of the enclosing polyellipsoid are not given and have to be determined among a potential set of points, or the induced covering problems when, instead of polyellipsoids, one uses ordered median polyellipsoids. For these problems we also present Mixed Integer (Non) Linear Programming strategies that lead to efficient ways to solve them. | mathematics
In this paper, we consider the multiplicity and asymptotics of standing waves with prescribed mass $\int_{{\mathbb{R}^N}} {{u}^2}=a^2$ to the energy critical half-wave \begin{equation}\label{eqA0.1} \sqrt{-\Delta}u=\lambda u+\mu|u|^{q-2} u+|u|^{2^*-2}u,\ \ u\in H^{1/2}(\R^N), \end{equation} where $N\!\geq\! 2$, $a\!>\!0$, $q \!\in\!\big(2,2+\frac{2}{N}\big)$, $2^*\!=\!\frac{2N}{N-1}$ and $\lambda\!\in\!\R$ appears as a Lagrange multiplier. We show that \eqref{eqA0.1} admits a ground state $u_a$ and an excited state $v_a$, which are characterised by a local minimizer and a mountain-pass critical point of the corresponding energy functional. Several asymptotic properties of $\{u_a\}$, $\{v_a\}$ are obtained and it is worth pointing out that we get a precise description of $\{u_a\}$ as $a\!\to\! 0^+$ without needing any uniqueness condition on the related limit problem. The main contribution of this paper is to extend the main results in J. Bellazzini et al. [Math. Ann. 371 (2018), 707-740] from energy subcritical to energy critical case. Furthermore, these results can be extended to the general fractional nonlinear Schr\"{o}dinger equation with Sobolev critical exponent, which generalize the work of H. J. Luo-Z. T. Zhang [Calc. Var. Partial Differ. Equ. 59 (2020)] from energy subcritical to energy critical case. | mathematics |
This paper presents a new approach for trees-based regression, such as simple regression tree, random forest and gradient boosting, in settings involving correlated data. We show the problems that arise when implementing standard trees-based regression models, which ignore the correlation structure. Our new approach explicitly takes the correlation structure into account in the splitting criterion, stopping rules and fitted values in the leaves, which induces some major modifications of standard methodology. The superiority of our new approach over trees-based models that do not account for the correlation is supported by simulation experiments and real data analyses. | statistics |
The current work covers the evaluation of ultrasonic and electromagnetic (EM) techniques applied to temperature measurement and flow characterization for Enhanced Geothermal Systems (EGS). We have evaluated both ultrasonic techniques and microwave radiometry for temperature gradient and profile measurements. A waveguide-based ultrasonic probe was developed to measure the temperature gradient. A statistical approach to estimating the average grain size via spectral analysis of the scattered ultrasonic signals is introduced. For directional temperature measurement, different microwave antenna designs are compared numerically and an array loop antenna design is selected for further development. Finally, techniques to characterize the porosity and permeability of a hot dry rock resource are presented. | physics
Even though transitivity is a central structural feature of social networks, its influence on epidemic spread on coevolving networks has remained relatively unexplored. Here we introduce and study an adaptive SIS epidemic model wherein the infection and network coevolve with non-trivial probability to close triangles during edge rewiring, leading to substantial reinforcement of network transitivity. This new model provides a unique opportunity to study the role of transitivity in altering the SIS dynamics on a coevolving network. Using numerical simulations and Approximate Master Equations (AME), we identify and examine a rich set of dynamical features in the new model. In many cases, the AME including transitivity reinforcement provides accurate predictions of stationary-state disease prevalences and network degree distributions. Furthermore, for some parameter settings, the AME accurately trace the temporal evolution of the system. We show that higher transitivity reinforcement in the model leads to lower levels of infective individuals in the population, when closing a triangle is the dominant rewiring mechanism. These methods and results may be useful in developing ideas and modeling strategies for controlling SIS type epidemics. | physics |
In addition to Internet service, new commercial broadband low-Earth-orbiting (LEO) satellites could provide a positioning, navigation, and timing (PNT) service far more robust to interference than traditional Global Navigation Satellite Systems (GNSS). Previous proposals for LEO PNT require dedicated spectrum and hardware: a transmitter, antenna, and atomic clock on board every broadband satellite. This paper proposes a high-performance, low-cost alternative which fuses the requirements of PNT service into the existing capabilities of the broadband satellite. A concept of operations for so-called fused LEO GNSS is presented and analyzed in terms of the economy of its use of constellation resources of transmitters, bandwidth, and time. This paper shows that continuous assured PNT service over $\pm$60{\deg} latitude (covering 99.8% of the world's population) with positioning performance exceeding traditional GNSS pseudoranging would cost less than 0.8% of downlink capacity for the largest of the new constellations, SpaceX's Starlink. | electrical engineering and systems science |
We study the local geometry of 4-manifolds equipped with a \emph{para-K\"ahler-Einstein} (pKE) metric, a special type of split-signature pseudo-Riemannian metric, and their associated \emph{twistor distribution}, a rank 2 distribution on the 5-dimensional total space of the circle bundle of self-dual null 2-planes. For pKE metrics with nonvanishing Einstein constant this twistor distribution has exactly two integral leaves and is `maximally non-integrable' on their complement, a so-called (2,3,5)-distribution. Our main result establishes a simple correspondence between the anti-self-dual Weyl tensor of a pKE metric with non-vanishing Einstein constant and the Cartan quartic of the associated twistor distribution. This will be followed by a discussion of this correspondence for general split-signature metrics which is shown to be much more involved. We use Cartan's method of equivalence to produce a large number of explicit examples of pKE metrics with nonvanishing Einstein constant whose anti-self-dual Weyl tensor have special real Petrov type. In the case of real Petrov type $D,$ we obtain a complete local classification. Combined with the main result, this produces twistor distributions whose Cartan quartic has the same algebraic type as the Petrov type of the constructed pKE metrics. In a similar manner, one can obtain twistor distributions with Cartan quartic of arbitrary algebraic type. As a byproduct of our pKE examples we naturally obtain para-Sasaki-Einstein metrics in five dimensions. Furthermore, we study various Cartan geometries naturally associated to certain classes of pKE 4-dimensional metrics. We observe that in some geometrically distinguished cases the corresponding \emph{Cartan connections} satisfy the Yang-Mills equations. We then provide explicit examples of such Yang-Mills Cartan connections. | mathematics |
In this thesis we expand upon the results that led to the paper of Lee et al., arXiv:2105.01114 (2021). In particular, we give more details on the oracular formulation of variational quantum algorithms, and the relationship between properties of Ans\"atze and the strength of their corresponding oracles. Furthermore, having identified the importance of noncommutativity in parameterized quantum circuits (PQCs) as likely being crucial to achieving a quantum advantage, we compare this notion to similar properties in classical neural networks such as nonlinearity, based on the perspective of the recent moniker for PQCs as quantum neural networks. While this thesis includes much of the figures and content from the aforementioned paper, it should be considered mainly as a self-contained collection of supplementary materials. | quantum physics |
Model compression has emerged as an important area of research for deploying deep learning models on Internet-of-Things (IoT). However, for extremely memory-constrained scenarios, even the compressed models cannot fit within the memory of a single device and, as a result, must be distributed across multiple devices. This leads to a distributed inference paradigm in which memory and communication costs represent a major bottleneck. Yet, existing model compression techniques are not communication-aware. Therefore, we propose Network of Neural Networks (NoNN), a new distributed IoT learning paradigm that compresses a large pretrained 'teacher' deep network into several disjoint and highly-compressed 'student' modules, without loss of accuracy. Moreover, we propose a network science-based knowledge partitioning algorithm for the teacher model, and then train individual students on the resulting disjoint partitions. Extensive experimentation on five image classification datasets, for user-defined memory/performance budgets, show that NoNN achieves higher accuracy than several baselines and similar accuracy as the teacher model, while using minimal communication among students. Finally, as a case study, we deploy the proposed model for CIFAR-10 dataset on edge devices and demonstrate significant improvements in memory footprint (up to 24x), performance (up to 12x), and energy per node (up to 14x) compared to the large teacher model. We further show that for distributed inference on multiple edge devices, our proposed NoNN model results in up to 33x reduction in total latency w.r.t. a state-of-the-art model compression baseline. | statistics |
We construct a four-parameter family of affine Yangian algebras by gluing two copies of the affine Yangian of $\mathfrak{gl}_1$. Our construction allows for gluing operators with arbitrary (integer or half integer) conformal dimension and arbitrary (bosonic or fermionic) statistics, which is related to the relative framing. The resulting family of algebras is a two-parameter generalization of the $\mathcal{N}=2$ affine Yangian, which is isomorphic to the universal enveloping algebra of $\mathfrak{u}(1)\oplus \mathcal{W}^{\mathcal{N}=2}_{\infty}[\lambda]$. All algebras that we construct have natural representations in terms of "twin plane partitions", a pair of plane partitions appropriately joined along one common leg. We observe that the geometry of twin plane partitions, which determines the algebra, bears striking similarities to the geometry of certain toric Calabi-Yau threefolds. | high energy physics theory |
We study the anomalous microwave emission (AME) in the Lynds Dark Nebula (LDN) 1780 on two angular scales. Using available ancillary data at an angular resolution of 1 degree, we construct an SED between 0.408 GHz and 2997 GHz. We show that there is a significant amount of AME at these angular scales and the excess is compatible with a physical spinning dust model. We find that LDN 1780 is one of the clearest examples of AME on 1 degree scales. We detected AME with a significance > 20$\sigma$. We also find at these angular scales that the location of the peak of the emission at frequencies between 23-70 GHz differs from the one on the 90-3000 GHz map. In order to investigate the origin of the AME in this cloud, we use data obtained with the Combined Array for Research in Millimeter-wave Astronomy (CARMA), which provides 2 arcmin resolution at 30 GHz. We study the connection between the radio and IR emissions using morphological correlations. The best correlation is found to be with MIPS 70$\mu$m, which traces warm dust (T$\sim$50K). Finally, we study the difference in radio emissivity between two locations within the cloud. We measured a difference of a factor of $\approx 6$ in the 30 GHz emissivity. We show that this variation can be explained, using the spinning dust model, by a variation in the dust grain size distribution across the cloud, particularly changing the carbon fraction and hence the amount of PAHs. | astrophysics
There is a need to build intelligence into operating machinery and use data analysis on monitored signals in order to quantify the health of the operating system and self-diagnose the onset of any fault. Built-in control procedures can automatically take corrective actions in order to avoid catastrophic failure when a fault is diagnosed. This paper presents a Temporal Clustering Network (TCN) capability for processing acceleration measurement(s) made on the operating system (i.e. machinery foundation, machinery casing, etc.), or any other type of temporal signals, and determining based on the monitored signal when a fault is at its onset. The new capability uses: one-dimensional convolutional neural networks (1D-CNN) for processing the measurements; unsupervised learning (i.e. no labeled signals from the different operating conditions and no signals at pristine vs. damaged conditions are necessary for training the 1D-CNN); clustering (i.e. grouping signals in different clusters reflective of the operating conditions); and statistical analysis for identifying fault signals that are not members of any of the clusters associated with the pristine operating conditions. A case study demonstrating its operation is included in the paper. Finally, topics for further research are identified. | electrical engineering and systems science
M-theory backgrounds in the form of unwarped compactifications with or without fluxes are considered. We construct the bilinear forms of supergravity Killing spinors for different choices of spinor inner products on these backgrounds. The equations satisfied by the bilinear forms and their decompositions into product manifolds are obtained for different inner product choices in the special case for which the spinors factorize. It is found that the $AdS$ solutions can only appear for some special choices of spinor inner products on product manifolds. The reduction of bilinears of supergravity Killing spinors into the hidden symmetries of product manifolds, which are Killing-Yano and closed conformal Killing-Yano forms for $AdS$ solutions, is shown. These hidden symmetries are lifted to eleven-dimensional backgrounds to find the hidden symmetries on them. The relation between the choices of spinor inner products, $AdS$ solutions and hidden symmetries on M-theory backgrounds is investigated. | high energy physics theory