In recent years, a number of functional inequalities have been derived for Poisson random measures, with a wide range of applications. In this paper, we prove that such inequalities can be extended to the setting of marked temporal point processes, under mild assumptions on their Papangelou conditional intensity. First, we derive a Poincar\'e inequality. Second, we prove two transportation cost inequalities. The first one refers to functionals of marked point processes with a Papangelou conditional intensity and is new even in the setting of Poisson random measures. The second one refers to the law of marked temporal point processes with a Papangelou conditional intensity, and extends a related inequality which is known to hold on a general Poisson space. Finally, we provide a variational representation of the Laplace transform of functionals of marked point processes with a Papangelou conditional intensity. The proofs make use of an extension of the Clark-Ocone formula to marked temporal point processes. Our results are shown to apply to classes of renewal, nonlinear Hawkes and Cox point processes.
mathematics
The work interprets experimental data for the heat capacity of Zn2(BDC)2(DABCO) in the region of second-order phase transitions. The proposed understanding of the processes occurring during phase transitions may be helpful to reveal quantum Zeno effects in metal-organic frameworks (MOFs) with evolving (unstable) structural subsystems and to establish relations between quantum measurements and the entropy of phase transitions.
condensed matter
Analogous to the circular spin current in an isolated quantum loop, a bias-induced circular spin current can also be generated under certain physical conditions in a nanojunction having single and/or multiple loop geometries, which, to the best of our knowledge, we propose for the first time for a magnetic quantum system. The key aspect of our work is the development of a suitable theory for defining and analyzing circular spin current in the presence of environmental dephasing and impurities. Unlike the transport current in a conducting junction, the circular current may be significantly enhanced in the presence of disorder and phase-randomizing processes. Our analysis reveals a new spin-dependent phenomenon and can provide important signatures for designing suitable spintronic devices as well as selective spin regulation.
condensed matter
Photoacoustic tomography (PAT) is intrinsically sensitive to blood oxygen saturation (sO2) in vivo. However, making accurate sO2 measurements without knowledge of tissue- and instrumentation-related correction factors is extremely challenging. We have developed a low-cost flow phantom system to facilitate validation of photoacoustic tomography systems. The phantom is composed of a flow circuit, which is partially embedded within a tissue-mimicking phantom, with independent sensors providing online monitoring of the optical absorption spectrum and partial pressure of oxygen in the tube. We first establish the flow phantom using two small molecule dyes that are frequently used for photoacoustic imaging: methylene blue (MB) and indocyanine green (ICG). We then demonstrate the potential of the phantom for evaluating sO2 using chemical oxygenation and deoxygenation of blood in the phantom. Using this dynamic assessment of the photoacoustic sO2 measurement in phantoms in relation to a ground truth, we explore the influence of multispectral processing and spectral coloring on accurate assessment of sO2. Future studies could exploit this low-cost dynamic flow phantom to validate fluence correction algorithms and explore additional blood parameters such as pH, as well as absorptive and other properties of different fluids.
physics
Despite its dominance, hydrogen has been largely ignored in studies of the abundance patterns of the chemical elements in gradual solar energetic-particle (SEP) events; those neglected abundances show a surprising new pattern of behavior. Abundance enhancements of elements with 2 <= Z <= 56, relative to coronal abundances, show a power-law dependence, versus their average mass-to-charge ratio A/Q, that varies from event to event and with time during events. The ion charge states Q depend upon the source plasma temperature T. For most gradual SEP events, shock waves have accelerated ambient coronal material with T < 2 MK with decreasing power-laws in A/Q. In this case, the proton abundances agree rather well with the power-law fits extrapolated from elements with Z >= 6 at A/Q > 2 down to hydrogen at A/Q = 1. Thus the abundances of the elements with Z >= 6 fairly accurately predict the observed abundance of H, at a similar velocity, in most SEP events. However, for those gradual SEP events where ion enhancements follow positive powers of A/Q, especially those with T > 2 MK where shock waves have reaccelerated residual suprathermal ions from previous impulsive SEP events, proton abundances commonly exceed the extrapolated expectation, usually by a factor of order ten. This is a new and unexpected pattern of behavior that is unique to the abundances of protons and may be related to the need for more streaming protons to produce sufficient waves for scattering and acceleration of more heavy ions at the shock.
astrophysics
A new one-dimensional model is proposed for the low-energy vibrational quantum dynamics of CH5+ based on the motion of an effective particle confined to a 60-vertex graph ${\Gamma}_{60}$ with a single edge length parameter. Within this model, the quantum states of CH5+ are obtained in analytic form and are related to combinatorial properties of ${\Gamma}_{60}$. The bipartite structure of ${\Gamma}_{60}$ gives a simple explanation for curious symmetries observed in numerically exact variational calculations on CH5+.
physics
In this paper, a photovoltaic (PV) reconfigurable grid-tied inverter (RGTI) scheme is proposed. Unlike a conventional GTI that ceases operation during a power outage, the RGTI is designed to act as a regular GTI in the on-grid mode, but it is reconfigured to function as a DC-DC charge controller that continues operation during a grid outage. During this period, the RGTI is tied to the battery bank of an external UPS-based backup power system to augment it with solar power. Such an operation in off-grid mode without employing communication with the UPS is challenging, as the control of the RGTI must not conflict with the battery management system of the UPS. The hardware and control design aspects of this requirement are discussed in this paper. A battery emulation control scheme is proposed for the RGTI that facilitates seamless functioning of the RGTI in parallel with the physical UPS battery to reduce its discharge current. A system-level control scheme for overall operation and power management is presented to handle the dynamic variations in solar irradiation and UPS loads during the day, such that the battery discharge burden is minimized. The design and operation of the proposed RGTI system are independent of the external UPS and can be integrated with a UPS supplied by any manufacturer. Experimental results on a 4 kVA hardware setup validate the proposed RGTI concept, its operation and control.
electrical engineering and systems science
The CLIC Tracker Detector (CLICTD) is a monolithic pixelated sensor chip produced in a $180$ nm imaging CMOS process built on a high-resistivity epitaxial layer. The chip, designed in the context of the CLIC tracking detector study, comprises a matrix of ${16\times128}$ elongated pixels, each measuring ${300\times30}$ $\mu$m$^2$. To ensure prompt charge collection, every elongated pixel is segmented into eight sub-pixels, each containing a collection diode and a separate analog front-end. A simultaneous $8$-bit time measurement with $10$ ns time bins and $5$-bit energy measurement with programmable range is performed in the on-pixel digital logic. The main design aspects as well as the first results from laboratory measurements with the CLICTD chip are presented.
physics
In this paper we extend the security analysis of a central broadcast protocol using thermal states to the case in which the eavesdropper controls the source. Quantum secrecy in a continuous variable central broadcast scheme is guaranteed by the quantum correlations present in thermal states arising from the Hanbury Brown and Twiss effect. This work allows for a method of key exchange in which two parties can agree on a key as long as both can detect the same source and they are within the spatial coherence length of the source. This is important because it allows quantum secure key exchange with only minimal changes to existing infrastructure.
quantum physics
A stream of algorithmic advances has steadily increased the popularity of the Bayesian approach as an inference paradigm, both from the theoretical and applied perspective. Even with apparent successes in numerous application fields, a rising concern is the robustness of Bayesian inference in the presence of model misspecification, which may lead to undesirable extreme behavior of the posterior distributions for large sample sizes. Generalized belief updating with a loss function is a central principle for making Bayesian inference more robust and less vulnerable to deviations from the assumed model. Here we consider such updates with $f$-divergences to quantify a discrepancy between the assumed statistical model and the probability distribution which generated the observed data. Since the latter is generally unknown, estimation of the divergence may be viewed as an intractable problem. We show that the divergence becomes accessible through the use of probabilistic classifiers that can leverage an estimate of the ratio of two probability distributions even when one or both of them is unknown. We demonstrate the behavior of generalized belief updates for various specific choices under the $f$-divergence family. We show that for specific divergence functions such an approach can even improve on methods evaluating the correct model likelihood function analytically.
statistics
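The classifier-based density-ratio trick described in the abstract above can be made concrete in a few lines. The sketch below is purely illustrative (not the authors' implementation) and assumes scikit-learn: samples from the assumed model are labeled 0, observed data are labeled 1, and a calibrated classifier's odds approximate the density ratio, from which a plug-in estimate of one member of the f-divergence family (here KL) follows.

# Sketch: estimate the density ratio q(x)/p(x) with a probabilistic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
model_samples = rng.normal(0.0, 1.0, size=(5000, 1))  # draws from the assumed model p
data_samples = rng.normal(0.5, 1.2, size=(5000, 1))   # draws from the data distribution q

X = np.vstack([model_samples, data_samples])
X = np.hstack([X, X**2])  # quadratic feature: makes the Gaussian log-ratio learnable
y = np.concatenate([np.zeros(5000), np.ones(5000)])

clf = LogisticRegression().fit(X, y)
prob = clf.predict_proba(X)[:, 1]
ratio = prob / (1.0 - prob)               # pointwise estimate of q(x)/p(x)

# Plug-in estimate of KL(q || p) using the samples drawn from q:
kl_estimate = np.mean(np.log(ratio[y == 1]))
print(f"estimated KL(q || p) ~ {kl_estimate:.3f}")

The key identity is that a well-calibrated classifier's odds approximate q(x)/p(x), so any f-divergence expressible as an expectation of the ratio becomes estimable; with raw features only, a more flexible classifier would be needed.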
In this paper, millimeter wave (mmWave) wireless channel characteristics (Doppler spread and path loss modeling) for Unmanned Aerial Vehicle (UAV) assisted communication are analyzed by emulating real UAV motion using a robotic arm. The motion captures the actual turbulence caused by wind gusts to the UAV in the atmosphere, which is statistically modeled by the widely used Dryden wind model. The frequency under consideration is 28 GHz in an anechoic chamber setting. A total of 11 distance points from 3.5 feet to 23.5 feet in increments of 2 feet were considered in this experiment. At each distance point, 3 samples of data were collected for better inference. In this emulated environment, it was found that the average Doppler spread at these different distances lay between -20 Hz and +20 Hz at a noise floor of -60 dB. On the other hand, the path loss exponent was found to be 1.843. This study presents a novel framework for emulating UAV motion for mmWave communication systems, which will pave the way for the future design and implementation of next generation UAV-assisted wireless communication systems.
electrical engineering and systems science
The generation of mean flows is a long-standing issue in rotating fluids. Motivated by planetary objects, we consider here a rapidly rotating fluid-filled spheroid, which is subject to weak perturbations of either the boundary (e.g. tides) or the rotation vector (e.g. in direction by precession, or in magnitude by longitudinal librations). Using boundary-layer theory, we determine the mean zonal flows generated by nonlinear interactions within the viscous Ekman layer. These flows are of interest because they survive in the relevant planetary regime of both vanishing forcings and viscous effects. We extend the theory to take into account (i) the combination of spatial and temporal perturbations, providing new mechanically driven zonal flows (e.g. driven by latitudinal librations), and (ii) the spheroidal geometry relevant for planetary bodies. Wherever possible, our analytical predictions are validated with direct numerical simulations. The theoretical solutions are in good quantitative agreement with the simulations, with expected discrepancies (zonal jets) in the presence of inertial waves generated at the critical latitudes (as for precession). Moreover, we find that the mean zonal flows can be strongly affected in spheroids. Guided by planetary applications, we also revisit the scaling laws for the geostrophic shear layers at the critical latitudes, and the influence of a solid inner core.
physics
Natural Language Processing algorithms have made incredible progress, but they still struggle when applied to out-of-distribution examples. We address a challenging and underexplored version of this domain adaptation problem, where an algorithm is trained on several source domains, and then applied to examples from an unseen domain that is unknown at training time. Particularly, no examples, labeled or unlabeled, or any other knowledge about the target domain are available to the algorithm at training time. We present PADA: A Prompt-based Autoregressive Domain Adaptation algorithm, based on the T5 model. Given a test example, PADA first generates a unique prompt and then, conditioned on this prompt, labels the example with respect to the NLP task. The prompt is a sequence of unrestricted length, consisting of pre-defined Domain Related Features (DRFs) that characterize each of the source domains. Intuitively, the prompt is a unique signature that maps the test example to the semantic space spanned by the source domains. In experiments with 3 tasks (text classification and sequence tagging), for a total of 14 multi-source adaptation scenarios, PADA substantially outperforms strong baselines.
computer science
The paper starts by giving a motivation for this research and justifying the considered stochastic diffusion models for cosmic microwave background radiation studies. Then it derives the exact solution in terms of a series expansion to a hyperbolic diffusion equation on the unit sphere. The Cauchy problem with random initial conditions is studied. All assumptions are stated in terms of the angular power spectrum of the initial conditions. An approximation to the solution is given and analysed by finitely truncating the series expansion. The upper bounds for the convergence rates of the approximation errors are derived. Smoothness properties of the solution and its approximation are investigated. It is demonstrated that the sample H\"older continuity of these spherical fields is related to the decay of the angular power spectrum. Numerical studies of approximations to the solution and applications to cosmic microwave background data are presented to illustrate the theoretical results.
statistics
The proximity of the anode to a curved field electron emitter alters the electric field at the apex and its neighbourhood. A formula for the apex field enhancement factor, $\gamma_a(D)$, for generic smooth emitters is derived using the line charge model when the anode is at a distance $D$ from the cathode plane. The resulting approximately modular form is such that the anode proximity contribution can be calculated separately (using geometric quantities such as the anode-cathode distance $D$, the emitter height $h$ and the emitter apex radius of curvature $R_a$) and plugged into the expression for $\gamma_a(\infty)$. It is also shown that the variation of the enhancement factor on the surface of the emitter close to the apex is unaffected by the presence of the anode and continues to obey the generalized cosine law. These results are verified numerically for various generic emitter shapes using COMSOL Multiphysics. Finally, the theory is applied to explain experimental observations on the scaling behavior of the $I-V$ field emission curve.
physics
This paper presents an efficient sensor management approach for multi-target tracking in passive sensor networks. Compared with active sensor networks, passive sensor networks have larger uncertainty due to the nature of passive sensing. Multi-target tracking in passive sensor networks is challenging because the multi-sensor multi-target fusion problem is difficult and sensor management is necessary to achieve good trade-offs between tracking accuracy and energy consumption or other costs. To address this problem, we present an efficient information-theoretic approach to manage the sensors for better tracking of the unknown and time-varying number of targets. This is accomplished with two main technical innovations. The first is a tractable information-based multi-sensor selection solution via a partially observed Markov decision process framework. The Cauchy-Schwarz divergence is used as the criterion to select informative sensors sequentially from the candidates. The second is a novel dual-stage fusion strategy based on the iterated-corrector multi-sensor generalized labeled multi-Bernoulli filter. Since the performance of the iterated-corrector scheme is greatly influenced by the order of sensor updates, the selected sensors are first ranked in order of their abilities to detect targets according to the Cauchy-Schwarz divergence, followed by the iterated-corrector update. The computational costs of ranking the sensors are negligible, since the Cauchy-Schwarz divergence has already been computed in the multi-sensor selection procedure. Simulation results validate the effectiveness and efficiency of the proposed approach.
electrical engineering and systems science
One route to numerically propagating quantum systems is time-dependent density functional theory (TDDFT). The application of TDDFT to a particular system's time evolution is predicated on $V$-representability which we have analyzed in a previous publication. Here we describe a newly developed solver for the scalar time-dependent Kohn-Sham potential. We present and interpret the force-balance equation central to our numerical method, describe details of its implementation, and present illustrative numerical results for one- and two-electron systems. A new characterization of $V$-representability for one-electron systems is also included along with possible improvements and future directions.
quantum physics
In this work we study the relaxation of a system of strongly correlated electrons, at charge neutrality, when the chemical potential undergoes a local change. This setup is a model for the X-ray absorption edge study in half-filled graphene. We use holographic duality to describe the system as a classical Schwarzschild black hole in curved 4-dimensional AdS spacetime. Assuming the amplitude of the quench is small, we neglect the backreaction on the geometry. We numerically study the two relaxation regimes: the adiabatic relaxation when the quench is slow, and the relaxation governed by the quasinormal modes of the system when the quench is fast. We confirm the expectation that the scale of separation between the slow and fast regimes is set by the characteristic frequency of the quasinormal modes.
high energy physics theory
Navigation and motion control of a robot to a destination are tasks that have historically been performed with the assumption that contact with the environment is harmful. This makes sense for rigid-bodied robots where obstacle collisions are fundamentally dangerous. However, because many soft robots have bodies that are low-inertia and compliant, obstacle contact is inherently safe. As a result, constraining paths of the robot to not interact with the environment is not necessary and may be limiting. In this paper, we mathematically formalize interactions of a soft growing robot with a planar environment in an empirical kinematic model. Using this interaction model, we develop a method to plan paths for the robot to a destination. Rather than avoiding contact with the environment, the planner exploits obstacle contact when beneficial for navigation. We find that a planner that takes into account and capitalizes on environmental contact produces paths that are more robust to uncertainty than a planner that avoids all obstacle contact.
computer science
Seismic data processing plays a major role in seismic exploration as it conditions much of the seismic interpretation performance. In this context, generating reliable post-stack seismic data also depends on disposing of an efficient pre-stack noise attenuation tool. Here we tackle ground roll noise, one of the most challenging and common noises observed in pre-stack seismic data. Since ground roll is characterized by relatively low frequencies and high amplitudes, the most commonly used approaches for its suppression are based on frequency-amplitude filters for ground roll characteristic bands. However, when signal and noise share the same frequency ranges, these methods usually also suppress signal or leave residual noise. In this paper we take advantage of the highly non-linear features of convolutional neural networks, and propose to use different architectures to detect ground roll in shot gathers and ultimately to suppress it using conditional generative adversarial networks. Additionally, we propose metrics to evaluate ground roll suppression, and report strong results compared to expert filtering. Finally, we discuss generalization of trained models for similar and different geologies to better understand the feasibility of our proposal in real applications.
electrical engineering and systems science
A quantum absorption refrigerator (QAR) autonomously extracts heat from a cold bath and dumps it into a hot bath by exploiting the input heat from a higher temperature reservoir. QARs typically require three-body interactions. We propose and examine a two-body QAR model based upon optomechanical-like coupling in the working medium composed of either two two-level systems, two harmonic oscillators, or one two-level atom and a harmonic oscillator. In the ideal case without internal dissipation, within experimentally realizable parameters, our model can attain a coefficient of performance that is arbitrarily close to the Carnot bound. We study the efficiency at maximum power, a bound for practical purposes, and show that by using suitable reservoir engineering and exploiting the nonlinear optomechanical-like coupling, one can achieve efficiency at maximum power close to the Carnot bound, though the power gradually approaches zero as the efficiency approaches the Carnot bound. Moreover, we discuss the impact of non-classical correlations and the size of the Hilbert space on the cooling power. Finally, we consider a more realistic version of our model in which heat leaks make the QAR non-ideal and prevent it from achieving the Carnot efficiency.
quantum physics
Magnetic monopoles may be produced by the Schwinger effect in the strong magnetic fields of peripheral heavy-ion collisions. We review the form of the electromagnetic fields in such collisions and calculate from first principles the cross section for monopole pair production. Using the worldline instanton method, we work to all orders in the magnetic charge, and hence are not hampered by the breakdown of perturbation theory. Our result depends on the spacetime inhomogeneity through a single dimensionless parameter, the Keldysh parameter, which is independent of collision energy for a given monopole mass. For realistic heavy-ion collisions, the computational cost of the calculation becomes prohibitive and the finite size of the monopoles needs to be taken into account, and therefore our current results are not applicable to them. Nonetheless, our results show that the spacetime dependence enhances the production cross section and would therefore lead to stronger monopole mass bounds than in the constant-field case.
high energy physics theory
The experimental measurements on flavour physics, in tension with Standard Model predictions, exhibit large sources of Lepton Flavour Universality violation. This note summarises an analysis of the effects of the global fits to the Wilson coefficients, assuming a model-independent effective Hamiltonian approach, and proposes different scenarios for including the New Physics contributions. Additionally, we include an overview of the impact of the future generation of colliders in the field of B-meson anomalies.
high energy physics phenomenology
A Parity Alternating Permutation of the set $[n] = \{1, 2,\ldots, n\}$ is a permutation in which even and odd entries alternate. We deal with parity alternating permutations having an odd entry in the first position, PAPs. We study the numbers that count the PAPs with even as well as odd parity. We also study a subclass of PAPs which are derangements as well, Parity Alternating Derangements (PADs). Moreover, by considering the parity of these PADs we look into their statistical property of excedance.
mathematics
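Since the objects in the abstract above are elementary, a brute-force enumeration makes them concrete. This short illustrative script (not from the paper) counts, for small n, the parity alternating permutations that start with an odd entry and those that are additionally derangements:

# Count Parity Alternating Permutations with an odd first entry (PAPs),
# and those that are also derangements (PADs), by brute force.
from itertools import permutations

def is_pap(p):
    # first entry odd, and consecutive entries alternate in parity
    return p[0] % 2 == 1 and all(p[i] % 2 != p[i + 1] % 2 for i in range(len(p) - 1))

def is_derangement(p):
    return all(v != i + 1 for i, v in enumerate(p))

for n in range(1, 9):
    paps = [p for p in permutations(range(1, n + 1)) if is_pap(p)]
    pads = [p for p in paps if is_derangement(p)]
    print(n, len(paps), len(pads))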
A high share of distributed photovoltaic (PV) generation in low-voltage networks may lead to over-voltage and line/transformer overloading. To mitigate these issues, we investigate how advanced electricity tariffs could ensure safe grid operation while enabling building owners to recover their investment in a PV and storage system. We show that dynamic volumetric electricity prices trigger economic opportunities for large investments in PV and battery capacity but lead to more pressure on the grid, while capacity and block rate tariffs mitigate over-voltage and decrease line loading issues. However, block rate tariffs significantly decrease the optimal PV installation size.
electrical engineering and systems science
The natural world often follows a long-tailed data distribution where only a few classes account for most of the examples. This long tail causes classifiers to overfit to the majority class. To mitigate this, prior solutions commonly adopt class rebalancing strategies such as data resampling and loss reshaping. However, by treating each example within a class equally, these methods fail to account for the important notion of example hardness, i.e., within each class some examples are easier to classify than others. To incorporate this notion of hardness into the learning process, we propose the EarLy-exiting Framework (ELF). During training, ELF learns to early-exit easy examples through auxiliary branches attached to a backbone network. This offers a dual benefit: (1) the neural network increasingly focuses on hard examples, since they contribute more to the overall network loss; and (2) it frees up additional model capacity to distinguish difficult examples. Experimental results on two large-scale datasets, ImageNet LT and iNaturalist'18, demonstrate that ELF can improve state-of-the-art accuracy by more than 3 percent. This comes with the additional benefit of reducing up to 20 percent of inference time FLOPS. ELF is complementary to prior work and can naturally integrate with a variety of existing methods to tackle the challenge of long-tailed distributions.
computer science
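The early-exit mechanism described in the abstract above can be illustrated with a toy network. The following is a minimal, hypothetical PyTorch sketch (not the authors' ELF code): a backbone with one auxiliary branch, trained with the sum of branch losses, where at inference a single example leaves at the first exit whose softmax confidence clears a threshold.

# Toy early-exit network: easy examples leave at the auxiliary branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.exit1 = nn.Linear(64, n_classes)   # auxiliary early-exit branch
        self.stage2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.final = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.stage1(x)
        return self.exit1(h), self.final(self.stage2(h))

    @torch.no_grad()
    def predict_one(self, x, threshold=0.9):
        # x: a single example of shape (1, 32)
        h = self.stage1(x)
        p1 = F.softmax(self.exit1(h), dim=-1)
        if p1.max() >= threshold:               # easy example: exit early
            return p1.argmax(dim=-1)
        return self.final(self.stage2(h)).argmax(dim=-1)

net = EarlyExitNet()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
out1, out2 = net(x)
# Summing the exit losses lets easy examples be absorbed early, so the deeper
# layers increasingly focus on the hard ones.
loss = F.cross_entropy(out1, y) + F.cross_entropy(out2, y)
loss.backward()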
In this paper, we introduce a new toolbox for constructing speech datasets from long audio recordings and raw reference texts. We develop tools for each step of the speech dataset construction pipeline, including data preprocessing, audio-text alignment, data post-processing and filtering. The proposed pipeline also supports a human in the loop to address text-audio mismatch issues and remove samples that do not satisfy the quality requirements. We demonstrate the toolbox's efficiency by building the Russian LibriSpeech corpus (RuLS) from LibriVox audiobooks. The toolbox is open-sourced in the NeMo framework. The RuLS corpus is released in OpenSLR.
electrical engineering and systems science
We study the individuality of the human voice with respect to a widely used feature representation of speech utterances, namely, the i-vector model. As a first step toward this goal, we compare and contrast uniqueness measures proposed for different biometric modalities. Then, we introduce a new uniqueness measure that evaluates the entropy of i-vectors while taking into account speaker level variations. Our measure operates in the discrete feature space and relies on accurate estimation of the distribution of i-vectors. Therefore, i-vectors are quantized while ensuring that both the quantized and original representations yield similar speaker verification performance. Uniqueness estimates are obtained from two newly generated datasets and the public VoxCeleb dataset. The first custom dataset contains more than one and a half million speech samples of 20,741 speakers obtained from TEDx Talks videos. The second one includes over twenty one thousand speech samples from 1,595 actors that are extracted from movie dialogues. Using this data, we analyzed how several factors, such as the number of speakers, number of samples per speaker, sample durations, and diversity of utterances affect uniqueness estimates. Most notably, we determine that the discretization of i-vectors does not cause a reduction in speaker recognition performance. Our results show that the degree of distinctiveness offered by i-vector-based representation may reach 43-70 bits considering 5-second long speech samples; however, under less constrained variations in speech, uniqueness estimates are found to reduce by around 30 bits. We also find that doubling the sample duration increases the distinctiveness of the i-vector representation by around 20 bits.
electrical engineering and systems science
We discuss the use of a region of uniform and constant magnetic field in order to implement a two-state atomic polarizer for an H(2S) beam. We have observed that a device with such field configuration is capable of achieving an efficient polarization for a wide range of magnetic field intensities and atomic velocities. In addition, we establish a criterion that must be met to confirm a successful polarization. That is possible due to a specific beating pattern for the Lyman-$\alpha$ radiation expected for the outgoing two-state atomic beam.
physics
In this work we look at Byzantine consensus in asynchronous systems under the local broadcast model. In the local broadcast model, a message sent by any node is received identically by all of its neighbors in the communication network, preventing a faulty node from transmitting conflicting information to different neighbors. Our recent work has shown that in the synchronous setting, network connectivity requirements for Byzantine consensus are lower under the local broadcast model as compared to the classical point-to-point communication model. Here we show that the same is not true in the asynchronous setting: the network requirements for Byzantine consensus stay the same under local broadcast as under the point-to-point communication model.
computer science
We use the string melting version of a multi-phase transport (AMPT) model to study Cu+Au collisions at $\sqrt{s_{NN}}=200$ GeV. The rapidity distributions of identified hadrons show an asymmetric dependence on rapidity. In addition, elliptic and triangular flows at mid-rapidity from the AMPT model for pions, kaons, and protons agree reasonably with the experimental data up to $p_{T}\sim1$ GeV$/c$. We then investigate the forward/backward asymmetry of $v_2$ and $v_3$. We find that these anisotropic flows are larger on the Au-going side than the Cu-going side, while the asymmetry tends to go away in very peripheral collisions. We also make predictions on transverse momentum spectra of identified hadrons and longitudinal decorrelations of charged particles, where the average decorrelation of elliptic flow in asymmetric Cu+Au collisions is found to be stronger than that in Au+Au collisions.
high energy physics phenomenology
Benchmarks play an important role in evaluating the efficiency and effectiveness of solutions to automate several phases of the software development lifecycle. Moreover, if well designed, they also serve us well as an important artifact to compare different approaches amongst themselves. BugSwarm is a benchmark that has been recently published, which contains 3,091 pairs of failing and passing continuous integration builds. According to the authors, the benchmark has been designed with the automatic program repair and fault localization communities in mind. Given that a benchmark targeting these communities ought to have several characteristics (e.g., a buggy statement needs to be present), we have dissected the benchmark to fully understand whether the benchmark suits these communities well. Our critical analysis has found several limitations in the benchmark: only 112/3,091 (3.6%) are suitable to evaluate techniques for automatic fault localization or program repair.
computer science
As developing countries continue to face challenges associated with infectious diseases, the need to improve infrastructure to systematically collect data which can be used to understand their outbreak patterns becomes more critical. The World Health Organization (WHO) Integrated Disease Surveillance and Response (IDSR) strategy seeks to drive the systematic collection of surveillance data to strengthen district-level reporting and to translate them into public health actions. Since the analysis of this surveillance data at the central levels of government in many developing nations has traditionally not included advanced analytics, there are opportunities for the development and exploration of computational approaches that can provide proactive insights and improve general health outcomes of infectious disease outbreaks. We propose and demonstrate a multivariate time series cross-correlation analysis as a foundational step towards gaining insight on infectious disease patterns, via the pairwise computation of weighted cross-correlation scores for a specified disease across different health districts, using surveillance data from Cameroon. Following the computation of weighted cross-correlation scores, we apply an anomaly detection algorithm to assess how outbreak alarm patterns align in highly correlated health districts. We demonstrate how multivariate cross-correlation analysis of weekly surveillance data can provide insight into infectious disease incidence patterns in Cameroon by identifying highly correlated health districts for a given disease. We further demonstrate scenarios in which identification of highly correlated districts aligns with alarms flagged using a standard anomaly detection algorithm, hinting at the potential of end-to-end solutions that combine anomaly detection algorithms for flagging alarms with multivariate cross-correlation analysis.
statistics
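The pairwise lagged cross-correlation at the heart of the approach in the abstract above can be sketched as follows. This is an illustrative reconstruction with synthetic weekly counts, not the authors' pipeline, and the simple normalization here stands in for whatever weighting scheme the study uses.

# Sketch: best lagged cross-correlation between two districts' weekly counts.
import numpy as np

def cross_corr(x, y, max_lag=4):
    # Return (lag, corr) maximizing |corr(x_t, y_{t+lag})| over a lag window.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = (0, 0.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
        else:
            c = np.corrcoef(x[-lag:], y[:lag])[0, 1]
        if abs(c) > abs(best[1]):
            best = (lag, c)
    return best

rng = np.random.default_rng(1)
base = rng.poisson(20, size=104).astype(float)         # two years of weekly counts
district_a = base + rng.normal(0, 2, 104)
district_b = np.roll(base, 2) + rng.normal(0, 2, 104)  # b trails a by ~2 weeks
print(cross_corr(district_a, district_b))              # ~ (2, high correlation)

Districts whose best scores are high would then be grouped and their anomaly-detection alarms compared, as described above.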
I present evidence of a novel guise of superradiance that arises in black hole binary spacetimes. Given the right initial conditions, a wave will be amplified as it scatters off the binary. This process, which extracts energy from the orbital motion, is driven by absorption across the horizons and is most pronounced when the individual black holes are not spinning. Focusing on real scalar fields, I demonstrate how modern effective field theory (EFT) techniques enable the computation of the superradiant amplification factor analytically when there exist large separations of scales. Although exploiting these hierarchies inevitably means that the amplification factor is always negligible (it is never larger than about one part in $10^{10}$) in the EFT's regime of validity, this work has interesting theoretical implications for our understanding of general relativity and lays the groundwork for future studies on superradiant phenomena in binary systems.
high energy physics theory
PerceptIn develops and commercializes autonomous vehicles for micromobility around the globe. This paper presents a holistic summary of PerceptIn's development and operating experiences. It provides the business tale behind our product and presents the development of the computing system for our vehicles. We illustrate the design decisions made for the computing system and show the advantage of offloading localization workloads onto an FPGA platform.
computer science
We describe the topology of the geometric quotients of 2n-dimensional compact connected symplectic manifolds with (n-1)-dimensional torus actions. When the isotropy weights at each fixed point are in general position, the quotient is homeomorphic to a sphere.
mathematics
The concept of a Sheffer operation known for Boolean algebras and orthomodular lattices is extended to arbitrary directed relational systems with involution. It is proved that to every such relational system there can be assigned a Sheffer groupoid and, conversely, every Sheffer groupoid induces a directed relational system with involution. Hence, investigations of these relational systems can be transformed into the study of special groupoids which form a variety of algebras. If the Sheffer operation is also commutative, then the induced binary relation is antisymmetric. Moreover, commutative Sheffer groupoids form a congruence distributive variety. We characterize symmetry, antisymmetry and transitivity of binary relations by identities and quasi-identities satisfied by an assigned Sheffer operation. The concepts of twist-products of relational systems and of Kleene relational systems are introduced. We prove that every directed relational system can be embedded into a directed relational system with involution via the twist-product construction. If the relation in question is even transitive, then the directed relational system can be embedded into a Kleene relational system. Any Sheffer operation assigned to a directed relational system A with involution induces a Sheffer operation assigned to the twist-product of A.
mathematics
We analyze the stability of a nonlinear dynamical model describing the noncooperative strategic interactions among the agents of a finite collection of populations. Each agent selects one strategy at a time and revises it repeatedly according to a protocol that typically prioritizes strategies whose payoffs are either higher than that of the current strategy or exceed the population average. The model is predicated on well-established research in population and evolutionary games, and has two sub-components. The first is the payoff dynamics model (PDM), which ascribes the payoff to each strategy according to the proportions of every population adopting the available strategies. The second sub-component is the evolutionary dynamics model (EDM) that accounts for the revision process. In our model, the social state at equilibrium is a best response to the payoff, and can be viewed as a Nash-like solution that has predictive value when it is globally asymptotically stable (GAS). We present a systematic methodology that ascertains GAS by checking separately whether the EDM and PDM satisfy appropriately defined system-theoretic dissipativity properties. Our work generalizes pioneering methods based on notions of contractivity applicable to memoryless PDMs, and more general system-theoretic passivity conditions. As demonstrated with examples, the added flexibility afforded by our approach is particularly useful when the contraction properties of the PDM are unequal across populations.
electrical engineering and systems science
We propose a new clustering algorithm that is robust to the presence of outliers in the dataset. We perform Lloyd-type iterations with robust estimates of the centroids. More precisely, we build on the idea of median-of-means statistics to estimate the centroids, but allow for replacement while constructing the blocks. We call this methodology the bootstrap median-of-means (bMOM) and prove that if enough blocks are generated through the bootstrap sampling, then it has a better breakdown point for mean estimation than the classical median-of-means (MOM), where the blocks form a partition of the dataset. From a clustering perspective, bMOM enables one to take many blocks of a desired size, thus avoiding the possible disappearance of clusters in some blocks, a pitfall that can occur for the partition-based generation of blocks of the classical median-of-means. Experiments on simulated datasets show that the proposed approach, called K-bMOM, performs better than existing robust K-means based methods. Guidelines are provided for tuning the hyper-parameters of K-bMOM in practice. It is also recommended that practitioners use such a robust approach to initialize their clustering algorithm. Finally, considering a simplified and theoretical version of our estimator, we prove its robustness to adversarial contamination by deriving robust rates of convergence for the K-means distortion. To our knowledge, it is the first result of this kind for the K-means distortion.
statistics
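The bootstrap median-of-means centroid estimate at the core of K-bMOM can be sketched directly from the description above. This is an illustrative reconstruction, not the authors' code, and the coordinate-wise median is just one possible aggregation of the block means.

# Sketch: robust centroid via bootstrap median-of-means (bMOM).
import numpy as np

def bmom_centroid(points, n_blocks=100, block_size=5, seed=0):
    rng = np.random.default_rng(seed)
    block_means = np.empty((n_blocks, points.shape[1]))
    for b in range(n_blocks):
        idx = rng.integers(0, len(points), size=block_size)  # with replacement
        block_means[b] = points[idx].mean(axis=0)
    return np.median(block_means, axis=0)   # median of the block means

# Inliers near (0, 0) plus a few gross outliers: the plain mean is noticeably
# dragged toward the outliers, the bMOM estimate much less so.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(50, 1, (10, 2))])
print("mean:", pts.mean(axis=0))
print("bMOM:", bmom_centroid(pts))

Within a Lloyd-type iteration, an estimate like this would replace the per-cluster mean update.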
We prove a version of the weight part of Serre's conjecture for mod $p$ Galois representations attached to automorphic forms on rank 2 unitary groups which are non-split at $p$. More precisely, let $F/F^+$ denote a CM extension of a totally real field such that every place of $F^+$ above $p$ is unramified and inert in $F$, and let $\overline{r}: \textrm{Gal}(\overline{F^+}/F^+) \longrightarrow {}^C\mathbf{U}_2(\overline{\mathbb{F}}_p)$ be a Galois parameter valued in the $C$-group of a rank 2 unitary group attached to $F/F^+$. We assume that $\overline{r}$ is semisimple and sufficiently generic at all places above $p$. Using base change techniques and (a strengthened version of) the Taylor-Wiles-Kisin conditions, we prove that the set of Serre weights in which $\overline{r}$ is modular agrees with the set of Serre weights predicted by Gee-Herzig-Savitt.
mathematics
For an attracting periodic orbit (limit cycle) of a deterministic dynamical system, one defines the isochron for each point of the orbit as the cross-section with fixed return time under the flow. Equivalently, isochrons can be characterized as stable manifolds foliating neighborhoods of the limit cycle or as level sets of an isochron map. In recent years, there has been a lively discussion in the mathematical physics community on how to define isochrons for stochastic oscillations, i.e. limit cycles or heteroclinic cycles exposed to stochastic noise. The main discussion has concerned an approach finding stochastic isochrons as sections of equal expected return times versus the idea of considering eigenfunctions of the backward Kolmogorov operator. We discuss the problem in the framework of random dynamical systems and introduce a new rigorous definition of stochastic isochrons as random stable manifolds for random periodic solutions with noise-dependent period. This allows us to establish a random version of isochron maps whose level sets coincide with the random stable manifolds. Finally, we discuss links between the random dynamical systems interpretation and the equal expected return time approach via averaged quantities.
mathematics
Depression is a public health issue which severely affects one's well-being and causes negative social and economic effects for society. To raise awareness of these problems, this publication aims to determine whether long-lasting effects of depression can be detected from electroencephalographic (EEG) signals. The article contains an accuracy comparison for SVM, LDA, NB, kNN and D3 binary classifiers which were trained using linear (relative band powers, APV, SASI) and non-linear (HFD, LZC, DFA) EEG features. The age- and gender-matched dataset consisted of 10 healthy subjects and 10 subjects with a depression diagnosis at some point in their lifetime. Several of the proposed feature selection and classifier combinations reached an accuracy of 90%, where all models were evaluated using 10-fold cross validation and averaged over 100 repetitions with random sample permutations.
computer science
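The evaluation protocol in the abstract above (five classifier families under 10-fold cross-validation) is easy to reproduce in outline with scikit-learn. The sketch below uses random placeholder features, since the EEG feature extraction (band powers, HFD, LZC, etc.) is the substantive part not shown here; reading 'D3' as a decision tree is an assumption.

# Sketch: 10-fold CV comparison of the five classifier families.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))  # placeholder for per-subject EEG features
y = np.repeat([0, 1], 10)     # 10 healthy vs 10 subjects with a depression history

models = {
    "SVM": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "Tree": DecisionTreeClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # stratified 10-fold CV
    print(f"{name}: mean accuracy {scores.mean():.2f}")

Averaging over repeated random permutations of the samples, as the study does, would wrap this loop in an outer repetition with shuffled data.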
We demonstrate a hierarchy of quantum correlations on experimentally prepared two-qubit Werner-like states with controllable white noise. Werner's states, which are white-noise-affected Bell states, are prototype examples for studying hierarchies of quantum correlations as a function of the amount of noise. We experimentally generated Werner's states and their generalizations (GWSs), i.e., partially entangled pure states affected by white noise. These states enable us to study the hierarchy of quantum entanglement, Bell's nonlocality (i.e., Bell's inequality violation, BIV), and quantum steering in two- and three-measurement scenarios. We show that the GWSs reveal fundamentally new aspects of such hierarchies compared to those of Werner's states. In particular, we find that (i) non-maximally entangled states can be more robust to white noise than Bell states, and (ii) some GWSs can be steerable in a two-measurement scenario without violating Bell inequalities, which is impossible for the usual Werner states.
quantum physics
I consider a self-gravitating, N-body system assuming that the N constituents follow regular orbits about the center of mass of the cluster, where a central massive object may be present. I calculate the average over a characteristic timescale of the full, N-body Hamiltonian including all kinetic and potential energy terms. The resulting effective system allows for the identification of the orbital planes with N rigid, disk-shaped tops that can rotate about their fixed common centre and are subject to mutual gravitational torques. The time-averaging imposes boundaries on the canonical generalized momenta of the resulting canonical phase space. I investigate the statistical mechanics induced by the effective Hamiltonian on this bounded phase space and calculate the thermal equilibrium states. These are a result of the relaxation of spins' directions, identified with orbital planes' orientations, which is called vector resonant relaxation. I calculate the dependence of the spins' angular velocity dispersion on temperature and calculate the velocity distribution functions. I argue that the range of validity of the gravitational phase transitions, identified in the special case of zero kinetic term by Roupas, Kocsis & Tremaine, is expanded to non-zero values of the ratio of masses between the cluster of N bodies and the central massive object. The relevance to astrophysics is discussed, focusing on stellar clusters. The same analysis performed on an unbounded phase space accounts for continuous rigid tops.
astrophysics
We consider a family of norms (called operator E-norms) on the algebra $B(H)$ of all bounded operators on a separable Hilbert space $H$ induced by a positive densely defined operator $G$ on $H$. Each norm of this family produces the same topology on $B(H)$ depending on $G$. By choosing different generating operators $G$ one can obtain operator E-norms producing different topologies, in particular, the strong operator topology on bounded subsets of $B(H)$. We obtain a generalised version of the Kretschmann-Schlingemann-Werner theorem, which shows continuity of the Stinespring representation of CP linear maps w.r.t. the energy-constrained $cb$-norm (diamond norm) on the set of CP linear maps and the operator E-norm on the set of Stinespring operators. The operator E-norms induced by a positive operator $G$ are well defined for linear operators relatively bounded w.r.t. the operator $\sqrt{G}$, and the linear space of such operators equipped with any of these norms is a Banach space. We obtain explicit relations between the operator E-norms and the standard characteristics of $\sqrt{G}$-bounded operators. The operator E-norms allow one to obtain simple upper bounds and continuity bounds for some functions depending on $\sqrt{G}$-bounded operators used in applications.
mathematics
In the practical continuous-variable quantum key distribution (CV-QKD) system, the postprocessing process, particularly the error correction part, significantly impacts the system performance. Multi-edge type low-density parity-check (MET-LDPC) codes are suitable for CV-QKD systems because of their Shannon-limit-approaching performance at a low signal-to-noise ratio (SNR). However, the process of designing a low-rate MET-LDPC code with good performance is extremely complicated. Thus, we introduce Raptor-like LDPC (RL-LDPC) codes into the CV-QKD system, exhibiting both the rate compatible property of the Raptor code and capacity-approaching performance of MET-LDPC codes. Moreover, this technique can significantly reduce the cost of constructing a new matrix. We design the RL-LDPC matrix with a code rate of 0.02 and easily and effectively adjust this rate from 0.016 to 0.034. Simulation results show that we can achieve more than 98% reconciliation efficiency in a range of code rate variation using only one RL-LDPC code that can support high-speed decoding with an SNR less than -16.45 dB. This code allows the system to maintain a high key extraction rate under various SNRs, paving the way for practical applications of CV-QKD systems with different transmission distances.
quantum physics
Quantum simulation of chemical systems is one of the most promising near-term applications of quantum computers. The variational quantum eigensolver, a leading algorithm for molecular simulations on quantum hardware, has a serious limitation in that it typically relies on a pre-selected wavefunction ansatz that results in approximate wavefunctions and energies. Here we present an arbitrarily accurate variational algorithm that, instead of fixing an ansatz upfront, grows it systematically one operator at a time in a way dictated by the molecule being simulated. This generates an ansatz with a small number of parameters, leading to shallow-depth circuits. We present numerical simulations, including for a prototypical strongly correlated molecule, which show that our algorithm performs much better than a unitary coupled cluster approach, in terms of both circuit depth and chemical accuracy. Our results highlight the potential of our adaptive algorithm for exact simulations with present-day and near-term quantum hardware.
quantum physics
Based on the results of a large-scale survey, we construct an agent-based network model for the independent inbound tourism of China and, by numerical simulation, investigate the dynamical responses of the tourist flows to external perturbations in different scenarios, including the closure of a tourist city, the opening of a new port in western China, and the increase of the tourism attractiveness of a specific city. Numerical results show that: (1) the closure of a single city in general will affect the tourist visitations of many other cities in the network and, compared to the non-port cities, the overall visitation volume of the system is more influenced by closing a port city; (2) the opening of a new port city in western China will attract more tourists to the western cities, but has a negligible impact on either the overall visitation volume or the imbalanced tourist distribution; and (3) the increase of the tourism attractiveness of a non-port (port) city normally increases (decreases) the overall visitation volume, yet there are exceptions due to the spillover effect. Furthermore, by increasing the tourism attractiveness of a few cities simultaneously, we also investigate the strategy of multiple-city upgrades in tourism development. Numerical results show that the overall tourist volume is better improved by upgrading important non-port cities that are geographically distant from each other. The study reveals the rich dynamics inherent in complex tourism networks, and the findings could be helpful to the development and management of China's inbound tourism.
physics
We present the largest sample of giant radio quasars (GRQs), which are defined as having a projected linear size greater than 0.7 Mpc. The sample consists of 272 GRQs, of which 174 are new objects discovered through cross-matching the NRAO VLA Sky Survey (NVSS) and the Sloan Digital Sky Survey 14$^{\rm th}$ Data Release Quasar Catalogue (DR14Q) and confirmed using Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) radio maps. In our analysis we compare the GRQs with 367 smaller, lobe-dominated radio quasars found using our search method, as well as with quasars from the SDSS DR14 Quasar Catalogue, investigating the parameters characterizing their radio emission (i.e. total and core radio luminosity, radio core prominence), optical properties (black hole masses, accretion rates, distribution in the Eigenvector 1 plane) and infrared colours. For the GRQs and smaller radio quasars we find a strong correlation between [OIII] luminosity and radio luminosity at 1.4 GHz, indicating a strong connection between radio emission and conditions in the narrow-line region. We find no significant differences between GRQs and smaller radio quasars; however, we show that most extended radio quasars belong to a quasar population of evolved AGNs with large black hole masses and low accretion rates. We also show that GRQs have bluer W2-W3 colours compared to SDSS quasars with FIRST detections, indicating differences in the structure of the dusty torus.
astrophysics
Let $A$ be a unital $B_{0}$-algebra with an orthogonal basis, then every multiplicative linear functional on $A$ is continuous. This gives an answer to a problem posed by Z. Sawon and Z. Wronski.
mathematics
Long short-term memory recurrent neural networks (LSTM-RNNs) are considered state-of-the-art in many speech processing tasks. The recurrence in the network, in principle, allows any input to be remembered for an indefinite time, a feature very useful for sequential data like speech. However, very little is known about which information is actually stored in the LSTM and for how long. We address this problem by using a memory reset approach which allows us to evaluate network performance depending on the allowed memory time span. We apply this approach to the task of multi-speaker source separation, but it can be used for any task using RNNs. We find a strong performance effect of short-term (shorter than 100 milliseconds) linguistic processes. Only speaker characteristics are kept in the memory for longer than 400 milliseconds. Furthermore, we confirm that performance-wise it is sufficient to implement longer memory in deeper layers. Finally, in a bidirectional model, the backward model contributes slightly more to the separation performance than the forward model.
electrical engineering and systems science
We focus on grounding (i.e., localizing or linking) referring expressions in images, e.g., ``largest elephant standing behind baby elephant''. This is a general yet challenging vision-language task since it does not only require the localization of objects, but also the multimodal comprehension of context -- visual attributes (e.g., ``largest'', ``baby'') and relationships (e.g., ``behind'') that help to distinguish the referent from other objects, especially those of the same category. Due to the exponential complexity involved in modeling the context associated with multiple image regions, existing work oversimplifies this task to pairwise region modeling by multiple instance learning. In this paper, we propose a variational Bayesian method, called Variational Context, to solve the problem of complex context modeling in referring expression grounding. Specifically, our framework exploits the reciprocal relation between the referent and context, i.e., either of them influences estimation of the posterior distribution of the other, and thereby the search space of context can be greatly reduced. In addition to reciprocity, our framework considers the semantic information of context, i.e., the referring expression can be reproduced based on the estimated context. We also extend the model to unsupervised setting where no annotation for the referent is available. Extensive experiments on various benchmarks show consistent improvement over state-of-the-art methods in both supervised and unsupervised settings.
computer science
We propose a method for constructing optimal block designs for experiments on networks. The response model for a given network interference structure extends the linear network effects model to incorporate blocks. The optimality criteria are chosen to reflect the experimental objectives and an exchange algorithm is used to search across the design space for obtaining an efficient design when an exhaustive search is not possible. Our interest lies in estimating the direct comparisons among treatments, in the presence of nuisance network effects that stem from the underlying network interference structure governing the experimental units, or in the network effects themselves. Comparisons of optimal designs under different models, including the standard treatment models, are examined by comparing the variance and bias of treatment effect estimators. We also suggest a way of defining blocks, while taking into account the interrelations of groups of experimental units within a network, using spectral clustering techniques to achieve optimal modularity. We expect connected units within closed-form communities to behave similarly to an external stimulus. We provide evidence that our approach can lead to efficiency gains over conventional designs such as randomized designs that ignore the network structure and we illustrate its usefulness for experiments on networks.
statistics
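A minimal sketch of a generic exchange algorithm of the kind the abstract above mentions, with a placeholder A-optimality criterion and a toy one-hot treatment model; the paper's actual criterion and its block/network-effect model matrix would replace these assumptions.

```python
import numpy as np

def a_criterion(X):
    """A-optimality value: trace of the (pseudo-)inverse information matrix (smaller is better)."""
    return np.trace(np.linalg.pinv(X.T @ X))

def exchange(design, n_treat, model_matrix, n_pass=10):
    """Greedy exchange: try re-assigning each unit's treatment, keep improving swaps.
    `model_matrix` builds the design matrix from the treatment assignment and
    could include block and network-effect columns."""
    best = a_criterion(model_matrix(design))
    for _ in range(n_pass):
        improved = False
        for i in range(len(design)):
            for t in range(n_treat):
                if t == design[i]:
                    continue
                trial = design.copy()
                trial[i] = t
                val = a_criterion(model_matrix(trial))
                if val < best:
                    design, best, improved = trial, val, True
        if not improved:
            break
    return design, best

# toy usage with a plain one-hot treatment model (no network effects)
def one_hot(design, n_treat=3):
    X = np.zeros((len(design), n_treat))
    X[np.arange(len(design)), design] = 1.0
    return X

rng = np.random.default_rng(0)
d_opt, crit = exchange(rng.integers(0, 3, size=12), n_treat=3, model_matrix=one_hot)
```

With the one-hot toy model the algorithm simply balances treatment group sizes, which is the expected A-optimal answer and a quick sanity check.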
An old neutron star (NS) may capture halo dark matter (DM) and get heated up by the deposited kinetic energy, thus behaving like a thermal DM detector with sensitivity to a wide range of DM masses and a variety of DM-quark interactions. Near future infrared telescopes will measure NS temperatures down to a few thousand Kelvin and probe NS heating by DM capture. We focus on GeV-mass Dirac fermion DM (which is beyond the reach of current DM direct detection experiments) in scenarios in which the DM capture rate can saturate the geometric limit. For concreteness, we study (1) a model that invokes dark decays of the neutron to explain the neutron lifetime anomaly, and (2) a framework of DM coupled to quarks through a vector current portal. In the neutron dark decay model a NS can have a substantial DM population, so that the DM capture rate can reach the geometric limit through DM self-interactions even if the DM-neutron scattering cross section is tiny. We find NS heating to have greater sensitivity than multi-pion signatures in large underground detectors for the neutron dark decay model, and sub-GeV gamma-ray signatures for the quark vector portal model.
high energy physics phenomenology
Galaxy clusters have long been theorised to quench the star formation of their members. This study uses integral-field unit observations from the $K$-band Multi-Object Spectrograph (KMOS) - Cluster Lensing And Supernova survey with Hubble (CLASH) survey (K-CLASH) to search for evidence of quenching in massive galaxy clusters at redshifts $0.3<z<0.6$. We first construct mass-matched samples of exclusively star-forming cluster and field galaxies, then investigate the spatial extent of their H$\alpha$ emission and study their interstellar medium conditions using emission line ratios. The average ratio of H$\alpha$ half-light radius to optical half-light radius ($r_{\rm{e},\rm{H}\alpha}/r_{\rm{e},R_c}$) for all galaxies is $1.14\pm0.06$, showing that star formation is taking place throughout stellar discs at these redshifts. However, on average, cluster galaxies have a smaller $r_{\rm{e},\rm{H}\alpha}/r_{\rm{e},R_c}$ ratio than field galaxies: $\langle r_{\rm{e},\rm{H}\alpha}/r_{\rm{e},R_c}\rangle = 0.96\pm0.09$ compared to $1.22\pm0.08$ (smaller at a 98\% credibility level). These values are uncorrected for the wavelength difference between H$\alpha$ emission and $R_c$-band stellar light, but implementing such a correction only reinforces our results. We also show that whilst the cluster and field samples follow indistinguishable mass-metallicity (MZ) relations, the residuals around the MZ relation of cluster members correlate with cluster-centric distance; galaxies residing closer to the cluster centre tend to have enhanced metallicities (significant at the 2.6$\sigma$ level). Finally, in contrast to previous studies, we find no significant differences in electron number density between the cluster and field galaxies. We use simple chemical evolution models to conclude that the effects of disc strangulation and ram-pressure stripping can quantitatively explain our observations.
astrophysics
We target the problem of estimating the center of mass of noisy 2-D images. We assume that the noise dominates the image, and thus many standard approaches are vulnerable to estimation errors. Our approach uses a surrogate function to the geometric median, which is a robust estimator of the center of mass. We mathematically analyze cases in which the geometric median fails to provide a reasonable estimate of the center of mass, and prove that our surrogate function leads to a successful estimate. One particular application for our method is to improve 3-D reconstruction in single-particle cryo-electron microscopy (cryo-EM). We show how to apply our approach for a better translational alignment of macromolecules picked from experimental data. In this way, we facilitate the succeeding steps of reconstruction and streamline the entire cryo-EM pipeline, saving valuable computational time and supporting resolution enhancement.
electrical engineering and systems science
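The baseline estimator discussed in the abstract above, the geometric median of intensity-weighted pixel coordinates, can be computed with the standard Weiszfeld iteration. This sketch shows that baseline; the paper's contribution is a smoothed surrogate that avoids the failure cases analyzed there, which this plain version does not include.

```python
import numpy as np

def weiszfeld(points, weights, n_iter=100, eps=1e-8):
    """Weighted geometric median via Weiszfeld's iteration."""
    y = np.average(points, axis=0, weights=weights)   # start at weighted mean
    for _ in range(n_iter):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)            # avoid division by zero at a data point
        w = weights / d
        y_new = (w[:, None] * points).sum(0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

# center-of-mass estimate of a noisy image: pixel coordinates weighted by intensity
img = np.abs(np.random.randn(64, 64))     # stand-in for a noisy cryo-EM picked particle
ii, jj = np.mgrid[0:64, 0:64]
pts = np.stack([ii.ravel(), jj.ravel()], 1).astype(float)
center = weiszfeld(pts, img.ravel())
```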
Time-to-contact (TTC), the time for an object to collide with the observer's plane, is a powerful tool for path planning: it is potentially more informative than the depth, velocity, and acceleration of objects in the scene -- even for humans. TTC presents several advantages, including requiring only a monocular, uncalibrated camera. However, regressing TTC for each pixel is not straightforward, and most existing methods make over-simplifying assumptions about the scene. We address this challenge by estimating TTC via a series of simpler, binary classifications. We predict with low latency whether the observer will collide with an obstacle within a certain time, which is often more critical than knowing exact, per-pixel TTC. For such scenarios, our method offers a temporal geofence in 6.4 ms -- over 25x faster than existing methods. Our approach can also estimate per-pixel TTC with arbitrarily fine quantization (including continuous values), when the computational budget allows for it. To the best of our knowledge, our method is the first to offer TTC information (binary or coarsely quantized) at sufficiently high frame-rates for practical use.
computer science
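A small sketch of how per-pixel TTC can be recovered from a stack of binary "collision within horizon" classifiers, as the abstract above describes; the decoding rule (smallest firing horizon) is an assumed, illustrative choice, not necessarily the paper's.

```python
import numpy as np

def ttc_from_binary_maps(prob_maps, thresholds, taus):
    """prob_maps: (K, H, W) outputs of K 'collision within tau_k' classifiers,
    with taus increasing. The quantized TTC at a pixel is the smallest horizon
    whose classifier fires; pixels where none fire get infinity."""
    fired = prob_maps > thresholds[:, None, None]
    first = fired.argmax(axis=0)              # index of first firing horizon
    taus = np.asarray(taus, dtype=float)
    return np.where(fired.any(axis=0), taus[first], np.inf)

# toy usage: 3 horizons on a 2x2 image
probs = np.random.default_rng(1).random((3, 2, 2))
print(ttc_from_binary_maps(probs, thresholds=np.full(3, 0.5), taus=[0.5, 1.0, 2.0]))
```

Using a single early horizon alone reproduces the low-latency "temporal geofence" use case; adding horizons refines the quantization at extra cost.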
Transient stability boundary (TSB) is an important tool in power system online security monitoring, but in practice it carries a high computational burden with state-of-the-art methods such as time-domain simulation (TDS), since numerous scenarios must be taken into account (e.g., operating points (OPs) and N-1 contingencies). The purpose of this work is to establish a data-driven framework to generate sufficient critical samples close to the boundary within a limited time, covering all critical scenarios for the current OP, so that an accurate TSB can be periodically refreshed by tracking the current OP in time. The idea is to develop a search strategy that obtains more data samples near the stability boundary, while traversing the rest of the space with fewer samples. To achieve this goal, a specially designed transient-index-sensitivity-based search strategy and a critical scenario selection mechanism are proposed, in order to find the most representative scenarios and periodically update the TSB for online monitoring. Two case studies validate the effectiveness of the proposed method.
electrical engineering and systems science
This article is an extended version of the minicourse given by the second author at the summer school of the conference "Interactions of quantum affine algebras with cluster algebras, current algebras and categorification", held in June 2018 in Washington. The aim of the minicourse, consisting of three lectures, was to present a number of results and conjectures on certain monoidal categories of finite-dimensional representations of quantum affine algebras, obtained by exploiting the fact that their Grothendieck rings have the natural structure of a cluster algebra.
mathematics
Modern mobile phones contain a three-axis microelectromechanical system (MEMS) gyroscope, capable of taking accurate measurements of the angular velocity along the three principal axes of the phone with a sampling rate of 100 Hz or better. If the phone is tossed in the air, then, neglecting air resistance, it is in free rotation (rotation in the absence of a torque) with respect to its centre of mass, and the phone's gyroscope can be used to record the rotational dynamics. This enables experimental investigation of free rotation. In this paper, we use a mobile phone to demonstrate the steady states for rotation of the phone about two of its principal axes, and the instability in rotation about the third corresponding to the intermediate moment of inertia. We also show the approximate conservation of angular momentum and rotational kinetic energy during motion in the air, and compare the data with numerical solution of Euler's equations for free rotation. Our results demonstrate the capability of smartphones for investigating free rotation, and should be of interest to college and university teachers developing "at home" physics labs for remote learning.
physics
We investigate the effect of mass-loading from embedded clouds on the evolution of supernova remnants and on the energy and momentum that they inject into an inhomogeneous interstellar medium. We use 1D hydrodynamical calculations and assume that the clouds are numerous enough that they can be treated in the continuous limit. The destruction of embedded clouds adds mass into the remnant, increasing its density and pressure, and decreasing its temperature. The remnant cools more quickly, is less able to do PdV work on the swept-up gas, and ultimately attains a lower final momentum (by up to a factor of two or more). We thus find that the injection of momentum is more sensitive to an inhomogeneous environment than previous work has suggested, and we provide fits to our results for the situation where the cloud mass is not limited. The behaviour of the remnant is more complex in situations where the cloud mass is finite and locally runs out. In the case of multiple supernovae in a clustered environment, later supernova explosions may encounter higher densities than previous explosions due to the prior liberation of mass from engulfed clouds. If the cloud mass is finite, later explosions may be able to create a sustained hot phase when earlier explosions have not been able to.
astrophysics
In this paper, we propose a novel model-free reinforcement learning algorithm to compute the optimal policies for a multi-agent system with $N$ cooperative agents, where each agent privately observes its own type and publicly observes the others' actions. The goal is to maximize their collective reward. The problem belongs to the broad class of decentralized control problems with partial information. We use the common agent approach, wherein a fictitious common agent picks the best policy based on a belief on the current states of the agents. These beliefs are updated individually for each agent from their current belief and action histories. Updating the belief states without knowledge of the system dynamics is a challenge. In this paper, we employ particle filters, called bootstrap filters, distributively across agents to update the belief. We provide a model-free reinforcement learning (RL) method for this multi-agent partially observable Markov decision process, using the particle filter and sampled trajectories to estimate the optimal policies for the agents. We showcase our results with the help of a smartgrid application where the users strive to reduce the collective cost of power for all the agents in the grid. Finally, we compare the performance of model-based and model-free implementations of the RL algorithm, establishing the effectiveness of the particle filter (PF) method.
electrical engineering and systems science
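A minimal sketch of the bootstrap (SIR) particle filter step behind the belief updates above; the scalar simulator and Gaussian observation likelihood are toy stand-ins for the application's dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter_step(particles, action, observation, step_sim, obs_lik):
    """One SIR update: propagate each particle through a sampled simulator,
    weight by the observation likelihood, then resample. Only samples of the
    dynamics are needed, which is what makes the approach model-free-friendly."""
    prop = np.array([step_sim(p, action) for p in particles])   # propagate
    w = np.array([obs_lik(observation, p) for p in prop])       # weight
    w = w / w.sum()
    idx = rng.choice(len(prop), size=len(prop), p=w)            # resample
    return prop[idx]

# toy usage: scalar random-walk state, Gaussian observation noise
step_sim = lambda s, a: s + a + rng.normal(0, 0.1)
obs_lik = lambda o, s: np.exp(-0.5 * (o - s) ** 2)
particles = rng.normal(0, 1, size=100)
particles = bootstrap_filter_step(particles, 0.5, 1.0, step_sim, obs_lik)
```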
The stable and accurate approximation of discontinuities such as shocks on a finite computational mesh is a challenging task. Detection of shocks or strong discontinuities in the flow solution is typically achieved through a priori troubled cell indicators, which guide the subsequent action of an appropriate shock capturing mechanism. Arriving at a stable and accurate solution often requires empirically based parameter tuning and adjustments of the indicator settings to the discretization and solution at hand. In this work, we propose to separate the tasks of shock detection and shock capturing more strongly, and aim to develop a shock indicator that is robust, accurate, requires minimal user input and is suitable for high order element-based methods like discontinuous Galerkin and flux reconstruction methods. The novel indicator is learned from analytical data through a supervised learning strategy; its input is given by the high order solution field, its output is an element-local map of the shock position. We use state-of-the-art methods from edge detection in image analysis based on deep convolutional multiscale networks and deep supervision to train the indicators. The resulting networks are then used as black box indicators, showing their robustness and accuracy on well-established canonical test cases. All simulations are run ab initio using the developed indicators, showing that they also provide stability during the strongly transient phases. In particular for high order schemes with large cells and considerable inner-cell resolution capabilities, we demonstrate how the additional accurate prediction of the position of the shock front can be exploited to guide inner-element shock capturing strategies.
mathematics
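A minimal sketch, with an assumed toy architecture, of a convolutional per-node shock indicator of the kind the abstract above describes; the paper's multiscale, deeply supervised edge-detection network is substantially larger, so this only illustrates the input/output contract.

```python
import torch
import torch.nn as nn

class ShockIndicator1D(nn.Module):
    """Toy indicator: input is the high-order solution sampled on the
    inner-element nodes, output is a per-node shock probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, u):          # u: (batch, 1, nodes_per_element)
        return self.net(u)

model = ShockIndicator1D()
u = torch.randn(4, 1, 8)           # nodal solution values of four elements
print(model(u).shape)              # (4, 1, 8): element-local shock-position map
```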
Cost-sensitive feature selection describes a feature selection problem, where features raise individual costs for inclusion in a model. These costs allow disfavored aspects of features, e.g. failure rates of a measuring device, or patient harm, to be incorporated into the model selection process. Random Forests define a particularly challenging problem for feature selection, as features are generally entangled in an ensemble of multiple trees, which makes a post hoc removal of features infeasible. Feature selection methods therefore often either focus on simple pre-filtering methods, or require many Random Forest evaluations along their optimization path, which drastically increases the computational complexity. To solve both issues, we propose Shallow Tree Selection, a novel fast and multivariate feature selection method that selects features from small tree structures. Additionally, we adapt three standard feature selection algorithms for cost-sensitive learning by introducing a hyperparameter-controlled benefit-cost ratio criterion (BCR) for each method. In an extensive simulation study, we assess this criterion, and compare the proposed methods to multiple performance-based baseline alternatives on four artificial data settings and seven real-world data settings. We show that all methods using a hyperparameterized BCR criterion outperform the baseline alternatives. In a direct comparison between the proposed methods, each method shows strengths in certain settings, but no one-size-fits-all solution exists. On a global average, we could identify preferable choices among our BCR-based methods. Nevertheless, we conclude that a practical analysis should never rely on a single method only, but always compare different approaches to obtain the best results.
statistics
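A sketch of a hyperparameter-controlled benefit-cost ratio of the kind the abstract above introduces; the multiplicative form below is an assumption for illustration, not necessarily the paper's exact definition.

```python
import numpy as np

def bcr_ranking(importance, cost, gamma):
    """Benefit-cost ratio with a tunable exponent gamma (assumed form):
    gamma = 0 ignores costs; larger gamma penalises costly features harder."""
    return importance / np.power(cost, gamma)

importance = np.array([0.40, 0.35, 0.15, 0.10])   # e.g. permutation importances
cost = np.array([1.0, 10.0, 1.0, 0.5])            # per-feature acquisition cost
for g in (0.0, 0.5, 1.0):
    order = np.argsort(-bcr_ranking(importance, cost, g))
    print(g, order)    # ranking shifts away from feature 1 as gamma grows
```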
In this study, we report the identification of a new ultraluminous X-ray source (ULX), named X-7, in NGC 1316, with an unabsorbed luminosity of 2.1$\times$10$^{39}$ erg s$^{-1}$, using the two recent Chandra archival observations. X-7 was detected in the Chandra 2001 observation and was included in the source list of NGC 1316 as CXOUJ032240.8-371224, with a luminosity of 5.7$\times$10$^{38}$ erg s$^{-1}$. The present luminosity implies an increase by a factor of $\sim$ 4. The best-fit spectral model parameters indicate that X-7 has a relatively hot disk and hard spectra. If explained by a disk blackbody model, the mass of the compact object is estimated as $\sim$ 8 M$\odot$, which is in the range of a stellar-mass black hole. X-7 shows a relatively long-term count rate variability, while no short-term variability is observed. We also identified a unique optical candidate within a 0.22" error circle at the 95\% confidence level for X-7, using the archival HST/ACS and HST/WFC3 data. The absolute magnitude (M$_{V}$) of this candidate is -7.8 mag. Its spectral energy distribution is adequately fitted by a blackbody model with a temperature of 3100 K, indicating an M-type supergiant, assuming the donor star dominates the optical emission. In addition, we identified a transient ULX candidate (XT-1), located 6" away from X-7, with a (high) luminosity of $\sim$ 10$^{39}$ erg s$^{-1}$ and no visible optical candidate.
astrophysics
We study systematically the decomposition of the Weinberg operator at three-loop order. There are more than four thousand connected topologies. However, the vast majority of these are infinite corrections to lower order neutrino mass diagrams and only a very small percentage yields models for which the three-loop diagrams are the leading order contribution to the neutrino mass matrix. We identify 73 topologies that can lead to genuine three-loop models with fermions and scalars, i.e. models for which lower order diagrams are automatically absent without the need to invoke additional symmetries. The 73 genuine topologies can be divided into two sub-classes: Normal genuine ones (44 cases) and special genuine topologies (29 cases). The latter are a special class of topologies, which can lead to genuine diagrams only for very specific choices of fields. The genuine topologies generate 374 diagrams in the weak basis, which can be reduced to only 30 distinct diagrams in the mass eigenstate basis. We also discuss how all the mass eigenstate diagrams can be described in terms of only five master integrals. We present some concrete models and for two of them we give numerical estimates for the typical size of neutrino masses they generate. Our results can be readily applied to construct other $d=5$ neutrino mass models with three loops.
high energy physics phenomenology
Object detection and motion parameter estimation are crucial tasks for safe self-driving vehicle navigation in a complex urban environment. In this work we propose a novel real-time approach to temporal context aggregation for motion detection and motion parameter estimation based on 3D point cloud sequences. We introduce an ego-motion compensation layer to achieve real-time inference with performance comparable to a naive odometric transform of the original point cloud sequence. Not only is the proposed architecture capable of estimating the motion of common road participants like vehicles or pedestrians, but it also generalizes to other object categories which are not present in the training data. We also conduct an in-depth analysis of different temporal context aggregation strategies such as recurrent cells and 3D convolutions. Finally, we provide comparison results of our state-of-the-art model with existing solutions on the KITTI Scene Flow dataset.
computer science
Epilepsy affects about 1% of the population every year, and is characterized by abnormal and sudden hyper-synchronous excitation of the neurons in the brain. The electroencephalogram (EEG) is the most widely used method to record brain signals and diagnose epilepsy and seizure cases. In this paper we use the method of Variational Mode Decomposition (VMD) in our analysis to classify seizure/seizure-free signals. This technique uses variational, non-recursive mode decomposition, in contrast to other methods like Empirical Mode Decomposition (EMD) and the Hilbert-Huang transform, which recursively decompose the signals, making them more susceptible to noise and sampling rate. VMD decomposes a signal into its components, which are called principal modes. In our analysis, four features of the decomposed signals, namely Renyi entropy, second-order difference plot (SODP), fourth-order difference plot (FODP) and average amplitude, are investigated, both individually and using a ranking methodology considering all four features at the same time. The SODP of decomposed signal modes is an elliptical structure. The 95% confidence ellipse area measured from the SODP of the decomposed signal modes has been used as a feature in order to discriminate seizure-free EEG signals from epileptic seizure EEG signals. For the classification, a Multilayer Perceptron (MLP) with the back propagation algorithm as the training method was used. A high percentage of accuracy was obtained when the features were used individually for classification, and an even higher degree of accuracy was obtained when the ranking methodology was used.
electrical engineering and systems science
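A sketch of the SODP ellipse-area feature mentioned above, using the 95% confidence ellipse formulation common in the EEG literature (assumed here to match the paper's usage).

```python
import numpy as np

def sodp_ellipse_area(x):
    """95% confidence ellipse area of the second-order difference plot of x,
    i.e. the scatter of y1 = x(n+1)-x(n) against y2 = x(n+2)-x(n+1)."""
    y1 = np.diff(x)[:-1]
    y2 = np.diff(x)[1:]
    s1 = np.mean(y1 ** 2)
    s2 = np.mean(y2 ** 2)
    s12 = np.mean(y1 * y2)
    d = np.sqrt((s1 + s2) ** 2 - 4.0 * (s1 * s2 - s12 ** 2))
    a = 1.7321 * np.sqrt(s1 + s2 + d)   # major semi-axis, 1.7321 ~ sqrt(3)
    b = 1.7321 * np.sqrt(s1 + s2 - d)   # minor semi-axis
    return np.pi * a * b

# applied per VMD mode, this area serves as one classifier input feature
print(sodp_ellipse_area(np.random.default_rng(0).standard_normal(4096)))
```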
In this paper we consider a static and regular fluid generating a locally spherically symmetric and time-independent space-time and calculate the leading quantum corrections to the metric to first order in curvature. Starting from a singularity free classical solution of general relativity, we show that singularities can be introduced in the curvature invariants by quantum gravitational corrections calculated using an effective field theory approach to quantum gravity. We identify non-trivial conditions that ensure that curvature invariants remain singularity free to leading order in the curvature expansion of the effective action.
high energy physics theory
We find a new contribution in wave-packet scatterings, which has been overlooked in the standard formulation of the S-matrix. As a concrete example, we consider a two-to-two scattering of light scalars $\phi$ by another intermediate heavy scalar $\Phi$, in the Gaussian wave-packet formalism: $\phi\phi\to\Phi\to\phi\phi$. This contribution can be interpreted as an "in-time-boundary effect" of $\Phi$ for the corresponding $\Phi\to\phi\phi$ decay, proposed by Ishikawa et al., with a newly found modification that would cure the previously observed ultraviolet divergence. We show that such an effect can be understood as a Stokes phenomenon in an integral over the complex energy plane: the number of relevant saddle points and Lefschetz thimbles (steepest descent paths) changes discretely depending on the configurations of the initial and final states in the scattering.
high energy physics theory
The German-Russian Astroparticle Data Life Cycle Initiative is an international project launched in 2018. The Initiative aims to develop technologies that provide a unified approach to data management, as well as to demonstrate their applicability on the example of two large astrophysical experiments - KASCADE and TAIGA. One of the key points of the project is the development of a distributed storage, which, on the one hand, will allow data of several experiments to be combined into a single repository with a unified interface, and on the other hand, will provide data to all participants of the experimental groups for multi-messenger analysis. Our approach to storage design is based on the single write-multiple read (SWMR) model for accessing raw or centrally processed data for further analysis. The main feature of the distributed storage is the ability to extract data either as a collection of files or as aggregated events from different sources. In the latter case the storage provides users with a special service that aggregates data from different storages into a single sample. Thanks to this feature, multi-messenger methods used for more sophisticated data exploration can be applied. Users can use both a Web interface and an Application Programming Interface (API) for accessing the storage. In this paper we describe the architecture of a distributed data storage for astroparticle physics and discuss the current status of our work.
computer science
The rank $n$ symplectic oscillator Lie algebra $\mathfrak{g}_n$ is the semidirect product of the symplectic Lie algebra $\mathfrak{sp}_{2n}$ and the Heisenberg Lie algebra $H_n$. In this paper, we study weight modules with finite dimensional weight spaces over $\mathfrak{g}_n$. When $\dot z\neq 0$, it is shown that there is an equivalence between the full subcategory $\mathcal{O}_{\mathfrak{g}_n}[\dot z]$ of the BGG category $\mathcal{O}_{\mathfrak{g}_n}$ for $\mathfrak{g}_n$ and the BGG category $\mathcal{O}_{\mathfrak{sp}_{2n}}$ for $\mathfrak{sp}_{2n}$. Then using the technique of localization and the structure of generalized highest weight modules, we also give the classification of simple weight modules over $\mathfrak{g}_n$ with finite-dimensional weight spaces.
mathematics
We present ALMA ~0.02"-resolution observations of the nucleus of the nearby (~14 Mpc) type-2 AGN NGC 1068 at HCN/HCO+/HNC J=3-2 lines, as well as at their 13C isotopologue and vibrationally excited lines, to scrutinize the morphological/dynamical/chemical/physical properties of dense molecular gas in the putative dusty molecular torus around a mass-accreting supermassive black hole. We confirm almost east-west-oriented dense molecular gas emission both morphologically and dynamically, which we regard as coming from the torus. Bright emission is compact (<3 pc), and low-surface-brightness emission extends out to 5-7 pc. These dense molecular gas properties are not symmetric between the eastern and western torus. The HCN J=3-2 emission is stronger than the HCO+ J=3-2 emission within the ~7 pc torus region, with an estimated dense molecular mass of (0.4-1.0)x10^6Msun. We interpret that HCN abundance is enhanced in the torus. We detect signatures of outflowing dense molecular gas and a vibrationally excited HCN J=3-2 line. Finally, we find that in the innermost (<1 pc) part of the torus, the dense molecular line rotation velocity, relative to the systemic velocity, is the opposite of that in the outer (>2 pc) part, in both the eastern and western torus. We prefer a scenario of counter-rotating dense molecular gas with innermost almost-Keplerian-rotation and outer slowly rotating (far below Keplerian) components. Our high-spatial-resolution dense molecular line data reveal that torus properties of NGC 1068 are much more complicated than the simple axi-symmetrically rotating torus picture in the classical AGN unification paradigm.
astrophysics
Cross validation is commonly used for selecting tuning parameters in penalized regression, but its use in penalized Cox regression models has received relatively little attention in the literature. Due to its partial likelihood construction, carrying out cross validation for Cox models is not straightforward, and there are several potential approaches for implementation. Here, we propose two new cross-validation methods for Cox regression and compare them to approaches that have been proposed elsewhere. Our proposed approach of cross-validating the linear predictors seems to offer an attractive balance of performance and numerical stability. We illustrate these advantages using simulated data, as well as by using the methods to analyze data from a high-dimensional study of survival in lung cancer patients.
statistics
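A minimal sketch of the cross-validated linear predictors approach favoured above, with Breslow's partial likelihood, ties ignored, and `fit_cox` as a placeholder for any penalized Cox solver returning coefficients.

```python
import numpy as np

def cox_partial_loglik(eta, time, event):
    """Breslow partial log-likelihood evaluated at linear predictors eta (ties ignored)."""
    order = np.argsort(time)
    eta, event = eta[order], event[order]
    shift = eta.max()                                   # for numerical stability
    z = np.exp(eta - shift)
    log_risk = np.log(np.cumsum(z[::-1])[::-1]) + shift # risk-set sums over time >= t_i
    return np.sum(event * (eta - log_risk))

def cv_linear_predictors(X, time, event, lam, fit_cox, n_folds=10):
    """Each subject's eta comes from a model fitted without that subject's fold;
    the full-data partial likelihood is then evaluated once at the assembled eta."""
    n = len(time)
    folds = np.arange(n) % n_folds
    eta = np.empty(n)
    for k in range(n_folds):
        beta = fit_cox(X[folds != k], time[folds != k], event[folds != k], lam)
        eta[folds == k] = X[folds == k] @ beta
    return cox_partial_loglik(eta, time, event)
```

The tuning parameter `lam` maximizing this criterion across a grid would then be selected, exactly as in ordinary cross validation.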
In this paper we propose four deep recurrent architectures to tackle the task of offensive tweet detection as well as further classification into targeting and subject of said targeting. Our architectures are based on LSTMs and GRUs, we present a simple bidirectional LSTM as a baseline system and then further increase the complexity of the models by adding convolutional layers and implementing a split-process-merge architecture with LSTM and GRU as processors. Multiple pre-processing techniques were also investigated. The validation F1-score results from each model are presented for the three subtasks as well as the final F1-score performance on the private competition test set. It was found that model complexity did not necessarily yield better results. Our best-performing model was also the simplest, a bidirectional LSTM; closely followed by a two-branch bidirectional LSTM and GRU architecture.
computer science
To reduce to resolving Cohen-Macaulay singularities, Faltings initiated the program of "Macaulayfying" a given Noetherian scheme $X$. For a wide class of $X$, Kawasaki built the sought Cohen-Macaulay modifications, with a crucial drawback that his blowups did not preserve the locus $\mathrm{CM}(X) \subset X$ where $X$ is already Cohen-Macaulay. We extend Kawasaki's methods to show that every quasi-excellent, Noetherian scheme $X$ has a Cohen-Macaulay $\widetilde{X}$ with a proper map $\widetilde{X} \rightarrow X$ that is an isomorphism over $\mathrm{CM}(X)$. This completes Faltings' program, reduces the conjectural resolution of singularities to the Cohen-Macaulay case, and implies that every proper, smooth scheme over a number field has a proper, flat, Cohen-Macaulay model over the ring of integers.
mathematics
In order to study the slope and strength of the non-stellar continuum, we analyzed a sample of nearby Narrow Line Seyfert 1 (NLS1) galaxies. Also, we re-examined the location of NLS1 galaxies on the $M-\sigma$ relation, using both the stellar velocity dispersion and the [OIII]$\lambda$5007 emission line, the latter as a surrogate of the former. We studied spectra of a sample of 131 NLS1 galaxies taken from the Sloan Digital Sky Survey (SDSS) DR7. We determined the non-stellar continuum by employing the spectral synthesis technique, which uses the code {\sc starlight}, and by adopting a power-law base to model the non-stellar continuum. Composite spectra of NLS1 galaxies were also obtained based on the sample. In addition, we obtained the stellar velocity dispersion from the code and by measuring Calcium II Triplet absorption lines and [OIII] emission lines. From a Gaussian decomposition of the H$\beta$ profile we calculated the black hole mass. We obtained a median slope of $\beta$ = $-$1.6 with a median fraction of contribution of the non-stellar continuum to the total flux of 0.64. We determined black hole masses in the range of log(M$_{BH}$/M$_{\odot}$) = 5.6 $-$ 7.5, which is in agreement with previous works. We found a correlation between the luminosity of the broad component of H$\beta$ and the black hole mass with the fraction of a power-law component. Finally, according to our results, NLS1 galaxies in our sample are located mostly underneath the $M_{BH}-\sigma_{\star}$ relation, both considering the stellar velocity dispersion ($\sigma_{\star}$) and the core component of [OIII]$\lambda$5007.
astrophysics
The recently discovered $4\sigma=360^{\circ}$ fourth-order particle resonance sets one of the fundamental operational limits for high-intensity linear accelerators. To mitigate this nonlinear space-charge-driven resonance and the subsequent envelope instabilities, we propose a novel approach using spinning beams with finite average canonical angular momentum. From the analytical and numerical simulation studies, we found that the spinning beams have an intrinsic characteristic that can suppress the impact of the fourth-order resonance on emittance growth and the associated envelope instability. We use initially well-matched Gaussian beams in a periodic solenoidal lattice, and consider stripping of $\rm H^-$ (or $\rm D^-$) beams by a thin foil inside a pair of solenoids placed before the main linac to inject the spinning beams.
physics
We study the twisted index of 4d $\mathcal{N}$ = 2 class S theories on a closed hyperbolic 3-manifold $M_3$. Via the 6d picture, the index can be written in terms of topological invariants called analytic torsions, twisted by irreducible flat connections on the 3-manifold. Using the topological expression, we determine the full perturbative 1/N expansion of the twisted index. The leading part nicely matches the Bekenstein-Hawking entropy of a magnetically charged black hole in the holographic dual $AdS_5$ with $AdS_2\times M_3$ near-horizon geometry.
high energy physics theory
It is widely acknowledged that the biomedical literature suffers from a surfeit of false positive results. Part of the reason for this is the persistence of the myth that observation of a p value less than 0.05 is sufficient justification to claim that you've made a discovery. It is hopeless to expect users to change their reliance on p values unless they are offered an alternative way of judging the reliability of their conclusions. If the alternative method is to have a chance of being adopted widely, it will have to be easy to understand and to calculate. One such proposal is based on calculation of the false positive risk. It is suggested that p values and confidence intervals should continue to be given, but that they should be supplemented by a single additional number that conveys the strength of the evidence better than the p value. This number could be the minimum false positive risk (that calculated on the assumption of a prior probability of 0.5, the largest value that can be assumed in the absence of hard prior data). Alternatively, one could specify the prior probability that it would be necessary to believe in order to achieve a false positive risk of, say, 0.05.
statistics
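For a worked example of the minimum false positive risk, one common calibration uses the Sellke-Berger bound on the Bayes factor; note that the article's own numbers come from a somewhat different "p-equals" calculation, so the sketch below is illustrative rather than a reproduction of it.

```python
import numpy as np

def min_false_positive_risk(p, prior=0.5):
    """Minimum FPR from the Sellke-Berger bound BF >= 1 / (-e * p * ln p),
    valid for p < 1/e; prior = 0.5 gives the 'minimum FPR' convention."""
    bf = 1.0 / (-np.e * p * np.log(p))      # bound on the Bayes factor for H1
    post_odds = bf * prior / (1.0 - prior)  # posterior odds of a real effect
    return 1.0 / (1.0 + post_odds)

print(min_false_positive_risk(0.05))    # ~0.29: p = 0.05 is weak evidence
print(min_false_positive_risk(0.001))   # ~0.02: much stronger evidence
```

The same relation can be inverted to find the prior probability needed for a target FPR, the alternative summary the abstract suggests.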
We study the $\Lambda_b(6146)^0$ and $\Lambda_b(6152)^0$ recently observed by LHCb using the method of QCD sum rules within the framework of heavy quark effective theory. Our results suggest that they can be interpreted as $D$-wave bottom baryons of $J^P = 3/2^+$ and $5/2^+$ respectively, both of which contain two $\lambda$-mode excitations. We also investigate other possible assignments containing $\rho$-mode excitations. We predict masses of their strangeness partners to be $m_{\Xi_b(3/2^+)} = 6.26^{+0.11}_{-0.14}$ GeV and $m_{\Xi_b(5/2^+)} = 6.26^{+0.11}_{-0.14}$ GeV with the mass splitting $\Delta M = 4.5^{+1.9}_{-1.5}$ MeV, and propose to search for them in future LHCb and CMS experiments.
high energy physics phenomenology
It is widely thought that small time steps lead to small numerical errors in finite-difference time-domain (FDTD) simulations. In this paper, we investigated how time steps affect the numerical dispersion of two FDTD methods, the FDTD(2,2) method and the FDTD(2,4) method. Through rigorous analytical and numerical analysis, it is found that small time steps in the FDTD methods do not always yield small numerical errors. Our findings reveal that these two FDTD methods present different behaviors with respect to time steps: (1) for the FDTD(2,2) method, smaller time steps limited by the Courant-Friedrichs-Lewy (CFL) condition increase numerical dispersion and lead to larger simulation errors; (2) for the FDTD(2,4) method, as the time step increases, numerical dispersion errors first decrease and then increase. Our findings are comprehensively validated in one- to three-dimensional cases through several numerical examples, including wave propagation, resonant frequencies of cavities and a practical electromagnetic compatibility (EMC) problem.
computer science
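The FDTD(2,2) behaviour reported above can be checked directly from the standard 1-D dispersion relation, $\sin^2(\omega\Delta t/2) = S^2 \sin^2(k\Delta x/2)$ with Courant number $S = c\Delta t/\Delta x$; the sketch below evaluates the normalised numerical phase velocity as $S$ shrinks.

```python
import numpy as np

def phase_velocity_ratio(S, N):
    """Normalised numerical phase velocity vp/c of 1-D FDTD(2,2) for
    Courant number S and N grid points per wavelength:
    vp/c = (pi/N) / arcsin( sin(pi*S/N) / S )."""
    return (np.pi / N) / np.arcsin(np.sin(np.pi * S / N) / S)

N = 20  # points per wavelength
for S in (1.0, 0.5, 0.25, 0.1):
    print(S, phase_velocity_ratio(S, N))
# At S = 1 the 1-D scheme is dispersion-free (ratio = 1); reducing the time
# step (smaller S) makes the ratio drift below 1, i.e. the error grows,
# which is the counterintuitive FDTD(2,2) trend the abstract reports.
```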
We present a distributed active subspace method for training surrogate models of complex physical processes with high-dimensional inputs and function valued outputs. Specifically, we represent the model output with a truncated Karhunen-Lo\`eve (KL) expansion, screen the structure of the input space with respect to each KL mode via the active subspace method, and finally form an overall surrogate model of the output by combining surrogates of individual output KL modes. To ensure scalable computation of the gradients of the output KL modes, needed in active subspace discovery, we rely on adjoint-based gradient computation. The proposed method combines benefits of active subspace methods for input dimension reduction and KL expansions used for spectral representation of the output field. We provide a mathematical framework for the proposed method and conduct an error analysis of the mixed KL active subspace approach. Specifically, we provide an error estimate that quantifies errors due to active subspace projection and truncated KL expansion of the output. We demonstrate the numerical performance of the surrogate modeling approach with an application example from biotransport.
physics
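A minimal sketch of active subspace discovery from gradient samples, the core ingredient described above; here a single scalar output stands in for one KL mode coefficient, and the gradients would come from the adjoint solver in the paper's setting.

```python
import numpy as np

def active_subspace(grad_samples, k):
    """Estimate the active subspace from (M, d) gradient samples:
    eigendecompose C = E[grad grad^T] and keep the top-k directions."""
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)
    idx = np.argsort(eigvals)[::-1]
    return eigvals[idx], eigvecs[:, idx[:k]]

# toy check: f(x) = (w @ x)^2 has a one-dimensional active subspace along w
d, M = 10, 500
rng = np.random.default_rng(0)
w = rng.standard_normal(d); w /= np.linalg.norm(w)
X = rng.standard_normal((M, d))
grads = (2 * (X @ w))[:, None] * w        # exact gradient of (w @ x)^2
vals, W = active_subspace(grads, k=1)
print(np.abs(W[:, 0] @ w))                # ~1: the direction is recovered
```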
In the present paper we study the instability caused by the velocity shear flows of the great red spot of Jupiter. For this purpose, we employ the Navier-Stokes and continuity equations, perform a linear analysis of the governing equations and numerically solve them. We have considered two different regimes: exponential and harmonic behaviour of the wave vectors. It has been shown that both scenarios reveal the unstable character of the flows, and the corresponding growth rate might be considerably high. For the case of harmonic time dependence of the wave vectors we also found a beat-like solution for sound waves.
astrophysics
The Seyfert galaxy Mrk 335 is known for the frequent changes of flux and spectral shape in the X-ray band that have occurred in recent years. These variations may be explained by the onset of a wind that previous, non-contemporaneous high-resolution spectroscopy in the X-ray and UV bands located at accretion-disc scales. A simultaneous new campaign by XMM-Newton and HST caught the source at a historically low flux in the X-ray band. The soft X-ray spectrum is dominated by prominent emission features, and by the effect of a strong ionized absorber with an outflow velocity of 5-6X10$^3$~km~s$^{-1}$. The broadband spectrum obtained by the EPIC-pn camera reveals the presence of an additional layer of absorption by gas at moderate ionization covering 80% of the central source, and tantalizing evidence for absorption in the Fe~K band outflowing at the same velocity as the soft X-ray absorber. The HST-COS spectra confirm the simultaneous presence of broad absorption troughs in CIV, Ly alpha, Ly beta and OVI, with velocities of the order of 5000 km~s$^{-1}$ and covering factors in the range of 20-30%. Comparison of the ionic column densities and of other outflow parameters in the two bands shows that the X-ray and UV absorbers likely originate from the same gas. The resulting picture from this latest multi-wavelength campaign confirms that Mrk 335 undergoes the effect of a patchy, medium-velocity outflowing gas in a wide range of ionization states that seems to be persistently obscuring the nuclear continuum.
astrophysics
An experimental and computational investigation of the space-charge effects occurring in ultrafast photoelectron spectroscopy from the gas phase is presented. The target sample CF$_3$I is excited by ultrashort (100 fs) far-ultraviolet radiation pulses produced by a free-electron laser. The modification of the energy distribution of the photoelectrons, i.e. the shift and broadening of the spectral structures, is monitored as a function of the pulse intensity. A novel computational approach is presented in which a survey spectrum acquired at low radiation fluence is used to determine the initial energy distribution of the electrons after the photoemission event. The spectrum modified by the space-charge effects is then reproduced by $N$-body calculations that simulate the dynamics of the photoelectrons subject to the mutual Coulomb repulsion and to the attractive force of the positive ions. The employed numerical method makes it possible to reproduce the complete photoelectron spectrum and not just a specific photoemission structure. The simulations also provide information on the time evolution of the space-charge effects on the picosecond scale. Differences with the case of photoemission from solid samples are highlighted and discussed. The presented simulation procedure constitutes an effective tool to predict and account for space-charge effects in time-resolved photoemission experiments with high-intensity pulsed sources.
physics
We present the galaxy power spectrum in general relativity. Using a novel approach, we derive the galaxy power spectrum taking into account all the relativistic effects in observations. In particular, we show independently of survey geometry that relativistic effects yield no divergent terms (proportional to $k^{-4}P_m(k)$ or $k^{-2}P_m(k)$ on all scales) that would mimic the signal of primordial non-Gaussianity. This cancellation of such divergent terms is indeed expected from the equivalence principle, meaning that any perturbation acting as a uniform gravity on the scale of the experiment cannot be measured. We find that the unphysical infrared divergence obtained in previous calculations occurred only due to not considering all general relativistic contributions consistently. Despite the absence of divergent terms, general relativistic effects represented by non-divergent terms alter the galaxy power spectrum at large scales (smaller than the horizon scale). In our numerical computation of the full galaxy power spectrum, we show the deviations from the standard redshift-space power spectrum due to these non-divergent corrections. We conclude that, as relativistic effects significantly alter the galaxy power spectrum at $k\lesssim k_{eq}$, they need to be taken into account in the analysis of large-scale data.
astrophysics
This paper proposes an efficient numerical integration formula to compute the normalizing constant of Fisher--Bingham distributions. This formula applies a numerical integration formula with the continuous Euler transform to a Fourier-type integral representation of the normalizing constant. As this method is fast and accurate, it can be applied to the calculation of the normalizing constant of high-dimensional Fisher--Bingham distributions. More precisely, the error decays exponentially with an increase in the number of integration points, and the computation cost increases linearly with the dimension. In addition, this formula is useful for calculating the gradient and Hessian matrix of the normalizing constant. Therefore, we apply this formula to efficiently calculate the maximum likelihood estimate (MLE) for high-dimensional data. Finally, we apply the MLE to the hyperspherical variational auto-encoder (S-VAE), a deep-learning-based generative model that restricts the latent space to a unit hypersphere. We use the S-VAE trained with images of handwritten numbers to estimate the distributions of each label. This application is useful for adding new labels to the models.
statistics
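As a reference for the quantity being computed above, the following Monte Carlo sketch evaluates the Fisher-Bingham normalizing constant on the unit sphere directly; it is not the paper's Euler-transform quadrature, just a slow baseline useful for cross-checking a fast implementation.

```python
import math
import numpy as np

def fb_normalizing_constant_mc(theta, A, n=200_000, seed=0):
    """Monte Carlo estimate of c = integral over S^{d-1} of
    exp(theta.x + x^T A x) dx, for the Fisher-Bingham density."""
    rng = np.random.default_rng(seed)
    d = len(theta)
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # uniform on the sphere
    vals = np.exp(x @ theta + np.einsum("ni,ij,nj->n", x, A, x))
    area = 2 * math.pi ** (d / 2) / math.gamma(d / 2)  # surface area of S^{d-1}
    return area * vals.mean()

d = 4
print(fb_normalizing_constant_mc(np.full(d, 0.5), -np.eye(d)))
```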
PolarLight is a compact soft X-ray polarimeter onboard a CubeSat, which was launched into a low-Earth orbit on October 29, 2018. In March 2019, PolarLight started full operation, and since then, regular observations with the Crab nebula, Sco X-1, and background regions have been conducted. Here we report the operation, calibration, and performance of PolarLight in the orbit. Based on these, we discuss how one can run a low-cost, shared CubeSat for space astronomy, and how CubeSats can play a role in modern space astronomy for technical demonstration, science observations, and student training.
astrophysics
Four decades ago, the development of high-current superconducting NbTi wire cables revolutionized the magnet technology for energy-frontier accelerators, such as the Tevatron, RHIC and the LHC. The NbTi-based magnets offered the advantages of much higher fields $B$ and much lower electric wall-plug power consumption when operated at 4.5 K, but only relatively small ramping rates $dB/dt \ll 0.1$ T/s. The need for accelerators with high average beam power and high repetition rates has motivated studies of fast-ramping SC magnets, but it was found that the AC losses in the low-temperature superconductors preclude rates in excess of (1-4) T/s. Here we report the first application of high-temperature superconductor magnet technology, with substantially lower AC losses, and record-high ramping rates of 12 T/s achieved in a prototype dual-aperture accelerator magnet.
physics
Recently, topography change by illumination of pre-stretched, flat sheets covered in ink of optical density varying in-plane has been demonstrated by Mailen \textit{et al.}, Smart Materials and Structures, 2019. They reduce an analysis of the problem to one of metric change in the sheets in the thin limit, that is, to a question of geometry. We present the explicit form of the contraction field needed to produce the bowls these authors were interested in, using a method that can also yield the contraction field for more general desired, circularly symmetric topography development. We give as examples the fields required for developing paraboloids and catenoids.
condensed matter
We would like robots to achieve purposeful manipulation by placing any instance from a category of objects into a desired set of goal states. Existing manipulation pipelines typically specify the desired configuration as a target 6-DOF pose and rely on explicitly estimating the pose of the manipulated objects. However, representing an object with a parameterized transformation defined on a fixed template cannot capture large intra-category shape variation, and specifying a target pose at a category level can be physically infeasible or fail to accomplish the task -- e.g. knowing the pose and size of a coffee mug relative to some canonical mug is not sufficient to successfully hang it on a rack by its handle. Hence we propose a novel formulation of category-level manipulation that uses semantic 3D keypoints as the object representation. This keypoint representation enables a simple and interpretable specification of the manipulation target as geometric costs and constraints on the keypoints, which flexibly generalizes existing pose-based manipulation methods. Using this formulation, we factor the manipulation policy into instance segmentation, 3D keypoint detection, optimization-based robot action planning and local dense-geometry-based action execution. This factorization allows us to leverage advances in these sub-problems and combine them into a general and effective perception-to-action manipulation pipeline. Our pipeline is robust to large intra-category shape variation and topology changes as the keypoint representation ignores task-irrelevant geometric details. Extensive hardware experiments demonstrate our method can reliably accomplish tasks with never-before seen objects in a category, such as placing shoes and mugs with significant shape variation into category level target configurations.
computer science
Near-term quantum devices can be used to build quantum machine learning models, such as quantum kernel methods and quantum neural networks (QNNs), to perform classification tasks. There have been many proposals for how to use variational quantum circuits as quantum perceptrons or as QNNs. The aim of this work is to systematically compare different QNN architectures and to evaluate their relative expressive power with a teacher-student scheme. Specifically, the teacher model generates the datasets mapping random inputs to outputs, which then have to be learned by the student models. This way, we avoid training on arbitrary data sets and can compare the learning capacity of different models directly via the loss, the prediction map, the accuracy and the relative entropy between the prediction maps. We focus particularly on a quantum perceptron model inspired by the recent work of Tacchino et al. \cite{Tacchino1} and compare it to the data re-uploading scheme that was originally introduced by P\'erez-Salinas et al. \cite{data_re-uploading}. We discuss alterations of the perceptron model and the formation of deep QNNs to better understand the role of hidden units and non-linearities in these architectures.
quantum physics
The circuit model of quantum computation is reformulated as a multilayer network theory [3] called a Quantum Multiverse Network (QuMvN). The QuMvN formulation allows us to interpret the quantum wave function as a combination of ergodic Markov chains, where each Markov chain occupies a different layer in the QuMvN structure. Layers of a QuMvN are separable components of the corresponding wave function. Single-qubit measurement is defined as a state transition of the Markov chain that emits either a $0$ or $1$, making each layer of the QuMvN a discrete information source. A message is equivalent to a possible measurement outcome and the message length is the number of qubits. Therefore, the quantum wave function can be treated as a combination of multiple discrete information sources, analogous to what Shannon called a "mixed" information source [18]. We show the QuMvN model has significant advantages in the classical simulation of some quantum circuits by implementing quantum gates as edge transformations on the QuMvNs. We implement a quantum virtual machine capable of simulating quantum circuits using the QuMvN model and use our implementation to classically simulate Shor's algorithm [19]. We present results from multiple simulations of Shor's algorithm, culminating in a $70$-qubit simulation of Shor's algorithm on a commodity cloud server with $96$ CPUs and $624$ GB of RAM. Lastly, the source of quantum speedups is discussed in the context of layers in the QuMvN framework and how randomized algorithms can push the quantum supremacy boundary.
quantum physics
The choice of poses for camera calibration with planar patterns is only rarely considered, yet the calibration precision heavily depends on it. This work presents a pose selection method that finds a compact and robust set of calibration poses and is suitable for interactive calibration. Consequently, singular poses that would lead to an unreliable solution are avoided explicitly, while poses reducing the uncertainty of the calibration are favoured. For this, we use uncertainty propagation. Our method takes advantage of a self-identifying calibration pattern to track the camera pose in real-time. This allows the user to be guided iteratively to the target poses, until the desired quality level is reached. Therefore, only a sparse set of key-frames is needed for calibration. The method is evaluated on separate training and testing sets, as well as on synthetic data. Our approach performs better than comparable solutions while requiring 30% fewer calibration frames.
computer science
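A sketch of greedy, uncertainty-driven pose selection in the spirit of the method above; the score (trace of the inverse approximate information matrix) and the `jacobian_of` helper are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def next_best_pose(candidate_poses, jacobians_so_far, jacobian_of):
    """Choose the candidate pose that most reduces the propagated calibration
    uncertainty. `jacobian_of(pose)` maps a pose to the Jacobian of the
    reprojection residuals w.r.t. the intrinsic parameters (placeholder)."""
    J0 = np.vstack(jacobians_so_far)
    best_pose, best_score = None, np.inf
    for pose in candidate_poses:
        J = np.vstack([J0, jacobian_of(pose)])
        # ridge term guards against the singular poses the abstract mentions
        score = np.trace(np.linalg.inv(J.T @ J + 1e-9 * np.eye(J.shape[1])))
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```

Iterating this selection until the score stops improving yields the sparse set of key-frames the interactive procedure needs.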
This paper is devoted to the investigation of inverse problems related to stationary drift-diffusion equations modeling semiconductor devices. In this context we analyze several identification problems corresponding to different types of measurements, where the parameter to be reconstructed is an inhomogeneity in the PDE model (the doping profile). For a particular type of measurement (related to the voltage-current map) we consider special cases of drift-diffusion equations, where the inverse problem reduces to a classical inverse conductivity problem. A numerical experiment is presented for one of these special situations (the linearized unipolar case).
mathematics
The drone cell (DC) is an emerging technique to offer flexible and cost-effective wireless connections for collecting Internet-of-things (IoT) data in areas uncovered by terrestrial networks. The flying trajectory of the DC significantly impacts the data collection performance. However, designing the trajectory is a challenging issue due to the complicated 3D mobility of DCs, unique DC-to-ground (D2G) channel features, limited DC-to-BS (D2B) backhaul link quality, etc. In this paper, we propose a 3D DC trajectory design for DC-assisted IoT data collection, where multiple DCs periodically fly over IoT devices and relay the IoT data to the base stations (BSs). The trajectory design is formulated as a mixed integer non-linear programming (MINLP) problem to minimize the average user-to-DC (U2D) pathloss, considering a state-of-the-art practical D2G channel model. We decouple the MINLP problem into multiple quasi-convex or integer linear programming (ILP) sub-problems, which optimize the user association, user scheduling, horizontal trajectories and flying altitudes of the DCs, respectively. Then, a 3D multi-DC trajectory design algorithm is developed to solve the MINLP problem, in which the sub-problems are optimized iteratively through the block coordinate descent (BCD) method. Compared with a static DC deployment, the proposed trajectory design can lower the average U2D pathloss by 10-15 dB, and reduce the standard deviation of the U2D pathloss by 56%, which indicates improvements in both link quality and user fairness.
computer science
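A skeleton of the block coordinate descent loop described above; the block solvers and the `state` object are placeholders standing in for the paper's four sub-problem optimizers and its problem data.

```python
def bcd_trajectory_design(state, solvers, max_iter=50, tol=1e-3):
    """Block coordinate descent: each solver updates one block (e.g. user
    association, scheduling, horizontal trajectory, altitude) with the others
    fixed; iterate until the objective (average U2D pathloss) stops improving.
    `state` and the entries of `solvers` are placeholders."""
    prev = float("inf")
    for _ in range(max_iter):
        for solve_block in solvers:        # e.g. [assoc, sched, horiz, alt]
            state = solve_block(state)
        loss = state.average_pathloss()
        if prev - loss < tol:
            break
        prev = loss
    return state
```

Because each block sub-problem is quasi-convex or ILP, every pass can only improve the objective, which is what makes the iteration converge in practice.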
Since the start of the gravitational wave observation era, no joint high energy neutrino and gravitational wave event has been found. These non-detections can be used to set upper bounds on the neutrino emission properties of gravitational wave events, either individually or for a set of them. Although upper limits were reported in previous joint high energy neutrino and gravitational wave event searches, a consistent method for their calculation has been lacking. The problem addressed in this paper is finding such limits for astrophysical events that are poorly localized in the sky, where the sensitivities of the neutrino detectors change significantly, and that can also emit neutrinos, for example the gravitational wave detections. Here we describe methods, based on maximum likelihood estimators, for assigning limits on the expected neutrino count, the emission fluence and the isotropically equivalent emission. Then we apply the described methods to the three GW detections from aLIGO's first observing run (O1) and find upper limits for them.
astrophysics
This paper generalises the exponential family GLM to allow arbitrary distributions for the response variable. This is achieved by combining the model-assisted regression approach from survey sampling with the GLM scoring algorithm, weighted by random draws from the posterior Dirichlet distribution of the support point probabilities of the multinomial distribution. The generalisation provides fully Bayesian analyses from the posterior sampling, without MCMC. Several examples are given, based on published GLM data sets. The approach can be extended widely: an example of a GLMM extension is given.
statistics
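A minimal sketch of the Dirichlet-weighted scoring idea above, in its Bayesian-bootstrap form, using statsmodels' weighted GLM fit as a stand-in for the paper's implementation (the `var_weights` argument is used here as observation weights).

```python
import numpy as np
import statsmodels.api as sm

def dirichlet_weighted_glm(X, y, family, n_draws=1000, seed=0):
    """One weighted GLM fit per posterior Dirichlet draw on the observations,
    yielding a posterior sample of coefficients with no MCMC."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = []
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(n)) * n      # mean-1 posterior weights
        fit = sm.GLM(y, X, family=family, var_weights=w).fit()
        draws.append(fit.params)
    return np.array(draws)

# usage sketch for a Poisson regression:
#   X = sm.add_constant(x)
#   coef_draws = dirichlet_weighted_glm(X, y, sm.families.Poisson())
```

Posterior summaries (means, credible intervals) then come directly from the columns of `coef_draws`.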
One of the characteristic features of many marine dinoflagellates is their bioluminescence, which lights up nighttime breaking waves or seawater sliced by a ship's prow. While the internal biochemistry of light production by these microorganisms is well established, the manner by which fluid shear or mechanical forces trigger bioluminescence is still poorly understood. We report controlled measurements of the relation between mechanical stress and light production at the single-cell level, using high-speed imaging of micropipette-held cells of the marine dinoflagellate $Pyrocystis~lunula$ subjected to localized fluid flows or direct indentation. We find a viscoelastic response in which light intensity depends on both the amplitude and rate of deformation, consistent with the action of stretch-activated ion channels. A phenomenological model captures the experimental observations.
condensed matter
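A phenomenological sketch consistent with the amplitude- and rate-dependence described above; the Kelvin-Voigt-like form and all parameter values are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def light_response(strain, dt, k=1.0, eta=0.5, threshold=0.2):
    """Viscoelastic trigger sketch: emitted intensity grows with the part of
    the elastic-plus-viscous stress exceeding a threshold, so the response
    depends on both deformation amplitude and deformation rate, as expected
    for stretch-activated ion channels."""
    rate = np.gradient(strain, dt)
    stress = k * strain + eta * rate
    return np.maximum(stress - threshold, 0.0)

t = np.linspace(0, 1, 1000)
slow = 0.5 * t                      # same final amplitude, slow deformation
fast = 0.5 * np.clip(t * 10, 0, 1)  # same amplitude applied ten times faster
dt = t[1] - t[0]
print(light_response(slow, dt).max(), light_response(fast, dt).max())
# the fast deformation elicits a much larger peak response
```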