Motivated by recent experiments on $^{6}$Li-$^{133}$Cs atomic mixtures with high mass imbalance, we study the Efimov correlation in an atomic system of two heavy bosons ($^{133}$Cs) immersed in a bath of light fermions ($^{6}$Li). Using the Born-Oppenheimer approximation, we identify two different regimes, depending on the Fermi momentum of the light fermions ($k_F$) and the boson-fermion scattering length $a_s(<0)$, in which the underlying Fermi sea plays distinct roles in the Efimov-type binding of bosons. Namely, in the regime $k_F|a_s|\lesssim 1$ ($k_F|a_s|\gtrsim 1$), the Fermi sea induces an attractive (repulsive) effective interaction between the bosons and thus favors (disfavors) the formation of a bound state, which can be seen as an Efimov trimer dressed by the fermion cloud. Interestingly, this implies a non-monotonic behavior of these bound states as the fermion density (or $k_F$) increases. Moreover, we establish a generalized universal scaling law for the emergence/variation of such dressed Efimov bound states when incorporating the new scale ($k_F$) brought by the Fermi sea. These results can be directly tested in Li-Cs cold-atom experiments by measuring the modified bound-state spectrum and the shifted Efimov resonance, which manifest an emergent non-trivial Efimov correlation in a fermionic many-body environment.
condensed matter
We provide a necessary and sufficient condition for the metastability of a Markov chain, expressed in terms of a property of the solutions of the resolvent equation. As an application of this result, we prove the metastability of reversible, critical zero-range processes starting from a configuration.
mathematics
We determine the optimal method of discriminating and comparing quantum states from a certain class of multimode Gaussian states and their mixtures when arbitrary global Gaussian operations and general Gaussian measurements are allowed. We consider the so-called constant-$\hat{p}$ displaced states which include mixtures of multimode coherent states arbitrarily displaced along a common axis. We first show that no global or local Gaussian transformations or generalized Gaussian measurements can lead to a better discrimination method than simple homodyne measurements applied to each mode separately and classical postprocessing of the results. This result is applied to binary state comparison problems. We show that homodyne measurements, separately performed on each mode, are the best Gaussian measurement for binary state comparison. We further compare the performance of the optimal Gaussian strategy for binary coherent-state comparison with that of non-Gaussian strategies using photon detection.
quantum physics
Recent numerical work has shown that high-speed confined granular flows down inclines exhibit a rich variety of flow patterns, including dense unidirectional flows, flows with longitudinal vortices, and supported flows characterized by a dense core surrounded by a dilute hot granular gas (Brodu et al., JFM, 2015). Here, we revisit the results obtained by Brodu et al. (JFM, 2015) and present new features characterizing these flows. In particular, we provide vertical and transverse profiles for the packing fraction, velocity and granular temperature. We also characterize carefully the transition between the different flow regimes and show that the packing fraction and the vorticity can be successfully used to describe these transitions. Additionally, we emphasize that the effective friction at the basal and side walls can be described by a unique function of a dimensionless number which is the analog of a Froude number: $Fr=V/\sqrt{gH\cos \theta}$, where $V$ is the particle velocity at the walls, $\theta$ is the inclination angle and $H$ the particle holdup (defined as the depth-integrated particle volume fraction). This universal function bears some similarities with the $\mu(I)$ rheological curve derived for dense granular flows.
condensed matter
Free space optical communication has been applied in many scenarios because of its security, low cost and high rates. In such scenarios, a tracking system is necessary to ensure an acceptable signal power. Free space optical links were long considered unable to support optical mobile communication when nodes move randomly at high speed, because existing tracking schemes fail to track the nodes accurately and rapidly. In this paper, we propose a novel tracking system exploiting multiple beacon laser sources. At the receiver, each beacon laser's power is measured to estimate the orientation of the target. Unlike existing schemes, which drive servo motors multiple times based on consecutive measurements and feedback, our scheme can directly estimate the next optimal targeting shift for the servo motors based on a single measurement, allowing the tracking system to converge much faster. A closed-form outage probability expression is derived for the optical mobile communication system with ideal tracking, where pointing errors and movement statistics are considered. To maintain sufficient average power and reduce the outage probability, the recommended size of a source spot is expressed in closed form as a function of the target's random-movement statistics, providing insights for the system design.
electrical engineering and systems science
In this letter we show that it is not possible to set up a canonical quantization for the damped harmonic oscillator using the Bateman Lagrangian. In particular, we prove that no square integrable vacuum exists for the {\em natural} ladder operators of the system, and that the only vacua can be found as distributions. This implies that the procedure proposed by some authors is only formally correct, and requires a much deeper analysis to be made rigorous.
quantum physics
We solve the linearised Vlasov-Fokker-Planck (VFP) equation to show that heat flow or an electrical current in a magnetized collisional plasma is unstable to the growth of a circularly polarised transverse perturbation to a zeroth order uniform magnetic field. The Braginskii (1965) transport equations exhibit the same instability in the appropriate limit. This is relevant to laser-produced plasmas, inertial fusion energy (IFE) and to dense cold interstellar plasmas.
physics
As the inevitable development trend of quantum key distribution, quantum networks have attracted extensive attention, and many prototypes have been deployed over recent years. Existing quantum networks based on optical fibers or quantum satellites can realize metropolitan and even global communication. However, for application scenarios that require emergency or temporary communications, such networks cannot meet the requirements of rapid deployment, low cost, and high mobility. To solve this problem, we introduce an important concept from classical networks, i.e., self-organization, to quantum networks, and give two simple network prototypes based on acquisition, tracking, and pointing systems. In these networks, the users only need to deploy the network nodes, which will rapidly, automatically, and adaptively organize and manage a quantum network. Our method expands the application scope of quantum networks, and gives a new approach for the design and implementation of quantum networks. It also provides users with a low-cost access network solution, and can be used to fundamentally solve the security problem of classical self-organizing networks.
quantum physics
We construct black hole solutions in four-dimensional quadratic gravity, supported by a scalar field conformally coupled to quadratic terms in the curvature. The conformal matter Lagrangian is constructed with powers of traces of a conformally covariant tensor, which is defined in terms of the metric and a scalar field, and has the symmetries of the Riemann tensor. We find exact, neutral and charged, topological black hole solutions of this theory when the Weyl squared term is absent from the action functional. Including terms beyond quadratic order in the conformally covariant tensor allows for asymptotically de Sitter solutions, with a potential that is bounded from below. For generic values of the couplings we also show that static black hole solutions must have a constant Ricci scalar, and provide an analysis of the possible asymptotic behavior of both the metric and the scalar field in the asymptotically AdS case, when the solutions match those of general relativity in vacuum at infinity. In this frame, the spacetime fulfils standard asymptotically AdS boundary conditions, and in spite of the non-standard couplings between the curvature and the scalar field, there is a family of black hole solutions in AdS that can be interpreted as localized objects. We also provide further comments on the extension of these results to higher dimensions.
high energy physics theory
We have designed and tested a parallel 8-bit ERSFQ arithmetic logic unit (ALU). The ALU design employs wave-pipelined instruction execution and features a modular bit-slice architecture that is easily extendable to any number of bits and adaptable to current recycling. A carry signal synchronized with an asynchronous instruction propagation provides the wave-pipeline operation of the ALU. The ALU instruction set consists of 14 arithmetical and logical instructions. It has been designed and simulated for operation up to a 10 GHz clock rate for the 10-kA/cm2 fabrication process. The ALU is embedded into a shift-register-based high-frequency testbed with an on-chip clock generator to allow for comprehensive high-frequency testing for all possible operands. The 8-bit ERSFQ ALU, comprising 6840 Josephson junctions, has been fabricated with the MIT Lincoln Lab 10-kA/cm2 SFQ5ee fabrication process featuring eight Nb wiring layers and a high-kinetic-inductance layer needed for ERSFQ technology. We evaluated the bias margins for all instructions and various operands at both low and high clock frequencies. At low frequency, clock and instruction propagation through the ALU were observed with bias margins of +/-11% and +/-9%, respectively. Also at low speed, the ALU exhibited correct functionality for all arithmetical and logical instructions with +/-6% bias margins. We tested the 8-bit ALU for all instructions up to a 2.8 GHz clock frequency.
computer science
The bright emission from high-redshift quasars completely conceals their host galaxies in the rest-frame UV/optical, with detection of the hosts in these wavelengths eluding even the Hubble Space Telescope (HST) using detailed point spread function (PSF) modelling techniques. In this study we produce mock images of a sample of z~7 quasars extracted from the BlueTides simulation, and apply Markov Chain Monte Carlo-based PSF modelling to determine the detectability of their host galaxies with the James Webb Space Telescope (JWST). While no statistically significant detections are made with HST, we find that at the same wavelengths and exposure times JWST NIRCam imaging will detect ~50% of quasar host galaxies. We investigate various observational strategies, and find that NIRCam wide-band imaging in the long-wavelength filters results in the highest fraction of successful quasar host detections, detecting >80% of the hosts of bright quasars in exposure times of 5 ks. Exposure times of ~5 ks are required to detect the majority of host galaxies in the NIRCam wide-band filters; however, even 10 ks exposures with MIRI result in <30% successful host detections. We find no significant trends between galaxy properties and their detectability. The PSF modelling underestimates the Sersic magnitudes of the host galaxies due to residual flux from the quasar contaminating the central core, which also results in a slight underestimation of the Sersic radii and a significant overestimation of the Sersic indices n. Care should be taken when interpreting the host properties measured using PSF modelling.
astrophysics
We study the effects of gravitationally-driven decoherence on tunneling processes associated with false vacuum decays, such as the Coleman--De~Luccia instanton. We compute the thermal graviton-induced decoherence rate for a wave function describing a perfect fluid of nonzero energy density in a finite region. When the effective cosmological constant is positive, the thermal graviton background sourced by a de Sitter horizon provides an unavoidable decoherence effect, which may have important consequences for tunneling processes in cosmological history. We discuss generalizations and consequences of this effect and comment on its observability and applications to black hole physics.
high energy physics theory
We consider the scotogenic model, where the standard model (SM) is extended by a scalar doublet and three SM-singlet fermions ($N_i$, $i=1,2,3$), all odd under an additional $Z_2$ symmetry, as a unifying framework for the simultaneous explanation of inflation, dark matter, baryogenesis and neutrino mass. The inert doublet is coupled nonminimally to gravity and forms the inflaton. The lightest neutral particle of this doublet later becomes the dark matter candidate. Baryogenesis is achieved via leptogenesis through the decay of $N_1$ to SM leptons and the inert doublet particles. Neutrino masses are generated at the one-loop level. Explaining all these phenomena together in one model is very economical and gives us a new set of constraints on the model parameters. We calculate the inflationary parameters, such as the spectral index, tensor-to-scalar ratio and scalar power spectrum, and find them to be consistent with the Planck 2018 constraints. We also perform the reheating analysis for the inert doublet decays/annihilations into relativistic SM particles. We find that the observed baryon asymmetry of the Universe can be obtained and the bound on the sum of light neutrino masses can be satisfied for the lightest $Z_2$ odd singlet fermion of mass around 10 TeV, dark matter in the mass range 1.25--1.60 TeV, and the lepton number violating quartic coupling between the SM Higgs and the inert doublet in the range of $6.5\times10^{-5}$ to $7.2\times 10^{-5}$.
high energy physics phenomenology
In molecular communications, the direct detection of signaling molecules may be challenging due to the lack of suitable sensors and interference from co-existing substances in the environment. Motivated by examples in nature, we investigate an indirect detection mechanism using chemical reactions between the signaling molecules and a molecular probe to produce an easy-to-measure product at the receiver. The underlying reaction-diffusion equations that describe the concentrations of the reactant and product molecules in the system are non-linear and coupled, and cannot be solved in closed form. To analyze these molecule concentrations, we develop an efficient iterative algorithm by discretizing the time variable and solving for the space variables in each time step. We also derive insightful closed-form solutions for a special case. The accuracy of the proposed algorithm is verified by particle-based simulations. Our results show that the concentration of the product molecules has a similar characteristic over time as the concentration of the signaling molecules. We analyze the bit error rate (BER) for threshold detection and highlight that significant improvements in the BER can be achieved by carefully choosing the molecular probe and optimizing the detection threshold.
computer science
The electric field in laser-driven plasma wakefield acceleration is orders of magnitude higher than in conventional radio-frequency cavities, but the energy gain is limited by dephasing between the ultra-relativistic electron bunch and the wakefield, which travels at the laser group velocity. We present a way to overcome this limit within a single plasma stage. The amplitude of the wakefield behind a train of laser pulses can be controlled in-flight by modulating the density profile. This creates a succession of resonant laser-plasma accelerator sections and non-resonant drift sections, within which the wakefield disappears and the electrons rephase. A two-dimensional particle-in-cell simulation with four 2.5 TW laser pulses produces a 50 MeV electron energy gain, four times that obtained from a uniform plasma. Although laser red-shift prevents operation in the blowout regime, the technique offers increased energy gain for accelerators limited to the linear regime by the available laser power. This is particularly relevant for laser-plasma x-ray sources capable of operating at high repetition rates, which are highly sought after.
physics
The logarithmic derivative (or, quantum score) of a positive definite density matrix appearing in the quantum Fisher information is discussed, and its exact expression is presented. Then, the problem of estimating the parameters in a class of the Werner-type N-qudit states is studied in the context of the quantum Cram\'er-Rao inequality. The largest value of the lower bound to the error of estimate by the quantum Fisher information is shown to coincide with the separability point only in the case of two qubits. It is found, on the other hand, that such largest values give rise to the universal fidelity that is independent of the system size.
quantum physics
Since its outbreak, the ongoing COVID-19 pandemic has caused unprecedented losses to human lives and economies around the world. As of 18th July 2020, the World Health Organization (WHO) has reported more than 13 million confirmed cases, including close to 600,000 deaths, across 216 countries and territories. Despite several government measures, India has gradually moved up the ranks to become the third worst-hit nation by the pandemic after the US and Brazil, causing widespread anxiety and fear among her citizens. As the majority of the world's population continues to remain confined to their homes, more and more people have started relying on social media platforms such as Twitter to express their feelings and attitudes towards various aspects of the pandemic. With rising concerns about mental well-being, it becomes imperative to analyze the dynamics of public affect in order to anticipate any potential threats and take precautionary measures. Since the affective states of the human mind are more nuanced than mere binary sentiments, here we propose a deep learning-based system to identify people's emotions from their tweets. We achieve competitive results on two benchmark datasets for multi-label emotion classification. We then use our system to analyze the evolution of emotional responses among Indians as the pandemic continues to spread. We also study the development of salient factors contributing to the changes in attitudes over time. Finally, we discuss directions to further improve our work and hope that our analysis can aid in better public health monitoring.
computer science
Several advances have been made recently towards handling overlapping speech for speaker diarization. Since speech and natural language tasks often benefit from ensemble techniques, we propose an algorithm for combining outputs from such diarization systems through majority voting. Our method, DOVER-Lap, is inspired by the recently proposed DOVER algorithm, but is designed to handle overlapping segments in diarization outputs. We also modify the pair-wise incremental label mapping strategy used in DOVER, and propose an approximation algorithm based on weighted k-partite graph matching, which performs this mapping using a global cost tensor. We demonstrate the strength of our method by combining outputs from diverse systems -- clustering-based, region proposal networks, and target-speaker voice activity detection -- on the AMI and LibriCSS datasets, where it consistently outperforms the single best system. Additionally, we show that DOVER-Lap can be used for late fusion in multichannel diarization, and compares favorably with early fusion methods like beamforming.
electrical engineering and systems science
In this paper, robust control with a sea state observer and dynamic thrust allocation is proposed for the Dynamic Positioning (DP) of an accommodation vessel in the presence of unknown hydrodynamic force variation and input time delay. In order to overcome the huge force variation between the adjoining Floating Production Storage and Offloading (FPSO) unit and the accommodation vessel, a novel sea state observer is designed. The sea state observer can effectively monitor the variation of the drift wave-induced force on the vessel and activate a Neural Network (NN) compensator in the controller when a large wave force is identified. Moreover, the wind drag coefficients can be adaptively approximated in the sea state observer so that feedforward control can be achieved. Based on this, a robust constrained control is developed to guarantee a safe operation. The time delay in the control input is also considered. A dynamic thrust allocation module is presented to distribute the generalized control input among the azimuth thrusters. Under the proposed observer and control, the boundedness of all the closed-loop signals is demonstrated via rigorous Lyapunov analysis. A set of simulation studies is conducted to verify the effectiveness of the proposed control scheme.
computer science
It is well known that the standard F test is severely affected by heteroskedasticity in unbalanced analysis of covariance (ANCOVA) models. Currently available potential remedies for such a scenario are based on heteroskedasticity-consistent covariance matrix estimation (HCCME). However, the HCCME approach tends to be liberal in small samples. Therefore, in the present manuscript, we propose a combination of HCCME and a wild bootstrap technique, with the aim of improving the small-sample performance. We precisely state a set of assumptions for the general ANCOVA model and discuss their practical interpretation in detail, since this issue may have been somewhat neglected in applied research so far. We prove that these assumptions are sufficient to ensure the asymptotic validity of the combined HCCME-wild bootstrap ANCOVA. The results of our simulation study indicate that our proposed test remedies the problems of the ANCOVA F test and its heteroskedasticity-consistent alternatives in small to moderate sample size scenarios. Our test only requires very mild conditions, thus being applicable in a broad range of real-life settings, as illustrated by the detailed discussion of a dataset from preclinical research on spinal cord injury. Our proposed method is ready-to-use and allows for valid hypothesis testing in frequently encountered settings (e.g., comparing group means while adjusting for baseline measurements in a randomized controlled clinical trial).
statistics
We have analyzed the reaction $\chi_{c0}\to \bar{p} K^+\Lambda$ reported by the BESIII Collaboration, taking into account the contributions from the intermediate $K(1830)$, $N(2300)$, and $\Lambda(1520)$ resonances. Our results are in good agreement with the BESIII measurements, and it is found that the anomalous enhancement near the $\bar{p}\Lambda$ threshold is mainly due to the contribution of the $K(1830)$ resonance. We also show that the interference of the high-mass $N^*$ and $\Lambda^*$ cannot produce the anomalous enhancement near the $\bar{p}\Lambda$ threshold.
high energy physics phenomenology
We introduce a new opinion dynamics model where a group of agents holds two kinds of opinions: inherent and declared. Each agent's inherent opinion is fixed and unobservable by the other agents. At each time step, agents broadcast their declared opinions on a social network, which are governed by the agents' inherent opinions and social pressure. In particular, we assume that agents may declare opinions that are not aligned with their inherent opinions to conform with their neighbors. This raises the natural question: Can we estimate the agents' inherent opinions from observations of declared opinions? For example, agents' inherent opinions may represent their true political alliances (Democrat or Republican), while their declared opinions may model the political inclinations of tweets on social media. In this context, we may seek to predict the election results by observing voters' tweets, which do not necessarily reflect their political support due to social pressure. We analyze this question in the special case where the underlying social network is a complete graph. We prove that, as long as the population does not include large majorities, estimation of aggregate and individual inherent opinions is possible. On the other hand, large majorities force minorities to lie over time, which makes asymptotic estimation impossible.
electrical engineering and systems science
We prove that a variety of oscillatory and polynomial Carleson operators are uniformly bounded on the family of parameters under consideration. As a particular application of our techniques, we prove uniform bounds for oscillatory Carleson operators near a single-scale version of the quadratic Carleson operator.
mathematics
Developing machine learning algorithms to understand person-to-person engagement can result in natural user experiences for communal devices such as Amazon Alexa. Among other cues such as voice activity and gaze, a person's audio-visual expression, which includes tone of voice and facial expression, serves as an implicit signal of engagement between parties in a dialog. This study investigates deep-learning algorithms for audio-visual detection of a user's expression. We first implement an audio-visual baseline model with recurrent layers that shows competitive results compared to the current state of the art. Next, we propose a transformer architecture with encoder layers that better integrates audio-visual features for expression tracking. Performance on the Aff-Wild2 database shows that the proposed methods perform better than the baseline architecture with recurrent layers, with absolute gains of approximately 2% for the arousal and valence descriptors. Further, multimodal architectures show significant improvements over models trained on single modalities, with gains of up to 3.6%. Ablation studies show the significance of the visual modality for expression detection on the Aff-Wild2 database.
electrical engineering and systems science
Prostate cancer (PCa) is graded by pathologists by examining the architectural pattern of cancerous epithelial tissue on hematoxylin and eosin (H&E) stained slides. Given the importance of gland morphology, automatically differentiating between glandular epithelial tissue and other tissues is an important prerequisite for the development of automated methods for detecting PCa. We propose a new method, using deep learning, for automatically segmenting epithelial tissue in digitized prostatectomy slides. We employed immunohistochemistry (IHC) to render the ground truth less subjective and more precise compared to manual outlining on H&E slides, especially in areas with high-grade and poorly differentiated PCa. Our dataset consisted of 102 tissue blocks, including both low and high grade PCa. From each block a single new section was cut, stained with H&E, scanned, restained using P63 and CK8/18 to highlight the epithelial structure, and scanned again. The H&E slides were co-registered to the IHC slides. On a subset of the IHC slides we applied color deconvolution, corrected stain errors manually, and trained a U-Net to perform segmentation of epithelial structures. Whole-slide segmentation masks generated by the IHC U-Net were used to train a second U-Net on H&E. Our system makes precise cell-level segmentations and segments both intact glands as well as individual (tumor) epithelial cells. We achieved an F1-score of 0.895 on a hold-out test set and 0.827 on an external reference set from a different center. We envision this segmentation as being the first part of a fully automated prostate cancer detection and grading pipeline.
computer science
The classical Gibbs paradox concerns the entropy change upon mixing two gases. Whether an observer assigns an entropy increase to the process depends on their ability to distinguish the gases. A resolution is that an "ignorant" observer, who cannot distinguish the gases, has no way of extracting work by mixing them. Moving the thought experiment into the quantum realm, we reveal new and surprising behaviour: the ignorant observer can extract work from mixing different gases, even if the gases cannot be directly distinguished. Moreover, in the macroscopic limit, the quantum case diverges from the classical ideal gas: as much work can be extracted as if the gases were fully distinguishable. We show that the ignorant observer assigns more microstates to the system than found by naive counting in semiclassical statistical mechanics. This demonstrates the importance of accounting for the level of knowledge of an observer, and its implications for genuinely quantum modifications to thermodynamics.
quantum physics
We consider the problem of deciding on a sampling strategy, in particular the sampling design. We propose a risk measure, whose minimizing value guides the choice. The method makes use of a superpopulation model and takes into account uncertainty about its parameters. The method is illustrated with a real dataset, yielding satisfactory results. As a baseline, we use the strategy that couples probability proportional-to-size sampling with the difference estimator, as it is known to be optimal when the superpopulation model is fully known. We show that, even under moderate misspecifications of the model, this strategy is not robust and can be outperformed by some alternatives.
statistics
We present the results of a search for a hidden mirror sector in positronium decays with a sensitivity comparable to the bounds set by the prediction of the primordial He$^{4}$ abundance from Big Bang Nucleosynthesis. No excess of events compatible with decays into the dark sector is observed, resulting in an upper limit for the branching ratio of this process of $4.0\times10^{-5}$ ($90\%$ C.L.). This is an order of magnitude more stringent than the current existing laboratory bounds, and it constrains the mixing strength of ordinary photons to dark mirror photons at a level of $\varepsilon<5.8\times 10^{-8}$.
physics
The northeastern coast of the U.S. is projected to expand its offshore wind capacity from the existing 30 MW to over 22 GW in the next decade. Yet, only a few wind measurements are available in the region, and none at hub height, thus extrapolations are needed to estimate wind speed as a function of height. A common method is the log-law, which is based on the surface roughness length (z0). No reliable estimates of z0 for the region have been presented in the literature. Here, we fill this knowledge gap using two field campaigns that were conducted in the Nantucket Sound at the Cape Wind (CW) platform. We tested three different methods to calculate z0: 1) analytical, dependent on friction velocity u* and a stability function psi; 2) the Charnock relationship between z0 and u*; and 3) a statistical method based on wind speed observed at the three levels. The first two methods are physical, whereas the statistical method is purely mathematical. Comparing the mean and median of z0, we find that the median is a more robust statistic. In general, the median z0 exhibits little seasonal variability and a weak dependency on atmospheric stability, which was predominantly unstable (54-67%). The statistical method, despite delivering unrealistic z0 values at times, gives the best estimates of 60-m winds. The unrealistic z0 values are caused by non-monotonic wind speed profiles, occurring about 41% of the time, and should not be rejected because they produce realistic fits. In summary, if wind speed data from multiple levels are available, the statistical method is recommended. If multi-level wind speeds are not available but advanced sonic anemometry is available at one level, the analytical method is recommended over the Charnock's. If a constant value of z0 is sought to characterize the region, we recommend the median from the statistical method, i.e., 6.09*10^(-3) m.
physics
Galaxy clusters provide a multitude of observational data across wavelengths and their structure and morphology are of considerable interest in cosmology as well as astrophysics. We develop a framework that allows the combination of lensing and non-lensing observations in a free-form and mesh-free approach to infer the projected mass distribution of individual galaxy clusters. This method can be used to test common assumptions on the morphology of clusters in parametric models. We make use of the lensing reconstruction code SaWLens2 and expand its capabilities by incorporating an estimate of the projected gravitational potential based on X-ray data that are deprojected using the local Richardson-Lucy method and used to infer the Newtonian potential of the cluster and we discuss how potentially arising numerical artefacts can be treated. We demonstrate the feasibility of our method on a simplified mock NFW halo and on a cluster from a realistic hydrodynamical simulation and show how the combination of X-ray and weak lensing data can affect a free-form reconstruction, improving the accuracy in the central region in some cases by a factor of two.
astrophysics
In this paper, we propose a multi-stage and high-resolution model for image synthesis that uses fine-grained attributes and masks as input. With a fine-grained attribute, the proposed model can constrain the features of the generated image in detail through the rich semantic information in the attribute. With a mask as prior, the model is constrained so that the generated images conform to visual expectations, which reduces the unexpected diversity of samples generated by the generative adversarial network. This paper also proposes a scheme to improve the discriminator of the generative adversarial network by simultaneously discriminating the total image and sub-regions of the image. In addition, we propose a method for optimizing the labeled attributes in datasets, which reduces manual labeling noise. Extensive quantitative results show that our image synthesis model generates more realistic images.
computer science
In search-detect-track problems, knowledge of where objects were not seen can be as valuable as knowledge of where objects were seen. Exploiting the sensor's known sensing extents, or field-of-view (FoV), this type of evidence can be incorporated in a Bayesian framework to improve tracking accuracy and form better sensor schedules. This paper presents new techniques for incorporating bounded FoV inclusion/exclusion evidence in object state densities and multi-object cardinality distributions. Some examples of how the proposed techniques may be applied to tracking and sensor planning problems are given.
electrical engineering and systems science
We develop new models and algorithms for learning the temporal dynamics of the topic polytopes and related geometric objects that arise in topic model based inference. Our model is nonparametric Bayesian and the corresponding inference algorithm is able to discover new topics as time progresses. By exploiting the connection between the modeling of topic polytope evolution, the Beta-Bernoulli process and the Hungarian matching algorithm, our method is shown to be several orders of magnitude faster than existing topic modeling approaches, as demonstrated by experiments working with several million documents in under two dozen minutes.
statistics
We introduce the safe linear stochastic bandit framework---a generalization of linear stochastic bandits---where, in each stage, the learner is required to select an arm with an expected reward that is no less than a predetermined (safe) threshold with high probability. We assume that the learner initially has knowledge of an arm that is known to be safe, but not necessarily optimal. Leveraging this assumption, we introduce a learning algorithm that systematically combines known safe arms with exploratory arms to safely expand the set of safe arms over time, while facilitating safe greedy exploitation in subsequent stages. In addition to ensuring the satisfaction of the safety constraint at every stage of play, the proposed algorithm is shown to exhibit an expected regret that is no more than $O(\sqrt{T}\log (T))$ after $T$ stages of play.
statistics
We study the behaviour of the $\chi_{c1}(3872)$, also known as $X(3872)$, in dense nuclear matter. We begin from a picture in vacuum of the $X(3872)$ as a purely molecular $(D \bar D^*-c.c.)$ state, generated as a bound state from a heavy-quark symmetry leading-order interaction between the charmed mesons, and analyze the $D \bar D^*$ scattering $T$-matrix ($T_{D \bar D^*}$) inside the medium. Next, we consider also mixed-molecular scenarios and, in all cases, we determine the corresponding $X(3872)$ spectral function and the $D \bar D^*$ amplitude, with the mesons embedded in the dense environment. We find important nuclear corrections to $T_{D \bar D^*}$ and the pole position of the resonance, and discuss the dependence of these results on the $D \bar D^*$ molecular component in the $X(3872)$ wave function. These predictions could be tested in the finite-density regime that can be accessed in the future CBM and PANDA experiments at FAIR.
high energy physics phenomenology
We study the application of a new method for simulating nonlinear dynamics of many-body spin systems using quantum measurement and feedback [Mu\~noz-Arias et al., Phys. Rev. Lett. 124, 110503 (2020)] to a broad class of many-body models known as $p$-spin Hamiltonians, which describe Ising-like models on a completely connected graph with $p$-body interactions. The method simulates the desired mean field dynamics in the thermodynamic limit by combining nonprojective measurements of a component of the collective spin with a global rotation conditioned on the measurement outcome. We apply this protocol to simulate the dynamics of the $p$-spin Hamiltonians and demonstrate how different aspects of criticality in the mean-field regime are readily accessible with our protocol. We study applications including properties of dynamical phase transitions and the emergence of spontaneous symmetry breaking in the adiabatic dynamics of the collective spin for different values of the parameter $p$. We also demonstrate how this method can be employed to study the quantum-to-classical transition in the dynamics continuously as a function of system size.
quantum physics
We present new approximation schemes for bin packing based on the following two approaches: (1) partitioning the given problem into mostly identical sub-problems of constant size and then constructing a solution by combining the solutions of these constant-size sub-problems obtained through a PTAS or exact methods; (2) solving bin packing using irregularly sized bins, a generalization of bin packing that facilitates the design of simple and efficient recursive algorithms which solve a problem in terms of smaller sub-problems such that the unused space in bins used by an earlier solved sub-problem is available to subsequently solved sub-problems.
computer science
In this paper, by defining off-shell amplitudes as off-shell CHY integrals and redefining the longitudinal operator, we demonstrate that the differential operators which link together on-shell amplitudes for a variety of theories also link off-shell amplitudes in a similar manner. Based on the algebraic properties of the differential operators, we also generalize three relations among color-ordered on-shell amplitudes, namely the color-ordered reversed relation, the photon decoupling relation, and the Kleiss-Kuijf relation, to off-shell ones. The off-shell CHY integrals are chosen to be in the double-cover framework; thus, as a by-product, our result also provides a verification of the double-cover construction.
high energy physics theory
Twisted van der Waals heterostructures and the corresponding superlattices, moire superlattices, are remarkable new material platforms, in which electron interactions and excited-state properties can be engineered. Particularly, the band offsets between adjacent layers can separate excited electrons and holes, forming interlayer excitons that exhibit unique optical properties. In this work, we employ the first-principles GW-Bethe-Salpeter Equation (BSE) method to calculate quasiparticle band gaps, interlayer excitons, and their modulated excited-state properties in twisted MoSe2/WSe2 bilayers that are of broad interest currently. In addition to achieving good agreements with the measured interlayer exciton energies, we predict a more than 100-meV lateral quantum confinement on quasiparticle energies and interlayer exciton energies, guiding the effort on searching for localized quantum emitters and simulating the Hubbard model in two-dimensional twisted structures. Moreover, we find that the optical dipole oscillator strength and radiative lifetime of interlayer excitons are modulated by a few orders of magnitude across moire supercells, highlighting the potential of using moire crystals to engineer exciton properties for optoelectronic applications.
condensed matter
We consider the problem of evaluating designs for a two-arm randomized experiment, the criterion being the power of the randomization test for the one-sided null hypothesis. Our evaluation assumes a response that is linear in one observed covariate, an unobserved component and an additive treatment effect, where the only randomness comes from the treatment allocations. It is well known that the power depends on the allocations' imbalance in the observed covariate, and this is the reason for classic restricted designs such as rerandomization. We show that power is also affected by two other design choices: the number of allocations in the design and the degree of linear dependence among the allocations. We prove that the more allocations, the higher the power and the lower the variability in the power. Designs that feature greater independence of allocations are also shown to have higher performance. Our theoretical findings and extensive simulation studies imply that the designs with the highest power provide thousands of highly independent allocations that each provide nominal imbalance in the observed covariates. These high-powered designs exhibit less randomization than complete randomization and more randomization than recently proposed designs based on numerical optimization. Practical choices for an experimenter are rerandomization and greedy pair switching, both of which outperform complete randomization and numerical optimization. The tradeoff we find also provides a means to specify the imbalance threshold parameter when rerandomizing.
statistics
We describe an efficient near-field to far-field transformation for optical quasinormal modes, which are the dissipative modes of open cavities and plasmonic resonators with complex eigenfrequencies. As an application of the theory, we show how one can compute the reservoir modes (or regularized quasinormal modes) outside the resonator, which are essential to use in both classical and quantum optics. We subsequently demonstrate how to efficiently compute the quantum optical parameters necessary in the theory of quantized quasinormal modes [Franke et al., Phys. Rev. Lett. 122, 213901 (2019)]. To confirm the accuracy of our technique, we directly compare with a Dyson equation approach currently used in the literature (in regimes where this is possible), and demonstrate several orders of magnitude improvement in the calculation run times. We also introduce an efficient pole approximation for computing the quantized quasinormal mode parameters, since they require an integration over a range of frequencies. Using this approach, we show how to compute regularized quasinormal modes and quantum optical parameters for a full 3D metal dimer in under one minute on a standard desktop computer. Our technique is exemplified by studying the quasinormal modes of metal dimers and of a hybrid structure consisting of a gold dimer on top of a photonic crystal beam. In the latter example, we show how to compute the quantum optical parameters that describe a pronounced Fano resonance, using structural geometries that cannot practically be solved using a Dyson equation approach. All calculations for the spontaneous emission rates are confirmed with full-dipole calculations in Maxwell's equations and are shown to be in excellent agreement.
physics
We analytically calculate the optical emission spectrum of nanolasers and nano-LEDs based on a model of many incoherently pumped two-level emitters in a cavity. At low pump rates we find two peaks in the spectrum for large coupling strengths and numbers of emitters. We interpret the double-peaked spectrum as a signature of collective Rabi splitting, and discuss the difference between the splitting of the spectrum and the existence of two eigenmodes. We show that an LED will never exhibit a split spectrum, even though it can have distinct eigenmodes. For systems where the splitting is possible we show that the two peaks merge into a single one when the pump rate is increased. Finally, we compute the linewidth of the systems, and discuss the influence of inter-emitter correlations on the lineshape.
quantum physics
The associated production of a single top with opposite-sign same-flavor (OSSF) di-leptons, $pp \to t \ell^+ \ell^-$ and $pp \to t \ell^+ \ell^- + j$ ($j=$ light jet), can lead to striking tri-lepton $pp \to \ell^\prime \ell^+ \ell^- + X$ and di-lepton $pp \to \ell^+ \ell^- + j_b + X$ ($j_b=b$-jet) events at the LHC, after the top decays. Although these rather generic multi-lepton signals are flavor-blind, they can be generated by new 4-Fermi flavor changing (FC) $u_i t \ell \ell$ scalar, vector and tensor interactions ($u_i \in u,c$), which we study in this paper; we match the FC $u_i t \ell \ell$ 4-Fermi terms to the SMEFT operators and also to different types of FC underlying heavy physics. The main backgrounds to these di- and tri-lepton signals arise from $t \bar t$, $Z$+jets and $VV$ ($V=W,Z$) production, but they can be essentially eliminated with a sufficiently high invariant mass selection on the OSSF di-leptons, $m_{\ell^+ \ell^-}^{\tt min}(OSSF) > 1$ TeV; the use of $b$-tagging as an additional selection in the di-lepton final state also proves very useful. We find, for example, that the expected 95\% CL bounds on the scale of a tensor (vector) $u t \mu \mu$ interaction, with the current $\sim 140$ fb$^{-1}$ of LHC data, are $\Lambda < 5(3.2)$ TeV or $\Lambda < 4.1(2.7)$ TeV, if analyzed via the di-muon $\mu^+ \mu^- + j_b$ signal or the $e \mu^+ \mu^-$ tri-lepton one, respectively. The expected reach at the HL-LHC with 3000 fb$^{-1}$ of data is $\Lambda < 7.1(4.7)$ TeV and $\Lambda < 2.4(1.5)$ TeV for the corresponding $u t \mu \mu$ and $c t \mu \mu$ operators. We also study the potential sensitivity at future 27 TeV and 100 TeV high-energy LHC successors, and discuss the possible implications of this class of FC 4-Fermi effective interactions for lepton non-universality tests at the LHC.
high energy physics phenomenology
Software-defined metamaterials (SDMs) represent a novel paradigm for real-time control of metamaterials. SDMs are envisioned to enable a variety of exciting applications in domains such as smart textiles and sensing in challenging conditions. Many of these applications envisage deformations of the SDM structure (e.g., rolling, bending, stretching). This affects the relative positions of the metamaterial elements and requires their localization relative to each other. The question of how to perform such localization has, however, yet to spark interest in the community. We consider that the metamaterial elements are controlled wirelessly through a Terahertz (THz)-operating nanonetwork. Moreover, we consider the elements to be energy constrained, with their sole powering option being to harvest environmental energy. For such a setup, we demonstrate sub-millimeter accuracy of the two-way Time of Flight (ToF)-based localization, as well as high availability of the service (i.e., consistently more than 80% of the time), which is a result of the low energy consumed in localization. Finally, we provide the localization context for a number of relevant system parameters such as operational frequency, bandwidth, and harvesting rate.
electrical engineering and systems science
Discovering a selection principle and the origin of flavor symmetries from an ultraviolet completion of particle physics is an interesting open task. As a step in this direction, we classify all possible flavor symmetries of 4D massless spectra emerging from supersymmetric Abelian orbifold compactifications, including roto-translations and non-factorizable compact spaces, for generic moduli values. Although these symmetries are valid in all string theories, we focus on the E8 x E8 heterotic string. We perform the widest known search of E8 x E8 Abelian orbifold compactifications, yielding over 121,000 models with MSSM-like features. About 75.4% of these models exhibit flavor symmetries containing D4 factors and only about 1.2% have Delta(54) factors. The remaining models are furnished with purely Abelian flavor symmetries. Our findings suggest that, should particle phenomenology arise from a heterotic orbifold, it could accommodate only one of these flavor symmetries.
high energy physics theory
We have studied the induced one-loop energy-momentum tensor of a massive complex scalar field within the framework of nonperturbative quantum electrodynamics (QED) with a uniform electric field background on the Poincar\'e patch of the two-dimensional de Sitter spacetime ($\mathrm{dS_{2}}$). We also consider a direct coupling of the scalar field to the Ricci scalar curvature, parameterized by an arbitrary dimensionless nonminimal coupling constant. We evaluate the trace anomaly of the induced energy-momentum tensor. We show that our results for the induced energy-momentum tensor in the zero electric field case, and for the trace anomaly, are in agreement with the existing literature. Furthermore, we construct the one-loop effective Lagrangian from the induced energy-momentum tensor.
high energy physics theory
We consider dynamics of scalar and vector fields on gravitational backgrounds of the Wess-Zumino-Witten models. For SO(4) and its cosets, we demonstrate full separation of variables for all fields and find a close analogy with a similar separation of vector equations in the backgrounds of the Myers--Perry black holes. For SO(5) and higher groups separation of variables is found only in some subsectors.
high energy physics theory
Music annotation has always been one of the critical topics in the field of Music Information Retrieval (MIR). Traditional models use supervised learning for music annotation tasks. However, as supervised machine learning approaches increase in complexity, the growing need for annotated training data often cannot be met by the available data. In this paper, a new self-supervised music acoustic representation learning approach named MusiCoder is proposed. Inspired by the success of BERT, MusiCoder builds upon the architecture of self-attention bidirectional transformers. Two pre-training objectives, Contiguous Frames Masking (CFM) and Contiguous Channels Masking (CCM), are designed to adapt BERT-like masked reconstruction pre-training to the continuous acoustic frame domain. The performance of MusiCoder is evaluated on two downstream music annotation tasks. The results show that MusiCoder outperforms the state-of-the-art models in both music genre classification and auto-tagging tasks. The effectiveness of MusiCoder indicates the great potential of this new self-supervised learning approach to understanding music: first, apply masked reconstruction tasks to pre-train a transformer-based model with massive unlabeled music acoustic data, and then fine-tune the model on specific downstream tasks with labeled data.
electrical engineering and systems science
We present a systematic analysis of the stationary regimes of a nonlinear parity-time (PT) symmetric laser composed of two coupled fiber cavities. We find that power-dependent nonlinear phase shifters broaden the regions of existence of both PT-symmetric and PT-broken modes, and can facilitate transitions between modes of different types. We show the existence of non-stationary regimes and demonstrate an ambiguity of the transition process for some of the unstable states. We also identify the presence of higher-order stationary modes, which return to the initial state periodically after a certain number of round-trips.
physics
Neural network-based algorithms have garnered considerable attention in condensed matter physics for their ability to learn complex patterns from very high dimensional data sets, towards classifying complex long-range patterns of entanglement and correlations in many-body quantum systems. Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets. A particularly interesting class of algorithms, the quantum convolutional neural networks (QCNN), could learn features of a quantum data set by performing a binary classification task on a nontrivial phase of quantum matter. Inspired by this promise, we present a generalization of the QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility. A key feature of the bQCNN is that it leverages mid-circuit (intermediate) measurement results, realizable on current trapped-ion systems, obtained in pooling layers to determine which sets of parameters will be used in the subsequent convolutional layers of the circuit. This results in a branching structure, which allows for a greater number of trainable variational parameters in a given circuit depth. This is of particular use on current-day NISQ devices, where circuit depth is limited by gate noise. We present an overview of the ansatz structure and scaling, and provide evidence of its enhanced expressibility compared to the QCNN. Using artificially constructed large data sets of training states as a proof of concept, we demonstrate the existence of training tasks in which the bQCNN far outperforms an ordinary QCNN. Finally, we present future directions where the classical branching structure and increased density of trainable parameters in the bQCNN would be particularly valuable.
quantum physics
In health-related machine learning applications, the training data often correspond to a non-representative sample from the target populations where the learners will be deployed. In anticausal prediction tasks, selection biases often make the associations between confounders and the outcome variable unstable across different target environments. As a consequence, the predictions from confounded learners are often unstable, and might fail to generalize in shifted test environments. Stable prediction approaches aim to solve this problem by producing predictions that are stable across unknown test environments. These approaches, however, are sometimes applied to the training data alone, with the hope that training an unconfounded model will be enough to generate stable predictions in shifted test sets. Here, we show that this is insufficient, and that improved stability can be achieved by deconfounding the test set features as well. We illustrate these observations using both synthetic data and real-world data from a mobile health study.
statistics
We discuss the infrared structure of processes with massive quarks in the initial state. It is well known that, starting from next-to-next-to-leading order in perturbative QCD, such processes exhibit a violation of the Bloch-Nordsieck theorem, in that the sum of real and virtual contributions to partonic cross sections contains uncanceled infrared singularities. The main purpose of this paper is to present a simple physical argument that elucidates the origin of these singularities and simplifies the derivation of infrared-singular contributions to heavy-quark initiated cross sections.
high energy physics phenomenology
Using a coherent state representation of phonons, thus treating them as waves rather than as countable particles or quanta, metallic conduction electrons are found to be traversing a confused, dynamic, blackbody-like sea of forces. Electrons can be treated semiclassically, buffeted by unceasing quasi-elastic collisions with the deformation potential, limiting mobility. This leads to a nonperturbative paradigm, consistent with the existing theory in the perturbative limit, explaining the normal resistivity of pure metals from 0 K to melting in a new language. It extends well beyond perturbation theory, both in intuitive insight and computational power. Using a coherent state representation brings the lattice alive, giving every atom a nonequilibrium position and momentum. The phonon sea grows in wave height as the temperature is increased. Below the Bloch-Gr\"uneisen transition, raising the temperature adds ever shorter wavelength components to the sea, causing a fast rise in resistivity. Above the Bloch-Gr\"uneisen temperature, the resistivity rises as T in the weak field limit.
condensed matter
The strong law of large numbers asserts that experimentally obtained mean values, in the limit of the number of repetitions of the experiment going to infinity, converge almost surely to the theoretical predictions based on a priori assumed constant values for the probabilities of the random events. Hence in most theoretical calculations, we implicitly neglect fluctuations around the mean. In practice, however, we can repeat an experiment only finitely many times, so fluctuations are inevitable and may lead to erroneous judgments. It is theoretically possible to teleport an unknown quantum state, using entanglement, with unit fidelity. The experimentally achieved values are however sub-unit, and often significantly so. We show that when the number of repetitions of the experiment is small, there is a significant probability of reaching the sub-unit fidelities achieved in quantum teleportation experiments even classically, i.e., without using entanglement. We further show that only when the number of repetitions of the experiment is of the order of a few thousands does the probability of a classical teleportation process reaching the currently achieved experimental quantum teleportation fidelities become negligibly small, ensuring that the experimentally obtained fidelities are due to genuine use of the shared entanglement.
quantum physics
We re-derive the first law of black hole mechanics in the context of the Einstein-Maxwell theory in a gauge-invariant way introducing "momentum maps" associated to field strengths and the vectors that generate their symmetries. These objects play the role of generalized thermodynamical potentials in the first law and satisfy generalized zeroth laws, as first observed in the context of principal gauge bundles by Prabhu, but they can be generalized to more complex situations. We test our ideas on the $d$-dimensional Reissner-Nordstr\"om-Tangherlini black hole.
high energy physics theory
In this paper, we consider the Cauchy problem for a generalized parabolic-elliptic Keller-Segel equation with fractional dissipation and the additional mixing effect of advection by an incompressible flow. Under a suitable mixing condition on the advection, we study the well-posedness of solutions with large initial data. We establish a global $L^\infty$ estimate of the solution through a nonlinear maximum principle, and obtain a global classical solution.
mathematics
Independent sociological polls are forbidden in Belarus. Online polls performed without sound scientific rigour do not yield representative results. Yet, both inside and outside Belarus it is of great importance to obtain precise estimates of the ratings of all candidates. These ratings could function as reliable proxies for the election's outcomes. We conduct an independent poll based on the combination of data collected via Viber and on the streets of Belarus. The Viber and street data samples consist of almost 45000 and 1150 unique observations, respectively. Bayesian regressions with poststratification were built to estimate the ratings of the candidates and the rates of early voting turnout for the population as a whole and within various focus subgroups. We show that both the officially announced results of the election and the early voting rates are highly improbable. With a probability of at least 95%, Sviatlana Tikhanouskaya's rating lies between 75% and 80%, whereas Aliaksandr Lukashenka's rating lies between 13% and 18%, and the early voting rate predicted by the method ranges from 9% to 13% of those who took part in the election. These results contradict the officially announced outcomes, which are 10.12%, 80.11%, and 49.54% respectively, and lie far outside even the 99.9% credible intervals predicted by our model. The only marginal groups of people where the upper bounds of the 99.9% credible intervals of the rating of Lukashenka are above 50% are people older than 60 and uneducated people. For all other marginal subgroups, including rural residents, even the upper bounds of the 99.9% credible intervals for Lukashenka are far below 50%. The same is true for the population as a whole. Thus, with a probability of at least 99.9%, Lukashenka could not have had enough electoral support to win the 2020 presidential election in Belarus.
statistics
Population means and standard deviations are the most common estimands to quantify effects in factorial layouts. In fact, most statistical procedures in such designs are built towards inferring means or contrasts thereof. For more robust analyses, we consider the population median, the interquartile range (IQR) and more general quantile combinations as estimands in which we formulate null hypotheses and calculate compatible confidence regions. Based upon simultaneous multivariate central limit theorems and corresponding resampling results, we derive asymptotically correct procedures in general, potentially heteroscedastic, factorial designs with univariate endpoints. Special cases cover robust tests for the population median or the IQR in arbitrary crossed one-, two- and higher-way layouts with potentially heteroscedastic error distributions. In extensive simulations we analyze their small sample properties and also conduct an illustrative data analysis comparing children's heights and weights from different countries.
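A minimal resampling illustration of interval estimates for the median and the IQR; this is a plain percentile bootstrap on hypothetical data, not the authors' asymptotic multivariate procedure.

```python
# Percentile-bootstrap confidence intervals for the median and the IQR of
# one sample, using only NumPy. The heavy-tailed t(3) data are made up.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=60)

def iqr(a):
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

boot_med, boot_iqr = [], []
for _ in range(5000):
    xb = rng.choice(x, size=x.size, replace=True)  # resample with replacement
    boot_med.append(np.median(xb))
    boot_iqr.append(iqr(xb))

print("median 95% CI:", np.percentile(boot_med, [2.5, 97.5]))
print("IQR    95% CI:", np.percentile(boot_iqr, [2.5, 97.5]))
```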
mathematics
The generation of a tsunami wave by an aerial landslide is investigated through model laboratory experiments. We examine the collapse of an initially dry column of grains into a shallow water layer and the subsequent generation of waves. The experiments show that the collective entry of the granular material into water governs the wave generation process. We observe that the amplitude of the wave relative to the water height scales linearly with the Froude number based on the horizontal velocity of the moving granular front relative to the wave velocity. For all the different parameters considered here, the aspect ratio and the volume of the column, the diameter and density of the grains, and the height of the water, the granular collapse acts like a moving piston displacing the water. We also highlight that the density of the falling grains has a negligible influence on the wave amplitude, which suggests that the volume of grains entering the water is the relevant parameter in the wave generation.
physics
Quantum measurement is ultimately a physical process, resulting from an interaction between the measured system, and a measurement apparatus. Considering the physical process of measurement within a thermodynamic context naturally raises the following question: how can the work and heat resulting from the measurement process be interpreted? In the present manuscript, we consider the physical realisation of an arbitrary discrete observable as a measurement scheme, which is decomposed into two processes: premeasurement, followed by objectification. Premeasurement results from a unitary interaction between the measured system and a quantum probe of the measurement apparatus, whereas objectification is the process by which the probe is "measured" by a pointer observable, thus revealing a definite measurement outcome. Since premeasurement involves a mechanical manipulation of system and probe by the external observer, we identify this process with work. On the other hand, the apparatus is mechanically isolated during objectification. As such, we identify objectification with heat. We argue that in order for the apparatus to serve as a measurement, the heat will necessarily be a classically fluctuating quantity.
quantum physics
In the note by Khemani et al. [arXiv:2001.11037] the authors express conceptual disagreement with our recent paper on quantum time crystals [Phys. Rev. Lett. 123, 210602]. They criticise the idealized nature of the considered quantum time crystal, and make several points about properties of Hamiltonians presented in our work. In this reply we answer one-by-one all questions raised in the discussion. As for the ideological dispute, it brightly highlights a bizarre nature of time crystalline order in closed quantum systems, and we offer a different vision for the development of the field.
quantum physics
5G and beyond wireless networks are the upcoming evolution of current cellular networks, designed to meet essential future demands such as high data rate, low energy consumption, and low latency, providing seamless communication for emerging applications. The heterogeneous cloud radio access network (H-CRAN) is envisioned as a new trend of 5G that uses the advantages of heterogeneous and cloud radio access networks to enhance both spectral and energy efficiency. In this paper, building on the notion of effective capacity (EC), we propose a framework in non-orthogonal multiple access (NOMA)-based H-CRAN to meet these demands simultaneously. Our proposed approach is to maximize the effective energy efficiency (EEE) while considering spectrum and power cooperation between the macro base station (MBS) and remote radio heads (RRHs). To solve the formulated problem and make it more tractable, we transform the original problem into an equivalent subtractive form via the Dinkelbach algorithm. Afterwards, a combined framework of distributed stable matching and successive convex approximation (SCA) is adopted to obtain the solution of the equivalent problem. Hereby, we propose an efficient resource allocation scheme to maximize energy efficiency while maintaining the delay quality-of-service (QoS) requirements for all users. The simulation results show that the proposed algorithm provides a non-trivial trade-off between delay and energy efficiency in NOMA H-CRAN systems in terms of EC and EEE, and that the spectrum and power cooperation improves the EEE of the proposed network. Moreover, the complexity of our proposed solution is much lower than that of the optimal solution, with only a very limited gap compared to the optimal method.
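A generic sketch of the Dinkelbach transformation used above: maximizing a ratio f(x)/g(x) with g(x) > 0 becomes a sequence of subtractive problems. The toy rate/power model and the grid-search inner solver are assumptions for illustration, standing in for the paper's SCA/matching step.

```python
# Dinkelbach iteration for max f(x)/g(x): repeatedly solve
# max_x f(x) - lam * g(x), then update lam until F(lam) ~ 0.
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x_star = candidates[np.argmax(vals)]   # inner (subtractive) problem
        if abs(f(x_star) - lam * g(x_star)) < tol:
            return x_star, lam                 # lam is the optimal ratio
        lam = f(x_star) / g(x_star)            # ratio update
    return x_star, lam

# Toy example: maximize (rate / total power) over a transmit-power grid.
p = np.linspace(0.01, 2.0, 2000)
rate = lambda p: np.log2(1.0 + 4.0 * p)   # hypothetical rate model
power = lambda p: 0.5 + p                 # hypothetical circuit + transmit power
print(dinkelbach(rate, power, p))
```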
electrical engineering and systems science
It is well-known that observing nonlocal correlations allows us to draw conclusions about the quantum systems under consideration. In some cases this yields a characterisation which is essentially complete, a phenomenon known as self-testing. Self-testing becomes particularly interesting if we can make the statement robust, so that it can be applied to a real experimental setup. For the simplest self-testing scenarios the most robust bounds come from the method based on operator inequalities. In this work we elaborate on this idea and apply it to the family of tilted CHSH inequalities. These inequalities are maximally violated by partially entangled two-qubit states and our goal is to estimate the quality of the state based only on the observed violation. For these inequalities we have reached a candidate bound and while we have not been able to prove it analytically, we have gathered convincing numerical evidence that it holds. Our final contribution is a proof that in the usual formulation, the CHSH inequality only becomes a self-test when the violation exceeds a certain threshold. This shows that self-testing scenarios fall into two distinct classes depending on whether they exhibit such a threshold or not.
quantum physics
New general results of non-existence and rigidity of spacelike submanifolds immersed in a spacetime, whose mean curvature is a time-oriented causal vector field, are given. These results hold for a wide class of spacetimes which includes globally hyperbolic, stationary, conformally stationary and pp-wave spacetimes, among others. Moreover, applications to the Cauchy problem in General Relativity are presented. Finally, in the case of hypersurfaces, we also obtain significant consequences in Geometrical Analysis, solving new Calabi-Bernstein and Dirichlet problems on a Riemannian manifold.
mathematics
We study Hermitian metrics with a Gauduchon connection being "K\"ahler-like", namely, satisfying the same symmetries for curvature as the Levi-Civita and Chern connections. In particular, we investigate $6$-dimensional solvmanifolds with invariant complex structures with trivial canonical bundle and with invariant Hermitian metrics. The results for this case give evidence for two conjectures that are expected to hold in more generality: first, if the Strominger-Bismut connection is K\"ahler-like, then the metric is pluriclosed; second, if another Gauduchon connection, different from Chern or Strominger-Bismut, is K\"ahler-like, then the metric is K\"ahler. As a further motivation, we show that the K\"ahler-like condition for the Levi-Civita connection assures that the Ricci flow preserves the Hermitian condition along analytic solutions.
mathematics
We compute the K- and KO-theory of the classifying G-spaces for proper actions of certain infinite discrete groups G via a special version of the equivariant Atiyah-Hirzebruch spectral sequence.
mathematics
We construct a single-boundary wormhole geometry in type IIB supergravity by perturbing two stacks of $N$ extremal D3-branes in the decoupling limit. The solution interpolates from a two-sided planar AdS-Schwarzschild geometry in the interior, through a harmonic two-center solution in the intermediate region, to an asymptotic AdS space. The construction involves a CPT twist in the gluing of the wormhole to the exterior throats that gives a global monodromy to some coordinates, while preserving orientability. The geometry has a dual interpretation in $\mathcal{N}=4$ $SU(2N)$ Super Yang-Mills theory in terms of a Higgsed $SU(2N) \to S(U(N) \times U(N))$ theory in which $\mathcal{O} (N^2)$ degrees of freedom in each $SU(N)$ sector are entangled in an approximate thermofield double state at a temperature much colder than the Higgs scale. We argue that the solution can be made long-lived by appropriate choice of parameters, and comment on mechanisms for generating traversability. We also describe a construction of a double wormhole between two universes.
high energy physics theory
We consider a holographic description of the chiral symmetry breaking in an external magnetic field in $ (2+1) $-dimensional gauge theories from the softwall model using an improved dilaton field profile given by $\Phi(z) = - kz^2 + (k+k_1)z^2\tanh (k_{2}z^2)$. We find inverse magnetic catalysis for $B<B_c$ and magnetic catalysis for $B>B_c$, where $B_c$ is the pseudocritical magnetic field. The transition between these two regimes is a crossover and occurs at $B=B_c$, which depends on the fermion mass and temperature. We also find spontaneous chiral symmetry breaking (the chiral condensate $\sigma \not=0$) at $T=0$ in the chiral limit ($m_q\to 0$) and chiral symmetry restoration for finite temperatures. We observe that changing the $k$ parameter of the dilaton profile only affects the overall scales of the system such as $B_c$ and $\sigma$. For instance, by increasing $k$ one sees an increase of $B_c$ and $\sigma$. This suggests that increasing the parameters $k_1$ and $k_2$ will decrease the values of $B_c$ and $\sigma$.
high energy physics phenomenology
A data compression system capable of providing real-time streaming of high-resolution continuous point-on-wave (CPOW) and phasor measurement unit (PMU) measurements is proposed. Referred to as adaptive subband compression (ASBC), the proposed technique partitions the signal space into subbands and adaptively compresses subband signals based on each subband's active bandwidth. The proposed technique conforms to existing industry phasor measurement standards, making it suitable for streaming high-resolution CPOW and PMU data either in continuous or burst on-demand/event-triggered modes. Experiments on synthetic and real data show that ASBC reduces the CPOW sampling rates by several orders of magnitude for real-time streaming while maintaining the precision required by industry standards.
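A simplified sketch of the subband idea (not the exact ASBC codec): partition the spectrum of a sampled waveform into uniform subbands and keep only the subbands that are active, which is where the compression comes from for narrowband grid signals. The sampling rate, band count, and activity rule are assumptions.

```python
# Split a waveform into uniform frequency subbands via the FFT, keep only
# subbands whose energy exceeds a threshold, and report the compression.
import numpy as np

fs = 12_800                       # hypothetical CPOW sampling rate (Hz)
t = np.arange(fs) / fs
x = np.sin(2*np.pi*60*t) + 0.01*np.sin(2*np.pi*660*t)  # fundamental + harmonic

X = np.fft.rfft(x)
n_bands = 64
bands = np.array_split(X, n_bands)
energies = np.array([np.sum(np.abs(b)**2) for b in bands])
active = energies > 1e-6 * energies.sum()   # assumed activity test

kept = sum(len(b) for b, a in zip(bands, active) if a)
print(f"active subbands: {active.sum()}/{n_bands}, "
      f"coefficients kept: {kept}/{len(X)}")
```

Only the few subbands around the fundamental and the harmonic survive, so the coefficient count drops by roughly the ratio of active to total bandwidth.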
electrical engineering and systems science
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of sensitive information, in the general context of regression with possibly continuous sensitive attributes. We extend the framework of fair empirical risk minimization to this general scenario, covering in this way the whole standard supervised learning setting. Our generalized fairness measure reduces to well-known notions of fairness available in the literature. We derive learning guarantees for our method that imply in particular its statistical consistency, both in terms of the risk and the fairness measure. We then specialize our approach to kernel methods and propose a convex fair estimator in that setting. We test the estimator on a commonly used benchmark dataset (Communities and Crime) and on a new dataset collected at the University of Genova, containing the information of the academic career of five thousand students. The latter dataset provides a challenging real case scenario of unfair behaviour of standard regression methods that benefits from our methodology. The experimental results show that our estimator is effective at mitigating the trade-off between accuracy and fairness requirements.
statistics
A novel adaptive identifier is developed for nonlinear time-delay systems composed of linear, Lipschitz and non-Lipschitz components. To begin with, an identifier is designed for uncertain systems with a priori known delay values, and it is then generalized to systems with unknown delay values. The algorithm ensures asymptotic parameter estimation and state observation by using gradient algorithms. The unknown delays and plant parameters are estimated by using a special equivalent extension of the plant equation. The stability of the algorithms is established via the solvability of linear matrix inequalities. Simulation results are provided to support the developed identifier design and to illustrate the efficiency of the proposed synthesis procedure.
electrical engineering and systems science
Propagation of photons (or of any spin-1 boson) is of interest in different kinds of non-trivial background, including a thermal bath, or a background magnetic field, or both. We give a unified treatment of all such cases, casting the problem as a matrix eigenvalue problem. The matrix in question is not a normal matrix, and therefore care should be given to distinguish the right eigenvectors from the left eigenvectors. The polarization vectors are shown to be right eigenvectors of this matrix, and the polarization sum formula is seen as the completeness relation of the eigenvectors. We show how this method is successfully applied to different non-trivial backgrounds.
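A small numerical illustration of the distinction the abstract draws: for a non-normal matrix, right and left eigenvectors differ, and the completeness relation reads $\sum_i |R_i\rangle\langle L_i| / \langle L_i|R_i\rangle = \mathbb{1}$. The 3x3 matrix below is a toy stand-in, not a physical polarization tensor.

```python
# Right/left eigenvectors of a non-normal matrix and their completeness.
import numpy as np
from scipy.linalg import eig

M = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 0.3],
              [0.2, 0.0, 3.0]])          # non-normal: M @ M.T != M.T @ M

w, vl, vr = eig(M, left=True, right=True)  # eigenvalues, left and right vectors

resolution = np.zeros((3, 3), dtype=complex)
for i in range(3):
    L, R = vl[:, i].conj(), vr[:, i]
    resolution += np.outer(R, L) / (L @ R)  # |R_i><L_i| / <L_i|R_i>

print(np.allclose(resolution, np.eye(3)))  # True: the eigenvectors are complete
```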
high energy physics phenomenology
We report calculations of the specific heat ($c_v$) and thermal conductivity ($\kappa$), using the Einstein and Debye models, for rock salt (NaCl) and the oxides Na$_x$CoO$_2$, SrTiO$_3$, and LiNbO$_3$. In the calculation, the longitudinal (L) and transverse (T) sound velocities ($v_L$, $v_T$) were estimated from the acoustic phonon dispersions ($\sim\omega/K$) in the above materials, and the average sound velocities ($v_a$) were input into the Debye model for the $\kappa$ equation; the results were compared with those of the Einstein model. In some oxides, $v_a$ is relatively reduced at slightly high $v_T$ ($v_T$/$v_L$=0.3-0.5). The relations among $\kappa$, $v_a$ and $T$ were visualized as contour plots with a view to realizing low $\kappa$ values for thermoelectric applications.
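For reference, the textbook Debye specific heat can be evaluated numerically as below; the Debye temperature used for NaCl is a commonly quoted literature value, not taken from this paper.

```python
# Debye-model specific heat: c_v = 9 N k_B (T/T_D)^3 * integral_0^{T_D/T}
# of x^4 e^x / (e^x - 1)^2 dx, approaching the Dulong-Petit value 3 N k_B.
import numpy as np
from scipy.integrate import quad

k_B = 1.380649e-23  # J/K

def debye_cv(T, T_D, n_atoms=1.0):
    integrand = lambda x: x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
    val, _ = quad(integrand, 0.0, T_D / T)
    return 9.0 * n_atoms * k_B * (T / T_D)**3 * val

T_D = 321.0  # commonly quoted Debye temperature of NaCl (K)
for T in (10, 100, 300, 1000):
    print(T, debye_cv(T, T_D) / (3 * k_B))  # ratio -> 1 in the classical limit
```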
condensed matter
An important challenge in big data is the identification of important variables. In this paper, we propose methods of discovering variables with non-standard univariate marginal distributions. The conventional moments-based summary statistics can be well-adopted for that purpose, but their sensitivity to outliers can lead to selection based on a few outliers rather than on distributional shape such as bimodality. To address this type of non-robustness, we consider the L-moments. Using these in practice, however, has a limitation because they do not take zero values at the Gaussian distributions to which the shape of a marginal distribution is most naturally compared. As a remedy, we propose Gaussian Centered L-moments, which share the advantages of the L-moments but have zeros at the Gaussian distributions. The strength of Gaussian Centered L-moments over other conventional moments is shown in both theoretical and practical aspects, such as their performance in screening important genes in cancer genetics data.
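The motivating point is easy to verify numerically with the standard (uncentered) sample L-moments: the L-kurtosis of a Gaussian is about 0.1226, not zero. The snippet uses the classical probability-weighted-moment estimators; the paper's Gaussian Centered variants are a modification not shown here.

```python
# Sample L-moments via unbiased probability-weighted moments b_r.
import numpy as np
from math import comb

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b = [np.mean([comb(i, r) / comb(n - 1, r) * x[i] for i in range(n)])
         for r in range(4)]
    l1 = b[0]
    l2 = 2*b[1] - b[0]
    l3 = 6*b[2] - 6*b[1] + b[0]
    l4 = 20*b[3] - 30*b[2] + 12*b[1] - b[0]
    return l1, l2, l3, l4

rng = np.random.default_rng(1)
l1, l2, l3, l4 = sample_l_moments(rng.normal(size=10_000))
print("tau3 (L-skewness):", l3 / l2)   # ~ 0 for a Gaussian
print("tau4 (L-kurtosis):", l4 / l2)   # ~ 0.1226 for a Gaussian, not 0
```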
statistics
Accurate cell detection and counting in the image-based ELISpot and FluoroSpot immunoassays is a challenging task. Recently proposed methodology matches human accuracy by leveraging knowledge of the underlying physical process of these assays and using proximal optimization methods to solve an inverse problem. Nonetheless, thousands of computationally expensive iterations are often needed to reach a near-optimal solution. In this paper, we exploit the structure of the iterations to design a parameterized computation graph, SpotNet, that learns the patterns embedded within several training images and their respective cell information. Further, we compare SpotNet to a convolutional neural network layout customized for cell detection. We show empirical evidence that, while both designs obtain a detection performance on synthetic data far beyond that of a human expert, SpotNet is easier to train and obtains better estimates of particle secretion for each cell.
electrical engineering and systems science
We find a necessary condition for zero divisors in complex group algebras of torsion-free groups.
mathematics
Speaker identity is one of the important characteristics of human speech. In voice conversion, we change the speaker identity from one to another, while keeping the linguistic content unchanged. Voice conversion involves multiple speech processing techniques, such as speech analysis, spectral conversion, prosody conversion, speaker characterization, and vocoding. With the recent advances in theory and practice, we are now able to produce human-like voice quality with high speaker similarity. In this paper, we provide a comprehensive overview of the state-of-the-art of voice conversion techniques and their performance evaluation methods from the statistical approaches to deep learning, and discuss their promise and limitations. We will also report the recent Voice Conversion Challenges (VCC), the performance of the current state of technology, and provide a summary of the available resources for voice conversion research.
electrical engineering and systems science
Phase separation of a saturated micellar network, as a result of cross-linking of branched micelles, is established in mixed solutions of the anionic surfactant sodium laurylethersulfate (SLES) and the zwitterionic cocamidopropyl betaine (CAPB) in the presence of divalent counterions: Ca2+, Zn2+ and Mg2+. The saturated network appears in the form of droplets, which are heavier than water and sediment at the bottom of the vessel. In the case of Mg2+, the sedimented drops coalesce and form a separate multiconnected micellar phase - a supergiant surfactant micelle. For this phase, the rheological flow curves show Newtonian and shear-thinning regions. The appearance/disappearance of the Newtonian region marks the onset of formation of the saturated network. The addition of small organic molecules (fragrances) to the multiconnected micellar phase leads to an almost spontaneous formation of oil-in-water nanoemulsion. The nanoemulsification capacity of the multiconnected micellar phase decreases with increasing volume of the oil molecule. A possible role of the network junctions in the nanoemulsification process can be anticipated. The properties of the multiconnected micellar phases could find applications in extraction and separation processes, in drug/active delivery, and for nanoemulsification at minimal energy input.
condensed matter
Complete characterization of states and processes that occur within quantum devices is crucial for understanding and testing their potential to outperform classical technologies for communications and computing. However, solving this task with current state-of-the-art techniques becomes unwieldy for large and complex quantum systems. Here we realize and experimentally demonstrate a method for complete characterization of a quantum harmonic oscillator based on an artificial neural network known as the restricted Boltzmann machine. We apply the method to optical homodyne tomography and show it to allow full estimation of quantum states based on a smaller amount of experimental data compared to state-of-the-art methods. We link this advantage to reduced overfitting. Although our experiment is in the optical domain, our method provides a way of exploring quantum resources in a broad class of large-scale physical systems, such as superconducting circuits, atomic and molecular ensembles, and optomechanical systems.
quantum physics
The newly developed Core Imaging Library (CIL) is a flexible plug-and-play library for tomographic imaging with a specific focus on iterative reconstruction. CIL provides building blocks for tailored regularised reconstruction algorithms and explicitly supports multichannel tomographic data. In the first part of this two-part publication, we introduced the fundamentals of CIL. This paper focuses on applications of CIL for multichannel data, e.g., dynamic and spectral. We formalise different optimisation problems for colour processing, dynamic and hyperspectral tomography and demonstrate CIL's capabilities for designing state-of-the-art reconstruction methods through case studies and code snapshots.
physics
We have detected Egyptian blue pigment in the paint layer of the "Birch. Spring" painting by Robert Falk (1907); we have also found this pigment in the paints of the sketch drawn on the canvas back side. This is probably the first discovery of Egyptian blue in a 20th century work of art. We have analyzed a modern commercial Egyptian blue pigment (Kremer) and found it to be suitable as a standard for photoluminescence spectral analysis. The characteristic photoluminescence band of CaCuSi$_4$O$_{10}$ reaches maximum at the wavelength of about 910 nm. The luminescence is efficiently excited by incoherent green or blue light. The study demonstrates that the photoluminescence spectral micro analysis using excitation by incoherent light can be efficiently used for the identification of luminescent pigments in paint layers of artworks.
physics
We present a calculation of higgsino and gaugino pair production at the LHC at next-to-next-to-leading logarithmic (NNLL) accuracy, matched to approximate next-to-next-to-leading order (aNNLO) QCD corrections. We briefly review the formalism for the resummation of large threshold logarithms and highlight the analytical results required at aNNLO+NNLO accuracy. Our numerical results are found to depend on the mass and nature of the produced charginos and neutralinos. The differential and total cross sections for light higgsinos, which like sleptons are produced mostly at small x and in the s-channel, are found to be again moderately increased with respect to our previous results. The differential and total cross sections for gauginos are, however, not increased any more due to the fact that gauginos, like squarks, are now constrained by ATLAS and CMS to be heavier than about 1 TeV, so that also t- and u-channels play an important role. The valence quarks probed at large x then also induce substantially different cross sections for positively and negatively charged gauginos. The higgsino and gaugino cross sections are both further stabilized at aNNLO+NNLL with respect to the variation of renormalization and factorization scales. We also now take mixing in the squark sector into account and study the dependence of the total cross sections on the squark and gluino masses as well as the trilinear coupling controlling the mixing in particular in the sbottom sector.
high energy physics phenomenology
The Hilbert space formalism describes causality as a statistical relation between initial experimental conditions and final measurement outcomes, expressed by the inner products of state vectors representing these conditions. This representation of causality is in fundamental conflict with the classical notion that causality should be expressed in terms of the continuity of intermediate realities. Quantum mechanics essentially replaces this continuity of reality with phase sensitive superpositions, all of which need to interfere in order to produce the correct conditional probabilities for the observable input-output relations. In this paper, I investigate the relation between the classical notion of reality and quantum superpositions by identifying the conditions under which the intermediate states can have real external effects, as expressed by measurement operators inserted into the inner product. It is shown that classical reality emerges at the macroscopic level, where the relevant limit of the measurement resolution is given by the variance of the action around the classical solution. It is thus possible to demonstrate that the classical notion of objective reality emerges only at the macroscopic level, where observations are limited to low resolutions by a lack of sufficiently strong intermediate interactions. This result indicates that causality is more fundamental to physics than the notion of an objective reality, which means that the apparent contradictions between quantum physics and classical physics may be resolved by carefully distinguishing between observable causality and unobservable sequences of hypothetical realities "out there".
quantum physics
A unification of left-right $\rm{SU}(3)_\rm{L}\times \rm{SU}(3)_\rm{R}$, colour $\rm{SU}(3)_\rm{C}$ and family $\rm{SU}(3)_\rm{F}$ symmetries in a maximal rank-8 subgroup of ${\rm{E}}_8$ is proposed as a landmark for future explorations beyond the Standard Model (SM). We discuss the implications of this scheme in a supersymmetric (SUSY) model based on the trinification gauge $\left[\rm{SU}(3)\right]^3$ and global $\rm{SU}(3)_\rm{F}$ family symmetries. Among the key properties of this model are the unification of SM Higgs and lepton sectors, a common Yukawa coupling for chiral fermions, the absence of the $\mu$-problem, gauge couplings unification and proton stability to all orders in perturbation theory. The minimal field content consistent with a SM-like effective theory at low energies is composed of one $\mathrm{E}_6$ $27$-plet per generation as well as three gauge and one family $\rm{SU}(3)$ octets inspired by the fundamental sector of ${\rm{E}}_8$. The details of the corresponding (SUSY and gauge) symmetry breaking scheme, multi-scale gauge couplings' evolution, and resulting effective low-energy scenarios are discussed.
high energy physics phenomenology
Plastic scintillation detectors are increasingly used to measure dose distributions in the context of radiotherapy treatments. Their water-equivalence, real-time response and high spatial resolution distinguish them from traditional detectors, especially in complex irradiation geometries. Their range of applications could be further extended by embedding scintillators in a deformable matrix mimicking anatomical changes. In this work, we characterized signal variations arising from the translation and rotation of scintillating fibers with respect to a camera. Corrections are proposed using stereo vision techniques and two sCMOS cameras complementing a CCD camera. The study was extended to the case of a prototype real-time deformable dosimeter comprising an array of 19 scintillating fibers. The signal-to-angle relationship follows a Gaussian distribution (FWHM = 52{\deg}), whereas the intensity variation from radial displacement follows the inverse square law. Tracking the position and angle of the fibers enabled the correction of these spatial dependencies. The detecting system provides an accuracy and precision of respectively 0.008 cm and 0.03 cm on the position detection. This resulted in an uncertainty of 2{\deg} on the angle measurement. Displacing the dosimeter by $\pm$3 cm in depth resulted in relative intensities of 100$\pm$10% (mean $\pm$ standard deviation) relative to the reference position. Applying corrections reduced the variations, resulting in relative intensities of 100$\pm$1%. Similarly, for lateral displacements of $\pm$3 cm, intensities went from 98$\pm$3% to 100$\pm$1% after the correction. Therefore, accurate correction of the signal collected by a camera imaging the output of scintillating elements in a 3D volume is possible. This work paves the way to the development of real-time scintillator-based deformable dosimeters.
physics
This paper investigates a new class of non-convex optimization problems, which provides a unified framework for linear precoding in single/multi-user multiple-input multiple-output (MIMO) channels with arbitrary input distributions. The new optimization is called generalized quadratic matrix programming (GQMP). Due to the nondeterministic polynomial time (NP)-hardness of GQMP problems, instead of seeking globally optimal solutions, we propose an efficient algorithm which is guaranteed to converge to a Karush-Kuhn-Tucker (KKT) point. The idea behind this algorithm is to construct explicit concave lower bounds for non-convex objective and constraint functions, and then solve a sequence of concave maximization problems until convergence. In terms of application, we consider a downlink underlay secure cognitive radio (CR) network, where each node has multiple antennas. We design linear precoders to maximize the average secrecy (sum) rate with finite-alphabet inputs and statistical channel state information (CSI) at the transmitter. The precoding problems under secure multicast/broadcast scenarios are GQMP problems, and thus they can be solved efficiently by our proposed algorithm. Several numerical examples are provided to show the efficacy of our algorithm.
electrical engineering and systems science
We present algorithms for performing data-driven stochastic reachability as an addition to SReachTools, an open-source stochastic reachability toolbox. Our method leverages a class of machine learning techniques known as kernel embeddings of distributions to approximate the safety probabilities for a wide variety of stochastic reachability problems. By representing the probability distributions of the system state as elements in a reproducing kernel Hilbert space, we can learn the "best fit" distribution via a simple regularized least-squares problem, and then compute the stochastic reachability safety probabilities as simple linear operations. This technique admits finite sample bounds and has known convergence in probability. We implement these methods as part of SReachTools, and demonstrate their use on a double integrator system, a million-dimensional repeated planar quadrotor system, and a cart-pole system with a black-box neural network controller.
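A bare-bones sketch of the regularized least-squares step described above, with an assumed Gaussian kernel and toy one-step dynamics; this is not the SReachTools API, only an illustration of estimating a one-step safety probability from samples.

```python
# Conditional kernel embedding: learn alpha = (G + n*lam*I)^{-1} y from
# transition samples, then evaluate P(next state in safe set | x) linearly.
import numpy as np

def gram(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(-1, 1, size=(n, 1))             # current states
y = 0.8 * x + 0.1 * rng.normal(size=(n, 1))     # sampled next states (toy dynamics)
safe = ((y > -0.5) & (y < 0.5)).astype(float).ravel()  # indicator of safe set

lam = 1e-3
G = gram(x, x)
alpha = np.linalg.solve(G + n * lam * np.eye(n), safe)  # regularized least squares

x_test = np.array([[0.0], [0.9]])
print(gram(x_test, x) @ alpha)  # estimated safety probabilities at test states
```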
mathematics
We present post-process neutron capture computations for Asymptotic Giant Branch stars of 1.5 to 3 Mo and metallicities -1.3 to 0.1. The reference stellar models are computed with the FRANEC code, using the Schwarzschild criterion for convection. Motivations for this choice are outlined. We assume that MHD processes induce the penetration of protons below the convective boundary when the third dredge-up occurs. There, the 13C(alpha,n)16O neutron source can subsequently operate, merging its effects with those of the 22Ne(alpha,n)25Mg reaction, activated at the temperature peaks characterizing AGB stages. This work has three main scopes. i) We provide a grid of abundance yields, as produced through our MHD mixing scheme, uniformly sampled in mass and metallicity. From it, we deduce that the solar s-process distribution, as well as the abundances in recent stellar populations, can be accounted for without the need of the extra primary-like contributions suggested in the past. ii) We formulate analytical expressions for the mass of the 13C pockets generated, in order to allow easy verification of our findings. iii) We compare our results with observations of evolved stars and with isotopic ratios in presolar SiC grains, also noticing how some flux tubes should survive turbulent disruption, carrying C-rich materials into the winds even when the envelope is O-rich. This wind phase is approximated through the G component of AGB s-processing. We conclude that MHD-induced mixing is adequate to drive slow neutron capture phenomena accounting for the observations. Our prescriptions should permit its inclusion into current stellar evolutionary codes.
astrophysics
In an effort to probe the origin of surface brightness profile (SBP) breaks widely observed in nearby disk galaxies, we carry out a comparative study of stellar population profiles of 635 disk galaxies selected from the MaNGA spectroscopic survey. We classify our galaxies into single exponential (TI), down-bending (TII) and up-bending (TIII) SBP types, and derive their spin parameters and radial profiles of age/metallicity-sensitive spectral features. Most TII (TIII) galaxies have down-bending (up-bending) star formation rate (SFR) radial profiles, implying that abrupt radial changes of SFR intensities contribute to the formation of both TII and TIII breaks. Nevertheless, a comparison between our galaxies and simulations suggests that stellar migration plays a significant role in weakening down-bending $\Sigma_{\star}$ profile breaks. While there is a correlation between the break strengths of SBPs and age/metallicity-sensitive spectral features for TII galaxies, no such correlation is found for TIII galaxies, indicating that stellar migration may not play a major role in shaping TIII breaks, as is evidenced by a good correspondence between break strengths of $\Sigma_{\star}$ and surface brightness profiles of TIII galaxies. We do not find evidence for galaxy spin being a relevant parameter for forming different SBP types, nor do we find significant differences between the asymmetries of galaxies with different SBP types, suggesting that environmental disturbances or satellite accretion in the recent past do not significantly influence the break formation. By dividing our sample into early and late morphological types, we find that galaxies with different SBP types follow nearly the same tight stellar mass-$R_{25}$ relation, which makes the hypothesis that stellar migration alone can transform SBP types from TII to TI and then to TIII highly unlikely.
astrophysics
This work is an extension of the author's research, in which the modified theory of induced gravity (MTIG) was proposed. The theory describes two systems (stages): Einstein (ES) and "restructuring" (RS). We consider equations with a quadratic potential that are symmetric with respect to scale transformations. The solutions of the equations obtained for the case of spaces defined by the Friedman-Robertson-Walker metric, as well as for a centrally symmetric space, are investigated. In our model, effective gravitational and cosmological "constants" arise, which are defined by the "mean square" of the scalar fields. In the obtained solutions, the values of parameters such as the "Hubble parameter" and the gravitational and cosmological "constants" in the RS stage fluctuate near monotonically evolving mean values. These parameters are matched with observational data described as the phenomena of dark energy and dark matter. The MTIG equations for the case of a centrally symmetric gravitational field, in addition to the Schwarzschild-de Sitter solutions, contain solutions that lead to new physical effects at large distances from the center. The Schwarzschild-de Sitter solution becomes unstable and enters an oscillatory regime. For distances greater than a certain critical value, the following effects can appear: deviations from General Relativity and Newton's law of gravitational interaction, and antigravity.
physics
The geometric phase can be used as a fruitful venue of investigation to infer features of the quantum systems. Its application can reach new theoretical frontiers and imply innovative and challenging experimental proposals. Herein, we take advantage of the geometric phase to sense the corrections induced while a neutral particle travels at constant velocity in front of an imperfect sheet in quantum vacuum. As it is already known, two bodies in relative motion at constant velocity experience a quantum contactless dissipative force, known as quantum friction. This force has eluded experimental detection so far due to its small magnitude and short range. However, we give details of an innovative experiment designed to track traces of the quantum friction by measuring the velocity dependence of corrections to the geometric phase. We notice that the environmentally induced corrections can be decomposed in different contributions: corrections induced by the presence of the dielectric sheet and the motion of the particle in quantum vacuum. As the geometric phase accumulates over time, its correction becomes relevant at a relative short timescale, while the system still preserves purity. The experimentally viable scheme presented would be the first one in tracking traces of quantum friction through the study of decoherence effects on a NV center in diamond.
quantum physics
Tensors are widely used to represent multiway arrays of data. The recovery of missing entries in a tensor has been extensively studied, generally under the assumption that entries are missing completely at random (MCAR). However, in most practical settings, observations are missing not at random (MNAR): the probability that a given entry is observed (also called the propensity) may depend on other entries in the tensor or even on the value of the missing entry. In this paper, we study the problem of completing a partially observed tensor with MNAR observations, without prior information about the propensities. To complete the tensor, we assume that both the original tensor and the tensor of propensities have low multilinear rank. The algorithm first estimates the propensities using a convex relaxation and then predicts missing values using a higher-order SVD approach, reweighting the observed tensor by the inverse propensities. We provide finite-sample error bounds on the resulting complete tensor. Numerical experiments demonstrate the effectiveness of our approach.
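A matrix (order-2) analogue of the reweighting idea, purely for illustration: here the propensities are taken as known rather than estimated by the paper's convex relaxation, and a truncated SVD stands in for the higher-order SVD.

```python
# Inverse-propensity-weighted low-rank recovery under MNAR observations.
import numpy as np

rng = np.random.default_rng(4)
n, r = 100, 3
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-3 ground truth

P = 0.1 + 0.8 / (1 + np.exp(-A))     # MNAR: propensity depends on entry value
mask = rng.uniform(size=A.shape) < P

weighted = np.where(mask, A / P, 0.0)     # E[weighted] = A entrywise
U, s, Vt = np.linalg.svd(weighted, full_matrices=False)
A_hat = (U[:, :r] * s[:r]) @ Vt[:r]       # rank-r truncation

print("relative error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```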
statistics
Here, we study the electrical transport and specific heat of the 4$d$-based ferromagnetic material SrRuO$_3$ and its Ti-substituted series SrRu$_{1-x}$Ti$_x$O$_3$ ($x$ $\le$ 0.7). SrRuO$_3$ is a metal and shows itinerant ferromagnetism with transition temperature $T_c$ $\sim$ 160 K. The nonmagnetic Ti$^{4+}$ (3$d^0$) substitution would not only weaken the active Ru-O-Ru channel but is also expected to tune the electronic density and the electron correlation effect. A metal-to-insulator transition has been observed around $x$ $\sim$ 0.4. The nature of charge transport in the paramagnetic-metallic state ($x$ $\leq$ 0.4) and in the insulating state ($x$ $>$ 0.4) follows the modified Mott variable range hopping model. In the ferromagnetic-metallic state, the resistivity shows a $T^2$ dependence below $T_c$, which modifies to a $T^{3/2}$ dependence at low temperature. In the Ti-substituted samples, the temperature range of the $T^{3/2}$ dependence extends to higher temperature. Interestingly, this $T^{3/2}$ dependence dominates in the whole ferromagnetic regime in the presence of a magnetic field. This evolution of the electronic transport behavior can be explained within the framework of Fermi liquid theory and the electron-magnon scattering mechanism. The negative magnetoresistance exhibits a hysteresis and a crossover between negative and positive values with magnetic field, which is connected with the magnetic behavior of the series. The decreasing electronic coefficient of specific heat with $x$ supports the increasing insulating behavior in the present series. We calculate a high Kadowaki-Woods ratio ($x$ $\leq$ 0.3) for SrRuO$_3$, which increases with substitution concentration. This signifies an increasing electronic correlation effect with substitution concentration.
condensed matter
In many real-world applications we are interested in approximating costly functions that are analytically unknown, e.g. complex computer codes. An emulator provides a fast approximation of such functions relying on a limited number of evaluations. Gaussian processes (GPs) are commonplace emulators due to their statistical properties, such as the ability to estimate their own uncertainty. GPs are essentially developed to fit smooth, continuous functions. However, the assumptions of continuity and smoothness are unwarranted in many situations. For example, in computer models where bifurcations or tipping points occur, the outputs can be discontinuous. This work examines the capacity of GPs for emulating step-discontinuous functions. Several approaches are proposed for this purpose. Two special covariance functions/kernels are adapted with the ability to model discontinuities. They are the neural network and Gibbs kernels, whose properties are demonstrated using several examples. Another approach, called warping, is to transform the input space into a new space where a GP with a standard kernel, such as one from the Matern family, is able to predict the function well. The transformation is performed by a parametric map whose parameters are estimated by maximum likelihood. The results show that the proposed approaches have superior performance to GPs with standard kernels in capturing sharp jumps in the true function.
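A minimal input-warping sketch, assuming a known jump location and a hand-fixed tanh warp rather than the maximum-likelihood estimation described above; it only illustrates why warping helps a Matern-kernel GP near a step.

```python
# Fit a step-discontinuous function with a plain GP and with a warped-input
# GP (an extra steep-tanh coordinate that "stretches" the axis at the jump).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

f_true = lambda x: np.where(x < 0.2, np.sin(3 * x), 2.0 + np.sin(3 * x))

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(40, 1))
y = f_true(X[:, 0])

def warp(X, c=0.2, s=80.0):
    # assumed warp: append tanh(s*(x - c)), sharp around the known jump at c
    return np.hstack([X, np.tanh(s * (X - c))])

gp_plain = GaussianProcessRegressor(Matern(nu=1.5)).fit(X, y)
gp_warp = GaussianProcessRegressor(Matern(nu=1.5)).fit(warp(X), y)

Xt = np.linspace(-1, 1, 201)[:, None]
print("mean abs error, plain :", np.abs(gp_plain.predict(Xt) - f_true(Xt[:, 0])).mean())
print("mean abs error, warped:", np.abs(gp_warp.predict(warp(Xt)) - f_true(Xt[:, 0])).mean())
```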
statistics
Static and uniformly rotating, cold and hot white dwarfs are investigated both in Newtonian gravity and in the general theory of relativity, employing the well-known Chandrasekhar equation of state. The mass-radius, mass-central density, radius-central density and other relations of stable white dwarfs with $\mu=A/Z=2$ and $\mu=56/26$ (where $A$ is the average atomic weight and $Z$ is the atomic charge) are constructed for different temperatures. It is shown that near the maximum mass the mass of hot rotating white dwarfs is slightly less than that of cold rotating white dwarfs, though for static white dwarfs the situation is the opposite.
astrophysics
Recent ALMA observations revealed concentric annular structures in several young class-II objects. In an attempt to produce the rings and gaps in some of these systems, they have been modeled numerically with a single embedded planet assuming a locally isothermal equation of state. This is often justified by observations targeting the irradiation-dominated outer regions of disks (approximately 100 au). We test this assumption by conducting hydrodynamics simulations of embedded planets in thin locally isothermal and radiative disks that mimic the systems HD 163296 and AS 209 in order to examine the effect of including the energy equation in a seemingly locally isothermal environment as far as planet-disk interaction is concerned. We find that modeling such disks with an ideal equation of state makes a difference in terms of the number of produced rings and the spiral arm contrast in the disk. Locally isothermal disks produce sharper annular or azimuthal features and overestimate a single planet's gap-opening capabilities by producing multiple gaps. In contrast, planets in radiative disks carve a single gap for typical disk parameters. Consequently, for accurate modeling of planets with semimajor axes up to about 100 au, radiative effects should be taken into account even in seemingly locally isothermal disks. In addition, for the case of AS 209, we find that the primary gap is significantly different between locally isothermal and radiative models. Our results suggest that multiple planets are required to explain the ring-rich structures in such systems.
astrophysics
The typical blazar S5 0716$+$714 is of great interest due to its rapid, large-amplitude variability and high duty cycle of micro-variability in the optical band. We analyze observations in the I, R and V bands obtained with the $1.0m$ telescope at the Weihai Observatory of Shandong University from 2011 to 2018. The model of synchrotron radiation from turbulent cells in a jet has been proposed as a mechanism for explaining the micro-variability seen in blazar light curves. Parameters such as the sizes of turbulent cells, the enhanced particle densities, and the location of the turbulent cells in the jet can be studied using this model. The model predicts a time lag between variations as observed in different frequency bands. An automatic model-fitting method for micro-variability is developed, and the fitting results of our multi-frequency micro-variability observations support the model. The results show that both the amplitude and duration of flares decomposed from the micro-variability light curves conform to the log-normal distribution. The turbulent cell size is within the range of about 5 to 55 AU, and the time lags of the micro-variability flares between the I-R and R-V bands should be several minutes. The time lags obtained from the turbulence model are consistent with the statistical fitting results, and the time lags of the flares are correlated with the time lags of the whole light curve.
astrophysics
A malicious attacker could, by taking control of internet-of-things devices, use them to capture received signal strength (RSS) measurements and perform surveillance on a person's vital signs, activities, and sound in their environment. This article considers an attacker who looks for subtle changes in the RSS in order to eavesdrop sound vibrations. The challenge to the adversary is that sound vibrations cause very low amplitude changes in RSS, and RSS is typically quantized with a significantly larger step size. This article contributes a lower bound on an attacker's monitoring performance as a function of the RSS step size and sampling frequency so that a designer can understand their relationship. Our bound considers the little-known and counter-intuitive fact that an adversary can improve their sinusoidal parameter estimates by making some devices transmit to add interference power into the RSS measurements. We demonstrate this capability experimentally. As we show, for typical transceivers, the RSS surveillance attacker can monitor sound vibrations with remarkable accuracy. New mitigation strategies will be required to prevent RSS surveillance attacks.
electrical engineering and systems science
In this note, we study the emergence of Hamiltonian Berge cycles in random $r$-uniform hypergraphs. For $r\geq 3$, we prove an optimal stopping-time result: if edges are sequentially added to an initially empty $r$-graph, then as soon as the minimum degree is at least 2, the hypergraph almost surely has such a cycle. In particular, this determines the threshold probability for Berge Hamiltonicity of the Erd\H{o}s--R\'enyi random $r$-graph, and we also show that the $2$-out random $r$-graph almost surely has such a cycle. We obtain similar results for \textit{weak Berge} cycles as well, thus resolving a conjecture of Poole.
mathematics
The power spectrum of cosmic microwave background (CMB) lensing will be measured to sub-percent precision with upcoming surveys, enabling tight constraints on the sum of neutrino masses and other cosmological parameters. Measuring the lensing power spectrum involves the estimation of the connected trispectrum of the four-point function of the CMB map, which requires the subtraction of a large Gaussian disconnected noise bias. This reconstruction noise bias receives contributions both from CMB and foreground fluctuations as well as instrument noise (both detector and atmospheric noise for ground-based surveys). The debiasing procedure therefore relies on the quality of simulations of the instrument noise which may be expensive or inaccurate. We propose a new estimator that makes use of at least four splits of the CMB maps with independent instrument noise. This estimator makes the CMB lensing power spectrum completely insensitive to any assumptions made in modeling or simulating the instrument noise. We show that this estimator, in many practical situations, leads to no substantial loss in signal-to-noise. We provide an efficient algorithm for its computation that scales with the number of splits $m$ as $\mathcal{O}(m^2)$ as opposed to a naive $\mathcal{O}(m^4)$ expectation.
astrophysics