text: string (11 to 9.77k characters) | label: string (2 to 104 characters)
A mechanism to generate realistic fermion mass hierarchies based on supersymmetric gauged $U(1)_F$ symmetry in flat five-dimensional (5D) spacetime is proposed. The fifth dimension is compactified on an $S^1/Z_2$ orbifold. The standard model fermions charged under the extra abelian symmetry, along with their superpartners, live in the 5D bulk. Bulk masses of fermions are generated by the vacuum expectation value of the $N=2$ superpartner of the $U(1)_F$ gauge field, and they are proportional to the $U(1)_F$ charges of the respective fermions. This determines the localization of fermions in the extra dimension, which in turn gives rise to exponentially suppressed Yukawa couplings in the effective 4D theory. Anomaly cancellation puts stringent constraints on the allowed $U(1)_F$ charges, which leads to correlations between the masses of quarks and leptons. We perform an extensive numerical scan and obtain several solutions for anomaly-free $U(1)_F$, which describe the observed pattern of fermion masses and mixing with all the fundamental parameters of order unity. It is found that the possible existence of SM singlet neutrinos substantially improves the spectrum of solutions by offering more freedom in choosing $U(1)_F$ charges. The model predicts a $Z^\prime$ boson mediating flavour-violating interactions in both the quark and lepton sectors, with couplings that can be explicitly determined from the Yukawa couplings.
high energy physics phenomenology
Nonlocal sets of orthogonal product states (OPSs) are widely used in quantum protocols owing to their useful properties. Much attention has therefore been paid to constructing nonlocal sets of orthogonal product states, though this is a difficult problem. In this paper, we propose a novel general method to construct a nonlocal set of orthogonal product states in $\mathbb{C}^{d} \otimes \mathbb{C}^{d}$ for $d\geq3$. We give an ingenious proof of the local indistinguishability of those product states. The sets of product states constructed by our method have a very good structure. Subsequently, we give a construction of nonlocal sets of OPSs with fewer members in $\mathbb{C}^{d} \otimes \mathbb{C}^{d}$ for $d\geq3$. On the other hand, we present two methods for constructing nonlocal sets of OPSs in $\mathbb{C}^{m} \otimes \mathbb{C}^{n}$, where $m\geq3$ and $n\geq3.$ Furthermore, we propose the concept of isomorphism for two nonlocal sets of OPSs. Our work is of great help in understanding the structure and classification of locally indistinguishable OPSs.
quantum physics
The search for higher-order feature interactions that are statistically significantly associated with a class variable is of high relevance in fields such as Genetics or Healthcare, but the combinatorial explosion of the candidate space makes this problem extremely challenging in terms of computational efficiency and proper correction for multiple testing. While recent progress has been made regarding this challenge for binary features, we here present the first solution for continuous features. We propose an algorithm which overcomes the combinatorial explosion of the search space of higher-order interactions by deriving a lower bound on the p-value for each interaction, which enables us to massively prune interactions that can never reach significance and to thereby gain more statistical power. In our experiments, our approach efficiently detects all significant interactions in a variety of synthetic and real-world datasets.
statistics
Estimating the quality of a single-photon source is crucial for its use in quantum technologies. The standard test for semiconductor sources is a value of the second-order correlation function of the emitted field below $1/2$ at zero time-delay. This criterion alone provides no information regarding the amplitude of the single-photon contribution for general quantum states. Addressing this question requires the knowledge of additional observables. We derive an effective second-order correlation function, strongly connected to the Mandel-$Q$ parameter and given in terms of both the second-order correlation and the average photon number, that provides a lower bound on the single-to-multi-photon projection ratio. Using both observables individually allows for lower and upper bounds for the single-photon projection. Comparing the tightness of our bounds with those in the literature, we find that relative bounds may be better described using the average photon number, while absolute bounds for low excitation states are tighter using the vacuum projection. Our results show that estimating the quality of a single-photon source based on additional information is very much dependent on what aspect of the quantum state of light one is interested in.
quantum physics
We present predictions for the distribution of rapidity gaps in realistic kinematics of future electron-ion colliders, based on numerical solutions of the original Kovchegov-Levin equation and of its next-to-leading extension taking into account the running of the strong coupling. We find that for the rapidities we have considered, the fixed and the running coupling equations lead to different distributions, rather insensitive to the chosen prescription in the running coupling case. The obtained distributions for the fixed coupling framework exhibit a shape characteristic of a recently proposed partonic picture of diffractive dissociation already at rapidities accessible at future electron-ion colliders. The modification of this shape in the running coupling case can also be understood qualitatively from that picture. Our results confirm the relevance of measurements of such observables for the microscopic understanding of diffractive dissociation in the framework of quantum chromodynamics.
high energy physics phenomenology
Understanding disk dissipation is essential for studying how planets form. Disk gaps and holes, which almost correspond to dust-free regions, are inferred from infrared observations of T Tauri stars (TTS), indicating the existence of a transitional phase between thick accreting disks and debris disks. Transition disks are usually referred to as candidates for newly formed planets. We searched for transition disk candidates belonging to NGC 2264. We characterized accretion, disk, and stellar properties of transition disk candidates and compared them to full disk systems and diskless stars. We modeled the spectral energy distribution (SED) of a sample of 401 TTS, with the Hyperion SED fitting code, using photometric data from the U band to the MIPS band. We used the SED modeling to distinguish transition disk candidates, full disk systems, and diskless stars. We classified $52\%$ of the sample as full disk systems, $41\%$ as diskless stars, and $7\%$ of the systems as transition disk candidates, among which seven systems are new transition disk candidates belonging to the NGC 2264 cluster. The sample of transition disk candidates presents dust in the inner disk similar to anemic disks, according to the $\alpha_{IRAC}$ classification, which shows that anemic disk systems can be candidate transition disks. We show that the presence of a dust hole in the inner disk does not stop the accretion process since $82\%$ of the transition disk candidates accrete and show $H\alpha$ emission, UV excess, and mass accretion rates at the same level as full disk systems. We estimate the inner hole sizes, ranging from $0.1$ to $78$ AU, for the sample of transition disk candidates. In only $18\%$ of the transition disk candidates, the hole size could be explained by X-ray photoevaporation from stellar radiation.
astrophysics
This paper proposes a novel low-complexity suboptimal joint bit and power allocation algorithm for multicarrier systems operating in fading environments. The algorithm jointly maximizes the throughput and minimizes the transmitted power, while guaranteeing a target bit error rate (BER) per subcarrier and meeting a constraint on the total transmit power. Simulation results illustrate the performance of the proposed scheme and demonstrate its superiority over the algorithm in [4] at similar or reduced computational complexity. Furthermore, the results show that the performance of the proposed suboptimal algorithm approaches that of an optimal exhaustive search with significantly lower computational complexity.
electrical engineering and systems science
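The abstract above concerns joint bit and power allocation for multicarrier systems. As a hedged, generic illustration (not the paper's algorithm), the Python sketch below implements a classic Hughes-Hartogs-style greedy bit-loading loop under the usual SNR-gap approximation: each iteration adds one bit to the subcarrier with the smallest incremental power until the power budget is exhausted. The gap value, bit cap, and channel model are illustrative assumptions.

```python
import numpy as np

def greedy_bit_loading(gains, p_total, gap=3.0, max_bits=10):
    """Greedy bit loading: power to carry b bits on subcarrier i is
    gap * (2**b - 1) / gains[i] (SNR-gap approximation for a target BER)."""
    n = len(gains)
    bits = np.zeros(n, dtype=int)
    power = np.zeros(n)
    used = 0.0
    while True:
        # incremental power needed to add one more bit on each subcarrier
        inc = np.where(bits < max_bits, gap * 2.0**bits / gains, np.inf)
        i = int(np.argmin(inc))
        if not np.isfinite(inc[i]) or used + inc[i] > p_total:
            break
        bits[i] += 1
        power[i] += inc[i]
        used += inc[i]
    return bits, power

rng = np.random.default_rng(8)
gains = rng.exponential(1.0, size=16)          # toy fading power gains
bits, power = greedy_bit_loading(gains, p_total=50.0)
print("total bits:", bits.sum(), " total power used:", round(power.sum(), 2))
```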
Counting the number of birds in an open-sky setting has been a challenging problem due to the large size of bird flocks and the fact that birds can overlap. Another difficulty is the lack of accurate training samples, since the cost of labeling images of bird flocks can be extremely high, as each high-resolution sample image can contain thousands of birds. Inspired by recent work on training with synthetic data to perform crowd counting, we design a mechanism to generate a synthetic bird dataset with precise bird counts and the corresponding density maps. We then train a Unet model on the synthetic dataset to perform density map estimation that produces the count for each input. Our method achieves an MSE of approximately 12.4 on a real dataset. In order to build a scalable system for fast bird counting under storage and computational constraints, we use model compression techniques and efficient model structures to increase the inference speed and save storage cost. We reduce the model storage cost from 55 MB to less than 5 MB with minimal loss of accuracy. This paper describes the pipeline for building an efficient bird counting system.
computer science
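As a rough illustration of the synthetic-data idea in the bird-counting abstract above (not the authors' pipeline), the sketch below places point annotations at random pixel locations and blurs them into a density map whose integral is, by construction, the bird count; a Unet-style regressor would be trained to predict such maps. Image size, kernel width, and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_density_map(height, width, n_birds, sigma=3.0, seed=None):
    """Place n_birds point annotations at random pixels and blur them into a
    density map; the sum of the map is (approximately) the true count."""
    rng = np.random.default_rng(seed)
    points = np.zeros((height, width), dtype=np.float64)
    ys = rng.integers(0, height, size=n_birds)
    xs = rng.integers(0, width, size=n_birds)
    np.add.at(points, (ys, xs), 1.0)        # allows overlapping birds on one pixel
    return gaussian_filter(points, sigma=sigma)

density = synthetic_density_map(256, 256, n_birds=500, seed=0)
print(round(density.sum()))                 # close to 500: count = integral of the map
```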
Argon with an admixture of CF4 is expected to be a good candidate for the gas mixture to be used for a time projection chamber (TPC) in the future linear collider experiment because of its small transverse diffusion of drift electrons especially under a strong magnetic field. In order to confirm the superiority of this gas mixture over conventional TPC gases we carried out cosmic ray tests using a GEM-based TPC operated mostly in Ar-CF4-isobutane mixtures under 0 - 1 T axial magnetic fields. The measured gas properties such as gas gain and transverse diffusion constant as well as the observed spatial resolution are presented.
physics
RES-NOVA is a newly proposed experiment for the investigation of astrophysical neutrino sources with archaeological Pb-based cryogenic detectors. RES-NOVA will exploit Coherent Elastic neutrino-Nucleus Scattering (CE$\nu$NS) as its detection channel, and thus it will be equally sensitive to all neutrino flavors produced by Supernovae (SNe). RES-NOVA, with a total active volume of only (60 cm)$^3$ and an energy threshold of 1 keV, will probe the entire Milky Way Galaxy for (failed) core-collapse SNe with $> 3 \sigma$ detection significance. The high detector modularity also makes RES-NOVA ideal for reconstructing the main parameters (e.g. average neutrino energy, star binding energy) of SNe occurring in our vicinity, without deterioration of the detector performance caused by the high neutrino interaction rate. For the first time, distances $<3$ kpc can be surveyed, similar to those at which all known past galactic SNe occurred. We discuss the RES-NOVA potential, accounting for a realistic setup, considering the detector geometry, modularity and background level in the region of interest. We report on the RES-NOVA background model and on the sensitivity to SN neutrinos as a function of the distance travelled by neutrinos.
astrophysics
Consider random polynomials of the form $G_n = \sum_{i=0}^n \xi_i p_i$, where the $\xi_i$ are i.i.d. non-degenerate complex random variables, and $\{p_i\}$ is a sequence of orthonormal polynomials with respect to a regular measure $\tau$ supported on a compact set $K$. We show that the normalized counting measure of the zeros of $G_n$ converges weakly almost surely to the equilibrium measure of $K$ if and only if $\mathbb E \log(1 + |\xi_0|) < \infty$. This generalizes the corresponding result of Ibragimov-Zaporozhets in the case when $p_i(z) = z^i$. We also show that the normalized counting measure of the zeros of $G_n$ converges weakly in probability to the equilibrium measure of $K$ if and only if $\mathbb P (|\xi_0| > e^n) = o(n^{-1})$. Our proofs rely on results from small ball probability and exploit the structure of general orthogonal polynomials. Our methods also work for sequences of asymptotically minimal polynomials in $L^p(\tau)$, where $p \in (0, \infty]$. In particular, sequences of $L^p$-minimal polynomials and (normalized) Faber and Fekete polynomials fall into this class.
mathematics
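The random-polynomial abstract above admits a quick numerical illustration for the special case $K=[-1,1]$, where the orthogonal polynomials can be taken to be Chebyshev polynomials (orthogonal, up to normalization, for the arcsine measure) and the equilibrium measure is the arcsine law. The sketch below is only a heuristic check, not part of the paper: it compares the empirical distribution of the real parts of the zeros of a random Chebyshev series with the arcsine CDF.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)
n = 200
# i.i.d. Gaussian coefficients in the Chebyshev basis on K = [-1, 1]
coef = rng.standard_normal(n + 1)
roots = C.chebroots(coef)

# Equilibrium measure of [-1,1]: arcsine law with CDF 1/2 + arcsin(x)/pi.
# Most zeros cluster near the segment, so compare real parts of near-real roots.
x = np.sort(np.real(roots[np.abs(np.imag(roots)) < 0.1]))
emp_cdf = np.arange(1, len(x) + 1) / len(x)
arcsine_cdf = 0.5 + np.arcsin(np.clip(x, -1.0, 1.0)) / np.pi
print("number of near-real zeros:", len(x))
print("max CDF discrepancy vs arcsine law:", np.abs(emp_cdf - arcsine_cdf).max())
```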
331 models constitute an extension of the Standard Model (SM) obtained by enlarging the SM gauge group $SU(3)_\text{C}\times SU(2)_\text{L}\times U(1)_Y$ to the group $SU(3)_\text{C}\times SU(3)_\text{L}\times U(1)_X$. We investigate how a non-minimal 331 model may embed lepton flavour universality violating contributions to $b\to s\ell\ell$ processes without introducing lepton flavour violation, as suggested by the recent LHCb measurements of the ratios $R_K$ and $R_{K^*}$. We discuss the model-independent scenarios of New Physics in $b\to s\ell\ell$ currently favoured by the data that could be accommodated by this model and consider a few phenomenological constraints on this model.
high energy physics phenomenology
In this paper, the flexibility, versatility and predictive power of kernel regression are combined with now abundantly available network data to create regression models with even greater predictive performance. Building on previous work featuring generalized linear models built in the presence of network cohesion data, we construct a kernelized extension that captures subtler nonlinearities in extremely high-dimensional spaces and also produces far better predictive performance. Applications to simulated and real-life data, with seamless yet substantial adaptation, demonstrate the appeal and strength of our work.
statistics
Piezoelectric capacitive NanoGenerators (NG) based on vertically grown crystalline zinc oxide nanowires (ZnO-NWs) have been fabricated using a low-cost and scalable hydrothermal method on gold-coated silicon substrates, where the gold coating serves as both a seed layer and a conductive bottom electrode. Morphological and structural characterizations demonstrate that the obtained ZnO NWs are dense, uniformly distributed, vertically well aligned and exhibit good crystal quality. The piezoelectric NG consists of ZnO-NWs grown on a gold-coated silicon substrate, a parylene-C matrix, a titanium/aluminium top electrode and a poly(dimethylsiloxane) (PDMS) encapsulating layer. In order to enhance the NG performance, which is the main goal of this study, two distinctly different post-growth treatments, namely thermal annealing in ambient air and cryo-cooling by immersion in liquid nitrogen, are applied and their effects studied. High NG performance is achieved via the combination of high-quality NW growth and subsequent post-growth treatment. Superior overall NG performance is observed with cryo-cooling post-treatment of optimal duration compared to thermal annealing, which highlights the simplicity and novelty of this work. The proposed strategies highlight the role of post-growth treatments in the fabrication of high-performance functional NGs to be incorporated into future smart objects.
condensed matter
Light dark sectors in thermal contact with the Standard Model naturally produce the observed relic dark matter abundance and are the targets of a broad experimental search program. A key light dark sector model is the pseudo-Dirac fermion with a dark photon mediator. The dynamics of the fermionic excited states are often neglected. We consider scenarios in which a nontrivial abundance of excited states is produced and their subsequent de-excitation yields interesting electromagnetic signals in direct detection experiments. We study three mechanisms of populating the excited state: a primordial excited fraction, a component up-scattered in the sun, and a component up-scattered in the Earth. We find that the fractional abundance of primordial excited states is generically depleted to exponentially small fractions in the early universe. Nonetheless, this abundance can produce observable signals in current dark matter searches. MeV-scale dark matter with thermal cross sections and higher can be probed by down-scattering following excitation in the sun. Up-scatters of GeV-scale dark matter in the Earth can give rise to signals in current and upcoming terrestrial experiments and X-ray observations. We comment on the possible relevance of these scenarios to the recent excess in XENON1T.
high energy physics phenomenology
The discovery of gravitational waves (GWs) provides an unprecedented arena to test general relativity, including the gravitational Lorentz invariance violation (gLIV). In the propagation of GWs, a generic gLIV leads to anisotropy, dispersion, and birefringence. GW events constrain the anisotropic birefringence particularly well. Kosteleck\'y and Mewes (2016) performed a preliminary analysis for GW150914. We improve their method and extend the analysis systematically to the whole GW transient catalog, GWTC-1. This is the first global analysis of the spacetime anisotropic Lorentzian structure with a catalog of GWs, where multiple events are crucial in breaking the degeneracy among gLIV parameters. With the absence of abnormal propagation, we obtain new limits on 34 coefficients for gLIV in the nonminimal gravity that surpass previous limits by $\sim 10^2$-$10^5$.
high energy physics phenomenology
We consider the Albeverio-Rabanovich linear representation $\pi$ of the braid group $B_3$. After specializing the indeterminates used in defining the representation to non-zero complex numbers, we prove that the restriction of $\pi$, which is of dimension three, to the pure braid group $P_3$ is irreducible.
mathematics
We consider the expectation value $\langle {\cal W} \rangle$ of the circular BPS Wilson loop in ${\cal N}=2$ superconformal $SU(N)$ gauge theory containing a vector multiplet coupled to two hypermultiplets in rank-2 symmetric and antisymmetric representations. This model admits a regular large $N$ expansion, is planar-equivalent to ${\cal N}=4$ SYM theory and is expected to be dual to a certain orbifold/orientifold projection of AdS$_5\times S^5$ superstring theory. On the string theory side $\langle {\cal W} \rangle$ is represented by the path integral expanded near the same AdS$_2$ minimal surface as in the maximally supersymmetric case. Following the string theory argument in arXiv:2007.08512, we suggest that as in the ${\cal N}=4$ SYM case and in the ${\cal N}=2$ $SU(N) \times SU(N)$ superconformal quiver theory discussed in arXiv:2102.07696, the coefficient of the leading non-planar $1/N^2$ correction in $\langle {\cal W} \rangle$ should have the universal $\lambda^{3/2}$ scaling at large 't Hooft coupling. We confirm this prediction by starting with the localization matrix model representation for $\langle {\cal W} \rangle$. We complement the analytic derivation of the $\lambda^{3/2}$ scaling by a numerical high-precision resummation and extrapolation of the weak-coupling expansion using conformal mapping improved Pad\'e analysis.
high energy physics theory
We study mass deformations of $\mathcal{N}=4$, $d=4$ SYM theory that are spatially modulated in one spatial dimension and preserve some residual supersymmetry. We focus on generalisations of $\mathcal{N}=1^*$ theories and show that it is also possible, for suitably chosen supersymmetric masses, to preserve $d=3$ conformal symmetry associated with a co-dimension one interface. Holographic solutions can be constructed using $D=5$ theories of gravity that arise from consistent truncations of $SO(6)$ gauged supergravity and hence type IIB supergravity. For the mass deformations that preserve $d=3$ superconformal symmetry we construct a rich set of Janus solutions of $\mathcal{N}=4$ SYM theory which have the same coupling constant on either side of the interface. Limiting classes of these solutions give rise to RG interface solutions with $\mathcal{N}=4$ SYM on one side of the interface and the Leigh-Strassler (LS) SCFT on the other, and also to a Janus solution for the LS theory. Another limiting solution is a new supersymmetric $AdS_4\times S^1\times S^5$ solution of type IIB supergravity.
high energy physics theory
The first B-, V-, Rc-, and Ic-band light curves of CSS J022914.4+044340 are presented and analyzed. It is found that CSS J022914.4+044340 is a low mass ratio (0.198 +- 0.005) deep (63.7 +- 7.9%) contact binary, indicating that it is already at the final evolutionary stage of tidally-locked evolution via magnetized wind. Because of the totally eclipsing character, the photometric solutions are reliable. The temperature and the metallicity are determined from the spectroscopic data as T = 5855 +- 15 K and [Fe/H] = -0.842 +- 0.031, respectively. Based on the parallax of Gaia EDR3, the physical parameters of CSS J022914.4+044340 are estimated as M1 = 1.44 (+0.25,-0.22) solar mass, M2 = 0.29 (+0.05,-0.05) solar mass, R1 = 1.26 (+0.08,-0.06) solar radius, R2 = 0.65 (+0.03,-0.04) solar radius, L1 = 1.718 (+0.186,-0.191) solar luminosity, and L2 = 0.416 (+0.039,-0.050) solar luminosity. Combining this with the fractional light contribution of the third body from the photometric solution (54%), the luminosity of the third body is estimated as 2.705 solar luminosity. The third body is inferred to be a subgiant. This explains why the primary component of CSS J022914.4+044340 has a higher mass than similar systems, and why its metallicity is so poor.
astrophysics
We construct the grand partition function of the system of chiral fermions in a uniform magnetic field from Landau levels, through which all thermodynamic quantities can be obtained. Making use of the Abel-Plana formula, these thermodynamic quantities can be expanded as series with respect to a dimensionless variable $b=2eB/T^{2}$. We find that the series expansions of the energy density, pressure, magnetization intensity and magnetic susceptibility contain a singular term with $\ln b^{2}$, while the particle number density, entropy density and heat capacity are power series in $b^{2}$. The asymptotic behaviors of these thermodynamic quantities under extreme conditions are also discussed.
high energy physics theory
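For reference, the Abel-Plana formula invoked in the abstract above reads, in its standard form (the paper's conventions may differ),

```latex
\sum_{n=0}^{\infty} f(n) \;=\; \frac{1}{2} f(0) \;+\; \int_{0}^{\infty} f(x)\,\mathrm{d}x
\;+\; i \int_{0}^{\infty} \frac{f(it) - f(-it)}{e^{2\pi t} - 1}\,\mathrm{d}t ,
```

valid for functions $f$ analytic and suitably decaying in the right half-plane; applied to sums over Landau levels, it separates a smooth integral contribution from the remainder that generates the series in $b$.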
Brain signals could be used to control devices to assist individuals with disabilities. Signals such as electroencephalograms are complicated and hard to interpret. A set of signals is collected and must be classified to identify the intention of the subject. Different approaches have tried to reduce the number of channels before sending them to a classifier. We propose a deep learning-based method for selecting an informative subset of channels that produces high classification accuracy. The proposed network can be trained for an individual subject to select an appropriate set of channels. Reducing the number of channels could reduce the complexity of brain-computer-interface devices. The accuracy of our approach, using only the selected subset of channels, is comparable with that of a model trained on all channels. Hence, our model's temporal and power costs are low, while its accuracy is kept high.
electrical engineering and systems science
Nested simulation arises frequently in financial or input uncertainty quantification problems, where the performance measure is defined as a function of the simulation output mean conditional on the outer scenario. The standard nested simulation samples $M$ outer scenarios and runs $N$ inner replications at each. We propose a new experiment design framework for a problem whose inner replication's inputs are generated from probability distribution functions parameterized by the outer scenario. This structure lets us pool replications from an outer scenario to estimate another scenario's conditional mean via the likelihood ratio method. We formulate a bi-level optimization problem to decide not only which of $M$ outer scenarios to simulate and how many times to replicate at each, but also how to pool these replications such that the total simulation effort is minimized while achieving the same estimation error as the standard nested simulation. The resulting optimal design requires far less simulation effort than $MN$. We provide asymptotic analyses on the convergence rates of the performance measure estimators computed from the experiment design. Empirical results show that our experiment design significantly reduces the simulation cost compared to the standard nested simulation as well as a state-of-the-art design that pools replications via regressions.
statistics
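The pooling idea in the nested-simulation abstract above can be illustrated with a small likelihood-ratio (importance-weighting) example: inner replications generated under one outer scenario are reused to estimate the conditional mean at another scenario by reweighting with the ratio of input densities. The Gaussian input model and quadratic output below are toy assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def simulate_inner(theta, n):
    """Inner replication: input X ~ N(theta, 1), output Y = X**2 (toy model)."""
    x = rng.normal(theta, 1.0, size=n)
    return x, x**2

theta_sampled, theta_target = 0.5, 0.8            # two outer scenarios
x, y = simulate_inner(theta_sampled, n=100_000)   # replications run at theta_sampled only

# Likelihood-ratio estimate of E[Y | theta_target] using the pooled replications.
w = norm.pdf(x, loc=theta_target, scale=1.0) / norm.pdf(x, loc=theta_sampled, scale=1.0)
lr_estimate = np.mean(w * y)
true_value = theta_target**2 + 1.0                # E[X^2] for X ~ N(theta, 1)
print("LR estimate:", lr_estimate, " true value:", true_value)
```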
Godwin's law, i.e. the empirical observation that as an online discussion grows in time, the probability of a comparison with Nazis or Hitler quickly approaches unity, is one of the best-documented facts of the internet. Anticipating the quantum internet, here we show under reasonable model assumptions a polynomial quantum speedup of Godwin's law. Concretely, in quantum discussions, Hitler will be mentioned on average quadratically earlier, and we conjecture that under specific network topologies, even cubic speedups are possible. We also show that the speedup cannot be more than exponential, unless the polynomial hierarchy collapses to a certain finite level. We report on numerical experiments to simulate the appearance of the quantum Godwin law in future quantum internets; the most amazing finding of our studies is that -- unlike quantum computational speedups -- the quantum Godwin effect is not only robust against noise, but actually enhanced by decoherence. We have as yet no theoretical explanation, nor a good application, for this astonishing behaviour, which we dub quantum hyperpiesia.
quantum physics
We describe the homology intersection form associated to regular holonomic GKZ systems in terms of the combinatorics of regular triangulations. Combining this result with the twisted period relation, we obtain a formula of cohomology intersection numbers in terms of a Laurent series. We show that the cohomology intersection number depends rationally on the parameters. We also prove a conjecture of F. Beukers and C. Verschoor on the signature of the monodromy invariant hermitian form. This is a continuation of the previous work arXiv:1904.00565.
mathematics
We present a method for provably defending any pretrained image classifier against $\ell_p$ adversarial attacks. This method, for instance, allows public vision API providers and users to seamlessly convert pretrained non-robust classification services into provably robust ones. By prepending a custom-trained denoiser to any off-the-shelf image classifier and using randomized smoothing, we effectively create a new classifier that is guaranteed to be $\ell_p$-robust to adversarial examples, without modifying the pretrained classifier. Our approach applies to both the white-box and the black-box settings of the pretrained classifier. We refer to this defense as denoised smoothing, and we demonstrate its effectiveness through extensive experimentation on ImageNet and CIFAR-10. Finally, we use our approach to provably defend the Azure, Google, AWS, and ClarifAI image classification APIs. Our code replicating all the experiments in the paper can be found at: https://github.com/microsoft/denoised-smoothing.
computer science
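As a minimal sketch of the denoised-smoothing recipe described above (with placeholder denoiser and classifier, not the paper's models), the snippet adds Gaussian noise to an input, passes it through a denoiser and then a fixed classifier, and majority-votes over noise draws, which is the randomized-smoothing step that yields the robust prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    """Stand-in for a pretrained, non-robust classifier on 2-pixel 'images'."""
    return int(x.sum() > 0.0)

def toy_denoiser(x_noisy, sigma):
    """Stand-in for the custom-trained denoiser (here: simple shrinkage)."""
    return x_noisy / (1.0 + sigma**2)

def smoothed_predict(x, sigma=0.5, n_samples=1000):
    """Randomized smoothing of classifier(denoiser(x + noise)):
    majority vote over Gaussian perturbations of the input."""
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        votes[toy_classifier(toy_denoiser(noisy, sigma))] += 1
    return int(votes.argmax()), votes / n_samples

x = np.array([0.3, 0.2])
print(smoothed_predict(x))   # smoothed prediction and empirical class frequencies
```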
In the context of gauge/gravity duality, we investigate the central charges of a number of 2-dimensional conformal field theories (CFTs) that might live on the boundary of some 3-dimensional (3D) toy models of gravity, from the thermodynamic aspect of black holes. For many black hole solutions, the entropy product of the inner Cauchy and outer event horizons is universal (mass independent). It is proposed that for these solutions, the central charges of the left- and right-moving sectors of the dual CFTs should be the same and one may read off the central charges from the universal entropy product. This provides strong motivation for investigating this prescription for BTZ and Warped AdS$_{3}$ black holes in a number of 3D gravity theories, and we show that the proposal indeed works. One striking result of our analysis is that if the entropy product is not universal in any theory of 3D gravity, then the left and right central charges are not equal.
high energy physics theory
We experimentally demonstrate a bias-free optical quantum random number generator with real-time randomness extraction that directly outputs uniformly distributed random numbers by measuring the vacuum fluctuations of a quantum state. A phase modulator is utilized in the scheme to effectively reduce the influence of deviations between the two arms of the generator caused by imperfect practical devices, which is an innovative solution in the field of quantum random number generators. In the case where the feedback modulation frequency is much faster than the phase jitter, an unbiased result can be obtained by an additional subtraction between the compensation signal and its average value to eliminate residual deviation. A subsequent randomness extractor is applied to eliminate the influence of residual side information introduced by the imperfect devices in the practical system.
quantum physics
It is possible to reduce the discrepancy between the local measurement of the cosmological parameter $H_0$ and the value derived from the $Planck$ measurements of the Cosmic Microwave Background (CMB) by considering contamination of the CMB by emission from some medium around distant extragalactic sources, such as extremely cold coarse-grain dust. Though being distant, such a medium would still be in the foreground with respect to the CMB, and, as any other foreground, it would alter the CMB power spectrum. This could contribute to the dispersion of CMB temperature fluctuations. By generating a few random samples of CMB with different dispersions, we have checked that the increased dispersion leads to a smaller estimated value of $H_0$, the rest of the cosmological model parameters remaining fixed. This might explain the reduced value of the $Planck$-derived parameter $H_0$ with respect to the local measurements. The signature of the distant foreground in the CMB traced by SNe was previously reported by the authors of this paper -- we found a correlation between the SN redshifts, $z_{\rm SN}$, and CMB temperature fluctuations at the SNe locations, $T_{\rm SN}$. Here we have used the slopes of the regression lines $T_{\rm SN}\,/\,z_{\rm SN}$ corresponding to different {\it Planck} wave bands in order to estimate the possible temperature of the distant extragalactic medium, which turns out to be very low, about 5\,K. The most likely ingredient of this medium is coarse-grain ($grey$) dust, which is known to be almost undetectable, except for the effect of dimming remote extragalactic sources.
astrophysics
Despite global connectivity, societies seem to be increasingly polarized and fragmented. This phenomenon is rooted in the underlying complex structure and dynamics of social systems. Far from homogeneously mixing or adopting conforming views, individuals self-organize into groups at multiple scales, ranging from families up to cities and cultures. In this paper, we study the fragmented structure of the American society using mobility and communication networks obtained from geo-located social media data. We find self-organized patches with clear geographical borders that are consistent between physical and virtual spaces. The patches have multi-scale structure ranging from parts of a city up to the entire nation. Their significance is reflected in distinct patterns of collective interests and conversations. Finally, we explain the patch emergence by a model of network growth that combines mechanisms of geographical distance gravity, preferential attachment, and spatial growth. Our observations are consistent with the emergence of social groups whose separated association and communication reinforce distinct identities. Rather than eliminating borders, the virtual space reproduces them as people mirror their offline lives online. Understanding the mechanisms driving the emergence of fragmentation in hyper-connected social systems is imperative in the age of the Internet and globalization.
physics
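The abstract above names three growth mechanisms (geographical distance gravity, preferential attachment, spatial growth). The toy simulator below combines them in the simplest way, with attachment probability proportional to degree divided by a power of distance; the functional form and parameters are assumptions chosen for illustration only, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)

def grow_network(n_nodes=500, gamma=2.0, m=2, spread=0.05):
    """Each new node appears near an existing one (spatial growth) and links to m
    existing nodes with probability ~ degree / distance**gamma (gravity combined
    with preferential attachment)."""
    pos = [rng.random(2)]
    degree = [0]
    edges = []
    for new in range(1, n_nodes):
        anchor = rng.integers(len(pos))                              # spatial growth:
        pos.append(pos[anchor] + spread * rng.standard_normal(2))    # appear near an existing node
        d = np.linalg.norm(np.array(pos[:-1]) - pos[-1], axis=1) + 1e-9
        w = (np.array(degree[:new]) + 1) / d**gamma                  # gravity x preferential attachment
        targets = rng.choice(new, size=min(m, new), replace=False, p=w / w.sum())
        for t in targets:
            edges.append((new, int(t)))
            degree[t] += 1
        degree.append(len(targets))
    return np.array(pos), edges

pos, edges = grow_network()
print(len(pos), "nodes,", len(edges), "edges")
```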
Oceans play a major role in the nature of our planet: about $70\%$ of the Earth is covered by water. Strong currents transport warm water around the world, making life possible and allowing us to harvest their power to produce energy. Yet, oceans also have a much deadlier side. Floods and tsunamis can easily annihilate whole cities and destroy life in seconds. The Earth's climate system is also closely linked to the currents in the ocean due to its large coverage of the Earth's surface; thus, gaining scientific insight into the mechanisms and effects through simulations is of high importance. Deep ocean currents can be simulated by means of wall-bounded turbulent flow simulations. To support these very large scale numerical simulations and enable scientists to interpret their output, we deploy an interactive visualization framework to study sheared thermal convection. The visualizations are based on volume rendering of the temperature field. To address the needs of supercomputer users with different hardware and software resources, we evaluate different volume rendering implementations supported in the ParaView environment: two GPU-based solutions with Kitware's native volume mapper or NVIDIA's IndeX library, and a CPU-only Intel OSPRay-based implementation.
physics
We further advance the study of the notion of computational complexity for 2d CFTs based on a gate set built out of conformal symmetry transformations. Previously, it was shown that by choosing a suitable cost function, the resulting complexity functional is equivalent to geometric (group) actions on coadjoint orbits of the Virasoro group, up to a term that originates from the central extension. We show that this term can be recovered by modifying the cost function, making the equivalence exact. Moreover, we generalize our approach to Kac-Moody symmetry groups, finding again an exact equivalence between complexity functionals and geometric actions. We then determine the optimal circuits for these complexity measures and calculate the corresponding costs for several examples of optimal transformations. In the Virasoro case, we find that for all choices of reference state except for the vacuum state, the complexity only measures the cost associated to phase changes, while assigning zero cost to the non-phase changing part of the transformation. For Kac-Moody groups in contrast, there do exist non-trivial optimal transformations beyond phase changes that contribute to the complexity, yielding a finite gauge invariant result. Furthermore, we also show that the alternative complexity proposal of path integral optimization is equivalent to the Virasoro proposal studied here. Finally, we sketch a new proposal for a complexity definition for the Virasoro group that measures the cost associated to non-trivial transformations beyond phase changes. This proposal is based on a cost function given by a metric on the Lie group of conformal transformations. The minimization of the corresponding complexity functional is achieved using the Euler-Arnold method yielding the Korteweg-de Vries equation as equation of motion.
high energy physics theory
Lattice deformations act on the low-energy excitations of Dirac materials as effective axial vector fields. This makes it possible to directly detect quantum anomalies of Dirac materials via the response to axial gauge fields. We investigate the parity anomaly in Dirac nodal line semimetals induced by lattice vibrations, and establish a topological piezoelectric effect; i.e., periodic lattice deformations generate topological Hall currents that are transverse to the deformation field. The currents induced by this piezoelectric effect are dissipationless and their magnitude is completely determined by the length of the nodal ring, leading to a semi-quantized transport coefficient. Our theoretical proposal can be experimentally realized in various nodal line semimetals, such as CaAgP and Ca$_3$P$_2$.
condensed matter
By performing exact quantum Monte Carlo simulations of a model of interacting Dirac fermions with a staggered potential, we reveal a novel intermediate phase in which the electronic correlations drive the band insulator into a metal and, at larger interaction, drive the metal into a Mott insulator. We also show that the Mott insulating phase is antiferromagnetic. A complete phase diagram is obtained by studying the phase transitions at large staggered potential and interaction strengths, which shows that the intermediate state is robust, occupies a large part of the phase diagram, and should be feasible to detect experimentally.
condensed matter
Background: Depression has become a major health burden worldwide, and effective detection of depression is a great public-health challenge. This Electroencephalography (EEG)-based research explores effective biomarkers for depression recognition. Methods: Resting-state EEG data were collected from 24 major depressive disorder (MDD) patients and 29 normal controls using a 128-channel HydroCel Geodesic Sensor Net (HCGSN). To better identify depression, we extracted different types of EEG features, including linear features, nonlinear features and the functional connectivity feature phase lag index (PLI), to comprehensively analyze the EEG signals of patients with MDD, and we used different feature selection methods and classifiers to evaluate the optimal feature sets. Results: The functional connectivity feature PLI is superior to the linear and nonlinear features. When combining all types of features to classify MDD patients, we obtain the highest classification accuracy of 82.31% using the ReliefF feature selection method and a logistic regression (LR) classifier. Analyzing the distribution of the optimal feature set, we found that intrahemispheric connection edges of PLI were much more numerous than interhemispheric connection edges, and the intrahemispheric connection edges showed significant differences between the two groups. Conclusion: The functional connectivity feature PLI plays an important role in depression recognition. In particular, intrahemispheric connection edges of PLI might be an effective biomarker to identify depression. Statistical results suggest that MDD patients might exhibit functional dysfunction in the left hemisphere.
electrical engineering and systems science
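The phase lag index (PLI) feature highlighted above has a standard definition: the absolute time average of the sign of the instantaneous phase difference between two channels. The sketch below computes it on synthetic signals via the Hilbert transform; it is a generic illustration, not the authors' preprocessing pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """PLI = | mean over time of sign( sin(phase_x - phase_y) ) |."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(dphi))))

fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * rng.standard_normal(t.size)  # lagged copy
print(phase_lag_index(x, y))   # close to 1 for a consistent nonzero phase lag
```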
In this paper, we consider an ionospheric plasma consisting of weakly degenerate electrons and heavy ions. We build our hydrodynamic model by including the quantum diffraction term. By employing Sagdeev's pseudo-potential method, we obtain double layers and soliton structures. We study the dependence of the solitary structures and double layers on the various parameters. The results thus obtained might be helpful in the studies of many high-energy astrophysical phenomena.
physics
The goal of this mostly expository paper is to present several candidates for hyperbolic structures on irreducible Artin-Tits groups of spherical type and to elucidate some relations between them. Most constructions are algebraic analogues of previously known hyperbolic structures on Artin braid groups coming from natural actions of these groups on curve graphs and (modified) arc graphs of punctured disks.
mathematics
We consider the latency minimization problem in a task-offloading scenario, where multiple servers are available to the user equipment for outsourcing computational tasks. To account for the temporally dynamic nature of the wireless links and the availability of the computing resources, we model the server selection as a multi-armed bandit (MAB) problem. In the considered MAB framework, rewards are characterized in terms of the end-to-end latency. We propose a novel online learning algorithm based on the principle of optimism in the face of uncertainty, which outperforms the state-of-the-art algorithms by up to ~1s. Our results highlight the significance of heavily discounting the past rewards in dynamic environments.
electrical engineering and systems science
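The abstract above emphasizes optimism in the face of uncertainty together with heavy discounting of past rewards. As a hedged stand-in for the proposed algorithm (which is not reproduced here), the sketch below runs a discounted-UCB-style server selector on a toy non-stationary latency environment; the discount factor, bonus constant, and reward model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def discounted_ucb(latency_fn, n_servers=4, horizon=2000, gamma=0.9, c=2.0):
    """Pick the server maximizing a discounted-mean reward plus an optimism bonus.
    The reward is the negative end-to-end latency."""
    disc_sum = np.zeros(n_servers)   # discounted sum of rewards per server
    disc_cnt = np.zeros(n_servers)   # discounted pull counts
    choices = []
    for t in range(horizon):
        disc_sum *= gamma
        disc_cnt *= gamma
        if t < n_servers:
            arm = t                                   # initial round-robin
        else:
            mean = disc_sum / np.maximum(disc_cnt, 1e-9)
            bonus = c * np.sqrt(np.log(disc_cnt.sum() + 1) / np.maximum(disc_cnt, 1e-9))
            arm = int(np.argmax(mean + bonus))
        disc_sum[arm] += -latency_fn(arm, t)
        disc_cnt[arm] += 1.0
        choices.append(arm)
    return choices

# Toy non-stationary environment: server 0 degrades halfway through.
latency = lambda a, t: rng.exponential(0.2 if (a != 0 or t < 1000) else 1.0)
picks = discounted_ucb(latency)
print("fraction of picks on server 0 in second half:", np.mean(np.array(picks[1000:]) == 0))
```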
We introduce and study a new class of optimal switching problems, namely switching problems with controlled randomisation, where some extra randomness impacts the choice of switching modes and associated costs. We show that the optimal value of the switching problem is related to a new class of multidimensional obliquely reflected BSDEs. These BSDEs also allow us to construct an optimal strategy and thus to completely solve the initial problem. The other main contribution of our work is to prove new existence and uniqueness results for these obliquely reflected BSDEs. This is achieved by a careful study of the domain of reflection and the construction of an appropriate oblique reflection operator in order to invoke results from [7].
mathematics
Gaussian Processes (GPs) provide a powerful probabilistic framework for interpolation, forecasting, and smoothing, but have been hampered by computational scaling issues. Here we prove that for data sampled on one dimension (e.g., a time series sampled at arbitrarily-spaced intervals), approximate GP inference at any desired level of accuracy requires computational effort that scales linearly with the number of observations; this new theorem enables inference on much larger datasets than was previously feasible. To achieve this improved scaling we propose a new family of stationary covariance kernels: the Latent Exponentially Generated (LEG) family, which admits a convenient stable state-space representation that allows linear-time inference. We prove that any continuous integrable stationary kernel can be approximated arbitrarily well by some member of the LEG family. The proof draws connections to Spectral Mixture Kernels, providing new insight about the flexibility of this popular family of kernels. We propose parallelized algorithms for performing inference and learning in the LEG model, test the algorithm on real and synthetic data, and demonstrate scaling to datasets with billions of samples.
statistics
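The LEG construction above relies on stationary kernels with a stable state-space representation that allows linear-time inference. The sketch below is not the LEG kernel itself, but the standard building block it generalizes: an O(n) Kalman filter for a Matern-1/2 (Ornstein-Uhlenbeck) GP observed with noise at arbitrarily spaced inputs.

```python
import numpy as np

def ou_kalman_filter(t, y, lengthscale=1.0, variance=1.0, noise=0.1):
    """O(n) Kalman filter for a GP with kernel k(d) = variance * exp(-d / lengthscale),
    observed as y_i = f(t_i) + N(0, noise)."""
    m, P = 0.0, variance                               # stationary prior
    means, variances = [], []
    prev_t = t[0]
    for ti, yi in zip(t, y):
        a = np.exp(-(ti - prev_t) / lengthscale)       # transition over the gap
        m, P = a * m, a**2 * P + variance * (1 - a**2)  # predict
        k = P / (P + noise)                            # Kalman gain
        m, P = m + k * (yi - m), (1 - k) * P           # update
        means.append(m)
        variances.append(P)
        prev_t = ti
    return np.array(means), np.array(variances)

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 10, size=2000))             # arbitrarily spaced inputs
f = np.sin(t)
y = f + np.sqrt(0.1) * rng.standard_normal(t.size)
m, v = ou_kalman_filter(t, y)
print("filtering RMSE:", np.sqrt(np.mean((m - f)**2)))
```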
The heart is one of the vital organs of the human body. A minor dysfunction of the heart, even for a short time interval, can be fatal; therefore, efficient monitoring of its physiological state is essential for patients with cardiovascular diseases. In the recent past, various computer-assisted medical imaging systems have been proposed for the segmentation of the organ of interest. However, for the segmentation of the heart using MRI, only a few methods have been proposed, each with its own merits and demerits. For further advancement in this area of research, we analyze automated heart segmentation methods for magnetic resonance images. The analysis is based on deep learning methods that process a full MR scan in a slice-by-slice fashion to predict the desired mask for the heart region. We design two encoder-decoder type fully convolutional neural network models.
electrical engineering and systems science
Double parton scattering (DPS) processes in which there is a perturbative $1\to2$ splitting in both protons overlap with loop corrections to single parton scattering (SPS). Any fundamental theoretical treatment of DPS needs to address this double-counting issue. In this paper, we augment our Monte-Carlo simulation of DPS, dShower, to be able to generate kinematic distributions corresponding to the combination SPS+DPS without double counting. To achieve this, we formulate a fully-differential version of the subtraction scheme introduced in Diehl et al. (JHEP 06 (2017) 083). A shower is attached to the subtraction term, and this is combined with the dShower DPS shower along with the usual SPS shower. We perform a proof-of-concept study of this new algorithm in the context of $\mathrm{Z}^0\mathrm{Z}^0$ production. Once the subtraction term is included, we verify that the results do not depend strongly on the artificial "DPS-SPS demarcation" scale $\nu$. As part of the development of the new algorithm, we improve the kinematics of the $1\to2$ splitting in the DPS shower (and subtraction term), allowing the daughter partons to have a relative transverse momentum. Several reasonable choices for the transverse profile in the $1\to2$ splitting are studied. We find that many kinematic distributions are not strongly affected by the choice, although we do observe some differences in the region where the transverse momenta of both bosons are small.
high energy physics phenomenology
In the area of multi-domain speech recognition, research in the past focused on hybrid acoustic models to build cross-domain and domain-invariant speech recognition systems. In this paper, we empirically examine the difference in behavior between hybrid acoustic models and neural end-to-end systems when mixing acoustic training data from several domains. For these experiments we composed a multi-domain dataset from public sources, with the different domains in the corpus covering a wide variety of topics and acoustic conditions such as telephone conversations, lectures, read speech and broadcast news. We show that for the hybrid models, supplying additional training data from other domains with mismatched acoustic conditions does not increase the performance on specific domains. However, our end-to-end models optimized with a sequence-based criterion generalize better than the hybrid models on diverse domains. In terms of word-error-rate performance, our experimental acoustic-to-word and attention-based models trained on the multi-domain dataset reach the performance of domain-specific long short-term memory (LSTM) hybrid models, thus resulting in multi-domain speech recognition systems that do not suffer in performance relative to domain-specific ones. Moreover, the use of neural end-to-end models eliminates the need for domain-adapted language models during recognition, which is a great advantage when the input domain is unknown.
electrical engineering and systems science
It has been recently proposed by Maldacena and Qi that an eternal traversable wormhole in a two dimensional Anti de Sitter space (${\rm AdS}_2$) is the gravity dual of the low temperature limit of two Sachdev-Ye-Kitaev (SYK) models coupled by a relevant interaction (which we will refer to as spin operator). In this paper, we study spectral and eigenstate properties of this coupled SYK model. We have found that level statistics in the tail of the spectrum, and for a sufficiently weak coupling, shows substantial deviations from random matrix theory which suggests that traversable wormholes are not quantum chaotic. By contrast, for sufficiently strong coupling, corresponding to the black hole phase, level statistics are well described by random matrix theory. This transition in level statistics coincides approximately with a previously reported Hawking-Page transition for weak coupling. We have shown explicitly that this thermodynamic transition turns into a sharp crossover as the coupling increases. Likewise, this critical coupling also corresponds with the one at which the overlap between the ground state and the thermofield double state (TFD) is smallest. In the range of sizes we can reach by exact diagonalization, the ground state is well approximated by the TFD state only in the strong coupling limit. This is due to the fact that the ground state is close to the eigenstate of the spin operator corresponding to the lowest eigenvalue which is an exact TFD state at infinite temperature. In this region, the spectral density is separated into blobs centered around the eigenvalues of the spin operator. For weaker couplings, the exponential decay of coefficients in a tensor product basis, typical of the TFD, becomes power law. Finally, we have also found that the total Hamiltonian has an additional discrete symmetry which has not been reported previously.
high energy physics theory
Within the next 10 years, advances on resource disaggregation will enable full transparency for most Cloud applications: to run unmodified single-machine applications over effectively unlimited remote computing resources. In this article, we present five serverless predictions for the next decade that will realize this vision of transparency -- equivalent to Tim Wagner's Serverless SuperComputer or AnyScale's Infinite Laptop proposals.
computer science
State machine replication protocols, like MultiPaxos and Raft, are a critical component of many distributed systems and databases. However, these protocols offer relatively low throughput due to several bottlenecked components. Numerous existing protocols fix different bottlenecks in isolation but fall short of a complete solution. When you fix one bottleneck, another arises. In this paper, we introduce compartmentalization, the first comprehensive technique to eliminate state machine replication bottlenecks. Compartmentalization involves decoupling individual bottlenecks into distinct components and scaling these components independently. Compartmentalization has two key strengths. First, compartmentalization leads to strong performance. In this paper, we demonstrate how to compartmentalize MultiPaxos to increase its throughput by 6x on a write-only workload and 16x on a mixed read-write workload. Unlike other approaches, we achieve this performance without the need for specialized hardware. Second, compartmentalization is a technique, not a protocol. Industry practitioners can apply compartmentalization to their protocols incrementally without having to adopt a completely new protocol.
computer science
Accurately predicting patients' risk of 30-day hospital readmission would enable hospitals to efficiently allocate resource-intensive interventions. We develop a new method, Categorical Co-Frequency Analysis (CoFA), for clustering diagnosis codes from the International Classification of Diseases (ICD) according to the similarity in relationships between covariates and readmission risk. CoFA measures the similarity between diagnoses by the frequency with which two diagnoses are split in the same direction versus split apart in random forests to predict readmission risk. Applying CoFA to de-identified data from Berkshire Medical Center, we identified three groups of diagnoses that vary in readmission risk. To evaluate CoFA, we compared readmission risk models using ICD majors and CoFA groups to a baseline model without diagnosis variables. We found substituting ICD majors for the CoFA-identified clusters simplified the model without compromising the accuracy of predictions. Fitting separate models for each ICD major and CoFA group did not improve predictions, suggesting that readmission risk may be more homogeneous than heterogeneous across diagnosis groups.
statistics
We theoretically derive and experimentally compare several different ways to access entropy production in a quantum process under feedback control. We focus on a bipartite quantum system realizing an autonomous Maxwell's demon scheme reported by Najera-Santos et al. [Phys.~Rev.~Research 2, 032025(R) (2020)], where information encoded in a demon is consumed to transfer heat from a cold qubit to a hot cavity. By measuring individual quantum trajectories of the joint demon-cavity-qubit system, we compute the entropy production with six distinct expressions derived from different approaches to the system description and its evolution. Each method uses a specific set of trajectories and data processing. Our results provide a unified view on the various meanings of irreversibility in quantum systems and pave the way to the measurement of entropy production beyond thermal frameworks.
quantum physics
We study implications of perturbative unitarity for quasi-single field inflation. Analyzing high energy scattering, we show that non-Gaussianities with $|f_{\rm NL}|\gtrsim1$ cannot be realized without turning on interactions which violate unitarity at a high energy scale. Then, we provide a relation between $f_{\rm NL}$ and the scale of new physics that is required for UV completion. In particular we find that for the Hubble scale $H\gtrsim 6\times 10^{9}$ GeV, Planck-suppressed operators can easily generate too-large non-Gaussianities, and so it is hard to realize successful quasi-single field inflation without introducing a mechanism to suppress quantum gravity corrections. We also generalize the analysis to the regime where the isocurvature modes are heavy and the inflationary dynamics is captured by the inflaton effective theory. Requiring perturbative unitarity of the two-scalar UV models with the inflaton and one heavy scalar, we clarify the parameter space of the $P(X,\phi)$ model which is UV completable by a single heavy scalar.
high energy physics theory
One of the most enigmatic scientific questions concerning inertial particle transport by a turbulent boundary layer flow is the value of the turbulent Schmidt number, defined as the ratio of particle diffusivity to turbulent eddy viscosity. Using direct acoustic measurements of the turbulent particle flux profile, and two-phase flow turbulence-resolving numerical simulations, it is demonstrated that the turbulent dispersion of particles is reduced rather than enhanced compared with predictions of existing literature models. The explanation lies in the misleading use of the settling velocity in quiescent water to estimate the turbulent particle diffusivity, while direct measurements and simulations of the turbulent particle flux support the occurrence of settling retardation.
physics
Seemingly unrelated regression is a natural framework for regressing multiple correlated responses on multiple predictors. The model is very flexible, with multiple linear regression and covariance selection models being special cases. However, its practical deployment in genomic data analysis under a Bayesian framework is limited due to both statistical and computational challenges. The statistical challenge is that one needs to infer both the mean vector and the inverse covariance matrix, a problem inherently more complex than separately estimating each. The computational challenge is due to the dimensionality of the parameter space that routinely exceeds the sample size. We propose the use of horseshoe priors on both the mean vector and the inverse covariance matrix. This prior has demonstrated excellent performance when estimating a mean vector or inverse covariance matrix separately. The current work shows these advantages are also present when addressing both simultaneously. A full Bayesian treatment is proposed, with a sampling algorithm that is linear in the number of predictors. MATLAB code implementing the algorithm is freely available from github at https://github.com/liyf1988/HS_GHS. Extensive performance comparisons are provided with both frequentist and Bayesian alternatives, and both estimation and prediction performances are verified on a genomic data set.
statistics
Employing the algebraic structure of the left brace and the dynamical extensions of cycle sets, we investigate a class of indecomposable involutive set-theoretic solutions of the Yang-Baxter equation having specific imprimitivity blocks. Moreover, we study one-generator left braces of multipermutation level 2.
mathematics
We present PyChain, a fully parallelized PyTorch implementation of end-to-end lattice-free maximum mutual information (LF-MMI) training for the so-called \emph{chain models} in the Kaldi automatic speech recognition (ASR) toolkit. Unlike other PyTorch and Kaldi based ASR toolkits, PyChain is designed to be as flexible and light-weight as possible so that it can be easily plugged into new ASR projects, or other existing PyTorch-based ASR tools, as exemplified respectively by a new project PyChain-example, and Espresso, an existing end-to-end ASR toolkit. PyChain's efficiency and flexibility is demonstrated through such novel features as full GPU training on numerator/denominator graphs, and support for unequal length sequences. Experiments on the WSJ dataset show that with simple neural networks and commonly used machine learning techniques, PyChain can achieve competitive results that are comparable to Kaldi and better than other end-to-end ASR systems.
electrical engineering and systems science
The von Mises distribution is one of the most important distributions in statistics for dealing with circular data. In this paper we consider some basic properties and characterizations of the sine-skewed von Mises distribution.
statistics
This manuscript contributes a general and practical framework for casting a Markov process model of a system at equilibrium as a structural causal model, and carrying out counterfactual inference. Markov processes mathematically describe the mechanisms in the system, and predict the system's equilibrium behavior upon intervention, but do not support counterfactual inference. In contrast, structural causal models support counterfactual inference, but do not identify the mechanisms. This manuscript leverages the benefits of both approaches. We define the structural causal models in terms of the parameters and the equilibrium dynamics of the Markov process models, and counterfactual inference flows from these settings. The proposed approach alleviates the identifiability drawback of the structural causal models, in that the counterfactual inference is consistent with the counterfactual trajectories simulated from the Markov process model. We showcase the benefits of this framework in case studies of complex biomolecular systems with nonlinear dynamics. We illustrate that, in the presence of Markov process model misspecification, counterfactual inference leverages prior data, and therefore estimates the outcome of an intervention more accurately than a direct simulation.
statistics
Self-reinforcing feedback loops in personalization systems are typically caused by users choosing from a limited set of alternatives presented systematically based on previous choices. We propose a Bayesian choice model built on Luce axioms that explicitly accounts for users' limited exposure to alternatives. Our model is fair, in that it does not impose a negative bias towards unpresented alternatives, and practical, in that preference estimates are accurately inferred upon observing a small number of interactions. It also allows efficient sampling, leading to a straightforward online presentation mechanism based on Thompson sampling. Our approach achieves low regret in learning to present upon exploration of only a small fraction of possible presentations. The proposed structure can be reused as a building block in interactive systems, e.g., recommender systems, free of feedback loops.
statistics
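For the choice-model entry above, the following Python sketch shows only a generic Thompson-sampling presentation loop; the catalogue size, presentation size and Dirichlet parameterization are invented for illustration, and the naive count update below is exactly the kind of exposure-unaware inference that the proposed model is designed to improve upon.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 20, 5              # assumed catalogue size and presentation size
alpha = np.ones(n_items)        # Dirichlet pseudo-counts over Luce-type preference weights

true_w = rng.dirichlet(np.ones(n_items))   # unknown ground-truth preferences (synthetic)

for _ in range(2000):
    w_sample = rng.dirichlet(alpha)            # Thompson sampling: draw plausible weights
    shown = np.argsort(w_sample)[-k:]          # present the k currently most promising items
    p = true_w[shown] / true_w[shown].sum()    # Luce choice restricted to the presented set
    chosen = rng.choice(shown, p=p)
    alpha[chosen] += 1.0                       # naive update: count only the chosen item

print("estimated top items:", np.argsort(alpha)[-k:])
print("true top items:     ", np.argsort(true_w)[-k:])
```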
The Nisan-Ronen conjecture states that no truthful mechanism for makespan-minimization when allocating $m$ tasks to $n$ unrelated machines can have approximation ratio less than $n$. Over more than two decades since its formulation, little progress has been made in resolving it and the best known lower bound is still a small constant. This work makes progress towards validating the conjecture by showing a lower bound of $1+\sqrt{n-1}$.
computer science
Mini-EUSO is a telescope observing the Earth in the ultraviolet band from the International Space Station. It is a part of the JEM-EUSO program, paving the way to future larger missions, such as KEUSO and POEMMA, devoted primarily to the observation of Ultra High Energy Cosmic Rays from space. Mini-EUSO is capable of observing Extensive Air Showers generated by Ultra-High Energy Cosmic Rays with an energy above 10^21 eV and detect artificial showers generated with lasers from the ground. Other main scientific objectives of the mission are the search for nuclearites and Strange Quark Matter, the study of atmospheric phenomena such as Transient Luminous Events, meteors and meteoroids, the observation of sea bioluminescence and of artificial satellites and man-made space debris. Mini-EUSO will map the night-time Earth in the UV range (290 - 430 nm), with a spatial resolution of about 6.3 km and a temporal resolution of 2.5 microseconds, through a nadir-facing UV-transparent window in the Russian Zvezda module. The instrument, launched on August 22, 2019 from the Baikonur cosmodrome, is based on an optical system employing two Fresnel lenses and a focal surface composed of 36 Multi-Anode Photomultiplier tubes, 64 channels each, for a total of 2304 channels with single photon counting sensitivity and an overall field of view of 44 degrees. Mini-EUSO also contains two ancillary cameras to complement measurements in the near infrared and visible ranges. In this paper we describe the detector and present the various phenomena observed in the first months of operations.
astrophysics
Randomness expansion where one generates a longer sequence of random numbers from a short one is viable in quantum mechanics but not allowed classically. Device-independent quantum randomness expansion provides a randomness resource of the highest security level. Here, we report the first experimental realization of device-independent quantum randomness expansion secure against quantum side information established through quantum probability estimation. We generate $5.47\times10^8$ quantum-proof random bits while consuming $4.39\times10^8$ bits of entropy, expanding our store of randomness by $1.08\times10^8$ bits at a latency of about $13.1$ h, with a total soundness error $4.6\times10^{-10}$. Device-independent quantum randomness expansion not only enriches our understanding of randomness but also sets a solid base to bring quantum-certifiable random bits into realistic applications.
quantum physics
Let $\{\mu_k\}_{k = 1}^N$ be absolutely continuous probability measures on the real line such that every measure $\mu_k$ is supported on the segment $[l_k, r_k]$ and the density function of $\mu_k$ is nonincreasing on that segment for all $k$. We prove that if $\mathbb{E}(\mu_1) + \dots + \mathbb{E}(\mu_N) = C$ and if $r_k - l_k \le C - (l_1 + \dots + l_N)$ for all $k$, then there exists a transport plan with given marginals supported on the hyperplane $\{x_1 + \dots + x_N = C\}$. This transport plan is an optimal solution of the multimarginal Monge-Kantorovich problem for the repulsive harmonic cost function $\sum_{i, j = 1}^N-(x_i - x_j)^2$.
mathematics
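A small worked instance of the statement above, together with the standard argument for why plans concentrated on the hyperplane are optimal for the repulsive harmonic cost (notation as in the abstract): take $N = 2$ and $\mu_1 = \mu_2$ uniform on $[0,1]$, so $C = 1$ and $r_k - l_k = 1 \le C - (l_1 + l_2) = 1$. The plan $\gamma = (\mathrm{id},\, 1 - \mathrm{id})_{\#}\,\mu_1$ has marginals $\mu_1, \mu_2$ and is supported on $\{x_1 + x_2 = 1\}$. Optimality of any plan supported on the hyperplane follows from the identity
$$\sum_{i,j=1}^{N} -(x_i - x_j)^2 \;=\; -2N \sum_{i=1}^{N} x_i^2 \;+\; 2\Big(\sum_{i=1}^{N} x_i\Big)^2,$$
since the expectation of the first term is fixed once the marginals are fixed, while $\mathbb{E}\big(\sum_i x_i\big)^2 \ge C^2$ with equality exactly when $\sum_i x_i = C$ almost surely.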
A $d$-dimensional random array on a nonempty set $I$ is a stochastic process $\boldsymbol{X}=\langle X_s:s\in {I\choose d}\rangle$ indexed by the set ${I\choose d}$ of all $d$-element subsets of $I$. We obtain structural decompositions of finite, high-dimensional random arrays whose distribution is invariant under certain symmetries. Our first main result is a distributional decomposition of finite, (approximately) spreadable, high-dimensional random arrays whose entries take values in a finite set; the two-dimensional case of this result is the finite version of an infinitary decomposition due to Fremlin and Talagrand. Our second main result is a physical decomposition of finite, spreadable, high-dimensional random arrays with square-integrable entries which is the analogue of the Hoeffding/Efron--Stein decomposition. All proofs are effective. We also present applications of these decompositions in the study of concentration of functions of finite, high-dimensional random arrays.
mathematics
We discuss the extent to which models of Weakly Interacting Massive Particle (WIMP) Dark Matter (DM) at and above the electroweak scale can be probed conclusively in future high energy and astroparticle physics experiments. We consider simplified models with bino-like dark matter and slepton-like coannihilation partners, and find that perturbative models yield the observed relic abundance up to at least 10 TeV. We emphasise that coannihilation can either increase or decrease the dark matter relic abundance. We compute the sensitivity of direct detection experiments to DM-nucleus scattering, consider indirect detection bounds and estimate the sensitivity of future proton colliders to slepton pair production. We find that current and future experiments will be able to probe the Dirac DM models up to at least 10 TeV. However, current and future searches will not be sensitive to models of Majorana dark matter for masses above 2 or 4 TeV, for one or ten coannihilation partners respectively, leaving around 70 % of the parameter space unconstrained. This demonstrates the need for new experimental ideas to access models of coannihilating Majorana dark matter.
high energy physics phenomenology
In this paper we challenge the common assumption that convolutional layers in modern CNNs are translation invariant. We show that CNNs can and will exploit absolute spatial location by learning filters that respond exclusively to particular absolute locations, taking advantage of image boundary effects. Because modern CNN filters have a huge receptive field, these boundary effects operate even far from the image boundary, allowing the network to exploit absolute spatial location all over the image. We give a simple solution to remove spatial location encoding, which improves translation invariance and thus gives a stronger visual inductive bias that particularly benefits small data sets. We broadly demonstrate these benefits on several architectures and various applications such as image classification, patch matching, and two video classification datasets.
computer science
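For the CNN entry above, a toy PyTorch check (tensor sizes are arbitrary) makes the described boundary effect concrete: with zero padding, a filter applied to a perfectly uniform image produces different responses at the border than in the interior, so absolute position leaks into the features.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)  # zero padding, as in typical CNNs
x = torch.ones(1, 1, 8, 8)                                    # a perfectly uniform input image
y = conv(x).squeeze()

# All interior responses are identical, but corner and edge responses differ because the
# zero padding is "visible" there -- which is how a filter can tell where it is in the image.
print(y[4, 4].item(), y[0, 0].item(), y[0, 4].item())
```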
The generation of thin and high density plasma slabs at high repetition rate is a key issue for ultra-high intensity laser applications. We present a scheme to create such plasma slabs, based on the propagation and collision in a gas jet of two counter-propagating blast waves (BW). Each BW is launched by a sudden and local heating induced by a nanosecond laser beam that propagates along the side of the jet. The resulting cylindrical BW expands perpendicular to the beam. The shock front, bent by the gas jet density gradient, pushes and compresses the plasma toward the jet center. By using two parallel ns laser beams, this scheme enables tailoring two opposite sides of the jet independently, while avoiding the damage risks associated with counter-propagating laser beams. A parametric study is performed using two- and three-dimensional hydrodynamic simulations, as well as kinetic simulations. The bending of the BWs combined with the collision in a stagnation regime increases the density by more than 10 times and generates a very thin (down to a few microns), near- to over-critical plasma slab with a high density contrast (> 100) and a lifetime of a few hundred picoseconds. Two-dimensional particle-in-cell simulations are used to study the influence of plasma tailoring on proton acceleration by a high-intensity sub-picosecond laser pulse. Tailoring the plasma not only on the entrance side but also on the exit side of the ps pulse enhances the proton beam collimation, significantly increases the number of high-energy protons, and raises their maximum energy.
physics
The laser flash method is highly regarded due to its applicability to a wide temperature range, from cryogenic temperatures to the melting point of refractory metals, and to extreme environments involving radioactive or hazardous materials. Although instruments implementing this method are mostly produced on a commercial basis by major manufacturers, there is always room for improvement both in terms of experimental methods and data treatment procedures. The measurement noise, whether due to the detector performance or electromagnetic interference, presents a significant problem when accurate determination of thermal properties is desired. Noise resilience of the laser flash method is rarely mentioned in the published literature; there are currently no data treatment procedures which could guarantee adequate performance under any operating conditions. In this paper, a computational framework combining finite-difference solutions of the heat conduction problem with nonlinear optimization techniques based on the use of quasi-Newton direction search and stochastic line search with the Wolfe conditions is presented. The application of this framework to data with varying levels of noise is considered. Finally, cross-verification and validation using an external standard, a commercial and an in-house built laser flash instrument are presented. The open-source software implementing the described computational method is benchmarked against its industrial counterpart.
physics
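For the laser-flash entry above, a hedged sketch of the optimization ingredient only: scipy's BFGS quasi-Newton method (which uses a Wolfe-condition line search) fitting a stand-in least-squares objective. The model, data and parameter values below are placeholders, not the paper's finite-difference heat-conduction solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
data = 1.0 * np.exp(-0.8 * t) + 0.02 * rng.standard_normal(t.size)  # synthetic "measurement"

def objective(p):
    # stand-in forward model: a two-parameter exponential decay fitted by least squares
    model = p[0] * np.exp(-p[1] * t)
    return np.sum((model - data) ** 2)

res = minimize(objective, x0=[0.5, 0.5], method="BFGS")  # quasi-Newton with Wolfe line search
print(res.x)   # should recover roughly (1.0, 0.8)
```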
We use Dirac matrix representations of the Clifford algebra to build fracton models on the lattice and their effective Chern-Simons-like theory. As an example we build lattice fractons in odd $D$ spatial dimensions and their $(D+1)$ effective theory. The model possesses an anti-symmetric $K$ matrix resembling that of hierarchical quantum Hall states. The gauge charges are conserved in sub-dimensional manifolds which ensures the fractonic behavior. The construction extends to any lattice fracton model built from commuting projectors and with tensor products of spin-$1/2$ degrees of freedom at the sites.
condensed matter
We respond to P. Ao's comment in arXiv:1907.09263, which suggests that vortex many-body effects are the origin of Hall sign reversal in few-unit-cell thick Bi-2212 cuprate crystals (Phys. Rev. Lett. 122, 247001 (2019)). Our experimental results are incompatible with the theoretical predictions detailed in Ao's comment.
condensed matter
The nodes of a complex network are often aggregated, whether deliberately or not, because of technical, ethical, or legal limitations, or for privacy reasons. A common example is the geographic position: one may uncover communities in a network of places, or of individuals identified with their typical geographical position, and then aggregate these places into larger entities, such as municipalities, thus obtaining another network. The communities found in the networks obtained at various levels of aggregation may exhibit various degrees of similarity, from full alignment to perfect independence. This is akin to the problem of ecological and atomic fallacies in statistics, or to the Modifiable Areal Unit Problem in geography. We identify the class of community detection algorithms most suitable to cope with node aggregation, and develop an index for aggregability, capturing the extent to which the aggregation preserves the community structure. We illustrate its relevance on real-world examples (mobile phone and Twitter reply-to networks). Our main message is that any node-partitioning analysis performed on aggregated networks should be interpreted with caution, as the outcome may be strongly influenced by the level of aggregation.
physics
Tracking the 6D pose of objects in video sequences is important for robot manipulation. This task, however, introduces multiple challenges: (i) robot manipulation involves significant occlusions; (ii) data and annotations are troublesome and difficult to collect for 6D poses, which complicates machine learning solutions, and (iii) incremental error drift often accumulates in long-term tracking, necessitating re-initialization of the object's pose. This work proposes a data-driven optimization approach for long-term, 6D pose tracking. It aims to identify the optimal relative pose given the current RGB-D observation and a synthetic image conditioned on the previous best estimate and the object's model. The key contribution in this context is a novel neural network architecture, which appropriately disentangles the feature encoding to help reduce domain shift, and an effective 3D orientation representation via Lie algebra. Consequently, even when trained only with synthetic data, the network can work effectively over real images. Comprehensive experiments over benchmarks - existing ones as well as a new dataset with significant occlusions related to object manipulation - show that the proposed approach achieves consistently robust estimates and outperforms alternatives, even though they have been trained with real images. The approach is also the most computationally efficient among the alternatives and achieves a tracking frequency of 90.9Hz.
computer science
We assume a generic real singlet scalar extension of the Standard Model living in the vacuum $(v,w)$ at the electroweak scale, with $v=246$ GeV and $w$ being the Higgs and the singlet scalar vacuum expectation values, respectively. By requiring {\it absolute} vacuum stability for the vacuum $(v,w)$, the positivity condition and perturbativity up to the Planck scale, we show that the viable space of parameters in the model is strongly constrained for various singlet scalar vacuum expectation values $w=0.1, 1, 10, 100$ TeV. Also, it turns out that the singlet scalar mass can range from a few GeV up to less than a TeV.
high energy physics phenomenology
In all of science, the authors of publications depend on the knowledge presented in previous publications. Thus they "stand on the shoulders of giants" and there is a flow of knowledge from previous publications to more recent ones. The dominating paradigm for tracking this flow of knowledge is to count the number of direct citations, but this neglects the fact that beneath the first layer of citations there is a full body of literature. In this study, we go underneath the "shoulders" by investigating the cumulative knowledge creation process in a citation network of around 35 million publications. In particular, we study stylized models of persistent influence and diffusion that take into account all the possible chains of citations. When we study the persistent influence values of publications and their citation counts, we find that the publications related to Nobel Prizes, i.e. Nobel papers, have higher ranks in terms of persistent influence than in terms of citations, and that the most outperforming publications are typically early works leading to hot research topics of their time. The diffusion model reveals a significant variation in the rates at which different fields of research share knowledge. We find that these rates have been increasing systematically for several decades, which can be explained by the increase in publication volumes. Overall, our results suggest that analyzing cumulative knowledge creation on a global scale can be useful in estimating the type and scale of scientific influence of individual publications and entire research areas, as well as yielding insights which could not be discovered by using only direct citation counts.
physics
Disease maps are an important tool in cancer epidemiology used for the analysis of geographical variations in disease rates and the investigation of environmental risk factors underlying spatial patterns. Cancer maps help epidemiologists highlight geographic areas with high and low prevalence, incidence, or mortality rates of cancers, and the variability of such rates over a spatial domain. When more than one cancer is of interest, the models must also capture the inherent or endemic association between the diseases in addition to the spatial association. This article develops interpretable and easily implementable spatial autocorrelation models for two or more cancers. The article builds upon recent developments in univariate disease mapping that use mathematical structures such as directed acyclic graphs to capture spatial association for a single cancer, and extends them to estimate the inherent or endemic association between two cancers in addition to the association over space (clustering) for each of the cancers. The method builds a Bayesian hierarchical model where the spatial effects are introduced as latent random effects for each cancer. We analyze the relationship between incidence rates of esophagus and lung cancer extracted from the Surveillance, Epidemiology, and End Results (SEER) Program. Our analysis shows a statistically significant association between the county-wise incidence rates of lung and esophagus cancer across California. The bivariate directed acyclic graphical model performs better than competing bivariate spatial models in the existing literature.
statistics
A group $\Gamma$ is said to be uniformly HS stable if any map $\varphi : \Gamma \to U(n)$ that is almost a unitary representation (w.r.t. the Hilbert-Schmidt norm) is close to a genuine unitary representation of the same dimension. We present a complete classification of uniformly HS stable groups among finitely generated residually finite ones. Necessity of the residual finiteness assumption is discussed. A similar result is shown to hold assuming only amenability.
mathematics
Production of Z bosons and neutrinos is studied in the expanding de Sitter universe. The expression for the transition amplitudes in the case of Z boson interaction with leptons is established by using perturbative methods. Then the amplitude and probability for the spontaneous generation from vacuum of a Z boson, a neutrino and an antineutrino are computed analytically, and a graphical analysis is performed in terms of the expansion parameter. We find that the probability for this process is nonvanishing only under the large-expansion conditions of the early Universe. We discuss the Minkowski limit and obtain that in this limit the amplitude is zero, a result which corresponds to the well-established fact that spontaneous particle generation from vacuum in Minkowski space-time is forbidden by the simultaneous energy and momentum conservation in perturbative processes. The total probability of the process is computed and we prove that this quantity is important only in the regime of large expansion in the early universe and vanishes in the Minkowski limit.
high energy physics theory
In this paper, we address the makeup transfer task, which aims to transfer the makeup from a reference image to a source image. Existing methods have achieved promising progress in constrained scenarios, but transferring between images with large pose and expression differences is still challenging. Besides, they cannot realize customizable transfer that allows a controllable shade of makeup or specifies the part to transfer, which limits their applications. To address these issues, we propose Pose and expression robust Spatial-aware GAN (PSGAN). It first utilizes Makeup Distill Network to disentangle the makeup of the reference image as two spatial-aware makeup matrices. Then, Attentive Makeup Morphing module is introduced to specify how the makeup of a pixel in the source image is morphed from the reference image. With the makeup matrices and the source image, Makeup Apply Network is used to perform makeup transfer. Our PSGAN not only achieves state-of-the-art results even when large pose and expression differences exist but also is able to perform partial and shade-controllable makeup transfer. We also collected a dataset containing facial images with various poses and expressions for evaluations.
computer science
The midline-related pathological image features are crucial for evaluating the severity of brain compression caused by stroke or traumatic brain injury (TBI). Automated midline delineation not only improves the assessment and clinical decision making for patients with stroke symptoms or head trauma but also reduces the time to diagnosis. Nevertheless, most of the previous methods model the midline by localizing anatomical points, which are hard to detect or even missing in severe cases. In this paper, we formulate brain midline delineation as a segmentation task and propose a three-stage framework. The proposed framework first aligns an input CT image into the standard space. Then, the aligned image is processed by a midline detection network (MD-Net) integrated with the CoordConv Layer and Cascade AtrousConv Module to obtain the probability map. Finally, we formulate the optimal midline selection as a pathfinding problem to solve the problem of the discontinuity of midline delineation. Experimental results show that our proposed framework can achieve superior performance on one in-house dataset and one public dataset.
electrical engineering and systems science
After 100 years of theoretical treatment of speckle patterns from coherent illumination, there remain some open questions about the nature of ultrasound speckle from soft vascularized tissues. A recent hypothesis is that the fractal branching vasculature is responsible for the dominant echo pattern from organs such as the liver. In that case an analysis of cylindrical scattering structures arranged across a power law distribution of sizes is warranted. Using a simple model of echo strength and basic transformation rules from probability, we derive the first order statistics of speckle considering the amplitude, the intensity, and the natural log of amplitude. The results are given by long tailed distributions that have been studied in the statistics literature for other fields. Examples are given from simulations and animal studies, and the theoretical fit to these preliminary data support the overall framework as a plausible model for characterizing ultrasound speckle statistics.
physics
Much evidence in comparative effectiveness research is based on observational studies. Researchers who conduct observational studies typically assume that there are no unobservable differences between the treated and control groups. Treatment effects are estimated after adjusting for observed differences between treated and controls. However, treatment effect estimates may be biased due to model misspecification. That is, if the method of treatment effect estimation imposes unduly strong functional form assumptions, treatment effect estimates may be significantly biased. In this study, we compare the performance of a wide variety of treatment effect estimation methods. We do so within the context of the REFLUX study from the UK. In REFLUX, after study qualification, participants were enrolled in either a randomized trial arm or patient preference arm. In the randomized trial, patients were randomly assigned to either surgery or medical management. In the patient preference arm, participants selected to either have surgery or medical management. We attempt to recover the treatment effect estimate from the randomized trial arm using the data from the patient preference arm of the study. We vary the method of treatment effect estimation and record which methods are successful and which are not. We apply over 20 different methods including standard regression models as well as advanced machine learning methods. We find that simple propensity score matching methods perform the worst. We also find significant variation in performance across methods. The wide variation in performance suggests analysts should use multiple methods of estimation as a robustness check.
statistics
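For the comparative-effectiveness entry above, a sketch of one of the simplest estimators such comparisons typically include, 1-nearest-neighbour propensity-score matching, on synthetic data; the data-generating process and variable names are invented for illustration, and nothing here uses the REFLUX data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
p_treat = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
T = rng.binomial(1, p_treat)                                # treatment depends on covariates
Y = 1.0 * T + X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)   # true effect = 1.0

ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]  # estimated propensity scores
treated, control = np.where(T == 1)[0], np.where(T == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
att = np.mean(Y[treated] - Y[control[idx.ravel()]])         # matched estimate of the effect
print(att)
```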
In this short note we prove that every maximal torus action on the free algebra is conjugate to a linear action. This statement is the free algebra analogue of a classical theorem of A. Bia{\l}ynicki-Birula.
mathematics
The boron-10 based Multi-Grid detector is being developed as an alternative to helium-3 based neutron detectors. At the European Spallation Source, the detector will be used for time-of-flight neutron spectroscopy at cold to thermal neutron energies. The objective of this work is to investigate fine time- and energy-resolved effects of the Multi-Grid detector, down to a few $\mu$eV, while comparing its performance to that of a typical helium-3 tube. A further objective is to characterize differences between the detector technologies in terms of internal scattering, as well as the time reconstruction of ~ $\mu$s short neutron pulses. The data were taken at the Helmholtz Zentrum Berlin, where the Multi-Grid detector and a helium-3 tube were installed at the ESS test beamline, V20. Using a Fermi chopper, the neutron beam of the reactor was chopped into a few tens of $\mu$s wide pulses before reaching the detector, located a few tens of cm downstream. The measured data show agreement between the derived and calculated neutron detection efficiency curves. The data also provide fine details on the effect of internal scattering, and how it can be reduced. For the first time, the chopper resolution was comparable to the timing resolution of the Multi-Grid detector. This allowed a detailed study of time- and energy-resolved effects, as well as a comparison with a typical helium-3 tube.
physics
Amorphous molybdenum silicide compounds have attracted significant interest for potential device applications, particularly in single-photon detectors. In this work, the temperature-dependent resistance and magneto-resistance behaviors were measured to reveal the charge transport mechanism, which is of great importance for applications but is still insufficiently understood. It is found that Mott variable-range hopping conductivity dominates the transport of sputtered amorphous molybdenum silicide thin films. Additionally, the observed magneto-resistance crossover from negative to positive is ascribed to the interference enhancement and the shrinkage of the electron wave function, both of which vary the probability of hopping between localized sites.
physics
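For context on the amorphous molybdenum silicide entry above, the textbook form of Mott variable-range hopping in $d$ dimensions (a standard result, not quoted from the paper) is
$$ \rho(T) = \rho_0 \exp\!\left[\left(\frac{T_0}{T}\right)^{1/(d+1)}\right], $$
so that for a three-dimensional film the resistance is expected to follow $\ln R \propto T^{-1/4}$.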
We study low energy implications of F-theory GUT models based on $SU(5)$ extended by a $U(1)'$ symmetry which couples non-universally to the three families of quarks and leptons. This gauge group arises naturally from the maximal exceptional gauge symmetry of an elliptically fibred internal space, at a single point of enhancement, $E_8\supset SU(5)\times SU(5)'\supset SU(5)\times U(1)^4$. Rank-one fermion mass textures and a tree-level top quark coupling are guaranteed by imposing a $Z_2$ monodromy group which identifies two abelian factors of the above breaking sequence. The $U(1)'$ factor of the gauge symmetry is an anomaly free linear combination of the three remaining abelian symmetries left over by $Z_2$. Several classes of models are obtained, distinguished with respect to the $U(1)'$ charges of the representations, and possible extra zero modes coming in vector-like representations. The predictions of these models are investigated and are compared with the LHC results and other related experiments. Particular cases interpreting the B-meson anomalies observed in LHCb and BaBar experiments are also discussed.
high energy physics phenomenology
Data-driven defect prediction has become increasingly important in the software engineering process. Since it is not uncommon that data from a software project is insufficient for training a reliable defect prediction model, transfer learning that borrows data/knowledge from other projects to facilitate model building for the current project, namely cross-project defect prediction (CPDP), is naturally plausible. Most CPDP techniques involve two major steps, i.e., transfer learning and classification, each of which has at least one parameter to be tuned to achieve their optimal performance. This practice fits well with the purpose of automated parameter optimization. However, there is a lack of thorough understanding about what are the impacts of automated parameter optimization on various CPDP techniques. In this paper, we present the first empirical study that looks into such impacts on 62 CPDP techniques, 13 of which are chosen from the existing CPDP literature while the other 49 ones have not been explored before. We build defect prediction models over 20 real-world software projects that are of different scales and characteristics. Our findings demonstrate that: (1) Automated parameter optimization substantially improves the defect prediction performance of 77\% of the CPDP techniques with a manageable computational cost. Thus more efforts on this aspect are required in future CPDP studies. (2) Transfer learning is of ultimate importance in CPDP. Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms. (3) The research on CPDP is far from mature, where it is "not difficult" to find a better alternative by making a combination of existing transfer learning and classification techniques. This finding provides important insights about the future design of CPDP techniques.
computer science
We present results from a high-resolution, cosmological, $\Lambda$CDM simulation of a group of field dwarf galaxies with the "superbubble" model for clustered SN feedback, accounting for thermal conduction and cold gas evaporation. The initial conditions and the galaxy formation physics, other than SN feedback, are the same as in Shen et al. (2014). The simulated luminous galaxies have blue colors, low star formation efficiencies and metallicities, and high cold gas content, reproducing the observed scaling relations of dwarfs in the Local Volume. Bursty star formation histories and superbubble-driven outflows lead to the formation of kpc-size DM cores when the stellar mass reaches $M_{*} > 10^6$ $M_{\odot}$, similar to previous findings. However, the superbubble model appears more effective in destroying DM cusps than the previously adopted "blastwave" model, reflecting a higher coupling efficiency of SN energy with the ISM. On larger scales, superbubble-driven outflows have a more moderate impact: galaxies have higher gas content, more extended stellar disks, and a smaller metal-enriched region in the CGM. The two halos with $M_{vir} \sim 10^9$ $M_{\odot}$, which formed ultra-faint dwarf galaxies in Shen et al. (2014), remain dark due to the different impact of metal-enriched galactic winds from two nearby luminous galaxies. The column density distributions of H I, Si II, C IV and O VI are in agreement with recent observations of the CGM around isolated dwarfs. While H I is ubiquitous with a covering fraction of unity within the CGM, Si II and C IV are less extended. O VI is more extended, but its mass is only 11% of the total CGM oxygen budget, as the diffuse CGM is highly ionised by the UVB. Superbubble feedback produces C IV and O VI column densities an order of magnitude higher than those obtained with blastwave feedback. The CGM and DM cores are the most sensitive probes of feedback mechanisms.
astrophysics
We derive explicit isomorphisms between certain congruence subgroups of the Siegel modular group, the Hermitian modular group over an arbitrary imaginary-quadratic number field and the modular group over the Hurwitz quaternions of degree 2, and the discriminant kernels of the special orthogonal groups $SO_0(2, n)$, $n = 3, 4, 6$. The proof is based on an application of linear algebra adapted to the number-theoretical needs.
mathematics
This article deals with an initial-boundary value problem for the coupled chemotaxis-haptotaxis system with nonlinear diffusion $$\left\{\begin{array}{ll} u_t=\nabla\cdot( D(u)\nabla u)-\chi\nabla\cdot(u\nabla v)- \xi\nabla\cdot(u\nabla w)+\mu u(1- u-w), x\in \Omega, t>0,\\ \tau v_t=\Delta v- v +u,\quad x\in \Omega, t>0,\\ w_t=- vw,\quad x\in \Omega, t>0, \end{array}\right.$$ under homogeneous Neumann boundary conditions in a smooth bounded domain $\Omega\subset\mathbb{R}^N(N\geq1)$, where $\tau\in\{0,1\}$ and $\chi$, $\xi$ and $\mu$ are given nonnegative parameters. As far as we know, this provides the first {\bf rigorous} result which precisely gives the relationship between $m,\xi,\chi$ and $\mu$ that yields the boundedness of the solutions. Moreover, these results significantly extend previous results of several authors (see Remarks 1.1 and 1.2), and some optimal results are obtained.
mathematics
A comprehensive study of the structural, electronic, and optical properties of lead-free perovskites has been carried out by means of first-principles methods based on DFT. The calculations are performed for compounds of the type A2BX6 with A=Rb and Cs; B=Sn, Pd, and Pt; and X=Cl, Br, and I. The calculated structural parameters (lattice constants and bond lengths) agree well with experiments. The computed band gap reveals a semiconducting profile for all these compounds, showing a decreasing trend of the band gap energy when changing the halide ions consecutively from Cl to Br and Br to I. However, for variation of the B-site cation, the band gap increases when changing the cation from Pd to Pt via Sn. The most likely compounds, Rb2PdBr6 and Cs2PtI6, exhibit a band gap within the optimal range of 0.9-1.6 eV for single-junction photovoltaic applications. The optical properties in terms of the optimal value of the dielectric constant, optical conductivity, and absorption coefficient are also investigated up to a photon energy of 10 eV. Our results indicate that upon changing the halogen ions (Cl by Br and Br by I) the optical properties are altered significantly. Maximum dielectric constants and high optical absorption are found for Rb2PdI6 and Cs2PtI6. The unique optoelectronic properties, such as an ideal band gap, high dielectric constants, and optimum absorption, of A2BX6 perovskites could be efficiently utilized in designing high-performance single- and multi-junction perovskite solar cells.
condensed matter
A sufficient and necessary condition ensuring that the backward shift operator on the K\"{o}the sequence space admits an invariant distributionally $\varepsilon$-scrambled set for some $\varepsilon>0$ is obtained, improving the main results in [F. Mart\'{\i}nez-Gim\'{e}nez, P. Oprocha, A. Peris, J. Math. Anal. Appl., {\bf 351} (2009), 607--615].
mathematics
Analyzing the story behind TV series and movies often requires understanding who the characters are and what they are doing. With improving deep face models, this may seem like a solved problem. However, as face detectors get better, clustering/identification needs to be revisited to address increasing diversity in facial appearance. In this paper, we address video face clustering using unsupervised methods. Our emphasis is on distilling the essential information, identity, from the representations obtained using deep pre-trained face networks. We propose a self-supervised Siamese network that can be trained without the need for video/track based supervision, and thus can also be applied to image collections. We evaluate our proposed method on three video face clustering datasets. The experiments show that our methods outperform current state-of-the-art methods on all datasets. Video face clustering is lacking a common benchmark as current works are often evaluated with different metrics and/or different sets of face tracks.
computer science
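For the video-face-clustering entry above, a generic Siamese contrastive loss in PyTorch is sketched below; how positive and negative pairs are generated without track supervision is the paper's own contribution and is not reproduced here, and the embedding dimension and batch size are arbitrary.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same, margin=1.0):
    # pull presumed-positive pairs together, push presumed-negative pairs at least `margin` apart
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1.0 - same) * F.relu(margin - d).pow(2)).mean()

# toy usage with random embeddings standing in for deep face features
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
same = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(z1, z2, same))
```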
This paper develops energy-efficient hybrid beamforming designs for mmWave multi-user systems where analog precoding is realized by switches and phase shifters such that radio frequency (RF) chain to transmit antenna connections can be switched off for energy saving. By explicitly considering the effect of each connection on the power required for baseband and RF signal processing, we describe the total power consumption in a sparsity form of the analog precoding matrix. However, these sparsity terms and the sparsity-modulus constraints of the analog precoding make the system energy-efficiency maximization problem non-convex and challenging to solve. To tackle this problem, we first transform it into a subtractive-form weighted sum rate and power problem. A compressed sensing-based re-weighted quadratic-form relaxation method is employed to deal with the sparsity parts and the sparsity-modulus constraints. We then exploit alternating minimization of the mean-squared error to solve the equivalent problem, where the digital precoding vectors and the analog precoding matrix are updated sequentially. The energy-efficiency upper bound and a heuristic algorithm are also examined for comparison purposes. Numerical results confirm the superior performance of the proposed algorithm over benchmark energy-efficient hybrid precoding algorithms and heuristic ones.
electrical engineering and systems science
Masking quantum information, which is impossible without randomness as a resource, is a task that encodes quantum information into a bipartite quantum state while forbidding local parties from accessing that information. In this work, we disprove the geometric conjecture about unitarily maskable states [K. Modi et al., Phys. Rev. Lett. 120, 230501 (2018)], and make an algebraic analysis of quantum masking. First, we show a general result on quantum channel mixing: a subchannel's mixing probability should be suppressed if its classical capacity is larger than the mixed channel's capacity. This constraint, combined with the well-known information conservation law, a law that does not exist in classical information theory, gives a lower bound on the randomness cost of masking quantum information as a monotonically decreasing function of the evenness of the information distribution. This result provides a consistency test for various scenarios of the fast scrambling conjecture on the black hole evaporation process. The results given here are robust to incompleteness of quantum masking.
quantum physics
Here we introduce the notions of $(j-i)sg_\kappa^*$-closed sets and semi-generalized closed sets in a bispace, for $i,j=1,2$, $i\not=j$, and then study the pairwise semi $T_0$-axiom, the pairwise semi $T_1$-axiom and the pairwise semi $T_\omega$-axiom. We investigate some of their topological properties and also establish a relation among these axioms under some additional conditions.
mathematics
After a sudden disturbance, the energy balance of generators is disturbed, and the power outputs of synchronous generators vary as their rotor angles shift from their equilibrium points. This trend essentially reflects the individual response of each machine to the disturbance. Because of this change, the phase angles of the buses also differ. Hence, the response of each machine can be assessed from the phase-angle changes at the buses close to the synchronous generator. This paper introduces a new methodology for discovering the degree of coherency among buses using the correlation index of the voltage angle between each pair of buses, and uses Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) to partition the network into islands. The proposed approach also provides network integrity indices (connectivity, splitting, and separation) for studying the dynamic nature of the power network. The approach is assessed on the IEEE 39-bus test system with a fully dynamic model. The simulation results presented in this paper demonstrate the efficiency of the proposed approach.
electrical engineering and systems science
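For the coherency entry above, a minimal sketch of the partitioning step: correlate the post-disturbance voltage-angle trajectories between buses and cluster the buses with HDBSCAN on the resulting distance matrix. It assumes the third-party `hdbscan` package; the angle data, bus count and `min_cluster_size` are placeholders, and the paper's exact correlation index and integrity indices are not reproduced.

```python
import numpy as np
import hdbscan  # third-party package: https://github.com/scikit-learn-contrib/hdbscan

def coherent_islands(theta, min_cluster_size=3):
    # theta: array of shape (n_buses, n_time_samples) of bus voltage angles after a disturbance
    corr = np.corrcoef(theta)                          # pairwise correlation between bus angles
    dist = np.clip(1.0 - corr, 0.0, None)              # high correlation -> small distance
    np.fill_diagonal(dist, 0.0)
    labels = hdbscan.HDBSCAN(metric="precomputed",
                             min_cluster_size=min_cluster_size).fit_predict(dist)
    return labels                                      # -1 marks buses assigned to no island

theta = np.cumsum(np.random.default_rng(0).normal(size=(39, 500)), axis=1)  # synthetic angles
print(coherent_islands(theta))
```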
The 511 keV line from positron annihilation in the Galaxy was the first $\gamma$-ray line detected to originate from outside our solar system. Going into the fifth decade since the discovery, the source of positrons is still unconfirmed and remains one of the enduring mysteries in $\gamma$-ray astronomy. With a large flux of $\sim$10$^{-3}$ $\gamma$/cm$^{2}$/s, after 15 years in operation INTEGRAL/SPI has detected the 511 keV line at $>50\sigma$ and has performed high-resolution spectral studies which conclude that Galactic positrons predominantly annihilate at low energies in warm phases of the interstellar medium. The results from imaging are less certain, but show a spatial distribution with a strong concentration in the center of the Galaxy. The observed emission from the Galactic disk has low surface brightness and the scale height is poorly constrained; therefore, the sheer number of annihilating positrons in our Galaxy is still not well known. Positrons produced in $\beta^+$-decay of nucleosynthesis products, such as $^{26}$Al, can account for some of the annihilation emission in the disk, but the observed spatial distribution, in particular the excess in the Galactic bulge, remains difficult to explain. Additionally, one of the largest uncertainties in these studies is the unknown distance that positrons propagate before annihilation. In this paper, we summarize the current knowledge base of Galactic positrons and discuss how next-generation instruments could finally provide the answers.
astrophysics
We obtained the absolute magnitudes, distances, and white dwarf (WD) masses of 32 recent galactic novae based on the time-stretching method for nova light curves. A large part of the light/color curves of two classical novae often overlap each other if we properly squeeze/stretch their timescales. Then, a target nova brightness is related to the template nova brightness by $(M_V[t])_{\rm template} = (M_V[t/f_{\rm s}] - 2.5 \log f_{\rm s})_{\rm target}$, where $t$ is the time, $M_V[t]$ is the absolute $V$ magnitude, and $f_{\rm s}$ is their timescaling ratio. Moreover, when these two time-stretched light curves, $(t/f_{\rm s})$-$(M_V-2.5 \log f_{\rm s})$, overlap each other, $(t/f_{\rm s})$-$(B-V)_0$ do too, where $(B-V)_0$ is the intrinsic $B-V$ color. Thus, the two nova tracks overlap each other in the $(B-V)_0$-$(M_V-2.5 \log f_{\rm s})$ diagram. Using these properties in reverse, we obtain/confirm the distance and reddening by comparing each nova's light/color curves with those of the well-calibrated template novae. We classify the 32 novae into two types, LV Vul and V1500 Cyg types, in the time-stretched $(B-V)_0$-$(M_V-2.5 \log f_{\rm s})$ color-magnitude diagram. The WD mass is obtained by direct comparison of the model $V$ light curves with the observations. Thus, we obtain a uniform set of 32 galactic classical novae that provides distances and WD masses from a single method. Many novae broadly follow the universal decline law and the present method can be applied to them, while some novae deviate largely from the universal decline law and so the method cannot be directly applied to them. We discuss such examples.
astrophysics
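For the nova entry above, the time-stretched coordinates $(t/f_{\rm s})$-$(M_V-2.5 \log f_{\rm s})$ translate directly into code; the light-curve values and the trial $f_{\rm s}$ grid below are hypothetical and serve only to show how a target curve is mapped before being overlaid on a template.

```python
import numpy as np

def time_stretch(t, MV, fs):
    # map (t, M_V) -> (t / f_s, M_V - 2.5 * log10(f_s)), the time-stretched coordinates above
    return np.asarray(t) / fs, np.asarray(MV) - 2.5 * np.log10(fs)

t_target = np.array([1.0, 2.0, 5.0, 10.0, 30.0])       # days (hypothetical)
MV_target = np.array([-8.5, -8.0, -7.0, -6.0, -4.5])   # absolute V magnitudes (hypothetical)

for fs in (0.5, 1.0, 2.0):                             # trial timescaling ratios
    t_s, MV_s = time_stretch(t_target, MV_target, fs)
    print(fs, t_s, MV_s)
```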
When asked to write a contribution to the memorial volume for Peter Freund I went through my memory of the first time I met Peter. This was in 1971 when Dual Models were very popular and I had just joined in the efforts. For a number of years I worked on the problem of finding a realistic Dual Model/String Theory for hadrons, and here I will review those efforts as they happened, but also in the light of what we now know about hadrons from QCD. I will argue for when a string picture of hadrons is appropriate and discuss its limitations and the specific results you get from it.
high energy physics theory
To date, magnetic proximity effect (MPE) has only been conclusively observed in ferromagnet (FM) based systems. We report the observation of anomalous Hall effect and anisotropic magnetoresistance in angular dependent magnetoresistance (ADMR) measurements in Pt on antiferromagnetic (AF) $\alpha$-Fe$_2$O$_3$(0001) epitaxial films at 10 K, which provide evidence for the MPE. The N\'eel order of $\alpha$-Fe$_2$O$_3$ and the induced magnetization in Pt show a unique ADMR compared with all other FM and AF systems. A macrospin response model is established and can explain the AF spin configuration and all main ADMR features in the Pt/$\alpha$-Fe$_2$O$_3$ bilayers.
condensed matter
For very large datasets, random projections (RP) have become the tool of choice for dimensionality reduction. This is due to the computational complexity of principal component analysis. However, the recent development of randomized principal component analysis (RPCA) has opened up the possibility of obtaining approximate principal components on very large datasets. In this paper, we compare the performance of RPCA and RP in dimensionality reduction for supervised learning. In Experiment 1, we study a malware classification task on a dataset with over 10 million samples, almost 100,000 features, and over 25 billion non-zero values, with the goal of reducing the dimensionality to a compressed representation of 5,000 features. In order to apply RPCA to this dataset, we develop a new algorithm called large sample RPCA (LS-RPCA), which extends the RPCA algorithm to work on datasets with arbitrarily many samples. We find that classification performance is much higher when using LS-RPCA for dimensionality reduction than when using random projections. In particular, across a range of target dimensionalities, we find that using LS-RPCA reduces classification error by between 37% and 54%. Experiment 2 generalizes the phenomenon to multiple datasets, feature representations, and classifiers. These findings have implications for a large number of research projects in which random projections were used as a preprocessing step for dimensionality reduction. As long as accuracy is at a premium and the target dimensionality is sufficiently less than the numeric rank of the dataset, randomized PCA may be a superior choice. Moreover, if the dataset has a large number of samples, then LS-RPCA will provide a method for obtaining the approximate principal components.
statistics
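For the dimensionality-reduction entry above, a small sketch of the two generic building blocks being compared, random projection and randomized PCA, using scikit-learn on synthetic data; LS-RPCA itself, which extends randomized PCA to arbitrarily many samples, is the paper's contribution and is not shown here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import SparseRandomProjection

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 500))   # synthetic stand-in for a large feature matrix

k = 50                                 # target dimensionality (arbitrary here)
X_rp = SparseRandomProjection(n_components=k, random_state=0).fit_transform(X)
X_rpca = PCA(n_components=k, svd_solver="randomized", random_state=0).fit_transform(X)

print(X_rp.shape, X_rpca.shape)        # both yield an (n_samples, k) representation
```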
Neural control is an exciting mystery which we instinctively master. Yet, researchers have a hard time explaining motor control trajectories. Physiologically accurate biomechanical simulations can, to some extent, mimic live subjects and help us form evidence-based hypotheses. In these simulated environments, muscle excitations are typically calculated through inverse dynamic optimizations which do not possess a closed-form solution. Thus, computationally expensive, and occasionally unstable, iterative numerical solvers are the only widely utilized solution. In this work, we introduce ArtiSynth, a 3D modeling platform that supports the combined simulation of multi-body and finite element models and has been extended to support reinforcement learning (RL) training. We further use ArtiSynth to investigate whether a deep RL policy can be trained to drive the motor control of a physiologically accurate biomechanical model in a large continuous action space. We run a comprehensive evaluation of its performance and compare the results with forward-dynamics-assisted tracking with a quadratic objective function. We assess the two approaches in terms of correctness, stability, energy-efficiency, and temporal consistency.
electrical engineering and systems science
We study direct limits of embedded Cantor sets and embedded Sierpiński curves. We show that under appropriate conditions on the embeddings, all limits of Cantor spaces give rise to homeomorphic spaces, called $\omega$-Cantor spaces, and similarly, all limits of Sierpiński curves give rise to homeomorphic spaces, called $\omega$-Sierpiński curves. We then show that the former occur naturally as Morse boundaries of right-angled Artin groups and fundamental groups of non-geometric graph manifolds, while the latter occur as Morse boundaries of fundamental groups of finite-volume, cusped hyperbolic 3-manifolds.
mathematics