text: string, lengths 11 to 9.77k
label: string, lengths 2 to 104
This paper describes our system submitted to SemEval 2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours, Subtask A (Gorrell et al., 2019). The challenge focused on classifying whether posts from Twitter and Reddit support, deny, query, or comment on a hidden rumour, the truthfulness of which is the topic of an underlying discussion thread. We formulate the problem as stance classification, determining the rumour stance of a post with respect to the previous thread post and the source thread post. The recent BERT architecture was employed to build an end-to-end system which reached an F1 score of 61.67% on the provided test data. It finished in 2nd place in the competition, without any hand-crafted features, only 0.2% behind the winner.
computer science
We study theoretically the interaction of ultrashort optical pulses with gapped graphene. Such a strong pulse results in a finite conduction band population and a corresponding electric current both during and after the pulse. Since gapped graphene has broken inversion symmetry, it has an axial symmetry about the $y$-axis but not about the $x$-axis. We show that, in this case, if the linearly polarized pulse is polarized along the $x$-axis, a rectified electric current is generated in the $y$ direction. At the same time, the conduction band population distribution in the reciprocal space is symmetric about the $x$-axis. Thus, the rectified current in gapped graphene has inter-band origin, while the intra-band contribution to the rectified current is zero.
condensed matter
We investigate an effective flavor-symmetric Yang-Mills-Higgs model with $N^2-1$ adjoint scalar fields. We find a set of BPS equations that provide vortex solutions and calculate their energies for arbitrary representations. We show that, for a given N-ality $k$, the energy of the corresponding antisymmetric representation is the lowest. This completes the proof that this model is able to reproduce a Casimir law for the string tension at asymptotic distances.
high energy physics theory
We describe the symmetry protected nodal points that can exist in magnetic space groups and show that only 3-, 6-, and 8-fold degeneracies are possible (in addition to the 2- and 4-fold degeneracies that have already been studied.) The 3- and 6-fold degeneracies are derived from "spin-1" Weyl fermions. The 8-fold degeneracies come in different flavors. In particular, we distinguish between 8-fold fermions that realize non-chiral "Rarita-Schwinger fermions" and those that can be described as four degenerate Weyl fermions. We list the (magnetic and non-magnetic) space groups where these exotic fermions can be found. We further show that in several cases, a magnetic translation symmetry pins the Hamiltonian of the multifold fermion to an idealized exactly solvable point that is not achievable in non-magnetic crystals without fine-tuning. Finally, we present known compounds that may host these fermions and methods for systematically finding more candidate materials.
condensed matter
Prosody modeling is an essential component in modern text-to-speech (TTS) frameworks. By explicitly providing prosody features to the TTS model, the style of synthesized utterances can be controlled. However, predicting natural and reasonable prosody at inference time is challenging. In this work, we analyzed the behavior of non-autoregressive TTS models under different prosody-modeling settings and proposed a hierarchical architecture, in which the prediction of phoneme-level prosody features is conditioned on the word-level prosody features. The proposed method outperforms other competitors in terms of audio quality and prosody naturalness in our objective and subjective evaluation.
electrical engineering and systems science
Machine learning has been used in high energy physics for a long time, primarily at the analysis level with supervised classification. Quantum computing was postulated in the early 1980s as a way to perform computations that would not be tractable with a classical computer. With the advent of noisy intermediate-scale quantum computing devices, more quantum algorithms are being developed with the aim of exploiting the capacity of the hardware for machine learning applications. An interesting question is whether there are ways to apply quantum machine learning to High Energy Physics. This paper reviews the first generation of ideas that use quantum machine learning on problems in high energy physics and provides an outlook on future applications.
quantum physics
The prevailing models for the advancing dynamic contact angle are under intensive debate, and their fitting performance is far from satisfactory in practice. The present study proposes a model based on the recent understanding of the multi-scale structure and the local friction at the contact line. The model has unprecedented fitting performance for the dynamic contact angle over a wide spreading-speed regime spanning more than five orders of magnitude. The model also applies well to non-monotonic angle variations, which have long been considered anomalous. The model has three fitting parameters, two of which are nearly predictable at this stage: the one denoting the multi-scale ratio is nearly constant, and the one representing the primary friction coefficient has a simple correlation with the bulk viscosity of the liquid.
condensed matter
A model of the deformation of the DNA macromolecule is developed that accounts for two types of deformation components: external and internal. The external components describe the bend, twist and stretch of the double helix; the internal component describes the conformational mobility inside the double helix. The deformation of the DNA macromolecule is considered taking into account the coupling of the external components (of deformation) with the internal component (of conformational change). In particular, the twist-stretch coupling of the macromolecule and the coupling between the twist and the internal component are taken into account. The solution obtained under these conditions for the deformation components allows the response of the stretch component to unwinding (overwinding) to change its character depending on the force applied to the twist component. Such a change of the deformation character from compression to tension upon reaching a critical untwisting force (and, vice versa, from overwinding to unwinding at a critical tension force) is known from single-molecule experiments. The nature of this unexpected behavior of the double helix is clarified in the present work by including the internal component in the consideration. The obtained solutions and their agreement with experimental results show the essential role of the coupling between internal and external components in the conformational mechanics of the double helix under forces in the pN range.
condensed matter
Causal inference in a program evaluation setting faces the problem of external validity when the treatment effect in the target population is different from the treatment effect identified from the population of which the sample is representative. This paper focuses on a situation where such a discrepancy arises from a stratified sampling design based on the individual treatment status and other characteristics. In such settings, the design probability is known from the sampling design, but the target population depends on the underlying population share vector, which is often unknown, and, except for special cases, the treatment effect parameters are not identified. First, we propose a method of constructing confidence sets that are valid for a given range of population shares. Second, when a benchmark population share vector and a corresponding estimator of a treatment effect parameter are given, we propose a method to discover the scope of external validity with familywise error rate control. Third, we propose an optimal sampling design which minimizes the semiparametric efficiency bound given a population share associated with a target population. We provide Monte Carlo simulation results and an empirical application to demonstrate the usefulness of our proposals.
statistics
We calculate distributions of different vector mesons in purely exclusive ($p p \to p p V$) and semi-exclusive ($p p \to p X V$) processes with electromagnetic dissociation of a proton. The cross section for the electromagnetic dissociation is expressed through the electromagnetic structure functions of the proton. We include the transverse momentum distribution of initial photons in the associated flux. Contributions of the exclusive and semi-exclusive processes are compared for different vector mesons ($V = \phi, J/\psi, \Upsilon$). We discuss how the relative contribution of the semi-exclusive processes depends on the mass of the vector meson as well as on different kinematical variables of the vector meson ($y$, $p_t$). The ratio of semi-exclusive to exclusive contributions is shown and compared for different mesons in different variables.
high energy physics phenomenology
The Kendall rank correlation coefficient is used to measure the ordinal association between two measurements. In this paper, we introduce the Concordance coefficient as a generalization of the Kendall rank correlation, and illustrate its use to measure the ordinal association between quantity and quality measures when two or more samples are considered. In this sense, the Concordance coefficient can be seen as a generalization of the Kendall rank correlation coefficient and an alternative to the non-parametric mean rank-based methods for comparing two or more samples. A comparison of the proposed Concordance coefficient and the classical Kruskal-Wallis statistic is presented through a comparison of the exact distributions of both statistics.
statistics
This article is concerned with data-driven analysis of discrete-time systems under aperiodic sampling, and in particular with a data-driven estimation of the maximum sampling interval (MSI). The MSI is relevant for analysis of and controller design for cyber-physical, embedded and networked systems, since it gives a limit on the time span between sampling instants such that stability is guaranteed. We propose tools to compute the MSI for a given controller and to design a controller with a preferably large MSI, both directly from a finite-length, noise-corrupted state-input trajectory of the system. We follow two distinct approaches for stability analysis, one taking a robust control perspective and the other a switched systems perspective on the aperiodically sampled system. In a numerical example and a subsequent discussion, we demonstrate the efficacy of our developed tools and compare the two approaches.
electrical engineering and systems science
Chest x-rays are a vital tool in the workup of many patients. Similar to most medical imaging modalities, they are profoundly multi-modal and are capable of visualising a variety of combinations of conditions. There is an ever-pressing need for greater quantities of labelled data to develop new diagnostic tools; however, this is in direct opposition to concerns regarding patient confidentiality, which constrains access through permission requests and ethics approvals. Previous work has sought to address these concerns by creating class-specific GANs that synthesise images to augment training data. These approaches cannot be scaled, as they introduce computational trade-offs between model size and class number which place fixed limits on the quality that such generates can achieve. We address this concern by introducing latent class optimisation, which enables efficient, multi-modal sampling from a GAN and with which we synthesise a large archive of labelled generates. We apply a PGGAN to the task of unsupervised x-ray synthesis and have radiologists evaluate the clinical realism of the resultant samples. We provide an in-depth review of the properties of varying pathologies seen on generates as well as an overview of the extent of disease diversity captured by the model. We validate the application of the Fr\'echet Inception Distance (FID) to measure the quality of x-ray generates and find that they are similar to other high-resolution tasks. We quantify x-ray clinical realism by asking radiologists to distinguish between real and fake scans and find that generates are more likely to be classed as real than by chance, but there is still progress required to achieve true realism. We confirm these findings by evaluating synthetic classification model performance on real scans. We conclude by discussing the limitations of PGGAN generates and how to achieve controllable, realistic generates.
electrical engineering and systems science
We investigate the effect of local stochastic control errors in the time-dependent Hamiltonian on isolated quantum dynamics. The control errors are formulated as time-dependent stochastic noise in the Schr\"odinger equation. For any local stochastic control errors, we establish a threshold theorem that provides a sufficient condition to obtain the target state, which should be determined in noiseless isolated quantum dynamics, as a relation between the number of measurements required and noise strength. The theorem guarantees that if the sum of the noise strengths is less than the inverse of computational time, the target state can be obtained through a constant-order number of measurements. If the opposite is true, the required number of measurements increases exponentially with computational time. Our threshold theorem can be applied to any isolated quantum dynamics such as quantum annealing, adiabatic quantum computation, the quantum approximate optimization algorithm, and the quantum circuit model.
quantum physics
Neuromorphic computing systems use non-volatile memory (NVM) to implement high-density and low-energy synaptic storage. Elevated voltages and currents needed to operate NVMs cause aging of the CMOS-based transistors in each neuron and synapse circuit in the hardware, drifting the transistors' parameters from their nominal values. Aggressive device scaling increases power density and temperature, which accelerates the aging, challenging the reliable operation of neuromorphic systems. Existing reliability-oriented techniques periodically de-stress all neuron and synapse circuits in the hardware at fixed intervals, assuming worst-case operating conditions, without actually tracking their aging at run time. To de-stress these circuits, normal operation must be interrupted, which introduces latency in spike generation and propagation, impacting the inter-spike interval and hence performance, e.g., accuracy. We propose a new architectural technique to mitigate the aging-related reliability problems in neuromorphic systems by designing an intelligent run-time manager (NCRTM), which dynamically de-stresses neuron and synapse circuits in response to the short-term aging in their CMOS transistors during the execution of machine learning workloads, with the objective of meeting a reliability target. NCRTM de-stresses these circuits only when it is absolutely necessary to do so, otherwise reducing the performance impact by scheduling de-stress operations off the critical path. We evaluate NCRTM with state-of-the-art machine learning workloads on neuromorphic hardware. Our results demonstrate that NCRTM significantly improves the reliability of neuromorphic hardware, with marginal impact on performance.
computer science
Semi-quantum key distribution (SQKD) allows two parties (Alice and Bob) to create a shared secret key, even if one of those parties (say, Alice) is classical. However, most SQKD protocols suffer from practical security problems. The recently developed "Classical Alice with a Controllable Mirror" protocol [Boyer, Katz, Liss, and Mor, Phys. Rev. A 96, 062335 (2017)] is an experimentally feasible SQKD protocol, and it was proven robust, but its security had not yet been proved. Here we prove the security of the Mirror protocol against a wide class of quantum attacks (the "collective attacks") and evaluate the resulting key rate.
quantum physics
Probing magnetic fields in the interstellar medium (ISM) is notoriously challenging. Motivated by modern theories of magnetohydrodynamic (MHD) turbulence and turbulence anisotropy, we introduce the Structure-Function Analysis (SFA) as a new approach to measure the magnetic field orientation and estimate the magnetization. We analyze the statistics of turbulent velocities in three-dimensional compressible MHD simulations through the second-order structure functions in both local and global reference frames. In sub-Alfvenic turbulence, where the magnetic energy is larger than the turbulent energy, the structure functions of turbulent velocities measured in the directions perpendicular and parallel to the magnetic field can be significantly different. Their ratio has a power-law dependence on the Alfven Mach number $M_A$, which is inversely proportional to the magnetic field strength. We demonstrate that the anisotropic structure functions of turbulent velocities can be used to estimate both the orientation and the strength of magnetic fields. With turbulent velocities measured using different tracers, our approach can be generally applied to probing the magnetic fields in the multi-phase interstellar medium.
astrophysics
Human brain activity generates scalp potentials (electroencephalography, EEG), intracranial potentials (iEEG), and external magnetic fields (magnetoencephalography, MEG), all capable of being recorded, often simultaneously, for use in research and clinical applications. The so-called forward problem is the modeling of these fields at their sensors for a given putative neural source configuration. While early generations modeled the head as a simple set of isotropic spheres, today's ubiquitous magnetic resonance imaging (MRI) data allow detailed descriptions of head compartments with assigned isotropic and anisotropic conductivities. In this paper, we present a complete pipeline, integrated into the Brainstorm software, that allows users to generate an individual and accurate head model from the MRI and then calculate the electromagnetic forward solution using the finite element method (FEM). The head model generation is performed by the integration of the latest tools for MRI segmentation and FEM mesh generation. The final head model is divided into five main compartments: white matter, grey matter, CSF, skull, and scalp. For the isotropic compartments, widely used default conductivity values are assigned. For the brain tissues, we use the effective medium approach (EMA) to estimate anisotropic conductivity tensors from diffusion-weighted imaging (DWI) data. The FEM electromagnetic calculations are performed by the DUNEuro library, integrated into Brainstorm and accessible through a user-friendly graphical interface. This integrated pipeline, with full tutorials and example data sets freely available on the Brainstorm website, gives the neuroscience community easy access to advanced tools for electromagnetic modeling using FEM.
physics
We give a pedagogical introduction to algebraic quantum field theory (AQFT), with the aim of explaining its key structures and features. Topics covered include: algebraic formulations of quantum theory and the GNS representation theorem, the appearance of unitarily inequivalent representations in QFT (exemplified by the van Hove model), the main assumptions of AQFT and simple models thereof, the spectrum condition, Reeh--Schlieder theorem, split property, the universal type of local algebras, and the theory of superselection sectors. The abstract discussion is illustrated by concrete examples. One of our concerns is to explain various ways in which quantum field theory differs from quantum mechanics, not just in terms of technical detail, but in terms of physical content. The text is supplemented by exercises and appendices that enlarge on some of the relevant mathematical background. These notes are based on lectures given by CJF for the International Max Planck Research School at the Albert Einstein Institute, Golm (October, 2018) and by KR at the Raman Research Institute, Bangalore (January, 2019).
high energy physics theory
In this paper, we propose a residual non-local attention network for high-quality image restoration. Without considering the uneven distribution of information in corrupted images, previous methods are restricted by local convolutional operations and equal treatment of spatial- and channel-wise features. To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts. Specifically, we design a trunk branch and a (non-)local mask branch in each (non-)local attention block. The trunk branch is used to extract hierarchical features. The local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attention. The local mask branch concentrates on more local structures via convolutional operations, while non-local attention focuses more on long-range dependencies in the whole feature map. Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhances the representation ability of the network. Our proposed method can be generalized to various image restoration applications, such as image denoising, demosaicing, compression artifact reduction, and super-resolution. Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods, both quantitatively and visually.
computer science
CT perfusion (CTP) has been used to triage ischemic stroke patients in the early stage because of its speed, availability, and lack of contraindications. Perfusion parameters, including cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT) and time of peak (Tmax), can also be computed from CTP data. However, CTP data and the perfusion parameters are ambiguous for locating the infarct core or the tissue at risk (penumbra), which is normally confirmed by follow-up Diffusion Weighted Imaging (DWI) or by perfusion-diffusion mismatch. In this paper, we propose a novel generative model-based segmentation framework composed of an extractor, a generator and a segmentor for ischemic stroke lesion segmentation. First, an extractor is used to directly extract the representative feature images from the CTP feature images. Second, a generator is used to generate clinically relevant DWI images using the output from the extractor and the perfusion parameters. Finally, the segmentor is used to precisely segment the ischemic stroke lesion using the DWI generated by the generator. Meanwhile, a novel pixel-region loss function, generalized Dice combined with weighted cross-entropy, is used to handle the data imbalance problem commonly encountered in medical image segmentation. All networks are trained end-to-end from scratch using the 2018 Ischemic Stroke Lesion Segmentation Challenge (ISLES) dataset, and our method won first place in the test stage of the challenge.
electrical engineering and systems science
We show that the horizon geometry for supersymmetric black hole solutions of minimal five-dimensional gauged supergravity is that of a particular Einstein-Cartan-Weyl (ECW) structure in three dimensions, involving the trace and traceless part of both torsion and nonmetricity, and obeying some precise constraints. In the limit of zero cosmological constant, the set of nonlinear partial differential equations characterizing this ECW structure reduces correctly to that of a hyper-CR Einstein-Weyl structure in the Gauduchon gauge, which was shown by Dunajski, Gutowski and Sabra to be the horizon geometry in the ungauged BPS case.
high energy physics theory
Rotation curves of spiral galaxies are the major tool for determining the distribution of mass in spiral galaxies. They provide fundamental information for understanding the dynamics, evolution and formation of spiral galaxies. We describe various methods to derive rotation curves, and review the results obtained. We discuss the basic characteristics of observed rotation curves in relation to various galaxy properties, such as Hubble type, structure, activity, and environment.
astrophysics
For the first time, planar high-purity germanium detectors with thin amorphous germanium contacts were successfully operated directly in liquid nitrogen and liquid argon in a cryostat at the Max-Planck-Institut für Physik in Munich. The detectors were fabricated at the Lawrence Berkeley National Laboratory and the University of South Dakota, using crystals grown at the University of South Dakota. They survived long-distance transportation and multiple thermal cycles in both cryogenic liquids and showed reasonable leakage currents and spectroscopic performance. Also discussed are the pros and cons of using thin amorphous semiconductor materials as an alternative contact technology in large-scale germanium experiments searching for physics beyond the Standard Model.
physics
We use the paradigmatic example of short-range interacting particles in a harmonic trap to show that the squared commutator of canonical operators and the related out-of-time-order correlation functions (OTOCs) of relevant observables are useful for understanding quantum quenches in non-chaotic models. In particular, we find that for finite interactions the long-time average of the squared commutator is directly proportional to the variance of the work probability distribution, which establishes a connection between the scrambling properties of a quench and the induced work fluctuations.
quantum physics
We propose a scheme for entanglement distribution among different single atoms trapped in separated cavities. In our scheme, by reflecting an input coherent optical pulse from a cavity with a single trapped atom, a controlled phase-shift gate between the atom and the coherent optical pulse is achieved. Based on this gate and homodyne detection, we construct an $n$-qubit parity gate and show its use for the distribution of a large class of entangled states in one shot, including the GHZ state $\left\vert GHZ_{n}\right\rangle$, the W state $\left\vert W_{n}\right\rangle$, the Dicke state $\left\vert D_{n,k}\right\rangle$ and certain sums of Dicke states $\left\vert G_{n,k}\right\rangle$. We also show that such distribution can be performed with high success probability and high fidelity even in the presence of channel loss.
quantum physics
We study a curious class of partitions, the parts of which obey an exceedingly strict congruence condition we refer to as "sequential congruence": the $m$th part is congruent to the $(m+1)$th part modulo $m$, with the smallest part congruent to zero modulo the length of the partition. It turns out these obscure-seeming objects are embedded in a natural way in partition theory. We show that sequentially congruent partitions with largest part $n$ are in bijection with the partitions of $n$. Moreover, we show sequentially congruent partitions induce a bijection between partitions of $n$ and partitions of length $n$ whose parts obey a strict "frequency congruence" condition -- the frequency (or multiplicity) of each part is divisible by that part -- and prove families of similar bijections, connecting with G. E. Andrews's theory of partition ideals.
mathematics
It is generally accepted that the Coulomb crystal model can be used to describe matter in the neutron star crust. In [1] we study the properties of deformed Coulomb crystals and how their stability depends on the polarization of the electron background; the breaking stress in the crust $\sigma_{\max}$ at zero temperature was calculated based on the analysis of the electrostatic energy and the phonon spectrum of the Coulomb crystal. In this paper, I briefly discuss the influence of zero-point and thermal contributions on $\sigma_{\max}$.
astrophysics
Interference management is a vast topic present in many disciplines. The majority of wireless standards suffer from interference intrusion and the resulting drop in network efficiency. Traditionally, interference management has been addressed by proposing signal processing techniques that minimize its effects locally. However, the fast evolution of future communications makes it difficult to adapt to the new era. In this paper we propose the use of Deep Learning techniques to build a compact system for interference management. In particular, we describe two subsystems capable of detecting the presence of interference, even at high Signal to Interference Ratio (SIR), and of classifying interference across several radio standards. Finally, we present results based on real signals captured from terrestrial and satellite networks, and the conclusions unveil the promising future of AI and wireless communications.
electrical engineering and systems science
In this work, we study the operations of handle slides introduced recently for delta-matroids by Iain Moffatt and Eunice Mphako-Banda. We then prove that the class of binary delta-matroids is the only class of delta-matroids closed under handle slides.
mathematics
Multi-view stacking is a framework for combining information from different views (i.e. different feature sets) describing the same set of objects. In this framework, a base-learner algorithm is trained on each view separately, and their predictions are then combined by a meta-learner algorithm. In a previous study, stacked penalized logistic regression, a special case of multi-view stacking, was shown to be useful in identifying which views are most important for prediction. In this article we expand this research by considering seven different algorithms to use as the meta-learner, and evaluating their view selection and classification performance in simulations and in two applications on real gene-expression data sets. Our results suggest that if both view selection and classification accuracy are important to the research at hand, then the nonnegative lasso, nonnegative adaptive lasso and nonnegative elastic net are suitable meta-learners. Exactly which among these three is to be preferred depends on the research context. The remaining four meta-learners, namely nonnegative ridge regression, nonnegative forward selection, stability selection and the interpolating predictor, show little advantage over the other three.
statistics
We investigate the sensitivity of electron-proton ($ep$) colliders for charged lepton flavor violation (cLFV) in an effective theory approach, considering a general effective Lagrangian for the conversion of an electron into a muon or a tau via the effective coupling to a neutral gauge boson or a neutral scalar field. For the photon, the $Z$ boson and the Higgs particle of the Standard Model, we present the sensitivities of the LHeC for the coefficients of the effective operators, calculated from an analysis at the reconstructed level. As an example model where such flavor changing neutral current (FCNC) operators are generated at loop level, we consider the extension of the Standard Model by sterile neutrinos. We show that the LHeC could already probe the LFV conversion of an electron into a muon beyond the current experimental bounds, and could reach more than an order of magnitude higher sensitivity than the present limits for LFV conversion of an electron into a tau. We discuss that the high sensitivities are possible because the converted charged lepton is dominantly emitted in the backward direction, enabling an efficient separation of the signal from the background.
high energy physics phenomenology
The stochastic motion of particles in living cells is often spatially inhomogeneous, with a higher effective diffusivity in a region close to the cell boundary due to active transport along actin filaments. As a first step towards understanding the consequences of two compartments with different diffusion constants for stochastic search problems, we consider here a Brownian particle in a circular domain with different diffusion constants in the inner and the outer shell. We focus on the narrow escape problem and compute the mean first passage time (MFPT) for Brownian particles starting at some pre-defined position to find a small region on the outer reflecting boundary. For the annulus geometry we find that the MFPT can be minimized for a specific value of the width of the outer shell. In contrast, for the two-shell geometry we show that the MFPT depends monotonically on all model parameters, in particular on the outer shell width. Moreover, we find that the distance between the starting point and the narrow escape region which maximizes the MFPT depends discontinuously on the ratio between inner and outer diffusivity.
condensed matter
We investigate whether generating synthetic data can be a viable strategy for providing access to detailed geocoding information for external researchers, without compromising the confidentiality of the units included in the database. Our work was motivated by a recent project at the Institute for Employment Research (IAB) in Germany that linked exact geocodes to the Integrated Employment Biographies (IEB), a large administrative database containing several million records. We evaluate the performance of three synthesizers regarding the trade-off between preserving analytical validity and limiting disclosure risks: One synthesizer employs Dirichlet Process mixtures of products of multinomials (DPMPM), while the other two use different versions of Classification and Regression Trees (CART). In terms of preserving analytical validity, our proposed synthesis strategy for geocodes based on categorical CART models outperforms the other two. If the risks of the synthetic data generated by the categorical CART synthesizer are deemed too high, we demonstrate that synthesizing additional variables is the preferred strategy to address the risk-utility trade-off in practice, compared to limiting the size of the regression trees or relying on the strategy of providing geographical information only on an aggregated level. We also propose strategies for making the synthesizers scalable for large files, present analytical validity measures and disclosure risk measures for the generated data, and provide general recommendations for statistical agencies considering the synthetic data approach for disseminating detailed geographical information.
statistics
Classical shadow tomography provides an efficient method for predicting functions of an unknown quantum state from a few measurements of the state. It relies on a unitary channel that efficiently scrambles the quantum information of the state to the measurement basis. Facing the challenge of realizing deep unitary circuits on near-term quantum devices, we explore the scenario in which the unitary channel can be shallow and is generated by a quantum chaotic Hamiltonian via time evolution. We provide an unbiased estimator of the density matrix for all ranges of the evolution time. We analyze the sample complexity of the Hamiltonian-driven shadow tomography. For Pauli observables, we find that it can be more efficient than the unitary-2-design-based shadow tomography in a sequence of intermediate time windows that range from an order-1 scrambling time to a time scale of $D^{1/6}$, given the Hilbert space dimension $D$. In particular, the efficiency of predicting diagonal Pauli observables is improved by a factor of $D$ without sacrificing the efficiency of predicting off-diagonal Pauli observables.
quantum physics
Heavy-quark symmetry as applied to heavy hadron systems implies that their interactions are independent of their heavy-quark spin (heavy-quark spin symmetry) and heavy flavour contents (heavy flavour symmetry). In the molecular hypothesis the $X(3872)$ resonance is a $1^{++}$ $D^*\bar{D}$ bound state. If this is the case, the application of heavy-quark symmetry to a molecular $X(3872)$ suggests the existence of a series of partner states, the most obvious of which is a possible $2^{++}$ $D^*\bar{D}^*$ bound state for which the two-body potential is identical to that of the $1^{++}$ $D^*\bar{D}$ system, the reason being that these two heavy hadron-antihadron states have identical light-spin content. As already discussed in the literature this leads to the prediction of a partner state at $4012\,{\rm MeV}$, at least in the absence of other dynamical effects which might affect the location of this molecule. However the prediction of further heavy-quark symmetry partners cannot be done solely on the basis of symmetry and requires additional information. We propose to use the one boson exchange model to fill this gap, in which case we will be able to predict or discard the existence of other partner states. Besides the isoscalar $2^{++}$ $D^*\bar{D}^*$ bound state, we correctly reproduce the location and quantum numbers of the isovector hidden-bottom $Z_b(10610)$ and $Z_b(10650)$ molecular candidates. We also predict the hidden-bottom $1^{++}$ $B^*\bar{B}^*$ and $2^{++}$ $B^*\bar{B}^*$ partners of the $X(3872)$, in agreement with previous theoretical speculations, plus a series of other states. The isoscalar, doubly charmed $1^+$ $D D^*$ and $D^* D^*$ molecules and their doubly bottomed counterparts are likely to bind, providing a few instances of explicitly exotic systems.
high energy physics phenomenology
Extreme high-energy peaked BL Lac objects (EHBLs) are an emerging class of blazars. Their typical two-hump structured spectral energy distribution (SED) peaks at higher energies with respect to conventional blazars. Multi-wavelength (MWL) observations constrain their synchrotron peak in the medium to hard X-ray band. Their gamma-ray SED peaks above the GeV band, and in some objects it extends up to several TeV. Up to now, only a few EHBLs have been detected in the TeV gamma-ray range. In this paper, we report the detection of the EHBL 2WHSP J073326.7+515354, observed and detected during 2018 in TeV gamma rays with the MAGIC telescopes. The broad-band SED is studied within a MWL context, including an analysis of the Fermi-LAT data over ten years of observation and with simultaneous Swift-XRT, Swift-UVOT, and KVA data. Our analysis results in a set of spectral parameters that confirms the classification of the source as an EHBL. In order to investigate the physical nature of this extreme emission, different theoretical frameworks were tested to model the broad-band SED. The hard TeV spectrum of 2WHSP J073326.7+515354 sets the SED far from the energy equipartition regime in the standard one-zone leptonic scenario of blazar emission. Conversely, more complex models of the jet, represented by either a two-zone spine-layer model or a hadronic emission model, better represent the broad-band SED.
astrophysics
Features in the distribution of exoplanet parameters by period demonstrate that the distribution of planet parameters is rich with information that can provide essential guidance to understanding planet histories. Structure has been found in the counts of planet-star "objects" by period, and within these structures there are different correlations of eccentricity with planet number, stellar metallicity, planet count density per log period, stellar multiplicity, and planet mass. These appear to change with each other, and with stellar mass, but there are too few planets to easily and reliably study these important relationships. These relationships are the bulk observables against which the theory of planet formation and evolution must be tested. The opportunity to determine how the exoplanet parameters depend on each other demonstrates the value of finding more planets across periods ranging from days to thousands of days and beyond. We recommend support for continuing to find more planets, even giant planets, with periods up to a few thousand days. We also recommend support for the study of the distribution of the many exoplanet parameters.
astrophysics
We introduce a new unsupervised learning problem: clustering wide-sense stationary ergodic stochastic processes. A covariance-based dissimilarity measure, together with asymptotically consistent algorithms, is designed for clustering offline and online datasets, respectively. We also suggest a formal criterion for the efficiency of dissimilarity measures, and discuss some approaches to improving the efficiency of our clustering algorithms when they are applied to cluster particular types of processes, such as self-similar processes with wide-sense stationary ergodic increments. Clustering of synthetic and real-world data is provided as an example of application.
statistics
This contribution describes fast time-stamping cameras sensitive to optical photons and their applications.
physics
Long prediction horizons in Model Predictive Control (MPC) often prove to be effective; however, this comes with increased computational cost. Recently, a Robust Model Predictive Control (RMPC) method has been proposed which exploits models of different granularity. The prediction over the control horizon is split into short-term predictions with a detailed model using MPC and long-term predictions with a coarse model using RMPC. In many applications robustness is required for the short-term future, but in the long-term future, which is subject to major uncertainty and potential modeling difficulties, robust planning can lead to highly conservative solutions. We therefore propose combining RMPC on a detailed model for short-term predictions with Stochastic MPC (SMPC), with chance constraints, on a simplified model for long-term predictions. This yields decreased computational effort, due to a simple model for long-term predictions, and less conservative solutions, as robustness is only required for short-term predictions. The effectiveness of the method is shown in a mobile robot collision avoidance simulation.
electrical engineering and systems science
The Internet of Things adoption in the manufacturing industry allows enterprises to monitor their electrical power consumption in real time and at machine level. In this paper, we follow up on such emerging opportunities for data acquisition and show that analyzing power consumption in manufacturing enterprises can serve a variety of purposes. Apart from the prevalent goal of reducing overall power consumption for economical and ecological reasons, such data can, for example, be used to improve production processes. Based on a literature review and expert interviews, we discuss how analyzing power consumption data can serve the goals of reporting, optimization, fault detection, and predictive maintenance. To tackle these goals, we propose to implement the measures real-time data processing, multi-level monitoring, temporal aggregation, correlation, anomaly detection, forecasting, visualization, and alerting in software. We transfer our findings to two manufacturing enterprises and show how the presented goals reflect in these enterprises. In a pilot implementation of a power consumption analytics platform, we show how our proposed measures can be implemented with a microservice-based architecture, stream processing techniques, and the fog computing paradigm. We provide the implementations as open source as well as a public demo, allowing others to reproduce and extend our research.
computer science
This article presents a description of a cosmic-ray diffusive propagation model of a close, point-like, flash-lamp-like source and an approximation of the experimentally observed spectral irregularity with this model. We show that this spectral irregularity can be explained using the presented model and provide the most probable characteristics of such a source, as well as several observed and identified sources which can be candidates for this role.
astrophysics
We explore the shape and internal structure of diblock copolymer (di-BCP) nanoparticles (NPs) by using the Ginzburg-Landau free-energy expansion. The self-assembly of di-BCP lamellae confined in emulsion droplets can form either ellipsoidal or onion-like NPs. The corresponding inner structure is a lamellar phase that is either perpendicular to the long axis of the ellipsoids (L$_\perp$) or forms a multi-layer concentric shell (C$_\parallel$), respectively. We focus on the effects of the interaction parameter between the A/B monomers $\tau$, and the polymer/solvent $\chi$, as well as the NP size on the nanoparticle shape and internal morphology. The aspect ratio ($l_{\rm AR}$) defined as the length ratio between the long and short axes is used to characterize the overall NP shape. Our results show that for the solvent that is neutral towards the two blocks, as $\tau$ increases, the $l_{\rm AR}$ of the NP first increases and then decreases, indicating that the NP becomes more elongated and then changes to a spherical NP. Likewise, decreasing $\chi$ or increasing the NP size can result in a more elongated NP. However, when the solvent has a preference towards the A or B blocks, the NP shape changes from striped ellipsoid to onion-like sphere by increasing the A/B preference parameter strength. The critical condition of the transition from an L$_\perp$ to C$_\parallel$ phase has been identified. Our results are in good agreement with previous experiments, and some of our predictions could be tested in future experiments.
condensed matter
The X-ray scattering intensities I(k) of linear alkanols OH(CH2)n-1CH3, obtained from experiments (methanol to 1-undecanol) and computer simulations (methanol to 1-nonanol) with different force field models, are comparatively studied, particularly in order to explain the origin and the properties of the scattering pre-peak in the k-vector range 0.3A^{-1}-1A^{-1}. The experimental I(k) show two apparent features: the pre-peak position kP decreases with increasing n, and, more intriguingly, the amplitude AP goes through a maximum at 1-butanol (n=4). The first feature is well reproduced by all force field models, while the second shows a strong model dependence. The simulations reveal various shapes of clusters of the hydroxyl head-group for n>2. kP is directly related to the size of the \emph{meta-objects} corresponding to such clusters surrounded by their alkyl tails. The explanation of the AP turnover at n=4 is more involved, in terms of cancellations of atom-atom S(k) contributions related to domain ordering. The flexibility of the alkyl tails tends to reduce the cross contributions, thus revealing the crucial importance of this parameter in the models. Force fields with an all-atom representation are less successful in reproducing the pre-peak features for smaller alkanols (n<6), possibly because they blur the charge-ordering process since all atoms bear partial charges. The analysis clearly shows that it is not possible to obtain a model-free explanation of the features of I(k).
physics
Connected with the rise of interest in inverse problems is the development and analysis of regularization methods, which are a necessity due to the ill-posedness of inverse problems. Tikhonov-type regularization methods are very popular in this regard. However, their direct implementation for large-scale linear or non-linear problems is a non-trivial task. In such scenarios, iterative regularization methods usually serve as a better alternative. In this paper we propose a new iterative regularization method which uses descent directions, different from the usual gradient direction, that enable a smoother and more effective recovery than the latter. This is achieved by transforming the original noisy gradient, via a smoothing operator, into a smoother gradient, which is more robust to the noise present in the data. It is also shown that this technique is very beneficial when dealing with data having a large noise level. To illustrate the computational efficiency of this method we apply it to numerically solve some classical integral inverse problems, including image deblurring and tomography problems, and compare the results with certain standard regularization methods, such as Tikhonov, TV, CGLS, etc.
mathematics
Cosmological probes pose an inverse problem where the measurement result is obtained through observations, and the objective is to infer values of model parameters which characterize the underlying physical system, our Universe. Modern cosmological probes increasingly rely on measurements of the small-scale structure, and the only way to accurately model physical behavior on those scales, roughly 65 Mpc/h or smaller, is via expensive numerical simulations. In this paper, we provide a detailed description of a novel statistical framework for obtaining accurate parameter constraints by combining observations with a very limited number of cosmological simulations. The proposed framework utilizes multi-output Gaussian process emulators that are adaptively constructed using Bayesian optimization methods. We compare several approaches for constructing multi-output emulators that enable us to take possible inter-output correlations into account while maintaining the efficiency needed for inference. Using the Lyman alpha forest flux power spectrum, we demonstrate that our adaptive approach requires considerably fewer simulations (by a factor of a few in the Lyman alpha P(k) case considered here) compared to emulation based on Latin hypercube sampling, and that the method is more robust in reconstructing parameters and their Bayesian credible intervals.
astrophysics
We present the design, implementation and experimental validation of a supervisory predictive control approach for an electrical heating system featuring a phase change slurry as heat storage and transfer medium. The controller optimizes the energy flows that are used as set points for the heat generation and energy distribution components. The optimization handles the thermal and electrical subsystems simultaneously and is able to switch between different objectives. We show that the controller can be implemented on low-cost embedded hardware and validate it with an experimental test bed comprising an installation of the complete heating system, including all hydraulic and all electrical components. Experimental results demonstrate the feasibility of both a heat pump heating system with a phase change slurry and the optimal control approach. The main control objectives, i.e., thermal comfort and maximum self-consumption of solar energy, can be met. In addition, the system and its controller provide a load shifting potential.
electrical engineering and systems science
In this paper, we pursue two main goals. First, we give some functorial properties of the $p$-analog of the Fourier-Stieltjes algebras, generalizing some previously existing definitions and theorems from the works of Arsac and Cowling, and utilize them to prove the $p$-complete boundedness of some well-known maps on these algebras. Second, as an application of these generalizations, we prove the $p$-complete boundedness of homomorphisms induced by continuous and proper piecewise affine maps, which is a generalization of Ilie's work on Fig\`a-Talamanca-Herz algebras.
mathematics
In this work we study kink-antikink and antikink-kink collisions in hyperbolic models of fourth and sixth order. We compare the scattering patterns with known results from polynomial models of the same order. The hyperbolic models considered here tend to the polynomial $\phi^4$ and $\phi^6$ models in the limit of small values of the scalar field. We show that kinks and antikinks that interact hyperbolically at fourth order differ appreciably from those governed by the polynomial $\phi^4$ model. Increasing the order of interaction to sixth order shows that the hyperbolic and polynomial models give intricate scattering structures that differ only slightly. The dependence on the order of interaction is related to some characteristics of the models, such as the potential of perturbations and the number of vibrational modes.
high energy physics theory
We introduce manifold-learning flows (M-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent datasets with a manifold structure more faithfully and provide handles on dimensionality reduction, denoising, and out-of-distribution detection. We argue why such models should not be trained by maximum likelihood alone and present a new training algorithm that separates manifold and density updates. In a range of experiments we demonstrate how M-flows learn the data manifold and allow for better inference than standard flows in the ambient data space.
statistics
We consider an infinite strip $\Omega_L=(0,2\pi L)^{d-1}\times\mathbb{R}$, $d\geq 2$, $L>0$, and study the control problem of the heat equation on $\Omega_L$ with Dirichlet or Neumann boundary conditions, and control set $\omega\subset\Omega_L$. We provide a necessary and sufficient condition for null-controllability in any positive time $T>0$, which is a geometric condition on the control set $\omega$. This is referred to as "thickness with respect to $\Omega_L$" and implies that the set $\omega$ cannot be concentrated in a particular region of $\Omega_L$. We compare the thickness condition with a previously known necessary condition for null-controllability and give a control cost estimate which only shows dependence on the geometric parameters of $\omega$ and the time $T$.
mathematics
Surveys of the Milky Way (MW) and M31 enable detailed studies of stellar populations across ages and metallicities, with the goal of reconstructing formation histories across cosmic time. These surveys motivate key questions for galactic archaeology in a cosmological context: when did the main progenitor of a MW/M31-mass galaxy form, and what were the galactic building blocks that formed it? We investigate the formation times and progenitor galaxies of MW/M31-mass galaxies using the FIRE-2 cosmological simulations, including 6 isolated MW/M31-mass galaxies and 6 galaxies in Local Group (LG)-like pairs at z = 0. We examine main progenitor "formation" based on two metrics: (1) transition from primarily ex-situ to in-situ stellar mass growth and (2) mass dominance compared to other progenitors. We find that the main progenitor of a MW/M31-mass galaxy emerged typically at z ~ 3-4 (11.6-12.2 Gyr ago), while stars in the bulge region (inner 2 kpc) at z = 0 formed primarily in a single main progenitor at z < 5 (< 12.6 Gyr ago). Compared with isolated hosts, the main progenitors of LG-like paired hosts emerged significantly earlier (\Delta z ~ 2, \Delta t ~ 1.6 Gyr), with ~ 4x higher stellar mass at all z > 4 (> 12.2 Gyr ago). This highlights the importance of environment in MW/M31-mass galaxy formation, especially at early times. Overall, about 100 galaxies with M_star > 10^5 M_sun formed a typical MW/M31-mass system. Thus, surviving satellites represent a highly incomplete census (by ~ 5x) of the progenitor population.
astrophysics
Let N/F be a finite, normal extension of number fields with Galois group G. Suppose that N/F is weakly ramified, and that the square root A(N/F) of the inverse different of N/F is defined. (This latter condition holds if, for example, G is of odd order.) B. Erez has conjectured that the class (A(N/F)) of A(N/F) in the locally free class group Cl(ZG) of ZG is equal to Chinburg's second Omega invariant attached to N/F. We show that this equality holds whenever A(N/F) is defined and N/F is tame. This extends a result of the second-named author and S. Vinatier.
mathematics
In this paper, we adapt Recurrent Neural Networks with Stochastic Layers, which are the state of the art for generating text, music and speech, to the problem of acoustic novelty detection. By integrating uncertainty into the hidden states, this type of network is able to learn the distribution of complex sequences. Because the learned distribution can be calculated explicitly in terms of probability, we can evaluate how likely an observation is and then detect low-probability events as novel. The model is robust, highly unsupervised, end-to-end and requires minimal preprocessing, feature engineering or hyperparameter tuning. An experiment on a benchmark dataset shows that our model outperforms the state-of-the-art acoustic novelty detectors.
electrical engineering and systems science
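The abstract above describes novelty detection by thresholding the likelihood assigned by a learned density model. The sketch below illustrates only that thresholding step; a kernel density estimate stands in for the paper's RNN with stochastic layers, and all data and threshold choices are illustrative assumptions.

```python
# Minimal sketch of likelihood-threshold novelty detection (a KDE stands in
# for the learned sequence distribution; data and threshold are illustrative).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
normal_frames = rng.normal(0.0, 1.0, size=2000)          # "normal" training frames
test_frames = np.concatenate([rng.normal(0.0, 1.0, 50),
                              rng.normal(6.0, 0.5, 5)])   # last 5 frames are anomalous

density = gaussian_kde(normal_frames)                     # fit the normal-data density
log_lik = np.log(density(test_frames) + 1e-12)            # log-likelihood of each test frame

threshold = np.quantile(np.log(density(normal_frames) + 1e-12), 0.01)
novel = log_lik < threshold                               # low-probability events -> novel
print(f"flagged {novel.sum()} of {len(test_frames)} frames as novel")
```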
Let $I_1=[a_0,a_1),\ldots,I_{k}= [a_{k-1},a_k)$ be a partition of the interval $I=[0,1)$ into $k$ subintervals. Let $f:I\to I$ be a map such that each restriction $f|_{I_i}$ is an increasing Lipschitz contraction. We prove that any such $f$ admits at most $k$ periodic orbits, where the upper bound is sharp. We are also interested in the dynamics of piecewise linear $\lambda$-affine maps, where $0<\lambda<1$. Let $b_1,\ldots,b_k$ be real numbers and let $F_\lambda: I\to \mathbb{R}$ be a function such that each restriction $F_\lambda|_{I_i}(x)=\lambda x +b_i$. Under a generic assumption on the parameters $a_1,\ldots,a_{k-1},b_1,\ldots,b_k$, we prove that, up to a zero Hausdorff dimension set of slopes $\lambda$, the $\omega$-limit set of the piecewise $\lambda$-affine map $f_\lambda:x\in I \mapsto F_\lambda(x)\pmod{1}$ at every point equals a periodic orbit and there exist at most $k$ periodic orbits. Moreover, letting $\mathfrak{E}^{(k)}$ denote the exceptional set of parameters $\lambda,a_1,\ldots,a_{k-1},b_1,\ldots,b_k$ which define non-asymptotically periodic $f_\lambda$, we prove that $\mathfrak{E}^{(k)}$ is a Lebesgue null set whose Hausdorff dimension is greater than or equal to $k$.
mathematics
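The piecewise $\lambda$-affine map in the abstract above is easy to iterate numerically. The sketch below, with arbitrarily chosen partition points, offsets and slope, simply iterates $f_\lambda(x)=\lambda x+b_i \pmod 1$ and checks whether the orbit becomes eventually periodic; it is an illustration, not part of the paper's proof.

```python
# Sketch: iterate f_lambda(x) = lambda*x + b_i (mod 1) on the partition
# I_i = [a_{i-1}, a_i) and look for eventual periodicity. Parameters are illustrative.
import numpy as np

a = [0.0, 0.3, 0.7, 1.0]          # partition endpoints a_0 < a_1 < ... < a_k
b = [0.11, 0.47, 0.83]            # offsets b_1, ..., b_k (arbitrary choices)
lam = 0.5                         # contraction factor 0 < lambda < 1

def f(x):
    i = np.searchsorted(a, x, side="right") - 1   # subinterval containing x
    return (lam * x + b[i]) % 1.0

x = 0.123
orbit = []
for _ in range(2000):             # contraction makes the orbit settle quickly
    x = f(x)
    orbit.append(round(x, 12))

tail = orbit[-200:]
period = next((p for p in range(1, 100) if tail[-p:] == tail[-2 * p:-p]), None)
print("apparent eventual period:", period)
```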
The identification of electronic nematicity across series of iron-based superconductors raises the question of its relationship with superconductivity and other ordered states. Here, we report a systematic elastoresistivity study on LaFe$_{1-x}$Co$_x$AsO single crystals, which have well separated structural and magnetic transition lines. All crystals show Curie-Weiss-like nematic susceptibility in the tetragonal phase. The extracted nematic temperature is monotonically suppressed upon cobalt doping, and changes sign around the optimal doping level, indicating a possible nematic quantum critical point beneath the superconducting dome. The amplitude of nematic susceptibility shows a peculiar double-peak feature. This could be explained by a combined effect of different contributions to the nematic susceptibility, which are amplified at separated doping levels of LaFe$_{1-x}$Co$_x$AsO.
condensed matter
Atomically-abrupt interfaces in transition metal oxide (TMO) heterostructures could host a variety of exotic condensed matter phases that may not be found in the bulk materials at equilibrium. A critical step in the development of such atomically-sharp interfaces is the deposition of atomically-smooth TMO thin films. Optimized deposition conditions exist for the growth of perovskite oxides. However, the deposition of rutile oxides, such as VO$_2$, with atomic-layer precision has been challenging. In this work, we used pulsed laser deposition (PLD) to grow atomically-smooth VO$_2$ thin films on rutile TiO$_2$ (101) substrates. We show that an optimal substrate preparation procedure followed by the deposition of VO$_2$ films at a temperature conducive to step-flow growth mode is essential for achieving atomically-smooth VO$_2$ films. The films deposited at optimal substrate temperatures show the step and terrace structure of the underlying TiO$_2$ substrate. At lower deposition temperatures, there is a transition to a mixed growth mode comprising island growth and layer-by-layer growth modes. VO$_2$ films deposited at optimal substrate temperatures undergo a metal-insulator transition (MIT) at a transition temperature of $\sim$325 K with a $\sim$10$^3$-fold increase in resistance, akin to the MIT in bulk VO$_2$.
condensed matter
The term natural experiment is used inconsistently. In one interpretation, it refers to an experiment where a treatment is randomly assigned by someone other than the researcher. In another interpretation, it refers to a study in which there is no controlled random assignment, but treatment is assigned by some external factor in a way that loosely resembles a randomized experiment---often described as an "as if random" assignment. In yet another interpretation, it refers to any non-randomized study that compares a treatment to a control group, without any specific requirements on how the treatment is assigned. I introduce an alternative definition that seeks to clarify the integral features of natural experiments and at the same time distinguish them from randomized controlled experiments. I define a natural experiment as a research study where the treatment assignment mechanism (i) is neither designed nor implemented by the researcher, (ii) is unknown to the researcher, and (iii) is probabilistic by virtue of depending on an external factor. The main message of this definition is that the difference between a randomized controlled experiment and a natural experiment is not a matter of degree, but of essence, and thus conceptualizing a natural experiment as a research design akin to a randomized experiment is neither rigorous nor a useful guide to empirical analysis. Using my alternative definition, I discuss how a natural experiment differs from a traditional observational study, and offer practical recommendations for researchers who wish to use natural experiments to study causal effects.
statistics
We consider a hybrid system of matter and light as a sensing device and quantify the role of cooperative effects. The latter generically enhance the precision with which modifications of the effective light-matter coupling constant can be measured. In particular, considering a fundamental model of $N$ qubits coupled to a single electromagnetic mode, we show that the ultimate bound for the precision shows double-Heisenberg scaling: $\Delta\theta\propto1/(Nn)$, with $N$ and $n$ being the number of qubits and photons, respectively. Moreover, even using classical states and measuring only one subsystem, a Heisenberg-times-shot-noise scaling, i.e. $1/(N\sqrt{n})$ or $1/(n\sqrt{N})$, is reached. As an application, we show that a Bose-Einstein condensate trapped in a double-well potential within an optical cavity can detect the gravitational acceleration $g$ with a relative precision of $\Delta g/g\simeq10^{-9}\text{Hz}^{-1/2}$. The analytical approach presented in this study takes into account the leakage of photons through the cavity mirrors, and allows us to determine the sensitivity when $g$ is inferred via measurements on atoms or photons.
condensed matter
In Orthogonal Time Frequency Space (OTFS) modulation, information symbols are embedded in the delay-Doppler (DD) domain instead of the time-frequency (TF) domain. In order to ensure compatibility with existing OFDM systems (e.g. 4G LTE), most prior work on OTFS receivers considers a two-step conversion, where the received time-domain (TD) signal is first converted to a time-frequency (TF) signal (using an OFDM demodulator) followed by post-processing of this TF signal into a DD domain signal. In this paper, we show that the spectral efficiency (SE) performance of a two-step conversion based receiver degrades in very high mobility scenarios where the Doppler shift is a significant fraction of the communication bandwidth (e.g., control and non-payload communication (CNPC) in Unmanned Aircraft Systems (UAS)). We therefore consider an alternate conversion, where the received TD signal is directly converted to the DD domain. The resulting received DD domain signal is shown to be not the same as that obtained in the two-step conversion considered in prior works. The alternate conversion does not require an OFDM demodulator and is shown to have lower complexity than the two-step conversion. Analysis and simulations reveal that even in very high mobility scenarios, the SE achieved with the alternate conversion is invariant of Doppler shift and is significantly higher than the SE achieved with two-step conversion (which degrades with increasing Doppler shift).
computer science
We build a quantum cellular automaton (QCA) which coincides with 1+1 QED on its known continuum limits. It consists of a circuit of unitary gates driving the evolution of particles on a one-dimensional lattice and having them interact with the gauge field on the links. The particles are massive fermions, and the evolution is exactly U(1) gauge-invariant. We show that, in the continuous-time discrete-space limit, the QCA converges to the Kogut-Susskind staggered version of 1+1 QED. We also show that, in the continuous spacetime limit and in the free one particle sector, it converges to the Dirac equation, a strong indication that the model remains accurate in the relativistic regime.
quantum physics
This paper addresses the problem of finding the depth overhead that will be incurred when running quantum circuits on near-term quantum computers. Specifically, it is envisaged that near-term quantum computers will have low qubit connectivity: each qubit will only be able to interact with a subset of the other qubits, a reality typically represented by a qubit interaction graph in which a vertex represents a qubit and an edge represents a possible direct 2-qubit interaction (gate). Thus the depth overhead is unavoidably incurred by introducing swap gates into the quantum circuit to enable general qubit interactions. This paper proves that a depth overhead of $\Omega(\log n)$ must necessarily be incurred for some quantum circuits with $n$ qubits when they are run on quantum computers whose qubit interaction graph has finite degree, but also that a logarithmic depth overhead is always achievable. The latter is shown by the construction of a 4-regular qubit interaction graph and an associated compilation algorithm that can execute any quantum circuit with only a logarithmic depth overhead.
quantum physics
The D-Wave quantum annealers make it possible to obtain high quality solutions of NP-hard problems by mapping a problem in a QUBO (quadratic unconstrained binary optimization) or Ising form to the physical qubit connectivity structure on the D-Wave chip. However, the latter is restricted in that only a fraction of all pairwise couplers between physical qubits exists. Modeling the connectivity structure of a given problem instance thus necessitates the computation of a minor embedding of the variables in the problem specification onto the logical qubits, which consist of several physical qubits "chained" together to act as a logical one. After annealing, it is however not guaranteed that all chained qubits get the same value (-1 or +1 for an Ising model, and 0 or 1 for a QUBO), and several approaches exist to assign a final value to each logical qubit (a process called "unembedding"). In this work, we present tailored unembedding techniques for four important NP-hard problems: the Maximum Clique, Maximum Cut, Minimum Vertex Cover, and Graph Partitioning problems. Our techniques are simple and yet make use of structural properties of the problem being solved. Using Erd\H{o}s-R\'enyi random graphs as inputs, we compare our unembedding techniques to three popular ones (majority vote, random weighting, and minimize energy). We demonstrate that our proposed algorithms outperform the currently available ones in that they yield solutions of better quality, while being computationally equally efficient.
quantum physics
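The abstract above contrasts tailored unembedding strategies with the common baselines; majority vote is the simplest of those baselines and can be sketched directly. The sketch below assumes an Ising-style sample (spins in {-1, +1}) and a chain map from logical variables to physical qubits; the tie-breaking convention and all values are illustrative, and the paper's problem-specific techniques are not reproduced.

```python
# Sketch of majority-vote unembedding for chained qubits (a baseline strategy,
# not the tailored techniques proposed in the paper).
from collections import Counter

def majority_vote_unembed(sample, embedding):
    """sample: dict physical_qubit -> spin in {-1, +1};
    embedding: dict logical_variable -> list of physical qubits (the chain)."""
    logical = {}
    for var, chain in embedding.items():
        counts = Counter(sample[q] for q in chain)
        # Break ties toward +1 (an arbitrary convention for this sketch).
        logical[var] = +1 if counts[+1] >= counts[-1] else -1
    return logical

# Toy example: logical variable 0 is spread over three physical qubits.
embedding = {0: [10, 11, 12], 1: [13, 14]}
sample = {10: +1, 11: -1, 12: +1, 13: -1, 14: -1}
print(majority_vote_unembed(sample, embedding))   # {0: 1, 1: -1}
```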
High Q optical resonators are a key component for ultra-narrow linewidth lasers, frequency stabilization, precision spectroscopy and quantum applications. Integration of these resonators in a photonic waveguide wafer-scale platform is key to reducing their cost, size and power as well as sensitivity to environmental disturbances. However, to date, the intrinsic Q of integrated all-waveguide resonators has been relegated to below 150 Million. Here, we report an all-waveguide Si3N4 resonator with an intrinsic Q of 422 Million and a 3.4 Billion absorption loss limited Q. The resonator has a 453 kHz intrinsic linewidth and 906 kHz loaded linewidth, with a finesse of 3005. The corresponding linear loss of 0.060 dB/m is the lowest reported to date for an all-waveguide design with deposited upper cladding oxide. These are the highest intrinsic and absorption loss limited Q factors and lowest linewidth reported to date for a photonic integrated all-waveguide resonator. This level of performance is achieved through a careful reduction of scattering and absorption loss components. We quantify, simulate and measure the various loss contributions, including scattering and absorption, the latter arising in part from surface-state dangling bonds that we believe contribute to the absorption. In addition to the ultra-high Q and narrow linewidth, the resonator has a large optical mode area and volume, both critical for ultra-low laser linewidths and ultra-stable, ultra-low frequency noise reference cavities. These results demonstrate the performance of bulk optic and etched resonators can be realized in a photonic integrated solution, paving the way towards photonic integration compatible Billion Q cavities for precision scientific systems and applications such as nonlinear optics, atomic clocks, quantum photonics and high-capacity fiber communications systems on-chip.
physics
We compute a family of scalar loop diagrams in $AdS$. We use the spectral representation to derive various bulk vertex/propagator identities, and these identities enable us to reduce certain loop bubble diagrams to lower loop diagrams, and often to tree-level exchange or contact diagrams. An important example is the computation of the finite coupling 4-point function of the large-$N$ conformal $O(N)$ model on $AdS_3$. Remarkably, the re-summation of bubble diagrams is equal to a tree-level contact diagram: the $\bar{D}_{1,1,\frac{3}{2},\frac{3}{2}} (z,\bar z)$ function. Another example is a scalar with $\phi^4$ or $\phi^3$ coupling in $AdS_3$: we compute various 4-point (and higher point) loop bubble diagrams with alternating integer and half-integer scaling dimensions in terms of a finite sum of contact diagrams and tree-level exchange diagrams. The 4-point function with external scaling dimension differences obeying $\Delta_{12}=0$ and $\Delta_{34}=1$ enjoys significant simplicity which enables us to compute in quite generality. For integer or half-integer scaling dimensions, we show that the $M$-loop bubble diagram can be written in terms of Lerch transcendent functions of the cross-ratios $z$ and $\bar z$. Finally, we compute 2-point bulk bubble diagrams with endpoints in the bulk, and the result can be written in terms of Lerch transcendent functions of the AdS chordal distance. We show that the similarity of the latter two computations is not a coincidence, but arises from a vertex identity between the bulk 2-point function and the double-discontinuity of the boundary 4-point function.
high energy physics theory
In seismic monitoring, one is usually interested in the response of a changing target zone, embedded in a static inhomogeneous medium. We introduce an efficient method which predicts reflection responses at the earth's surface for different target-zone scenarios, from a single reflection response at the surface and a model of the changing target zone. The proposed process consists of two main steps. In the first step, the response of the original target zone is removed from the reflection response, using the Marchenko method. In the second step, the modelled response of a new target zone is inserted between the overburden and underburden responses. The method fully accounts for all orders of multiple scattering and, in the elastodynamic case, for wave conversion. For monitoring purposes, only the second step needs to be repeated for each target-zone model. Since the target zone covers only a small part of the entire medium, the proposed method is much more efficient than repeated modelling of the entire reflection response.
physics
Unconventional quasiparticle excitations in condensed matter systems have become one of the most important research frontiers. Beyond two- and fourfold degenerate Weyl and Dirac fermions, three-, six- and eightfold symmetry protected degeneracies have been predicted; however, they remain challenging to realize in solid-state materials. Here, the charge density wave compound TaTe4 is proposed to host an eightfold fermionic excitation and a Dirac point in its energy bands. High quality TaTe4 single crystals are prepared, where the charge density wave is revealed by directly imaging the atomic structure and a pseudogap of about 45 meV on the surface. Shubnikov-de Haas oscillations of TaTe4 are consistent with band structure calculations. Scanning tunneling microscopy reveals atomic step edge states on the surface of TaTe4. This work shows that charge density waves are able to induce new topological phases and sheds new light on the novel excitations in condensed matter materials.
condensed matter
The $\varepsilon$-form of a system of differential equations for Feynman integrals has led to tremendous progress in our abilities to compute Feynman integrals, as long as they fall into the class of multiple polylogarithms. It is therefore of current interest whether these methods extend beyond the case of multiple polylogarithms. In this talk I discuss Feynman integrals which are associated to elliptic curves and their differential equations. I show for non-trivial examples how the system of differential equations can be brought into an $\varepsilon$-form. Single-scale and multi-scale cases are discussed.
high energy physics phenomenology
We introduce a full set of rules to directly express all $M$-point conformal blocks in one- and two-dimensional conformal field theories, irrespective of the topology. The $M$-point conformal blocks are power series expansions in carefully chosen conformal cross-ratios. We then prove the rules for any topology constructively with the help of the known position space operator product expansion. To this end, we first compute the action of the position space operator product expansion on the most general function of position space coordinates relevant to conformal field theory. These results provide the complete knowledge of all $M$-point conformal blocks with arbitrary external and internal quasi-primary operators (including arbitrary spins in two dimensions) in any topology.
high energy physics theory
The objective of this paper is 'open-set' speaker recognition of unseen speakers, where ideal embeddings should be able to condense information into a compact utterance-level representation that has small intra-speaker and large inter-speaker distance. A popular belief in speaker recognition is that networks trained with classification objectives outperform metric learning methods. In this paper, we present an extensive evaluation of most popular loss functions for speaker recognition on the VoxCeleb dataset. We demonstrate that the vanilla triplet loss shows competitive performance compared to classification-based losses, and those trained with our proposed metric learning objective outperform state-of-the-art methods.
electrical engineering and systems science
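The abstract above highlights the vanilla triplet loss among the compared objectives. The numpy sketch below shows that loss on toy, L2-normalized embeddings; the margin, batch size and embedding dimension are illustrative assumptions, and this is not the authors' training code.

```python
# Minimal numpy sketch of the vanilla triplet loss (margin and dimensions illustrative).
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Embeddings are assumed to be L2-normalized utterance-level vectors."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)   # same-speaker distance
    d_an = np.sum((anchor - negative) ** 2, axis=-1)   # different-speaker distance
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

rng = np.random.default_rng(1)
def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

a, p, n = (l2norm(rng.normal(size=(8, 64))) for _ in range(3))
print("toy triplet loss:", triplet_loss(a, p, n))
```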
Real Clifford algebras for arbitrary number of space and time dimensions as well as their representations in terms of spinors are reviewed and discussed. The Clifford algebras are classified in terms of isomorphic matrix algebras of real, complex or quaternionic type. Spinors are defined as elements of minimal or quasi-minimal left ideals within the Clifford algebra and as representations of the pin and spin groups. Two types of Dirac adjoint spinors are introduced carefully. The relation between mathematical structures and applications to describe relativistic fermions is emphasized throughout.
high energy physics theory
In this paper, we propose a framework design for wireless sensor networks based on multiple unmanned aerial vehicles (UAVs). Specifically, we aim to minimize deployment and operational costs, with respect to budget and power constraints. To this end, we first optimize the number and locations of cluster heads (CHs) guaranteeing data collection from all sensors. Then, to minimize the data collection flight time, we optimize the number and trajectories of UAVs. Accordingly, we distinguish two trajectory approaches: 1) where a UAV hovers exactly above the visited CH; and 2) where a UAV hovers within a range of the CH. The results provide guidelines for data collection design. The characteristics of sensor nodes' K-means clustering are then discussed. Next, we illustrate the performance of optimal and heuristic solutions for trajectory planning. The genetic algorithm is shown to be near-optimal with only $3.5\%$ degradation. The impacts of the trajectory approach, environment, and UAVs' altitude are investigated. Finally, the fairness of UAV trajectories is discussed.
electrical engineering and systems science
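The abstract above mentions K-means clustering of sensor nodes as part of the CH placement discussion. The sketch below shows that clustering step only, using scikit-learn (assumed available); the number of clusters, field size and sensor layout are illustrative, and the paper's actual optimization also accounts for cost and power constraints.

```python
# Sketch: K-means clustering of sensor positions as candidate cluster-head locations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
sensors = rng.uniform(0, 1000, size=(200, 2))          # sensor coordinates in metres

k = 8                                                   # candidate number of cluster heads
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sensors)
cluster_heads = km.cluster_centers_                     # CH locations (cluster centroids)
assignment = km.labels_                                 # which CH serves each sensor

max_link = max(np.linalg.norm(sensors[i] - cluster_heads[assignment[i]])
               for i in range(len(sensors)))
print("worst sensor-to-CH distance [m]:", round(max_link, 1))
```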
We study the collective motion of a large set of self-propelled particles subject to voter-like interactions. Each particle moves on a two-dimensional space at a constant speed in a direction that is randomly assigned initially. Then, at every step of the dynamics, each particle adopts the direction of motion of a randomly chosen neighboring particle. We investigate the time evolution of the global alignment of particles measured by the order parameter $\varphi$, until complete order $\varphi=1.0$ is reached (polar consensus). We find that $\varphi$ increases as $t^{1/2}$ for short times and approaches exponentially fast to $1.0$ for long times. Also, the mean time to consensus $\tau$ varies non-monotonically with the density of particles $\rho$, reaching a minimum at some intermediate density $\rho_{\tiny \mbox{min}}$. At $\rho_{\tiny \mbox{min}}$, the mean consensus time scales with the system size $N$ as $\tau_{\tiny \mbox{min}} \sim N^{0.765}$, and thus the consensus is faster than in the case of all-to-all interactions (large $\rho$) where $\tau=2N$. We show that the fast consensus, also observed at intermediate and high densities, is a consequence of the segregation of the system into clusters of equally-oriented particles which breaks the balance of transitions between directional states in well mixed systems.
physics
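The voter-like dynamics in the abstract above is simple to simulate: at each step every particle copies the heading of a randomly chosen neighbour and then moves at constant speed. The sketch below does exactly that and measures the order parameter $\varphi$; the box size, interaction radius, speed and particle number are illustrative assumptions rather than the paper's parameters.

```python
# Sketch of the voter-like collective-motion model with periodic boundaries.
import numpy as np

rng = np.random.default_rng(3)
N, L, r, v, steps = 200, 20.0, 1.0, 0.1, 500
pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(0, 2 * np.pi, size=N)

for _ in range(steps):
    # minimal-image pairwise distances on the periodic box
    diff = (pos[:, None, :] - pos[None, :, :] + L / 2) % L - L / 2
    d = np.linalg.norm(diff, axis=2)
    new_theta = theta.copy()
    for i in range(N):
        nbrs = np.where((d[i] < r) & (np.arange(N) != i))[0]
        if nbrs.size:
            new_theta[i] = theta[rng.choice(nbrs)]   # adopt a random neighbour's direction
    theta = new_theta
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L

phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())   # global alignment
print("order parameter phi after", steps, "steps:", round(phi, 3))
```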
The standard benchmark for teleportation is the average fidelity of teleportation and according to this benchmark not all states are useful for teleportation. It was recently shown however that all entangled states lead to non-classical teleportation, with there being no classical scheme able to reproduce the states teleported to Bob. Here we study the operational significance of this result. On the one hand we demonstrate that every entangled state is useful for teleportation if a generalization of the average fidelity of teleportation is considered which concerns teleporting quantum correlations. On the other hand, we show the strength of a particular entangled state and entangled measurement for teleportation -- as quantified by the robustness of teleportation -- precisely characterizes their ability to offer an advantage in the task of subchannel discrimination with side information. This connection allows us to prove that every entangled state outperforms all separable states when acting as a quantum memory in this discrimination task. Finally, within the context of a resource theory of teleportation, we show that the two operational tasks considered provide complete sets of monotones for two partial orders based upon the notion of teleportation simulation, one classical, and one quantum.
quantum physics
We introduce a new set of effective field theory rules for constructing Lagrangians with $\mathcal{N} = 1$ supersymmetry in collinear superspace. In the standard superspace treatment, superfields are functions of the coordinates $(x^\mu,\theta^\alpha, \theta^{\dagger \dot{\alpha}})$, and supersymmetry preservation is manifest at the Lagrangian level in part due to the inclusion of auxiliary $F$- and $D$-term components. By contrast, collinear superspace depends on a smaller set of coordinates $(x^\mu,\eta,\eta^\dagger)$, where $\eta$ is a complex Grassmann number without a spinor index. This provides a formulation of supersymmetric theories that depends exclusively on propagating degrees of freedom, at the expense of obscuring Lorentz invariance and introducing inverse momentum scales. After establishing the general framework, we construct collinear superspace Lagrangians for free chiral matter and non-Abelian gauge fields. For the latter construction, an important ingredient is a superfield representation that is simultaneously chiral, anti-chiral, and real; this novel object encodes residual gauge transformations on the light cone. Additionally, we discuss a fundamental obstruction to constructing interacting theories with chiral matter; overcoming these issues is the subject of our companion paper, where we introduce a larger set of superfields to realize the full range of interactions compatible with $\mathcal{N} = 1$. Along the way, we provide a novel framing of reparametrization invariance using a spinor decomposition, which provides insight into this important light-cone symmetry.
high energy physics theory
Joint radar-communication (JRC) waveform can be used for simultaneous radar detection and communication in the same frequency band. However, radar detection processing requires the prior knowledge of the waveform including the embedded information for matched filtering. To remove this requirement, we propose a unimodular JRC waveform based on composite modulation where the internal modulation embeds information by mapping the bit sequence to different orthogonal signals, and the external modulation performs phase modulation on the internal waveform to satisfy the demand of detection. By adjusting the number of the orthogonal signals, a trade-off between the detection and the communication performance can be made. Besides, a new parameter, dissimilarity, is defined to evaluate the detection performance robustness to unknown embedded information. The numerical results show that the SER performance of the proposed system is similar to that of the multilevel frequency shift keying system, the ambiguity function resembles that of the phase coded signal, and the dissimilarity performance is better than other JRC systems.
electrical engineering and systems science
This work aims to study the generalizability of a pre-developed deep learning (DL) dose prediction model for volumetric modulated arc therapy (VMAT) for prostate cancer and to adapt the model to three different internal treatment planning styles and one external institution planning style. We built the source model with planning data from 108 patients previously treated with VMAT for prostate cancer. For the transfer learning, we selected patient cases planned with three different styles from the same institution and one style from a different institution to adapt the source model to four target models. We compared the dose distributions predicted by the source model and the target models with the clinical dose predictions and quantified the improvement in the prediction quality for the target models over the source model using the Dice similarity coefficients (DSC) of 10% to 100% isodose volumes and the dose-volume-histogram (DVH) parameters of the planning target volume and the organs-at-risk. The source model accurately predicts dose distributions for plans generated in the same source style but performs sub-optimally for the three internal and one external target styles, with the mean DSC ranging between 0.81-0.94 and 0.82-0.91 for the internal and the external styles, respectively. With transfer learning, the target model predictions improved the mean DSC to 0.88-0.95 and 0.92-0.96 for the internal and the external styles, respectively. Target model predictions significantly improved the accuracy of the DVH parameter predictions to within 1.6%. We demonstrated model generalizability for DL-based dose prediction and the feasibility of using transfer learning to solve this problem. With 14-29 cases per style, we successfully adapted the source model into several different practice styles. This indicates a realistic way to widespread clinical implementation of DL-based dose prediction.
physics
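The abstract above quantifies prediction quality with the Dice similarity coefficient of 10%-100% isodose volumes. The sketch below shows how such a metric can be computed on a dose grid; the toy dose arrays and the prescription value are illustrative assumptions, not the study's data.

```python
# Sketch of the Dice similarity coefficient (DSC) between predicted and clinical
# isodose volumes on a common 3-D dose grid.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + 1e-9)

def isodose_dsc(pred_dose, clin_dose, prescription, levels=range(10, 101, 10)):
    """DSC of the 10%-100% isodose volumes."""
    return {lvl: round(dice(pred_dose >= lvl / 100 * prescription,
                            clin_dose >= lvl / 100 * prescription), 3)
            for lvl in levels}

rng = np.random.default_rng(4)
clin = rng.uniform(0, 80, size=(40, 40, 40))            # toy clinical dose grid [Gy]
pred = clin + rng.normal(0, 2, size=clin.shape)         # toy "predicted" dose
print(isodose_dsc(pred, clin, prescription=70.0))
```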
Empirical researchers are usually interested in investigating the impacts that baseline covariates have when uncovering sample heterogeneity and separating samples into more homogeneous groups. However, a considerable number of studies in the structural equation modeling (SEM) framework usually start with vague hypotheses in terms of heterogeneity and possible reasons. This suggests that (1) the determination and specification of a proper model with covariates is not straightforward, and (2) the exploration process may be computationally intensive given that a model in the SEM framework is usually complicated and the pool of candidate covariates is usually huge in the psychological and educational domain where the SEM framework is widely employed. Following \citet{Bakk2017two}, this article presents a two-step growth mixture model (GMM) that examines the relationship between latent classes of nonlinear trajectories and baseline characteristics. Our simulation studies demonstrate that the proposed model is capable of clustering the nonlinear change patterns, and estimating the parameters of interest unbiasedly, precisely, as well as exhibiting appropriate confidence interval coverage. Considering the pool of candidate covariates is usually huge and highly correlated, this study also proposes implementing exploratory factor analysis (EFA) to reduce the dimension of covariate space. We illustrate how to use the hybrid method, the two-step GMM and EFA, to efficiently explore the heterogeneity of nonlinear trajectories of longitudinal mathematics achievement data.
statistics
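The pipeline in the abstract above combines covariate dimension reduction with a mixture model over trajectories. The sketch below is only a loose analogue of that idea using scikit-learn (assumed available): factor analysis on a toy covariate pool, a Gaussian mixture over toy repeated measures, and a per-class summary of factor scores. It is not the authors' two-step GMM for nonlinear trajectories.

```python
# Loose analogue of "EFA on covariates, then relate factor scores to latent classes".
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n = 300
covariates = rng.normal(size=(n, 30))      # toy pool of 30 correlated-ish baseline covariates
growth = rng.normal(size=(n, 5))           # toy repeated-measure outcomes over 5 waves

# Dimension reduction: EFA on the covariate pool.
scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(covariates)

# Unconditional mixture: cluster the outcome trajectories into latent classes.
classes = GaussianMixture(n_components=2, random_state=0).fit_predict(growth)

# Class-covariate relationship: summarize factor scores per latent class.
for c in np.unique(classes):
    print("class", c, "mean factor scores:", scores[classes == c].mean(axis=0).round(2))
```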
In the first part of this study, we compared the performances of two categories of no-slip boundary treatments, i.e., the interpolated bounce-back schemes and the immersed boundary methods in a series of laminar flow simulations within the lattice Boltzmann method. In this second part, these boundary treatments are further compared in the simulations of turbulent flows with complex geometry to provide a next-level assessment of these schemes. Two non-trivial turbulent flow problems, a fully developed turbulent pipe flow at a low Reynolds number, and a decaying homogeneous isotropic turbulent flow laden with a large number of resolved spherical particles are considered. The major problem of the immersed boundary method revealed by the present study is its inability to compute the local velocity gradients inside the diffused interface, which can result in a significantly underestimated dissipation rate and viscous diffusion locally near the particle surfaces. Otherwise, both categories of the no-slip boundary treatments are able to provide accurate results for most of the turbulent statistics in both the carrier and dispersed phases, provided that sufficient grid resolutions are used.
physics
The tremendous advancements in the Internet of Things (IoT) increasingly involve computationally intensive services. These services often require more computation resources than can entirely be satisfied on local IoT devices. Cloud computing is traditionally used to provide unlimited computation resources at distant servers. However, such remote computation may not address the short-delay constraints that many of today's IoT applications require. Edge computing allows offloading computing close to end users to overcome computation and delay issues. Nonetheless, the edge servers may suffer from computing inefficiencies. Indeed, some IoT applications are invoked multiple times by multiple devices. These invocations are often made with the same or similar input data, which leads to the same computational output (results). Nevertheless, the edge server executes every received task from scratch, including the redundant ones. In this work, we investigate the use of the computation reuse concept at the edge server. We design a network-based computation reuse architecture for IoT applications. The architecture stores previously executed results and reuses them to satisfy newly arrived similar tasks instead of performing computation from scratch. By doing so, we eliminate redundant computation, optimize resource utilization, and decrease task completion time. We implemented the architecture and evaluated its performance both at the networking and application levels. From the networking perspective, we reach up to an 80\% reduction in task completion time and up to 60\% reduction in resource utilization. From the application perspective, we achieve up to 90\% computation correctness and accuracy.
computer science
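The architecture in the abstract above stores previously computed results and serves them for newly arrived similar tasks. The sketch below illustrates the simplest, exact-match version of that idea with a hash-keyed cache; the task, payload and timing values are illustrative, and the paper's handling of "similar" (not identical) inputs is not reproduced.

```python
# Sketch of computation reuse at an edge server: cache results keyed by a hash of the input.
import hashlib, json, time

cache = {}

def offload(task_name, payload, compute_fn):
    key = hashlib.sha256((task_name + json.dumps(payload, sort_keys=True)).encode()).hexdigest()
    if key in cache:                       # reuse a previously computed result
        return cache[key], "reused"
    result = compute_fn(payload)           # otherwise compute from scratch and store
    cache[key] = result
    return result, "computed"

def heavy_detection(payload):              # stand-in for an expensive IoT analytics task
    time.sleep(0.2)
    return {"objects": len(payload["pixels"]) % 7}

req = {"pixels": list(range(1000))}
for _ in range(3):
    t0 = time.time()
    out, how = offload("detect", req, heavy_detection)
    print(how, "in %.3f s" % (time.time() - t0))
```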
Fast-cooling after sintering or annealing of BiFeO3-BaTiO3 mixed oxide ceramics yields core-shell structures that give excellent functional properties, but their precise phase assemblage and nanostructure remain an open question. By comparing conventional electron energy loss spectroscopy (EELS) with scanning precession electron diffraction (SPED) mapping using a direct electron detector, we correlate chemical composition with the presence or absence of octahedral tilting and with changes in lattice parameters. This reveals that some grains have a 3-phase assemblage of a BaTiO3-rich pseudocubic shell; a BiFeO3-rich outer core with octahedral tilting consistent with an R3c structure; and an inner core richer in Ba and even poorer in Ti, which seems to show a pseudocubic structure of slightly smaller lattice parameter than the shell region. This last structure has not been previously identified in these materials, but the composition and structure fit with previous studies. These inner cores are likely to be non-polar and play no part in the ferroelectric properties. Nevertheless, the combination of EELS and SPED clearly provides a novel way to examine heterogeneous microstructures with high spatial resolution, thus revealing the presence of phases that may be too subtle to detect with more conventional techniques.
condensed matter
The ontology proposed in this paper is aimed at demonstrating that it is possible to understand the counter-intuitive predictions of quantum mechanics while still retaining much of the framework underlying classical physics, the implication being that it is better to avoid wandering into unnecessarily speculative realms without the support of conclusive evidence. In particular, it is argued that it is possible to interpret quantum mechanics as simply describing an external world consisting of familiar physical entities (e.g., particles or fields) residing in classical 3-dimensional space (not configuration space) with Lorentz covariance maintained.
quantum physics
We review the gauge hierarchy problem in the standard model. We discuss the meaning of the quadratic divergence in terms of the Wilsonian renormalization group. Classical scale symmetry, which prohibits dimensionful parameters in the bare action, could play a key role for the understanding of the origin of the electroweak scale. We discuss the scale-generation mechanism, i.e. scalegenesis in scale invariant theories. In this paper, we introduce a scale invariant extension of the SM based on a strongly interacting scalar-gauge theory. It is discussed that asymptotically safe quantum gravity provides a hint about solutions to the gauge hierarchy problem.
high energy physics phenomenology
DBSCAN is a classical density-based clustering procedure with tremendous practical relevance. However, DBSCAN implicitly needs to compute the empirical density for each sample point, leading to a quadratic worst-case time complexity, which is too slow on large datasets. We propose DBSCAN++, a simple modification of DBSCAN which only requires computing the densities for a chosen subset of points. We show empirically that, compared to traditional DBSCAN, DBSCAN++ can provide not only competitive performance but also added robustness in the bandwidth hyperparameter while taking a fraction of the runtime. We also present statistical consistency guarantees showing the trade-off between computational cost and estimation rates. Surprisingly, up to a certain point, we can enjoy the same estimation rates while lowering computational cost, showing that DBSCAN++ is a sub-quadratic algorithm that attains minimax optimal rates for level-set estimation, a quality that may be of independent interest.
computer science
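The abstract above describes the core idea of DBSCAN++: compute densities (and hence core points) only for a chosen subset of points rather than for every sample. The sketch below is a simplified illustration of that idea built from scikit-learn primitives (assumed available); the subsample size, eps, minPts and toy data are illustrative, and this is not the authors' exact procedure or theoretical setting.

```python
# Rough sketch of the DBSCAN++ idea: density estimates on a uniform subsample only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.3, (500, 2)), rng.normal(3, 0.3, (500, 2))])
eps, min_pts, m = 0.3, 10, 200                     # m = size of the density subsample

sub = rng.choice(len(X), size=m, replace=False)    # subsample on which densities are computed
neigh = NearestNeighbors(radius=eps).fit(X)
counts = neigh.radius_neighbors(X[sub], return_distance=False)
core = X[sub][[len(c) >= min_pts for c in counts]] # subsampled points that are "core"

core_labels = DBSCAN(eps=eps, min_samples=1).fit_predict(core)   # connect nearby core points
dist, idx = NearestNeighbors(n_neighbors=1).fit(core).kneighbors(X)
labels = np.where(dist[:, 0] <= eps, core_labels[idx[:, 0]], -1)  # -1 marks noise
print("clusters found:", len(set(labels) - {-1}))
```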
In this paper, the holographic p-wave superfluid model with a charged complex vector field is studied in dRGT massive gravity beyond the probe limit. The stability of the p-wave and p+ip solutions is compared in the grand canonical ensemble. The p-wave solution always attains a lower grand potential than the p+ip solution, showing that the holographic system still favors an anisotropic (p-wave) solution even when a massive gravity theory is considered in the bulk. In the holographic superconductor models with dRGT massive gravity in the bulk, a key scaling symmetry is found to be violated by fixing the reference metric parameter $c_0$. Therefore, in order to get the dependence of the condensate and the grand potential on temperature, different values of the horizon radius should be considered in the numerical work. With a special choice of model parameters, we further study the dependence of the critical back-reaction strength on the graviton mass parameter, beyond which the superfluid phase transition becomes first order. We also give the dependence of the critical temperature on the back-reaction strength $b$ and the graviton mass parameter $m^2$.
high energy physics theory
Recent experiments completed by collaborating research groups from Google, NASA Ames, UC Santa Barbara, and others provided compelling evidence that quantum supremacy has finally been achieved on a superconducting quantum processor. The theoretical basis for these experiments depends on sampling the output distributions of random quantum circuits; unfortunately, understanding how this theoretical basis can be used to define quantum supremacy is an extremely difficult task. Anyone attempting to understand how this sampling task relates to quantum supremacy must study concepts from random matrix theory, mathematical analysis, quantum chaos, computational complexity, and probability theory. Resources connecting these concepts in the context of quantum supremacy are scattered and often difficult to find. This article is an attempt to alleviate this difficulty in those who wish to understand the theoretical basis of Google's quantum supremacy experiments, by carefully walking through a derivation of their precise mathematical definition of quantum supremacy. It's designed for advanced undergraduate or graduate students who want more information than can be provided in popular science articles, but who might not know where to begin when tackling the many research papers related to quantum supremacy.
quantum physics
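A central quantity in the supremacy experiments the abstract above walks through is the linear cross-entropy benchmarking fidelity, commonly written as $F_{\mathrm{XEB}} = 2^n \langle P(x_i)\rangle - 1$, where $P(x)$ is the ideal output probability of each sampled bitstring. The sketch below evaluates that formula on a toy Porter-Thomas-like distribution; the distribution, qubit count and sample sizes are illustrative stand-ins, not a simulation of the actual random circuits.

```python
# Sketch of the linear XEB fidelity F_XEB = 2^n * <P(x_i)> - 1 on toy data.
import numpy as np

rng = np.random.default_rng(7)
n = 12                                   # number of qubits
dim = 2 ** n

# Toy "ideal" output distribution with Porter-Thomas-like fluctuations.
p_ideal = rng.exponential(1.0 / dim, size=dim)
p_ideal /= p_ideal.sum()

def xeb(samples):
    return dim * p_ideal[samples].mean() - 1.0

perfect = rng.choice(dim, size=5000, p=p_ideal)     # sampling from the ideal distribution
noisy = rng.integers(dim, size=5000)                # fully depolarized (uniform) sampler
print("F_XEB, ideal sampler   ~", round(xeb(perfect), 3))   # ~1 for Porter-Thomas statistics
print("F_XEB, uniform sampler ~", round(xeb(noisy), 3))     # ~0
```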
To study how nitrogen contributes to perpendicular magnetocrystalline anisotropy (PMA) in the ferrimagnetic antiperovskite Mn$_4$N, we examined both the fabrication of epitaxial Mn$_4$N films with various nitrogen contents and first-principles density-functional calculations. Saturation magnetization ($M_{\rm s}$) peaks of 110 mT and uniaxial PMA energy densities ($K_{\rm u}$) of 0.1 MJ/m$^3$ were obtained for a N$_2$ gas flow ratio ($Q$) of $\sim 10 \%$ during sputtering deposition, suggesting nearly single-phase crystalline $\epsilon$-Mn$_4$N. Segregation of $\alpha$-Mn and nitrogen-deficient Mn$_4$N grains was observed for $Q \approx 6\%$, which was responsible for a decrease in the $M_{\rm s}$ and $K_{\rm u}$. The first-principles calculations revealed that the magnetic structure of Mn$_4$N showing PMA was "type-B" having a collinear structure, whose magnetic moments couple parallel within the c-plane and alternating along the c-direction. In addition, the $K_{\rm u}$ calculated using Mn$_{32}$N$_x$ supercells showed a strong dependence on nitrogen deficiency, in qualitative agreement with the experimental results. The second-order perturbation analysis of $K_{\rm u}$ with respect to the spin-orbit interaction revealed that not only spin-conserving but also spin-flip processes contribute significantly to the PMA in Mn$_4$N. We also found that both contributions decreased with increasing nitrogen deficiency, resulting in the reduction of $K_{\rm u}$. It was noted that the decrease in the spin-flip contribution occurred at the Mn atoms in face-centered sites. This is one of the specific PMA characteristics we found for antiperovskite-type Mn$_4$N.
condensed matter
The ability to generate mode-engineered single photons to interface with disparate quantum systems is of importance for building a quantum network. Here we report on the generation of a pulsed, heralded single photon source with a sub-GHz spectral bandwidth that couples to indium arsenide quantum dots centered at 942 nm. The source is built with a type-II PPKTP down-conversion crystal embedded in a semi-confocal optical cavity and pumped with a 76 MHz repetition rate pulsed laser to emit collinear, polarization-correlated photon pairs resonant with a single quantum dot. In order to demonstrate direct coupling, we use the mode-engineered cavity-SPDC single-photon source to resonantly excite an isolated single quantum dot.
quantum physics
Metalenses offer the ground-breaking opportunity to realize high-performing, low-weight, flat and ultrathin optical elements which substantially reduce the size and complexity of imaging systems. Today, a major challenge in metalens design is still the realization of achromatic optical elements ideally focussing a broad wavelength spectrum at a single focal length. Here we present a fast and effective way to design and fabricate extremely thin all-dielectric metalenses, optimally solving achromaticity issues by means of machine learning codes. The enabling technology for fabrication is a recently developed hyper resolute two-photon direct laser writing lithography equipment. The fabricated metalenses, based on a completely flat and ultrathin design, show intriguing optical features. Overall, the achromatic behavior, a focal length of 1.14 mm, a depth of focus of hundreds of microns and a thickness of only a few nanometers allow the design of novel and efficient imaging systems to be considered from a completely new perspective.
physics
Detumbling refers to the act of damping the angular velocity of the satellite. This operation is of paramount importance since it is virtually impossible to nominally perform any other operation without some degree of attitude control. Common methods used to detumble satellites usually involve magnetic actuation, paired with different types of sensors which are used to provide angular velocity feedback. This paper presents the adverse effects of time-discretization on the stability of two detumbling algorithms. An extensive literature review revealed that both algorithms achieve absolute stability for systems involving continuous feedback and output. However, the physical components involved impose limitations on the maximum frequency of the algorithm, thereby making such a continuous system unrealizable in practice. This asserts the need to perform a discrete-time stability analysis, as it is better suited to reflect on the actual implementation and dynamics of these algorithms. The paper starts with the current theory and views on the stability of these algorithms. The next sections describe the continuous and discrete-time stability analysis performed by the team and the conclusions derived from it. Theoretical investigation led to the discovery of multiple conditions on angular velocity and operating frequencies of the hardware, for which the algorithms were unstable. These results were then verified through simulations in MATLAB and Python 3.6.7. The paper concludes with a discussion on the various instabilities posed by time-discretization and the conditions under which the detumbling algorithm would be infeasible.
electrical engineering and systems science
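The abstract above does not name the two algorithms being analyzed; as a representative example of the kind of magnetic detumbling feedback that gets discretized in practice, the sketch below implements one step of the classical B-dot law with a finite-difference estimate of the field derivative. The gain, field values and sampling period are illustrative assumptions.

```python
# Discrete-time sketch of a B-dot magnetic detumbling step (illustrative only;
# not necessarily one of the two algorithms studied in the paper).
import numpy as np

def bdot_dipole(B_now, B_prev, dt, k=1e4):
    """Commanded magnetic dipole m = -k * dB/dt, with dB/dt from finite differences."""
    return -k * (B_now - B_prev) / dt

B_prev = np.array([2.0e-5, -1.0e-5, 3.0e-5])    # magnetometer reading at t_{k-1} [T]
B_now = np.array([2.1e-5, -0.9e-5, 2.9e-5])     # magnetometer reading at t_k [T]
dt = 0.5                                        # sampling period [s]; stability depends on dt

m = bdot_dipole(B_now, B_prev, dt)
tau = np.cross(m, B_now)                        # torque exerted on the satellite: tau = m x B
print("commanded dipole [A m^2]:", m)
print("resulting torque [N m]:", tau)
```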
We propose a new framework of XGBoost that predicts the entire conditional distribution of a univariate response variable. In particular, XGBoostLSS models all moments of a parametric distribution (i.e., mean, location, scale and shape [LSS]) instead of the conditional mean only. Choosing from a wide range of continuous, discrete and mixed discrete-continuous distributions, modelling and predicting the entire conditional distribution greatly enhances the flexibility of XGBoost, as it allows one to gain additional insight into the data generating process, as well as to create probabilistic forecasts from which prediction intervals and quantiles of interest can be derived. We present both a simulation study and real world examples that demonstrate the virtues of our approach.
statistics
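The abstract above is about obtaining a full predictive distribution, and hence prediction intervals, from boosted trees. The sketch below is a loose analogue of that goal using scikit-learn's gradient boosting with quantile losses (assumed available); it is not the XGBoostLSS package and does not model all distributional parameters jointly.

```python
# Loose analogue of probabilistic forecasting with boosted trees: separate models
# for the conditional mean and for the 5%/95% quantiles give a 90% interval.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2 + 0.1 * np.abs(X[:, 0]))   # heteroscedastic noise

mean_model = GradientBoostingRegressor(loss="squared_error").fit(X, y)
lo_model = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
hi_model = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

x_new = np.array([[0.0], [2.5]])
for x, m, lo, hi in zip(x_new[:, 0], mean_model.predict(x_new),
                        lo_model.predict(x_new), hi_model.predict(x_new)):
    print(f"x={x:+.1f}: mean={m:+.2f}, 90% interval=({lo:+.2f}, {hi:+.2f})")
```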
We report new exact analytical solutions to the cosmological fluid equations for the case where the initial conditions are perturbatively close to a spherical top-hat profile. To do so we enable a fluid description in a Lagrangian-coordinates approach, and prove the convergence of the Taylor-series representation of the Lagrangian displacement field until the time of collapse ("shell-crossing"). This allows the determination of the time of quasi-spherical collapse, which is shown to happen generically earlier than in the spherical case. For pedagogical reasons, calculations are first given for a spatially flat universe that is only filled with a non-relativistic component of cold dark matter (CDM). Then, the methodology is updated to a $\Lambda$CDM Universe, with the inclusion of a cosmological constant $\Lambda>0$.
astrophysics
During planet formation, numerous small impacting bodies result in cratering impacts on large target bodies. A fraction of the target surface is eroded, while a fraction of the impactor material accretes onto the surface. These fractions depend upon the impact velocities, the impact angles, and the escape velocities of the target. This study uses smoothed particle hydrodynamics simulations to model cratering impacts onto a planar icy target for which gravity is the dominant force and material strength is neglected. By evaluating numerical results, scaling laws are derived for the escape mass of the target material and the accretion mass of the impactor material onto the target surface. Together with recently derived results for rocky bodies in a companion study, a conclusion is formulated that typical cratering impacts on terrestrial planets, except for those on Mercury, led to a net accretion, while those on the moons of giant planets, e.g., Rhea and Europa, led to a net erosion. Our newly derived scaling laws would be useful for predicting the erosion of the target body and the accretion of the impactor for a variety of cratering impacts that would occur on large rocky and icy planetary bodies during planet formation and collisional evolution from ancient times to today.
astrophysics
Majorana fermions are often proposed to be realized by first singling out one Fermi surface without spin degeneracy via spin-orbit coupling, and then imposing boundaries or defects. In this work, we take a different route starting with two degenerate Fermi surfaces without spin-orbit coupling, and show that by the method of "kink on boundary", the dispersive chiral Majorana fermions can be realized in superconducting systems with $p\pm is$ pairings. The surfaces of these systems develop spontaneous magnetizations whose directions are determined by the boundary orientations and the phase difference between the $p$ and $s$-component gap functions. Along the magnetic domain walls on the surface, there exist chiral Majorana fermions propagating unidirectionally, which can be conveniently dragged and controlled by external magnetic fields. Furthermore, the surface magnetization is shown to be a magnetoelectric effect based on a Ginzburg-Landau free energy analysis. We also discuss how to use the proximity effects to realize chiral Majorana fermions by performing the "kink on boundary" method.
condensed matter
Overshooting from the convective cores of stars more massive than about 1.2 M(Sun) has a profound impact on their subsequent evolution. And yet, the formulation of the overshooting mechanism in current stellar evolution models has a free parameter (f[ov] in the diffusive approximation) that remains poorly constrained by observations, affecting the determination of astrophysically important quantities such as stellar ages. In an earlier series of papers we assembled a sample of 37 well-measured detached eclipsing binaries to calibrate the dependence of f[ov] on stellar mass, showing that it increases sharply up to a mass of roughly 2 M(Sun), and remains constant thereafter out to at least 4.4 M(Sun). Recent claims have challenged the utility of eclipsing binaries for this purpose, on the basis that the uncertainties in f[ov] from the model fits are typically too large to be useful, casting doubt on a dependence of overshooting on mass. Here we reexamine those claims and show them to be too pessimistic, mainly because they did not account for all available constraints --- both observational and theoretical --- in assessing the true uncertainties. We also take the opportunity to add semi-empirical f[ov] determinations for 13 additional binaries to our previous sample, and to update the values for 9 others. All are consistent with, and strengthen our previous conclusions, supporting a dependence of f[ov] on mass that is now based on estimates for a total of 50 binary systems (100 stars).
astrophysics
Visible light communication (VLC) enables access to huge unlicensed bandwidth, a higher security level, and no radio frequency interference. With these advantages, VLC emerges as a complementary solution to radio frequency communications. VLC systems have primarily been designed for indoor scenarios with typical transmission distances between 2 and 5 m. Different designs would be required for larger distances. This paper proposes for the first time the use of a liquid crystal (LC)-based re-configurable intelligent surface (RIS) for improving the VLC signal detection and transmission range. An LC-based RIS presents multiple advantages, including the tunability of its photo-refractive parameters. Another advantage is its light amplification capabilities when under the influence of an externally applied field. In this paper, we analyze an LC-based RIS structure to amplify the detected light and improve the VLC signal detection and transmission range. Results show that mixing LC with a 4 to 8 wt\% concentration of a dye such as terthiophene (3T-2MB) improves the VLC transmission range by about 0.20 to 1.08 m. This improvement can reach 6.56 m if we combine an 8 wt\% concentration of 3T-2MB and a 0.1 wt\% concentration of trinitrofluorenone.
electrical engineering and systems science
The Lorenz--Mie formulation of electromagnetic scattering by a homogeneous, isotropic, dielectric-magnetic sphere was extended to incorporate topologically insulating surface states characterized by a surface admittance $\gamma$. Closed-form expressions were derived for the expansion coefficients of the scattered field phasors in terms of those of the incident field phasors. These expansion coefficients were used to obtain analytical expressions for the total scattering, extinction, forward scattering, and backscattering efficiencies of the sphere. Resonances exist for relatively low values of $\gamma$, when the sphere is either nondissipative or weakly dissipative. For large values of $\gamma$, the scattering characteristics are close to that of a perfect electrically conducting sphere, regardless of whether the sphere is composed of a dissipative or nondissipative material, and regardless of whether that material supports planewave propagation with positive or negative phase velocity.
physics
We propose a 3+1 Higgs Doublet Model based on the $\Delta(27)$ family symmetry supplemented by several auxiliary cyclic symmetries leading to viable Yukawa textures for the Standard Model fermions, consistent with the observed pattern of fermion masses and mixings. The charged fermion mass hierarchy and the quark mixing pattern is generated by the spontaneous breaking of the discrete symmetries due to flavons that act as Froggatt-Nielsen fields. The tiny neutrino masses arise from a radiative seesaw mechanism at one loop level, thanks to a preserved $Z_2^{\left( 1\right)}$ discrete symmetry, which also leads to stable scalar and fermionic dark matter candidates. The leptonic sector features the predictive cobimaximal mixing pattern, consistent with the experimental data from neutrino oscillations. For the scenario of normal neutrino mass hierarchy, the model predicts an effective Majorana neutrino mass parameter in the range $3$~meV$\lesssim m_{\beta\beta}\lesssim 18$~meV, which is within the declared range of sensitivity of modern experiments. The model predicts Flavour Changing Neutral Currents which constrain the model, for instance Kaon mixing and $\mu \to e$ nuclear conversion processes, the latter which are found to be within the reach of the forthcoming experiments.
high energy physics phenomenology
Mammography is a vital screening technique for the early detection and identification of breast cancer, helping to decrease the mortality rate. Practical applications of mammograms are not limited to breast cancer detection and identification, but include task-based lens design, image compression, image classification, content-based image retrieval and a host of others. Mammography computational analysis methods are a useful tool for specialists to reveal hidden features and extract significant information in mammograms. Digital mammograms are mammography images available alongside conventional screen-film mammography, making the automation of mammogram analysis easier. In this paper, we descriptively discuss computational advancements in digital mammograms to serve as a compass for research and practice in the domain of computational mammography and related fields. The discussion focuses on research aiming at a variety of applications and automations of mammograms. It covers different perspectives on image pre-processing, feature extraction, applications of mammograms, screen-film mammograms, digital mammograms and the development of benchmark corpora for experimenting with digital mammograms.
electrical engineering and systems science