Relative Fisher information (IR), a measure of correlative fluctuation between two probability densities, has been pursued for a number of quantum systems, such as the 1D quantum harmonic oscillator (QHO) and a few central potentials, namely the 3D isotropic QHO, the hydrogen atom and the pseudoharmonic potential (PHP), in both position ($r$) and momentum ($p$) spaces. In the 1D case, the $n=0$ state is chosen as reference, whereas for a central potential the respective circular or node-less state (corresponding to the lowest radial quantum number $n_{r}$) of a given $l$ quantum number is selected. Starting from their exact wave functions, expressions for IR in both $r$ and $p$ spaces are obtained in closed analytical form for all these systems. A careful analysis reveals that, for the 1D QHO, IR in both coordinate spaces increases linearly with the quantum number $n$. Likewise, for the 3D QHO and the PHP, it varies linearly with the radial quantum number $n_{r}$ in both spaces. In the H atom, however, it depends on both the principal ($n$) and azimuthal ($l$) quantum numbers: at fixed $l$, IR (in both conjugate spaces) initially grows with $n$ and then falls off, while for a given $n$ it always decreases with $l$.
quantum physics
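For reference, a commonly used definition of the relative Fisher information of a density $\rho$ with respect to a reference density $\rho_{\mathrm{ref}}$ is sketched below for the 1D case; the abstract above may use a different normalization or convention.

```latex
% Relative Fisher information (one standard 1D convention; the
% paper's normalization may differ):
I_R[\rho \,\|\, \rho_{\mathrm{ref}}]
  = \int \rho(x)
    \left[ \frac{d}{dx} \ln \frac{\rho(x)}{\rho_{\mathrm{ref}}(x)} \right]^2 dx .
```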
A graph $G$ is well-covered if all maximal independent sets of $G$ have the same cardinality. In 1992 Topp and Volkmann investigated the structure of well-covered graphs that have nontrivial factorizations with respect to some of the standard graph products. In particular, they showed that both factors of a well-covered direct product are also well-covered and proved that the direct product of two complete graphs (respectively, two cycles) is well-covered precisely when they have the same order (respectively, both have order 3 or 4). Furthermore, they proved that the direct product of two well-covered graphs with independence number one-half their order is well-covered. We initiate a characterization of nontrivial, connected well-covered graphs $G$ and $H$, whose independence numbers are strictly less than one-half their orders, such that their direct product $G \times H$ is well-covered. In particular, we show that in this case both $G$ and $H$ have girth 3 and we present several infinite families of such well-covered direct products. Moreover, we show that if $G$ is a factor of any well-covered direct product, then $G$ is a complete graph unless it is possible to create an isolated vertex by removing the closed neighborhood of some independent set of vertices in $G$.
mathematics
As drone technology advances, drones will soon be assisting humans in every domain. But many challenges remain to be tackled, communication being the chief one. This paper provides insights into the latest UAV (Unmanned Aerial Vehicle) communication technologies through an investigation of suitable task modules, antennas, resource handling platforms, and network architectures. Additionally, we explore techniques such as machine learning and path planning to enhance existing drone communication methods. Encryption and optimization techniques for ensuring long-lasting and secure communications, as well as for power management, are discussed. Moreover, applications of UAV networks for different contextual uses, ranging from navigation to surveillance, URLLC (ultra-reliable and low-latency communications), edge computing, and work related to artificial intelligence, are examined. In particular, the intricate interplay between UAVs, advanced cellular communication, and the Internet of Things constitutes one of the focal points of this paper. The survey encompasses lessons learned, insights, challenges, open issues, and future directions in UAV communications. Our literature review reveals the need for more research on drone-to-drone and drone-to-device communications.
electrical engineering and systems science
Very recently, the Fermilab measurement of the muon $(g-2)$ has shown a $4.2\sigma$ discrepancy with the Standard Model prediction. Motivated by this inspiring result and by the strengthening tension with dark matter observations, we argue that in the general next-to-minimal supersymmetric standard model a singlino-dominated neutralino and singlet-dominated Higgs bosons may form a secluded dark matter sector, with $\tilde{\chi}_1^0\tilde{\chi}_1^0 \to h_s A_s$ responsible for the measured abundance given an appropriate Yukawa coupling $\kappa$. This sector communicates with the Standard Model sector through a weak singlet-doublet Higgs mixing, so the scattering of the singlino-dominated dark matter off nucleons is suppressed. Furthermore, the dark matter must be heavier than about $160~{\rm GeV}$ for the annihilation to proceed; sparticle decay chains are lengthened compared with the minimal supersymmetric standard model due to the singlet nature of the dark matter; and the singlet-dominated Higgs bosons in the final state usually add sparticle decay channels. These characteristics make sparticle detection at the LHC rather tricky. This study shows that the theory can readily explain the discrepancy of the muon anomalous magnetic moment between its SM prediction and the experimentally measured value, without conflicting with the DM and Higgs experimental results or with the LHC searches for sparticles.
high energy physics phenomenology
Chemical abundances and abundance ratios measured in galaxies provide precious information about the mechanisms, modes and time scales of the assembly of cosmic structures. Yet the nucleosynthesis and chemical evolution of elements heavier than helium are dictated mostly by the physics of the stars and the shape of the stellar mass spectrum. In particular, estimates of CNO isotopic abundances in the hot, dusty media of high-redshift starburst galaxies offer a unique glimpse into the shape of the stellar initial mass function (IMF) in extreme environments that cannot be accessed with direct observations (star counts). Underlying uncertainties in stellar evolution and nucleosynthesis theory, however, may hurt our chances of getting a firm grasp of the IMF in these galaxies. In this work, we adopt new yields for massive stars, covering different initial rotational velocities. First, we implement the new yield set in a well-tested chemical evolution model for the Milky Way. The calibrated model is then adapted to the specific case of a prototype submillimeter galaxy (SMG). We show that, if the formation of fast-rotating stars is favoured in the turbulent medium of violently star-forming galaxies irrespective of metallicity, the IMF needs to be skewed towards high-mass stars in order to explain the CNO isotope ratios observed in SMGs. If, instead, stellar rotation becomes negligible beyond a given metallicity threshold, as is the case for our own Galaxy, there is no need to invoke a top-heavy IMF in starbursts.
astrophysics
Managing resources---file handles, database connections, etc.---is a hard problem. Debugging resource leaks and runtime errors caused by resource mismanagement is difficult in evolving production code. Programming languages with static type systems are great tools for ensuring that erroneous code is detected at compile time. However, modern static type systems do little for resource management, as resources are treated as ordinary values. We propose a type system, Qub, based on the logic of bunched implications (BI), which models resources as first-class citizens. We distinguish two kinds of program objects---restricted and unrestricted---and two kinds of functions---sharing and separating. Our approach guarantees resource correctness without compromising existing functional abstractions.
computer science
Emotional voice conversion models adapt the emotion in speech without changing the speaker identity or linguistic content. They are less data-hungry than text-to-speech models and make it possible to generate large amounts of emotional data for downstream tasks. In this work we propose EmoCat, a language-agnostic emotional voice conversion model. It achieves high-quality emotion conversion in German with less than 45 minutes of German emotional recordings by exploiting large amounts of emotional data in US English. EmoCat is an encoder-decoder model based on CopyCat, a voice conversion system which transfers prosody. We use adversarial training to remove emotion leakage from the encoder to the decoder. The adversarial training is improved by a novel contribution to gradient reversal that truly reverses gradients. This makes it possible to remove only the leaking information and to converge to better optima with higher conversion performance. Evaluations show that EmoCat can convert to different emotions but falls short of the recordings in emotion intensity, especially for very expressive emotions. EmoCat is able to achieve audio quality on par with the recordings for five out of six tested emotion intensities.
electrical engineering and systems science
3D moving object detection is one of the most critical tasks in dynamic scene analysis. In this paper, we propose a novel Drosophila-inspired 3D moving object detection method using Lidar sensors. Following the theory of the elementary motion detector, we have developed a motion detector based on the shallow visual neural pathway of Drosophila. This detector is sensitive to the movement of objects and suppresses background noise well. By designing neural circuits with different connection modes, the approach searches for motion areas in a coarse-to-fine fashion and extracts the point cloud of each motion area to form moving object proposals. An improved 3D object detection network is then used to estimate the point cloud of each proposal and efficiently generate the 3D bounding boxes and object categories. We evaluate the proposed approach on the widely used KITTI benchmark and obtain state-of-the-art performance on the motion detection task.
computer science
A Private Set Operation (PSO) protocol involves at least two parties with their private input sets. The goal of the protocol is for the parties to learn the output of a set operation, e.g., set intersection, on their input sets, without revealing any information about the items that are not in the output set. Commonly, the outcome of the set operation is revealed to the parties and to no one else. However, in many application areas of PSO the result of the set operation should be learned by an external participant who does not have an input set. We call this participant the decider. In this paper, we present new variants of multi-party PSO in which a decider obtains the result. All parties except the decider have a private set, and these parties learn neither the result nor anything else from the protocol. Moreover, we present a generic solution to the problem of PSO.
computer science
We investigate neutrinoless double beta decay ($0\nu\beta\beta$) in the presence of sterile neutrinos with Majorana mass terms. These gauge-singlet fields are allowed to interact with Standard-Model (SM) fields via renormalizable Yukawa couplings as well as higher-dimensional gauge-invariant operators up to dimension seven in the Standard Model Effective Field Theory extended with sterile neutrinos. At the GeV scale, we use Chiral effective field theory involving sterile neutrinos to connect the operators at the level of quarks and gluons to hadronic interactions involving pions and nucleons. This allows us to derive an expression for $0\nu\beta\beta$ rates for various isotopes in terms of phase-space factors, hadronic low-energy constants, nuclear matrix elements, the neutrino masses, and the Wilson coefficients of higher-dimensional operators. The needed hadronic low-energy constants and nuclear matrix elements depend on the neutrino masses, for which we obtain interpolation formulae grounded in QCD and chiral perturbation theory that improve existing formulae that are only valid in a small regime of neutrino masses. The resulting framework can be used directly to assess the impact of $0\nu\beta\beta$ experiments on scenarios with light sterile neutrinos and should prove useful in global analyses of sterile-neutrino searches. We perform several phenomenological studies of $0\nu\beta\beta$ in the presence of sterile neutrinos with and without higher-dimensional operators. We find that non-standard interactions involving sterile neutrinos have a dramatic impact on $0\nu\beta\beta$ phenomenology, and next-generation experiments can probe such interactions up to scales of $\mathcal O(100)$ TeV.
high energy physics phenomenology
A speech signal is constituted by various informative factors, such as linguistic content and speaker characteristics. There have been notable recent studies attempting to factorize the speech signal into these individual factors without requiring any annotation. These studies typically assume a continuous representation for linguistic content, which is not in accordance with general linguistic knowledge and may make the extraction of speaker information less successful. This paper proposes the mixture factorized auto-encoder (mFAE) for unsupervised deep factorization. The encoder part of the mFAE comprises a frame tokenizer and an utterance embedder. The frame tokenizer models the linguistic content of input speech with a discrete categorical distribution, performing frame clustering by assigning each frame a soft mixture label. The utterance embedder generates an utterance-level vector representation. A frame decoder serves to reconstruct speech features from the encoders' outputs. The mFAE is evaluated on a speaker verification (SV) task and an unsupervised subword modeling (USM) task. The SV experiments on VoxCeleb 1 show that the utterance embedder is capable of extracting speaker-discriminative embeddings with performance comparable to an x-vector baseline. The USM experiments on the ZeroSpeech 2017 dataset verify that the frame tokenizer is able to capture linguistic content and that the utterance embedder can acquire speaker-related information.
electrical engineering and systems science
I present several scenarios developed in Zagreb in which TeV-scale particles belonging to non-trivial weak-isospin multiplets give rise to neutrino-mass mechanisms different from the conventional type I, II and III seesaw models. Two dimension-9 tree-level mechanisms, presented first, offer appealing testability of their exotic TeV-scale particles at the LHC. These models are not genuine, since their particles also provide competing dimension-5 loop contributions. The loop models presented next are genuine, without competing tree-level contributions. Among them, the three-loop model involves high-order weak multiplets leading to Landau poles. The one-loop model with a scalar triplet as the largest multiplet, in addition to good UV properties, provides a particle set promising for gauge coupling unification. It therefore served us as a starting point for a study of SU(5) embeddings of additional particles leading to viable unification scenarios. Distinguishing among them calls for an additional principle governing the particle completion and eventual dark matter considerations.
high energy physics phenomenology
We propose a differential geometric construction for families of low-rank covariance matrices, via interpolation on low-rank matrix manifolds. In contrast with standard parametric covariance classes, these families offer significant flexibility for problem-specific tailoring via the choice of "anchor" matrices for the interpolation. Moreover, their low rank facilitates computational tractability in high dimensions and with limited data. We employ these covariance families for both interpolation and identification, where the latter problem comprises selecting the most representative member of the covariance family given a data set. In this setting, standard procedures such as maximum likelihood estimation are nontrivial because the covariance family is rank-deficient; we resolve this issue by casting the identification problem as distance minimization. We demonstrate the power of these differential geometric families for interpolation and identification in a practical application: wind field covariance approximation for unmanned aerial vehicle navigation.
statistics
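As a small illustration of why the preceding abstract interpolates on a low-rank matrix manifold rather than in the ambient space of matrices, the NumPy sketch below (not the paper's construction; all names and the toy dimensions are made up) compares naive linear interpolation of two rank-$r$ covariances, which generically doubles the rank, with factor-level interpolation, which stays at rank $r$.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_lowrank_cov(n, r):
    """Random rank-r covariance C = U diag(s) U^T with orthonormal U."""
    U, _ = np.linalg.qr(rng.normal(size=(n, r)))
    s = rng.uniform(1.0, 2.0, size=r)
    return U, s, U @ np.diag(s) @ U.T

n, r = 6, 2
U0, s0, C0 = rand_lowrank_cov(n, r)
U1, s1, C1 = rand_lowrank_cov(n, r)

# Naive linear interpolation in the ambient space generically doubles the rank.
C_lin = 0.5 * (C0 + C1)

# Factor-level interpolation stays on the rank-r manifold: interpolate the
# subspace (then re-orthonormalize via QR) and the spectrum separately.
U_mid, _ = np.linalg.qr(0.5 * (U0 + U1))
s_mid = 0.5 * (s0 + s1)
C_mid = U_mid @ np.diag(s_mid) @ U_mid.T
```

Geodesic interpolation on the manifold is more involved than this simple factor averaging, but the rank-preservation property it buys is the same.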
Exploring and controlling the physical factors that determine the topography of thin liquid dielectric films are of interest in manifold fields of research in physics, applied mathematics, and engineering and have been a key aspect of many technological advancements. Visualization of thin liquid dielectric film topography and local thickness measurements are essential tools for characterizing and interpreting the underlying processes. However, achieving high sensitivity with respect to subnanometric changes in thickness via standard optical methods is challenging. We propose a combined imaging and optical patterning projection platform that is capable of optically inducing dynamical flows in thin liquid dielectric films and plasmonically resolving the resulting changes in topography and thickness. In particular, we employ the thermocapillary effect in fluids as a novel heat-based method to tune plasmonic resonances and visualize dynamical processes in thin liquid dielectric films. The presented results indicate that light-induced thermocapillary flows can form and translate droplets and create indentation patterns on demand in thin liquid dielectric films of subwavelength thickness and that plasmonic microscopy can image these fluid dynamical processes with a subnanometer sensitivity along the vertical direction.
physics
The Mid-Infrared Instrument (MIRI) on the James Webb Space Telescope (JWST) has imaging, four coronagraphs, and both low- and medium-resolution spectroscopic modes. Being able to simulate MIRI observations will help with commissioning of the instrument and will familiarize users with representative data. We designed the MIRI instrument simulator (MIRISim) to mimic the on-orbit performance of the MIRI imager and spectrometers using the Calibration Data Products (CDPs) developed by the MIRI instrument team. The software incorporates accurate representations of the detectors, slicers, distortions, and noise sources along the light path, including the telescope's radiative background and cosmic rays. The software also includes a module which enables users to create astronomical scenes to simulate. MIRISim is a publicly available Python package that can be run at the command line or from within Python. The outputs of MIRISim are detector images in the same uncalibrated data format that will be delivered to MIRI users. These contain the necessary metadata for ingestion by the JWST calibration pipeline.
astrophysics
We analyse velocity fluctuations in the solar wind at magneto-fluid scales in two datasets, extracted from Wind data in the period 2005-2015, that are characterised by strong or weak expansion. Expansion affects measurements of anisotropy because it breaks axisymmetry around the mean magnetic field. Indeed, the small-scale three-dimensional local anisotropy of magnetic fluctuations ({\delta}B) as measured by structure functions (SF_B) is consistent with tube-like structures for strong expansion. When passing to weak expansion, structures become ribbon-like because of the flattening of SF_B along one of the two perpendicular directions. The power-law index that is consistent with a spectral slope of -5/3 for strong expansion becomes closer to -3/2 for weak expansion. This index is also characteristic of velocity fluctuations in the solar wind. We study velocity fluctuations ({\delta}V) to understand whether the anisotropy of their structure functions (SF_V) also changes with the strength of expansion and whether the difference with the magnetic spectral index is washed out once anisotropy is accounted for. We find that SF_V is generally flatter than SF_B. When expansion passes from strong to weak, a further flattening of the perpendicular SF_V occurs and the small-scale anisotropy switches from tube-like to ribbon-like structures. These two types of anisotropy, common to SF_V and SF_B, are associated with distinct large-scale variance anisotropies of {\delta}B in the strong- and weak-expansion datasets. We conclude that SF_V shows anisotropic three-dimensional scaling similar to SF_B, though with systematically flatter scalings, reflecting the difference between the global spectral slopes.
astrophysics
For a large set of flavor symmetries, the lowest-dimensional baryon- or lepton-number violating operators in the Standard Model effective field theory (SMEFT) with flavor symmetry are of mass dimension 9. As a consequence, baryon- and lepton-number violating processes are further suppressed with the introduction of flavor symmetries, e.g., the allowed scale associated with proton decay is typically lowered to $10^5$ GeV, which is significantly lower than the GUT scale. To illustrate these features, we discuss Minimal Flavor Violation for the Standard Model augmented by sterile neutrinos.
high energy physics phenomenology
The quantum internet will enable a number of revolutionary applications. It relies on entanglement of remote quantum memories over long distances. Despite enormous progress so far, the maximal physical separation achieved between two nodes is 1.3 km, and challenges for longer distances remain. Here we make a significant step forward by entangling two atomic ensembles in one lab via photon transmission through metropolitan-scale fibers. We use cavity enhancement to create bright atom-photon entanglement and harness quantum frequency conversion to shift the atomic wavelength to telecom. We realize entanglement over 22 km of field-deployed fiber via two-photon interference, and entanglement over 50 km of coiled fiber via single-photon interference. Our experiment can be extended to physically separated nodes at similar distances, serving as a functional segment for atomic quantum networks and thus paving the way towards establishing atomic entanglement over many nodes and over much longer distances.
quantum physics
The nature and structure of the observed east-west flows on Jupiter and Saturn have been one of the longest-lasting mysteries in planetary science. This mystery has recently been unraveled thanks to the accurate gravity measurements provided by the Juno mission to Jupiter and the Grand Finale of the Cassini mission to Saturn. These two experiments, which coincidentally happened around the same time, allowed determination of the vertical and meridional profiles of the zonal flows on both planets. This paper reviews the topic of zonal jets on the gas giants in light of the new data from these two experiments. The gravity measurements not only allow the depth of the jets to be constrained, yielding the inference that the jets extend roughly 3000 and 9000 km below the observed clouds on Jupiter and Saturn, respectively, but also provide insights into the mechanisms controlling these zonal flows. Specifically, for both planets this depth corresponds to the depth at which the electrical conductivity is within an order of magnitude of 1 S/m, implying that the magnetic field likely plays a key role in damping the zonal flows.
astrophysics
This article establishes the unconditional security of a semi-quantum key distribution (SQKD) protocol based on 3-dimensional quantum states. By deriving a lower bound for the key rate in the asymptotic scenario as a function of the quantum channel's noise, we find that this protocol has an improved secret key rate and much greater noise tolerance compared to the previous 2-dimensional SQKD protocol. Our results highlight that, as in fully quantum key distribution protocols, increasing the dimension of the system can increase the noise tolerance in semi-quantum key distribution as well.
quantum physics
Objectives: Value of information (VOI) analyses can help policy-makers make informed decisions about whether to conduct and how to design future studies. Historically, a computationally expensive method to compute the Expected Value of Sample Information (EVSI) restricted the use of VOI to simple decision models and study designs. Recently, four EVSI approximation methods have made such analyses more feasible and accessible. We provide practical recommendations for analysts computing EVSI by evaluating these novel methods. Methods: Members of the Collaborative Network for Value of Information (ConVOI) compared the inputs, analyst's expertise and skills, and software required for four recently developed approximation methods. Information was also collected on the strengths and limitations of each approximation method. Results: All four EVSI methods require a decision-analytic model's probabilistic sensitivity analysis (PSA) output. One of the methods also requires the model to be re-run to obtain new PSA outputs for each EVSI estimation. To compute EVSI, analysts must be familiar with at least one of the following skills: advanced regression modeling, likelihood specification, and Bayesian modeling. All methods have different strengths and limitations, e.g., some methods handle evaluation of study designs with more outcomes more efficiently while others quantify uncertainty in EVSI estimates. All methods are programmed in the statistical language R and two of the methods provide online applications. Conclusion: Our paper helps to inform the choice between four efficient EVSI estimation methods, enabling analysts to assess the methods' strengths and limitations and select the most appropriate EVSI method given their situation and skills.
statistics
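For context, the Expected Value of Sample Information discussed in the preceding abstract is commonly defined as follows (a standard formulation with assumed notation: decision options $d$, uncertain parameters $\theta$, prospective study data $X$, and net benefit $\mathrm{NB}$; none of these symbols come from the abstract itself).

```latex
% EVSI: expected gain from deciding after observing study data X,
% relative to deciding under current information (standard definition):
\mathrm{EVSI}
  = \mathbb{E}_{X}\!\left[ \max_{d}\, \mathbb{E}_{\theta \mid X}\,
      \mathrm{NB}(d,\theta) \right]
  - \max_{d}\, \mathbb{E}_{\theta}\, \mathrm{NB}(d,\theta) .
```

The inner posterior expectation $\mathbb{E}_{\theta \mid X}$ is what historically made EVSI computationally expensive, since it nests a Bayesian update inside a Monte Carlo loop; the four approximation methods compared in the abstract are ways of avoiding that nested computation using PSA output.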
Relative Fisher information (IR), which is a measure of correlative fluctuation between two probability densities, has been pursued for a number of quantum systems, such as, 1D quantum harmonic oscillator (QHO) and a few central potentials namely, 3D isotropic QHO, hydrogen atom and pseudoharmonic potential (PHP) in both position ($r$) and momentum ($p$) spaces. In the 1D case, the $n=0$ state is chosen as reference, whereas for a central potential, the respective circular or node-less (corresponding to lowest radial quantum number $n_{r}$) state of a given $l$ quantum number, is selected. Starting from their exact wave functions, expressions of IR in both $r$ and $p$ spaces are obtained in closed analytical forms in all these systems. A careful analysis reveals that, for the 1D QHO, IR in both coordinate spaces increase linearly with quantum number $n$. Likewise, for 3D QHO and PHP, it varies with single power of radial quantum number $n_{r}$ in both spaces. But, in H atom they depend on both principal ($n$) and azimuthal ($l$) quantum numbers. However, at a fixed $l$, IR (in conjugate spaces) initially advance with rise of $n$ and then falls off; also for a given $n$, it always decreases with $l$.
quantum physics
We have carried out a theoretical investigation of the hot electron power loss $P$, involving electron-acoustic phonon interaction, as a function of twist angle $\theta$, electron temperature $T_e$ and electron density $n_s$ in twisted bilayer graphene (tBLG). It is found that as $\theta$ decreases towards the magic angle $\theta_m$, $P$ is strongly enhanced, and $\theta$ acts as an important tunable parameter, apart from $T_e$ and $n_s$. In the range $T_e$ = 1-50 K, this enhancement is $\sim$ 250-450 times the $P$ in monolayer graphene (MLG), which is a manifestation of the great suppression of the Fermi velocity ${v_F}^*$ of electrons in the moir\'e flat band. As $\theta$ increases away from $\theta_m$, the impact of $\theta$ on $P$ decreases, tending to that of MLG at $\theta \sim 3^{\circ}$. In the Bloch-Gr\"uneisen (BG) regime, $P$ $\sim$ ${T_e}^4$, ${n_s}^{-1/2}$ and ${v_F}^{*-2}$. In the higher temperature region ($\sim$10-50 K), $P$ $\sim$ ${T_e}^{\delta}$, with $\delta \sim$ 2.0, and the behavior is still superlinear in $T_e$, unlike the phonon-limited linear-in-$T$ (lattice temperature) resistivity $\rho_p$. $P$ depends weakly on $n_s$, decreasing (increasing) with increasing $n_s$ at lower (higher) $T_e$, as found in MLG. The energy relaxation time $\tau_e$ is also discussed as a function of $\theta$ and $T_e$. Expressing the power loss as $P = F_e(T_e) - F_e(T)$, in the BG regime we obtain a simple and useful relation $F_e(T)\,\mu_p(T) = e{v_s}^2/2$, i.e. $F_e(T) = (n_s e^2 {v_s}^2/2)\,\rho_p$, where $\mu_p$ is the acoustic phonon limited mobility and $v_s$ is the acoustic phonon velocity. The $\rho_p$ estimated from this relation using our calculated $F_e(T)$ agrees closely with the $\rho_p$ of Wu et al (Phys. Rev. B 99, 165112 (2019)).
condensed matter
We explore the application of a two-component model of proton structure functions in the analysis of deep-inelastic scattering (DIS) data at low $Q^2$ and small $x$. This model incorporates both vector meson dominance and the correct photo-production limit. The CJ15 parameterization is applied to the QCD component in order to take into account effects of order $1/Q^2$, such as target mass corrections and higher twist contributions. The parameters of the leading twist parton distribution functions and higher twist coefficient functions are determined by fitting deep inelastic scattering data. The second moments of the parton distribution functions are extracted and compared with other global fits and lattice determinations.
high energy physics phenomenology
Tactile perception is crucial for a variety of robot tasks, including grasping and in-hand manipulation. New advances in flexible, event-driven electronic skins may soon endow robots with touch perception capabilities similar to humans. These electronic skins respond asynchronously to changes (e.g., in pressure or temperature) and can be laid out irregularly on the robot's body or end-effector. However, these unique features may render current deep learning approaches such as convolutional feature extractors unsuitable for tactile learning. In this paper, we propose a novel spiking graph neural network for event-based tactile object recognition. To make use of the local connectivity of taxels, we present several methods for organizing the tactile data in a graph structure. Based on the constructed graphs, we develop a spiking graph convolutional network. The event-driven nature of spiking neural networks makes them arguably more suitable for processing event-based data. Experimental results on two tactile datasets show that the proposed method outperforms other state-of-the-art spiking methods, achieving high accuracies of approximately 90\% when classifying a variety of different household objects.
electrical engineering and systems science
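The preceding abstract mentions organizing irregularly laid-out taxels into a graph structure. One simple, generic way to do that (a sketch only, not necessarily any of the paper's methods; the coordinates and function name are made up for illustration) is a symmetrized k-nearest-neighbour graph over the taxel coordinates:

```python
import numpy as np

def knn_graph(coords, k=3):
    """Symmetrized k-nearest-neighbour edge list over taxel coordinates.

    Each node is connected to its k nearest neighbours; edges are stored
    once as (i, j) with i < j, so mutual neighbours are deduplicated.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    # Pairwise Euclidean distances; the diagonal is masked out.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    edges = set()
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)

# Toy irregular taxel layout (coordinates invented for illustration).
coords = [[0.0, 0.0], [1.0, 0.2], [0.3, 1.1], [2.0, 2.0], [1.8, 0.9]]
edges = knn_graph(coords, k=2)
```

The resulting edge list can then feed a graph convolutional layer, spiking or otherwise, in place of the regular grid a convolutional feature extractor would assume.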
Electron-phonon superconductors at high pressures have displayed the highest values of critical superconducting temperature $T_c$ on record, now rapidly approaching room temperature. Despite the importance of high-$P$ superconductivity in the quest for room-temperature superconductors, a mechanistic understanding of the effect of pressure and its complex interplay with phonon anharmonicity and superconductivity is missing, as numerical simulations bring only system-specific details that cloud the key players controlling the physics. Here we develop a minimal model of electron-phonon superconductivity under an applied pressure which takes into account the anharmonic decoherence of the optical phonons. We find that $T_c$ behaves non-monotonically as a function of the ratio $\Gamma/\omega_0$, where $\Gamma$ is the optical phonon damping and $\omega_0$ the optical phonon energy at zero pressure and momentum. Optimal pairing occurs at a critical ratio $\Gamma/\omega_0$, when the phonons are on the verge of decoherence (the "diffuson-like" limit). Our framework gives insights into recent experimental observations of $T_c$ as a function of pressure in the complex BCS material TlInTe$_2$.
condensed matter
We derive the commutation relations for open-string coordinates on D-branes in non-geometric background spaces. Starting from D0-branes on a three-dimensional torus with H-flux, we show that open strings with end points on D3-branes in a three-dimensional R-flux background exhibit a non-associative phase-space algebra, which is similar to the non-associative R-flux algebra of closed strings. Therefore, the effective open-string gauge theory on the D3-branes is expected to be a non-associative gauge theory. We also point out differences between the non-associative phase space structure of open and closed strings in non-geometric backgrounds, which are related to the different structure of the world-sheet commutators of open and closed strings.
high energy physics theory
We design a framework for studying prelinguistic child voice from 3 to 24 months based on state-of-the-art algorithms in diarization. Our system consists of a time-invariant feature extractor, a context-dependent embedding generator, and a classifier. We study the effect of swapping out different components of the system, as well as changing the loss function, to find the best performance. We also present a multiple-instance learning technique that allows us to pre-train our parameters on larger datasets with coarser segment boundary labels. We found that our best system achieved 43.8% DER on the test dataset, compared to 55.4% DER achieved by the LENA software. We also found that using a convolutional feature extractor instead of logmel features significantly increases the performance of neural diarization.
electrical engineering and systems science
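The diarization error rate (DER) figures reported above can be illustrated with a minimal frame-level sketch (our own toy version, not the authors' evaluation code; real DER scoring also applies an optimal speaker mapping and forgiveness collars, which we omit here):

```python
# Minimal frame-level diarization error rate (DER) sketch.
# reference/hypothesis are per-frame speaker labels; None = silence.

def frame_der(reference, hypothesis):
    """DER = (missed speech + false alarm + confusion) / scored speech."""
    assert len(reference) == len(hypothesis)
    miss = fa = conf = scored = 0
    for ref, hyp in zip(reference, hypothesis):
        if ref is not None:
            scored += 1
            if hyp is None:
                miss += 1          # speech missed entirely
            elif hyp != ref:
                conf += 1          # wrong speaker assigned
        elif hyp is not None:
            fa += 1                # speech hypothesized in silence
    return (miss + fa + conf) / scored

ref = ["child", "child", None, "adult", "adult", "adult", None, "child"]
hyp = ["child", "adult", None, "adult", "adult", None, "adult", "child"]
print(frame_der(ref, hyp))  # 0.5: one miss, one false alarm, one confusion
```

In this hypothetical example, 3 of the 6 scored speech frames are in error, giving a DER of 50%, comparable in scale to the LENA baseline above.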
We give a new proof of the Caffarelli contraction theorem, which states that the Brenier optimal transport map sending the standard Gaussian measure onto a uniformly log-concave probability measure is Lipschitz. The proof combines a recent variational characterization of Lipschitz transport map by the second author and Juillet with a convexity property of optimizers in the dual formulation of the entropy-regularized optimal transport (or Schr{\"o}dinger) problem.
mathematics
We explore the host galaxies of compact-object binaries (black hole--black hole binaries, BHBs; neutron star--black hole binaries, NSBHs; double neutron stars, DNSs) across cosmic time, by means of population-synthesis simulations combined with the Illustris cosmological simulation. At high redshift ($z\gtrsim{}4$) the host galaxies of BHBs, NSBHs and DNSs are very similar and are predominantly low-mass galaxies (stellar mass $M<10^{11}$ M$_\odot$). At $z\gtrsim{}4$, most compact objects form and merge in the same galaxy, with a short delay time. At low redshift ($z\leq{}2$), the host galaxy populations of DNSs differ significantly from the host galaxies of both BHBs and NSBHs. DNSs merging at low redshift tend to form and merge in the same galaxy, with relatively short delay time. The stellar mass of DNS hosts peaks around $\sim{}10^{10}-10^{11}$ M$_\odot$. In contrast, BHBs and NSBHs merging at low redshift tend to form in rather small galaxies at high redshift and then to merge in larger galaxies with long delay times. This difference between DNSs and black hole binaries is a consequence of their profoundly different metallicity dependence.
astrophysics
We construct supersymmetric Lifshitz field theories with four real supercharges in a general number of space dimensions. The theories consist of complex bosons and fermions and exhibit a holomorphic structure and non-renormalization properties of the superpotential. We study the theories in various numbers of space dimensions and for various choices of marginal interactions. We show that there are lines of quantum critical points with an exact Lifshitz scale invariance and a dynamical critical exponent that depends on the coupling constants.
high energy physics theory
Phyllotaxis, the regular arrangement of leaves or other lateral organs in plants including pineapples, sunflowers and some cacti, has attracted scientific interest for centuries. More recently there has been interest in phyllotaxis within physical systems, especially for cylindrical geometry. In this letter, we expand from a cylindrical geometry and investigate transitions between phyllotactic states of soft vortex matter confined to a conical frustum. We show that the ground states of this system are consistent with previous results for cylindrical confinement and discuss the resulting defect structures at the transitions. We then eliminate these defects from the system by introducing a density gradient to create a configuration in a single state. The nature of the density gradient limits this approach to a small parameter range on the conical system. We therefore seek a new surface, the horn, for which a defect-free state can be maintained for a larger range of parameters.
condensed matter
We develop a formalism for photoionization (PI) and potential energy curves (PECs) of Rydberg atoms in ponderomotive optical lattices and apply it to examples covering several regimes of the optical-lattice depth. The effect of lattice-induced PI on Rydberg-atom lifetime ranges from noticeable to highly dominant when compared with natural decay. The PI behavior is governed by the generally rapid decrease of the PI cross sections as a function of angular-momentum ($\ell$), and by lattice-induced $\ell$-mixing across the optical-lattice PECs. In GHz-deep lattices, $\ell$-mixing leads to a rich PEC structure, and the significant low-$\ell$ PI cross sections are distributed over many lattice-mixed Rydberg states. In lattices less than several tens-of-MHz deep, atoms on low-$\ell$ PECs are essentially $\ell$-mixing-free and maintain large PI cross sections, while atoms on high-$\ell$ PECs trend towards being PI-free. Characterization of PI in GHz-deep Rydberg-atom lattices may be beneficial for optical control and quantum-state manipulation of Rydberg atoms, while data on PI in shallower lattices are potentially useful in high-precision spectroscopy and quantum-computing applications of lattice-confined Rydberg atoms.
physics
By leveraging blockchain, this letter proposes a blockchained federated learning (BlockFL) architecture where local learning model updates are exchanged and verified. This enables on-device machine learning without any centralized training data or coordination by utilizing a consensus mechanism in blockchain. Moreover, we analyze an end-to-end latency model of BlockFL and characterize the optimal block generation rate by considering communication, computation, and consensus delays.
computer science
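The latency trade-off described above can be sketched with a deliberately simplified model (our own illustration, not the paper's analytical derivation): raising the block generation rate shortens the wait for consensus but increases forking, which wastes blocks, so an interior optimum exists.

```python
# Illustrative end-to-end latency model for blockchained federated
# learning: one round = local computation + model upload + consensus
# wait + block download. Block arrival is exponential with rate lam;
# the forking probability grows with lam (the penalty constant below
# is an assumption of this sketch).

import math

def round_latency(lam, t_comp=1.0, t_comm=0.5, fork_penalty=2.0):
    t_consensus = 1.0 / lam                        # mean wait per block
    p_fork = 1.0 - math.exp(-fork_penalty * lam)   # more forks at high rate
    retries = 1.0 / (1.0 - p_fork)                 # expected blocks per accepted one
    return t_comp + 2 * t_comm + t_consensus * retries

# Grid search for the block generation rate minimizing round latency.
rates = [0.05 * k for k in range(1, 40)]
best = min(rates, key=round_latency)
print(round(best, 2), round(round_latency(best), 3))
```

With these toy constants the latency is $2 + e^{2\lambda}/\lambda$, minimized at $\lambda = 0.5$; the qualitative U-shape, not the numbers, is the point.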
One of the most important achievements of inflationary cosmology is to predict a departure from scale invariance of the power spectrum for scalar curvature cosmological fluctuations. This tilt is understood as a consequence of a quasi de Sitter classical equation of state describing the inflationary dark energy dominated era. Here, following previous work, we find a departure from scale invariance for the quantum Fisher information associated to the de Sitter vacuum for scalar quantum spectator modes. This gives rise to a purely quantum cosmological tilt with a well defined dependence on energy scale. This quantum tilt is imprinted in a scale-dependent energy uncertainty for the spectator modes. The effective quasi de Sitter description of this model independent energy uncertainty uniquely sets the effective quasi de Sitter parameters at all energy scales. In particular, in the slow-roll regime characterized by an almost constant $\epsilon$, the quantum Fisher -- model independent -- prediction for the spectral index is $(1-n_s) = 0.0328$ ($n_s=0.9672$). Moreover, the energy scale dependence of the quantum cosmological tilt implies the existence of a cosmological phase transition at energies higher than the CMB scale where the tilt goes from red into blue. This strongly suggests the existence of a pre-inflationary phase where the effective scalaron contributes to the spectral index as normal relativistic matter and where the corresponding growth of the power spectrum can result in dark matter in the form of small-mass primordial black holes. The source and features of the quantum cosmological tilt leading to these predictions are determined by the entanglement features of the de Sitter vacuum states.
high energy physics theory
In recent years, the first author has developed three successful numerical methods to solve the 1D radiative transport equation yielding highly precise benchmarks. The second author has shown a keen interest in novel solution methodologies and an ability for their implementation. Here, we combine talents to generate yet another high-precision solution, the Matrix Riccati Equation Method (MREM). MREM features the solution to two of the four matrix Riccati ODEs that arise from the interaction principle of particle transport. Through interaction coefficients, the interaction principle describes how particles reflect from, and transmit through, a single slab. On combination with Taylor series and doubling, a high-quality numerical benchmark, to nearly seven places, is established.
physics
We launched a community platform for collecting ATC speech worldwide in the ATCO2 project. Filtering out unseen non-English speech is one of the main components in the data processing pipeline. The proposed English Language Detection (ELD) system is based on embeddings from a Bayesian subspace multinomial model. It is trained on the word confusion network from an ASR system. It is robust, easy to train, and lightweight. We achieved a 0.0439 equal-error-rate (EER), a 50% relative reduction compared to the state-of-the-art acoustic ELD system based on x-vectors, in the in-domain scenario. Further, we achieved an EER of 0.1352, a 33% relative reduction compared to the acoustic ELD, in the unseen language (out-of-domain) condition. We plan to publish the evaluation dataset from the ATCO2 project.
electrical engineering and systems science
Learning-based approaches to grasp planning are preferred over analytical methods due to their ability to better generalize to new, partially observed objects. However, data collection remains one of the biggest bottlenecks for grasp learning methods, particularly for multi-fingered hands. The relatively high dimensional configuration space of the hands coupled with the diversity of objects common in daily life requires a significant number of samples to produce robust and confident grasp success classifiers. In this paper, we present the first active deep learning approach to grasping that searches over the grasp configuration space and classifier confidence in a unified manner. We base our approach on recent success in planning multi-fingered grasps as probabilistic inference with a learned neural network likelihood function. We embed this within a multi-armed bandit formulation of sample selection. We show that our active grasp learning approach uses fewer training samples to produce grasp success rates comparable with the passive supervised learning method trained with grasping data generated by an analytical planner. We additionally show that grasps generated by the active learner have greater qualitative and quantitative diversity in shape.
computer science
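The bandit-based sample selection described above can be sketched generically (this is a schematic UCB1 illustration, not the paper's grasp-specific formulation): each arm stands for a region of the grasp configuration space, and the reward proxy is the classifier's uncertainty there, so the learner concentrates queries where it is least confident.

```python
# Schematic UCB1 multi-armed bandit for active sample selection.
# Arms, rewards, and constants here are illustrative assumptions.

import math
import random

def ucb1(arm_reward_fns, rounds=2000, seed=0):
    rng = random.Random(seed)
    k = len(arm_reward_fns)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        if t <= k:                      # play each arm once first
            arm = t - 1
        else:                           # mean reward + exploration bonus
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += arm_reward_fns[arm](rng)
    return counts

# Three hypothetical regions with different uncertainty levels.
arms = [lambda r: r.random() * 0.2,        # low classifier uncertainty
        lambda r: r.random() * 0.5,
        lambda r: 0.5 + r.random() * 0.5]  # high classifier uncertainty
counts = ucb1(arms)
print(counts)
```

The high-uncertainty region receives the bulk of the queries, which is the mechanism by which an active learner needs fewer labeled grasps than passive sampling.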
Needle-free injector technology (NFIT) has drawn attention due to its advantages. Unlike a conventional needle, an NFIT device can be used several times, and it makes injections painless for patients. NFIT by jet injection is achieved by ejecting a liquid drug through a narrow orifice at high pressure, thereby creating a fine high-speed fluid jet that can readily penetrate skin and tissue. Until very recently, all jet injectors utilized force- and pressure-generating principles that progress injection in an uncontrolled manner, with limited ability to regulate delivery volume and injection depth. In order to address these shortcomings, we have developed a controllable jet injection device, based on a custom high-stroke linear Lorentz-force motor. Using this device, we are able to continuously monitor and modulate the speed of the drug jet, and precisely regulate the volume of drug delivered during the injection process.
physics
In support of the growing interest in quantum computing experimentation, programmers need new tools to write quantum algorithms as program code. Compared to debugging classical programs, debugging quantum programs is difficult because programmers have limited ability to probe the internal states of quantum programs; those states are difficult to interpret even when observations exist; and programmers do not yet have guidelines for what to check for when building quantum programs. In this work, we present quantum program assertions based on statistical tests on classical observations. These allow programmers to decide if a quantum program state matches its expected value in one of classical, superposition, or entangled types of states. We extend an existing quantum programming language with the ability to specify quantum assertions, which our tool then checks in a quantum program simulator. We use these assertions to debug three benchmark quantum programs in factoring, search, and chemistry. We share what types of bugs are possible, and lay out a strategy for using quantum programming patterns to place assertions and prevent bugs.
quantum physics
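The idea of assertions based on statistical tests on classical observations can be sketched as follows (our own minimal version, not the tool described above): a qubit asserted to be in an equal superposition should yield roughly 50/50 measurement outcomes, which a chi-square goodness-of-fit test can check.

```python
# Sketch of a statistical assertion on measurement counts. The function
# name and threshold are our own; 3.841 is the 5% chi-square critical
# value for one degree of freedom.

import random

def assert_superposition(counts, critical=3.841):
    """True if the 0/1 counts are consistent with an equal superposition."""
    total = sum(counts.values())
    expected = total / 2
    chi2 = sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in ("0", "1"))
    return chi2 < critical

# Simulated shots from a |+>-like state (fair coin as a stand-in).
rng = random.Random(42)
shots = ["0" if rng.random() < 0.5 else "1" for _ in range(4096)]
counts = {"0": shots.count("0"), "1": shots.count("1")}
print(counts, assert_superposition(counts))
```

A strongly biased histogram such as `{"0": 3000, "1": 1096}` fails the assertion, flagging a state that does not match its expected superposition.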
We introduce a new technique for the simulation of dissipative quantum systems. This method is composed of an approximate decomposition of the Lindblad equation into a Kraus map, from which one can define an ensemble of wavefunctions. Using principal component analysis, this ensemble can be truncated to a manageable size without sacrificing numerical accuracy. We term this method \emph{Ensemble Rank Truncation} (ERT), and find that in the regime of weak coupling, this method is able to outperform existing wavefunction Monte-Carlo methods by an order of magnitude in both accuracy and speed. We also explore the possibility of combining ERT with approximate techniques for simulating large systems (such as Matrix Product States (MPS)), and show that in many cases this approach will be more efficient than directly expressing the density matrix in its MPS form. We expect the ERT technique to be of practical interest when simulating dissipative systems for quantum information, metrology and thermodynamics.
quantum physics
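The ensemble-truncation step can be sketched with a toy principal-component construction (our own illustration, not the paper's ERT implementation): write the density matrix through a weighted ensemble of wavefunctions, then keep only the leading singular directions.

```python
# Toy principal-component truncation of a wavefunction ensemble.
# With rows sqrt(w_i) * psi_i stacked in A, the density matrix is
# A^dagger A; an SVD lets us keep the top-r principal wavefunctions.

import numpy as np

def truncate_ensemble(states, weights, rank):
    """states: (m, d) array of m wavefunctions in a d-dim Hilbert space."""
    A = np.sqrt(np.asarray(weights))[:, None] * np.asarray(states)
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return Vh[:rank], s[:rank] ** 2     # principal states and weights

rng = np.random.default_rng(1)
m, d = 20, 8
states = rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))
states /= np.linalg.norm(states, axis=1, keepdims=True)
weights = np.full(m, 1.0 / m)

rho = states.conj().T @ (weights[:, None] * states)
ns, nw = truncate_ensemble(states, weights, rank=6)
rho_trunc = ns.conj().T @ (nw[:, None] * ns)
print("trace error:", abs(np.trace(rho) - np.trace(rho_trunc)))
```

Keeping all `min(m, d)` components reconstructs the density matrix exactly; truncating further discards only the smallest-weight directions, which is the mechanism that keeps the ensemble size manageable without sacrificing accuracy.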
A Heron angle has rational sine and cosine; a Heron triangle has rational sides and area; a Heron parallelogram has rational sides, diagonals, and area. We give one-to-one (bijective) parametrizations for all three concepts.
mathematics
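The classical starting point for Heron angles is the tangent half-angle substitution, which already shows that rational parameters produce rational sine-cosine pairs (this is the standard textbook parametrization; the abstract's contribution is refining such maps into genuine bijections):

```python
# Tangent half-angle parametrization: for rational t,
#   sin = 2t / (1 + t^2),  cos = (1 - t^2) / (1 + t^2)
# are both rational, so every rational t yields a Heron angle.

from fractions import Fraction

def heron_angle(t):
    """Return (sin, cos) as exact rationals for parameter t."""
    t = Fraction(t)
    d = 1 + t * t
    return 2 * t / d, (1 - t * t) / d

s, c = heron_angle(Fraction(1, 2))
print(s, c)  # 4/5 3/5 -- the angle of the 3-4-5 right triangle
```

The identity $\sin^2 + \cos^2 = 1$ holds exactly in rational arithmetic for every parameter value.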
We propose two generic methods for improving semi-supervised learning (SSL). The first integrates weight perturbation (WP) into existing "consistency regularization" (CR) based methods. We implement WP by leveraging variational Bayesian inference (VBI). The second method proposes a novel consistency loss called "maximum uncertainty regularization" (MUR). While most consistency losses act on perturbations in the vicinity of each data point, MUR actively searches for "virtual" points situated beyond this region that cause the most uncertain class predictions. This allows MUR to impose smoothness on a wider area in the input-output manifold. Our experiments show clear improvements in classification errors of various CR based methods when they are combined with VBI or MUR or both.
computer science
River water-quality monitoring is increasingly conducted using automated in situ sensors, enabling timelier identification of unexpected values. However, anomalies caused by technical issues confound these data, while the volume and velocity of data prevent manual detection. We present a framework for automated anomaly detection in high-frequency water-quality data from in situ sensors, using turbidity, conductivity and river level data. After identifying end-user needs and defining anomalies, we ranked their importance and selected suitable detection methods. High priority anomalies included sudden isolated spikes and level shifts, most of which were classified correctly by regression-based methods such as autoregressive integrated moving average models. However, using other water-quality variables as covariates reduced performance due to complex relationships among variables. Classification of drift and periods of anomalously low or high variability improved when we replaced anomalous measurements with forecasts, but this inflated false positive rates. Feature-based methods also performed well on high priority anomalies, but were less proficient at detecting lower priority anomalies, resulting in high false negative rates. Unlike regression-based methods, all feature-based methods produced low false positive rates and did not require training or optimization. Rule-based methods successfully detected impossible values and missing observations. Thus, we recommend using a combination of methods to improve anomaly detection performance, whilst minimizing false detection rates. Furthermore, our framework emphasizes the importance of communication between end-users and analysts for optimal outcomes with respect to both detection performance and end-user needs. Our framework is applicable to other types of high-frequency time-series data and anomaly detection applications.
statistics
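The regression-based detection of sudden isolated spikes can be sketched minimally (our own illustration, not the models used above): forecast each point from its predecessor and flag observations whose forecast error is far outside the robust spread of the residuals.

```python
# Minimal AR(1)-residual spike detector. The AR coefficient and the
# 5-sigma threshold are illustrative assumptions; MAD * 1.4826
# estimates the residual standard deviation under Gaussian noise.

import statistics

def detect_spikes(series, phi=0.9, k=5.0):
    """Flag time indices with anomalously large one-step forecast errors."""
    residuals = [series[t] - phi * series[t - 1]
                 for t in range(1, len(series))]
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    scale = 1.4826 * mad
    return [t + 1 for t, r in enumerate(residuals)
            if abs(r - med) > k * scale]

# Smooth turbidity-like series with one injected sensor spike at t=30.
series = [10 + 0.1 * t for t in range(60)]
series[30] += 50
print(detect_spikes(series))  # [30, 31]: the spike and its AR echo
```

Note that a single spike is flagged twice, once directly and once through the next point's forecast; real pipelines would merge such adjacent flags.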
In this chapter we review some of the basic attack constructions that exploit a stochastic description of the state variables. We pose the state estimation problem in a Bayesian setting and cast the bad data detection procedure as a Bayesian hypothesis testing problem. This revised detection framework provides the benchmark for the attack detection problem that limits the achievable attack disruption. Indeed, the trade-off between the impact of the attack, in terms of disruption to the state estimator, and the probability of attack detection is analytically characterized within this Bayesian attack setting. We then generalize the attack construction by considering information-theoretic measures that place fundamental limits to a broad class of detection, estimation, and learning techniques. Because the attack constructions proposed in this chapter rely on the attacker having access to the statistical structure of the random process describing the state variables, we conclude by studying the impact of imperfect statistics on the attack performance. Specifically, we study the attack performance as a function of the size of the training data set that is available to the attacker to estimate the second-order statistics of the state variables.
electrical engineering and systems science
The evolution of molecular clouds in galactic centres is thought to differ from that in galactic discs due to a significant influence of the external gravitational potential. We present a set of numerical simulations of molecular clouds orbiting on the 100-pc stream of the Central Molecular Zone (the central $\sim500$ pc of the Galaxy) and characterise their morphological and kinematic evolution in response to the background potential and eccentric orbital motion. We find that the clouds are shaped by strong shear and torques, by tidal and geometric deformation, and by their passage through the orbital pericentre. Within our simulations, these mechanisms control cloud sizes, aspect ratios, position angles, filamentary structure, column densities, velocity dispersions, line-of-sight velocity gradients, spin angular momenta, and kinematic complexity. By comparing these predictions to observations of clouds on the Galactic Centre 'dust ridge', we find that our simulations naturally reproduce a broad range of key observed morphological and kinematic features, which can be explained in terms of well-understood physical mechanisms. We argue that the accretion of gas clouds onto the central regions of galaxies, where the rotation curve turns over and the tidal field is fully compressive, is accompanied by transformative dynamical changes to the clouds, leading to collapse and star formation. This can generate an evolutionary progression of cloud collapse with a common starting point, which either marks the time of accretion onto the tidally-compressive region or of the most recent pericentre passage. Together, these processes may naturally produce the synchronised starbursts observed in numerous (extra)galactic nuclei.
astrophysics
We show the following Chebyshev-type bounds on the prime counting function $\pi(x)$ using principles from analytic number theory: $$2 \log 2 \geq \limsup_{x \rightarrow \infty} \frac{\pi(x)}{x / \log x} \geq \liminf_{x \rightarrow \infty} \frac{\pi(x)}{x / \log x} \geq \log 2.$$ We also conjecture bounds on $\pi((x+1)^{2}) - \pi(x^{2})$, relevant to Legendre's conjecture about the number of primes in this interval, namely that for all sufficiently large $x$: $$ \left \lfloor\frac{1}{2}\left(\frac{\left(x+1\right)^{2}}{\log\left(x+1\right)}-\frac{x^{2}}{\log x}\right)-\frac{\left(\log x\right)^{2}}{\log\left(\log x\right)}\right \rfloor \leq \pi((x+1)^{2}) - \pi(x^{2}) \leq $$ $$ \left \lfloor\frac{1}{2}\left(\frac{\left(x+1\right)^{2}}{\log\left(x+1\right)}-\frac{x^{2}}{\log x}\right) + \log^{2}x\log\log x \right \rfloor$$
mathematics
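The Chebyshev-type bounds are easy to check numerically: for large $x$ the ratio $\pi(x)/(x/\log x)$ lies between $\log 2 \approx 0.693$ and $2\log 2 \approx 1.386$ (asymptotically it tends to 1, by the prime number theorem, consistent with both bounds).

```python
# Numerical illustration of the Chebyshev-type bounds at x = 10^6.

import math

def primes_upto(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

x = 10 ** 6
pi_x = len(primes_upto(x))          # pi(10^6) = 78498
ratio = pi_x / (x / math.log(x))
print(pi_x, round(ratio, 4))        # ratio ~ 1.0845
assert math.log(2) < ratio < 2 * math.log(2)
```

At $x = 10^6$ the ratio is about 1.08, comfortably inside the Chebyshev window.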
A scheme for generating THz radiation using two moderately intense lasers and a moderately relativistic electron beam is proposed. In the scheme, a laser encounters a co-propagating relativistic electron beam and excites plasmons via the two-plasmon decay. The excited plasmons then emit THz radiation by interacting with the second laser via Raman scattering. Our estimate suggests that the mean free path of the pump laser for conversion to THz radiation is much shorter than that for Thomson scattering. Physical parameters of practical interest are presented.
physics
Causal inference analyses often use existing observational data, which in many cases has some clustering of individuals. In this paper we discuss propensity score weighting methods in a multilevel setting where within clusters individuals share unmeasured confounders that are related to treatment assignment and the potential outcomes. We focus in particular on settings where models with fixed cluster effects are either not feasible or not useful due to the presence of a large number of small clusters. We found, both through numerical experiments and theoretical derivations, that a strategy of grouping clusters with similar treatment prevalence and estimating propensity scores within such cluster groups is effective in reducing bias from unmeasured cluster-level covariates under mild conditions on the outcome model. We apply our proposed method in evaluating the effectiveness of center-based pre-school program participation on children's achievement at kindergarten, using the Early Childhood Longitudinal Study, Kindergarten data.
statistics
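The grouping strategy can be sketched with a toy estimator (an illustration of the idea, not the authors' method): sort clusters by treatment prevalence, pool clusters with similar prevalence, use each group's prevalence as the propensity score, and form inverse-probability weights within groups.

```python
# Toy prevalence-grouped IPW estimator under cluster-level confounding.
# Group count, trimming bounds, and the data-generating constants are
# illustrative assumptions.

import random

def ipw_by_prevalence_groups(clusters, n_groups=3):
    """clusters: list of lists of (treated, outcome) pairs."""
    prev = [sum(t for t, _ in c) / len(c) for c in clusters]
    order = sorted(range(len(clusters)), key=lambda i: prev[i])
    size = -(-len(clusters) // n_groups)           # ceiling division
    t_sum = c_sum = den_t = den_c = 0.0
    for g in range(n_groups):
        ids = order[g * size:(g + 1) * size]
        units = [u for i in ids for u in clusters[i]]
        e = sum(t for t, _ in units) / len(units)  # group propensity score
        e = min(max(e, 0.05), 0.95)                # trim extreme weights
        for t, y in units:
            if t:
                t_sum += y / e
                den_t += 1 / e
            else:
                c_sum += y / (1 - e)
                den_c += 1 / (1 - e)
    return t_sum / den_t - c_sum / den_c

# Synthetic data: an unmeasured cluster effect u raises both the
# treatment probability and the outcome; the true effect is 2.
rng = random.Random(7)
clusters = []
for _ in range(40):
    u = rng.gauss(0, 1)
    p = min(max(0.5 + 0.3 * u, 0.05), 0.95)
    members = []
    for _ in range(25):
        t = int(rng.random() < p)
        members.append((t, 2 * t + u + rng.gauss(0, 0.5)))
    clusters.append(members)
est = ipw_by_prevalence_groups(clusters)
print(round(est, 2))
```

Because clusters in the same group share similar treatment prevalence, and hence similar values of the unmeasured confounder, within-group weighting attenuates the bias that a pooled comparison would suffer.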
In many engineering applications the level of nonlinear distortions in frequency response function (FRF) measurements is quantified using specially designed periodic excitation signals called random phase multisines and periodic noise. The technique is based on the concept of the best linear approximation (BLA) and it allows one to check the validity of the linear framework with a simple experiment. Although the classical BLA theory can handle measurement noise only, in most applications the noise generated by the system -- called process noise -- is the dominant noise source. Therefore, there is a need to extend the existing BLA theory to the process noise case. In this paper we study in detail the impact of the process noise on the BLA of nonlinear continuous-time systems operating in a closed loop. It is shown that the existing nonparametric estimation methods for detecting and quantifying the level of nonlinear distortions in FRF measurements are still applicable in the presence of process noise. All results are also valid for discrete-time systems and systems operating in open loop.
electrical engineering and systems science
We derive the upper limit to the ejecta mass of S190814bv, a black hole-neutron star merger candidate, through the radiative transfer simulations for kilonovae with the realistic ejecta density profile as well as the detailed opacity and heating rate models. The limits to the ejecta mass strongly depend on the viewing angle. For the face-on observations ($\le45^\circ$), the total ejecta mass should be smaller than $0.1\,M_\odot$ for the average distance of S190814bv ($D=267$ Mpc), while larger mass is allowed for the edge-on observations. We also derive the conservative upper limits of the dynamical ejecta mass to be $0.02\,M_\odot$, $0.03\,M_\odot$, and $0.05\,M_\odot$ for the viewing angle $\le 20^\circ$, $\le 50^\circ$, and for $\le 90^\circ$, respectively. We show that the {\it iz}-band observation deeper than $22$ mag within 2 d after the GW trigger is crucial to detect the kilonova with the total ejecta mass of $0.06\,M_\odot$ at the distance of $D=300$ Mpc. We also show that a strong constraint on the NS mass-radius relation can be obtained if the future observations put the upper limit of $0.03\,M_\odot$ to the dynamical ejecta mass for a BH-NS event with the chirp mass smaller than $\lesssim 3\,M_\odot$ and effective spin larger than $\gtrsim 0.5$.
astrophysics
Preprint for a book chapter introducing Audio Content Analysis. With a focus on Music Information Retrieval systems, this chapter defines musical audio content, introduces the general process of audio content analysis, and surveys basic approaches to audio content analysis. The various tasks in Audio Content Analysis are categorized into three classes: music transcription, music performance analysis, and music identification and categorization. The examples for music transcription systems include music key detection, fundamental frequency detection, and music structure detection. Music performance analysis systems feature an overview of beat and tempo detection approaches as well as music performance assessment. The covered music classification systems are audio fingerprinting, music genre classification, and music emotion recognition. The chapter concludes with a discussion and current challenges in the field and a speculation on future perspectives.
electrical engineering and systems science
We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we use critical values that take into account the possible bias of the estimator upon which the CIs are based. We show that this approach leads to CIs that are more efficient than conventional CIs that achieve coverage by undersmoothing or subtracting an estimate of the bias. We give sharp efficiency bounds for using different kernels, and derive the optimal bandwidth for constructing honest CIs. We show that using the bandwidth that minimizes the maximum mean-squared error results in CIs that are nearly efficient and that in this case, the critical value depends only on the rate of convergence. For the common case in which the rate of convergence is $n^{-2/5}$, the appropriate critical value for 95% CIs is 2.18, rather than the usual 1.96 critical value. We illustrate our results in a Monte Carlo analysis and an empirical application.
statistics
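The 2.18 critical value can be reproduced numerically. Under the standard calculus with bias $\propto h^2$ and standard deviation $\propto (nh)^{-1/2}$, the MSE-optimal bandwidth gives the $n^{-2/5}$ rate and a worst-case bias-to-sd ratio of $1/2$; the bias-aware critical value $c$ then solves $P(|Z + 1/2| \le c) = 0.95$ for $Z \sim N(0,1)$ (a sketch of the calculation, not the paper's code):

```python
# Solve P(|Z + b| <= c) = level by bisection, where b is the worst-case
# bias / standard-deviation ratio of the estimator.

import math

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def honest_cv(bias_sd_ratio, level=0.95):
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        cover = (norm_cdf(mid - bias_sd_ratio)
                 - norm_cdf(-mid - bias_sd_ratio))
        lo, hi = (mid, hi) if cover < level else (lo, mid)
    return (lo + hi) / 2

print(round(honest_cv(0.0), 2))   # 1.96: no bias, the usual value
print(round(honest_cv(0.5), 2))   # 2.18: the n^{-2/5} rate case
```

With zero bias the routine recovers the familiar 1.96, and with the ratio $1/2$ it returns the 2.18 quoted above.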
In this communication we consider generalities of the proximity effect in a contact between a conventional $s$-wave superconductor (S) nano-island and a thin film of a topological insulator (TI). A local hybridization coupling mechanism is considered and a corresponding model is corroborated that captures not only the induced unconventional superconductivity in a TI, but also predicts the spreading of topologically protected surface states into the superconducting over-layer. This dual nature of the proximity effect leads specifically to a modified description of topological superconductivity in these systems. Experimentally accessible signatures of this phenomenon are discussed in the context of scanning tunneling microscopy measurements. For this purpose an effective density of states is computed in both the superconductor and topological insulator. As a guiding example, practical applications are made for Nb islands deposited on a surface of Bi$_2$Se$_3$. The obtained results are general and can be applied beyond the particular material system used. Possible implications of these results to proximity circuits and hybrid hardware devices for quantum computation processing are discussed.
condensed matter
Recently van Dokkum et al. (2018b) reported that the galaxy NGC 1052-DF2 (DF2) lacks dark matter if located at $20$ Mpc from Earth. In contrast, DF2 is a dark-matter-dominated dwarf galaxy with a normal globular cluster population if it has a much shorter distance near $10$ Mpc. However, DF2 then has a high peculiar velocity with respect to the cosmic microwave background of $886$ $\rm{km\,s^{-1}}$, which differs from that of the Local Group (LG) velocity vector by $1298$ $\rm{km\,s^{-1}}$ with an angle of $117 \, ^{\circ}$. Taking into account the dynamical $M/L$ ratio, the stellar mass, half-light radius, peculiar velocity, motion relative to the LG, and the luminosities of the globular clusters, we show that the probability of finding DF2-like galaxies in the lambda cold dark matter ($\Lambda$CDM) TNG100-1 simulation is at most $1.0\times10^{-4}$ at $11.5$ Mpc and is $4.8\times10^{-7}$ at $20.0$ Mpc. At $11.5$ Mpc, the peculiar velocity is in significant tension with the TNG100-1, TNG300-1, and Millennium simulations, but occurs naturally in a Milgromian cosmology. At $20.0$ Mpc, the unusual globular cluster population would challenge any cosmological model. Estimating that precise measurements of the internal velocity dispersion, stellar mass, and distance exist for $100$ galaxies, DF2 is in $2.6\sigma$ ($11.5$ Mpc) and $4.1\sigma$ ($20.0$ Mpc) tension with standard cosmology. Adopting the former distance for DF2 and assuming that NGC 1052-DF4 is at $20.0$ Mpc, the existence of both is in tension at $\geq4.8\sigma$ with the $\Lambda$CDM model. If both galaxies are at $20.0$ Mpc the $\Lambda$CDM cosmology has to be rejected by $\geq5.8\sigma$.
astrophysics
We derive the first renormalized factorization theorem for a process described at subleading power in soft-collinear effective theory. Endpoint divergences in convolution integrals, which arise generically beyond leading power, are regularized and removed by systematically rearranging the factorization formula. We study in detail the example of the $b$-quark induced $h\to\gamma\gamma$ decay of the Higgs boson, for which we derive the evolution equations for all quantities in the factorization theorem and resum large logarithms of the ratio $M_h/m_b$ at next-to-leading logarithmic order.
high energy physics phenomenology
The existence of a Radius Valley in the Kepler size distribution stands as one of the most important observational constraints to understand the origin and composition of exoplanets with radii between that of Earth and Neptune. The goal of this work is to provide insights into the existence of the Radius Valley from, first, a pure formation point of view, and second, a combined formation-evolution model. We run global planet formation simulations including the evolution of dust by coagulation, drift and fragmentation; and the evolution of the gaseous disc by viscous accretion and photoevaporation. A planet grows from a moon-mass embryo by either silicate or icy pebble accretion, depending on its position with respect to the water ice line. We account for gas accretion and type-I/II migration. We perform an extensive parameter study evaluating a wide range in disc properties and embryo's initial location. We account for photoevaporation driven mass-loss after formation. We find that due to the change in dust properties at the water ice line, rocky cores form typically with $\sim$3 M$_{\oplus}$ and have a maximum mass of $\sim$5 M$_{\oplus}$, while icy cores peak at $\sim$10 $M_{\oplus}$, with masses lower than 5 M$_{\oplus}$ being scarce. When neglecting the gaseous envelope, rocky and icy cores account naturally for the two peaks of the Kepler size distribution. The presence of massive envelopes for cores more massive than $\sim$10 M$_{\oplus}$ inflates the radii of those planets above 4 R$_{\oplus}$. While the first peak of the Kepler size distribution is undoubtedly populated by bare rocky cores, the second peak can host water-rich planets with thin H-He atmospheres. Some envelope-loss mechanism should operate efficiently at short orbital periods to explain the presence of $\sim$10-40 M$_{\oplus}$ planets falling in the second peak of the size distribution.
astrophysics
In this paper, four adaptive radar architectures for target detection in heterogeneous Gaussian environments are devised. The first architecture relies on a cyclic optimization exploiting the Maximum Likelihood Approach in the original data domain, whereas the second detector is a function of transformed data which are normalized with respect to their energy and with the unknown parameters estimated through an Expectation-Maximization-based alternate procedure. The remaining two architectures are obtained by suitably combining the estimation procedures and the detector structures previously devised. Performance analysis, conducted on both simulated and measured data, highlights that the architecture working in the transformed domain guarantees the constant false alarm rate property with respect to the interference power variations and a limited detection loss with respect to the other detectors, whose detection thresholds nevertheless are very sensitive to the interference power.
electrical engineering and systems science
Bell inequalities are consequences of local realism but are violated by quantum mechanics. In particle physics, entangled high-energy particles can be produced from a common source, and the decay of each particle plays the role of a measurement. However, in a hidden variable theory, the decays could be determined by hidden variables. This loophole has undermined such approaches to Bell tests in particle physics. It is a special form of the measurement-setting or free-will loophole, which also exists in other systems. Using entangled baryons, we present new inequalities of local realism with the explicit assumption that the decays depend on hidden variables, together with the consideration of the statistical mixture of polarizations and the separation of local hidden variables for objects with spacelike distances. Violations of these inequalities close the measurement-setting loophole once and for all. We propose to use the processes $\eta _c\to \Lambda \bar{\Lambda}$ and $\chi _{c0} \to \Lambda \bar{\Lambda}$ to test our inequalities, and show that their violations are likely to be observed with the data already collected at BESIII.
high energy physics phenomenology
The presence of certain elements within a star, and by extension its planet, strongly impacts the formation and evolution of the planetary system. The positive correlation between a host star's iron content and the presence of an orbiting giant exoplanet has been confirmed; however, the importance of other elements in predicting giant planet occurrence is less certain despite their central role in shaping internal planetary structure. We designed and applied a machine learning algorithm to the Hypatia Catalog (Hinkel et al. 2014) to analyze the stellar abundance patterns of known host stars and determine which elements are important in identifying potential giant exoplanet host stars. We analyzed a variety of element ensembles, namely volatiles, lithophiles, siderophiles, and Fe. We show that the relative abundances of oxygen, carbon, and sodium, in addition to iron, are influential indicators of the presence of a giant planet. We demonstrate the predictive power of our algorithm by analyzing stars with known giant planets, finding that they had a median prediction score of 75%. We present a list of ~350 stars with no currently discovered planets that have a $\geq$90% predicted likelihood of hosting a giant exoplanet. We investigated archival HARPS data and found significant trends indicating that HIP62345, HIP71803, and HIP10278 host long-period giant planet companions with estimated minimum $M_p\sin(i)$ values of 3.7, 6.8, and 8.5 M$_{J}$, respectively. We anticipate that our findings will inform future target selection, the understanding of the role that elements play in giant planet formation, and the determination of giant planet interior structure models.
astrophysics
The Jiangmen Underground Neutrino Observatory is a multipurpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters by detecting reactor neutrinos from the Yangjiang and Taishan Nuclear Power Plants, observe supernova neutrinos, study atmospheric neutrinos, solar neutrinos and geo-neutrinos, and perform exotic searches, with a 20-thousand-ton liquid scintillator detector of unprecedented 3\% energy resolution (at 1 MeV) located 700 meters deep underground. In this proceeding, the subsystems of the experiment, including the central detector, the Online Scintillator Internal Radioactivity Investigation system, the PMT system, the veto detector, the calibration system and the Taishan Antineutrino Observatory, will be described. The construction is expected to be completed in 2021.
physics
Recently, holey graphene (HG) has been successfully synthesized with atomic precision in hole size and shape. It shows interesting physical and chemical properties for energy and environmental applications. Shaping of the pores also transforms semimetallic graphene into semiconducting holey graphene, which opens a new door for its use in electronic applications. We systematically investigated the structural, electronic, optical and thermoelectric properties of the HG structure using first-principles calculations. HG was found to have a direct band gap of 0.65 eV (PBE functional) and 0.95 eV (HSE06 functional), with the HSE06 value in good agreement with experimental results. For the optical properties, we use single-shot G0W0 calculations and solve the Bethe-Salpeter equation to determine the intralayer excitonic effects. From the absorption spectrum, we obtained an optical gap of 1.28 eV and a weak excitonic binding energy of 80 meV. From the investigated thermoelectric properties, we found a large thermopower of 1662.59 $\mu$V/K and a good electronic figure of merit, ZT$_{e}$, of 1.13. Our investigations exhibit strong and broad optical absorption in the visible light region, which makes the HG monolayer a promising candidate for optoelectronic and thermoelectric applications.
condensed matter
This paper proposes a full-band and sub-band fusion model, named FullSubNet, for single-channel real-time speech enhancement. Full-band and sub-band refer to models that input full-band and sub-band noisy spectral features and output full-band and sub-band speech targets, respectively. The sub-band model processes each frequency independently. Its input consists of one frequency and several context frequencies. The output is the prediction of the clean speech target for the corresponding frequency. These two types of models have distinct characteristics. The full-band model can capture the global spectral context and the long-distance cross-band dependencies. However, it lacks the ability to model signal stationarity and attend to the local spectral pattern. The sub-band model is just the opposite. In our proposed FullSubNet, we connect a pure full-band model and a pure sub-band model sequentially and use practical joint training to integrate the advantages of these two types of models. We conducted experiments on the DNS Challenge (INTERSPEECH 2020) dataset to evaluate the proposed method. Experimental results show that full-band and sub-band information are complementary, and that FullSubNet can effectively integrate them. Moreover, the performance of FullSubNet exceeds that of the top-ranked methods in the DNS Challenge (INTERSPEECH 2020).
electrical engineering and systems science
We present a solution to egocentric 3D body pose estimation from monocular images captured from downward-looking fish-eye cameras installed on the rim of a head-mounted VR device. This unusual viewpoint leads to images with unique visual appearance, with severe self-occlusions and perspective distortions that result in drastic differences in resolution between the lower and upper body. We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in 2D predictions. The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches. To tackle the lack of labelled data we also introduce a large photo-realistic synthetic dataset. xR-EgoPose offers high-quality renderings of people with diverse skin tones, body shapes and clothing, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top-performing approaches on the more classic problem of 3D human pose from a third-person viewpoint.
computer science
Emission metrics, a crucial tool in setting effective equivalences between greenhouse gases, currently require a subjective, arbitrary choice of time horizon. Here, we propose a novel framework that uses a specific temperature goal to calculate the time horizon that aligns with scenarios satisfying that temperature goal. We analyze the Intergovernmental Panel on Climate Change Special Report on Global Warming of 1.5 C Scenario Database to find that justified time horizons for the 1.5 C and 2 C global warming goals of the Paris Agreement are 22 +/- 1 and 55 +/- 1 years, respectively. We then use these time horizons to quantify time-dependent emission metrics. Using methane as an example, we find that emission metrics that align with the 1.5 C and 2 C warming goals respectively (using their associated time horizons) are 80 +/- 1 and 45 +/- 1 for the Global Warming Potential, 62 +/- 1 and 11 +/- 1 for the Global Temperature change Potential, and 89 +/- 1 and 50 +/- 1 for the integrated Global Temperature change Potential. Using the most commonly used time horizon, 100 years, results in underestimating methane emission metrics by 40-70% relative to the values we calculate that align with the 2 C goal.
physics
The concept of causality has a controversial history. The question of whether it is possible to represent and address causal problems with probability theory, or if fundamentally new mathematics such as the do calculus is required, has been hotly debated; e.g. Pearl (2001) states "the building blocks of our scientific and everyday knowledge are elementary facts such as "mud does not cause rain" and "symptoms do not cause disease" and those facts, strangely enough, cannot be expressed in the vocabulary of probability calculus". This has led to a dichotomy between advocates of causal graphical modeling and the do calculus, and researchers applying Bayesian methods. In this paper we demonstrate that, while it is critical to explicitly model our assumptions on the impact of intervening in a system, provided we do so, estimating causal effects can be done entirely within the standard Bayesian paradigm. The invariance assumptions underlying causal graphical models can be encoded in ordinary probabilistic graphical models, allowing causal estimation with Bayesian statistics, equivalent to the do calculus. Elucidating the connections between these approaches is a key step toward enabling the insights provided by each to be combined to solve real problems.
statistics
In this paper, we consider the following nonlinear Kirchhoff type problem: \[ \left\{\begin{array}{lcl}-\left(a+b\displaystyle\int_{\mathbb{R}^3}|\nabla u|^2\right)\Delta u+V(x)u=f(u), & \textrm{in}\,\,\mathbb{R}^3,\\ u\in H^1(\mathbb{R}^3), \end{array}\right. \] where $a,b>0$ are constants, the nonlinearity $f$ is superlinear at infinity with subcritical growth and $V$ is continuous and coercive. For the case when $f$ is odd in $u$ we obtain infinitely many sign-changing solutions for the above problem by using a combination of the invariant sets method and the Ljusternik-Schnirelman type minimax method. To the best of our knowledge, there are only a few existence results for this problem. It is worth mentioning that the nonlinear term may not be 4-superlinear at infinity; in particular, it includes the power-type nonlinearity $|u|^{p-2}u$ with $p\in(2,4]$.
mathematics
Foliated fracton order is a qualitatively new kind of phase of matter. It is similar to topological order, but with the fundamental difference that a layered structure, referred to as a foliation, plays an essential role and determines the mobility restrictions of the topological excitations. In this work, we introduce a new kind of field theory to describe these phases: a foliated field theory. We also introduce a new lattice model and string-membrane-net condensation picture of these phases, which is analogous to the string-net condensation picture of topological order.
condensed matter
This work introduces an Adaptive Mesh Refinement (AMR) strategy for the topology optimization of structures made of discrete geometric components using the geometry projection method. Practical structures made of geometric shapes such as bars and plates typically exhibit low volume fractions with respect to the volume of the design region they occupy. To maintain an accurate analysis and to ensure well-defined sensitivities in the geometry projection, it is required that the element size is smaller than the smallest dimension of each component. For low-volume-fraction structures, this leads to finite element meshes with very large numbers of elements. To improve the efficiency of the analysis and optimization, we propose a strategy to adaptively refine the mesh and reduce the number of elements by having a finer mesh on the geometric components, and a coarser mesh away from them. The refinement indicator stems very naturally from the geometry projection and is thus straightforward to implement. We demonstrate the effectiveness of the proposed AMR method by performing topology optimization for the design of minimum-compliance and stress-constrained structures made of bars and plates.
mathematics
Vacuum fluctuations are a fundamental feature of quantized fields. It is usually assumed that observations connected to vacuum fluctuations require a system well isolated from other influences. In this work, we demonstrate that effects of the quantum vacuum can already occur in simple colloidal nano-assemblies prepared by wet chemistry. We claim that the electromagnetic field fluctuations at the zero-point level saturate the absorption of dye molecules self-assembled at the surface of plasmonic nano-resonators. For this effect to occur, reaching the strong coupling regime between the plasmons and excitons is not required. This intriguing effect of vacuum-induced saturation (VISA) is discussed within a simple quantum optics picture and demonstrated by comparing the optical spectra of hybrid gold-core dye-shell nanorods to electromagnetic simulations.
physics
Human societies include many types of social relationships. Friends, family, business colleagues, online contacts, and religious groups, for example, can all contribute to an individual's social life. Individuals may behave differently in different domains, but their success in one domain may nonetheless engender success in another. The complexity caused by distinct, but coupled, arenas of social interaction may be a key driver of prosocial or selfish behavior in societies. Here, we study this problem using multilayer networks to model a population with multiple domains of social interactions. An individual can appear in multiple different layers, each with separate behaviors and environments. We provide mathematical results on the resulting behavioral dynamics, for any multilayer structure. Across a diverse space of structures, we find that coupling between layers tends to promote prosocial behavior. In fact, even if prosociality is disfavored in each layer alone, multilayer coupling can promote its proliferation in all layers simultaneously. We apply these techniques to six real-world multilayer social networks, ranging from the networks of socio-emotional and professional relationships in a Zambian community, to the networks of online and offline relationships within a university. Our results suggest that coupling between distinct domains of social interaction is critical for the spread of prosociality in human societies.
physics
Current image-guided prostate radiotherapy often relies on the use of implanted fiducials or transducers for target localization. Fiducial or transducer insertion requires an invasive procedure that adds cost and risks of bleeding, infection, and discomfort for some patients. We are developing a novel markerless prostate localization strategy using a pre-trained deep learning model to interpret routine projection kV X-ray images without the need for daily cone-beam computed tomography (CBCT). A deep learning model was first trained by using several thousand annotated projection X-ray images. The trained model is capable of identifying the location of the prostate target for a given input X-ray projection image. To assess the accuracy of the approach, three patients with prostate cancer who received volumetric modulated arc therapy (VMAT) were retrospectively studied. The results obtained using the deep learning model and the actual positions of the prostate were compared quantitatively. The deviations between the target positions obtained by the deep learning model and the corresponding annotations ranged from 1.66 mm to 2.77 mm in the anterior-posterior (AP) direction, and from 1.15 mm to 2.88 mm in the lateral direction. The target positions provided by the deep learning model for the kV images acquired using the on-board imager (OBI) were found to be consistent with those derived from the fiducials. This study demonstrates, for the first time, that highly accurate markerless prostate localization based on deep learning is achievable. The strategy provides a clinically valuable solution to daily patient positioning and real-time target tracking for image-guided radiotherapy (IGRT) and interventions.
physics
We investigate the evolution of the Q values for the implementation of Deep Q Learning (DQL) in the Stable Baselines library. Stable Baselines incorporates the latest Reinforcement Learning techniques and achieves superhuman performance in many game environments. However, for some simple non-game environments, the DQL in Stable Baselines can struggle to find the correct actions. In this paper we aim to understand the types of environment where this suboptimal behavior can happen, and also investigate the corresponding evolution of the Q values for individual states. We compare a smart TrafficLight environment (where performance is poor) with the AI Gym FrozenLake environment (where performance is perfect). We observe that DQL struggles with TrafficLight because actions are reversible and hence the Q values in a given state are closer than in FrozenLake. We then investigate the evolution of the Q values using a recent decomposition technique of Achiam et al. We observe that for TrafficLight, the function approximation error and the complex relationships between the states lead to a situation where some Q values meander far from optimal.
computer science
For dimensions $n\geq 3$ and $k\in\{2, \cdots, n\}$, we show that the space of metrics of $k$-positive Ricci curvature on the sphere $S^{n}$ has the structure of an $H$-space with a homotopy commutative, homotopy associative product operation. We further show, using the theory of operads and results of Boardman, Vogt and May that the path component of this space containing the round metric is weakly homotopy equivalent to an $n$-fold loop space.
mathematics
Hyperaccretion occurs when the gas inflow rate onto a black hole (BH) is so high that the radiative feedback cannot reverse the accretion flow. This extreme process is a promising mechanism for the rapid growth of seed BHs in the early universe, which can explain high-redshift quasars powered by billion solar mass BHs. In theoretical models, spherical symmetry is commonly adopted for hyperaccretion flows; however, the sustainability of such structures on timescales corresponding to the BH growth has not been addressed yet. Here we show that stochastic interactions between the ionizing radiation from the BH and nonuniform accretion flow can lead to the formation of a rotating gas disk around the BH. Once the disk forms, the supply of gas to the BH preferentially occurs via biconical-dominated accretion flow perpendicular to the disk, avoiding the centrifugal barrier of the disk. Biconical-dominated accretion flows from opposite directions collide in the vicinity of the BH supplying high-density, low angular momentum gas to the BH, whereas most of the gas with nonnegligible angular momentum is deflected to the rotationally supported outflowing decretion disk. The disk becomes reinforced progressively as more mass from the biconical flow transfers to the disk and some of the outflowing gas from the disk is redirected to the biconical accretion funnels through a meridional structure. This axisymmetric hydrodynamic structure of a biconical-dominated accretion flow and decretion disk continues to provide uninterrupted flow of high-density gas to the BH.
astrophysics
Jets launched by active galactic nuclei (AGN) are believed to play a significant role in shaping the properties of galaxies and provide an energetically viable mechanism through which galaxies can become quenched. Here we present a novel AGN feedback model, which we have incorporated into the AREPO code, that evolves the black hole mass and spin as the accretion flow proceeds through a thin $\alpha$-disc which we self-consistently couple to a Blandford-Znajek jet. We apply our model to the central region of a typical radio-loud Seyfert galaxy embedded in a hot circumgalactic medium (CGM). We find that jets launched into high pressure environments thermalise efficiently due to the formation of recollimation shocks and the vigorous instabilities that these shocks excite increase the efficiency of the mixing of CGM and jet material. The beams of more overpressured jets, however, are not as readily disrupted by instabilities so the majority of the momentum flux at the jet base is retained out to the head, where the jet terminates in a reverse shock. All jets entrain a significant amount of cold circumnuclear disc material which, while energetically insignificant, dominates the lobe mass together with the hot, entrained CGM material. The jet power evolves significantly due to effective self-regulation by the black hole, fed by secularly-driven, intermittent mass flows. The direction of jets launched directly into the circumnuclear disc changes considerably due to effective Bardeen-Petterson torquing. Interestingly, these jets obliterate the innermost regions of the disc and drive large-scale, multi-phase, turbulent, bipolar outflows.
astrophysics
Exclusive non-leptonic two-body decays of $B$ mesons have been studied extensively in the past two decades within the framework of factorization. However, the exploration of the corresponding three-body case has only started recently, in part motivated by new data. We consider here the simplest non-leptonic three-body $B$ decays from the point of view of factorization, namely heavy-to-heavy transitions. We provide a careful derivation of the SCET/QCDF factorized amplitudes to NNLO in $\alpha_s$, and discuss the numerical impact of NLO and NNLO corrections. We then study the narrow-width limit, showing that the three-body amplitude reproduces analytically the known quasi-two-body decay amplitudes, and compute finite-width corrections. Finally, we discuss certain observables that are sensitive to perturbative NLO and NNLO corrections and to higher Gegenbauer moments of the dimeson LCDAs. This is the first study of non-leptonic three-body $B$ decays to NNLO in QCD.
high energy physics phenomenology
To solve a machine learning problem, one typically needs to perform data preprocessing, modeling, and hyperparameter tuning, which is known as model selection and hyperparameter optimization. The goal of automated machine learning (AutoML) is to design methods that can automatically perform model selection and hyperparameter optimization without human intervention for a given dataset. In this paper, we propose a meta-learning method that can search for a high-performance machine learning pipeline from a predefined set of candidate pipelines for supervised classification datasets in an efficient way by leveraging meta-data collected from previous experiments. More specifically, our method combines an adaptive Bayesian regression model with a neural network basis function and the acquisition function from Bayesian optimization. The adaptive Bayesian regression model is able to capture knowledge from previous meta-data and thus make predictions of the performances of machine learning pipelines on a new dataset. The acquisition function is then used to guide the search of possible pipelines based on the predictions. The experiments demonstrate that our approach can quickly identify high-performance pipelines for a range of test datasets and outperforms the baseline methods.
computer science
The ABCD method is one of the most widely used data-driven background estimation techniques in high energy physics. Cuts on two statistically-independent classifiers separate signal and background into four regions, so that background in the signal region can be estimated simply using the other three control regions. Typically, the independent classifiers are chosen "by hand" to be intuitive and physically motivated variables. Here, we explore the possibility of automating the design of one or both of these classifiers using machine learning. We show how to use state-of-the-art decorrelation methods to construct powerful yet independent discriminators. Along the way, we uncover a previously unappreciated aspect of the ABCD method: its accuracy hinges on having low signal contamination in control regions not just overall, but relative to the signal fraction in the signal region. We demonstrate the method with three examples: a simple model consisting of three-dimensional Gaussians; boosted hadronic top jet tagging; and a recasted search for paired dijet resonances. In all cases, automating the ABCD method with machine learning significantly improves performance in terms of ABCD closure, background rejection and signal contamination.
high energy physics phenomenology
We consider the problem of designing distributed controllers to ensure passivity of a large-scale interconnection of linear subsystems connected in a cascade topology. The control design process needs to be carried out at the subsystem-level with no direct knowledge of the dynamics of other subsystems in the interconnection. We present a distributed approach to solve this problem, where subsystem-level controllers are locally designed in a sequence starting at one end of the cascade using only the dynamics of the particular subsystem, coupling with the immediately preceding subsystem and limited information from the preceding subsystem in the cascade to ensure passivity of the interconnected system up to that point. We demonstrate that this design framework also allows for new subsystems to be compositionally added to the interconnection without requiring redesign of the pre-existing controllers.
computer science
We present a study devoted to a detailed description of modulated rotating waves (MRW) in the magnetized spherical Couette system. The set-up consists of a liquid metal confined between two differentially rotating spheres and subjected to an axially applied magnetic field. When the magnetic field strength is varied, several branches of MRW are obtained by means of three-dimensional direct numerical simulations (DNS). The MRW originate from parent branches of rotating waves (RW) and are classified according to the theoretical descriptions of Rand (Arch. Ration. Mech. Anal. 79:1-37, 1982) and Coughlin & Marcus (J. Fluid Mech. 234:1-18, 1992). We have found relatively large intervals of multistability of MRW at low magnetic field, corresponding to the radial jet instability known from previous studies. However, at larger magnetic field, corresponding to the return flow regime, the stability intervals of MRW are very narrow and thus they are unlikely to be found without detailed knowledge of their bifurcation point. A careful analysis of the spatio-temporal symmetries of the most energetic modes involved in the different classes of MRW will allow a future comparison with the HEDGEHOG experiment, a magnetized spherical Couette device hosted at the Helmholtz-Zentrum Dresden-Rossendorf.
physics
We study the well-known Bogomolny equations, in a general coordinate system, for monopoles and dyons in the $SU(2)$ Yang-Mills-Higgs model using the BPS Lagrangian method. We extract an explicit form of the BPS Lagrangian that yields these Bogomolny equations. We generalize this BPS Lagrangian by adding scalar-field-dependent couplings to each of its terms and use this generalized BPS Lagrangian to derive Bogomolny equations for monopoles and dyons in the generalized $SU(2)$ Yang-Mills-Higgs model, which contains additional scalar-field-dependent couplings compared to its corresponding canonical model. There are additional constraint equations coming from the Euler-Lagrange equations of the generalized BPS Lagrangian, which can be considered as the Gauss's law constraint equations in the BPS limit, the limit in which the Bogomolny equations are satisfied. In the case of monopoles these constraint equations are trivial, while for dyons they are non-trivial. Unfortunately, in the Julia-Zee ansatz these constraint equations force the scalar-field-dependent couplings to be constants, in which case their solutions are the standard BPS dyons. The existence of generalized BPS dyons may require a different ansatz that simultaneously solves the Bogomolny equations, the constraint equations and an equation that relates all the scalar-field-dependent couplings.
high energy physics theory
If the Past Hypothesis underlies the arrows of time, what is the status of the Past Hypothesis? In this paper, I examine the role of the Past Hypothesis in the Boltzmannian account and defend the view that the Past Hypothesis is a candidate fundamental law of nature. Such a view is known to be compatible with Humeanism about laws, but as I argue it is also supported by a minimal non-Humean "governing" view. Some worries arise from the non-dynamical and time-dependent character of the Past Hypothesis as a boundary condition, the intrinsic vagueness in its specification, and the nature of the initial probability distribution. I show that these worries do not have much force, and in any case they become less relevant in a new quantum framework for analyzing time's arrows -- the Wentaculus. Hence, the view that the Past Hypothesis is a candidate fundamental law should be more widely accepted than it is now.
physics
In this paper, a novel end-to-end learning approach, namely JTRD-Net, is proposed for uplink multiuser single-input multiple-output (MU-SIMO) joint transmitter and non-coherent receiver design (JTRD) in fading channels. The basic idea lies in the use of artificial neural networks (ANNs) to replace traditional communication modules at both transmitter and receiver sides. More specifically, the transmitter side is modeled as a group of parallel linear layers, which are responsible for multiuser waveform design; and the non-coherent receiver is formed by a deep feed-forward neural network (DFNN) so as to provide multiuser detection (MUD) capabilities. The entire JTRD-Net can be trained end to end to adapt to channel statistics through deep learning. After training, JTRD-Net can work efficiently in a non-coherent manner without requiring any level of channel state information (CSI). In addition to the network architecture, a novel weight-initialization method, namely symmetrical-interval initialization, is proposed for JTRD-Net. It is shown that the symmetrical-interval initialization outperforms the conventional method (e.g. Xavier initialization) in terms of well-balanced convergence rates among users. Simulation results show that the proposed JTRD-Net approach offers significant advantages in terms of reliability and scalability over baseline schemes on both i.i.d. complex Gaussian channels and spatially-correlated channels.
electrical engineering and systems science
Dissipation within the turbulent boundary layer under sea ice is one of many processes contributing to wave energy attenuation in ice-covered seas. Although recent observations suggest that the contribution of that process to the total energy dissipation is significant, its parameterizations used in spectral wave models are based on fairly crude, heuristic approximations. In this paper, an improved source term for the under-ice turbulent dissipation is proposed, taking into account the spectral nature of that process (as opposed to parameterizations based on the so-called representative wave), as well as effects related to sea ice concentration and floe-size distribution, formulated on the basis of the earlier results of discrete-element modeling. The core of the new source term is based on an analogous model for dissipation due to bottom friction derived by Weber in 1991 (https://doi.org/10.1017/S0022112091003634). The shape of the wave energy attenuation curves and frequency-dependence of the attenuation coefficients are analyzed in detail for compact sea ice. The role of floe size in modifying the attenuation intensity and spectral distribution is illustrated by calibrating the model to observational data from a sudden sea ice break-up event in the marginal ice zone.
physics
The sorption of radionuclides by graphene oxides synthesized by different methods was studied through a combination of batch experiments with characterization by microscopic and spectroscopic techniques such as X-ray photoelectron spectroscopy (XPS), attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR), high-energy-resolution fluorescence-detected X-ray absorption spectroscopy (HERFD-XANES), extended X-ray absorption fine structure (EXAFS) and high-resolution transmission electron microscopy (HRTEM).
physics
The Transiting Exoplanet Survey Satellite (TESS) has begun a new age of exoplanet discoveries around bright host stars. We present the discovery of HD 1397b (TOI-120.01), a giant planet in an 11.54-day eccentric orbit around a bright (V=7.9) G-type subgiant. We estimate both host-star and planetary parameters consistently using EXOFASTv2 based on TESS time-series photometry of transits and CORALIE radial velocity measurements. We find that HD 1397b is a Jovian planet, with a mass of $0.419\pm0.024$ M$_{\rm Jup}$ and a radius of $1.023^{+0.023}_{-0.026}$ R$_{\rm Jup}$. Characterising giant planets in short-period eccentric orbits, such as HD 1397b, is important for understanding and testing theories for the formation and migration of giant planets as well as planet-star interactions.
astrophysics
Quantum Process Tomography (QPT) methods aim at identifying, i.e. estimating, a given quantum process. QPT is a major quantum information processing tool, since it especially allows one to characterize the actual behavior of quantum gates, which are the building blocks of quantum computers. However, usual QPT procedures are complicated, since they set several constraints on the quantum states used as inputs of the process to be characterized. In this paper, we extend QPT so as to avoid two such constraints. On the one hand, usual QPT methods require one to know, and hence to precisely control (i.e. prepare), the specific quantum states used as inputs of the considered quantum process, which is cumbersome. We therefore propose a Blind, or unsupervised, extension of QPT (i.e. BQPT), which means that this approach uses input quantum states whose values are unknown and arbitrary, except that they are required to meet some general known properties (and this approach exploits the output states of the considered quantum process). On the other hand, usual QPT methods require one to be able to prepare many copies of the same (known) input state, which is constraining. On the contrary, we propose "single-preparation methods", i.e. methods which can operate with only one instance of each considered input state. These two new concepts are here illustrated with practical BQPT methods which are numerically validated, in the case when: i) random pure states are used as inputs and their required properties are especially related to the statistical independence of the random variables that define them, ii) the considered quantum process is based on cylindrical-symmetry Heisenberg spin coupling. These concepts may be extended to a much wider class of processes and to BQPT methods based on other input quantum state properties.
quantum physics
We produce a trimerized kagome lattice for ultracold atoms using an optical superlattice formed by overlaying triangular lattices generated with two colors of light at a 2:1 wavelength ratio. Adjusting the depth of each lattice tunes the strong intra-trimer (J) and weak inter-trimer (J') tunneling energies, and also the on-site interaction energy U. Two different trimerization patterns are distinguished using matter-wave diffraction. We characterize the coherence of a strongly interacting Bose gas in this lattice, observing persistent nearest-neighbor spatial coherence in the large U/J' limit, and that such coherence displays asymmetry between the strongly and the weakly coupled bonds.
quantum physics
We give a microscopic two dimensional ${\cal N}=(2,2)$ gauge theory description of arbitrary M2-branes ending on $N_f$ M5-branes wrapping a punctured Riemann surface. These realize surface operators in four dimensional ${\cal N}=2$ field theories. We show that the expectation value of these surface operators on the sphere is captured by a Toda CFT correlation function in the presence of an additional degenerate vertex operator labelled by a representation ${\cal R}$ of $SU(N_f)$, which also labels M2-branes ending on M5-branes. We prove that symmetries of Toda CFT correlators provide a geometric realization of dualities between two dimensional gauge theories, including ${\cal N}=(2,2)$ analogues of Seiberg and Kutasov--Schwimmer dualities. As a bonus, we find new explicit conformal blocks, braiding matrices, and fusion rules in Toda CFT.
high energy physics theory
We calculate the complete quark and gluon cusp anomalous dimensions in four-loop massless QCD analytically from first principles. In addition, we determine the complete matter dependence of the quark and gluon collinear anomalous dimensions. Our approach is to Laurent expand four-loop quark and gluon form factors in the parameter of dimensional regularization. We employ finite field and syzygy techniques to reduce the relevant Feynman integrals to a basis of finite integrals, and subsequently evaluate the basis integrals directly from their standard parametric representations.
high energy physics phenomenology
Ge atoms segregating on zirconium diboride thin films grown on Ge(111) were found to crystallize into a two-dimensional bitriangular structure which was recently predicted to be a flat band material. Angle-resolved photoemission experiments together with theoretical calculations verified the existence of a nearly flat band in spite of non-negligible in-plane long-range hopping and interactions with the substrate. This provides the first experimental evidence that a flat band can emerge from the electronic coupling between atoms and not from the geometry of the atomic structure.
condensed matter
Despite numerous experimental and theoretical investigations of the mechanical behavior of high-capacity Si and Ge Li-ion battery anodes, our basic understanding of swelling-driven fracture in these materials remains limited. Existing theoretical studies have provided insights into elasto-plastic deformations caused by large volume change phase transformations, but have not modeled fracture explicitly beyond Griffith's criterion. Here we use a multi-physics phase-field approach to model self-consistently anisotropic phase transformation, elasto-plastic deformation, and crack initiation and propagation during lithiation of Si nanopillars. Our results reveal the existence of a vulnerable window of yield strength inside which pillars fracture during lithiation. They identify two different modes of fracture inside that window with and without surface localization of plastic deformation prior to fracture for lower and higher yield strength, respectively, and highlight the importance of taking into account this localization to accurately predict the onset of fracture within Griffith theory. The results further demonstrate how the increased robustness of hollow nanopillars can be understood as a direct effect of anode geometry on the size of this vulnerable window. Those insights provide an improved theoretical basis for designing mechanically stable phase-transforming battery materials undergoing large volume changes.
physics
We use a representation of a graded twisted tensor product of $K[x]$ with $K[y]$ in $L(K^{\Bbb{N}_0})$ in order to obtain a nearly complete classification of these graded twisted tensor products via infinite matrices. There is one particular example and three main cases: quadratic algebras classified by Conner and Goetz, a family called $A(n,d,a)$ with the $(n+1)$-extension property for $n\ge 2$, and a third case, not fully classified, which contains a family $B(a,L)$ parameterized by quasi-balanced sequences.
mathematics
Extending a result of Mashreghi and Ransford, we prove that every complex separable infinite-dimensional Fr\'echet space with a continuous norm is isomorphic to a space continuously included in a space of holomorphic functions on the unit disc or the complex plane, which contains the polynomials as a dense subspace. As a consequence, examples of nuclear Fr\'echet spaces of holomorphic functions without the bounded approximation property exist.
mathematics
A single-electron transistor incorporated as part of a nanomechanical resonator represents an extreme limit of electron-phonon coupling. While it allows for fast and sensitive electromechanical measurements, it also introduces backaction forces from electron tunnelling which randomly perturb the mechanical state. Despite the stochastic nature of this backaction, under conditions of strong coupling it is predicted to create self-sustaining coherent mechanical oscillations. Here, we verify this prediction using time-resolved measurements of a vibrating carbon nanotube transistor. This electromechanical oscillator has intriguing similarities with a laser. The single-electron transistor, pumped by an electrical bias, acts as a gain medium while the resonator acts as a phonon cavity. Despite the unconventional operating principle, which does not involve stimulated emission, we confirm that the output is coherent, and demonstrate other laser behaviour including injection locking and frequency narrowing through feedback.
condensed matter
We propose a dynamical model for describing the spread of epidemics. This model is an extension of the SIQR (susceptible-infected-quarantined-recovered) and SIRP (susceptible-infected-recovered-pathogen) models used earlier to describe various scenarios of epidemic spreading. As compared to the basic SIR model, our model takes into account two possible routes of contagion transmission: direct from the infected compartment to the susceptible compartment and indirect via some intermediate medium or fomites. Transmission rates are estimated in terms of average distances between the individuals in selected social environments and characteristic time spans for which the individuals stay in each of these environments. We also introduce a collective economic resource associated with the average amount of money or income per individual to describe the socioeconomic interplay between the spreading process and the resource available to infected individuals. The epidemic-resource coupling is supposed to be of activation type, with the recovery rate governed by the Arrhenius-like law. Our model offers the advantage of supporting various control strategies to mitigate the effects of an epidemic and can be applied, in particular, to modeling the spread of COVID-19.
physics
Indoor positioning systems (IPS) are emerging technologies due to an increasing popularity and demand in location-based services (LBS). Because traditional positioning systems such as GPS are limited to outdoor applications, many IPS have been proposed in the literature. WLAN-based IPS are the most promising due to their proven accuracy and infrastructure deployment. Several WLAN-based IPS have been proposed in the past, among which the best results have been shown by so-called fingerprint-based systems. This paper proposes an indoor positioning system which extends traditional WLAN fingerprinting by using received signal strength (RSS) measurements along with channel estimates as an effort to improve classification accuracy for scenarios with a low number of Access Points (APs). The channel estimates aim to characterize complex indoor environments, making them a unique signature for fingerprinting-based IPS and therefore improving pattern recognition in radio maps. Since commercial WLAN cards offer limited measurement information, software-defined radio (SDR), an emerging trend for fast prototyping and research integration, is chosen as the most cost-effective option to extract channel estimates. Therefore, this paper first proposes an 802.11b WLAN SDR beacon receiver capable of measuring RSS and channel estimates. The SDR is designed using the LabVIEW (LV) environment and leverages several inherent platform acceleration features that achieve real-time capturing. The receiver achieves a fast-rate measurement capture of 9 packets per second per AP. The classification stage of the proposed IPS uses a support vector machine (SVM) for offline training and online navigation. Several tests are conducted in a cluttered indoor environment with a single AP in 802.11b legacy mode. Finally, navigation accuracy results are discussed.
electrical engineering and systems science
Gibbs-type random probability measures, or Gibbs-type priors, are arguably the most "natural" generalization of the celebrated Dirichlet prior. Among them the two parameter Poisson-Dirichlet prior certainly stands out for the mathematical tractability and interpretability of its predictive probabilities, which made it the natural candidate in several applications. Given a sample of size $n$, in this paper we show that the predictive probabilities of any Gibbs-type prior admit a large $n$ approximation, with an error term vanishing as $o(1/n)$, which maintains the same desirable features as the predictive probabilities of the two parameter Poisson-Dirichlet prior.
statistics
Generalized Parton Distributions (GPDs) have emerged over the 1990s as a powerful concept and tool to study nucleon structure. They provide nucleon tomography from the correlation between transverse position and longitudinal momentum of partons. The Double Deeply Virtual Compton Scattering (DDVCS) process consists of the Deeply Virtual Compton Scattering (DVCS) process with a virtual photon in the final state eventually generating a lepton pair, which can be either an electron-positron or a muon-antimuon pair. The virtuality of the final time-like photon can be measured and varied, thus providing an extra lever arm and allowing one to measure the GPDs for the initial and transferred momentum dependences independently. This unique feature of DDVCS is of relevance, among others, for the determination of the distribution of nuclear forces, which is accessed through the skewness dependence of GPDs. This proceeding discusses the feasibility and merits of a DDVCS experiment in the context of JLab 12 GeV based on model-predicted pseudo-data and the capability of extracting Compton Form Factors using a fitter algorithm.
high energy physics phenomenology