text: string (11 to 9.77k characters)
label: string (2 to 104 characters)
The explosive growth of bandwidth-hungry Internet applications has led to the rapid development of new-generation mobile network technologies that are expected to provide broadband access to the Internet in a pervasive manner. For example, 6G networks are capable of providing high-speed network access by exploiting higher frequency spectrum; high-throughput satellite communication services are also adopted to achieve pervasive coverage in remote and isolated areas. In order to enable seamless access, Integrated Satellite-Terrestrial Communication Networks (ISTCN) have emerged as an important research area. ISTCN aims to provide high-speed and pervasive network services by integrating broadband terrestrial mobile networks with satellite communication networks. As terrestrial mobile networks began to use higher frequency spectrum (between 3 GHz and 40 GHz), which overlaps with that of satellite communication (4 GHz to 8 GHz for C band and 26 GHz to 40 GHz for Ka band), there are both opportunities and challenges. On one hand, satellite terminals can potentially access terrestrial networks in an integrated manner; on the other hand, there will be more congestion and interference in this spectrum, hence more efficient spectrum management techniques are required. In this paper, we propose a new technique to improve spectrum sharing performance by introducing Non-Orthogonal Multiple Access (NOMA) and Cognitive Radio (CR) into the spectrum sharing of ISTCN. In essence, NOMA improves spectrum efficiency by allowing different users to transmit on the same carrier and distinguishing them by their power levels, while CR improves spectrum efficiency through dynamic spectrum sharing. Furthermore, some open research problems and challenges in ISTCN will be discussed.
electrical engineering and systems science
We propose a spatial analog of the Berry phase mechanism for the coherent manipulation of states of non-relativistic massive particles moving in a two-dimensional landscape. In our construction the temporal modulation of the system Hamiltonian is replaced by a modulation of the confining potential along the transverse direction of the particle propagation. By properly tuning the model parameters, the resulting scattering input-output relations exhibit a Wilczek-Zee non-abelian phase shift contribution that is intrinsically geometrical, hence insensitive to the specific details of the potential landscape. A theoretical derivation of the effect is provided together with practical examples.
quantum physics
Recently, the concept of wake-up radio based access has been considered as an effective power saving mechanism for 5G mobile devices. In this article, the average power consumption of a wake-up radio enabled mobile device is analyzed and modeled by using a semi-Markov process. Building on this, a delay-constrained optimization problem is then formulated, to maximize the device energy-efficiency under given latency requirements, allowing the optimal parameters of the wake-up scheme to be obtained in closed form. The provided numerical results show that, for a given delay requirement, the proposed solution is able to reduce the power consumption by up to 40% compared with an optimized discontinuous reception (DRX) based reference scheme.
computer science
We propose differentially private algorithms for parameter estimation in both low-dimensional and high-dimensional sparse generalized linear models (GLMs) by constructing private versions of projected gradient descent. We show that the proposed algorithms are nearly rate-optimal by characterizing their statistical performance and establishing privacy-constrained minimax lower bounds for GLMs. The lower bounds are obtained via a novel technique, which is based on Stein's Lemma and generalizes the tracing attack technique for privacy-constrained lower bounds. This lower bound argument can be of independent interest as it is applicable to general parametric models. Simulated and real data experiments are conducted to demonstrate the numerical performance of our algorithms.
statistics
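The abstract above gives no implementation details; the following is a minimal sketch of the general recipe it names, namely noisy projected gradient descent for a low-dimensional GLM (logistic regression). The clipping bound, noise scale, step size and projection radius are illustrative placeholders, not the privacy-calibrated choices of the paper.

```python
# Hedged sketch: differentially private projected gradient descent for
# logistic regression (a GLM). Noise scale and clipping are illustrative,
# not the privacy-calibrated values from the paper above.
import numpy as np

def dp_projected_gd(X, y, radius=1.0, steps=200, lr=0.1,
                    clip=1.0, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(steps):
        # Per-example logistic-loss gradients, clipped to bound sensitivity.
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grads = (p - y)[:, None] * X                      # shape (n, d)
        norms = np.maximum(np.linalg.norm(grads, axis=1), 1e-12)
        grads *= np.minimum(1.0, clip / norms)[:, None]
        # Average the clipped gradients and add Gaussian noise (Gaussian mechanism).
        g = grads.mean(axis=0) + rng.normal(0.0, noise_std * clip / n, size=d)
        beta -= lr * g
        # Project back onto an l2 ball to keep the iterates bounded.
        b_norm = np.linalg.norm(beta)
        if b_norm > radius:
            beta *= radius / b_norm
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 2000, 5
    beta_true = np.array([0.8, -0.5, 0.3, 0.0, 0.0])
    X = rng.normal(size=(n, d))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
    print(np.round(dp_projected_gd(X, y), 3))
```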
Recent studies have revealed that all large (over 1000 km in diameter) trans-Neptunian objects (TNOs) form satellite systems. Although the largest Plutonian satellite, Charon, is thought to be an intact fragment of an impactor directly formed via a giant impact, whether giant impacts can explain the variations in secondary-to-primary mass ratios and spin/orbital periods among all large TNOs remains to be determined. Here we systematically perform hydrodynamic simulations to investigate satellite formation via giant impacts. We find that the simulated secondary-to-primary mass ratio varies over a wide range, which overlaps with observed mass ratios. We also reveal that the satellite systems' current distribution of spin/orbital periods and small eccentricity can be explained only when their spins and orbits tidally evolve: initially as fluid-like bodies, but finally as rigid bodies. These results suggest that all satellites of large TNOs were formed via giant impacts in the early stage of solar system formation, before the outward migration of Neptune, and that they were fully or partially molten during the giant impact era.
astrophysics
A high light yield calcium iodide (CaI2) scintillator is being developed for astroparticle physics experiments. This paper reports the scintillation performance of the CaI2 crystal. A light output of 2.7 times that of NaI(Tl) and an emission wavelength in good agreement with the sensitive wavelength of the photomultiplier were obtained. A study of pulse shape discrimination using alpha and gamma sources was also performed. We confirmed that CaI2 has excellent pulse shape discrimination potential even with a quick analysis.
physics
High-frequency pulse electron paramagnetic resonance (EPR) and electron nuclear double resonance (ENDOR) were used to clarify the electronic structure of the color centers with an optically induced high-temperature spin-3/2 alignment in hexagonal 4H-, 6H- and rhombohedral 15R- silicon carbide (SiC) polytypes. The identification is based on resolved ligand hyperfine interactions with carbon and silicon nearest, next-nearest and more distant neighbors, and on the determination of the spin state. The ground state and the excited state were demonstrated to have spin S = 3/2. The microscopic model suggested by the EPR and ENDOR results is as follows: a paramagnetic negatively charged silicon vacancy that is noncovalently bonded to a non-paramagnetic neutral carbon vacancy located on the adjacent site along the SiC symmetry c-axis.
condensed matter
We study renormalisable models with minimal field content that can provide a viable Dark Matter candidate through the standard freeze-out paradigm and, simultaneously, accommodate the observed anomalies in semileptonic $B$-meson decays at one loop. Following the hypothesis of minimality, this outcome can be achieved by extending the particle spectrum of the Standard Model either with one vector-like fermion and two scalars or two vector-like fermions and one scalar. The Dark Matter annihilations are mediated by $t$-channel exchange of other new particles contributing to the $B$-anomalies, thus resulting in a correlation between flavour observables and Dark Matter abundance. Again based on minimality, we assume the new states to couple only with left-handed muons and second and third generation quarks. Besides an ad hoc symmetry needed to stabilise the Dark Matter, the interactions of the new states are dictated only by gauge invariance. We present here for the first time a systematic classification of the possible models of this kind, according to the quantum numbers of the new fields under the Standard Model gauge group. Within this general setup we identify a group of representative models that we systematically study, applying the most updated constraints from flavour observables, dedicated Dark Matter experiments, and LHC searches of leptons and/or jets and missing energy, and of disappearing charged tracks.
high energy physics phenomenology
We investigate the consequences of deviations from the Standard Model observed in $b\to s\mu\mu$ transitions for flavour-changing neutral-current processes involving down-type quarks and neutrinos. We derive the relevant Wilson coefficients within an effective field theory approach respecting the SM gauge symmetry, including right-handed currents, a flavour structure based on approximate $U(2)$ symmetry, and assuming only SM-like light neutrinos. We discuss correlations among $B \to K^{(*)} \nu \bar \nu$ and $K\to \pi \nu \bar \nu$ branching ratios in the case of linear Minimal Flavour Violation and in a more general framework, highlighting in each case the role played by various New Physics scenarios proposed to explain $b\to s\mu\mu$ deviations.
high energy physics phenomenology
We revisit the question of describing critical spin systems and field theories using matrix product states, and formulate a scaling hypothesis in terms of operators, eigenvalues of the transfer matrix, and lattice spacing in the case of field theories. Critical exponents and central charge are determined by optimizing the exponents such as to obtain a data collapse. We benchmark this method by studying critical Ising and Potts models, where we also obtain a scaling ansatz for the correlation length and entanglement entropy. The formulation of those scaling functions turns out to be crucial for studying critical quantum field theories on the lattice. For the case of $\lambda\phi^4$ with mass $\mu^2$ and lattice spacing $a$, we demonstrate a double data collapse for the correlation length $ \delta \xi(\mu,\lambda,D)=\tilde{\xi} \left((\alpha-\alpha_c)(\delta/a)^{-1/\nu}\right)$ with $D$ the bond dimension, $\delta$ the gap between eigenvalues of the transfer matrix, and $\alpha_c=\mu_R^2/\lambda$ the parameter which fixes the critical quantum field theory.
condensed matter
Integrated optics provides a platform for the experimental implementation of highly complex and compact circuits for practical applications as well as for advances in the fundamental science of quantum optics. The lithium niobate (LN) waveguide is an important candidate for the construction of integrated optical circuits. Based on the bound state in the continuum (BIC) in a LN waveguide, we propose an efficient way to produce polarization-entangled photon pairs. The implementation of this method is simple and does not require the poling process needed for periodically poled LN. The generation rate of the entangled photon pairs increases linearly with the length of the waveguide. For visible light, the generation efficiency can be improved by more than five orders of magnitude with waveguides only a few millimeters long, compared with the corresponding case without BICs. The phenomena can appear in a very wide spectral range, from the visible to THz regions. This study is of great significance for the development of active integrated quantum chips in various wavelength ranges.
physics
Digital Twins offer promising solutions for smart grid challenges related to the optimal operation, management, and control of energy assets, for the safe and reliable distribution of energy. These challenges are more pressing nowadays than ever due to the large-scale adoption of distributed renewable resources at the edge of the grid. Digital Twins leverage technologies such as the Internet of Things, big data analytics, machine learning, and cloud computing to analyze data from different energy sensors, view and verify the status of physical energy assets, and extract useful information to predict and optimize the assets' performance. In this paper, we provide an overview of the Digital Twin application domains in the smart grid while analyzing the existing state-of-the-art literature. We have focused on the following application domains: energy asset modeling, fault and security diagnosis, operational optimization, and business models. Most of the relevant approaches found in the literature were published in the last three years, showing that the application of Digital Twins in the smart grid is a hot and still-developing domain. However, there is no unified view on the implementation of Digital Twins and their integration with energy management processes; thus, much work still needs to be done to understand and automate smart grid management.
electrical engineering and systems science
The standard model of elementary interactions has long qualified as a theory of matter, in which the postulated conservation laws (one baryonic and three leptonic) acquire theoretical meaning. However, recent observations of lepton number violations -- neutrino oscillations -- demonstrate its incompleteness. We discuss why these considerations suggest the correctness of Ettore Majorana's ideas on the nature of neutrino mass, and add further interest to the search for an ultra-rare nuclear process in which two particles of matter (electrons) are created, commonly called neutrinoless double beta decay. The approach of the discussion is mainly historical and its character is introductory. Some technical considerations, which highlight the usefulness of Majorana's representation of gamma matrices, are presented in the appendix.
high energy physics phenomenology
During speech, people spontaneously gesticulate, which plays a key role in conveying information. Similarly, realistic co-speech gestures are crucial to enable natural and smooth interactions with social agents. Current end-to-end co-speech gesture generation systems use a single modality for representing speech: either audio or text. These systems are therefore confined to producing either acoustically-linked beat gestures or semantically-linked gesticulation (e.g., raising a hand when saying "high"): they cannot appropriately learn to generate both gesture types. We present a model designed to produce arbitrary beat and semantic gestures together. Our deep-learning based model takes both acoustic and semantic representations of speech as input, and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. Subjective and objective evaluations confirm the success of our approach. The code and video are available at the project page https://svito-zar.github.io/gesticulator .
computer science
Supernovae and cooling neutron stars have long been used to constrain the properties of axions, such as their mass and interactions with nucleons and other Standard Model particles. We investigate the prospects of using neutron star mergers as a similar location where axions can be probed in the future. We examine the impact axions would have on mergers, considering both the possibility that they free-stream through the dense nuclear matter and the case where they are trapped. We calculate the mean free path of axions in merger conditions, and find that they would free-stream through the merger in all thermodynamic conditions. In contrast to previous calculations, we integrate over the entire phase space while using a relativistic treatment of the nucleons, assuming the matrix element is momentum-independent. In particular, we use a relativistic mean field theory to describe the nucleons, taking into account the precipitous decrease in the effective mass of the nucleons as density increases above nuclear saturation density. We find that within current constraints on the axion-neutron coupling, axions could cool nuclear matter on timescales relevant to neutron star mergers. Our results may be regarded as first steps aimed at understanding how axions affect merger simulations and potentially interface with observations.
high energy physics phenomenology
Electric vehicles (EVs) provide a cleaner alternative that not only reduces greenhouse gas emissions but also improves air quality and reduces noise pollution. The consumer market for electrical vehicles is growing very rapidly. Designing a network with adequate capacity and types of public charging stations is a challenge that needs to be addressed to support the current trend in the EV market. In this research, we propose a choice modeling approach embedded in a two-stage stochastic programming model to determine the optimal layout and types of EV supply equipment for a community while considering randomness in demand and drivers' behaviors. Some of the key random data parameters considered in this study are: the EV's dwell time at a parking location, the battery's state of charge, distance from home, willingness to walk, drivers' arrival patterns, and traffic on weekdays and weekends. The two-stage model uses the sample average approximation method, which asymptotically converges to an optimal solution. To address the computational challenges for large-scale instances, we propose an outer approximation decomposition algorithm. We conduct extensive computational experiments to quantify the efficacy of the proposed approach. In addition, we present the results and a sensitivity analysis for a case study based on publicly available data sources.
electrical engineering and systems science
We describe one.
mathematics
Oversmoothing has been assumed to be the major cause of the performance drop in deep graph convolutional networks (GCNs). In this paper, we propose a new view: deep GCNs can actually learn to anti-oversmooth during training. This work interprets a standard GCN architecture as a layerwise integration of a Multi-layer Perceptron (MLP) and graph regularization. We analyze and conclude that, before training, the final representation of a deep GCN does over-smooth; however, the network learns anti-oversmoothing during training. Based on this conclusion, we further design a cheap but effective trick to improve GCN training. We verify our conclusions and evaluate the trick on three citation networks, and further provide insights on neighborhood aggregation in GCNs.
computer science
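As a rough illustration of the propagation step discussed above, here is a small numpy sketch of a GCN layer together with a check that repeated, untrained propagation smooths node features (the "before training" over-smoothing). The toy random graph, feature sizes and the directional-spread measure are assumptions made for this example.

```python
# Hedged sketch: the propagation matrix and layer of a standard GCN, plus a
# quick check that repeated untrained propagation makes node features align
# (over-smoothing). Graph size and features are toy placeholders.
import numpy as np

def normalized_adjacency(A):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix.
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    # One full layer: neighbourhood aggregation, then a linear map and ReLU
    # (the "MLP plus graph regularization" view splits these two parts).
    return np.maximum(A_hat @ H @ W, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Small random undirected graph and random node features.
    A = (rng.random((20, 20)) < 0.3).astype(float)
    A = np.triu(A, 1); A = A + A.T
    A_hat = normalized_adjacency(A)
    H = rng.normal(size=(20, 8))
    for k in range(1, 11):
        # Propagation only (weights/ReLU dropped) to isolate the smoothing effect.
        H = A_hat @ H
        rows = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
        spread = np.linalg.norm(rows - rows.mean(axis=0), axis=1).mean()
        # The directional spread typically shrinks with depth: over-smoothing.
        print(f"{k:2d} propagations: directional spread = {spread:.4f}")
```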
We study the interplay between four-derivative 4d gauged supergravity, holography, wrapped M5-branes, and theories of class $\mathcal{R}$. Using results from Chern-Simons theory on hyperbolic three-manifolds and the 3d-3d correspondence we are able to constrain the two independent coefficients in the four-derivative supergravity Lagrangian. This in turn allows us to calculate the subleading terms in the large-$N$ expansion of supersymmetric partition functions for an infinite class of three-dimensional $\mathcal{N}=2$ SCFTs of class $\mathcal{R}$. We also determine the leading correction to the Bekenstein-Hawking entropy of asymptotically AdS$_4$ black holes arising from wrapped M5-branes. In addition, we propose and test some conjectures about the perturbative partition function of Chern-Simons theory with complexified ADE gauge groups on closed hyperbolic three-manifolds.
high energy physics theory
The extended state observer (ESO) plays an important role in the design of feedback control for nonlinear systems. However, its high-gain nature creates a challenge in engineering practice in cases where the output measurement is corrupted by non-negligible, high-frequency noise. The presence of such noise puts a constraint on how high the observer gains can be, which forces a trade-off between fast convergence of state estimates and quality of control task realization. In this work, a new observer design is proposed to improve the estimation performance in the presence of noise. In particular, a unique cascade combination of ESOs is developed, which is capable of fast and accurate signal reconstruction while avoiding over-amplification of the measurement noise. The effectiveness of the introduced observer structure is verified here while working as part of an active disturbance rejection control (ADRC) scheme. The conducted numerical validation and theoretical analysis of the new observer structure show an improvement over the standard solution in terms of noise attenuation.
electrical engineering and systems science
Utilizing multisnapshot quantized data in line spectral estimation (LSE) to improve the estimation accuracy is of vital importance in signal processing, e.g., for channel estimation in energy-efficient massive MIMO systems and for direction of arrival estimation. Recently, gridless variational line spectral estimation (VALSE), which treats the frequencies as random variables, has been proposed. VALSE has the advantages of low computational complexity, high accuracy, and automatic estimation of the model order and noise variance. In this paper, we utilize expectation propagation (EP) to develop multisnapshot VALSE-EP (MVALSE-EP) to deal with LSE from multisnapshot quantized data. The basic idea of MVALSE-EP is to iteratively approximate the quantized model as a sequence of simple pseudo unquantized models sharing the same frequency profile, where the noise in each pseudo linear model is i.i.d. and heteroscedastic (different components having different variances). Moreover, the Cram\'{e}r-Rao bound (CRB) is derived as a performance benchmark for the proposed algorithm. Finally, numerical results demonstrate the effectiveness of MVALSE-EP, in particular for direction of arrival (DOA) estimation problems.
electrical engineering and systems science
In this article we extend results of Zomorrodian to determine upper bounds for the order of a nilpotent group of automorphisms of a complex $d$-dimensional family of compact Riemann surfaces, where $d \geqslant 1.$ We provide conditions under which these bounds are sharp and, in addition, for the one-dimensional case we construct and describe an explicit family attaining the bound for infinitely many genera. We obtain similar results for the case of $p$-groups of automorphisms.
mathematics
We propose new mixing schemes for (3+1) neutrinos which describe mixing among active-active and active-sterile neutrinos. The mixing matrix in these mixing schemes can be factored into a zeroth order flavor symmetric part and another part representing small perturbations needed for generating non-zero $U_{e3}$, nonmaximal $\theta_{23}$, CP violation and active-sterile mixing. We find interesting correlations amongst various neutrino mixing angles and, also, calculate the parameter space for various parameters.
high energy physics phenomenology
We demonstrate that a recently proposed classical double copy procedure to construct the effective action of two massive particles in dilaton-gravity from the analogous problem of two color charged particles in Yang-Mills gauge theory fails at next-to-next-to-leading orders in the post-Minkowskian (3PM) or post-Newtonian (2PN) expansions.
high energy physics theory
Surgical skill directly affects surgical procedure outcomes; thus, effective training is needed to ensure satisfactory results. Many objective assessment metrics have been developed and some are widely used in surgical training simulators. These objective metrics provide the trainee with descriptive feedback about their performance; however, they often lack feedback on how to proceed to improve performance. The most effective training method is one that is intuitive, easy to understand, personalized to the user and provided in a timely manner. We propose a framework to enable user-adaptive training using near-real-time detection of performance, based on intuitive styles of surgical movements (e.g., fluidity, smoothness, crispness), and propose a haptic feedback framework to assist with correcting styles of movement. We evaluate the ability of three types of force feedback (spring, damping, and spring-plus-damping feedback), computed based on prior user positions, to improve different stylistic behaviors of the user during kinematically constrained reaching movement tasks. The results indicate that four out of the six styles studied here were improved to a statistically significant degree (p<0.05) using spring guidance force feedback, and a significant reduction in task time was also found using spring feedback. Path straightness and targeting error were other task performance metrics studied, and these improved significantly using the spring-plus-damping feedback. This study presents the groundwork for adaptive training in robotic surgery based on near-real-time human-centric models of surgical behavior.
computer science
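A minimal sketch of the three guidance-force types compared above (spring, damping, and spring plus damping); the gains, units and reference position are placeholders, not the values used in the study.

```python
# Hedged sketch: the three guidance force types evaluated above (spring,
# damping, spring plus damping), computed from the current state and a
# reference position taken from prior user data. Gains are placeholders.
import numpy as np

def guidance_force(pos, vel, ref_pos, k_spring=50.0, c_damp=5.0, mode="spring+damping"):
    """Return a haptic guidance force for a 3-D tool position.

    pos, vel, ref_pos : array-like of shape (3,)
    mode : "spring", "damping", or "spring+damping"
    """
    pos, vel, ref_pos = map(np.asarray, (pos, vel, ref_pos))
    f_spring = -k_spring * (pos - ref_pos)   # pull toward the reference path
    f_damp = -c_damp * vel                   # resist fast, jerky motion
    if mode == "spring":
        return f_spring
    if mode == "damping":
        return f_damp
    return f_spring + f_damp

if __name__ == "__main__":
    f = guidance_force(pos=[0.02, 0.00, 0.01], vel=[0.1, 0.0, -0.05],
                       ref_pos=[0.0, 0.0, 0.0])
    print(np.round(f, 3))   # guidance force (arbitrary units) toward the reference
```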
In the study of integrable non-linear $\sigma$-models which are assemblies and/or deformations of principal chiral models and/or WZW models, a rational function called the twist function plays a central role. For a large class of such models, we show that they are one-loop renormalizable, and that the renormalization group flow equations can be written directly in terms of the twist function in a remarkably simple way. The resulting equation appears to have a universal character when the integrable model is characterized by a twist function.
high energy physics theory
Code smells represent sub-optimal implementation choices applied by developers when evolving software systems. The negative impact of code smells has been widely investigated in the past: besides hurting developers' productivity and ability to comprehend source code, researchers empirically showed that the presence of code smells heavily impacts the change-proneness of the affected classes. On the basis of these findings, in this paper we conjecture that code smell-related information can be effectively exploited to improve the performance of change prediction models, i.e., models whose goal is to indicate to developers which classes are more likely to change in the future, so that they may apply preventive maintenance actions. Specifically, we exploit the so-called intensity index - a previously defined metric that captures the severity of a code smell - and evaluate its contribution when added as an additional feature in the context of three state-of-the-art change prediction models based on product, process, and developer-based features. We also compare the performance achieved by the proposed model with that of an alternative technique that considers the previously defined antipattern metrics, namely a set of indicators computed from the history of code smells in files. Our results show that (i) the prediction performance of the intensity-including models is statistically better than that of the baselines and (ii) the intensity is a more powerful metric than the alternative smell-related ones.
computer science
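As a rough illustration of the experimental setup described above, the sketch below adds a synthetic smell-intensity feature to a change prediction model and compares cross-validated performance against a baseline without it. The data, the learner (a random forest) and the effect size are assumptions made only for demonstration.

```python
# Hedged sketch: adding a code-smell "intensity" feature to a change
# prediction model and comparing against a baseline without it.
# Data, features, and the learner are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Baseline product/process features (e.g. size, churn, number of authors).
X_base = rng.normal(size=(n, 3))
# Synthetic smell intensity, made mildly predictive of change-proneness.
intensity = rng.gamma(shape=2.0, scale=1.0, size=n)
logit = 0.8 * X_base[:, 0] + 0.6 * intensity - 1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_with_intensity = np.column_stack([X_base, intensity])
for name, X in [("baseline", X_base), ("baseline + intensity", X_with_intensity)]:
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                             cv=5, scoring="roc_auc")
    print(f"{name:22s} mean AUC = {scores.mean():.3f}")
```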
In a companion publication, we have explored how to examine the summation of large logarithms in a parton shower. Here, we apply this general program to the thrust distribution in electron-positron annihilation, using several shower algorithms. The method is to work with an appropriate integral transform of the distribution for the observable of interest. Then, we reformulate the parton shower calculation so as to obtain the transformed distribution as an exponential for which we can compute the terms in the perturbative expansion of the exponent.
high energy physics phenomenology
In this paper, we consider a model called CHARME (Conditional Heteroscedastic Autoregressive Mixture of Experts), a class of generalized mixture of nonlinear nonparametric AR-ARCH time series. Under certain Lipschitz-type conditions on the autoregressive and volatility functions, we prove that this model is stationary, ergodic and $\tau$-weakly dependent. These conditions are much weaker than those presented in the literature that treats this model. Moreover, this result forms the theoretical basis for deriving an asymptotic theory of the underlying (non)parametric estimation, which we present for this model. As an application, from the universal approximation property of neural networks (NN), we develop a learning theory for the NN-based autoregressive functions of the model, where the strong consistency and asymptotic normality of the considered estimator of the NN weights and biases are guaranteed under weak conditions.
statistics
We consider dark matter candidates which have non-zero electromagnetic form factors, such as electric/magnetic dipole moments and an anapole moment for fermionic dark matter, and a Rayleigh form factor for scalar dark matter. We consider dark matter masses $m_\chi > {\cal O}({\rm MeV})$ and put constraints on the mass and electromagnetic couplings from CMB and LSS observations. Fermionic dark matter with non-zero electromagnetic form factors can annihilate to $e^+ e^-$ and scalar dark matter can annihilate to $2\gamma$ at the time of recombination and distort the CMB. We analyze dark matter with multipole moments using Planck and BAO observations. We find upper bounds on the anapole moment $g_{A}<7.163\times 10^{3} \text{GeV}^{-2}$, the electric dipole moment ${\cal D}<7.978\times 10^{-9} \text{e-cm}$, and the magnetic dipole moment ${\mu}<2.959\times 10^{-7} \mu_B$, and the bound on the Rayleigh form factor of dark matter is $g_4/\Lambda_4^2<1.085\times 10^{-2}\text{GeV}^{-2}$, all at $95\%$ C.L.
high energy physics phenomenology
The decentralized and trustless nature of cryptocurrencies and blockchain technology leads to a shift in the digital world. The possibility to execute small programs, called smart contracts, on cryptocurrencies like Ethereum opened doors to countless new applications. One particular exciting use case is decentralized finance (DeFi), which aims to revolutionize traditional financial services by founding them on a decentralized infrastructure. We show the potential of DeFi by analyzing its advantages compared to traditional finance. Additionally, we survey the state-of-the-art of DeFi products and categorize existing services. Since DeFi is still in its infancy, there are countless hurdles for mass adoption. We discuss the most prominent challenges and point out possible solutions. Finally, we analyze the economics behind DeFi products. By carefully analyzing the state-of-the-art and discussing current challenges, we give a perspective on how the DeFi space might develop in the near future.
computer science
We use large-scale exact diagonalization to study the quantum Ising chain in a transverse field with long-range power-law interactions decaying with exponent $\alpha$. Analyzing various eigenstate and eigenvalue properties, we find numerical evidence for ergodic behavior in the thermodynamic limit for $\alpha>0$, i.e., for the slightest breaking of the permutation symmetry at $\alpha=0$. Considering an excited-state fidelity susceptibility, an energy-resolved average level-spacing ratio and the eigenstate expectations of observables, we observe that a behavior consistent with eigenstate thermalization first emerges at high energy densities for finite system sizes, as soon as $\alpha>0$. We argue that ergodicity moves towards lower energy densities for increasing system sizes. While we argue the system to be ergodic for any $\alpha>0$, we also find a peculiar behaviour near $\alpha=2$, suggesting the proximity to a yet unknown integrable point. We further study the symmetry-breaking properties of the eigenstates. We argue that for weak transverse fields the eigenstates break the $\mathbb{Z}_2$ symmetry, and show long-range order, at finite excitation energy densities for all the values of $\alpha$ we can technically address ($\alpha\leq 1.5$). Our contribution settles central theoretical questions on long-range quantum Ising chains and is also of direct relevance for the nonequilibrium dynamics in such systems, as realized experimentally with trapped ions.
condensed matter
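One of the diagnostics named above, the average consecutive level-spacing ratio, can be illustrated with a short generic sketch; here it is applied to random-matrix (GOE) and Poisson spectra as stand-ins for the Ising-chain eigenvalues, with the standard reference values quoted for comparison.

```python
# Hedged sketch: the average consecutive level-spacing ratio <r>, one of the
# ergodicity diagnostics mentioned above, applied to random-matrix and
# Poisson spectra as stand-ins for the Ising-chain eigenvalues.
import numpy as np

def mean_gap_ratio(energies):
    """<r> = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})> for sorted eigenvalues."""
    e = np.sort(np.asarray(energies))
    s = np.diff(e)
    s = s[s > 0]                      # guard against exact degeneracies
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # GOE spectrum: eigenvalues of a real symmetric random matrix (ergodic case).
    M = rng.normal(size=(1000, 1000))
    goe = np.linalg.eigvalsh((M + M.T) / 2.0)
    # Poisson spectrum: independent level spacings (integrable/localized case).
    poisson = np.cumsum(rng.exponential(size=1000))
    print(f"GOE     <r> = {mean_gap_ratio(goe):.3f}   (expected ~0.531)")
    print(f"Poisson <r> = {mean_gap_ratio(poisson):.3f}   (expected ~0.386)")
```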
During the last decade, numerous and varied observations, along with increasingly sophisticated numerical simulations, have awakened astronomers to the central role the circumgalactic medium (CGM) plays in regulating galaxy evolution. It contains the majority of the baryonic matter associated with a galaxy, along with most of the metals, and must continually replenish the star forming gas in galaxies that continue to sustain star formation. And while the CGM is complex, containing gas ranging over orders of magnitude in temperature and density, a simple emergent property may be governing its structure and role. Observations increasingly suggest that the ambient CGM pressure cannot exceed the limit at which cold clouds start to condense out and precipitate toward the center of the potential well. If feedback fueled by those clouds then heats the CGM and causes it to expand, the pressure will drop and the "rain" will diminish. Such a feedback loop tends to suspend the CGM at the threshold pressure for precipitation. The coming decade will offer many opportunities to test this potentially fundamental principle of galaxy evolution.
astrophysics
The formation of water-in-oil-in-water (W/O/W) double emulsions can be well controlled through an organized self-emulsification mechanism in the presence of rigid bottlebrush amphiphilic block copolymers. Nanoscale water droplets with well-controlled diameters form ordered spatial arrangements within the micron-scale oil droplets. Upon solvent evaporation, solid microspheres with hexagonal close-packed nanopore arrays are obtained, resulting in bright structural colors. The reflected color is precisely tunable across the whole visible light range through tailoring the contour length of the bottlebrush molecule. In-situ observation of the W/O interface using confocal laser scanning microscopy provides insights into the mechanism of the organized self-emulsification. This work provides a powerful strategy for the fabrication of structurally colored materials in an easy and scalable manner.
physics
We report on experimental studies of the effects induced by surface acoustic waves on the optical emission dynamics of GaN/InGaN nanowire quantum dots. We employ stroboscopic optical excitation with either time-integrated or time-resolved photoluminescence detection. In the absence of the acoustic wave, the emission spectra reveal signatures originated from the recombination of neutral exciton and biexciton confined in the probed nanowire quantum dot. When the nanowire is perturbed by the propagating acoustic wave, the embedded quantum dot is periodically strained and its excitonic transitions are modulated by the acousto-mechanical coupling. Depending on the recombination lifetime of the involved optical transitions, we can resolve acoustically driven radiative processes over time scales defined by the acoustic cycle. At high acoustic amplitudes, we also observe distortions in the transmitted acoustic waveform, which are reflected in the time-dependent spectral response of our sensor quantum dot. In addition, the correlated intensity oscillations observed during temporal decay of the exciton and biexciton emission suggest an effect of the acoustic piezoelectric fields on the quantum dot charge population. The present results are relevant for the dynamic spectral and temporal control of photon emission in III-nitride semiconductor heterostructures.
condensed matter
Ranking alternatives is a natural way for humans to explain their preferences. It is being used in many settings, such as school choice, course allocations and residency matches. In some cases, several `items' are given to each participant. Without having any information on the underlying cardinal utilities, arguing about fairness of allocation requires extending the ordinal item ranking to ordinal bundle ranking. The most commonly used such extension is stochastic dominance (SD), where a bundle X is preferred over a bundle Y if its score is better according to all additive score functions. SD is a very conservative extension, by which few allocations are necessarily fair while many allocations are possibly fair. We propose to make a natural assumption on the underlying cardinal utilities of the players, namely that the difference between two items at the top is larger than the difference between two items at the bottom. This assumption implies a preference extension which we call diminishing differences (DD), where X is preferred over Y if its score is better according to all additive score functions satisfying the DD assumption. We give a full characterization of allocations that are necessarily-proportional or possibly-proportional according to this assumption. Based on this characterization, we present a polynomial-time algorithm for finding a necessarily-DD-proportional allocation if it exists. Using simulations, we show that with high probability, a necessarily-proportional allocation does not exist but a necessarily-DD-proportional allocation exists, and moreover, that allocation is proportional according to the underlying cardinal utilities. We also consider chore allocation under the analogous condition --- increasing-differences.
computer science
Higgs sector of the Standard model (SM) is replaced by the gauge $SU(3)_f$ quantum flavor dynamics (QFD) with one parameter, the scale $\Lambda$. Anomaly freedom of QFD demands extension of the fermion sector of SM by three sterile right-handed neutrino fields. Poles of fermion propagators with chirality-changing self-energies $\Sigma(p^2)$ spontaneously generated by QFD at strong coupling define: (1) Three sterile-neutrino Majorana masses $M_{fR}$ of order $\Lambda$. (2) Three Dirac masses $m_f$, degenerate for $e_f, \nu_f, u_f, d_f$ in family $f$, exponentially small with respect to $\Lambda$. Goldstone theorem implies: All eight flavor gluons acquire masses of order $M_{fR}$. $W$ and $Z$ bosons acquire masses of order $\sum m_f$, the effective Fermi scale. Composite 'would-be' Nambu-Goldstone bosons have their 'genuine' partners, the composite Higgs particles: The SM-like Higgs $h$ and two new Higgses $h_3$ and $h_8$, all with masses at Fermi scale; three Higgses $\chi_i$ with masses at scale $\Lambda$. Large pole-mass splitting of charged leptons and quarks in $f$ is arguably due to full QED $\Sigma(p^2)$-dependent fermion-photon vertices enforced by Ward-Takahashi identities. The argument relies on illustrative computation of pole-mass splitting found non-analytic in fermion electric charges. Neutrinos are the Majorana particles with seesaw mass spectrum computed solely by QFD. Available data fix $\Lambda$ to, say, $\Lambda \sim 10^{14} \rm GeV$.
high energy physics phenomenology
We study the evolution of a small-scale emerging flux region (EFR) in the quiet Sun, from its emergence to its decay. We track processes and phenomena across all atmospheric layers, explore their interrelations and compare our findings with recent numerical modelling studies. We used imaging, spectral and spectropolarimetric observations from space-borne and ground-based instruments. The EFR appears next to the chromospheric network and shows all characteristics predicted by numerical simulations. The total magnetic flux of the EFR exhibits distinct evolutionary phases, namely an initial subtle increase, a fast increase and expansion of the region area, a more gradual increase, and a slow decay. During the initial stages, bright points coalesce, forming clusters of positive- and negative-polarity in a largely bipolar configuration. During the fast expansion, flux tubes make their way to the chromosphere, producing pressure-driven absorption fronts, visible as blueshifted chromospheric features. The connectivity of the quiet-Sun network gradually changes and part of the existing network forms new connections with the EFR. A few minutes after the bipole has reached its maximum magnetic flux, it brightens in soft X-rays forming a coronal bright point, exhibiting episodic brightenings on top of a long smooth increase. These coronal brightenings are also associated with surge-like chromospheric features, which can be attributed to reconnection with adjacent small-scale magnetic fields and the ambient magnetic field. The emergence of magnetic flux even at the smallest scales can be the driver of a series of energetic phenomena visible at various atmospheric heights and temperature regimes. Multi-wavelength observations reveal a wealth of mechanisms which produce diverse observable effects during the different evolutionary stages of these small-scale structures.
astrophysics
We adopt chiral perturbation theory to calculate the $\Sigma_{c}^{(*)}\bar{D}^{(*)}$ interaction to the next-to-leading order (NLO) and include the coupled-channel effect in the loop diagrams. We reproduce the three $P_c$ states in the molecular picture after including the $\Lambda_{c}\bar{D}^{(*)}$ intermediate states. We also discuss some novel observations arising from the loop diagrams.
high energy physics phenomenology
Faraday rotation has become a powerful tool in a large variety of physics applications. Most prominently, Faraday rotation can be used in precision magnetometry. Here we report the first measurements of gyromagnetic Faraday rotation on a dense, hyperpolarized $^3$He gas target. Theoretical calculations predict that the rotations of linearly polarized light due to the magnetization of spin-1/2 particles are on the scale of 10$^{-7}$ radians. To maximize the signal, a $^3$He target designed for use with a multipass cavity is combined with a sensitive apparatus for polarimetry that can detect optical rotations on the order of 10$^{-8}$ radians. Although the expected rotations are well above the sensitivity for the given experimental conditions, no nuclear-spin-induced rotation was observed.
physics
The sparse identification of nonlinear dynamics (SINDy) is a regression framework for the discovery of parsimonious dynamic models and governing equations from time-series data. As with all system identification methods, noisy measurements compromise the accuracy and robustness of the model discovery procedure. In this work, we develop a variant of the SINDy algorithm that integrates automatic differentiation and recent time-stepping constraints motivated by Rudy et al. for simultaneously (i) denoising the data, (ii) learning and parametrizing the noise probability distribution, and (iii) identifying the underlying parsimonious dynamical system responsible for generating the time-series data. Thus, within an integrated optimization framework, noise can be separated from signal, resulting in an architecture that is approximately twice as robust to noise as state-of-the-art methods, handling as much as 40% noise on a given time-series signal and explicitly parametrizing the noise probability distribution. We demonstrate this approach on several numerical examples, from Lotka-Volterra models to the spatio-temporal Lorenz 96 model. Further, we show the method can identify a diversity of probability distributions, including Gaussian, uniform, Gamma, and Rayleigh distributions.
electrical engineering and systems science
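For context, the sketch below shows the vanilla SINDy regression (sequentially thresholded least squares on a polynomial library) applied to the Lotka-Volterra system mentioned above; it is not the automatic-differentiation, noise-parametrizing variant the abstract describes, and the library, threshold and finite-difference derivatives are simplifying assumptions.

```python
# Hedged sketch: the basic SINDy regression step (sequentially thresholded
# least squares on a polynomial library), shown on the Lotka-Volterra system.
# This is the vanilla algorithm, not the noise-modelling variant above.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.5, c=0.75, d=0.25):
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

def poly_library(Z):
    x, y = Z[:, 0], Z[:, 1]
    names = ["1", "x", "y", "x^2", "x*y", "y^2"]
    Theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return Theta, names

def stlsq(Theta, dZ, threshold=0.1, iters=10):
    # Sequentially thresholded least squares: fit, zero small terms, refit.
    Xi = np.linalg.lstsq(Theta, dZ, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dZ.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dZ[:, j], rcond=None)[0]
    return Xi

if __name__ == "__main__":
    t = np.linspace(0, 20, 4000)
    sol = solve_ivp(lotka_volterra, (0, 20), [2.0, 1.0], t_eval=t,
                    rtol=1e-8, atol=1e-8)
    Z = sol.y.T
    dZ = np.gradient(Z, t, axis=0)                   # finite-difference derivatives
    Theta, names = poly_library(Z)
    Xi = stlsq(Theta, dZ)
    for j, var in enumerate(["dx/dt", "dy/dt"]):
        terms = [f"{Xi[i, j]:+.2f} {names[i]}" for i in range(len(names)) if Xi[i, j] != 0]
        print(var, "=", " ".join(terms))
```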
Stochastic optimization problems often involve data distributions that change in reaction to the decision variables. This is the case for example when members of the population respond to a deployed classifier by manipulating their features so as to improve the likelihood of being positively labeled. Recent works on performative prediction have identified an intriguing solution concept for such problems: find the decision that is optimal with respect to the static distribution that the decision induces. Continuing this line of work, we show that typical stochastic algorithms -- originally designed for static problems -- can be applied directly for finding such equilibria with little loss in efficiency. The reason is simple to explain: the main consequence of the distributional shift is that it corrupts algorithms with a bias that decays linearly with the distance to the solution. Using this perspective, we obtain sharp convergence guarantees for popular algorithms, such as stochastic gradient, clipped gradient, proximal point, and dual averaging methods, along with their accelerated and proximal variants. In realistic applications, deployment of a decision rule is often much more expensive than sampling. We show how to modify the aforementioned algorithms so as to maintain their sample efficiency while performing only logarithmically many deployments.
mathematics
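A toy sketch of the setting above: stochastic gradient descent on a one-dimensional "performative" problem in which the sampled distribution shifts with the deployed decision. The linear shift model and step-size schedule are assumptions chosen so that the performatively stable point has a closed form.

```python
# Hedged sketch: SGD on a toy "performative" problem where the data
# distribution reacts to the deployed decision. With z ~ N(mu + eps*theta, 1)
# and squared loss, the performatively stable point is theta* = mu / (1 - eps).
# All parameters below are illustrative only.
import numpy as np

mu, eps = 2.0, 0.4
theta_star = mu / (1.0 - eps)

rng = np.random.default_rng(0)
theta = 0.0
for t in range(1, 20001):
    z = rng.normal(mu + eps * theta, 1.0)     # distribution induced by current decision
    grad = theta - z                          # gradient of 0.5 * (theta - z)^2
    theta -= (1.0 / t) * grad                 # standard 1/t step size

print(f"SGD iterate  : {theta:.3f}")
print(f"stable point : {theta_star:.3f}")
```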
Tacotron-based text-to-speech (TTS) systems directly synthesize speech from text input. Such frameworks typically consist of a feature prediction network that maps character sequences to frequency-domain acoustic features, followed by a waveform reconstruction algorithm or a neural vocoder that generates the time-domain waveform from the acoustic features. As the loss function is usually calculated only on the frequency-domain acoustic features, it does not directly control the quality of the generated time-domain waveform. To address this problem, we propose a new training scheme for Tacotron-based TTS, referred to as WaveTTS, that has two loss functions: 1) a time-domain loss, denoted as the waveform loss, that measures the distortion between the natural and generated waveforms; and 2) a frequency-domain loss, that measures the Mel-scale acoustic feature loss between the natural and generated acoustic features. WaveTTS ensures the quality of both the acoustic features and the resulting speech waveform. To the best of our knowledge, this is the first implementation of Tacotron with a joint time-frequency domain loss. Experimental results show that the proposed framework outperforms the baselines and achieves high-quality synthesized speech.
electrical engineering and systems science
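A minimal numpy sketch of the two-term objective described above: an L1 time-domain waveform loss plus a frequency-domain loss, with a plain STFT magnitude used as a simplified stand-in for the Mel-scale features; frame sizes and the weighting factor are placeholders.

```python
# Hedged sketch: the two-term objective described above, with an L1
# time-domain waveform loss plus a frequency-domain loss. A plain STFT
# magnitude is used here as a simplified stand-in for Mel-scale features;
# frame sizes and the weighting factor are placeholders.
import numpy as np

def stft_mag(x, frame=256, hop=128):
    window = np.hanning(frame)
    frames = [x[i:i + frame] * window
              for i in range(0, len(x) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def wavetts_style_loss(natural, generated, alpha=1.0):
    time_loss = np.mean(np.abs(natural - generated))             # waveform loss
    freq_loss = np.mean(np.abs(stft_mag(natural) - stft_mag(generated)))
    return time_loss + alpha * freq_loss

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    natural = np.sin(2 * np.pi * 220 * t)
    generated = (np.sin(2 * np.pi * 220 * t + 0.3)
                 + 0.05 * np.random.default_rng(0).normal(size=sr))
    print(f"combined loss = {wavetts_style_loss(natural, generated):.4f}")
```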
We studied the $ep\rightarrow ep+2jets$ diffractive cross section within the ZEUS phase space. Neglecting the $t$-channel momentum in the Born and gluon dipole impact factors, we calculated the corresponding contributions to the cross section differential in $\beta=\frac{Q^{2}}{Q^{2}+M_{2jets}^{2}}$ and in the angle $\phi$ between the leptonic and hadronic planes. The gluon dipole contribution was obtained in the exclusive $k_{t}$-algorithm with the exclusive cut $y_{cut}=0.15$ in the small-$y_{cut}$ approximation. In the collinear approximation we canceled singularities between the real and virtual contributions to the $q\bar{q}$ dipole configuration, keeping the exact $y_{cut}$ dependence. We used the Golec-Biernat-W\"usthoff (GBW) parametrization for the dipole matrix element and linearized the double dipole contributions. The results give roughly $\frac{1}{2}$ of the observed cross section for small $\beta$ and coincide with it for large $\beta$.
high energy physics phenomenology
Mini-EUSO is a space experiment selected to be installed inside the International Space Station. It has a compact telescope with a large field of view ($44 \times 44$ sq. deg.) focusing light onto an array of photomultiplier tubes in order to observe UV emission coming from Earth's atmosphere. Observations will be complemented with data recorded by some ancillary detectors. In particular, the Mini-EUSO Additional Data Acquisition System (ADS) is composed of two cameras, which will allow us to obtain data in the near infrared and in the visible range. These will be used to monitor the observation conditions, and to acquire useful information on several scientific topics to be studied with the main instrument, such as the physics of the atmosphere, meteors, and strange quark matter. Here we present the ADS control software developed to run the cameras together with the main UV instrument, in order to grab images in an automated and independent way, and we also describe the calibration activities performed on these two ancillary cameras before flight.
astrophysics
This project aims to break down large pathology images into small tiles and then cluster those tiles into distinct groups without knowledge of the true labels. Our analysis shows how difficult certain aspects of clustering tumorous and non-tumorous cells can be, and also shows that comparing the results of different unsupervised approaches is not a trivial task. The project also provides a software package to be used by the digital pathology community, which uses some of the approaches developed to perform unsupervised tile classification, whose output can then be easily labelled manually. The project uses a mixture of techniques, ranging from classical clustering algorithms such as K-Means and Gaussian Mixture Models to more complicated feature extraction techniques such as deep Autoencoders and multi-loss learning. Throughout the project, we attempt to set a benchmark for evaluation using a few measures such as completeness scores and cluster plots. Our results show that Convolutional Autoencoders manage to slightly outperform the rest of the approaches due to their powerful internal representation learning abilities. Moreover, we show that Gaussian Mixture Models produce better results than K-Means on average due to their flexibility in capturing different clusters. We also show the large differences in the difficulty of classifying different types of pathology textures.
electrical engineering and systems science
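As a rough illustration of the clustering-and-evaluation pipeline above, the sketch below clusters synthetic "tile embedding" vectors with K-Means and a Gaussian Mixture Model and scores both with the completeness measure; the synthetic blobs stand in for real autoencoder features.

```python
# Hedged sketch: clustering tile feature vectors with K-Means and a Gaussian
# Mixture Model and scoring both with the completeness measure mentioned
# above. Synthetic blobs stand in for real tile embeddings.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import completeness_score

# Synthetic "tile features": e.g. the bottleneck of a convolutional autoencoder.
X, y_true = make_blobs(n_samples=1000, centers=4, n_features=16,
                       cluster_std=[1.0, 2.5, 0.8, 1.8], random_state=0)

kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=4, covariance_type="full",
                             random_state=0).fit_predict(X)

# True labels are only used for evaluation, mirroring the unsupervised setting.
print(f"K-Means completeness : {completeness_score(y_true, kmeans_labels):.3f}")
print(f"GMM completeness     : {completeness_score(y_true, gmm_labels):.3f}")
```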
The goal of optimization-based meta-learning is to find a single initialization shared across a distribution of tasks to speed up the process of learning new tasks. Conditional meta-learning seeks task-specific initialization to better capture complex task distributions and improve performance. However, many existing conditional methods are difficult to generalize and lack theoretical guarantees. In this work, we propose a new perspective on conditional meta-learning via structured prediction. We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions by weighing meta-training data on target tasks. Our non-parametric approach is model-agnostic and can be combined with existing meta-learning methods to achieve conditioning. Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
computer science
A mixture of multivariate contaminated normal (MCN) distributions is a useful model-based clustering technique to accommodate data sets with mild outliers. However, this model only works when fitted to complete data sets, which is often not the case in real applications. In this paper, we develop a framework for fitting a mixture of MCN distributions to incomplete data sets, i.e. data sets with some values missing at random. We employ the expectation-conditional maximization algorithm for parameter estimation. We use a simulation study to compare the results of our model and a mixture of Student's t distributions for incomplete data.
statistics
We consider the fundamental problem of communicating an estimate of a real number $x\in[0,1]$ using a single bit. A sender that knows $x$ chooses a value $X\in\{0,1\}$ to transmit. In turn, a receiver estimates $x$ based on the value of $X$. We consider both the biased and unbiased estimation problems and aim to minimize the cost. For the biased case, the cost is the worst-case (over the choice of $x$) expected squared error, which coincides with the variance if the algorithm is required to be unbiased. We first overview common biased and unbiased estimation approaches and prove their optimality when no shared randomness is allowed. We then show how a small amount of shared randomness, which can be as low as a single bit, reduces the cost in both cases. Specifically, we derive lower bounds on the cost attainable by any algorithm with unrestricted use of shared randomness and propose near-optimal solutions that use a small number of shared random bits. Finally, we discuss open problems and future directions.
computer science
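The trade-off described above can be illustrated with a textbook construction: an unbiased one-bit scheme without shared randomness versus subtractive dithering with a single shared uniform variable. This is an illustration of the role of shared randomness, not necessarily the paper's optimal scheme.

```python
# Hedged sketch: Monte Carlo comparison of two unbiased one-bit schemes for
# estimating x in [0,1]. Without shared randomness: send X=1 with probability
# x and estimate x_hat = X (worst-case variance x(1-x) <= 1/4). With a shared
# uniform dither U: subtractive dithering gives variance 1/12 for every x.
import numpy as np

rng = np.random.default_rng(0)
x = 0.5                      # the value the sender wants to convey
n = 200_000                  # Monte Carlo repetitions

# Scheme 1: no shared randomness (private coin at the sender only).
X1 = (rng.random(n) < x).astype(float)
est1 = X1

# Scheme 2: shared dither U ~ Uniform(-1/2, 1/2), known to sender and receiver.
U = rng.random(n) - 0.5
X2 = (x + U >= 0.5).astype(float)    # the single transmitted bit
est2 = X2 - U                        # receiver subtracts the shared dither

for name, est in [("no shared randomness", est1), ("shared dither", est2)]:
    print(f"{name:22s} bias = {est.mean() - x:+.4f}   variance = {est.var():.4f}")
print("theory: x(1-x) =", x * (1 - x), "  1/12 =", round(1 / 12, 4))
```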
The optimal gain matrix of the Kalman filter is often derived by minimizing the trace of the posterior covariance matrix. Here, I show that the Kalman gain also minimizes the determinant of the covariance matrix, a quantity known as the generalized variance. When the error distributions are Gaussian, the differential entropy is also minimized.
electrical engineering and systems science
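The claim above is easy to check numerically: using the Joseph-form posterior covariance, perturbing the gain away from the Kalman gain should never decrease either the trace or the determinant. The matrices below are arbitrary test values.

```python
# Hedged sketch: numerical check that the standard Kalman gain minimizes not
# only the trace but also the determinant of the posterior covariance
# (Joseph form). Matrices below are arbitrary test values.
import numpy as np

rng = np.random.default_rng(0)

def posterior_cov(K, P, H, R):
    # Joseph-form update, valid for any gain K.
    n = P.shape[0]
    A = np.eye(n) - K @ H
    return A @ P @ A.T + K @ R @ K.T

# Random prior covariance P, measurement model H, and noise covariance R.
n, m = 4, 2
S = rng.normal(size=(n, n)); P = S @ S.T + n * np.eye(n)
H = rng.normal(size=(m, n))
S = rng.normal(size=(m, m)); R = S @ S.T + m * np.eye(m)

K_star = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # standard Kalman gain
P_star = posterior_cov(K_star, P, H, R)

worse_trace = worse_det = 0
for _ in range(1000):
    K = K_star + 0.1 * rng.normal(size=K_star.shape)  # perturbed gains
    P_alt = posterior_cov(K, P, H, R)
    worse_trace += np.trace(P_alt) >= np.trace(P_star) - 1e-12
    worse_det += np.linalg.det(P_alt) >= np.linalg.det(P_star) - 1e-12

print(f"trace not beaten in {worse_trace}/1000 trials")
print(f"determinant not beaten in {worse_det}/1000 trials")
```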
Sum rules connecting low-energy observables to high-energy physics are an interesting way to probe the mechanism of inflation and its ultraviolet origin. Unfortunately, such sum rules have proven difficult to study in a cosmological setting. Motivated by this problem, we investigate a precise analogue of inflation in anti-de Sitter spacetime, where it becomes dual to a slow renormalization group flow in the boundary quantum field theory. This dual description provides a firm footing for exploring the constraints of unitarity, analyticity, and causality on the bulk effective field theory. We derive a sum rule that constrains the bulk coupling constants in this theory. In the bulk, the sum rule is related to the speed of radial propagation, while on the boundary, it governs the spreading of nonlocal operators. When the spreading speed approaches the speed of light, the sum rule is saturated, suggesting that the theory becomes free in this limit. We also discuss whether similar results apply to inflation, where an analogous sum rule exists for the propagation speed of inflationary fluctuations.
high energy physics theory
This work analyzes how attention-based Bidirectional Long Short-Term Memory (BLSTM) models adapt to noise-augmented speech. We identify crucial components for noise adaptation in BLSTM models by freezing model components during fine-tuning. We first freeze larger model subnetworks and then pursue a fine-grained freezing approach in the encoder after identifying its importance for noise adaptation. The first encoder layer is shown to be crucial for noise adaptation, and its weights are shown to be more important than those of the other layers. Appreciable accuracy benefits are identified when fine-tuning on a target noisy environment from a model pretrained with noisy speech, relative to fine-tuning from a model pretrained with only clean speech, when tested on the target noisy environment. For this analysis, we produce our own dataset augmentation tool, which is open-sourced to encourage future efforts in exploring noise adaptation in ASR.
electrical engineering and systems science
Anomaly detection aims to identify observations that deviate from the typical pattern of the data. Anomalous observations may correspond to financial fraud, health risks, or incorrectly measured data in practice. We show that detecting anomalies in high-dimensional mixed data is enhanced by first embedding the data and then assessing an anomaly scoring scheme. We focus on unsupervised detection and the continuous and categorical (mixed) variable case. We propose a kurtosis-weighted Factor Analysis of Mixed Data for anomaly detection, FAMDAD, to obtain a continuous embedding for anomaly scoring. We illustrate that anomalies are highly separable in the first and last few ordered dimensions of this space, and test various anomaly scoring experiments within this subspace. Results are illustrated for both simulated and real datasets, and the proposed approach (FAMDAD) is highly accurate for high-dimensional mixed data throughout these diverse scenarios.
statistics
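A heavily simplified stand-in for the pipeline above: one-hot encode the categorical variables, standardize, apply plain PCA (a rough proxy for FAMD, without the kurtosis weighting), and score anomalies from the first and last few ordered components. The data, column names and the choice of k are assumptions made for illustration.

```python
# Hedged sketch: a simplified stand-in for the approach above. Mixed data are
# embedded (one-hot categoricals + standardized numerics + PCA), and anomalies
# are scored from the first and last few ordered components. Synthetic data.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "amount": rng.normal(100, 20, n),
    "duration": rng.exponential(5, n),
    "channel": rng.choice(["web", "app", "branch"], n),
})
# Inject a few anomalies in both the continuous and categorical parts.
df.loc[:9, "amount"] = rng.normal(400, 10, 10)
df.loc[10:14, "channel"] = "phone"          # a rare category

X = pd.get_dummies(df, columns=["channel"]).astype(float)
Z = StandardScaler().fit_transform(X)

pca = PCA().fit(Z)
keep = pca.explained_variance_ > 1e-8       # drop numerically degenerate directions
scores_pc = pca.transform(Z)[:, keep] / np.sqrt(pca.explained_variance_[keep])

k = 2                                       # first and last k ordered components
anomaly_score = (scores_pc[:, :k] ** 2).sum(axis=1) + (scores_pc[:, -k:] ** 2).sum(axis=1)
top = np.argsort(anomaly_score)[::-1][:15]
print("top-15 flagged rows:", sorted(top.tolist()))
```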
Thermal transport through nanosystems is central to numerous processes in chemistry, materials science, and electrical and mechanical engineering, with classical molecular dynamics as the key simulation tool. Here we focus on thermal junctions with a molecule bridging two solids that are maintained at different temperatures. The classical steady-state heat current in this system can be simulated in different ways, either at the interfaces with the solids, which are represented by thermostats, or between atoms within the conducting molecule. We show that while the latter, intramolecular definition readily converges to the correct limit, the molecule-thermostat interface definition is much harder to converge to the correct result. The problem with the interface definition is demonstrated by simulating heat transport in harmonic and anharmonic one-dimensional chains, illustrating unphysical effects such as thermal rectification in harmonic junctions.
condensed matter
This paper is devoted to the primary spectro-polarimetric observations performed at the New Vacuum Solar Telescope (NVST) of China since 2017, and our aim is to precisely evaluate the real polarimetric accuracy and sensitivity of this polarimeter by using full Stokes spectro-polarimetric observations of the photospheric line Fe I 532.4 nm. In this work, we briefly describe the salient technical characteristics of the NVST as a polarimeter and then characterize its instrumental polarization based on the operations in 2017 and 2019. It is verified that the calibration method making use of the instrumental polarization calibration unit (ICU) is stable and reliable. The calibration accuracy can reach up to 3$\times 10^{-3}$. Based on the scientific observation of NOAA 12645 on April 5th, 2017, we estimate that the residual cross-talk from Stokes $I$ to Stokes $Q$, $U$ and $V$, after the instrumental polarization calibration, is about 4$\times10^{-3}$ on average, which is consistent with the calibration accuracy and close to the photon noise. The polarimetric sensitivity (i.e., the detection limit) for polarized light is of the order of $10^{-3}$ with an integration time over 20 seconds. The slow modulation rate is indeed an issue for the present system. The present NVST polarimeter is expected to be integrated with a high-order adaptive optics system and a field scanner to realize 2D magnetic field vector measurements in the following instrumentation update.
astrophysics
A simple generating procedure for Lagrangians of conformal gauge fields of mixed-symmetry type is presented. The construction originates from the analysis of the near-boundary behaviour of the associated AdS gauge fields using the ambient-space approach to leading boundary values. A manifestly ambient form of the Lagrangian is also obtained. As an illustration, we apply the procedure to the simplest mixed-symmetry conformal gauge field, described by a two-row Young diagram, and derive the explicit component form of the corresponding Lagrangian.
high energy physics theory
Chemical plants are complex and dynamical systems consisting of many components for manipulation and sensing, whose state transitions depend on various factors such as time, disturbance, and operation procedures. For the purpose of supporting human operators of chemical plants, we are developing an AI system that can semi-automatically synthesize operation procedures for efficient and stable operation. Our system can provide not only appropriate operation procedures but also reasons why the procedures are considered to be valid. This is achieved by integrating automated reasoning and deep reinforcement learning technologies with a chemical plant simulator and external knowledge. Our preliminary experimental results demonstrate that it can synthesize a procedure that achieves a much faster recovery from a malfunction compared to standard PID control.
computer science
The Short-Baseline Near Detector time projection chamber is unique in the design of its charge readout planes. These anode plane assemblies (APAs) have been fabricated and assembled to meet strict accuracy and precision requirements: wire spacing of 3 mm +/- 0.5 mm and wire tension of 7 N +/- 1 N across 3,964 wires per APA, and flatness within 0.5 mm over the 4 m x 2.5 m extent of each APA. This paper describes the design, manufacture and assembly of these key detector components, with a focus on the quality assurance at each stage.
physics
Reinforcement learning (RL) methods often rely on massive exploration data to search for optimal policies and suffer from poor sampling efficiency. This paper presents a mixed reinforcement learning (mixed RL) algorithm that simultaneously uses dual representations of the environmental dynamics to search for the optimal policy, with the purpose of improving both learning accuracy and training speed. The dual representations are the environmental model and the state-action data: the former can accelerate the learning process of RL, while its inherent model uncertainty generally leads to worse policy accuracy than the latter, which comes from direct measurements of states and actions. In the framework design of the mixed RL, the compensation of the additive stochastic model uncertainty is embedded inside the policy iteration RL framework by using explored state-action data via an iterative Bayesian estimator (IBE). The optimal policy is then computed in an iterative way by alternating between policy evaluation (PEV) and policy improvement (PIM). The convergence of the mixed RL is proved using Bellman's principle of optimality, and the recursive stability of the generated policy is proved via Lyapunov's direct method. The effectiveness of the mixed RL is demonstrated by a typical optimal control problem of stochastic non-affine nonlinear systems (i.e., a double lane change task with an automated vehicle).
electrical engineering and systems science
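The mixed RL abstract above alternates policy evaluation (PEV) and policy improvement (PIM). The sketch below shows only that alternation on a toy tabular MDP; the paper's fusion of model and data through the iterative Bayesian estimator is not reproduced, so this is a generic illustration rather than the authors' algorithm.

```python
# Minimal tabular policy iteration, showing only the PEV/PIM alternation named
# in the abstract; the iterative Bayesian estimator and the model/data fusion
# of the mixed RL algorithm are not reproduced here.
import numpy as np

nS, nA, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # P[s, a, s'] transition probabilities
R = rng.normal(size=(nS, nA))                    # expected rewards r(s, a)

policy = np.zeros(nS, dtype=int)
for _ in range(50):
    # PEV: solve (I - gamma * P_pi) V = R_pi for the current policy.
    P_pi = P[np.arange(nS), policy]
    R_pi = R[np.arange(nS), policy]
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)
    # PIM: act greedily with respect to the evaluated value function.
    Q = R + gamma * P @ V
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("policy:", policy, " state values:", np.round(V, 3))
```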
Theories suggest that filament fragmentation should occur on a characteristic fragmentation length-scale. This fragmentation length-scale can be related to filament properties, such as the width and the dynamical state of the filament. Here we present a study of a number of fragmentation analysis techniques applied to filaments, and their sensitivity to characteristic fragmentation length-scales. We test the sensitivity to both single-tier and two-tier fragmentation, i.e. when the fragmentation can be characterised with one or two fragmentation length-scales respectively. The nearest neighbour separation, minimum spanning tree separation and two-point correlation function are all able to robustly detect characteristic fragmentation length-scales. The Fourier power spectrum and the Nth nearest neighbour technique are both poor techniques, and require very little scatter in the core spacings for the characteristic length-scale to be successfully determined. We develop a null hypothesis test to compare the results of the nearest neighbour and minimum spanning tree separation distribution with randomly placed cores. We show that a larger number of cores is necessary to successfully reject the null hypothesis if the underlying fragmentation is two-tier, N>20. Once the null is rejected we show how one may decide if the observed fragmentation is best described by single-tier or two-tier fragmentation, using either Akaike's information criterion or the Bayes factor. The analysis techniques, null hypothesis tests, and model selection approaches are all included in a new open-source Python/C library called FragMent.
astrophysics
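As a minimal illustration of one of the techniques surveyed in the fragmentation abstract above (not the FragMent library itself), the sketch below computes nearest-neighbour core separations along a filament and compares them with randomly placed cores using a two-sample KS test as a simple null-hypothesis check.

```python
# Nearest-neighbour separation statistic for cores along a filament, compared
# against randomly placed cores with a two-sample KS test (illustrative only;
# this is not the FragMent library).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
L, n_cores = 10.0, 25                        # filament length and number of cores

def nn_separations(positions):
    positions = np.sort(positions)
    gaps = np.diff(positions)
    left = np.r_[np.inf, gaps]               # distance to the neighbour on the left
    right = np.r_[gaps, np.inf]              # distance to the neighbour on the right
    return np.minimum(left, right)

# "Observed" cores: regular spacing with small scatter (single-tier fragmentation).
observed = np.linspace(0.5, L - 0.5, n_cores) + rng.normal(0, 0.05, n_cores)
obs_sep = nn_separations(observed)

# Null hypothesis: the same number of cores placed uniformly at random.
null_sep = np.concatenate([
    nn_separations(rng.uniform(0, L, n_cores)) for _ in range(500)
])

stat, pval = ks_2samp(obs_sep, null_sep)
print(f"median observed separation: {np.median(obs_sep):.2f}")
print(f"KS p-value against random placement: {pval:.2e}")
```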
Applying standard statistical methods after model selection may yield inefficient estimators and hypothesis tests that fail to achieve nominal type-I error rates. The main issue is the fact that the post-selection distribution of the data differs from the original distribution. In particular, the observed data is constrained to lie in a subset of the original sample space that is determined by the selected model. This often makes the post-selection likelihood of the observed data intractable and maximum likelihood inference difficult. In this work, we get around the intractable likelihood by generating noisy unbiased estimates of the post-selection score function and using them in a stochastic ascent algorithm that yields correct post-selection maximum likelihood estimates. We apply the proposed technique to the problem of estimating linear models selected by the lasso. In an asymptotic analysis the resulting estimates are shown to be consistent for the selected parameters and to have a limiting truncated normal distribution. Confidence intervals constructed based on the asymptotic distribution obtain close to nominal coverage rates in all simulation settings considered, and the point estimates are shown to be superior to the lasso estimates when the true model is sparse.
statistics
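The post-selection inference abstract above replaces an intractable likelihood with noisy unbiased score estimates inside a stochastic ascent. Below is a one-dimensional toy version (not the lasso application): for X ~ N(mu, 1) observed only when X > c, the post-selection score at mu is x - E_mu[X | X > c], and a single draw from the truncated normal gives an unbiased estimate of the conditional expectation.

```python
# Toy post-selection MLE by stochastic ascent with noisy unbiased score
# estimates (1-D illustration only, not the paper's lasso application).
import numpy as np
from scipy.stats import truncnorm, norm
from scipy.optimize import brentq

rng = np.random.default_rng(3)
c, x = 1.0, 1.5            # selection threshold and the selected observation
mu = x                     # naive (selection-biased) starting point

for t in range(1, 20001):
    # Unbiased noisy score: x minus one draw from N(mu, 1) truncated to (c, inf).
    draw = truncnorm.rvs(c - mu, np.inf, loc=mu, scale=1.0, random_state=rng)
    mu += (1.0 / t) * (x - draw)          # Robbins-Monro step size

# Check against the deterministic post-selection MLE, the root of x = E_mu[X | X > c].
trunc_mean = lambda m: m + norm.pdf(c - m) / norm.sf(c - m)
mu_exact = brentq(lambda m: trunc_mean(m) - x, -10.0, x)
print(f"stochastic-ascent estimate: {mu:.3f}, deterministic MLE: {mu_exact:.3f}")
```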
We obtain a new 3D gravity model from two copies of parity-odd Einstein-Cartan theories. Using Hamiltonian analysis, we demonstrate that the only local degrees of freedom are two massive spin-2 modes. Unitarity of the model in anti-de Sitter and Minkowski backgrounds can be satisfied for vast choices of the parameters without fine-tuning. The recent "exotic massive 3D gravity" model arises as a limiting case of the new model. We also show that there exist trajectories on the parameter space of the new model which cross the boundary between unitary and non-unitary regions. At the crossing point, one massive graviton decouples resulting in a unitary model with just one bulk degree of freedom but two positive central charges at odds with the usual expectation that the critical model has at least one vanishing central charge. Given the fact that a suitable non-relativistic version of bi-gravity has been used as an effective theory for gapped spin-2 fractional quantum Hall states, our model may have interesting applications in condensed matter physics.
high energy physics theory
We analyze the intriguing possibility of explaining both dark mass components in a galaxy, the dark matter (DM) halo and the supermassive dark compact object lying at the center, by a unified approach in terms of a quasi-relaxed system of massive, neutral fermions in general relativity. The solutions for the mass distribution of such a model that fulfill realistic halo boundary conditions inferred from observations develop a high-density core supported by the fermion degeneracy pressure, able to mimic massive black holes at the center of galaxies. Remarkably, these dense core-diluted halo configurations can explain the dynamics of the closest stars around the Milky Way's center (SgrA*) all the way out to the halo rotation curve, without spoiling the baryonic bulge-disk components, for a narrow particle mass range $mc^2 \sim 10$-$10^2$~keV.
astrophysics
We consider both facial reduction, FR, and symmetry reduction, SR, techniques for semidefinite programming, SDP. We show that the two together fit surprisingly well into an alternating direction method of multipliers, ADMM, approach. In fact, this approach allows for simply adding on nonnegativity constraints and solving the doubly nonnegative, DNN, relaxation of many classes of hard combinatorial problems. We also show that the singularity degree does not increase after SR, and that the DNN relaxations considered here have singularity degree one, which is reduced to zero after FR. The combination of FR and SR leads to a significant improvement in both numerical stability and running time for both the ADMM and interior point approaches. We test our method on various DNN relaxations of hard combinatorial problems, including quadratic assignment problems with sizes of more than $n=500$. This translates to a semidefinite constraint of order $250,000$ and $625\times 10^8$ nonnegatively constrained variables.
mathematics
Obtaining large annotated datasets is critical for training successful machine learning models and it is often a bottleneck in practice. Weak supervision offers a promising alternative for producing labeled datasets without ground truth annotations by generating probabilistic labels using multiple noisy heuristics. This process can scale to large datasets and has demonstrated state of the art performance in diverse domains such as healthcare and e-commerce. One practical issue with learning from user-generated heuristics is that their creation requires creativity, foresight, and domain expertise from those who hand-craft them, a process which can be tedious and subjective. We develop the first framework for interactive weak supervision in which a method proposes heuristics and learns from user feedback given on each proposed heuristic. Our experiments demonstrate that only a small number of feedback iterations are needed to train models that achieve highly competitive test set performance without access to ground truth training labels. We conduct user studies, which show that users are able to effectively provide feedback on heuristics and that test set results track the performance of simulated oracles.
computer science
This study deals with the estimation of the trajectory of respiratory droplets carrying the COVID-19 coronavirus when projected horizontally, taking the geographical altitude into account. The size of the viruses and respiratory droplets is the factor that determines the trajectory of these microparticles in a viscous medium such as air; for this purpose, a graphical comparison of the diameters and masses of the microparticles produced during respiratory activity has been made. The estimation of the vertical motion of the microparticles through the air is based on Stokes' law; it was determined that respiratory droplets smaller than 10 {\mu}m in diameter have very small settling speeds and, in practice, float for a few seconds before evaporating in the air. Regarding the horizontal displacement of respiratory droplets, video frames from Scharfman et al. were used to determine their range. In the case of a sneeze, the respiratory droplets can reach a distance of 1.65 m in 1 s and then decelerate rapidly, reaching 1.71 m after 2 seconds. An analysis of the effect of geographical altitude on the motion of the micro-droplets was then carried out, finding only minimal changes in the kinematic variables.
physics
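The droplet-trajectory abstract above relies on Stokes' law for the settling speed. A quick order-of-magnitude sketch is below; the droplet density is taken as that of water, air properties are sea-level values (altitude mainly enters through slightly lower air density and viscosity), evaporation is ignored, and Stokes' law is only valid at low Reynolds number, i.e. for the smaller droplets.

```python
# Stokes-law settling speed of respiratory droplets in air (order-of-magnitude
# sketch; water-density droplets, sea-level air, evaporation ignored).
rho_w, rho_air = 997.0, 1.2          # kg/m^3
mu_air = 1.8e-5                      # Pa*s, dynamic viscosity of air
g = 9.81                             # m/s^2

def stokes_terminal_velocity(d):
    """Terminal settling speed (m/s) of a sphere of diameter d (m), low-Re Stokes regime."""
    return (rho_w - rho_air) * g * d**2 / (18.0 * mu_air)

for d_um in (1, 10, 100):
    v = stokes_terminal_velocity(d_um * 1e-6)
    print(f"{d_um:4d} um droplet: v = {v:.2e} m/s "
          f"(falls 1.5 m in ~{1.5 / v:.0f} s, ignoring evaporation)")
```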
Hitherto, acoustic cloaking devices that conceal objects externally have depended on the characteristics of the objects themselves. In contrast to previous works, we design a cloaking device that is placed next to an arbitrary object and renders it invisible without the need to enclose it. Applying sequential linear coordinate transformations leads to a non-closed acoustic cloak (NCAC) with homogeneous materials that creates an open invisible region. We propose a non-closed carpet cloak to conceal objects on a reflecting plane. Numerical simulations verify the cloaking effect, which is completely independent of the geometry and material properties of the hidden object. Owing to the simple acoustic constitutive parameters of the presented structures, this work paves the way toward the realization of non-closed acoustic devices, which could find applications in airborne sound manipulation and underwater acoustics.
physics
Compressing images at extremely low bitrates (< 0.1 bpp) has always been a challenging task, since the quality of reconstruction degrades significantly due to the strong constraint imposed on the number of bits allocated to the compressed data. With the increasing need to transfer large numbers of images over limited bandwidth, compressing images to very small sizes is a crucial task. However, the existing methods are not effective at extremely low bitrates. To address this need, we propose a novel network called CompressNet, which augments a Stacked Autoencoder with a Switch Prediction Network (SAE-SPN). This helps in the reconstruction of visually pleasing images at these low bitrates (< 0.1 bpp). We benchmark the performance of our proposed method on the Cityscapes dataset, evaluating different metrics at extremely low bitrates, to show that our method outperforms the other state-of-the-art methods. In particular, at a bitrate of 0.07 bpp, CompressNet achieves 22% lower Perceptual Loss and 55% lower Frechet Inception Distance (FID) compared to the deep-learning SOTA methods.
electrical engineering and systems science
We initiate here the study of Gromov-Witten theory of locally conformally symplectic manifolds or $\lcs$ manifolds, $\lcsm$'s for short, which are a natural generalization of both contact and symplectic manifolds. We find that the main new phenomenon (relative to the symplectic case) is the potential existence of holomorphic sky catastrophes, an analogue for pseudo-holomorphic curves of sky catastrophes in dynamical systems originally discovered by Fuller. We are able to rule these out in some situations, particularly for certain $\lcs$ 4-folds, and as one application we show that in dimension 4 the classical Gromov non-squeezing theorem has certain $C ^{0} $ rigidity or persistence with respect to $\lcs$ deformations; this is one version of $\lcs$ non-squeezing, a first result of its kind. In a different direction we study Gromov-Witten theory of the $\lcsm$ $C \times S ^{1} $ induced by a contact manifold $(C, \lambda)$, and show that the Gromov-Witten invariant (as defined here) counting certain elliptic curves in $C \times S ^{1} $ is identified with the classical Fuller index of the Reeb vector field $R ^{\lambda} $. This has some non-classical applications, and based on the story we develop, we give a kind of `holomorphic Seifert/Weinstein conjecture' which is a direct extension for some types of $\lcsm$'s of the classical Seifert/Weinstein conjecture. This is proved for $\lcs$ structures $C ^{\infty} $-close to the Hopf $\lcs$ structure on $S ^{2k+1} \times S ^{1} $.
mathematics
In this paper, we propose a new practical power allocation technique based on the bit error probability (BEP) for physical layer security systems. It is shown that the secrecy rate, the criterion most commonly used in physical layer security systems, is not a suitable criterion on its own. Large positive values of the secrecy rate are desirable in physical layer security, but the secrecy rate does not reflect the performance of the legitimate and adversary users. In this paper, we consider and analyze the BEP for physical layer security systems because, based on it, the performance of the legitimate and adversary users is guaranteed and lower transmit power is needed. The BEP is calculated for the legitimate and adversary users, and it is shown that the BEP can be a better criterion for evaluating the performance of physical layer security systems. Based on the BEP, the optimum transmit power is obtained, and a new definition of the outage probability is proposed and derived theoretically. The proposed approach is also applied to adversary users with unknown mode and to cooperative adversary users. Simulation results show that the proposed method needs more than 5 dB less power in the different scenarios.
electrical engineering and systems science
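The power-allocation abstract above argues for choosing transmit power from BEP targets rather than from the secrecy rate alone. The sketch below is a generic illustration with coherent BPSK over AWGN, not the paper's exact system model: it scans the transmit power for the lowest value at which an assumed legitimate channel meets a BEP target while an assumed eavesdropper channel stays near-useless. The channel gains, noise level and design targets are all assumptions chosen for the example.

```python
# BEP-based power selection sketch (generic coherent BPSK over AWGN; not the
# paper's exact system model). All gains and targets are assumed values.
import numpy as np
from scipy.stats import norm

def bep_bpsk(snr_linear):
    """Bit error probability of coherent BPSK at the given received SNR: Q(sqrt(2*SNR))."""
    return norm.sf(np.sqrt(2.0 * snr_linear))

g_legit, g_eve, N0 = 1.0, 0.01, 1.0             # assumed channel power gains and noise
target_legit, floor_eve = 1e-4, 0.3             # assumed design targets on the BEPs

for p_db in np.arange(0.0, 20.1, 0.1):
    p = 10 ** (p_db / 10)
    ok_legit = bep_bpsk(p * g_legit / N0) <= target_legit
    ok_eve = bep_bpsk(p * g_eve / N0) >= floor_eve
    if ok_legit and ok_eve:
        print(f"lowest admissible transmit power: {p_db:.1f} dB "
              f"(legitimate BEP {bep_bpsk(p * g_legit / N0):.1e}, "
              f"eavesdropper BEP {bep_bpsk(p * g_eve / N0):.2f})")
        break
else:
    print("no power in the scanned range satisfies both constraints")
```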
We present phenomenological unpolarized parton distribution functions (PDFs) for diquarks based on a soft-wall light-front AdS/QCD quark-diquark nucleon model. Starting from a model consistent with the Drell-Yan-West relation and the quark counting rule, we have fitted some free parameters using known phenomenological quark PDF data. The model considers the entire set of possible diquarks within the nucleon valence; in the present work we focus on the spin-0 $ud_0$, spin-1 $ud_1$ and spin-1 $uu_1$ diquarks in the valence of the proton. The diquark PDFs obtained can be used in proton-proton collision simulations.
high energy physics phenomenology
Automated input generators are widely used for large-scale dynamic analysis of mobile apps. Such input generators must constantly choose which UI element to interact with and how to interact with it, in order to achieve high coverage with a limited time budget. Currently, most input generators adopt pseudo-random or brute-force searching strategies, which may take very long to find the correct combination of inputs that can drive the app into new and important states. In this paper, we propose Humanoid, a deep learning-based approach to GUI test input generation by learning from human interactions. Our insight is that if we can learn from human-generated interaction traces, it is possible to automatically prioritize test inputs based on their importance as perceived by users. We design and implement a deep neural network model to learn how end-users would interact with an app (specifically, which UI elements to interact with and how). Our experiments showed that the interaction model can successfully prioritize user-preferred inputs for any new UI (with a top-1 accuracy of 51.2% and a top-10 accuracy of 85.2%). We implemented an input generator for Android apps based on the learned model and evaluated it on both open-source apps and market apps. The results indicated that Humanoid was able to achieve higher coverage than six state-of-the-art test generators. However, further analysis showed that the learned model was not the main reason of coverage improvement. Although the learned interaction pattern could drive the app into some important GUI states with higher probabilities, it had limited effect on the width and depth of GUI state search, which is the key to improve test coverage in the long term. Whether and how human interaction patterns can be used to improve coverage is still an unknown and challenging problem.
computer science
With the introduction of the variational autoencoder (VAE), probabilistic latent variable models have received renewed attention as powerful generative models. However, their performance in terms of test likelihood and quality of generated samples has been surpassed by autoregressive models without stochastic units. Furthermore, flow-based models have recently been shown to be an attractive alternative that scales well to high-dimensional data. In this paper we close the performance gap by constructing VAE models that can effectively utilize a deep hierarchy of stochastic variables and model complex covariance structures. We introduce the Bidirectional-Inference Variational Autoencoder (BIVA), characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path. We show that BIVA reaches state-of-the-art test likelihoods, generates sharp and coherent natural images, and uses the hierarchy of latent variables to capture different aspects of the data distribution. We observe that BIVA, in contrast to recent results, can be used for anomaly detection. We attribute this to the hierarchy of latent variables which is able to extract high-level semantic features. Finally, we extend BIVA to semi-supervised classification tasks and show that it performs comparably to state-of-the-art results by generative adversarial networks.
statistics
In previous work, black hole vortex solutions in Einstein gravity with AdS$_3$ background were found where the scalar matter profile had a singularity at the origin $r=0$. In this paper, we find numerically static vortex solutions where the scalar and gauge fields have a non-singular profile under Einstein gravity in an AdS$_3$ background. Vortices with different winding numbers $n$, VEV $v$ and cosmological constant $\Lambda$ are obtained. These vortices have positive mass and are not BTZ black holes as they have no event horizon. The mass is determined in two ways: by subtracting the numerical values of two separate asymptotic metrics and via an integral that is purely over the matter fields. The mass of the vortex increases as the cosmological constant becomes more negative and this coincides with the core of the vortex becoming smaller (compressed). We then consider the vortex with gravity in asymptotically flat spacetime for different values of the coupling $\alpha=1/(16 \pi G)$. At the origin, the spacetime has its highest curvature and there is no singularity. It transitions to an asymptotic conical spacetime with angular deficit that increases significantly as $\alpha$ decreases. For comparison, we also consider the vortex without gravity in flat spacetime. For this case, one cannot obtain the mass by the first method (subtracting two metrics) but remarkably, via a limiting procedure, one can obtain an integral mass formula. In the absence of gauge fields, there is a well-known logarithmic divergence in the energy of the vortex. With gravity, we present this divergence in a new light. We show that the metric acquires a logarithmic term which is the $2+1$ dimensional realization of the Newtonian gravitational potential when General Relativity is supplemented with a scalar field. This opens up novel possibilities which we discuss in the conclusion.
high energy physics theory
Bipartite and multipartite entangled states are of central interest in quantum information processing and foundational studies. Efficient verification of these states, especially in the adversarial scenario, is a key to various applications, including quantum computation, quantum simulation, and quantum networks. However, little is known about this topic in the adversarial scenario. Here we initiate a systematic study of pure-state verification in the adversarial scenario. In particular, we introduce a general method for determining the minimal number of tests required by a given strategy to achieve a given precision. In the case of homogeneous strategies, we can even derive an analytical formula. Furthermore, we propose a general recipe to verifying pure quantum states in the adversarial scenario by virtue of protocols for the nonadversarial scenario. Thanks to this recipe, the resource cost for verifying an arbitrary pure state in the adversarial scenario is comparable to the counterpart for the nonadversarial scenario, and the overhead is at most three times for high-precision verification. Our recipe can readily be applied to efficiently verify bipartite pure states, stabilizer states, hypergraph states, weighted graph states, and Dicke states in the adversarial scenario, even if only local projective measurements are accessible. This paper is an extended version of the companion paper Zhu and Hayashi, Phys. Rev. Lett. 123, 260504 (2019).
quantum physics
We compute by supersymmetric localization the expectation values of half-BPS 't Hooft line operators in $\mathcal{N}=2$ $U(N)$, $SO(N)$ and $USp(N)$ gauge theories on $S^1 \times \mathbb{R}^3$ with an $\Omega$-deformation. We evaluate the non-perturbative contributions due to monopole screening by calculating the supersymmetric indices of the corresponding supersymmetric quantum mechanics, which we obtain by realizing the gauge theories and the 't Hooft operators using branes and orientifolds in type II string theories.
high energy physics theory
We introduce a method for high-fidelity quantum state transduction between a superconducting microwave qubit and the ground state spin system of a solid-state artificial atom, mediated via an acoustic bus connected by piezoelectric transducers. Applied to present-day experimental parameters for superconducting circuit qubits and diamond silicon vacancy centers in an optimized phononic cavity, we estimate quantum state transduction with fidelity exceeding 99\% at a MHz-scale bandwidth. By combining the complementary strengths of superconducting circuit quantum computing and artificial atoms, the hybrid architecture provides high-fidelity qubit gates with long-lived quantum memory, high-fidelity measurement, large qubit number, reconfigurable qubit connectivity, and high-fidelity state and gate teleportation through optical quantum networks.
quantum physics
Phoretic particles exploit local self-generated physico-chemical gradients to achieve self-propulsion at the micron scale. The collective dynamics of a large number of such particles is currently the focus of intense research efforts, both from a physical perspective to understand the precise mechanisms of the interactions and their respective roles, as well as from an experimental point of view to explain the observations of complex dynamics as well as formation of coherent large-scale structures. However, an exact modelling of such multi-particle problems is difficult and most efforts so far rely on the superposition of far-field approximations for each particle's signature, which are only valid asymptotically in the dilute suspension limit. A systematic and unified analytical framework based on the classical Method of Reflections (MoR) is developed here for both Laplace and Stokes' problems to obtain the higher-order interactions and the resulting velocities of multiple phoretic particles, up to any order of accuracy in the radius-to-distance ratio $\varepsilon$ of the particles. Beyond simple pairwise chemical or hydrodynamic interactions, this model allows us to account for the generic chemo-hydrodynamic couplings as well as $N$-particle interactions ($N\geq 3$). The $\varepsilon^5$-accurate interaction velocities are then explicitly obtained and the resulting implementation of this MoR model is discussed and validated quantitatively against exact solutions of a few canonical problems.
physics
E-commerce is shopping conducted over the internet. Quite different from the traditional shopping concept, E-commerce is compatible with today's economic dynamics, and it is becoming an indispensable method as internet usage increases. The use of E-commerce also brings a number of advantages for companies. SAP, on the other hand, is a pioneer and leader in the enterprise resource planning (ERP) software sector. SAP is very important for large-scale companies: they manage all their processes in SAP, and its integration with other related software is essential. In this article, we give brief information on some important aspects of E-commerce and propose a solution for the ERP integration of an E-commerce system.
computer science
We present Atacama Large Millimeter-Submillimeter Array (ALMA) observations of CK Vulpeculae which is identified with "Nova Vulpeculae 1670". They trace obscuring dust in the inner regions of the associated nebulosity. The dust forms two cocoons, each extending ~5 arcsec north and south of the presumed location of the central star. Brighter emission is in a more compact east-west structure (2 arcsec by 1 arcsec) where the cocoons intersect. We detect line emission in NH$_2$CHO, CN, four organic molecules and C$^{17}$O. CN lines trace bubbles within the dusty cocoons; CH$_3$OH a north-south S-shaped jet; and other molecules a central cloud with a structure aligned with the innermost dust structure. The major axis of the overall dust and gas bubble structure has a projected inclination of ~24 degrees with respect to a 71 arcsec extended "hourglass" nebulosity, previously seen in H alpha. Three cocoon limbs align with dark lanes in the inner regions of the same H alpha images. The central 2 arcsec by 1 arcsec dust is resolved into a structure consistent with a warped dusty disc. The velocity structure of the jets indicates an origin at the centre of this disc and precession with an unknown period. Deceleration regions at both the northern and southern tips of the jets are roughly coincident with additional diffuse dust emission over regions approximately 2 arcsec across. These structures are consistent with a bipolar outflow expanding into surrounding high density material. We suggest that a white dwarf and brown dwarf merged between 1670 and 1672, with the observed structures and extraordinary isotopic abundances generated as a result.
astrophysics
One limitation in characterizing exoplanet candidates is the availability of infrared, high-resolution spectrographs. An important factor in the scarcity of high precision IR spectrographs is the high cost of these instruments. We present a new optical design, which leads to a cost-effective solution. Our instrument is a high-resolution (R=60,000) infrared spectrograph with a R6 Echelle grating and an image slicer. We compare the best possible performance of quasi-Littrow and White Pupil setups, and prefer the latter because it achieves higher image quality. The instrument is proposed for the University of Tokyo Atacama Observatory (TAO) 6.5 m telescope in Chile. The Tao Aiuc high Resolution (d) Y band Spectrograph (TARdYS) covers 0.843-1.117 um. To reduce the cost, we squeeze 42 spectral orders onto a 1K detector with a semi-cryogenic solution. We obtain excellent resolution even when taking realistic manufacturing and alignment tolerances as well as thermal variations into account. In this paper, we present early results from the prototype of this spectrograph at ambient temperature.
astrophysics
A typical audio signal processing pipeline includes multiple disjoint analysis stages, including calculation of a time-frequency representation followed by spectrogram-based feature analysis. We show how time-frequency analysis and nonnegative matrix factorisation can be jointly formulated as a spectral mixture Gaussian process model with nonstationary priors over the amplitude variance parameters. Further, we formulate this nonlinear model's state space representation, making it amenable to infinite-horizon Gaussian process regression with approximate inference via expectation propagation, which scales linearly in the number of time steps and quadratically in the state dimensionality. By doing so, we are able to process audio signals with hundreds of thousands of data points. We demonstrate, on various tasks with empirical data, how this inference scheme outperforms more standard techniques that rely on extended Kalman filtering.
statistics
We use the large isometry group of the Stenzel asymptotically conical Calabi-Yau metric on $T^{\star}S^{4}$ to study the relationship between the Spin(7) instanton and Hermitian Yang-Mills (HYM) equations. We reduce both problems to tractable ODEs and look for invariant solutions. In the abelian case, we establish local equivalence and prove a global nonexistence result. We analyze the nonabelian equations with structure group SO(3) and construct the moduli space of invariant Spin(7) instantons in this setting. This includes an explicit one-parameter family of irreducible Spin(7) instantons, only one of which is HYM. We thus negatively resolve the question regarding the equivalence of the two gauge-theoretic PDEs. The HYM connections play a role in the compactification of this moduli space, exhibiting a phenomenon that we aim to investigate further in future work.
mathematics
In this work, we examine the topological phases that can arise in triangular lattices with disconnected elementary band representations. We show that, although these phases may be "fragile" with respect to the addition of extra bands, their topological properties are manifest in certain nontrivial holonomies (Wilson loops) in the space of nontrivial bands. We introduce an eigenvalue index for fragile topology, and we show how a nontrivial value of this index manifests as the winding of a hexagonal Wilson loop; this remains true even in the absence of time-reversal or sixfold rotational symmetry. Additionally, when time-reversal and twofold rotational symmetry are present, we show directly that there is a protected nontrivial winding in more conventional Wilson loops. Crucially, we emphasize that these Wilson loops cannot change without closing a gap to the nontrivial bands. By studying the entanglement spectrum for the fragile bands, we comment on the relationship between fragile topology and the "obstructed atomic limit" of B. Bradlyn et al., Nature 547, 298--305 (2017). We conclude with some perspectives on topological matter beyond the K-theory classification.
condensed matter
A pulsed oscillating power amplifier has been developed for high-frequency biasing \cite{kn:deb1} and real-time turbulent feedback experiments in the STOR-M tokamak. It is capable of providing a peak-to-peak output voltage of around $\pm60$V and a current of around 30A within the 1kHz-50kHz frequency band without distorting the waveform. The output power is amplified by a two-stage power MOSFET op-amp together with nine identical push-pull amplifiers connected in parallel in the final stage. The power amplifier input signal, derived from the plasma floating potential during a plasma shot, is optically isolated from the tokamak vessel for the real-time feedback experiment. Here, filtered floating-potential fluctuations with a bandwidth between 5kHz and 40kHz have been amplified and fed to an electrode inserted into the plasma edge to study the response of the plasma turbulence. It is observed that magnetic fluctuations are suppressed due to the real-time feedback of the floating potential.
physics
I argue that we have good reason for being realist about quantum states. Though a research programme of attempting to construct a plausible theory that accounts for quantum phenomena without ontic quantum states is well-motivated, that research programme is confronted by considerable obstacles. Two theorems are considered that place restrictions on a theory of that sort: a theorem due to Barrett, Cavalcanti, Lal, and Maroney, and an extension, by the author, of the Pusey-Barrett-Rudolph theorem, that employs an assumption weaker than their Cartesian Product Assumption. These theorems have assumptions, of course. If there were powerful evidence against the conclusion that quantum states correspond to something in physical reality, it might be reasonable to reject these assumptions. But the situation we find ourselves in is the opposite: there is no evidence at all supporting irrealism about quantum states.
quantum physics
We adapt the complexity as action prescription (CA) to a semi-classical model of two-dimensional dilaton gravity and determine the rate of increase of holographic complexity for an evaporating black hole. The results are consistent with our previous numerical results for semi-classical black hole complexity using a volume prescription (CV) in the same model, but the CA calculation is fully analytic and provides a non-trivial positive test for the holographic representation of the black hole interior.
high energy physics theory
We derive relativistic hydrodynamic equations with a dynamical spin degree of freedom on the basis of an entropy-current analysis. The first and second laws of local thermodynamics constrain possible structures of the constitutive relations including a spin current and the antisymmetric part of the (canonical) energy-momentum tensor. Solving the obtained hydrodynamic equations within the linear-mode analysis, we find spin-diffusion modes, indicating that spin density is damped out after a characteristic time scale controlled by transport coefficients introduced in the antisymmetric part of the energy-momentum tensor in the entropy-current analysis. This is a consequence of mutual convertibility between spin and orbital angular momentum.
high energy physics theory
In this work we study the semi-leptonic decays $\bar{B}_s^0\to \phi l^+ l^-$ ($l=e, \mu, \tau$) with the QCD sum rule method. We calculate the $\bar{B}_s^0\to \phi$ transition form factors relevant to these semi-leptonic decays, and then the branching ratios of $\bar{B}_s^0\to \phi l^+ l^-$ ($l=e, \mu, \tau$) are calculated with the form factors obtained here. Our result for the branching ratio of $\bar{B}_s^0\to \phi\mu^+ \mu^-$ agrees very well with the recent experimental data. For the unmeasured decay modes, such as $\bar{B}_s^0\to \phi e^+ e^-$ and $\bar{B}_s^0\to \phi\tau^+ \tau^-$, we give theoretical predictions.
high energy physics phenomenology
This is the first of two closely related papers on transversality. Here we introduce the notion of tangential transversality of two closed subsets of a Banach space. It is an intermediate property between transversality and subtransversality. Using it, we obtain a variety of known results and some new ones in a unified way. Our proofs do not use variational principles, and we concentrate mainly on tangential conditions in the primal space.
mathematics
Anchor-based techniques reduce the computational complexity of spectral clustering algorithms. Although empirical tests have shown promising results, there is currently a lack of theoretical support for the anchoring approach. We define a specific anchor-based algorithm and show that it is amenable to rigorous analysis, as well as being effective in practice. We establish the theoretical consistency of the method in an asymptotic setting where data is sampled from an underlying continuous probability distribution. In particular, we provide sharp asymptotic conditions for the algorithm parameters which ensure that the anchor-based method can recover with high probability disjoint clusters that are mutually separated by a positive distance. We illustrate the performance of the algorithm on synthetic data and explain how the theoretical convergence analysis can be used to inform the practical choice of parameter scalings. We also test the accuracy and efficiency of the algorithm on two large scale real data sets. We find that the algorithm offers clear advantages over standard spectral clustering. We also find that it is competitive with the state-of-the-art LSC method of Chen and Cai (Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011), while having the added benefit of a consistency guarantee.
statistics
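The abstract above analyzes an anchor-based spectral clustering algorithm. The sketch below is a generic anchor-based construction (anchors from k-means, row-normalized point-to-anchor affinities, embedding from a thin SVD) rather than the specific algorithm studied in the paper or the LSC method it is compared with; the kernel width and anchor count are arbitrary choices.

```python
# Generic anchor-based spectral clustering sketch: anchors from k-means,
# point-to-anchor affinities, spectral embedding from a thin SVD, then k-means
# in the embedded space. Illustrative only, not the paper's exact algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances, adjusted_rand_score

X, y = make_blobs(n_samples=2000, centers=3, cluster_std=1.0, random_state=0)
m, k, sigma = 50, 3, 2.0                      # anchors, clusters, kernel width

# 1. Anchors: k-means centroids are a common, cheap choice.
anchors = KMeans(n_clusters=m, n_init=5, random_state=0).fit(X).cluster_centers_

# 2. Point-to-anchor affinity matrix Z (n x m), row-normalised.
D = pairwise_distances(X, anchors)
Z = np.exp(-(D ** 2) / (2 * sigma ** 2))
Z /= Z.sum(axis=1, keepdims=True)

# 3. Spectral embedding from the thin SVD of the degree-normalised Z.
Zbar = Z / np.sqrt(Z.sum(axis=0, keepdims=True))
U, s, _ = np.linalg.svd(Zbar, full_matrices=False)
embedding = U[:, :k]

# 4. k-means in the embedded space gives the final clusters.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
print("adjusted Rand index vs ground truth:", adjusted_rand_score(y, labels))
```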
The properties of the smallest subunits of cometary dust contain information on their origin and clues to the formation of planetesimals and planets. Compared to IDPs or particles collected during the Stardust mission, dust collected in the coma of comet 67P/Churyumov-Gerasimenko during the Rosetta mission provides a resource of minimally altered material with known origin whose structural properties can be used to further the investigation of our early Solar System. A novel method is presented to achieve the highest spatial resolution of imaging possible with the MIDAS Atomic Force Microscope on-board Rosetta. 3D topographic images with resolutions of down to 8\,nm are analysed to determine the subunit sizes of particles on the nanometre scale. Three morphological classes can be identified, namely (i) fragile agglomerate particles of sizes larger than about 10\,$\mathrm{\mu m}$ composed of micrometre-sized subunits that may themselves be aggregates and show a moderate packing density on the surface of the particles; (ii) a fragile agglomerate with a size of a few tens of micrometres composed of micrometre-sized subunits, again suggested to be aggregates, arranged in a structure with a fractal dimension less than two; (iii) small, micrometre-sized particles composed of subunits in the hundreds-of-nanometres size range that show surface features suggested to again represent subunits. Their differential size distributions follow a log-normal distribution with means of about 100\,nm and standard deviations between 20 and 35\,nm. All micrometre-sized particles are hierarchical dust agglomerates of smaller subunits. The arrangement, appearance and size distribution of the smallest determined surface features are reminiscent of those found in CP IDPs, and they represent the smallest directly detected subunits of comet 67P.
astrophysics
Even if it tends to hide more often in the current autumnal season, our host star, the Sun, is the principal source of energy on our planet. It has a luminosity of 3.828e26 Watts, and even though we receive only about 2 parts per billion of this power, it allows life on Earth to thrive. Moreover, we know that the Sun has been shining in much the same way for about 4.6 billion years (the age of the Earth) and will likely continue to do so for another 5 billion years or so. The power engine of the Sun, as well as that of all the stars we see in the night sky, has long been a mystery, but astronomers now have a good understanding of it.
physics
Methods for processing point cloud information have seen a great success in collider physics applications. One recent breakthrough in machine learning is the usage of Transformer networks to learn semantic relationships between sequences in language processing. In this work, we apply a modified Transformer network called Point Cloud Transformer as a method to incorporate the advantages of the Transformer architecture to an unordered set of particles resulting from collision events. To compare the performance with other strategies, we study jet-tagging applications for highly-boosted particles.
physics
Low-scale models of neutrino mass generation often feature sterile neutrinos with masses in the GeV-TeV range, which can be produced at colliders through their mixing with the Standard Model neutrinos. We consider an alternative scenario in which the sterile neutrino is produced in the decay of a heavier particle, such that its production cross section does not depend on the active-sterile neutrino mixing angles. The mixing angles can be accessed through the decays of the sterile neutrino, provided that they lead to observable displaced vertices. We present an explicit realization of this scenario in which the sterile neutrino is the supersymmetric partner of a pseudo-Nambu-Goldstone boson, and is produced in the decays of higgsino-like neutralinos and charginos. The model predicts the active-sterile neutrino mixing angles in terms of a small number of parameters. We show that a sterile neutrino with a mass between a few 10 GeV and 200 GeV can lead to observable displaced vertices at the LHC, and outline a strategy for reconstructing experimentally its mixing angles.
high energy physics phenomenology
Fine resolution estimates of demographic and socioeconomic attributes are crucial for planning and policy development. While several efforts have been made to produce fine-scale gridded population estimates, socioeconomic features are typically not available at scales finer than Census units, which may hide local heterogeneity and disparity. In this paper we present a new statistical downscaling approach to derive fine-scale estimates of key socioeconomic attributes. The method leverages demographic and geographical extensive covariates available at multiple scales and additional Census covariates only available at coarse resolution, which are included in the model hierarchically within a "forward learning" approach. For each selected socioeconomic variable, a Random Forest model is trained on the source Census units and then used to generate fine-scale gridded predictions, which are then adjusted to ensure the best possible consistency with the coarser Census data. As a case study, we apply this method to Census data in the United States, downscaling the selected socioeconomic variables available at the block group level, to a grid of ~300 spatial resolution. The accuracy of the method is assessed at both spatial scales, first computing a pseudo cross-validation coefficient of determination for the predictions at the block group level and then, for extensive variables only, also for the (unadjusted) predicted counts summed by block group. Based on these scores and on the inspection of the downscaled maps, we conclude that our method is able to provide accurate, smoother, and more detailed socioeconomic estimates than the available Census data.
statistics
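The downscaling abstract above trains a model on coarse Census units and produces fine-scale gridded estimates that are then adjusted for consistency with the coarse data. The sketch below shows that general train-coarse/predict-fine/rescale pattern on synthetic data with a Random Forest; the variable names and the per-capita-rate formulation are illustrative assumptions, not the paper's exact procedure.

```python
# Train-coarse / predict-fine / rescale sketch of statistical downscaling with
# a Random Forest. All data, names and the rate formulation are synthetic,
# illustrative assumptions rather than the paper's exact procedure.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_units, cells_per_unit = 40, 25
n = n_units * cells_per_unit

# Fine grid cells with covariates available at fine resolution.
fine = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), cells_per_unit),
    "population": rng.gamma(2.0, 50.0, n),
    "builtup_frac": rng.uniform(0.0, 1.0, n),
})
# Hidden fine-scale truth for an extensive target (e.g. housing units).
fine["truth"] = 0.4 * fine["population"] * (0.5 + fine["builtup_frac"])

# Coarse training table: totals for extensive variables, means for intensive ones.
coarse = fine.groupby("unit").agg(pop=("population", "sum"),
                                  builtup=("builtup_frac", "mean"),
                                  truth=("truth", "sum"))
coarse["rate"] = coarse["truth"] / coarse["pop"]           # target per person

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(coarse[["builtup"]].to_numpy(), coarse["rate"])

# Fine-scale raw prediction: learned rate applied to the gridded population,
# then rescaled so that the predictions aggregate back to the coarse totals.
fine["pred_raw"] = rf.predict(fine[["builtup_frac"]].to_numpy()) * fine["population"]
unit_sums = fine.groupby("unit")["pred_raw"].transform("sum")
fine["pred"] = fine["pred_raw"] * fine["unit"].map(coarse["truth"]) / unit_sums

r2 = np.corrcoef(fine["truth"], fine["pred"])[0, 1] ** 2
print(f"pseudo R^2 of downscaled estimates against the hidden truth: {r2:.2f}")
```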
This work investigates the problem of detecting gravitational wave (GW) events based on simulated damped-sinusoid signals contaminated with white Gaussian noise. It is treated as a classification problem with one class for the interesting events. The proposed scheme consists of the following two successive steps: decomposing the data using a wavelet packet and representing the GW signal and noise using the derived decomposition coefficients; and determining the existence of any GW event using a convolutional neural network (CNN) with a logistic regression output layer. A distinguishing feature of this work is its comprehensive investigation of the CNN structure, detection window width, data resolution, wavelet packet decomposition and detection window overlap scheme. Extensive simulation experiments show excellent performance for the reliable detection of signals over a range of GW model parameters and signal-to-noise ratios. While we use a simple waveform model in this study, we expect the method to be particularly valuable when the potential GW shapes are too complex to be characterized with a template bank.
astrophysics
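The GW-detection abstract above feeds wavelet-packet coefficients of noisy damped sinusoids to a CNN with a logistic output. The sketch below reproduces the feature step with PyWavelets but substitutes a plain logistic regression for the CNN, and uses coefficient magnitudes so the linear stand-in can key on excess sub-band power; the signal parameters and SNRs are arbitrary choices for the example.

```python
# Wavelet-packet features for detecting a damped sinusoid in white Gaussian
# noise; a logistic regression stands in here for the paper's CNN with a
# logistic output. Signal parameters and SNRs are illustrative.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, T = 2048, 0.5                              # sample rate (Hz), window length (s)
t = np.arange(0, T, 1.0 / fs)

def damped_sinusoid(f0, tau=0.05, amp=1.0):
    return amp * np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

def wp_features(x, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    coeffs = np.concatenate([node.data for node in wp.get_level(level, order="freq")])
    return np.abs(coeffs)                      # magnitudes, so a linear model sees sub-band power

X, y = [], []
for _ in range(600):
    has_signal = rng.random() < 0.5
    x = rng.normal(0.0, 1.0, t.size)           # white Gaussian noise
    if has_signal:
        x += damped_sinusoid(f0=rng.uniform(150, 400), amp=rng.uniform(0.5, 1.5))
    X.append(wp_features(x))
    y.append(int(has_signal))
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000)).fit(Xtr, ytr)
print(f"held-out detection accuracy: {clf.score(Xte, yte):.2f}")
```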
Cycles, which can be found in many different kinds of networks, make problems harder to treat, especially when dealing with dynamical processes on networks. Tree networks, in contrast, contain no cycles, are simpler, and usually allow for analytic treatment. What has been lacking, however, is a quantity that measures the prevalence of cycles and thus how close a network is to a tree. We therefore introduce the Cycle Nodes Ratio (CNR), defined as the ratio of the number of nodes belonging to cycles to the total number of nodes, and provide an algorithm to calculate it. The CNR is studied in both network models and real networks. The CNR remains unchanged in differently sized Erd\"os R\'enyi (ER) networks with the same average degree, and increases with the average degree, with a critical turning point. Approximate analytical solutions for the CNR in ER networks are given, which fit the simulations well. Furthermore, the difference between the CNR and the two-core ratio (TCR) is analyzed. The critical phenomenon is explored by analysing the giant component of the networks. We compare the CNR in network models and real networks and find that the latter is generally smaller. Combined with a coarse-graining method, the CNR can distinguish the structure of networks with high average degree. The CNR is also applied to four different kinds of transportation networks and to fungal networks, which give rise to different zones of effect. It is interesting to see that the CNR is very useful in network recognition with machine learning.
physics
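The abstract above defines the Cycle Nodes Ratio (CNR) as the fraction of nodes belonging to cycles. A straightforward way to compute it for undirected graphs (not necessarily the authors' algorithm) uses the union of a cycle basis, since a node lies on some cycle exactly when it appears in at least one basis cycle.

```python
# Cycle Nodes Ratio (CNR): fraction of nodes lying on at least one cycle.
# For an undirected graph, a node lies on a cycle exactly when it appears in
# the union of a cycle basis. Sketch against the networkx API; not necessarily
# the authors' algorithm.
import networkx as nx

def cycle_nodes_ratio(G):
    on_cycle = set()
    for cycle in nx.cycle_basis(G):
        on_cycle.update(cycle)
    return len(on_cycle) / G.number_of_nodes()

examples = {
    "balanced tree": nx.balanced_tree(2, 6),                      # trees have CNR = 0
    "ER, <k> ~ 1":   nx.erdos_renyi_graph(1000, 1.0 / 1000, seed=1),
    "ER, <k> ~ 4":   nx.erdos_renyi_graph(1000, 4.0 / 1000, seed=1),
}
for name, G in examples.items():
    print(f"{name:15s} CNR = {cycle_nodes_ratio(G):.3f}")   # CNR grows with mean degree
```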
Context: Stars evolving through the asymptotic giant branch (AGB) phase provide significant feedback to their host system, in form of both gas enriched in nuclear-burning products and dust formed in their winds, which they eject into the interstellar medium. Therefore AGB stars are an essential ingredient for the chemical evolution of the Milky Way and other galaxies. Aims: We study AGB models with super-solar metallicity, to complete our large database, so far extending from metal-poor to solar chemical compositions. We provide chemical yields for masses in the range 1-8 Msun and metallicities Z=0.03 and Z=0.04. We also study dust production in this metallicity domain. Methods: We calculated the evolutionary sequences from the pre main sequence through the whole AGB phase. We follow the variation of the surface chemical composition to calculate the chemical yields of the various species and model dust formation in the winds to determine the dust production rate and the total dust mass produced by each star during the AGB phase. Results: The physical and chemical evolution of the star is sensitive to the initial mass: M> 3Msun stars experience hot bottom burning, whereas the surface chemistry of the lower mass counterparts is altered only by third dredge-up. The carbon-star phase is reached by 2.5-3.5Msun stars of metallicity Z=0.03, whereas all the Z=0.04 stars (except the 2.5 Msun) remain O-rich for the whole AGB phase. Most of the dust produced by metal-rich AGBs is in the form of silicates particles. The total mass of dust produced increases with the mass of the star, reaching ~ 0.012 Msun for 8Msun stars.
astrophysics
Radial velocity (RV) searches for exoplanets have surveyed many of the nearest and brightest stars for long-term velocity variations indicative of a companion body. Such surveys often detect high-amplitude velocity signatures of objects that lie outside the planetary mass regime, most commonly those of a low-mass star. Such stellar companions are frequently discarded as false-alarms to the main science goals of the survey, but high-resolution imaging techniques can be employed to either directly detect or place significant constraints on the nature of the companion object. Here, we present the discovery of a compact companion to the nearby star HD~118475. Our Anglo-Australian Telescope (AAT) RV data allow the extraction of the full Keplerian orbit of the companion, found to have a minimum mass of 0.445~$M_\odot$. Follow-up speckle imaging observations at the predicted time of maximum angular separation rule out a main sequence star as the source of the RV signature at the 3.3$\sigma$ significance level, implying that the companion must be a low-luminosity compact object, most likely a white dwarf. We provide an isochrone analysis combined with our data that constrain the possible inclinations of the binary orbit. We discuss the eccentric orbit of the companion in the context of tidal circularization timescales and show that non-circular orbit was likely inherited from the progenitor. Finally, we emphasize the need for utilizing such an observation method to further understand the demographics of white dwarf companions around nearby stars.
astrophysics