Exoplanet surveys of evolved stars have provided increasing evidence that the formation of giant planets depends not only on stellar metallicity ([Fe/H]) but also on stellar mass ($M_\star$). However, measuring accurate masses for subgiants and giants is far more challenging than it is for their main-sequence counterparts, which has led to recent concerns regarding the veracity of the correlation between stellar mass and planet occurrence. In order to address these concerns we use HIRES spectra to perform a spectroscopic analysis on a sample of 245 subgiants and derive new atmospheric and physical parameters. We also calculate the space velocities of this sample in a homogeneous manner for the first time. When reddening corrections are considered in the calculations of stellar masses and a $-0.12$ M$_{\odot}$ offset is applied to the results, the masses of the subgiants are consistent with their space velocity distributions, contrary to claims in the literature. Similarly, our measurements of their rotational velocities provide additional confirmation that the masses of subgiants with $M_\star \geq 1.6$ M$_{\odot}$ (the "Retired A Stars") have not been overestimated in previous analyses. Using these new results for our sample of evolved stars, together with an updated sample of FGKM dwarfs, we confirm that giant planet occurrence increases with both stellar mass and metallicity up to 2.0 M$_{\odot}$. We show that the probability of forming a giant planet is approximately a one-to-one function of the total amount of metals in the protoplanetary disk, $M_\star 10^{[Fe/H]}$. This correlation provides additional support for the core accretion mechanism of planet formation.
astrophysics
The next generations of wireless networks will work in frequency bands ranging from sub-6 GHz up to 100 GHz. Radio signal propagation here differs in several critical aspects from the behaviour in the microwave frequencies currently used. With wavelengths in the millimeter range (mmWave), both penetration loss and free-space path loss increase, while specular reflection will dominate over diffraction as an important propagation channel. Thus, the channel model protocols used for current generations of mobile networks, based on statistical parameter distributions obtained from measurements, become insufficient due to the lack of deterministic information about the surroundings of the base station and the receiver devices. These challenges call for new channel modelling tools which work in the short-wavelength/high-frequency limit and incorporate site-specific details -- both indoors and outdoors. Typical high-frequency tools used in this context -- besides purely statistical approaches -- are based on ray-tracing techniques. Ray tracing can, however, become challenging when multiple reflections dominate. In this context, mesh-based energy flow methods have become popular in recent years. In this study, we compare the two approaches in terms of both accuracy and efficiency and benchmark them against traditional power balance methods.
electrical engineering and systems science
We propose to study the flavor properties of the top quark at the future Circular Electron Positron Collider (CEPC) in China. We systematically consider the full set of 56 real parameters that characterize the flavor-changing neutral interactions of the top quark, which can be tested at CEPC in the single top production channel. Compared with the current bounds from the LEP2 data and the projected limits at the high-luminosity LHC, we find that CEPC could improve the limits of the four-fermion flavor-changing coefficients by one to two orders of magnitude, and would also provide similar sensitivity for the two-fermion flavor-changing coefficients. Overall, CEPC could explore a large fraction of currently allowed parameter space that will not be covered by the LHC upgrade. We show that the $c$-jet tagging capacity at CEPC could further improve its sensitivity to top-charm flavor-changing couplings. If a signal is observed, the kinematic distribution as well as the $c$-jet tagging could be exploited to pinpoint the various flavor-changing couplings, providing valuable information about the flavor properties of the top quark.
high energy physics phenomenology
We study the properties of an impurity immersed in a weakly interacting Bose gas, i.e., of a Bose polaron. In the perturbatively tractable limit of weak impurity-boson interactions, many of its properties are known to depend only on the scattering length. Here we demonstrate that for strong (unitary) impurity-boson interactions, all static quasiparticle properties of a Bose polaron in a dilute Bose gas, such as its energy, its residue, its Tan's contact and the number of bosons trapped near the impurity, depend on the impurity-boson potential via a single parameter.
condensed matter
While the growth and dissolution of surface nanobubbles have been widely studied in recent years, their stability under pressure changes or a temperature increase has not received the same level of scrutiny. Here, we present theoretical predictions based on classical theory for the pressure and temperature thresholds ($p_c$ and $T_c$) at which unstable growth occurs for the case of air nanobubbles on a solid surface in water. We show that bubbles subjected to pinning have much lower $p_c$ and higher $T_c$ compared to both unpinned and bulk bubbles of similar size, indicating that pinned bubbles can withstand a larger tensile stress (negative pressure) and higher temperatures. The values of $p_c$ and $T_c$ obtained from many-body dissipative particle dynamics (MDPD) simulations of quasi-two-dimensional (quasi-2D) surface nanobubbles are consistent with the theoretical predictions, provided that the lateral expansion during growth is taken into account. This suggests that the modified classical thermodynamic description is valid for pinned bubbles as small as several nanometers. While some discrepancies still exist between our theoretical results and previous experiments, further experimental data are needed before a comprehensive understanding of the stability of surface nanobubbles can be achieved.
condensed matter
Quantum circuit simulations are critical for evaluating quantum algorithms and machines. However, the number of state amplitudes required for full simulation increases exponentially with the number of qubits. In this study, we leverage data compression to reduce memory requirements, trading computation time and fidelity for memory space. Specifically, we develop a hybrid solution by combining lossless compression with our tailored lossy compression method, using adaptive error bounds at each timestep of the simulation. Our approach optimizes for compression speed and ensures that errors due to lossy compression are uncorrelated, an important property for comparing simulation output with physical machines. Experiments show that our approach reduces the memory requirement of simulating the 61-qubit Grover's search algorithm from 32 exabytes to 768 terabytes of memory on Argonne's Theta supercomputer using 4,096 nodes. The results suggest that our techniques can increase the simulation size by 2 to 16 qubits for general quantum circuits.
quantum physics
The temperature renormalization of the bulk band structure of a topological crystalline insulator, SnTe, is calculated using first-principles methods. We explicitly include the effect of thermal-expansion-induced modification of electronic states and their band inversion on the electron-phonon interaction. We show that the direct gap decreases with temperature, as both thermal expansion and electron-phonon interaction drive SnTe towards the transition to a topologically trivial phase as temperature increases. The band gap renormalization due to electron-phonon interaction exhibits a non-linear dependence on temperature as the material approaches the phase transition, while the lifetimes of the conduction band states near the band edge show a non-monotonic behavior with temperature. These effects should have important implications for bulk electronic and thermoelectric transport in SnTe and other topological insulators.
condensed matter
The superposition principle is one of the main tenets of quantum mechanics. Despite its counter-intuitiveness, it has been experimentally verified using electrons, photons, atoms, and molecules. However, a similar experimental demonstration using a nano- or microparticle is non-existent. In this Letter, exploiting macroscopic quantum coherence and quantum tunneling, we propose an experiment using a levitated magnetic nanoparticle to demonstrate such an effect. It is shown that the spatial separation between the delocalized wavepackets of a $20~$nm ferrimagnetic yttrium iron garnet (YIG) nanoparticle can be as large as $5~\mu$m. We argue that this large spatial separation can be used to test modifications of standard quantum mechanics, such as collapse models. Furthermore, we show that the spatial superposition of a core-shell structure, a YIG core with a non-magnetic silica shell, can be used to probe quantum gravity.
quantum physics
Functional data have been the subject of many research works over recent years. Functional regression is one of the most discussed issues. Specifically, significant advances have been made for functional linear regression models with scalar response. Let $(\mathcal{H},<\cdot,\cdot>)$ be a separable Hilbert space. We focus on the model $Y=<\Theta,X>+b+\varepsilon$, where $Y$ and $\varepsilon$ are real random variables, $X$ is an $\mathcal{H}$-valued random element, and the model parameters $b$ and $\Theta$ are in $\mathbb{R}$ and $\mathcal{H}$, respectively. Furthermore, the error satisfies $E(\varepsilon|X)=0$ and $E(\varepsilon^2|X)=\sigma^2<\infty$. A consistent bootstrap method to calibrate the distribution of statistics for testing $H_0: \Theta=0$ versus $H_1: \Theta\neq 0$ is developed. The asymptotic theory, as well as a simulation study and a real data application illustrating the usefulness of our proposed bootstrap in practice, is presented.
statistics
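As an illustration of the kind of bootstrap calibration discussed in the abstract above, here is a minimal sketch assuming the curves $X_i$ are discretized onto a finite grid and using an illustrative cross-covariance test statistic; the Hilbert-space theory and the paper's actual statistics are not reproduced here.

```python
import numpy as np

def bootstrap_pvalue(X, Y, n_boot=999, seed=0):
    """X: (n, grid) discretized curves; Y: (n,) scalar responses."""
    rng = np.random.default_rng(seed)
    n = len(Y)

    def stat(y):
        # norm of the empirical cross-covariance between curves and response
        return np.linalg.norm((X - X.mean(axis=0)).T @ (y - y.mean()) / n)

    t_obs = stat(Y)
    resid = Y - Y.mean()  # residuals under H0: Y = b + eps (i.e., Theta = 0)
    t_boot = np.array([stat(Y.mean() + rng.choice(resid, size=n, replace=True))
                       for _ in range(n_boot)])
    return (1 + np.sum(t_boot >= t_obs)) / (1 + n_boot)

# Toy usage: Theta = 0 holds, so p-values should be roughly uniform.
rng = np.random.default_rng(3)
X = np.cumsum(rng.normal(size=(200, 50)), axis=1)  # rough Brownian-type curves
Y = 1.0 + rng.normal(size=200)
print(bootstrap_pvalue(X, Y))
```

Resampling centered residuals mimics the null model $Y=b+\varepsilon$, so the bootstrap replicates approximate the null distribution against which the observed statistic is compared.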
In this paper, we investigate a model describing induction hardening of steel. The related system consists of an energy balance, an ODE for the different phases of steel, and Maxwell's equations in a potential formulation. The existence of weak entropy solutions is shown by a suitable regularization and discretization technique. Moreover, we prove the weak-strong uniqueness of these solutions, i.e., that a weak entropy solution coincides with a classical solution emanating from the same initial data as long as the classical one exists. The weak entropy solution concept has advantages in comparison to the previously introduced weak solutions, e.g., it allows the inclusion of free energy functions with low regularity properties corresponding to phase transitions.
mathematics
The high internal quantum efficiency observed in higher plants remains an outstanding problem in understanding photosynthesis. Several approaches, such as quantum entanglement and quantum coherence, have been explored. However, none has yet drawn an analogy between superlattices and the geometrical structure of granal thylakoids in leaves. In this paper, we calculate the transmission coefficients and perform numerical simulations using parameters relevant to a stack of thylakoid discs. We then show that quantum resonant tunneling can occur for particles of low effective mass at the 680 nm and 700 nm incident wavelengths corresponding to energies at which photosynthesis occurs.
physics
Cyber threat intelligence (CTI) is being used to search for indicators of attacks that might have compromised an enterprise network for a long time without being discovered. To enable more effective analysis, CTI open standards have incorporated descriptive relationships showing how the indicators or observables are related to each other. However, these relationships are either completely overlooked in information gathering or not used for threat hunting. In this paper, we propose a system, called POIROT, which uses these correlations to uncover the steps of a successful attack campaign. We use kernel audits as a reliable source that covers all causal relations and information flows among system entities and model threat hunting as an inexact graph pattern matching problem. Our technical approach is based on a novel similarity metric which assesses an alignment between a query graph constructed out of CTI correlations and a provenance graph constructed out of kernel audit log records. We evaluate POIROT on publicly released real-world incident reports as well as reports of an adversarial engagement designed by DARPA, including ten distinct attack campaigns against different OS platforms such as Linux, FreeBSD, and Windows. Our evaluation results show that POIROT is capable of searching inside graphs containing millions of nodes and pinpointing the attacks in a few minutes, and the results serve to illustrate that CTI correlations could be used as robust and reliable artifacts for threat hunting.
computer science
Various new phenomena emerge in quantum materials under elastic deformations, such as hydrostatic or uniaxial stresses. In particular, using uniaxial strain or stress can help to tune or uncover specific structural or electronic orders in materials with multiple coexisting phases. Those phases may be associated with a quantum phase transition requiring a millikelvin environment combined with multiple experimental probes. Here, we describe our unique apparatus, which allows in situ tuning of strain in large samples inside a dilution refrigerator while the samples are monitored via an optical microscope. We describe the engineering details and show some typical results of characterizing superconducting strontium titanate under stress. This letter should serve as a practical reference for experts in ultra-low temperature experimental physics involving uniaxial stresses or strains.
physics
In this work, we investigate the correlation between morphology, composition, and the mechanical properties of metallic amorphous tungsten-oxygen and amorphous tungsten-oxide films deposited by Pulsed Laser Deposition. This correlation is investigated by the combined use of Brillouin Spectroscopy and the substrate curvature method. The stiffness of the films is strongly affected by both the oxygen content and the mass density. The elastic moduli show a decreasing trend as the mass density decreases and the oxygen-tungsten ratio increases. A plateau region is detected at the transition between metallic and oxide films. The compressive residual stresses, moderate stiffness and high local ductility that characterize compact amorphous tungsten-oxide films make them promising for applications involving thermal or mechanical loads. The coefficient of thermal expansion is quite high (i.e. 8.9 $\cdot$ 10$^{-6}$ K$^{-1}$), being strictly correlated with the amorphous structure and stoichiometry of the films. Under thermal treatments they show a quite low relaxation temperature (i.e. 450 K). They crystallize into the $\gamma$ monoclinic phase of WO$_3$ starting from 670 K, which increases the material stiffness by about 70\%.
condensed matter
In this paper we present a novel simulation technique for generating high quality images of any predefined resolution. This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission, with across-track resolutions of any chosen magnitude. In essence, our model extends a Generative Adversarial Network (GAN) architecture into a conditional recursive setting, which facilitates the continuity of the generated images. The data produced is continuous, realistic-looking, and can also be generated at least two times faster than the real speed of acquisition for sonars with higher resolutions, such as EdgeTech. The seabed topography can be fully controlled by the user. The visual assessment tests demonstrate that humans cannot distinguish the simulated images from real ones. Moreover, experimental results suggest that in the absence of real data, autonomous recognition systems can benefit greatly from training with the synthetic data produced by the R2D2-GANs.
electrical engineering and systems science
Modern data sets in many applications no longer comprise samples of real vectors in a real vector space but samples of much more complex structures, which may be represented as points in a space with a certain underlying geometric structure, namely a manifold. Manifold learning is an emerging field for learning this underlying structure. The study of manifold learning can be split into two main branches, namely dimension reduction and manifold fitting. With the aim of bridging statistics and geometry, we tackle the problem of manifold fitting in the ambient space. Inspired by the relation between the eigenvalues of the Laplace-Beltrami operator and the geometry of a manifold, we aim to find a small set of points that preserve the geometry of the underlying manifold. Based on this relationship, we extend the idea of subsampling to noisy datasets in high dimensional space and utilize the Moving Least Squares (MLS) approach to approximate the underlying manifold. We analyze the two core steps in our proposed method theoretically and also provide bounds for the MLS approach. Our simulation results and real data analysis demonstrate the superiority of our method in estimating the underlying manifold from noisy data.
statistics
The sheer volumes of data generated from earth observation and remote sensing technologies continue to make a major impact, propelling key geospatial applications into the dual data- and compute-intensive era. As a consequence, this rapid advancement poses new computational and data processing challenges. We implement a novel remote sensing data flow (RESFlow) for advanced machine learning and computing with massive amounts of remotely sensed imagery. The core contribution is partitioning massive amounts of data based on their spectral and semantic characteristics for distributed imagery analysis. RESFlow takes advantage of both a unified analytics engine for large-scale data processing and the availability of modern computing hardware to harness the acceleration of deep learning inference on expansive remote sensing imagery. The framework incorporates a strategy to optimize resource utilization across multiple executors assigned to a single worker. We showcase its deployment on computationally and data-intensive pixel-level labeling workloads. The pipeline invokes deep learning inference at three stages: during deep feature extraction, deep metric mapping, and deep semantic segmentation. These tasks impose compute-intensive and GPU resource-sharing challenges, motivating a parallelized pipeline for all execution steps. By taking advantage of Apache Spark, Nvidia DGX1, and DGX2 computing platforms, we demonstrate unprecedented compute speed-ups for deep learning inference on pixel labeling workloads; processing 21,028 terabytes of imagery data and delivering output maps at an area rate of 5.245 sq.km/sec, amounting to 453,168 sq.km/day - reducing a 28-day workload to 21 hours.
computer science
Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, or a bound in the multinomial case. We prove that our bound becomes tight as the marginal contribution of additional features decreases. Both binary and multinomial sparse models are solvable in time almost linear in problem size, representing a very small extra relative cost compared to classical naive Bayes. Numerical experiments on text data show that the naive Bayes feature selection method is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination, $l_1$-penalized logistic regression and LASSO, while being orders of magnitude faster. For a large data set with more than $1.6$ million training points and about $12$ million features, and with a non-optimized CPU implementation, our sparse naive Bayes model can be trained in less than 15 seconds.
computer science
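One way to see why a binary case like the one above can admit an exact solution is that a naive Bayes maximum-likelihood objective can decompose across features, so keeping the k features with the largest per-feature likelihood gain is optimal in that decomposition. The sketch below illustrates this idea with illustrative helper names; it is not the authors' code, and the multinomial bound is not reproduced.

```python
import numpy as np

def bernoulli_ll(s, n, eps=1e-12):
    """Maximized Bernoulli log-likelihood of s ones among n samples."""
    p = np.clip(s / n, eps, 1 - eps)
    return s * np.log(p) + (n - s) * np.log(1 - p)

def sparse_nb_select(X, y, k):
    """Keep the k binary features that gain most from class-dependent fits."""
    n = X.shape[0]
    ll_shared = bernoulli_ll(X.sum(axis=0), n)       # class-independent fit
    ll_split = np.zeros(X.shape[1])
    for c in (0, 1):                                 # class-dependent fits
        Xc = X[y == c]
        ll_split += bernoulli_ll(Xc.sum(axis=0), len(Xc))
    gains = ll_split - ll_shared                     # per-feature gain
    return np.argsort(gains)[::-1][:k]

# Toy usage: features 0-5 are informative, the remaining 994 are noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=2000)
X = (rng.random((2000, 1000)) < 0.3).astype(int)
X[:, :6] = (rng.random((2000, 6)) < np.where(y[:, None] == 1, 0.6, 0.2)).astype(int)
print(sparse_nb_select(X, y, k=6))  # should recover (most of) 0..5
```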
We reveal a deep connection between the alignment of dust grains by RAdiative Torques (RATs) and MEchanical Torques (METs) and the rotational disruption of grains introduced by Hoang et al. (2019). The disruption of grains happens if they have attractor points corresponding to high angular momentum (high-J). We introduce {\it fast disruption} for grains that are directly driven to the high-J attractor on a timescale of spin-up, and {\it slow disruption} for grains that are first moved to the low-J attractor and gradually transported to the high-J attractor by gas collisions. The enhancement of grain magnetic susceptibility via iron inclusions expands the parameter space for high-J attractors and increases the percentage of grains experiencing the disruption. The increase in the magnitude of RATs or METs can increase the efficiency of fast disruption, but, counter-intuitively, decreases the effect of slow disruption by forcing grains towards low-J attractors, whereas the increase in gas density accelerates disruption by transporting grains faster to the high-J attractor. We also show that disruption induced by RATs and METs depends on the angle between the magnetic field and the anisotropic flow. We find that pinwheel torques can increase the efficiency of {\it fast disruption} but may decrease the efficiency of {\it slow disruption} by delaying the transport of grains from the low-J to high-J attractors via gas collisions. The selective nature of the rotational disruption opens a possibility of observational testing of grain composition as well as physical processes of grain alignment.
astrophysics
We propose a class of quantum simulators for antiferromagnetic spin systems, based on coupled photonic cavities in the presence of two-photon driving and dissipation. By modeling the coupling between the different cavities through a hopping term with negative amplitude, we solve numerically the quantum master equation governing the dynamics of the open system and determine its non-equilibrium steady state. Under suitable conditions, the steady state can be described in terms of the degenerate ground states of an antiferromagnetic Ising model. When the geometry of the cavity array is incommensurate with the antiferromagnetic coupling, the steady state presents properties which bear full analogy with those typical of the spin liquid phases arising in frustrated magnets.
quantum physics
We present a new method to obtain more realistic initial conditions for N-body simulations of young star clusters. We start from the outputs of hydrodynamical simulations of molecular cloud collapse, in which star formation is modelled with sink particles. In our approach, we instantaneously remove gas from these hydrodynamical simulation outputs to mock the end of the gas-embedded phase, induced by stellar feedback. We then enforce a realistic initial mass function by splitting or joining the sink particles based on their mass and position. Such initial conditions contain more consistent information on the spatial distribution and the kinematical and dynamical states of young star clusters, which are fundamental to properly study these systems. For example, by applying our method to a set of previously run hydrodynamical simulations, we found that the early evolution of young star clusters is affected by gas removal and by the early dry merging of sub-structures. This early evolution can either quickly erase the rotation acquired by our (sub-)clusters in their embedded phase or "fuel" it with the angular momentum fed in by sub-structure mergers, before two-body relaxation acts on longer timescales.
astrophysics
We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called "trigger" words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense reasoning by constructing pair-wise contrastive auxiliary predictions. To this end, we leverage a mutual exclusive loss regularized by a contrastive margin. Our architecture is based on the recently introduced transformer network BERT, which exhibits strong performance on many NLP benchmarks. Empirical results show that our method alleviates the limitation of current supervised approaches for commonsense reasoning. This study opens up avenues for exploiting inexpensive self-supervision to achieve performance gains in commonsense reasoning tasks.
computer science
The Jarzynski estimator is a powerful tool that uses nonequilibrium statistical physics to numerically obtain partition functions of probability distributions. The estimator reconstructs partition functions from trajectories of simulated Langevin dynamics through the Jarzynski equality. However, the original estimator suffers from slow convergence because it depends on rare trajectories of stochastic dynamics. In this paper we present a method to significantly accelerate the convergence by introducing deterministic virtual trajectories generated in an augmented state space under Hamiltonian dynamics. We theoretically show that our approach achieves second-order acceleration compared to a naive estimator with Langevin dynamics, and zero-variance estimation on harmonic potentials. Moreover, we conduct numerical experiments on three multimodal distributions where the proposed method outperforms the conventional method, and provide theoretical explanations.
condensed matter
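For context on the baseline that the abstract above accelerates, the following is a minimal sketch of the naive Jarzynski estimator using overdamped Langevin dynamics on a harmonic trap whose stiffness is switched from k0 to k1; the protocol and parameter values are illustrative assumptions, and the paper's Hamiltonian virtual trajectories are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, k0, k1 = 1.0, 1.0, 4.0           # inverse temperature, trap stiffnesses
n_traj, n_steps, dt = 20000, 500, 1e-3

def jarzynski_free_energy():
    # Start in equilibrium for the initial stiffness k0.
    x = rng.normal(0.0, np.sqrt(1.0 / (beta * k0)), size=n_traj)
    work = np.zeros(n_traj)
    ks = np.linspace(k0, k1, n_steps + 1)
    for k_old, k_new in zip(ks[:-1], ks[1:]):
        work += 0.5 * (k_new - k_old) * x**2   # work done by switching k
        # Euler-Maruyama step of overdamped Langevin dynamics at stiffness k_new
        x += -k_new * x * dt + np.sqrt(2 * dt / beta) * rng.normal(size=n_traj)
    # Jarzynski equality: <exp(-beta W)> = exp(-beta dF)
    return -np.log(np.mean(np.exp(-beta * work))) / beta

dF_exact = 0.5 * np.log(k1 / k0) / beta    # exact result for a harmonic trap
print(f"estimate: {jarzynski_free_energy():.4f}  exact: {dF_exact:.4f}")
```

The convergence problem mentioned in the abstract is visible here: switching faster broadens the work distribution, and the exponential average becomes dominated by rare low-work trajectories.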
Most methods for identifying location effects in unreplicated fractional factorial designs assume homoscedasticity of the response values. However, dispersion effects in the underlying process may create heteroscedasticity in the response values. This heteroscedasticity may go undetected when identification of location effects is pursued. Indeed, methods for identifying dispersion effects typically require first modeling location effects. Therefore, it is imperative to understand how methods for identifying location effects function in the presence of undetected dispersion effects. We used simulation studies to examine the robustness of four different methods for identifying location effects---Box and Meyer (1986), Lenth (1989), Berk and Picard (1991), and Loughin and Noble (1997)---under models with one, two, or three dispersion effects of varying sizes. We found that the first three methods usually performed acceptably with respect to error rates and power, but the Loughin-Noble method lost control of the individual error rate when moderate-to-large dispersion effects were present.
statistics
We consider the potential of the Higgs boson pair production process to probe the light quark Yukawa couplings. We show within an effective theory description that the prospects of constraining enhanced first generation light quark Yukawa couplings in Higgs pair production are similar to other methods and channels, due to a coupling of two Higgs bosons to two fermions. Higgs pair production can hence also probe if the Higgs sector couples non-linearly to the light quark generations. For the second generation, we show that by employing charm tagging for the Higgs boson pair decaying to $c\bar{c}\gamma\gamma$, we can obtain similarly good prospects for measuring the charm Yukawa coupling as in other direct probes.
high energy physics phenomenology
Nonclassicality is studied through a quasidistribution of phases for the Raman process under both weak and strong pump conditions. In the former case, the solution is applicable to both resonant and off-resonant Raman processes, while a strong classical pump is assumed at resonance. Under weak pump conditions (i.e., in a complete quantum treatment), the phase difference of the phases described by the single nonclassical modes must be filtered to obtain a regular distribution function, which is not the case with a strong pump. The compound Stokes-phonon mode shows nonclassical features of phases under both weak and strong pumping, an effect similar to that of the compound pump-phonon (Stokes-anti-Stokes) mode with a weak (strong) pump. While the anti-Stokes-phonon mode is observed to be classical and coherence conserving in the strong pump case, the pump-Stokes mode shows similar behavior in a special case of the quantum treatment.
quantum physics
Wind farm control is an active and growing field of research in which the control actions of individual turbines in a farm are coordinated, accounting for inter-turbine aerodynamic interaction, to improve the overall performance of the wind farm and to reduce costs. The primary objectives of wind farm control include increasing power production, reducing turbine loads, and providing electricity grid support services. Additional objectives include improving reliability or reducing external impacts on the environment and communities. In 2019, a European research project (FarmConners) was started with the main goal of providing an overview of the state of the art in wind farm control, identifying consensus on research findings, data sets, and best practices, providing a summary of the main research challenges, and establishing a roadmap on how to address these challenges. Complementary to the FarmConners project, an IEA Wind Topical Expert Meeting (TEM) and two rounds of surveys among experts were performed. From these events we can clearly identify an interest in more public validation campaigns. Additionally, a deeper understanding of the mechanical loads and the uncertainties concerning the effectiveness of wind farm control are considered two major research gaps.
electrical engineering and systems science
These are significantly expanded lecture notes for the author's minicourse at MSRI in June 2012, as published in the MSRI lecture note series, with some minor additional corrections. In these notes, we give an example-motivated review of the deformation theory of associative algebras in terms of the Hochschild cochain complex as well as quantization of Poisson structures, and Kontsevich's formality theorem in the smooth setting. We then discuss quantization and deformation via Calabi-Yau algebras and potentials. Examples discussed include Weyl algebras, enveloping algebras of Lie algebras, symplectic reflection algebras, quasihomogeneous isolated hypersurface singularities (including du Val singularities), and Calabi-Yau algebras.
mathematics
This paper generalises the treatment of compositional game theory as introduced by the second and third authors with Ghani and Winschel, where games are modelled as morphisms of a symmetric monoidal category. From an economic modelling perspective, the existing notion of an open game is not expressive enough for many applications. This includes stochastic environments, stochastic choices by players, as well as incomplete information regarding the game being played. The current paper addresses these three issues all at once. To achieve this we make significant use of category theory, especially the 'coend optics' of Riley.
computer science
Reasoning about graphs evolving over time is a challenging concept in many domains, such as bioinformatics, physics, and social networks. We consider a common case in which edges can be short-term interactions (e.g., messaging) or long-term structural connections (e.g., friendship). In practice, long-term edges are often specified by humans. Human-specified edges can be both expensive to produce and suboptimal for the downstream task. To alleviate these issues, we propose a model based on temporal point processes and variational autoencoders that learns to infer temporal attention between nodes by observing node communication. As temporal attention drives between-node feature propagation, using the dynamics of node interactions to learn this key component provides more flexibility while simultaneously avoiding issues associated with human-specified edges. We also propose a bilinear transformation layer for pairs of node features instead of the concatenation typically used in prior work, and demonstrate its superior performance in all cases. In experiments on two datasets in the dynamic link prediction task, our model often outperforms the baseline model that requires a human-specified graph. Moreover, our learned attention is semantically interpretable and infers connections similar to actual graphs.
statistics
A kagome lattice is composed of corner-sharing triangles arranged on a honeycomb lattice such that each honeycomb bond hosts a kagome site while each kagome triangle encloses a honeycomb site. Such a close relation implies that the two lattices share common features. We predict here that a kagome crystal, similar to the honeycomb-lattice graphene, reacts to elastic strain in a unique way: the bulk electronic states in the vicinity of the Dirac points are reorganized by the strain-induced pseudomagnetic field into flat Landau levels, while the degenerate edge states of the undeformed crystal become separated in energy. When the strain is tuned continuously, the resulting scanning pseudomagnetic field gives rise to quantum oscillations in both the density of states (DOS) and the electric conductivity.
condensed matter
In this paper we investigate the problem of automatically naming pieces of assembly code, where by naming we mean assigning to an assembly function a string of words that would likely be assigned by a human reverse engineer. We formally and precisely define the framework in which our investigation takes place: we define the problem and provide reasonable justifications for the choices made in the design of the training and the tests. We performed an analysis on a large real-world corpus of nearly 9 million functions taken from more than 22k software packages. In this framework we test baselines coming from the field of Natural Language Processing (e.g., Seq2Seq networks and Transformer). Interestingly, our evaluation shows promising results, beating the state-of-the-art and reaching good performance. We also investigate the applicability of fine-tuning (i.e., taking a model already trained on a large generic corpus and retraining it for a specific task), a technique popular and well-known in the NLP field. Our results confirm that fine-tuning is effective even when neural networks are applied to binaries. We show that a model pre-trained on the aforementioned corpus has higher performance on specific domains (such as predicting names in system utilities, malware, etc.) when fine-tuned.
computer science
We introduce a variant of the birational symbols group of Kontsevich, Pestun, and the second author, and use this to define birational invariants of algebraic orbifolds.
mathematics
The importance of $S$-matrix unitarity in realistic meson spectroscopy is reviewed, both its historical development and more recent applications. First the effects of imposing $S$-matrix unitarity on meson resonances is demonstrated in both the elastic and the inelastic case. Then, the static quark model is revisited and its theoretical as well as phenomenological shortcomings are highlighted. A detailed account is presented of the mesons in the tables of the Particle Data Group that cannot be explained at all or only poorly in models describing mesons as pure quark-antiquark bound states. Next the earliest unitarised and coupled-channel models are revisited, followed by several examples of puzzling meson resonances and their understanding in a modern unitarised framework. Also, recent and fully unquenched lattice descriptions of such mesons are summarised. Finally, attention is paid to production processes, which require an unconventional yet related unitary approach. Proposals for further improvement are discussed.
high energy physics phenomenology
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training and using even a few of these (~5) improves performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks.
computer science
Based on the information communicated in press releases, and finally published towards the end of 2020 by Pfizer, Moderna and AstraZeneca, we have built up a simple Bayesian model, in which the main quantity of interest plays the role of {\em vaccine efficacy} (`$\epsilon$'). The resulting Bayesian Network is processed by a Markov Chain Monte Carlo (MCMC), implemented in JAGS interfaced to R via rjags. As outcome, we get several probability density functions (pdf's) of $\epsilon$, each conditioned on the data provided by the three pharma companies. The result is rather stable against large variations of the number of people participating in the trials and it is `somehow' in good agreement with the results provided by the companies, in the sense that their values correspond to the most probable value (`mode') of the pdf's resulting from MCMC, thus reassuring us about the validity of our simple model. However, we maintain that the number to be reported as `vaccine efficacy' should be the mean of the distribution, rather than the mode, as was already very clear to Laplace about 250 years ago (his `rule of succession' follows from the simplest problem of this kind). This is particularly important in the case in which the number of successes equals the number of trials, as happens with the efficacy against `severe forms' of infection, claimed by Moderna to be 100%. The implication of the various uncertainties for the predicted number of vaccinated infectees is also shown, using both MCMC and approximate formulae.
statistics
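The mean-versus-mode distinction made in the abstract above can be reproduced in the simplest conjugate version of the problem, the setting of Laplace's rule of succession. This sketch is not the authors' JAGS network; it collapses the efficacy question into a single Bernoulli count with a flat prior, purely to illustrate the reporting issue.

```python
from scipy.stats import beta

def posterior_summary(s, n):
    """Posterior mean and mode of a success probability p under a flat
    Beta(1, 1) prior, after s successes in n trials: Beta(s + 1, n - s + 1)."""
    a, b = s + 1, (n - s) + 1
    mean = a / (a + b)              # Laplace's rule of succession: (s+1)/(n+2)
    mode = s / n                    # coincides with the naive frequency
    interval = beta.ppf([0.025, 0.975], a, b)
    return mean, mode, interval

# Successes equal trials (the "100% efficacy" situation discussed above):
mean, mode, ci = posterior_summary(s=30, n=30)
print(f"mode = {mode:.3f}, mean = {mean:.3f}, 95% interval = {ci}")
```

With s = n the mode sits exactly at 100% while the mean stays strictly below it, which is why the abstract argues for reporting the mean.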
Flat bands imply a lack of itinerancy due to constraints that, in principle, result in anomalous behaviors in the presence of randomness. Using a molecular orbital (MO) representation of flat band systems, we introduce random MO models in which the degeneracy due to the flat bands is preserved even with randomness. The zero modes of chiral symmetric systems with sublattice imbalance belong to this class. After explaining the generic flat band construction by MOs, several examples are discussed with numerical demonstrations, such as the sawtooth lattice in one dimension and the hyper-Pyrochlore lattice in general $d$ dimensions, which extends the Kagome ($d=2$) and Pyrochlore ($d=3$) lattices to arbitrary dimensions.
condensed matter
Strongly correlated materials are expected to feature unconventional transport properties, such that charge, spin, and heat conduction are potentially independent probes of the dynamics. In contrast to charge transport, the measurement of spin transport in such materials is highly challenging. We observed spin conduction and diffusion in a system of ultracold fermionic atoms that realizes the half-filled Fermi-Hubbard model. For strong interactions, spin diffusion is driven by super-exchange and doublon-hole-assisted tunneling, and strongly violates the quantum limit of charge diffusion. The technique developed in this work can be extended to finite doping, which can shed light on the complex interplay between spin and charge in the Hubbard model.
condensed matter
Video is one of the most robust sources of information, and the consumption of online and offline videos has reached an unprecedented level in the last few years. A fundamental challenge of extracting information from videos is that a viewer has to go through the complete video to understand the context, as opposed to an image, where the viewer can extract information from a single frame. Apart from context understanding, it is almost impossible to create a universal summarized video for everyone, as everyone has their own bias toward keyframes; e.g., in a soccer game, a coach might consider those frames which contain information on player placement, techniques, etc., whereas a person with less knowledge about soccer will focus more on frames containing goals and the scoreboard. Therefore, tackling video summarization through a supervised learning path would require extensive personalized labeling of data. In this paper, we attempt to solve video summarization through unsupervised learning by employing traditional vision-based algorithmic methodologies for accurate feature extraction from video frames. We also propose a deep learning-based feature extraction followed by multiple clustering methods to find an effective way of summarizing a video by interesting keyframe extraction. We compare the performance of these approaches on the SumMe dataset and show that deep learning-based feature extraction performs better in the case of dynamic-viewpoint videos.
computer science
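A minimal sketch of the clustering-based keyframe selection described in the abstract above, assuming per-frame deep features have already been extracted (the nearest-to-centroid rule and function names are illustrative; the paper compares multiple clustering methods):

```python
import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(frame_features, n_keyframes):
    """frame_features: (n_frames, dim) array of per-frame deep features."""
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0)
    labels = km.fit_predict(frame_features)
    keyframes = []
    for c in range(n_keyframes):
        idx = np.flatnonzero(labels == c)
        # Representative frame: cluster member nearest to the centroid.
        d = np.linalg.norm(frame_features[idx] - km.cluster_centers_[c], axis=1)
        keyframes.append(idx[np.argmin(d)])
    return sorted(keyframes)

# Toy usage with random vectors standing in for CNN frame embeddings.
feats = np.random.default_rng(2).normal(size=(300, 128))
print(select_keyframes(feats, n_keyframes=5))
```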
Spin noise spectroscopy is emerging as a powerful technique for studying the dynamics of various spin systems, also beyond their thermal equilibrium and linear response. Here, we study spin fluctuations of room-temperature neutral atoms in a Bell-Bloom-type magnetometer. Driven by indirect pumping and undergoing a parametric excitation, this system is known to produce noise squeezing. Our measurements not only reveal a strong asymmetry in the noise distribution of the atomic signal quadratures at the magnetic resonance, but also provide insight into the mechanism behind its generation and evolution. In particular, a structure in the spectrum is identified which allows us to investigate the main dependencies and the characteristic timescales of the noise process. The results obtained are compatible with parametrically induced noise squeezing. Notably, the noise spectrum provides information on the spin dynamics even in regimes where the macroscopic atomic coherence is lost, effectively enhancing the sensitivity of the measurements. Our work promotes spin noise spectroscopy as a versatile technique for the study of noise squeezing in a wide range of spin-based magnetic sensors.
physics
It is shown that if $A \subseteq \mathbb{R}^3$ is a Borel set of Hausdorff dimension $\dim A \in (3/2,5/2)$, then for a.e. $\theta \in [0,2\pi)$ the projection $\pi_{\theta}(A)$ of $A$ onto the 2-dimensional plane orthogonal to $\frac{1}{\sqrt{2}}(\cos \theta, \sin \theta, 1)$ satisfies $\dim \pi_{\theta}(A) \geq \max\left\{\frac{4\dim A}{9} + \frac{5}{6},\frac{2\dim A+1}{3} \right\}$. This improves the bound of Oberlin and Oberlin, and of Orponen and Venieri, for $\dim A \in (3/2,5/2)$. More generally, a weaker lower bound is given for families of planes in $\mathbb{R}^3$ parametrised by curves in $S^2$ with nonvanishing geodesic curvature.
mathematics
Many real-world graphs or networks are temporal, e.g., in a social network persons only interact at specific points in time. This information directs dissemination processes on the network, such as the spread of rumors, fake news, or diseases. However, the current state-of-the-art methods for supervised graph classification are designed mainly for static graphs and may not be able to capture temporal information. Hence, they are not powerful enough to distinguish between graphs modeling different dissemination processes. To address this, we introduce a framework to lift standard graph kernels to the temporal domain. Specifically, we explore three different approaches and investigate the trade-offs between loss of temporal information and efficiency. Moreover, to handle large-scale graphs, we propose stochastic variants of our kernels with provable approximation guarantees. We evaluate our methods on a wide range of real-world social networks. Our methods beat static kernels by a large margin in terms of accuracy while still being scalable to large graphs and data sets. Hence, we confirm that taking temporal information into account is crucial for the successful classification of dissemination processes.
computer science
We prove that a randomly initialized neural network of *any architecture* has its Neural Tangent Kernel (NTK) converge to a deterministic limit, as the network widths tend to infinity. We demonstrate how to calculate this limit. In prior literature, the heuristic study of neural network gradients often assumes every weight matrix used in forward propagation is independent from its transpose used in backpropagation (Schoenholz et al. 2017). This is known as the *gradient independence assumption (GIA)*. We identify a commonly satisfied condition, which we call the *Simple GIA Check*, such that the NTK limit calculation based on GIA is correct. Conversely, when the Simple GIA Check fails, we show GIA can result in wrong answers. Our material here presents the NTK results of Yang (2019a) in a friendly manner and showcases the *tensor programs* technique for understanding wide neural networks. We provide reference implementations of infinite-width NTKs of recurrent neural networks, transformers, and batch normalization at https://github.com/thegregyang/NTK4A.
statistics
A number of proposed extensions of the Standard Model include new strongly interacting dynamics, in the form of SU(N) gauge fields coupled to various numbers of fermions. Often, these extensions allow N = 3 as a plausible choice, or even require N = 3, such as in twin Higgs models, where the new dynamics is a "copy" of QCD. However, the fermion masses in such a sector are typically different from (often heavier than) the ones of real-world QCD, relative to the confinement scale. Many of the strong interaction masses and matrix elements for SU(3) at heavy fermion masses have already been computed on the lattice, typically as a byproduct of the approach to the physical point of real QCD. We provide a summary of these relevant results for the phenomenological community.
high energy physics phenomenology
Unsharp measurements play an increasingly important role in quantum information theory. In this paper, we study a three-party prepare-transform-measure experiment with unsharp measurements based on $ 3 \rightarrow 1 $ sequential random access codes (RACs). We derive the optimal trade-off between the two correlation witnesses in $ 3 \rightarrow 1 $ sequential quantum random access codes (QRACs), and use the result to complete the self-testing of quantum preparations, instruments and measurements for the three sequential parties. We also give the upper and lower bounds of the sharpness parameter to complete the robustness analysis of the self-testing scheme. In addition, we find that a classical correlation witness violation based on $3 \rightarrow 1 $ sequential RACs cannot be obtained by both correlation witnesses simultaneously. This means that if the second party uses strong unsharp measurements to overcome the classical upper bound, the third party cannot do so even with sharp measurements. Finally, we give an analysis and comparison of the random number generation efficiency under different sharpness parameters based on the determinant value, and on $2 \rightarrow 1 $ and $3 \rightarrow 1 $ QRACs separately. This letter sheds new light on generating random numbers among multiple parties in a semi-device-independent framework.
quantum physics
We develop a folding approach to study two-dimensional symmetry-enriched topological (SET) phases with the mirror reflection symmetry. Our folding approach significantly transforms the mirror SETs, such that their properties can be conveniently studied through previously known tools: (i) it maps the nonlocal mirror symmetry to an onsite $\mathbb{Z}_2$ layer-exchange symmetry after folding the SET along the mirror axis, so that we can gauge the symmetry; (ii) it maps all mirror SET information into the boundary properties of the folded system, so that they can be studied by the anyon condensation theory---a general theory for studying gapped boundaries of topological orders; and (iii) it makes the mirror anomalies explicitly exposed in the boundary properties, i.e., strictly 2D SETs and those that can only live on the surface of a 3D system can be easily distinguished through the folding approach. With the folding approach, we derive a set of physical constraints on data that describes mirror SET, namely mirror permutation and mirror symmetry fractionalization on the anyon excitations in the topological order. We conjecture that these constraints may be complete, in the sense that all solutions are realizable in physical systems. Several examples are discussed to justify this. Previously known general results on the classification and anomalies are also reproduced through our approach.
condensed matter
Total eclipses permit a deep analysis of both the inner and the outer parts of the corona using the continuum white-light (W-L) radiation from electrons (K-corona), the superposed spectrum of forbidden emission lines from ions (E-corona) and the dust component with F-lines (F-corona). By sufficiently dispersing the W-L spectrum, the Fraunhofer (F) spectrum of the dust component of the corona appears and the continuum Thomson radiation can be evaluated. The superposed emission lines of ions with different degrees of ionization are studied to allow the measurement of temperatures, non-thermal velocities, Doppler shifts and abundances. We describe a slit spectroscopic experiment of high spectral resolution for providing an analysis of the most typical parts of the quasi-minimum type corona observed during the total solar eclipse of Aug. 21, 2017, observed from Idaho, USA. Streamers, active region enhancements and polar coronal holes (CHs) are well measured using deep spectra. 60 spectra were obtained during totality with a long slit covering +/-3 solar radii, in the spectral range of 510 to 590 nm. The K+F continuum corona is well exposed up to 2 solar radii. The F-corona can be measured even at the solar limb. New weak emission lines were discovered or confirmed. The rarely observed high-FIP ArX line is recorded almost everywhere; the FeXIV and NiXIII lines are well recorded everywhere. For the first time, hot lines are also measured inside the CH regions. The radial variations of the non-thermal turbulent velocities of the lines do not show a great departure from the average values. No significantly large Doppler shifts are seen anywhere in the inner and middle corona. The wings of the FeXIV line show some non-Gaussianity.
astrophysics
We introduce a local radiative heat-pumping effect between two bodies in a many-body system, obtained by periodically modulating both the temperature and the position of an intermediate object using an external source of energy. We show that the magnitude and the sign of energy flow can be tuned by changing the oscillation amplitude and dephasing of the two parameters. This many-body effect paves the way for an efficient and active control of heat fluxes at the nanoscale.
condensed matter
Wind energy resource quantification, air pollution monitoring, and weather forecasting all rely on rapid, accurate measurement of local wind conditions. Visual observations of the effects of wind---the swaying of trees and flapping of flags, for example---encode information regarding local wind conditions that can potentially be leveraged for visual anemometry that is inexpensive and ubiquitous. Here, we demonstrate a coupled convolutional neural network and recurrent neural network architecture that extracts the wind speed encoded in visually recorded flow-structure interactions of a flag and tree in naturally occurring wind. Predictions for wind speeds ranging from 0.75-11 m/s showed agreement with measurements from a cup anemometer on site, with a root-mean-squared error approaching the natural wind speed variability due to atmospheric turbulence. Generalizability of the network was demonstrated by successful prediction of wind speed based on recordings of other flags in the field and in a controlled wind tunnel test. Furthermore, physics-based scaling of the flapping dynamics accurately predicts the dependence of the network performance on the video frame rate and duration.
statistics
We investigate collisional shifts of spectral lines involving excited hydrogenic states, where van der Waals coefficients have recently been shown to have large numerical values when expressed in atomic units. Particular emphasis is laid on the recent hydrogen 2S-4P experiment (and an ongoing 2S-6P experiment) in Garching, but numerical input data are provided for other transitions (e.g., involving S states) as well. We show that the frequency shifts can be described, to sufficient accuracy, in the impact approximation. The pressure-related effects were separated into two parts: (i) collisions of atoms inside the beam, and (ii) collisions of the atoms in the atomic beam with the residual background gas. The latter contains both atomic and molecular hydrogen. The dominant effect of intra-beam collisions is evaluated by a Monte Carlo simulation, taking the geometry of the experimental apparatus into account. While, in the Garching experiment, the collisional shift is on the order of 10 Hz, and thus negligible, it can depend decisively on the experimental conditions. We present input data which can be used to describe the effect for other transitions of current and planned experimental interest.
physics
Parameterized quantum circuits serve as ans\"{a}tze for solving variational problems and provide a flexible paradigm for programming near-term quantum computers. Ideally, such ans\"{a}tze should be highly expressive so that a close approximation of the desired solution can be accessed. On the other hand, the ansatz must also have sufficiently large gradients to allow for training. Here, we derive a fundamental relationship between these two essential properties: expressibility and trainability. This is done by extending the well established barren plateau phenomenon, which holds for ans\"{a}tze that form exact 2-designs, to arbitrary ans\"{a}tze. Specifically, we calculate the variance in the cost gradient in terms of the expressibility of the ansatz, as measured by its distance from being a 2-design. Our resulting bounds indicate that highly expressive ans\"{a}tze exhibit flatter cost landscapes and therefore will be harder to train. Furthermore, we provide numerics illustrating the effect of expressiblity on gradient scalings, and we discuss the implications for designing strategies to avoid barren plateaus.
quantum physics
We propose a scheme for the construction of charge and spin linear-response functions of an interacting electronic system via quantum phase estimation and statistical sampling on a quantum computer. By using the unitary decomposition of electronic operators to avoid the difficulty due to their non-unitarity, we provide circuits equipped with ancillae for the probabilistic preparation of qubit states on which the necessary non-unitary operators have acted. We perform simulations of such construction of the response functions for C2 and N2 molecules, comparing them with accurate ones based on full configuration interaction calculations. It is found that the accurate detection of subtle structures coming from the weak poles in the response functions requires a large number of measurements.
quantum physics
We treat here interaction round the face (IRF) solvable lattice models. We study the algebraic structures underlying such models. For the three-block case, we show that the Yang-Baxter equation is obeyed if and only if the Birman--Murakami--Wenzl (BMW) algebra is obeyed. We prove this by an algebraic expansion of the Yang-Baxter equation (YBE). For four-block IRF models, we show that the BMW algebra is also obeyed, apart from the skein relation, which is different. This indicates that the BMW algebra is a sub-algebra for all models with three or more blocks. We find additional relations for the four-block algebra using the expansion of the YBE. The four-block result, that is, the BMW algebra and the four-block skein relation, is enough to define a new knot invariant, which depends on three arbitrary parameters, important in knot theory.
high energy physics theory
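For readers unfamiliar with the algebra referenced above, one common presentation of the BMW relations is sketched below; conventions for the parameters $l$, $m$ and the signs vary across the literature, so this should be read as a representative form rather than the one used in the paper.

```latex
% Braid relations among the generators G_i:
G_i G_{i\pm1} G_i = G_{i\pm1} G_i G_{i\pm1}, \qquad
G_i G_j = G_j G_i \quad (|i-j| \ge 2).
% Skein relation defining the auxiliary elements E_i
% (this is the relation that changes in the four-block case):
G_i - G_i^{-1} = m\,(1 - E_i).
% Delooping/tangle relations:
G_i E_i = E_i G_i = l^{-1} E_i, \qquad
E_i G_{i\pm1}^{\pm1} E_i = l^{\pm1} E_i.
```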
The Dirac delta function can be defined as the limit of a rectangular function of unit area as the width of the rectangle decreases to zero, and in quantum mechanics the eigenvectors of the position operator take the form of the delta function. When discussing position measurement in quantum mechanics, one is prompted by the mathematical convention to use a rectangular wave function of sufficiently narrow width to approximate the delta function, in order to make the state of position physical. We argue that such an approximation is improper in physics, because during the position measurement the energy transfer to the particle might be infinitely large. Continuous and square-integrable functions with a sharp peak and sufficiently narrow width are then better approximations of the delta function for representing physical states of position. When the slit experiment is taken as an apparatus of position measurement, no matter what potential is used to model the slit, only the ground state of the slit-dependent wave function matters.
quantum physics
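The energy argument above can be made concrete with a standard textbook comparison (not taken from the paper): a normalized rectangular state has divergent kinetic energy because of its sharp edges, whereas a narrow Gaussian of comparable width does not.

```latex
% Rectangular approximation: the edge discontinuities make <p^2> diverge,
\psi_{\mathrm{rect}}(x) = a^{-1/2} \ \ (0 \le x \le a)
\;\Rightarrow\;
\psi_{\mathrm{rect}}'(x) = a^{-1/2}\bigl[\delta(x) - \delta(x-a)\bigr]
\;\Rightarrow\;
\langle p^2 \rangle = \hbar^2\!\int |\psi'|^2\,dx = \infty .
% Narrow Gaussian: delta-like as sigma -> 0, yet finite energy for any sigma > 0,
\psi_\sigma(x) = (2\pi\sigma^2)^{-1/4}\, e^{-x^2/(4\sigma^2)}
\;\Rightarrow\;
\langle p^2 \rangle = \frac{\hbar^2}{4\sigma^2} < \infty .
```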
Magnetic fields associated with currents flowing in tissue can be measured non-invasively by means of zero-field-encoded ultra-low-field magnetic resonance imaging (ULF MRI), enabling current density imaging (CDI) and possibly conductivity mapping of human head tissues. Since currents applied to a human are limited by safety regulations and only a small fraction of the current passes through the relatively high-resistive skull, a sufficient signal-to-noise ratio (SNR) may be difficult to obtain with this method. In this work, we study the relationship between the image SNR and the SNR of the field reconstructions from zero-field-encoded data. We evaluate these results for two existing ULF MRI scanners, one ultra-sensitive single-channel system and one whole-head multi-channel system, by simulating the sequences necessary for current-density reconstruction. We also derive realistic current-density and magnetic-field estimates from finite-element-method simulations based on a three-compartment head model. We found that existing ULF-MRI systems reach sufficient SNR to detect intra-cranial current distributions with statistical uncertainty below 10%. However, our simulations also reveal that image artifacts influence the reconstruction quality. Further, they indicate that current-density reconstruction in the scalp requires a resolution finer than 5 mm and demonstrate that the necessary sensitivity coverage can be accomplished by multi-channel devices.
physics
Heavily doped semiconductors have emerged as tunable low-loss plasmonic materials at mid-infrared frequencies. In this article we investigate nonlinear optical phenomena associated with a high concentration of free electrons. We use a hydrodynamic description to study free-electron dynamics in heavily doped semiconductors up to third-order terms, which are usually negligible for noble metals. We find that cascaded third-harmonic generation due to second-harmonic signals can be as strong as direct third-harmonic generation contributions, even when the second-harmonic generation efficiency is zero. Moreover, we show that, when coupled with plasmonic enhancement, free-electron nonlinearities could be up to two orders of magnitude larger than conventional semiconductor nonlinearities. Our study might open a new route toward nonlinear optical integrated devices at mid-infrared frequencies.
physics
We propose encoder-centric stepwise models for extractive summarization using structured transformers -- HiBERT and Extended Transformers. We enable stepwise summarization by injecting the previously generated summary into the structured transformer as an auxiliary sub-structure. Our models are not only efficient in modeling the structure of long inputs, but they also do not rely on task-specific redundancy-aware modeling, making them general-purpose extractive content planners for different tasks. When evaluated on CNN/DailyMail extractive summarization, the stepwise models achieve state-of-the-art performance in terms of ROUGE without any redundancy-aware modeling or sentence filtering. This also holds true for Rotowire table-to-text generation, where our models surpass previously reported metrics for content selection, planning and ordering, highlighting the strength of stepwise modeling. Of the two structured transformers we test, the stepwise Extended Transformer provides the best performance across both datasets and sets a new standard for these challenges.
computer science
We develop computationally affordable and encoding-independent gradient evaluation procedures for unitary coupled-cluster type operators, applicable on quantum computers. We show that, within our framework, the gradient of an expectation value with respect to a parameterized n-fold fermionic excitation can be evaluated by four expectation values of similar form and size, whereas most standard approaches based on the direct application of the parameter-shift rule come with an associated cost of O(2^(2n)) expectation values. For real wavefunctions, this cost can be further reduced to two expectation values. Our strategies are implemented within the open-source package tequila and allow blackboard-style construction of differentiable objective functions. We illustrate initial applications for electronic ground and excited states.
quantum physics
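To give a flavor of the blackboard-style interface mentioned above, here is a minimal toy example using tequila: a single-qubit objective rather than one of the paper's n-fold fermionic excitations. The calls shown are the package's standard ones, but exact API details may differ between versions.

```python
import tequila as tq

# toy objective: E(a) = <0| Ry(a)^dagger Z Ry(a) |0> = cos(a)
U = tq.gates.Ry(angle="a", target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(H=H, U=U)

# the gradient is itself an objective built from expectation values
dE = tq.grad(E, "a")

print(tq.simulate(E, variables={"a": 0.5}))   # ~ cos(0.5)
print(tq.simulate(dE, variables={"a": 0.5}))  # ~ -sin(0.5)
```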
Forecasting of future snow depths is useful for many applications, such as road safety, winter sport activities, avalanche risk assessment and hydrology. Motivated by the lack of statistical forecast models for snow depth, in this paper we present a set of models to fill this gap. First, we present a model for short-term forecasts, assuming that reliable weather forecasts of air temperature and precipitation are available. The covariates are included nonlinearly in the model, following basic physical principles of snowfall, snow aging and melting. Due to the large number of observations with snow depth equal to zero, we use a zero-inflated gamma regression model, which is commonly used for similar applications such as precipitation. We also make long-term forecasts of snow depth, extending much further into the future than traditional weather forecasts of temperature and precipitation. The long-term forecasts are based on fitting models to historic time series of precipitation, temperature and snow depth. We fit the models to data from three locations in Norway with different climatic properties. Forecasting five days into the future, the results showed that, given reliable weather forecasts of temperature and precipitation, the forecast errors in absolute value were between 3 and 7 cm for different locations in Norway. Forecasting three weeks into the future, the forecast errors were between 7 and 16 cm.
statistics
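As a sketch of the likelihood machinery behind a zero-inflated gamma model like the one above (covariate effects omitted for brevity; the parameterization below is an assumption, not the authors' exact model):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def zig_nll(theta, y):
    """Negative log-likelihood of a zero-inflated gamma model.
    theta = (logit of zero probability, log shape, log scale)."""
    pi = 1.0 / (1.0 + np.exp(-theta[0]))   # P(snow depth == 0)
    k, s = np.exp(theta[1]), np.exp(theta[2])
    zero = (y == 0)
    ll = zero.sum() * np.log(pi)                              # point mass at zero
    ll += (~zero).sum() * np.log1p(-pi)                       # positive part ...
    ll += stats.gamma.logpdf(y[~zero], a=k, scale=s).sum()    # ... gamma density
    return -ll

y = np.array([0.0, 0.0, 12.0, 35.0, 7.0, 0.0, 50.0])  # toy snow depths (cm)
fit = minimize(zig_nll, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
print(fit.x)  # fitted (logit pi, log shape, log scale)
```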
Using the fermionic basis discovered in the 6-vertex model, we derive exact formulas for the expectation values of local operators of the sine-Gordon theory in any eigenstate of the Hamiltonian. We tested our formulas in the pure multi-soliton sector of the theory. In the ultraviolet limit, we checked our results against Liouville 3-point functions, while in the infrared limit, we evaluated our formulas in the semi-classical limit and compared them, up to 2-particle contributions, against the semi-classical limit of the previously conjectured LeClair-Mussardo type formula. Complete agreement was found in both cases.
high energy physics theory
We study spin transport through a suspended Cu channel by an electrical non-local 4-terminal measurement for future spin mechanics applications. A magnetoresistance due to spin transport through the suspended Cu channel is observed, and its magnitude is comparable to that of a conventional fixed Cu lateral spin valve. The spin diffusion length in the suspended Cu channel is estimated to be 340 nm at room temperature from the dependence of the spin signal on the distance between the ferromagnetic injector and detector electrodes. This value is found to be slightly shorter than in a fixed Cu channel. The decrease in the spin diffusion length in the suspended Cu channel is attributed to an increase in spin scattering originating from naturally oxidized Cu at the bottom of the Cu channel.
physics
We describe the effect of the marginal deformation of the $\cal N = (4, 4)$ superconformal $(T^4)^N /S_N$ orbifold theory on a doublet of R-neutral twisted Ramond fields, in the large-$N$ approximation. Our analysis of their dynamics explores the explicit analytic form of the genus-zero four-point function involving two R-neutral Ramond fields with two deformation operators. We compute this correlation function by using two different approaches: the Lunin-Mathur path-integral technique and the stress-tensor method. From its short-distance limits, we extract the OPE structure constants and the scaling dimensions of new non-BPS Ramond states. In the deformed SCFT, at second order in the deformation parameter, the two-point function of the R-neutral twisted Ramond fields gets UV-divergent contributions. The implementation of an appropriate regularization procedure, together with further renormalization of the bare (undeformed) fields, furnishes well-defined corrections to this two-point function and to the bare conformal weights of the considered Ramond fields. The fields with maximal twist $N$, however, remain BPS-protected, keeping unchanged the values of their bare conformal dimensions.
high energy physics theory
Understanding crowd mobility behaviors would be a key enabler for crowd management in smart cities, benefiting various sectors such as public safety, tourism and transportation. This article discusses the existing challenges and the recent advances to overcome them, allowing information to be shared across stakeholders of crowd management through Internet of Things (IoT) technologies. The article proposes the use of the new federated interoperable semantic IoT platform (FIESTA-IoT), which is considered a "system of systems". The platform can support various IoT applications for crowd management in smart cities. In particular, the article discusses two integrated IoT systems for crowd mobility: 1) a Crowd Mobility Analytics System, and 2) a Crowd Counting and Location System (from the SmartSantander testbed). Pilot studies are conducted in Gold Coast, Australia and Santander, Spain to fulfill various requirements, such as providing online and offline crowd mobility analyses with various sensors in different regions. The analyses provided by these systems are shared across applications in order to provide insights and support crowd management in smart city environments.
electrical engineering and systems science
The high-energy universe has revealed that energetic particles are ubiquitous in the cosmos and play a vital role in the cultivation of cosmic environments on all scales. Energetic particles in our own galaxy, galactic cosmic rays (GCRs), engage in a complex interplay with the interstellar medium and magnetic fields in the galaxy, giving rise to many of its key characteristics. This White Paper is the first of a two-part series highlighting the most well-known high-energy cosmic accelerators and contributions that MeV gamma-ray astronomy will bring to understanding their energetic particle phenomena. The focus of this white paper is galactic cosmic rays, supernova remnants, protostellar jets and superbubbles, and colliding wind binaries.
astrophysics
Children learn word meanings by tapping into the commonalities across different situations in which words are used and overcome the high level of uncertainty involved in early word learning experiences. In a set of computational studies, we show that to successfully learn word meanings in the face of uncertainty, a learner needs to use two types of competition: words competing for association to a referent when learning from an observation and referents competing for a word when the word is used.
computer science
The Isothermal Close Space Sublimation (ICSS) technique was used for embedding porous silicon (PS) films with ZnTe. We studied the influence of the preparation conditions, and in particular of a chemical etching step before the ZnTe growth, on the composition profile and final porosity of ZnTe-embedded PS. The structure of the embedded material was determined by X-ray diffraction analysis, while the thickness of the samples was determined by scanning electron microscopy (SEM). Rutherford backscattering (RBS) and Energy Dispersive (EDS) spectrometries allowed us to determine the composition profiles. We conclude that the etching of the PS surface before the ZnTe growth has two main effects: increasing the porosity and enhancing the reactivity of the inner surface. It was observed that both effects benefit the filling process of the pores. Since RBS and EDS cannot detect the porosity in the present system, we explore the evolution of porosity by fitting the UV-VIS reflectance spectra. The atomic percent determined with this method was in relatively good agreement with that obtained from the RBS and EDS measurements.
condensed matter
We demonstrate a method of concentrating and patterning biological cells on a chip, exploiting the confluence of electric and thermal fields, without necessitating the use of any external heating or illuminating source. The technique simply employs two parallel plate electrodes and an insulating layer over the bottom electrode, with a drilled insulating layer for inducing localized variations in the thermal field. A strong induced electric field, in the process, penetrates through the narrow hole and generates highly non-uniform heating, which in turn results in gradients in electrical properties and induces mobile charges to impose directional fluid flow. The toroidal vortices, induced by secondary electrokinetic forces originating from temperature-dependent electrical property variations, transport the suspended cells towards a hot-spot site of the chip for rapid concentration and patterning into differently shaped clusters based on pre-designed conditions, without exceeding safe temperature limits, thereby avoiding damage to thermally labile biological samples. We characterize the efficacy of the cell trapping process for two different biological entities, namely Escherichia coli bacteria and yeast cells. These results may be of profound importance towards developing novel biomedical microdevices for drug discovery, antibiotic resistance assessment and medical diagnostics.
physics
The concept of the generalized continuity equation (GCE) was recently introduced in [J. Phys. A: Math. and Theor. {\bf 52}, 1552034 (2019)], and was derived in the context of $N$ independent Schr\"{o}dinger systems. The GCE is induced by a symmetry transformation which mixes the states of these systems, even though the $N$-system Lagrangian does not. As the $N$-system Schr\"{o}dinger Lagrangian is not invariant under such a transformation, the GCE will involve source terms which, under certain conditions, vanish and lead to conserved currents. These conditions may hold globally or locally in a finite domain, leading to globally or locally conserved currents, respectively. In this work, we extend this idea to the case of arbitrary $SU(N)$-transformations and we show that a similar GCE emerges for $N$ systems in the Dirac dynamics framework. The emerging GCEs and the conditions which lead to the attendant conservation laws provide a rich phenomenology and potential use for the preparation and control of fermionic states.
quantum physics
Nonlocal compensation of magnetic damping by spin injection has been theoretically shown to establish dynamic, noncollinear magnetization states that carry spin currents over micrometer distances. Such states can be generically referred to as dissipative exchange flows (DEFs) because spatially diffusing spin currents are established by the mutual exchange torque exerted by neighboring spins. Analytical studies to date have been limited to the weak spin injection assumption whereby the equation of motion for the magnetization is mapped to hydrodynamic equations describing spin flow and then linearized. Here, we analytically and numerically study easy-plane ferromagnetic channels subject to spin injection of arbitrary strength at one extremum under a unified hydrodynamic framework. We find that DEFs generally exhibit a nonlinear profile along the channel accompanied by a nonlinear frequency tuneability. At large injection strengths, we fully characterize a novel magnetization state we call a contact-soliton DEF (CS-DEF) composed of a stationary soliton at the injection site, which smoothly transitions into a DEF and exhibits a negative frequency tuneability. The transition between a DEF and a CS-DEF occurs at the maximum precessional frequency and coincides with the Landau criterion: a subsonic to supersonic flow transition. Leveraging the hydraulic-electrical analogy, the current-voltage characteristics of a nonlinear DEF circuit are presented. Micromagnetic simulations of nanowires that include magnetocrystalline anisotropy and non-local dipole fields are in qualitative agreement with the analytical results. The magnetization states found here along with their characteristic profile and spectral features provide quantitative guidelines to pursue an experimental demonstration of DEFs in ferromagnetic materials and establishes a unified description for long-distance spin transport.
condensed matter
We prove that every connected locally finite regular graph has a double cover which is isomorphic to a Schreier graph.
mathematics
We prove that for almost every initial data $(u_0,u_1) \in H^s \times H^{s-1}$ with $s > \frac{p-3}{p-1}$ there exists a global weak solution to the supercritical semilinear wave equation $\partial _t^2u - \Delta u +|u|^{p-1}u=0$, where $p>5$, in both $\mathbb{R}^3$ and $\mathbb{T}^3$. This improves, in a probabilistic framework, the classical result of Strauss, who proved global existence of weak solutions for $H^1 \times L^2$ initial data. The proof relies on techniques introduced by T. Oh and O. Pocovnicu based on the pioneering work of N. Burq and N. Tzvetkov. We also improve the global well-posedness result of C. Sun and B. Xia for the subcritical regime $p<5$ to the endpoint $s=\frac{p-3}{p-1}$.
mathematics
In this paper, we propose a simple yet effective approach, named Point Adversarial Self Mining (PASM), to improve recognition accuracy in facial expression recognition. Unlike previous works focusing on designing specific architectures or loss functions to solve this problem, PASM boosts the network capability by simulating human learning processes: providing updated learning materials and guidance from more capable teachers. Specifically, to generate new learning materials, PASM leverages a point adversarial attack method and a trained teacher network to locate the most informative position related to the target task, generating harder learning samples to refine the network. The searched position is highly adaptive since it considers both the statistical information of each sample and the teacher network capability. Besides being provided new learning materials, the student network also receives guidance from the teacher network. After the student network finishes training, it changes its role and acts as a teacher, generating new learning materials and providing stronger guidance to train a better student network. The adaptive learning-material generation and teacher/student update can be conducted more than once, improving the network capability iteratively. Extensive experimental results validate the efficacy of our method over existing state-of-the-art methods for facial expression recognition.
computer science
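The teacher-guided point search described above can be sketched schematically in PyTorch. This is a loose sketch of the idea, not the authors' exact PASM algorithm: the saliency criterion and the single-pixel perturbation are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def point_adversarial_sample(teacher, x, y, eps=0.3):
    """Perturb only the most task-informative pixel, located via teacher saliency."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(teacher(x), y)
    loss.backward()
    sal = x.grad.abs().sum(dim=1)              # (B, H, W) saliency map
    flat = sal.flatten(1).argmax(dim=1)        # most informative position per image
    r, c = flat // sal.size(2), flat % sal.size(2)
    x_adv = x.detach().clone()
    for b in range(x.size(0)):                 # single-point sign perturbation
        x_adv[b, :, r[b], c[b]] += eps * x.grad[b, :, r[b], c[b]].sign()
    return x_adv.clamp(0, 1)

# Iterative role swap (schematic): train the student on clean + adversarial
# samples with the teacher's guidance, then promote the student to teacher
# and repeat, improving the network capability iteration by iteration.
```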
Low-frequency radio observations are revealing an increasing number of diffuse synchrotron sources from galaxy clusters, predominantly in the form of radio halos or radio relics. The existence of this diffuse synchrotron emission indicates the presence of relativistic particles and magnetic fields. It is still an open question which mechanisms exactly are responsible for the population of relativistic electrons driving this synchrotron emission. The LOFAR Two-metre Sky Survey Deep Fields offer a unique view of this problem. Reaching noise levels below 30 $\mu$Jy/beam, these are the deepest images made at the low frequency of 144 MHz. This paper presents a search for diffuse emission in galaxy clusters in the first data release of the LOFAR Deep Fields. We detect a new high-redshift radio halo with a flux density of $8.9 \pm 1.0$ mJy and corresponding luminosity of $P_{144\mathrm{MHz}}=(3.6 \pm 0.6)\times10^{25}$ W Hz$^{-1}$ in an X-ray detected cluster at $z=0.77$ with a mass estimate of $M_{500} = 3.3_{-1.7}^{+1.1} \times 10^{14} M_\odot.$ Deep upper limits are placed on clusters with non-detections. We compare the results to the correlation between halo luminosity and cluster mass derived for radio halos found in the literature. This study is one of the few to find diffuse emission in low-mass ($M_{500} < 5\times10^{14} M_\odot$) systems and shows that deep low-frequency observations of galaxy clusters are fundamental for opening up a new part of parameter space in the study of non-thermal phenomena in galaxy clusters.
astrophysics
Low-energy neutrinos are clean messengers from supernova explosions and probably carry unique insights into the process of stellar evolution. We estimate the expected number of events considering coherent elastic scattering of neutrinos off silicon nuclei, as would happen in Charge Coupled Device (CCD) detectors. The number of expected events, integrated over a window of about 18 s, is $\sim$ 4 if we assume 10 kg of silicon and a supernova 1 kpc away. For a distance similar to that of the red supergiant Betelgeuse, the number of expected events increases to $\sim$ 30 - 120, depending on the supernova model. We argue that silicon detectors can be effective for supernova neutrinos, and might possibly distinguish between models for certain target masses and distances.
high energy physics phenomenology
We calculate the surface temperature and the resulting brightness of sub-relativistic objects moving through the Solar system due to collisional heating by gas and radiative heating by solar radiation. The thermal emission from objects of size $\gtrsim 100$ m and speed of $\gtrsim 0.1c$, can be detected by the upcoming {\it James Webb Space Telescope} out to a distance of $\sim 100$ au. Future surveys could therefore set interesting limits on the abundance of fast-moving interstellar objects or spacecraft.
astrophysics
Information freshness in IoT-based status update systems has recently been studied through the Age of Information (AoI) and Peak AoI (PAoI) performance metrics. In this paper, we study a discrete-time server arising in multi-source IoT systems which accepts incoming information packets from multiple information sources to be forwarded to a remote monitor for status update purposes. Under the assumption of Bernoulli information packet arrivals and a common geometric service time distribution across all the sources, we numerically obtain the exact per-source distributions of AoI and PAoI in matrix-geometric form for three different queueing disciplines: i) Non-Preemptive Bufferless (NPB), ii) Preemptive Bufferless (PB), and iii) Non-Preemptive Single Buffer with Replacement (NPSBR). The proposed numerical algorithm employs the theory of Discrete-Time Markov Chains (DTMC) of Quasi-Birth-Death (QBD) type and is matrix-analytic, i.e., the algorithm is based on numerically stable and efficient vector-matrix operations. Numerical examples are provided to validate the accuracy and effectiveness of the proposed queueing model. We also present a numerical example on the optimum choice of the Bernoulli parameters in a practical IoT system with two sources with diverse AoI requirements.
computer science
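Exact matrix-geometric results like those above can be sanity-checked against simulation. Below is a minimal discrete-time simulation of the preemptive bufferless (PB) discipline for a single source, with Bernoulli arrivals of rate lam and geometric services of parameter mu; the within-slot event ordering is a modeling assumption, and this is a sketch, not the paper's algorithm.

```python
import numpy as np

def simulate_pb_aoi(lam, mu, T=200_000, seed=1):
    """Average AoI at the monitor for a preemptive bufferless discrete-time server."""
    rng = np.random.default_rng(seed)
    age = 0            # current AoI at the monitor
    in_service = None  # age of the packet in service since generation, or None
    total = 0
    for _ in range(T):
        age += 1
        if in_service is not None:
            in_service += 1
            if rng.random() < mu:      # geometric service completes this slot
                age = in_service       # monitor's age drops to the packet's age
                in_service = None
        if rng.random() < lam:         # Bernoulli arrival preempts any service
            in_service = 0
        total += age
    return total / T

print(simulate_pb_aoi(lam=0.3, mu=0.5))
```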
The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the current standards of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations are clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems - this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties - device scaling, software complexity, adaptability, energy consumption, and fabrication economics - indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature's innate computational capacity. We call this type of computing "Thermodynamic Computing" or TC.
computer science
Materials with an allotropic phase transformation can form microstructures in which grains have orientation relationships (ORs) determined by the transformation history. These microstructures influence the final material properties. In zirconium alloys, there is a solid-state body-centred cubic (BCC) to hexagonal close-packed (HCP) phase transformation, where the crystal orientations of the HCP phase can be related to the parent BCC structure via the Burgers orientation relationship (BOR). In the present work, we adapt a reconstruction code developed for steels, which uses a Markov chain clustering algorithm to analyse electron backscatter diffraction (EBSD) maps, and apply this to the HCP-BCC BOR. Once the parent microstructure is reconstructed from EBSD data, it is possible to analyse the variants of the HCP phase that are present and to understand shared crystal planes and shared lattice directions within each prior beta grain, with a view to understanding the transformation-related deformation properties of the final microstructure. This analysis can be used to understand how the microstructure evolves, as well as properties such as the effective slip length for plastic deformation.
condensed matter
We report an experimental study documenting the challenge of employing dielectric dilatometry for the study of pressure densification in glass-forming materials. An influence of the dielectric cell geometry on the resulting capacitance of 5-poly-phenyl-ether upon vitrification under different thermobaric pathways is documented. The capacitive response is studied for two different multilayer capacitors: one with, in principle, fixed plate distance and one with Kapton spacers allowing for contraction/expansion. A combination of changes in the dielectric permittivity of the material and modifications of the capacitor geometry determines the final capacitance. We conclude that, in order to convert the measured capacitance to material density, it is of paramount importance to understand the geometry. The data presented do not make it possible to conclude on whether or not simple glass formers such as 5-poly-phenyl-ether can be pressure densified, but our work highlights the challenge of utilizing dielectric spectroscopy to tackle this problem effectively.
condensed matter
The Moving Morphable Component (MMC) based topology optimization approach is an explicit algorithm, since the boundary of the entity is explicitly described by its functions. Compared with other pixel or node-point-based algorithms, it is optimized through the parameter optimization of a Topological Description Function (TDF). However, the optimized results partly depend on the selection of related parameters of the Method of Moving Asymptotes (MMA), which is the optimizer of MMC-based topology optimization. In practice, these parameters are tuned according to experience; a feasible solution might not be easily obtained, and the solution might even be infeasible due to improper parameter settings. In order to address these issues, a Machine Learning (ML) based parameter tuning strategy is proposed in this study. An Extra-Trees (ET) based image classifier is integrated into the optimization framework and combined with a Particle Swarm Optimization (PSO) algorithm to form a closed loop. This frees the optimization process from manual parameter adjustment, and a reasonable solution in the design domain is obtained. Two classical cases are presented to demonstrate the efficiency of the proposed approach.
mathematics
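For concreteness, the outer parameter-search loop described above can be sketched as a generic PSO over MMA parameters scored by a learned classifier. This is schematic: the scoring hook at the bottom is a placeholder, not code from the paper.

```python
import numpy as np

def pso(score, lb, ub, n_particles=12, iters=20, seed=0):
    """Minimal particle swarm search over a box-constrained parameter space."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(n_particles, lb.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([score(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lb.size))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                     # keep particles in the box
        val = np.array([score(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g

# In the closed loop, score(p) would run the MMC optimization with MMA
# parameters p and return a quality value from the Extra-Trees image
# classifier (e.g. 1 - predicted probability of a feasible layout).
best = pso(score=lambda p: np.sum((p - 0.3) ** 2), lb=[0, 0], ub=[1, 1])  # toy score
print(best)
```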
Mixing, ignition and combustion behavior in a rapid compression and expansion machine operated under Premixed Charge Compression Ignition (PCCI) relevant conditions are investigated by combined passive optical and laser-optical high-speed diagnostics. The PCCI concept is realized using a split injection schedule consisting of a long base load injection and two closely separated short injections near top dead center. Previous studies of close-coupled double injections under constant ambient conditions showed an increased penetration rate of the subsequent fuel spray. However, the aerodynamic gain from the preceding injection is counteracted by the density rise during the compression stroke under transient engine conditions. The study confirms that the rate of mixing of the subsequent fuel spray is significantly increased. Regarding combustion behavior, the thermodynamic analysis exhibits contributions of low temperature oxidation reactions of more than 20 % to the total heat release, with a notable amount of unburnt fuel mass varying from 25 to 61 %. The analysis of the optical data reveals the multi-dimensional impact of changes in operating parameters on the local mixture field and ignition dynamics. The onset of low temperature reactivity of the first short injection is found to be dominated by the operating strategy, while the location is strongly related to the local mixing state. Low temperature ignition of the consecutive fuel spray is significantly promoted, when upstream low temperature reactivity of the preceding injection is sustained. Likewise, it is shown that high temperature ignition is accelerated by the entrainment of persistent upstream low temperature reactivity.
physics
Population studies of extragalactic objects are a major part of the study of the large-scale structure of the universe. Apart from the radio, infrared, and visible wavelength bands, observations and further identification of extragalactic objects such as galaxies, quasars, blazars, LINERs, and active starburst regions are also conducted in the X-ray and gamma bands. In this paper we identify and cross-correlate infrared and X-ray observational data, build a distribution of the selected sample of sources by type, and analyze the types of extragalactic objects at distances up to z = 0.1 using observational data of the relevant space observatories. Data from the leading X-ray space observatory XMM-Newton were used to compile the largest catalog of X-ray sources. The current version of the XMM SSC (Serendipitous Source Catalog) contains more than half a million sources. In our previous works we selected and analyzed a sample of 5021 X-ray galaxies observed by XMM-Newton. Identification and classification of these sources is the essential next step of the study. Here we used infrared apparent magnitudes from the WISE catalog of AGN candidates. In 2010 the space telescope WISE performed a full-sky survey in four infrared bands and detected 747 million sources. The WISE catalog of AGN candidates comprises 4 million possible extragalactic sources. We built an infrared color-color diagram for our sample of X-ray galaxies and assessed their types using WISE telescope data. We also analyzed the large-scale structure of the universe (distances up to z = 0.1). This analysis revealed the Coma galaxy cluster and the SDSS Sloan Great Wall. In further studies we plan to investigate the distribution of different types of X-ray galaxies within the large-scale structures of the Universe.
astrophysics
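As an illustration of the color-based classification step, a widely used mid-infrared AGN selection is the W1-W2 >= 0.8 (Vega) cut of Stern et al. (2012). Whether the paper above adopts exactly this cut is not stated, so treat the threshold as an assumption.

```python
import numpy as np

def wise_agn_flag(w1, w2, threshold=0.8):
    """Flag AGN candidates from WISE W1, W2 Vega magnitudes (Stern et al. 2012 cut)."""
    w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
    return (w1 - w2) >= threshold

# toy magnitudes for three sources
w1 = np.array([13.2, 14.1, 12.8])
w2 = np.array([12.1, 13.9, 12.5])
print(wise_agn_flag(w1, w2))  # [ True False False]
```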
In this paper, we present a quasi-infinite horizon nonlinear model predictive control (MPC) scheme for tracking of generic reference trajectories. This scheme is applicable to nonlinear systems which are locally incrementally stabilizable. For such systems, we provide a reference-generic offline procedure to compute an incrementally stabilizing feedback with a continuously parameterized quadratic quasi-infinite horizon terminal cost. As a result, we obtain a nonlinear reference tracking MPC scheme with a valid terminal cost for general reachable reference trajectories, without increasing the online computational complexity. As a corollary, the terminal cost can also be used to design nonlinear MPC schemes that reliably operate under online changing conditions, including unreachable reference signals. The practicality of this approach is demonstrated with a benchmark example. This paper is an extended version of the accepted paper [1], and contains additional details regarding \textit{robust} trajectory tracking (App.~B), continuous-time dynamics (App.~C), output tracking stage costs (App.~D) and the connection to incremental system properties (App.~A).
electrical engineering and systems science
With the emergence of blockchain technologies and the associated cryptocurrencies, such as Bitcoin, understanding the network dynamics behind blockchain graphs has become a rapidly evolving research direction. Unlike other financial networks, such as stock and currency trading, blockchain-based cryptocurrencies have the entire transaction graph accessible to the public (i.e., all transactions can be downloaded and analyzed). A natural question is then to ask whether the dynamics of the transaction graph impacts the price of the underlying cryptocurrency. We show that standard graph features, such as the degree distribution of the transaction graph, may not be sufficient to capture network dynamics and its potential impact on fluctuations of the Bitcoin price. In contrast, the new graph-associated topological features, computed using the tools of persistent homology, are found to exhibit high utility for predicting Bitcoin price dynamics. Using the proposed persistent homology-based techniques, we offer a new, elegant, easily extendable and computationally light approach for graph representation learning on blockchains.
computer science
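A sketch of the topological feature extraction step using the ripser package follows; the sliding-window construction and the total-persistence summary are illustrative choices, not necessarily the authors'.

```python
import numpy as np
from ripser import ripser

def total_persistence(window):
    """Sum of (death - birth) over finite H0/H1 features of a point cloud."""
    dgms = ripser(window, maxdim=1)['dgms']
    return sum(np.sum(d[np.isfinite(d[:, 1]), 1] - d[np.isfinite(d[:, 1]), 0])
               for d in dgms)

# toy daily feature vectors (e.g., per-day transaction-graph statistics)
rng = np.random.default_rng(0)
series = rng.random((100, 3))
w = 14  # two-week sliding window
features = [total_persistence(series[t:t + w]) for t in range(len(series) - w)]
# 'features' could then feed a regression model for next-day price movement
```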
Machine learning (ML) is the field of training machines to achieve a high level of cognition and perform human-like analysis. Since ML is a data-driven approach, it seemingly fits into our daily lives and operations as well as complex and interdisciplinary fields. With the rise of commercial, open-source and user-oriented ML tools, a key question often arises whenever ML is applied to explore a phenomenon or a scenario: what constitutes a good ML model? Keeping in mind that a proper answer to this question depends on a variety of factors, this work presumes that a good ML model is one that optimally performs and best describes the phenomenon on hand. From this perspective, identifying proper assessment metrics to evaluate the performance of ML models is not only necessary but also warranted. As such, this paper examines a number of the most commonly used performance fitness and error metrics for regression and classification algorithms, with emphasis on engineering applications.
computer science
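As a reference point for the regression metrics such a survey covers, here are a few of the most common ones in plain NumPy. These are the standard textbook definitions, not formulas specific to the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Mean absolute error, root mean squared error, and R^2."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                        # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                 # root mean squared error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                        # coefficient of determination
    return {"MAE": mae, "RMSE": rmse, "R2": r2}

print(regression_metrics([3.0, 5.0, 2.5], [2.5, 5.0, 3.0]))
```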
We introduce the concept of access-based intuitionistic knowledge which relies on the intuition that agent $i$ knows $\varphi$ if $i$ has found access to a proof of $\varphi$. Basic principles are distribution and factivity of knowledge as well as $\square\varphi\rightarrow K_i\varphi$ and $K_i(\varphi\vee\psi) \rightarrow (K_i\varphi\vee K_i\psi)$, where $\square\varphi$ reads `$\varphi$ is proved'. The formalization extends a family of classical modal logics designed in [Lewitzka 2015, 2017, 2019] as combinations of $IPC$ and $CPC$ and as systems for the reasoning about proof, i.e. intuitionistic truth. We adopt a formalization of common knowledge from [Lewitzka 2011] and interpret it here as access-based common knowledge. We compare our proposal with recent approaches to intuitionistic knowledge [Artemov and Protopopescu 2016; Lewitzka 2017, 2019] and bring together these different concepts in a unifying semantic framework based on Heyting algebra expansions.
computer science
We investigate the implications of energy conditions on cosmological compactification solutions of the higher-dimensional Einstein field equations. It is known that the Strong Energy Condition forbids time-independent compactifications to de Sitter space but allows time-dependent compactifications to other (homogeneous and isotropic) expanding universes that undergo a {\it transient} period of acceleration. Here we show that the same assumptions allow compactification to FLRW universes undergoing {\it late-time} accelerated expansion; the late-time stress tensor is a perfect fluid but with a lower bound on the pressure/energy-density ratio that excludes de Sitter but allows accelerated power-law expansion. The compact space undergoes a decelerating expansion that leads to decompactification, but on an arbitrarily long timescale.
high energy physics theory
A possibility of explaining the anomalies in the semileptonic $B$-meson decay $B \to K^{*} \mu \bar\mu$ has been explored in the framework of a gauged $U(1)_{\mu-\tau}$ symmetry. Apart from the muon anomalous magnetic moment and the neutrino sector, we formulate the model starting with a valid Lagrangian and consider the constraints from the neutral meson mixings, the bounds on direct detection and the relic density of the bosonic dark matter candidate, augmented by collider constraints. We search for the parameter space which accommodates the size of the anomaly of the $B \rightarrow K^* \mu \bar \mu$ decay while satisfying all experimental constraints. We found that the allowed region on the plane of the dark matter and $Z'$ masses is rather narrow compared to the previous analysis.
high energy physics phenomenology
Background: Independent Component Analysis (ICA) is a widespread tool for exploration and denoising of electroencephalography (EEG) or magnetoencephalography (MEG) signals. In its most common formulation, ICA assumes that the signal matrix is a noiseless linear mixture of independent sources that are assumed non-Gaussian. A limitation is that it forces one to estimate as many sources as sensors or to rely on a detrimental PCA step. Methods: We present the Spectral Matching ICA (SMICA) model. Signals are modelled as a linear mixing of independent sources corrupted by additive noise, where the sources and the noise are stationary Gaussian time series. Thanks to the Gaussian assumption, the negative log-likelihood has a simple expression as a sum of divergences between the empirical spectral covariance matrices of the signals and those predicted by the model. The model parameters can then be estimated by the expectation-maximization (EM) algorithm. Results: Experiments on phantom MEG datasets show that SMICA can recover dipole locations more precisely than usual ICA algorithms or Maxwell filtering when the dipole amplitude is low. Experiments on EEG datasets show that SMICA identifies a source subspace which contains sources that have less pairwise mutual information and are better explained by the projection of a single dipole on the scalp. Comparison with existing methods: Noiseless ICA models lead to a degenerate likelihood when there are fewer sources than sensors, while SMICA succeeds without resorting to prior dimension reduction. Conclusions: SMICA is a promising alternative to other noiseless ICA models based on non-Gaussian assumptions.
electrical engineering and systems science
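The spectral-matching objective referred to above has the following generic shape, written here as a sketch assuming the mixture model $x(f) = A s(f) + n(f)$ with diagonal source spectra $P(f)$ and noise spectra $N(f)$; the exact divergence and normalization used in SMICA may differ.

```latex
% Spectral-matching objective (schematic): empirical spectral covariances
% \hat{C}(f) are matched to the model A P(f) A^T + N(f),
-\log\mathcal{L} \;\propto\; \sum_{f} w_f\,
\mathcal{D}\!\left(\widehat{C}(f),\; A P(f) A^{\top} + N(f)\right),
% with the Kullback-Leibler divergence between zero-mean d-variate Gaussians,
\mathcal{D}(C_1, C_2) = \tfrac{1}{2}\Bigl[
\operatorname{tr}\bigl(C_2^{-1} C_1\bigr)
- \log\det\bigl(C_2^{-1} C_1\bigr) - d
\Bigr],
% where w_f counts the periodogram samples averaged in frequency band f.
```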
In this paper, we present statistics of soft gamma repeater (SGR) bursts from SGR J1550-5418, SGR 1806-20 and SGR 1900+14, adding new bursts from K{\i}rm{\i}z{\i}bayrak et al. (2017) detected with the Rossi X-ray Timing Explorer (RXTE). We find that the fluence distributions of magnetar bursts are well described by power-law functions with indices 1.84, 1.68, and 1.65 for SGR J1550-5418, SGR 1806-20 and SGR 1900+14, respectively. The duration distributions of magnetar bursts also show power-law forms. Meanwhile, the waiting time distribution can be described by a non-stationary Poisson process with an exponentially growing occurrence rate. These distributive features indicate that magnetar bursts can be regarded as a self-organized critical process. We also compare these distributions with those of the repeating FRB 121102. The statistical properties of the repeating FRB 121102 are similar to those of magnetar bursts, which, combined with the large magnetic field ($B\geq 10^{14}$ G) of the neutron star required for FRB 121102, indicates that the central engine of FRB 121102 may be a magnetar.
astrophysics
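For reference, power-law indices like those quoted above are often estimated with the maximum-likelihood formula of Clauset et al. (2009); whether the authors use MLE or a binned fit is not stated, so the following is a generic sketch.

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """MLE of alpha for p(x) ~ x^(-alpha), x >= xmin (Clauset et al. 2009)."""
    x = np.asarray(x, float)
    x = x[x >= xmin]
    alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
    sigma = (alpha - 1.0) / np.sqrt(x.size)   # standard error of the estimate
    return alpha, sigma

# sanity check on synthetic Pareto draws with alpha = 1.8
rng = np.random.default_rng(0)
u = rng.random(5000)
x = 1.0 * (1.0 - u) ** (-1.0 / 0.8)           # inverse-CDF sampling
print(powerlaw_mle(x, xmin=1.0))              # ~ (1.8, 0.011)
```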
In this paper we consider new geometric flow equations, called D-flow, which describe the variation of space-time geometries under the change of the number of dimensions. The D-flow originates from the non-trivial dependence of the volume of space-time manifolds on the number of space-time dimensions, and it is driven by certain curvature invariants. We work out specific examples of D-flow equations and their solutions for the case of D-dimensional spheres. The discussion of the paper is motivated by recent swampland considerations, where the number $D$ of space-time dimensions is treated as a new swampland parameter.
high energy physics theory
Imaging Atmospheric Cherenkov Telescopes (IACTs) currently in operation feature large mirrors and a time response on the order of 1 ns to signals of a few photo-electrons produced by optical photons. This means that they are ideally suited for optical interferometry observations. Thanks to their sensitivity to visible wavelengths and their long baselines, optical intensity interferometry with IACTs allows angular resolutions of tens of microarcseconds to be reached. We have installed a simple optical setup on top of the cameras of the two 17 m diameter MAGIC IACTs and observed coherent fluctuations in the photon intensity measured at the two telescopes for three different stars. The sensitivity is roughly 10 times better than that achieved in the 1970s with the Narrabri interferometer.
astrophysics
We explore the possibility that dark matter interactions with Standard Model particles are dominated by interactions with neutrinos. We examine whether it is possible to construct such a scenario in a gauge-invariant manner. We first study the coupling of dark matter to the full lepton doublet and confirm that this generally leads to the dark matter phenomenology being dominated by interactions with charged leptons. We then explore two different implementations of the neutrino portal in which neutrinos mix with a Standard Model singlet fermion that interacts directly with dark matter through either a scalar or vector mediator. In the latter cases we find that the neutrino interactions can dominate the dark matter phenomenology. Present neutrino detectors can probe dark matter annihilations into neutrinos and already set the strongest constraints on these realisations. Future experiments such as Hyper-Kamiokande, MEMPHYS, DUNE, or DARWIN could allow probing dark matter-neutrino cross sections down to the value required to obtain the correct thermal relic abundance.
high energy physics phenomenology
Integrating mathematical morphology operations within deep neural networks has been the subject of increasing attention lately. However, replacing standard convolution layers with erosions or dilations is particularly challenging because the min and max operations are not differentiable. Relying on the asymptotic behavior of the counter-harmonic mean, p-convolutional layers were proposed as a possible workaround to this issue, since they can perform pseudo-dilation or pseudo-erosion operations (depending on the value of their inner parameter p), and very promising results were reported. In this work, we present two new morphological layers based on the same principle as the p-convolutional layer while circumventing its principal drawbacks, and demonstrate their potential interest in further implementations within deep convolutional neural network architectures.
electrical engineering and systems science
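The counter-harmonic mean underlying the p-convolution mentioned above can be stated and implemented compactly. This is a NumPy sketch of the forward operation only, on strictly positive inputs; the paper's new layers modify this construction.

```python
import numpy as np
from scipy.ndimage import convolve

def pconv(f, w, p):
    """Counter-harmonic mean filter: (f^(p+1) * w) / (f^p * w).
    p >> 0 approximates dilation, p << 0 erosion, p = 0 a plain weighted average.
    f must be strictly positive to keep the powers well defined."""
    num = convolve(f ** (p + 1), w, mode="nearest")
    den = convolve(f ** p, w, mode="nearest")
    return num / den

img = np.random.default_rng(0).random((32, 32)) + 0.1   # keep values positive
kernel = np.ones((3, 3)) / 9.0
dilated = pconv(img, kernel, p=10.0)    # pseudo-dilation
eroded = pconv(img, kernel, p=-10.0)    # pseudo-erosion
```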
Probabilistic neural networks are typically modeled with independent weight priors, which do not capture weight correlations in the prior and do not provide a parsimonious interface to express properties in function space. A desirable class of priors would represent weights compactly, capture correlations between weights, facilitate calibrated reasoning about uncertainty, and allow inclusion of prior knowledge about the function space such as periodicity or dependence on contexts such as inputs. To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for network weights based on unit embeddings that can flexibly encode correlated weight structures, and (ii) input-dependent versions of these weight priors that can provide convenient ways to regularize the function space through the use of kernels defined on contextual inputs. We show these models provide desirable test-time uncertainty estimates on out-of-distribution data, demonstrate cases of modeling inductive biases for neural networks with kernels which help both interpolation and extrapolation from training data, and demonstrate competitive predictive performance on an active learning benchmark.
statistics
We develop an abstract framework for studying the strong form of Malle's conjecture for nilpotent groups $G$ in their regular representation. This framework is then used to prove the strong form of Malle's conjecture for any nilpotent group $G$ such that all elements of order $p$ are central, where $p$ is the smallest prime divisor of $\# G$. We also give an upper bound for any nilpotent group $G$ tight up to logarithmic factors, and tight up to a constant factor in case all elements of order $p$ pairwise commute. Finally, we give a new heuristical argument supporting Malle's conjecture in the case of nilpotent groups in their regular representation.
mathematics
Using the most general form of the interpolating current for baryons, the strong electric and magnetic coupling constants of the light vector mesons $\rho$ and $K^*$ with doubly heavy baryons are computed within the light-cone sum rules. We consider 2- and 3-particle distribution amplitudes of the aforementioned vector mesons. The obtained results can be useful in the analysis of experimental data on the properties of doubly heavy baryons collected at the LHC.
high energy physics phenomenology
The orbital anisotropy induced in superfluid 3He by nematic aerogel is, generally speaking, spatially non-uniform. This anisotropy in turn induces spatial fluctuations of the order parameter. It is shown here that for the polar phase these fluctuations decrease the overall amplitude of the order parameter and the value of the frequency shift of the transverse NMR. Different contributions to this effect are discussed and estimated, and their temperature dependencies are discussed as well.
condensed matter
We propose a fast stochastic Hamilton Monte Carlo (HMC) method, for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by the recent advance in stochastic optimization. We show that, to achieve $\epsilon$ accuracy in 2-Wasserstein distance, our algorithm achieves $\tilde O(n+\kappa^{2}d^{1/2}/\epsilon+\kappa^{4/3}d^{1/3}n^{2/3}/\epsilon^{2/3})$ gradient complexity (i.e., number of component gradient evaluations), which outperforms the state-of-the-art HMC and stochastic gradient HMC methods in a wide regime. We also extend our algorithm for sampling from smooth and general log-concave distributions, and prove the corresponding gradient complexity as well. Experiments on both synthetic and real data demonstrate the superior performance of our algorithm.
statistics
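The core variance-reduction device in samplers like the one above can be sketched independently of the full algorithm: an SVRG-style stochastic gradient centered at a snapshot's full gradient. This is a schematic of the estimator only, not the paper's complete sampler or its tuning.

```python
import numpy as np

def svrg_gradient(x, x_snap, g_snap_full, grad_i, n, batch, rng):
    """Unbiased, variance-reduced gradient estimate of F = (1/n) sum_i f_i.

    x           : current position
    x_snap      : snapshot position, refreshed periodically
    g_snap_full : full gradient of F at the snapshot
    grad_i      : callable grad_i(x, i) returning the gradient of f_i at x
    """
    idx = rng.integers(0, n, size=batch)
    corr = np.mean([grad_i(x, i) - grad_i(x_snap, i) for i in idx], axis=0)
    return g_snap_full + corr   # E[corr] = grad F(x) - grad F(x_snap)

# Inside each leapfrog step of HMC one would then use, schematically:
#   p -= 0.5 * eta * svrg_gradient(x, x_snap, g_snap_full, grad_i, n, b, rng)
# refreshing (x_snap, g_snap_full) after every epoch of gradient evaluations.
```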
Cellular-connected wireless connectivity provides new opportunities for virtual reality (VR) to offer a seamless user experience from anywhere at any time. To realize this vision, the quality-of-service (QoS) for wireless VR needs to be carefully defined to reflect human perception requirements. In this paper, we first identify the primary drivers of VR systems, in terms of applications and use cases. We then map the human perception requirements to corresponding QoS requirements for four phases of VR technology development. To shed light on how to provide short/long-range mobility for VR services, we further list four main use cases for cellular-connected wireless VR and identify their unique research challenges along with their corresponding enabling technologies and solutions in 5G systems and beyond. Last but not least, we present a case study to demonstrate the effectiveness of our proposed solution and the unique QoS performance requirements of VR transmission compared with those of traditional video services in cellular networks.
electrical engineering and systems science